
Example 1 with SlicePredicate

use of org.apache.cassandra.thrift.SlicePredicate in project scale7-pelops by s7.

the class Selector method newColumnsPredicateAll.

/**
 * Create a new <code>SlicePredicate</code> instance that selects "all" columns
 * @param reversed    Whether the results should be returned in reverse order
 * @param maxColCount The maximum number of columns to return
 * @return            The new <code>SlicePredicate</code>
 */
public static SlicePredicate newColumnsPredicateAll(boolean reversed, int maxColCount) {
    SlicePredicate predicate = new SlicePredicate();
    predicate.setSlice_range(new SliceRange(Bytes.EMPTY.getBytes(), Bytes.EMPTY.getBytes(), reversed, maxColCount));
    return predicate;
}
Also used : SliceRange(org.apache.cassandra.thrift.SliceRange) SlicePredicate(org.apache.cassandra.thrift.SlicePredicate)
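
As a quick illustration, the sketch below (a hypothetical standalone class, not part of Pelops) builds the same "select all" predicate that newColumnsPredicateAll(false, 100) would produce and prints the resulting Thrift struct.

import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.scale7.cassandra.pelops.Bytes;

public class PredicateAllSketch {
    public static void main(String[] args) {
        // Equivalent to Selector.newColumnsPredicateAll(false, 100): empty start and
        // finish names select every column in the row, up to 100 of them, ascending.
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(
                new SliceRange(Bytes.EMPTY.getBytes(), Bytes.EMPTY.getBytes(), false, 100));

        // Thrift-generated structs have a readable toString(), useful for a quick sanity check.
        System.out.println(predicate);
    }
}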

Example 2 with SlicePredicate

use of org.apache.cassandra.thrift.SlicePredicate in project scale7-pelops by s7.

the class Selector method getPageOfColumnsFromRow.

/**
 * Retrieve a page of columns composed from a segment of the sequence of columns in a row.
 * @param columnFamily    The column family containing the row
 * @param rowKey          The key of the row containing the columns
 * @param startBeyondName The sequence of columns must begin with the smallest column name greater than this value. Pass <code>null</code> to start at the beginning of the sequence.
 * @param reversed        Whether the scan should proceed in descending column name order
 * @param count           The maximum number of columns that can be retrieved by the scan
 * @param cLevel          The Cassandra consistency level with which to perform the operation
 * @return                A page of columns
 * @throws PelopsException if an error occurs
 */
public List<Column> getPageOfColumnsFromRow(String columnFamily, Bytes rowKey, Bytes startBeyondName, boolean reversed, int count, ConsistencyLevel cLevel) throws PelopsException {
    SlicePredicate predicate;
    if (Bytes.nullSafeGet(startBeyondName) == null) {
        predicate = Selector.newColumnsPredicateAll(reversed, count);
        return getColumnsFromRow(columnFamily, rowKey, predicate, cLevel);
    } else {
        // Cassandra will return the start column itself, but the caller expects a page of results beyond that point
        int incrementedCount = count + 1;
        predicate = Selector.newColumnsPredicate(startBeyondName, Bytes.EMPTY, reversed, incrementedCount);
        List<Column> columns = getColumnsFromRow(columnFamily, rowKey, predicate, cLevel);
        if (columns.size() > 0) {
            Column first = columns.get(0);
            if (first.name.equals(startBeyondName.getBytes()))
                return columns.subList(1, columns.size());
            else if (columns.size() == incrementedCount)
                return columns.subList(0, columns.size() - 1);
        }
        return columns;
    }
}
Also used : Column(org.apache.cassandra.thrift.Column) SuperColumn(org.apache.cassandra.thrift.SuperColumn) CounterColumn(org.apache.cassandra.thrift.CounterColumn) ColumnOrSuperColumn(org.apache.cassandra.thrift.ColumnOrSuperColumn) SlicePredicate(org.apache.cassandra.thrift.SlicePredicate)
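
A typical way to drive this method is a paging loop that keeps requesting the next page until a short page comes back. The sketch below is a hypothetical helper, not code from the project; the Selector instance, column family and row key are assumed to be obtained elsewhere, and column names are assumed to be UTF-8 text.

import java.util.List;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.scale7.cassandra.pelops.Bytes;
import org.scale7.cassandra.pelops.Selector;

public class PagingSketch {
    // Walks an entire row page by page and prints each column name.
    static void dumpAllColumnNames(Selector selector, String columnFamily, Bytes rowKey) throws Exception {
        final int pageSize = 100;
        Bytes startBeyond = null; // null means "start at the beginning of the row"
        while (true) {
            List<Column> page = selector.getPageOfColumnsFromRow(
                    columnFamily, rowKey, startBeyond, false, pageSize, ConsistencyLevel.ONE);
            for (Column column : page)
                System.out.println(new String(column.getName(), "UTF-8"));
            if (page.size() < pageSize)
                break; // a short page means the row is exhausted
            // resume just beyond the last column name seen on this page
            startBeyond = Bytes.fromByteBuffer(page.get(page.size() - 1).name);
        }
    }
}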

Example 3 with SlicePredicate

use of org.apache.cassandra.thrift.SlicePredicate in project eiger by wlloyd.

the class WordCount method run.

public int run(String[] args) throws Exception {
    String outputReducerType = "filesystem";
    if (args != null && args.length > 0 && args[0].startsWith(OUTPUT_REDUCER_VAR)) {
        String[] s = args[0].split("=");
        if (s != null && s.length == 2)
            outputReducerType = s[1];
    }
    logger.info("output reducer type: " + outputReducerType);
    for (int i = 0; i < WordCountSetup.TEST_COUNT; i++) {
        String columnName = "text" + i;
        getConf().set(CONF_COLUMN_NAME, columnName);
        Job job = new Job(getConf(), "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        if (outputReducerType.equalsIgnoreCase("filesystem")) {
            job.setCombinerClass(ReducerToFilesystem.class);
            job.setReducerClass(ReducerToFilesystem.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH_PREFIX + i));
        } else {
            job.setReducerClass(ReducerToCassandra.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            job.setOutputKeyClass(ByteBuffer.class);
            job.setOutputValueClass(List.class);
            job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
            ConfigHelper.setOutputColumnFamily(job.getConfiguration(), KEYSPACE, OUTPUT_COLUMN_FAMILY);
        }
        job.setInputFormatClass(ColumnFamilyInputFormat.class);
        ConfigHelper.setRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE, COLUMN_FAMILY);
        SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays.asList(ByteBufferUtil.bytes(columnName)));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
        job.waitForCompletion(true);
    }
    return 0;
}
Also used : Path(org.apache.hadoop.fs.Path) SlicePredicate(org.apache.cassandra.thrift.SlicePredicate) Job(org.apache.hadoop.mapreduce.Job)
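
The predicate above restricts each mapper to the single "textN" column of the current iteration. If a job should instead see every column of a row, a slice-range predicate can be handed to ConfigHelper in the same place; the sketch below is a hypothetical alternative, not part of the eiger example.

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;

public class WideSlicePredicateSketch {
    // Builds a predicate that selects every column of each input row, capped at maxColumnsPerRow.
    public static SlicePredicate allColumns(int maxColumnsPerRow) {
        SliceRange range = new SliceRange(
                ByteBuffer.wrap(new byte[0]),  // empty start: begin at the first column
                ByteBuffer.wrap(new byte[0]),  // empty finish: run through the last column
                false,                         // ascending column-name order
                maxColumnsPerRow);
        return new SlicePredicate().setSlice_range(range);
    }
}

It would be passed to the job exactly like the by-name predicate, e.g. ConfigHelper.setInputSlicePredicate(job.getConfiguration(), WideSlicePredicateSketch.allColumns(1000)).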

Example 4 with SlicePredicate

use of org.apache.cassandra.thrift.SlicePredicate in project eiger by wlloyd.

the class RangeSliceCommandSerializer method deserialize.

public RangeSliceCommand deserialize(DataInput dis, int version) throws IOException {
    String keyspace = dis.readUTF();
    String columnFamily = dis.readUTF();
    int scLength = dis.readInt();
    ByteBuffer superColumn = null;
    if (scLength > 0) {
        byte[] buf = new byte[scLength];
        dis.readFully(buf);
        superColumn = ByteBuffer.wrap(buf);
    }
    TDeserializer dser = new TDeserializer(new TBinaryProtocol.Factory());
    SlicePredicate pred = new SlicePredicate();
    FBUtilities.deserialize(dser, pred, dis);
    List<IndexExpression> rowFilter = null;
    if (version >= MessagingService.VERSION_11) {
        int filterCount = dis.readInt();
        rowFilter = new ArrayList<IndexExpression>(filterCount);
        for (int i = 0; i < filterCount; i++) {
            IndexExpression expr = new IndexExpression();
            FBUtilities.deserialize(dser, expr, dis);
            rowFilter.add(expr);
        }
    }
    AbstractBounds<RowPosition> range = AbstractBounds.serializer().deserialize(dis, version).toRowBounds();
    int maxResults = dis.readInt();
    boolean maxIsColumns = false;
    if (version >= MessagingService.VERSION_11) {
        maxIsColumns = dis.readBoolean();
    }
    return new RangeSliceCommand(keyspace, columnFamily, superColumn, pred, range, rowFilter, maxResults, maxIsColumns);
}
Also used : TDeserializer(org.apache.thrift.TDeserializer) IndexExpression(org.apache.cassandra.thrift.IndexExpression) SlicePredicate(org.apache.cassandra.thrift.SlicePredicate) ByteBuffer(java.nio.ByteBuffer) TBinaryProtocol(org.apache.cassandra.thrift.TBinaryProtocol)
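
The deserializer relies on the SlicePredicate having been written with a binary Thrift protocol by the matching serialize method. The round trip can be sketched in isolation with stock Thrift classes; TSerializer/TDeserializer below are assumptions standing in for the project's FBUtilities helpers, and the class is hypothetical.

import java.nio.ByteBuffer;
import java.util.Arrays;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.thrift.TDeserializer;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

public class PredicateRoundTripSketch {
    public static void main(String[] args) throws Exception {
        SlicePredicate original = new SlicePredicate()
                .setColumn_names(Arrays.asList(ByteBuffer.wrap("col".getBytes("UTF-8"))));

        // Serialize to a byte[] with the binary protocol, as the command serializer does.
        byte[] wire = new TSerializer(new TBinaryProtocol.Factory()).serialize(original);

        // Deserialize into a fresh instance and confirm nothing was lost.
        SlicePredicate copy = new SlicePredicate();
        new TDeserializer(new TBinaryProtocol.Factory()).deserialize(copy, wire);
        System.out.println(original.equals(copy)); // prints "true"
    }
}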

Example 5 with SlicePredicate

use of org.apache.cassandra.thrift.SlicePredicate in project scale7-pelops by s7.

the class Selector method newColumnsPredicate.

/**
 * Create a new <code>SlicePredicate</code> instance.
 * @param colNames The specific column names to select in the slice
 * @return         The new <code>SlicePredicate</code>
 */
public static SlicePredicate newColumnsPredicate(String... colNames) {
    List<ByteBuffer> asList = new ArrayList<ByteBuffer>(32);
    for (String colName : colNames) asList.add(fromUTF8(colName).getBytes());
    SlicePredicate predicate = new SlicePredicate();
    predicate.setColumn_names(asList);
    return predicate;
}
Also used : ArrayList(java.util.ArrayList) SlicePredicate(org.apache.cassandra.thrift.SlicePredicate) ByteBuffer(java.nio.ByteBuffer) Bytes.fromByteBuffer(org.scale7.cassandra.pelops.Bytes.fromByteBuffer)
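
For completeness, a hypothetical caller (the class name and column names below are made up) that builds a by-name predicate and checks what it contains:

import org.apache.cassandra.thrift.SlicePredicate;
import org.scale7.cassandra.pelops.Selector;

public class NamedColumnsSketch {
    public static void main(String[] args) {
        // Selects exactly these three columns; no slice range is involved.
        SlicePredicate predicate = Selector.newColumnsPredicate("name", "email", "age");

        // column_names holds one UTF-8 encoded ByteBuffer per requested column name.
        System.out.println(predicate.getColumn_names().size()); // prints "3"
    }
}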

Aggregations

SlicePredicate (org.apache.cassandra.thrift.SlicePredicate) 15
ByteBuffer (java.nio.ByteBuffer) 5
SliceRange (org.apache.cassandra.thrift.SliceRange) 4
ArrayList (java.util.ArrayList) 3
ColumnOrSuperColumn (org.apache.cassandra.thrift.ColumnOrSuperColumn) 2
Deletion (org.apache.cassandra.thrift.Deletion) 2
Mutation (org.apache.cassandra.thrift.Mutation) 2
SuperColumn (org.apache.cassandra.thrift.SuperColumn) 2
TBinaryProtocol (org.apache.cassandra.thrift.TBinaryProtocol) 2
TDeserializer (org.apache.thrift.TDeserializer) 2
Bytes.fromByteBuffer (org.scale7.cassandra.pelops.Bytes.fromByteBuffer) 2
DataOutputStream (java.io.DataOutputStream) 1
CFMetaData (org.apache.cassandra.config.CFMetaData) 1
IDiskAtomFilter (org.apache.cassandra.db.filter.IDiskAtomFilter) 1
IPartitioner (org.apache.cassandra.dht.IPartitioner) 1
Token (org.apache.cassandra.dht.Token) 1
InvalidRequestException (org.apache.cassandra.exceptions.InvalidRequestException) 1
IsBootstrappingException (org.apache.cassandra.exceptions.IsBootstrappingException) 1
RequestTimeoutException (org.apache.cassandra.exceptions.RequestTimeoutException) 1
UnavailableException (org.apache.cassandra.exceptions.UnavailableException) 1