
Example 1 with DataInputBuffer

Use of org.apache.cassandra.io.util.DataInputBuffer in project cassandra by apache.

From the class StreamingHistogramTest, method testSerDe:

@Test
public void testSerDe() throws Exception {
    StreamingHistogram hist = new StreamingHistogram(5, 0, 1);
    long[] samples = new long[] { 23, 19, 10, 16, 36, 2, 9 };
    // add 7 points to a histogram with 5 bins
    for (int i = 0; i < samples.length; i++) {
        hist.update(samples[i]);
    }
    DataOutputBuffer out = new DataOutputBuffer();
    StreamingHistogram.serializer.serialize(hist, out);
    byte[] bytes = out.toByteArray();
    StreamingHistogram deserialized = StreamingHistogram.serializer.deserialize(new DataInputBuffer(bytes));
    // deserialized histogram should have following values
    Map<Double, Long> expected1 = new LinkedHashMap<Double, Long>(5);
    expected1.put(2.0, 1L);
    expected1.put(9.5, 2L);
    expected1.put(17.5, 2L);
    expected1.put(23.0, 1L);
    expected1.put(36.0, 1L);
    Iterator<Map.Entry<Double, Long>> expectedItr = expected1.entrySet().iterator();
    for (Map.Entry<Number, long[]> actual : deserialized.getAsMap().entrySet()) {
        Map.Entry<Double, Long> entry = expectedItr.next();
        assertEquals(entry.getKey(), actual.getKey().doubleValue(), 0.01);
        assertEquals(entry.getValue().longValue(), actual.getValue()[0]);
    }
}
Also used: LinkedHashMap(java.util.LinkedHashMap) DataInputBuffer(org.apache.cassandra.io.util.DataInputBuffer) DataOutputBuffer(org.apache.cassandra.io.util.DataOutputBuffer) Map(java.util.Map) Test(org.junit.Test)
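
The pattern this test depends on, isolated: bytes written through a DataOutputBuffer can be replayed through a DataInputBuffer built over the same array. Below is a minimal round-trip sketch; the class name RoundTripSketch and the int/UTF payload are illustrative, not part of the test.

import java.io.IOException;

import org.apache.cassandra.io.util.DataInputBuffer;
import org.apache.cassandra.io.util.DataOutputBuffer;

public class RoundTripSketch {
    public static void main(String[] args) throws IOException {
        byte[] bytes;
        // Write a couple of primitives and capture the raw bytes.
        try (DataOutputBuffer out = new DataOutputBuffer()) {
            out.writeInt(42);
            out.writeUTF("histogram");
            bytes = out.toByteArray();
        }
        // Read them back through a DataInputBuffer over the same array.
        try (DataInputBuffer in = new DataInputBuffer(bytes)) {
            assert in.readInt() == 42;
            assert "histogram".equals(in.readUTF());
        }
    }
}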

Example 2 with DataInputBuffer

Use of org.apache.cassandra.io.util.DataInputBuffer in project cassandra by apache.

From the class StandardAnalyzer, method init:

public void init(StandardTokenizerOptions tokenizerOptions, AbstractType validator) {
    this.validator = validator;
    this.options = tokenizerOptions;
    this.filterPipeline = getFilterPipeline();
    Reader reader = new InputStreamReader(new DataInputBuffer(ByteBufferUtil.EMPTY_BYTE_BUFFER, false));
    this.scanner = new StandardTokenizerImpl(reader);
    this.inputReader = reader;
}
Also used: DataInputBuffer(org.apache.cassandra.io.util.DataInputBuffer) InputStreamReader(java.io.InputStreamReader) Reader(java.io.Reader)
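
The analyzer above builds its Reader over an empty buffer; the same wrapping works over real input. A hedged sketch with a non-empty ByteBuffer; the term text and the explicit UTF-8 charset are illustrative assumptions, not from the analyzer.

import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.cassandra.io.util.DataInputBuffer;

public class ReaderSketch {
    public static void main(String[] args) throws IOException {
        ByteBuffer term = ByteBuffer.wrap("tokenize me".getBytes(StandardCharsets.UTF_8));
        // duplicate=false, as in the analyzer above: the stream reads the buffer directly.
        try (Reader reader = new InputStreamReader(new DataInputBuffer(term, false), StandardCharsets.UTF_8)) {
            int ch;
            while ((ch = reader.read()) != -1)
                System.out.print((char) ch);
            System.out.println();
        }
    }
}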

Example 3 with DataInputBuffer

Use of org.apache.cassandra.io.util.DataInputBuffer in project cassandra by apache.

From the class CommitLogReader, method readMutation:

/**
 * Deserializes a Mutation and passes it to the requested CommitLogReadHandler.
 *
 * @param handler Handler that will take action based on deserialized Mutations
 * @param inputBuffer raw byte array containing the Mutation data
 * @param size deserialized size of the mutation
 * @param minPosition minimum CommitLogPosition to replay; mutations before it are suppressed
 * @param entryLocation filePointer offset of the end of the mutation within the CommitLogSegment
 * @param desc CommitLogDescriptor being worked on
 */
@VisibleForTesting
protected void readMutation(CommitLogReadHandler handler, byte[] inputBuffer, int size, CommitLogPosition minPosition, final int entryLocation, final CommitLogDescriptor desc) throws IOException {
    // For now, we need to go through the motions of deserializing the mutation to determine its size and move
    // the file pointer forward accordingly, even if we're behind the requested minPosition within this SyncSegment.
    boolean shouldReplay = entryLocation > minPosition.position;
    final Mutation mutation;
    try (RebufferingInputStream bufIn = new DataInputBuffer(inputBuffer, 0, size)) {
        mutation = Mutation.serializer.deserialize(bufIn, desc.getMessagingVersion(), DeserializationHelper.Flag.LOCAL);
        // double-check that what we read is still valid for the current schema
        for (PartitionUpdate upd : mutation.getPartitionUpdates()) upd.validate();
    } catch (UnknownTableException ex) {
        if (ex.id == null)
            return;
        AtomicInteger i = invalidMutations.get(ex.id);
        if (i == null) {
            i = new AtomicInteger(1);
            invalidMutations.put(ex.id, i);
        } else
            i.incrementAndGet();
        return;
    } catch (Throwable t) {
        JVMStabilityInspector.inspectThrowable(t);
        Path p = Files.createTempFile("mutation", "dat");
        try (DataOutputStream out = new DataOutputStream(Files.newOutputStream(p))) {
            out.write(inputBuffer, 0, size);
        }
        // The checksum passed, so this error isn't permissible.
        handler.handleUnrecoverableError(new CommitLogReadException(String.format("Unexpected error deserializing mutation; saved to %s.  " + "This may be caused by replaying a mutation against a table with the same name but incompatible schema.  " + "Exception follows: %s", p.toString(), t), CommitLogReadErrorReason.MUTATION_ERROR, false));
        return;
    }
    if (logger.isTraceEnabled())
        logger.trace("Read mutation for {}.{}: {}", mutation.getKeyspaceName(), mutation.key(), "{" + StringUtils.join(mutation.getPartitionUpdates().iterator(), ", ") + "}");
    if (shouldReplay)
        handler.handleMutation(mutation, size, entryLocation, desc);
}
Also used: Path(java.nio.file.Path) UnknownTableException(org.apache.cassandra.exceptions.UnknownTableException) DataInputBuffer(org.apache.cassandra.io.util.DataInputBuffer) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) CommitLogReadException(org.apache.cassandra.db.commitlog.CommitLogReadHandler.CommitLogReadException) RebufferingInputStream(org.apache.cassandra.io.util.RebufferingInputStream) Mutation(org.apache.cassandra.db.Mutation) PartitionUpdate(org.apache.cassandra.db.partitions.PartitionUpdate) VisibleForTesting(com.google.common.annotations.VisibleForTesting)
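
The (buffer, offset, size) constructor is doing real work here: it bounds deserialization to a single entry inside a larger segment buffer. A reduced sketch of that windowing; the two-int "entries" stand in for serialized mutations.

import java.io.IOException;

import org.apache.cassandra.io.util.DataInputBuffer;
import org.apache.cassandra.io.util.DataOutputBuffer;

public class WindowSketch {
    public static void main(String[] args) throws IOException {
        byte[] segment;
        try (DataOutputBuffer out = new DataOutputBuffer()) {
            out.writeInt(1111); // the entry we want to read
            out.writeInt(2222); // trailing bytes belonging to the next entry
            segment = out.toByteArray();
        }
        // Restrict the readable window to the first entry: offset 0, size 4 bytes.
        try (DataInputBuffer in = new DataInputBuffer(segment, 0, 4)) {
            assert in.readInt() == 1111;
            assert in.available() == 0; // the window ends before the next entry
        }
    }
}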

Example 4 with DataInputBuffer

Use of org.apache.cassandra.io.util.DataInputBuffer in project cassandra by apache.

From the class PagingState, method legacyDeserialize:

@SuppressWarnings({ "resource", "RedundantSuppression" })
private static PagingState legacyDeserialize(ByteBuffer bytes, ProtocolVersion protocolVersion) throws IOException {
    if (protocolVersion.isGreaterThan(ProtocolVersion.V3))
        throw new IllegalArgumentException();
    DataInputBuffer in = new DataInputBuffer(bytes, false);
    ByteBuffer partitionKey = readWithShortLength(in);
    ByteBuffer rawMark = readWithShortLength(in);
    int remaining = in.readInt();
    /*
         * 2.1/2.2 implementations of V3 protocol did not write remainingInPartition, but C* 3.0+ does, so we need
         * to handle both variants of V3 serialization for compatibility.
         */
    int remainingInPartition = in.available() > 0 ? in.readInt() : Integer.MAX_VALUE;
    return new PagingState(partitionKey.hasRemaining() ? partitionKey : null, rawMark.hasRemaining() ? new RowMark(rawMark, protocolVersion) : null, remaining, remainingInPartition);
}
Also used: DataInputBuffer(org.apache.cassandra.io.util.DataInputBuffer) ByteBuffer(java.nio.ByteBuffer)
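
The available() probe is what makes both V3 serialization variants readable. A minimal sketch of the same optional-trailing-field pattern in isolation; the field values and class name are illustrative.

import java.io.IOException;

import org.apache.cassandra.io.util.DataInputBuffer;
import org.apache.cassandra.io.util.DataOutputBuffer;

public class OptionalFieldSketch {
    public static void main(String[] args) throws IOException {
        try (DataOutputBuffer out = new DataOutputBuffer()) {
            out.writeInt(100); // remaining: written by every version
            out.writeInt(7);   // remainingInPartition: a 2.1/2.2 writer would omit this
            try (DataInputBuffer in = new DataInputBuffer(out.buffer(), false)) {
                int remaining = in.readInt();
                // Probe for the optional field instead of assuming it is present.
                int remainingInPartition = in.available() > 0 ? in.readInt() : Integer.MAX_VALUE;
                System.out.println(remaining + " / " + remainingInPartition);
            }
        }
    }
}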

Example 5 with DataInputBuffer

Use of org.apache.cassandra.io.util.DataInputBuffer in project cassandra by apache.

From the class SinglePartitionSliceCommandTest, method staticColumnsAreReturned:

@Test
public void staticColumnsAreReturned() throws IOException {
    DecoratedKey key = metadata.partitioner.decorateKey(ByteBufferUtil.bytes("k1"));
    QueryProcessor.executeInternal("INSERT INTO ks.tbl (k, s) VALUES ('k1', 's')");
    Assert.assertFalse(QueryProcessor.executeInternal("SELECT s FROM ks.tbl WHERE k='k1'").isEmpty());
    ColumnFilter columnFilter = ColumnFilter.selection(RegularAndStaticColumns.of(s));
    ClusteringIndexSliceFilter sliceFilter = new ClusteringIndexSliceFilter(Slices.NONE, false);
    ReadCommand cmd = SinglePartitionReadCommand.create(metadata, FBUtilities.nowInSeconds(), columnFilter, RowFilter.NONE, DataLimits.NONE, key, sliceFilter);
    // check raw iterator for static cell
    try (ReadExecutionController executionController = cmd.executionController();
        UnfilteredPartitionIterator pi = cmd.executeLocally(executionController)) {
        checkForS(pi);
    }
    ReadResponse response;
    DataOutputBuffer out;
    DataInputPlus in;
    ReadResponse dst;
    // check (de)serialized iterator for memtable static cell
    try (ReadExecutionController executionController = cmd.executionController();
        UnfilteredPartitionIterator pi = cmd.executeLocally(executionController)) {
        response = ReadResponse.createDataResponse(pi, cmd, executionController.getRepairedDataInfo());
    }
    out = new DataOutputBuffer((int) ReadResponse.serializer.serializedSize(response, MessagingService.VERSION_30));
    ReadResponse.serializer.serialize(response, out, MessagingService.VERSION_30);
    in = new DataInputBuffer(out.buffer(), true);
    dst = ReadResponse.serializer.deserialize(in, MessagingService.VERSION_30);
    try (UnfilteredPartitionIterator pi = dst.makeIterator(cmd)) {
        checkForS(pi);
    }
    // check (de)serialized iterator for sstable static cell
    Schema.instance.getColumnFamilyStoreInstance(metadata.id).forceBlockingFlush();
    try (ReadExecutionController executionController = cmd.executionController();
        UnfilteredPartitionIterator pi = cmd.executeLocally(executionController)) {
        response = ReadResponse.createDataResponse(pi, cmd, executionController.getRepairedDataInfo());
    }
    out = new DataOutputBuffer((int) ReadResponse.serializer.serializedSize(response, MessagingService.VERSION_30));
    ReadResponse.serializer.serialize(response, out, MessagingService.VERSION_30);
    in = new DataInputBuffer(out.buffer(), true);
    dst = ReadResponse.serializer.deserialize(in, MessagingService.VERSION_30);
    try (UnfilteredPartitionIterator pi = dst.makeIterator(cmd)) {
        checkForS(pi);
    }
}
Also used: ClusteringIndexSliceFilter(org.apache.cassandra.db.filter.ClusteringIndexSliceFilter) DataInputBuffer(org.apache.cassandra.io.util.DataInputBuffer) DataOutputBuffer(org.apache.cassandra.io.util.DataOutputBuffer) UnfilteredPartitionIterator(org.apache.cassandra.db.partitions.UnfilteredPartitionIterator) DataInputPlus(org.apache.cassandra.io.util.DataInputPlus) ColumnFilter(org.apache.cassandra.db.filter.ColumnFilter) Test(org.junit.Test)
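
The sizing idiom in this test generalizes: compute the serialized size up front, allocate the DataOutputBuffer at exactly that size, then deserialize from out.buffer(). A stripped-down sketch with a plain long/int payload in place of a ReadResponse; the payload and class name are illustrative.

import java.io.IOException;

import org.apache.cassandra.io.util.DataInputBuffer;
import org.apache.cassandra.io.util.DataInputPlus;
import org.apache.cassandra.io.util.DataOutputBuffer;

public class SizedSketch {
    public static void main(String[] args) throws IOException {
        // Pre-size the buffer: 8 bytes for the long plus 4 for the int.
        try (DataOutputBuffer out = new DataOutputBuffer(8 + 4)) {
            out.writeLong(123456789L);
            out.writeInt(42);
            // buffer() exposes the written bytes; true duplicates the buffer,
            // so reading does not disturb the writer's state.
            DataInputPlus in = new DataInputBuffer(out.buffer(), true);
            assert in.readLong() == 123456789L;
            assert in.readInt() == 42;
        }
    }
}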

Aggregations

DataInputBuffer (org.apache.cassandra.io.util.DataInputBuffer): 54 uses
DataOutputBuffer (org.apache.cassandra.io.util.DataOutputBuffer): 33 uses
Test (org.junit.Test): 23 uses
IOException (java.io.IOException): 14 uses
ByteBuffer (java.nio.ByteBuffer): 13 uses
DataInputPlus (org.apache.cassandra.io.util.DataInputPlus): 13 uses
TableMetadata (org.apache.cassandra.schema.TableMetadata): 6 uses
Mutation (org.apache.cassandra.db.Mutation): 5 uses
UUID (java.util.UUID): 4 uses
RowUpdateBuilder (org.apache.cassandra.db.RowUpdateBuilder): 4 uses
DataOutputPlus (org.apache.cassandra.io.util.DataOutputPlus): 4 uses
InetAddressAndPort (org.apache.cassandra.locator.InetAddressAndPort): 4 uses
InputStreamReader (java.io.InputStreamReader): 3 uses
Reader (java.io.Reader): 3 uses
ArrayList (java.util.ArrayList): 3 uses
DataOutputBufferFixed (org.apache.cassandra.io.util.DataOutputBufferFixed): 3 uses
ByteBuf (io.netty.buffer.ByteBuf): 2 uses
LinkedHashMap (java.util.LinkedHashMap): 2 uses
Map (java.util.Map): 2 uses
ClusteringIndexSliceFilter (org.apache.cassandra.db.filter.ClusteringIndexSliceFilter): 2 uses