Search in sources:

Example 1 with CompressionInfo

Use of org.apache.cassandra.streaming.compress.CompressionInfo in project cassandra by apache.

From the class OutgoingFileMessage, the method serialize:

public synchronized void serialize(DataOutputStreamPlus out, int version, StreamSession session) throws IOException {
    if (completed) {
        return;
    }
    // Serializing the header returns the CompressionInfo (chunk offsets and
    // compression parameters) when the sstable is compressed, or null otherwise.
    CompressionInfo compressionInfo = FileMessageHeader.serializer.serialize(header, out, version);
    final SSTableReader reader = ref.get();
    // Pick a writer to match: plain StreamWriter for uncompressed sstables,
    // CompressedStreamWriter when chunk metadata is available.
    StreamWriter writer = compressionInfo == null
                        ? new StreamWriter(reader, header.sections, session)
                        : new CompressedStreamWriter(reader, header.sections, compressionInfo, session);
    writer.write(out);
}
Also used: SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) StreamWriter(org.apache.cassandra.streaming.StreamWriter) CompressedStreamWriter(org.apache.cassandra.streaming.compress.CompressedStreamWriter) CompressionInfo(org.apache.cassandra.streaming.compress.CompressionInfo)
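
The null check on the returned CompressionInfo carries the whole dispatch: FileMessageHeader.serializer.serialize hands back chunk metadata only when the sstable is compressed. A minimal sketch of that choice in isolation, assuming the Cassandra types listed above; pickWriter is a hypothetical helper, not part of Cassandra:

// Hypothetical helper (not in Cassandra): isolates the writer dispatch
// performed inline in OutgoingFileMessage.serialize().
static StreamWriter pickWriter(SSTableReader reader,
                               FileMessageHeader header,
                               CompressionInfo compressionInfo,
                               StreamSession session)
{
    // null CompressionInfo: the sstable is uncompressed, stream the raw sections.
    // non-null: stream the already-compressed chunks described by the metadata.
    return compressionInfo == null
         ? new StreamWriter(reader, header.sections, session)
         : new CompressedStreamWriter(reader, header.sections, compressionInfo, session);
}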

Example 2 with CompressionInfo

Use of org.apache.cassandra.streaming.compress.CompressionInfo in project cassandra by apache.

From the class CompressedInputStreamTest, the method testCompressedReadWith:

/**
     * @param valuesToCheck array of longs in the range 0-999 whose positions are read back and verified
     * @param testTruncate if true, truncate the buffered bytes to force a short read
     * @param testException if true, exercise the exception path via testException()
     * @param minCompressRatio minimum compression ratio passed to CompressionParams.snappy()
     * @throws Exception if writing or reading the compressed file fails
     */
private void testCompressedReadWith(long[] valuesToCheck, boolean testTruncate, boolean testException, double minCompressRatio) throws Exception {
    assert valuesToCheck != null && valuesToCheck.length > 0;
    // write compressed data file of longs
    File parentDir = new File(System.getProperty("java.io.tmpdir"));
    Descriptor desc = new Descriptor(parentDir, "ks", "cf", 1);
    File tmp = new File(desc.filenameFor(Component.DATA));
    MetadataCollector collector = new MetadataCollector(new ClusteringComparator(BytesType.instance));
    CompressionParams param = CompressionParams.snappy(32, minCompressRatio);
    Map<Long, Long> index = new HashMap<>();
    try (CompressedSequentialWriter writer = new CompressedSequentialWriter(tmp, desc.filenameFor(Component.COMPRESSION_INFO), null, SequentialWriterOption.DEFAULT, param, collector)) {
        for (long l = 0L; l < 1000; l++) {
            index.put(l, writer.position());
            writer.writeLong(l);
        }
        writer.finish();
    }
    CompressionMetadata comp = CompressionMetadata.create(tmp.getAbsolutePath());
    List<Pair<Long, Long>> sections = new ArrayList<>();
    for (long l : valuesToCheck) {
        long position = index.get(l);
        sections.add(Pair.create(position, position + 8));
    }
    CompressionMetadata.Chunk[] chunks = comp.getChunksForSections(sections);
    long totalSize = comp.getTotalSizeForSections(sections);
    long expectedSize = 0;
    for (CompressionMetadata.Chunk c : chunks)
        expectedSize += c.length + 4; // each chunk is followed by a 4-byte CRC
    assertEquals(expectedSize, totalSize);
    // buffer up only the relevant parts of the file
    int size = 0;
    for (CompressionMetadata.Chunk c : chunks)
        size += c.length + 4; // chunk payload + 4-byte CRC
    byte[] toRead = new byte[size];
    try (RandomAccessFile f = new RandomAccessFile(tmp, "r")) {
        int pos = 0;
        for (CompressionMetadata.Chunk c : chunks) {
            f.seek(c.offset);
            pos += f.read(toRead, pos, c.length + 4);
        }
    }
    if (testTruncate) {
        // simulate a truncated transfer by keeping only the first 50 bytes
        byte[] actuallyRead = new byte[50];
        System.arraycopy(toRead, 0, actuallyRead, 0, 50);
        toRead = actuallyRead;
    }
    // read buffer using CompressedInputStream
    CompressionInfo info = new CompressionInfo(chunks, param);
    if (testException) {
        testException(sections, info);
        return;
    }
    CompressedInputStream input = new CompressedInputStream(new ByteArrayInputStream(toRead), info, ChecksumType.CRC32, () -> 1.0);
    try (DataInputStream in = new DataInputStream(input)) {
        for (int i = 0; i < sections.size(); i++) {
            input.position(sections.get(i).left);
            long readValue = in.readLong();
            assertEquals("expected " + valuesToCheck[i] + " but was " + readValue, valuesToCheck[i], readValue);
        }
    }
}
Also used: CompressedSequentialWriter(org.apache.cassandra.io.compress.CompressedSequentialWriter) CompressionMetadata(org.apache.cassandra.io.compress.CompressionMetadata) ClusteringComparator(org.apache.cassandra.db.ClusteringComparator) CompressionInfo(org.apache.cassandra.streaming.compress.CompressionInfo) CompressionParams(org.apache.cassandra.schema.CompressionParams) CompressedInputStream(org.apache.cassandra.streaming.compress.CompressedInputStream) Descriptor(org.apache.cassandra.io.sstable.Descriptor) DatabaseDescriptor(org.apache.cassandra.config.DatabaseDescriptor) MetadataCollector(org.apache.cassandra.io.sstable.metadata.MetadataCollector) Pair(org.apache.cassandra.utils.Pair)
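
Every compressed chunk on the wire is followed by a 4-byte CRC32 checksum, which is why both the expected-size assertion and the read loop above add 4 to each chunk length. A self-contained sketch of that arithmetic, assuming a hypothetical Chunk class standing in for CompressionMetadata.Chunk:

// Hypothetical sketch, not Cassandra code.
public class ChunkSizeSketch {

    // Stand-in for CompressionMetadata.Chunk: file offset plus compressed length.
    static final class Chunk {
        final long offset;
        final int length;
        Chunk(long offset, int length) { this.offset = offset; this.length = length; }
    }

    // Mirrors the test's size computation: each chunk contributes its
    // compressed payload plus a 4-byte CRC32 trailer.
    static long totalStreamedSize(Chunk[] chunks) {
        long total = 0;
        for (Chunk c : chunks)
            total += c.length + 4L;
        return total;
    }

    public static void main(String[] args) {
        Chunk[] chunks = { new Chunk(0, 100), new Chunk(104, 80) };
        System.out.println(totalStreamedSize(chunks)); // 188 = (100 + 4) + (80 + 4)
    }
}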

Aggregations

CompressionInfo (org.apache.cassandra.streaming.compress.CompressionInfo): 2
DatabaseDescriptor (org.apache.cassandra.config.DatabaseDescriptor): 1
ClusteringComparator (org.apache.cassandra.db.ClusteringComparator): 1
CompressedSequentialWriter (org.apache.cassandra.io.compress.CompressedSequentialWriter): 1
CompressionMetadata (org.apache.cassandra.io.compress.CompressionMetadata): 1
Descriptor (org.apache.cassandra.io.sstable.Descriptor): 1
SSTableReader (org.apache.cassandra.io.sstable.format.SSTableReader): 1
MetadataCollector (org.apache.cassandra.io.sstable.metadata.MetadataCollector): 1
CompressionParams (org.apache.cassandra.schema.CompressionParams): 1
StreamWriter (org.apache.cassandra.streaming.StreamWriter): 1
CompressedInputStream (org.apache.cassandra.streaming.compress.CompressedInputStream): 1
CompressedStreamWriter (org.apache.cassandra.streaming.compress.CompressedStreamWriter): 1
Pair (org.apache.cassandra.utils.Pair): 1