Example 96 with SSTableReader

Use of org.apache.cassandra.io.sstable.format.SSTableReader in project cassandra by apache.

Class SizeEstimatesRecorder, method estimateMeanPartitionSize.

private long estimateMeanPartitionSize(Collection<SSTableReader> sstables) {
    long sum = 0, count = 0;
    for (SSTableReader sstable : sstables) {
        long n = sstable.getEstimatedPartitionSize().count();
        sum += sstable.getEstimatedPartitionSize().mean() * n;
        count += n;
    }
    return count > 0 ? sum / count : 0;
}
Also used : SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader)
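The method above computes a count-weighted average: each sstable's estimated mean partition size is weighted by that sstable's partition count before dividing by the total count. A minimal standalone sketch of the same arithmetic (the array-based signature here is hypothetical; the real method reads mean() and count() from each sstable's EstimatedHistogram):

```java
public class WeightedMeanSketch {
    // Count-weighted mean: each per-sstable mean is weighted by its partition count.
    static long weightedMean(long[] means, long[] counts) {
        long sum = 0, count = 0;
        for (int i = 0; i < means.length; i++) {
            sum += means[i] * counts[i];
            count += counts[i];
        }
        // integer division, and 0 when there are no partitions at all
        return count > 0 ? sum / count : 0;
    }

    public static void main(String[] args) {
        // 1 partition of mean size 100 plus 3 partitions of mean size 200:
        // (100 * 1 + 200 * 3) / 4 = 175
        System.out.println(weightedMean(new long[] { 100, 200 }, new long[] { 1, 3 }));
    }
}
```

Weighting by count matters because a plain average of the per-sstable means would let a tiny sstable skew the estimate as much as a huge one.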

Example 97 with SSTableReader

Use of org.apache.cassandra.io.sstable.format.SSTableReader in project cassandra by apache.

Class PartitionRangeReadCommand, method queryStorage.

protected UnfilteredPartitionIterator queryStorage(final ColumnFamilyStore cfs, ReadExecutionController executionController) {
    ColumnFamilyStore.ViewFragment view = cfs.select(View.selectLive(dataRange().keyRange()));
    Tracing.trace("Executing seq scan across {} sstables for {}", view.sstables.size(), dataRange().keyRange().getString(metadata().partitionKeyType));
    // fetch data from current memtable, historical memtables, and SSTables in the correct order.
    final List<UnfilteredPartitionIterator> iterators = new ArrayList<>(Iterables.size(view.memtables) + view.sstables.size());
    try {
        for (Memtable memtable : view.memtables) {
            // We close on exception and on closing the result returned by this method
            @SuppressWarnings("resource") Memtable.MemtableUnfilteredPartitionIterator iter = memtable.makePartitionIterator(columnFilter(), dataRange());
            oldestUnrepairedTombstone = Math.min(oldestUnrepairedTombstone, iter.getMinLocalDeletionTime());
            iterators.add(iter);
        }
        for (SSTableReader sstable : view.sstables) {
            // We close on exception and on closing the result returned by this method
            @SuppressWarnings("resource") UnfilteredPartitionIterator iter = sstable.getScanner(columnFilter(), dataRange());
            iterators.add(iter);
            if (!sstable.isRepaired())
                oldestUnrepairedTombstone = Math.min(oldestUnrepairedTombstone, sstable.getMinLocalDeletionTime());
        }
        // iterators can be empty for offline tools
        return iterators.isEmpty() ? EmptyIterators.unfilteredPartition(metadata()) : checkCacheFilter(UnfilteredPartitionIterators.mergeLazily(iterators, nowInSec()), cfs);
    } catch (RuntimeException | Error e) {
        try {
            FBUtilities.closeAll(iterators);
        } catch (Exception suppressed) {
            e.addSuppressed(suppressed);
        }
        throw e;
    }
}
Also used : ArrayList(java.util.ArrayList) RequestExecutionException(org.apache.cassandra.exceptions.RequestExecutionException) IOException(java.io.IOException) SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader)
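The try/catch tail of this method is a close-on-exception pattern: the iterators opened so far are closed if a later open fails, and any failure during cleanup is attached to the original exception via addSuppressed rather than masking it. A hedged standalone sketch of the same pattern, using hypothetical AutoCloseable resources in place of iterators:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOnFailureSketch {
    // Opens n resources; if opening fails partway through, closes the ones
    // already opened and re-throws, attaching close failures as suppressed.
    static List<AutoCloseable> openAll(int n, int failAt) {
        List<AutoCloseable> opened = new ArrayList<>();
        try {
            for (int i = 0; i < n; i++) {
                if (i == failAt)
                    throw new RuntimeException("open failed at " + i);
                final int id = i;
                opened.add(() -> System.out.println("closed " + id));
            }
            return opened; // success: the caller now owns (and must close) the resources
        } catch (RuntimeException | Error e) {
            for (AutoCloseable c : opened) {
                try {
                    c.close();
                } catch (Exception suppressed) {
                    e.addSuppressed(suppressed); // don't let cleanup mask the real error
                }
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(openAll(3, -1).size()); // prints 3: no failure triggered
    }
}
```

This is why the original carries @SuppressWarnings("resource") on each iterator: ownership transfers to the merged result on success, and the catch block handles the failure path by hand.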

Example 98 with SSTableReader

Use of org.apache.cassandra.io.sstable.format.SSTableReader in project cassandra by apache.

Class CompactionLogger, method formatSSTables.

private JsonNode formatSSTables(AbstractCompactionStrategy strategy) {
    ArrayNode node = json.arrayNode();
    CompactionStrategyManager csm = csmRef.get();
    ColumnFamilyStore cfs = cfsRef.get();
    if (csm == null || cfs == null)
        return node;
    for (SSTableReader sstable : cfs.getLiveSSTables()) {
        if (csm.getCompactionStrategyFor(sstable) == strategy)
            node.add(formatSSTable(strategy, sstable));
    }
    return node;
}
Also used : SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) ColumnFamilyStore(org.apache.cassandra.db.ColumnFamilyStore) ArrayNode(org.codehaus.jackson.node.ArrayNode)

Example 99 with SSTableReader

Use of org.apache.cassandra.io.sstable.format.SSTableReader in project cassandra by apache.

Class CompactionManager, method createWriterForAntiCompaction.

public static SSTableWriter createWriterForAntiCompaction(ColumnFamilyStore cfs, File compactionFileLocation, int expectedBloomFilterSize, long repairedAt, UUID pendingRepair, Collection<SSTableReader> sstables, LifecycleTransaction txn) {
    FileUtils.createDirectory(compactionFileLocation);
    int minLevel = Integer.MAX_VALUE;
    // if all sstables share the same level, keep it so the anticompacted sstables can be
    // dropped back in their original place in the repaired sstable manifest; otherwise fall back to level 0
    for (SSTableReader sstable : sstables) {
        if (minLevel == Integer.MAX_VALUE)
            minLevel = sstable.getSSTableLevel();
        if (minLevel != sstable.getSSTableLevel()) {
            minLevel = 0;
            break;
        }
    }
    return SSTableWriter.create(cfs.newSSTableDescriptor(compactionFileLocation), (long) expectedBloomFilterSize, repairedAt, pendingRepair, cfs.metadata, new MetadataCollector(sstables, cfs.metadata().comparator, minLevel), SerializationHeader.make(cfs.metadata(), sstables), cfs.indexManager.listIndexes(), txn);
}
Also used : SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) MetadataCollector(org.apache.cassandra.io.sstable.metadata.MetadataCollector)
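The level-picking loop above implements a simple rule: if every input sstable sits on the same LCS level, the anticompaction output keeps that level; if levels are mixed, it falls back to level 0. A standalone sketch over plain ints (the signature is hypothetical; the real code reads sstable.getSSTableLevel(), and note this sketch maps an empty input to 0, whereas the original would leave Integer.MAX_VALUE):

```java
public class CommonLevelSketch {
    // Returns the shared level when all inputs agree, otherwise 0.
    static int commonLevel(int[] levels) {
        int minLevel = Integer.MAX_VALUE;
        for (int level : levels) {
            if (minLevel == Integer.MAX_VALUE)
                minLevel = level; // remember the first level seen
            if (minLevel != level)
                return 0; // mixed levels: write the output at level 0
        }
        return minLevel == Integer.MAX_VALUE ? 0 : minLevel; // empty input -> 0
    }

    public static void main(String[] args) {
        System.out.println(commonLevel(new int[] { 2, 2, 2 })); // all on level 2 -> 2
        System.out.println(commonLevel(new int[] { 1, 2, 1 })); // mixed -> 0
    }
}
```

Falling back to 0 is the safe choice for leveled compaction: level 0 is the only level where overlapping sstables are allowed.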

Example 100 with SSTableReader

Use of org.apache.cassandra.io.sstable.format.SSTableReader in project cassandra by apache.

Class CompactionManager, method createMerkleTrees.

private static MerkleTrees createMerkleTrees(Iterable<SSTableReader> sstables, Collection<Range<Token>> ranges, ColumnFamilyStore cfs) {
    MerkleTrees tree = new MerkleTrees(cfs.getPartitioner());
    long allPartitions = 0;
    Map<Range<Token>, Long> rangePartitionCounts = Maps.newHashMapWithExpectedSize(ranges.size());
    for (Range<Token> range : ranges) {
        long numPartitions = 0;
        for (SSTableReader sstable : sstables) numPartitions += sstable.estimatedKeysForRanges(Collections.singleton(range));
        rangePartitionCounts.put(range, numPartitions);
        allPartitions += numPartitions;
    }
    for (Range<Token> range : ranges) {
        long numPartitions = rangePartitionCounts.get(range);
        double rangeOwningRatio = allPartitions > 0 ? (double) numPartitions / allPartitions : 0;
        // determine max tree depth proportional to range size to avoid blowing up memory with multiple trees,
        // capping at 20 to prevent large tree (CASSANDRA-11390)
        int maxDepth = rangeOwningRatio > 0 ? (int) Math.floor(20 - Math.log(1 / rangeOwningRatio) / Math.log(2)) : 0;
        // determine tree depth from number of partitions, capping at max tree depth (CASSANDRA-5263)
        int depth = numPartitions > 0 ? (int) Math.min(Math.ceil(Math.log(numPartitions) / Math.log(2)), maxDepth) : 0;
        tree.addMerkleTree((int) Math.pow(2, depth), range);
    }
    if (logger.isDebugEnabled()) {
        // MT serialize may take time
        logger.debug("Created {} merkle trees with merkle trees size {}, {} partitions, {} bytes", tree.ranges().size(), tree.size(), allPartitions, MerkleTrees.serializer.serializedSize(tree, 0));
    }
    return tree;
}
Also used : SSTableReader(org.apache.cassandra.io.sstable.format.SSTableReader) Token(org.apache.cassandra.dht.Token) Range(org.apache.cassandra.dht.Range)
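The two capped logarithms above deserve unpacking: the per-range depth cap loses one level for every halving of the range's share of partitions (never exceeding 20, per CASSANDRA-11390), and the actual depth is roughly log2 of the partition count, clamped to that cap (CASSANDRA-5263). A sketch of just that arithmetic, lifted out of the method:

```java
public class MerkleDepthSketch {
    // Depth cap for a range: 20 minus log2 of the inverse ownership ratio,
    // so a range owning half the partitions gets at most depth 19, and so on.
    static int maxDepth(double rangeOwningRatio) {
        return rangeOwningRatio > 0
             ? (int) Math.floor(20 - Math.log(1 / rangeOwningRatio) / Math.log(2))
             : 0;
    }

    // Actual depth: about log2 of the partition count, clamped to the cap,
    // since a tree deeper than the partition count buys no extra resolution.
    static int depth(long numPartitions, int maxDepth) {
        return numPartitions > 0
             ? (int) Math.min(Math.ceil(Math.log(numPartitions) / Math.log(2)), maxDepth)
             : 0;
    }

    public static void main(String[] args) {
        // a range owning all partitions keeps the full cap of 20
        System.out.println(maxDepth(1.0));
        // 1000 partitions need ceil(log2(1000)) = 10 levels; the cap does not bind
        System.out.println(depth(1000, 20));
        // a range owning a tenth of the partitions loses log2(10) ~ 3.32 levels
        System.out.println(maxDepth(0.1));
    }
}
```

The resulting tree is then sized as 2^depth leaves via Math.pow(2, depth), which is why capping depth directly bounds memory use.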

Aggregations

SSTableReader (org.apache.cassandra.io.sstable.format.SSTableReader): 289 uses
Test (org.junit.Test): 159 uses
ColumnFamilyStore (org.apache.cassandra.db.ColumnFamilyStore): 91 uses
LifecycleTransaction (org.apache.cassandra.db.lifecycle.LifecycleTransaction): 55 uses
Keyspace (org.apache.cassandra.db.Keyspace): 49 uses
File (java.io.File): 45 uses
UUID (java.util.UUID): 28 uses
Range (org.apache.cassandra.dht.Range): 28 uses
Directories (org.apache.cassandra.db.Directories): 27 uses
Token (org.apache.cassandra.dht.Token): 24 uses
RandomAccessFile (java.io.RandomAccessFile): 22 uses
AbstractTransactionalTest (org.apache.cassandra.utils.concurrent.AbstractTransactionalTest): 20 uses
ArrayList (java.util.ArrayList): 18 uses
ByteBuffer (java.nio.ByteBuffer): 17 uses
HashSet (java.util.HashSet): 16 uses
SchemaLoader.createKeyspace (org.apache.cassandra.SchemaLoader.createKeyspace): 16 uses
DecoratedKey (org.apache.cassandra.db.DecoratedKey): 16 uses
RowUpdateBuilder (org.apache.cassandra.db.RowUpdateBuilder): 16 uses
CompactionController (org.apache.cassandra.db.compaction.CompactionController): 14 uses
CompactionIterator (org.apache.cassandra.db.compaction.CompactionIterator): 13 uses