Example 1 with LongToken

Use of org.apache.cassandra.dht.Murmur3Partitioner.LongToken in project cassandra by apache.

From the class RepairMessageSerializationsTest, the method validationCompleteMessage_WithMerkleTree:

@Test
public void validationCompleteMessage_WithMerkleTree() throws IOException {
    MerkleTrees trees = new MerkleTrees(Murmur3Partitioner.instance);
    trees.addMerkleTree(256, new Range<>(new LongToken(1000), new LongToken(1001)));
    ValidationComplete deserialized = validationCompleteMessage(trees);
    // a simple check to make sure we got some merkle trees back.
    Assert.assertEquals(trees.size(), deserialized.trees.size());
}
Also used: MerkleTrees (org.apache.cassandra.utils.MerkleTrees), LongToken (org.apache.cassandra.dht.Murmur3Partitioner.LongToken), Test (org.junit.Test)
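The test above round-trips a MerkleTrees instance through Cassandra's repair-message serializer and checks that the deserialized copy has the same size. As a minimal, self-contained sketch of that write-then-read-back pattern using only plain java.io (RoundTripSketch and roundTrip are hypothetical names for illustration, not Cassandra API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTripSketch {

    // Hypothetical stand-in for the serialize-then-deserialize pattern the
    // test exercises: write a token's raw long value, read it back, and let
    // the caller compare. Cassandra's real serializers carry more framing.
    static long roundTrip(long token) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeLong(token);
        }
        try (DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readLong();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(1000L) == 1000L);
    }
}
```

The same shape applies to the real test: serialize, deserialize, then assert the two sides are equivalent rather than inspecting the wire bytes directly.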

Example 2 with LongToken

Use of org.apache.cassandra.dht.Murmur3Partitioner.LongToken in project cassandra by apache.

From the class StorageServiceServerTest, the method testCreateRepairRangeFrom:

@Test
public void testCreateRepairRangeFrom() throws Exception {
    StorageService.instance.setPartitionerUnsafe(Murmur3Partitioner.instance);
    TokenMetadata metadata = StorageService.instance.getTokenMetadata();
    metadata.clearUnsafe();
    metadata.updateNormalToken(new LongToken(1000L), InetAddress.getByName("127.0.0.1"));
    metadata.updateNormalToken(new LongToken(2000L), InetAddress.getByName("127.0.0.2"));
    metadata.updateNormalToken(new LongToken(3000L), InetAddress.getByName("127.0.0.3"));
    metadata.updateNormalToken(new LongToken(4000L), InetAddress.getByName("127.0.0.4"));
    Collection<Range<Token>> repairRangeFrom = StorageService.instance.createRepairRangeFrom("1500", "3700");
    assert repairRangeFrom.size() == 3;
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(1500L), new LongToken(2000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(2000L), new LongToken(3000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(3000L), new LongToken(3700L)));
    repairRangeFrom = StorageService.instance.createRepairRangeFrom("500", "700");
    assert repairRangeFrom.size() == 1;
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(500L), new LongToken(700L)));
    repairRangeFrom = StorageService.instance.createRepairRangeFrom("500", "1700");
    assert repairRangeFrom.size() == 2;
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(500L), new LongToken(1000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(1000L), new LongToken(1700L)));
    repairRangeFrom = StorageService.instance.createRepairRangeFrom("2500", "2300");
    assert repairRangeFrom.size() == 5;
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(2500L), new LongToken(3000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(3000L), new LongToken(4000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(4000L), new LongToken(1000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(1000L), new LongToken(2000L)));
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(2000L), new LongToken(2300L)));
    repairRangeFrom = StorageService.instance.createRepairRangeFrom("2000", "3000");
    assert repairRangeFrom.size() == 1;
    assert repairRangeFrom.contains(new Range<Token>(new LongToken(2000L), new LongToken(3000L)));
    repairRangeFrom = StorageService.instance.createRepairRangeFrom("2000", "2000");
    assert repairRangeFrom.size() == 0;
}
Also used: LongToken (org.apache.cassandra.dht.Murmur3Partitioner.LongToken), StringToken (org.apache.cassandra.dht.OrderPreservingPartitioner.StringToken), Token (org.apache.cassandra.dht.Token), TokenMetadata (org.apache.cassandra.locator.TokenMetadata), Range (org.apache.cassandra.dht.Range), Test (org.junit.Test)
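The interesting case in the test above is createRepairRangeFrom("2500", "2300"), where the requested span wraps around the ring and is split at every node token it crosses, yielding five sub-ranges. A self-contained sketch of that splitting logic over a sorted array of ring tokens (RepairRangeSketch and split are hypothetical names; Cassandra's real implementation works on Range and Token objects):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RepairRangeSketch {

    // Split the half-open span (start, end] at every ring token it crosses.
    // When start > end the span wraps around the ring, mirroring what
    // createRepairRangeFrom("2500", "2300") produces in the test above.
    static List<long[]> split(long[] ringTokens, long start, long end) {
        List<long[]> result = new ArrayList<>();
        if (start == end)
            return result; // an empty span yields no repair ranges

        boolean wraps = start > end;
        List<Long> cuts = new ArrayList<>();
        for (long t : ringTokens) {
            boolean inside = wraps ? (t > start || t < end)
                                   : (t > start && t < end);
            if (inside)
                cuts.add(t);
        }
        Collections.sort(cuts);
        if (wraps) {
            // walk the ring starting just after 'start': the high tokens
            // come first, then the tokens past the wrap point
            List<Long> ordered = new ArrayList<>();
            for (long t : cuts) if (t > start) ordered.add(t);
            for (long t : cuts) if (t < end) ordered.add(t);
            cuts = ordered;
        }
        long current = start;
        for (long cut : cuts) {
            result.add(new long[] { current, cut });
            current = cut;
        }
        result.add(new long[] { current, end });
        return result;
    }

    public static void main(String[] args) {
        long[] ring = { 1000L, 2000L, 3000L, 4000L };
        System.out.println(split(ring, 2500L, 2300L).size()); // wrapped span
    }
}
```

Against the ring {1000, 2000, 3000, 4000}, the wrapped span (2500, 2300] splits into (2500, 3000], (3000, 4000], (4000, 1000], (1000, 2000], (2000, 2300], matching the five ranges the test asserts.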

Example 3 with LongToken

Use of org.apache.cassandra.dht.Murmur3Partitioner.LongToken in project cassandra by apache.

From the class Memtable, the method estimateRowOverhead:

private static int estimateRowOverhead(final int count) {
    // calculate row overhead
    try (final OpOrder.Group group = new OpOrder().start()) {
        int rowOverhead;
        MemtableAllocator allocator = MEMORY_POOL.newAllocator();
        ConcurrentNavigableMap<PartitionPosition, Object> partitions = new ConcurrentSkipListMap<>();
        final Object val = new Object();
        for (int i = 0; i < count; i++)
            partitions.put(allocator.clone(new BufferDecoratedKey(new LongToken(i), ByteBufferUtil.EMPTY_BYTE_BUFFER), group), val);
        double avgSize = ObjectSizes.measureDeep(partitions) / (double) count;
        rowOverhead = (int) ((avgSize - Math.floor(avgSize)) < 0.05 ? Math.floor(avgSize) : Math.ceil(avgSize));
        rowOverhead -= ObjectSizes.measureDeep(new LongToken(0));
        rowOverhead += AtomicBTreePartition.EMPTY_SIZE;
        allocator.setDiscarding();
        allocator.setDiscarded();
        return rowOverhead;
    }
}
Also used: OpOrder (org.apache.cassandra.utils.concurrent.OpOrder), MemtableAllocator (org.apache.cassandra.utils.memory.MemtableAllocator), LongToken (org.apache.cassandra.dht.Murmur3Partitioner.LongToken)
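The rounding step in estimateRowOverhead rounds the measured average down only when it is within 0.05 of a whole number, and otherwise rounds up, so the per-row overhead estimate errs on the high side. A minimal sketch of that heuristic in isolation (OverheadRounding and roundPerRowSize are hypothetical names for illustration):

```java
public class OverheadRounding {

    // Round down only when the fractional part is under 0.05; otherwise
    // round up, so near-exact measurements are not inflated but anything
    // else is estimated conservatively.
    static int roundPerRowSize(double avgSize) {
        return (int) ((avgSize - Math.floor(avgSize)) < 0.05
                ? Math.floor(avgSize)
                : Math.ceil(avgSize));
    }

    public static void main(String[] args) {
        System.out.println(roundPerRowSize(10.04)); // near-exact: rounds down
        System.out.println(roundPerRowSize(10.50)); // rounds up
    }
}
```

Subtracting the deep-measured size of a bare LongToken afterwards, as the method does, isolates the map and key overhead from the token payload itself.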

Aggregations

LongToken (org.apache.cassandra.dht.Murmur3Partitioner.LongToken): 3
Test (org.junit.Test): 2
StringToken (org.apache.cassandra.dht.OrderPreservingPartitioner.StringToken): 1
Range (org.apache.cassandra.dht.Range): 1
Token (org.apache.cassandra.dht.Token): 1
TokenMetadata (org.apache.cassandra.locator.TokenMetadata): 1
MerkleTrees (org.apache.cassandra.utils.MerkleTrees): 1
OpOrder (org.apache.cassandra.utils.concurrent.OpOrder): 1
MemtableAllocator (org.apache.cassandra.utils.memory.MemtableAllocator): 1