Example 6 with HStore

use of org.apache.hadoop.hbase.regionserver.HStore in project hbase by apache.

the class TestCompactedHFilesDischarger method testCleanerWithParallelScannersAfterCompaction.

@Test
public void testCleanerWithParallelScannersAfterCompaction() throws Exception {
    // Create the cleaner object
    CompactedHFilesDischarger cleaner = new CompactedHFilesDischarger(1000, (Stoppable) null, rss, false);
    // Add some data to the region and do some flushes
    for (int i = 1; i < 10; i++) {
        Put p = new Put(Bytes.toBytes("row" + i));
        p.addColumn(fam, qual1, val);
        region.put(p);
    }
    // flush them
    region.flush(true);
    for (int i = 11; i < 20; i++) {
        Put p = new Put(Bytes.toBytes("row" + i));
        p.addColumn(fam, qual1, val);
        region.put(p);
    }
    // flush them
    region.flush(true);
    for (int i = 21; i < 30; i++) {
        Put p = new Put(Bytes.toBytes("row" + i));
        p.addColumn(fam, qual1, val);
        region.put(p);
    }
    // flush them
    region.flush(true);
    Store store = region.getStore(fam);
    assertEquals(3, store.getStorefilesCount());
    Collection<StoreFile> storefiles = store.getStorefiles();
    Collection<StoreFile> compactedfiles = ((HStore) store).getStoreEngine().getStoreFileManager().getCompactedfiles();
    // None of the files should be in compacted state.
    for (StoreFile file : storefiles) {
        assertFalse(file.isCompactedAway());
    }
    // Do compaction
    region.compact(true);
    // Start scanner threads; they open readers against the remaining post-compaction store file
    startScannerThreads();
    storefiles = store.getStorefiles();
    int usedReaderCount = 0;
    int unusedReaderCount = 0;
    for (StoreFile file : storefiles) {
        // each of the three scanner threads holds a reference to this file's reader
        if (file.getRefCount() == 3) {
            usedReaderCount++;
        }
    }
    compactedfiles = ((HStore) store).getStoreEngine().getStoreFileManager().getCompactedfiles();
    for (StoreFile file : compactedfiles) {
        assertEquals("Refcount should be 3", 0, file.getRefCount());
        unusedReaderCount++;
    }
    // The compacted files are still present, but they are no longer used for reads
    assertEquals("unused reader count should be 3", 3, unusedReaderCount);
    assertEquals("used reader count should be 1", 1, usedReaderCount);
    // now run the cleaner
    cleaner.chore();
    countDown();
    assertEquals(1, store.getStorefilesCount());
    storefiles = store.getStorefiles();
    for (StoreFile file : storefiles) {
        // Should not be in compacted state
        assertFalse(file.isCompactedAway());
    }
    compactedfiles = ((HStore) store).getStoreEngine().getStoreFileManager().getCompactedfiles();
    assertTrue(compactedfiles.isEmpty());
}
Also used : CompactedHFilesDischarger (org.apache.hadoop.hbase.regionserver.CompactedHFilesDischarger), StoreFile (org.apache.hadoop.hbase.regionserver.StoreFile), Store (org.apache.hadoop.hbase.regionserver.Store), HStore (org.apache.hadoop.hbase.regionserver.HStore), Put (org.apache.hadoop.hbase.client.Put), Test (org.junit.Test)
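
The test above relies on two helpers, startScannerThreads() and countDown(), that are defined elsewhere in TestCompactedHFilesDischarger and not shown in this snippet. A minimal sketch of what such helpers might look like, assuming a CountDownLatch field and three scanner threads that hold region scanners open until released (the field name, thread logic, and sleep below are illustrative assumptions, not the actual test code):

private final CountDownLatch latch = new CountDownLatch(3);

private void startScannerThreads() throws InterruptedException {
    // One scanner per thread; each open scanner pins a reference to the
    // store file readers, which is what the refcount assertions observe.
    for (int i = 0; i < 3; i++) {
        new Thread(() -> {
            try (RegionScanner scanner = region.getScanner(new Scan())) {
                List<Cell> cells = new ArrayList<>();
                // Read one batch, then block until the latch is released so the
                // scanner (and its readers) stays open across cleaner.chore().
                scanner.next(cells);
                latch.await();
            } catch (Exception e) {
                // ignored in this sketch; the real test tracks failures explicitly
            }
        }).start();
    }
    // Crude wait for the scanners to open; the real test coordinates more carefully.
    Thread.sleep(100);
}

private void countDown() {
    // Release all waiting scanner threads so the compacted readers can be closed.
    for (int i = 0; i < 3; i++) {
        latch.countDown();
    }
}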

Example 7 with HStore

use of org.apache.hadoop.hbase.regionserver.HStore in project hbase by apache.

the class PerfTestCompactionPolicies method createMockStore.

private HStore createMockStore() {
    HStore s = mock(HStore.class);
    // Stub the store-level settings read during compaction selection:
    // an effectively unlimited store file TTL and a blocking file count of 7.
    when(s.getStoreFileTtl()).thenReturn(Long.MAX_VALUE);
    when(s.getBlockingFileCount()).thenReturn(7L);
    return s;
}
Also used : HStore(org.apache.hadoop.hbase.regionserver.HStore)
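
As a quick sanity check, a caller sees exactly the stubbed values; the snippet below is only an illustration of the Mockito stubs above, not code from PerfTestCompactionPolicies:

HStore store = createMockStore();
// Only the two stubbed getters return configured values; any other method
// returns Mockito's default for its return type.
assertEquals(Long.MAX_VALUE, store.getStoreFileTtl());
assertEquals(7L, store.getBlockingFileCount());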

Aggregations

HStore (org.apache.hadoop.hbase.regionserver.HStore): 7
Test (org.junit.Test): 6
Put (org.apache.hadoop.hbase.client.Put): 5
Configuration (org.apache.hadoop.conf.Configuration): 3
CompactedHFilesDischarger (org.apache.hadoop.hbase.regionserver.CompactedHFilesDischarger): 3
Store (org.apache.hadoop.hbase.regionserver.Store): 3
StoreFile (org.apache.hadoop.hbase.regionserver.StoreFile): 3
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 2
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 2
IOException (java.io.IOException): 1
ArrayList (java.util.ArrayList): 1
CountDownLatch (java.util.concurrent.CountDownLatch): 1
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 1
FileSystem (org.apache.hadoop.fs.FileSystem): 1
Path (org.apache.hadoop.fs.Path): 1
Cell (org.apache.hadoop.hbase.Cell): 1
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo): 1
Connection (org.apache.hadoop.hbase.client.Connection): 1
Get (org.apache.hadoop.hbase.client.Get): 1
Result (org.apache.hadoop.hbase.client.Result): 1