Example 6 with CompactionContext

use of org.apache.hadoop.hbase.regionserver.compactions.CompactionContext in project hbase by apache.

In the class TestSplitTransactionOnCluster, the method testSplitFailedCompactionAndSplit requests a compaction, simulates a failed split by closing and re-initializing the region, then verifies that the previously requested compaction still runs and that a subsequent split succeeds.

@Test(timeout = 60000)
public void testSplitFailedCompactionAndSplit() throws Exception {
    final TableName tableName = TableName.valueOf(name.getMethodName());
    // Create table then get the single region for our new table.
    HTableDescriptor htd = new HTableDescriptor(tableName);
    byte[] cf = Bytes.toBytes("cf");
    htd.addFamily(new HColumnDescriptor(cf));
    admin.createTable(htd);
    for (int i = 0; cluster.getRegions(tableName).isEmpty() && i < 100; i++) {
        Thread.sleep(100);
    }
    assertEquals(1, cluster.getRegions(tableName).size());
    HRegion region = cluster.getRegions(tableName).get(0);
    Store store = region.getStore(cf);
    int regionServerIndex = cluster.getServerWith(region.getRegionInfo().getRegionName());
    HRegionServer regionServer = cluster.getRegionServer(regionServerIndex);
    Table t = TESTING_UTIL.getConnection().getTable(tableName);
    // insert data
    insertData(tableName, admin, t);
    insertData(tableName, admin, t);
    int fileNum = store.getStorefiles().size();
    // 0, Compaction Request
    store.triggerMajorCompaction();
    CompactionContext cc = store.requestCompaction();
    assertNotNull(cc);
    // 1, Simulate a split that times out
    // 1.1 close region
    assertEquals(2, region.close(false).get(cf).size());
    // 1.2 roll back: initialize the region again
    region.initialize();
    // 2, Run Compaction cc
    assertFalse(region.compact(cc, store, NoLimitThroughputController.INSTANCE));
    assertTrue(fileNum > store.getStorefiles().size());
    // 3, Split
    requestSplitRegion(regionServer, region, Bytes.toBytes("row3"));
    assertEquals(2, cluster.getRegions(tableName).size());
}
Also used : TableName(org.apache.hadoop.hbase.TableName) Table(org.apache.hadoop.hbase.client.Table) CompactionContext(org.apache.hadoop.hbase.regionserver.compactions.CompactionContext) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) Test(org.junit.Test)
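
Distilled from the test above, the core CompactionContext flow at the Store/Region level reduces to a few calls. The following is only a sketch, assuming `region` and `store` are the initialized HRegion and Store from the test, not a drop-in snippet:

// Sketch only: `region` and `store` stand in for the initialized objects from the test above.
store.triggerMajorCompaction();                    // mark the next selection as a major compaction
CompactionContext cc = store.requestCompaction();  // select store files; the test asserts this is non-null
if (cc != null) {
    // Run the selected compaction; the boolean result reports whether it completed.
    boolean completed = region.compact(cc, store, NoLimitThroughputController.INSTANCE);
}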

Example 7 with CompactionContext

use of org.apache.hadoop.hbase.regionserver.compactions.CompactionContext in project hbase by apache.

In the class TestStripeStoreEngine, the method testCompactionContextForceSelect verifies that forceSelect replaces the file list chosen by select, and that the overridden request is the one passed to the compactor.

@Test
public void testCompactionContextForceSelect() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int targetCount = 2;
    conf.setInt(StripeStoreConfig.INITIAL_STRIPE_COUNT_KEY, targetCount);
    conf.setInt(StripeStoreConfig.MIN_FILES_L0_KEY, 2);
    conf.set(StoreEngine.STORE_ENGINE_CLASS_KEY, TestStoreEngine.class.getName());
    TestStoreEngine se = createEngine(conf);
    StripeCompactor mockCompactor = mock(StripeCompactor.class);
    se.setCompactorOverride(mockCompactor);
    when(mockCompactor.compact(any(CompactionRequest.class), anyInt(), anyLong(),
        any(byte[].class), any(byte[].class), any(byte[].class), any(byte[].class),
        any(ThroughputController.class), any(User.class))).thenReturn(new ArrayList<>());
    // Produce 3 L0 files.
    StoreFile sf = createFile();
    ArrayList<StoreFile> compactUs = al(sf, createFile(), createFile());
    se.getStoreFileManager().loadFiles(compactUs);
    // Create a compaction that would want to split the stripe.
    CompactionContext compaction = se.createCompaction();
    compaction.select(al(), false, false, false);
    assertEquals(3, compaction.getRequest().getFiles().size());
    // Override the file list. Granted, overriding the compaction in this manner would
    // break things in the real world, but here we only want to verify the override.
    compactUs.remove(sf);
    CompactionRequest req = new CompactionRequest(compactUs);
    compaction.forceSelect(req);
    assertEquals(2, compaction.getRequest().getFiles().size());
    assertFalse(compaction.getRequest().getFiles().contains(sf));
    // Make sure the correct method is called on the compactor.
    compaction.compact(NoLimitThroughputController.INSTANCE, null);
    verify(mockCompactor, times(1)).compact(compaction.getRequest(), targetCount, 0L,
        StripeStoreFileManager.OPEN_KEY, StripeStoreFileManager.OPEN_KEY, null, null,
        NoLimitThroughputController.INSTANCE, null);
}
Also used : User(org.apache.hadoop.hbase.security.User) Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) CompactionContext(org.apache.hadoop.hbase.regionserver.compactions.CompactionContext) ThroughputController(org.apache.hadoop.hbase.regionserver.throttle.ThroughputController) NoLimitThroughputController(org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController) CompactionRequest(org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest) StripeCompactor(org.apache.hadoop.hbase.regionserver.compactions.StripeCompactor) Test(org.junit.Test)
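
The point of the test is the contrast between select and forceSelect. As a sketch using only the calls shown above (`se`, `al(...)`, and `compactUs` are the test's own helpers and fixtures):

// Sketch only: `se`, `al(...)` and `compactUs` come from the test above.
CompactionContext compaction = se.createCompaction();
// Normal path: let the compaction policy choose the files.
compaction.select(al(), false, false, false);
// Override path: replace the selection with an explicit request, bypassing the policy.
compaction.forceSelect(new CompactionRequest(compactUs));
// Execute with whatever request is currently selected; the test passes null for the User.
compaction.compact(NoLimitThroughputController.INSTANCE, null);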

Aggregations

CompactionContext (org.apache.hadoop.hbase.regionserver.compactions.CompactionContext): 7
Test (org.junit.Test): 3
Configuration (org.apache.hadoop.conf.Configuration): 2
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration): 2
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 2
CompactionRequest (org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest): 2
NoLimitThroughputController (org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController): 2
ThroughputController (org.apache.hadoop.hbase.regionserver.throttle.ThroughputController): 2
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 1
IOException (java.io.IOException): 1
InterruptedIOException (java.io.InterruptedIOException): 1
Key (java.security.Key): 1
SecureRandom (java.security.SecureRandom): 1
ArrayList (java.util.ArrayList): 1
ThreadPoolExecutor (java.util.concurrent.ThreadPoolExecutor): 1
SecretKeySpec (javax.crypto.spec.SecretKeySpec): 1
Cell (org.apache.hadoop.hbase.Cell): 1
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 1
KeyValue (org.apache.hadoop.hbase.KeyValue): 1
TableName (org.apache.hadoop.hbase.TableName): 1