Use of org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost in project hbase by apache: class TestRegionCoprocessorHost, method testPreFlushScannerOpen.
@Test
public void testPreFlushScannerOpen() throws IOException {
  RegionCoprocessorHost host = new RegionCoprocessorHost(region, rsServices, conf);
  ScanInfo oldScanInfo = getScanInfo();
  HStore store = mock(HStore.class);
  when(store.getScanInfo()).thenReturn(oldScanInfo);
  ScanInfo newScanInfo = host.preFlushScannerOpen(store, mock(FlushLifeCycleTracker.class));
  verifyScanInfo(newScanInfo);
}
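The getScanInfo() and verifyScanInfo() helpers are not shown in this snippet. A minimal sketch of what the verification step could look like is below; it assumes the loaded observer rewrote the store's max versions and TTL through ScanOptions, and MAX_VERSIONS and TTL are illustrative constants rather than the project's actual values.

// Hypothetical helpers, assuming the observer set these two values via ScanOptions.
private static final int MAX_VERSIONS = 3; // illustrative constant
private static final long TTL = 60_000L;   // illustrative constant

private void verifyScanInfo(ScanInfo newScanInfo) {
  // The ScanInfo rebuilt by the host should reflect the observer's customizations.
  assertEquals(MAX_VERSIONS, newScanInfo.getMaxVersions());
  assertEquals(TTL, newScanInfo.getTtl());
}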
Use of org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost in project hbase by apache: class TestRegionCoprocessorHost, method testLoadDuplicateCoprocessor.
@Test
public void testLoadDuplicateCoprocessor() throws Exception {
  conf.setBoolean(SKIP_LOAD_DUPLICATE_TABLE_COPROCESSOR, true);
  conf.set(REGION_COPROCESSOR_CONF_KEY, SimpleRegionObserver.class.getName());
  RegionCoprocessorHost host = new RegionCoprocessorHost(region, rsServices, conf);
  // Only one SimpleRegionObserver coprocessor is loaded
  assertEquals(1, host.coprocEnvironments.size());
  // Now allow duplicate coprocessors to be loaded
  conf.setBoolean(SKIP_LOAD_DUPLICATE_TABLE_COPROCESSOR, false);
  host = new RegionCoprocessorHost(region, rsServices, conf);
  // Both duplicate coprocessors are loaded
  assertEquals(2, host.coprocEnvironments.size());
}
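For a duplicate to exist at all, the same observer has to be registered twice: once on the table descriptor and once via the region coprocessor configuration key. A hedged sketch of such a setup is below; the table name "t" and the Mockito stubbing of region are assumptions, not the project's actual fixture code.

// Hypothetical fixture sketch: declare SimpleRegionObserver both as a table coprocessor and
// through the region coprocessor conf key, so the host sees it twice unless
// SKIP_LOAD_DUPLICATE_TABLE_COPROCESSOR tells it to skip the table-level duplicate.
TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("t"))
  .setCoprocessor(SimpleRegionObserver.class.getName())
  .build();
when(region.getTableDescriptor()).thenReturn(desc);
conf.set(REGION_COPROCESSOR_CONF_KEY, SimpleRegionObserver.class.getName());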
Use of org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost in project hbase by apache: class TestRegionCoprocessorHost, method testPreStoreScannerOpen.
@Test
public void testPreStoreScannerOpen() throws IOException {
  RegionCoprocessorHost host = new RegionCoprocessorHost(region, rsServices, conf);
  Scan scan = new Scan();
  scan.setTimeRange(TimeRange.INITIAL_MIN_TIMESTAMP, TimeRange.INITIAL_MAX_TIMESTAMP);
  assertTrue("Scan is not for all time", scan.getTimeRange().isAllTime());
  // SimpleRegionObserver is set up to update the ScanInfo parameters if the passed-in scan
  // is for all time. This lets us exercise both that the Scan is wired up properly in the
  // coprocessor and that we can customize the metadata.
  ScanInfo oldScanInfo = getScanInfo();
  HStore store = mock(HStore.class);
  when(store.getScanInfo()).thenReturn(oldScanInfo);
  ScanInfo newScanInfo = host.preStoreScannerOpen(store, scan);
  verifyScanInfo(newScanInfo);
}
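The observer side of this interaction is not shown here. A minimal sketch of an observer that customizes scan metadata through ScanOptions follows; it is an assumption, not the project's SimpleRegionObserver, the exact ObserverContext generics differ between HBase versions, and for simplicity it customizes unconditionally rather than checking that the scan is for all time.

// Hypothetical observer that adjusts store scan metadata through ScanOptions.
public static class ScanTweakingObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  // Hook signature shown per the HBase 2.x RegionObserver API; check your version's generics.
  public void preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx, Store store,
      ScanOptions options) throws IOException {
    // Illustrative customizations; the values are assumptions, not the project's constants.
    options.setMaxVersions(3);
    options.setTTL(60_000L);
  }
}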
Use of org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost in project hbase by apache: class TestRegionObserverPreFlushAndPreCompact, method testPreCompactReturningNull.
/**
 * Ensure we get the expected exception when we try to return null from a preCompact call.
 * @throws IOException We expect it to throw {@link CoprocessorException}
 */
@Test(expected = CoprocessorException.class)
public void testPreCompactReturningNull() throws IOException {
  RegionCoprocessorHost rch = getRegionCoprocessorHost();
  rch.preCompact(null, null, null, null, null, null);
}
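The host only throws here because a loaded observer returns null from its preCompact hook. A hedged sketch of such an offending observer is below; it is an assumption about the test's setup, not the project's actual observer, and the hook signature is shown per the HBase 2.x API.

// Hypothetical misbehaving observer: returning null from preCompact is illegal, and
// RegionCoprocessorHost is expected to surface it as a CoprocessorException.
public static class NullScannerObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  // Hook signature per the HBase 2.x RegionObserver API; generics may differ in your version.
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store,
      InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker,
      CompactionRequest request) throws IOException {
    return null; // the illegal return value this test exercises
  }
}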
Use of org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost in project hbase by apache: class TestRegionObserverStacking, method initHRegion.
HRegion initHRegion(byte[] tableName, String callingMethod, Configuration conf,
    byte[]... families) throws IOException {
  TableDescriptorBuilder builder =
    TableDescriptorBuilder.newBuilder(TableName.valueOf(tableName));
  for (byte[] family : families) {
    builder.setColumnFamily(ColumnFamilyDescriptorBuilder.of(family));
  }
  TableDescriptor tableDescriptor = builder.build();
  ChunkCreator.initialize(MemStoreLAB.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null,
    MemStoreLAB.INDEX_CHUNK_SIZE_PERCENTAGE_DEFAULT);
  RegionInfo info = RegionInfoBuilder.newBuilder(tableDescriptor.getTableName()).build();
  Path path = new Path(DIR + callingMethod);
  HRegion r = HBaseTestingUtil.createRegionAndWAL(info, path, conf, tableDescriptor);
  // The following piece is a hack. A coprocessor host is normally loaded implicitly by
  // OpenRegionHandler, but we don't really start a region server here, so manually create
  // the coprocessor host and set it on the region.
  RegionCoprocessorHost host =
    new RegionCoprocessorHost(r, Mockito.mock(RegionServerServices.class), conf);
  r.setCoprocessorHost(host);
  return r;
}
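A hedged usage sketch follows: create the region with this helper, then load observers onto its coprocessor host at explicit priorities. The names TEST_TABLE, FAMILY, and MyObserver are placeholders, not identifiers from the project.

// Hypothetical caller: build the region, then register an observer on its coprocessor host.
HRegion region = initHRegion(TEST_TABLE, getClass().getName(), conf, FAMILY);
RegionCoprocessorHost host = region.getCoprocessorHost();
host.load(MyObserver.class, Coprocessor.PRIORITY_USER, conf);
// ... run puts/gets/flushes against the region to drive the observer hooks ...
HBaseTestingUtil.closeRegionAndWAL(region);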