
Example 1 with CoveredColumnIndexSpecifierBuilder

use of org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder in project phoenix by apache.

the class FailForUnsupportedHBaseVersionsIT method testDoesNotStartRegionServerForUnsupportedCompressionAndVersion.

/**
     * Test that we correctly abort a RegionServer when we run tests with an unsupported HBase
     * version. The 'completeness' of this test requires running it against both a version of
     * HBase that doesn't support WAL Compression and one that does. Currently, the unsupported
     * case is the default version (0.94.4), so just running 'mvn test' will exercise it.
     * However, this test will not fail when running against a version of HBase with
     * WALCompression enabled. Therefore, to fully test this functionality, we need to run the
     * test against both a supported and an unsupported version of HBase (as long as we want to
     * support a version of HBase that doesn't support custom WAL Codecs).
     * @throws Exception on failure
     */
@Test(timeout = 300000)
public void testDoesNotStartRegionServerForUnsupportedCompressionAndVersion() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    setUpConfigForMiniCluster(conf);
    IndexTestingUtils.setupConfig(conf);
    // enable WAL Compression
    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, true);
    // check whether this HBase version is supported (validateVersion returns null if so)
    String version = VersionInfo.getVersion();
    boolean supported = (Indexer.validateVersion(version, conf) == null);
    // start the minicluster
    HBaseTestingUtility util = new HBaseTestingUtility(conf);
    util.startMiniCluster();
    try {
        // setup the primary table
        @SuppressWarnings("deprecation") HTableDescriptor desc = new HTableDescriptor("testDoesNotStartRegionServerForUnsupportedCompressionAndVersion");
        byte[] family = Bytes.toBytes("f");
        desc.addFamily(new HColumnDescriptor(family));
        // enable indexing to a non-existent index table
        String indexTableName = "INDEX_TABLE";
        ColumnGroup fam1 = new ColumnGroup(indexTableName);
        fam1.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
        CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
        builder.addIndexGroup(fam1);
        builder.build(desc);
        // get a reference to the regionserver, so we can ensure it aborts
        HRegionServer server = util.getMiniHBaseCluster().getRegionServer(0);
        // create the primary table
        HBaseAdmin admin = util.getHBaseAdmin();
        if (supported) {
            admin.createTable(desc);
            assertFalse("Hosting regeion server failed, even the HBase version (" + version + ") supports WAL Compression.", server.isAborted());
        } else {
            admin.createTableAsync(desc, null);
            // wait for the regionserver to abort - if it never does, the test times out and is
            // considered broken.
            while (!server.isAborted()) {
                LOG.debug("Waiting on regionserver to abort..");
            }
        }
    } finally {
        // cleanup
        util.shutdownMiniCluster();
    }
}
Also used : HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) Configuration(org.apache.hadoop.conf.Configuration) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) HRegionServer(org.apache.hadoop.hbase.regionserver.HRegionServer) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) CoveredColumn(org.apache.phoenix.hbase.index.covered.example.CoveredColumn) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) CoveredColumnIndexSpecifierBuilder(org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder) ColumnGroup(org.apache.phoenix.hbase.index.covered.example.ColumnGroup) Test(org.junit.Test) NeedsOwnMiniClusterTest(org.apache.phoenix.end2end.NeedsOwnMiniClusterTest)
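
A minimal sketch distilled from the example above: validate the HBase version with Indexer.validateVersion (which, per the test, returns null when the version is supported), then attach the covered-column index spec to the table descriptor before creating the table. The table, family, and index names are illustrative, and the import path of Indexer is assumed from the Phoenix source layout.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.VersionInfo;
import org.apache.phoenix.hbase.index.Indexer;
import org.apache.phoenix.hbase.index.covered.example.ColumnGroup;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumn;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder;

public class VersionGatedIndexSetup {

    @SuppressWarnings("deprecation")
    public static HTableDescriptor describeIndexedTable() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // a null return value means this HBase version supports the indexing WAL codec
        if (Indexer.validateVersion(VersionInfo.getVersion(), conf) != null) {
            throw new IllegalStateException("This HBase version does not support custom WAL codecs");
        }
        byte[] family = Bytes.toBytes("f");
        HTableDescriptor desc = new HTableDescriptor("MY_TABLE"); // illustrative name
        desc.addFamily(new HColumnDescriptor(family));
        // index all qualifiers of family 'f' into the (hypothetical) MY_INDEX table
        ColumnGroup group = new ColumnGroup("MY_INDEX");
        group.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
        CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
        builder.addIndexGroup(group);
        builder.build(desc); // serializes the index spec into the descriptor
        return desc;
    }
}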

Example 2 with CoveredColumnIndexSpecifierBuilder

use of org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder in project phoenix by apache.

the class TestCoveredIndexSpecifierBuilder method testSimpleSerialziationDeserialization.

@Test
public void testSimpleSerialziationDeserialization() throws Exception {
    byte[] indexed_qualifer = Bytes.toBytes("indexed_qual");
    // set up the index
    CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
    ColumnGroup fam1 = new ColumnGroup(INDEX_TABLE);
    // match a single family:qualifier pair
    CoveredColumn col1 = new CoveredColumn(FAMILY, indexed_qualifer);
    fam1.add(col1);
    // matches the family2:* columns
    CoveredColumn col2 = new CoveredColumn(FAMILY2, null);
    fam1.add(col2);
    builder.addIndexGroup(fam1);
    ColumnGroup fam2 = new ColumnGroup(INDEX_TABLE2);
    // match a single family2:qualifier pair
    CoveredColumn col3 = new CoveredColumn(FAMILY2, indexed_qualifer);
    fam2.add(col3);
    builder.addIndexGroup(fam2);
    Configuration conf = new Configuration(false);
    // convert the map that HTableDescriptor gets into the conf the coprocessor receives
    Map<String, String> map = builder.convertToMap();
    for (Entry<String, String> entry : map.entrySet()) {
        conf.set(entry.getKey(), entry.getValue());
    }
    List<ColumnGroup> columns = CoveredColumnIndexSpecifierBuilder.getColumns(conf);
    assertEquals("Didn't deserialize the expected number of column groups", 2, columns.size());
    ColumnGroup group = columns.get(0);
    assertEquals("Didn't deserialize expected column in first group", col1, group.getColumnForTesting(0));
    assertEquals("Didn't deserialize expected column in first group", col2, group.getColumnForTesting(1));
    group = columns.get(1);
    assertEquals("Didn't deserialize expected column in second group", col3, group.getColumnForTesting(0));
}
Also used : CoveredColumn(org.apache.phoenix.hbase.index.covered.example.CoveredColumn) Configuration(org.apache.hadoop.conf.Configuration) CoveredColumnIndexSpecifierBuilder(org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder) ColumnGroup(org.apache.phoenix.hbase.index.covered.example.ColumnGroup) Test(org.junit.Test)
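
The same round trip outside the test harness, as a hedged sketch: the builder serializes the index spec to plain string key/value pairs, which is how it travels from the table descriptor to the coprocessor's Configuration, and the static getColumns recovers the groups on the other side. Family, qualifier, and index-table names are illustrative.

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.hbase.index.covered.example.ColumnGroup;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumn;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder;

public class IndexSpecRoundTrip {

    public static void main(String[] args) {
        // build a one-group spec: index fam:qual into the (hypothetical) DEMO_INDEX table
        CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
        ColumnGroup group = new ColumnGroup("DEMO_INDEX");
        group.add(new CoveredColumn(Bytes.toBytes("fam"), Bytes.toBytes("qual")));
        builder.addIndexGroup(group);
        // serialize to plain strings, as stored on the table descriptor
        Map<String, String> serialized = builder.convertToMap();
        // replay those strings into a Configuration, as the coprocessor would see them
        Configuration conf = new Configuration(false);
        for (Map.Entry<String, String> entry : serialized.entrySet()) {
            conf.set(entry.getKey(), entry.getValue());
        }
        // recover the column groups on the "server side"
        for (ColumnGroup recovered : CoveredColumnIndexSpecifierBuilder.getColumns(conf)) {
            System.out.println("Recovered index group: " + recovered);
        }
    }
}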

Example 3 with CoveredColumnIndexSpecifierBuilder

use of org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder in project phoenix by apache.

the class WALReplayWithIndexWritesAndCompressedWALIT method testReplayEditsWrittenViaHRegion.

/**
   * Test writing edits into a region, closing it, splitting the logs, and then opening the
   * region again. Verify seqids.
   * @throws Exception on failure
   */
@SuppressWarnings("deprecation")
@Test
public void testReplayEditsWrittenViaHRegion() throws Exception {
    final String tableNameStr = "testReplayEditsWrittenViaHRegion";
    final HRegionInfo hri = new HRegionInfo(org.apache.hadoop.hbase.TableName.valueOf(tableNameStr), null, null, false);
    final Path basedir = FSUtils.getTableDir(hbaseRootDir, org.apache.hadoop.hbase.TableName.valueOf(tableNameStr));
    deleteDir(basedir);
    final HTableDescriptor htd = createBasic3FamilyHTD(tableNameStr);
    // set up basic indexing for the table
    // enable indexing to a non-existent index table
    byte[] family = new byte[] { 'a' };
    ColumnGroup fam1 = new ColumnGroup(INDEX_TABLE_NAME);
    fam1.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
    CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
    builder.addIndexGroup(fam1);
    builder.build(htd);
    // create the region + its WAL
    // FIXME: Uses private type
    HRegion region0 = HRegion.createHRegion(hri, hbaseRootDir, this.conf, htd);
    region0.close();
    region0.getWAL().close();
    WALFactory walFactory = new WALFactory(this.conf, null, "localhost,1234");
    WAL wal = createWAL(this.conf, walFactory);
    RegionServerServices mockRS = Mockito.mock(RegionServerServices.class);
    // mock out some of the internals of the RSS, so we can run CPs
    when(mockRS.getWAL(null)).thenReturn(wal);
    RegionServerAccounting rsa = Mockito.mock(RegionServerAccounting.class);
    when(mockRS.getRegionServerAccounting()).thenReturn(rsa);
    ServerName mockServerName = Mockito.mock(ServerName.class);
    when(mockServerName.getServerName()).thenReturn(tableNameStr + ",1234");
    when(mockRS.getServerName()).thenReturn(mockServerName);
    HRegion region = spy(new HRegion(basedir, wal, this.fs, this.conf, hri, htd, mockRS));
    region.initialize();
    when(region.getSequenceId()).thenReturn(0L);
    //make an attempted write to the primary that should also be indexed
    byte[] rowkey = Bytes.toBytes("indexed_row_key");
    Put p = new Put(rowkey);
    p.add(family, Bytes.toBytes("qual"), Bytes.toBytes("value"));
    region.put(p);
    // we should then see the server go down
    Mockito.verify(mockRS, Mockito.times(1)).abort(Mockito.anyString(), Mockito.any(Exception.class));
    // then create the index table so we are successful on WAL replay
    CoveredColumnIndexer.createIndexTable(UTIL.getHBaseAdmin(), INDEX_TABLE_NAME);
    // run the WAL split and setup the region
    runWALSplit(this.conf, walFactory);
    WAL wal2 = createWAL(this.conf, walFactory);
    HRegion region1 = new HRegion(basedir, wal2, this.fs, this.conf, hri, htd, mockRS);
    // initialize the region - this should replay the WALEdits from the WAL
    region1.initialize();
    // now check to ensure that we wrote to the index table
    HTable index = new HTable(UTIL.getConfiguration(), INDEX_TABLE_NAME);
    int indexSize = getKeyValueCount(index);
    assertEquals("Index wasn't propertly updated from WAL replay!", 1, indexSize);
    Get g = new Get(rowkey);
    final Result result = region1.get(g);
    assertEquals("Primary region wasn't updated from WAL replay!", 1, result.size());
    // cleanup the index table
    HBaseAdmin admin = UTIL.getHBaseAdmin();
    admin.disableTable(INDEX_TABLE_NAME);
    admin.deleteTable(INDEX_TABLE_NAME);
    admin.close();
}
Also used : Path(org.apache.hadoop.fs.Path) WAL(org.apache.hadoop.hbase.wal.WAL) RegionServerServices(org.apache.hadoop.hbase.regionserver.RegionServerServices) RegionServerAccounting(org.apache.hadoop.hbase.regionserver.RegionServerAccounting) HTable(org.apache.hadoop.hbase.client.HTable) Put(org.apache.hadoop.hbase.client.Put) IOException(java.io.IOException) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) Result(org.apache.hadoop.hbase.client.Result) HRegionInfo(org.apache.hadoop.hbase.HRegionInfo) HRegion(org.apache.hadoop.hbase.regionserver.HRegion) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) CoveredColumn(org.apache.phoenix.hbase.index.covered.example.CoveredColumn) CoveredColumnIndexSpecifierBuilder(org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder) ServerName(org.apache.hadoop.hbase.ServerName) Get(org.apache.hadoop.hbase.client.Get) WALFactory(org.apache.hadoop.hbase.wal.WALFactory) ColumnGroup(org.apache.phoenix.hbase.index.covered.example.ColumnGroup) Test(org.junit.Test) NeedsOwnMiniClusterTest(org.apache.phoenix.end2end.NeedsOwnMiniClusterTest)
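
One detail worth isolating from the test above: WAL replay of indexed edits only succeeds once the index table exists, which is why the test calls CoveredColumnIndexer.createIndexTable before reopening the region. A hedged convenience wrapper under that reading; the tableExists guard is an addition, not from the test, and CoveredColumnIndexer's package is assumed to match its sibling example classes.

import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexer;

public class IndexTablePrecondition {

    // Create the index table if it is missing, so WAL replay of indexed edits can succeed.
    public static void ensureIndexTable(HBaseAdmin admin, String indexTableName) throws Exception {
        if (!admin.tableExists(indexTableName)) {
            CoveredColumnIndexer.createIndexTable(admin, indexTableName);
        }
    }
}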

Example 4 with CoveredColumnIndexSpecifierBuilder

use of org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder in project phoenix by apache.

the class TestWALRecoveryCaching method testWaitsOnIndexRegionToReload.

//TODO: Jesse to fix
@SuppressWarnings("deprecation")
@Ignore("Configuration issue - valid test, just needs fixing")
@Test
public void testWaitsOnIndexRegionToReload() throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    Configuration conf = util.getConfiguration();
    setUpConfigForMiniCluster(conf);
    // setup other useful stats
    IndexTestingUtils.setupConfig(conf);
    conf.setBoolean(Indexer.CHECK_VERSION_CONF_KEY, false);
    // make sure everything is setup correctly
    IndexManagementUtil.ensureMutableIndexingCorrectlyConfigured(conf);
    // start the cluster with 2 rs
    util.startMiniCluster(2);
    HBaseAdmin admin = util.getHBaseAdmin();
    // setup the index
    byte[] family = Bytes.toBytes("family");
    byte[] qual = Bytes.toBytes("qualifier");
    byte[] nonIndexedFamily = Bytes.toBytes("nonIndexedFamily");
    String indexedTableName = getIndexTableName();
    ColumnGroup columns = new ColumnGroup(indexedTableName);
    columns.add(new CoveredColumn(family, qual));
    CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
    builder.addIndexGroup(columns);
    // create the primary table w/ indexing enabled
    HTableDescriptor primaryTable = new HTableDescriptor(testTable.getTableName());
    primaryTable.addFamily(new HColumnDescriptor(family));
    primaryTable.addFamily(new HColumnDescriptor(nonIndexedFamily));
    builder.addArbitraryConfigForTesting(Indexer.RecoveryFailurePolicyKeyForTesting, ReleaseLatchOnFailurePolicy.class.getName());
    builder.build(primaryTable);
    admin.createTable(primaryTable);
    // create the index table
    HTableDescriptor indexTableDesc = new HTableDescriptor(Bytes.toBytes(getIndexTableName()));
    indexTableDesc.addCoprocessor(IndexTableBlockingReplayObserver.class.getName());
    CoveredColumnIndexer.createIndexTable(admin, indexTableDesc);
    // figure out where our tables live
    ServerName shared = ensureTablesLiveOnSameServer(util.getMiniHBaseCluster(), Bytes.toBytes(indexedTableName), testTable.getTableName());
    // load some data into the table
    Put p = new Put(Bytes.toBytes("row"));
    p.add(family, qual, Bytes.toBytes("value"));
    HTable primary = new HTable(conf, testTable.getTableName());
    primary.put(p);
    primary.flushCommits();
    // turn on the recovery latch
    allowIndexTableToRecover = new CountDownLatch(1);
    // kill the server where the tables live - this should trigger distributed log splitting
    // find the regionserver that matches the passed server
    List<Region> online = new ArrayList<Region>();
    online.addAll(getRegionsFromServerForTable(util.getMiniHBaseCluster(), shared, testTable.getTableName()));
    online.addAll(getRegionsFromServerForTable(util.getMiniHBaseCluster(), shared, Bytes.toBytes(indexedTableName)));
    // log all the current state of the server
    LOG.info("Current Server/Region paring: ");
    for (RegionServerThread t : util.getMiniHBaseCluster().getRegionServerThreads()) {
        // check all the conditions for the server to be done
        HRegionServer server = t.getRegionServer();
        if (server.isStopping() || server.isStopped() || server.isAborted()) {
            LOG.info("\t== Offline: " + server.getServerName());
            continue;
        }
        List<HRegionInfo> regions = ProtobufUtil.getOnlineRegions(server.getRSRpcServices());
        LOG.info("\t" + server.getServerName() + " regions: " + regions);
    }
    LOG.debug("Killing server " + shared);
    util.getMiniHBaseCluster().killRegionServer(shared);
    LOG.debug("Waiting on server " + shared + "to die");
    util.getMiniHBaseCluster().waitForRegionServerToStop(shared, TIMEOUT);
    // force reassign the regions from the table
    // LOG.debug("Forcing region reassignment from the killed server: " + shared);
    // for (HRegion region : online) {
    // util.getMiniHBaseCluster().getMaster().assign(region.getRegionName());
    // }
    System.out.println(" ====== Killed shared server ==== ");
    // make a second put that (1), isn't indexed, so we can be sure of the index state and (2)
    // ensures that our table is back up
    Put p2 = new Put(p.getRow());
    p2.add(nonIndexedFamily, Bytes.toBytes("Not indexed"), Bytes.toBytes("non-indexed value"));
    primary.put(p2);
    primary.flushCommits();
    // make sure that we actually failed the write once (within a 5 minute window)
    assertTrue("Didn't find an error writing to index table within timeout!", allowIndexTableToRecover.await(ONE_MIN * 5, TimeUnit.MILLISECONDS));
    // scan the index to make sure it has the one entry, (that had to be replayed from the WAL,
    // since we hard killed the server)
    Scan s = new Scan();
    HTable index = new HTable(conf, getIndexTableName());
    ResultScanner scanner = index.getScanner(s);
    int count = 0;
    for (Result r : scanner) {
        LOG.info("Got index table result:" + r);
        count++;
    }
    assertEquals("Got an unexpected found of index rows", 1, count);
    // cleanup
    scanner.close();
    index.close();
    primary.close();
    util.shutdownMiniCluster();
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) ArrayList(java.util.ArrayList) HTable(org.apache.hadoop.hbase.client.HTable) Result(org.apache.hadoop.hbase.client.Result) HRegionInfo(org.apache.hadoop.hbase.HRegionInfo) ResultScanner(org.apache.hadoop.hbase.client.ResultScanner) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) CountDownLatch(java.util.concurrent.CountDownLatch) Put(org.apache.hadoop.hbase.client.Put) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) HRegionServer(org.apache.hadoop.hbase.regionserver.HRegionServer) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) CoveredColumn(org.apache.phoenix.hbase.index.covered.example.CoveredColumn) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) CoveredColumnIndexSpecifierBuilder(org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder) ServerName(org.apache.hadoop.hbase.ServerName) Region(org.apache.hadoop.hbase.regionserver.Region) Scan(org.apache.hadoop.hbase.client.Scan) RegionServerThread(org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread) ColumnGroup(org.apache.phoenix.hbase.index.covered.example.ColumnGroup) Ignore(org.junit.Ignore) Test(org.junit.Test)
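
The key wiring this test depends on is the per-table recovery-failure policy, attached via addArbitraryConfigForTesting before the spec is built into the descriptor. A hedged sketch of that wiring, assuming the policy class (the test uses its own ReleaseLatchOnFailurePolicy; here it is passed in by name) is available on the region server's classpath; table, family, and qualifier names are illustrative.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.hbase.index.Indexer;
import org.apache.phoenix.hbase.index.covered.example.ColumnGroup;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumn;
import org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder;

public class RecoveryPolicyWiring {

    @SuppressWarnings("deprecation")
    public static HTableDescriptor indexedTableWithPolicy(String tableName, String indexTableName,
            String recoveryPolicyClassName) throws Exception {
        byte[] family = Bytes.toBytes("family");
        HTableDescriptor desc = new HTableDescriptor(tableName);
        desc.addFamily(new HColumnDescriptor(family));
        ColumnGroup group = new ColumnGroup(indexTableName);
        group.add(new CoveredColumn(family, Bytes.toBytes("qualifier")));
        CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
        builder.addIndexGroup(group);
        // per-table override for the policy the indexer applies when recovery writes fail
        builder.addArbitraryConfigForTesting(Indexer.RecoveryFailurePolicyKeyForTesting,
                recoveryPolicyClassName);
        builder.build(desc);
        return desc;
    }
}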

Aggregations

ColumnGroup (org.apache.phoenix.hbase.index.covered.example.ColumnGroup)4 CoveredColumn (org.apache.phoenix.hbase.index.covered.example.CoveredColumn)4 CoveredColumnIndexSpecifierBuilder (org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder)4 Test (org.junit.Test)4 Configuration (org.apache.hadoop.conf.Configuration)3 HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor)3 HBaseAdmin (org.apache.hadoop.hbase.client.HBaseAdmin)3 HBaseTestingUtility (org.apache.hadoop.hbase.HBaseTestingUtility)2 HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor)2 HRegionInfo (org.apache.hadoop.hbase.HRegionInfo)2 ServerName (org.apache.hadoop.hbase.ServerName)2 HTable (org.apache.hadoop.hbase.client.HTable)2 Put (org.apache.hadoop.hbase.client.Put)2 Result (org.apache.hadoop.hbase.client.Result)2 HRegionServer (org.apache.hadoop.hbase.regionserver.HRegionServer)2 NeedsOwnMiniClusterTest (org.apache.phoenix.end2end.NeedsOwnMiniClusterTest)2 IOException (java.io.IOException)1 ArrayList (java.util.ArrayList)1 CountDownLatch (java.util.concurrent.CountDownLatch)1 Path (org.apache.hadoop.fs.Path)1