Example 56 with HBaseTestingUtility

use of org.apache.hadoop.hbase.HBaseTestingUtility in project phoenix by apache.

the class SystemTablePermissionsIT method testSystemTablePermissions.

@Test
public void testSystemTablePermissions() throws Exception {
    testUtil = new HBaseTestingUtility();
    clientProperties = new Properties();
    Configuration conf = testUtil.getConfiguration();
    setCommonConfigProperties(conf);
    conf.set(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "false");
    clientProperties.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "false");
    testUtil.startMiniCluster(1);
    final UserGroupInformation superUser = UserGroupInformation.createUserForTesting(SUPERUSER, new String[0]);
    final UserGroupInformation regularUser = UserGroupInformation.createUserForTesting("user", new String[0]);
    superUser.doAs(new PrivilegedExceptionAction<Void>() {

        @Override
        public Void run() throws Exception {
            createTable();
            readTable();
            return null;
        }
    });
    Set<String> tables = getHBaseTables();
    assertTrue("HBase tables do not include expected Phoenix tables: " + tables, tables.containsAll(PHOENIX_SYSTEM_TABLES));
    // Grant permission to the system tables for the unprivileged user
    superUser.doAs(new PrivilegedExceptionAction<Void>() {

        @Override
        public Void run() throws Exception {
            try {
                grantPermissions(regularUser.getShortUserName(), PHOENIX_SYSTEM_TABLES, Action.EXEC, Action.READ);
                grantPermissions(regularUser.getShortUserName(), Collections.singleton(TABLE_NAME), Action.READ);
            } catch (Throwable e) {
                if (e instanceof Exception) {
                    throw (Exception) e;
                } else {
                    throw new Exception(e);
                }
            }
            return null;
        }
    });
    // Make sure that the unprivileged user can read the table
    regularUser.doAs(new PrivilegedExceptionAction<Void>() {

        @Override
        public Void run() throws Exception {
            // We expect this to not throw an error
            readTable();
            return null;
        }
    });
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) Properties(java.util.Properties) SQLException(java.sql.SQLException) IOException(java.io.IOException) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation) Test(org.junit.Test)
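
The grantPermissions helper called above is not shown on this page. Below is a minimal sketch of what such a helper might look like, using HBase's AccessControlClient; the Connection handling and method shape here are assumptions for illustration, not the actual Phoenix test code.

import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission.Action;

// Hypothetical helper: grant the given actions on each table to the named user.
// AccessControlClient.grant declares 'throws Throwable', which is why the test
// above catches Throwable and rethrows it wrapped in an Exception.
private void grantPermissions(String user, Set<String> tables, Action... actions) throws Throwable {
    try (Connection connection = ConnectionFactory.createConnection(testUtil.getConfiguration())) {
        for (String table : tables) {
            AccessControlClient.grant(connection, TableName.valueOf(table), user, null, null, actions);
        }
    }
}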

Example 57 with HBaseTestingUtility

use of org.apache.hadoop.hbase.HBaseTestingUtility in project phoenix by apache.

the class ScannerLeaseRenewalIT method setUp.

@BeforeClass
public static void setUp() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    hbaseTestUtil = new HBaseTestingUtility(conf);
    setUpConfigForMiniCluster(conf);
    conf.setLong(HConstants.HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD, LEASE_TIMEOUT_PERIOD_MILLIS);
    hbaseTestUtil.startMiniCluster();
    // establish url and quorum. Need to use PhoenixDriver and not PhoenixTestDriver
    zkQuorum = "localhost:" + hbaseTestUtil.getZkCluster().getClientPort();
    url = PhoenixRuntime.JDBC_PROTOCOL + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkQuorum;
    Properties driverProps = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    driverProps.put(RENEW_LEASE_THREAD_POOL_SIZE, Long.toString(4));
    // if this property is false, tests will fail with UnknownScannerException errors. 
    driverProps.put(RENEW_LEASE_ENABLED, Boolean.toString(true));
    driverProps.put(RENEW_LEASE_THRESHOLD_MILLISECONDS, Long.toString(LEASE_TIMEOUT_PERIOD_MILLIS / 2));
    driverProps.put(RUN_RENEW_LEASE_FREQUENCY_INTERVAL_MILLISECONDS, Long.toString(LEASE_TIMEOUT_PERIOD_MILLIS / 4));
    driverProps.put(HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD, Long.toString(LEASE_TIMEOUT_PERIOD_MILLIS));
    // use round robin iterator
    driverProps.put(FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.toString(false));
    DriverManager.registerDriver(PhoenixDriver.INSTANCE);
    try (PhoenixConnection phxConn = DriverManager.getConnection(url, driverProps).unwrap(PhoenixConnection.class)) {
        // run test methods only if we are at the hbase version that supports lease renewal.
        Assume.assumeTrue(phxConn.getQueryServices().supportsFeature(Feature.RENEW_LEASE));
    }
}
Also used : PhoenixConnection(org.apache.phoenix.jdbc.PhoenixConnection) Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) Properties(java.util.Properties) BeforeClass(org.junit.BeforeClass)
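
With renewal enabled and the scanner timeout lowered, a test method built on this setup typically opens a ResultSet, pauses for longer than the lease timeout, and then keeps reading; without renewal the later reads would fail with an UnknownScannerException. The sketch below illustrates that pattern under those assumptions; the table name and row expectations are illustrative, not taken from ScannerLeaseRenewalIT.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative only: assumes a table T with enough rows that the scan spans
// several client batches, plus the url/driverProps built in setUp() above.
try (Connection conn = DriverManager.getConnection(url, driverProps);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM T")) {
    rs.next(); // opens the server-side scanner
    // Sleep past the scanner lease timeout; the renewal thread configured
    // above should keep the lease alive in the background.
    Thread.sleep(2 * LEASE_TIMEOUT_PERIOD_MILLIS);
    while (rs.next()) {
        // without lease renewal, this loop would throw UnknownScannerException
    }
}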

Example 58 with HBaseTestingUtility

use of org.apache.hadoop.hbase.HBaseTestingUtility in project phoenix by apache.

the class FailForUnsupportedHBaseVersionsIT method testDoesNotStartRegionServerForUnsupportedCompressionAndVersion.

/**
     * Test that we correctly abort a RegionServer when we run tests with an unsupported HBase
     * version. For the test to be complete, it must run against both a version of HBase that
     * is not supported with WAL Compression and one that is. Currently, the default version
     * (0.94.4) is the unsupported one, so just running 'mvn test' exercises the failure path.
     * However, this test will not fail when running against a version of HBase with WAL
     * Compression enabled. Therefore, to fully test this functionality, we need to run the
     * test against both a supported and an unsupported version of HBase (for as long as we
     * want to support a version of HBase that doesn't support custom WAL codecs).
     * @throws Exception on failure
     */
@Test(timeout = 300000)
public void testDoesNotStartRegionServerForUnsupportedCompressionAndVersion() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    setUpConfigForMiniCluster(conf);
    IndexTestingUtils.setupConfig(conf);
    // enable WAL Compression
    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, true);
    // check the version to see if it isn't supported
    String version = VersionInfo.getVersion();
    boolean supported = false;
    if (Indexer.validateVersion(version, conf) == null) {
        supported = true;
    }
    // start the minicluster
    HBaseTestingUtility util = new HBaseTestingUtility(conf);
    util.startMiniCluster();
    try {
        // setup the primary table
        @SuppressWarnings("deprecation") HTableDescriptor desc = new HTableDescriptor("testDoesNotStartRegionServerForUnsupportedCompressionAndVersion");
        byte[] family = Bytes.toBytes("f");
        desc.addFamily(new HColumnDescriptor(family));
        // enable indexing to a non-existent index table
        String indexTableName = "INDEX_TABLE";
        ColumnGroup fam1 = new ColumnGroup(indexTableName);
        fam1.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
        CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
        builder.addIndexGroup(fam1);
        builder.build(desc);
        // get a reference to the regionserver, so we can ensure it aborts
        HRegionServer server = util.getMiniHBaseCluster().getRegionServer(0);
        // create the primary table
        HBaseAdmin admin = util.getHBaseAdmin();
        if (supported) {
            admin.createTable(desc);
            assertFalse("Hosting regeion server failed, even the HBase version (" + version + ") supports WAL Compression.", server.isAborted());
        } else {
            admin.createTableAsync(desc, null);
            // wait for the regionserver to abort; if it never does, the
            // @Test(timeout = 300000) above is what stops a hung run
            while (!server.isAborted()) {
                LOG.debug("Waiting on regionserver to abort...");
            }
        }
    } finally {
        // cleanup
        util.shutdownMiniCluster();
    }
}
Also used : HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) Configuration(org.apache.hadoop.conf.Configuration) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) HRegionServer(org.apache.hadoop.hbase.regionserver.HRegionServer) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) CoveredColumn(org.apache.phoenix.hbase.index.covered.example.CoveredColumn) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) CoveredColumnIndexSpecifierBuilder(org.apache.phoenix.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder) ColumnGroup(org.apache.phoenix.hbase.index.covered.example.ColumnGroup) Test(org.junit.Test) NeedsOwnMiniClusterTest(org.apache.phoenix.end2end.NeedsOwnMiniClusterTest)
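
One rough edge worth noting: the abort-wait loop above spins on server.isAborted() without sleeping or bounding itself, relying on the test's timeout annotation to stop a hung run. A gentler, bounded variant might look like the sketch below; the 60-second limit is an arbitrary choice for illustration, not a value from the original test.

// Poll with a sleep and an explicit deadline instead of busy-spinning.
long deadline = System.currentTimeMillis() + 60_000L; // arbitrary bound
while (!server.isAborted()) {
    if (System.currentTimeMillis() > deadline) {
        fail("RegionServer did not abort within 60s despite the unsupported version");
    }
    LOG.debug("Waiting on regionserver to abort...");
    Thread.sleep(500);
}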

Example 59 with HBaseTestingUtility

use of org.apache.hadoop.hbase.HBaseTestingUtility in project phoenix by apache.

the class MutableIndexReplicationIT method setupConfigsAndStartCluster.

private static void setupConfigsAndStartCluster() throws Exception {
    // cluster-1 lives at regular HBase home, so we don't need to change how phoenix handles
    // lookups
    //        conf1.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/1");
    // smaller log roll size to trigger more events
    setUpConfigForMiniCluster(conf1);
    conf1.setFloat("hbase.regionserver.logroll.multiplier", 0.0003f);
    conf1.setInt("replication.source.size.capacity", 10240);
    conf1.setLong("replication.source.sleepforretries", 100);
    conf1.setInt("hbase.regionserver.maxlogs", 10);
    conf1.setLong("hbase.master.logcleaner.ttl", 10);
    conf1.setInt("zookeeper.recovery.retry", 1);
    conf1.setInt("zookeeper.recovery.retry.intervalmill", 10);
    conf1.setBoolean(HConstants.REPLICATION_ENABLE_KEY, HConstants.REPLICATION_ENABLE_DEFAULT);
    conf1.setBoolean("dfs.support.append", true);
    conf1.setLong(HConstants.THREAD_WAKE_FREQUENCY, 100);
    conf1.setInt("replication.stats.thread.period.seconds", 5);
    conf1.setBoolean("hbase.tests.use.shortcircuit.reads", false);
    utility1 = new HBaseTestingUtility(conf1);
    utility1.startMiniZKCluster();
    MiniZooKeeperCluster miniZK = utility1.getZkCluster();
    // Have to reset conf1 in case zk cluster location different
    // than default
    conf1 = utility1.getConfiguration();
    zkw1 = new ZooKeeperWatcher(conf1, "cluster1", null, true);
    admin = new ReplicationAdmin(conf1);
    LOG.info("Setup first Zk");
    // Base conf2 on conf1 so it gets the right zk cluster, and general cluster configs
    conf2 = HBaseConfiguration.create(conf1);
    conf2.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/2");
    conf2.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 6);
    conf2.setBoolean(HConstants.REPLICATION_ENABLE_KEY, HConstants.REPLICATION_ENABLE_DEFAULT);
    conf2.setBoolean("dfs.support.append", true);
    conf2.setBoolean("hbase.tests.use.shortcircuit.reads", false);
    utility2 = new HBaseTestingUtility(conf2);
    utility2.setZkCluster(miniZK);
    zkw2 = new ZooKeeperWatcher(conf2, "cluster2", null, true);
    // replicate from cluster 1 -> cluster 2, but not back again
    admin.addPeer("1", utility2.getClusterKey());
    LOG.info("Setup second Zk");
    utility1.startMiniCluster(2);
    utility2.startMiniCluster(2);
}
Also used : HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) ZooKeeperWatcher(org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher) ReplicationAdmin(org.apache.hadoop.hbase.client.replication.ReplicationAdmin) MiniZooKeeperCluster(org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster)
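
After this setup, a replication test typically writes a row on cluster 1 and polls cluster 2 until the edit arrives. Below is a minimal sketch of that verification step, assuming a replication-scoped table 'T' with column family 'f' already exists on both clusters; the table name, family, and timings are illustrative, not from MutableIndexReplicationIT.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

byte[] row = Bytes.toBytes("r1");
byte[] family = Bytes.toBytes("f");
// Write on the source cluster.
try (Connection conn1 = ConnectionFactory.createConnection(utility1.getConfiguration());
        Table source = conn1.getTable(TableName.valueOf("T"))) {
    Put put = new Put(row);
    put.addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v"));
    source.put(put);
}
// Poll the sink cluster until the edit replicates (or give up after ~25s).
try (Connection conn2 = ConnectionFactory.createConnection(utility2.getConfiguration());
        Table sink = conn2.getTable(TableName.valueOf("T"))) {
    boolean found = false;
    for (int i = 0; i < 50 && !found; i++) {
        found = !sink.get(new Get(row)).isEmpty();
        if (!found) Thread.sleep(500);
    }
    assertTrue("Row was not replicated to cluster 2", found);
}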

Example 60 with HBaseTestingUtility

use of org.apache.hadoop.hbase.HBaseTestingUtility in project phoenix by apache.

the class BaseHivePhoenixStoreIT method setup.

public static void setup(HiveTestUtil.MiniClusterType clusterType) throws Exception {
    String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
    if (null != hadoopConfDir && !hadoopConfDir.isEmpty()) {
        LOG.warn("WARNING: HADOOP_CONF_DIR is set in the environment which may cause " + "issues with test execution via MiniDFSCluster");
    }
    hbaseTestUtil = new HBaseTestingUtility();
    conf = hbaseTestUtil.getConfiguration();
    setUpConfigForMiniCluster(conf);
    conf.set(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
    hiveOutputDir = new Path(hbaseTestUtil.getDataTestDir(), "hive_output").toString();
    File outputDir = new File(hiveOutputDir);
    outputDir.mkdirs();
    hiveLogDir = new Path(hbaseTestUtil.getDataTestDir(), "hive_log").toString();
    File logDir = new File(hiveLogDir);
    logDir.mkdirs();
    // Setup Hive mini Server
    Path testRoot = hbaseTestUtil.getDataTestDir();
    System.setProperty("test.tmp.dir", testRoot.toString());
    System.setProperty("test.warehouse.dir", (new Path(testRoot, "warehouse")).toString());
    try {
        qt = new HiveTestUtil(hiveOutputDir, hiveLogDir, clusterType, null);
    } catch (Exception e) {
        LOG.error("Unexpected exception in setup", e);
        fail("Unexpected exception in setup");
    }
    // Start HBase cluster
    hbaseCluster = hbaseTestUtil.startMiniCluster(3);
    MiniDFSCluster x = hbaseTestUtil.getDFSCluster();
    Class.forName(PhoenixDriver.class.getName());
    zkQuorum = "localhost:" + hbaseTestUtil.getZkCluster().getClientPort();
    Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
    props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
    conn = DriverManager.getConnection(PhoenixRuntime.JDBC_PROTOCOL + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkQuorum, props);
    // Setup Hive Output Folder
    Statement stmt = conn.createStatement();
    stmt.execute("create table t(a integer primary key,b varchar)");
}
Also used : Path(org.apache.hadoop.fs.Path) MiniDFSCluster(org.apache.hadoop.hdfs.MiniDFSCluster) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) PhoenixDriver(org.apache.phoenix.jdbc.PhoenixDriver) Properties(java.util.Properties) File(java.io.File) IOException(java.io.IOException)
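
A setup like this usually has a matching @AfterClass teardown that closes the Phoenix connection and stops the mini clusters. The sketch below is an assumed counterpart, not code from BaseHivePhoenixStoreIT; in particular, the HiveTestUtil shutdown call is a guess at that class's API.

@AfterClass
public static void tearDown() throws Exception {
    if (conn != null) {
        conn.close(); // release the Phoenix JDBC connection first
    }
    if (qt != null) {
        qt.shutdown(); // assumed HiveTestUtil cleanup hook
    }
    if (hbaseTestUtil != null) {
        hbaseTestUtil.shutdownMiniCluster(); // stops the HBase, ZK, and DFS minis
    }
}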

Aggregations

HBaseTestingUtility (org.apache.hadoop.hbase.HBaseTestingUtility): 136
Configuration (org.apache.hadoop.conf.Configuration): 50
BeforeClass (org.junit.BeforeClass): 49
Test (org.junit.Test): 42
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration): 35
Path (org.apache.hadoop.fs.Path): 29
Admin (org.apache.hadoop.hbase.client.Admin): 24
FileSystem (org.apache.hadoop.fs.FileSystem): 22
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 20
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 18
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo): 16
Before (org.junit.Before): 14
MiniHBaseCluster (org.apache.hadoop.hbase.MiniHBaseCluster): 11
ZooKeeperWatcher (org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher): 11
MiniZooKeeperCluster (org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster): 10
Table (org.apache.hadoop.hbase.client.Table): 8
HFileSystem (org.apache.hadoop.hbase.fs.HFileSystem): 8
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 8
FileStatus (org.apache.hadoop.fs.FileStatus): 7
Result (org.apache.hadoop.hbase.client.Result): 7