
Example 21 with TableNotFoundException

Use of org.apache.phoenix.schema.TableNotFoundException in project phoenix by apache.

From the class ConnectionQueryServicesImpl, method getTable:

private PTable getTable(PName tenantId, String fullTableName, long timestamp) throws SQLException {
    PTable table;
    try {
        PMetaData metadata = latestMetaData;
        throwConnectionClosedIfNullMetaData();
        table = metadata.getTableRef(new PTableKey(tenantId, fullTableName)).getTable();
        if (table.getTimeStamp() >= timestamp) {
            // Cached table is not older than the requested timestamp, so it cannot serve this
            // client; treat it as not found and let the catch block below resolve it from the server.
            throw new TableNotFoundException(table.getSchemaName().getString(), table.getTableName().getString());
        }
    } catch (TableNotFoundException e) {
        byte[] schemaName = Bytes.toBytes(SchemaUtil.getSchemaNameFromFullName(fullTableName));
        byte[] tableName = Bytes.toBytes(SchemaUtil.getTableNameFromFullName(fullTableName));
        MetaDataMutationResult result = this.getTable(tenantId, schemaName, tableName, HConstants.LATEST_TIMESTAMP, timestamp);
        table = result.getTable();
        if (table == null) {
            throw e;
        }
    }
    return table;
}
Also used : TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) PMetaData(org.apache.phoenix.schema.PMetaData) PTableKey(org.apache.phoenix.schema.PTableKey) MetaDataMutationResult(org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult) PTable(org.apache.phoenix.schema.PTable)
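The method above follows a cache-first lookup: a cached table whose timestamp is at or beyond the requested timestamp is treated the same as a miss, and the server is consulted before the TableNotFoundException is rethrown. Below is a minimal, self-contained sketch of that pattern; the Entry, cache, and fetchFromServer names are hypothetical and not Phoenix APIs.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the cache-then-server resolution used by getTable above.
class TimestampedCacheLookup {

    static final class Entry {
        final String name;
        final long timestamp;
        Entry(String name, long timestamp) { this.name = name; this.timestamp = timestamp; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    // Stand-in for the metadata RPC; returns null when the table does not exist on the server.
    private Entry fetchFromServer(String fullTableName, long clientTimestamp) {
        return null;
    }

    Entry resolve(String fullTableName, long clientTimestamp) throws Exception {
        Entry cached = cache.get(fullTableName);
        // A cached entry that is not older than the client timestamp cannot be used,
        // so it is treated exactly like a cache miss.
        if (cached != null && cached.timestamp < clientTimestamp) {
            return cached;
        }
        Entry fromServer = fetchFromServer(fullTableName, clientTimestamp);
        if (fromServer == null) {
            throw new Exception("Table not found: " + fullTableName);
        }
        cache.put(fullTableName, fromServer);
        return fromServer;
    }
}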

Example 22 with TableNotFoundException

Use of org.apache.phoenix.schema.TableNotFoundException in project phoenix by apache.

From the class ConnectionQueryServicesImpl, method addTable:

@Override
public void addTable(PTable table, long resolvedTime) throws SQLException {
    synchronized (latestMetaDataLock) {
        try {
            throwConnectionClosedIfNullMetaData();
            // If existing table isn't older than new table, don't replace
            // If a client opens a connection at an earlier timestamp, this can happen
            PTable existingTable = latestMetaData.getTableRef(new PTableKey(table.getTenantId(), table.getName().getString())).getTable();
            if (existingTable.getTimeStamp() >= table.getTimeStamp()) {
                return;
            }
        } catch (TableNotFoundException e) {
            // Table is not in the cache yet; fall through and add it below.
        }
        latestMetaData.addTable(table, resolvedTime);
        latestMetaDataLock.notifyAll();
    }
}
Also used : TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) PTableKey(org.apache.phoenix.schema.PTableKey) PTable(org.apache.phoenix.schema.PTable)
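addTable only replaces the cached PTable when the incoming definition is strictly newer, and it wakes any threads waiting on latestMetaDataLock once the cache has been updated. A standalone sketch of that "replace only if newer, then notify" idiom follows; the MetadataCache class and its fields are hypothetical, not part of Phoenix.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the guarded cache update used by addTable above.
class MetadataCache {

    static final class TableEntry {
        final String name;
        final long timestamp;
        TableEntry(String name, long timestamp) { this.name = name; this.timestamp = timestamp; }
    }

    private final Object lock = new Object();
    private final Map<String, TableEntry> tables = new HashMap<>();

    void addTable(TableEntry incoming) {
        synchronized (lock) {
            TableEntry existing = tables.get(incoming.name);
            // A client that connected at an earlier timestamp may hand us an older
            // definition; never let it overwrite a newer cached one.
            if (existing != null && existing.timestamp >= incoming.timestamp) {
                return;
            }
            tables.put(incoming.name, incoming);
            // Wake up any thread blocked in lock.wait() until the cache catches up.
            lock.notifyAll();
        }
    }
}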

Example 23 with TableNotFoundException

Use of org.apache.phoenix.schema.TableNotFoundException in project phoenix by apache.

From the class ConnectionQueryServicesImpl, method getTableRegionLocation:

@Override
public HRegionLocation getTableRegionLocation(byte[] tableName, byte[] row) throws SQLException {
    /*
     * Use HConnection.getRegionLocation, as it uses the cache in HConnection, to get the region
     * to which the specified row belongs.
     */
    int retryCount = 0, maxRetryCount = 1;
    boolean reload = false;
    while (true) {
        try {
            return connection.getRegionLocation(TableName.valueOf(tableName), row, reload);
        } catch (org.apache.hadoop.hbase.TableNotFoundException e) {
            String fullName = Bytes.toString(tableName);
            throw new TableNotFoundException(SchemaUtil.getSchemaNameFromFullName(fullName), SchemaUtil.getTableNameFromFullName(fullName));
        } catch (IOException e) {
            if (retryCount++ < maxRetryCount) {
                // One retry, in case split occurs while navigating
                reload = true;
                continue;
            }
            throw new SQLExceptionInfo.Builder(SQLExceptionCode.GET_TABLE_REGIONS_FAIL).setRootCause(e).build().buildException();
        }
    }
}
Also used : TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) KeyValueBuilder(org.apache.phoenix.hbase.index.util.KeyValueBuilder) NonTxIndexBuilder(org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder) IOException(java.io.IOException) PhoenixIOException(org.apache.phoenix.exception.PhoenixIOException) PTinyint(org.apache.phoenix.schema.types.PTinyint) PUnsignedTinyint(org.apache.phoenix.schema.types.PUnsignedTinyint) MultiRowMutationEndpoint(org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint)
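getTableRegionLocation retries the location lookup exactly once, this time forcing a reload of the HConnection region cache, on the assumption that the IOException was caused by a region split; HBase's own TableNotFoundException is translated into Phoenix's. A generic sketch of that retry-once idiom follows; the Lookup interface is a placeholder standing in for the HConnection call.

import java.io.IOException;

// Hypothetical sketch of the bounded "retry once with a cache reload" loop used above.
class RetryOnceLookup {

    interface Lookup<T> {
        T run(boolean reloadCache) throws IOException;
    }

    static <T> T withOneRetry(Lookup<T> lookup) throws IOException {
        int retryCount = 0;
        final int maxRetryCount = 1;
        boolean reload = false;
        while (true) {
            try {
                return lookup.run(reload);
            } catch (IOException e) {
                if (retryCount++ < maxRetryCount) {
                    // Retry once with the cache reloaded, e.g. after a region split.
                    reload = true;
                    continue;
                }
                // Out of retries; surface the failure to the caller.
                throw e;
            }
        }
    }
}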

Example 24 with TableNotFoundException

Use of org.apache.phoenix.schema.TableNotFoundException in project phoenix by apache.

From the class UpdateCacheAcrossDifferentClientsIT, method testUpdateCacheFrequencyWithAddAndDropTable:

@Test
public void testUpdateCacheFrequencyWithAddAndDropTable() throws Exception {
    // Create connections 1 and 2
    // Must update config before starting server
    Properties longRunningProps = new Properties();
    longRunningProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
    longRunningProps.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
    Connection conn1 = DriverManager.getConnection(getUrl(), longRunningProps);
    String url2 = getUrl() + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + "LongRunningQueries";
    Connection conn2 = DriverManager.getConnection(url2, longRunningProps);
    conn1.setAutoCommit(true);
    conn2.setAutoCommit(true);
    String tableName = generateUniqueName();
    String tableCreateQuery = "create table " + tableName + " (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)" + " UPDATE_CACHE_FREQUENCY=1000000000";
    String dropTableQuery = "DROP table " + tableName;
    try {
        conn1.createStatement().execute(tableCreateQuery);
        conn1.createStatement().execute("upsert into " + tableName + " values ('row1', 'value1', 'key1')");
        conn1.createStatement().execute("upsert into " + tableName + " values ('row2', 'value2', 'key2')");
        conn1.commit();
        ResultSet rs = conn1.createStatement().executeQuery("select * from " + tableName);
        assertTrue(rs.next());
        assertTrue(rs.next());
        rs = conn2.createStatement().executeQuery("select * from " + tableName);
        assertTrue(rs.next());
        assertTrue(rs.next());
        // Drop table from conn1
        conn1.createStatement().execute(dropTableQuery);
        try {
            rs = conn1.createStatement().executeQuery("select * from " + tableName);
            fail("Should throw TableNotFoundException after dropping table");
        } catch (TableNotFoundException e) {
            // Expected
        }
        try {
            rs = conn2.createStatement().executeQuery("select * from " + tableName);
            fail("Should throw TableNotFoundException after dropping table");
        } catch (TableNotFoundException e) {
            // Expected
        }
    } finally {
        conn1.close();
        conn2.close();
    }
}
Also used : TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) Connection(java.sql.Connection) ResultSet(java.sql.ResultSet) Properties(java.util.Properties) Test(org.junit.Test)
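The try/fail/catch blocks above are the classic JUnit 4 way to assert that the query now fails. If the test suite is on JUnit 4.13 or later (an assumption; the version in use is not shown here), the same check can be written more compactly with Assert.assertThrows:

import static org.junit.Assert.assertThrows;

import java.sql.Connection;
import org.apache.phoenix.schema.TableNotFoundException;

// Equivalent expected-exception check, assuming JUnit 4.13+ is on the classpath.
class DropTableAssertions {
    static void assertTableGone(Connection conn, String tableName) {
        assertThrows(TableNotFoundException.class,
            () -> conn.createStatement().executeQuery("select * from " + tableName));
    }
}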

Example 25 with TableNotFoundException

Use of org.apache.phoenix.schema.TableNotFoundException in project phoenix by apache.

From the class QueryDatabaseMetaDataIT, method testCreateViewOnExistingTable:

@SuppressWarnings("deprecation")
@Test
public void testCreateViewOnExistingTable() throws Exception {
    PhoenixConnection pconn = DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES)).unwrap(PhoenixConnection.class);
    String tableName = MDTEST_NAME;
    String schemaName = MDTEST_SCHEMA_NAME;
    byte[] cfB = Bytes.toBytes(SchemaUtil.normalizeIdentifier("b"));
    byte[] cfC = Bytes.toBytes("c");
    byte[][] familyNames = new byte[][] { cfB, cfC };
    byte[] htableName = SchemaUtil.getTableNameAsBytes(schemaName, tableName);
    HBaseAdmin admin = pconn.getQueryServices().getAdmin();
    try {
        admin.disableTable(htableName);
        admin.deleteTable(htableName);
    } catch (org.apache.hadoop.hbase.TableNotFoundException e) {
        // Ignore: the HBase table may not exist yet, which is fine before we (re)create it.
    }
    HTableDescriptor descriptor = new HTableDescriptor(htableName);
    for (byte[] familyName : familyNames) {
        HColumnDescriptor columnDescriptor = new HColumnDescriptor(familyName);
        descriptor.addFamily(columnDescriptor);
    }
    admin.createTable(descriptor);
    admin.close();
    long ts = nextTimestamp();
    Properties props = new Properties();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 5));
    Connection conn1 = DriverManager.getConnection(getUrl(), props);
    String createStmt = "create view bogusTable" + "   (id char(1) not null primary key,\n" + "    a.col1 integer,\n" + "    d.col2 bigint)\n";
    try {
        conn1.createStatement().execute(createStmt);
        fail();
    } catch (TableNotFoundException e) {
    // expected to fail b/c table doesn't exist
    } catch (ReadOnlyTableException e) {
    // expected to fail b/c table doesn't exist
    }
    createStmt = "create view " + MDTEST_NAME + "   (id char(1) not null primary key,\n" + "    a.col1 integer,\n" + "    b.col2 bigint)\n";
    try {
        conn1.createStatement().execute(createStmt);
        fail();
    } catch (ReadOnlyTableException e) {
    // expected to fail b/c cf a doesn't exist
    }
    createStmt = "create view " + MDTEST_NAME + "   (id char(1) not null primary key,\n" + "    b.col1 integer,\n" + "    c.col2 bigint)\n";
    try {
        conn1.createStatement().execute(createStmt);
        fail();
    } catch (ReadOnlyTableException e) {
    // expected to fail b/c cf C doesn't exist (case issue)
    }
    createStmt = "create view " + MDTEST_NAME + "   (id char(1) not null primary key,\n" + "    b.col1 integer,\n" + "    \"c\".col2 bigint) IMMUTABLE_ROWS=true \n";
    // should be ok now
    conn1.createStatement().execute(createStmt);
    conn1.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 6));
    PhoenixConnection conn2 = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class);
    ResultSet rs = conn2.getMetaData().getTables(null, null, MDTEST_NAME, null);
    assertTrue(rs.next());
    assertEquals(ViewType.MAPPED.name(), rs.getString(PhoenixDatabaseMetaData.VIEW_TYPE));
    assertFalse(rs.next());
    String deleteStmt = "DELETE FROM " + MDTEST_NAME;
    PreparedStatement ps = conn2.prepareStatement(deleteStmt);
    try {
        ps.execute();
        fail();
    } catch (ReadOnlyTableException e) {
    // expected to fail b/c table is read-only
    }
    String upsert = "UPSERT INTO " + MDTEST_NAME + "(id,col1,col2) VALUES(?,?,?)";
    ps = conn2.prepareStatement(upsert);
    try {
        ps.setString(1, Integer.toString(0));
        ps.setInt(2, 1);
        ps.setInt(3, 2);
        ps.execute();
        fail();
    } catch (ReadOnlyTableException e) {
    // expected to fail b/c table is read-only
    }
    HTableInterface htable = conn2.getQueryServices().getTable(SchemaUtil.getTableNameAsBytes(MDTEST_SCHEMA_NAME, MDTEST_NAME));
    Put put = new Put(Bytes.toBytes("0"));
    put.add(cfB, Bytes.toBytes("COL1"), ts + 6, PInteger.INSTANCE.toBytes(1));
    put.add(cfC, Bytes.toBytes("COL2"), ts + 6, PLong.INSTANCE.toBytes(2));
    htable.put(put);
    conn2.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 10));
    Connection conn7 = DriverManager.getConnection(getUrl(), props);
    // Should be ok b/c we've marked the view with IMMUTABLE_ROWS=true
    conn7.createStatement().execute("CREATE INDEX idx ON " + MDTEST_NAME + "(B.COL1)");
    String select = "SELECT col1 FROM " + MDTEST_NAME + " WHERE col2=?";
    ps = conn7.prepareStatement(select);
    ps.setInt(1, 2);
    rs = ps.executeQuery();
    assertTrue(rs.next());
    assertEquals(1, rs.getInt(1));
    assertFalse(rs.next());
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 12));
    Connection conn75 = DriverManager.getConnection(getUrl(), props);
    String dropTable = "DROP TABLE " + MDTEST_NAME;
    ps = conn75.prepareStatement(dropTable);
    try {
        ps.execute();
        fail();
    } catch (TableNotFoundException e) {
    // expected to fail b/c it is a view
    }
    String dropView = "DROP VIEW " + MDTEST_NAME;
    ps = conn75.prepareStatement(dropView);
    ps.execute();
    conn75.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 15));
    Connection conn8 = DriverManager.getConnection(getUrl(), props);
    createStmt = "create view " + MDTEST_NAME + "   (id char(1) not null primary key,\n" + "    b.col1 integer,\n" + "    \"c\".col2 bigint) IMMUTABLE_ROWS=true\n";
    // should be ok to create a view with IMMUTABLE_ROWS = true
    conn8.createStatement().execute(createStmt);
    conn8.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 20));
    Connection conn9 = DriverManager.getConnection(getUrl(), props);
    conn9.createStatement().execute("CREATE INDEX idx ON " + MDTEST_NAME + "(B.COL1)");
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 30));
    Connection conn91 = DriverManager.getConnection(getUrl(), props);
    ps = conn91.prepareStatement(dropView);
    ps.execute();
    conn91.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 35));
    Connection conn92 = DriverManager.getConnection(getUrl(), props);
    createStmt = "create view " + MDTEST_NAME + "   (id char(1) not null primary key,\n" + "    b.col1 integer,\n" + "    \"c\".col2 bigint) as\n" + " select * from " + MDTEST_NAME + " where b.col1 = 1";
    conn92.createStatement().execute(createStmt);
    conn92.close();
    put = new Put(Bytes.toBytes("1"));
    put.add(cfB, Bytes.toBytes("COL1"), ts + 39, PInteger.INSTANCE.toBytes(3));
    put.add(cfC, Bytes.toBytes("COL2"), ts + 39, PLong.INSTANCE.toBytes(4));
    htable.put(put);
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 40));
    Connection conn92a = DriverManager.getConnection(getUrl(), props);
    rs = conn92a.createStatement().executeQuery("select count(*) from " + MDTEST_NAME);
    assertTrue(rs.next());
    assertEquals(1, rs.getInt(1));
    conn92a.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 45));
    Connection conn93 = DriverManager.getConnection(getUrl(), props);
    try {
        String alterView = "alter view " + MDTEST_NAME + " drop column b.col1";
        conn93.createStatement().execute(alterView);
        fail();
    } catch (SQLException e) {
        assertEquals(SQLExceptionCode.CANNOT_MUTATE_TABLE.getErrorCode(), e.getErrorCode());
    }
    conn93.close();
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 50));
    Connection conn94 = DriverManager.getConnection(getUrl(), props);
    String alterView = "alter view " + MDTEST_NAME + " drop column \"c\".col2";
    conn94.createStatement().execute(alterView);
    conn94.close();
}
Also used : PhoenixConnection(org.apache.phoenix.jdbc.PhoenixConnection) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) SQLException(java.sql.SQLException) Connection(java.sql.Connection) PhoenixConnection(org.apache.phoenix.jdbc.PhoenixConnection) PreparedStatement(java.sql.PreparedStatement) Properties(java.util.Properties) HTableInterface(org.apache.hadoop.hbase.client.HTableInterface) Put(org.apache.hadoop.hbase.client.Put) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) ReadOnlyTableException(org.apache.phoenix.schema.ReadOnlyTableException) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) TableNotFoundException(org.apache.phoenix.schema.TableNotFoundException) ResultSet(java.sql.ResultSet) Test(org.junit.Test)
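Stripped of its negative cases, the test above boils down to one recipe: create the HBase table with the column families the view will reference, then map a Phoenix view onto it, quoting any lowercase family name and declaring IMMUTABLE_ROWS=true so that a secondary index can be created later. A trimmed sketch of just that happy path follows; the JDBC URL and EXISTING_HTABLE name are illustrative placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

// Minimal happy-path version of the mapped-view creation exercised above.
// The URL and the EXISTING_HTABLE name are illustrative; an HBase table with
// column families "B" (uppercase) and "c" (lowercase) is assumed to exist already.
class MappedViewExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // Unquoted b normalizes to family B; the lowercase family c must be quoted.
            conn.createStatement().execute(
                "CREATE VIEW EXISTING_HTABLE" +
                " (id CHAR(1) NOT NULL PRIMARY KEY," +
                "  b.col1 INTEGER," +
                "  \"c\".col2 BIGINT) IMMUTABLE_ROWS=true");
            // IMMUTABLE_ROWS=true is what makes an index on the mapped view legal.
            conn.createStatement().execute("CREATE INDEX idx ON EXISTING_HTABLE (b.col1)");
        }
    }
}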

Aggregations

TableNotFoundException (org.apache.phoenix.schema.TableNotFoundException): 32 usages
Connection (java.sql.Connection): 14 usages
PhoenixConnection (org.apache.phoenix.jdbc.PhoenixConnection): 13 usages
PTable (org.apache.phoenix.schema.PTable): 13 usages
Properties (java.util.Properties): 11 usages
Test (org.junit.Test): 11 usages
PTableKey (org.apache.phoenix.schema.PTableKey): 10 usages
MetaDataMutationResult (org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult): 8 usages
SQLException (java.sql.SQLException): 7 usages
ResultSet (java.sql.ResultSet): 5 usages
PColumn (org.apache.phoenix.schema.PColumn): 5 usages
PreparedStatement (java.sql.PreparedStatement): 4 usages
HBaseAdmin (org.apache.hadoop.hbase.client.HBaseAdmin): 4 usages
HTableInterface (org.apache.hadoop.hbase.client.HTableInterface): 4 usages
Scan (org.apache.hadoop.hbase.client.Scan): 4 usages
PhoenixIndexBuilder (org.apache.phoenix.index.PhoenixIndexBuilder): 4 usages
MetaDataClient (org.apache.phoenix.schema.MetaDataClient): 4 usages
ThreadFactoryBuilder (com.google.common.util.concurrent.ThreadFactoryBuilder): 3 usages
IOException (java.io.IOException): 3 usages
PhoenixIOException (org.apache.phoenix.exception.PhoenixIOException): 3 usages