Search in sources:

Example 1 with TableNotExistException

Use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

From the class HiveCatalogITCase, method testTableWithPrimaryKey:

@Test
public void testTableWithPrimaryKey() {
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
    tableEnv.getConfig().getConfiguration().setInteger(TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM, 1);
    tableEnv.registerCatalog("catalog1", hiveCatalog);
    tableEnv.useCatalog("catalog1");
    final String createTable =
            "CREATE TABLE pk_src (\n"
                    + "  uuid varchar(40) not null,\n"
                    + "  price DECIMAL(10, 2),\n"
                    + "  currency STRING,\n"
                    + "  ts6 TIMESTAMP(6),\n"
                    + "  ts AS CAST(ts6 AS TIMESTAMP(3)),\n"
                    + "  WATERMARK FOR ts AS ts,\n"
                    + "  constraint ct1 PRIMARY KEY(uuid) NOT ENFORCED)\n"
                    + "  WITH (\n"
                    + "    'connector.type' = 'filesystem',"
                    + "    'connector.path' = 'file://fakePath',"
                    + "    'format.type' = 'csv')";
    tableEnv.executeSql(createTable);
    TableSchema tableSchema = tableEnv.getCatalog(tableEnv.getCurrentCatalog()).map(catalog -> {
        try {
            final ObjectPath tablePath = ObjectPath.fromString(catalog.getDefaultDatabase() + '.' + "pk_src");
            return catalog.getTable(tablePath).getSchema();
        } catch (TableNotExistException e) {
            return null;
        }
    }).orElse(null);
    assertThat(tableSchema).isNotNull();
    assertThat(tableSchema.getPrimaryKey()).hasValue(UniqueConstraint.primaryKey("ct1", Collections.singletonList("uuid")));
    tableEnv.executeSql("DROP TABLE pk_src");
}
Also used: Arrays(java.util.Arrays) Schema(org.apache.flink.table.api.Schema) FileUtils(org.apache.flink.util.FileUtils) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) CatalogTable(org.apache.flink.table.catalog.CatalogTable) FLINK_PROPERTY_PREFIX(org.apache.flink.table.catalog.CatalogPropertiesUtil.FLINK_PROPERTY_PREFIX) Future(java.util.concurrent.Future) Map(java.util.Map) URI(java.net.URI) Path(java.nio.file.Path) TableEnvironment(org.apache.flink.table.api.TableEnvironment) AfterClass(org.junit.AfterClass) Expressions.$(org.apache.flink.table.api.Expressions.$) TableSchema(org.apache.flink.table.api.TableSchema) Table(org.apache.flink.table.api.Table) TestCollectionTableFactory(org.apache.flink.table.planner.factories.utils.TestCollectionTableFactory) Executors(java.util.concurrent.Executors) List(java.util.List) FactoryUtil(org.apache.flink.table.factories.FactoryUtil) ManagedTableFactory(org.apache.flink.table.factories.ManagedTableFactory) Row(org.apache.flink.types.Row) UniqueConstraint(org.apache.flink.table.api.constraints.UniqueConstraint) ObjectIdentifier(org.apache.flink.table.catalog.ObjectIdentifier) BeforeClass(org.junit.BeforeClass) ByteArrayOutputStream(java.io.ByteArrayOutputStream) TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM(org.apache.flink.table.api.config.ExecutionConfigOptions.TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM) HashMap(java.util.HashMap) Callable(java.util.concurrent.Callable) ObjectPath(org.apache.flink.table.catalog.ObjectPath) AtomicReference(java.util.concurrent.atomic.AtomicReference) ArrayList(java.util.ArrayList) CatalogView(org.apache.flink.table.catalog.CatalogView) Catalog(org.apache.flink.table.catalog.Catalog) TestManagedTableFactory(org.apache.flink.table.factories.TestManagedTableFactory) ExecutorService(java.util.concurrent.ExecutorService) AbstractDataType(org.apache.flink.table.types.AbstractDataType) CatalogTableImpl(org.apache.flink.table.catalog.CatalogTableImpl) PrintStream(java.io.PrintStream) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) Files(java.nio.file.Files) Configuration(org.apache.flink.configuration.Configuration) DataTypes(org.apache.flink.table.api.DataTypes) Test(org.junit.Test) CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) CollectionUtil(org.apache.flink.util.CollectionUtil) File(java.io.File) TimeUnit(java.util.concurrent.TimeUnit) CONNECTOR(org.apache.flink.table.factories.FactoryUtil.CONNECTOR) Rule(org.junit.Rule) CoreOptions(org.apache.flink.configuration.CoreOptions) Paths(java.nio.file.Paths) SqlDialect(org.apache.flink.table.api.SqlDialect) EnvironmentSettings(org.apache.flink.table.api.EnvironmentSettings) BufferedReader(java.io.BufferedReader) FileReader(java.io.FileReader) Comparator(java.util.Comparator) Collections(java.util.Collections) TemporaryFolder(org.junit.rules.TemporaryFolder)
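A detail worth noting in Example 1: a lambda passed to Optional.map cannot throw a checked exception, so the test catches TableNotExistException inside the lambda, returns null, and lets map() turn that null into an empty Optional before orElse(null) collapses both absence cases. A minimal, self-contained sketch of the same pattern, where NotFoundException and lookup are hypothetical stand-ins for TableNotExistException and Catalog.getTable:

```java
import java.util.Optional;

public class OptionalCheckedDemo {

    // Hypothetical checked exception standing in for TableNotExistException.
    static class NotFoundException extends Exception {}

    // Hypothetical lookup that may throw, standing in for catalog.getTable().
    static String lookup(String table) throws NotFoundException {
        if ("pk_src".equals(table)) {
            return "schema-of-pk_src";
        }
        throw new NotFoundException();
    }

    // Mirrors the test: catch the checked exception inside the lambda and
    // return null; Optional.map wraps a null mapper result back into an
    // empty Optional, which orElse(null) then converts to a plain null.
    static String schemaOrNull(Optional<String> catalogName, String table) {
        return catalogName.map(c -> {
            try {
                return lookup(table);
            } catch (NotFoundException e) {
                return null;
            }
        }).orElse(null);
    }

    public static void main(String[] args) {
        System.out.println(schemaOrNull(Optional.of("catalog1"), "pk_src"));  // schema-of-pk_src
        System.out.println(schemaOrNull(Optional.of("catalog1"), "missing")); // null
    }
}
```

The same shape appears whenever a checked-exception API has to be used inside Optional or Stream pipelines; the alternative is wrapping in an unchecked exception, which the test deliberately avoids.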

Example 2 with TableNotExistException

Use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

From the class HiveCatalog, method getPartitionColumnStatistics:

@Override
public CatalogColumnStatistics getPartitionColumnStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException {
    try {
        Partition partition = getHivePartition(tablePath, partitionSpec);
        Table hiveTable = getHiveTable(tablePath);
        String partName = getEscapedPartitionName(tablePath, partitionSpec, hiveTable);
        List<String> partNames = new ArrayList<>();
        partNames.add(partName);
        Map<String, List<ColumnStatisticsObj>> partitionColumnStatistics = client.getPartitionColumnStatistics(partition.getDbName(), partition.getTableName(), partNames, getFieldNames(partition.getSd().getCols()));
        List<ColumnStatisticsObj> columnStatisticsObjs = partitionColumnStatistics.get(partName);
        if (columnStatisticsObjs != null && !columnStatisticsObjs.isEmpty()) {
            return new CatalogColumnStatistics(HiveStatsUtil.createCatalogColumnStats(columnStatisticsObjs, hiveVersion));
        } else {
            return CatalogColumnStatistics.UNKNOWN;
        }
    } catch (TableNotExistException | PartitionSpecInvalidException e) {
        throw new PartitionNotExistException(getName(), tablePath, partitionSpec);
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to get table stats of table %s 's partition %s", tablePath.getFullName(), String.valueOf(partitionSpec)), e);
    }
}
Also used: TException(org.apache.thrift.TException) Partition(org.apache.hadoop.hive.metastore.api.Partition) CatalogPartition(org.apache.flink.table.catalog.CatalogPartition) CatalogTable(org.apache.flink.table.catalog.CatalogTable) SqlCreateHiveTable(org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable) Table(org.apache.hadoop.hive.metastore.api.Table) CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) ArrayList(java.util.ArrayList) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) CatalogColumnStatistics(org.apache.flink.table.catalog.stats.CatalogColumnStatistics) ColumnStatisticsObj(org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj) List(java.util.List) PartitionNotExistException(org.apache.flink.table.catalog.exceptions.PartitionNotExistException) PartitionSpecInvalidException(org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException)

Example 3 with TableNotExistException

Use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

From the class HiveCatalog, method getPartition:

@Override
public CatalogPartition getPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException {
    checkNotNull(tablePath, "Table path cannot be null");
    checkNotNull(partitionSpec, "CatalogPartitionSpec cannot be null");
    try {
        Partition hivePartition = getHivePartition(tablePath, partitionSpec);
        Map<String, String> properties = hivePartition.getParameters();
        properties.put(SqlCreateHiveTable.TABLE_LOCATION_URI, hivePartition.getSd().getLocation());
        String comment = properties.remove(HiveCatalogConfig.COMMENT);
        return new CatalogPartitionImpl(properties, comment);
    } catch (NoSuchObjectException | MetaException | TableNotExistException | PartitionSpecInvalidException e) {
        throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to get partition %s of table %s", partitionSpec, tablePath), e);
    }
}
Also used: TException(org.apache.thrift.TException) Partition(org.apache.hadoop.hive.metastore.api.Partition) CatalogPartition(org.apache.flink.table.catalog.CatalogPartition) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) PartitionNotExistException(org.apache.flink.table.catalog.exceptions.PartitionNotExistException) PartitionSpecInvalidException(org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException) CatalogPartitionImpl(org.apache.flink.table.catalog.CatalogPartitionImpl) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)

Example 4 with TableNotExistException

Use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

From the class HiveCatalog, method alterPartitionColumnStatistics:

@Override
public void alterPartitionColumnStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogColumnStatistics columnStatistics, boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException {
    try {
        Partition hivePartition = getHivePartition(tablePath, partitionSpec);
        Table hiveTable = getHiveTable(tablePath);
        String partName = getEscapedPartitionName(tablePath, partitionSpec, hiveTable);
        client.updatePartitionColumnStatistics(HiveStatsUtil.createPartitionColumnStats(hivePartition, partName, columnStatistics.getColumnStatisticsData(), hiveVersion));
    } catch (TableNotExistException | PartitionSpecInvalidException e) {
        if (!ignoreIfNotExists) {
            throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
        }
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to alter table column stats of table %s 's partition %s", tablePath.getFullName(), String.valueOf(partitionSpec)), e);
    }
}
Also used: TException(org.apache.thrift.TException) Partition(org.apache.hadoop.hive.metastore.api.Partition) CatalogPartition(org.apache.flink.table.catalog.CatalogPartition) CatalogTable(org.apache.flink.table.catalog.CatalogTable) SqlCreateHiveTable(org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable) Table(org.apache.hadoop.hive.metastore.api.Table) CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) PartitionNotExistException(org.apache.flink.table.catalog.exceptions.PartitionNotExistException) PartitionSpecInvalidException(org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException)
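Examples 2 through 4 share one translation pattern: low-level TableNotExistException or PartitionSpecInvalidException is rethrown as the catalog-level PartitionNotExistException, and Example 4 additionally suppresses that translation when ignoreIfNotExists is set. A self-contained sketch of the control flow follows; the exception classes and the MetastoreCall interface here are hypothetical stand-ins, not the Flink or Hive types:

```java
public class IgnoreIfNotExistsDemo {

    // Hypothetical stand-ins for the Flink exception hierarchy.
    static class TableNotExistException extends Exception {}
    static class PartitionNotExistException extends Exception {}

    // Hypothetical stand-in for a metastore call that may find nothing.
    @FunctionalInterface
    interface MetastoreCall {
        void run() throws TableNotExistException;
    }

    // Mirrors alterPartitionColumnStatistics: a missing table is rethrown
    // as the catalog-level exception unless the caller asked to ignore it,
    // in which case the alter silently becomes a no-op.
    static void alter(MetastoreCall call, boolean ignoreIfNotExists)
            throws PartitionNotExistException {
        try {
            call.run();
        } catch (TableNotExistException e) {
            if (!ignoreIfNotExists) {
                throw new PartitionNotExistException();
            }
        }
    }
}
```

The ignoreIfNotExists flag is the caller's way of choosing between fail-fast and idempotent semantics; the catch clause is the only place where that choice is evaluated.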

Example 5 with TableNotExistException

Use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

From the class HiveCatalog, method partitionExists:

// ------ partitions ------
@Override
public boolean partitionExists(ObjectPath tablePath, CatalogPartitionSpec partitionSpec) throws CatalogException {
    checkNotNull(tablePath, "Table path cannot be null");
    checkNotNull(partitionSpec, "CatalogPartitionSpec cannot be null");
    try {
        return getHivePartition(tablePath, partitionSpec) != null;
    } catch (NoSuchObjectException | TableNotExistException | PartitionSpecInvalidException e) {
        return false;
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to get partition %s of table %s", partitionSpec, tablePath), e);
    }
}
Also used: TException(org.apache.thrift.TException) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) PartitionSpecInvalidException(org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException)
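Example 5 is the usual exists-by-exception pattern: every "not found" exception maps to false instead of propagating, while genuine transport failures (TException) are still rethrown as CatalogException. A minimal standalone sketch, where find and NotFoundException are hypothetical stand-ins for getHivePartition and the metastore exceptions:

```java
public class ExistsByExceptionDemo {

    // Hypothetical stand-in for NoSuchObjectException / TableNotExistException.
    static class NotFoundException extends Exception {}

    // Hypothetical lookup standing in for getHivePartition().
    static Object find(String key) throws NotFoundException {
        if ("p1".equals(key)) {
            return new Object();
        }
        throw new NotFoundException();
    }

    // Mirrors partitionExists: "not found" becomes false rather than an
    // error, and the != null check additionally guards against lookups
    // that signal absence by returning null instead of throwing.
    static boolean exists(String key) {
        try {
            return find(key) != null;
        } catch (NotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(exists("p1")); // true
        System.out.println(exists("p2")); // false
    }
}
```

Note that the real method keeps a separate catch for TException: only absence is folded into the boolean result, so connectivity problems surface as exceptions rather than a misleading false.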

Aggregations

TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException): 25
CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException): 15
TException (org.apache.thrift.TException): 11
CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable): 10
CatalogTable (org.apache.flink.table.catalog.CatalogTable): 10
SqlCreateHiveTable (org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable): 8
PartitionSpecInvalidException (org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException): 8
Table (org.apache.hadoop.hive.metastore.api.Table): 8
ObjectPath (org.apache.flink.table.catalog.ObjectPath): 7
PartitionNotExistException (org.apache.flink.table.catalog.exceptions.PartitionNotExistException): 7
CatalogPartition (org.apache.flink.table.catalog.CatalogPartition): 6
List (java.util.List): 5
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException): 5
ArrayList (java.util.ArrayList): 4
Catalog (org.apache.flink.table.catalog.Catalog): 4
Partition (org.apache.hadoop.hive.metastore.api.Partition): 4
HashMap (java.util.HashMap): 3
Map (java.util.Map): 3
TableException (org.apache.flink.table.api.TableException): 3
TableSchema (org.apache.flink.table.api.TableSchema): 3