Example 11 with TableNotExistException

use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

the class GenericInMemoryCatalog method alterTable.

@Override
public void alterTable(ObjectPath tablePath, CatalogBaseTable newTable, boolean ignoreIfNotExists) throws TableNotExistException {
    checkNotNull(tablePath);
    checkNotNull(newTable);
    CatalogBaseTable existingTable = tables.get(tablePath);
    if (existingTable != null) {
        if (existingTable.getTableKind() != newTable.getTableKind()) {
            throw new CatalogException(String.format("Table types don't match. Existing table is '%s' and new table is '%s'.", existingTable.getTableKind(), newTable.getTableKind()));
        }
        tables.put(tablePath, newTable.copy());
    } else if (!ignoreIfNotExists) {
        throw new TableNotExistException(getName(), tablePath);
    }
}
Also used : TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException)
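
A minimal caller-side sketch of this contract (a hypothetical snippet, not Flink source: the catalog/database/table names and the one-column schema are invented):

import java.util.Collections;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.exceptions.TableNotExistException;

public class AlterTableSketch {

    public static void main(String[] args) throws Exception {
        GenericInMemoryCatalog catalog = new GenericInMemoryCatalog("mycatalog", "mydb");
        ObjectPath path = new ObjectPath("mydb", "orders");
        CatalogTable newTable = CatalogTable.of(
                Schema.newBuilder().column("id", DataTypes.INT()).build(),
                null, Collections.emptyList(), Collections.emptyMap());

        // "orders" was never created, so with ignoreIfNotExists = true this is a silent no-op
        catalog.alterTable(path, newTable, true);

        // with ignoreIfNotExists = false the missing table surfaces as TableNotExistException
        try {
            catalog.alterTable(path, newTable, false);
        } catch (TableNotExistException e) {
            System.out.println("expected: " + e.getMessage());
        }
    }
}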

Example 12 with TableNotExistException

use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

the class PushPartitionIntoTableSourceScanRule method readPartitionsAndPrune.

private List<Map<String, String>> readPartitionsAndPrune(RexBuilder rexBuilder, FlinkContext context, TableSourceTable tableSourceTable, Function<List<Map<String, String>>, List<Map<String, String>>> pruner, Seq<RexNode> partitionPredicate, List<String> inputFieldNames) {
    // get partitions from table/catalog and prune
    Optional<Catalog> catalogOptional = tableSourceTable.contextResolvedTable().getCatalog();
    DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
    Optional<List<Map<String, String>>> optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
    if (optionalPartitions.isPresent()) {
        return pruner.apply(optionalPartitions.get());
    } else {
        // fall back to reading partitions from the catalog if the source doesn't list them itself
        if (!catalogOptional.isPresent()) {
            throw new TableException(String.format("Table '%s' connector doesn't provide partitions, and it cannot be loaded from the catalog", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()));
        }
        try {
            return readPartitionFromCatalogAndPrune(rexBuilder, context, catalogOptional.get(), tableSourceTable.contextResolvedTable().getIdentifier(), inputFieldNames, partitionPredicate, pruner);
        } catch (TableNotExistException tableNotExistException) {
            throw new TableException(String.format("Table %s is not found in catalog.", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()));
        } catch (TableNotPartitionedException tableNotPartitionedException) {
            throw new TableException(String.format("Table %s is not a partitionable source. Validator should have checked it.", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()), tableNotPartitionedException);
        }
    }
}
Also used : TableException (org.apache.flink.table.api.TableException), TableNotPartitionedException (org.apache.flink.table.catalog.exceptions.TableNotPartitionedException), TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), List (java.util.List), ArrayList (java.util.ArrayList), Catalog (org.apache.flink.table.catalog.Catalog), DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource), SupportsPartitionPushDown (org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown)
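
The connector-side half of that fallback, as a hedged sketch (the class name is hypothetical; only the two interface methods are real): returning Optional.empty() from listPartitions() is exactly what sends the rule above down the catalog path.

import java.util.List;
import java.util.Map;
import java.util.Optional;
import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;

public abstract class CatalogBackedSource implements SupportsPartitionPushDown {

    @Override
    public Optional<List<Map<String, String>>> listPartitions() {
        // no connector-side listing: the planner falls back to Catalog#listPartitions,
        // and fails with a TableException when no catalog is available either
        return Optional.empty();
    }

    @Override
    public void applyPartitions(List<Map<String, String>> remainingPartitions) {
        // receives the pruned partition list the planner selected
    }
}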

Example 13 with TableNotExistException

use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

the class HiveCatalog method alterPartition.

@Override
public void alterPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition newPartition, boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException {
    checkNotNull(tablePath, "Table path cannot be null");
    checkNotNull(partitionSpec, "CatalogPartitionSpec cannot be null");
    checkNotNull(newPartition, "New partition cannot be null");
    // check partition existence explicitly, because alter_partition() doesn't throw
    // NoSuchObjectException like dropPartition() does when the target doesn't exist
    try {
        Table hiveTable = getHiveTable(tablePath);
        boolean isHiveTable = isHiveTable(hiveTable.getParameters());
        if (!isHiveTable) {
            throw new CatalogException("Currently only supports partition for hive tables");
        }
        Partition hivePartition = getHivePartition(hiveTable, partitionSpec);
        if (hivePartition == null) {
            if (ignoreIfNotExists) {
                return;
            }
            throw new PartitionNotExistException(getName(), tablePath, partitionSpec);
        }
        AlterTableOp op = HiveTableUtil.extractAlterTableOp(newPartition.getProperties());
        if (op == null) {
            throw new CatalogException(ALTER_TABLE_OP + " is missing for alter table operation");
        }
        alterTableViaProperties(op, null, null, hivePartition.getParameters(), newPartition.getProperties(), hivePartition.getSd());
        client.alter_partition(tablePath.getDatabaseName(), tablePath.getObjectName(), hivePartition);
    } catch (NoSuchObjectException e) {
        if (!ignoreIfNotExists) {
            throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
        }
    } catch (InvalidOperationException | MetaException | TableNotExistException | PartitionSpecInvalidException e) {
        throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to alter existing partition with new partition %s of table %s", partitionSpec, tablePath), e);
    }
}
Also used : TException (org.apache.thrift.TException), Partition (org.apache.hadoop.hive.metastore.api.Partition), CatalogPartition (org.apache.flink.table.catalog.CatalogPartition), CatalogTable (org.apache.flink.table.catalog.CatalogTable), SqlCreateHiveTable (org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable), Table (org.apache.hadoop.hive.metastore.api.Table), CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable), TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException), AlterTableOp (org.apache.flink.sql.parser.hive.ddl.SqlAlterHiveTable.AlterTableOp), InvalidOperationException (org.apache.hadoop.hive.metastore.api.InvalidOperationException), NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException), PartitionNotExistException (org.apache.flink.table.catalog.exceptions.PartitionNotExistException), PartitionSpecInvalidException (org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException), MetaException (org.apache.hadoop.hive.metastore.api.MetaException)
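
A caller-side sketch, assuming an already-configured HiveCatalog (database, table, partition values, and the property being changed are all invented; the value format for the alter-op property is assumed to be the enum name, as parsed by HiveTableUtil.extractAlterTableOp):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.sql.parser.hive.ddl.SqlAlterHiveTable;
import org.apache.flink.sql.parser.hive.ddl.SqlAlterHiveTable.AlterTableOp;
import org.apache.flink.table.catalog.CatalogPartitionImpl;
import org.apache.flink.table.catalog.CatalogPartitionSpec;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class AlterPartitionSketch {

    public static void changePartitionProps(HiveCatalog catalog) throws Exception {
        ObjectPath tablePath = new ObjectPath("mydb", "sales"); // hypothetical table
        CatalogPartitionSpec spec =
                new CatalogPartitionSpec(Collections.singletonMap("dt", "2024-01-01"));

        Map<String, String> props = new HashMap<>();
        // the alter operation must be declared in the partition properties, otherwise
        // alterPartition fails with the "ALTER_TABLE_OP is missing" CatalogException above
        props.put(SqlAlterHiveTable.ALTER_TABLE_OP, AlterTableOp.CHANGE_TBL_PROPS.name());
        props.put("owner", "etl-team"); // hypothetical property to change

        // ignoreIfNotExists = false: a missing partition raises PartitionNotExistException
        catalog.alterPartition(tablePath, spec, new CatalogPartitionImpl(props, null), false);
    }
}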

Example 14 with TableNotExistException

use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

the class HiveCatalog method renameTable.

@Override
public void renameTable(ObjectPath tablePath, String newTableName, boolean ignoreIfNotExists) throws TableNotExistException, TableAlreadyExistException, CatalogException {
    checkNotNull(tablePath, "tablePath cannot be null");
    checkArgument(!isNullOrWhitespaceOnly(newTableName), "newTableName cannot be null or empty");
    try {
        // alter_table() doesn't throw a clear exception when the target table doesn't exist.
        // Thus, check the table existence explicitly
        if (tableExists(tablePath)) {
            ObjectPath newPath = new ObjectPath(tablePath.getDatabaseName(), newTableName);
            // alter_table() doesn't throw a clear exception when the new table already exists.
            // Thus, check the table existence explicitly
            if (tableExists(newPath)) {
                throw new TableAlreadyExistException(getName(), newPath);
            } else {
                Table table = getHiveTable(tablePath);
                table.setTableName(newTableName);
                client.alter_table(tablePath.getDatabaseName(), tablePath.getObjectName(), table);
            }
        } else if (!ignoreIfNotExists) {
            throw new TableNotExistException(getName(), tablePath);
        }
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to rename table %s", tablePath.getFullName()), e);
    }
}
Also used : TException (org.apache.thrift.TException), ObjectPath (org.apache.flink.table.catalog.ObjectPath), TableAlreadyExistException (org.apache.flink.table.catalog.exceptions.TableAlreadyExistException), CatalogTable (org.apache.flink.table.catalog.CatalogTable), SqlCreateHiveTable (org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable), Table (org.apache.hadoop.hive.metastore.api.Table), CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable), TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException)
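
A short hedged sketch of the rename contract from the caller's side (the names are invented; any Catalog implementation behaves the same way per the interface contract):

import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
import org.apache.flink.table.catalog.exceptions.TableNotExistException;

public class RenameSketch {

    public static void rename(Catalog catalog) throws Exception {
        ObjectPath oldPath = new ObjectPath("mydb", "orders_old");
        try {
            catalog.renameTable(oldPath, "orders", false);
        } catch (TableNotExistException e) {
            // "mydb.orders_old" doesn't exist; ignoreIfNotExists = true makes this a no-op
            catalog.renameTable(oldPath, "orders", true);
        } catch (TableAlreadyExistException e) {
            // "mydb.orders" is already taken; pick another name or drop it first
        }
    }
}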

Example 15 with TableNotExistException

use of org.apache.flink.table.catalog.exceptions.TableNotExistException in project flink by apache.

the class HiveCatalog method alterTableStatistics.

// ------ stats ------
@Override
public void alterTableStatistics(ObjectPath tablePath, CatalogTableStatistics tableStatistics, boolean ignoreIfNotExists) throws TableNotExistException, CatalogException {
    try {
        Table hiveTable = getHiveTable(tablePath);
        // the stats we put in table parameters will be overridden by HMS in older Hive
        // versions, so error out
        if (!isTablePartitioned(hiveTable) && hiveVersion.compareTo("1.2.1") < 0) {
            throw new CatalogException("Alter table stats is not supported in Hive version " + hiveVersion);
        }
        // Set table stats
        if (statsChanged(tableStatistics, hiveTable.getParameters())) {
            updateStats(tableStatistics, hiveTable.getParameters());
            client.alter_table(tablePath.getDatabaseName(), tablePath.getObjectName(), hiveTable);
        }
    } catch (TableNotExistException e) {
        if (!ignoreIfNotExists) {
            throw e;
        }
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to alter table stats of table %s", tablePath.getFullName()), e);
    }
}
Also used : TException (org.apache.thrift.TException), CatalogTable (org.apache.flink.table.catalog.CatalogTable), SqlCreateHiveTable (org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable), Table (org.apache.hadoop.hive.metastore.api.Table), CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable), TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException)
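
And a usage sketch for the stats path (the table identifier and all statistic values are invented):

import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.catalog.stats.CatalogTableStatistics;

public class AlterStatsSketch {

    public static void updateStats(HiveCatalog catalog) throws Exception {
        // rowCount, fileCount, totalSize, rawDataSize -- all values here are made up
        CatalogTableStatistics stats =
                new CatalogTableStatistics(1_000L, 10, 1_048_576L, 4_194_304L);
        // ignoreIfNotExists = true: a missing "mydb.sales" is silently ignored; when the
        // table exists, the metastore call is skipped entirely if the stats are unchanged
        catalog.alterTableStatistics(new ObjectPath("mydb", "sales"), stats, true);
    }
}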

Aggregations

TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException): 25
CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException): 15
TException (org.apache.thrift.TException): 11
CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable): 10
CatalogTable (org.apache.flink.table.catalog.CatalogTable): 10
SqlCreateHiveTable (org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable): 8
PartitionSpecInvalidException (org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException): 8
Table (org.apache.hadoop.hive.metastore.api.Table): 8
ObjectPath (org.apache.flink.table.catalog.ObjectPath): 7
PartitionNotExistException (org.apache.flink.table.catalog.exceptions.PartitionNotExistException): 7
CatalogPartition (org.apache.flink.table.catalog.CatalogPartition): 6
List (java.util.List): 5
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException): 5
ArrayList (java.util.ArrayList): 4
Catalog (org.apache.flink.table.catalog.Catalog): 4
Partition (org.apache.hadoop.hive.metastore.api.Partition): 4
HashMap (java.util.HashMap): 3
Map (java.util.Map): 3
TableException (org.apache.flink.table.api.TableException): 3
TableSchema (org.apache.flink.table.api.TableSchema): 3