
Example 6 with ValidationException

Use of org.apache.flink.table.api.ValidationException in project flink by apache.

From the class FileSystemTableFactory, method formatFactoryExists:

/**
 * Returns true if the format factory can be found using the given factory base class and
 * identifier.
 */
private boolean formatFactoryExists(Context context, Class<?> factoryClass) {
    Configuration options = Configuration.fromMap(context.getCatalogTable().getOptions());
    String identifier = options.get(FactoryUtil.FORMAT);
    if (identifier == null) {
        throw new ValidationException(String.format("Table options do not contain an option key '%s' for discovering a format.", FactoryUtil.FORMAT.key()));
    }
    // Discover every Factory registered on the classpath, narrow to the requested base class,
    // then keep only factories declaring the configured identifier.
    final List<Factory> factories = new LinkedList<>();
    ServiceLoader.load(Factory.class, context.getClassLoader()).iterator().forEachRemaining(factories::add);
    final List<Factory> foundFactories = factories.stream().filter(f -> factoryClass.isAssignableFrom(f.getClass())).collect(Collectors.toList());
    final List<Factory> matchingFactories = foundFactories.stream().filter(f -> f.factoryIdentifier().equals(identifier)).collect(Collectors.toList());
    return !matchingFactories.isEmpty();
}
Also used : DecodingFormatFactory(org.apache.flink.table.factories.DecodingFormatFactory) EncodingFormat(org.apache.flink.table.connector.format.EncodingFormat) Factory(org.apache.flink.table.factories.Factory) SerializationFormatFactory(org.apache.flink.table.factories.SerializationFormatFactory) SHORT_IDS(java.time.ZoneId.SHORT_IDS) BulkWriterFormatFactory(org.apache.flink.connector.file.table.factories.BulkWriterFormatFactory) DeserializationFormatFactory(org.apache.flink.table.factories.DeserializationFormatFactory) HashSet(java.util.HashSet) DecodingFormat(org.apache.flink.table.connector.format.DecodingFormat) DynamicTableSourceFactory(org.apache.flink.table.factories.DynamicTableSourceFactory) TableFactory(org.apache.flink.table.factories.TableFactory) ConfigOption(org.apache.flink.configuration.ConfigOption) LinkedList(java.util.LinkedList) EncodingFormatFactory(org.apache.flink.table.factories.EncodingFormatFactory) BulkReaderFormatFactory(org.apache.flink.connector.file.table.factories.BulkReaderFormatFactory) DynamicTableSource(org.apache.flink.table.connector.source.DynamicTableSource) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) DynamicTableSinkFactory(org.apache.flink.table.factories.DynamicTableSinkFactory) Configuration(org.apache.flink.configuration.Configuration) Set(java.util.Set) ServiceLoader(java.util.ServiceLoader) Collectors(java.util.stream.Collectors) List(java.util.List) Stream(java.util.stream.Stream) FactoryUtil(org.apache.flink.table.factories.FactoryUtil) ValidationException(org.apache.flink.table.api.ValidationException) Internal(org.apache.flink.annotation.Internal)
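
For readers who want to try the discovery pattern outside Flink, here is a minimal, self-contained sketch using only java.util.ServiceLoader. The FormatFactory interface, the plain "format" option key, and IllegalArgumentException are illustrative assumptions standing in for Flink's Factory, FactoryUtil.FORMAT, and ValidationException; this is a sketch of the technique, not Flink's implementation.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.ServiceLoader;

// Simplified analogue of FileSystemTableFactory#formatFactoryExists.
public class FormatDiscoverySketch {

    /** Hypothetical SPI interface standing in for org.apache.flink.table.factories.Factory. */
    public interface FormatFactory {
        String factoryIdentifier();
    }

    static boolean formatFactoryExists(Map<String, String> tableOptions, Class<?> factoryBaseClass) {
        String identifier = tableOptions.get("format");
        if (identifier == null) {
            // Mirrors the ValidationException above: fail fast when no format is configured.
            throw new IllegalArgumentException(
                    "Table options do not contain an option key 'format' for discovering a format.");
        }
        // Load every registered FormatFactory, keep those of the requested base type,
        // and check whether any of them declares the configured identifier.
        List<FormatFactory> factories = new ArrayList<>();
        ServiceLoader.load(FormatFactory.class).iterator().forEachRemaining(factories::add);
        return factories.stream()
                .filter(f -> factoryBaseClass.isAssignableFrom(f.getClass()))
                .anyMatch(f -> f.factoryIdentifier().equals(identifier));
    }

    public static void main(String[] args) {
        // With no META-INF/services registrations on the classpath, this prints "false".
        System.out.println(formatFactoryExists(Map.of("format", "csv"), FormatFactory.class));
    }
}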

Example 7 with ValidationException

Use of org.apache.flink.table.api.ValidationException in project flink by apache.

From the class HiveParserDDLSemanticAnalyzer, method convertCreateTable:

private Operation convertCreateTable(HiveParserASTNode ast) throws SemanticException {
    String[] qualifiedTabName = HiveParserBaseSemanticAnalyzer.getQualifiedTableName((HiveParserASTNode) ast.getChild(0));
    String dbDotTab = HiveParserBaseSemanticAnalyzer.getDotName(qualifiedTabName);
    String likeTableName;
    List<FieldSchema> cols = new ArrayList<>();
    List<FieldSchema> partCols = new ArrayList<>();
    List<PrimaryKey> primaryKeys = new ArrayList<>();
    List<NotNullConstraint> notNulls = new ArrayList<>();
    String comment = null;
    String location = null;
    Map<String, String> tblProps = null;
    boolean ifNotExists = false;
    boolean isExt = false;
    boolean isTemporary = false;
    HiveParserASTNode selectStmt = null;
    // regular CREATE TABLE
    final int createTable = 0;
    // CREATE TABLE LIKE ... (CTLT)
    final int ctlt = 1;
    // CREATE TABLE AS SELECT ... (CTAS)
    final int ctas = 2;
    int commandType = createTable;
    HiveParserBaseSemanticAnalyzer.HiveParserRowFormatParams rowFormatParams = new HiveParserBaseSemanticAnalyzer.HiveParserRowFormatParams();
    HiveParserStorageFormat storageFormat = new HiveParserStorageFormat(conf);
    LOG.info("Creating table " + dbDotTab + " position=" + ast.getCharPositionInLine());
    int numCh = ast.getChildCount();
    // Check the first-level children and do simple semantic checks:
    // 1) CTLT and CTAS should not coexist.
    // 2) CTLT or CTAS should not coexist with a column list (target table schema).
    // 3) CTAS does not support partitioning (for now).
    for (int num = 1; num < numCh; num++) {
        HiveParserASTNode child = (HiveParserASTNode) ast.getChild(num);
        if (storageFormat.fillStorageFormat(child)) {
            continue;
        }
        switch(child.getToken().getType()) {
            case HiveASTParser.TOK_IFNOTEXISTS:
                ifNotExists = true;
                break;
            case HiveASTParser.KW_EXTERNAL:
                isExt = true;
                break;
            case HiveASTParser.KW_TEMPORARY:
                isTemporary = true;
                break;
            case HiveASTParser.TOK_LIKETABLE:
                if (child.getChildCount() > 0) {
                    likeTableName = HiveParserBaseSemanticAnalyzer.getUnescapedName((HiveParserASTNode) child.getChild(0));
                    if (likeTableName != null) {
                        if (commandType == ctas) {
                            throw new ValidationException(ErrorMsg.CTAS_CTLT_COEXISTENCE.getMsg());
                        }
                        if (cols.size() != 0) {
                            throw new ValidationException(ErrorMsg.CTLT_COLLST_COEXISTENCE.getMsg());
                        }
                    }
                    commandType = ctlt;
                    handleUnsupportedOperation("CREATE TABLE LIKE is not supported");
                }
                break;
            case HiveASTParser.TOK_QUERY: // CTAS
                if (commandType == ctlt) {
                    throw new ValidationException(ErrorMsg.CTAS_CTLT_COEXISTENCE.getMsg());
                }
                if (cols.size() != 0) {
                    throw new ValidationException(ErrorMsg.CTAS_COLLST_COEXISTENCE.getMsg());
                }
                if (partCols.size() != 0) {
                    throw new ValidationException(ErrorMsg.CTAS_PARCOL_COEXISTENCE.getMsg());
                }
                if (isExt) {
                    throw new ValidationException(ErrorMsg.CTAS_EXTTBL_COEXISTENCE.getMsg());
                }
                commandType = ctas;
                selectStmt = child;
                break;
            case HiveASTParser.TOK_TABCOLLIST:
                cols = HiveParserBaseSemanticAnalyzer.getColumns(child, true, primaryKeys, notNulls);
                break;
            case HiveASTParser.TOK_TABLECOMMENT:
                comment = HiveParserBaseSemanticAnalyzer.unescapeSQLString(child.getChild(0).getText());
                break;
            case HiveASTParser.TOK_TABLEPARTCOLS:
                partCols = HiveParserBaseSemanticAnalyzer.getColumns((HiveParserASTNode) child.getChild(0), false);
                break;
            case HiveASTParser.TOK_TABLEROWFORMAT:
                rowFormatParams.analyzeRowFormat(child);
                break;
            case HiveASTParser.TOK_TABLELOCATION:
                location = HiveParserBaseSemanticAnalyzer.unescapeSQLString(child.getChild(0).getText());
                break;
            case HiveASTParser.TOK_TABLEPROPERTIES:
                tblProps = getProps((HiveParserASTNode) child.getChild(0));
                break;
            case HiveASTParser.TOK_TABLESERIALIZER:
                child = (HiveParserASTNode) child.getChild(0);
                storageFormat.setSerde(HiveParserBaseSemanticAnalyzer.unescapeSQLString(child.getChild(0).getText()));
                if (child.getChildCount() == 2) {
                    HiveParserBaseSemanticAnalyzer.readProps((HiveParserASTNode) (child.getChild(1).getChild(0)), storageFormat.getSerdeProps());
                }
                break;
            case HiveASTParser.TOK_ALTERTABLE_BUCKETS:
                handleUnsupportedOperation("Bucketed table is not supported");
                break;
            case HiveASTParser.TOK_TABLESKEWED:
                handleUnsupportedOperation("Skewed table is not supported");
                break;
            default:
                throw new ValidationException("Unknown AST node for CREATE TABLE: " + child);
        }
    }
    if (storageFormat.getStorageHandler() != null) {
        handleUnsupportedOperation("Storage handler table is not supported");
    }
    if (commandType == createTable || commandType == ctlt) {
        queryState.setCommandType(HiveOperation.CREATETABLE);
    } else {
        queryState.setCommandType(HiveOperation.CREATETABLE_AS_SELECT);
    }
    storageFormat.fillDefaultStorageFormat(isExt, false);
    if (isTemporary) {
        if (partCols.size() > 0) {
            handleUnsupportedOperation("Partition columns are not supported on temporary tables");
        }
        handleUnsupportedOperation("Temporary hive table is not supported");
    }
    // Handle different types of CREATE TABLE command
    switch(commandType) {
        case createTable: // regular CREATE TABLE DDL
            tblProps = addDefaultProperties(tblProps);
            return convertCreateTable(dbDotTab, isExt, ifNotExists, isTemporary, cols, partCols, comment, location, tblProps, rowFormatParams, storageFormat, primaryKeys, notNulls);
        case ctlt: // create table like <tbl_name>
            tblProps = addDefaultProperties(tblProps);
            throw new SemanticException("CREATE TABLE LIKE is not supported yet");
        case ctas: // create table as select
            tblProps = addDefaultProperties(tblProps);
            // analyze the query
            HiveParserCalcitePlanner calcitePlanner = hiveParser.createCalcitePlanner(context, queryState, hiveShim);
            calcitePlanner.setCtasCols(cols);
            RelNode queryRelNode = calcitePlanner.genLogicalPlan(selectStmt);
            // create a table to represent the dest table
            String[] dbTblName = dbDotTab.split("\\.");
            Table destTable = new Table(Table.getEmptyTable(dbTblName[0], dbTblName[1]));
            destTable.getSd().setCols(cols);
            Tuple4<ObjectIdentifier, QueryOperation, Map<String, String>, Boolean> insertOperationInfo = dmlHelper.createInsertOperationInfo(queryRelNode, destTable, Collections.emptyMap(), Collections.emptyList(), false);
            CreateTableOperation createTableOperation = convertCreateTable(dbDotTab, isExt, ifNotExists, isTemporary, cols, partCols, comment, location, tblProps, rowFormatParams, storageFormat, primaryKeys, notNulls);
            return new CreateTableASOperation(createTableOperation, insertOperationInfo.f2, insertOperationInfo.f1, insertOperationInfo.f3);
        default:
            throw new ValidationException("Unrecognized command.");
    }
}
Also used : ValidationException(org.apache.flink.table.api.ValidationException) FieldSchema(org.apache.hadoop.hive.metastore.api.FieldSchema) HiveParserRowFormatParams(org.apache.flink.table.planner.delegation.hive.copy.HiveParserBaseSemanticAnalyzer.HiveParserRowFormatParams) ArrayList(java.util.ArrayList) PrimaryKey(org.apache.flink.table.planner.delegation.hive.copy.HiveParserBaseSemanticAnalyzer.PrimaryKey) CreateTableOperation(org.apache.flink.table.operations.ddl.CreateTableOperation) CreateTableASOperation(org.apache.flink.table.operations.ddl.CreateTableASOperation) NotNullConstraint(org.apache.flink.table.planner.delegation.hive.copy.HiveParserBaseSemanticAnalyzer.NotNullConstraint) SemanticException(org.apache.hadoop.hive.ql.parse.SemanticException) HiveParserCalcitePlanner(org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner) ObjectIdentifier(org.apache.flink.table.catalog.ObjectIdentifier) QueryOperation(org.apache.flink.table.operations.QueryOperation) HiveParserASTNode(org.apache.flink.table.planner.delegation.hive.copy.HiveParserASTNode) CatalogTable(org.apache.flink.table.catalog.CatalogTable) SqlCreateHiveTable(org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable) Table(org.apache.hadoop.hive.ql.metadata.Table) ContextResolvedTable(org.apache.flink.table.catalog.ContextResolvedTable) CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) HiveParserStorageFormat(org.apache.flink.table.planner.delegation.hive.copy.HiveParserStorageFormat) UniqueConstraint(org.apache.flink.table.api.constraints.UniqueConstraint) RelNode(org.apache.calcite.rel.RelNode) HiveParserBaseSemanticAnalyzer(org.apache.flink.table.planner.delegation.hive.copy.HiveParserBaseSemanticAnalyzer) Map(java.util.Map) LinkedHashMap(java.util.LinkedHashMap) HashMap(java.util.HashMap)
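
Most of this method enforces that CREATE TABLE LIKE (CTLT), CREATE TABLE AS SELECT (CTAS), an explicit column list, partition columns, and the EXTERNAL keyword are combined only in legal ways. The following sketch isolates those mutual-exclusion checks from the AST traversal; the enum, flags, and IllegalStateException are assumptions for illustration, whereas the real analyzer derives the flags while walking the AST children and throws ValidationException with ErrorMsg texts.

// Sketch of the clause-exclusion checks above, detached from the AST traversal.
// All names here are illustrative, not Flink or Hive APIs.
final class CreateTableClauseChecks {

    enum CommandType { CREATE_TABLE, CTLT, CTAS }

    // Called when a CTAS query clause (TOK_QUERY) is encountered.
    static void checkCtasAllowed(CommandType current, boolean hasColumnList,
                                 boolean hasPartitionColumns, boolean isExternal) {
        if (current == CommandType.CTLT) {
            throw new IllegalStateException("CREATE TABLE LIKE and CREATE TABLE AS SELECT cannot coexist");
        }
        if (hasColumnList) {
            throw new IllegalStateException("CREATE TABLE AS SELECT cannot also declare a column list");
        }
        if (hasPartitionColumns) {
            throw new IllegalStateException("CREATE TABLE AS SELECT cannot declare partition columns");
        }
        if (isExternal) {
            throw new IllegalStateException("CREATE TABLE AS SELECT cannot create an EXTERNAL table");
        }
    }
}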

Example 8 with ValidationException

Use of org.apache.flink.table.api.ValidationException in project flink by apache.

From the class HiveParserDDLSemanticAnalyzer, method convertAlterDatabaseOwner:

private Operation convertAlterDatabaseOwner(HiveParserASTNode ast) {
    String dbName = HiveParserBaseSemanticAnalyzer.getUnescapedName((HiveParserASTNode) ast.getChild(0));
    PrincipalDesc principalDesc = HiveParserAuthorizationParseUtils.getPrincipalDesc((HiveParserASTNode) ast.getChild(1));
    // The syntax should not allow these fields to be null, but let's verify
    String nullCmdMsg = "can't be null in alter database set owner command";
    if (principalDesc.getName() == null) {
        throw new ValidationException("Owner name " + nullCmdMsg);
    }
    if (principalDesc.getType() == null) {
        throw new ValidationException("Owner type " + nullCmdMsg);
    }
    CatalogDatabase originDB = getDatabase(dbName);
    Map<String, String> props = new HashMap<>(originDB.getProperties());
    props.put(ALTER_DATABASE_OP, SqlAlterHiveDatabase.AlterHiveDatabaseOp.CHANGE_OWNER.name());
    props.put(DATABASE_OWNER_NAME, principalDesc.getName());
    props.put(DATABASE_OWNER_TYPE, principalDesc.getType().name().toLowerCase());
    CatalogDatabase newDB = new CatalogDatabaseImpl(props, originDB.getComment());
    return new AlterDatabaseOperation(catalogManager.getCurrentCatalog(), dbName, newDB);
}
Also used : CatalogDatabase(org.apache.flink.table.catalog.CatalogDatabase) PrincipalDesc(org.apache.hadoop.hive.ql.plan.PrincipalDesc) AlterDatabaseOperation(org.apache.flink.table.operations.ddl.AlterDatabaseOperation) ValidationException(org.apache.flink.table.api.ValidationException) LinkedHashMap(java.util.LinkedHashMap) HashMap(java.util.HashMap) CatalogDatabaseImpl(org.apache.flink.table.catalog.CatalogDatabaseImpl)
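
The core of this conversion is a copy-and-amend of the database property map plus null checks on the parsed principal. A hedged sketch of that pattern follows; the string keys and IllegalArgumentException are placeholders, where the real code uses the ALTER_DATABASE_OP, DATABASE_OWNER_NAME, and DATABASE_OWNER_TYPE constants and ValidationException.

import java.util.HashMap;
import java.util.Map;

// Sketch of the copy-and-amend properties pattern used when changing a database owner.
final class AlterOwnerPropsSketch {

    static Map<String, String> withNewOwner(Map<String, String> originalProps,
                                            String ownerName, String ownerType) {
        String nullCmdMsg = "can't be null in alter database set owner command";
        if (ownerName == null) {
            throw new IllegalArgumentException("Owner name " + nullCmdMsg);
        }
        if (ownerType == null) {
            throw new IllegalArgumentException("Owner type " + nullCmdMsg);
        }
        // Never mutate the original database properties; copy them, then record the owner change.
        Map<String, String> props = new HashMap<>(originalProps);
        props.put("alter.database.op", "CHANGE_OWNER");             // placeholder for ALTER_DATABASE_OP
        props.put("database.owner.name", ownerName);                // placeholder for DATABASE_OWNER_NAME
        props.put("database.owner.type", ownerType.toLowerCase());  // placeholder for DATABASE_OWNER_TYPE
        return props;
    }
}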

Example 9 with ValidationException

Use of org.apache.flink.table.api.ValidationException in project flink by apache.

From the class HiveParserDDLSemanticAnalyzer, method convertDropTable:

private Operation convertDropTable(HiveParserASTNode ast, TableType expectedType) {
    String tableName = HiveParserBaseSemanticAnalyzer.getUnescapedName((HiveParserASTNode) ast.getChild(0));
    boolean ifExists = (ast.getFirstChildWithType(HiveASTParser.TOK_IFEXISTS) != null);
    ObjectIdentifier identifier = parseObjectIdentifier(tableName);
    CatalogBaseTable baseTable = getCatalogBaseTable(identifier, true);
    if (expectedType == TableType.VIRTUAL_VIEW) {
        if (baseTable instanceof CatalogTable) {
            throw new ValidationException("DROP VIEW for a table is not allowed");
        }
        return new DropViewOperation(identifier, ifExists, false);
    } else {
        if (baseTable instanceof CatalogView) {
            throw new ValidationException("DROP TABLE for a view is not allowed");
        }
        return new DropTableOperation(identifier, ifExists, false);
    }
}
Also used : CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) ValidationException(org.apache.flink.table.api.ValidationException) DropViewOperation(org.apache.flink.table.operations.ddl.DropViewOperation) DropTableOperation(org.apache.flink.table.operations.ddl.DropTableOperation) CatalogTable(org.apache.flink.table.catalog.CatalogTable) CatalogView(org.apache.flink.table.catalog.CatalogView) ObjectIdentifier(org.apache.flink.table.catalog.ObjectIdentifier)
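
Reduced to its essentials, the check is a guard that the kind of object being dropped matches the kind named by the statement. A minimal sketch, assuming a simple TableKind enum in place of Flink's CatalogTable/CatalogView distinction and IllegalStateException in place of ValidationException:

// Sketch of the drop-kind guard above, reduced to an enum comparison; names are illustrative.
final class DropKindGuardSketch {

    enum TableKind { TABLE, VIEW }

    static void checkDropKind(TableKind actualKind, TableKind kindExpectedByStatement) {
        if (kindExpectedByStatement == TableKind.VIEW && actualKind == TableKind.TABLE) {
            throw new IllegalStateException("DROP VIEW for a table is not allowed");
        }
        if (kindExpectedByStatement == TableKind.TABLE && actualKind == TableKind.VIEW) {
            throw new IllegalStateException("DROP TABLE for a view is not allowed");
        }
    }
}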

Example 10 with ValidationException

Use of org.apache.flink.table.api.ValidationException in project flink by apache.

From the class HiveParserDDLSemanticAnalyzer, method getDatabase:

private CatalogDatabase getDatabase(String databaseName) {
    Catalog catalog = catalogManager.getCatalog(catalogManager.getCurrentCatalog()).get();
    CatalogDatabase database;
    try {
        database = catalog.getDatabase(databaseName);
    } catch (DatabaseNotExistException e) {
        throw new ValidationException(String.format("Database %s not exists", databaseName), e);
    }
    return database;
}
Also used : CatalogDatabase(org.apache.flink.table.catalog.CatalogDatabase) ValidationException(org.apache.flink.table.api.ValidationException) DatabaseNotExistException(org.apache.flink.table.catalog.exceptions.DatabaseNotExistException) Catalog(org.apache.flink.table.catalog.Catalog) HiveCatalog(org.apache.flink.table.catalog.hive.HiveCatalog)
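
The pattern here is wrapping a checked catalog exception in an unchecked validation error while preserving the cause. A standalone sketch, with LookupFailedException standing in for DatabaseNotExistException and RuntimeException standing in for ValidationException:

// Sketch of the checked-to-unchecked wrapping pattern above; the key point is that the
// original exception travels along as the cause.
final class ExceptionWrappingSketch {

    static class LookupFailedException extends Exception {
        LookupFailedException(String message) { super(message); }
    }

    /** Hypothetical lookup that only knows how to fail with a checked exception. */
    static String lookup(String databaseName) throws LookupFailedException {
        throw new LookupFailedException("no such database: " + databaseName);
    }

    static String lookupOrValidationError(String databaseName) {
        try {
            return lookup(databaseName);
        } catch (LookupFailedException e) {
            // Wrap the checked exception but keep it as the cause for diagnostics.
            throw new RuntimeException(
                    String.format("Database %s does not exist", databaseName), e);
        }
    }
}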

Aggregations

ValidationException (org.apache.flink.table.api.ValidationException): 143 usages
DataType (org.apache.flink.table.types.DataType): 25 usages
Test (org.junit.Test): 23 usages
HashMap (java.util.HashMap): 21 usages
ObjectIdentifier (org.apache.flink.table.catalog.ObjectIdentifier): 19 usages
LogicalType (org.apache.flink.table.types.logical.LogicalType): 18 usages
TableException (org.apache.flink.table.api.TableException): 17 usages
List (java.util.List): 14 usages
CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable): 14 usages
QueryOperation (org.apache.flink.table.operations.QueryOperation): 14 usages
LinkedHashMap (java.util.LinkedHashMap): 13 usages
DescriptorProperties (org.apache.flink.table.descriptors.DescriptorProperties): 13 usages
CatalogTable (org.apache.flink.table.catalog.CatalogTable): 12 usages
Expression (org.apache.flink.table.expressions.Expression): 12 usages
TableSchema (org.apache.flink.table.api.TableSchema): 11 usages
Catalog (org.apache.flink.table.catalog.Catalog): 11 usages
ContextResolvedTable (org.apache.flink.table.catalog.ContextResolvedTable): 11 usages
ArrayList (java.util.ArrayList): 10 usages
Map (java.util.Map): 10 usages
Internal (org.apache.flink.annotation.Internal): 10 usages