Example 31 with InvalidOperationException

use of org.apache.hadoop.hive.metastore.api.InvalidOperationException in project flink by apache.

the class HiveCatalog method alterPartition.

@Override
public void alterPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition newPartition, boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException {
    checkNotNull(tablePath, "Table path cannot be null");
    checkNotNull(partitionSpec, "CatalogPartitionSpec cannot be null");
    checkNotNull(newPartition, "New partition cannot be null");
    // ensure the target partition exists; honor ignoreIfNotExists when it does not
    try {
        Table hiveTable = getHiveTable(tablePath);
        boolean isHiveTable = isHiveTable(hiveTable.getParameters());
        if (!isHiveTable) {
            throw new CatalogException("Currently only supports partition for hive tables");
        }
        Partition hivePartition = getHivePartition(hiveTable, partitionSpec);
        if (hivePartition == null) {
            if (ignoreIfNotExists) {
                return;
            }
            throw new PartitionNotExistException(getName(), tablePath, partitionSpec);
        }
        AlterTableOp op = HiveTableUtil.extractAlterTableOp(newPartition.getProperties());
        if (op == null) {
            throw new CatalogException(ALTER_TABLE_OP + " is missing for alter table operation");
        }
        alterTableViaProperties(op, null, null, hivePartition.getParameters(), newPartition.getProperties(), hivePartition.getSd());
        client.alter_partition(tablePath.getDatabaseName(), tablePath.getObjectName(), hivePartition);
    } catch (NoSuchObjectException e) {
        if (!ignoreIfNotExists) {
            throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
        }
    } catch (InvalidOperationException | MetaException | TableNotExistException | PartitionSpecInvalidException e) {
        throw new PartitionNotExistException(getName(), tablePath, partitionSpec, e);
    } catch (TException e) {
        throw new CatalogException(String.format("Failed to alter existing partition with new partition %s of table %s", partitionSpec, tablePath), e);
    }
}
Also used : TException(org.apache.thrift.TException) Partition(org.apache.hadoop.hive.metastore.api.Partition) CatalogPartition(org.apache.flink.table.catalog.CatalogPartition) CatalogTable(org.apache.flink.table.catalog.CatalogTable) SqlCreateHiveTable(org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable) Table(org.apache.hadoop.hive.metastore.api.Table) CatalogBaseTable(org.apache.flink.table.catalog.CatalogBaseTable) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) AlterTableOp(org.apache.flink.sql.parser.hive.ddl.SqlAlterHiveTable.AlterTableOp) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) PartitionNotExistException(org.apache.flink.table.catalog.exceptions.PartitionNotExistException) PartitionSpecInvalidException(org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
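
The null-check on AlterTableOp above means the incoming CatalogPartition must carry the alter-op marker in its properties. A minimal caller-side sketch, assuming ALTER_TABLE_OP is the static import of SqlAlterHiveTable.ALTER_TABLE_OP and that extractAlterTableOp accepts the enum name; the catalog instance, database, table, and partition column are illustrative:

// Assumed imports: org.apache.flink.table.catalog.{ObjectPath, CatalogPartitionSpec, CatalogPartitionImpl},
// org.apache.flink.sql.parser.hive.ddl.SqlAlterHiveTable (and its nested AlterTableOp), java.util.HashMap/Map.

// Identify the partition to alter, e.g. dt=2021-01-01.
Map<String, String> specMap = new HashMap<>();
specMap.put("dt", "2021-01-01");
CatalogPartitionSpec spec = new CatalogPartitionSpec(specMap);

// Without this marker, alterPartition fails with "ALTER_TABLE_OP is missing ...".
Map<String, String> props = new HashMap<>();
props.put(SqlAlterHiveTable.ALTER_TABLE_OP, AlterTableOp.CHANGE_TBL_PROPS.name());
props.put("some.partition.property", "some-value");

hiveCatalog.alterPartition(
    new ObjectPath("default", "orders"),
    spec,
    new CatalogPartitionImpl(props, "updated partition"),
    false);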

Example 32 with InvalidOperationException

use of org.apache.hadoop.hive.metastore.api.InvalidOperationException in project hive by apache.

the class AuthorizationPreEventListener method invalidOperationException.

private InvalidOperationException invalidOperationException(Exception e) {
    InvalidOperationException ex = new InvalidOperationException(e.getMessage());
    ex.initCause(e.getCause());
    return ex;
}
Also used : InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException)
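
Note the chaining: the Thrift-generated InvalidOperationException takes e's message but e's cause, so the intermediate exception itself is dropped from the chain. A small illustrative sketch (names are made up):

Exception root = new SecurityException("permission denied");
Exception wrapper = new RuntimeException("authorization failed", root);
InvalidOperationException ex = invalidOperationException(wrapper);
// ex.getMessage() -> "authorization failed"
// ex.getCause()   -> root; the RuntimeException layer is not retained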

Example 33 with InvalidOperationException

use of org.apache.hadoop.hive.metastore.api.InvalidOperationException in project hive by apache.

the class HiveMetaStoreAuthorizer method onEvent.

@Override
public final void onEvent(PreEventContext preEventContext) throws MetaException, NoSuchObjectException, InvalidOperationException {
    LOG.debug("==> HiveMetaStoreAuthorizer.onEvent(): EventType=" + preEventContext.getEventType());
    try {
        HiveMetaStoreAuthzInfo authzContext = buildAuthzContext(preEventContext);
        if (!skipAuthorization(authzContext)) {
            HiveAuthorizer hiveAuthorizer = createHiveMetaStoreAuthorizer();
            checkPrivileges(authzContext, hiveAuthorizer);
        }
    } catch (Exception e) {
        LOG.error("HiveMetaStoreAuthorizer.onEvent(): failed", e);
        throw new MetaException(e.getMessage());
    }
    LOG.debug("<== HiveMetaStoreAuthorizer.onEvent(): EventType=" + preEventContext.getEventType());
}
Also used : HiveAuthorizer(org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthorizer) MetaException(org.apache.hadoop.hive.metastore.api.MetaException) IOException(java.io.IOException) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) HiveException(org.apache.hadoop.hive.ql.metadata.HiveException)
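
The listener only runs if the metastore is told about it. A hedged wiring sketch; the property key is Hive's standard pre-event listener setting, and the fully qualified class name is assumed from the plugin package layout:

// Assumed import: org.apache.hadoop.hive.conf.HiveConf.
HiveConf conf = new HiveConf();
// Any failure inside onEvent surfaces to the client as a MetaException, per the catch block above.
conf.set("hive.metastore.pre.event.listeners",
    "org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.HiveMetaStoreAuthorizer");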

Example 34 with InvalidOperationException

use of org.apache.hadoop.hive.metastore.api.InvalidOperationException in project hive by apache.

the class AbstractAlterTableOperation method finalizeAlterTableWithWriteIdOp.

private void finalizeAlterTableWithWriteIdOp(Table table, Table oldTable, List<Partition> partitions, DDLOperationContext context, EnvironmentContext environmentContext) throws HiveException {
    if (partitions == null) {
        updateModifiedParameters(table.getTTable().getParameters(), context.getConf());
        table.checkValidity(context.getConf());
    } else {
        for (Partition partition : partitions) {
            updateModifiedParameters(partition.getParameters(), context.getConf());
        }
    }
    try {
        environmentContext.putToProperties(HiveMetaHook.ALTER_TABLE_OPERATION_TYPE, desc.getType().name());
        if (desc.getType() == AlterTableType.ADDPROPS) {
            Map<String, String> oldTableParameters = oldTable.getParameters();
            environmentContext.putToProperties(HiveMetaHook.SET_PROPERTIES, table.getParameters().entrySet().stream().filter(e -> !oldTableParameters.containsKey(e.getKey()) || !oldTableParameters.get(e.getKey()).equals(e.getValue())).map(Map.Entry::getKey).collect(Collectors.joining(HiveMetaHook.PROPERTIES_SEPARATOR)));
        } else if (desc.getType() == AlterTableType.DROPPROPS) {
            Map<String, String> newTableParameters = table.getParameters();
            environmentContext.putToProperties(HiveMetaHook.UNSET_PROPERTIES, oldTable.getParameters().entrySet().stream().filter(e -> !newTableParameters.containsKey(e.getKey())).map(Map.Entry::getKey).collect(Collectors.joining(HiveMetaHook.PROPERTIES_SEPARATOR)));
        }
        if (partitions == null) {
            long writeId = desc.getWriteId() != null ? desc.getWriteId() : 0;
            try {
                context.getDb().alterTable(desc.getDbTableName(), table, desc.isCascade(), environmentContext, true, writeId);
            } catch (HiveException ex) {
                if (Boolean.valueOf(environmentContext.getProperties().getOrDefault(HiveMetaHook.INITIALIZE_ROLLBACK_MIGRATION, "false"))) {
                    // in case of rollback of alter table do the following:
                    // 1. restore serde info and input/output format
                    // 2. remove table columns which are used to be partition columns
                    // 3. add partition columns
                    table.getSd().setInputFormat(oldTable.getSd().getInputFormat());
                    table.getSd().setOutputFormat(oldTable.getSd().getOutputFormat());
                    table.getSd().setSerdeInfo(oldTable.getSd().getSerdeInfo());
                    table.getSd().getCols().removeAll(oldTable.getPartitionKeys());
                    table.setPartCols(oldTable.getPartitionKeys());
                    table.getParameters().clear();
                    table.getParameters().putAll(oldTable.getParameters());
                    context.getDb().alterTable(desc.getDbTableName(), table, desc.isCascade(), environmentContext, true, writeId);
                    throw new HiveException("Error occurred during hive table migration to iceberg. Table properties " + "and serde info was reverted to its original value. Partition info was lost during the migration " + "process, but it can be reverted by running MSCK REPAIR on table/partition level.\n" + "Retrying the migration without issuing MSCK REPAIR on a partitioned table will result in an empty " + "iceberg table.");
                } else {
                    throw ex;
                }
            }
        } else {
            // Note: this is necessary for UPDATE_STATISTICS command, that operates via ADDPROPS (why?).
            // For any other updates, we don't want to do txn check on partitions when altering table.
            boolean isTxn = false;
            if (desc.getPartitionSpec() != null && desc.getType() == AlterTableType.ADDPROPS) {
                // ADDPROPS is used to add replication properties like repl.last.id, which isn't
                // transactional change. In case of replication check for transactional properties
                // explicitly.
                Map<String, String> props = desc.getProps();
                if (desc.getReplicationSpec() != null && desc.getReplicationSpec().isInReplicationScope()) {
                    isTxn = (props.get(StatsSetupConst.COLUMN_STATS_ACCURATE) != null);
                } else {
                    isTxn = true;
                }
            }
            String qualifiedName = TableName.getDbTable(table.getTTable().getDbName(), table.getTTable().getTableName());
            context.getDb().alterPartitions(qualifiedName, partitions, environmentContext, isTxn);
        }
        // Add constraints if necessary
        if (desc instanceof AbstractAlterTableWithConstraintsDesc) {
            AlterTableAddConstraintOperation.addConstraints((AbstractAlterTableWithConstraintsDesc) desc, context.getDb());
        }
    } catch (InvalidOperationException e) {
        LOG.error("alter table: ", e);
        throw new HiveException(e, ErrorMsg.GENERIC_ERROR);
    }
    // Don't acquire locks for any of these, we have already asked for them in AbstractBaseAlterTableAnalyzer.
    if (partitions != null) {
        for (Partition partition : partitions) {
            context.getWork().getInputs().add(new ReadEntity(partition));
            DDLUtils.addIfAbsentByName(new WriteEntity(partition, WriteEntity.WriteType.DDL_NO_LOCK), context);
        }
    } else {
        context.getWork().getInputs().add(new ReadEntity(oldTable));
        DDLUtils.addIfAbsentByName(new WriteEntity(table, WriteEntity.WriteType.DDL_NO_LOCK), context);
    }
}
Also used : DDLOperation(org.apache.hadoop.hive.ql.ddl.DDLOperation) DDLOperationContext(org.apache.hadoop.hive.ql.ddl.DDLOperationContext) WriteEntity(org.apache.hadoop.hive.ql.hooks.WriteEntity) HiveConf(org.apache.hadoop.hive.conf.HiveConf) HiveMetaHook(org.apache.hadoop.hive.metastore.HiveMetaHook) EnvironmentContext(org.apache.hadoop.hive.metastore.api.EnvironmentContext) Table(org.apache.hadoop.hive.ql.metadata.Table) Collectors(java.util.stream.Collectors) StringUtils(org.apache.commons.lang3.StringUtils) SessionState(org.apache.hadoop.hive.ql.session.SessionState) ReadEntity(org.apache.hadoop.hive.ql.hooks.ReadEntity) ArrayList(java.util.ArrayList) Partition(org.apache.hadoop.hive.ql.metadata.Partition) List(java.util.List) AlterTableAddConstraintOperation(org.apache.hadoop.hive.ql.ddl.table.constraint.add.AlterTableAddConstraintOperation) StatsSetupConst(org.apache.hadoop.hive.common.StatsSetupConst) Map(java.util.Map) TableName(org.apache.hadoop.hive.common.TableName) DDLUtils(org.apache.hadoop.hive.ql.ddl.DDLUtils) StorageDescriptor(org.apache.hadoop.hive.metastore.api.StorageDescriptor) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) ErrorMsg(org.apache.hadoop.hive.ql.ErrorMsg) HiveException(org.apache.hadoop.hive.ql.metadata.HiveException)
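
The ADDPROPS and DROPPROPS branches both reduce to a key diff between the old and new table parameters. Pulled out of context, the SET_PROPERTIES computation behaves as below; this is a self-contained sketch, and the literal "," merely stands in for HiveMetaHook.PROPERTIES_SEPARATOR, whose actual value is an assumption here:

// Assumed imports: java.util.LinkedHashMap, java.util.Map, java.util.stream.Collectors.
Map<String, String> oldParams = new LinkedHashMap<>();
oldParams.put("owner", "alice");
oldParams.put("retention", "30");

Map<String, String> newParams = new LinkedHashMap<>();
newParams.put("owner", "alice");    // unchanged -> filtered out
newParams.put("retention", "90");   // changed   -> kept
newParams.put("format", "iceberg"); // added     -> kept

String setProps = newParams.entrySet().stream()
    .filter(e -> !oldParams.containsKey(e.getKey())
        || !oldParams.get(e.getKey()).equals(e.getValue()))
    .map(Map.Entry::getKey)
    .collect(Collectors.joining(","));
// setProps == "retention,format"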

Example 35 with InvalidOperationException

use of org.apache.hadoop.hive.metastore.api.InvalidOperationException in project hive by apache.

the class HCatClientHMSImpl method renameTable.

@Override
public void renameTable(String dbName, String oldName, String newName) throws HCatException {
    Table tbl;
    try {
        Table oldtbl = hmsClient.getTable(checkDB(dbName), oldName);
        if (oldtbl != null) {
            // TODO : Should be moved out.
            if (oldtbl.getParameters().get(org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE) != null) {
                throw new HCatException("Cannot use rename command on a non-native table");
            }
            tbl = new Table(oldtbl);
            tbl.setTableName(newName);
            hmsClient.alter_table(checkDB(dbName), oldName, tbl);
        }
    } catch (MetaException e) {
        throw new HCatException("MetaException while renaming table", e);
    } catch (NoSuchObjectException e) {
        throw new ObjectNotFoundException("NoSuchObjectException while renaming table", e);
    } catch (InvalidOperationException e) {
        throw new HCatException("InvalidOperationException while renaming table", e);
    } catch (TException e) {
        throw new ConnectionFailureException("TException while renaming table", e);
    }
}
Also used : TException(org.apache.thrift.TException) Table(org.apache.hadoop.hive.metastore.api.Table) HCatException(org.apache.hive.hcatalog.common.HCatException) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
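
A caller typically reaches this method through the HCatClient facade. A hedged usage sketch (database and table names are illustrative; create and close both throw HCatException, so the enclosing method declares it):

// Assumed imports: org.apache.hadoop.conf.Configuration,
// org.apache.hive.hcatalog.api.HCatClient, org.apache.hive.hcatalog.common.HCatException.
void renameOrders() throws HCatException {
    HCatClient client = HCatClient.create(new Configuration());
    try {
        // Rejected with HCatException for non-native (storage-handler) tables, per the check above.
        client.renameTable("sales_db", "orders_tmp", "orders");
    } finally {
        client.close();
    }
}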

Aggregations

InvalidOperationException (org.apache.hadoop.hive.metastore.api.InvalidOperationException): 51 usages
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException): 26 usages
MetaException (org.apache.hadoop.hive.metastore.api.MetaException): 23 usages
IOException (java.io.IOException): 19 usages
ArrayList (java.util.ArrayList): 18 usages
Table (org.apache.hadoop.hive.metastore.api.Table): 17 usages
InvalidObjectException (org.apache.hadoop.hive.metastore.api.InvalidObjectException): 16 usages
TException (org.apache.thrift.TException): 15 usages
Partition (org.apache.hadoop.hive.metastore.api.Partition): 14 usages
FileSystem (org.apache.hadoop.fs.FileSystem): 12 usages
Path (org.apache.hadoop.fs.Path): 12 usages
List (java.util.List): 10 usages
AlreadyExistsException (org.apache.hadoop.hive.metastore.api.AlreadyExistsException): 10 usages
InvalidInputException (org.apache.hadoop.hive.metastore.api.InvalidInputException): 10 usages
MWMResourcePlan (org.apache.hadoop.hive.metastore.model.MWMResourcePlan): 9 usages
SQLException (java.sql.SQLException): 8 usages
FieldSchema (org.apache.hadoop.hive.metastore.api.FieldSchema): 8 usages
Test (org.junit.Test): 8 usages
LinkedList (java.util.LinkedList): 7 usages
Database (org.apache.hadoop.hive.metastore.api.Database): 7 usages