
Example 11 with AlreadyExistsException

Use of org.apache.hadoop.hive.metastore.api.AlreadyExistsException in project hive by apache.

The class Hive, method createTable.

/**
   * Creates the table with the given objects. It takes additional arguments for
   * primary keys and foreign keys associated with the table.
   *
   * @param tbl
   *          a table object
   * @param ifNotExists
   *          if true, ignore AlreadyExistsException
   * @param primaryKeys
   *          primary key columns associated with the table
   * @param foreignKeys
   *          foreign key columns associated with the table
   * @throws HiveException
   */
public void createTable(Table tbl, boolean ifNotExists, List<SQLPrimaryKey> primaryKeys, List<SQLForeignKey> foreignKeys) throws HiveException {
    try {
        if (tbl.getDbName() == null || "".equals(tbl.getDbName().trim())) {
            tbl.setDbName(SessionState.get().getCurrentDatabase());
        }
        if (tbl.getCols().size() == 0 || tbl.getSd().getColsSize() == 0) {
            tbl.setFields(MetaStoreUtils.getFieldsFromDeserializer(tbl.getTableName(), tbl.getDeserializer()));
        }
        tbl.checkValidity(conf);
        if (tbl.getParameters() != null) {
            tbl.getParameters().remove(hive_metastoreConstants.DDL_TIME);
        }
        org.apache.hadoop.hive.metastore.api.Table tTbl = tbl.getTTable();
        PrincipalPrivilegeSet principalPrivs = new PrincipalPrivilegeSet();
        SessionState ss = SessionState.get();
        if (ss != null) {
            CreateTableAutomaticGrant grants = ss.getCreateTableGrants();
            if (grants != null) {
                principalPrivs.setUserPrivileges(grants.getUserGrants());
                principalPrivs.setGroupPrivileges(grants.getGroupGrants());
                principalPrivs.setRolePrivileges(grants.getRoleGrants());
                tTbl.setPrivileges(principalPrivs);
            }
        }
        if (primaryKeys == null && foreignKeys == null) {
            getMSC().createTable(tTbl);
        } else {
            getMSC().createTableWithConstraints(tTbl, primaryKeys, foreignKeys);
        }
    } catch (AlreadyExistsException e) {
        if (!ifNotExists) {
            throw new HiveException(e);
        }
    } catch (Exception e) {
        throw new HiveException(e);
    }
}
Also used: SessionState(org.apache.hadoop.hive.ql.session.SessionState) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) PrincipalPrivilegeSet(org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet) CreateTableAutomaticGrant(org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) TException(org.apache.thrift.TException) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException) SerDeException(org.apache.hadoop.hive.serde2.SerDeException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException) HiveMetaException(org.apache.hadoop.hive.metastore.HiveMetaException) FileNotFoundException(java.io.FileNotFoundException) JDODataStoreException(javax.jdo.JDODataStoreException)
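
For orientation, here is a minimal caller sketch (not part of the indexed sources; the table name, column, and configuration setup are illustrative assumptions) showing how ifNotExists suppresses AlreadyExistsException:

// Hedged sketch: assumes java.util.Arrays plus the Hive QL and metastore API
// classes listed above are imported, and that a metastore is reachable.
HiveConf conf = new HiveConf();
SessionState.start(conf);
Hive db = Hive.get(conf);

Table tbl = db.newTable("default.example_tbl"); // hypothetical table name
tbl.setFields(Arrays.asList(new FieldSchema("id", "int", null)));

// First call creates the table; the second is a no-op because
// ifNotExists == true makes the catch block swallow AlreadyExistsException.
db.createTable(tbl, true, null, null);
db.createTable(tbl, true, null, null);

// With ifNotExists == false, the same collision would surface as a HiveException.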

Example 12 with AlreadyExistsException

Use of org.apache.hadoop.hive.metastore.api.AlreadyExistsException in project hive by apache.

The class HCatClientHMSImpl, method addPartition.

@Override
public void addPartition(HCatAddPartitionDesc partInfo) throws HCatException {
    Table tbl = null;
    try {
        tbl = hmsClient.getTable(partInfo.getDatabaseName(), partInfo.getTableName());
        // TODO: Should be moved out.
        if (tbl.getPartitionKeysSize() == 0) {
            throw new HCatException("The table " + partInfo.getTableName() + " is not partitioned.");
        }
        HCatTable hcatTable = new HCatTable(tbl);
        HCatPartition hcatPartition = partInfo.getHCatPartition();
        // This is only required to support the deprecated methods in HCatAddPartitionDesc.Builder.
        if (hcatPartition == null) {
            hcatPartition = partInfo.getHCatPartition(hcatTable);
        }
        hmsClient.add_partition(hcatPartition.toHivePartition());
    } catch (InvalidObjectException e) {
        throw new HCatException("InvalidObjectException while adding partition.", e);
    } catch (AlreadyExistsException e) {
        throw new HCatException("AlreadyExistsException while adding partition.", e);
    } catch (MetaException e) {
        throw new HCatException("MetaException while adding partition.", e);
    } catch (NoSuchObjectException e) {
        throw new ObjectNotFoundException("The table " + partInfo.getTableName() + " is could not be found.", e);
    } catch (TException e) {
        throw new ConnectionFailureException("TException while adding partition.", e);
    }
}
Also used: TException(org.apache.thrift.TException) Table(org.apache.hadoop.hive.metastore.api.Table) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) HCatException(org.apache.hive.hcatalog.common.HCatException) InvalidObjectException(org.apache.hadoop.hive.metastore.api.InvalidObjectException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
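
A hedged usage sketch for the method above; the client construction, database/table names, and partition spec are illustrative assumptions, not taken from the indexed sources:

// Hedged sketch: assumes a Hadoop Configuration pointing at a live metastore.
HCatClient client = HCatClient.create(new Configuration());

Map<String, String> partitionSpec = new HashMap<String, String>();
partitionSpec.put("dt", "2024-01-01"); // hypothetical partition column/value

HCatAddPartitionDesc partDesc = HCatAddPartitionDesc
    .create("default", "example_tbl", null, partitionSpec) // deprecated builder path
    .build();

// A duplicate partition surfaces as an HCatException wrapping
// AlreadyExistsException; a missing table as an ObjectNotFoundException.
client.addPartition(partDesc);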

Example 13 with AlreadyExistsException

Use of org.apache.hadoop.hive.metastore.api.AlreadyExistsException in project hive by apache.

The class HCatClientHMSImpl, method addPartitions.

/**
 * Adds each partition in the given list to its table.
 *
 * @param partInfoList the partitions to add
 * @return the number of partitions added
 * @throws HCatException, ConnectionFailureException
 * @see org.apache.hive.hcatalog.api.HCatClient#addPartitions(java.util.List)
 */
@Override
public int addPartitions(List<HCatAddPartitionDesc> partInfoList) throws HCatException {
    int numPartitions = -1;
    if ((partInfoList == null) || (partInfoList.size() == 0)) {
        throw new HCatException("The partition list is null or empty.");
    }
    Table tbl = null;
    try {
        tbl = hmsClient.getTable(partInfoList.get(0).getDatabaseName(), partInfoList.get(0).getTableName());
        HCatTable hcatTable = new HCatTable(tbl);
        ArrayList<Partition> ptnList = new ArrayList<Partition>();
        for (HCatAddPartitionDesc desc : partInfoList) {
            HCatPartition hCatPartition = desc.getHCatPartition();
            // This is required only to support the deprecated HCatAddPartitionDesc.Builder interfaces.
            if (hCatPartition == null) {
                hCatPartition = desc.getHCatPartition(hcatTable);
            }
            ptnList.add(hCatPartition.toHivePartition());
        }
        numPartitions = hmsClient.add_partitions(ptnList);
    } catch (InvalidObjectException e) {
        throw new HCatException("InvalidObjectException while adding partition.", e);
    } catch (AlreadyExistsException e) {
        throw new HCatException("AlreadyExistsException while adding partition.", e);
    } catch (MetaException e) {
        throw new HCatException("MetaException while adding partition.", e);
    } catch (NoSuchObjectException e) {
        throw new ObjectNotFoundException("The table " + partInfoList.get(0).getTableName() + " is could not be found.", e);
    } catch (TException e) {
        throw new ConnectionFailureException("TException while adding partition.", e);
    }
    return numPartitions;
}
Also used: TException(org.apache.thrift.TException) Partition(org.apache.hadoop.hive.metastore.api.Partition) Table(org.apache.hadoop.hive.metastore.api.Table) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) HCatException(org.apache.hive.hcatalog.common.HCatException) ArrayList(java.util.ArrayList) InvalidObjectException(org.apache.hadoop.hive.metastore.api.InvalidObjectException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
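
The batch variant can be exercised the same way; in this sketch the descriptors and partition values are hypothetical, and the client comes from the previous sketch:

// Hedged sketch: builds several descriptors and adds them in one call.
List<HCatAddPartitionDesc> descs = new ArrayList<HCatAddPartitionDesc>();
for (String dt : Arrays.asList("2024-01-01", "2024-01-02")) {
    Map<String, String> spec = new HashMap<String, String>();
    spec.put("dt", dt);
    descs.add(HCatAddPartitionDesc.create("default", "example_tbl", null, spec).build());
}

// One metastore round trip for the whole batch; the return value is the
// number of partitions actually added.
int added = client.addPartitions(descs);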

Example 14 with AlreadyExistsException

Use of org.apache.hadoop.hive.metastore.api.AlreadyExistsException in project metacat by Netflix.

The class HiveConnectorPartitionService, method savePartitions.

/**
 * {@inheritDoc}
 */
@Override
public PartitionsSaveResponse savePartitions(@Nonnull @NonNull final ConnectorContext requestContext, @Nonnull @NonNull final QualifiedName tableName, @Nonnull @NonNull final PartitionsSaveRequest partitionsSaveRequest) {
    final String databasename = tableName.getDatabaseName();
    final String tablename = tableName.getTableName();
    // New partitions
    final List<Partition> hivePartitions = Lists.newArrayList();
    try {
        final Table table = metacatHiveClient.getTableByName(databasename, tablename);
        final List<PartitionInfo> partitionInfos = partitionsSaveRequest.getPartitions();
        // New partition ids
        final List<String> addedPartitionIds = Lists.newArrayList();
        // Updated partition ids
        final List<String> existingPartitionIds = Lists.newArrayList();
        // Existing partitions
        final List<Partition> existingHivePartitions = Lists.newArrayList();
        // Existing partition map
        Map<String, Partition> existingPartitionMap = Collections.emptyMap();
        if (partitionsSaveRequest.getCheckIfExists()) {
            final List<String> partitionNames = partitionInfos.stream().map(partition -> {
                final String partitionName = partition.getName().getPartitionName();
                PartitionUtil.validatePartitionName(partitionName, getPartitionKeys(table.getPartitionKeys()));
                return partitionName;
            }).collect(Collectors.toList());
            existingPartitionMap = getPartitionsByNames(table, partitionNames);
        }
        final TableInfo tableInfo = hiveMetacatConverters.toTableInfo(tableName, table);
        for (PartitionInfo partitionInfo : partitionInfos) {
            final String partitionName = partitionInfo.getName().getPartitionName();
            final Partition hivePartition = existingPartitionMap.get(partitionName);
            if (hivePartition == null) {
                addedPartitionIds.add(partitionName);
                hivePartitions.add(hiveMetacatConverters.fromPartitionInfo(tableInfo, partitionInfo));
            } else {
                // the partition already exists; update it in place only when alterIfExists is set
                if (partitionsSaveRequest.getAlterIfExists()) {
                    final Partition existingPartition = hiveMetacatConverters.fromPartitionInfo(tableInfo, partitionInfo);
                    existingPartitionIds.add(partitionName);
                    existingPartition.setParameters(hivePartition.getParameters());
                    existingPartition.setCreateTime(hivePartition.getCreateTime());
                    existingPartition.setLastAccessTime(hivePartition.getLastAccessTime());
                    existingHivePartitions.add(existingPartition);
                }
            }
        }
        final Set<String> deletePartitionIds = Sets.newHashSet();
        if (!partitionsSaveRequest.getAlterIfExists()) {
            deletePartitionIds.addAll(existingPartitionIds);
        }
        if (partitionsSaveRequest.getPartitionIdsForDeletes() != null) {
            deletePartitionIds.addAll(partitionsSaveRequest.getPartitionIdsForDeletes());
        }
        if (partitionsSaveRequest.getAlterIfExists() && !existingHivePartitions.isEmpty()) {
            copyTableSdToPartitionSd(existingHivePartitions, table);
            metacatHiveClient.alterPartitions(databasename, tablename, existingHivePartitions);
        }
        copyTableSdToPartitionSd(hivePartitions, table);
        metacatHiveClient.addDropPartitions(databasename, tablename, hivePartitions, Lists.newArrayList(deletePartitionIds));
        final PartitionsSaveResponse result = new PartitionsSaveResponse();
        result.setAdded(addedPartitionIds);
        result.setUpdated(existingPartitionIds);
        return result;
    } catch (NoSuchObjectException exception) {
        if (exception.getMessage() != null && exception.getMessage().startsWith("Partition doesn't exist")) {
            throw new PartitionNotFoundException(tableName, "", exception);
        } else {
            throw new TableNotFoundException(tableName, exception);
        }
    } catch (MetaException | InvalidObjectException exception) {
        throw new InvalidMetaException("One or more partitions are invalid.", exception);
    } catch (AlreadyExistsException e) {
        final List<String> ids = getFakePartitionName(hivePartitions);
        throw new PartitionAlreadyExistsException(tableName, ids, e);
    } catch (TException exception) {
        throw new ConnectorException(String.format("Failed savePartitions hive table %s", tableName), exception);
    }
}
Also used: MetaException(org.apache.hadoop.hive.metastore.api.MetaException) SortOrder(com.netflix.metacat.common.dto.SortOrder) HashMap(java.util.HashMap) SerDeInfo(org.apache.hadoop.hive.metastore.api.SerDeInfo) Partition(org.apache.hadoop.hive.metastore.api.Partition) Function(java.util.function.Function) Warehouse(org.apache.hadoop.hive.metastore.Warehouse) ArrayList(java.util.ArrayList) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) Inject(javax.inject.Inject) LinkedHashMap(java.util.LinkedHashMap) Strings(com.google.common.base.Strings) ConnectorPartitionService(com.netflix.metacat.common.server.connectors.ConnectorPartitionService) InvalidMetaException(com.netflix.metacat.common.server.connectors.exception.InvalidMetaException) Lists(com.google.common.collect.Lists) ConnectorException(com.netflix.metacat.common.server.connectors.exception.ConnectorException) PartitionInfo(com.netflix.metacat.common.server.connectors.model.PartitionInfo) Map(java.util.Map) ConnectorContext(com.netflix.metacat.common.server.connectors.ConnectorContext) Named(javax.inject.Named) HiveConnectorInfoConverter(com.netflix.metacat.connector.hive.converters.HiveConnectorInfoConverter) PartitionUtil(com.netflix.metacat.common.server.partition.util.PartitionUtil) StorageDescriptor(org.apache.hadoop.hive.metastore.api.StorageDescriptor) Nonnull(javax.annotation.Nonnull) Nullable(javax.annotation.Nullable) NonNull(lombok.NonNull) Pageable(com.netflix.metacat.common.dto.Pageable) TException(org.apache.thrift.TException) Set(java.util.Set) QualifiedName(com.netflix.metacat.common.QualifiedName) InvalidObjectException(org.apache.hadoop.hive.metastore.api.InvalidObjectException) TableNotFoundException(com.netflix.metacat.common.server.connectors.exception.TableNotFoundException) Collectors(java.util.stream.Collectors) Sets(com.google.common.collect.Sets) Table(org.apache.hadoop.hive.metastore.api.Table) PartitionsSaveResponse(com.netflix.metacat.common.server.connectors.model.PartitionsSaveResponse) FieldSchema(org.apache.hadoop.hive.metastore.api.FieldSchema) List(java.util.List) TableInfo(com.netflix.metacat.common.server.connectors.model.TableInfo) PartitionAlreadyExistsException(com.netflix.metacat.common.server.connectors.exception.PartitionAlreadyExistsException) PartitionsSaveRequest(com.netflix.metacat.common.server.connectors.model.PartitionsSaveRequest) PartitionListRequest(com.netflix.metacat.common.server.connectors.model.PartitionListRequest) ConnectorUtils(com.netflix.metacat.common.server.connectors.ConnectorUtils) PartitionNotFoundException(com.netflix.metacat.common.server.connectors.exception.PartitionNotFoundException) Collections(java.util.Collections) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) Sort(com.netflix.metacat.common.dto.Sort)
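
To see how the checkIfExists/alterIfExists flags above are driven, here is a hedged caller sketch; partitionService, requestContext, and partitionInfos are assumed to be wired up elsewhere and do not appear in the indexed source:

// Hedged sketch: names are hypothetical; the setters assume the usual
// metacat model accessors implied by the getters used in the method above.
QualifiedName tableName = QualifiedName.ofTable("hive", "default", "example_tbl");

PartitionsSaveRequest request = new PartitionsSaveRequest();
request.setPartitions(partitionInfos); // List<PartitionInfo> prepared elsewhere
request.setCheckIfExists(true);        // pre-fetch existing partitions by name
request.setAlterIfExists(true);        // update matches instead of scheduling deletes

PartitionsSaveResponse response =
    partitionService.savePartitions(requestContext, tableName, request);
// response.getAdded() and response.getUpdated() report the partition ids touched.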

Example 15 with AlreadyExistsException

Use of org.apache.hadoop.hive.metastore.api.AlreadyExistsException in project hive by apache.

The class Hive, method createTable.

/**
 * Creates the table with the given objects. It takes additional arguments for
 * primary keys and foreign keys associated with the table.
 *
 * @param tbl
 *          a table object
 * @param ifNotExists
 *          if true, ignore AlreadyExistsException
 * @param primaryKeys
 *          primary key columns associated with the table
 * @param foreignKeys
 *          foreign key columns associated with the table
 * @param uniqueConstraints
 *          UNIQUE constraints associated with the table
 * @param notNullConstraints
 *          NOT NULL constraints associated with the table
 * @param defaultConstraints
 *          DEFAULT constraints associated with the table
 * @param checkConstraints
 *          CHECK constraints associated with the table
 * @throws HiveException
 */
public void createTable(Table tbl, boolean ifNotExists, List<SQLPrimaryKey> primaryKeys, List<SQLForeignKey> foreignKeys, List<SQLUniqueConstraint> uniqueConstraints, List<SQLNotNullConstraint> notNullConstraints, List<SQLDefaultConstraint> defaultConstraints, List<SQLCheckConstraint> checkConstraints) throws HiveException {
    try {
        if (tbl.getDbName() == null || "".equals(tbl.getDbName().trim())) {
            tbl.setDbName(SessionState.get().getCurrentDatabase());
        }
        if (tbl.getCols().size() == 0 || tbl.getSd().getColsSize() == 0) {
            tbl.setFields(HiveMetaStoreUtils.getFieldsFromDeserializer(tbl.getTableName(), tbl.getDeserializer()));
        }
        tbl.checkValidity(conf);
        if (tbl.getParameters() != null) {
            tbl.getParameters().remove(hive_metastoreConstants.DDL_TIME);
        }
        org.apache.hadoop.hive.metastore.api.Table tTbl = tbl.getTTable();
        PrincipalPrivilegeSet principalPrivs = new PrincipalPrivilegeSet();
        SessionState ss = SessionState.get();
        if (ss != null) {
            CreateTableAutomaticGrant grants = ss.getCreateTableGrants();
            if (grants != null) {
                principalPrivs.setUserPrivileges(grants.getUserGrants());
                principalPrivs.setGroupPrivileges(grants.getGroupGrants());
                principalPrivs.setRolePrivileges(grants.getRoleGrants());
                tTbl.setPrivileges(principalPrivs);
            }
        }
        if (primaryKeys == null && foreignKeys == null && uniqueConstraints == null && notNullConstraints == null && defaultConstraints == null && checkConstraints == null) {
            getMSC().createTable(tTbl);
        } else {
            getMSC().createTableWithConstraints(tTbl, primaryKeys, foreignKeys, uniqueConstraints, notNullConstraints, defaultConstraints, checkConstraints);
        }
    } catch (AlreadyExistsException e) {
        if (!ifNotExists) {
            throw new HiveException(e);
        }
    } catch (Exception e) {
        throw new HiveException(e);
    }
}
Also used: SessionState(org.apache.hadoop.hive.ql.session.SessionState) AlreadyExistsException(org.apache.hadoop.hive.metastore.api.AlreadyExistsException) PrincipalPrivilegeSet(org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet) CreateTableAutomaticGrant(org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant) InvalidOperationException(org.apache.hadoop.hive.metastore.api.InvalidOperationException) TException(org.apache.thrift.TException) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException) SerDeException(org.apache.hadoop.hive.serde2.SerDeException) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException) HiveMetaException(org.apache.hadoop.hive.metastore.HiveMetaException) FileNotFoundException(java.io.FileNotFoundException) JDODataStoreException(javax.jdo.JDODataStoreException)
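
A hedged sketch of the constraint-bearing overload, reusing the db and tbl handles from the sketch under Example 11; the key name and column are hypothetical:

// Hedged sketch: a single-column primary key routes the call to
// createTableWithConstraints instead of plain createTable.
SQLPrimaryKey pk = new SQLPrimaryKey(
    "default", "example_tbl", "id", // db, table, column (hypothetical)
    1,                              // key sequence
    "pk_example",                   // constraint name (hypothetical)
    false, false, false);           // enable, validate, rely

db.createTable(tbl, true,
    Arrays.asList(pk),       // primaryKeys
    null,                    // foreignKeys
    null, null, null, null); // unique, not-null, default, check constraints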

Aggregations

AlreadyExistsException (org.apache.hadoop.hive.metastore.api.AlreadyExistsException): 30 uses
MetaException (org.apache.hadoop.hive.metastore.api.MetaException): 24 uses
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException): 24 uses
TException (org.apache.thrift.TException): 23 uses
IOException (java.io.IOException): 16 uses
InvalidObjectException (org.apache.hadoop.hive.metastore.api.InvalidObjectException): 16 uses
InvalidOperationException (org.apache.hadoop.hive.metastore.api.InvalidOperationException): 13 uses
Table (org.apache.hadoop.hive.metastore.api.Table): 12 uses
ArrayList (java.util.ArrayList): 9 uses
JDODataStoreException (javax.jdo.JDODataStoreException): 9 uses
Partition (org.apache.hadoop.hive.metastore.api.Partition): 8 uses
InvalidInputException (org.apache.hadoop.hive.metastore.api.InvalidInputException): 7 uses
ExecutionException (java.util.concurrent.ExecutionException): 6 uses
FieldSchema (org.apache.hadoop.hive.metastore.api.FieldSchema): 6 uses
StorageDescriptor (org.apache.hadoop.hive.metastore.api.StorageDescriptor): 6 uses
QualifiedName (com.netflix.metacat.common.QualifiedName): 5 uses
ConnectorException (com.netflix.metacat.common.server.connectors.exception.ConnectorException): 5 uses
InvalidMetaException (com.netflix.metacat.common.server.connectors.exception.InvalidMetaException): 5 uses
List (java.util.List): 5 uses
SerDeInfo (org.apache.hadoop.hive.metastore.api.SerDeInfo): 5 uses