
Example 1 with RequestPartsSpec

Use of org.apache.hadoop.hive.metastore.api.RequestPartsSpec in the metacat project by Netflix.

The class CatalogThriftHiveMetastore implements the method drop_partitions_req.

/**
 * {@inheritDoc}
 */
@Override
public DropPartitionsResult drop_partitions_req(final DropPartitionsRequest request) throws TException {
    return requestWrapper("drop_partitions_req", new Object[] { request }, () -> {
        final String databaseName = request.getDbName();
        final String tableName = request.getTblName();
        final boolean ifExists = request.isSetIfExists() && request.isIfExists();
        final boolean needResult = !request.isSetNeedResult() || request.isNeedResult();
        final List<Partition> parts = Lists.newArrayList();
        final List<String> partNames = Lists.newArrayList();
        int minCount = 0;
        final RequestPartsSpec spec = request.getParts();
        if (spec.isSetExprs()) {
            final Table table = get_table(databaseName, tableName);
            // Dropping by expressions.
            for (DropPartitionsExpr expr : spec.getExprs()) {
                // Expect at least one matching partition per expression unless ifExists is set.
                ++minCount;
                final PartitionsByExprResult partitionsByExprResult = get_partitions_by_expr(new PartitionsByExprRequest(databaseName, tableName, expr.bufferForExpr()));
                if (partitionsByExprResult.isHasUnknownPartitions()) {
                    // The expr is built by DDLSemanticAnalyzer, so it should only contain partition columns and simple operators.
                    throw new MetaException("Unexpected unknown partitions to drop");
                }
                parts.addAll(partitionsByExprResult.getPartitions());
            }
            final List<String> colNames = new ArrayList<>(table.getPartitionKeys().size());
            for (FieldSchema col : table.getPartitionKeys()) {
                colNames.add(col.getName());
            }
            if (!colNames.isEmpty()) {
                parts.forEach(partition -> partNames.add(FileUtils.makePartName(colNames, partition.getValues())));
            }
        } else if (spec.isSetNames()) {
            partNames.addAll(spec.getNames());
            minCount = partNames.size();
            parts.addAll(get_partitions_by_names(databaseName, tableName, partNames));
        } else {
            throw new MetaException("Partition spec is not set");
        }
        if ((parts.size() < minCount) && !ifExists) {
            throw new NoSuchObjectException("Some partitions to drop are missing");
        }
        partV1.deletePartitions(catalogName, databaseName, tableName, partNames);
        final DropPartitionsResult result = new DropPartitionsResult();
        if (needResult) {
            result.setPartitions(parts);
        }
        return result;
    });
}
Also used : Partition(org.apache.hadoop.hive.metastore.api.Partition) DropPartitionsExpr(org.apache.hadoop.hive.metastore.api.DropPartitionsExpr) Table(org.apache.hadoop.hive.metastore.api.Table) PartitionsByExprResult(org.apache.hadoop.hive.metastore.api.PartitionsByExprResult) DropPartitionsResult(org.apache.hadoop.hive.metastore.api.DropPartitionsResult) PartitionsByExprRequest(org.apache.hadoop.hive.metastore.api.PartitionsByExprRequest) FieldSchema(org.apache.hadoop.hive.metastore.api.FieldSchema) ArrayList(java.util.ArrayList) RequestPartsSpec(org.apache.hadoop.hive.metastore.api.RequestPartsSpec) NoSuchObjectException(org.apache.hadoop.hive.metastore.api.NoSuchObjectException) MetaException(org.apache.hadoop.hive.metastore.api.MetaException)
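
For context, here is a minimal client-side sketch (not taken from the metacat source) of building the expression-based request that the isSetExprs() branch above consumes. The class name DropByExprSketch, the buildRequest helper, and the exprBytes parameter are illustrative assumptions; producing the serialized partition filter bytes is left to the Hive client and is outside the scope of the sketch.

import java.nio.ByteBuffer;
import java.util.List;

import org.apache.hadoop.hive.metastore.api.DropPartitionsExpr;
import org.apache.hadoop.hive.metastore.api.DropPartitionsRequest;
import org.apache.hadoop.hive.metastore.api.RequestPartsSpec;

import com.google.common.collect.Lists;

final class DropByExprSketch {

    // Builds a DropPartitionsRequest whose RequestPartsSpec union carries the EXPRS field.
    // exprBytes is assumed to be a partition filter expression already serialized by the Hive client.
    static DropPartitionsRequest buildRequest(final String dbName, final String tableName,
                                              final byte[] exprBytes) {
        final DropPartitionsExpr expr = new DropPartitionsExpr();
        expr.setExpr(ByteBuffer.wrap(exprBytes));

        final List<DropPartitionsExpr> exprs = Lists.newArrayList(expr);
        final RequestPartsSpec spec = new RequestPartsSpec(RequestPartsSpec._Fields.EXPRS, exprs);

        final DropPartitionsRequest request = new DropPartitionsRequest(dbName, tableName, spec);
        // Tolerate expressions that match no partitions and skip building the result list.
        request.setIfExists(true);
        request.setNeedResult(false);
        return request;
    }
}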

Example 2 with RequestPartsSpec

Use of org.apache.hadoop.hive.metastore.api.RequestPartsSpec in the metacat project by Netflix.

The class MetacatHiveClient defines the method dropHivePartitions.

private void dropHivePartitions(final HiveMetastoreClient client, final String dbName, final String tableName, final List<String> partitionNames) throws TException {
    if (partitionNames != null && !partitionNames.isEmpty()) {
        final DropPartitionsRequest request = new DropPartitionsRequest(dbName, tableName, new RequestPartsSpec(RequestPartsSpec._Fields.NAMES, partitionNames));
        request.setDeleteData(false);
        client.drop_partitions_req(request);
    }
}
Also used : RequestPartsSpec(org.apache.hadoop.hive.metastore.api.RequestPartsSpec) DropPartitionsRequest(org.apache.hadoop.hive.metastore.api.DropPartitionsRequest)
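
As a companion to the snippet above, here is a hypothetical caller-side sketch (not part of MetacatHiveClient) illustrating the partition name format the NAMES branch expects: one key=value segment per partition column, joined by '/'. The class name DropByNamesSketch and the dateint/hour partition values are illustrative assumptions.

import java.util.List;

import org.apache.hadoop.hive.metastore.api.DropPartitionsRequest;
import org.apache.hadoop.hive.metastore.api.RequestPartsSpec;

import com.google.common.collect.Lists;

final class DropByNamesSketch {

    // Builds a DropPartitionsRequest whose RequestPartsSpec union carries the NAMES field.
    static DropPartitionsRequest buildRequest(final String dbName, final String tableName) {
        // Partition names use the key=value form, one segment per partition column,
        // e.g. for a table partitioned by (dateint, hour). The values are illustrative.
        final List<String> partitionNames = Lists.newArrayList(
                "dateint=20240101/hour=00",
                "dateint=20240101/hour=01");

        final RequestPartsSpec spec =
                new RequestPartsSpec(RequestPartsSpec._Fields.NAMES, partitionNames);
        final DropPartitionsRequest request = new DropPartitionsRequest(dbName, tableName, spec);
        // Keep the underlying data files, as dropHivePartitions above does.
        request.setDeleteData(false);
        return request;
    }
}

The resulting request can then be submitted through drop_partitions_req, as dropHivePartitions does above.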

Aggregations

RequestPartsSpec (org.apache.hadoop.hive.metastore.api.RequestPartsSpec)2 ArrayList (java.util.ArrayList)1 DropPartitionsExpr (org.apache.hadoop.hive.metastore.api.DropPartitionsExpr)1 DropPartitionsRequest (org.apache.hadoop.hive.metastore.api.DropPartitionsRequest)1 DropPartitionsResult (org.apache.hadoop.hive.metastore.api.DropPartitionsResult)1 FieldSchema (org.apache.hadoop.hive.metastore.api.FieldSchema)1 MetaException (org.apache.hadoop.hive.metastore.api.MetaException)1 NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException)1 Partition (org.apache.hadoop.hive.metastore.api.Partition)1 PartitionsByExprRequest (org.apache.hadoop.hive.metastore.api.PartitionsByExprRequest)1 PartitionsByExprResult (org.apache.hadoop.hive.metastore.api.PartitionsByExprResult)1 Table (org.apache.hadoop.hive.metastore.api.Table)1