
Example 11 with EncryptionKey

use of com.amazonaws.athena.connector.lambda.security.EncryptionKey in project aws-athena-query-federation by awslabs.

From the class ExampleMetadataHandler, the method doGetSplits:

/**
 * For each partition we generate a pre-determined number of splits based on the NUM_PARTS_PER_SPLIT setting. This
 * method also demonstrates how to handle calls for batches of partitions and how to leverage this API's ability
 * to paginate. A connector for a real data source would likely query that source's metadata to determine if/how
 * to split up the read operations for a particular partition.
 *
 * @param allocator Tool for creating and managing Apache Arrow Blocks.
 * @param request Provides details of the catalog, database, table, and partition(s) being queried as well as
 * any filter predicate.
 * @return A GetSplitsResponse which contains a list of splits as well as an optional continuation token if we were not
 * able to generate all splits for the partitions in this batch.
 */
@Override
public GetSplitsResponse doGetSplits(BlockAllocator allocator, GetSplitsRequest request) {
    logCaller(request);
    logger.info("doGetSplits: spill location " + makeSpillLocation(request));
    /**
     * It is important to try to throw any throttling events before writing data since Athena may not be able to
     * continue the query, due to consistency errors, if you throttle after writing data.
     */
    if (simulateThrottle > 0 && count++ % simulateThrottle == 0) {
        logger.info("readWithConstraint: throwing throttle Exception!");
        throw new FederationThrottleException("Please slow down for this simulated throttling event");
    }
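    // Decode the continuation token (if present) so we can resume generating splits from the
    // partition and part where the previous response left off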
    ContinuationToken requestToken = ContinuationToken.decode(request.getContinuationToken());
    int partitionContd = requestToken.getPartition();
    int partContd = requestToken.getPart();
    Set<Split> splits = new HashSet<>();
    Block partitions = request.getPartitions();
    for (int curPartition = partitionContd; curPartition < partitions.getRowCount(); curPartition++) {
        // We use the makeEncryptionKey() method from our parent class to make an EncryptionKey
        EncryptionKey encryptionKey = makeEncryptionKey();
        // We prepare to read our custom metadata fields from the partition so that we can pass this info to the split(s)
        FieldReader locationReader = partitions.getFieldReader(SplitProperties.LOCATION.getId());
        locationReader.setPosition(curPartition);
        FieldReader storageClassReader = partitions.getFieldReader(SplitProperties.SERDE.getId());
        storageClassReader.setPosition(curPartition);
        // For each partition we create NUM_PARTS_PER_SPLIT splits, each representing a part of that
        // partition's table scan operations (aka splits)
        for (int curPart = partContd; curPart < NUM_PARTS_PER_SPLIT; curPart++) {
            if (splits.size() >= MAX_SPLITS_PER_REQUEST) {
                // We've hit the max number of splits we want to return in a single response, so return
                // what we have along with a continuation token for the remaining splits.
                return new GetSplitsResponse(request.getCatalogName(), splits, ContinuationToken.encode(curPartition, curPart));
            }
            // We use makeSpillLocation(...) from our parent class to get a unique SpillLocation for each split
            Split.Builder splitBuilder = Split.newBuilder(makeSpillLocation(request), encryptionEnabled ? encryptionKey : null)
                    .add(SplitProperties.LOCATION.getId(), String.valueOf(locationReader.readText()))
                    .add(SplitProperties.SERDE.getId(), String.valueOf(storageClassReader.readText()))
                    .add(SplitProperties.SPLIT_PART.getId(), String.valueOf(curPart));
            // We also encode the values of the partition columns onto the split. How a real connector
            // encodes these will likely vary. Our example only supports a limited number of partition column types.
            for (String next : request.getPartitionCols()) {
                FieldReader reader = partitions.getFieldReader(next);
                reader.setPosition(curPartition);
                switch(reader.getMinorType()) {
                    case UINT2:
                        splitBuilder.add(next, Integer.valueOf(reader.readCharacter()).toString());
                        break;
                    case UINT4:
                    case INT:
                        splitBuilder.add(next, String.valueOf(reader.readInteger()));
                        break;
                    case UINT8:
                    case BIGINT:
                        splitBuilder.add(next, String.valueOf(reader.readLong()));
                        break;
                    default:
                        throw new RuntimeException("Unsupported partition column type. " + reader.getMinorType());
                }
            }
            splits.add(splitBuilder.build());
        }
        // part continuation only applies within a partition so we complete that partial partition and move on
        // to the next one.
        partContd = 0;
    }
    return new GetSplitsResponse(request.getCatalogName(), splits, null);
}
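
Because the response carries the continuation token, a caller (for example, a unit test) can page through all splits by looping until the token comes back null. The sketch below assumes a hypothetical makeGetSplitsRequest(String token) helper that builds a GetSplitsRequest for the table and partitions under test; the handler, allocator, and response types are the real classes used above.

import com.amazonaws.athena.connector.lambda.data.BlockAllocator;
import com.amazonaws.athena.connector.lambda.data.BlockAllocatorImpl;
import com.amazonaws.athena.connector.lambda.domain.Split;
import com.amazonaws.athena.connector.lambda.metadata.GetSplitsRequest;
import com.amazonaws.athena.connector.lambda.metadata.GetSplitsResponse;
import java.util.HashSet;
import java.util.Set;

// Collect every split for a query by following continuation tokens until the
// handler signals completion with a null token.
static Set<Split> fetchAllSplits(ExampleMetadataHandler handler) throws Exception {
    Set<Split> allSplits = new HashSet<>();
    try (BlockAllocator allocator = new BlockAllocatorImpl()) {
        String token = null;
        do {
            // Hypothetical helper: builds a GetSplitsRequest carrying the token
            // from the previous response (null on the first call).
            GetSplitsRequest request = makeGetSplitsRequest(token);
            GetSplitsResponse response = handler.doGetSplits(allocator, request);
            allSplits.addAll(response.getSplits());
            token = response.getContinuationToken();
        } while (token != null);
    }
    return allSplits;
}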
Also used : EncryptionKey (com.amazonaws.athena.connector.lambda.security.EncryptionKey), GetSplitsResponse (com.amazonaws.athena.connector.lambda.metadata.GetSplitsResponse), Block (com.amazonaws.athena.connector.lambda.data.Block), FederationThrottleException (com.amazonaws.athena.connector.lambda.exceptions.FederationThrottleException), Split (com.amazonaws.athena.connector.lambda.domain.Split), FieldReader (org.apache.arrow.vector.complex.reader.FieldReader), HashSet (java.util.HashSet)

Aggregations

EncryptionKey (com.amazonaws.athena.connector.lambda.security.EncryptionKey): 11
S3SpillLocation (com.amazonaws.athena.connector.lambda.domain.spill.S3SpillLocation): 6
SpillLocation (com.amazonaws.athena.connector.lambda.domain.spill.SpillLocation): 6
Split (com.amazonaws.athena.connector.lambda.domain.Split): 5
GetSplitsResponse (com.amazonaws.athena.connector.lambda.metadata.GetSplitsResponse): 4
Block (com.amazonaws.athena.connector.lambda.data.Block): 3
TableName (com.amazonaws.athena.connector.lambda.domain.TableName): 3
AllOrNoneValueSet (com.amazonaws.athena.connector.lambda.domain.predicate.AllOrNoneValueSet): 3
Constraints (com.amazonaws.athena.connector.lambda.domain.predicate.Constraints): 3
EquatableValueSet (com.amazonaws.athena.connector.lambda.domain.predicate.EquatableValueSet): 3
ValueSet (com.amazonaws.athena.connector.lambda.domain.predicate.ValueSet): 3
ReadRecordsRequest (com.amazonaws.athena.connector.lambda.records.ReadRecordsRequest): 3
RemoteReadRecordsResponse (com.amazonaws.athena.connector.lambda.records.RemoteReadRecordsResponse): 3
HashMap (java.util.HashMap): 3
Before (org.junit.Before): 3
RecordResponse (com.amazonaws.athena.connector.lambda.records.RecordResponse): 2
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 2
ByteArrayInputStream (java.io.ByteArrayInputStream): 2
HashSet (java.util.HashSet): 2
ArrowType (org.apache.arrow.vector.types.pojo.ArrowType): 2