
Example 26 with Constraint

Use of io.trino.spi.connector.Constraint in project trino by trinodb.

The class InformationSchemaMetadata, method calculatePrefixesWithSchemaName.

private Set<QualifiedTablePrefix> calculatePrefixesWithSchemaName(
        ConnectorSession connectorSession,
        TupleDomain<ColumnHandle> constraint,
        Optional<Predicate<Map<ColumnHandle, NullableValue>>> predicate) {
    Optional<Set<String>> schemas = filterString(constraint, SCHEMA_COLUMN_HANDLE);
    if (schemas.isPresent()) {
        return schemas.get().stream()
                .filter(this::isLowerCase)
                .filter(schema -> predicate.isEmpty() || predicate.get().test(schemaAsFixedValues(schema)))
                .map(schema -> new QualifiedTablePrefix(catalogName, schema))
                .collect(toImmutableSet());
    }
    if (predicate.isEmpty()) {
        return ImmutableSet.of(new QualifiedTablePrefix(catalogName));
    }
    Session session = ((FullConnectorSession) connectorSession).getSession();
    return listSchemaNames(session)
            .filter(prefix -> predicate.get().test(schemaAsFixedValues(prefix.getSchemaName().get())))
            .collect(toImmutableSet());
}
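The helper filterString is not shown on this page. Judging from the imports listed below (Slice, Range, EquatableValueSet, SortedRangeSet), it most likely extracts the discrete set of string values a TupleDomain permits for a column, returning Optional.empty() when the domain cannot be reduced to a finite set. A plausible sketch, not necessarily the exact Trino implementation:

// Hypothetical sketch of filterString: returns the set of allowed string values
// for a column, or Optional.empty() if the domain is not a finite set of values.
private static Optional<Set<String>> filterString(TupleDomain<ColumnHandle> constraint, ColumnHandle column)
{
    if (constraint.isNone()) {
        // No row can match, so there is nothing to enumerate
        return Optional.of(ImmutableSet.of());
    }
    Domain domain = constraint.getDomains().get().get(column);
    if (domain == null) {
        // Column is unconstrained
        return Optional.empty();
    }
    if (domain.isSingleValue()) {
        return Optional.of(ImmutableSet.of(((Slice) domain.getSingleValue()).toStringUtf8()));
    }
    if (domain.getValues() instanceof EquatableValueSet) {
        // Discrete IN-list of values
        return Optional.of(((EquatableValueSet) domain.getValues()).getValues().stream()
                .map(Slice.class::cast)
                .map(Slice::toStringUtf8)
                .collect(toImmutableSet()));
    }
    if (domain.getValues() instanceof SortedRangeSet) {
        // Ranges are usable only if each one pins down a single value
        ImmutableSet.Builder<String> result = ImmutableSet.builder();
        for (Range range : domain.getValues().getRanges().getOrderedRanges()) {
            if (!range.isSingleValue()) {
                return Optional.empty();
            }
            result.add(((Slice) range.getSingleValue()).toStringUtf8());
        }
        return Optional.of(result.build());
    }
    return Optional.empty();
}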
Also used : Arrays(java.util.Arrays) VIEWS(io.trino.connector.informationschema.InformationSchemaTable.VIEWS) TABLES(io.trino.connector.informationschema.InformationSchemaTable.TABLES) Preconditions.checkArgument(com.google.common.base.Preconditions.checkArgument) ConnectorTableHandle(io.trino.spi.connector.ConnectorTableHandle) Map(java.util.Map) Slices.utf8Slice(io.airlift.slice.Slices.utf8Slice) ENGLISH(java.util.Locale.ENGLISH) MetadataUtil.findColumnMetadata(io.trino.metadata.MetadataUtil.findColumnMetadata) TABLE_REDIRECTION_ERROR(io.trino.spi.StandardErrorCode.TABLE_REDIRECTION_ERROR) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) Range(io.trino.spi.predicate.Range) ROLE_AUTHORIZATION_DESCRIPTORS(io.trino.connector.informationschema.InformationSchemaTable.ROLE_AUTHORIZATION_DESCRIPTORS) Predicate(java.util.function.Predicate) Collections.emptyList(java.util.Collections.emptyList) Domain(io.trino.spi.predicate.Domain) EquatableValueSet(io.trino.spi.predicate.EquatableValueSet) Collection(java.util.Collection) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) Set(java.util.Set) TrinoException(io.trino.spi.TrinoException) LimitApplicationResult(io.trino.spi.connector.LimitApplicationResult) SchemaTableName(io.trino.spi.connector.SchemaTableName) List(java.util.List) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) Stream(java.util.stream.Stream) SchemaTablePrefix(io.trino.spi.connector.SchemaTablePrefix) Function.identity(java.util.function.Function.identity) Optional(java.util.Optional) INFORMATION_SCHEMA(io.trino.connector.informationschema.InformationSchemaTable.INFORMATION_SCHEMA) ConnectorMetadata(io.trino.spi.connector.ConnectorMetadata) Session(io.trino.Session) Constraint(io.trino.spi.connector.Constraint) Slice(io.airlift.slice.Slice) FullConnectorSession(io.trino.FullConnectorSession) NullableValue(io.trino.spi.predicate.NullableValue) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) TABLE_PRIVILEGES(io.trino.connector.informationschema.InformationSchemaTable.TABLE_PRIVILEGES) VarcharType.createUnboundedVarcharType(io.trino.spi.type.VarcharType.createUnboundedVarcharType) ConnectorTableMetadata(io.trino.spi.connector.ConnectorTableMetadata) OptionalLong(java.util.OptionalLong) ImmutableList(com.google.common.collect.ImmutableList) Verify.verify(com.google.common.base.Verify.verify) Objects.requireNonNull(java.util.Objects.requireNonNull) ColumnHandle(io.trino.spi.connector.ColumnHandle) ImmutableSet.toImmutableSet(com.google.common.collect.ImmutableSet.toImmutableSet) COLUMNS(io.trino.connector.informationschema.InformationSchemaTable.COLUMNS) QualifiedTablePrefix(io.trino.metadata.QualifiedTablePrefix) ConstraintApplicationResult(io.trino.spi.connector.ConstraintApplicationResult) ConnectorSession(io.trino.spi.connector.ConnectorSession) TupleDomain(io.trino.spi.predicate.TupleDomain) ConnectorTableProperties(io.trino.spi.connector.ConnectorTableProperties) QualifiedObjectName(io.trino.metadata.QualifiedObjectName) Metadata(io.trino.metadata.Metadata) SortedRangeSet(io.trino.spi.predicate.SortedRangeSet)

Example 27 with Constraint

Use of io.trino.spi.connector.Constraint in project trino by trinodb.

The class InformationSchemaMetadata, method calculatePrefixesWithTableName.

private Set<QualifiedTablePrefix> calculatePrefixesWithTableName(
        InformationSchemaTable informationSchemaTable,
        ConnectorSession connectorSession,
        Set<QualifiedTablePrefix> prefixes,
        TupleDomain<ColumnHandle> constraint,
        Optional<Predicate<Map<ColumnHandle, NullableValue>>> predicate) {
    Session session = ((FullConnectorSession) connectorSession).getSession();
    Optional<Set<String>> tables = filterString(constraint, TABLE_NAME_COLUMN_HANDLE);
    if (tables.isPresent()) {
        return prefixes.stream()
                .peek(prefix -> verify(prefix.asQualifiedObjectName().isEmpty()))
                .flatMap(prefix -> prefix.getSchemaName()
                        .map(schemaName -> Stream.of(prefix))
                        .orElseGet(() -> listSchemaNames(session)))
                .flatMap(prefix -> tables.get().stream()
                        .filter(this::isLowerCase)
                        .map(table -> new QualifiedObjectName(catalogName, prefix.getSchemaName().get(), table)))
                .filter(objectName -> {
                    if (!isColumnsEnumeratingTable(informationSchemaTable)
                            || metadata.isMaterializedView(session, objectName)
                            || metadata.isView(session, objectName)) {
                        return true;
                    }
                    // This is a columns-enumerating table and the object is not a view;
                    // filter it out if the source table does not exist or redirection fails.
                    try {
                        return metadata.getRedirectionAwareTableHandle(session, objectName).getTableHandle().isPresent();
                    }
                    catch (TrinoException e) {
                        if (e.getErrorCode().equals(TABLE_REDIRECTION_ERROR.toErrorCode())) {
                            // Ignore redirection errors for listing; treat as if the table does not exist
                            return false;
                        }
                        throw e;
                    }
                })
                .filter(objectName -> predicate.isEmpty() || predicate.get().test(asFixedValues(objectName)))
                .map(QualifiedObjectName::asQualifiedTablePrefix)
                .collect(toImmutableSet());
    }
    if (predicate.isEmpty() || !isColumnsEnumeratingTable(informationSchemaTable)) {
        return prefixes;
    }
    return prefixes.stream()
            .flatMap(prefix -> Stream.concat(
                    metadata.listTables(session, prefix).stream(),
                    metadata.listViews(session, prefix).stream()))
            .filter(objectName -> predicate.get().test(asFixedValues(objectName)))
            .map(QualifiedObjectName::asQualifiedTablePrefix)
            .collect(toImmutableSet());
}
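Both of these methods feed synthetic rows into the constraint predicate through schemaAsFixedValues and asFixedValues, which this page does not show. A minimal sketch of what they might look like, assuming the column handles are constants of the class (SCHEMA_COLUMN_HANDLE and TABLE_NAME_COLUMN_HANDLE appear in the code above; CATALOG_COLUMN_HANDLE is an assumed sibling):

// Hypothetical sketch: bind a schema name to the table_schema column handle so the
// constraint predicate can be evaluated against it as if it were a result row.
private Map<ColumnHandle, NullableValue> schemaAsFixedValues(String schema)
{
    return ImmutableMap.of(SCHEMA_COLUMN_HANDLE, new NullableValue(createUnboundedVarcharType(), utf8Slice(schema)));
}

// Hypothetical sketch: bind catalog, schema, and table name for a fully qualified object
private Map<ColumnHandle, NullableValue> asFixedValues(QualifiedObjectName objectName)
{
    return ImmutableMap.of(
            CATALOG_COLUMN_HANDLE, new NullableValue(createUnboundedVarcharType(), utf8Slice(objectName.getCatalogName())),
            SCHEMA_COLUMN_HANDLE, new NullableValue(createUnboundedVarcharType(), utf8Slice(objectName.getSchemaName())),
            TABLE_NAME_COLUMN_HANDLE, new NullableValue(createUnboundedVarcharType(), utf8Slice(objectName.getObjectName())));
}

This is the mechanism that lets an information_schema query with a predicate on table_schema or table_name be pruned to a small set of prefixes instead of scanning every table in the catalog.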
Also used : Arrays(java.util.Arrays) VIEWS(io.trino.connector.informationschema.InformationSchemaTable.VIEWS) TABLES(io.trino.connector.informationschema.InformationSchemaTable.TABLES) Preconditions.checkArgument(com.google.common.base.Preconditions.checkArgument) ConnectorTableHandle(io.trino.spi.connector.ConnectorTableHandle) Map(java.util.Map) Slices.utf8Slice(io.airlift.slice.Slices.utf8Slice) ENGLISH(java.util.Locale.ENGLISH) MetadataUtil.findColumnMetadata(io.trino.metadata.MetadataUtil.findColumnMetadata) TABLE_REDIRECTION_ERROR(io.trino.spi.StandardErrorCode.TABLE_REDIRECTION_ERROR) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) Range(io.trino.spi.predicate.Range) ROLE_AUTHORIZATION_DESCRIPTORS(io.trino.connector.informationschema.InformationSchemaTable.ROLE_AUTHORIZATION_DESCRIPTORS) Predicate(java.util.function.Predicate) Collections.emptyList(java.util.Collections.emptyList) Domain(io.trino.spi.predicate.Domain) EquatableValueSet(io.trino.spi.predicate.EquatableValueSet) Collection(java.util.Collection) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) Set(java.util.Set) TrinoException(io.trino.spi.TrinoException) LimitApplicationResult(io.trino.spi.connector.LimitApplicationResult) SchemaTableName(io.trino.spi.connector.SchemaTableName) List(java.util.List) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) Stream(java.util.stream.Stream) SchemaTablePrefix(io.trino.spi.connector.SchemaTablePrefix) Function.identity(java.util.function.Function.identity) Optional(java.util.Optional) INFORMATION_SCHEMA(io.trino.connector.informationschema.InformationSchemaTable.INFORMATION_SCHEMA) ConnectorMetadata(io.trino.spi.connector.ConnectorMetadata) Session(io.trino.Session) Constraint(io.trino.spi.connector.Constraint) Slice(io.airlift.slice.Slice) FullConnectorSession(io.trino.FullConnectorSession) NullableValue(io.trino.spi.predicate.NullableValue) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) TABLE_PRIVILEGES(io.trino.connector.informationschema.InformationSchemaTable.TABLE_PRIVILEGES) VarcharType.createUnboundedVarcharType(io.trino.spi.type.VarcharType.createUnboundedVarcharType) ConnectorTableMetadata(io.trino.spi.connector.ConnectorTableMetadata) OptionalLong(java.util.OptionalLong) ImmutableList(com.google.common.collect.ImmutableList) Verify.verify(com.google.common.base.Verify.verify) Objects.requireNonNull(java.util.Objects.requireNonNull) ColumnHandle(io.trino.spi.connector.ColumnHandle) ImmutableSet.toImmutableSet(com.google.common.collect.ImmutableSet.toImmutableSet) COLUMNS(io.trino.connector.informationschema.InformationSchemaTable.COLUMNS) QualifiedTablePrefix(io.trino.metadata.QualifiedTablePrefix) ConstraintApplicationResult(io.trino.spi.connector.ConstraintApplicationResult) ConnectorSession(io.trino.spi.connector.ConnectorSession) TupleDomain(io.trino.spi.predicate.TupleDomain) ConnectorTableProperties(io.trino.spi.connector.ConnectorTableProperties) QualifiedObjectName(io.trino.metadata.QualifiedObjectName) Metadata(io.trino.metadata.Metadata) SortedRangeSet(io.trino.spi.predicate.SortedRangeSet)

Example 28 with Constraint

Use of io.trino.spi.connector.Constraint in project trino by trinodb.

The class DeltaLakeSplitManager, method getSplits.

private Stream<DeltaLakeSplit> getSplits(
        ConnectorTransactionHandle transaction,
        DeltaLakeTableHandle tableHandle,
        ConnectorSession session,
        Optional<DataSize> maxScannedFileSize,
        Set<ColumnHandle> columnsCoveredByDynamicFilter,
        Constraint constraint) {
    DeltaLakeMetastore metastore = getMetastore(session, transaction);
    String tableLocation = metastore.getTableLocation(tableHandle.getSchemaTableName(), session);
    List<AddFileEntry> validDataFiles = metastore.getValidDataFiles(tableHandle.getSchemaTableName(), session);
    TupleDomain<DeltaLakeColumnHandle> enforcedPartitionConstraint = tableHandle.getEnforcedPartitionConstraint();
    TupleDomain<DeltaLakeColumnHandle> nonPartitionConstraint = tableHandle.getNonPartitionConstraint();
    // Delta Lake handles updates and deletes by rewriting entire data files, minus the updated/deleted rows.
    // Because of this we can have only one Split/UpdatablePageSource per file.
    boolean splittable = tableHandle.getWriteType().isEmpty();
    AtomicInteger remainingInitialSplits = new AtomicInteger(maxInitialSplits);
    Optional<Instant> filesModifiedAfter = tableHandle.getAnalyzeHandle().flatMap(AnalyzeHandle::getFilesModifiedAfter);
    Optional<Long> maxScannedFileSizeInBytes = maxScannedFileSize.map(DataSize::toBytes);
    Set<String> predicatedColumnNames = Stream.concat(
                    nonPartitionConstraint.getDomains().orElseThrow().keySet().stream(),
                    columnsCoveredByDynamicFilter.stream().map(DeltaLakeColumnHandle.class::cast))
            // TODO is DeltaLakeColumnHandle.name normalized?
            .map(column -> column.getName().toLowerCase(ENGLISH))
            .collect(toImmutableSet());
    List<ColumnMetadata> schema = extractSchema(tableHandle.getMetadataEntry(), typeManager);
    List<ColumnMetadata> predicatedColumns = schema.stream()
            // ColumnMetadata.name is lowercase
            .filter(column -> predicatedColumnNames.contains(column.getName()))
            .collect(toImmutableList());
    return validDataFiles.stream().flatMap(addAction -> {
        if (tableHandle.getAnalyzeHandle().isPresent() && !tableHandle.getAnalyzeHandle().get().isInitialAnalyze() && !addAction.isDataChange()) {
            // skip files which do not introduce data change on non-initial ANALYZE
            return Stream.empty();
        }
        if (filesModifiedAfter.isPresent() && addAction.getModificationTime() <= filesModifiedAfter.get().toEpochMilli()) {
            return Stream.empty();
        }
        if (maxScannedFileSizeInBytes.isPresent() && addAction.getSize() > maxScannedFileSizeInBytes.get()) {
            return Stream.empty();
        }
        Map<DeltaLakeColumnHandle, Domain> enforcedDomains = enforcedPartitionConstraint.getDomains().orElseThrow();
        if (!partitionMatchesPredicate(addAction.getCanonicalPartitionValues(), enforcedDomains)) {
            return Stream.empty();
        }
        TupleDomain<DeltaLakeColumnHandle> statisticsPredicate = createStatisticsPredicate(addAction, predicatedColumns, tableHandle.getMetadataEntry().getCanonicalPartitionColumns());
        if (!nonPartitionConstraint.overlaps(statisticsPredicate)) {
            return Stream.empty();
        }
        if (constraint.predicate().isPresent()) {
            Map<String, Optional<String>> partitionValues = addAction.getCanonicalPartitionValues();
            Map<ColumnHandle, NullableValue> deserializedValues = constraint.getPredicateColumns().orElseThrow().stream()
                    .filter(column -> column instanceof DeltaLakeColumnHandle)
                    .filter(column -> partitionValues.containsKey(((DeltaLakeColumnHandle) column).getName()))
                    .collect(toImmutableMap(identity(), column -> {
                        DeltaLakeColumnHandle deltaLakeColumn = (DeltaLakeColumnHandle) column;
                        return NullableValue.of(
                                deltaLakeColumn.getType(),
                                deserializePartitionValue(deltaLakeColumn, addAction.getCanonicalPartitionValues().get(deltaLakeColumn.getName())));
                    }));
            if (!constraint.predicate().get().test(deserializedValues)) {
                return Stream.empty();
            }
        }
        return splitsForFile(session, addAction, tableLocation, addAction.getCanonicalPartitionValues(), statisticsPredicate, splittable, remainingInitialSplits).stream();
    });
}
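partitionMatchesPredicate, used above to prune files against the enforced partition constraint, is defined elsewhere in DeltaLakeSplitManager. A sketch of the likely logic, reusing the deserializePartitionValue import shown below; treat the exact signature as an assumption:

// Sketch: a file survives only if every enforced partition domain accepts the
// file's (deserialized) value for that partition column.
public static boolean partitionMatchesPredicate(
        Map<String, Optional<String>> partitionKeys,
        Map<DeltaLakeColumnHandle, Domain> domains)
{
    for (Map.Entry<DeltaLakeColumnHandle, Domain> enforcedDomainsEntry : domains.entrySet()) {
        DeltaLakeColumnHandle partitionColumn = enforcedDomainsEntry.getKey();
        // Canonical partition values are stored as strings; convert to the column type first
        Object value = deserializePartitionValue(partitionColumn, partitionKeys.get(partitionColumn.getName()));
        if (!enforcedDomainsEntry.getValue().includesNullableValue(value)) {
            return false;
        }
    }
    return true;
}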
Also used : ConnectorSplitManager(io.trino.spi.connector.ConnectorSplitManager) Constraint(io.trino.spi.connector.Constraint) DeltaLakeSessionProperties.getMaxInitialSplitSize(io.trino.plugin.deltalake.DeltaLakeSessionProperties.getMaxInitialSplitSize) URLDecoder(java.net.URLDecoder) NullableValue(io.trino.spi.predicate.NullableValue) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) BiFunction(java.util.function.BiFunction) DeltaLakeSchemaSupport.extractSchema(io.trino.plugin.deltalake.transactionlog.DeltaLakeSchemaSupport.extractSchema) AddFileEntry(io.trino.plugin.deltalake.transactionlog.AddFileEntry) FixedSplitSource(io.trino.spi.connector.FixedSplitSource) Inject(javax.inject.Inject) DeltaLakeMetastore(io.trino.plugin.deltalake.metastore.DeltaLakeMetastore) DeltaLakeMetadata.createStatisticsPredicate(io.trino.plugin.deltalake.DeltaLakeMetadata.createStatisticsPredicate) ImmutableList(com.google.common.collect.ImmutableList) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ConnectorTableHandle(io.trino.spi.connector.ConnectorTableHandle) Map(java.util.Map) Objects.requireNonNull(java.util.Objects.requireNonNull) ColumnHandle(io.trino.spi.connector.ColumnHandle) ImmutableSet.toImmutableSet(com.google.common.collect.ImmutableSet.toImmutableSet) ENGLISH(java.util.Locale.ENGLISH) ExecutorService(java.util.concurrent.ExecutorService) UTF_8(java.nio.charset.StandardCharsets.UTF_8) Domain(io.trino.spi.predicate.Domain) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) Set(java.util.Set) ConnectorSplitSource(io.trino.spi.connector.ConnectorSplitSource) ConnectorSession(io.trino.spi.connector.ConnectorSession) TupleDomain(io.trino.spi.predicate.TupleDomain) Instant(java.time.Instant) DeltaLakeSessionProperties.getMaxSplitSize(io.trino.plugin.deltalake.DeltaLakeSessionProperties.getMaxSplitSize) DataSize(io.airlift.units.DataSize) List(java.util.List) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) ClassLoaderSafeConnectorSplitSource(io.trino.plugin.base.classloader.ClassLoaderSafeConnectorSplitSource) Stream(java.util.stream.Stream) DynamicFilter(io.trino.spi.connector.DynamicFilter) Function.identity(java.util.function.Function.identity) Optional(java.util.Optional) TypeManager(io.trino.spi.type.TypeManager) TransactionLogParser.deserializePartitionValue(io.trino.plugin.deltalake.transactionlog.TransactionLogParser.deserializePartitionValue) HiveTransactionHandle(io.trino.plugin.hive.HiveTransactionHandle) ConnectorTransactionHandle(io.trino.spi.connector.ConnectorTransactionHandle)

Example 29 with Constraint

Use of io.trino.spi.connector.Constraint in project trino by trinodb.

The class IcebergMetadata, method applyFilter.

@Override
public Optional<ConstraintApplicationResult<ConnectorTableHandle>> applyFilter(ConnectorSession session, ConnectorTableHandle handle, Constraint constraint) {
    IcebergTableHandle table = (IcebergTableHandle) handle;
    Table icebergTable = catalog.loadTable(session, table.getSchemaTableName());
    Set<Integer> partitionSourceIds = identityPartitionColumnsInAllSpecs(icebergTable);
    BiPredicate<IcebergColumnHandle, Domain> isIdentityPartition = (column, domain) -> partitionSourceIds.contains(column.getId());
    TupleDomain<IcebergColumnHandle> newEnforcedConstraint = constraint.getSummary()
            .transformKeys(IcebergColumnHandle.class::cast)
            .filter(isIdentityPartition)
            .intersect(table.getEnforcedPredicate());
    TupleDomain<IcebergColumnHandle> remainingConstraint = constraint.getSummary()
            .transformKeys(IcebergColumnHandle.class::cast)
            .filter(isIdentityPartition.negate());
    TupleDomain<IcebergColumnHandle> newUnenforcedConstraint = remainingConstraint
            .filter((columnHandle, predicate) -> !isStructuralType(columnHandle.getType()))
            .intersect(table.getUnenforcedPredicate());
    if (newEnforcedConstraint.equals(table.getEnforcedPredicate()) && newUnenforcedConstraint.equals(table.getUnenforcedPredicate())) {
        return Optional.empty();
    }
    return Optional.of(new ConstraintApplicationResult<>(
            new IcebergTableHandle(
                    table.getSchemaName(),
                    table.getTableName(),
                    table.getTableType(),
                    table.getSnapshotId(),
                    newUnenforcedConstraint,
                    newEnforcedConstraint,
                    table.getProjectedColumns(),
                    table.getNameMappingJson()),
            remainingConstraint.transformKeys(ColumnHandle.class::cast),
            false));
}
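identityPartitionColumnsInAllSpecs decides which columns can be enforced at all: only identity-partitioned columns present in every partition spec guarantee that partition pruning is exact across all of the table's files. The method body is not shown on this page; a sketch against the Iceberg API under that assumption:

// Sketch: collect source ids of identity partition fields that appear in every
// partition spec; only those columns can be fully enforced by partition pruning.
private static Set<Integer> identityPartitionColumnsInAllSpecs(Table table)
{
    return table.spec().fields().stream()
            .filter(field -> field.transform().isIdentity())
            .filter(field -> table.specs().values().stream()
                    .allMatch(spec -> spec.fields().contains(field)))
            .map(PartitionField::sourceId)
            .collect(toImmutableSet());
}

Everything else in constraint.getSummary() stays in newUnenforcedConstraint, so Trino still applies the filter on top of the scan; returning Optional.empty() when neither predicate changed prevents the optimizer from looping on applyFilter.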
Also used : IcebergUtil.getPartitionKeys(io.trino.plugin.iceberg.IcebergUtil.getPartitionKeys) TrinoCatalog(io.trino.plugin.iceberg.catalog.TrinoCatalog) FileSystem(org.apache.hadoop.fs.FileSystem) ConnectorTableExecuteHandle(io.trino.spi.connector.ConnectorTableExecuteHandle) HiveApplyProjectionUtil.replaceWithNewVariables(io.trino.plugin.hive.HiveApplyProjectionUtil.replaceWithNewVariables) Collections.singletonList(java.util.Collections.singletonList) NOT_SUPPORTED(io.trino.spi.StandardErrorCode.NOT_SUPPORTED) TableNotFoundException(io.trino.spi.connector.TableNotFoundException) Matcher(java.util.regex.Matcher) ConnectorOutputTableHandle(io.trino.spi.connector.ConnectorOutputTableHandle) ConnectorTableHandle(io.trino.spi.connector.ConnectorTableHandle) Map(java.util.Map) RewriteFiles(org.apache.iceberg.RewriteFiles) ProjectionApplicationResult(io.trino.spi.connector.ProjectionApplicationResult) HdfsEnvironment(io.trino.plugin.hive.HdfsEnvironment) CloseableIterable(org.apache.iceberg.io.CloseableIterable) IcebergUtil.newCreateTableTransaction(io.trino.plugin.iceberg.IcebergUtil.newCreateTableTransaction) Domain(io.trino.spi.predicate.Domain) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) IcebergTableExecuteHandle(io.trino.plugin.iceberg.procedure.IcebergTableExecuteHandle) Set(java.util.Set) Schema(org.apache.iceberg.Schema) ColumnIdentity.primitiveColumnIdentity(io.trino.plugin.iceberg.ColumnIdentity.primitiveColumnIdentity) SchemaTableName(io.trino.spi.connector.SchemaTableName) PartitionSpecParser(org.apache.iceberg.PartitionSpecParser) Collectors.joining(java.util.stream.Collectors.joining) Type(org.apache.iceberg.types.Type) UncheckedIOException(java.io.UncheckedIOException) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) TrinoPrincipal(io.trino.spi.security.TrinoPrincipal) CatalogSchemaTableName(io.trino.spi.connector.CatalogSchemaTableName) SchemaTablePrefix(io.trino.spi.connector.SchemaTablePrefix) RemoteIterator(org.apache.hadoop.fs.RemoteIterator) Iterables(com.google.common.collect.Iterables) ConnectorTableLayout(io.trino.spi.connector.ConnectorTableLayout) ConnectorInsertTableHandle(io.trino.spi.connector.ConnectorInsertTableHandle) IcebergUtil.deserializePartitionValue(io.trino.plugin.iceberg.IcebergUtil.deserializePartitionValue) Slice(io.airlift.slice.Slice) NullableValue(io.trino.spi.predicate.NullableValue) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) ConnectorTableMetadata(io.trino.spi.connector.ConnectorTableMetadata) Variable(io.trino.spi.expression.Variable) Supplier(java.util.function.Supplier) OptionalLong(java.util.OptionalLong) MaterializedViewFreshness(io.trino.spi.connector.MaterializedViewFreshness) ColumnHandle(io.trino.spi.connector.ColumnHandle) ImmutableSet.toImmutableSet(com.google.common.collect.ImmutableSet.toImmutableSet) FILE_FORMAT_PROPERTY(io.trino.plugin.iceberg.IcebergTableProperties.FILE_FORMAT_PROPERTY) OPTIMIZE(io.trino.plugin.iceberg.procedure.IcebergTableProcedureId.OPTIMIZE) ConstraintApplicationResult(io.trino.spi.connector.ConstraintApplicationResult) DEPENDS_ON_TABLES(io.trino.plugin.iceberg.catalog.hms.TrinoHiveCatalog.DEPENDS_ON_TABLES) Table(org.apache.iceberg.Table) IOException(java.io.IOException) ConnectorSession(io.trino.spi.connector.ConnectorSession) ICEBERG_INVALID_METADATA(io.trino.plugin.iceberg.IcebergErrorCode.ICEBERG_INVALID_METADATA) ConnectorTableProperties(io.trino.spi.connector.ConnectorTableProperties) 
DiscretePredicates(io.trino.spi.connector.DiscretePredicates) ConnectorExpression(io.trino.spi.expression.ConnectorExpression) PartitionFields.parsePartitionFields(io.trino.plugin.iceberg.PartitionFields.parsePartitionFields) ArrayDeque(java.util.ArrayDeque) MaterializedViewNotFoundException(io.trino.spi.connector.MaterializedViewNotFoundException) TypeConverter.toTrinoType(io.trino.plugin.iceberg.TypeConverter.toTrinoType) IcebergUtil.getColumns(io.trino.plugin.iceberg.IcebergUtil.getColumns) IcebergSessionProperties.isProjectionPushdownEnabled(io.trino.plugin.iceberg.IcebergSessionProperties.isProjectionPushdownEnabled) IcebergTableProcedureId(io.trino.plugin.iceberg.procedure.IcebergTableProcedureId) IcebergSessionProperties.isStatisticsEnabled(io.trino.plugin.iceberg.IcebergSessionProperties.isStatisticsEnabled) AppendFiles(org.apache.iceberg.AppendFiles) NO_RETRIES(io.trino.spi.connector.RetryMode.NO_RETRIES) ConnectorMaterializedViewDefinition(io.trino.spi.connector.ConnectorMaterializedViewDefinition) TypeConverter.toIcebergType(io.trino.plugin.iceberg.TypeConverter.toIcebergType) PartitionField(org.apache.iceberg.PartitionField) DataFiles(org.apache.iceberg.DataFiles) CatalogSchemaName(io.trino.spi.connector.CatalogSchemaName) LOCATION_PROPERTY(io.trino.plugin.iceberg.IcebergTableProperties.LOCATION_PROPERTY) Path(org.apache.hadoop.fs.Path) DATA(io.trino.plugin.iceberg.TableType.DATA) ConnectorViewDefinition(io.trino.spi.connector.ConnectorViewDefinition) FileScanTask(org.apache.iceberg.FileScanTask) DataFile(org.apache.iceberg.DataFile) Splitter(com.google.common.base.Splitter) IcebergTableProperties.getPartitioning(io.trino.plugin.iceberg.IcebergTableProperties.getPartitioning) IcebergUtil.getFileFormat(io.trino.plugin.iceberg.IcebergUtil.getFileFormat) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) Collection(java.util.Collection) HiveWrittenPartitions(io.trino.plugin.hive.HiveWrittenPartitions) LocatedFileStatus(org.apache.hadoop.fs.LocatedFileStatus) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) ComputedStatistics(io.trino.spi.statistics.ComputedStatistics) TrinoException(io.trino.spi.TrinoException) TableScan(org.apache.iceberg.TableScan) ConnectorOutputMetadata(io.trino.spi.connector.ConnectorOutputMetadata) String.format(java.lang.String.format) SchemaParser(org.apache.iceberg.SchemaParser) DataSize(io.airlift.units.DataSize) HdfsContext(io.trino.plugin.hive.HdfsEnvironment.HdfsContext) List(java.util.List) BIGINT(io.trino.spi.type.BigintType.BIGINT) ClassLoaderSafeSystemTable(io.trino.plugin.base.classloader.ClassLoaderSafeSystemTable) IcebergUtil.getTableComment(io.trino.plugin.iceberg.IcebergUtil.getTableComment) Assignment(io.trino.spi.connector.Assignment) HiveApplyProjectionUtil(io.trino.plugin.hive.HiveApplyProjectionUtil) BeginTableExecuteResult(io.trino.spi.connector.BeginTableExecuteResult) PartitionSpec(org.apache.iceberg.PartitionSpec) Function.identity(java.util.function.Function.identity) TableProperties(org.apache.iceberg.TableProperties) Optional(java.util.Optional) ProjectedColumnRepresentation(io.trino.plugin.hive.HiveApplyProjectionUtil.ProjectedColumnRepresentation) ConnectorMetadata(io.trino.spi.connector.ConnectorMetadata) Pattern(java.util.regex.Pattern) SystemTable(io.trino.spi.connector.SystemTable) JsonCodec(io.airlift.json.JsonCodec) Constraint(io.trino.spi.connector.Constraint) IcebergOptimizeHandle(io.trino.plugin.iceberg.procedure.IcebergOptimizeHandle) 
Logger(io.airlift.log.Logger) PartitionFields.toPartitionFields(io.trino.plugin.iceberg.PartitionFields.toPartitionFields) HashMap(java.util.HashMap) Deque(java.util.Deque) Function(java.util.function.Function) ExpressionConverter.toIcebergExpression(io.trino.plugin.iceberg.ExpressionConverter.toIcebergExpression) PARTITIONING_PROPERTY(io.trino.plugin.iceberg.IcebergTableProperties.PARTITIONING_PROPERTY) HashSet(java.util.HashSet) BiPredicate(java.util.function.BiPredicate) ImmutableList(com.google.common.collect.ImmutableList) Verify.verify(com.google.common.base.Verify.verify) Objects.requireNonNull(java.util.Objects.requireNonNull) Suppliers(com.google.common.base.Suppliers) TableStatistics(io.trino.spi.statistics.TableStatistics) HiveApplyProjectionUtil.extractSupportedProjectedColumns(io.trino.plugin.hive.HiveApplyProjectionUtil.extractSupportedProjectedColumns) VerifyException(com.google.common.base.VerifyException) RetryMode(io.trino.spi.connector.RetryMode) Iterator(java.util.Iterator) TupleDomain(io.trino.spi.predicate.TupleDomain) IcebergUtil.toIcebergSchema(io.trino.plugin.iceberg.IcebergUtil.toIcebergSchema) Transaction(org.apache.iceberg.Transaction) Comparator(java.util.Comparator) TypeManager(io.trino.spi.type.TypeManager) HiveUtil.isStructuralType(io.trino.plugin.hive.util.HiveUtil.isStructuralType) Snapshot(org.apache.iceberg.Snapshot)

Example 30 with Constraint

Use of io.trino.spi.connector.Constraint in project trino by trinodb.

The class TestInformationSchemaMetadata, method testInformationSchemaPredicatePushdownWithConstraintPredicate.

@Test
public void testInformationSchemaPredicatePushdownWithConstraintPredicate() {
    TransactionId transactionId = transactionManager.beginTransaction(false);
    Constraint constraint = new Constraint(TupleDomain.all(), TestInformationSchemaMetadata::testConstraint, testConstraintColumns());
    ConnectorSession session = createNewSession(transactionId);
    ConnectorMetadata metadata = new InformationSchemaMetadata("test_catalog", this.metadata);
    InformationSchemaTableHandle tableHandle = (InformationSchemaTableHandle) metadata.getTableHandle(session, new SchemaTableName("information_schema", "columns"));
    tableHandle = metadata.applyFilter(session, tableHandle, constraint)
            .map(ConstraintApplicationResult::getHandle)
            .map(InformationSchemaTableHandle.class::cast)
            .orElseThrow(AssertionError::new);
    assertEquals(tableHandle.getPrefixes(), ImmutableSet.of(new QualifiedTablePrefix("test_catalog", "test_schema", "test_view")));
}
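The test references two helpers this page does not show: testConstraint, the predicate over bound column values, and testConstraintColumns, the set of columns that predicate reads. Given the assertion above (only the test_schema.test_view prefix survives), a plausible sketch, assuming InformationSchemaColumnHandle is constructed from a column name:

// Hypothetical sketch of the helpers used by the test above
private static Set<ColumnHandle> testConstraintColumns()
{
    return ImmutableSet.of(
            new InformationSchemaColumnHandle("table_schema"),
            new InformationSchemaColumnHandle("table_name"));
}

private static boolean testConstraint(Map<ColumnHandle, NullableValue> bindings)
{
    // Accept only rows whose schema is test_schema and whose table is test_view
    NullableValue schema = bindings.get(new InformationSchemaColumnHandle("table_schema"));
    NullableValue table = bindings.get(new InformationSchemaColumnHandle("table_name"));
    if (schema != null && !((Slice) schema.getValue()).toStringUtf8().equals("test_schema")) {
        return false;
    }
    if (table != null && !((Slice) table.getValue()).toStringUtf8().equals("test_view")) {
        return false;
    }
    return true;
}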
Also used : InformationSchemaTableHandle(io.trino.connector.informationschema.InformationSchemaTableHandle) Constraint(io.trino.spi.connector.Constraint) ConnectorSession(io.trino.spi.connector.ConnectorSession) ConnectorMetadata(io.trino.spi.connector.ConnectorMetadata) InformationSchemaMetadata(io.trino.connector.informationschema.InformationSchemaMetadata) SchemaTableName(io.trino.spi.connector.SchemaTableName) TransactionId(io.trino.transaction.TransactionId) Test(org.testng.annotations.Test)

Aggregations

Constraint (io.trino.spi.connector.Constraint): 41 usages
ColumnHandle (io.trino.spi.connector.ColumnHandle): 32 usages
SchemaTableName (io.trino.spi.connector.SchemaTableName): 28 usages
TupleDomain (io.trino.spi.predicate.TupleDomain): 27 usages
ConnectorSession (io.trino.spi.connector.ConnectorSession): 26 usages
Domain (io.trino.spi.predicate.Domain): 23 usages
Test (org.testng.annotations.Test): 23 usages
ConnectorTableHandle (io.trino.spi.connector.ConnectorTableHandle): 20 usages
ConnectorMetadata (io.trino.spi.connector.ConnectorMetadata): 18 usages
ConstraintApplicationResult (io.trino.spi.connector.ConstraintApplicationResult): 18 usages
Map (java.util.Map): 18 usages
Objects.requireNonNull (java.util.Objects.requireNonNull): 18 usages
Optional (java.util.Optional): 18 usages
ImmutableMap (com.google.common.collect.ImmutableMap): 17 usages
List (java.util.List): 17 usages
ImmutableList.toImmutableList (com.google.common.collect.ImmutableList.toImmutableList): 16 usages
ImmutableList (com.google.common.collect.ImmutableList): 15 usages
Set (java.util.Set): 15 usages
ImmutableMap.toImmutableMap (com.google.common.collect.ImmutableMap.toImmutableMap): 14 usages
ColumnMetadata (io.trino.spi.connector.ColumnMetadata): 13 usages