Example 6 with SystemTable

Use of io.trino.spi.connector.SystemTable in project trino by trinodb.

From the class BigQueryMetadata, method getViewDefinitionSystemTable. A hedged sketch of the createSystemTable helper the snippet relies on follows the example and its import list.

private Optional<SystemTable> getViewDefinitionSystemTable(ConnectorSession session, SchemaTableName viewDefinitionTableName, SchemaTableName sourceTableName) {
    BigQueryClient client = bigQueryClientFactory.create(session);
    String projectId = getProjectId(client);
    String remoteSchemaName = client.toRemoteDataset(projectId, sourceTableName.getSchemaName()).map(RemoteDatabaseObject::getOnlyRemoteName).orElseThrow(() -> new TableNotFoundException(viewDefinitionTableName));
    String remoteTableName = client.toRemoteTable(projectId, remoteSchemaName, sourceTableName.getTableName()).map(RemoteDatabaseObject::getOnlyRemoteName).orElseThrow(() -> new TableNotFoundException(viewDefinitionTableName));
    TableInfo tableInfo = client.getTable(TableId.of(projectId, remoteSchemaName, remoteTableName)).orElseThrow(() -> new TableNotFoundException(viewDefinitionTableName));
    if (!(tableInfo.getDefinition() instanceof ViewDefinition)) {
        throw new TableNotFoundException(viewDefinitionTableName);
    }
    List<ColumnMetadata> columns = ImmutableList.of(new ColumnMetadata("query", VarcharType.VARCHAR));
    List<Type> types = columns.stream().map(ColumnMetadata::getType).collect(toImmutableList());
    Optional<String> query = Optional.ofNullable(((ViewDefinition) tableInfo.getDefinition()).getQuery());
    Iterable<List<Object>> propertyValues = ImmutableList.of(ImmutableList.of(query.orElse("NULL")));
    return Optional.of(createSystemTable(new ConnectorTableMetadata(sourceTableName, columns), constraint -> new InMemoryRecordSet(types, propertyValues).cursor()));
}
Also used : ViewDefinition(com.google.cloud.bigquery.ViewDefinition) StandardTableDefinition(com.google.cloud.bigquery.StandardTableDefinition) TableId(com.google.cloud.bigquery.TableId) SchemaNotFoundException(io.trino.spi.connector.SchemaNotFoundException) Preconditions.checkArgument(com.google.common.base.Preconditions.checkArgument) TableNotFoundException(io.trino.spi.connector.TableNotFoundException) VIEW(com.google.cloud.bigquery.TableDefinition.Type.VIEW) Schema(com.google.cloud.bigquery.Schema) ConnectorTableHandle(io.trino.spi.connector.ConnectorTableHandle) Map(java.util.Map) ProjectionApplicationResult(io.trino.spi.connector.ProjectionApplicationResult) RemoteDatabaseObject(io.trino.plugin.bigquery.BigQueryClient.RemoteDatabaseObject) ENGLISH(java.util.Locale.ENGLISH) TABLE(com.google.cloud.bigquery.TableDefinition.Type.TABLE) Field(com.google.cloud.bigquery.Field) TableDefinition(com.google.cloud.bigquery.TableDefinition) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) Set(java.util.Set) TrinoException(io.trino.spi.TrinoException) Streams(com.google.common.collect.Streams) SchemaTableName(io.trino.spi.connector.SchemaTableName) Preconditions.checkState(com.google.common.base.Preconditions.checkState) List(java.util.List) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) Stream(java.util.stream.Stream) BIGQUERY_LISTING_DATASET_ERROR(io.trino.plugin.bigquery.BigQueryErrorCode.BIGQUERY_LISTING_DATASET_ERROR) TrinoPrincipal(io.trino.spi.security.TrinoPrincipal) SchemaTablePrefix(io.trino.spi.connector.SchemaTablePrefix) Assignment(io.trino.spi.connector.Assignment) Function.identity(java.util.function.Function.identity) Optional(java.util.Optional) ConnectorMetadata(io.trino.spi.connector.ConnectorMetadata) SystemTable(io.trino.spi.connector.SystemTable) Constraint(io.trino.spi.connector.Constraint) Logger(io.airlift.log.Logger) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) Type(io.trino.spi.type.Type) BigQueryException(com.google.cloud.bigquery.BigQueryException) BigQueryType.toField(io.trino.plugin.bigquery.BigQueryType.toField) ConnectorTableMetadata(io.trino.spi.connector.ConnectorTableMetadata) DatasetId(com.google.cloud.bigquery.DatasetId) Function(java.util.function.Function) Inject(javax.inject.Inject) VarcharType(io.trino.spi.type.VarcharType) ImmutableList(com.google.common.collect.ImmutableList) Objects.requireNonNull(java.util.Objects.requireNonNull) ColumnHandle(io.trino.spi.connector.ColumnHandle) Table(com.google.cloud.bigquery.Table) ConstraintApplicationResult(io.trino.spi.connector.ConstraintApplicationResult) RecordCursor(io.trino.spi.connector.RecordCursor) ConnectorSession(io.trino.spi.connector.ConnectorSession) TupleDomain(io.trino.spi.predicate.TupleDomain) InMemoryRecordSet(io.trino.spi.connector.InMemoryRecordSet) ConnectorTableProperties(io.trino.spi.connector.ConnectorTableProperties) ConnectorExpression(io.trino.spi.expression.ConnectorExpression) DatasetInfo(com.google.cloud.bigquery.DatasetInfo) TableInfo(com.google.cloud.bigquery.TableInfo) ConnectorTransactionHandle(io.trino.spi.connector.ConnectorTransactionHandle)
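
The snippet above delegates the actual SystemTable construction to a createSystemTable helper in the BigQuery connector, which is not shown on this page. As a hedged sketch only, assuming just the getDistribution/getTableMetadata/cursor members of the SystemTable SPI and using the hypothetical name createSystemTableSketch, such a helper could look like this:

// Hedged sketch, not the connector's actual helper. Assumed imports:
// io.trino.spi.connector.{SystemTable, ConnectorTableMetadata, ConnectorSession, ConnectorTransactionHandle, RecordCursor},
// io.trino.spi.predicate.TupleDomain, java.util.function.Function.
private static SystemTable createSystemTableSketch(ConnectorTableMetadata tableMetadata, Function<TupleDomain<Integer>, RecordCursor> cursorFactory) {
    return new SystemTable() {

        @Override
        public Distribution getDistribution() {
            // a metadata-only table can be served from a single coordinator
            return Distribution.SINGLE_COORDINATOR;
        }

        @Override
        public ConnectorTableMetadata getTableMetadata() {
            return tableMetadata;
        }

        @Override
        public RecordCursor cursor(ConnectorTransactionHandle transactionHandle, ConnectorSession session, TupleDomain<Integer> constraint) {
            // the caller supplies a function from the pushed-down constraint to a cursor,
            // e.g. constraint -> new InMemoryRecordSet(types, rows).cursor()
            return cursorFactory.apply(constraint);
        }
    };
}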

Example 7 with SystemTable

Use of io.trino.spi.connector.SystemTable in project trino by trinodb.

From the class IcebergMetadata, method getRawSystemTable. A hedged sketch of the public getSystemTable wrapper that reaches this private method follows the example.

private Optional<SystemTable> getRawSystemTable(ConnectorSession session, SchemaTableName tableName) {
    IcebergTableName name = IcebergTableName.from(tableName.getTableName());
    if (name.getTableType() == DATA) {
        return Optional.empty();
    }
    // load the base table for the system table
    Table table;
    try {
        table = catalog.loadTable(session, new SchemaTableName(tableName.getSchemaName(), name.getTableName()));
    } catch (TableNotFoundException e) {
        return Optional.empty();
    }
    SchemaTableName systemTableName = new SchemaTableName(tableName.getSchemaName(), name.getTableNameWithType());
    switch(name.getTableType()) {
        case DATA:
            // Handled above.
            break;
        case HISTORY:
            if (name.getSnapshotId().isPresent()) {
                throw new TrinoException(NOT_SUPPORTED, "Snapshot ID not supported for history table: " + systemTableName);
            }
            return Optional.of(new HistoryTable(systemTableName, table));
        case SNAPSHOTS:
            if (name.getSnapshotId().isPresent()) {
                throw new TrinoException(NOT_SUPPORTED, "Snapshot ID not supported for snapshots table: " + systemTableName);
            }
            return Optional.of(new SnapshotsTable(systemTableName, typeManager, table));
        case PARTITIONS:
            return Optional.of(new PartitionTable(systemTableName, typeManager, table, getSnapshotId(table, name.getSnapshotId())));
        case MANIFESTS:
            return Optional.of(new ManifestsTable(systemTableName, table, getSnapshotId(table, name.getSnapshotId())));
        case FILES:
            return Optional.of(new FilesTable(systemTableName, typeManager, table, getSnapshotId(table, name.getSnapshotId())));
        case PROPERTIES:
            return Optional.of(new PropertiesTable(systemTableName, table));
    }
    return Optional.empty();
}
Also used : TableNotFoundException(io.trino.spi.connector.TableNotFoundException) Table(org.apache.iceberg.Table) ClassLoaderSafeSystemTable(io.trino.plugin.base.classloader.ClassLoaderSafeSystemTable) SystemTable(io.trino.spi.connector.SystemTable) TrinoException(io.trino.spi.TrinoException) SchemaTableName(io.trino.spi.connector.SchemaTableName) CatalogSchemaTableName(io.trino.spi.connector.CatalogSchemaTableName)
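
The private getRawSystemTable above is reached through the connector's public ConnectorMetadata entry point. As a hedged sketch (the exact wiring in IcebergMetadata may differ), that wrapper could simply delegate and wrap the result in the ClassLoaderSafeSystemTable listed above, so the system table always runs with the plugin's class loader:

// Hedged sketch of the public SPI entry point; not necessarily the exact method body.
@Override
public Optional<SystemTable> getSystemTable(ConnectorSession session, SchemaTableName tableName) {
    return getRawSystemTable(session, tableName)
            .map(systemTable -> new ClassLoaderSafeSystemTable(systemTable, getClass().getClassLoader()));
}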

Example 8 with SystemTable

Use of io.trino.spi.connector.SystemTable in project trino by trinodb.

From the class CoordinatorSystemTablesProvider, method getSystemTable. A hedged sketch of the static provider it consults first follows the example.

@Override
public Optional<SystemTable> getSystemTable(ConnectorSession session, SchemaTableName tableName) {
    Optional<SystemTable> staticSystemTable = staticProvider.getSystemTable(session, tableName);
    if (staticSystemTable.isPresent()) {
        return staticSystemTable;
    }
    if (!isCoordinatorTransaction(session)) {
        // this is a session from another coordinator, so there are no dynamic tables here for that session
        return Optional.empty();
    }
    Optional<SystemTable> systemTable = metadata.getSystemTable(((FullConnectorSession) session).getSession(), new QualifiedObjectName(catalogName, tableName.getSchemaName(), tableName.getTableName()));
    // dynamic tables require access to the transaction and thus can only run on the current coordinator
    if (systemTable.isPresent() && systemTable.get().getDistribution() != SINGLE_COORDINATOR) {
        throw new TrinoException(GENERIC_INTERNAL_ERROR, "Distribution for dynamic system table must be " + SINGLE_COORDINATOR);
    }
    return systemTable;
}
Also used : TrinoException(io.trino.spi.TrinoException) SystemTable(io.trino.spi.connector.SystemTable) QualifiedObjectName(io.trino.metadata.QualifiedObjectName)
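
The staticProvider consulted first simply serves a fixed set of tables keyed by their schema-qualified name. A minimal hedged sketch of such a provider (illustrative class and field names, not the exact Trino implementation):

// Hedged sketch of a static system table provider. Assumed static imports:
// com.google.common.collect.ImmutableMap.toImmutableMap and java.util.function.Function.identity,
// plus the SystemTable SPI types used below.
public class StaticSystemTablesSketch {
    private final Map<SchemaTableName, SystemTable> tables;

    public StaticSystemTablesSketch(Set<SystemTable> systemTables) {
        // key each table by the SchemaTableName declared in its ConnectorTableMetadata
        this.tables = systemTables.stream().collect(toImmutableMap(table -> table.getTableMetadata().getTable(), identity()));
    }

    public Optional<SystemTable> getSystemTable(ConnectorSession session, SchemaTableName tableName) {
        return Optional.ofNullable(tables.get(tableName));
    }
}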

Example 9 with SystemTable

Use of io.trino.spi.connector.SystemTable in project trino by trinodb.

From the class PartitionsSystemTableProvider, method getSystemTable. A hedged sketch of the $partitions suffix handling follows the example and its import list.

@Override
public Optional<SystemTable> getSystemTable(HiveMetadata metadata, ConnectorSession session, SchemaTableName tableName) {
    if (!PARTITIONS.matches(tableName)) {
        return Optional.empty();
    }
    SchemaTableName sourceTableName = PARTITIONS.getSourceTableName(tableName);
    Table sourceTable = metadata.getMetastore().getTable(sourceTableName.getSchemaName(), sourceTableName.getTableName()).orElse(null);
    if (sourceTable == null || isDeltaLakeTable(sourceTable) || isIcebergTable(sourceTable)) {
        return Optional.empty();
    }
    verifyOnline(sourceTableName, Optional.empty(), getProtectMode(sourceTable), sourceTable.getParameters());
    HiveTableHandle sourceTableHandle = new HiveTableHandle(sourceTableName.getSchemaName(), sourceTableName.getTableName(), sourceTable.getParameters(), getPartitionKeyColumnHandles(sourceTable, typeManager), getRegularColumnHandles(sourceTable, typeManager, getTimestampPrecision(session)), getHiveBucketHandle(session, sourceTable, typeManager));
    List<HiveColumnHandle> partitionColumns = sourceTableHandle.getPartitionColumns();
    if (partitionColumns.isEmpty()) {
        return Optional.empty();
    }
    List<Type> partitionColumnTypes = partitionColumns.stream().map(HiveColumnHandle::getType).collect(toImmutableList());
    List<ColumnMetadata> partitionSystemTableColumns = partitionColumns.stream().map(column -> ColumnMetadata.builder().setName(column.getName()).setType(column.getType()).setComment(column.getComment()).setHidden(column.isHidden()).build()).collect(toImmutableList());
    Map<Integer, HiveColumnHandle> fieldIdToColumnHandle = IntStream.range(0, partitionColumns.size()).boxed().collect(toImmutableMap(identity(), partitionColumns::get));
    return Optional.of(createSystemTable(new ConnectorTableMetadata(tableName, partitionSystemTableColumns), constraint -> {
        Constraint targetConstraint = new Constraint(constraint.transformKeys(fieldIdToColumnHandle::get));
        Iterable<List<Object>> records = () -> stream(partitionManager.getPartitions(metadata.getMetastore(), sourceTableHandle, targetConstraint).getPartitions()).map(hivePartition -> IntStream.range(0, partitionColumns.size()).mapToObj(fieldIdToColumnHandle::get).map(columnHandle -> hivePartition.getKeys().get(columnHandle).getValue()).collect(toList())).iterator(); // toList() rather than toImmutableList() because partition key values may be null
        return new InMemoryRecordSet(partitionColumnTypes, records).cursor();
    }));
}
Also used : IntStream(java.util.stream.IntStream) Constraint(io.trino.spi.connector.Constraint) ColumnMetadata(io.trino.spi.connector.ColumnMetadata) HiveUtil.isDeltaLakeTable(io.trino.plugin.hive.util.HiveUtil.isDeltaLakeTable) Type(io.trino.spi.type.Type) ConnectorTableMetadata(io.trino.spi.connector.ConnectorTableMetadata) Inject(javax.inject.Inject) Map(java.util.Map) Objects.requireNonNull(java.util.Objects.requireNonNull) PARTITIONS(io.trino.plugin.hive.SystemTableHandler.PARTITIONS) HiveUtil.isIcebergTable(io.trino.plugin.hive.util.HiveUtil.isIcebergTable) Table(io.trino.plugin.hive.metastore.Table) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) HiveUtil.getRegularColumnHandles(io.trino.plugin.hive.util.HiveUtil.getRegularColumnHandles) HiveSessionProperties.getTimestampPrecision(io.trino.plugin.hive.HiveSessionProperties.getTimestampPrecision) ConnectorSession(io.trino.spi.connector.ConnectorSession) InMemoryRecordSet(io.trino.spi.connector.InMemoryRecordSet) SchemaTableName(io.trino.spi.connector.SchemaTableName) Streams.stream(com.google.common.collect.Streams.stream) HiveBucketing.getHiveBucketHandle(io.trino.plugin.hive.util.HiveBucketing.getHiveBucketHandle) List(java.util.List) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) Collectors.toList(java.util.stream.Collectors.toList) MetastoreUtil.verifyOnline(io.trino.plugin.hive.metastore.MetastoreUtil.verifyOnline) MetastoreUtil.getProtectMode(io.trino.plugin.hive.metastore.MetastoreUtil.getProtectMode) Function.identity(java.util.function.Function.identity) Optional(java.util.Optional) SystemTables.createSystemTable(io.trino.plugin.hive.util.SystemTables.createSystemTable) HiveUtil.getPartitionKeyColumnHandles(io.trino.plugin.hive.util.HiveUtil.getPartitionKeyColumnHandles) TypeManager(io.trino.spi.type.TypeManager) SystemTable(io.trino.spi.connector.SystemTable)
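
The PARTITIONS constant above is a SystemTableHandler value whose matches and getSourceTableName calls decide whether a "$partitions" suffix is present and which source table it refers to. A hedged sketch of how such suffix handling could work (illustrative enum, not necessarily the real io.trino.plugin.hive.SystemTableHandler):

// Hedged sketch of "$partitions" suffix handling; illustrative only.
public enum SystemTableHandlerSketch {
    PARTITIONS("$partitions");

    private final String suffix;

    SystemTableHandlerSketch(String suffix) {
        this.suffix = suffix;
    }

    public boolean matches(SchemaTableName tableName) {
        String table = tableName.getTableName();
        return table.endsWith(suffix) && table.length() > suffix.length();
    }

    public SchemaTableName getSourceTableName(SchemaTableName tableName) {
        String table = tableName.getTableName();
        // strip the suffix to recover the name of the backing Hive table
        return new SchemaTableName(tableName.getSchemaName(), table.substring(0, table.length() - suffix.length()));
    }
}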

Example 10 with SystemTable

Use of io.trino.spi.connector.SystemTable in project trino by trinodb.

From the class IndexedTpchConnectorFactory, method create. A hedged sketch of a minimal ExampleSystemTable follows the example.

@Override
public Connector create(String catalogName, Map<String, String> properties, ConnectorContext context) {
    int splitsPerNode = getSplitsPerNode(properties);
    TpchIndexedData indexedData = new TpchIndexedData(indexSpec);
    NodeManager nodeManager = context.getNodeManager();
    return new Connector() {

        @Override
        public ConnectorTransactionHandle beginTransaction(IsolationLevel isolationLevel, boolean readOnly, boolean autoCommit) {
            return TpchTransactionHandle.INSTANCE;
        }

        @Override
        public ConnectorMetadata getMetadata(ConnectorSession session, ConnectorTransactionHandle transactionHandle) {
            return new TpchIndexMetadata(indexedData);
        }

        @Override
        public ConnectorSplitManager getSplitManager() {
            return new TpchSplitManager(nodeManager, splitsPerNode);
        }

        @Override
        public ConnectorRecordSetProvider getRecordSetProvider() {
            return new TpchRecordSetProvider(DecimalTypeMapping.DOUBLE);
        }

        @Override
        public ConnectorIndexProvider getIndexProvider() {
            return new TpchIndexProvider(indexedData);
        }

        @Override
        public Set<SystemTable> getSystemTables() {
            return ImmutableSet.of(new ExampleSystemTable());
        }

        @Override
        public ConnectorNodePartitioningProvider getNodePartitioningProvider() {
            return new TpchNodePartitioningProvider(nodeManager, splitsPerNode);
        }
    };
}
Also used : Connector(io.trino.spi.connector.Connector) IsolationLevel(io.trino.spi.transaction.IsolationLevel) ConnectorTransactionHandle(io.trino.spi.connector.ConnectorTransactionHandle) TpchSplitManager(io.trino.plugin.tpch.TpchSplitManager) NodeManager(io.trino.spi.NodeManager) TpchNodePartitioningProvider(io.trino.plugin.tpch.TpchNodePartitioningProvider) ConnectorSession(io.trino.spi.connector.ConnectorSession) SystemTable(io.trino.spi.connector.SystemTable) TpchRecordSetProvider(io.trino.plugin.tpch.TpchRecordSetProvider)
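
getSystemTables above registers an ExampleSystemTable whose definition is not included on this page. A hedged sketch of what a minimal single-column system table built on InMemoryRecordSet could look like (table and column names are illustrative; the real ExampleSystemTable may differ):

// Hedged sketch of a minimal SystemTable.
public class ExampleSystemTableSketch implements SystemTable {
    private static final ConnectorTableMetadata METADATA = new ConnectorTableMetadata(
            new SchemaTableName("example_schema", "example_table"),
            ImmutableList.of(new ColumnMetadata("value", VarcharType.VARCHAR)));

    @Override
    public Distribution getDistribution() {
        // a static table can be read on any worker
        return Distribution.ALL_NODES;
    }

    @Override
    public ConnectorTableMetadata getTableMetadata() {
        return METADATA;
    }

    @Override
    public RecordCursor cursor(ConnectorTransactionHandle transactionHandle, ConnectorSession session, TupleDomain<Integer> constraint) {
        // single fixed row; the constraint is ignored for this tiny table
        return InMemoryRecordSet.builder(METADATA).addRow("test").build().cursor();
    }
}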

Aggregations

SystemTable (io.trino.spi.connector.SystemTable): 10
SchemaTableName (io.trino.spi.connector.SchemaTableName): 6
ImmutableList.toImmutableList (com.google.common.collect.ImmutableList.toImmutableList): 5
TrinoException (io.trino.spi.TrinoException): 5
ColumnHandle (io.trino.spi.connector.ColumnHandle): 5
ColumnMetadata (io.trino.spi.connector.ColumnMetadata): 5
ConnectorSession (io.trino.spi.connector.ConnectorSession): 5
ImmutableList (com.google.common.collect.ImmutableList): 4
ConnectorTableMetadata (io.trino.spi.connector.ConnectorTableMetadata): 4
TableNotFoundException (io.trino.spi.connector.TableNotFoundException): 4
List (java.util.List): 4
Map (java.util.Map): 4
Optional (java.util.Optional): 4
Constraint (io.trino.spi.connector.Constraint): 3
InMemoryRecordSet (io.trino.spi.connector.InMemoryRecordSet): 3
Type (io.trino.spi.type.Type): 3
Objects.requireNonNull (java.util.Objects.requireNonNull): 3
Preconditions.checkArgument (com.google.common.base.Preconditions.checkArgument): 2
ImmutableMap (com.google.common.collect.ImmutableMap): 2
ImmutableSet (com.google.common.collect.ImmutableSet): 2