
Example 1 with BeginTableExecuteResult

Use of io.trino.spi.connector.BeginTableExecuteResult in project trino by trinodb.

From the class MetadataManager, method beginTableExecute:

@Override
public BeginTableExecuteResult<TableExecuteHandle, TableHandle> beginTableExecute(Session session, TableExecuteHandle tableExecuteHandle, TableHandle sourceHandle)
{
    CatalogName catalogName = tableExecuteHandle.getCatalogName();
    CatalogMetadata catalogMetadata = getCatalogMetadataForWrite(session, catalogName);
    ConnectorMetadata metadata = catalogMetadata.getMetadata(session);
    // Unwrap the engine-level handles and delegate to the target connector
    BeginTableExecuteResult<ConnectorTableExecuteHandle, ConnectorTableHandle> connectorBeginResult = metadata.beginTableExecute(
            session.toConnectorSession(),
            tableExecuteHandle.getConnectorHandle(),
            sourceHandle.getConnectorHandle());
    // Re-wrap the connector's (possibly updated) handles for the engine
    return new BeginTableExecuteResult<>(
            tableExecuteHandle.withConnectorHandle(connectorBeginResult.getTableExecuteHandle()),
            sourceHandle.withConnectorHandle(connectorBeginResult.getSourceHandle()));
}
Also used: ConnectorTableExecuteHandle (io.trino.spi.connector.ConnectorTableExecuteHandle), CatalogName (io.trino.connector.CatalogName), ConnectorMetadata (io.trino.spi.connector.ConnectorMetadata), BeginTableExecuteResult (io.trino.spi.connector.BeginTableExecuteResult), ConnectorTableHandle (io.trino.spi.connector.ConnectorTableHandle)
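
The engine code above assumes each connector supplies its own beginTableExecute. A minimal sketch of that connector side, using a hypothetical ExampleMetadata class; the signature mirrors the delegation call in MetadataManager, and a real connector would prepare state here, as the next examples do:

import io.trino.spi.connector.BeginTableExecuteResult;
import io.trino.spi.connector.ConnectorSession;
import io.trino.spi.connector.ConnectorTableExecuteHandle;
import io.trino.spi.connector.ConnectorTableHandle;

public class ExampleMetadata
{
    // Hypothetical connector-side counterpart of the delegation above:
    // return the execute handle paired with the source handle, each
    // optionally replaced by an updated copy.
    public BeginTableExecuteResult<ConnectorTableExecuteHandle, ConnectorTableHandle> beginTableExecute(
            ConnectorSession session,
            ConnectorTableExecuteHandle tableExecuteHandle,
            ConnectorTableHandle sourceHandle)
    {
        // Nothing to prepare in this sketch, so both handles pass through unchanged
        return new BeginTableExecuteResult<>(tableExecuteHandle, sourceHandle);
    }
}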

Example 2 with BeginTableExecuteResult

Use of io.trino.spi.connector.BeginTableExecuteResult in project trino by trinodb.

From the class HiveMetadata, method beginOptimize:

private BeginTableExecuteResult<ConnectorTableExecuteHandle, ConnectorTableHandle> beginOptimize(ConnectorSession session, ConnectorTableExecuteHandle tableExecuteHandle, ConnectorTableHandle sourceTableHandle)
{
    HiveTableExecuteHandle hiveExecuteHandle = (HiveTableExecuteHandle) tableExecuteHandle;
    HiveTableHandle hiveSourceTableHandle = (HiveTableHandle) sourceTableHandle;
    WriteInfo writeInfo = locationService.getQueryWriteInfo(hiveExecuteHandle.getLocationHandle());
    // Declare the intended write to the metastore before any files move
    String writeDeclarationId = metastore.declareIntentionToWrite(session, writeInfo.getWriteMode(), writeInfo.getWritePath(), hiveExecuteHandle.getSchemaTableName());
    // Cap scanned file size and record scanned files on the source handle
    return new BeginTableExecuteResult<>(
            hiveExecuteHandle.withWriteDeclarationId(writeDeclarationId),
            hiveSourceTableHandle
                    .withMaxScannedFileSize(hiveExecuteHandle.getMaxScannedFileSize())
                    .withRecordScannedFiles(true));
}
Also used: WriteInfo (io.trino.plugin.hive.LocationService.WriteInfo), BeginTableExecuteResult (io.trino.spi.connector.BeginTableExecuteResult)
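
Both handles returned here are immutable: each with* call produces a modified copy rather than mutating in place. A sketch of that pattern on a hypothetical, heavily simplified handle (the real HiveTableHandle carries many more fields, and the field types here are assumptions):

import java.util.OptionalLong;

public class ExampleHandle
{
    private final OptionalLong maxScannedFileSize;
    private final boolean recordScannedFiles;

    public ExampleHandle(OptionalLong maxScannedFileSize, boolean recordScannedFiles)
    {
        this.maxScannedFileSize = maxScannedFileSize;
        this.recordScannedFiles = recordScannedFiles;
    }

    // Copy-on-write setters, chainable as in the Hive example:
    // handle.withMaxScannedFileSize(...).withRecordScannedFiles(true)
    public ExampleHandle withMaxScannedFileSize(OptionalLong maxScannedFileSize)
    {
        return new ExampleHandle(maxScannedFileSize, recordScannedFiles);
    }

    public ExampleHandle withRecordScannedFiles(boolean recordScannedFiles)
    {
        return new ExampleHandle(maxScannedFileSize, recordScannedFiles);
    }
}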

Example 3 with BeginTableExecuteResult

Use of io.trino.spi.connector.BeginTableExecuteResult in project trino by trinodb.

From the class IcebergMetadata, method beginOptimize:

private BeginTableExecuteResult<ConnectorTableExecuteHandle, ConnectorTableHandle> beginOptimize(ConnectorSession session, IcebergTableExecuteHandle executeHandle, IcebergTableHandle table)
{
    IcebergOptimizeHandle optimizeHandle = (IcebergOptimizeHandle) executeHandle.getProcedureHandle();
    Table icebergTable = catalog.loadTable(session, table.getSchemaTableName());
    // Start a single Iceberg transaction for this OPTIMIZE run
    verify(transaction == null, "transaction already set");
    transaction = icebergTable.newTransaction();
    return new BeginTableExecuteResult<>(
            executeHandle,
            table.forOptimize(true, optimizeHandle.getMaxScannedFileSize()));
}
Also used: IcebergOptimizeHandle (io.trino.plugin.iceberg.procedure.IcebergOptimizeHandle), Table (org.apache.iceberg.Table), ClassLoaderSafeSystemTable (io.trino.plugin.base.classloader.ClassLoaderSafeSystemTable), SystemTable (io.trino.spi.connector.SystemTable), BeginTableExecuteResult (io.trino.spi.connector.BeginTableExecuteResult)
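
Every example builds the result with a two-argument constructor, and the engine reads it back through getTableExecuteHandle() and getSourceHandle() (Example 1). A sketch of such a pair holder, assuming the SPI class is essentially a simple generic value type:

import static java.util.Objects.requireNonNull;

// Hypothetical stand-in with the same constructor and getters as used above;
// the real io.trino.spi.connector.BeginTableExecuteResult may do more.
public class ExecuteHandlePair<T, S>
{
    private final T tableExecuteHandle;
    private final S sourceHandle;

    public ExecuteHandlePair(T tableExecuteHandle, S sourceHandle)
    {
        this.tableExecuteHandle = requireNonNull(tableExecuteHandle, "tableExecuteHandle is null");
        this.sourceHandle = requireNonNull(sourceHandle, "sourceHandle is null");
    }

    public T getTableExecuteHandle()
    {
        return tableExecuteHandle;
    }

    public S getSourceHandle()
    {
        return sourceHandle;
    }
}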

Example 4 with BeginTableExecuteResult

Use of io.trino.spi.connector.BeginTableExecuteResult in project trino by trinodb.

From the class DeltaLakeMetadata, method beginOptimize:

private BeginTableExecuteResult<ConnectorTableExecuteHandle, ConnectorTableHandle> beginOptimize(ConnectorSession session, DeltaLakeTableExecuteHandle executeHandle, DeltaLakeTableHandle table)
{
    DeltaTableOptimizeHandle optimizeHandle = (DeltaTableOptimizeHandle) executeHandle.getProcedureHandle();
    // Refuse OPTIMIZE on filesystems the connector cannot safely write to
    if (!allowWrite(session, table)) {
        String fileSystem = new Path(table.getLocation()).toUri().getScheme();
        throw new TrinoException(NOT_SUPPORTED, format("Optimize is not supported on the %s filesystem", fileSystem));
    }
    checkSupportedWriterVersion(session, table.getSchemaTableName());
    // Record the current read version in the procedure handle
    return new BeginTableExecuteResult<>(
            executeHandle.withProcedureHandle(optimizeHandle.withCurrentVersion(table.getReadVersion())),
            table.forOptimize(true, optimizeHandle.getMaxScannedFileSize()));
}
Also used: DeltaTableOptimizeHandle (io.trino.plugin.deltalake.procedure.DeltaTableOptimizeHandle), Path (org.apache.hadoop.fs.Path), TrinoException (io.trino.spi.TrinoException), BeginTableExecuteResult (io.trino.spi.connector.BeginTableExecuteResult)
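
The Delta Lake variant validates before returning. A compact sketch of the same guard, with a hypothetical writable flag standing in for the connector's allowWrite check:

import java.net.URI;
import io.trino.spi.TrinoException;
import static io.trino.spi.StandardErrorCode.NOT_SUPPORTED;
import static java.lang.String.format;

public final class OptimizeGuard
{
    private OptimizeGuard() {}

    // Hypothetical helper: fail fast when the table's filesystem is not writable,
    // naming the offending scheme in the error message as the example above does
    public static void checkWritable(String tableLocation, boolean writable)
    {
        if (!writable) {
            String fileSystem = URI.create(tableLocation).getScheme();
            throw new TrinoException(NOT_SUPPORTED, format("Optimize is not supported on the %s filesystem", fileSystem));
        }
    }
}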

Aggregations

BeginTableExecuteResult (io.trino.spi.connector.BeginTableExecuteResult): 4 uses
CatalogName (io.trino.connector.CatalogName): 1 use
ClassLoaderSafeSystemTable (io.trino.plugin.base.classloader.ClassLoaderSafeSystemTable): 1 use
DeltaTableOptimizeHandle (io.trino.plugin.deltalake.procedure.DeltaTableOptimizeHandle): 1 use
WriteInfo (io.trino.plugin.hive.LocationService.WriteInfo): 1 use
IcebergOptimizeHandle (io.trino.plugin.iceberg.procedure.IcebergOptimizeHandle): 1 use
TrinoException (io.trino.spi.TrinoException): 1 use
ConnectorMetadata (io.trino.spi.connector.ConnectorMetadata): 1 use
ConnectorTableExecuteHandle (io.trino.spi.connector.ConnectorTableExecuteHandle): 1 use
ConnectorTableHandle (io.trino.spi.connector.ConnectorTableHandle): 1 use
SystemTable (io.trino.spi.connector.SystemTable): 1 use
Path (org.apache.hadoop.fs.Path): 1 use
Table (org.apache.iceberg.Table): 1 use