Example 1 with StagedTable

Use of org.apache.spark.sql.connector.catalog.StagedTable in the Apache Iceberg project.

From the class SparkSessionCatalog, method stageCreateOrReplace:

@Override
public StagedTable stageCreateOrReplace(Identifier ident, StructType schema, Transform[] partitions, Map<String, String> properties) throws NoSuchNamespaceException {
    String provider = properties.get("provider");
    TableCatalog catalog;
    if (useIceberg(provider)) {
        if (asStagingCatalog != null) {
            return asStagingCatalog.stageCreateOrReplace(ident, schema, partitions, properties);
        }
        catalog = icebergCatalog;
    } else {
        catalog = getSessionCatalog();
    }
    // drop the table if it exists
    catalog.dropTable(ident);
    try {
        // create the table with the session catalog, then wrap it in a staged table that will delete to roll back
        Table sessionCatalogTable = catalog.createTable(ident, schema, partitions, properties);
        return new RollbackStagedTable(catalog, ident, sessionCatalogTable);
    } catch (TableAlreadyExistsException e) {
        // the table was deleted, but now already exists again. retry the replace.
        return stageCreateOrReplace(ident, schema, partitions, properties);
    }
}
Also used:
TableAlreadyExistsException (org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException)
StagedTable (org.apache.spark.sql.connector.catalog.StagedTable)
Table (org.apache.spark.sql.connector.catalog.Table)
TableCatalog (org.apache.spark.sql.connector.catalog.TableCatalog)
StagingTableCatalog (org.apache.spark.sql.connector.catalog.StagingTableCatalog)
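The useIceberg(provider) check above decides which catalog handles the table, but its body is not shown here. As a hedged sketch (the actual rule lives elsewhere in SparkSessionCatalog and may differ), a plausible routing predicate treats a missing provider or "iceberg" (case-insensitive) as Iceberg-managed:

```java
public class ProviderRouting {
    // Illustrative guess at the routing rule: a missing "provider" property
    // or the value "iceberg" (any case) selects the Iceberg catalog; any
    // other provider falls through to the Spark session catalog. This is an
    // assumption for illustration, not the verified SparkSessionCatalog code.
    static boolean useIceberg(String provider) {
        return provider == null || "iceberg".equalsIgnoreCase(provider);
    }

    public static void main(String[] args) {
        System.out.println(useIceberg(null));      // true
        System.out.println(useIceberg("ICEBERG")); // true
        System.out.println(useIceberg("parquet")); // false
    }
}
```

With a predicate like this, only non-Iceberg providers (parquet, csv, jdbc, ...) are delegated to getSessionCatalog().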

Example 2 with StagedTable

Use of org.apache.spark.sql.connector.catalog.StagedTable in the Apache Iceberg project.

From the class SparkSessionCatalog, method stageCreate:

@Override
public StagedTable stageCreate(Identifier ident, StructType schema, Transform[] partitions, Map<String, String> properties) throws TableAlreadyExistsException, NoSuchNamespaceException {
    String provider = properties.get("provider");
    TableCatalog catalog;
    if (useIceberg(provider)) {
        if (asStagingCatalog != null) {
            return asStagingCatalog.stageCreate(ident, schema, partitions, properties);
        }
        catalog = icebergCatalog;
    } else {
        catalog = getSessionCatalog();
    }
    // create the table with the session catalog, then wrap it in a staged table that will delete to roll back
    Table table = catalog.createTable(ident, schema, partitions, properties);
    return new RollbackStagedTable(catalog, ident, table);
}
Also used:
StagedTable (org.apache.spark.sql.connector.catalog.StagedTable)
Table (org.apache.spark.sql.connector.catalog.Table)
TableCatalog (org.apache.spark.sql.connector.catalog.TableCatalog)
StagingTableCatalog (org.apache.spark.sql.connector.catalog.StagingTableCatalog)
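The comment in stageCreate describes the staging pattern: create the table eagerly, then wrap it so an aborted write can delete it. RollbackStagedTable's source is not shown here, so the following is a minimal self-contained model of that pattern using stand-in types (MiniCatalog, MiniRollbackStagedTable, and the table name are invented for illustration and are not the Spark or Iceberg API):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a table catalog; the real TableCatalog interface differs.
interface MiniCatalog {
    boolean createTable(String ident);
    boolean dropTable(String ident);
    boolean tableExists(String ident);
}

// Models the role RollbackStagedTable plays above: commit is a no-op because
// the table already exists, and abort rolls back by dropping it.
class MiniRollbackStagedTable {
    private final MiniCatalog catalog;
    private final String ident;

    MiniRollbackStagedTable(MiniCatalog catalog, String ident) {
        this.catalog = catalog;
        this.ident = ident;
    }

    void commitStagedChanges() {
        // nothing to do: the table was created eagerly in stageCreate
    }

    void abortStagedChanges() {
        // roll back by deleting the table that was created up front
        catalog.dropTable(ident);
    }
}

public class StagedCreateDemo {
    static MiniCatalog inMemoryCatalog() {
        Map<String, Boolean> tables = new HashMap<>();
        return new MiniCatalog() {
            public boolean createTable(String ident) { return tables.put(ident, true) == null; }
            public boolean dropTable(String ident) { return tables.remove(ident) != null; }
            public boolean tableExists(String ident) { return tables.containsKey(ident); }
        };
    }

    public static void main(String[] args) {
        MiniCatalog catalog = inMemoryCatalog();

        // stageCreate: create eagerly, then wrap for possible rollback
        catalog.createTable("db.t");
        MiniRollbackStagedTable staged = new MiniRollbackStagedTable(catalog, "db.t");

        staged.abortStagedChanges(); // the staged write failed: undo the create
        System.out.println(catalog.tableExists("db.t")); // false
    }
}
```

This is why the pattern works for non-Iceberg tables that have no native atomic staging: the "staged" state is simulated by pairing an eager create with a compensating drop.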

Example 3 with StagedTable

Use of org.apache.spark.sql.connector.catalog.StagedTable in the Apache Iceberg project.

From the class SparkSessionCatalog, method stageReplace:

@Override
public StagedTable stageReplace(Identifier ident, StructType schema, Transform[] partitions, Map<String, String> properties) throws NoSuchNamespaceException, NoSuchTableException {
    String provider = properties.get("provider");
    TableCatalog catalog;
    if (useIceberg(provider)) {
        if (asStagingCatalog != null) {
            return asStagingCatalog.stageReplace(ident, schema, partitions, properties);
        }
        catalog = icebergCatalog;
    } else {
        catalog = getSessionCatalog();
    }
    // attempt to drop the table and fail if it doesn't exist
    if (!catalog.dropTable(ident)) {
        throw new NoSuchTableException(ident);
    }
    try {
        // create the table with the session catalog, then wrap it in a staged table that will delete to roll back
        Table table = catalog.createTable(ident, schema, partitions, properties);
        return new RollbackStagedTable(catalog, ident, table);
    } catch (TableAlreadyExistsException e) {
        // the table was deleted, but now already exists again. retry the replace.
        return stageReplace(ident, schema, partitions, properties);
    }
}
Also used:
TableAlreadyExistsException (org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException)
StagedTable (org.apache.spark.sql.connector.catalog.StagedTable)
Table (org.apache.spark.sql.connector.catalog.Table)
TableCatalog (org.apache.spark.sql.connector.catalog.TableCatalog)
StagingTableCatalog (org.apache.spark.sql.connector.catalog.StagingTableCatalog)
NoSuchTableException (org.apache.spark.sql.catalyst.analysis.NoSuchTableException)
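Note the race that the catch block in stageReplace handles: drop-then-create is not atomic, so a concurrent writer can recreate the table between the two calls, and the method retries by calling itself. The same retry can be written as a loop, sketched here with toy stand-in types (RacyCatalog and its one-shot race are invented to make the retry observable; they are not the Spark API):

```java
import java.util.HashSet;
import java.util.Set;

public class ReplaceRetryDemo {
    static class TableExistsException extends Exception {}

    // Toy catalog in which a concurrent writer recreates the table exactly
    // once, immediately after our drop, forcing one retry.
    static class RacyCatalog {
        final Set<String> tables = new HashSet<>();
        boolean raceOnce = true;

        boolean dropTable(String ident) {
            boolean dropped = tables.remove(ident);
            if (dropped && raceOnce) {
                tables.add(ident); // concurrent writer sneaks in
                raceOnce = false;
            }
            return dropped;
        }

        void createTable(String ident) throws TableExistsException {
            if (!tables.add(ident)) throw new TableExistsException();
        }
    }

    // Loop equivalent of the recursive retry in stageReplace above.
    static void replace(RacyCatalog catalog, String ident) {
        while (true) {
            catalog.dropTable(ident);
            try {
                catalog.createTable(ident);
                return;
            } catch (TableExistsException e) {
                // the table reappeared between drop and create: retry
            }
        }
    }

    public static void main(String[] args) {
        RacyCatalog catalog = new RacyCatalog();
        catalog.tables.add("db.t");
        replace(catalog, "db.t");
        System.out.println(catalog.tables.contains("db.t")); // true
    }
}
```

The recursion in the real code terminates for the same reason this loop does: each retry re-runs the drop, so progress depends only on eventually winning the drop/create window.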

Aggregations

StagedTable (org.apache.spark.sql.connector.catalog.StagedTable): 3 uses
StagingTableCatalog (org.apache.spark.sql.connector.catalog.StagingTableCatalog): 3 uses
Table (org.apache.spark.sql.connector.catalog.Table): 3 uses
TableCatalog (org.apache.spark.sql.connector.catalog.TableCatalog): 3 uses
TableAlreadyExistsException (org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException): 2 uses
NoSuchTableException (org.apache.spark.sql.catalyst.analysis.NoSuchTableException): 1 use