
Example 1 with BigQueryClient

Use of com.google.cloud.bigquery.connector.common.BigQueryClient in the spark-bigquery-connector project by GoogleCloudDataproc.

From the class BigQueryDataSourceV2, method createWriter.

/**
 * Returns a DataSourceWriter for the specified parameters. If the table already exists and
 * the SaveMode is "Ignore", Optional.empty() is returned.
 */
@Override
public Optional<DataSourceWriter> createWriter(String writeUUID, StructType schema, SaveMode mode, DataSourceOptions options) {
    Injector injector = createInjector(schema, options.asMap(), new BigQueryDataSourceWriterModule(writeUUID, schema, mode));
    // First, verify whether we need to do anything at all, based on the table's existence
    // and the save mode.
    BigQueryClient bigQueryClient = injector.getInstance(BigQueryClient.class);
    SparkBigQueryConfig config = injector.getInstance(SparkBigQueryConfig.class);
    TableInfo table = bigQueryClient.getTable(config.getTableId());
    if (table != null) {
        // table already exists
        if (mode == SaveMode.Ignore) {
            return Optional.empty();
        }
        if (mode == SaveMode.ErrorIfExists) {
            throw new IllegalArgumentException(String.format(
                "SaveMode is set to ErrorIfExists and table '%s' already exists. Did you want "
                    + "to add data to the table by setting the SaveMode to Append? Example: "
                    + "df.write.format.options.mode(\"append\").save()",
                BigQueryUtil.friendlyTableName(table.getTableId())));
        }
    } else {
        // table does not exist
        // If the CreateDisposition is CREATE_NEVER, and the table does not exist,
        // there's no point in writing the data to GCS in the first place, as it is going
        // to fail on the BigQuery side.
        boolean createNever = config.getCreateDisposition()
            .map(createDisposition -> createDisposition == JobInfo.CreateDisposition.CREATE_NEVER)
            .orElse(false);
        if (createNever) {
            throw new IllegalArgumentException(String.format(
                "For table %s Create Disposition is CREATE_NEVER and the table does not exist."
                    + " Aborting the insert",
                BigQueryUtil.friendlyTableName(config.getTableId())));
        }
    }
    DataSourceWriterContext dataSourceWriterContext = null;
    switch(config.getWriteMethod()) {
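        // DIRECT writes rows straight to BigQuery through the Storage Write API, while INDIRECT
        // first stages the data in GCS and then loads it into BigQuery.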
        case DIRECT:
            dataSourceWriterContext = injector.getInstance(BigQueryDirectDataSourceWriterContext.class);
            break;
        case INDIRECT:
            dataSourceWriterContext = injector.getInstance(BigQueryIndirectDataSourceWriterContext.class);
            break;
    }
    return Optional.of(new BigQueryDataSourceWriter(dataSourceWriterContext));
}
Also used : BigQueryDataSourceWriterModule(com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceWriterModule) WriteSupport(org.apache.spark.sql.sources.v2.WriteSupport) StructType(org.apache.spark.sql.types.StructType) SaveMode(org.apache.spark.sql.SaveMode) ReadSupport(org.apache.spark.sql.sources.v2.ReadSupport) JobInfo(com.google.cloud.bigquery.JobInfo) BigQueryClient(com.google.cloud.bigquery.connector.common.BigQueryClient) BigQueryIndirectDataSourceWriterContext(com.google.cloud.spark.bigquery.v2.context.BigQueryIndirectDataSourceWriterContext) SparkBigQueryConfig(com.google.cloud.spark.bigquery.SparkBigQueryConfig) BigQueryDataSourceReaderModule(com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderModule) Injector(com.google.inject.Injector) BigQueryDataSourceReaderContext(com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderContext) DataSourceWriter(org.apache.spark.sql.sources.v2.writer.DataSourceWriter) DataSourceWriterContext(com.google.cloud.spark.bigquery.v2.context.DataSourceWriterContext) Optional(java.util.Optional) TableInfo(com.google.cloud.bigquery.TableInfo) BigQueryUtil(com.google.cloud.bigquery.connector.common.BigQueryUtil) BigQueryDirectDataSourceWriterContext(com.google.cloud.spark.bigquery.v2.context.BigQueryDirectDataSourceWriterContext) DataSourceOptions(org.apache.spark.sql.sources.v2.DataSourceOptions) DataSourceV2(org.apache.spark.sql.sources.v2.DataSourceV2) DataSourceReader(org.apache.spark.sql.sources.v2.reader.DataSourceReader)
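
For context, the SaveMode handling in createWriter is what a Spark job hits when writing a DataFrame through the connector. The sketch below shows such a write, assuming a SparkSession is available; the table name, the GCS bucket, and the value of the temporaryGcsBucket option are placeholders, not taken from the snippet above.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class BigQueryWriteSketch {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("bigquery-write-sketch").getOrCreate();
        // Any DataFrame works here; the public Shakespeare sample is only an example source.
        Dataset<Row> df = spark.read().format("bigquery").load("bigquery-public-data.samples.shakespeare");
        // With SaveMode.Ignore, createWriter returns Optional.empty() if the target table already
        // exists and the write becomes a no-op; with SaveMode.ErrorIfExists the same situation
        // throws the IllegalArgumentException shown above.
        df.write()
            .format("bigquery")
            // Staging bucket for the INDIRECT write method; "some-bucket" is a placeholder.
            .option("temporaryGcsBucket", "some-bucket")
            .mode(SaveMode.Ignore)
            .save("my-project.my_dataset.my_table");
    }
}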

Example 2 with BigQueryClient

Use of com.google.cloud.bigquery.connector.common.BigQueryClient in the spark-bigquery-connector project by GoogleCloudDataproc.

From the class IntegrationTestUtils, method runQuery.

public static void runQuery(String query) {
    BigQueryClient bigQueryClient = new BigQueryClient(getBigquery(), Optional.empty(), Optional.empty(), destinationTableCache, ImmutableMap.of());
    bigQueryClient.query(query);
}
Also used : BigQueryClient(com.google.cloud.bigquery.connector.common.BigQueryClient)
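
As a usage sketch, this helper is the kind of entry point test code calls to run DDL or cleanup statements against BigQuery around an integration test. The dataset and table names below are hypothetical placeholders, not taken from the connector's test suite.

// Hedged sketch: create a table before a test and drop it afterwards.
// "spark_bigquery_it" and "numbers" are placeholder names.
IntegrationTestUtils.runQuery("CREATE TABLE IF NOT EXISTS spark_bigquery_it.numbers (x INT64)");
// ... run the test against spark_bigquery_it.numbers ...
IntegrationTestUtils.runQuery("DROP TABLE IF EXISTS spark_bigquery_it.numbers");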

Aggregations

BigQueryClient (com.google.cloud.bigquery.connector.common.BigQueryClient): 2
JobInfo (com.google.cloud.bigquery.JobInfo): 1
TableInfo (com.google.cloud.bigquery.TableInfo): 1
BigQueryUtil (com.google.cloud.bigquery.connector.common.BigQueryUtil): 1
SparkBigQueryConfig (com.google.cloud.spark.bigquery.SparkBigQueryConfig): 1
BigQueryDataSourceReaderContext (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderContext): 1
BigQueryDataSourceReaderModule (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderModule): 1
BigQueryDataSourceWriterModule (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceWriterModule): 1
BigQueryDirectDataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.BigQueryDirectDataSourceWriterContext): 1
BigQueryIndirectDataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.BigQueryIndirectDataSourceWriterContext): 1
DataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.DataSourceWriterContext): 1
Injector (com.google.inject.Injector): 1
Optional (java.util.Optional): 1
SaveMode (org.apache.spark.sql.SaveMode): 1
DataSourceOptions (org.apache.spark.sql.sources.v2.DataSourceOptions): 1
DataSourceV2 (org.apache.spark.sql.sources.v2.DataSourceV2): 1
ReadSupport (org.apache.spark.sql.sources.v2.ReadSupport): 1
WriteSupport (org.apache.spark.sql.sources.v2.WriteSupport): 1
DataSourceReader (org.apache.spark.sql.sources.v2.reader.DataSourceReader): 1
DataSourceWriter (org.apache.spark.sql.sources.v2.writer.DataSourceWriter): 1