
Example 1 with SparkBigQueryConfig

Use of com.google.cloud.spark.bigquery.SparkBigQueryConfig in project OpenLineage by OpenLineage.

From the class MockBigQueryRelationProvider, method createRelationInternal:

@Override
public BigQueryRelation createRelationInternal(
        SQLContext sqlContext, Map<String, String> parameters, Option<StructType> schema) {
    // Build a Guice injector from the Spark context, the data source options, and the optional schema.
    Injector injector = INJECTOR.createGuiceInjector(sqlContext, parameters, schema);
    SparkBigQueryConfig config = injector.getInstance(SparkBigQueryConfig.class);
    BigQueryClient bigQueryClient = injector.getInstance(BigQueryClient.class);
    // Resolve the table metadata that the relation will expose to Spark.
    TableInfo tableInfo = bigQueryClient.getReadTable(config.toReadTableOptions());
    // An anonymous Key subclass is required to inject the generic type Dataset<Row>.
    Dataset<Row> testRecords = injector.getInstance(new Key<Dataset<Row>>() {});
    return new MockBigQueryRelation(config, tableInfo, sqlContext, testRecords);
}
Imports used in this example:

import com.google.cloud.spark.bigquery.SparkBigQueryConfig;
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClient;
import com.google.cloud.spark.bigquery.repackaged.com.google.inject.Injector;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
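The new Key<Dataset<Row>>() {} call above relies on Guice's type-literal trick: an anonymous subclass preserves the generic type argument at runtime, which a plain Dataset.class token cannot express. A minimal, self-contained sketch of the same pattern; the List<String> binding is purely illustrative and not part of either project:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import java.util.Arrays;
import java.util.List;

public class GenericKeyDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                // A plain Class token cannot express List<String>; a Key subclass can.
                bind(new Key<List<String>>() {}).toInstance(Arrays.asList("a", "b"));
            }
        });
        // The anonymous subclass of Key captures the generic type at runtime.
        List<String> values = injector.getInstance(new Key<List<String>>() {});
        System.out.println(values);
    }
}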

Example 2 with SparkBigQueryConfig

Use of com.google.cloud.spark.bigquery.SparkBigQueryConfig in project spark-bigquery-connector by GoogleCloudDataproc.

From the class BigQueryDataSourceV2, method createWriter:

/**
 * Returns a DataSourceWriter for the specified parameters. If the table already exists and
 * the SaveMode is "Ignore", Optional.empty() is returned.
 */
@Override
public Optional<DataSourceWriter> createWriter(String writeUUID, StructType schema, SaveMode mode, DataSourceOptions options) {
    Injector injector = createInjector(schema, options.asMap(), new BigQueryDataSourceWriterModule(writeUUID, schema, mode));
    // First verify whether we need to do anything at all, based on the table's existence
    // and the save mode.
    BigQueryClient bigQueryClient = injector.getInstance(BigQueryClient.class);
    SparkBigQueryConfig config = injector.getInstance(SparkBigQueryConfig.class);
    TableInfo table = bigQueryClient.getTable(config.getTableId());
    if (table != null) {
        // table already exists
        if (mode == SaveMode.Ignore) {
            return Optional.empty();
        }
        if (mode == SaveMode.ErrorIfExists) {
            throw new IllegalArgumentException(
                String.format(
                    "SaveMode is set to ErrorIfExists and table '%s' already exists. Did you want "
                        + "to add data to the table by setting the SaveMode to Append? Example: "
                        + "df.write.format.options.mode(\"append\").save()",
                    BigQueryUtil.friendlyTableName(table.getTableId())));
        }
    } else {
        // The table does not exist. If the CreateDisposition is CREATE_NEVER, there is no
        // point in writing the data to GCS in the first place, as the load is going to fail
        // on the BigQuery side anyway.
        boolean createNever =
            config.getCreateDisposition()
                .map(createDisposition -> createDisposition == JobInfo.CreateDisposition.CREATE_NEVER)
                .orElse(false);
        if (createNever) {
            throw new IllegalArgumentException(
                String.format(
                    "For table %s the Create Disposition is CREATE_NEVER and the table does not"
                        + " exist. Aborting the insert",
                    BigQueryUtil.friendlyTableName(config.getTableId())));
        }
    }
    // Choose the writer context that matches the configured write method.
    DataSourceWriterContext dataSourceWriterContext = null;
    switch (config.getWriteMethod()) {
        case DIRECT:
            dataSourceWriterContext = injector.getInstance(BigQueryDirectDataSourceWriterContext.class);
            break;
        case INDIRECT:
            dataSourceWriterContext = injector.getInstance(BigQueryIndirectDataSourceWriterContext.class);
            break;
    }
    return Optional.of(new BigQueryDataSourceWriter(dataSourceWriterContext));
}
Imports used in this example:

import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableInfo;
import com.google.cloud.bigquery.connector.common.BigQueryClient;
import com.google.cloud.bigquery.connector.common.BigQueryUtil;
import com.google.cloud.spark.bigquery.SparkBigQueryConfig;
import com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderContext;
import com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderModule;
import com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceWriterModule;
import com.google.cloud.spark.bigquery.v2.context.BigQueryDirectDataSourceWriterContext;
import com.google.cloud.spark.bigquery.v2.context.BigQueryIndirectDataSourceWriterContext;
import com.google.cloud.spark.bigquery.v2.context.DataSourceWriterContext;
import com.google.inject.Injector;
import java.util.Optional;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.sources.v2.DataSourceOptions;
import org.apache.spark.sql.sources.v2.DataSourceV2;
import org.apache.spark.sql.sources.v2.ReadSupport;
import org.apache.spark.sql.sources.v2.WriteSupport;
import org.apache.spark.sql.sources.v2.reader.DataSourceReader;
import org.apache.spark.sql.sources.v2.writer.DataSourceWriter;
import org.apache.spark.sql.types.StructType;
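From the user's side, this createWriter path is reached through an ordinary DataFrame write against the connector. A minimal sketch, assuming the connector is on the classpath; the table name, bucket, and input path are placeholders:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class WriteDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("write-demo").getOrCreate();
        // Hypothetical input; all names below are placeholders.
        Dataset<Row> df = spark.read().json("gs://my-bucket/input.json");
        df.write()
            .format("bigquery")
            .option("table", "my_dataset.my_table")
            // Needed when data is staged through GCS (the INDIRECT write method).
            .option("temporaryGcsBucket", "my-staging-bucket")
            // SaveMode.Ignore would make createWriter return Optional.empty() for an
            // existing table; ErrorIfExists would throw instead.
            .mode(SaveMode.Append)
            .save();
        spark.stop();
    }
}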

Example 3 with SparkBigQueryConfig

Use of com.google.cloud.spark.bigquery.SparkBigQueryConfig in project OpenLineage by OpenLineage.

From the class LogicalPlanSerializerTest, method testSerializeBigQueryPlan:

@Test
public void testSerializeBigQueryPlan() throws IOException {
    String query = "SELECT date FROM bigquery-public-data.google_analytics_sample.test";
    System.setProperty("GOOGLE_CLOUD_PROJECT", "test_serialization");
    SparkBigQueryConfig config = SparkBigQueryConfig.from(
        ImmutableMap.of(
            "query", query, "dataset", "test-dataset",
            "maxparallelism", "2", "partitionexpirationms", "2"),
        ImmutableMap.of(), new Configuration(), 10, SQLConf.get(), "", Optional.empty());
    BigQueryRelation bigQueryRelation = new BigQueryRelation(
        config,
        TableInfo.newBuilder(TableId.of("dataset", "test"), new TestTableDefinition()).build(),
        mock(SQLContext.class));
    LogicalRelation logicalRelation = new LogicalRelation(
        bigQueryRelation,
        Seq$.MODULE$.<AttributeReference>newBuilder()
            .$plus$eq(new AttributeReference("name", StringType$.MODULE$, false, Metadata.empty(),
                ExprId.apply(1L), Seq$.MODULE$.<String>empty()))
            .result(),
        Option.empty(), false);
    InsertIntoDataSourceCommand command =
        new InsertIntoDataSourceCommand(logicalRelation, logicalRelation, false);
    Map<String, Object> commandActualNode = objectMapper.readValue(logicalPlanSerializer.serialize(command), mapTypeReference);
    Map<String, Object> bigqueryActualNode = objectMapper.readValue(logicalPlanSerializer.serialize(logicalRelation), mapTypeReference);
    Path expectedCommandNodePath = Paths.get("src", "test", "resources", "test_data", "serde", "insertintods-node.json");
    Path expectedBigQueryRelationNodePath = Paths.get("src", "test", "resources", "test_data", "serde", "bigqueryrelation-node.json");
    Map<String, Object> expectedCommandNode = objectMapper.readValue(expectedCommandNodePath.toFile(), mapTypeReference);
    Map<String, Object> expectedBigQueryRelationNode = objectMapper.readValue(expectedBigQueryRelationNodePath.toFile(), mapTypeReference);
    assertThat(commandActualNode).satisfies(new MatchesMapRecursively(expectedCommandNode, Collections.singleton("exprId")));
    assertThat(bigqueryActualNode).satisfies(new MatchesMapRecursively(expectedBigQueryRelationNode, Collections.singleton("exprId")));
}
Imports used in this example:

import com.google.cloud.spark.bigquery.BigQueryRelation;
import com.google.cloud.spark.bigquery.SparkBigQueryConfig;
import java.nio.file.Path;
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.catalyst.expressions.AttributeReference;
import org.apache.spark.sql.execution.datasources.InsertIntoDataSourceCommand;
import org.apache.spark.sql.execution.datasources.LogicalRelation;
import org.junit.jupiter.api.Test;
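The test compares serialized plans against golden JSON files while ignoring unstable keys such as exprId. The file-to-map half of that pattern is plain Jackson; a minimal, self-contained sketch in which GoldenFileDemo and expected.json are illustrative names, not part of OpenLineage:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.Map;

public class GoldenFileDemo {
    public static void main(String[] args) throws IOException {
        ObjectMapper objectMapper = new ObjectMapper();
        // TypeReference preserves the generic Map<String, Object> target type.
        TypeReference<Map<String, Object>> mapTypeReference =
            new TypeReference<Map<String, Object>>() {};
        Map<String, Object> expected = objectMapper.readValue(
            Paths.get("src", "test", "resources", "expected.json").toFile(), mapTypeReference);
        Map<String, Object> actual = objectMapper.readValue(
            "{\"class\":\"LogicalRelation\",\"exprId\":42}", mapTypeReference);
        // A real comparison would recurse into nested maps and skip ignored keys
        // such as "exprId", as MatchesMapRecursively does in the test above.
        System.out.println(expected.equals(actual));
    }
}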

Aggregations

SparkBigQueryConfig (com.google.cloud.spark.bigquery.SparkBigQueryConfig): 3 examples
JobInfo (com.google.cloud.bigquery.JobInfo): 1
TableInfo (com.google.cloud.bigquery.TableInfo): 1
BigQueryClient (com.google.cloud.bigquery.connector.common.BigQueryClient): 1
BigQueryUtil (com.google.cloud.bigquery.connector.common.BigQueryUtil): 1
BigQueryRelation (com.google.cloud.spark.bigquery.BigQueryRelation): 1
BigQueryClient (com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClient): 1
Injector (com.google.cloud.spark.bigquery.repackaged.com.google.inject.Injector): 1
BigQueryDataSourceReaderContext (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderContext): 1
BigQueryDataSourceReaderModule (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceReaderModule): 1
BigQueryDataSourceWriterModule (com.google.cloud.spark.bigquery.v2.context.BigQueryDataSourceWriterModule): 1
BigQueryDirectDataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.BigQueryDirectDataSourceWriterContext): 1
BigQueryIndirectDataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.BigQueryIndirectDataSourceWriterContext): 1
DataSourceWriterContext (com.google.cloud.spark.bigquery.v2.context.DataSourceWriterContext): 1
Injector (com.google.inject.Injector): 1
Path (java.nio.file.Path): 1
Optional (java.util.Optional): 1
Configuration (org.apache.hadoop.conf.Configuration): 1
Dataset (org.apache.spark.sql.Dataset): 1
Row (org.apache.spark.sql.Row): 1