Use of com.google.cloud.bigquery.connector.common.ReadRowsHelper in the spark-bigquery-connector project by GoogleCloudDataproc.
The createPartitionReaderContext method of the class BigQueryInputPartitionContext:
@Override
public InputPartitionReaderContext<InternalRow> createPartitionReaderContext() {
  ReadRowsRequest.Builder readRowsRequest =
      ReadRowsRequest.newBuilder().setReadStream(streamName);
  ReadRowsHelper readRowsHelper =
      new ReadRowsHelper(bigQueryReadClientFactory, readRowsRequest, options);
  Iterator<ReadRowsResponse> readRowsResponses = readRowsHelper.readRows();
  return new BigQueryInputPartitionReaderContext(readRowsResponses, converter, readRowsHelper);
}
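Note that the returned reader context receives both the response iterator and the ReadRowsHelper itself, so the helper's underlying stream can be closed once the partition is exhausted. A dependency-free sketch of that ownership pattern (all names here are hypothetical; a plain AutoCloseable stands in for ReadRowsHelper):

```java
import java.util.Iterator;

// Hypothetical sketch of the reader-context pattern above: the context owns
// the response iterator plus the helper so it can release the underlying
// read stream when Spark is done with the partition.
public class ReaderContextSketch<T> implements AutoCloseable {
  private final Iterator<T> responses;
  private final AutoCloseable helper; // stands in for ReadRowsHelper
  private T current;

  public ReaderContextSketch(Iterator<T> responses, AutoCloseable helper) {
    this.responses = responses;
    this.helper = helper;
  }

  // Advance to the next response if one remains (mirrors next() on a
  // partition reader).
  public boolean next() {
    if (!responses.hasNext()) {
      return false;
    }
    current = responses.next();
    return true;
  }

  // Return the most recently fetched response (mirrors get()).
  public T get() {
    return current;
  }

  @Override
  public void close() throws Exception {
    helper.close(); // releases the stream held by the helper
  }
}
```

The key design point this illustrates: cleanup responsibility travels with the reader, not with the code that opened the stream.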
Use of com.google.cloud.bigquery.connector.common.ReadRowsHelper in the spark-bigquery-connector project by GoogleCloudDataproc.
The createPartitionReaderContext method of the class ArrowInputPartitionContext:
@Override
public InputPartitionReaderContext<ColumnarBatch> createPartitionReaderContext() {
  BigQueryStorageReadRowsTracer tracer =
      tracerFactory.newReadRowsTracer(Joiner.on(",").join(streamNames));
  List<ReadRowsRequest.Builder> readRowsRequests =
      streamNames.stream()
          .map(name -> ReadRowsRequest.newBuilder().setReadStream(name))
          .collect(Collectors.toList());
  ReadRowsHelper readRowsHelper =
      new ReadRowsHelper(bigQueryReadClientFactory, readRowsRequests, options);
  tracer.startStream();
  Iterator<ReadRowsResponse> readRowsResponses = readRowsHelper.readRows();
  return new ArrowColumnBatchPartitionReaderContext(
      readRowsResponses,
      serializedArrowSchema,
      readRowsHelper,
      selectedFields,
      tracer,
      userProvidedSchema.toJavaUtil(),
      options.numBackgroundThreads());
}
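Unlike the single-stream variant above, the Arrow context fans out one ReadRowsRequest builder per read stream before handing the whole list to ReadRowsHelper. As a dependency-free sketch of that fan-out step (plain strings stand in for ReadRowsRequest.Builder; the method name is hypothetical):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the multi-stream fan-out used by the Arrow context: each stream
// name is mapped to its own request, and the resulting list is passed to the
// helper in one call.
public class MultiStreamRequestSketch {
  static List<String> buildRequests(List<String> streamNames) {
    return streamNames.stream()
        .map(name -> "ReadRowsRequest{stream=" + name + "}")
        .collect(Collectors.toList());
  }
}
```

Passing all builders to a single ReadRowsHelper lets the helper manage the streams together, which is why only one helper is handed to ArrowColumnBatchPartitionReaderContext.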