Example 1 with ReadFromTableFn

Use of keyviz.ReadData.ReadFromTableFn in the project java-docs-samples by GoogleCloudPlatform.

From the class KeyVizArtTest, method testWriteAndRead:

@Test
public void testWriteAndRead() {
    // Write test data to the table.
    LoadData.main(new String[] {
        "--bigtableProjectId=" + projectId,
        "--bigtableInstanceId=" + instanceId,
        "--bigtableTableId=" + TABLE_ID,
        "--gigabytesWritten=" + GIGABYTES_WRITTEN,
        "--megabytesPerRow=" + MEGABYTES_PER_ROW});

    // Verify the write with a full-table scan using the HBase client.
    long count = 0;
    try (Connection connection = BigtableConfiguration.connect(projectId, instanceId)) {
        Table table = connection.getTable(TableName.valueOf(TABLE_ID));
        Scan scan = new Scan();
        ResultScanner rows = table.getScanner(scan);
        for (Result row : rows) {
            count++;
        }
    } catch (IOException e) {
        System.out.println("Unable to initialize service client, as a network error occurred: \n" + e.toString());
    }
    assertEquals(10, count);

    // Read back with the full-grid pattern; all 10 rows should be reported.
    ReadDataOptions options = PipelineOptionsFactory.fromArgs(
            "--bigtableProjectId=" + projectId,
            "--bigtableInstanceId=" + instanceId,
            "--bigtableTableId=" + TABLE_ID,
            "--gigabytesWritten=" + GIGABYTES_WRITTEN,
            "--megabytesPerRow=" + MEGABYTES_PER_ROW,
            "--filePath=gs://keyviz-art/maxgrid.txt")
        .withValidation().as(ReadDataOptions.class);
    Pipeline p = Pipeline.create(options);
    CloudBigtableTableConfiguration bigtableTableConfig =
        new CloudBigtableTableConfiguration.Builder()
            .withProjectId(options.getBigtableProjectId())
            .withInstanceId(options.getBigtableInstanceId())
            .withTableId(options.getBigtableTableId())
            .build();
    // Run a single read pass over the table.
    p.apply(Create.of(1L)).apply(ParDo.of(new ReadFromTableFn(bigtableTableConfig, options)));
    p.run().waitUntilFinish();
    String output = bout.toString();
    assertThat(output).contains("got 10 rows");

    // Read again with the half-grid pattern; only half the rows should be reported.
    options = PipelineOptionsFactory.fromArgs(
            "--bigtableProjectId=" + projectId,
            "--bigtableInstanceId=" + instanceId,
            "--bigtableTableId=" + TABLE_ID,
            "--gigabytesWritten=" + GIGABYTES_WRITTEN,
            "--megabytesPerRow=" + MEGABYTES_PER_ROW,
            "--filePath=gs://keyviz-art/halfgrid.txt")
        .withValidation().as(ReadDataOptions.class);
    p = Pipeline.create(options);
    p.apply(Create.of(1L)).apply(ParDo.of(new ReadFromTableFn(bigtableTableConfig, options)));
    p.run().waitUntilFinish();
    output = bout.toString();
    assertThat(output).contains("got 5 rows");
}
Also used:
- Table (org.apache.hadoop.hbase.client.Table)
- ResultScanner (org.apache.hadoop.hbase.client.ResultScanner)
- ReadDataOptions (keyviz.ReadData.ReadDataOptions)
- Connection (org.apache.hadoop.hbase.client.Connection)
- IOException (java.io.IOException)
- Result (org.apache.hadoop.hbase.client.Result)
- Pipeline (org.apache.beam.sdk.Pipeline)
- CloudBigtableTableConfiguration (com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration)
- ReadFromTableFn (keyviz.ReadData.ReadFromTableFn)
- Scan (org.apache.hadoop.hbase.client.Scan)
- Test (org.junit.Test)
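
The test only exercises ReadFromTableFn through its console output ("got N rows"); the class itself is defined in keyviz.ReadData. Below is a minimal, hypothetical sketch of a DoFn with the same shape, assuming it extends AbstractCloudBigtableTableDoFn and counts the rows of a plain full-table scan. The real class also uses the --filePath grid image to decide which row ranges to read; that logic is omitted here.

// Hypothetical sketch only -- the real ReadFromTableFn lives in keyviz.ReadData.
import com.google.cloud.bigtable.beam.AbstractCloudBigtableTableDoFn;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import java.io.IOException;
import keyviz.ReadData.ReadDataOptions;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ReadFromTableFnSketch extends AbstractCloudBigtableTableDoFn<Long, Void> {

    // Keep only the table id as state so the DoFn stays serializable.
    private final String tableId;

    public ReadFromTableFnSketch(CloudBigtableTableConfiguration config, ReadDataOptions options) {
        super(config);
        this.tableId = options.getBigtableTableId();
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws IOException {
        // Full-table scan; the real class derives scan ranges from the grid file.
        Table table = getConnection().getTable(TableName.valueOf(tableId));
        long count = 0;
        try (ResultScanner rows = table.getScanner(new Scan())) {
            for (Result ignored : rows) {
                count++;
            }
        }
        // This is the line the test greps for ("got 10 rows" / "got 5 rows").
        System.out.println("got " + count + " rows");
    }
}

Extending AbstractCloudBigtableTableDoFn rather than a plain DoFn lets the function obtain a managed Bigtable connection via getConnection() instead of opening a new one per bundle.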

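ReadDataOptions is also defined in keyviz.ReadData. Judging only from the flags the test passes to PipelineOptionsFactory and the getters called on the options object, the interface plausibly looks something like the sketch below; the base interface, field types, and descriptions are assumptions.

// Hypothetical reconstruction from the flags above; the real interface in
// keyviz.ReadData may extend a different base and use different field types.
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;

public interface ReadDataOptionsSketch extends PipelineOptions {

    @Description("Project ID of the Bigtable instance.")
    String getBigtableProjectId();
    void setBigtableProjectId(String value);

    @Description("ID of the Bigtable instance.")
    String getBigtableInstanceId();
    void setBigtableInstanceId(String value);

    @Description("ID of the table to read.")
    String getBigtableTableId();
    void setBigtableTableId(String value);

    @Description("Gigabytes written by LoadData; determines the row count.")
    long getGigabytesWritten();
    void setGigabytesWritten(long value);

    @Description("Approximate size of each row in megabytes.")
    long getMegabytesPerRow();
    void setMegabytesPerRow(long value);

    @Description("GCS path of the grid file that drives the read pattern.")
    String getFilePath();
    void setFilePath(String value);
}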