
Example 61 with PCollection

Use of org.apache.beam.sdk.values.PCollection in project beam by apache.

From the class CreateStreamTest, method testInStreamingModeCountByKey.

@Test
public void testInStreamingModeCountByKey() throws Exception {
    Instant instant = new Instant(0);
    CreateStream<KV<Integer, Long>> kvSource =
        CreateStream.of(KvCoder.of(VarIntCoder.of(), VarLongCoder.of()), batchDuration())
            .emptyBatch()
            .advanceWatermarkForNextBatch(instant)
            .nextBatch(
                TimestampedValue.of(KV.of(1, 100L), instant.plus(Duration.standardSeconds(3L))),
                TimestampedValue.of(KV.of(1, 300L), instant.plus(Duration.standardSeconds(4L))))
            .advanceWatermarkForNextBatch(instant.plus(Duration.standardSeconds(7L)))
            .nextBatch(
                TimestampedValue.of(KV.of(1, 400L), instant.plus(Duration.standardSeconds(8L))))
            .advanceNextBatchWatermarkToInfinity();
    PCollection<KV<Integer, Long>> output =
        p.apply("create kv Source", kvSource)
            .apply(
                "window input",
                Window.<KV<Integer, Long>>into(FixedWindows.of(Duration.standardSeconds(3L)))
                    .withAllowedLateness(Duration.ZERO))
            .apply(Count.perKey());
    PAssert.that("Wrong count value ", output).satisfies(
        (SerializableFunction<Iterable<KV<Integer, Long>>, Void>) input -> {
        for (KV<Integer, Long> element : input) {
            if (element.getKey() == 1) {
                Long countValue = element.getValue();
                assertNotEquals("Count Value is 0 !!!", 0L, countValue.longValue());
            } else {
                fail("Unknown key in the output PCollection");
            }
        }
        return null;
    });
    p.run();
}
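With FixedWindows of three seconds, the elements at 3s and 4s fall into the window [3s, 6s) and the element at 8s into [6s, 9s), so key 1 is emitted once per window. The assertion above only checks that no count is zero; a tighter check could pin the per-window counts exactly. A minimal sketch, assuming the same pipeline as above:

// Hypothetical tightening of the assertion: key 1 counts to 2 in
// window [3s, 6s) and to 1 in window [6s, 9s).
PAssert.that(output).containsInAnyOrder(KV.of(1, 2L), KV.of(1, 1L));

PAssert.that(...).containsInAnyOrder asserts over the full contents of the PCollection across windows, which is why both per-window counts for key 1 appear in one list.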

Example 62 with PCollection

Use of org.apache.beam.sdk.values.PCollection in project beam by apache.

From the class SparkCoGroupByKeyStreamingTest, method testInStreamingMode.

@Category(StreamingTest.class)
@Test
public void testInStreamingMode() throws Exception {
    Instant instant = new Instant(0);
    CreateStream<KV<Integer, Integer>> source1 =
        CreateStream.of(KvCoder.of(VarIntCoder.of(), VarIntCoder.of()), batchDuration())
            .emptyBatch()
            .advanceWatermarkForNextBatch(instant)
            .nextBatch(
                TimestampedValue.of(KV.of(1, 1), instant),
                TimestampedValue.of(KV.of(1, 2), instant),
                TimestampedValue.of(KV.of(1, 3), instant))
            .advanceWatermarkForNextBatch(instant.plus(Duration.standardSeconds(1L)))
            .nextBatch(
                TimestampedValue.of(KV.of(2, 4), instant.plus(Duration.standardSeconds(1L))),
                TimestampedValue.of(KV.of(2, 5), instant.plus(Duration.standardSeconds(1L))),
                TimestampedValue.of(KV.of(2, 6), instant.plus(Duration.standardSeconds(1L))))
            .advanceNextBatchWatermarkToInfinity();
    CreateStream<KV<Integer, Integer>> source2 =
        CreateStream.of(KvCoder.of(VarIntCoder.of(), VarIntCoder.of()), batchDuration())
            .emptyBatch()
            .advanceWatermarkForNextBatch(instant)
            .nextBatch(
                TimestampedValue.of(KV.of(1, 11), instant),
                TimestampedValue.of(KV.of(1, 12), instant),
                TimestampedValue.of(KV.of(1, 13), instant))
            .advanceWatermarkForNextBatch(instant.plus(Duration.standardSeconds(1L)))
            .nextBatch(
                TimestampedValue.of(KV.of(2, 14), instant.plus(Duration.standardSeconds(1L))),
                TimestampedValue.of(KV.of(2, 15), instant.plus(Duration.standardSeconds(1L))),
                TimestampedValue.of(KV.of(2, 16), instant.plus(Duration.standardSeconds(1L))))
            .advanceNextBatchWatermarkToInfinity();
    PCollection<KV<Integer, Integer>> input1 =
        pipeline.apply("create source1", source1)
            .apply(
                "window input1",
                Window.<KV<Integer, Integer>>into(FixedWindows.of(Duration.standardSeconds(3L)))
                    .withAllowedLateness(Duration.ZERO));
    PCollection<KV<Integer, Integer>> input2 =
        pipeline.apply("create source2", source2)
            .apply(
                "window input2",
                Window.<KV<Integer, Integer>>into(FixedWindows.of(Duration.standardSeconds(3L)))
                    .withAllowedLateness(Duration.ZERO));
    PCollection<KV<Integer, CoGbkResult>> output =
        KeyedPCollectionTuple.of(INPUT1_TAG, input1)
            .and(INPUT2_TAG, input2)
            .apply(CoGroupByKey.create());
    PAssert.that("Wrong output of the join using CoGroupByKey in streaming mode", output)
        .satisfies((SerializableFunction<Iterable<KV<Integer, CoGbkResult>>, Void>) input -> {
        assertEquals("Wrong size of the output PCollection", 2, Iterables.size(input));
        for (KV<Integer, CoGbkResult> element : input) {
            if (element.getKey() == 1) {
                Iterable<Integer> input1Elements = element.getValue().getAll(INPUT1_TAG);
                assertEquals("Wrong number of values for output elements for tag input1 and key 1", 3, Iterables.size(input1Elements));
                assertThat("Elements of PCollection input1 for key \"1\" are not present in the output PCollection", input1Elements, containsInAnyOrder(1, 2, 3));
                Iterable<Integer> input2Elements = element.getValue().getAll(INPUT2_TAG);
                assertEquals("Wrong number of values for output elements for tag input2 and key 1", 3, Iterables.size(input2Elements));
                assertThat("Elements of PCollection input2 for key \"1\" are not present in the output PCollection", input2Elements, containsInAnyOrder(11, 12, 13));
            } else if (element.getKey() == 2) {
                Iterable<Integer> input1Elements = element.getValue().getAll(INPUT1_TAG);
                assertEquals("Wrong number of values for output elements for tag input1 and key 2", 3, Iterables.size(input1Elements));
                assertThat("Elements of PCollection input1 for key \"2\" are not present in the output PCollection", input1Elements, containsInAnyOrder(4, 5, 6));
                Iterable<Integer> input2Elements = element.getValue().getAll(INPUT2_TAG);
                assertEquals("Wrong number of values for output elements for tag input2 and key 2", 3, Iterables.size(input2Elements));
                assertThat("Elements of PCollection input2 for key \"2\" are not present in the output PCollection", input2Elements, containsInAnyOrder(14, 15, 16));
            } else {
                fail("Unknown key in the output PCollection");
            }
        }
        return null;
    });
    pipeline.run();
}
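The tags INPUT1_TAG and INPUT2_TAG are fields of the test class, not shown in this snippet. They would be declared roughly as follows (a sketch; the string names are assumptions):

private static final TupleTag<Integer> INPUT1_TAG = new TupleTag<>("input1");
private static final TupleTag<Integer> INPUT2_TAG = new TupleTag<>("input2");

KeyedPCollectionTuple associates each windowed input with its tag, and getValue().getAll(tag) in the assertion retrieves the values that the correspondingly tagged input contributed for that key.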

Example 63 with PCollection

Use of org.apache.beam.sdk.values.PCollection in project beam by apache.

From the class BigQueryIOWriteTest, method writeDynamicDestinations.

public void writeDynamicDestinations(boolean schemas, boolean autoSharding) throws Exception {
    final Schema schema =
        Schema.builder().addField("name", FieldType.STRING).addField("id", FieldType.INT32).build();
    final Pattern userPattern = Pattern.compile("([a-z]+)([0-9]+)");
    final PCollectionView<List<String>> sideInput1 =
        p.apply("Create SideInput 1", Create.of("a", "b", "c").withCoder(StringUtf8Coder.of()))
            .apply("asList", View.asList());
    final PCollectionView<Map<String, String>> sideInput2 =
        p.apply("Create SideInput2", Create.of(KV.of("a", "a"), KV.of("b", "b"), KV.of("c", "c")))
            .apply("AsMap", View.asMap());
    final List<String> allUsernames = ImmutableList.of("bill", "bob", "randolph");
    List<String> userList = Lists.newArrayList();
    // Generate enough records to exceed the per-bundle writer limit and exercise
    // WriteGroupedRecordsToFiles.
    for (int i = 0; i < BatchLoads.DEFAULT_MAX_NUM_WRITERS_PER_BUNDLE * 10; ++i) {
        // Every user has 10 nicknames.
        for (int j = 0; j < 10; ++j) {
            String nickname = allUsernames.get(ThreadLocalRandom.current().nextInt(allUsernames.size()));
            userList.add(nickname + i);
        }
    }
    PCollection<String> users =
        p.apply("CreateUsers", Create.of(userList))
            .apply(Window.into(new PartitionedGlobalWindows<>(arg -> arg)));
    if (useStreaming) {
        users = users.setIsBoundedInternal(PCollection.IsBounded.UNBOUNDED);
    }
    if (schemas) {
        users = users.setSchema(schema, TypeDescriptors.strings(), user -> {
            Matcher matcher = userPattern.matcher(user);
            checkState(matcher.matches());
            return Row.withSchema(schema)
                .addValue(matcher.group(1))
                .addValue(Integer.valueOf(matcher.group(2)))
                .build();
        }, r -> r.getString(0) + r.getInt32(1));
    }
    // Use a partition decorator to verify that partition decorators are supported.
    final String partitionDecorator = "20171127";
    BigQueryIO.Write<String> write =
        BigQueryIO.<String>write()
            .withTestServices(fakeBqServices)
            .withMaxFilesPerBundle(5)
            .withMaxFileSize(10)
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
            .to(new StringLongDestinations() {

        @Override
        public Long getDestination(ValueInSingleWindow<String> element) {
            assertThat(element.getWindow(), Matchers.instanceOf(PartitionedGlobalWindow.class));
            Matcher matcher = userPattern.matcher(element.getValue());
            checkState(matcher.matches());
            // a table.
            return Long.valueOf(matcher.group(2));
        }

        @Override
        public TableDestination getTable(Long userId) {
            verifySideInputs();
            // Each user in its own table.
            return new TableDestination(
                "dataset-id.userid-" + userId + "$" + partitionDecorator,
                "table for userid " + userId);
        }

        @Override
        public TableSchema getSchema(Long userId) {
            verifySideInputs();
            return new TableSchema()
                .setFields(
                    ImmutableList.of(
                        new TableFieldSchema().setName("name").setType("STRING"),
                        new TableFieldSchema().setName("id").setType("INTEGER")));
        }

        @Override
        public List<PCollectionView<?>> getSideInputs() {
            return ImmutableList.of(sideInput1, sideInput2);
        }

        private void verifySideInputs() {
            assertThat(sideInput(sideInput1), containsInAnyOrder("a", "b", "c"));
            Map<String, String> mapSideInput = sideInput(sideInput2);
            assertEquals(3, mapSideInput.size());
            assertThat(mapSideInput, allOf(hasEntry("a", "a"), hasEntry("b", "b"), hasEntry("c", "c")));
        }
    }).withoutValidation();
    if (schemas) {
        write = write.useBeamSchema();
    } else {
        write = write.withFormatFunction(user -> {
            Matcher matcher = userPattern.matcher(user);
            checkState(matcher.matches());
            return new TableRow().set("name", matcher.group(1)).set("id", matcher.group(2));
        });
    }
    if (autoSharding) {
        write = write.withAutoSharding();
    }
    WriteResult results = users.apply("WriteBigQuery", write);
    if (!useStreaming && !useStorageApi) {
        PCollection<TableDestination> successfulBatchInserts = results.getSuccessfulTableLoads();
        TableDestination[] expectedTables =
            userList.stream()
                .map(user -> {
                    Matcher matcher = userPattern.matcher(user);
                    checkState(matcher.matches());
                    String userId = matcher.group(2);
                    return new TableDestination(
                        String.format("project-id:dataset-id.userid-%s$20171127", userId),
                        String.format("table for userid %s", userId));
                })
                .distinct()
                .toArray(TableDestination[]::new);
        PAssert.that(successfulBatchInserts.apply(Distinct.create())).containsInAnyOrder(expectedTables);
    }
    p.run();
    Map<Long, List<TableRow>> expectedTableRows = Maps.newHashMap();
    for (String anUserList : userList) {
        Matcher matcher = userPattern.matcher(anUserList);
        checkState(matcher.matches());
        String nickname = matcher.group(1);
        Long userid = Long.valueOf(matcher.group(2));
        List<TableRow> expected = expectedTableRows.computeIfAbsent(userid, k -> Lists.newArrayList());
        expected.add(new TableRow().set("name", nickname).set("id", userid.toString()));
    }
    for (Map.Entry<Long, List<TableRow>> entry : expectedTableRows.entrySet()) {
        assertThat(
            fakeDatasetService.getAllRows("project-id", "dataset-id", "userid-" + entry.getKey()),
            containsInAnyOrder(Iterables.toArray(entry.getValue(), TableRow.class)));
    }
}
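The anonymous StringLongDestinations above appears to be a DynamicDestinations<String, Long> helper: getDestination extracts a per-element key, and getTable/getSchema map that key to a table, here with the "$20171127" suffix so writes target a specific table partition. When no side inputs or per-destination schemas are needed, BigQueryIO.Write also accepts a plain per-element table function; a minimal sketch (the table names here are hypothetical):

// Route every element with a simple function instead of DynamicDestinations.
BigQueryIO.<String>write()
    .to((ValueInSingleWindow<String> e) ->
        new TableDestination("dataset-id.users", "users table"))
    .withFormatFunction(user -> new TableRow().set("name", user))
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
    .withoutValidation();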

Example 64 with PCollection

Use of org.apache.beam.sdk.values.PCollection in project beam by apache.

From the class BigQueryIOReadTest, method testReadFromTable.

private void testReadFromTable(boolean useTemplateCompatibility, boolean useReadTableRows) throws IOException, InterruptedException {
    Table sometable = new Table();
    sometable.setSchema(
        new TableSchema()
            .setFields(
                ImmutableList.of(
                    new TableFieldSchema().setName("name").setType("STRING"),
                    new TableFieldSchema().setName("number").setType("INTEGER"))));
    sometable.setTableReference(
        new TableReference()
            .setProjectId("non-executing-project")
            .setDatasetId("somedataset")
            .setTableId("sometable"));
    sometable.setNumBytes(1024L * 1024L);
    FakeDatasetService fakeDatasetService = new FakeDatasetService();
    fakeDatasetService.createDataset("non-executing-project", "somedataset", "", "", null);
    fakeDatasetService.createTable(sometable);
    List<TableRow> records =
        Lists.newArrayList(
            new TableRow().set("name", "a").set("number", 1L),
            new TableRow().set("name", "b").set("number", 2L),
            new TableRow().set("name", "c").set("number", 3L));
    fakeDatasetService.insertAll(sometable.getTableReference(), records, null);
    FakeBigQueryServices fakeBqServices =
        new FakeBigQueryServices()
            .withJobService(new FakeJobService())
            .withDatasetService(fakeDatasetService);
    PTransform<PBegin, PCollection<TableRow>> readTransform;
    if (useReadTableRows) {
        BigQueryIO.Read read =
            BigQueryIO.read()
                .from("non-executing-project:somedataset.sometable")
                .withTestServices(fakeBqServices)
                .withoutValidation();
        readTransform = useTemplateCompatibility ? read.withTemplateCompatibility() : read;
    } else {
        BigQueryIO.TypedRead<TableRow> read =
            BigQueryIO.readTableRows()
                .from("non-executing-project:somedataset.sometable")
                .withTestServices(fakeBqServices)
                .withoutValidation();
        readTransform = useTemplateCompatibility ? read.withTemplateCompatibility() : read;
    }
    PCollection<KV<String, Long>> output =
        p.apply(readTransform)
            .apply(ParDo.of(new DoFn<TableRow, KV<String, Long>>() {

        @ProcessElement
        public void processElement(ProcessContext c) throws Exception {
            c.output(KV.of((String) c.element().get("name"), Long.valueOf((String) c.element().get("number"))));
        }
    }));
    PAssert.that(output).containsInAnyOrder(ImmutableList.of(KV.of("a", 1L), KV.of("b", 2L), KV.of("c", 3L)));
    p.run();
}
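The ParDo above only reshapes each TableRow into a KV, so the same conversion can be written more compactly with MapElements. A minimal sketch, assuming the same readTransform:

PCollection<KV<String, Long>> output =
    p.apply(readTransform)
        .apply(MapElements
            // Declare the output type so the coder can be inferred.
            .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.longs()))
            .via((TableRow row) -> KV.of(
                (String) row.get("name"),
                Long.valueOf((String) row.get("number")))));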

Example 65 with PCollection

Use of org.apache.beam.sdk.values.PCollection in project beam by apache.

From the class DeadLetteredTransform, method expandInternal.

// Required to capture the generic type parameter of the PCollection.
private <RealInputT extends InputT> PCollection<OutputT> expandInternal(PCollection<RealInputT> input) {
    Coder<RealInputT> coder = input.getCoder();
    SerializableFunction<RealInputT, OutputT> localTransform = transform::apply;
    MapElements.MapWithFailures<RealInputT, OutputT, Failure> mapWithFailures =
        MapElements.into(transform.getOutputTypeDescriptor())
            .via(localTransform)
            .exceptionsInto(TypeDescriptor.of(Failure.class))
            .exceptionsVia(x -> {
        try (ByteArrayOutputStream os = new ByteArrayOutputStream()) {
            coder.encode(x.element(), os);
            return Failure.newBuilder()
                .setPayload(os.toByteArray())
                .setError(String.format(
                    "%s%n%n%s",
                    x.exception().getMessage(),
                    ExceptionUtils.getStackTrace(x.exception())))
                .build();
        }
    });
    Result<PCollection<OutputT>, Failure> result = mapWithFailures.expand(input);
    result.failures().apply(deadLetter);
    return result.output();
}
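The same exceptionsInto/exceptionsVia pattern works directly on a pipeline, without this wrapper. A minimal sketch, assuming a PCollection<String> named words (the variable names are illustrative):

WithFailures.Result<PCollection<Integer>, String> result =
    words.apply(
        MapElements.into(TypeDescriptors.integers())
            .via((String s) -> Integer.parseInt(s))
            // Route elements that threw to a failure collection instead of
            // failing the bundle.
            .exceptionsInto(TypeDescriptors.strings())
            .exceptionsVia(ee -> ee.exception().getMessage()));
PCollection<Integer> parsed = result.output();      // successfully parsed elements
PCollection<String> failures = result.failures();   // one error message per bad element

DeadLetteredTransform does exactly this, except it serializes the failing element with the input coder and ships it to the configured dead-letter sink.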
