Example 61 with ReducingStateDescriptor

Use of org.apache.flink.api.common.state.ReducingStateDescriptor in project flink by apache.

From the class AllWindowTranslationTest, method testReduceEventTime.

// ------------------------------------------------------------------------
// reduce() translation tests
// ------------------------------------------------------------------------
@Test
@SuppressWarnings("rawtypes")
public void testReduceEventTime() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
    DataStream<Tuple2<String, Integer>> window1 = source
            .windowAll(SlidingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS), Time.of(100, TimeUnit.MILLISECONDS)))
            .reduce(new DummyReducer());
    OneInputTransformation<Tuple2<String, Integer>, Tuple2<String, Integer>> transform =
            (OneInputTransformation<Tuple2<String, Integer>, Tuple2<String, Integer>>) window1.getTransformation();
    OneInputStreamOperator<Tuple2<String, Integer>, Tuple2<String, Integer>> operator = transform.getOperator();
    Assert.assertTrue(operator instanceof WindowOperator);
    WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator = (WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;
    Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
    Assert.assertTrue(winOperator.getWindowAssigner() instanceof SlidingEventTimeWindows);
    Assert.assertTrue(winOperator.getStateDescriptor() instanceof ReducingStateDescriptor);
    processElementAndEnsureOutput(winOperator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
}
Also used: ReducingStateDescriptor(org.apache.flink.api.common.state.ReducingStateDescriptor), SlidingEventTimeWindows(org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows), Tuple2(org.apache.flink.api.java.tuple.Tuple2), StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment), OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation), EventTimeTrigger(org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger), Test(org.junit.Test)
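
The DummyReducer referenced above is a helper defined elsewhere in the Flink test sources and is not shown on this page. A minimal sketch of such a reducer, assuming it simply forwards the first element (the name and behavior are assumptions; the translation test only checks that reduce() produces a ReducingStateDescriptor, not what the reducer computes):

// Hypothetical stand-in for the DummyReducer test helper: a ReduceFunction
// that keeps the first of the two values it is given.
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class DummyReducer implements ReduceFunction<Tuple2<String, Integer>> {
    @Override
    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) {
        // The concrete reduction logic is irrelevant to the translation test;
        // returning the first value is enough to exercise the reduce() code path.
        return value1;
    }
}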

Example 62 with ReducingStateDescriptor

Use of org.apache.flink.api.common.state.ReducingStateDescriptor in project flink by apache.

From the class WindowOperatorTest, method testProcessingTimeTumblingWindows.

@Test
public void testProcessingTimeTumblingWindows() throws Throwable {
    final int windowSize = 3;
    ReducingStateDescriptor<Tuple2<String, Integer>> stateDesc = new ReducingStateDescriptor<>(
            "window-contents", new SumReducer(), STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
    WindowOperator<String, Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple2<String, Integer>, TimeWindow> operator =
            new WindowOperator<>(
                    TumblingProcessingTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)), // window assigner
                    new TimeWindow.Serializer(), // window serializer
                    new TupleKeySelector(), // key selector
                    BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()), // key serializer
                    stateDesc, // per-window reducing state
                    new InternalSingleValueWindowFunction<>(new PassThroughWindowFunction<String, TimeWindow, Tuple2<String, Integer>>()),
                    ProcessingTimeTrigger.create(),
                    0, // allowed lateness
                    null); // no late-data output tag
    OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple2<String, Integer>> testHarness = createTestHarness(operator);
    ConcurrentLinkedQueue<Object> expectedOutput = new ConcurrentLinkedQueue<>();
    testHarness.open();
    testHarness.setProcessingTime(3);
    // timestamp is ignored in processing time
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), Long.MAX_VALUE));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 7000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 7000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 7000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 7000));
    testHarness.setProcessingTime(5000);
    expectedOutput.add(new StreamRecord<>(new Tuple2<>("key2", 3), 2999));
    expectedOutput.add(new StreamRecord<>(new Tuple2<>("key1", 2), 2999));
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expectedOutput, testHarness.getOutput(), new Tuple2ResultSortComparator());
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 7000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 7000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 7000));
    testHarness.setProcessingTime(7000);
    expectedOutput.add(new StreamRecord<>(new Tuple2<>("key1", 3), 5999));
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expectedOutput, testHarness.getOutput(), new Tuple2ResultSortComparator());
    testHarness.close();
}
Also used: ReducingStateDescriptor(org.apache.flink.api.common.state.ReducingStateDescriptor), ExecutionConfig(org.apache.flink.api.common.ExecutionConfig), TimeWindow(org.apache.flink.streaming.api.windowing.windows.TimeWindow), TypeHint(org.apache.flink.api.common.typeinfo.TypeHint), AtomicInteger(java.util.concurrent.atomic.AtomicInteger), PassThroughWindowFunction(org.apache.flink.streaming.api.functions.windowing.PassThroughWindowFunction), Tuple2(org.apache.flink.api.java.tuple.Tuple2), ConcurrentLinkedQueue(java.util.concurrent.ConcurrentLinkedQueue), Test(org.junit.Test)
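
SumReducer and TupleKeySelector are likewise helpers from WindowOperatorTest that this page does not show. A short sketch of equivalent helpers, assuming the reducer sums the Integer field and the selector keys by the String field (consistent with the expected output ("key2", 3) above):

// Sketch of the helpers assumed by the test class (imports would be
// org.apache.flink.api.common.functions.ReduceFunction and
// org.apache.flink.api.java.functions.KeySelector).
private static class SumReducer implements ReduceFunction<Tuple2<String, Integer>> {
    @Override
    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) {
        // Same key, summed counts.
        return new Tuple2<>(value2.f0, value1.f1 + value2.f1);
    }
}

private static class TupleKeySelector implements KeySelector<Tuple2<String, Integer>, String> {
    @Override
    public String getKey(Tuple2<String, Integer> value) {
        return value.f0;
    }
}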

Example 63 with ReducingStateDescriptor

Use of org.apache.flink.api.common.state.ReducingStateDescriptor in project flink by apache.

From the class WindowOperatorTest, method testNotSideOutputDueToLatenessSessionWithLateness.

@Test
public void testNotSideOutputDueToLatenessSessionWithLateness() throws Exception {
    // same as testSideOutputDueToLatenessSessionWithLateness() but with an accumulating
    // trigger, i.e.
    // one that does not return FIRE_AND_PURGE when firing but just FIRE. The expected
    // results are therefore slightly different.
    final int gapSize = 3;
    final long lateness = 10;
    ReducingStateDescriptor<Tuple2<String, Integer>> stateDesc = new ReducingStateDescriptor<>(
            "window-contents", new SumReducer(), STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
    WindowOperator<String, Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple3<String, Long, Long>, TimeWindow> operator =
            new WindowOperator<>(
                    EventTimeSessionWindows.withGap(Time.seconds(gapSize)),
                    new TimeWindow.Serializer(),
                    new TupleKeySelector(),
                    BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
                    stateDesc,
                    new InternalSingleValueWindowFunction<>(new ReducedSessionWindowFunction()),
                    EventTimeTrigger.create(),
                    lateness,
                    lateOutputTag);
    OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple3<String, Long, Long>> testHarness = createTestHarness(operator);
    testHarness.open();
    ConcurrentLinkedQueue<Object> expected = new ConcurrentLinkedQueue<>();
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 1000));
    testHarness.processWatermark(new Watermark(1999));
    expected.add(new Watermark(1999));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 2000));
    testHarness.processWatermark(new Watermark(4998));
    expected.add(new Watermark(4998));
    // this will not be sideoutput because the session we're adding to has a maxTimestamp
    // after the current watermark
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 4500));
    // new session
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 8500));
    testHarness.processWatermark(new Watermark(7400));
    expected.add(new Watermark(7400));
    // this will merge the two sessions into one
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 7000));
    testHarness.processWatermark(new Watermark(11501));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-5", 1000L, 11500L), 11499));
    expected.add(new Watermark(11501));
    // new session
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 11600));
    testHarness.processWatermark(new Watermark(14600));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-1", 11600L, 14600L), 14599));
    expected.add(new Watermark(14600));
    // because of the small allowed lateness and because the trigger is accumulating
    // this will be merged into the session (11600-14600) and therefore will not
    // be sideoutput as late
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 10000));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 14500));
    // adding ("key2", 1) extended the session to (10000-146000) for which
    // maxTimestamp <= currentWatermark. Therefore, we immediately get a firing
    // with the current version of EventTimeTrigger/EventTimeTriggerAccum
    expected.add(new StreamRecord<>(new Tuple3<>("key2-2", 10000L, 14600L), 14599));
    ConcurrentLinkedQueue<Object> actual = testHarness.getOutput();
    ConcurrentLinkedQueue<StreamRecord<Tuple2<String, Integer>>> sideActual = testHarness.getSideOutput(lateOutputTag);
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expected, actual, new Tuple3ResultSortComparator());
    assertEquals(null, sideActual);
    testHarness.processWatermark(new Watermark(20000));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-3", 10000L, 17500L), 17499));
    expected.add(new Watermark(20000));
    testHarness.processWatermark(new Watermark(100000));
    expected.add(new Watermark(100000));
    actual = testHarness.getOutput();
    sideActual = testHarness.getSideOutput(lateOutputTag);
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expected, actual, new Tuple3ResultSortComparator());
    assertEquals(null, sideActual);
    testHarness.close();
}
Also used: ExecutionConfig(org.apache.flink.api.common.ExecutionConfig), ReducingStateDescriptor(org.apache.flink.api.common.state.ReducingStateDescriptor), StreamRecord(org.apache.flink.streaming.runtime.streamrecord.StreamRecord), TimeWindow(org.apache.flink.streaming.api.windowing.windows.TimeWindow), TypeHint(org.apache.flink.api.common.typeinfo.TypeHint), AtomicInteger(java.util.concurrent.atomic.AtomicInteger), Tuple2(org.apache.flink.api.java.tuple.Tuple2), Tuple3(org.apache.flink.api.java.tuple.Tuple3), ConcurrentLinkedQueue(java.util.concurrent.ConcurrentLinkedQueue), Watermark(org.apache.flink.streaming.api.watermark.Watermark), Test(org.junit.Test)
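
The lateOutputTag that receives dropped late elements is a field of the test class. A minimal sketch of how such a side-output tag is typically declared (the tag name "late-output" is an assumption):

// Side-output tag for elements dropped as late. The anonymous subclass ({})
// lets Flink capture the Tuple2 element type information.
// import org.apache.flink.util.OutputTag;
private static final OutputTag<Tuple2<String, Integer>> lateOutputTag =
        new OutputTag<Tuple2<String, Integer>>("late-output") {};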

Example 64 with ReducingStateDescriptor

Use of org.apache.flink.api.common.state.ReducingStateDescriptor in project flink by apache.

From the class WindowOperatorTest, method testTumblingEventTimeWindowsReduce.

@Test
@SuppressWarnings("unchecked")
public void testTumblingEventTimeWindowsReduce() throws Exception {
    closeCalled.set(0);
    final int windowSize = 3;
    ReducingStateDescriptor<Tuple2<String, Integer>> stateDesc = new ReducingStateDescriptor<>(
            "window-contents", new SumReducer(), STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
    WindowOperator<String, Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple2<String, Integer>, TimeWindow> operator =
            new WindowOperator<>(
                    TumblingEventTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)),
                    new TimeWindow.Serializer(),
                    new TupleKeySelector(),
                    BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
                    stateDesc,
                    new InternalSingleValueWindowFunction<>(new PassThroughWindowFunction<String, TimeWindow, Tuple2<String, Integer>>()),
                    EventTimeTrigger.create(),
                    0,
                    null);
    testTumblingEventTimeWindows(operator);
}
Also used: ReducingStateDescriptor(org.apache.flink.api.common.state.ReducingStateDescriptor), ExecutionConfig(org.apache.flink.api.common.ExecutionConfig), TimeWindow(org.apache.flink.streaming.api.windowing.windows.TimeWindow), TypeHint(org.apache.flink.api.common.typeinfo.TypeHint), AtomicInteger(java.util.concurrent.atomic.AtomicInteger), PassThroughWindowFunction(org.apache.flink.streaming.api.functions.windowing.PassThroughWindowFunction), Tuple2(org.apache.flink.api.java.tuple.Tuple2), Test(org.junit.Test)
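
The same descriptor type is not limited to window operators: any keyed function can obtain a ReducingState from it through the runtime context. A short, hedged sketch of that pattern (the class name RunningSum is made up for illustration):

// Hypothetical keyed function keeping a running sum per key with ReducingState.
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class RunningSum extends RichFlatMapFunction<Integer, Integer> {

    private transient ReducingState<Integer> sum;

    @Override
    public void open(Configuration parameters) {
        ReducingStateDescriptor<Integer> descriptor = new ReducingStateDescriptor<>(
                "running-sum",    // state name
                (a, b) -> a + b,  // ReduceFunction applied on every add()
                Types.INT);       // type information for the stored value
        sum = getRuntimeContext().getReducingState(descriptor);
    }

    @Override
    public void flatMap(Integer value, Collector<Integer> out) throws Exception {
        sum.add(value);          // only the reduced value is kept in state
        out.collect(sum.get());  // emit the current per-key sum
    }
}

Applied after a keyBy(...), this emits the cumulative sum for the incoming element's key; the reducing state stores a single value per key rather than the full element history.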

Example 65 with ReducingStateDescriptor

Use of org.apache.flink.api.common.state.ReducingStateDescriptor in project flink by apache.

From the class WindowOperatorTest, method testNotSideOutputDueToLatenessSessionWithHugeLateness.

@Test
public void testNotSideOutputDueToLatenessSessionWithHugeLateness() throws Exception {
    final int gapSize = 3;
    final long lateness = 10000;
    ReducingStateDescriptor<Tuple2<String, Integer>> stateDesc = new ReducingStateDescriptor<>(
            "window-contents", new SumReducer(), STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
    WindowOperator<String, Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple3<String, Long, Long>, TimeWindow> operator =
            new WindowOperator<>(
                    EventTimeSessionWindows.withGap(Time.seconds(gapSize)),
                    new TimeWindow.Serializer(),
                    new TupleKeySelector(),
                    BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
                    stateDesc,
                    new InternalSingleValueWindowFunction<>(new ReducedSessionWindowFunction()),
                    EventTimeTrigger.create(),
                    lateness,
                    lateOutputTag);
    OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple3<String, Long, Long>> testHarness = createTestHarness(operator);
    testHarness.open();
    ConcurrentLinkedQueue<Object> expected = new ConcurrentLinkedQueue<>();
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 1000));
    testHarness.processWatermark(new Watermark(1999));
    expected.add(new Watermark(1999));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 2000));
    testHarness.processWatermark(new Watermark(4998));
    expected.add(new Watermark(4998));
    // this will not be sideoutput because the session we're adding to has a maxTimestamp
    // after the current watermark
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 4500));
    // new session
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 8500));
    testHarness.processWatermark(new Watermark(7400));
    expected.add(new Watermark(7400));
    // this will merge the two sessions into one
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 7000));
    testHarness.processWatermark(new Watermark(11501));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-5", 1000L, 11500L), 11499));
    expected.add(new Watermark(11501));
    // new session
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 11600));
    testHarness.processWatermark(new Watermark(14600));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-1", 11600L, 14600L), 14599));
    expected.add(new Watermark(14600));
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 10000));
    // the maxTimestamp of the merged session is already late,
    // so we get an immediate firing
    expected.add(new StreamRecord<>(new Tuple3<>("key2-7", 1000L, 14600L), 14599));
    ConcurrentLinkedQueue<Object> actual = testHarness.getOutput();
    ConcurrentLinkedQueue<StreamRecord<Tuple2<String, Integer>>> sideActual = testHarness.getSideOutput(lateOutputTag);
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expected, actual, new Tuple3ResultSortComparator());
    assertEquals(null, sideActual);
    testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 14500));
    testHarness.processWatermark(new Watermark(20000));
    expected.add(new StreamRecord<>(new Tuple3<>("key2-8", 1000L, 17500L), 17499));
    expected.add(new Watermark(20000));
    testHarness.processWatermark(new Watermark(100000));
    expected.add(new Watermark(100000));
    actual = testHarness.getOutput();
    sideActual = testHarness.getSideOutput(lateOutputTag);
    TestHarnessUtil.assertOutputEqualsSorted("Output was not correct.", expected, actual, new Tuple3ResultSortComparator());
    assertEquals(null, sideActual);
    testHarness.close();
}
Also used: ExecutionConfig(org.apache.flink.api.common.ExecutionConfig), ReducingStateDescriptor(org.apache.flink.api.common.state.ReducingStateDescriptor), StreamRecord(org.apache.flink.streaming.runtime.streamrecord.StreamRecord), TimeWindow(org.apache.flink.streaming.api.windowing.windows.TimeWindow), TypeHint(org.apache.flink.api.common.typeinfo.TypeHint), AtomicInteger(java.util.concurrent.atomic.AtomicInteger), Tuple2(org.apache.flink.api.java.tuple.Tuple2), Tuple3(org.apache.flink.api.java.tuple.Tuple3), ConcurrentLinkedQueue(java.util.concurrent.ConcurrentLinkedQueue), Watermark(org.apache.flink.streaming.api.watermark.Watermark), Test(org.junit.Test)
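
ReducedSessionWindowFunction, wrapped by InternalSingleValueWindowFunction in the session tests above, is another WindowOperatorTest helper not shown here. Judging from expected records such as ("key2-5", 1000L, 11500L), it concatenates the key with the reduced count and attaches the window bounds; a hedged sketch of an equivalent function:

// Sketch of a window function matching the observed output: the reducing state
// delivers one pre-aggregated element per window, which is tagged with the
// window's start and end timestamps.
// (WindowFunction is org.apache.flink.streaming.api.functions.windowing.WindowFunction.)
private static class ReducedSessionWindowFunction
        implements WindowFunction<Tuple2<String, Integer>, Tuple3<String, Long, Long>, String, TimeWindow> {

    @Override
    public void apply(String key,
                      TimeWindow window,
                      Iterable<Tuple2<String, Integer>> input,
                      Collector<Tuple3<String, Long, Long>> out) {
        for (Tuple2<String, Integer> in : input) {
            out.collect(new Tuple3<>(key + "-" + in.f1, window.getStart(), window.getEnd()));
        }
    }
}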

Aggregations

ReducingStateDescriptor (org.apache.flink.api.common.state.ReducingStateDescriptor): 67
Test (org.junit.Test): 60
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 51
TimeWindow (org.apache.flink.streaming.api.windowing.windows.TimeWindow): 38
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 35
TypeHint (org.apache.flink.api.common.typeinfo.TypeHint): 27
ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue): 26
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment): 23
Watermark (org.apache.flink.streaming.api.watermark.Watermark): 21
Tuple3 (org.apache.flink.api.java.tuple.Tuple3): 19
PassThroughWindowFunction (org.apache.flink.streaming.api.functions.windowing.PassThroughWindowFunction): 19
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 17
OneInputTransformation (org.apache.flink.streaming.api.transformations.OneInputTransformation): 17
StreamRecord (org.apache.flink.streaming.runtime.streamrecord.StreamRecord): 14
ListStateDescriptor (org.apache.flink.api.common.state.ListStateDescriptor): 10
EventTimeTrigger (org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger): 9
KeyedOneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness): 8
TypeSerializer (org.apache.flink.api.common.typeutils.TypeSerializer): 7
OperatorSubtaskState (org.apache.flink.runtime.checkpoint.OperatorSubtaskState): 7
AtomicLong (java.util.concurrent.atomic.AtomicLong): 6