
Example 11 with OneInputStreamOperatorTestHarness

Use of org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness in project flink by apache.

The class ElasticsearchSinkBaseTest, method testBulkFailureRethrownOnCheckpoint.

/** Tests that any bulk failure in the listener callbacks is rethrown on an immediately following checkpoint. */
@Test
public void testBulkFailureRethrownOnCheckpoint() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // set up the next bulk request, and let the whole bulk request fail
    sink.setFailNextBulkRequestCompletely(new Exception("artificial failure for bulk request"));
    testHarness.processElement(new StreamRecord<>("msg"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    try {
        testHarness.snapshot(1L, 1000L);
    } catch (Exception e) {
        // the snapshot should have failed with the bulk request failure
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for bulk request"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
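
For context on why the bulk failure surfaces at the checkpoint at all: the NoOpFailureHandler used above performs no retries or dead-lettering, it simply rethrows whatever failure the listener reports, so the sink remembers it and fails the next snapshot. A minimal sketch of such a rethrowing handler follows; the exact ActionRequestFailureHandler signature has varied across connector versions, so treat this as illustrative rather than copy-paste ready.

import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.ActionRequest;

// Sketch of a "no-op" failure handler: no retry, no dropping, just propagate.
// The rethrown failure is what the sink later re-surfaces on checkpoint.
public class RethrowingFailureHandler implements ActionRequestFailureHandler {

    @Override
    public void onFailure(ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer) throws Throwable {
        throw failure;
    }
}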

Example 12 with OneInputStreamOperatorTestHarness

Use of org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness in project flink by apache.

The class ElasticsearchSinkBaseTest, method testItemFailureRethrownOnCheckpointAfterFlush.

/**
 * Tests that any item failure in the listener callbacks due to flushing on an immediately following
 * checkpoint is rethrown; we set a timeout because the test will not finish if the logic is broken.
 */
@Test(timeout = 5000)
public void testItemFailureRethrownOnCheckpointAfterFlush() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // set up the next bulk request and its mock item failures
    List<Exception> mockResponsesList = new ArrayList<>(2);
    // the first request in a bulk will succeed
    mockResponsesList.add(null);
    // the second request in a bulk will fail
    mockResponsesList.add(new Exception("artificial failure for record"));
    sink.setMockItemFailuresListForNextBulkItemResponses(mockResponsesList);
    testHarness.processElement(new StreamRecord<>("msg-1"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request (1 request only, thus should succeed)
    sink.manualBulkRequestWithAllPendingRequests();
    // set up the requests to be flushed in the snapshot
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    verify(sink.getMockBulkProcessor(), times(3)).add(any(ActionRequest.class));
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            testHarness.snapshot(1L, 1000L);
        }
    };
    snapshotThread.start();
    // wait until the snapshot thread is blocked in the snapshot-triggered flush
    while (snapshotThread.getState() != Thread.State.WAITING) {
        Thread.sleep(10);
    }
    // let the snapshot-triggered flush continue (2 records in the bulk, so the 2nd one should fail)
    sink.continueFlush();
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // the snapshot should have failed with the failure from the 2nd request
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for record"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), ArrayList (java.util.ArrayList), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), CheckedThread (org.apache.flink.core.testutils.CheckedThread), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
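
The snapshot is driven from a CheckedThread, a small test utility from org.apache.flink.core.testutils: its go() body may throw, and whatever it throws is captured and rethrown from sync() on the calling thread, which is what lets the test's try/catch around sync() work. A rough, hypothetical re-implementation of the pattern (the names here are ours; the real class differs in detail):

// Illustrative re-implementation of the CheckedThread pattern.
public abstract class ThrowingThread extends Thread {

    private volatile Throwable error;

    // the test body; unlike Runnable#run(), it is allowed to throw
    public abstract void go() throws Exception;

    @Override
    public void run() {
        try {
            go();
        } catch (Throwable t) {
            // remember the failure instead of losing it with the thread
            error = t;
        }
    }

    // joins the thread and rethrows any captured failure in the caller's context
    public void sync() throws Exception {
        join();
        if (error instanceof Exception) {
            throw (Exception) error;
        } else if (error != null) {
            throw new RuntimeException(error);
        }
    }
}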

Example 13 with OneInputStreamOperatorTestHarness

Use of org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness in project flink by apache.

The class ElasticsearchSinkBaseTest, method testItemFailureRethrownOnCheckpoint.

/** Tests that any item failure in the listener callbacks is rethrown on an immediately following checkpoint. */
@Test
public void testItemFailureRethrownOnCheckpoint() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // set up the next bulk request and its mock item failures
    sink.setMockItemFailuresListForNextBulkItemResponses(Collections.singletonList(new Exception("artificial failure for record")));
    testHarness.processElement(new StreamRecord<>("msg"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    try {
        testHarness.snapshot(1L, 1000L);
    } catch (Exception e) {
        // the snapshot should have failed with the failure
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for record"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
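
All three Elasticsearch tests assert on e.getCause().getCause(), which hard-codes the wrapping depth and breaks if the sink ever wraps failures differently. A hypothetical test helper that walks the whole cause chain instead (not part of the Flink codebase; later Flink versions ship a similar check in org.apache.flink.util.ExceptionUtils):

// Hypothetical helper: true if any throwable in the cause chain carries the
// given message fragment, regardless of how deeply the failure is wrapped.
public static boolean causeChainContains(Throwable t, String fragment) {
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
        if (cur.getMessage() != null && cur.getMessage().contains(fragment)) {
            return true;
        }
    }
    return false;
}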

Example 14 with OneInputStreamOperatorTestHarness

Use of org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness in project flink by apache.

The class FlinkKafkaProducerBaseTest, method testAsyncErrorRethrownOnCheckpointAfterFlush.

/**
 * Test ensuring that if an async exception is caught for one of the flushed requests on checkpoint,
 * it is rethrown; we set a timeout because the test will not finish if the logic is broken.
 *
 * Note that this test does not verify that the snapshot method blocks correctly when there are
 * pending records. That case is covered by testAtLeastOnceProducer.
 */
@SuppressWarnings("unchecked")
@Test(timeout = 5000)
public void testAsyncErrorRethrownOnCheckpointAfterFlush() throws Throwable {
    final DummyFlinkKafkaProducer<String> producer = new DummyFlinkKafkaProducer<>(FakeStandardProducerConfig.get(), null);
    producer.setFlushOnCheckpoint(true);
    final KafkaProducer<?, ?> mockProducer = producer.getMockKafkaProducer();
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    verify(mockProducer, times(3)).send(any(ProducerRecord.class), any(Callback.class));
    // only let the first callback succeed for now
    producer.getPendingCallbacks().get(0).onCompletion(null, null);
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block at first, since there are still two pending records that need to be flushed
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // let the 2nd message fail with an async exception
    producer.getPendingCallbacks().get(1).onCompletion(null, new Exception("artificial async failure for 2nd message"));
    producer.getPendingCallbacks().get(2).onCompletion(null, null);
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // the snapshot should have failed with the async exception
        Assert.assertTrue(e.getCause().getMessage().contains("artificial async failure for 2nd message"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: Mockito.anyString (org.mockito.Mockito.anyString), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), CheckedThread (org.apache.flink.core.testutils.CheckedThread), Callback (org.apache.kafka.clients.producer.Callback), ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord), Test (org.junit.Test)
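
What the test drives here is the producer's per-record Callback: Kafka invokes onCompletion(metadata, exception) once per send, and an at-least-once sink must stash the first async exception for a later rethrow. A hedged sketch of that bookkeeping, with illustrative field and method names rather than the actual FlinkKafkaProducerBase internals:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative error-capturing callback; field and method names are assumptions,
// not the real FlinkKafkaProducerBase members.
public class ErrorCapturingCallback implements Callback {

    private final AtomicReference<Exception> asyncError = new AtomicReference<>();
    private final AtomicInteger pendingRecords = new AtomicInteger();

    // call once per send() to track in-flight records
    public void recordSent() {
        pendingRecords.incrementAndGet();
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // keep only the first failure; later ones are usually consequences of it
            asyncError.compareAndSet(null, exception);
        }
        pendingRecords.decrementAndGet();
    }

    // called from the checkpoint path to surface the failure on the task thread;
    // a single wrapping layer matches the e.getCause() the test unwraps
    public void checkErroneous() throws Exception {
        Exception e = asyncError.get();
        if (e != null) {
            throw new Exception("Failed to send data to Kafka", e);
        }
    }
}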

Example 15 with OneInputStreamOperatorTestHarness

Use of org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness in project flink by apache.

The class FlinkKafkaProducerBaseTest, method testAtLeastOnceProducer.

/**
 * Test ensuring that the producer does not drop buffered records;
 * we set a timeout because the test will not finish if the logic is broken.
 */
@SuppressWarnings("unchecked")
@Test(timeout = 10000)
public void testAtLeastOnceProducer() throws Throwable {
    final DummyFlinkKafkaProducer<String> producer = new DummyFlinkKafkaProducer<>(FakeStandardProducerConfig.get(), null);
    producer.setFlushOnCheckpoint(true);
    final KafkaProducer<?, ?> mockProducer = producer.getMockKafkaProducer();
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    verify(mockProducer, times(3)).send(any(ProducerRecord.class), any(Callback.class));
    Assert.assertEquals(3, producer.getPendingSize());
    // start a thread to perform checkpointing
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block until all records are flushed;
            // if the snapshot implementation returned before pending records were flushed,
            // the isAlive() assertions below would fail
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // before proceeding, make sure that flushing has started and that the snapshot is still blocked;
    // this would block forever if the snapshot didn't perform a flush
    producer.waitUntilFlushStarted();
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    // now, complete the callbacks
    producer.getPendingCallbacks().get(0).onCompletion(null, null);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    Assert.assertEquals(2, producer.getPendingSize());
    producer.getPendingCallbacks().get(1).onCompletion(null, null);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    Assert.assertEquals(1, producer.getPendingSize());
    producer.getPendingCallbacks().get(2).onCompletion(null, null);
    Assert.assertEquals(0, producer.getPendingSize());
    // this would fail with an exception if flushing wasn't completed before the snapshot method returned
    snapshotThread.sync();
    testHarness.close();
}
Also used: Callback (org.apache.kafka.clients.producer.Callback), ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord), Mockito.anyString (org.mockito.Mockito.anyString), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), CheckedThread (org.apache.flink.core.testutils.CheckedThread), Test (org.junit.Test)
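
Taken together, the two Kafka tests pin down the contract of flush-on-checkpoint: snapshotState must not return while records are in flight, and must rethrow any async error observed in the meantime. A hedged sketch of that loop, assuming pendingRecords/checkErroneous bookkeeping in the style of the callback sketch above (the real FlinkKafkaProducerBase differs in detail):

import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.kafka.clients.producer.KafkaProducer;

// Hedged skeleton of the flush-on-checkpoint path; names are illustrative,
// not the exact FlinkKafkaProducerBase implementation.
public abstract class FlushOnCheckpointSinkSketch<K, V> {

    protected final Object pendingRecordsLock = new Object();
    protected long pendingRecords;
    protected boolean flushOnCheckpoint = true;
    protected KafkaProducer<K, V> producer;

    // rethrows the first async send failure, if any (see the callback sketch above)
    protected abstract void checkErroneous() throws Exception;

    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        if (flushOnCheckpoint) {
            // ask the Kafka client to push out any buffered records
            producer.flush();
            synchronized (pendingRecordsLock) {
                while (pendingRecords > 0) {
                    // completion callbacks notifyAll() on this lock; this blocking
                    // wait is why snapshotThread.isAlive() holds in the test above
                    pendingRecordsLock.wait();
                }
            }
            // surface any async failure on the checkpointing thread, which is
            // what makes snapshotThread.sync() throw in the previous test
            checkErroneous();
        }
    }
}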

Aggregations

OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness): 38
Test (org.junit.Test): 36
Watermark (org.apache.flink.streaming.api.watermark.Watermark): 10
ArrayList (java.util.ArrayList): 9
ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue): 8
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 7
ActionRequest (org.elasticsearch.action.ActionRequest): 7
StreamStateHandle (org.apache.flink.runtime.state.StreamStateHandle): 6
NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler): 6
StreamRecord (org.apache.flink.streaming.runtime.streamrecord.StreamRecord): 6
OperatorStateHandles (org.apache.flink.streaming.runtime.tasks.OperatorStateHandles): 6
KeyedOneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness): 6
CheckedThread (org.apache.flink.core.testutils.CheckedThread): 5
ContinuousFileReaderOperator (org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator): 5
TimestampedFileInputSplit (org.apache.flink.streaming.api.functions.source.TimestampedFileInputSplit): 5
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 5
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 4
FileInputSplit (org.apache.flink.core.fs.FileInputSplit): 4
Path (org.apache.flink.core.fs.Path): 4
Callback (org.apache.kafka.clients.producer.Callback): 3