
Example 1 with NoOpFailureHandler

Use of org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler in project flink by apache.

From the class ElasticsearchSinkBaseTest, method testBulkFailureRethrownOnOnCheckpointAfterFlush.

/**
	 * Tests that any bulk failure in the listener callbacks due to flushing on an immediately following checkpoint
	 * is rethrown; we set a timeout because the test will not finish if the logic is broken.
	 */
@Test(timeout = 5000)
public void testBulkFailureRethrownOnOnCheckpointAfterFlush() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // setup the next bulk request, and let bulk request succeed
    sink.setMockItemFailuresListForNextBulkItemResponses(Collections.singletonList((Exception) null));
    testHarness.processElement(new StreamRecord<>("msg-1"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    // setup the requests to be flushed in the snapshot
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    verify(sink.getMockBulkProcessor(), times(3)).add(any(ActionRequest.class));
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            testHarness.snapshot(1L, 1000L);
        }
    };
    snapshotThread.start();
    // the snapshot should eventually be blocked before snapshot triggers flushing
    while (snapshotThread.getState() != Thread.State.WAITING) {
        Thread.sleep(10);
    }
    // for the snapshot-triggered flush, we let the bulk request fail completely
    sink.setFailNextBulkRequestCompletely(new Exception("artificial failure for bulk request"));
    // let the snapshot-triggered flush continue (bulk request should fail completely)
    sink.continueFlush();
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // the snapshot should have failed with the bulk request failure
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for bulk request"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), CheckedThread (org.apache.flink.core.testutils.CheckedThread), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
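All five examples rely on the fact that NoOpFailureHandler does not drop or retry anything: it simply rethrows whatever failure it is handed, which is what lets these tests observe the failure on the next invoke or checkpoint. A minimal sketch of such a handler, assuming the ActionRequestFailureHandler signature used by this connector (the actual Flink class may differ in minor details):

import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.ActionRequest;

public class NoOpFailureHandler implements ActionRequestFailureHandler {

    private static final long serialVersionUID = 1L;

    @Override
    public void onFailure(ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer) throws Throwable {
        // no retries, no dropping: rethrow the failure so the sink (and with it the job) fails
        throw failure;
    }
}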

Example 2 with NoOpFailureHandler

Use of org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler in project flink by apache.

From the class ElasticsearchSinkBaseTest, method testBulkFailureRethrownOnCheckpoint.

/** Tests that any bulk failure in the listener callbacks is rethrown on an immediately following checkpoint. */
@Test
public void testBulkFailureRethrownOnCheckpoint() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // setup the next bulk request, and let the whole bulk request fail
    sink.setFailNextBulkRequestCompletely(new Exception("artificial failure for bulk request"));
    testHarness.processElement(new StreamRecord<>("msg"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    try {
        testHarness.snapshot(1L, 1000L);
    } catch (Exception e) {
        // the snapshot should have failed with the bulk request failure
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for bulk request"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
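Both checkpoint tests exercise the same mechanism: the bulk listener records asynchronous failures, and the snapshot path rethrows any recorded failure before and after flushing the pending requests. A simplified sketch of the relevant pieces inside the sink, using field and method names in the style of ElasticsearchSinkBase (failureThrowable, numPendingRequests, flushOnCheckpoint, bulkProcessor, checkErrorAndRethrow are assumed names here, not the exact implementation):

// failures reported asynchronously by the BulkProcessor listener are parked here
private final AtomicReference<Throwable> failureThrowable = new AtomicReference<>();
private final AtomicLong numPendingRequests = new AtomicLong(0);

@Override
public void snapshotState(FunctionSnapshotContext context) throws Exception {
    // surface any failure that arrived since the last invoke/snapshot
    checkErrorAndRethrow();
    if (flushOnCheckpoint) {
        // flush everything buffered so far; the flush itself may record new failures
        while (numPendingRequests.get() != 0) {
            bulkProcessor.flush();
            checkErrorAndRethrow();
        }
    }
}

private void checkErrorAndRethrow() {
    Throwable cause = failureThrowable.get();
    if (cause != null) {
        throw new RuntimeException("An error occurred in ElasticsearchSink.", cause);
    }
}

This double wrapping (the test harness's checkpoint exception around the RuntimeException above, around the injected failure) is why both checkpoint tests assert on e.getCause().getCause().getMessage().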

Example 3 with NoOpFailureHandler

Use of org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler in project flink by apache.

From the class ElasticsearchSinkBaseTest, method testItemFailureRethrownOnCheckpointAfterFlush.

/**
	 * Tests that any item failure in the listener callbacks due to flushing on an immediately following checkpoint
	 * is rethrown; we set a timeout because the test will not finish if the logic is broken.
	 */
@Test(timeout = 5000)
public void testItemFailureRethrownOnCheckpointAfterFlush() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // setup the next bulk request, and its mock item failures
    List<Exception> mockResponsesList = new ArrayList<>(2);
    // the first request in a bulk will succeed
    mockResponsesList.add(null);
    // the second request in a bulk will fail
    mockResponsesList.add(new Exception("artificial failure for record"));
    sink.setMockItemFailuresListForNextBulkItemResponses(mockResponsesList);
    testHarness.processElement(new StreamRecord<>("msg-1"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request (1 request only, thus should succeed)
    sink.manualBulkRequestWithAllPendingRequests();
    // setup the requests to be flushed in the snapshot
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    verify(sink.getMockBulkProcessor(), times(3)).add(any(ActionRequest.class));
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            testHarness.snapshot(1L, 1000L);
        }
    };
    snapshotThread.start();
    // the snapshot should eventually be blocked before snapshot triggers flushing
    while (snapshotThread.getState() != Thread.State.WAITING) {
        Thread.sleep(10);
    }
    // let the snapshot-triggered flush continue (2 records in the bulk, so the 2nd one should fail)
    sink.continueFlush();
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // the snapshot should have failed with the failure from the 2nd request
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for record"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), ArrayList (java.util.ArrayList), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), CheckedThread (org.apache.flink.core.testutils.CheckedThread), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
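The per-item failures injected here reach the sink through the BulkProcessor listener: after a bulk completes, each failed item is handed to the configured ActionRequestFailureHandler, and anything the handler rethrows (which NoOpFailureHandler always does) is recorded for the next checkErrorAndRethrow(). A rough sketch of that callback; the exact Elasticsearch client accessors vary by version, and the real listener additionally re-adds requests queued by retrying handlers and maintains the pending-request counter:

@Override
public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
    if (response.hasFailures()) {
        BulkItemResponse[] items = response.getItems();
        for (int i = 0; i < items.length; i++) {
            if (items[i].isFailed()) {
                Throwable failure = items[i].getFailure().getCause();
                try {
                    // NoOpFailureHandler rethrows here, so the failure lands in the catch below
                    failureHandler.onFailure((ActionRequest) request.requests().get(i),
                            failure, items[i].status().getStatus(), requestIndexer);
                } catch (Throwable t) {
                    // remembered until the next invoke() or snapshotState() rethrows it
                    failureThrowable.compareAndSet(null, t);
                }
            }
        }
    }
}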

Example 4 with NoOpFailureHandler

Use of org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler in project flink by apache.

From the class ElasticsearchSinkBaseTest, method testItemFailureRethrownOnCheckpoint.

/** Tests that any item failure in the listener callbacks is rethrown on an immediately following checkpoint. */
@Test
public void testItemFailureRethrownOnCheckpoint() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // setup the next bulk request, and its mock item failures
    sink.setMockItemFailuresListForNextBulkItemResponses(Collections.singletonList(new Exception("artificial failure for record")));
    testHarness.processElement(new StreamRecord<>("msg"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    try {
        testHarness.snapshot(1L, 1000L);
    } catch (Exception e) {
        // the snapshot should have failed with the failure
        Assert.assertTrue(e.getCause().getCause().getMessage().contains("artificial failure for record"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)

Example 5 with NoOpFailureHandler

Use of org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler in project flink by apache.

From the class ElasticsearchSinkBaseTest, method testItemFailureRethrownOnInvoke.

/** Tests that any item failure in the listener callbacks is rethrown on an immediately following invoke call. */
@Test
public void testItemFailureRethrownOnInvoke() throws Throwable {
    final DummyElasticsearchSink<String> sink = new DummyElasticsearchSink<>(new HashMap<String, String>(), new SimpleSinkFunction<String>(), new NoOpFailureHandler());
    final OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));
    testHarness.open();
    // setup the next bulk request, and its mock item failures
    sink.setMockItemFailuresListForNextBulkItemResponses(Collections.singletonList(new Exception("artificial failure for record")));
    testHarness.processElement(new StreamRecord<>("msg"));
    verify(sink.getMockBulkProcessor(), times(1)).add(any(ActionRequest.class));
    // manually execute the next bulk request
    sink.manualBulkRequestWithAllPendingRequests();
    try {
        testHarness.processElement(new StreamRecord<>("next msg"));
    } catch (Exception e) {
        // the invoke should have failed with the failure
        Assert.assertTrue(e.getCause().getMessage().contains("artificial failure for record"));
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used: NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler), OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness), ActionRequest (org.elasticsearch.action.ActionRequest), Test (org.junit.Test)
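The invoke path performs the same check before the next record is turned into requests, which is why it is the second processElement("next msg") call that throws. A simplified sketch, again following the ElasticsearchSinkBase pattern rather than the exact code:

@Override
public void invoke(T value) throws Exception {
    // rethrow anything the bulk listener recorded since the last call
    checkErrorAndRethrow();
    // only then let the user-provided sink function turn the record into ActionRequests
    elasticsearchSinkFunction.process(value, getRuntimeContext(), requestIndexer);
}

Here the injected failure is wrapped only once (in the RuntimeException from checkErrorAndRethrow), so this test asserts on e.getCause() rather than e.getCause().getCause().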

Aggregations

NoOpFailureHandler (org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler): 6 usages
OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness): 6 usages
ActionRequest (org.elasticsearch.action.ActionRequest): 6 usages
Test (org.junit.Test): 6 usages
CheckedThread (org.apache.flink.core.testutils.CheckedThread): 2 usages
ArrayList (java.util.ArrayList): 1 usage
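Outside of these tests, NoOpFailureHandler is the default failure handler, so any request failure fails the sink and, with default restart settings, the job. To make that choice explicit in a real pipeline, the newer connector builders accept the handler directly; the following is a hedged sketch in which the host, index name, and input stream are placeholders and the builder API follows the elasticsearch6-style ElasticsearchSink.Builder (details differ between connector versions):

import java.util.Collections;
import java.util.List;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch.util.NoOpFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class ExplicitNoOpFailureHandlerExample {

    public static void attachSink(DataStream<String> stream) {
        // placeholder host; point this at your cluster
        List<HttpHost> hosts = Collections.singletonList(new HttpHost("localhost", 9200, "http"));

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                hosts,
                (String element, RuntimeContext ctx, RequestIndexer indexer) ->
                        indexer.add(Requests.indexRequest()
                                .index("my-index") // placeholder index name
                                .source(Collections.singletonMap("data", element))));

        // the default behaviour, set explicitly: failed requests are rethrown and fail the sink
        builder.setFailureHandler(new NoOpFailureHandler());

        stream.addSink(builder.build());
    }
}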