
Example 16 with CheckedThread

Use of org.apache.flink.core.testutils.CheckedThread in project flink by apache.

From the class FlinkKinesisProducerTest, method testAtLeastOnceProducer.

/**
 * Test ensuring that the producer is not dropping buffered records; we set a timeout because
 * the test will not finish if the logic is broken.
 */
@SuppressWarnings({ "unchecked", "ResultOfMethodCallIgnored" })
@Test(timeout = 10000)
public void testAtLeastOnceProducer() throws Throwable {
    final DummyFlinkKinesisProducer<String> producer = new DummyFlinkKinesisProducer<>(new SimpleStringSchema());
    OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    // start a thread to perform checkpointing
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block until all records are flushed;
            // if the snapshot implementation returns before pending records are
            // flushed, the assertions below would fail
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // before proceeding, make sure that flushing has started and that the snapshot is still
    // blocked;
    // this would block forever if the snapshot didn't perform a flush
    producer.waitUntilFlushStarted();
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    // now, complete the callbacks
    UserRecordResult result = mock(UserRecordResult.class);
    when(result.isSuccessful()).thenReturn(true);
    producer.getPendingRecordFutures().get(0).set(result);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    producer.getPendingRecordFutures().get(1).set(result);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    producer.getPendingRecordFutures().get(2).set(result);
    // this would fail with an exception if flushing wasn't completed before the snapshot method
    // returned
    snapshotThread.sync();
    testHarness.close();
}
Also used : SimpleStringSchema(org.apache.flink.api.common.serialization.SimpleStringSchema) Matchers.anyString(org.mockito.Matchers.anyString) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) CheckedThread(org.apache.flink.core.testutils.CheckedThread) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) Test(org.junit.Test)
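
The test above drives two hooks on DummyFlinkKinesisProducer that are not shown in this snippet: waitUntilFlushStarted() and getPendingRecordFutures(). A minimal sketch of how such a test double might provide them, assuming Guava's SettableFuture stands in for the per-record futures the KPL client would normally return (the class and member names below are illustrative only, not the actual Flink test code):

// Hypothetical helper: tracks one SettableFuture per buffered record so the test can
// complete (or fail) records on demand, and exposes a latch that is released once a
// flush has started.
import com.amazonaws.services.kinesis.producer.UserRecordResult;
import com.google.common.util.concurrent.SettableFuture;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

class PendingRecordTracker {

    private final List<SettableFuture<UserRecordResult>> pendingRecordFutures = new ArrayList<>();
    private final CountDownLatch flushStarted = new CountDownLatch(1);

    // called for every record handed to the (mocked) KPL client
    SettableFuture<UserRecordResult> addPendingRecord() {
        SettableFuture<UserRecordResult> future = SettableFuture.create();
        pendingRecordFutures.add(future);
        return future;
    }

    // the test completes these futures via set(...) or setException(...)
    List<SettableFuture<UserRecordResult>> getPendingRecordFutures() {
        return pendingRecordFutures;
    }

    // called when the producer's flush logic kicks in
    void markFlushStarted() {
        flushStarted.countDown();
    }

    // lets the test block until flushing has actually begun
    void waitUntilFlushStarted() throws InterruptedException {
        flushStarted.await();
    }
}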

Example 17 with CheckedThread

Use of org.apache.flink.core.testutils.CheckedThread in project flink by apache.

From the class FlinkKinesisProducerTest, method testAsyncErrorRethrownAfterFlush.

/**
 * Test ensuring that if an async exception is caught for one of the flushed requests on
 * checkpoint, it should be rethrown; we set a timeout because the test will not finish if the
 * logic is broken.
 *
 * <p>Note that this test does not verify that the snapshot method blocks correctly when
 * there are pending records; that behavior is covered by testAtLeastOnceProducer.
 */
@SuppressWarnings("ResultOfMethodCallIgnored")
@Test(timeout = 10000)
public void testAsyncErrorRethrownAfterFlush() throws Throwable {
    final DummyFlinkKinesisProducer<String> producer = new DummyFlinkKinesisProducer<>(new SimpleStringSchema());
    OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    // only let the first record succeed for now
    UserRecordResult result = mock(UserRecordResult.class);
    when(result.isSuccessful()).thenReturn(true);
    producer.getPendingRecordFutures().get(0).set(result);
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block at first, since there are still two pending records
            // that need to be flushed
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // let the 2nd message fail with an async exception
    producer.getPendingRecordFutures().get(1).setException(new Exception("artificial async failure for 2nd message"));
    producer.getPendingRecordFutures().get(2).set(mock(UserRecordResult.class));
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // after the flush, the async exception should have been rethrown
        Assert.assertTrue(ExceptionUtils.findThrowableWithMessage(e, "artificial async failure for 2nd message").isPresent());
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used : SimpleStringSchema(org.apache.flink.api.common.serialization.SimpleStringSchema) Matchers.anyString(org.mockito.Matchers.anyString) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) CheckedThread(org.apache.flink.core.testutils.CheckedThread) ExpectedException(org.junit.rules.ExpectedException) Test(org.junit.Test)
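
Both Kinesis tests above rely on the same pattern: the blocking call runs inside go() on a separate thread, and sync() later joins that thread and rethrows anything it threw. A simplified sketch of that pattern (not the actual org.apache.flink.core.testutils.CheckedThread implementation) might look like this:

// Simplified, hypothetical version of the CheckedThread idea: capture any Throwable
// raised in go() and surface it to the caller when sync() is invoked.
abstract class CheckedThreadSketch extends Thread {

    private volatile Throwable error;

    // the payload executed on this thread; may throw
    public abstract void go() throws Exception;

    @Override
    public void run() {
        try {
            go();
        } catch (Throwable t) {
            // remember the failure so the test thread can see it
            error = t;
        }
    }

    // join the thread and rethrow whatever go() threw, if anything
    public void sync() throws Exception {
        join();
        if (error instanceof Exception) {
            throw (Exception) error;
        } else if (error != null) {
            throw new RuntimeException(error);
        }
    }
}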

Example 18 with CheckedThread

Use of org.apache.flink.core.testutils.CheckedThread in project flink by apache.

From the class BlobCachePutTest, method testTransientBlobCacheGetStorageLocationConcurrent.

private void testTransientBlobCacheGetStorageLocationConcurrent(@Nullable final JobID jobId) throws Exception {
    final Configuration config = new Configuration();
    try (BlobServer server = new BlobServer(config, temporaryFolder.newFolder(), new VoidBlobStore());
        final TransientBlobCache cache = new TransientBlobCache(config, temporaryFolder.newFolder(), new InetSocketAddress("localhost", server.getPort()))) {
        server.start();
        BlobKey key = new TransientBlobKey();
        CheckedThread[] threads =
            new CheckedThread[] {
                new TransientBlobCacheGetStorageLocation(cache, jobId, key),
                new TransientBlobCacheGetStorageLocation(cache, jobId, key),
                new TransientBlobCacheGetStorageLocation(cache, jobId, key)
            };
        checkedThreadSimpleTest(threads);
    }
}
Also used : Configuration(org.apache.flink.configuration.Configuration) InetSocketAddress(java.net.InetSocketAddress) CheckedThread(org.apache.flink.core.testutils.CheckedThread)
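
checkedThreadSimpleTest is a private helper of BlobCachePutTest that is not included in this snippet. A plausible sketch, assuming it simply starts all threads and then waits on each one so that any exception raised in a go() body fails the test (the body below is an assumption, not the actual helper):

    // Hypothetical sketch: run all CheckedThreads concurrently, then let sync()
    // rethrow the first failure recorded by any of them.
    private void checkedThreadSimpleTest(CheckedThread[] threads) throws Exception {
        for (CheckedThread thread : threads) {
            thread.start();
        }
        for (CheckedThread thread : threads) {
            thread.sync();
        }
    }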

Example 19 with CheckedThread

Use of org.apache.flink.core.testutils.CheckedThread in project flink by apache.

From the class BlobCachePutTest, method testPermanentBlobCacheGetStorageLocationConcurrentForJob.

/**
 * Tests concurrent calls to {@link PermanentBlobCache#getStorageLocation(JobID, BlobKey)}.
 */
@Test
public void testPermanentBlobCacheGetStorageLocationConcurrentForJob() throws Exception {
    final JobID jobId = new JobID();
    final Configuration config = new Configuration();
    try (BlobServer server = new BlobServer(config, temporaryFolder.newFolder(), new VoidBlobStore());
        final PermanentBlobCache cache = new PermanentBlobCache(config, temporaryFolder.newFolder(), new VoidBlobStore(), new InetSocketAddress("localhost", server.getPort()))) {
        server.start();
        BlobKey key = new PermanentBlobKey();
        CheckedThread[] threads =
            new CheckedThread[] {
                new PermanentBlobCacheGetStorageLocation(cache, jobId, key),
                new PermanentBlobCacheGetStorageLocation(cache, jobId, key),
                new PermanentBlobCacheGetStorageLocation(cache, jobId, key)
            };
        checkedThreadSimpleTest(threads);
    }
}
Also used : Configuration(org.apache.flink.configuration.Configuration) InetSocketAddress(java.net.InetSocketAddress) CheckedThread(org.apache.flink.core.testutils.CheckedThread) JobID(org.apache.flink.api.common.JobID) Test(org.junit.Test)

Example 20 with CheckedThread

Use of org.apache.flink.core.testutils.CheckedThread in project flink by apache.

From the class SimpleJdbcConnectionProviderDriverClassConcurrentLoadingTest, method testDriverClassConcurrentLoading.

@Test(timeout = 5000)
public void testDriverClassConcurrentLoading() throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    assertFalse(isClassLoaded(classLoader, FakeDBUtils.DRIVER1_CLASS_NAME));
    assertFalse(isClassLoaded(classLoader, FakeDBUtils.DRIVER2_CLASS_NAME));
    JdbcConnectionOptions connectionOptions1 =
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
            .withUrl(FakeDBUtils.TEST_DB_URL)
            .withDriverName(FakeDBUtils.DRIVER1_CLASS_NAME)
            .build();
    JdbcConnectionOptions connectionOptions2 =
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
            .withUrl(FakeDBUtils.TEST_DB_URL)
            .withDriverName(FakeDBUtils.DRIVER2_CLASS_NAME)
            .build();
    CountDownLatch startLatch = new CountDownLatch(1);
    Function<JdbcConnectionOptions, CheckedThread> connectionThreadCreator = options -> {
        CheckedThread thread = new CheckedThread() {

            @Override
            public void go() throws Exception {
                startLatch.await();
                JdbcConnectionProvider connectionProvider = new SimpleJdbcConnectionProvider(options);
                Connection connection = connectionProvider.getOrEstablishConnection();
                connection.close();
            }
        };
        thread.setName("Loading " + options.getDriverName());
        thread.setDaemon(true);
        return thread;
    };
    CheckedThread connectionThread1 = connectionThreadCreator.apply(connectionOptions1);
    CheckedThread connectionThread2 = connectionThreadCreator.apply(connectionOptions2);
    connectionThread1.start();
    connectionThread2.start();
    Thread.sleep(2);
    startLatch.countDown();
    connectionThread1.sync();
    connectionThread2.sync();
    assertTrue(isClassLoaded(classLoader, FakeDBUtils.DRIVER1_CLASS_NAME));
    assertTrue(isClassLoaded(classLoader, FakeDBUtils.DRIVER2_CLASS_NAME));
}
Also used : CountDownLatch(java.util.concurrent.CountDownLatch) CheckedThread(org.apache.flink.core.testutils.CheckedThread) Connection(java.sql.Connection) FakeDBUtils(org.apache.flink.connector.jdbc.fakedb.FakeDBUtils) Assert.assertFalse(org.junit.Assert.assertFalse) Assert.assertTrue(org.junit.Assert.assertTrue) Test(org.junit.Test) JdbcConnectionOptions(org.apache.flink.connector.jdbc.JdbcConnectionOptions) Method(java.lang.reflect.Method) Function(java.util.function.Function)
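
isClassLoaded is not shown in the snippet either. Given the java.lang.reflect.Method entry in the import list above, one plausible implementation checks ClassLoader#findLoadedClass reflectively, since that method is protected (a sketch under that assumption, not necessarily the exact helper used in the test):

    // Hypothetical sketch: returns true if the class loader has already loaded the class,
    // without triggering loading. findLoadedClass is protected, hence the reflection.
    private static boolean isClassLoaded(ClassLoader classLoader, String className) throws Exception {
        Method findLoadedClass = ClassLoader.class.getDeclaredMethod("findLoadedClass", String.class);
        findLoadedClass.setAccessible(true);
        return findLoadedClass.invoke(classLoader, className) != null;
    }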

Aggregations

CheckedThread (org.apache.flink.core.testutils.CheckedThread): 45
Test (org.junit.Test): 41
SimpleStringSchema (org.apache.flink.api.common.serialization.SimpleStringSchema): 12
HashMap (java.util.HashMap): 8
LinkedList (java.util.LinkedList): 8
OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness): 8
TestableKinesisDataFetcher (org.apache.flink.streaming.connectors.kinesis.testutils.TestableKinesisDataFetcher): 7
KinesisStreamShardState (org.apache.flink.streaming.connectors.kinesis.model.KinesisStreamShardState): 6
IOException (java.io.IOException): 5
Map (java.util.Map): 5
CountDownLatch (java.util.concurrent.CountDownLatch): 5
OneShotLatch (org.apache.flink.core.testutils.OneShotLatch): 5
SequenceNumber (org.apache.flink.streaming.connectors.kinesis.model.SequenceNumber): 5
Shard (com.amazonaws.services.kinesis.model.Shard): 4
File (java.io.File): 4
Random (java.util.Random): 4
UserRecordResult (com.amazonaws.services.kinesis.producer.UserRecordResult): 3
ArrayList (java.util.ArrayList): 3
CompletableFuture (java.util.concurrent.CompletableFuture): 3
Configuration (org.apache.flink.configuration.Configuration): 3