
Example 1 with UserRecordResult

use of com.amazonaws.services.kinesis.producer.UserRecordResult in project flink by apache.

the class FlinkKinesisProducerTest method testAtLeastOnceProducer.

/**
 * Test ensuring that the producer is not dropping buffered records; we set a timeout because
 * the test will not finish if the logic is broken.
 */
@SuppressWarnings({ "unchecked", "ResultOfMethodCallIgnored" })
@Test(timeout = 10000)
public void testAtLeastOnceProducer() throws Throwable {
    final DummyFlinkKinesisProducer<String> producer = new DummyFlinkKinesisProducer<>(new SimpleStringSchema());
    OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    // start a thread to perform checkpointing
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block until all records are flushed;
            // if the snapshot implementation returns before pending records are
            // flushed, the isAlive() assertions in the test thread will fail
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // before proceeding, make sure that flushing has started and that the snapshot is still
    // blocked;
    // this would block forever if the snapshot didn't perform a flush
    producer.waitUntilFlushStarted();
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    // now, complete the callbacks
    UserRecordResult result = mock(UserRecordResult.class);
    when(result.isSuccessful()).thenReturn(true);
    producer.getPendingRecordFutures().get(0).set(result);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    producer.getPendingRecordFutures().get(1).set(result);
    Assert.assertTrue("Snapshot returned before all records were flushed", snapshotThread.isAlive());
    producer.getPendingRecordFutures().get(2).set(result);
    // this would fail with an exception if flushing wasn't completed before the snapshot method
    // returned
    snapshotThread.sync();
    testHarness.close();
}
Also used : SimpleStringSchema(org.apache.flink.api.common.serialization.SimpleStringSchema) Matchers.anyString(org.mockito.Matchers.anyString) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) CheckedThread(org.apache.flink.core.testutils.CheckedThread) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) Test(org.junit.Test)
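The DummyFlinkKinesisProducer used in this test is a helper defined elsewhere in FlinkKinesisProducerTest and not reproduced on this page. A minimal sketch of the shape the test relies on, with assumed class and method bodies (only getPendingRecordFutures and waitUntilFlushStarted are names taken from the test itself), might look like this:

import com.amazonaws.services.kinesis.producer.UserRecordResult;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.SettableFuture;
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.core.testutils.OneShotLatch;

// Hypothetical sketch, not the actual Flink test helper: a producer stub whose
// per-record futures can be completed manually from the test thread.
class PendingFuturesProducerSketch {

    private final List<SettableFuture<UserRecordResult>> pendingRecordFutures = new ArrayList<>();
    private final OneShotLatch flushStarted = new OneShotLatch();

    // stands in for the KPL's addUserRecord call
    ListenableFuture<UserRecordResult> addRecord() {
        SettableFuture<UserRecordResult> future = SettableFuture.create();
        pendingRecordFutures.add(future);
        return future;
    }

    // invoked when the snapshot triggers a flush
    void flush() {
        flushStarted.trigger();
    }

    List<SettableFuture<UserRecordResult>> getPendingRecordFutures() {
        return pendingRecordFutures;
    }

    void waitUntilFlushStarted() throws InterruptedException {
        flushStarted.await();
    }
}

Completing a future with producer.getPendingRecordFutures().get(i).set(result), as the test does, simulates the KPL acknowledging record i.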

Example 2 with UserRecordResult

use of com.amazonaws.services.kinesis.producer.UserRecordResult in project flink by apache.

the class FlinkKinesisProducerTest method testAsyncErrorRethrownAfterFlush.

/**
 * Test ensuring that if an async exception is caught for one of the flushed requests on
 * checkpoint, it should be rethrown; we set a timeout because the test will not finish if the
 * logic is broken.
 *
 * <p>Note that this test does not verify that the snapshot method blocks correctly while there
 * are pending records; that behavior is covered by testAtLeastOnceProducer.
 */
@SuppressWarnings("ResultOfMethodCallIgnored")
@Test(timeout = 10000)
public void testAsyncErrorRethrownAfterFlush() throws Throwable {
    final DummyFlinkKinesisProducer<String> producer = new DummyFlinkKinesisProducer<>(new SimpleStringSchema());
    OneInputStreamOperatorTestHarness<String, Object> testHarness = new OneInputStreamOperatorTestHarness<>(new StreamSink<>(producer));
    testHarness.open();
    testHarness.processElement(new StreamRecord<>("msg-1"));
    testHarness.processElement(new StreamRecord<>("msg-2"));
    testHarness.processElement(new StreamRecord<>("msg-3"));
    // only let the first record succeed for now
    UserRecordResult result = mock(UserRecordResult.class);
    when(result.isSuccessful()).thenReturn(true);
    producer.getPendingRecordFutures().get(0).set(result);
    CheckedThread snapshotThread = new CheckedThread() {

        @Override
        public void go() throws Exception {
            // this should block at first, since there are still two pending records
            // that need to be flushed
            testHarness.snapshot(123L, 123L);
        }
    };
    snapshotThread.start();
    // let the 2nd message fail with an async exception
    producer.getPendingRecordFutures().get(1).setException(new Exception("artificial async failure for 2nd message"));
    producer.getPendingRecordFutures().get(2).set(mock(UserRecordResult.class));
    try {
        snapshotThread.sync();
    } catch (Exception e) {
        // after the flush, the async exception should have been rethrown
        Assert.assertTrue(ExceptionUtils.findThrowableWithMessage(e, "artificial async failure for 2nd message").isPresent());
        // test succeeded
        return;
    }
    Assert.fail();
}
Also used : SimpleStringSchema(org.apache.flink.api.common.serialization.SimpleStringSchema) Matchers.anyString(org.mockito.Matchers.anyString) OneInputStreamOperatorTestHarness(org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) CheckedThread(org.apache.flink.core.testutils.CheckedThread) ExpectedException(org.junit.rules.ExpectedException) Test(org.junit.Test)
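The "artificial async failure" set on the second future surfaces through the FutureCallback that FlinkKinesisProducer registers on every record future (see Examples 3 and 4 below). A hedged sketch of how such a callback can capture an asynchronous error for a later rethrow, with illustrative class and field names, is:

import com.amazonaws.services.kinesis.producer.UserRecordResult;
import com.google.common.util.concurrent.FutureCallback;

// Sketch only, not the exact Flink implementation: remember the first async
// failure and rethrow it from the next invoke()/snapshot() call.
class AsyncErrorCaptureSketch {

    private volatile Throwable thrownException;

    final FutureCallback<UserRecordResult> callback = new FutureCallback<UserRecordResult>() {
        @Override
        public void onSuccess(UserRecordResult result) {
            if (!result.isSuccessful()) {
                thrownException = new RuntimeException("Record was not sent successfully");
            }
        }

        @Override
        public void onFailure(Throwable t) {
            thrownException = t;
        }
    };

    void checkAndPropagateAsyncError() throws Exception {
        if (thrownException != null) {
            Throwable cause = thrownException;
            thrownException = null;
            throw new RuntimeException("An exception was thrown while processing a record", cause);
        }
    }
}

ExceptionUtils.findThrowableWithMessage in the test then locates the original message anywhere in the resulting cause chain.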

Example 3 with UserRecordResult

use of com.amazonaws.services.kinesis.producer.UserRecordResult in project flink by apache.

the class FlinkKinesisProducer method invoke.

@Override
public void invoke(OUT value, Context context) throws Exception {
    if (this.producer == null) {
        throw new RuntimeException("Kinesis producer has been closed");
    }
    checkAndPropagateAsyncError();
    boolean didWaitForFlush = enforceQueueLimit();
    if (didWaitForFlush) {
        checkAndPropagateAsyncError();
    }
    String stream = defaultStream;
    String partition = defaultPartition;
    ByteBuffer serialized = schema.serialize(value);
    // maybe set custom stream
    String customStream = schema.getTargetStream(value);
    if (customStream != null) {
        stream = customStream;
    }
    String explicitHashkey = null;
    // maybe set custom partition
    if (customPartitioner != null) {
        partition = customPartitioner.getPartitionId(value);
        explicitHashkey = customPartitioner.getExplicitHashKey(value);
    }
    if (stream == null) {
        if (failOnError) {
            throw new RuntimeException("No target stream set");
        } else {
            LOG.warn("No target stream set. Skipping record");
            return;
        }
    }
    ListenableFuture<UserRecordResult> cb = producer.addUserRecord(stream, partition, explicitHashkey, serialized);
    Futures.addCallback(cb, callback, MoreExecutors.directExecutor());
}
Also used : UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) ByteBuffer(java.nio.ByteBuffer)
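checkAndPropagateAsyncError() and enforceQueueLimit() are not shown in this excerpt. Conceptually, enforceQueueLimit() applies backpressure by waiting until the KPL's count of outstanding records drops below a configured limit; a rough sketch under that assumption (the queueLimit field and the 100 ms back-off are illustrative; only getOutstandingRecordsCount() and flush() are real KinesisProducer methods) could be:

import com.amazonaws.services.kinesis.producer.KinesisProducer;

// Illustrative backpressure sketch, not the exact Flink implementation.
class QueueLimitSketch {

    private final KinesisProducer producer;
    private final int queueLimit; // assumed configuration value

    QueueLimitSketch(KinesisProducer producer, int queueLimit) {
        this.producer = producer;
        this.queueLimit = queueLimit;
    }

    // returns true if the caller had to wait for the queue to drain
    boolean enforceQueueLimit() throws InterruptedException {
        boolean waited = false;
        while (producer.getOutstandingRecordsCount() >= queueLimit) {
            waited = true;
            producer.flush();   // ask the KPL to send buffered records
            Thread.sleep(100);  // back off briefly before re-checking
        }
        return waited;
    }
}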

Example 4 with UserRecordResult

use of com.amazonaws.services.kinesis.producer.UserRecordResult in project flink by apache.

the class FlinkKinesisProducer method invoke.

@Override
public void invoke(OUT value) throws Exception {
    if (this.producer == null) {
        throw new RuntimeException("Kinesis producer has been closed");
    }
    if (thrownException != null) {
        String errorMessages = "";
        if (thrownException instanceof UserRecordFailedException) {
            List<Attempt> attempts = ((UserRecordFailedException) thrownException).getResult().getAttempts();
            for (Attempt attempt : attempts) {
                if (attempt.getErrorMessage() != null) {
                    errorMessages += attempt.getErrorMessage() + "\n";
                }
            }
        }
        if (failOnError) {
            throw new RuntimeException("An exception was thrown while processing a record: " + errorMessages, thrownException);
        } else {
            LOG.warn("An exception was thrown while processing a record: {}", errorMessages, thrownException);
            // reset so the same error is not reported again for the next record
            thrownException = null;
        }
    }
    String stream = defaultStream;
    String partition = defaultPartition;
    ByteBuffer serialized = schema.serialize(value);
    // maybe set custom stream
    String customStream = schema.getTargetStream(value);
    if (customStream != null) {
        stream = customStream;
    }
    String explicitHashkey = null;
    // maybe set custom partition
    if (customPartitioner != null) {
        partition = customPartitioner.getPartitionId(value);
        explicitHashkey = customPartitioner.getExplicitHashKey(value);
    }
    if (stream == null) {
        if (failOnError) {
            throw new RuntimeException("No target stream set");
        } else {
            LOG.warn("No target stream set. Skipping record");
            return;
        }
    }
    ListenableFuture<UserRecordResult> cb = producer.addUserRecord(stream, partition, explicitHashkey, serialized);
    Futures.addCallback(cb, callback);
}
Also used : Attempt(com.amazonaws.services.kinesis.producer.Attempt) UserRecordFailedException(com.amazonaws.services.kinesis.producer.UserRecordFailedException) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult) ByteBuffer(java.nio.ByteBuffer)
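For context, a typical way to wire this producer into a job looks roughly like the following; the configuration keys come from AWSConfigConstants in the flink-connector-kinesis artifact, and the stream name, region, and credentials are placeholders (exact setters and keys vary with the connector version):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class KinesisSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties producerConfig = new Properties();
        producerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1");
        producerConfig.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
        producerConfig.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");

        FlinkKinesisProducer<String> kinesis =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
        kinesis.setFailOnError(true);                 // surface async errors as job failures
        kinesis.setDefaultStream("kinesis_stream_name");
        kinesis.setDefaultPartition("0");

        DataStream<String> stream = env.fromElements("msg-1", "msg-2", "msg-3");
        stream.addSink(kinesis);

        env.execute("kinesis-producer-example");
    }
}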

Example 5 with UserRecordResult

use of com.amazonaws.services.kinesis.producer.UserRecordResult in project beam by apache.

the class KinesisProducerMock method addUserRecord.

@Override
public synchronized ListenableFuture<UserRecordResult> addUserRecord(String stream, String partitionKey, String explicitHashKey, ByteBuffer data) {
    seqNumber.incrementAndGet();
    SettableFuture<UserRecordResult> f = SettableFuture.create();
    f.set(new UserRecordResult(new ArrayList<>(), String.valueOf(seqNumber.get()), explicitHashKey, !isFailedFlush));
    if (kinesisService.getExistedStream().equals(stream)) {
        addedRecords.add(new UserRecord(stream, partitionKey, explicitHashKey, data));
    }
    return f;
}
Also used : UserRecord(com.amazonaws.services.kinesis.producer.UserRecord) ArrayList(java.util.ArrayList) UserRecordResult(com.amazonaws.services.kinesis.producer.UserRecordResult)
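Because the mock completes its SettableFuture immediately, a test can attach a callback and observe the result right away. A small, hypothetical usage snippet (the sendAndCheck helper and the stream, partition, and payload values are invented for illustration):

import com.amazonaws.services.kinesis.producer.UserRecordResult;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class KinesisProducerMockUsage {

    // Hypothetical test usage: 'mock' stands in for the KinesisProducerMock above.
    static void sendAndCheck(KinesisProducerMock mock) {
        ListenableFuture<UserRecordResult> future = mock.addUserRecord(
                "existing-stream", "partition-1", null,
                ByteBuffer.wrap("payload".getBytes(StandardCharsets.UTF_8)));

        Futures.addCallback(future, new FutureCallback<UserRecordResult>() {
            @Override
            public void onSuccess(UserRecordResult result) {
                // the mock fills isSuccessful() from its isFailedFlush flag
                System.out.println("sequence number: " + result.getSequenceNumber());
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, MoreExecutors.directExecutor());
    }
}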

Aggregations

UserRecordResult (com.amazonaws.services.kinesis.producer.UserRecordResult): 7 usages
ByteBuffer (java.nio.ByteBuffer): 3 usages
SimpleStringSchema (org.apache.flink.api.common.serialization.SimpleStringSchema): 3 usages
CheckedThread (org.apache.flink.core.testutils.CheckedThread): 3 usages
OneInputStreamOperatorTestHarness (org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness): 3 usages
Attempt (com.amazonaws.services.kinesis.producer.Attempt): 2 usages
UserRecordFailedException (com.amazonaws.services.kinesis.producer.UserRecordFailedException): 2 usages
Test (org.junit.Test): 2 usages
Matchers.anyString (org.mockito.Matchers.anyString): 2 usages
AwsSdkMetrics (com.amazonaws.metrics.AwsSdkMetrics): 1 usage
KinesisProducer (com.amazonaws.services.kinesis.producer.KinesisProducer): 1 usage
KinesisProducerConfiguration (com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration): 1 usage
UserRecord (com.amazonaws.services.kinesis.producer.UserRecord): 1 usage
FutureCallback (com.google.common.util.concurrent.FutureCallback): 1 usage
Futures (com.google.common.util.concurrent.Futures): 1 usage
ListenableFuture (com.google.common.util.concurrent.ListenableFuture): 1 usage
MoreExecutors (com.google.common.util.concurrent.MoreExecutors): 1 usage
Field (java.lang.reflect.Field): 1 usage
ArrayList (java.util.ArrayList): 1 usage
List (java.util.List): 1 usage