
Example 21 with TestingReaderOutput

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderOutput in project flink by splunk.

The class PulsarSourceReaderTestBase, method assigningEmptySplits.

@TestTemplate
void assigningEmptySplits(PulsarSourceReaderBase<Integer> reader, Boundedness boundedness, String topicName) throws Exception {
    // A split positioned at MessageId.latest has no backlog, so the first poll
    // returns NOTHING_AVAILABLE rather than any records.
    final PulsarPartitionSplit emptySplit =
            createPartitionSplit(topicName, 0, Boundedness.CONTINUOUS_UNBOUNDED, MessageId.latest);
    reader.addSplits(Collections.singletonList(emptySplit));
    TestingReaderOutput<Integer> output = new TestingReaderOutput<>();
    InputStatus status = reader.pollNext(output);
    assertThat(status).isEqualTo(InputStatus.NOTHING_AVAILABLE);
    reader.close();
}
Also used: TestingReaderOutput (org.apache.flink.connector.testutils.source.reader.TestingReaderOutput), InputStatus (org.apache.flink.core.io.InputStatus), PulsarPartitionSplit (org.apache.flink.connector.pulsar.source.split.PulsarPartitionSplit), TestTemplate (org.junit.jupiter.api.TestTemplate)
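
The pattern in this snippet (add splits, poll, assert on the returned InputStatus) generalizes to a small drain helper. Below is a minimal sketch of that pattern, not code from either project; the class ReaderTestUtil and the method drainReader are hypothetical names of ours. It polls any SourceReader into a TestingReaderOutput until END_OF_INPUT and blocks on isAvailable() whenever nothing is ready.

import java.util.List;

import org.apache.flink.api.connector.source.SourceReader;
import org.apache.flink.connector.testutils.source.reader.TestingReaderOutput;
import org.apache.flink.core.io.InputStatus;

final class ReaderTestUtil {

    // Hypothetical helper: drain a reader into a TestingReaderOutput and
    // return everything it emitted.
    static <T> List<T> drainReader(SourceReader<T, ?> reader) throws Exception {
        TestingReaderOutput<T> output = new TestingReaderOutput<>();
        while (true) {
            InputStatus status = reader.pollNext(output);
            if (status == InputStatus.END_OF_INPUT) {
                break;
            }
            if (status == InputStatus.NOTHING_AVAILABLE) {
                // isAvailable() completes once more records can be polled
                reader.isAvailable().get();
            }
        }
        return output.getEmittedRecords();
    }
}

Note that this loop only terminates for bounded input; a CONTINUOUS_UNBOUNDED reader such as the Pulsar one above typically never returns END_OF_INPUT, which is why that test polls exactly once and asserts NOTHING_AVAILABLE instead.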

Example 22 with TestingReaderOutput

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderOutput in project flink-cdc-connectors by ververica.

The class MySqlSourceReaderTest, method testNoDuplicateRecordsWhenKeepUpdating.

@Test
public void testNoDuplicateRecordsWhenKeepUpdating() throws Exception {
    inventoryDatabase.createAndInitialize();
    String tableName = inventoryDatabase.getDatabaseName() + ".products";
    // use the default split size, which is large enough to guarantee a single snapshot split
    final MySqlSourceConfig sourceConfig =
            new MySqlSourceConfigFactory()
                    .startupOptions(StartupOptions.initial())
                    .databaseList(inventoryDatabase.getDatabaseName())
                    .tableList(tableName)
                    .includeSchemaChanges(false)
                    .hostname(MYSQL_CONTAINER.getHost())
                    .port(MYSQL_CONTAINER.getDatabasePort())
                    .username(customerDatabase.getUsername())
                    .password(customerDatabase.getPassword())
                    .serverTimeZone(ZoneId.of("UTC").toString())
                    .createConfig(0);
    final MySqlSnapshotSplitAssigner assigner =
            new MySqlSnapshotSplitAssigner(
                    sourceConfig,
                    DEFAULT_PARALLELISM,
                    Collections.singletonList(TableId.parse(tableName)),
                    false);
    assigner.open();
    MySqlSnapshotSplit snapshotSplit = (MySqlSnapshotSplit) assigner.getNext().get();
    // should contain only one split
    assertFalse(assigner.getNext().isPresent());
    // and the split is a full range one
    assertNull(snapshotSplit.getSplitStart());
    assertNull(snapshotSplit.getSplitEnd());
    final AtomicBoolean finishReading = new AtomicBoolean(false);
    final CountDownLatch updatingExecuted = new CountDownLatch(1);
    TestingReaderContext testingReaderContext = new TestingReaderContext();
    MySqlSourceReader<SourceRecord> reader = createReader(sourceConfig, testingReaderContext);
    reader.start();
    // background worker that keeps mutating row 101 while the snapshot split is read
    Thread updateWorker = new Thread(() -> {
        try (Connection connection = inventoryDatabase.getJdbcConnection();
            Statement statement = connection.createStatement()) {
            boolean flagSet = false;
            while (!finishReading.get()) {
                statement.execute(
                        "UPDATE products SET description='" + UUID.randomUUID().toString() + "' WHERE id=101");
                if (!flagSet) {
                    updatingExecuted.countDown();
                    flagSet = true;
                }
            }
        } catch (Exception throwables) {
            throwables.printStackTrace();
        }
    });
    // start to keep updating the products table
    updateWorker.start();
    // wait until the updating executed
    updatingExecuted.await();
    // start to read chunks of the products table
    reader.addSplits(Collections.singletonList(snapshotSplit));
    reader.notifyNoMoreSplits();
    TestingReaderOutput<SourceRecord> output = new TestingReaderOutput<>();
    while (true) {
        InputStatus status = reader.pollNext(output);
        if (status == InputStatus.END_OF_INPUT) {
            break;
        }
        if (status == InputStatus.NOTHING_AVAILABLE) {
            // block until the reader signals that more records can be polled
            reader.isAvailable().get();
        }
    }
    // stop the updating worker
    finishReading.set(true);
    updateWorker.join();
    // check the result
    ArrayList<SourceRecord> emittedRecords = output.getEmittedRecords();
    Map<Object, SourceRecord> recordByKey = new HashMap<>();
    for (SourceRecord record : emittedRecords) {
        SourceRecord existed = recordByKey.get(record.key());
        if (existed != null) {
            fail(String.format("Emitted records contain duplicates for key\n%s\n%s\n", existed, record));
        } else {
            recordByKey.put(record.key(), record);
        }
    }
}
Also used: MySqlSourceConfigFactory (com.ververica.cdc.connectors.mysql.source.config.MySqlSourceConfigFactory), HashMap (java.util.HashMap), Statement (java.sql.Statement), Connection (java.sql.Connection), JdbcConnection (io.debezium.jdbc.JdbcConnection), MySqlConnection (io.debezium.connector.mysql.MySqlConnection), MySqlSnapshotSplitAssigner (com.ververica.cdc.connectors.mysql.source.assigners.MySqlSnapshotSplitAssigner), MySqlSnapshotSplit (com.ververica.cdc.connectors.mysql.source.split.MySqlSnapshotSplit), MySqlSourceConfig (com.ververica.cdc.connectors.mysql.source.config.MySqlSourceConfig), CountDownLatch (java.util.concurrent.CountDownLatch), SourceRecord (org.apache.kafka.connect.source.SourceRecord), SQLException (java.sql.SQLException), TestingReaderOutput (org.apache.flink.connector.testutils.source.reader.TestingReaderOutput), AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), InputStatus (org.apache.flink.core.io.InputStatus), TestingReaderContext (org.apache.flink.connector.testutils.source.reader.TestingReaderContext), Test (org.junit.Test)
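
The duplicate check at the end of this test is the part worth reusing. The sketch below is our own extraction, not part of the test; DuplicateCheck and assertNoDuplicateKeys are hypothetical names. It indexes emitted records by their Kafka Connect key and fails on the first collision, just like the loop above.

import static org.junit.Assert.fail;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.source.SourceRecord;

final class DuplicateCheck {

    // Hypothetical helper mirroring the verification loop above: every emitted
    // record must have a unique key, otherwise the snapshot phase produced
    // duplicates while the table was being updated.
    static void assertNoDuplicateKeys(List<SourceRecord> records) {
        Map<Object, SourceRecord> recordByKey = new HashMap<>();
        for (SourceRecord record : records) {
            SourceRecord existing = recordByKey.putIfAbsent(record.key(), record);
            if (existing != null) {
                fail(String.format(
                        "Emitted records contain duplicates for key\n%s\n%s\n",
                        existing, record));
            }
        }
    }
}

Using putIfAbsent collapses the get/put pair from the test into a single map operation while keeping the same failure semantics.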

Aggregations

TestingReaderOutput (org.apache.flink.connector.testutils.source.reader.TestingReaderOutput): 22
InputStatus (org.apache.flink.core.io.InputStatus): 13
TestingReaderContext (org.apache.flink.connector.testutils.source.reader.TestingReaderContext): 10
Test (org.junit.Test): 10
MockSourceSplit (org.apache.flink.api.connector.source.mocks.MockSourceSplit): 9
MockBaseSource (org.apache.flink.connector.base.source.reader.mocks.MockBaseSource): 9
SourceReaderContext (org.apache.flink.api.connector.source.SourceReaderContext): 6
KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit): 6
TopicPartition (org.apache.kafka.common.TopicPartition): 6
Test (org.junit.jupiter.api.Test): 6
TestTemplate (org.junit.jupiter.api.TestTemplate): 6
Source (org.apache.flink.api.connector.source.Source): 3
MockSource (org.apache.flink.api.connector.source.mocks.MockSource): 3
PulsarPartitionSplit (org.apache.flink.connector.pulsar.source.split.PulsarPartitionSplit): 3
Counter (org.apache.flink.metrics.Counter): 3
MetricListener (org.apache.flink.metrics.testutils.MetricListener): 3
AdminClient (org.apache.kafka.clients.admin.AdminClient): 3
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 3
MySqlSnapshotSplitAssigner (com.ververica.cdc.connectors.mysql.source.assigners.MySqlSnapshotSplitAssigner): 1
MySqlSourceConfig (com.ververica.cdc.connectors.mysql.source.config.MySqlSourceConfig): 1