
Example 1 with Builder

Use of org.sourcelab.kafka.connect.apiclient.request.dto.NewConnectorDefinition.Builder in the project kafka-connect-cosmosdb by Microsoft.

The class SourceConnectorIT uses it in the method testResumeFromLatestOffsetsMultipleWorkers (a standalone sketch of the builder setup follows the example):

/**
 * Tests the connector with multiple workers reading from the latest offset recorded in the Kafka source partition.
 *
 * Create a connector with multiple workers and let it process data so that offsets are recorded in the Kafka
 * source partition. Then create a new connector with a long timeout (300s) that uses the latest offsets, and
 * create new items in Cosmos DB. Delete this connector so these new items won't be processed. Finally, recreate
 * the connector with multiple workers and a normal timeout, let it resume from the last offset, and check that
 * only the new items are added.
 */
@Test
public void testResumeFromLatestOffsetsMultipleWorkers() throws InterruptedException, ExecutionException {
    // Create source connector with multi-worker config
    Map<String, String> currentParams = connectConfig.build().getConfig();
    Builder multiWorkerConfigBuilder = connectConfig
        .withConfig("tasks.max", 2)
        .withConfig("connect.cosmos.containers.topicmap",
            currentParams.get("connect.cosmos.containers.topicmap")
                + String.format(",%s#%s", SECOND_KAFKA_TOPIC, SECOND_COSMOS_CONTAINER));
    connectClient.addConnector(multiWorkerConfigBuilder.build());
    // Allow time for Source connector to setup resources
    sleep(8000);
    // Create items in Cosmos DB to register initial offsets
    targetContainer.createItem(new Person("Test Person", RandomUtils.nextLong(1L, 9999999L) + ""));
    secondContainer.createItem(new Person("Another Person", RandomUtils.nextLong(1L, 9999999L) + ""));
    sleep(8000);
    connectClient.deleteConnector(connectorName);
    Person person = new Person("Frodo Baggins", RandomUtils.nextLong(1L, 9999999L) + "");
    Person secondPerson = new Person("Sam Wise", RandomUtils.nextLong(1L, 9999999L) + "");
    targetContainer.createItem(person);
    secondContainer.createItem(secondPerson);
    // Allow time for Source connector to start up, but delete it quickly so it won't process data
    connectClient.addConnector(multiWorkerConfigBuilder
        .withConfig("connect.cosmos.task.timeout", 300000L)
        .withConfig("connect.cosmos.offset.useLatest", true)
        .build());
    sleep(10000);
    connectClient.deleteConnector(connectorName);
    // Ensure that the record is not in the Kafka topic
    Optional<ConsumerRecord<String, JsonNode>> resultRecord = searchConsumerRecords(person);
    Assert.assertNull("Person A can be retrieved from messages.", resultRecord.orElse(null));
    resultRecord = searchConsumerRecords(secondPerson);
    Assert.assertNull("Person B can be retrieved from messages.", resultRecord.orElse(null));
    // Recreate connector with default settings
    connectClient.addConnector(connectConfig
        .withConfig("connect.cosmos.task.timeout", 5000L)
        .withConfig("connect.cosmos.offset.useLatest", true)
        .build());
    // Allow connector to process records
    sleep(14000);
    // Verify that record is now in the Kafka topic
    resultRecord = searchConsumerRecords(person);
    Assert.assertNotNull("Person A could not be retrieved from messages", resultRecord.orElse(null));
    resultRecord = searchConsumerRecords(secondPerson);
    Assert.assertNotNull("Person B could not be retrieved from messages", resultRecord.orElse(null));
}
Also used : CosmosClientBuilder (com.azure.cosmos.CosmosClientBuilder), Builder (org.sourcelab.kafka.connect.apiclient.request.dto.NewConnectorDefinition.Builder), ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord), Test (org.junit.Test), IntegrationTest (com.azure.cosmos.kafka.connect.IntegrationTest)
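
The connectConfig builder used above is a NewConnectorDefinition.Builder set up elsewhere in SourceConnectorIT. As a minimal, hypothetical sketch of how such a builder and client might be assembled with the sourcelab kafka-connect-client (only the topicmap, task timeout, and offset keys appear in the example; the connector name, endpoint, key, database, and URL values below are assumptions for illustration):

import org.sourcelab.kafka.connect.apiclient.Configuration;
import org.sourcelab.kafka.connect.apiclient.KafkaConnectClient;
import org.sourcelab.kafka.connect.apiclient.request.dto.NewConnectorDefinition;

// Hypothetical builder setup; values marked "assumed" are not taken from the example above.
NewConnectorDefinition.Builder connectConfig = NewConnectorDefinition.newBuilder()
    .withName("cosmosdb-source-connector")                                                           // assumed
    .withConfig("connector.class", "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector")  // assumed
    .withConfig("connect.cosmos.connection.endpoint", "https://<account>.documents.azure.com:443/")  // assumed
    .withConfig("connect.cosmos.master.key", "<key>")                                                 // assumed
    .withConfig("connect.cosmos.databasename", "kafkaconnect")                                        // assumed
    .withConfig("connect.cosmos.containers.topicmap", "source-test-topic#kafka");                     // topic#container format, as in the test

// The test then registers builder variants with the Kafka Connect REST API through the client:
KafkaConnectClient connectClient = new KafkaConnectClient(new Configuration("http://localhost:8083")); // URL assumed
connectClient.addConnector(connectConfig.build());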

Aggregations

CosmosClientBuilder (com.azure.cosmos.CosmosClientBuilder) 1
IntegrationTest (com.azure.cosmos.kafka.connect.IntegrationTest) 1
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord) 1
Test (org.junit.Test) 1
Builder (org.sourcelab.kafka.connect.apiclient.request.dto.NewConnectorDefinition.Builder) 1