
Example 1 with Product

use of io.confluent.examples.streams.avro.Product in project kafka-streams-examples by confluentinc.

the class GlobalKTablesExampleDriver method generateProducts.

static List<Product> generateProducts(final String bootstrapServers, final String schemaRegistryUrl, final int count) {
    final SpecificAvroSerde<Product> productSerde = createSerde(schemaRegistryUrl);
    final Properties producerProperties = new Properties();
    producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    // Producer with Long keys and Avro-serialized Product values
    final KafkaProducer<Long, Product> producer = new KafkaProducer<>(producerProperties, Serdes.Long().serializer(), productSerde.serializer());
    final List<Product> allProducts = new ArrayList<>();
    for (long i = 0; i < count; i++) {
        // Build a Product from random strings and publish it to the product topic,
        // keyed by its sequential id
        final Product product = new Product(randomString(10), randomString(RECORDS_TO_GENERATE), randomString(20));
        allProducts.add(product);
        producer.send(new ProducerRecord<>(PRODUCT_TOPIC, i, product));
    }
    producer.close();
    return allProducts;
}
Also used: KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) ArrayList(java.util.ArrayList) Product(io.confluent.examples.streams.avro.Product) Properties(java.util.Properties)
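
The helper createSerde(schemaRegistryUrl) is not shown in this snippet. A minimal sketch of what it plausibly does, based on the explicit serde configuration in Example 2 below (the method signature and generic bound are assumptions):

// Assumed helper: configure a SpecificAvroSerde against the given Schema Registry URL,
// mirroring the configuration done inline in Example 2.
// Requires org.apache.avro.specific.SpecificRecord, java.util.Map, java.util.Collections.
private static <T extends SpecificRecord> SpecificAvroSerde<T> createSerde(final String schemaRegistryUrl) {
    final SpecificAvroSerde<T> serde = new SpecificAvroSerde<>();
    final Map<String, String> serdeConfig = Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
    // false: the serde is used for record values, not record keys
    serde.configure(serdeConfig, false);
    return serde;
}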

Example 2 with Product

use of io.confluent.examples.streams.avro.Product in project kafka-streams-examples by confluentinc.

the class GlobalKTablesExample method createStreams.

public static KafkaStreams createStreams(final String bootstrapServers, final String schemaRegistryUrl, final String stateDir) {
    final Properties streamsConfiguration = new Properties();
    // Give the Streams application a unique name.  The name must be unique in the Kafka cluster
    // against which the application is run.
    streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-tables-example");
    streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "global-tables-example-client");
    // Where to find Kafka broker(s).
    streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    streamsConfiguration.put(StreamsConfig.STATE_DIR_CONFIG, stateDir);
    // Set to earliest so we don't miss any data that arrived in the topics before the process
    // started.
    streamsConfiguration.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    // Create and configure the SpecificAvroSerdes required in this example
    final SpecificAvroSerde<Order> orderSerde = new SpecificAvroSerde<>();
    final Map<String, String> serdeConfig = Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
    orderSerde.configure(serdeConfig, false);
    final SpecificAvroSerde<Customer> customerSerde = new SpecificAvroSerde<>();
    customerSerde.configure(serdeConfig, false);
    final SpecificAvroSerde<Product> productSerde = new SpecificAvroSerde<>();
    productSerde.configure(serdeConfig, false);
    final SpecificAvroSerde<EnrichedOrder> enrichedOrdersSerde = new SpecificAvroSerde<>();
    enrichedOrdersSerde.configure(serdeConfig, false);
    final StreamsBuilder builder = new StreamsBuilder();
    // Get the stream of orders
    final KStream<Long, Order> ordersStream = builder.stream(ORDER_TOPIC, Consumed.with(Serdes.Long(), orderSerde));
    // Create a global table for customers. The data from this global table
    // will be fully replicated on each instance of this application.
    final GlobalKTable<Long, Customer> customers = builder.globalTable(CUSTOMER_TOPIC, Materialized.<Long, Customer, KeyValueStore<Bytes, byte[]>>as(CUSTOMER_STORE).withKeySerde(Serdes.Long()).withValueSerde(customerSerde));
    // Create a global table for products. The data from this global table
    // will be fully replicated on each instance of this application.
    final GlobalKTable<Long, Product> products = builder.globalTable(PRODUCT_TOPIC, Materialized.<Long, Product, KeyValueStore<Bytes, byte[]>>as(PRODUCT_STORE).withKeySerde(Serdes.Long()).withValueSerde(productSerde));
    // Join the orders stream to the customer global table. As this is a global table,
    // we can use a non-key based join without needing to repartition the input stream
    final KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(customers, (orderId, order) -> order.getCustomerId(), (order, customer) -> new CustomerOrder(customer, order));
    // Join the enriched customer order stream with the product global table. As this is a global table,
    // we can use a non-key based join without needing to repartition the input stream
    final KStream<Long, EnrichedOrder> enrichedOrdersStream = customerOrdersStream.join(products, (orderId, customerOrder) -> customerOrder.productId(), (customerOrder, product) -> new EnrichedOrder(product, customerOrder.customer, customerOrder.order));
    // Write the enriched order to the enriched-order topic
    enrichedOrdersStream.to(ENRICHED_ORDER_TOPIC, Produced.with(Serdes.Long(), enrichedOrdersSerde));
    return new KafkaStreams(builder.build(), new StreamsConfig(streamsConfiguration));
}
Also used: EnrichedOrder(io.confluent.examples.streams.avro.EnrichedOrder) Order(io.confluent.examples.streams.avro.Order) KafkaStreams(org.apache.kafka.streams.KafkaStreams) Customer(io.confluent.examples.streams.avro.Customer) Product(io.confluent.examples.streams.avro.Product) Properties(java.util.Properties) StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) Bytes(org.apache.kafka.common.utils.Bytes) SpecificAvroSerde(io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde) StreamsConfig(org.apache.kafka.streams.StreamsConfig)
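
The two-step join above relies on a CustomerOrder holder type that is not shown in this snippet. A minimal sketch consistent with how it is used here (the field access and productId() accessor are inferred from the lambdas above; getProductId() on Order is an assumption):

// Assumed holder pairing a customer with an order between the two joins.
private static class CustomerOrder {
    final Customer customer;
    final Order order;

    CustomerOrder(final Customer customer, final Order order) {
        this.customer = customer;
        this.order = order;
    }

    // The second join uses this foreign key to look up the product global table
    long productId() {
        return order.getProductId();
    }
}

Because both lookups go through GlobalKTables, which are fully replicated to every application instance, neither join needs a repartition topic even though the join keys (customer id, product id) differ from the stream's key (order id).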

Example 3 with Product

use of io.confluent.examples.streams.avro.Product in project kafka-streams-examples by confluentinc.

the class GlobalKTablesExampleTest method shouldDemonstrateGlobalKTableJoins.

@Test
public void shouldDemonstrateGlobalKTableJoins() throws Exception {
    final List<Customer> customers = GlobalKTablesExampleDriver.generateCustomers(CLUSTER.bootstrapServers(), CLUSTER.schemaRegistryUrl(), 100);
    final List<Product> products = GlobalKTablesExampleDriver.generateProducts(CLUSTER.bootstrapServers(), CLUSTER.schemaRegistryUrl(), 100);
    final List<Order> orders = GlobalKTablesExampleDriver.generateOrders(CLUSTER.bootstrapServers(), CLUSTER.schemaRegistryUrl(), 100, 100, 50);
    // start up the streams instances
    streamInstanceOne.start();
    streamInstanceTwo.start();
    final Properties consumerProps = new Properties();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "global-tables-consumer");
    consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, Serdes.Long().deserializer().getClass());
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    consumerProps.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, CLUSTER.schemaRegistryUrl());
    consumerProps.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
    // receive the enriched orders
    final List<EnrichedOrder> enrichedOrders = IntegrationTestUtils.waitUntilMinValuesRecordsReceived(consumerProps, ENRICHED_ORDER_TOPIC, 50, 60000);
    // verify that all the data comes from the generated set
    for (final EnrichedOrder enrichedOrder : enrichedOrders) {
        assertThat(customers, hasItem(enrichedOrder.getCustomer()));
        assertThat(products, hasItem(enrichedOrder.getProduct()));
        assertThat(orders, hasItem(enrichedOrder.getOrder()));
    }
    // demonstrate that global table data is available on all instances
    verifyAllCustomersInStore(customers, streamInstanceOne.store(CUSTOMER_STORE, QueryableStoreTypes.keyValueStore()));
    verifyAllCustomersInStore(customers, streamInstanceTwo.store(CUSTOMER_STORE, QueryableStoreTypes.keyValueStore()));
    verifyAllProductsInStore(products, streamInstanceOne.store(PRODUCT_STORE, QueryableStoreTypes.keyValueStore()));
    verifyAllProductsInStore(products, streamInstanceTwo.store(PRODUCT_STORE, QueryableStoreTypes.keyValueStore()));
}
Also used: Order(io.confluent.examples.streams.avro.Order) EnrichedOrder(io.confluent.examples.streams.avro.EnrichedOrder) Customer(io.confluent.examples.streams.avro.Customer) Product(io.confluent.examples.streams.avro.Product) Properties(java.util.Properties) Test(org.junit.Test)
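
The verification helpers are not shown in this snippet. A minimal sketch of verifyAllProductsInStore, assuming products are keyed by their generation index as in Example 1 (the helper body is an assumption; the real test may need to retry until the global stores finish restoring):

// Assumed helper: products were produced with sequential Long keys (0..count-1),
// so the global store should map key i to the i-th generated product.
// Requires org.apache.kafka.streams.state.ReadOnlyKeyValueStore and Hamcrest's equalTo.
private static void verifyAllProductsInStore(final List<Product> products,
                                             final ReadOnlyKeyValueStore<Long, Product> store) {
    for (long i = 0; i < products.size(); i++) {
        assertThat(store.get(i), equalTo(products.get((int) i)));
    }
}

verifyAllCustomersInStore would follow the same pattern over the customer store.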

Aggregations

Product (io.confluent.examples.streams.avro.Product): 3 uses
Properties (java.util.Properties): 3 uses
Customer (io.confluent.examples.streams.avro.Customer): 2 uses
EnrichedOrder (io.confluent.examples.streams.avro.EnrichedOrder): 2 uses
Order (io.confluent.examples.streams.avro.Order): 2 uses
SpecificAvroSerde (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde): 1 use
ArrayList (java.util.ArrayList): 1 use
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 1 use
Bytes (org.apache.kafka.common.utils.Bytes): 1 use
KafkaStreams (org.apache.kafka.streams.KafkaStreams): 1 use
StreamsBuilder (org.apache.kafka.streams.StreamsBuilder): 1 use
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 1 use
Test (org.junit.Test): 1 use