Search in sources:

Example 1 with Order

Use of io.confluent.examples.streams.avro.microservices.Order in project kafka-streams-examples by confluentinc.

From the class FraudService, method processStreams:

private KafkaStreams processStreams(final String bootstrapServers, final String stateDir) {
    // Latch onto instances of the orders and inventory topics
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Order> orders = builder
        .stream(ORDERS.name(), Consumed.with(ORDERS.keySerde(), ORDERS.valueSerde()))
        .filter((id, order) -> OrderState.CREATED.equals(order.getState()));
    // Create an aggregate of the total value by customer and hold it with the order.
    // We use session windows to detect periods of activity.
    KTable<Windowed<Long>, OrderValue> aggregate = orders
        .groupBy((id, order) -> order.getCustomerId(), Serialized.with(Serdes.Long(), ORDERS.valueSerde()))
        .windowedBy(SessionWindows.with(60 * MIN))
        .aggregate(OrderValue::new,
            // Calculate running total for each customer within this window
            (custId, order, total) -> new OrderValue(order,
                total.getValue() + order.getQuantity() * order.getPrice()),
            // Include a merger as we're using session windows.
            (k, a, b) -> simpleMerge(a, b),
            Materialized.with(null, Schemas.ORDER_VALUE_SERDE));
    // Ditch the windowing and rekey
    KStream<String, OrderValue> ordersWithTotals = aggregate
        .toStream((windowedKey, orderValue) -> windowedKey.key())
        // When elements are evicted from a session window they create delete events. Filter these out.
        .filter((k, v) -> v != null)
        .selectKey((id, orderValue) -> orderValue.getOrder().getId());
    // Now branch the stream into two, for pass and fail, based on whether the windowed total is over the fraud limit
    KStream<String, OrderValue>[] forks = ordersWithTotals.branch(
        (id, orderValue) -> orderValue.getValue() >= FRAUD_LIMIT,
        (id, orderValue) -> orderValue.getValue() < FRAUD_LIMIT);
    forks[0].mapValues(orderValue -> new OrderValidation(orderValue.getOrder().getId(), FRAUD_CHECK, FAIL))
        .to(ORDER_VALIDATIONS.name(), Produced.with(ORDER_VALIDATIONS.keySerde(), ORDER_VALIDATIONS.valueSerde()));
    forks[1].mapValues(orderValue -> new OrderValidation(orderValue.getOrder().getId(), FRAUD_CHECK, PASS))
        .to(ORDER_VALIDATIONS.name(), Produced.with(ORDER_VALIDATIONS.keySerde(), ORDER_VALIDATIONS.valueSerde()));
    // Disable caching to ensure a complete aggregate changelog. This is a little trick we need to apply
    // as caching in Kafka Streams will conflate subsequent updates for the same key. Disabling caching
    // ensures we get a complete "changelog" from the aggregate(...) step above (i.e. every input event
    // will have a corresponding output event).
    Properties props = baseStreamsConfig(bootstrapServers, stateDir, FRAUD_SERVICE_APP_ID);
    props.setProperty(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "0");
    return new KafkaStreams(builder.build(), props);
}
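The session-window aggregation above delegates to a simpleMerge helper that this excerpt doesn't show. Session windows merge when a late-arriving event bridges two sessions, so the merger must combine two partial OrderValue aggregates. A minimal sketch of such a helper, assuming the later aggregate's order is kept and the running totals are summed (an illustration, not the project's confirmed implementation):

private OrderValue simpleMerge(final OrderValue a, final OrderValue b) {
    // Hypothetical: keep the order from the later session and sum the running
    // totals; 'a' may be null when there is no earlier aggregate to merge with.
    return new OrderValue(b.getOrder(), (a == null ? 0D : a.getValue()) + b.getValue());
}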

Example 2 with Order

Use of io.confluent.examples.streams.avro.microservices.Order in project kafka-streams-examples by confluentinc.

From the class InventoryService, method processStreams:

private KafkaStreams processStreams(final String bootstrapServers, final String stateDir) {
    // Latch onto instances of the orders and inventory topics
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Order> orders = builder.stream(Topics.ORDERS.name(),
        Consumed.with(Topics.ORDERS.keySerde(), Topics.ORDERS.valueSerde()));
    KTable<Product, Integer> warehouseInventory = builder.table(Topics.WAREHOUSE_INVENTORY.name(),
        Consumed.with(Topics.WAREHOUSE_INVENTORY.keySerde(), Topics.WAREHOUSE_INVENTORY.valueSerde()));
    // Create a store to reserve inventory whilst the order is processed.
    // This will be prepopulated from Kafka before the service starts processing.
    StoreBuilder<KeyValueStore<Product, Long>> reservedStock = Stores
        .keyValueStoreBuilder(Stores.persistentKeyValueStore(RESERVED_STOCK_STORE_NAME),
            Topics.WAREHOUSE_INVENTORY.keySerde(), Serdes.Long())
        .withLoggingEnabled(new HashMap<>());
    builder.addStateStore(reservedStock);
    // First change orders stream to be keyed by Product (so we can join with warehouse inventory)
    orders.selectKey((id, order) -> order.getProduct())
        .filter((id, order) -> OrderState.CREATED.equals(order.getState()))
        .join(warehouseInventory, KeyValue::new,
            Joined.with(Topics.WAREHOUSE_INVENTORY.keySerde(), Topics.ORDERS.valueSerde(), Serdes.Integer()))
        .transform(InventoryValidator::new, RESERVED_STOCK_STORE_NAME)
        .to(Topics.ORDER_VALIDATIONS.name(),
            Produced.with(Topics.ORDER_VALIDATIONS.keySerde(), Topics.ORDER_VALIDATIONS.valueSerde()));
    return new KafkaStreams(builder.build(), MicroserviceUtils.baseStreamsConfig(bootstrapServers, stateDir, INVENTORY_SERVICE_APP_ID));
}
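The transform(InventoryValidator::new, RESERVED_STOCK_STORE_NAME) step plugs in a Transformer whose body isn't shown in this excerpt. A sketch of how such a validator could work, assuming it passes an order only when warehouse stock minus already-reserved stock covers the requested quantity (the logic is illustrative, not the project's confirmed implementation):

// Illustrative skeleton of the InventoryValidator transformer: compare the ordered
// quantity against warehouse stock minus stock already reserved for in-flight
// orders, emitting a PASS or FAIL OrderValidation.
private static class InventoryValidator implements
        Transformer<Product, KeyValue<Order, Integer>, KeyValue<String, OrderValidation>> {

    private KeyValueStore<Product, Long> reservedStocksStore;

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        reservedStocksStore = (KeyValueStore<Product, Long>) context.getStateStore(RESERVED_STOCK_STORE_NAME);
    }

    @Override
    public KeyValue<String, OrderValidation> transform(final Product productId,
                                                       final KeyValue<Order, Integer> orderAndStock) {
        final Order order = orderAndStock.key;
        final int warehouseStockCount = orderAndStock.value;
        Long reserved = reservedStocksStore.get(order.getProduct());
        if (reserved == null) {
            reserved = 0L;
        }
        // Reserve stock and pass the order only if enough unreserved stock remains
        if (warehouseStockCount - reserved - order.getQuantity() >= 0) {
            reservedStocksStore.put(order.getProduct(), reserved + order.getQuantity());
            return KeyValue.pair(order.getId(), new OrderValidation(order.getId(), INVENTORY_CHECK, PASS));
        }
        return KeyValue.pair(order.getId(), new OrderValidation(order.getId(), INVENTORY_CHECK, FAIL));
    }

    @Override
    public void close() {
    }
}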

Example 3 with Order

Use of io.confluent.examples.streams.avro.microservices.Order in project kafka-streams-examples by confluentinc.

From the class EmailServiceTest, method shouldSendEmailWithValidContents:

@Test
public void shouldSendEmailWithValidContents() throws Exception {
    // Given one order, customer and payment
    String orderId = id(0L);
    Order order = new Order(orderId, 15L, CREATED, UNDERPANTS, 3, 5.00d);
    Customer customer = new Customer(15L, "Franz", "Kafka", "frans@thedarkside.net", "oppression street, prague, cze");
    Payment payment = new Payment("Payment:1234", orderId, "CZK", 1000.00d);
    emailService = new EmailService(details -> {
        assertThat(details.customer).isEqualTo(customer);
        assertThat(details.payment).isEqualTo(payment);
        assertThat(details.order).isEqualTo(order);
        complete = true;
    });
    send(Topics.CUSTOMERS, Collections.singleton(new KeyValue<>(customer.getId(), customer)));
    send(Topics.ORDERS, Collections.singleton(new KeyValue<>(order.getId(), order)));
    send(Topics.PAYMENTS, Collections.singleton(new KeyValue<>(payment.getId(), payment)));
    // When
    emailService.start(CLUSTER.bootstrapServers());
    // Then
    TestUtils.waitForCondition(() -> complete, "Email was never sent.");
}
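The send(...) calls come from MicroserviceTestUtils and aren't shown in this excerpt. A minimal sketch of what such a helper might do, assuming the project's Schemas.Topic type exposes name() and key/value serdes (as the excerpts above use) and that records are produced synchronously with a plain KafkaProducer; the signature and details are assumptions:

// Hypothetical sketch of the send(...) test helper: produce each record to the
// given topic using that topic's configured serdes, blocking until acknowledged.
private static <K, V> void send(final Schemas.Topic<K, V> topic,
                                final Collection<KeyValue<K, V>> records) throws Exception {
    final Properties config = new Properties();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
    try (KafkaProducer<K, V> producer = new KafkaProducer<>(config,
            topic.keySerde().serializer(), topic.valueSerde().serializer())) {
        for (final KeyValue<K, V> record : records) {
            producer.send(new ProducerRecord<>(topic.name(), record.key, record.value)).get();
        }
    }
}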

Example 4 with Order

Use of io.confluent.examples.streams.avro.microservices.Order in project kafka-streams-examples by confluentinc.

From the class OrderDetailsService, method startService:

private void startService(String bootstrapServers) {
    startConsumer(bootstrapServers);
    startProducer(bootstrapServers);
    try {
        Map<TopicPartition, OffsetAndMetadata> consumedOffsets = new HashMap<>();
        consumer.subscribe(singletonList(Topics.ORDERS.name()));
        producer.initTransactions();
        while (running) {
            ConsumerRecords<String, Order> records = consumer.poll(100);
            if (records.count() > 0) {
                producer.beginTransaction();
                for (ConsumerRecord<String, Order> record : records) {
                    Order order = record.value();
                    if (OrderState.CREATED.equals(order.getState())) {
                        // Validate the order then send the result (but note we are in a transaction so
                        // nothing will be "seen" downstream until we commit the transaction below)
                        producer.send(result(order, isValid(order) ? PASS : FAIL));
                        recordOffset(consumedOffsets, record);
                    }
                }
                producer.sendOffsetsToTransaction(consumedOffsets, CONSUMER_GROUP_ID);
                producer.commitTransaction();
            }
        }
    } finally {
        close();
    }
}
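The recordOffset(...) helper isn't shown in the excerpt. For the transactional consume-transform-produce loop above to be exactly-once, the offset recorded per partition must be the offset of the next record to consume, i.e. record.offset() + 1, which is what sendOffsetsToTransaction(...) expects. A sketch of the helper under that assumption:

// Hypothetical sketch of recordOffset: track, per partition, the offset of the
// next record to consume (current offset + 1) for sendOffsetsToTransaction(...).
private void recordOffset(final Map<TopicPartition, OffsetAndMetadata> consumedOffsets,
                          final ConsumerRecord<String, Order> record) {
    final OffsetAndMetadata nextOffset = new OffsetAndMetadata(record.offset() + 1);
    consumedOffsets.put(new TopicPartition(record.topic(), record.partition()), nextOffset);
}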

Example 5 with Order

Use of io.confluent.examples.streams.avro.microservices.Order in project kafka-streams-examples by confluentinc.

From the class OrdersService, method submitOrder:

/**
 * Persist an Order to Kafka. Returns once the order is successfully written to R nodes, where
 * R is the replication factor configured in Kafka.
 *
 * @param order the order to add
 * @param timeout the max time to wait for the response from Kafka before timing out the POST
 */
@POST
@ManagedAsync
@Path("/orders")
@Consumes(MediaType.APPLICATION_JSON)
public void submitOrder(final OrderBean order,
                        @QueryParam("timeout") @DefaultValue(CALL_TIMEOUT) final Long timeout,
                        @Suspended final AsyncResponse response) {
    setTimeout(timeout, response);
    Order bean = fromBean(order);
    producer.send(new ProducerRecord<>(ORDERS.name(), bean.getId(), bean), callback(response, bean.getId()));
}
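The durability guarantee in the Javadoc (acknowledged by all replicas before the callback completes) would come from how the producer is configured elsewhere in the service. A sketch of producer settings that provide that behaviour; the exact values are assumptions, not the service's confirmed configuration:

// Hypothetical producer configuration behind the "written to R nodes" guarantee:
// acks=all waits for all in-sync replicas before the send callback fires.
final Properties producerConfig = new Properties();
producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
producerConfig.put(ProducerConfig.ACKS_CONFIG, "all");                  // full-replica acknowledgement
producerConfig.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");   // no duplicates on retry
producerConfig.put(ProducerConfig.RETRIES_CONFIG, String.valueOf(Integer.MAX_VALUE));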
