
Example 1 with FetchedMessage

Use of org.apache.twill.kafka.client.FetchedMessage in project cdap by caskdata.

In the class MetricsMessageCallback, the method onReceived:

@Override
public long onReceived(Iterator<FetchedMessage> messages) {
    // Decode the metrics records.
    // Reusable stream: reset(...) re-points it at each message's payload below.
    ByteBufferInputStream is = new ByteBufferInputStream(null);
    List<MetricValues> records = Lists.newArrayList();
    // The offset to resume from on the next fetch; returned to the consumer.
    long nextOffset = 0L;
    while (messages.hasNext()) {
        FetchedMessage input = messages.next();
        nextOffset = input.getNextOffset();
        try {
            MetricValues metricValues = recordReader.read(new BinaryDecoder(is.reset(input.getPayload())), recordSchema);
            records.add(metricValues);
        } catch (IOException e) {
            LOG.warn("Failed to decode message to MetricValue. Skipped. {}", e.getMessage());
        }
    }
    if (records.isEmpty()) {
        LOG.info("No records to process.");
        return nextOffset;
    }
    long now = System.currentTimeMillis();
    try {
        addProcessingStats(records, now);
        metricStore.add(records);
    } catch (Exception e) {
        // SimpleKafkaConsumer will log the error, and continue on past these messages
        throw new RuntimeException("Failed to add metrics data to a store", e);
    }
    recordsProcessed += records.size();
    // avoid logging more than once a minute
    if (now > lastLoggedMillis + TimeUnit.MINUTES.toMillis(1)) {
        lastLoggedMillis = now;
        LOG.debug("{} metrics records processed. Last record time: {}.", recordsProcessed, records.get(records.size() - 1).getTimestamp());
    }
    return nextOffset;
}
Also used: ByteBufferInputStream (co.cask.common.io.ByteBufferInputStream), MetricValues (co.cask.cdap.api.metrics.MetricValues), FetchedMessage (org.apache.twill.kafka.client.FetchedMessage), BinaryDecoder (co.cask.cdap.common.io.BinaryDecoder), IOException (java.io.IOException)
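
For orientation, here is a minimal sketch (not taken from the CDAP source) of how a callback like MetricsMessageCallback is wired up, using the same Twill consumer API that Examples 2 and 3 below demonstrate. The kafkaClient field, the "metrics" topic name, and the starting offset are assumptions for illustration.

KafkaConsumer.Preparer preparer = kafkaClient.getConsumer().prepare();
// Subscribe to partition 0 of a hypothetical "metrics" topic, starting at offset 0.
preparer.add("metrics", 0, 0L);
// The offset returned by onReceived tells the consumer where to resume the next
// fetch; constructor arguments for the callback are omitted in this sketch.
Cancellable cancellable = preparer.consume(new MetricsMessageCallback());
// ... later, during shutdown:
cancellable.cancel();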

Example 2 with FetchedMessage

Use of org.apache.twill.kafka.client.FetchedMessage in project cdap by caskdata.

In the class KafkaTester, the method getPublishedMessages:

/**
   * Returns a list of messages from the specified Kafka topic.
   *
   * @param topic the specified Kafka topic
   * @param partitions the set of partitions to consume from
   * @param expectedNumMsgs the expected number of messages
   * @param offset the Kafka offset to start consuming from
   * @param converter converter function to turn a payload ByteBuffer into type T
   * @param <T> the type of each message
   * @return a list of messages from the specified Kafka topic
   */
public <T> List<T> getPublishedMessages(String topic, Set<Integer> partitions, int expectedNumMsgs, int offset, final Function<FetchedMessage, T> converter) throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(expectedNumMsgs);
    final CountDownLatch stopLatch = new CountDownLatch(1);
    final List<T> actual = new ArrayList<>(expectedNumMsgs);
    KafkaConsumer.Preparer preparer = kafkaClient.getConsumer().prepare();
    for (int partition : partitions) {
        preparer.add(topic, partition, offset);
    }
    Cancellable cancellable = preparer.consume(new KafkaConsumer.MessageCallback() {

        @Override
        public long onReceived(Iterator<FetchedMessage> messages) {
            // Named nextOffset to avoid shadowing the method parameter "offset".
            long nextOffset = 0L;
            while (messages.hasNext()) {
                FetchedMessage message = messages.next();
                actual.add(converter.apply(message));
                latch.countDown();
                nextOffset = message.getNextOffset();
            }
            return nextOffset;
        }

        @Override
        public void finished() {
            stopLatch.countDown();
        }
    });
    Assert.assertTrue(String.format("Expected %d messages but found %d messages", expectedNumMsgs, actual.size()), latch.await(15, TimeUnit.SECONDS));
    cancellable.cancel();
    Assert.assertTrue(stopLatch.await(15, TimeUnit.SECONDS));
    return actual;
}
Also used: Cancellable (org.apache.twill.common.Cancellable), ArrayList (java.util.ArrayList), KafkaConsumer (org.apache.twill.kafka.client.KafkaConsumer), FetchedMessage (org.apache.twill.kafka.client.FetchedMessage), CountDownLatch (java.util.concurrent.CountDownLatch)
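
A hypothetical caller of this helper, for illustration only: it reads ten messages from partition 0 of a made-up topic, decoding each payload as a UTF-8 string. The topic name, counts, and the kafkaTester reference are assumptions, not taken from the CDAP tests.

List<String> decoded = kafkaTester.getPublishedMessages(
    "test.topic", ImmutableSet.of(0), 10, 0,
    new Function<FetchedMessage, String>() {

        @Override
        public String apply(FetchedMessage input) {
            // getPayload() returns a ByteBuffer positioned at the message body.
            return StandardCharsets.UTF_8.decode(input.getPayload()).toString();
        }
    });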

Example 3 with FetchedMessage

Use of org.apache.twill.kafka.client.FetchedMessage in project cdap by caskdata.

In the class KafkaOffsetResolverTest, the method waitForAllLogsPublished:

private void waitForAllLogsPublished(String topic, int logsNum) throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(logsNum);
    final CountDownLatch stopLatch = new CountDownLatch(1);
    Cancellable cancel = KAFKA_TESTER.getKafkaClient().getConsumer().prepare().add(topic, 0, 0).consume(new KafkaConsumer.MessageCallback() {

        @Override
        public long onReceived(Iterator<FetchedMessage> messages) {
            long nextOffset = -1L;
            while (messages.hasNext()) {
                FetchedMessage message = messages.next();
                nextOffset = message.getNextOffset();
                Assert.assertTrue(latch.getCount() > 0);
                latch.countDown();
            }
            return nextOffset;
        }

        @Override
        public void finished() {
            stopLatch.countDown();
        }
    });
    Assert.assertTrue(latch.await(5, TimeUnit.SECONDS));
    cancel.cancel();
    Assert.assertTrue(stopLatch.await(1, TimeUnit.SECONDS));
}
Also used: Cancellable (org.apache.twill.common.Cancellable), KafkaConsumer (org.apache.twill.kafka.client.KafkaConsumer), FetchedMessage (org.apache.twill.kafka.client.FetchedMessage), CountDownLatch (java.util.concurrent.CountDownLatch)
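
The publishing side that this helper waits on is not shown in the excerpt. Below is a hedged sketch using Twill's KafkaPublisher, assuming the KafkaClient returned by KAFKA_TESTER is already started; the message contents, partition key, and ack level are illustrative.

KafkaPublisher publisher = KAFKA_TESTER.getKafkaClient()
    .getPublisher(KafkaPublisher.Ack.LEADER_RECEIVED, Compression.NONE);
KafkaPublisher.Preparer preparer = publisher.prepare(topic);
for (int i = 0; i < logsNum; i++) {
    // add(payload, partitionKey): the key is hashed to choose a partition.
    preparer.add(StandardCharsets.UTF_8.encode("log-" + i), "key");
}
// send() returns a ListenableFuture<Integer> with the number of messages published.
preparer.send();
waitForAllLogsPublished(topic, logsNum);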

Example 4 with FetchedMessage

Use of org.apache.twill.kafka.client.FetchedMessage in project cdap by caskdata.

In the class TestKafkaLogging, the method testPartitionKey:

// Note: LogReader.getLog is tested in LogSaverTest for distributed mode
@Test
public void testPartitionKey() throws Exception {
    CConfiguration cConf = KAFKA_TESTER.getCConf();
    // set kafka partition key to application
    cConf.set(Constants.Logging.LOG_PUBLISH_PARTITION_KEY, "application");
    Logger logger = LoggerFactory.getLogger("TestKafkaLogging");
    LoggingContext loggingContext = new FlowletLoggingContext("TKL_NS_2", "APP_2", "FLOW_2", "FLOWLET_2", "RUN2", "INSTANCE2");
    LoggingContextAccessor.setLoggingContext(loggingContext);
    for (int i = 0; i < 40; ++i) {
        logger.warn("TKL_NS_2 Test log message {} {} {}", i, "arg1", "arg2", new Exception("test exception"));
    }
    loggingContext = new FlowletLoggingContext("TKL_NS_2", "APP_2", "FLOW_3", "FLOWLET_3", "RUN3", "INSTANCE3");
    LoggingContextAccessor.setLoggingContext(loggingContext);
    for (int i = 0; i < 40; ++i) {
        logger.warn("TKL_NS_2 Test log message {} {} {}", i, "arg1", "arg2", new Exception("test exception"));
    }
    final Multimap<Integer, String> actual = ArrayListMultimap.create();
    KAFKA_TESTER.getPublishedMessages(KAFKA_TESTER.getCConf().get(Constants.Logging.KAFKA_TOPIC), ImmutableSet.of(0, 1), 40, 0, new Function<FetchedMessage, String>() {

        @Override
        public String apply(final FetchedMessage input) {
            try {
                Map.Entry<Integer, String> entry = convertFetchedMessage(input);
                actual.put(entry.getKey(), entry.getValue());
            } catch (IOException e) {
                // should never happen
            }
            return "";
        }
    });
    boolean isPresent = false;
    // check that all the logs from the same app went to the same partition
    for (Map.Entry<Integer, Collection<String>> entry : actual.asMap().entrySet()) {
        if (entry.getValue().contains("TKL_NS_2:APP_2")) {
            if (isPresent) {
                // if we have already found another partition with application context, assert false
                Assert.assertFalse("Only one partition should have application logging context", isPresent);
            }
            isPresent = true;
        }
    }
    // reset kafka partition key
    cConf.set(Constants.Logging.LOG_PUBLISH_PARTITION_KEY, "program");
}
Also used: LoggingContext (co.cask.cdap.common.logging.LoggingContext), FlowletLoggingContext (co.cask.cdap.logging.context.FlowletLoggingContext), IOException (java.io.IOException), Logger (org.slf4j.Logger), CConfiguration (co.cask.cdap.common.conf.CConfiguration), Collection (java.util.Collection), FetchedMessage (org.apache.twill.kafka.client.FetchedMessage), HashMap (java.util.HashMap), Map (java.util.Map), Test (org.junit.Test)
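
The convertFetchedMessage helper called above is not included in the excerpt. A hypothetical sketch of its shape follows: it pairs the partition a message arrived on with a "namespace:application" key recovered from the payload. The serializer field and the MDC key names are assumptions for illustration.

private Map.Entry<Integer, String> convertFetchedMessage(FetchedMessage message) throws IOException {
    // Hypothetical decoder turning the serialized payload back into a logback event.
    ILoggingEvent event = serializer.fromBytes(message.getPayload());
    // MDC key names are assumed; CDAP logging contexts carry namespace and app tags.
    String key = event.getMDCPropertyMap().get(".namespaceId") + ":"
        + event.getMDCPropertyMap().get(".applicationId");
    return Maps.immutableEntry(message.getTopicPartition().getPartition(), key);
}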

Aggregations

FetchedMessage (org.apache.twill.kafka.client.FetchedMessage) 4
IOException (java.io.IOException) 2
CountDownLatch (java.util.concurrent.CountDownLatch) 2
Cancellable (org.apache.twill.common.Cancellable) 2
KafkaConsumer (org.apache.twill.kafka.client.KafkaConsumer) 2
MetricValues (co.cask.cdap.api.metrics.MetricValues) 1
CConfiguration (co.cask.cdap.common.conf.CConfiguration) 1
BinaryDecoder (co.cask.cdap.common.io.BinaryDecoder) 1
LoggingContext (co.cask.cdap.common.logging.LoggingContext) 1
FlowletLoggingContext (co.cask.cdap.logging.context.FlowletLoggingContext) 1
ByteBufferInputStream (co.cask.common.io.ByteBufferInputStream) 1
ArrayList (java.util.ArrayList) 1
Collection (java.util.Collection) 1
HashMap (java.util.HashMap) 1
Map (java.util.Map) 1
Test (org.junit.Test) 1
Logger (org.slf4j.Logger) 1