
Example 1 with KafkaConfig

Use of org.apache.samza.config.KafkaConfig in project samza by apache.

From class KafkaSystemAdmin, method toKafkaSpec:

/**
 * Converts a StreamSpec into a KafkaStreamSpec. Special handling for coordinator and changelog stream.
 * @param spec a StreamSpec object
 * @return KafkaStreamSpec object
 */
public KafkaStreamSpec toKafkaSpec(StreamSpec spec) {
    KafkaStreamSpec kafkaSpec;
    if (spec.isChangeLogStream()) {
        String topicName = spec.getPhysicalName();
        ChangelogInfo topicMeta = changelogTopicMetaInformation.get(topicName);
        if (topicMeta == null) {
            throw new StreamValidationException("Unable to find topic information for topic " + topicName);
        }
        kafkaSpec = new KafkaStreamSpec(spec.getId(), topicName, systemName, spec.getPartitionCount(), topicMeta.getReplicationFactor(), topicMeta.getKafkaProperties());
    } else if (spec.isCoordinatorStream()) {
        kafkaSpec = new KafkaStreamSpec(spec.getId(), spec.getPhysicalName(), systemName, 1, coordinatorStreamReplicationFactor, coordinatorStreamProperties);
    } else if (spec.isCheckpointStream()) {
        Properties checkpointTopicProperties = new Properties();
        checkpointTopicProperties.putAll(spec.getConfig());
        kafkaSpec = KafkaStreamSpec
            .fromSpec(StreamSpec.createCheckpointStreamSpec(spec.getPhysicalName(), spec.getSystemName()))
            .copyWithReplicationFactor(Integer.parseInt(new KafkaConfig(config).getCheckpointReplicationFactor().get()))
            .copyWithProperties(checkpointTopicProperties);
    } else if (intermediateStreamProperties.containsKey(spec.getId())) {
        kafkaSpec = KafkaStreamSpec.fromSpec(spec);
        Properties properties = kafkaSpec.getProperties();
        properties.putAll(intermediateStreamProperties.get(spec.getId()));
        kafkaSpec = kafkaSpec.copyWithProperties(properties);
    } else {
        kafkaSpec = KafkaStreamSpec.fromSpec(spec);
        // we check if there is a system-level rf config specified, else we use KafkaConfig.topic-default-rf
        int replicationFactorFromSystemConfig = Integer.valueOf(new KafkaConfig(config).getSystemDefaultReplicationFactor(spec.getSystemName(), KafkaConfig.TOPIC_DEFAULT_REPLICATION_FACTOR()));
        LOG.info("Using replication-factor: {} for StreamSpec: {}", replicationFactorFromSystemConfig, spec);
        return new KafkaStreamSpec(kafkaSpec.getId(), kafkaSpec.getPhysicalName(), kafkaSpec.getSystemName(), kafkaSpec.getPartitionCount(), replicationFactorFromSystemConfig, kafkaSpec.getProperties());
    }
    return kafkaSpec;
}
Also used: Properties (java.util.Properties), StreamValidationException (org.apache.samza.system.StreamValidationException), KafkaConfig (org.apache.samza.config.KafkaConfig)
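The final else branch above resolves a replication factor from system-level config, falling back to a topic-level default. A minimal standalone sketch of that lookup, where the `systems.<system>.default.replication.factor` key layout and the helper class are hypothetical stand-ins for KafkaConfig's getSystemDefaultReplicationFactor:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not the real KafkaConfig API): look up a per-system
// replication factor, falling back to a caller-supplied topic default.
public class ReplicationFactorSketch {

    static int systemDefaultReplicationFactor(Map<String, String> config, String systemName, String defaultRf) {
        // hypothetical key layout modeled on system-scoped Samza config keys
        String value = config.getOrDefault("systems." + systemName + ".default.replication.factor", defaultRf);
        return Integer.parseInt(value);
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        // no system-level override present: the topic default wins
        System.out.println(systemDefaultReplicationFactor(config, "kafka", "2")); // 2
        config.put("systems.kafka.default.replication.factor", "3");
        System.out.println(systemDefaultReplicationFactor(config, "kafka", "2")); // 3
    }
}
```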

Example 2 with KafkaConfig

Use of org.apache.samza.config.KafkaConfig in project samza by apache.

From class TestKafkaCheckpointManagerFactory, method testGetCheckpointTopicProperties:

@Test
public void testGetCheckpointTopicProperties() {
    Map<String, String> config = new HashMap<>();
    Properties properties = new KafkaConfig(new MapConfig(config)).getCheckpointTopicProperties();
    assertEquals(properties.getProperty("cleanup.policy"), "compact");
    assertEquals(properties.getProperty("segment.bytes"), String.valueOf(KafkaConfig.DEFAULT_CHECKPOINT_SEGMENT_BYTES()));
    config.put(ApplicationConfig.APP_MODE, ApplicationConfig.ApplicationMode.BATCH.name());
    properties = new KafkaConfig(new MapConfig(config)).getCheckpointTopicProperties();
    assertEquals(properties.getProperty("cleanup.policy"), "compact,delete");
    assertEquals(properties.getProperty("segment.bytes"), String.valueOf(KafkaConfig.DEFAULT_CHECKPOINT_SEGMENT_BYTES()));
    assertEquals(properties.getProperty("retention.ms"), String.valueOf(KafkaConfig.DEFAULT_RETENTION_MS_FOR_BATCH()));
}
Also used: HashMap (java.util.HashMap), MapConfig (org.apache.samza.config.MapConfig), Properties (java.util.Properties), KafkaConfig (org.apache.samza.config.KafkaConfig), Test (org.junit.Test)
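The batch-mode switch this test exercises can be sketched as plain logic: batch jobs extend the checkpoint topic's cleanup policy with "delete", while streaming jobs keep compaction only. The helper class and the "app.mode" key here are hypothetical stand-ins for getCheckpointTopicProperties and ApplicationConfig.APP_MODE; only the branching mirrors the asserted behavior.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical helper mirroring the behavior asserted in the test above:
// streaming jobs get a compact-only checkpoint topic, batch jobs also
// allow segment deletion.
public class CheckpointPropsSketch {

    static Properties checkpointTopicProperties(Map<String, String> config) {
        Properties props = new Properties();
        boolean batch = "BATCH".equals(config.get("app.mode"));
        props.setProperty("cleanup.policy", batch ? "compact,delete" : "compact");
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        System.out.println(checkpointTopicProperties(config).getProperty("cleanup.policy")); // compact
        config.put("app.mode", "BATCH");
        System.out.println(checkpointTopicProperties(config).getProperty("cleanup.policy")); // compact,delete
    }
}
```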

Example 3 with KafkaConfig

Use of org.apache.samza.config.KafkaConfig in project samza by apache.

From class KafkaSystemConsumer, method setFetchThresholds:

private void setFetchThresholds() {
    // get the thresholds, and set defaults if not defined.
    KafkaConfig kafkaConfig = new KafkaConfig(config);
    Option<String> fetchThresholdOption = kafkaConfig.getConsumerFetchThreshold(systemName);
    long fetchThreshold = FETCH_THRESHOLD;
    if (fetchThresholdOption.isDefined()) {
        fetchThreshold = Long.parseLong(fetchThresholdOption.get());
    }
    Option<String> fetchThresholdBytesOption = kafkaConfig.getConsumerFetchThresholdBytes(systemName);
    long fetchThresholdBytes = FETCH_THRESHOLD_BYTES;
    if (fetchThresholdBytesOption.isDefined()) {
        fetchThresholdBytes = Long.parseLong(fetchThresholdBytesOption.get());
    }
    int numPartitions = topicPartitionsToOffset.size();
    if (numPartitions > 0) {
        perPartitionFetchThreshold = fetchThreshold / numPartitions;
        if (fetchThresholdBytesEnabled) {
            // currently this feature cannot be enabled, because we do not have the size of the messages available.
            // messages get double buffered, hence divide by 2
            perPartitionFetchThresholdBytes = (fetchThresholdBytes / 2) / numPartitions;
        }
    }
    LOG.info("{}: fetchThresholdBytes = {}; fetchThreshold={}; numPartitions={}, perPartitionFetchThreshold={}, perPartitionFetchThresholdBytes(0 if disabled)={}", this, fetchThresholdBytes, fetchThreshold, numPartitions, perPartitionFetchThreshold, perPartitionFetchThresholdBytes);
}
Also used: KafkaConfig (org.apache.samza.config.KafkaConfig)
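The per-partition split at the end of setFetchThresholds can be reduced to two lines of arithmetic. A minimal standalone sketch, where the class and sample budgets are hypothetical stand-ins for KafkaSystemConsumer's fields; only the division mirrors the method:

```java
// Sketch of the per-partition threshold split: the global budgets are divided
// evenly across partitions, and the byte budget is halved first because
// fetched messages are double-buffered.
public class FetchThresholdSketch {

    static long perPartitionThreshold(long fetchThreshold, int numPartitions) {
        return fetchThreshold / numPartitions;
    }

    static long perPartitionThresholdBytes(long fetchThresholdBytes, int numPartitions) {
        return (fetchThresholdBytes / 2) / numPartitions;
    }

    public static void main(String[] args) {
        // e.g. a 50,000-message budget and a 100 MB byte budget over 4 partitions
        System.out.println(perPartitionThreshold(50_000L, 4));           // 12500
        System.out.println(perPartitionThresholdBytes(100_000_000L, 4)); // 12500000
    }
}
```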

Aggregations

KafkaConfig (org.apache.samza.config.KafkaConfig): 3
Properties (java.util.Properties): 2
HashMap (java.util.HashMap): 1
MapConfig (org.apache.samza.config.MapConfig): 1
StreamValidationException (org.apache.samza.system.StreamValidationException): 1
Test (org.junit.Test): 1