Search in sources:

Example 6 with PhysicalSchema

Use of io.confluent.ksql.schema.ksql.PhysicalSchema in project ksql by confluentinc.

From class TableSelectBuilder, method build.

@SuppressWarnings("unchecked")
public static <K> KTableHolder<K> build(
    final KTableHolder<K> table,
    final TableSelect<K> step,
    final RuntimeBuildContext buildContext,
    final Optional<Formats> formats,
    final MaterializedFactory materializedFactory) {
    final LogicalSchema sourceSchema = table.getSchema();
    final QueryContext queryContext = step.getProperties().getQueryContext();
    final Selection<K> selection = Selection.of(
        sourceSchema,
        step.getKeyColumnNames(),
        step.getSelectExpressions(),
        buildContext.getKsqlConfig(),
        buildContext.getFunctionRegistry());
    final SelectValueMapper<K> selectMapper = selection.getMapper();
    final ProcessingLogger logger = buildContext.getProcessingLogger(queryContext);
    final Named selectName = Named.as(StreamsUtil.buildOpName(queryContext));
    final Optional<MaterializationInfo.Builder> matBuilder = table.getMaterializationBuilder();
    // If no materialization builder exists yet, the projection itself must be materialized.
    final boolean forceMaterialize = !matBuilder.isPresent();
    final Serde<K> keySerde;
    final Serde<GenericRow> valSerde;
    if (formats.isPresent()) {
        final Formats materializationFormat = formats.get();
        // Bind the projected logical schema to the format's key/value serde features.
        final PhysicalSchema physicalSchema = PhysicalSchema.from(
            selection.getSchema(),
            materializationFormat.getKeyFeatures(),
            materializationFormat.getValueFeatures());
        keySerde = (Serde<K>) buildContext.buildKeySerde(
            materializationFormat.getKeyFormat(), physicalSchema, queryContext);
        valSerde = buildContext.buildValueSerde(
            materializationFormat.getValueFormat(), physicalSchema, queryContext);
        if (forceMaterialize) {
            final Stacker stacker = Stacker.of(step.getProperties().getQueryContext());
            final String stateStoreName =
                StreamsUtil.buildOpName(stacker.push(PROJECT_OP).getQueryContext());
            final Materialized<K, GenericRow, KeyValueStore<Bytes, byte[]>> materialized =
                materializedFactory.create(keySerde, valSerde, stateStoreName);
            final KTable<K, GenericRow> transformedTable = table.getTable().transformValues(
                () -> new KsTransformer<>(selectMapper.getTransformer(logger)),
                materialized);
            return KTableHolder.materialized(
                transformedTable,
                selection.getSchema(),
                table.getExecutionKeyFactory(),
                MaterializationInfo.builder(stateStoreName, selection.getSchema()));
        }
    } else {
        keySerde = null;
        valSerde = null;
    }
    final KTable<K, GenericRow> transformedTable = table.getTable().transformValues(
        () -> new KsTransformer<>(selectMapper.getTransformer(logger)),
        Materialized.with(keySerde, valSerde),
        selectName);
    final Optional<MaterializationInfo.Builder> materialization = matBuilder.map(b -> b.map(
        pl -> (KsqlTransformer<Object, GenericRow>) selectMapper.getTransformer(pl),
        selection.getSchema(),
        queryContext));
    return table.withTable(transformedTable, selection.getSchema())
        .withMaterialization(materialization);
}
Also used : TableSelect(io.confluent.ksql.execution.plan.TableSelect) PhysicalSchema(io.confluent.ksql.schema.ksql.PhysicalSchema) KTable(org.apache.kafka.streams.kstream.KTable) RuntimeBuildContext(io.confluent.ksql.execution.runtime.RuntimeBuildContext) KsqlTransformer(io.confluent.ksql.execution.transform.KsqlTransformer) QueryContext(io.confluent.ksql.execution.context.QueryContext) MaterializationInfo(io.confluent.ksql.execution.materialization.MaterializationInfo) Formats(io.confluent.ksql.execution.plan.Formats) LogicalSchema(io.confluent.ksql.schema.ksql.LogicalSchema) Bytes(org.apache.kafka.common.utils.Bytes) KTableHolder(io.confluent.ksql.execution.plan.KTableHolder) SelectValueMapper(io.confluent.ksql.execution.transform.select.SelectValueMapper) KsTransformer(io.confluent.ksql.execution.streams.transform.KsTransformer) GenericRow(io.confluent.ksql.GenericRow) Stacker(io.confluent.ksql.execution.context.QueryContext.Stacker) Serde(org.apache.kafka.common.serialization.Serde) Named(org.apache.kafka.streams.kstream.Named) KeyValueStore(org.apache.kafka.streams.state.KeyValueStore) ProcessingLogger(io.confluent.ksql.logging.processing.ProcessingLogger) Materialized(org.apache.kafka.streams.kstream.Materialized) Optional(java.util.Optional) Selection(io.confluent.ksql.execution.transform.select.Selection)
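
The decisive branch above is forceMaterialize: when the table has no materialization builder yet, the projection itself is materialized into a named state store, with both serdes derived from one PhysicalSchema. Below is a minimal, self-contained Kafka Streams sketch of the shape that branch produces; the topic names, store name, String serdes, and the mapValues stand-in for ksql's KsTransformer are illustrative assumptions, not ksql's actual configuration.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

public final class SelectProjectionSketch {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, String> source =
            builder.table("orders", Consumed.with(Serdes.String(), Serdes.String()));

        // A named, explicitly materialized projection: both serdes are pinned
        // on the Materialized, mirroring how the builder above derives them
        // from a single PhysicalSchema so the store layout matches the schema.
        final KTable<String, String> projected = source.mapValues(
            value -> value.toUpperCase(),
            Named.as("Project"),
            Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("Project-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));

        projected.toStream().to("projected-orders", Produced.with(Serdes.String(), Serdes.String()));
    }
}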

Example 7 with PhysicalSchema

Use of io.confluent.ksql.schema.ksql.PhysicalSchema in project ksql by confluentinc.

From class TableSuppressBuilder, method build.

@VisibleForTesting
@SuppressWarnings("unchecked")
<K> KTableHolder<K> build(
    final KTableHolder<K> table,
    final TableSuppress<K> step,
    final RuntimeBuildContext buildContext,
    final ExecutionKeyFactory<K> executionKeyFactory,
    final PhysicalSchemaFactory physicalSchemaFactory,
    final BiFunction<Serde<K>, Serde<GenericRow>, Materialized> materializedFactory) {
    final PhysicalSchema physicalSchema = physicalSchemaFactory.create(
        table.getSchema(),
        step.getInternalFormats().getKeyFeatures(),
        step.getInternalFormats().getValueFeatures());
    final QueryContext queryContext = QueryContext.Stacker
        .of(step.getProperties().getQueryContext())
        .push(SUPPRESS_OP_NAME)
        .getQueryContext();
    final Serde<K> keySerde = executionKeyFactory.buildKeySerde(
        step.getInternalFormats().getKeyFormat(), physicalSchema, queryContext);
    final Serde<GenericRow> valueSerde = buildContext.buildValueSerde(
        step.getInternalFormats().getValueFormat(), physicalSchema, queryContext);
    final Materialized<K, GenericRow, KeyValueStore<Bytes, byte[]>> materialized =
        materializedFactory.apply(keySerde, valueSerde);
    final Suppressed.StrictBufferConfig strictBufferConfig;
    final long maxBytes = buildContext.getKsqlConfig()
        .getLong(KsqlConfig.KSQL_SUPPRESS_BUFFER_SIZE_BYTES);
    if (maxBytes < 0) {
        strictBufferConfig = Suppressed.BufferConfig.unbounded();
    } else {
        strictBufferConfig = Suppressed.BufferConfig.maxBytes(maxBytes).shutDownWhenFull();
    }
    // This is a dummy transformValues() call; it ensures that a Materialized
    // with the correct key and value serdes is in place when suppress() is called.
    final KTable<K, GenericRow> suppressed = table.getTable()
        .transformValues(() -> new KsTransformer<>((k, v, ctx) -> v), materialized)
        .suppress((Suppressed<? super K>) Suppressed
            .untilWindowCloses(strictBufferConfig)
            .withName(SUPPRESS_OP_NAME));
    return table.withTable(suppressed, table.getSchema());
}
Also used : PhysicalSchema(io.confluent.ksql.schema.ksql.PhysicalSchema) KTable(org.apache.kafka.streams.kstream.KTable) TableSuppress(io.confluent.ksql.execution.plan.TableSuppress) RuntimeBuildContext(io.confluent.ksql.execution.runtime.RuntimeBuildContext) QueryContext(io.confluent.ksql.execution.context.QueryContext) BiFunction(java.util.function.BiFunction) Suppressed(org.apache.kafka.streams.kstream.Suppressed) KsqlConfig(io.confluent.ksql.util.KsqlConfig) ExecutionKeyFactory(io.confluent.ksql.execution.plan.ExecutionKeyFactory) LogicalSchema(io.confluent.ksql.schema.ksql.LogicalSchema) Bytes(org.apache.kafka.common.utils.Bytes) KTableHolder(io.confluent.ksql.execution.plan.KTableHolder) KsTransformer(io.confluent.ksql.execution.streams.transform.KsTransformer) GenericRow(io.confluent.ksql.GenericRow) Serde(org.apache.kafka.common.serialization.Serde) KeyValueStore(org.apache.kafka.streams.state.KeyValueStore) Materialized(org.apache.kafka.streams.kstream.Materialized) VisibleForTesting(com.google.common.annotations.VisibleForTesting) SerdeFeatures(io.confluent.ksql.serde.SerdeFeatures)
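
The buffer-config branch above maps a single config value onto the Kafka Streams suppression API: a negative KSQL_SUPPRESS_BUFFER_SIZE_BYTES selects an unbounded buffer, while any other value caps the buffer and shuts the application down rather than emitting early results. Below is a minimal standalone sketch of the same pattern; the topic names, serdes, window size, and the 10 MB bound are illustrative assumptions.

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

public final class SuppressSketch {

    public static void main(final String[] args) {
        final long maxBytes = 10 * 1024 * 1024L; // stand-in for KSQL_SUPPRESS_BUFFER_SIZE_BYTES

        // Same decision as the builder above: negative means unbounded, otherwise
        // bound the buffer and shut down rather than emit an early result.
        final Suppressed.StrictBufferConfig bufferConfig = maxBytes < 0
            ? Suppressed.BufferConfig.unbounded()
            : Suppressed.BufferConfig.maxBytes(maxBytes).shutDownWhenFull();

        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
            .count()
            // Emit each window's count exactly once, when the window closes.
            .suppress(Suppressed.untilWindowCloses(bufferConfig))
            .toStream((windowedKey, count) -> windowedKey.key()) // unwrap the window
            .to("final-counts", Produced.with(Serdes.String(), Serdes.Long()));
    }
}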

Example 8 with PhysicalSchema

Use of io.confluent.ksql.schema.ksql.PhysicalSchema in project ksql by confluentinc.

From class JsonFormatTest, method produceInitData.

private static void produceInitData() {
    TEST_HARNESS.produceRows(inputTopic, ORDER_DATA_PROVIDER, KAFKA, JSON);
    final LogicalSchema messageSchema = LogicalSchema.builder()
        .keyColumn(SystemColumns.ROWKEY_NAME, SqlTypes.STRING)
        .valueColumn(ColumnName.of("MESSAGE"), SqlTypes.STRING).build();
    final GenericKey messageKey = genericKey("1");
    final GenericRow messageRow = genericRow(
        "{\"log\":{\"@timestamp\":\"2017-05-30T16:44:22.175Z\",\"@version\":\"1\","
            + "\"caasVersion\":\"0.0.2\",\"cloud\":\"aws\",\"logs\":[{\"entry\":\"first\"}],\"clusterId\":\"cp99\","
            + "\"clusterName\":\"kafka\",\"cpComponentId\":\"kafka\",\"host\":\"kafka-1-wwl0p\",\"k8sId\":\"k8s13\","
            + "\"k8sName\":\"perf\",\"level\":\"ERROR\",\"logger\":\"kafka.server.ReplicaFetcherThread\","
            + "\"message\":\"Found invalid messages during fetch for partition [foo512,172] offset 0 error Record is corrupt (stored crc = 1321230880, computed crc = 1139143803)\","
            + "\"networkId\":\"vpc-d8c7a9bf\",\"region\":\"us-west-2\",\"serverId\":\"1\",\"skuId\":\"sku5\","
            + "\"source\":\"kafka\",\"tenantId\":\"t47\",\"tenantName\":\"perf-test\","
            + "\"thread\":\"ReplicaFetcherThread-0-2\",\"zone\":\"us-west-2a\"},\"stream\":\"stdout\",\"time\":2017}");
    final Map<GenericKey, GenericRow> records = new HashMap<>();
    records.put(messageKey, messageRow);
    final PhysicalSchema schema = PhysicalSchema.from(messageSchema, SerdeFeatures.of(), SerdeFeatures.of());
    TEST_HARNESS.produceRows(messageLogTopic, records.entrySet(), schema, KAFKA, JSON);
}
Also used : GenericRow(io.confluent.ksql.GenericRow) PhysicalSchema(io.confluent.ksql.schema.ksql.PhysicalSchema) HashMap(java.util.HashMap) LogicalSchema(io.confluent.ksql.schema.ksql.LogicalSchema) GenericKey(io.confluent.ksql.GenericKey)
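
The test above shows the minimal recipe for describing a topic's wire format: build a LogicalSchema, then bind key and value SerdeFeatures to obtain a PhysicalSchema. A stripped-down sketch of just that step, assuming the ksql test classpath; the PAYLOAD column name is an illustrative assumption.

// A single STRING key column and a single STRING value column.
final LogicalSchema logical = LogicalSchema.builder()
    .keyColumn(SystemColumns.ROWKEY_NAME, SqlTypes.STRING)
    .valueColumn(ColumnName.of("PAYLOAD"), SqlTypes.STRING) // illustrative name
    .build();

// SerdeFeatures.of() requests no extra serde features for key or value,
// so the physical layout is derived from the logical schema alone.
final PhysicalSchema physical =
    PhysicalSchema.from(logical, SerdeFeatures.of(), SerdeFeatures.of());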

Example 9 with PhysicalSchema

Use of io.confluent.ksql.schema.ksql.PhysicalSchema in project ksql by confluentinc.

From class JsonFormatTest, method readNormalResults.

private Map<GenericKey, GenericRow> readNormalResults(final String resultTopic, final int expectedNumMessages) {
    final DataSource source = metaStore.getSource(SourceName.of(streamName));
    final PhysicalSchema resultSchema = PhysicalSchema.from(
        source.getSchema(),
        source.getKsqlTopic().getKeyFormat().getFeatures(),
        source.getKsqlTopic().getValueFormat().getFeatures());
    return TEST_HARNESS.verifyAvailableUniqueRows(resultTopic, expectedNumMessages, KAFKA, JSON, resultSchema);
}
Also used : PhysicalSchema(io.confluent.ksql.schema.ksql.PhysicalSchema) DataSource(io.confluent.ksql.metastore.model.DataSource)

Example 10 with PhysicalSchema

Use of io.confluent.ksql.schema.ksql.PhysicalSchema in project ksql by confluentinc.

From class ReplaceIntTest, method assertForSource.

private void assertForSource(final String sourceName, final String topic, final Map<GenericKey, GenericRow> expected) {
    final DataSource source = ksqlContext.getMetaStore().getSource(SourceName.of(sourceName));
    final PhysicalSchema resultSchema = PhysicalSchema.from(
        source.getSchema(),
        source.getKsqlTopic().getKeyFormat().getFeatures(),
        source.getKsqlTopic().getValueFormat().getFeatures());
    assertThat(
        TEST_HARNESS.verifyAvailableUniqueRows(topic, expected.size(), FormatFactory.KAFKA, FormatFactory.JSON, resultSchema),
        is(expected));
}
Also used : PhysicalSchema(io.confluent.ksql.schema.ksql.PhysicalSchema) DataSource(io.confluent.ksql.metastore.model.DataSource)
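
Examples 9 and 10 derive the PhysicalSchema from a registered source in exactly the same way. A hedged sketch of that repeated derivation pulled into a helper; the accessor calls match those shown above, but the helper itself is hypothetical.

// Hypothetical helper: derives the wire-format schema of a registered
// source from its logical schema plus the key/value format features
// recorded on its backing topic.
private static PhysicalSchema physicalSchemaOf(final DataSource source) {
    return PhysicalSchema.from(
        source.getSchema(),
        source.getKsqlTopic().getKeyFormat().getFeatures(),
        source.getKsqlTopic().getValueFormat().getFeatures());
}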

Aggregations

Types co-occurring with PhysicalSchema across the indexed examples (usage counts):

PhysicalSchema (io.confluent.ksql.schema.ksql.PhysicalSchema): 34
GenericRow (io.confluent.ksql.GenericRow): 21
GenericKey (io.confluent.ksql.GenericKey): 12
LogicalSchema (io.confluent.ksql.schema.ksql.LogicalSchema): 11
QueryContext (io.confluent.ksql.execution.context.QueryContext): 9
Test (org.junit.Test): 7
Formats (io.confluent.ksql.execution.plan.Formats): 6
RuntimeBuildContext (io.confluent.ksql.execution.runtime.RuntimeBuildContext): 6
DataSource (io.confluent.ksql.metastore.model.DataSource): 6
IntegrationTest (io.confluent.common.utils.IntegrationTest): 5
ProcessingLogger (io.confluent.ksql.logging.processing.ProcessingLogger): 4
Serde (org.apache.kafka.common.serialization.Serde): 4
Windowed (org.apache.kafka.streams.kstream.Windowed): 4
KeyValueStore (org.apache.kafka.streams.state.KeyValueStore): 4
KTableHolder (io.confluent.ksql.execution.plan.KTableHolder): 3
SourceBuilderUtils.getPhysicalSchema (io.confluent.ksql.execution.streams.SourceBuilderUtils.getPhysicalSchema): 3
RestClientException (io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException): 2
KsqlSchemaAuthorizationException (io.confluent.ksql.exception.KsqlSchemaAuthorizationException): 2
KsqlTopicAuthorizationException (io.confluent.ksql.exception.KsqlTopicAuthorizationException): 2
CodeGenRunner (io.confluent.ksql.execution.codegen.CodeGenRunner): 2