
Example 1 with MaterializationException

Use of io.confluent.ksql.execution.streams.materialization.MaterializationException in the ksql project by confluentinc.

From the class KsStateStore, method store:

<T> T store(final QueryableStoreType<T> queryableStoreType, final int partition) {
    try {
        final boolean enableStaleStores = ksqlConfig.getBoolean(KsqlConfig.KSQL_QUERY_PULL_ENABLE_STANDBY_READS);
        final boolean sharedRuntime = kafkaStreams instanceof KafkaStreamsNamedTopologyWrapper;
        final StoreQueryParameters<T> parameters = sharedRuntime
            ? NamedTopologyStoreQueryParameters
                .fromNamedTopologyAndStoreNameAndType(queryId, stateStoreName, queryableStoreType)
                .withPartition(partition)
            : StoreQueryParameters
                .fromNameAndType(stateStoreName, queryableStoreType)
                .withPartition(partition);
        return enableStaleStores
            ? kafkaStreams.store(parameters.enableStaleStores())
            : kafkaStreams.store(parameters);
    } catch (final Exception e) {
        final State state = kafkaStreams.state();
        if (state != State.RUNNING) {
            throw new NotRunningException("The query was not in a running state. state: " + state);
        }
        throw new MaterializationException("State store currently unavailable: " + stateStoreName, e);
    }
}
Also used : KafkaStreamsNamedTopologyWrapper(org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper) State(org.apache.kafka.streams.KafkaStreams.State) NotRunningException(io.confluent.ksql.execution.streams.materialization.NotRunningException) MaterializationException(io.confluent.ksql.execution.streams.materialization.MaterializationException)
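The catch block above follows a classify-and-wrap pattern: if the Kafka Streams application is not in the RUNNING state, the stopped state explains the failure and a NotRunningException is thrown; otherwise the store itself is reported as unavailable, preserving the original exception as the cause. A minimal, self-contained sketch of that classification, using stand-in exception classes rather than ksqlDB's real ones:

```java
// Stand-ins for ksqlDB's NotRunningException / MaterializationException,
// so the example compiles without ksql on the classpath.
class NotRunningException extends RuntimeException {
    NotRunningException(final String msg) { super(msg); }
}

class MaterializationException extends RuntimeException {
    MaterializationException(final String msg, final Throwable cause) { super(msg, cause); }
}

public class StoreAccess {
    enum State { RUNNING, REBALANCING, ERROR }

    // Classify a store-lookup failure: a non-RUNNING app explains the error;
    // otherwise the store itself is reported as unavailable, with the original
    // exception preserved as the cause.
    static RuntimeException classify(final State state, final String storeName, final Exception cause) {
        if (state != State.RUNNING) {
            return new NotRunningException("The query was not in a running state. state: " + state);
        }
        return new MaterializationException("State store currently unavailable: " + storeName, cause);
    }
}
```

Preserving the cause matters here: callers several layers up (such as the pull-query router) log or suppress these exceptions, and the root cause would otherwise be lost.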

Example 2 with MaterializationException

Use of io.confluent.ksql.execution.streams.materialization.MaterializationException in the ksql project by confluentinc.

From the class KsMaterializedSessionTableIQv2, method get:

@Override
public KsMaterializedQueryResult<WindowedRow> get(
        final GenericKey key,
        final int partition,
        final Range<Instant> windowStart,
        final Range<Instant> windowEnd,
        final Optional<Position> position) {
    try {
        final WindowRangeQuery<GenericKey, GenericRow> query = WindowRangeQuery.withKey(key);
        StateQueryRequest<KeyValueIterator<Windowed<GenericKey>, GenericRow>> request =
            inStore(stateStore.getStateStoreName()).withQuery(query);
        if (position.isPresent()) {
            request = request.withPositionBound(PositionBound.at(position.get()));
        }
        final StateQueryResult<KeyValueIterator<Windowed<GenericKey>, GenericRow>> result =
            stateStore.getKafkaStreams().query(request);
        final QueryResult<KeyValueIterator<Windowed<GenericKey>, GenericRow>> queryResult =
            result.getPartitionResults().get(partition);
        if (queryResult.isFailure()) {
            throw failedQueryException(queryResult);
        }
        try (KeyValueIterator<Windowed<GenericKey>, GenericRow> it = queryResult.getResult()) {
            final Builder<WindowedRow> builder = ImmutableList.builder();
            while (it.hasNext()) {
                final KeyValue<Windowed<GenericKey>, GenericRow> next = it.next();
                final Window wnd = next.key.window();
                if (!windowStart.contains(wnd.startTime())) {
                    continue;
                }
                if (!windowEnd.contains(wnd.endTime())) {
                    continue;
                }
                final long rowTime = wnd.end();
                final WindowedRow row = WindowedRow.of(stateStore.schema(), next.key, next.value, rowTime);
                builder.add(row);
            }
            return KsMaterializedQueryResult.rowIteratorWithPosition(builder.build().iterator(), queryResult.getPosition());
        }
    } catch (final NotUpToBoundException | MaterializationException e) {
        throw e;
    } catch (final Exception e) {
        throw new MaterializationException("Failed to get value from materialized table", e);
    }
}
Also used : Window(org.apache.kafka.streams.kstream.Window) MaterializationException(io.confluent.ksql.execution.streams.materialization.MaterializationException) GenericRow(io.confluent.ksql.GenericRow) Windowed(org.apache.kafka.streams.kstream.Windowed) KeyValueIterator(org.apache.kafka.streams.state.KeyValueIterator) GenericKey(io.confluent.ksql.GenericKey) WindowedRow(io.confluent.ksql.execution.streams.materialization.WindowedRow)
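The iterator loop above keeps only sessions whose start and end times fall inside the requested bounds, skipping everything else. A self-contained sketch of that filter, using closed [lo, hi] bounds on plain Instants in place of Guava's Range (which also supports open bounds):

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

public class SessionFilter {
    // A session window: just its start and end instants.
    record Window(Instant start, Instant end) {}

    // Keep only windows whose start lies in [startLo, startHi] and whose
    // end lies in [endLo, endHi] -- the same checks as the two `continue`
    // branches in the loop above.
    static List<Window> filter(final List<Window> windows,
                               final Instant startLo, final Instant startHi,
                               final Instant endLo, final Instant endHi) {
        return windows.stream()
            .filter(w -> !w.start().isBefore(startLo) && !w.start().isAfter(startHi))
            .filter(w -> !w.end().isBefore(endLo) && !w.end().isAfter(endHi))
            .collect(Collectors.toList());
    }
}
```

The filtering happens client-side because WindowRangeQuery.withKey fetches all sessions for the key; the store cannot apply the time bounds itself.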

Example 3 with MaterializationException

Use of io.confluent.ksql.execution.streams.materialization.MaterializationException in the ksql project by confluentinc.

From the class KsMaterializedWindowTableIQv2, method get:

@Override
public KsMaterializedQueryResult<WindowedRow> get(
        final GenericKey key,
        final int partition,
        final Range<Instant> windowStartBounds,
        final Range<Instant> windowEndBounds,
        final Optional<Position> position) {
    try {
        final Instant lower = calculateLowerBound(windowStartBounds, windowEndBounds);
        final Instant upper = calculateUpperBound(windowStartBounds, windowEndBounds);
        final WindowKeyQuery<GenericKey, ValueAndTimestamp<GenericRow>> query =
            WindowKeyQuery.withKeyAndWindowStartRange(key, lower, upper);
        StateQueryRequest<WindowStoreIterator<ValueAndTimestamp<GenericRow>>> request =
            inStore(stateStore.getStateStoreName()).withQuery(query);
        if (position.isPresent()) {
            request = request.withPositionBound(PositionBound.at(position.get()));
        }
        final KafkaStreams streams = stateStore.getKafkaStreams();
        final StateQueryResult<WindowStoreIterator<ValueAndTimestamp<GenericRow>>> result =
            streams.query(request);
        final QueryResult<WindowStoreIterator<ValueAndTimestamp<GenericRow>>> queryResult =
            result.getPartitionResults().get(partition);
        if (queryResult.isFailure()) {
            throw failedQueryException(queryResult);
        }
        if (queryResult.getResult() == null) {
            return KsMaterializedQueryResult.rowIteratorWithPosition(Collections.emptyIterator(), queryResult.getPosition());
        }
        try (WindowStoreIterator<ValueAndTimestamp<GenericRow>> it = queryResult.getResult()) {
            final Builder<WindowedRow> builder = ImmutableList.builder();
            while (it.hasNext()) {
                final KeyValue<Long, ValueAndTimestamp<GenericRow>> next = it.next();
                final Instant windowStart = Instant.ofEpochMilli(next.key);
                if (!windowStartBounds.contains(windowStart)) {
                    continue;
                }
                final Instant windowEnd = windowStart.plus(windowSize);
                if (!windowEndBounds.contains(windowEnd)) {
                    continue;
                }
                final TimeWindow window = new TimeWindow(windowStart.toEpochMilli(), windowEnd.toEpochMilli());
                final WindowedRow row = WindowedRow.of(stateStore.schema(), new Windowed<>(key, window), next.value.value(), next.value.timestamp());
                builder.add(row);
            }
            return KsMaterializedQueryResult.rowIteratorWithPosition(builder.build().iterator(), queryResult.getPosition());
        }
    } catch (final NotUpToBoundException | MaterializationException e) {
        throw e;
    } catch (final Exception e) {
        throw new MaterializationException("Failed to get value from materialized table", e);
    }
}
Also used : WindowStoreIterator(org.apache.kafka.streams.state.WindowStoreIterator) KafkaStreams(org.apache.kafka.streams.KafkaStreams) Instant(java.time.Instant) TimeWindow(org.apache.kafka.streams.kstream.internals.TimeWindow) MaterializationException(io.confluent.ksql.execution.streams.materialization.MaterializationException) ValueAndTimestamp(org.apache.kafka.streams.state.ValueAndTimestamp) GenericRow(io.confluent.ksql.GenericRow) GenericKey(io.confluent.ksql.GenericKey) WindowedRow(io.confluent.ksql.execution.streams.materialization.WindowedRow)
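The helpers calculateLowerBound and calculateUpperBound are not shown in this excerpt. A plausible sketch, under the assumption that windows are fixed-size (so end = start + windowSize): intersect the window-start bounds with the window-end bounds shifted back by the window size, which yields the tightest range of window starts worth querying. The method names and closed-bound semantics here are assumptions, not ksqlDB's actual implementation:

```java
import java.time.Duration;
import java.time.Instant;

public class WindowBounds {
    // Tightest lower bound on window start: the start must be >= startLower,
    // and since end = start + windowSize, the end bound endLower implies
    // start >= endLower - windowSize. Take the later of the two.
    static Instant lowerBound(final Instant startLower, final Instant endLower, final Duration windowSize) {
        final Instant fromEnd = endLower.minus(windowSize);
        return startLower.isAfter(fromEnd) ? startLower : fromEnd;
    }

    // Tightest upper bound on window start: the earlier of startUpper and
    // endUpper - windowSize, by the same reasoning.
    static Instant upperBound(final Instant startUpper, final Instant endUpper, final Duration windowSize) {
        final Instant fromEnd = endUpper.minus(windowSize);
        return startUpper.isBefore(fromEnd) ? startUpper : fromEnd;
    }
}
```

Narrowing the start range up front lets WindowKeyQuery.withKeyAndWindowStartRange do most of the pruning in the store; the per-row bounds checks in the loop then only discard edge cases.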

Example 4 with MaterializationException

Use of io.confluent.ksql.execution.streams.materialization.MaterializationException in the ksql project by confluentinc.

From the class HARouting, method executeRounds:

private void executeRounds(
        final ServiceContext serviceContext,
        final PullPhysicalPlan pullPhysicalPlan,
        final ConfiguredStatement<Query> statement,
        final RoutingOptions routingOptions,
        final LogicalSchema outputSchema,
        final QueryId queryId,
        final List<KsqlPartitionLocation> locations,
        final PullQueryQueue pullQueryQueue,
        final CompletableFuture<Void> shouldCancelRequests,
        final Optional<ConsistencyOffsetVector> consistencyOffsetVector) throws InterruptedException {
    final ExecutorCompletionService<PartitionFetchResult> completionService =
        new ExecutorCompletionService<>(routerExecutorService);
    final int totalPartitions = locations.size();
    int processedPartitions = 0;
    final Map<Integer, List<Exception>> exceptionsPerPartition = new HashMap<>();
    for (final KsqlPartitionLocation partition : locations) {
        final KsqlNode node = getNodeForRound(partition, routingOptions);
        pullQueryMetrics.ifPresent(queryExecutorMetrics -> queryExecutorMetrics.recordPartitionFetchRequest(1));
        completionService.submit(() -> routeQuery.routeQuery(
            node, partition, statement, serviceContext, routingOptions, pullQueryMetrics,
            pullPhysicalPlan, outputSchema, queryId, pullQueryQueue, shouldCancelRequests,
            consistencyOffsetVector));
    }
    while (processedPartitions < totalPartitions) {
        final Future<PartitionFetchResult> future = completionService.take();
        try {
            final PartitionFetchResult fetchResult = future.get();
            if (fetchResult.isError()) {
                exceptionsPerPartition
                    .computeIfAbsent(fetchResult.location.getPartition(), v -> new ArrayList<>())
                    .add(fetchResult.exception.get());
                final KsqlPartitionLocation nextRoundPartition = nextNode(fetchResult.getLocation());
                final KsqlNode node = getNodeForRound(nextRoundPartition, routingOptions);
                pullQueryMetrics.ifPresent(queryExecutorMetrics -> queryExecutorMetrics.recordResubmissionRequest(1));
                completionService.submit(() -> routeQuery.routeQuery(
                    node, nextRoundPartition, statement, serviceContext, routingOptions, pullQueryMetrics,
                    pullPhysicalPlan, outputSchema, queryId, pullQueryQueue, shouldCancelRequests,
                    consistencyOffsetVector));
            } else {
                Preconditions.checkState(fetchResult.getResult() == RoutingResult.SUCCESS);
                processedPartitions++;
            }
        } catch (final Exception e) {
            final MaterializationException exception = new MaterializationException("Unable to execute pull query: " + e.getMessage());
            for (Entry<Integer, List<Exception>> entry : exceptionsPerPartition.entrySet()) {
                for (Exception excp : entry.getValue()) {
                    exception.addSuppressed(excp);
                }
            }
            throw exception;
        }
    }
    pullQueryQueue.close();
}
Also used : ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) Query(io.confluent.ksql.parser.tree.Query) StreamedRow(io.confluent.ksql.rest.entity.StreamedRow) MaterializationException(io.confluent.ksql.execution.streams.materialization.MaterializationException) RoutingFilterFactory(io.confluent.ksql.execution.streams.RoutingFilter.RoutingFilterFactory) BiFunction(java.util.function.BiFunction) ServiceContext(io.confluent.ksql.services.ServiceContext) LoggerFactory(org.slf4j.LoggerFactory) RoutingOptions(io.confluent.ksql.execution.streams.RoutingOptions) KsqlNode(io.confluent.ksql.execution.streams.materialization.Locator.KsqlNode) Header(io.confluent.ksql.rest.entity.StreamedRow.Header) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) RestResponse(io.confluent.ksql.rest.client.RestResponse) AtomicReference(java.util.concurrent.atomic.AtomicReference) ArrayList(java.util.ArrayList) Future(java.util.concurrent.Future) ImmutableList(com.google.common.collect.ImmutableList) NotUpToBoundException(io.confluent.ksql.execution.streams.materialization.ks.NotUpToBoundException) Host(io.confluent.ksql.execution.streams.RoutingFilter.Host) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Map(java.util.Map) QueryId(io.confluent.ksql.query.QueryId) ExecutorService(java.util.concurrent.ExecutorService) KsqlRequestConfig(io.confluent.ksql.util.KsqlRequestConfig) Logger(org.slf4j.Logger) ImmutableMap(com.google.common.collect.ImmutableMap) ConfiguredStatement(io.confluent.ksql.statement.ConfiguredStatement) KsqlConfig(io.confluent.ksql.util.KsqlConfig) LogicalSchema(io.confluent.ksql.schema.ksql.LogicalSchema) Collectors(java.util.stream.Collectors) Executors(java.util.concurrent.Executors) Objects(java.util.Objects) Consumer(java.util.function.Consumer) List(java.util.List) PullQueryExecutorMetrics(io.confluent.ksql.internal.PullQueryExecutorMetrics) ConsistencyOffsetVector(io.confluent.ksql.util.ConsistencyOffsetVector) Entry(java.util.Map.Entry) KsqlException(io.confluent.ksql.util.KsqlException) Optional(java.util.Optional) Preconditions(com.google.common.base.Preconditions) VisibleForTesting(com.google.common.annotations.VisibleForTesting) KsqlPartitionLocation(io.confluent.ksql.execution.streams.materialization.Locator.KsqlPartitionLocation) PullPhysicalPlanType(io.confluent.ksql.physical.pull.PullPhysicalPlan.PullPhysicalPlanType) PullQueryQueue(io.confluent.ksql.query.PullQueryQueue) ExecutorCompletionService(java.util.concurrent.ExecutorCompletionService)
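The core of executeRounds is an ExecutorCompletionService loop: one task per partition is submitted, completed tasks are consumed in completion order, and a failed partition is resubmitted (routed to a different node in the real code) until every partition succeeds. A generic, self-contained sketch of that loop, with the routing replaced by a deterministic fail-once stub so the behavior is checkable:

```java
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RetryRounds {
    // Submit one task per partition; on failure, resubmit that partition.
    // Returns the total number of attempts across all partitions.
    static int processAll(final int partitions) {
        final ExecutorService pool = Executors.newFixedThreadPool(4);
        final ExecutorCompletionService<int[]> cs = new ExecutorCompletionService<>(pool);
        final boolean[] failedOnce = new boolean[partitions];
        for (int p = 0; p < partitions; p++) {
            final int part = p;
            cs.submit(() -> attempt(part, failedOnce));
        }
        int done = 0;
        int attempts = 0;
        try {
            while (done < partitions) {
                final int[] r = cs.take().get(); // {partition, successFlag}
                attempts++;
                if (r[1] == 0) {
                    final int part = r[0];
                    cs.submit(() -> attempt(part, failedOnce)); // retry the failed partition
                } else {
                    done++;
                }
            }
        } catch (final Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return attempts;
    }

    // Stub for routeQuery: every partition fails on its first attempt,
    // then succeeds, to keep the example deterministic.
    private static int[] attempt(final int partition, final boolean[] failedOnce) {
        synchronized (failedOnce) {
            if (!failedOnce[partition]) {
                failedOnce[partition] = true;
                return new int[] {partition, 0};
            }
        }
        return new int[] {partition, 1};
    }
}
```

The completion service lets slow partitions overlap with retries of failed ones; the loop terminates on a count of successes rather than on queue emptiness, since retries keep adding work.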

Example 5 with MaterializationException

Use of io.confluent.ksql.execution.streams.materialization.MaterializationException in the ksql project by confluentinc.

From the class HARouting, method handlePullQuery:

public CompletableFuture<Void> handlePullQuery(
        final ServiceContext serviceContext,
        final PullPhysicalPlan pullPhysicalPlan,
        final ConfiguredStatement<Query> statement,
        final RoutingOptions routingOptions,
        final LogicalSchema outputSchema,
        final QueryId queryId,
        final PullQueryQueue pullQueryQueue,
        final CompletableFuture<Void> shouldCancelRequests,
        final Optional<ConsistencyOffsetVector> consistencyOffsetVector) {
    final List<KsqlPartitionLocation> allLocations = pullPhysicalPlan.getMaterialization().locator().locate(
        pullPhysicalPlan.getKeys(),
        routingOptions,
        routingFilterFactory,
        pullPhysicalPlan.getPlanType() == PullPhysicalPlanType.RANGE_SCAN);
    final Map<Integer, List<Host>> emptyPartitions = allLocations.stream()
        .filter(loc -> loc.getNodes().stream().noneMatch(node -> node.getHost().isSelected()))
        .collect(Collectors.toMap(
            KsqlPartitionLocation::getPartition,
            loc -> loc.getNodes().stream().map(KsqlNode::getHost).collect(Collectors.toList())));
    if (!emptyPartitions.isEmpty()) {
        final MaterializationException materializationException = new MaterializationException(
            "Unable to execute pull query. " + emptyPartitions.entrySet().stream()
                .map(kv -> String.format(
                    "Partition %s failed to find valid host. Hosts scanned: %s", kv.getKey(), kv.getValue()))
                .collect(Collectors.joining(", ", "[", "]")));
        LOG.debug(materializationException.getMessage());
        throw materializationException;
    }
    // at this point we should filter out the hosts that we should not route to
    final List<KsqlPartitionLocation> locations = allLocations.stream().map(KsqlPartitionLocation::removeFilteredHosts).collect(Collectors.toList());
    final CompletableFuture<Void> completableFuture = new CompletableFuture<>();
    coordinatorExecutorService.submit(() -> {
        try {
            executeRounds(serviceContext, pullPhysicalPlan, statement, routingOptions, outputSchema, queryId, locations, pullQueryQueue, shouldCancelRequests, consistencyOffsetVector);
            completableFuture.complete(null);
        } catch (Throwable t) {
            completableFuture.completeExceptionally(t);
        }
    });
    return completableFuture;
}
Also used : AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ThreadFactoryBuilder(com.google.common.util.concurrent.ThreadFactoryBuilder) Query(io.confluent.ksql.parser.tree.Query) StreamedRow(io.confluent.ksql.rest.entity.StreamedRow) MaterializationException(io.confluent.ksql.execution.streams.materialization.MaterializationException) RoutingFilterFactory(io.confluent.ksql.execution.streams.RoutingFilter.RoutingFilterFactory) BiFunction(java.util.function.BiFunction) ServiceContext(io.confluent.ksql.services.ServiceContext) LoggerFactory(org.slf4j.LoggerFactory) RoutingOptions(io.confluent.ksql.execution.streams.RoutingOptions) KsqlNode(io.confluent.ksql.execution.streams.materialization.Locator.KsqlNode) Header(io.confluent.ksql.rest.entity.StreamedRow.Header) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) RestResponse(io.confluent.ksql.rest.client.RestResponse) AtomicReference(java.util.concurrent.atomic.AtomicReference) ArrayList(java.util.ArrayList) Future(java.util.concurrent.Future) ImmutableList(com.google.common.collect.ImmutableList) NotUpToBoundException(io.confluent.ksql.execution.streams.materialization.ks.NotUpToBoundException) Host(io.confluent.ksql.execution.streams.RoutingFilter.Host) Map(java.util.Map) QueryId(io.confluent.ksql.query.QueryId) ExecutorService(java.util.concurrent.ExecutorService) KsqlRequestConfig(io.confluent.ksql.util.KsqlRequestConfig) Logger(org.slf4j.Logger) ImmutableMap(com.google.common.collect.ImmutableMap) ConfiguredStatement(io.confluent.ksql.statement.ConfiguredStatement) KsqlConfig(io.confluent.ksql.util.KsqlConfig) LogicalSchema(io.confluent.ksql.schema.ksql.LogicalSchema) Collectors(java.util.stream.Collectors) Executors(java.util.concurrent.Executors) Objects(java.util.Objects) Consumer(java.util.function.Consumer) List(java.util.List) PullQueryExecutorMetrics(io.confluent.ksql.internal.PullQueryExecutorMetrics) ConsistencyOffsetVector(io.confluent.ksql.util.ConsistencyOffsetVector) Entry(java.util.Map.Entry) KsqlException(io.confluent.ksql.util.KsqlException) Optional(java.util.Optional) Preconditions(com.google.common.base.Preconditions) VisibleForTesting(com.google.common.annotations.VisibleForTesting) KsqlPartitionLocation(io.confluent.ksql.execution.streams.materialization.Locator.KsqlPartitionLocation) PullPhysicalPlanType(io.confluent.ksql.physical.pull.PullPhysicalPlan.PullPhysicalPlanType) PullQueryQueue(io.confluent.ksql.query.PullQueryQueue) ExecutorCompletionService(java.util.concurrent.ExecutorCompletionService)
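The fail-fast check in handlePullQuery collects every partition that has no selectable host, so the error message can name all unreachable partitions at once instead of failing on the first. A generic sketch of that stream pipeline, modeling locations as a hypothetical partition-to-(host, selected) map rather than ksql's Locator types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionCheck {
    // Return every partition where no host is selected, mapped to the hosts
    // that were scanned -- mirroring the emptyPartitions computation above.
    static Map<Integer, List<String>> emptyPartitions(
            final Map<Integer, Map<String, Boolean>> hostsByPartition) {
        return hostsByPartition.entrySet().stream()
            .filter(e -> e.getValue().values().stream().noneMatch(selected -> selected))
            .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> new ArrayList<>(e.getValue().keySet())));
    }
}
```

Reporting all dead partitions together is a deliberate choice: a pull query needs every partition, so listing the full set of failures in one MaterializationException saves repeated round trips of diagnose-and-retry.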

Aggregations

MaterializationException (io.confluent.ksql.execution.streams.materialization.MaterializationException) 13
GenericKey (io.confluent.ksql.GenericKey) 10
ImmutableList (com.google.common.collect.ImmutableList) 7
GenericRow (io.confluent.ksql.GenericRow) 7
Objects (java.util.Objects) 7
Optional (java.util.Optional) 7
ValueAndTimestamp (org.apache.kafka.streams.state.ValueAndTimestamp) 7
Streams (com.google.common.collect.Streams) 5
WindowedRow (io.confluent.ksql.execution.streams.materialization.WindowedRow) 5
IteratorUtil (io.confluent.ksql.util.IteratorUtil) 4
Collections (java.util.Collections) 4
Position (org.apache.kafka.streams.query.Position) 4
KeyValueIterator (org.apache.kafka.streams.state.KeyValueIterator) 4
VisibleForTesting (com.google.common.annotations.VisibleForTesting) 3
Preconditions (com.google.common.base.Preconditions) 3
Instant (java.time.Instant) 3
TimeWindow (org.apache.kafka.streams.kstream.internals.TimeWindow) 3
FailureReason (org.apache.kafka.streams.query.FailureReason) 3
PositionBound (org.apache.kafka.streams.query.PositionBound) 3
QueryResult (org.apache.kafka.streams.query.QueryResult) 3