
Example 11 with PlanNodeId

Use of io.prestosql.spi.plan.PlanNodeId in the project hetu-core by openlookeng.

From the class TestDetermineSemiJoinDistributionType, the method testPartitionsWhenBothTablesEqual:

@Test
public void testPartitionsWhenBothTablesEqual() {
    int aRows = 10_000;
    int bRows = 10_000;
    assertDetermineSemiJoinDistributionType()
            .setSystemProperty(JOIN_DISTRIBUTION_TYPE, JoinDistributionType.AUTOMATIC.name())
            .overrideStats("valuesA", PlanNodeStatsEstimate.builder().setOutputRowCount(aRows)
                    .addSymbolStatistics(ImmutableMap.of(new Symbol("A1"), SymbolStatsEstimate.unknown())).build())
            .overrideStats("valuesB", PlanNodeStatsEstimate.builder().setOutputRowCount(bRows)
                    .addSymbolStatistics(ImmutableMap.of(new Symbol("B1"), SymbolStatsEstimate.unknown())).build())
            .on(p -> p.semiJoin(
                    p.values(new PlanNodeId("valuesA"), aRows, p.symbol("A1", BIGINT)),
                    p.values(new PlanNodeId("valuesB"), bRows, p.symbol("B1", BIGINT)),
                    p.symbol("A1"), p.symbol("B1"), p.symbol("output"),
                    Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty()))
            .matches(semiJoin("A1", "B1", "output", Optional.of(PARTITIONED),
                    values(ImmutableMap.of("A1", 0)),
                    values(ImmutableMap.of("B1", 0))));
}
Also used : PlanBuilder.constantExpressions(io.prestosql.sql.planner.iterative.rule.test.PlanBuilder.constantExpressions) SymbolStatsEstimate(io.prestosql.cost.SymbolStatsEstimate) REPLICATED(io.prestosql.sql.planner.plan.SemiJoinNode.DistributionType.REPLICATED) PlanMatchPattern.semiJoin(io.prestosql.sql.planner.assertions.PlanMatchPattern.semiJoin) JoinDistributionType(io.prestosql.sql.analyzer.FeaturesConfig.JoinDistributionType) Test(org.testng.annotations.Test) PlanMatchPattern.values(io.prestosql.sql.planner.assertions.PlanMatchPattern.values) ImmutableList(com.google.common.collect.ImmutableList) Type(io.prestosql.spi.type.Type) RuleAssert(io.prestosql.sql.planner.iterative.rule.test.RuleAssert) BIGINT(io.prestosql.spi.type.BigintType.BIGINT) TaskCountEstimator(io.prestosql.cost.TaskCountEstimator) Symbol(io.prestosql.spi.plan.Symbol) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) AfterClass(org.testng.annotations.AfterClass) PlanNodeStatsEstimate(io.prestosql.cost.PlanNodeStatsEstimate) ImmutableMap(com.google.common.collect.ImmutableMap) BeforeClass(org.testng.annotations.BeforeClass) CostComparator(io.prestosql.cost.CostComparator) PARTITIONED(io.prestosql.sql.planner.plan.SemiJoinNode.DistributionType.PARTITIONED) VarcharType.createUnboundedVarcharType(io.prestosql.spi.type.VarcharType.createUnboundedVarcharType) Optional(java.util.Optional) JOIN_MAX_BROADCAST_TABLE_SIZE(io.prestosql.SystemSessionProperties.JOIN_MAX_BROADCAST_TABLE_SIZE) RuleTester(io.prestosql.sql.planner.iterative.rule.test.RuleTester) JOIN_DISTRIBUTION_TYPE(io.prestosql.SystemSessionProperties.JOIN_DISTRIBUTION_TYPE) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) Symbol(io.prestosql.spi.plan.Symbol) Test(org.testng.annotations.Test)

Example 12 with PlanNodeId

Use of io.prestosql.spi.plan.PlanNodeId in the project hetu-core by openlookeng.

From the class TestDetermineSemiJoinDistributionType, the method testReplicatesWhenFilterMuchSmaller:

@Test
public void testReplicatesWhenFilterMuchSmaller() {
    int aRows = 10_000;
    int bRows = 100;
    assertDetermineSemiJoinDistributionType()
            .setSystemProperty(JOIN_DISTRIBUTION_TYPE, JoinDistributionType.AUTOMATIC.name())
            .overrideStats("valuesA", PlanNodeStatsEstimate.builder().setOutputRowCount(aRows)
                    .addSymbolStatistics(ImmutableMap.of(new Symbol("A1"), SymbolStatsEstimate.unknown())).build())
            .overrideStats("valuesB", PlanNodeStatsEstimate.builder().setOutputRowCount(bRows)
                    .addSymbolStatistics(ImmutableMap.of(new Symbol("B1"), SymbolStatsEstimate.unknown())).build())
            .on(p -> p.semiJoin(
                    p.values(new PlanNodeId("valuesA"), aRows, p.symbol("A1", BIGINT)),
                    p.values(new PlanNodeId("valuesB"), bRows, p.symbol("B1", BIGINT)),
                    p.symbol("A1"), p.symbol("B1"), p.symbol("output"),
                    Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty()))
            .matches(semiJoin("A1", "B1", "output", Optional.of(REPLICATED),
                    values(ImmutableMap.of("A1", 0)),
                    values(ImmutableMap.of("B1", 0))));
}
Also used : PlanBuilder.constantExpressions(io.prestosql.sql.planner.iterative.rule.test.PlanBuilder.constantExpressions) SymbolStatsEstimate(io.prestosql.cost.SymbolStatsEstimate) REPLICATED(io.prestosql.sql.planner.plan.SemiJoinNode.DistributionType.REPLICATED) PlanMatchPattern.semiJoin(io.prestosql.sql.planner.assertions.PlanMatchPattern.semiJoin) JoinDistributionType(io.prestosql.sql.analyzer.FeaturesConfig.JoinDistributionType) Test(org.testng.annotations.Test) PlanMatchPattern.values(io.prestosql.sql.planner.assertions.PlanMatchPattern.values) ImmutableList(com.google.common.collect.ImmutableList) Type(io.prestosql.spi.type.Type) RuleAssert(io.prestosql.sql.planner.iterative.rule.test.RuleAssert) BIGINT(io.prestosql.spi.type.BigintType.BIGINT) TaskCountEstimator(io.prestosql.cost.TaskCountEstimator) Symbol(io.prestosql.spi.plan.Symbol) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) AfterClass(org.testng.annotations.AfterClass) PlanNodeStatsEstimate(io.prestosql.cost.PlanNodeStatsEstimate) ImmutableMap(com.google.common.collect.ImmutableMap) BeforeClass(org.testng.annotations.BeforeClass) CostComparator(io.prestosql.cost.CostComparator) PARTITIONED(io.prestosql.sql.planner.plan.SemiJoinNode.DistributionType.PARTITIONED) VarcharType.createUnboundedVarcharType(io.prestosql.spi.type.VarcharType.createUnboundedVarcharType) Optional(java.util.Optional) JOIN_MAX_BROADCAST_TABLE_SIZE(io.prestosql.SystemSessionProperties.JOIN_MAX_BROADCAST_TABLE_SIZE) RuleTester(io.prestosql.sql.planner.iterative.rule.test.RuleTester) JOIN_DISTRIBUTION_TYPE(io.prestosql.SystemSessionProperties.JOIN_DISTRIBUTION_TYPE) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) Symbol(io.prestosql.spi.plan.Symbol) Test(org.testng.annotations.Test)
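
The two tests above differ only in the row count of the filtering source ("valuesB"): with equal sizes the rule settles on PARTITIONED, and with a much smaller filtering side it picks REPLICATED. The actual DetermineSemiJoinDistributionType rule in hetu-core decides this by comparing estimated costs (note CostComparator and TaskCountEstimator in the imports); the sketch below only illustrates the intuition with a hypothetical size-based heuristic, so the method name and threshold values are invented for this example.

// Hedged sketch, not the real rule: broadcasting (replicating) the filtering side
// is attractive only when it is small, because every worker receives a full copy;
// otherwise repartitioning both sides is the safer default.
enum SemiJoinDistribution { PARTITIONED, REPLICATED }

static SemiJoinDistribution chooseDistributionHeuristic(long sourceRows, long filteringRows, long broadcastRowLimit)
{
    if (filteringRows <= broadcastRowLimit && filteringRows * 10 <= sourceRows) {
        return SemiJoinDistribution.REPLICATED;
    }
    return SemiJoinDistribution.PARTITIONED;
}

With a broadcast limit of, say, 1_000 rows, chooseDistributionHeuristic(10_000, 10_000, 1_000) returns PARTITIONED (matching testPartitionsWhenBothTablesEqual), while chooseDistributionHeuristic(10_000, 100, 1_000) returns REPLICATED (matching testReplicatesWhenFilterMuchSmaller).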

Example 13 with PlanNodeId

Use of io.prestosql.spi.plan.PlanNodeId in the project hetu-core by openlookeng.

From the class StatsAndCosts, the method create:

public static StatsAndCosts create(PlanNode root, StatsProvider statsProvider, CostProvider costProvider) {
    Iterable<PlanNode> planIterator = Traverser.forTree(PlanNode::getSources).depthFirstPreOrder(root);
    ImmutableMap.Builder<PlanNodeId, PlanNodeStatsEstimate> tmpStats = ImmutableMap.builder();
    ImmutableMap.Builder<PlanNodeId, PlanCostEstimate> tmpCosts = ImmutableMap.builder();
    Set<PlanNodeId> visitedPlanNodeId = new HashSet<>();
    for (PlanNode node : planIterator) {
        PlanNodeId id = node.getId();
        if (visitedPlanNodeId.contains(id)) {
            // This can happen only in the case of CTE reuse optimization.
            continue;
        }
        visitedPlanNodeId.add(id);
        tmpStats.put(id, statsProvider.getStats(node));
        tmpCosts.put(id, costProvider.getCost(node));
    }
    return new StatsAndCosts(tmpStats.build(), tmpCosts.build());
}
Also used : PlanNodeId(io.prestosql.spi.plan.PlanNodeId) PlanNode(io.prestosql.spi.plan.PlanNode) ImmutableMap(com.google.common.collect.ImmutableMap) HashSet(java.util.HashSet)
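
A note on the visited-node guard in create: when CTE reuse is in effect, the same PlanNode (and therefore the same PlanNodeId) can be reached more than once while walking the sources, and Guava's ImmutableMap.Builder rejects duplicate keys when build() is called. The fragment below is a standalone illustration of that behavior with placeholder keys; it is not code from the project.

import com.google.common.collect.ImmutableMap;

public class DuplicateKeyExample
{
    public static void main(String[] args)
    {
        ImmutableMap.Builder<String, Integer> builder = ImmutableMap.builder();
        builder.put("node-1", 1);
        builder.put("node-1", 1);
        // Throws IllegalArgumentException ("Multiple entries with same key"),
        // which is why StatsAndCosts.create skips already-visited PlanNodeIds.
        builder.build();
    }
}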

Example 14 with PlanNodeId

Use of io.prestosql.spi.plan.PlanNodeId in the project hetu-core by openlookeng.

From the class PlanNodeStatsSummarizer, the method getPlanNodeStats:

private static List<PlanNodeStats> getPlanNodeStats(TaskStats taskStats) {
    // Best effort to reconstruct the plan nodes from operators.
    // Because stats are collected separately from query execution,
    // it's possible that some or all of them are missing or out of date.
    // For example, a LIMIT clause can cause a query to finish before stats
    // are collected from the leaf stages.
    Set<PlanNodeId> planNodeIds = new HashSet<>();
    Map<PlanNodeId, Long> planNodeInputPositions = new HashMap<>();
    Map<PlanNodeId, Long> planNodeInputBytes = new HashMap<>();
    Map<PlanNodeId, Long> planNodeOutputPositions = new HashMap<>();
    Map<PlanNodeId, Long> planNodeOutputBytes = new HashMap<>();
    Map<PlanNodeId, Long> planNodeScheduledMillis = new HashMap<>();
    Map<PlanNodeId, Long> planNodeCpuMillis = new HashMap<>();
    Map<PlanNodeId, Map<String, OperatorInputStats>> operatorInputStats = new HashMap<>();
    Map<PlanNodeId, Map<String, OperatorHashCollisionsStats>> operatorHashCollisionsStats = new HashMap<>();
    Map<PlanNodeId, WindowOperatorStats> windowNodeStats = new HashMap<>();
    for (PipelineStats pipelineStats : taskStats.getPipelines()) {
        // Because stats are collected with only eventual consistency, these could be empty
        if (pipelineStats.getOperatorSummaries().isEmpty()) {
            continue;
        }
        Set<PlanNodeId> processedNodes = new HashSet<>();
        PlanNodeId inputPlanNode = pipelineStats.getOperatorSummaries().iterator().next().getPlanNodeId();
        PlanNodeId outputPlanNode = getLast(pipelineStats.getOperatorSummaries()).getPlanNodeId();
        // Gather input statistics
        for (OperatorStats operatorStats : pipelineStats.getOperatorSummaries()) {
            PlanNodeId planNodeId = operatorStats.getPlanNodeId();
            planNodeIds.add(planNodeId);
            long scheduledMillis = operatorStats.getAddInputWall().toMillis() + operatorStats.getGetOutputWall().toMillis() + operatorStats.getFinishWall().toMillis();
            planNodeScheduledMillis.merge(planNodeId, scheduledMillis, Long::sum);
            long cpuMillis = operatorStats.getAddInputCpu().toMillis() + operatorStats.getGetOutputCpu().toMillis() + operatorStats.getFinishCpu().toMillis();
            planNodeCpuMillis.merge(planNodeId, cpuMillis, Long::sum);
            // A pipeline like the hash build before a join might link to other "internal" pipelines that provide the actual input for this plan node
            if (operatorStats.getPlanNodeId().equals(inputPlanNode) && !pipelineStats.isInputPipeline()) {
                continue;
            }
            if (processedNodes.contains(planNodeId)) {
                continue;
            }
            operatorInputStats.merge(planNodeId, ImmutableMap.of(operatorStats.getOperatorType(), new OperatorInputStats(operatorStats.getTotalDrivers(), operatorStats.getInputPositions(), operatorStats.getSumSquaredInputPositions())), (map1, map2) -> mergeMaps(map1, map2, OperatorInputStats::merge));
            if (operatorStats.getInfo() instanceof HashCollisionsInfo) {
                HashCollisionsInfo hashCollisionsInfo = (HashCollisionsInfo) operatorStats.getInfo();
                operatorHashCollisionsStats.merge(planNodeId, ImmutableMap.of(operatorStats.getOperatorType(), new OperatorHashCollisionsStats(hashCollisionsInfo.getWeightedHashCollisions(), hashCollisionsInfo.getWeightedSumSquaredHashCollisions(), hashCollisionsInfo.getWeightedExpectedHashCollisions())), (map1, map2) -> mergeMaps(map1, map2, OperatorHashCollisionsStats::merge));
            }
            // The only statistics we have for Window Functions are very low level, thus displayed only in VERBOSE mode
            if (operatorStats.getInfo() instanceof WindowInfo) {
                WindowInfo windowInfo = (WindowInfo) operatorStats.getInfo();
                windowNodeStats.merge(planNodeId, WindowOperatorStats.create(windowInfo), (left, right) -> left.mergeWith(right));
            }
            planNodeInputPositions.merge(planNodeId, operatorStats.getInputPositions(), Long::sum);
            planNodeInputBytes.merge(planNodeId, operatorStats.getInputDataSize().toBytes(), Long::sum);
            processedNodes.add(planNodeId);
        }
        // Gather output statistics
        processedNodes.clear();
        for (OperatorStats operatorStats : reverse(pipelineStats.getOperatorSummaries())) {
            PlanNodeId planNodeId = operatorStats.getPlanNodeId();
            // An "internal" pipeline like a hash build, links to another pipeline which is the actual output for this plan node
            if (operatorStats.getPlanNodeId().equals(outputPlanNode) && !pipelineStats.isOutputPipeline()) {
                continue;
            }
            if (processedNodes.contains(planNodeId)) {
                continue;
            }
            planNodeOutputPositions.merge(planNodeId, operatorStats.getOutputPositions(), Long::sum);
            planNodeOutputBytes.merge(planNodeId, operatorStats.getOutputDataSize().toBytes(), Long::sum);
            processedNodes.add(planNodeId);
        }
    }
    List<PlanNodeStats> stats = new ArrayList<>();
    for (PlanNodeId planNodeId : planNodeIds) {
        if (!planNodeInputPositions.containsKey(planNodeId)) {
            continue;
        }
        PlanNodeStats nodeStats;
        // It's possible there will be no output stats because all the pipelines that we observed were non-output.
        // For example, in a query like SELECT * FROM a JOIN b ON c = d LIMIT 1,
        // it's possible to observe stats after the build starts but before the probe does,
        // and therefore to have only scheduled time but no output stats.
        long outputPositions = planNodeOutputPositions.getOrDefault(planNodeId, 0L);
        if (operatorHashCollisionsStats.containsKey(planNodeId)) {
            nodeStats = new HashCollisionPlanNodeStats(planNodeId, new Duration(planNodeScheduledMillis.get(planNodeId), MILLISECONDS), new Duration(planNodeCpuMillis.get(planNodeId), MILLISECONDS), planNodeInputPositions.get(planNodeId), succinctDataSize(planNodeInputBytes.get(planNodeId), BYTE), outputPositions, succinctDataSize(planNodeOutputBytes.getOrDefault(planNodeId, 0L), BYTE), operatorInputStats.get(planNodeId), operatorHashCollisionsStats.get(planNodeId));
        } else if (windowNodeStats.containsKey(planNodeId)) {
            nodeStats = new WindowPlanNodeStats(planNodeId, new Duration(planNodeScheduledMillis.get(planNodeId), MILLISECONDS), new Duration(planNodeCpuMillis.get(planNodeId), MILLISECONDS), planNodeInputPositions.get(planNodeId), succinctDataSize(planNodeInputBytes.get(planNodeId), BYTE), outputPositions, succinctDataSize(planNodeOutputBytes.getOrDefault(planNodeId, 0L), BYTE), operatorInputStats.get(planNodeId), windowNodeStats.get(planNodeId));
        } else {
            nodeStats = new PlanNodeStats(planNodeId, new Duration(planNodeScheduledMillis.get(planNodeId), MILLISECONDS), new Duration(planNodeCpuMillis.get(planNodeId), MILLISECONDS), planNodeInputPositions.get(planNodeId), succinctDataSize(planNodeInputBytes.get(planNodeId), BYTE), outputPositions, succinctDataSize(planNodeOutputBytes.getOrDefault(planNodeId, 0L), BYTE), operatorInputStats.get(planNodeId));
        }
        stats.add(nodeStats);
    }
    return stats;
}
Also used : PipelineStats(io.prestosql.operator.PipelineStats) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) Duration(io.airlift.units.Duration) OperatorStats(io.prestosql.operator.OperatorStats) HashCollisionsInfo(io.prestosql.operator.HashCollisionsInfo) WindowInfo(io.prestosql.operator.WindowInfo) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) ImmutableMap(com.google.common.collect.ImmutableMap) HashMap(java.util.HashMap) Map(java.util.Map) HashSet(java.util.HashSet)
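
The per-node totals above all follow the same accumulation pattern: Map.merge with Long::sum adds each operator's contribution to the running total for its plan node, inserting the value when the key is not yet present. A minimal standalone illustration (placeholder keys and values, not data from the project):

import java.util.HashMap;
import java.util.Map;

public class MergeSumExample
{
    public static void main(String[] args)
    {
        Map<String, Long> inputPositionsByNode = new HashMap<>();
        // First operator seen for this node: the value is simply inserted.
        inputPositionsByNode.merge("node-7", 100L, Long::sum);
        // Second operator for the same node: Long::sum combines the old and new values.
        inputPositionsByNode.merge("node-7", 50L, Long::sum);
        System.out.println(inputPositionsByNode); // {node-7=150}
    }
}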

Example 15 with PlanNodeId

Use of io.prestosql.spi.plan.PlanNodeId in the project hetu-core by openlookeng.

From the class PlanPrinter, the method formatFragment:

private static String formatFragment(Function<TableScanNode, TableInfo> tableInfoSupplier, ValuePrinter valuePrinter, PlanFragment fragment, Optional<StageInfo> stageInfo, Optional<Map<PlanNodeId, PlanNodeStats>> planNodeStats, boolean verbose, List<PlanFragment> allFragments, Metadata metadata) {
    StringBuilder builder = new StringBuilder();
    builder.append(format("Fragment %s [%s]\n", fragment.getId(), fragment.getPartitioning()));
    if (stageInfo.isPresent()) {
        StageStats stageStats = stageInfo.get().getStageStats();
        double avgPositionsPerTask = stageInfo.get().getTasks().stream().mapToLong(task -> task.getStats().getProcessedInputPositions()).average().orElse(Double.NaN);
        double squaredDifferences = stageInfo.get().getTasks().stream().mapToDouble(task -> Math.pow(task.getStats().getProcessedInputPositions() - avgPositionsPerTask, 2)).sum();
        double sdAmongTasks = Math.sqrt(squaredDifferences / stageInfo.get().getTasks().size());
        builder.append(indentString(1)).append(format("CPU: %s, Scheduled: %s, Input: %s (%s); per task: avg.: %s std.dev.: %s, Output: %s (%s)\n", stageStats.getTotalCpuTime().convertToMostSuccinctTimeUnit(), stageStats.getTotalScheduledTime().convertToMostSuccinctTimeUnit(), formatPositions(stageStats.getProcessedInputPositions()), stageStats.getProcessedInputDataSize(), formatDouble(avgPositionsPerTask), formatDouble(sdAmongTasks), formatPositions(stageStats.getOutputPositions()), stageStats.getOutputDataSize()));
    }
    PartitioningScheme partitioningScheme = fragment.getPartitioningScheme();
    builder.append(indentString(1)).append(format("Output layout: [%s]\n", Joiner.on(", ").join(partitioningScheme.getOutputLayout())));
    boolean replicateNullsAndAny = partitioningScheme.isReplicateNullsAndAny();
    List<String> arguments = partitioningScheme.getPartitioning().getArguments().stream().map(argument -> {
        if (argument.isConstant()) {
            NullableValue constant = argument.getConstant();
            String printableValue = valuePrinter.castToVarchar(constant.getType(), constant.getValue());
            return constant.getType().getDisplayName() + "(" + printableValue + ")";
        }
        return argument.getColumn().toString();
    }).collect(toImmutableList());
    builder.append(indentString(1));
    if (replicateNullsAndAny) {
        builder.append(format("Output partitioning: %s (replicate nulls and any) [%s]%s\n", partitioningScheme.getPartitioning().getHandle(), Joiner.on(", ").join(arguments), formatHash(partitioningScheme.getHashColumn())));
    } else {
        builder.append(format("Output partitioning: %s [%s]%s\n", partitioningScheme.getPartitioning().getHandle(), Joiner.on(", ").join(arguments), formatHash(partitioningScheme.getHashColumn())));
    }
    builder.append(indentString(1)).append(format("Stage Execution Strategy: %s\n", fragment.getStageExecutionDescriptor().getStageExecutionStrategy()));
    TypeProvider typeProvider = TypeProvider.copyOf(allFragments.stream().flatMap(f -> f.getSymbols().entrySet().stream()).distinct().collect(toImmutableMap(Map.Entry::getKey, Map.Entry::getValue)));
    builder.append(new PlanPrinter(fragment.getRoot(), typeProvider, Optional.of(fragment.getStageExecutionDescriptor()), tableInfoSupplier, valuePrinter, fragment.getStatsAndCosts(), planNodeStats, metadata).toText(verbose, 1)).append("\n");
    return builder.toString();
}
Also used : TableDeleteNode(io.prestosql.sql.planner.plan.TableDeleteNode) SortNode(io.prestosql.sql.planner.plan.SortNode) SubPlan(io.prestosql.sql.planner.SubPlan) LogicalRowExpressions(io.prestosql.expressions.LogicalRowExpressions) REUSE_STRATEGY_PRODUCER(io.prestosql.spi.operator.ReuseExchangeOperator.STRATEGY.REUSE_STRATEGY_PRODUCER) TypeProvider(io.prestosql.sql.planner.TypeProvider) NullableValue(io.prestosql.spi.predicate.NullableValue) PlanFragmentId(io.prestosql.sql.planner.plan.PlanFragmentId) CTEScanNode(io.prestosql.spi.plan.CTEScanNode) AggregationNode(io.prestosql.spi.plan.AggregationNode) TableUpdateNode(io.prestosql.sql.planner.plan.TableUpdateNode) Map(java.util.Map) OutputNode(io.prestosql.sql.planner.plan.OutputNode) Partitioning(io.prestosql.sql.planner.Partitioning) TopNRankingNumberNode(io.prestosql.sql.planner.plan.TopNRankingNumberNode) PlanNodeId(io.prestosql.spi.plan.PlanNodeId) TypedSymbol(io.prestosql.sql.planner.planprinter.NodeRepresentation.TypedSymbol) RowExpressionDeterminismEvaluator(io.prestosql.sql.relational.RowExpressionDeterminismEvaluator) CreateIndexNode(io.prestosql.sql.planner.plan.CreateIndexNode) SortExpressionContext(io.prestosql.sql.planner.SortExpressionContext) ImmutableList.toImmutableList(com.google.common.collect.ImmutableList.toImmutableList) TableScanNode(io.prestosql.spi.plan.TableScanNode) Set(java.util.Set) DynamicFilters(io.prestosql.sql.DynamicFilters) IndexSourceNode(io.prestosql.sql.planner.plan.IndexSourceNode) PlanNode(io.prestosql.spi.plan.PlanNode) MILLISECONDS(java.util.concurrent.TimeUnit.MILLISECONDS) ProjectNode(io.prestosql.spi.plan.ProjectNode) Metadata(io.prestosql.metadata.Metadata) Collectors.joining(java.util.stream.Collectors.joining) SymbolUtils.toSymbolReference(io.prestosql.sql.planner.SymbolUtils.toSymbolReference) SpatialJoinNode(io.prestosql.sql.planner.plan.SpatialJoinNode) ImmutableMap.toImmutableMap(com.google.common.collect.ImmutableMap.toImmutableMap) Stream(java.util.stream.Stream) Domain(io.prestosql.spi.predicate.Domain) StatisticAggregations(io.prestosql.sql.planner.plan.StatisticAggregations) ColumnStatisticMetadata(io.prestosql.spi.statistics.ColumnStatisticMetadata) StatisticsWriterNode(io.prestosql.sql.planner.plan.StatisticsWriterNode) VacuumTableNode(io.prestosql.sql.planner.plan.VacuumTableNode) DistinctLimitNode(io.prestosql.sql.planner.plan.DistinctLimitNode) Joiner(com.google.common.base.Joiner) GroupIdNode(io.prestosql.spi.plan.GroupIdNode) Iterables(com.google.common.collect.Iterables) IntersectNode(io.prestosql.spi.plan.IntersectNode) Marker(io.prestosql.spi.predicate.Marker) DynamicFilters.extractDynamicFilters(io.prestosql.sql.DynamicFilters.extractDynamicFilters) StageExecutionDescriptor(io.prestosql.operator.StageExecutionDescriptor) AssignUniqueId(io.prestosql.sql.planner.plan.AssignUniqueId) UnnestNode(io.prestosql.sql.planner.plan.UnnestNode) ArrayList(java.util.ArrayList) Lists(com.google.common.collect.Lists) Session(io.prestosql.Session) DeleteNode(io.prestosql.sql.planner.plan.DeleteNode) Functions(com.google.common.base.Functions) PlanNodeStatsEstimate(io.prestosql.cost.PlanNodeStatsEstimate) Assignments(io.prestosql.spi.plan.Assignments) ComparisonExpression(io.prestosql.sql.tree.ComparisonExpression) VariableReferenceExpression(io.prestosql.spi.relation.VariableReferenceExpression) TextRenderer.formatPositions(io.prestosql.sql.planner.planprinter.TextRenderer.formatPositions) ValuesNode(io.prestosql.spi.plan.ValuesNode) 
ColumnHandle(io.prestosql.spi.connector.ColumnHandle) SampleNode(io.prestosql.sql.planner.plan.SampleNode) WindowNode(io.prestosql.spi.plan.WindowNode) LimitNode(io.prestosql.spi.plan.LimitNode) Expression(io.prestosql.sql.tree.Expression) OffsetNode(io.prestosql.sql.planner.plan.OffsetNode) StatisticAggregationsDescriptor(io.prestosql.sql.planner.plan.StatisticAggregationsDescriptor) Scope(io.prestosql.sql.planner.plan.ExchangeNode.Scope) UpdateIndexNode(io.prestosql.sql.planner.plan.UpdateIndexNode) Duration(io.airlift.units.Duration) PlanNodeStatsSummarizer.aggregateStageStats(io.prestosql.sql.planner.planprinter.PlanNodeStatsSummarizer.aggregateStageStats) TableStatisticType(io.prestosql.spi.statistics.TableStatisticType) PartitioningScheme(io.prestosql.sql.planner.PartitioningScheme) TableFinishNode(io.prestosql.sql.planner.plan.TableFinishNode) PlanCostEstimate(io.prestosql.cost.PlanCostEstimate) ExchangeNode(io.prestosql.sql.planner.plan.ExchangeNode) Preconditions.checkArgument(com.google.common.base.Preconditions.checkArgument) FilterNode(io.prestosql.spi.plan.FilterNode) TextRenderer.indentString(io.prestosql.sql.planner.planprinter.TextRenderer.indentString) TextRenderer.formatDouble(io.prestosql.sql.planner.planprinter.TextRenderer.formatDouble) Type(io.prestosql.spi.type.Type) ApplyNode(io.prestosql.sql.planner.plan.ApplyNode) ImmutableSet(com.google.common.collect.ImmutableSet) JoinNodeUtils(io.prestosql.sql.planner.optimizations.JoinNodeUtils) Collection(java.util.Collection) Streams(com.google.common.collect.Streams) IndexJoinNode(io.prestosql.sql.planner.plan.IndexJoinNode) CubeFinishNode(io.prestosql.sql.planner.plan.CubeFinishNode) Collectors(java.util.stream.Collectors) RowNumberNode(io.prestosql.sql.planner.plan.RowNumberNode) String.format(java.lang.String.format) Preconditions.checkState(com.google.common.base.Preconditions.checkState) List(java.util.List) FunctionResolution(io.prestosql.sql.relational.FunctionResolution) EnforceSingleRowNode(io.prestosql.sql.planner.plan.EnforceSingleRowNode) TopNNode(io.prestosql.spi.plan.TopNNode) StageInfo.getAllStages(io.prestosql.execution.StageInfo.getAllStages) StageInfo(io.prestosql.execution.StageInfo) Entry(java.util.Map.Entry) UnionNode(io.prestosql.spi.plan.UnionNode) Optional(java.util.Optional) ExceptNode(io.prestosql.spi.plan.ExceptNode) Arrays.stream(java.util.Arrays.stream) StageStats(io.prestosql.execution.StageStats) InternalPlanVisitor(io.prestosql.sql.planner.plan.InternalPlanVisitor) SINGLE_DISTRIBUTION(io.prestosql.sql.planner.SystemPartitioningHandle.SINGLE_DISTRIBUTION) LateralJoinNode(io.prestosql.sql.planner.plan.LateralJoinNode) StatsAndCosts(io.prestosql.cost.StatsAndCosts) RemoteSourceNode(io.prestosql.sql.planner.plan.RemoteSourceNode) TableHandle(io.prestosql.spi.metadata.TableHandle) Function(java.util.function.Function) TRUE_LITERAL(io.prestosql.sql.tree.BooleanLiteral.TRUE_LITERAL) SemiJoinNode(io.prestosql.sql.planner.plan.SemiJoinNode) ImmutableList(com.google.common.collect.ImmutableList) OrderingScheme(io.prestosql.spi.plan.OrderingScheme) GraphvizPrinter(io.prestosql.util.GraphvizPrinter) Verify.verify(com.google.common.base.Verify.verify) Range(io.prestosql.spi.predicate.Range) Objects.requireNonNull(java.util.Objects.requireNonNull) LinkedList(java.util.LinkedList) MarkDistinctNode(io.prestosql.spi.plan.MarkDistinctNode) JoinNode(io.prestosql.spi.plan.JoinNode) Symbol(io.prestosql.spi.plan.Symbol) PlanFragment(io.prestosql.sql.planner.PlanFragment) 
TableWriterNode(io.prestosql.sql.planner.plan.TableWriterNode) StageExecutionDescriptor.ungroupedExecution(io.prestosql.operator.StageExecutionDescriptor.ungroupedExecution) REUSE_STRATEGY_CONSUMER(io.prestosql.spi.operator.ReuseExchangeOperator.STRATEGY.REUSE_STRATEGY_CONSUMER) TableInfo(io.prestosql.execution.TableInfo) CaseFormat(com.google.common.base.CaseFormat) GroupReference(io.prestosql.spi.plan.GroupReference) TupleDomain(io.prestosql.spi.predicate.TupleDomain) UpdateNode(io.prestosql.sql.planner.plan.UpdateNode) UPPER_UNDERSCORE(com.google.common.base.CaseFormat.UPPER_UNDERSCORE) Collectors.toList(java.util.stream.Collectors.toList) Aggregation(io.prestosql.spi.plan.AggregationNode.Aggregation) RowExpression(io.prestosql.spi.relation.RowExpression) SortExpressionExtractor(io.prestosql.sql.planner.SortExpressionExtractor) ExplainAnalyzeNode(io.prestosql.sql.planner.plan.ExplainAnalyzeNode) Entry(java.util.Map.Entry) PlanNodeStatsSummarizer.aggregateStageStats(io.prestosql.sql.planner.planprinter.PlanNodeStatsSummarizer.aggregateStageStats) StageStats(io.prestosql.execution.StageStats) PartitioningScheme(io.prestosql.sql.planner.PartitioningScheme) NullableValue(io.prestosql.spi.predicate.NullableValue) TypeProvider(io.prestosql.sql.planner.TypeProvider) TextRenderer.indentString(io.prestosql.sql.planner.planprinter.TextRenderer.indentString)
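
The per-task statistics at the top of formatFragment are a mean and a population standard deviation of processed input positions across the tasks of a stage. The standalone sketch below reproduces that arithmetic with a plain array standing in for the task list; the numbers are invented for the example.

import java.util.Arrays;

public class TaskPositionSpreadExample
{
    public static void main(String[] args)
    {
        long[] processedInputPositions = {900, 1_100, 1_000};
        double avg = Arrays.stream(processedInputPositions).average().orElse(Double.NaN);
        // Sum of squared differences from the mean, divide by the task count
        // (population variance), then take the square root.
        double squaredDifferences = Arrays.stream(processedInputPositions)
                .mapToDouble(p -> Math.pow(p - avg, 2))
                .sum();
        double stdDev = Math.sqrt(squaredDifferences / processedInputPositions.length);
        System.out.printf("avg: %.2f, std.dev.: %.2f%n", avg, stdDev); // avg: 1000.00, std.dev.: 81.65
    }
}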

Aggregations

PlanNodeId (io.prestosql.spi.plan.PlanNodeId): 234
Test (org.testng.annotations.Test): 157
Page (io.prestosql.spi.Page): 99
MaterializedResult (io.prestosql.testing.MaterializedResult): 67
Type (io.prestosql.spi.type.Type): 66
ImmutableList (com.google.common.collect.ImmutableList): 46
RowPagesBuilder (io.prestosql.RowPagesBuilder): 45
Symbol (io.prestosql.spi.plan.Symbol): 45
DataSize (io.airlift.units.DataSize): 39
ImmutableMap (com.google.common.collect.ImmutableMap): 37
Optional (java.util.Optional): 35
Split (io.prestosql.metadata.Split): 31
OperatorAssertion.toMaterializedResult (io.prestosql.operator.OperatorAssertion.toMaterializedResult): 29
OperatorFactory (io.prestosql.operator.OperatorFactory): 27
UUID (java.util.UUID): 25
JoinNode (io.prestosql.spi.plan.JoinNode): 24
Metadata (io.prestosql.metadata.Metadata): 23
RowExpression (io.prestosql.spi.relation.RowExpression): 23
BIGINT (io.prestosql.spi.type.BigintType.BIGINT): 22
SymbolStatsEstimate (io.prestosql.cost.SymbolStatsEstimate): 20