Example 36 with AbstractPlanNode

Use of org.voltdb.plannodes.AbstractPlanNode in project voltdb by VoltDB.

From the class PlanAssembler, method switchToIndexScanForGroupBy.

/**
     * For a sequential scan feeding a GROUP BY, consider substituting an IndexScan
     * that pre-sorts by the GROUP BY keys.
     * If the candidate is already an index scan, simply calculate its
     * GROUP BY column coverage.
     *
     * @param candidate the scan plan node (or subtree containing one) being considered
     * @param gbInfo collects the chosen index access node and its GROUP BY column coverage
     * @return true when the planner switches from a sequential scan to an index
     *         scan that has no parent plan node, or when the candidate is already
     *         an index scan that covers all or some of the GROUP BY columns
     */
private boolean switchToIndexScanForGroupBy(AbstractPlanNode candidate, IndexGroupByInfo gbInfo) {
    if (!m_parsedSelect.isGrouped()) {
        return false;
    }
    if (candidate instanceof IndexScanPlanNode) {
        calculateIndexGroupByInfo((IndexScanPlanNode) candidate, gbInfo);
        if (gbInfo.m_coveredGroupByColumns != null && !gbInfo.m_coveredGroupByColumns.isEmpty()) {
            // The candidate index covers all or some of the GROUP BY columns,
            // so serial or partial aggregation becomes possible
            gbInfo.m_indexAccess = candidate;
            return true;
        }
        return false;
    }
    AbstractPlanNode sourceSeqScan = findSeqScanCandidateForGroupBy(candidate);
    if (sourceSeqScan == null) {
        return false;
    }
    assert (sourceSeqScan instanceof SeqScanPlanNode);
    AbstractPlanNode parent = null;
    if (sourceSeqScan.getParentCount() > 0) {
        parent = sourceSeqScan.getParent(0);
    }
    AbstractPlanNode indexAccess = indexAccessForGroupByExprs((SeqScanPlanNode) sourceSeqScan, gbInfo);
    if (indexAccess.getPlanNodeType() != PlanNodeType.INDEXSCAN) {
        // No suitable index was found to replace the sequential scan
        return false;
    }
    gbInfo.m_indexAccess = indexAccess;
    if (parent != null) {
        // The scan has a parent, so splice the index scan in as
        // a replacement child of that parent
        indexAccess.clearParents();
        // For a join node with two children, child index 0 is its outer side
        parent.replaceChild(0, indexAccess);
        return false;
    }
    // No parent: the root sequential scan itself was switched to an index scan
    return true;
}
Also used : AbstractPlanNode(org.voltdb.plannodes.AbstractPlanNode) SeqScanPlanNode(org.voltdb.plannodes.SeqScanPlanNode) IndexScanPlanNode(org.voltdb.plannodes.IndexScanPlanNode)
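
The interesting part of this example is the tree rewiring at the end: when the sequential scan has a parent, the new index scan is spliced in as child 0 of that parent instead of becoming a new root, and the method then returns false because the caller's root does not change. Below is a minimal, self-contained sketch of that splice. The tiny Node class and its methods are invented for illustration only; they are not VoltDB's AbstractPlanNode API, although the method names mirror the calls used above.

import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a plan node; NOT VoltDB's AbstractPlanNode.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    final List<Node> parents = new ArrayList<>();

    Node(String name) { this.name = name; }

    void addAndLinkChild(Node child) {
        children.add(child);
        child.parents.add(this);
    }

    void clearParents() { parents.clear(); }

    // Replace the child at the given position, mirroring the
    // parent.replaceChild(0, indexAccess) call in the example above.
    void replaceChild(int index, Node replacement) {
        Node old = children.get(index);
        old.parents.remove(this);
        children.set(index, replacement);
        replacement.parents.add(this);
    }
}

public class GroupByScanSwapSketch {
    public static void main(String[] args) {
        Node join = new Node("NESTLOOP");         // parent of the scan
        Node seqScan = new Node("SEQSCAN(T)");    // outer child, feeds the GROUP BY
        Node otherSide = new Node("SEQSCAN(S)");  // inner child, untouched
        join.addAndLinkChild(seqScan);
        join.addAndLinkChild(otherSide);

        // A hypothetical index scan chosen because its index covers the GROUP BY keys.
        Node indexScan = new Node("INDEXSCAN(T.IDX_GB)");

        // The same rewiring as the example: detach, then splice into slot 0 (the outer side).
        indexScan.clearParents();
        join.replaceChild(0, indexScan);

        System.out.println("join children: " + join.children.get(0).name
                + ", " + join.children.get(1).name);   // INDEXSCAN(T.IDX_GB), SEQSCAN(S)
        System.out.println("old scan parent count: " + seqScan.parents.size()); // 0
    }
}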

Example 37 with AbstractPlanNode

Use of org.voltdb.plannodes.AbstractPlanNode in project voltdb by VoltDB.

From the class PlanAssembler, method addCoordinatorToDMLNode.

/**
     * Add a receive node, a sum or limit node, and a send node on top of the given DML node.
     * If the DML target is a replicated table, a limit node is added;
     * otherwise a sum node is added to total the per-partition modified-tuple counts.
     *
     * @param dmlRoot the DML plan fragment to wrap
     * @param isReplicated whether or not the target table is a replicated table
     * @return the new root of the coordinator fragment (a send node)
     */
private static AbstractPlanNode addCoordinatorToDMLNode(AbstractPlanNode dmlRoot, boolean isReplicated) {
    dmlRoot = SubPlanAssembler.addSendReceivePair(dmlRoot);
    AbstractPlanNode sumOrLimitNode;
    if (isReplicated) {
        // Replicated table DML result doesn't need to be summed. All partitions should
        // modify the same number of tuples in replicated table, so just pick the result from
        // any partition.
        LimitPlanNode limitNode = new LimitPlanNode();
        sumOrLimitNode = limitNode;
        limitNode.setLimit(1);
    } else {
        // create the nodes being pushed on top of dmlRoot.
        AggregatePlanNode countNode = new AggregatePlanNode();
        sumOrLimitNode = countNode;
        // configure the count aggregate (sum) node to produce a single
        // output column containing the result of the sum.
        // Create a TVE that should match the tuple count input column
        // This TVE is "magic": it is hard-wired to the DML fragment's
        // modified_tuples output column and really ought to be made less so.
        TupleValueExpression count_tve = new TupleValueExpression(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "modified_tuples", "modified_tuples", 0);
        count_tve.setValueType(VoltType.BIGINT);
        count_tve.setValueSize(VoltType.BIGINT.getLengthInBytesForFixedTypes());
        countNode.addAggregate(ExpressionType.AGGREGATE_SUM, false, 0, count_tve);
        // The output column. Not really based on a TVE (it is really the
        // count expression represented by the count configured above). But
        // this is sufficient for now.  This looks identical to the above
        // TVE but it's logically different so we'll create a fresh one.
        TupleValueExpression tve = new TupleValueExpression(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "modified_tuples", "modified_tuples", 0);
        tve.setValueType(VoltType.BIGINT);
        tve.setValueSize(VoltType.BIGINT.getLengthInBytesForFixedTypes());
        NodeSchema count_schema = new NodeSchema();
        count_schema.addColumn(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "modified_tuples", "modified_tuples", tve);
        countNode.setOutputSchema(count_schema);
    }
    // connect the nodes to build the graph
    sumOrLimitNode.addAndLinkChild(dmlRoot);
    SendPlanNode sendNode = new SendPlanNode();
    sendNode.addAndLinkChild(sumOrLimitNode);
    return sendNode;
}
Also used : AbstractPlanNode(org.voltdb.plannodes.AbstractPlanNode) TupleValueExpression(org.voltdb.expressions.TupleValueExpression) HashAggregatePlanNode(org.voltdb.plannodes.HashAggregatePlanNode) AggregatePlanNode(org.voltdb.plannodes.AggregatePlanNode) PartialAggregatePlanNode(org.voltdb.plannodes.PartialAggregatePlanNode) SendPlanNode(org.voltdb.plannodes.SendPlanNode) LimitPlanNode(org.voltdb.plannodes.LimitPlanNode) NodeSchema(org.voltdb.plannodes.NodeSchema)
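
The split between the LIMIT and SUM branches is easiest to see numerically: for a partitioned target, each partition reports its own modified-tuple count and the coordinator must SUM them, while for a replicated target every partition applies the same DML and reports the same count, so LIMIT 1 just picks one. The sketch below is ordinary Java with made-up partition counts, not planner code.

import java.util.List;

// Illustrates why addCoordinatorToDMLNode sums for partitioned tables but
// takes LIMIT 1 for replicated tables. The per-partition counts are made up.
public class DmlCoordinatorSketch {
    // Partitioned table: each partition modified a different subset of rows,
    // so the coordinator must SUM the per-partition counts.
    static long coordinateForPartitioned(List<Long> perPartitionModified) {
        return perPartitionModified.stream().mapToLong(Long::longValue).sum();
    }

    // Replicated table: every partition applied the same DML to an identical
    // copy, so all counts agree and taking the first one (LIMIT 1) is enough.
    static long coordinateForReplicated(List<Long> perPartitionModified) {
        return perPartitionModified.get(0);
    }

    public static void main(String[] args) {
        System.out.println(coordinateForPartitioned(List.of(3L, 5L, 2L))); // 10
        System.out.println(coordinateForReplicated(List.of(7L, 7L, 7L)));  // 7
    }
}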

Example 38 with AbstractPlanNode

Use of org.voltdb.plannodes.AbstractPlanNode in project voltdb by VoltDB.

From the class SelectSubPlanAssembler, method getSelectSubPlanForJoinNode.

/**
     * Given a specific join node and access path set for inner and outer tables, construct the plan
     * that gives the right tuples.
     *
     * @param joinNode the join node to build the plan for
     * @return a completed plan sub-graph that should match the correct tuples from the
     *         correct tables, or null if no such plan could be built
     */
private AbstractPlanNode getSelectSubPlanForJoinNode(JoinNode joinNode) {
    assert (joinNode != null);
    if (joinNode instanceof BranchNode) {
        BranchNode branchJoinNode = (BranchNode) joinNode;
        // Outer node
        AbstractPlanNode outerScanPlan = getSelectSubPlanForJoinNode(branchJoinNode.getLeftNode());
        if (outerScanPlan == null) {
            return null;
        }
        // Inner Node.
        AbstractPlanNode innerScanPlan = getSelectSubPlanForJoinNode(branchJoinNode.getRightNode());
        if (innerScanPlan == null) {
            return null;
        }
        // Join Node
        IndexSortablePlanNode answer = getSelectSubPlanForJoin(branchJoinNode, outerScanPlan, innerScanPlan);
        // If the branch node is an inner join, carry the outer scan's index-order info up to the join.
        if ((answer != null) && (branchJoinNode.getJoinType() == JoinType.INNER) && outerScanPlan instanceof IndexSortablePlanNode) {
            IndexUseForOrderBy indexUseForJoin = answer.indexUse();
            IndexUseForOrderBy indexUseFromScan = ((IndexSortablePlanNode) outerScanPlan).indexUse();
            indexUseForJoin.setWindowFunctionUsesIndex(indexUseFromScan.getWindowFunctionUsesIndex());
            indexUseForJoin.setWindowFunctionIsCompatibleWithOrderBy(indexUseFromScan.isWindowFunctionCompatibleWithOrderBy());
            indexUseForJoin.setFinalExpressionOrderFromIndexScan(indexUseFromScan.getFinalExpressionOrderFromIndexScan());
            indexUseForJoin.setSortOrderFromIndexScan(indexUseFromScan.getSortOrderFromIndexScan());
        }
        if (answer == null) {
            return null;
        }
        return answer.planNode();
    }
    // End of recursion
    AbstractPlanNode scanNode = getAccessPlanForTable(joinNode);
    // Connect the sub-query tree if any
    if (joinNode instanceof SubqueryLeafNode) {
        StmtSubqueryScan tableScan = ((SubqueryLeafNode) joinNode).getSubqueryScan();
        CompiledPlan subQueryPlan = tableScan.getBestCostPlan();
        assert (subQueryPlan != null);
        assert (subQueryPlan.rootPlanGraph != null);
        // The subquery's best-cost plan needs to be unlinked from any previous parent plan:
        // the same child plan gets re-attached to many candidate parents, one at a time
        subQueryPlan.rootPlanGraph.disconnectParents();
        scanNode.addAndLinkChild(subQueryPlan.rootPlanGraph);
    }
    return scanNode;
}
Also used : AbstractPlanNode(org.voltdb.plannodes.AbstractPlanNode) IndexSortablePlanNode(org.voltdb.plannodes.IndexSortablePlanNode) StmtSubqueryScan(org.voltdb.planner.parseinfo.StmtSubqueryScan) BranchNode(org.voltdb.planner.parseinfo.BranchNode) SubqueryLeafNode(org.voltdb.planner.parseinfo.SubqueryLeafNode) IndexUseForOrderBy(org.voltdb.plannodes.IndexUseForOrderBy)
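
The method is a post-order recursion over the join tree: build the outer sub-plan, then the inner sub-plan, bail out with null if either fails, and only then combine them into a join plan. The sketch below shows just that control flow; the JoinTree, Leaf, and Branch types are invented stand-ins (not VoltDB's JoinNode hierarchy) and the code assumes Java 16+ for records.

// Minimal sketch of the recursion pattern in getSelectSubPlanForJoinNode.
public class JoinTreeRecursionSketch {

    interface JoinTree {}
    record Leaf(String table) implements JoinTree {}
    record Branch(JoinTree left, JoinTree right) implements JoinTree {}

    // A "plan" here is just a string describing the sub-graph.
    static String buildPlan(JoinTree node) {
        if (node instanceof Branch b) {
            String outer = buildPlan(b.left());   // outer side first
            if (outer == null) {
                return null;                      // no plan for the outer side
            }
            String inner = buildPlan(b.right());  // then the inner side
            if (inner == null) {
                return null;                      // no plan for the inner side
            }
            return "NESTLOOP(" + outer + ", " + inner + ")";
        }
        // End of recursion: an access plan for a single table, or null if none exists.
        Leaf leaf = (Leaf) node;
        return leaf.table().startsWith("NO_ACCESS") ? null : "SCAN(" + leaf.table() + ")";
    }

    public static void main(String[] args) {
        JoinTree tree = new Branch(new Branch(new Leaf("R1"), new Leaf("R2")), new Leaf("P1"));
        System.out.println(buildPlan(tree)); // NESTLOOP(NESTLOOP(SCAN(R1), SCAN(R2)), SCAN(P1))
        System.out.println(buildPlan(new Branch(new Leaf("NO_ACCESS_T"), new Leaf("R2")))); // null
    }
}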

Example 39 with AbstractPlanNode

Use of org.voltdb.plannodes.AbstractPlanNode in project voltdb by VoltDB.

From the class PlanAssembler, method getNextSelectPlan.

private CompiledPlan getNextSelectPlan() {
    assert (m_subAssembler != null);
    // A matview reaggregation template plan may have been initialized
    // with a post-predicate expression moved from the statement's
    // join tree prior to any subquery planning.
    // Since normally subquery planning is driven from the join tree,
    // any subqueries that are moved out of the join tree would need
    // to be planned separately.
    // This planning would need to be done prior to calling
    // m_subAssembler.nextPlan()
    // because it can have query partitioning implications.
    // Under the current query limitations, the partitioning implications
    // are very simple -- subqueries are not allowed in multipartition
    // queries against partitioned data, so detection of a subquery in
    // the same query as a matview reaggregation can just return an error,
    // without any need for subquery planning here.
    HashAggregatePlanNode reAggNode = null;
    HashAggregatePlanNode mvReAggTemplate = m_parsedSelect.m_mvFixInfo.getReAggregationPlanNode();
    if (mvReAggTemplate != null) {
        reAggNode = new HashAggregatePlanNode(mvReAggTemplate);
        AbstractExpression postPredicate = reAggNode.getPostPredicate();
        if (postPredicate != null && postPredicate.hasSubquerySubexpression()) {
            // For now, this is just a special case violation of the limitation on
            // use of subquery expressions in MP queries on partitioned data.
            // That special case was going undetected when we didn't flag it here.
            m_recentErrorMsg = IN_EXISTS_SCALAR_ERROR_MESSAGE;
            return null;
        }
    // // Something more along these lines would have to be enabled
    // // to allow expression subqueries to be used in multi-partition
    // // matview queries.
    // if (!getBestCostPlanForExpressionSubQueries(subqueryExprs)) {
    //     // There was at least one sub-query and we should have a compiled plan for it
    //    return null;
    // }
    }
    AbstractPlanNode subSelectRoot = m_subAssembler.nextPlan();
    if (subSelectRoot == null) {
        m_recentErrorMsg = m_subAssembler.m_recentErrorMsg;
        return null;
    }
    AbstractPlanNode root = subSelectRoot;
    boolean mvFixNeedsProjection = false;
    /*
         * If the access plan for the table in the join order was for a
         * distributed table scan there must be a send/receive pair at the top
         * EXCEPT for the special outer join case in which a replicated table
         * was on the OUTER side of an outer join across from the (joined) scan
         * of the partitioned table(s) (all of them) in the query. In that case,
         * the one required send/receive pair is already in the plan below the
         * inner side of a NestLoop join.
         */
    if (m_partitioning.requiresTwoFragments()) {
        boolean mvFixInfoCoordinatorNeeded = true;
        boolean mvFixInfoEdgeCaseOuterJoin = false;
        ArrayList<AbstractPlanNode> receivers = root.findAllNodesOfClass(AbstractReceivePlanNode.class);
        if (receivers.size() == 1) {
            // Edge cases: left outer join with replicated table.
            if (m_parsedSelect.m_mvFixInfo.needed()) {
                mvFixInfoCoordinatorNeeded = false;
                AbstractPlanNode receiveNode = receivers.get(0);
                if (receiveNode.getParent(0) instanceof NestLoopPlanNode) {
                    if (subSelectRoot.hasInlinedIndexScanOfTable(m_parsedSelect.m_mvFixInfo.getMVTableName())) {
                        return getNextSelectPlan();
                    }
                    List<AbstractPlanNode> nljs = receiveNode.findAllNodesOfType(PlanNodeType.NESTLOOP);
                    List<AbstractPlanNode> nlijs = receiveNode.findAllNodesOfType(PlanNodeType.NESTLOOPINDEX);
                    // This is like a single table case.
                    if (nljs.size() + nlijs.size() == 0) {
                        mvFixInfoEdgeCaseOuterJoin = true;
                    }
                    root = handleMVBasedMultiPartQuery(reAggNode, root, mvFixInfoEdgeCaseOuterJoin);
                }
            }
        } else {
            if (receivers.size() > 0) {
                throw new PlanningErrorException("This special case join between an outer replicated table and " + "an inner partitioned table is too complex and is not supported.");
            }
            root = SubPlanAssembler.addSendReceivePair(root);
            // Root is a receive node here.
            assert (root instanceof ReceivePlanNode);
            if (m_parsedSelect.mayNeedAvgPushdown()) {
                m_parsedSelect.switchOptimalSuiteForAvgPushdown();
            }
            if (m_parsedSelect.m_tableList.size() > 1 && m_parsedSelect.m_mvFixInfo.needed() && subSelectRoot.hasInlinedIndexScanOfTable(m_parsedSelect.m_mvFixInfo.getMVTableName())) {
                // The MV fix cannot be applied when the view table's scan is inlined in a nested-loop index join, so try the next plan.
                return getNextSelectPlan();
            }
        }
        root = handleAggregationOperators(root);
        // Process the re-aggregate plan node and insert it into the plan.
        if (m_parsedSelect.m_mvFixInfo.needed() && mvFixInfoCoordinatorNeeded) {
            AbstractPlanNode tmpRoot = root;
            root = handleMVBasedMultiPartQuery(reAggNode, root, mvFixInfoEdgeCaseOuterJoin);
            if (root != tmpRoot) {
                mvFixNeedsProjection = true;
            }
        }
    } else {
        /*
             * There is no receive node and root is a single partition plan.
             */
        // If there is no receive plan node and no distributed plan has been generated,
        // the fix set for MV is not needed.
        m_parsedSelect.m_mvFixInfo.setNeeded(false);
        root = handleAggregationOperators(root);
    }
    // If the statement has window function expressions, handle the windowed operators (adding a PartitionByPlanNode) here.
    if (m_parsedSelect.hasWindowFunctionExpression()) {
        root = handleWindowedOperators(root);
    }
    if (m_parsedSelect.hasOrderByColumns()) {
        root = handleOrderBy(m_parsedSelect, root);
        if (m_parsedSelect.isComplexOrderBy() && root instanceof OrderByPlanNode) {
            AbstractPlanNode child = root.getChild(0);
            AbstractPlanNode grandChild = child.getChild(0);
            // swap the ORDER BY and complex aggregate Projection node
            if (child instanceof ProjectionPlanNode) {
                root.unlinkChild(child);
                child.unlinkChild(grandChild);
                child.addAndLinkChild(root);
                root.addAndLinkChild(grandChild);
                // update the new root
                root = child;
            } else if (m_parsedSelect.hasDistinctWithGroupBy() && child.getPlanNodeType() == PlanNodeType.HASHAGGREGATE && grandChild.getPlanNodeType() == PlanNodeType.PROJECTION) {
                AbstractPlanNode grandGrandChild = grandChild.getChild(0);
                child.clearParents();
                root.clearChildren();
                grandGrandChild.clearParents();
                grandChild.clearChildren();
                grandChild.addAndLinkChild(root);
                root.addAndLinkChild(grandGrandChild);
                root = child;
            }
        }
    }
    // Add a projection node here if the plan still needs one.
    if (mvFixNeedsProjection || needProjectionNode(root)) {
        root = addProjection(root);
    }
    if (m_parsedSelect.hasLimitOrOffset()) {
        root = handleSelectLimitOperator(root);
    }
    CompiledPlan plan = new CompiledPlan();
    plan.rootPlanGraph = root;
    plan.setReadOnly(true);
    boolean orderIsDeterministic = m_parsedSelect.isOrderDeterministic();
    boolean hasLimitOrOffset = m_parsedSelect.hasLimitOrOffset();
    String contentDeterminismMessage = m_parsedSelect.getContentDeterminismMessage();
    plan.statementGuaranteesDeterminism(hasLimitOrOffset, orderIsDeterministic, contentDeterminismMessage);
    // Apply the micro-optimization:
    // LIMIT push down, Table count / Counting Index, Optimized Min/Max
    MicroOptimizationRunner.applyAll(plan, m_parsedSelect);
    return plan;
}
Also used : AbstractPlanNode(org.voltdb.plannodes.AbstractPlanNode) OrderByPlanNode(org.voltdb.plannodes.OrderByPlanNode) AbstractReceivePlanNode(org.voltdb.plannodes.AbstractReceivePlanNode) MergeReceivePlanNode(org.voltdb.plannodes.MergeReceivePlanNode) ReceivePlanNode(org.voltdb.plannodes.ReceivePlanNode) HashAggregatePlanNode(org.voltdb.plannodes.HashAggregatePlanNode) NestLoopPlanNode(org.voltdb.plannodes.NestLoopPlanNode) AbstractExpression(org.voltdb.expressions.AbstractExpression) ProjectionPlanNode(org.voltdb.plannodes.ProjectionPlanNode)
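
One step worth isolating is the complex ORDER BY case: when the root is an OrderByPlanNode sitting directly on a ProjectionPlanNode, the two are swapped so the projection ends up above the sort. The sketch below reproduces just that unlink/relink sequence with a toy PNode class; it is not VoltDB's plan-node API, although the call names mirror the ones in the example.

import java.util.ArrayList;
import java.util.List;

// Sketch of the ORDER BY / projection swap near the end of getNextSelectPlan:
// ORDERBY -> PROJECTION -> child becomes PROJECTION -> ORDERBY -> child.
public class OrderByProjectionSwapSketch {
    static class PNode {
        final String type;
        final List<PNode> children = new ArrayList<>();
        PNode(String type) { this.type = type; }
        void addAndLinkChild(PNode c) { children.add(c); }
        void unlinkChild(PNode c) { children.remove(c); }
        String describe() {
            return children.isEmpty() ? type : type + " -> " + children.get(0).describe();
        }
    }

    public static void main(String[] args) {
        PNode root = new PNode("ORDERBY");
        PNode child = new PNode("PROJECTION");
        PNode grandChild = new PNode("SEQSCAN");
        root.addAndLinkChild(child);
        child.addAndLinkChild(grandChild);
        System.out.println(root.describe());    // ORDERBY -> PROJECTION -> SEQSCAN

        // The same unlink/relink dance as the example above.
        root.unlinkChild(child);
        child.unlinkChild(grandChild);
        child.addAndLinkChild(root);
        root.addAndLinkChild(grandChild);
        PNode newRoot = child;                  // update the new root
        System.out.println(newRoot.describe()); // PROJECTION -> ORDERBY -> SEQSCAN
    }
}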

Example 40 with AbstractPlanNode

Use of org.voltdb.plannodes.AbstractPlanNode in project voltdb by VoltDB.

From the class PlanAssembler, method handleAggregationOperators.

private AbstractPlanNode handleAggregationOperators(AbstractPlanNode root) {
    /*
         * "Select A from T group by A" is grouped but has no aggregate operator
         * expressions. Catch that case by checking the grouped flag
         */
    if (m_parsedSelect.hasAggregateOrGroupby()) {
        AggregatePlanNode aggNode = null;
        // topAggNode is the aggregate that runs at the top of the plan, i.e., on the coordinator
        AggregatePlanNode topAggNode = null;
        IndexGroupByInfo gbInfo = new IndexGroupByInfo();
        if (root instanceof AbstractReceivePlanNode) {
            // Only consider the index-scan switch when there is no DISTINCT aggregate,
            // or the GROUP BY includes the partition column
            if (!m_parsedSelect.hasAggregateDistinct() || m_parsedSelect.hasPartitionColumnInGroupby()) {
                AbstractPlanNode candidate = root.getChild(0).getChild(0);
                gbInfo.m_multiPartition = true;
                switchToIndexScanForGroupBy(candidate, gbInfo);
            }
        } else if (switchToIndexScanForGroupBy(root, gbInfo)) {
            root = gbInfo.m_indexAccess;
        }
        boolean needHashAgg = gbInfo.needHashAggregator(root, m_parsedSelect);
        // Construct the aggregate nodes
        if (needHashAgg) {
            if (m_parsedSelect.m_mvFixInfo.needed()) {
                // TODO: may optimize this edge case in future
                aggNode = new HashAggregatePlanNode();
            } else {
                if (gbInfo.isChangedToSerialAggregate()) {
                    assert (root instanceof ReceivePlanNode);
                    aggNode = new AggregatePlanNode();
                } else if (gbInfo.isChangedToPartialAggregate()) {
                    aggNode = new PartialAggregatePlanNode(gbInfo.m_coveredGroupByColumns);
                } else {
                    aggNode = new HashAggregatePlanNode();
                }
                topAggNode = new HashAggregatePlanNode();
            }
        } else {
            aggNode = new AggregatePlanNode();
            if (!m_parsedSelect.m_mvFixInfo.needed()) {
                topAggNode = new AggregatePlanNode();
            }
        }
        NodeSchema agg_schema = new NodeSchema();
        NodeSchema top_agg_schema = new NodeSchema();
        for (int outputColumnIndex = 0; outputColumnIndex < m_parsedSelect.m_aggResultColumns.size(); outputColumnIndex += 1) {
            ParsedColInfo col = m_parsedSelect.m_aggResultColumns.get(outputColumnIndex);
            AbstractExpression rootExpr = col.expression;
            AbstractExpression agg_input_expr = null;
            SchemaColumn schema_col = null;
            SchemaColumn top_schema_col = null;
            if (rootExpr instanceof AggregateExpression) {
                ExpressionType agg_expression_type = rootExpr.getExpressionType();
                agg_input_expr = rootExpr.getLeft();
                // A bit of a hack: ProjectionNodes after the
                // aggregate node need the output columns here to
                // contain TupleValueExpressions (effectively on a temp table).
                // So we construct one based on the output of the
                // aggregate expression, the column alias provided by HSQL,
                // and the offset into the output table schema for the
                // aggregate node that we're computing.
                // Oh, oh, it's magic, you know..
                TupleValueExpression tve = new TupleValueExpression(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "", col.alias, rootExpr, outputColumnIndex);
                tve.setDifferentiator(col.differentiator);
                boolean is_distinct = ((AggregateExpression) rootExpr).isDistinct();
                aggNode.addAggregate(agg_expression_type, is_distinct, outputColumnIndex, agg_input_expr);
                schema_col = new SchemaColumn(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "", col.alias, tve, outputColumnIndex);
                top_schema_col = new SchemaColumn(AbstractParsedStmt.TEMP_TABLE_NAME, AbstractParsedStmt.TEMP_TABLE_NAME, "", col.alias, tve, outputColumnIndex);
                /*
                     * Special case count(*), count(), sum(), min() and max() to
                     * push them down to each partition. It will do the
                     * push-down if the select columns only contains the listed
                     * aggregate operators and other group-by columns. If the
                     * select columns includes any other aggregates, it will not
                     * do the push-down. - nshi
                     */
                if (topAggNode != null) {
                    ExpressionType top_expression_type = agg_expression_type;
                    /*
                         * For count(*), count() and sum(), the pushed-down
                         * aggregate node doesn't change. An extra sum()
                         * aggregate node is added to the coordinator to sum up
                         * the numbers from all the partitions. The input schema
                         * and the output schema of the sum() aggregate node is
                         * the same as the output schema of the push-down
                         * aggregate node.
                         *
                         * If DISTINCT is specified, don't do push-down for
                         * count() and sum() when not group by partition column.
                         * An exception is the aggregation arguments are the
                         * partition column (ENG-4980).
                         */
                    if (agg_expression_type == ExpressionType.AGGREGATE_COUNT_STAR || agg_expression_type == ExpressionType.AGGREGATE_COUNT || agg_expression_type == ExpressionType.AGGREGATE_SUM) {
                        if (is_distinct && !(m_parsedSelect.hasPartitionColumnInGroupby() || canPushDownDistinctAggregation((AggregateExpression) rootExpr))) {
                            topAggNode = null;
                        } else {
                            // for aggregate distinct when group by
                            // partition column, the top aggregate node
                            // will be dropped later, thus there is no
                            // effect to assign the top_expression_type.
                            top_expression_type = ExpressionType.AGGREGATE_SUM;
                        }
                    } else /*
                         * For min() and max(), the pushed-down aggregate node
                         * doesn't change. An extra aggregate node of the same
                         * type is added to the coordinator. The input schema
                         * and the output schema of the top aggregate node is
                         * the same as the output schema of the pushed-down
                         * aggregate node.
                         *
                         * APPROX_COUNT_DISTINCT can be similarly pushed down, but
                         * must be split into two different functions, which is
                         * done later, from pushDownAggregate().
                         */
                    if (agg_expression_type != ExpressionType.AGGREGATE_MIN && agg_expression_type != ExpressionType.AGGREGATE_MAX && agg_expression_type != ExpressionType.AGGREGATE_APPROX_COUNT_DISTINCT) {
                        /*
                             * Unsupported aggregate for push-down (AVG for example).
                             */
                        topAggNode = null;
                    }
                    if (topAggNode != null) {
                        /*
                             * Input column of the top aggregate node is the
                             * output column of the push-down aggregate node
                             */
                        boolean topDistinctFalse = false;
                        topAggNode.addAggregate(top_expression_type, topDistinctFalse, outputColumnIndex, tve);
                    }
                }
            // end if we have a top agg node
            } else {
                // Any expression containing an aggregate has already been
                // broken down, so this column has no aggregate subexpressions.
                assert (!rootExpr.hasAnySubexpressionOfClass(AggregateExpression.class));
                /*
                     * These columns are the pass through columns that are not being
                     * aggregated on. These are the ones from the SELECT list. They
                     * MUST already exist in the child node's output. Find them and
                     * add them to the aggregate's output.
                     */
                schema_col = new SchemaColumn(col.tableName, col.tableAlias, col.columnName, col.alias, col.expression, outputColumnIndex);
                AbstractExpression topExpr = null;
                if (col.groupBy) {
                    topExpr = m_parsedSelect.m_groupByExpressions.get(col.alias);
                } else {
                    topExpr = col.expression;
                }
                top_schema_col = new SchemaColumn(col.tableName, col.tableAlias, col.columnName, col.alias, topExpr, outputColumnIndex);
            }
            agg_schema.addColumn(schema_col);
            top_agg_schema.addColumn(top_schema_col);
        }
        for (ParsedColInfo col : m_parsedSelect.groupByColumns()) {
            aggNode.addGroupByExpression(col.expression);
            if (topAggNode != null) {
                topAggNode.addGroupByExpression(m_parsedSelect.m_groupByExpressions.get(col.alias));
            }
        }
        aggNode.setOutputSchema(agg_schema);
        if (topAggNode != null) {
            if (m_parsedSelect.hasComplexGroupby()) {
                topAggNode.setOutputSchema(top_agg_schema);
            } else {
                topAggNode.setOutputSchema(agg_schema);
            }
        }
        // Never push down aggregation for MV fix case.
        root = pushDownAggregate(root, aggNode, topAggNode, m_parsedSelect);
    }
    return handleDistinctWithGroupby(root);
}
Also used : AbstractPlanNode(org.voltdb.plannodes.AbstractPlanNode) TupleValueExpression(org.voltdb.expressions.TupleValueExpression) AbstractReceivePlanNode(org.voltdb.plannodes.AbstractReceivePlanNode) HashAggregatePlanNode(org.voltdb.plannodes.HashAggregatePlanNode) AggregatePlanNode(org.voltdb.plannodes.AggregatePlanNode) PartialAggregatePlanNode(org.voltdb.plannodes.PartialAggregatePlanNode) AbstractReceivePlanNode(org.voltdb.plannodes.AbstractReceivePlanNode) MergeReceivePlanNode(org.voltdb.plannodes.MergeReceivePlanNode) ReceivePlanNode(org.voltdb.plannodes.ReceivePlanNode) PartialAggregatePlanNode(org.voltdb.plannodes.PartialAggregatePlanNode) HashAggregatePlanNode(org.voltdb.plannodes.HashAggregatePlanNode) SchemaColumn(org.voltdb.plannodes.SchemaColumn) AggregateExpression(org.voltdb.expressions.AggregateExpression) Constraint(org.voltdb.catalog.Constraint) AbstractExpression(org.voltdb.expressions.AbstractExpression) ExpressionType(org.voltdb.types.ExpressionType) NodeSchema(org.voltdb.plannodes.NodeSchema)
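
The comments above spell out the push-down rules: a per-partition aggregate feeds a coordinator ("top") aggregate, COUNT and SUM are re-combined at the top with SUM, MIN and MAX are re-combined with the same function, and AVG is not pushed down. The sketch below illustrates the arithmetic behind those rules with made-up partition data; it is plain Java, not planner code.

import java.util.Arrays;
import java.util.List;

// Two-level aggregation: per-partition aggregates combined by a coordinator aggregate.
public class PushDownAggregationSketch {
    public static void main(String[] args) {
        List<List<Integer>> partitions = List.of(
                List.of(1, 2, 3), List.of(10, 20), List.of(5));

        // Per-partition ("pushed-down") aggregates.
        long[] counts = partitions.stream().mapToLong(List::size).toArray();
        long[] sums = partitions.stream()
                .mapToLong(p -> p.stream().mapToInt(Integer::intValue).sum()).toArray();
        int[] maxes = partitions.stream()
                .mapToInt(p -> p.stream().mapToInt(Integer::intValue).max().orElseThrow()).toArray();

        // Coordinator ("top") aggregates over the per-partition results.
        long totalCount = Arrays.stream(counts).sum();            // COUNT -> SUM of counts
        long totalSum = Arrays.stream(sums).sum();                // SUM   -> SUM of sums
        int totalMax = Arrays.stream(maxes).max().orElseThrow();  // MAX   -> MAX of maxes

        // AVG cannot be pushed down as AVG-of-AVGs: averaging per-partition
        // averages weights every partition equally, regardless of its row count.
        double avgOfAvgs = partitions.stream()
                .mapToDouble(p -> p.stream().mapToInt(Integer::intValue).average().orElseThrow())
                .average().orElseThrow();
        double trueAvg = (double) totalSum / totalCount;

        System.out.printf("count=%d sum=%d max=%d%n", totalCount, totalSum, totalMax);
        System.out.printf("true avg=%.3f vs avg-of-avgs=%.3f%n", trueAvg, avgOfAvgs);
    }
}

The two printed averages differ, which is presumably why the planner handles AVG through the separate "avg pushdown" rewrite referenced in getNextSelectPlan (switchOptimalSuiteForAvgPushdown) rather than pushing AVG down directly.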

Aggregations

AbstractPlanNode (org.voltdb.plannodes.AbstractPlanNode): 259
AbstractExpression (org.voltdb.expressions.AbstractExpression): 55
IndexScanPlanNode (org.voltdb.plannodes.IndexScanPlanNode): 48
ProjectionPlanNode (org.voltdb.plannodes.ProjectionPlanNode): 48
AbstractScanPlanNode (org.voltdb.plannodes.AbstractScanPlanNode): 46
SeqScanPlanNode (org.voltdb.plannodes.SeqScanPlanNode): 44
AggregatePlanNode (org.voltdb.plannodes.AggregatePlanNode): 37
NestLoopPlanNode (org.voltdb.plannodes.NestLoopPlanNode): 37
HashAggregatePlanNode (org.voltdb.plannodes.HashAggregatePlanNode): 29
OrderByPlanNode (org.voltdb.plannodes.OrderByPlanNode): 29
ReceivePlanNode (org.voltdb.plannodes.ReceivePlanNode): 27
SendPlanNode (org.voltdb.plannodes.SendPlanNode): 27
MergeReceivePlanNode (org.voltdb.plannodes.MergeReceivePlanNode): 20
AbstractReceivePlanNode (org.voltdb.plannodes.AbstractReceivePlanNode): 16
NestLoopIndexPlanNode (org.voltdb.plannodes.NestLoopIndexPlanNode): 16
SchemaColumn (org.voltdb.plannodes.SchemaColumn): 15
NodeSchema (org.voltdb.plannodes.NodeSchema): 14
UnionPlanNode (org.voltdb.plannodes.UnionPlanNode): 14
LimitPlanNode (org.voltdb.plannodes.LimitPlanNode): 12
PlanNodeType (org.voltdb.types.PlanNodeType): 12