
Example 6 with JoinStrategy

Use of org.apache.derby.iapi.sql.compile.JoinStrategy in project derby by apache.

From the class FromBaseTable, method isOneRowResultSet.

/**
 * Return whether or not the underlying ResultSet tree will return
 * a single row, at most.  This method is intended to be used during
 * generation, after the "truly" best conglomerate has been chosen.
 * This is important for join nodes where we can save the extra next
 * on the right side if we know that it will return at most 1 row.
 *
 * @return Whether or not the underlying ResultSet tree will return a single row.
 * @exception StandardException		Thrown on error
 */
@Override
boolean isOneRowResultSet() throws StandardException {
    // EXISTS FBT will only return a single row
    if (existsBaseTable) {
        return true;
    }
    /* For hash join, we need to consider both the qualification
     * and hash join predicates and we consider them against all
     * conglomerates since we are looking for any uniqueness
     * condition that holds on the columns in the hash table,
     * otherwise we just consider the predicates in the
     * restriction list and the conglomerate being scanned.
     */
    AccessPath ap = getTrulyTheBestAccessPath();
    JoinStrategy trulyTheBestJoinStrategy = ap.getJoinStrategy();
    PredicateList pl;
    if (trulyTheBestJoinStrategy.isHashJoin()) {
        pl = new PredicateList(getContextManager());
        if (storeRestrictionList != null) {
            pl.nondestructiveAppend(storeRestrictionList);
        }
        if (nonStoreRestrictionList != null) {
            pl.nondestructiveAppend(nonStoreRestrictionList);
        }
        return isOneRowResultSet(pl);
    } else {
        return isOneRowResultSet(getTrulyTheBestAccessPath().getConglomerateDescriptor(), restrictionList);
    }
}
Also used: OptimizablePredicateList (org.apache.derby.iapi.sql.compile.OptimizablePredicateList), AccessPath (org.apache.derby.iapi.sql.compile.AccessPath), JoinStrategy (org.apache.derby.iapi.sql.compile.JoinStrategy)
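
For context, the branch above reduces to a small decision: an EXISTS base table is trivially a one-row result set; a hash join checks uniqueness against the combined store and non-store predicate lists; any other join strategy checks only the restriction list against the chosen conglomerate. The following self-contained sketch mirrors that control flow with hypothetical stand-in types (plain column-name lists instead of Derby's PredicateList and ConglomerateDescriptor, and a toy uniqueness test); it is an illustration of the decision, not the Derby API.

import java.util.ArrayList;
import java.util.List;

public class OneRowCheckSketch {

    static boolean isOneRowResultSet(boolean existsBaseTable,
                                     boolean isHashJoin,
                                     List<String> storeRestrictions,     // columns pinned by store predicates
                                     List<String> nonStoreRestrictions,  // columns pinned by non-store predicates
                                     List<String> restrictions,          // columns pinned by the restriction list
                                     List<String> uniqueKeyColumns) {    // columns of one unique index
        // An EXISTS base table returns at most one row by construction.
        if (existsBaseTable) {
            return true;
        }
        // Hash join: append both predicate lists non-destructively, as the Derby
        // code does, before looking for a uniqueness condition; otherwise only
        // the restriction list is consulted.
        List<String> candidates = new ArrayList<>();
        if (isHashJoin) {
            if (storeRestrictions != null) candidates.addAll(storeRestrictions);
            if (nonStoreRestrictions != null) candidates.addAll(nonStoreRestrictions);
        } else {
            candidates.addAll(restrictions);
        }
        // Toy uniqueness test: every column of the unique key is pinned to a constant.
        return !uniqueKeyColumns.isEmpty() && candidates.containsAll(uniqueKeyColumns);
    }

    public static void main(String[] args) {
        // Hash join with ID = <constant> against a unique key on ID: at most one row.
        System.out.println(isOneRowResultSet(false, true,
                List.of("ID"), null, List.of(), List.of("ID")));  // prints true
    }
}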

Example 7 with JoinStrategy

Use of org.apache.derby.iapi.sql.compile.JoinStrategy in project derby by apache.

From the class FromBaseTable, method changeAccessPath.

/**
 * @see ResultSetNode#changeAccessPath
 *
 * @exception StandardException		Thrown on error
 */
@Override
ResultSetNode changeAccessPath() throws StandardException {
    ResultSetNode retval;
    AccessPath ap = getTrulyTheBestAccessPath();
    ConglomerateDescriptor trulyTheBestConglomerateDescriptor = ap.getConglomerateDescriptor();
    JoinStrategy trulyTheBestJoinStrategy = ap.getJoinStrategy();
    Optimizer opt = ap.getOptimizer();
    if (optimizerTracingIsOn()) {
        getOptimizerTracer().traceChangingAccessPathForTable(tableNumber);
    }
    if (SanityManager.DEBUG) {
        SanityManager.ASSERT(trulyTheBestConglomerateDescriptor != null, "Should only modify access path after conglomerate has been chosen.");
    }
    /*
    ** Make sure user-specified bulk fetch is OK with the chosen join
    ** strategy.
    */
    if (bulkFetch != UNSET) {
        if (!trulyTheBestJoinStrategy.bulkFetchOK()) {
            throw StandardException.newException(SQLState.LANG_INVALID_BULK_FETCH_WITH_JOIN_TYPE, trulyTheBestJoinStrategy.getName());
        } else if (trulyTheBestJoinStrategy.ignoreBulkFetch()) {
            // bulkFetch has no meaning for hash join, just ignore it
            disableBulkFetch();
        } else if (isOneRowResultSet()) {
            // bug 4431 - ignore bulkfetch property if it's 1 row resultset
            disableBulkFetch();
        }
    }
    // bulkFetch = 1 is the same as no bulk fetch
    if (bulkFetch == 1) {
        disableBulkFetch();
    }
    /* Remove any redundant join clauses.  A redundant join clause is one
     * where there are other join clauses in the same equivalence class
     * after it in the PredicateList.
     */
    restrictionList.removeRedundantPredicates();
    /*
    ** Divide up the predicates for different processing phases of the
    ** best join strategy.
    */
    storeRestrictionList = new PredicateList(getContextManager());
    nonStoreRestrictionList = new PredicateList(getContextManager());
    requalificationRestrictionList = new PredicateList(getContextManager());
    trulyTheBestJoinStrategy.divideUpPredicateLists(this, restrictionList, storeRestrictionList, nonStoreRestrictionList, requalificationRestrictionList, getDataDictionary());
    /* Check to see if we are going to do execution-time probing
     * of an index using IN-list values.  We can tell by looking
     * at the restriction list: if there is an IN-list probe
     * predicate that is also a start/stop key then we know that
     * we're going to do execution-time probing.  In that case
     * we disable bulk fetching to minimize the number of non-
     * matching rows that we read from disk.  RESOLVE: Do we
     * really need to completely disable bulk fetching here,
     * or can we do something else?
     */
    for (Predicate pred : restrictionList) {
        if (pred.isInListProbePredicate() && pred.isStartKey()) {
            disableBulkFetch();
            multiProbing = true;
            break;
        }
    }
    /*
    ** Consider turning on bulkFetch if it is turned
    ** off.  Only turn it on if it is not an updatable
    ** scan and if it isn't a oneRowResultSet, and
    ** not a subquery, and it is OK to use bulk fetch
    ** with the chosen join strategy.  NOTE: the subquery logic
    ** could be more sophisticated -- we are taking
    ** the safe route in avoiding reading extra
    ** data for something like:
    **
    **    select x from t where x in (select y from t)
    **
    ** In this case we want to stop the subquery
    ** evaluation as soon as something matches.
    */
    if (trulyTheBestJoinStrategy.bulkFetchOK() && !(trulyTheBestJoinStrategy.ignoreBulkFetch()) && !bulkFetchTurnedOff && (bulkFetch == UNSET) && !forUpdate() && !isOneRowResultSet() && getLevel() == 0 && !validatingCheckConstraint) {
        bulkFetch = getDefaultBulkFetch();
    }
    /* Statement is dependent on the chosen conglomerate. */
    getCompilerContext().createDependency(trulyTheBestConglomerateDescriptor);
    /* No need to modify access path if conglomerate is the heap */
    if (!trulyTheBestConglomerateDescriptor.isIndex()) {
        /*
        ** We need a little special logic for SYSSTATEMENTS
        ** here.  SYSSTATEMENTS has a hidden column at the
        ** end.  When someone does a select * we don't want
        ** to get that column from the store.  So we'll always
        ** generate a partial read bitSet if we are scanning
        ** SYSSTATEMENTS to ensure we don't get the hidden
        ** column.
        */
        boolean isSysstatements = tableName.equals("SYS", "SYSSTATEMENTS");
        /* Template must reflect full row.
         * Compact RCL down to partial row.
         */
        templateColumns = getResultColumns();
        referencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), isSysstatements, false);
        setResultColumns(getResultColumns().compactColumns(isCursorTargetTable(), isSysstatements));
        return this;
    }
    /* Derby-1087: use data page when returning an updatable resultset */
    if (ap.getCoveringIndexScan() && (!isCursorTargetTable())) {
        /* Massage resultColumns so that it matches the index. */
        setResultColumns(newResultColumns(getResultColumns(), trulyTheBestConglomerateDescriptor, baseConglomerateDescriptor, false));
        /* We are going against the index.  The template row must be the full index row.
         * The template row will have the RID but the result row will not
         * since there is no need to go to the data page.
         */
        templateColumns = newResultColumns(getResultColumns(), trulyTheBestConglomerateDescriptor, baseConglomerateDescriptor, false);
        templateColumns.addRCForRID();
        // If this is for update then we need to get the RID in the result row
        if (forUpdate()) {
            getResultColumns().addRCForRID();
        }
        /* Compact RCL down to the partial row.  We always want a new
         * RCL and FormatableBitSet because this is a covering index.  (This is
         * because we don't want the RID in the partial row returned
         * by the store.)
         */
        referencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), true, false);
        setResultColumns(getResultColumns().compactColumns(isCursorTargetTable(), true));
        getResultColumns().setIndexRow(baseConglomerateDescriptor.getConglomerateNumber(), forUpdate());
        return this;
    }
    /* Statement is dependent on the base conglomerate if this is
     * a non-covering index.
     */
    getCompilerContext().createDependency(baseConglomerateDescriptor);
    /*
    ** On bulkFetch, we need to add the restrictions from
    ** the TableScan and reapply them here.
    */
    if (bulkFetch != UNSET) {
        restrictionList.copyPredicatesToOtherList(requalificationRestrictionList);
    }
    /*
    ** We know the chosen conglomerate is an index.  We need to allocate
    ** an IndexToBaseRowNode above us, and to change the result column
    ** list for this FromBaseTable to reflect the columns in the index.
    ** We also need to shift "cursor target table" status from this
    ** FromBaseTable to the new IndexToBaseRowNode (because that's where
    ** a cursor can fetch the current row).
    */
    ResultColumnList newResultColumns = newResultColumns(getResultColumns(), trulyTheBestConglomerateDescriptor, baseConglomerateDescriptor, true);
    /* Compact the RCL for the IndexToBaseRowNode down to
     * the partial row for the heap.  The referenced BitSet
     * will reflect only those columns coming from the heap.
     * (ie, it won't reflect columns coming from the index.)
     * NOTE: We need to re-get all of the columns from the heap
     * when doing a bulk fetch because we will be requalifying
     * the row in the IndexRowToBaseRow.
     */
    // Get the BitSet for all of the referenced columns
    FormatableBitSet indexReferencedCols = null;
    FormatableBitSet heapReferencedCols;
    if ((bulkFetch == UNSET) && (requalificationRestrictionList == null || requalificationRestrictionList.size() == 0)) {
        /* No BULK FETCH or requalification, XOR off the columns coming from the heap
         * to get the columns coming from the index.
         */
        indexReferencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), true, false);
        heapReferencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), true, true);
        if (heapReferencedCols != null) {
            indexReferencedCols.xor(heapReferencedCols);
        }
    } else {
        // BULK FETCH or requalification - re-get all referenced columns from the heap
        heapReferencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), true, false);
    }
    ResultColumnList heapRCL = getResultColumns().compactColumns(isCursorTargetTable(), false);
    heapRCL.setIndexRow(baseConglomerateDescriptor.getConglomerateNumber(), forUpdate());
    retval = new IndexToBaseRowNode(this, baseConglomerateDescriptor, heapRCL, isCursorTargetTable(), heapReferencedCols, indexReferencedCols, requalificationRestrictionList, forUpdate(), tableProperties, getContextManager());
    /*
    ** The template row is all the columns.  The
    ** result set is the compacted column list.
    */
    setResultColumns(newResultColumns);
    templateColumns = newResultColumns(getResultColumns(), trulyTheBestConglomerateDescriptor, baseConglomerateDescriptor, false);
    /* Since we are doing a non-covered index scan, if bulkFetch is on, then
     * the only columns that we need to get are those columns referenced in the start and stop positions
     * and the qualifiers (and the RID) because we will need to re-get all of the other
     * columns from the heap anyway.
     * At this point in time, columns referenced anywhere in the column tree are
     * marked as being referenced.  So, we clear all of the references, walk the
     * predicate list and remark the columns referenced from there and then add
     * the RID before compacting the columns.
     */
    if (bulkFetch != UNSET) {
        getResultColumns().markAllUnreferenced();
        storeRestrictionList.markReferencedColumns();
        if (nonStoreRestrictionList != null) {
            nonStoreRestrictionList.markReferencedColumns();
        }
    }
    getResultColumns().addRCForRID();
    templateColumns.addRCForRID();
    // Compact the RCL for the index scan down to the partial row.
    referencedCols = getResultColumns().getReferencedFormatableBitSet(isCursorTargetTable(), false, false);
    setResultColumns(getResultColumns().compactColumns(isCursorTargetTable(), false));
    getResultColumns().setIndexRow(baseConglomerateDescriptor.getConglomerateNumber(), forUpdate());
    /* We must remember if this was the cursorTargetTable
     * in order to get the right locking on the scan.
     */
    getUpdateLocks = isCursorTargetTable();
    setCursorTargetTable(false);
    return retval;
}
Also used: OptimizablePredicateList (org.apache.derby.iapi.sql.compile.OptimizablePredicateList), Optimizer (org.apache.derby.iapi.sql.compile.Optimizer), AccessPath (org.apache.derby.iapi.sql.compile.AccessPath), JoinStrategy (org.apache.derby.iapi.sql.compile.JoinStrategy), FormatableBitSet (org.apache.derby.iapi.services.io.FormatableBitSet), ConglomerateDescriptor (org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor), OptimizablePredicate (org.apache.derby.iapi.sql.compile.OptimizablePredicate)
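
The bulk-fetch handling near the top of changeAccessPath() packs several rules into one if/else chain: a user-specified bulk fetch is rejected outright if the chosen join strategy cannot use it, silently dropped if the strategy ignores it (hash join) or if the scan returns at most one row, and a value of 1 is treated the same as no bulk fetch. The condensed sketch below covers only those rules; JoinStrategySketch, effectiveBulkFetch, and the UNSET sentinel are hypothetical stand-ins for the Derby types, and an unchecked exception stands in for StandardException.

public class BulkFetchRules {

    static final int UNSET = -1;   // assumed sentinel for "no user-specified bulk fetch"

    // Hypothetical slice of the join-strategy contract used by the rules below.
    interface JoinStrategySketch {
        boolean bulkFetchOK();       // strategy tolerates bulk fetch at all
        boolean ignoreBulkFetch();   // strategy silently ignores it (e.g. hash join)
        String getName();
    }

    /** Returns the effective bulk-fetch size, or UNSET if bulk fetch ends up disabled. */
    static int effectiveBulkFetch(int requested,
                                  JoinStrategySketch strategy,
                                  boolean oneRowResultSet) {
        if (requested != UNSET) {
            if (!strategy.bulkFetchOK()) {
                // user asked for bulk fetch with a join type that cannot use it
                throw new IllegalStateException(
                        "bulk fetch not allowed with join strategy " + strategy.getName());
            }
            if (strategy.ignoreBulkFetch() || oneRowResultSet) {
                // meaningless for hash join; pointless for a one-row result set
                return UNSET;
            }
        }
        // bulkFetch = 1 is the same as no bulk fetch
        return (requested == 1) ? UNSET : requested;
    }

    public static void main(String[] args) {
        JoinStrategySketch nestedLoop = new JoinStrategySketch() {
            public boolean bulkFetchOK() { return true; }
            public boolean ignoreBulkFetch() { return false; }
            public String getName() { return "NESTEDLOOP"; }
        };
        System.out.println(effectiveBulkFetch(16, nestedLoop, false));  // 16
        System.out.println(effectiveBulkFetch(1, nestedLoop, false));   // -1 (UNSET)
        System.out.println(effectiveBulkFetch(16, nestedLoop, true));   // -1 (UNSET)
    }
}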

Example 8 with JoinStrategy

Use of org.apache.derby.iapi.sql.compile.JoinStrategy in project derby by apache.

From the class XMLOptTrace, method formatPlanSummary.

/**
 * <p>
 * Produce a string representation of the plan being considered now.
 * The string has the following grammar:
 * </p>
 *
 * <pre>
 * join :== factor OP factor
 *
 * OP :== "*" | "#"
 *
 * factor :== factor | conglomerateName
 * </pre>
 */
private String formatPlanSummary(int[] planOrder, int planType) {
    try {
        OptimizerPlan plan = null;
        StringBuilder buffer = new StringBuilder();
        boolean avoidSort = (planType == Optimizer.SORT_AVOIDANCE_PLAN);
        // a negative optimizable number indicates the end of the plan
        int planLength = 0;
        for (; planLength < planOrder.length; planLength++) {
            if (planOrder[planLength] < 0) {
                break;
            }
        }
        for (int i = 0; i < planLength; i++) {
            int listIndex = planOrder[i];
            if (listIndex >= _currentQueryBlock.optimizableList.size()) {
                // should never happen!
                buffer.append("{ UNKNOWN LIST INDEX " + listIndex + " } ");
                continue;
            }
            Optimizable optimizable = _currentQueryBlock.optimizableList.getOptimizable(listIndex);
            AccessPath ap = avoidSort ? optimizable.getBestSortAvoidancePath() : optimizable.getBestAccessPath();
            JoinStrategy js = ap.getJoinStrategy();
            UniqueTupleDescriptor utd = OptimizerImpl.isTableFunction(optimizable)
                ? ((StaticMethodCallNode) ((FromVTI) ((ProjectRestrictNode) optimizable).getChildResult()).getMethodCall()).ad
                : ap.getConglomerateDescriptor();
            OptimizerPlan current = (utd == null) ? new OptimizerPlan.DeadEnd(getOptimizableName(optimizable).toString()) : OptimizerPlan.makeRowSource(utd, _lcc.getDataDictionary());
            if (plan != null) {
                current = new OptimizerPlan.Join(js, plan, current);
            }
            plan = current;
        }
        return plan.toString();
    } catch (Exception e) {
        return e.getMessage();
    }
}
Also used: AccessPath (org.apache.derby.iapi.sql.compile.AccessPath), Optimizable (org.apache.derby.iapi.sql.compile.Optimizable), JoinStrategy (org.apache.derby.iapi.sql.compile.JoinStrategy), OptimizerPlan (org.apache.derby.iapi.sql.compile.OptimizerPlan), UniqueTupleDescriptor (org.apache.derby.iapi.sql.dictionary.UniqueTupleDescriptor), StandardException (org.apache.derby.shared.common.error.StandardException), ParserConfigurationException (javax.xml.parsers.ParserConfigurationException)
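
The grammar in the Javadoc describes a left-deep summary: each optimizable in plan order is folded onto the string built so far, joined by an operator symbol ("*" or "#"). The toy sketch below reproduces just that folding over plain strings; the factor names, the parenthesisation, and which symbol denotes which join strategy are illustrative assumptions, while the real method walks AccessPath, UniqueTupleDescriptor, and OptimizerPlan objects as shown above.

public class PlanSummarySketch {

    static String summarize(String[] factors, char[] joinOps) {
        // joinOps[i] is the operator that joins factors[i + 1] onto the plan built so far
        String plan = factors[0];
        for (int i = 1; i < factors.length; i++) {
            plan = "(" + plan + " " + joinOps[i - 1] + " " + factors[i] + ")";
        }
        return plan;
    }

    public static void main(String[] args) {
        // Three factors joined left-deep; the operator symbols come from the grammar
        // above (which symbol maps to which join strategy is not asserted here).
        System.out.println(summarize(
                new String[] { "T1_HEAP", "T2_INDEX", "T3_HEAP" },
                new char[] { '#', '*' }));   // prints ((T1_HEAP # T2_INDEX) * T3_HEAP)
    }
}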

Aggregations

JoinStrategy (org.apache.derby.iapi.sql.compile.JoinStrategy): 8
AccessPath (org.apache.derby.iapi.sql.compile.AccessPath): 5
OptimizablePredicate (org.apache.derby.iapi.sql.compile.OptimizablePredicate): 3
OptimizablePredicateList (org.apache.derby.iapi.sql.compile.OptimizablePredicateList): 3
FormatableBitSet (org.apache.derby.iapi.services.io.FormatableBitSet): 2
ConglomerateDescriptor (org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor): 2
ParserConfigurationException (javax.xml.parsers.ParserConfigurationException): 1
FormatableArrayHolder (org.apache.derby.iapi.services.io.FormatableArrayHolder): 1
FormatableIntHolder (org.apache.derby.iapi.services.io.FormatableIntHolder): 1
CostEstimate (org.apache.derby.iapi.sql.compile.CostEstimate): 1
Optimizable (org.apache.derby.iapi.sql.compile.Optimizable): 1
Optimizer (org.apache.derby.iapi.sql.compile.Optimizer): 1
OptimizerPlan (org.apache.derby.iapi.sql.compile.OptimizerPlan): 1
IndexRowGenerator (org.apache.derby.iapi.sql.dictionary.IndexRowGenerator): 1
UniqueTupleDescriptor (org.apache.derby.iapi.sql.dictionary.UniqueTupleDescriptor): 1
StoreCostController (org.apache.derby.iapi.store.access.StoreCostController): 1
DataValueDescriptor (org.apache.derby.iapi.types.DataValueDescriptor): 1
StandardException (org.apache.derby.shared.common.error.StandardException): 1