
Example 16 with TableRef

use of org.apache.phoenix.schema.TableRef in project phoenix by apache.

the class MutationState method checkpointIfNeccessary.

public boolean checkpointIfNeccessary(MutationPlan plan) throws SQLException {
    Transaction currentTx = getTransaction();
    if (getTransaction() == null || plan.getTargetRef() == null || plan.getTargetRef().getTable() == null || !plan.getTargetRef().getTable().isTransactional()) {
        return false;
    }
    Set<TableRef> sources = plan.getSourceRefs();
    if (sources.isEmpty()) {
        return false;
    }
    // For a DELETE statement, we're always querying the table being deleted from. This isn't
    // a problem, but it potentially could be if there are other references to the same table
    // nested in the DELETE statement (as a sub query or join, for example).
    TableRef ignoreForExcludeCurrent = plan.getOperation() == Operation.DELETE && sources.size() == 1 ? plan.getTargetRef() : null;
    boolean excludeCurrent = false;
    String targetPhysicalName = plan.getTargetRef().getTable().getPhysicalName().getString();
    for (TableRef source : sources) {
        if (source.getTable().isTransactional() && !source.equals(ignoreForExcludeCurrent)) {
            String sourcePhysicalName = source.getTable().getPhysicalName().getString();
            if (targetPhysicalName.equals(sourcePhysicalName)) {
                excludeCurrent = true;
                break;
            }
        }
    }
    // If we're reading from the same table we're mutating, we must prevent the data
    // we're writing from being visible to our own query.
    if (excludeCurrent) {
        // If any source tables have uncommitted data prior to last checkpoint,
        // then we must create a new checkpoint.
        boolean hasUncommittedData = false;
        for (TableRef source : sources) {
            String sourcePhysicalName = source.getTable().getPhysicalName().getString();
            // With an external transaction context we can't track uncommitted data ourselves,
            // so we always checkpoint when reading and writing to the same table.
            if (source.getTable().isTransactional() && (isExternalTxContext || uncommittedPhysicalNames.contains(sourcePhysicalName))) {
                hasUncommittedData = true;
                break;
            }
        }
        if (hasUncommittedData) {
            try {
                if (txContext == null) {
                    currentTx = tx = connection.getQueryServices().getTransactionSystemClient().checkpoint(currentTx);
                } else {
                    txContext.checkpoint();
                    currentTx = tx = txContext.getCurrentTransaction();
                }
                // Since we've checkpointed, we can clear out uncommitted set, since a statement run afterwards
                // should see all this data.
                uncommittedPhysicalNames.clear();
            } catch (TransactionFailureException e) {
                throw new SQLException(e);
            }
        }
        // Since we're querying our own table while mutating it, we must not see our
        // current mutations, otherwise we can get erroneous results (for DELETE)
        // or get into an infinite loop (for UPSERT SELECT).
        currentTx.setVisibility(VisibilityLevel.SNAPSHOT_EXCLUDE_CURRENT);
        return true;
    }
    return false;
}
Also used : TransactionFailureException(org.apache.tephra.TransactionFailureException) Transaction(org.apache.tephra.Transaction) SQLException(java.sql.SQLException) PTableRef(org.apache.phoenix.schema.PTableRef) TableRef(org.apache.phoenix.schema.TableRef)
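
The checkpoint above only matters when a statement reads from the same transactional table it writes to. Below is a minimal sketch of such a statement over plain JDBC, assuming a hypothetical transactional table MY_TABLE, a local Phoenix JDBC URL, and a cluster with transactions enabled; it is illustrative only, not taken from the Phoenix sources.

import java.sql.Connection;
import java.sql.DriverManager;

public class SelfReferencingUpsertSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; adjust the JDBC URL for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false);
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS MY_TABLE (K BIGINT PRIMARY KEY, V BIGINT) TRANSACTIONAL=true");
            conn.createStatement().execute("UPSERT INTO MY_TABLE VALUES (1, 1)");
            // This UPSERT SELECT reads MY_TABLE while also writing to it. Because there is
            // uncommitted data on the same physical table, checkpointIfNeccessary() checkpoints
            // the transaction and sets SNAPSHOT_EXCLUDE_CURRENT visibility so the query does not
            // see the rows it is currently inserting.
            conn.createStatement().execute(
                    "UPSERT INTO MY_TABLE (K, V) SELECT K + 1000, V FROM MY_TABLE");
            conn.commit();
        }
    }
}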

Example 17 with TableRef

use of org.apache.phoenix.schema.TableRef in project phoenix by apache.

the class MutationState method sendUncommitted.

/**
     * Support read-your-own-writes semantics by sending uncommitted data to HBase prior to running a
     * query. In this way, the uncommitted rows are visible to subsequent reads within the same
     * transaction, but are not actually committed until commit is called.
     * @param tableRefs the tables whose pending, uncommitted mutations should be sent
     * @return true if any data was sent and false otherwise.
     * @throws SQLException
     */
public boolean sendUncommitted(Iterator<TableRef> tableRefs) throws SQLException {
    Transaction currentTx = getTransaction();
    if (currentTx != null) {
        // Initialize visibility so that transactions see their own writes.
        // The checkpoint() method will set it to not see writes if necessary.
        currentTx.setVisibility(VisibilityLevel.SNAPSHOT);
    }
    Iterator<TableRef> filteredTableRefs = Iterators.filter(tableRefs, new Predicate<TableRef>() {

        @Override
        public boolean apply(TableRef tableRef) {
            return tableRef.getTable().isTransactional();
        }
    });
    if (filteredTableRefs.hasNext()) {
        // FIXME: strip table alias to prevent equality check from failing due to alias mismatch on null alias.
        // We really should be keying the tables based on the physical table name.
        List<TableRef> strippedAliases = Lists.newArrayListWithExpectedSize(mutations.keySet().size());
        while (filteredTableRefs.hasNext()) {
            TableRef tableRef = filteredTableRefs.next();
            strippedAliases.add(new TableRef(null, tableRef.getTable(), tableRef.getTimeStamp(), tableRef.getLowerBoundTimeStamp(), tableRef.hasDynamicCols()));
        }
        startTransaction();
        send(strippedAliases.iterator());
        return true;
    }
    return false;
}
Also used : Transaction(org.apache.tephra.Transaction) PTableRef(org.apache.phoenix.schema.PTableRef) TableRef(org.apache.phoenix.schema.TableRef)
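
As a companion to the method above, here is a minimal sketch of the read-your-own-writes behavior it enables, again using plain JDBC with a hypothetical transactional table T and a local Phoenix URL (illustrative, not from the Phoenix sources).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ReadYourOwnWritesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false);
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS T (ID BIGINT PRIMARY KEY, NAME VARCHAR) TRANSACTIONAL=true");
            // The row only exists in MutationState at this point; nothing is committed yet.
            conn.createStatement().execute("UPSERT INTO T VALUES (1, 'a')");
            // Before the query runs, sendUncommitted(...) pushes the pending transactional
            // mutations to HBase so this SELECT can observe them, while other connections
            // still cannot see the row until commit() is called.
            try (ResultSet rs = conn.createStatement().executeQuery("SELECT NAME FROM T WHERE ID = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // prints "a"
                }
            }
            conn.commit();
        }
    }
}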

Example 18 with TableRef

use of org.apache.phoenix.schema.TableRef in project phoenix by apache.

the class CorrelatePlanTest method testCorrelatePlan.

private void testCorrelatePlan(Object[][] leftRelation, Object[][] rightRelation, int leftCorrelColumn, int rightCorrelColumn, JoinType type, Object[][] expectedResult, Integer offset) throws SQLException {
    TableRef leftTable = createProjectedTableFromLiterals(leftRelation[0]);
    TableRef rightTable = createProjectedTableFromLiterals(rightRelation[0]);
    String varName = "$cor0";
    RuntimeContext runtimeContext = new RuntimeContextImpl();
    runtimeContext.defineCorrelateVariable(varName, leftTable);
    QueryPlan leftPlan = newLiteralResultIterationPlan(leftRelation, offset);
    QueryPlan rightPlan = newLiteralResultIterationPlan(rightRelation, offset);
    Expression columnExpr = new ColumnRef(rightTable, rightCorrelColumn).newColumnExpression();
    Expression fieldAccess = new CorrelateVariableFieldAccessExpression(runtimeContext, varName, new ColumnRef(leftTable, leftCorrelColumn).newColumnExpression());
    Expression filter = ComparisonExpression.create(CompareOp.EQUAL, Arrays.asList(columnExpr, fieldAccess), CONTEXT.getTempPtr(), false);
    rightPlan = new ClientScanPlan(CONTEXT, SelectStatement.SELECT_ONE, rightTable, RowProjector.EMPTY_PROJECTOR, null, null, filter, OrderBy.EMPTY_ORDER_BY, rightPlan);
    PTable joinedTable = JoinCompiler.joinProjectedTables(leftTable.getTable(), rightTable.getTable(), type);
    CorrelatePlan correlatePlan = new CorrelatePlan(leftPlan, rightPlan, varName, type, false, runtimeContext, joinedTable, leftTable.getTable(), rightTable.getTable(), leftTable.getTable().getColumns().size());
    ResultIterator iter = correlatePlan.iterator();
    ImmutableBytesWritable ptr = new ImmutableBytesWritable();
    for (Object[] row : expectedResult) {
        Tuple next = iter.next();
        assertNotNull(next);
        for (int i = 0; i < row.length; i++) {
            PColumn column = joinedTable.getColumns().get(i);
            boolean eval = new ProjectedColumnExpression(column, joinedTable, column.getName().getString()).evaluate(next, ptr);
            Object o = eval ? column.getDataType().toObject(ptr) : null;
            assertEquals(row[i], o);
        }
    }
}
Also used : ImmutableBytesWritable(org.apache.hadoop.hbase.io.ImmutableBytesWritable) CorrelateVariableFieldAccessExpression(org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression) ResultIterator(org.apache.phoenix.iterate.ResultIterator) ProjectedColumnExpression(org.apache.phoenix.expression.ProjectedColumnExpression) QueryPlan(org.apache.phoenix.compile.QueryPlan) PTable(org.apache.phoenix.schema.PTable) PColumn(org.apache.phoenix.schema.PColumn) Expression(org.apache.phoenix.expression.Expression) LiteralExpression(org.apache.phoenix.expression.LiteralExpression) ComparisonExpression(org.apache.phoenix.expression.ComparisonExpression) ColumnRef(org.apache.phoenix.schema.ColumnRef) TableRef(org.apache.phoenix.schema.TableRef) Tuple(org.apache.phoenix.schema.tuple.Tuple) SingleKeyValueTuple(org.apache.phoenix.schema.tuple.SingleKeyValueTuple)
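
For orientation, a hypothetical test built on the helper above might exercise an inner correlate join of two small literal relations, correlated on column 0 of each side. The relation contents and expected rows below are invented for illustration and assume the method lives inside CorrelatePlanTest with its JUnit imports; JoinType here is org.apache.phoenix.parse.JoinTableNode.JoinType.

@Test
public void testInnerCorrelateJoinSketch() throws SQLException {
    // Left and right relations as literal rows; column 0 is the correlation key on both sides.
    Object[][] left = { { 1, "a" }, { 2, "b" } };
    Object[][] right = { { 1, 10 }, { 2, 20 }, { 3, 30 } };
    // For an inner join, each expected row is a left row followed by the right row that
    // shares its key; the unmatched right row (3, 30) is dropped.
    Object[][] expected = { { 1, "a", 1, 10 }, { 2, "b", 2, 20 } };
    testCorrelatePlan(left, right, 0, 0, JoinType.Inner, expected, null);
}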

Example 19 with TableRef

use of org.apache.phoenix.schema.TableRef in project phoenix by apache.

the class LiteralResultIteratorPlanTest method createProjectedTableFromLiterals.

private TableRef createProjectedTableFromLiterals(Object[] row) {
    List<PColumn> columns = Lists.<PColumn>newArrayList();
    for (int i = 0; i < row.length; i++) {
        String name = ParseNodeFactory.createTempAlias();
        Expression expr = LiteralExpression.newConstant(row[i]);
        PName colName = PNameFactory.newName(name);
        columns.add(new PColumnImpl(PNameFactory.newName(name), PNameFactory.newName(VALUE_COLUMN_FAMILY), expr.getDataType(), expr.getMaxLength(), expr.getScale(), expr.isNullable(), i, expr.getSortOrder(), null, null, false, name, false, false, colName.getBytes()));
    }
    try {
        PTable pTable = PTableImpl.makePTable(null, PName.EMPTY_NAME, PName.EMPTY_NAME, PTableType.SUBQUERY, null, MetaDataProtocol.MIN_TABLE_TIMESTAMP, PTable.INITIAL_SEQ_NUM, null, null, columns, null, null, Collections.<PTable>emptyList(), false, Collections.<PName>emptyList(), null, null, false, false, false, null, null, null, true, false, 0, 0L, false, null, false, ImmutableStorageScheme.ONE_CELL_PER_COLUMN, QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, EncodedCQCounter.NULL_COUNTER, true);
        TableRef sourceTable = new TableRef(pTable);
        List<ColumnRef> sourceColumnRefs = Lists.<ColumnRef>newArrayList();
        for (PColumn column : sourceTable.getTable().getColumns()) {
            sourceColumnRefs.add(new ColumnRef(sourceTable, column.getPosition()));
        }
        return new TableRef(TupleProjectionCompiler.createProjectedTable(sourceTable, sourceColumnRefs, false));
    } catch (SQLException e) {
        throw new RuntimeException(e);
    }
}
Also used : PColumnImpl(org.apache.phoenix.schema.PColumnImpl) SQLException(java.sql.SQLException) PTable(org.apache.phoenix.schema.PTable) PColumn(org.apache.phoenix.schema.PColumn) Expression(org.apache.phoenix.expression.Expression) ProjectedColumnExpression(org.apache.phoenix.expression.ProjectedColumnExpression) LiteralExpression(org.apache.phoenix.expression.LiteralExpression) PName(org.apache.phoenix.schema.PName) ColumnRef(org.apache.phoenix.schema.ColumnRef) TableRef(org.apache.phoenix.schema.TableRef)
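
A hypothetical caller inside the same test class could use this helper to derive a projected TableRef from a single row of literals and inspect the synthesized columns; the row values below are invented for illustration.

// One literal per projected column; each column's PDataType is inferred from its value.
Object[] row = { 1, "a", 2.5 };
TableRef tableRef = createProjectedTableFromLiterals(row);
for (PColumn column : tableRef.getTable().getColumns()) {
    // Each column carries a generated temporary alias and the inferred data type.
    System.out.println(column.getName().getString() + " : " + column.getDataType());
}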

Example 20 with TableRef

use of org.apache.phoenix.schema.TableRef in project phoenix by apache.

the class TupleProjectionCompiler method createProjectedTable.

public static PTable createProjectedTable(SelectStatement select, StatementContext context) throws SQLException {
    Preconditions.checkArgument(!select.isJoin());
    // Non-group-by and group-by aggregations will create their own projected results.
    if (select.getInnerSelectStatement() != null
            || select.getFrom() == null
            || select.isAggregate()
            || select.isDistinct()
            || (context.getResolver().getTables().get(0).getTable().getType() != PTableType.TABLE
                    && context.getResolver().getTables().get(0).getTable().getType() != PTableType.INDEX
                    && context.getResolver().getTables().get(0).getTable().getType() != PTableType.VIEW))
        return null;
    List<PColumn> projectedColumns = new ArrayList<PColumn>();
    boolean isWildcard = false;
    Set<String> families = new HashSet<String>();
    ColumnRefVisitor visitor = new ColumnRefVisitor(context);
    TableRef tableRef = context.getCurrentTable();
    PTable table = tableRef.getTable();
    for (AliasedNode aliasedNode : select.getSelect()) {
        ParseNode node = aliasedNode.getNode();
        if (node instanceof WildcardParseNode) {
            if (((WildcardParseNode) node).isRewrite()) {
                TableRef parentTableRef = FromCompiler.getResolver(NODE_FACTORY.namedTable(null, TableName.create(table.getSchemaName().getString(), table.getParentTableName().getString())), context.getConnection()).resolveTable(table.getSchemaName().getString(), table.getParentTableName().getString());
                for (PColumn column : parentTableRef.getTable().getColumns()) {
                    NODE_FACTORY.column(null, '"' + IndexUtil.getIndexColumnName(column) + '"', null).accept(visitor);
                }
            }
            isWildcard = true;
        } else if (node instanceof FamilyWildcardParseNode) {
            FamilyWildcardParseNode familyWildcardNode = (FamilyWildcardParseNode) node;
            String familyName = familyWildcardNode.getName();
            if (familyWildcardNode.isRewrite()) {
                TableRef parentTableRef = FromCompiler.getResolver(NODE_FACTORY.namedTable(null, TableName.create(table.getSchemaName().getString(), table.getParentTableName().getString())), context.getConnection()).resolveTable(table.getSchemaName().getString(), table.getParentTableName().getString());
                for (PColumn column : parentTableRef.getTable().getColumnFamily(familyName).getColumns()) {
                    NODE_FACTORY.column(null, '"' + IndexUtil.getIndexColumnName(column) + '"', null).accept(visitor);
                }
            }
            families.add(familyName);
        } else {
            node.accept(visitor);
        }
    }
    if (!isWildcard) {
        for (OrderByNode orderBy : select.getOrderBy()) {
            orderBy.getNode().accept(visitor);
        }
    }
    boolean hasSaltingColumn = table.getBucketNum() != null;
    int position = hasSaltingColumn ? 1 : 0;
    // Always project PK columns first in case there are some PK columns added by alter table.
    for (int i = position; i < table.getPKColumns().size(); i++) {
        PColumn sourceColumn = table.getPKColumns().get(i);
        ColumnRef sourceColumnRef = new ColumnRef(tableRef, sourceColumn.getPosition());
        PColumn column = new ProjectedColumn(sourceColumn.getName(), sourceColumn.getFamilyName(), position++, sourceColumn.isNullable(), sourceColumnRef, null);
        projectedColumns.add(column);
    }
    for (PColumn sourceColumn : table.getColumns()) {
        if (SchemaUtil.isPKColumn(sourceColumn))
            continue;
        ColumnRef sourceColumnRef = new ColumnRef(tableRef, sourceColumn.getPosition());
        if (!isWildcard && !visitor.columnRefSet.contains(sourceColumnRef) && !families.contains(sourceColumn.getFamilyName().getString()))
            continue;
        PColumn column = new ProjectedColumn(sourceColumn.getName(), sourceColumn.getFamilyName(), position++, sourceColumn.isNullable(), sourceColumnRef, sourceColumn.getColumnQualifierBytes());
        projectedColumns.add(column);
        // Wildcard or FamilyWildcard will be handled by ProjectionCompiler.
        if (!isWildcard && !families.contains(sourceColumn.getFamilyName())) {
            EncodedColumnsUtil.setColumns(column, table, context.getScan());
        }
    }
    // add LocalIndexDataColumnRef
    for (LocalIndexDataColumnRef sourceColumnRef : visitor.localIndexColumnRefSet) {
        PColumn column = new ProjectedColumn(sourceColumnRef.getColumn().getName(), sourceColumnRef.getColumn().getFamilyName(), position++, sourceColumnRef.getColumn().isNullable(), sourceColumnRef, sourceColumnRef.getColumn().getColumnQualifierBytes());
        projectedColumns.add(column);
    }
    return PTableImpl.makePTable(table.getTenantId(), table.getSchemaName(), table.getTableName(), PTableType.PROJECTED, table.getIndexState(), table.getTimeStamp(), table.getSequenceNumber(), table.getPKName(), table.getBucketNum(), projectedColumns, table.getParentSchemaName(), table.getParentName(), table.getIndexes(), table.isImmutableRows(), Collections.<PName>emptyList(), null, null, table.isWALDisabled(), table.isMultiTenant(), table.getStoreNulls(), table.getViewType(), table.getViewIndexId(), table.getIndexType(), table.rowKeyOrderOptimizable(), table.isTransactional(), table.getUpdateCacheFrequency(), table.getIndexDisableTimestamp(), table.isNamespaceMapped(), table.getAutoPartitionSeqName(), table.isAppendOnlySchema(), table.getImmutableStorageScheme(), table.getEncodingScheme(), table.getEncodedCQCounter(), table.useStatsForParallelization());
}
Also used : ArrayList(java.util.ArrayList) OrderByNode(org.apache.phoenix.parse.OrderByNode) FamilyWildcardParseNode(org.apache.phoenix.parse.FamilyWildcardParseNode) WildcardParseNode(org.apache.phoenix.parse.WildcardParseNode) AliasedNode(org.apache.phoenix.parse.AliasedNode) LocalIndexDataColumnRef(org.apache.phoenix.schema.LocalIndexDataColumnRef) PTable(org.apache.phoenix.schema.PTable) ProjectedColumn(org.apache.phoenix.schema.ProjectedColumn) PColumn(org.apache.phoenix.schema.PColumn) ColumnParseNode(org.apache.phoenix.parse.ColumnParseNode) ParseNode(org.apache.phoenix.parse.ParseNode) ColumnRef(org.apache.phoenix.schema.ColumnRef) TableRef(org.apache.phoenix.schema.TableRef) HashSet(java.util.HashSet)
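
To make the effect of the method above concrete, here is a hypothetical sketch of how the projected table could be inspected for a simple single-table query. The SelectStatement and StatementContext are assumed to come from the normal parse/compile pipeline for SELECT B FROM MY_TABLE ORDER BY C against a hypothetical table MY_TABLE(A BIGINT PRIMARY KEY, B VARCHAR, C DATE).

// select and context are assumed to have been built by the standard compilation path
// for the query above; this fragment only illustrates the shape of the result.
PTable projected = TupleProjectionCompiler.createProjectedTable(select, context);
if (projected != null) {
    // PK columns are projected first (A), then only the referenced non-PK columns:
    // B from the select list and C from the ORDER BY; unreferenced columns are omitted.
    for (PColumn column : projected.getColumns()) {
        System.out.println(column.getName().getString());
    }
}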

Aggregations

TableRef (org.apache.phoenix.schema.TableRef): 43 usages
PTable (org.apache.phoenix.schema.PTable): 30 usages
PColumn (org.apache.phoenix.schema.PColumn): 16 usages
Expression (org.apache.phoenix.expression.Expression): 14 usages
SQLException (java.sql.SQLException): 13 usages
ColumnRef (org.apache.phoenix.schema.ColumnRef): 13 usages
LiteralExpression (org.apache.phoenix.expression.LiteralExpression): 12 usages
PhoenixConnection (org.apache.phoenix.jdbc.PhoenixConnection): 12 usages
Scan (org.apache.hadoop.hbase.client.Scan): 11 usages
ParseNode (org.apache.phoenix.parse.ParseNode): 11 usages
SelectStatement (org.apache.phoenix.parse.SelectStatement): 10 usages
ArrayList (java.util.ArrayList): 9 usages
ImmutableBytesWritable (org.apache.hadoop.hbase.io.ImmutableBytesWritable): 9 usages
PTableRef (org.apache.phoenix.schema.PTableRef): 8 usages
List (java.util.List): 7 usages
Map (java.util.Map): 7 usages
ImmutableBytesPtr (org.apache.phoenix.hbase.index.util.ImmutableBytesPtr): 7 usages
Hint (org.apache.phoenix.parse.HintNode.Hint): 7 usages
Tuple (org.apache.phoenix.schema.tuple.Tuple): 6 usages
ProjectedColumnExpression (org.apache.phoenix.expression.ProjectedColumnExpression): 5 usages