
Example 21 with RelDataType

use of org.apache.calcite.rel.type.RelDataType in project hive by apache.

the class HiveRelFieldTrimmer method trimFields.

/**
   * Variant of {@link #trimFields(RelNode, ImmutableBitSet, Set)} for
   * {@link org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveMultiJoin}.
   */
public TrimResult trimFields(HiveMultiJoin join, ImmutableBitSet fieldsUsed, Set<RelDataTypeField> extraFields) {
    final int fieldCount = join.getRowType().getFieldCount();
    final RexNode conditionExpr = join.getCondition();
    final List<RexNode> joinFilters = join.getJoinFilters();
    // Add in fields used in the condition.
    final Set<RelDataTypeField> combinedInputExtraFields = new LinkedHashSet<RelDataTypeField>(extraFields);
    RelOptUtil.InputFinder inputFinder = new RelOptUtil.InputFinder(combinedInputExtraFields);
    inputFinder.inputBitSet.addAll(fieldsUsed);
    conditionExpr.accept(inputFinder);
    final ImmutableBitSet fieldsUsedPlus = inputFinder.inputBitSet.build();
    int inputStartPos = 0;
    int changeCount = 0;
    int newFieldCount = 0;
    List<RelNode> newInputs = new ArrayList<RelNode>();
    List<Mapping> inputMappings = new ArrayList<Mapping>();
    for (RelNode input : join.getInputs()) {
        final RelDataType inputRowType = input.getRowType();
        final int inputFieldCount = inputRowType.getFieldCount();
        // Compute required mapping.
        ImmutableBitSet.Builder inputFieldsUsed = ImmutableBitSet.builder();
        for (int bit : fieldsUsedPlus) {
            if (bit >= inputStartPos && bit < inputStartPos + inputFieldCount) {
                inputFieldsUsed.set(bit - inputStartPos);
            }
        }
        Set<RelDataTypeField> inputExtraFields = Collections.<RelDataTypeField>emptySet();
        TrimResult trimResult = trimChild(join, input, inputFieldsUsed.build(), inputExtraFields);
        newInputs.add(trimResult.left);
        if (trimResult.left != input) {
            ++changeCount;
        }
        final Mapping inputMapping = trimResult.right;
        inputMappings.add(inputMapping);
        // Move offset to point to start of next input.
        inputStartPos += inputFieldCount;
        newFieldCount += inputMapping.getTargetCount();
    }
    Mapping mapping = Mappings.create(MappingType.INVERSE_SURJECTION, fieldCount, newFieldCount);
    int offset = 0;
    int newOffset = 0;
    for (int i = 0; i < inputMappings.size(); i++) {
        Mapping inputMapping = inputMappings.get(i);
        for (IntPair pair : inputMapping) {
            mapping.set(pair.source + offset, pair.target + newOffset);
        }
        offset += inputMapping.getSourceCount();
        newOffset += inputMapping.getTargetCount();
    }
    if (changeCount == 0 && mapping.isIdentity()) {
        return new TrimResult(join, Mappings.createIdentity(fieldCount));
    }
    // Build new join.
    final RexVisitor<RexNode> shuttle = new RexPermuteInputsShuttle(mapping, newInputs.toArray(new RelNode[newInputs.size()]));
    RexNode newConditionExpr = conditionExpr.accept(shuttle);
    List<RexNode> newJoinFilters = Lists.newArrayList();
    for (RexNode joinFilter : joinFilters) {
        newJoinFilters.add(joinFilter.accept(shuttle));
    }
    final RelDataType newRowType = RelOptUtil.permute(join.getCluster().getTypeFactory(), join.getRowType(), mapping);
    final RelNode newJoin = new HiveMultiJoin(join.getCluster(), newInputs, newConditionExpr, newRowType, join.getJoinInputs(), join.getJoinTypes(), newJoinFilters);
    return new TrimResult(newJoin, mapping);
}
Also used : LinkedHashSet(java.util.LinkedHashSet) ImmutableBitSet(org.apache.calcite.util.ImmutableBitSet) HiveMultiJoin(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveMultiJoin) RelOptUtil(org.apache.calcite.plan.RelOptUtil) ArrayList(java.util.ArrayList) Mapping(org.apache.calcite.util.mapping.Mapping) RelDataType(org.apache.calcite.rel.type.RelDataType) IntPair(org.apache.calcite.util.mapping.IntPair) RelDataTypeField(org.apache.calcite.rel.type.RelDataTypeField) RelNode(org.apache.calcite.rel.RelNode) RexPermuteInputsShuttle(org.apache.calcite.rex.RexPermuteInputsShuttle) RexNode(org.apache.calcite.rex.RexNode)
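The heart of this trimmer is the loop that stitches the per-input mappings into one join-level mapping by keeping running source and target offsets. Below is a minimal standalone sketch of just that step, using Calcite's mapping utilities with two hypothetical trimmed inputs (one keeping 1 of 3 fields, the other keeping 2 of 3); the class name and field counts are illustrative, not taken from Hive.

import java.util.Arrays;
import java.util.List;

import org.apache.calcite.util.mapping.IntPair;
import org.apache.calcite.util.mapping.Mapping;
import org.apache.calcite.util.mapping.MappingType;
import org.apache.calcite.util.mapping.Mappings;

/** Minimal sketch: merge per-input trim mappings into one join-level mapping. */
public class MergeMappingsSketch {
    public static void main(String[] args) {
        // Hypothetical inputs: input 0 keeps field 1 of 3, input 1 keeps fields 0 and 2 of 3.
        Mapping m0 = Mappings.create(MappingType.INVERSE_SURJECTION, 3, 1);
        m0.set(1, 0);
        Mapping m1 = Mappings.create(MappingType.INVERSE_SURJECTION, 3, 2);
        m1.set(0, 0);
        m1.set(2, 1);
        List<Mapping> inputMappings = Arrays.asList(m0, m1);

        // Combined mapping over the concatenated row (6 source fields -> 3 target fields).
        Mapping mapping = Mappings.create(MappingType.INVERSE_SURJECTION, 6, 3);
        int offset = 0;
        int newOffset = 0;
        for (Mapping inputMapping : inputMappings) {
            for (IntPair pair : inputMapping) {
                mapping.set(pair.source + offset, pair.target + newOffset);
            }
            offset += inputMapping.getSourceCount();
            newOffset += inputMapping.getTargetCount();
        }
        // Contains 1 -> 0, 3 -> 1, 5 -> 2: fields of later inputs are renumbered
        // relative to the concatenated join row, exactly as trimFields needs.
        for (IntPair pair : mapping) {
            System.out.println("source " + pair.source + " -> target " + pair.target);
        }
    }
}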

Example 22 with RelDataType

use of org.apache.calcite.rel.type.RelDataType in project hive by apache.

the class HiveProject method create.

/**
   * Creates a HiveProject with no sort keys.
   *
   * @param child
   *          input relational expression
   * @param exps
   *          set of expressions for the input columns
   * @param fieldNames
   *          aliases of the expressions
   */
public static HiveProject create(RelNode child, List<? extends RexNode> exps, List<String> fieldNames) throws CalciteSemanticException {
    RelOptCluster cluster = child.getCluster();
    // 1 Ensure columnNames are unique - CALCITE-411
    if (fieldNames != null && !Util.isDistinct(fieldNames)) {
        String msg = "Select list contains multiple expressions with the same name." + fieldNames;
        throw new CalciteSemanticException(msg, UnsupportedFeature.Same_name_in_multiple_expressions);
    }
    RelDataType rowType = RexUtil.createStructType(cluster.getTypeFactory(), exps, fieldNames);
    return create(cluster, child, exps, rowType, Collections.<RelCollation>emptyList());
}
Also used : RelOptCluster(org.apache.calcite.plan.RelOptCluster) RelDataType(org.apache.calcite.rel.type.RelDataType) CalciteSemanticException(org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException)
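The rowType that RexUtil.createStructType derives here is just a struct RelDataType pairing each expression's type with its alias. As a rough, hedged illustration (the standalone JavaTypeFactoryImpl, column names and types below are assumptions, not taken from Hive), the same kind of row type can be built directly through the type factory's builder, together with the CALCITE-411 duplicate-alias check:

import java.util.Arrays;
import java.util.List;

import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.util.Util;

/** Minimal sketch: the struct row type a projection would carry, built directly. */
public class ProjectRowTypeSketch {
    public static void main(String[] args) {
        // Hypothetical aliases; HiveProject.create rejects duplicates (CALCITE-411).
        List<String> fieldNames = Arrays.asList("id", "name");
        if (!Util.isDistinct(fieldNames)) {
            throw new IllegalArgumentException("Select list contains duplicate aliases: " + fieldNames);
        }

        RelDataTypeFactory typeFactory = new JavaTypeFactoryImpl();
        RelDataType rowType = typeFactory.builder()
            .add(fieldNames.get(0), SqlTypeName.BIGINT)
            .add(fieldNames.get(1), SqlTypeName.VARCHAR)
            .build();
        // Typically prints something like: RecordType(BIGINT id, VARCHAR name)
        System.out.println(rowType);
    }
}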

Example 23 with RelDataType

use of org.apache.calcite.rel.type.RelDataType in project hive by apache.

the class HiveTableScan method project.

@Override
public RelNode project(ImmutableBitSet fieldsUsed, Set<RelDataTypeField> extraFields, RelBuilder relBuilder) {
    // 1. If the schema is the same then bail out
    final int fieldCount = getRowType().getFieldCount();
    if (fieldsUsed.equals(ImmutableBitSet.range(fieldCount)) && extraFields.isEmpty()) {
        return this;
    }
    // 2. Make sure there is no dynamic addition of virtual cols
    if (extraFields != null && !extraFields.isEmpty()) {
        throw new RuntimeException("Hive TS does not support adding virtual columns dynamically");
    }
    // 3. Create new TS schema that is a subset of original
    final List<RelDataTypeField> fields = getRowType().getFieldList();
    List<RelDataType> fieldTypes = new LinkedList<RelDataType>();
    List<String> fieldNames = new LinkedList<String>();
    List<RexNode> exprList = new ArrayList<RexNode>();
    RexBuilder rexBuilder = getCluster().getRexBuilder();
    for (int i : fieldsUsed) {
        RelDataTypeField field = fields.get(i);
        fieldTypes.add(field.getType());
        fieldNames.add(field.getName());
        exprList.add(rexBuilder.makeInputRef(this, i));
    }
    // 4. Build new TS
    HiveTableScan newHT = copy(getCluster().getTypeFactory().createStructType(fieldTypes, fieldNames));
    // 5. Add Proj on top of TS
    HiveProject hp = (HiveProject) relBuilder.push(newHT).project(exprList, new ArrayList<String>(fieldNames)).build();
    // 6. Set synthetic flag, so that we would push filter below this one
    hp.setSynthetic();
    return hp;
}
Also used : ArrayList(java.util.ArrayList) RelDataType(org.apache.calcite.rel.type.RelDataType) LinkedList(java.util.LinkedList) RelDataTypeField(org.apache.calcite.rel.type.RelDataTypeField) RexBuilder(org.apache.calcite.rex.RexBuilder) RexNode(org.apache.calcite.rex.RexNode)
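Step 3 above boils down to selecting a subset of the scan's fields and rebuilding a struct type from them. A small self-contained sketch of that idea follows, with a hypothetical three-column schema and a standalone JavaTypeFactoryImpl standing in for the cluster's type factory:

import java.util.ArrayList;
import java.util.List;

import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeField;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.util.ImmutableBitSet;

/** Minimal sketch: project a row type down to the used fields only. */
public class SubsetRowTypeSketch {
    public static void main(String[] args) {
        RelDataTypeFactory typeFactory = new JavaTypeFactoryImpl();
        // Hypothetical original scan schema.
        RelDataType scanRowType = typeFactory.builder()
            .add("key", SqlTypeName.BIGINT)
            .add("value", SqlTypeName.VARCHAR)
            .add("ds", SqlTypeName.VARCHAR)
            .build();

        // Keep only fields 0 and 2, as a field trimmer would request.
        ImmutableBitSet fieldsUsed = ImmutableBitSet.of(0, 2);
        List<RelDataType> fieldTypes = new ArrayList<>();
        List<String> fieldNames = new ArrayList<>();
        for (int i : fieldsUsed) {
            RelDataTypeField field = scanRowType.getFieldList().get(i);
            fieldTypes.add(field.getType());
            fieldNames.add(field.getName());
        }
        RelDataType trimmedRowType = typeFactory.createStructType(fieldTypes, fieldNames);
        // Typically prints something like: RecordType(BIGINT key, VARCHAR ds)
        System.out.println(trimmedRowType);
    }
}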

Example 24 with RelDataType

use of org.apache.calcite.rel.type.RelDataType in project druid by druid-io.

the class RowSignature method getRelDataType.

/**
   * Returns a Calcite RelDataType corresponding to this row signature.
   *
   * @param typeFactory factory for type construction
   *
   * @return Calcite row type
   */
public RelDataType getRelDataType(final RelDataTypeFactory typeFactory) {
    final RelDataTypeFactory.FieldInfoBuilder builder = typeFactory.builder();
    for (final String columnName : columnNames) {
        final ValueType columnType = getColumnType(columnName);
        final RelDataType type;
        if (Column.TIME_COLUMN_NAME.equals(columnName)) {
            type = typeFactory.createSqlType(SqlTypeName.TIMESTAMP);
        } else {
            switch(columnType) {
                case STRING:
                    // Note that there is no attempt here to handle multi-value in any special way. Maybe one day...
                    type = typeFactory.createTypeWithCharsetAndCollation(typeFactory.createSqlType(SqlTypeName.VARCHAR), Calcites.defaultCharset(), SqlCollation.IMPLICIT);
                    break;
                case LONG:
                    type = typeFactory.createSqlType(SqlTypeName.BIGINT);
                    break;
                case FLOAT:
                    type = typeFactory.createSqlType(SqlTypeName.FLOAT);
                    break;
                case COMPLEX:
                    // Loses information about exactly what kind of complex column this is.
                    type = typeFactory.createSqlType(SqlTypeName.OTHER);
                    break;
                default:
                    throw new ISE("WTF?! valueType[%s] not translatable?", columnType);
            }
        }
        builder.add(columnName, type);
    }
    return builder.build();
}
Also used : ValueType(io.druid.segment.column.ValueType) RelDataTypeFactory(org.apache.calcite.rel.type.RelDataTypeFactory) RelDataType(org.apache.calcite.rel.type.RelDataType) ISE(io.druid.java.util.common.ISE)
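The only non-obvious branch is STRING: Druid strings are surfaced as VARCHAR with an explicit charset and collation so Calcite can compare and collate them consistently. Here is a minimal sketch of that mapping, with StandardCharsets.UTF_8 assumed in place of the Druid helper Calcites.defaultCharset() and a standalone JavaTypeFactoryImpl:

import java.nio.charset.StandardCharsets;

import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.SqlCollation;
import org.apache.calcite.sql.type.SqlTypeName;

/** Minimal sketch: the VARCHAR-with-charset type the STRING branch produces. */
public class StringColumnTypeSketch {
    public static void main(String[] args) {
        RelDataTypeFactory typeFactory = new JavaTypeFactoryImpl();
        // UTF-8 stands in for Calcites.defaultCharset(); IMPLICIT is Calcite's default collation.
        RelDataType stringType = typeFactory.createTypeWithCharsetAndCollation(
            typeFactory.createSqlType(SqlTypeName.VARCHAR),
            StandardCharsets.UTF_8,
            SqlCollation.IMPLICIT);
        // Typically prints something like: VARCHAR CHARACTER SET "UTF-8" ...
        System.out.println(stringType.getFullTypeString());
    }
}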

Example 25 with RelDataType

use of org.apache.calcite.rel.type.RelDataType in project druid by druid-io.

the class DruidQueryBuilder method fullScan.

public static DruidQueryBuilder fullScan(final RowSignature rowSignature, final RelDataTypeFactory relDataTypeFactory) {
    final RelDataType rowType = rowSignature.getRelDataType(relDataTypeFactory);
    final List<String> rowOrder = rowSignature.getRowOrder();
    return new DruidQueryBuilder(null, null, null, null, null, rowType, rowOrder);
}
Also used : RelDataType(org.apache.calcite.rel.type.RelDataType)
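fullScan relies on rowType and rowOrder describing the same columns in the same order; since getRelDataType builds the type in signature order, the order can also be read back from the type itself. A tiny sketch of that invariant, with hypothetical column names and a standalone type factory:

import java.util.Arrays;
import java.util.List;

import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.type.SqlTypeName;

/** Minimal sketch: row order and row type stay aligned by construction. */
public class FullScanRowOrderSketch {
    public static void main(String[] args) {
        RelDataTypeFactory typeFactory = new JavaTypeFactoryImpl();
        // Hypothetical signature columns, added in the same order getRelDataType would use.
        List<String> rowOrder = Arrays.asList("__time", "dim1", "cnt");
        RelDataType rowType = typeFactory.builder()
            .add(rowOrder.get(0), SqlTypeName.TIMESTAMP)
            .add(rowOrder.get(1), SqlTypeName.VARCHAR)
            .add(rowOrder.get(2), SqlTypeName.BIGINT)
            .build();
        // The field names of the row type reproduce the row order passed alongside it.
        System.out.println(rowType.getFieldNames().equals(rowOrder)); // true
    }
}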

Aggregations

RelDataType (org.apache.calcite.rel.type.RelDataType): 88
RexNode (org.apache.calcite.rex.RexNode): 48
RexBuilder (org.apache.calcite.rex.RexBuilder): 28
RelNode (org.apache.calcite.rel.RelNode): 27
RelDataTypeField (org.apache.calcite.rel.type.RelDataTypeField): 25
ArrayList (java.util.ArrayList): 21
RexInputRef (org.apache.calcite.rex.RexInputRef): 16
RelDataTypeFactory (org.apache.calcite.rel.type.RelDataTypeFactory): 14
AggregateCall (org.apache.calcite.rel.core.AggregateCall): 13
ImmutableList (com.google.common.collect.ImmutableList): 9
BigDecimal (java.math.BigDecimal): 8
CalciteSemanticException (org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException): 8
SqlAggFunction (org.apache.calcite.sql.SqlAggFunction): 7
ImmutableBitSet (org.apache.calcite.util.ImmutableBitSet): 7
RelOptCluster (org.apache.calcite.plan.RelOptCluster): 6
RelBuilder (org.apache.calcite.tools.RelBuilder): 6
Prel (org.apache.drill.exec.planner.physical.Prel): 6
ProjectPrel (org.apache.drill.exec.planner.physical.ProjectPrel): 6
Builder (com.google.common.collect.ImmutableList.Builder): 5
LinkedList (java.util.LinkedList): 5