
Example 1 with AsterixTupleFilterFactory

Use of org.apache.asterix.runtime.base.AsterixTupleFilterFactory in the apache/asterixdb project.

From the class MetadataProvider, method createTupleFilterFactory:

private AsterixTupleFilterFactory createTupleFilterFactory(IOperatorSchema[] inputSchemas,
        IVariableTypeEnvironment typeEnv, ILogicalExpression filterExpr, JobGenContext context)
        throws AlgebricksException {
    // No filtering condition.
    if (filterExpr == null) {
        return null;
    }
    IExpressionRuntimeProvider expressionRuntimeProvider = context.getExpressionRuntimeProvider();
    IScalarEvaluatorFactory filterEvalFactory =
            expressionRuntimeProvider.createEvaluatorFactory(filterExpr, typeEnv, inputSchemas, context);
    return new AsterixTupleFilterFactory(filterEvalFactory, context.getBinaryBooleanInspectorFactory());
}
Also used:
IExpressionRuntimeProvider (org.apache.hyracks.algebricks.core.algebra.expressions.IExpressionRuntimeProvider)
AsterixTupleFilterFactory (org.apache.asterix.runtime.base.AsterixTupleFilterFactory)
IScalarEvaluatorFactory (org.apache.hyracks.algebricks.runtime.base.IScalarEvaluatorFactory)
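The method above propagates "no filter" as a null factory: when the plan has no filtering condition, the caller receives null and skips filtering entirely rather than building a pass-through evaluator. A minimal, self-contained sketch of that pattern, using hypothetical stand-in types (Expr, FilterFactory) rather than the real Algebricks interfaces:

```java
// Simplified illustration of the null-propagating factory pattern in
// createTupleFilterFactory. Expr and FilterFactory are stand-ins, not the
// actual Algebricks/AsterixDB interfaces.
public class TupleFilterSketch {
    interface Expr {}
    interface FilterFactory {}

    static final class SimpleFilterFactory implements FilterFactory {}

    static FilterFactory createTupleFilterFactory(Expr filterExpr) {
        // No filtering condition: return null so the caller skips filtering.
        if (filterExpr == null) {
            return null;
        }
        return new SimpleFilterFactory();
    }

    public static void main(String[] args) {
        System.out.println(createTupleFilterFactory(null) == null);
        System.out.println(createTupleFilterFactory(new Expr() {}) != null);
    }
}
```

The design choice is that absence of a filter is represented by a null factory rather than an always-true filter, so downstream operator construction can omit the filtering stage altogether.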

Example 2 with AsterixTupleFilterFactory

Use of org.apache.asterix.runtime.base.AsterixTupleFilterFactory in the apache/asterixdb project.

From the class MetadataProvider, method getIndexInsertOrDeleteOrUpsertRuntime:

private Pair<IOperatorDescriptor, AlgebricksPartitionConstraint> getIndexInsertOrDeleteOrUpsertRuntime(
        IndexOperation indexOp, IDataSourceIndex<String, DataSourceId> dataSourceIndex,
        IOperatorSchema propagatedSchema, IOperatorSchema[] inputSchemas, IVariableTypeEnvironment typeEnv,
        List<LogicalVariable> primaryKeys, List<LogicalVariable> secondaryKeys,
        List<LogicalVariable> additionalNonKeyFields, ILogicalExpression filterExpr,
        RecordDescriptor inputRecordDesc, JobGenContext context, JobSpecification spec, boolean bulkload,
        List<LogicalVariable> prevSecondaryKeys, LogicalVariable prevAdditionalFilteringKey)
        throws AlgebricksException {
    String indexName = dataSourceIndex.getId();
    String dataverseName = dataSourceIndex.getDataSource().getId().getDataverseName();
    String datasetName = dataSourceIndex.getDataSource().getId().getDatasourceName();
    Dataset dataset = MetadataManagerUtil.findExistingDataset(mdTxnCtx, dataverseName, datasetName);
    Index secondaryIndex;
    try {
        secondaryIndex = MetadataManager.INSTANCE.getIndex(mdTxnCtx, dataset.getDataverseName(),
                dataset.getDatasetName(), indexName);
    } catch (MetadataException e) {
        throw new AlgebricksException(e);
    }
    ArrayList<LogicalVariable> prevAdditionalFilteringKeys = null;
    if (indexOp == IndexOperation.UPSERT && prevAdditionalFilteringKey != null) {
        prevAdditionalFilteringKeys = new ArrayList<>();
        prevAdditionalFilteringKeys.add(prevAdditionalFilteringKey);
    }
    AsterixTupleFilterFactory filterFactory = createTupleFilterFactory(inputSchemas, typeEnv, filterExpr, context);
    switch (secondaryIndex.getIndexType()) {
        case BTREE:
            return getBTreeRuntime(dataverseName, datasetName, indexName, propagatedSchema, primaryKeys,
                    secondaryKeys, additionalNonKeyFields, filterFactory, inputRecordDesc, context, spec,
                    indexOp, bulkload, prevSecondaryKeys, prevAdditionalFilteringKeys);
        case RTREE:
            return getRTreeRuntime(dataverseName, datasetName, indexName, propagatedSchema, primaryKeys,
                    secondaryKeys, additionalNonKeyFields, filterFactory, inputRecordDesc, context, spec,
                    indexOp, bulkload, prevSecondaryKeys, prevAdditionalFilteringKeys);
        case SINGLE_PARTITION_WORD_INVIX:
        case SINGLE_PARTITION_NGRAM_INVIX:
        case LENGTH_PARTITIONED_WORD_INVIX:
        case LENGTH_PARTITIONED_NGRAM_INVIX:
            return getInvertedIndexRuntime(dataverseName, datasetName, indexName, propagatedSchema,
                    primaryKeys, secondaryKeys, additionalNonKeyFields, filterFactory, inputRecordDesc,
                    context, spec, indexOp, secondaryIndex.getIndexType(), bulkload, prevSecondaryKeys,
                    prevAdditionalFilteringKeys);
        default:
            throw new AlgebricksException(
                    indexOp.name() + " not implemented for index type: " + secondaryIndex.getIndexType());
    }
}
Also used:
LogicalVariable (org.apache.hyracks.algebricks.core.algebra.base.LogicalVariable)
Dataset (org.apache.asterix.metadata.entities.Dataset)
AsterixTupleFilterFactory (org.apache.asterix.runtime.base.AsterixTupleFilterFactory)
AlgebricksException (org.apache.hyracks.algebricks.common.exceptions.AlgebricksException)
Index (org.apache.asterix.metadata.entities.Index)
IDataSourceIndex (org.apache.hyracks.algebricks.core.algebra.metadata.IDataSourceIndex)
MetadataException (org.apache.asterix.metadata.MetadataException)
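Example 2 dispatches on the secondary index type: BTREE and RTREE each get their own runtime, the four inverted-index variants (word/ngram, single-partition/length-partitioned) share one runtime that receives the concrete type as a parameter, and any other type fails fast. A self-contained sketch of that dispatch, with string return values standing in for the real runtime pairs:

```java
// Simplified illustration of the index-type dispatch in
// getIndexInsertOrDeleteOrUpsertRuntime. Return values are placeholder
// strings, not the actual IOperatorDescriptor/constraint pairs.
public class IndexDispatchSketch {
    enum IndexType {
        BTREE, RTREE,
        SINGLE_PARTITION_WORD_INVIX, SINGLE_PARTITION_NGRAM_INVIX,
        LENGTH_PARTITIONED_WORD_INVIX, LENGTH_PARTITIONED_NGRAM_INVIX
    }

    static String runtimeFor(IndexType type) {
        switch (type) {
            case BTREE:
                return "btree-runtime";
            case RTREE:
                return "rtree-runtime";
            case SINGLE_PARTITION_WORD_INVIX:
            case SINGLE_PARTITION_NGRAM_INVIX:
            case LENGTH_PARTITIONED_WORD_INVIX:
            case LENGTH_PARTITIONED_NGRAM_INVIX:
                // All four inverted-index variants share one runtime; the
                // concrete type is passed along so it can specialize.
                return "inverted-index-runtime:" + type.name();
            default:
                throw new IllegalArgumentException("Not implemented for index type: " + type);
        }
    }

    public static void main(String[] args) {
        System.out.println(runtimeFor(IndexType.BTREE));
        System.out.println(runtimeFor(IndexType.SINGLE_PARTITION_WORD_INVIX));
    }
}
```

The fall-through cases mirror the original: grouping the four inverted-index labels over one return avoids duplicating the call four times.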

Aggregations

AsterixTupleFilterFactory (org.apache.asterix.runtime.base.AsterixTupleFilterFactory) 2
MetadataException (org.apache.asterix.metadata.MetadataException) 1
Dataset (org.apache.asterix.metadata.entities.Dataset) 1
Index (org.apache.asterix.metadata.entities.Index) 1
AlgebricksException (org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) 1
LogicalVariable (org.apache.hyracks.algebricks.core.algebra.base.LogicalVariable) 1
IExpressionRuntimeProvider (org.apache.hyracks.algebricks.core.algebra.expressions.IExpressionRuntimeProvider) 1
IDataSourceIndex (org.apache.hyracks.algebricks.core.algebra.metadata.IDataSourceIndex) 1
IScalarEvaluatorFactory (org.apache.hyracks.algebricks.runtime.base.IScalarEvaluatorFactory) 1