Search in sources:

Example 1 with AlgebricksException

Use of org.apache.hyracks.algebricks.common.exceptions.AlgebricksException in project asterixdb by apache.

The class InvertedIndexPOperator, method buildInvertedIndexRuntime:

public Pair<IOperatorDescriptor, AlgebricksPartitionConstraint> buildInvertedIndexRuntime(MetadataProvider metadataProvider, JobGenContext context, JobSpecification jobSpec, AbstractUnnestMapOperator unnestMap, IOperatorSchema opSchema, boolean retainInput, boolean retainMissing, String datasetName, Dataset dataset, String indexName, ATypeTag searchKeyType, int[] keyFields, SearchModifierType searchModifierType, IAlgebricksConstantValue similarityThreshold, int[] minFilterFieldIndexes, int[] maxFilterFieldIndexes, boolean isFullTextSearchQuery) throws AlgebricksException {
    try {
        IAObject simThresh = ((AsterixConstantValue) similarityThreshold).getObject();
        int numPrimaryKeys = dataset.getPrimaryKeys().size();
        Index secondaryIndex = MetadataManager.INSTANCE.getIndex(metadataProvider.getMetadataTxnContext(), dataset.getDataverseName(), dataset.getDatasetName(), indexName);
        if (secondaryIndex == null) {
            throw new AlgebricksException("Code generation error: no index " + indexName + " for dataset " + datasetName);
        }
        IVariableTypeEnvironment typeEnv = context.getTypeEnvironment(unnestMap);
        RecordDescriptor outputRecDesc = JobGenHelper.mkRecordDescriptor(typeEnv, opSchema, context);
        Pair<IFileSplitProvider, AlgebricksPartitionConstraint> secondarySplitsAndConstraint = metadataProvider.getSplitProviderAndConstraints(dataset, indexName);
        // TODO: Here we assume there is only one search key field.
        int queryField = keyFields[0];
        // Get tokenizer and search modifier factories.
        IInvertedIndexSearchModifierFactory searchModifierFactory = InvertedIndexAccessMethod.getSearchModifierFactory(searchModifierType, simThresh, secondaryIndex);
        IBinaryTokenizerFactory queryTokenizerFactory = InvertedIndexAccessMethod.getBinaryTokenizerFactory(searchModifierType, searchKeyType, secondaryIndex);
        IIndexDataflowHelperFactory dataflowHelperFactory = new IndexDataflowHelperFactory(metadataProvider.getStorageComponentProvider().getStorageManager(), secondarySplitsAndConstraint.first);
        LSMInvertedIndexSearchOperatorDescriptor invIndexSearchOp = new LSMInvertedIndexSearchOperatorDescriptor(jobSpec, outputRecDesc, queryField, dataflowHelperFactory, queryTokenizerFactory, searchModifierFactory, retainInput, retainMissing, context.getMissingWriterFactory(), dataset.getSearchCallbackFactory(metadataProvider.getStorageComponentProvider(), secondaryIndex, ((JobEventListenerFactory) jobSpec.getJobletEventListenerFactory()).getJobId(), IndexOperation.SEARCH, null), minFilterFieldIndexes, maxFilterFieldIndexes, isFullTextSearchQuery, numPrimaryKeys, false);
        return new Pair<>(invIndexSearchOp, secondarySplitsAndConstraint.second);
    } catch (MetadataException e) {
        throw new AlgebricksException(e);
    }
}
Also used: RecordDescriptor(org.apache.hyracks.api.dataflow.value.RecordDescriptor) IFileSplitProvider(org.apache.hyracks.dataflow.std.file.IFileSplitProvider) IAObject(org.apache.asterix.om.base.IAObject) AlgebricksException(org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) Index(org.apache.asterix.metadata.entities.Index) IDataSourceIndex(org.apache.hyracks.algebricks.core.algebra.metadata.IDataSourceIndex) IInvertedIndexSearchModifierFactory(org.apache.hyracks.storage.am.lsm.invertedindex.api.IInvertedIndexSearchModifierFactory) JobEventListenerFactory(org.apache.asterix.runtime.job.listener.JobEventListenerFactory) AlgebricksPartitionConstraint(org.apache.hyracks.algebricks.common.constraints.AlgebricksPartitionConstraint) MetadataException(org.apache.asterix.metadata.MetadataException) LSMInvertedIndexSearchOperatorDescriptor(org.apache.hyracks.storage.am.lsm.invertedindex.dataflow.LSMInvertedIndexSearchOperatorDescriptor) AsterixConstantValue(org.apache.asterix.om.constants.AsterixConstantValue) IIndexDataflowHelperFactory(org.apache.hyracks.storage.am.common.dataflow.IIndexDataflowHelperFactory) IBinaryTokenizerFactory(org.apache.hyracks.storage.am.lsm.invertedindex.tokenizers.IBinaryTokenizerFactory) IndexDataflowHelperFactory(org.apache.hyracks.storage.am.common.dataflow.IndexDataflowHelperFactory) IVariableTypeEnvironment(org.apache.hyracks.algebricks.core.algebra.expressions.IVariableTypeEnvironment) Pair(org.apache.hyracks.algebricks.common.utils.Pair)
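
The recurring pattern in this example is narrow: the checked MetadataException from the metadata layer is caught and rethrown as an AlgebricksException, so the compiler layer deals with a single exception type. Below is a minimal, self-contained sketch of that wrap-and-rethrow shape; the MetadataLookup interface and its findIndex method are hypothetical stand-ins for illustration, not AsterixDB APIs.

import org.apache.hyracks.algebricks.common.exceptions.AlgebricksException;

public final class IndexLookupSketch {

    /** Hypothetical stand-in for a metadata lookup that can fail with a checked exception. */
    interface MetadataLookup {
        Object findIndex(String datasetName, String indexName) throws Exception;
    }

    static Object requireIndex(MetadataLookup lookup, String datasetName, String indexName)
            throws AlgebricksException {
        try {
            Object index = lookup.findIndex(datasetName, indexName);
            if (index == null) {
                // Fail code generation early, in the same style as the example above.
                throw new AlgebricksException(
                        "Code generation error: no index " + indexName + " for dataset " + datasetName);
            }
            return index;
        } catch (AlgebricksException e) {
            // Already the exception type the compiler layer expects.
            throw e;
        } catch (Exception e) {
            // Wrap any other checked failure, mirroring the MetadataException handling above.
            throw new AlgebricksException(e);
        }
    }
}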

Example 2 with AlgebricksException

Use of org.apache.hyracks.algebricks.common.exceptions.AlgebricksException in project asterixdb by apache.

The class QueryLogicalExpressionJobGen, method createSerializableAggregateFunctionFactory:

@Override
public ISerializedAggregateEvaluatorFactory createSerializableAggregateFunctionFactory(AggregateFunctionCallExpression expr, IVariableTypeEnvironment env, IOperatorSchema[] inputSchemas, JobGenContext context) throws AlgebricksException {
    IScalarEvaluatorFactory[] args = codegenArguments(expr, env, inputSchemas, context);
    IFunctionDescriptor fd = getFunctionDescriptor(expr, env, context);
    switch (fd.getFunctionDescriptorTag()) {
        case AGGREGATE:
            {
                if (BuiltinFunctions.isAggregateFunctionSerializable(fd.getIdentifier())) {
                    AggregateFunctionCallExpression serialAggExpr = BuiltinFunctions.makeSerializableAggregateFunctionExpression(fd.getIdentifier(), expr.getArguments());
                    IFunctionDescriptor afdd = getFunctionDescriptor(serialAggExpr, env, context);
                    return afdd.createSerializableAggregateEvaluatorFactory(args);
                } else {
                    throw new AlgebricksException("Trying to create a serializable aggregate from a non-serializable aggregate function descriptor. (fi=" + expr.getFunctionIdentifier() + ")");
                }
            }
        case SERIALAGGREGATE:
            {
                return fd.createSerializableAggregateEvaluatorFactory(args);
            }
        default:
            throw new IllegalStateException("Invalid function descriptor " + fd.getFunctionDescriptorTag() + " expected " + FunctionDescriptorTag.SERIALAGGREGATE + " or " + FunctionDescriptorTag.AGGREGATE);
    }
}
Also used: AggregateFunctionCallExpression(org.apache.hyracks.algebricks.core.algebra.expressions.AggregateFunctionCallExpression) IFunctionDescriptor(org.apache.asterix.om.functions.IFunctionDescriptor) AlgebricksException(org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) IScalarEvaluatorFactory(org.apache.hyracks.algebricks.runtime.base.IScalarEvaluatorFactory)
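
The example dispatches on the function descriptor tag: a plain AGGREGATE descriptor must be convertible to a serializable form, a SERIALAGGREGATE descriptor is used directly, and anything else indicates an internal error rather than a user error. Here is a minimal sketch of that dispatch; the Tag enum and Descriptor interface are hypothetical stand-ins for FunctionDescriptorTag and IFunctionDescriptor, not the real APIs.

import org.apache.hyracks.algebricks.common.exceptions.AlgebricksException;

public final class AggregateDispatchSketch {

    /** Hypothetical stand-in for FunctionDescriptorTag. */
    enum Tag { AGGREGATE, SERIALAGGREGATE, SCALAR }

    /** Hypothetical stand-in for IFunctionDescriptor. */
    interface Descriptor {
        Tag tag();
        boolean isSerializable();
        Runnable createSerializableFactory();
    }

    static Runnable createSerializable(Descriptor fd) throws AlgebricksException {
        switch (fd.tag()) {
            case AGGREGATE:
                if (!fd.isSerializable()) {
                    // A user-visible compilation problem: report it as AlgebricksException.
                    throw new AlgebricksException(
                            "Trying to create a serializable aggregate from a non-serializable descriptor.");
                }
                return fd.createSerializableFactory();
            case SERIALAGGREGATE:
                return fd.createSerializableFactory();
            default:
                // An internal invariant violation, not a user error.
                throw new IllegalStateException("Invalid function descriptor " + fd.tag());
        }
    }
}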

Example 3 with AlgebricksException

Use of org.apache.hyracks.algebricks.common.exceptions.AlgebricksException in project asterixdb by apache.

The class QueryTranslator, method handleNodegroupDropStatement:

protected void handleNodegroupDropStatement(MetadataProvider metadataProvider, Statement stmt) throws Exception {
    NodeGroupDropStatement stmtDelete = (NodeGroupDropStatement) stmt;
    String nodegroupName = stmtDelete.getNodeGroupName().getValue();
    MetadataTransactionContext mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
    metadataProvider.setMetadataTxnContext(mdTxnCtx);
    MetadataLockManager.INSTANCE.acquireNodeGroupWriteLock(metadataProvider.getLocks(), nodegroupName);
    try {
        NodeGroup ng = MetadataManager.INSTANCE.getNodegroup(mdTxnCtx, nodegroupName);
        if (ng == null) {
            if (!stmtDelete.getIfExists()) {
                throw new AlgebricksException("There is no nodegroup with this name " + nodegroupName + ".");
            }
        } else {
            MetadataManager.INSTANCE.dropNodegroup(mdTxnCtx, nodegroupName, false);
        }
        MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
    } catch (Exception e) {
        abort(e, e, mdTxnCtx);
        throw e;
    } finally {
        metadataProvider.getLocks().unlock();
    }
}
Also used: AlgebricksException(org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) MetadataTransactionContext(org.apache.asterix.metadata.MetadataTransactionContext) NodeGroupDropStatement(org.apache.asterix.lang.common.statement.NodeGroupDropStatement) ACIDException(org.apache.asterix.common.exceptions.ACIDException) MetadataException(org.apache.asterix.metadata.MetadataException) HyracksDataException(org.apache.hyracks.api.exceptions.HyracksDataException) CompilationException(org.apache.asterix.common.exceptions.CompilationException) IOException(java.io.IOException) RemoteException(java.rmi.RemoteException) AsterixException(org.apache.asterix.common.exceptions.AsterixException) NodeGroup(org.apache.asterix.metadata.entities.NodeGroup)
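
The example illustrates the usual IF EXISTS semantics: a missing entity is only an AlgebricksException when the statement did not specify IF EXISTS. A minimal sketch of that decision follows, with the NameResolver interface as a hypothetical stand-in for the MetadataManager calls.

import org.apache.hyracks.algebricks.common.exceptions.AlgebricksException;

public final class DropIfExistsSketch {

    /** Hypothetical stand-in for the MetadataManager lookup and drop calls. */
    interface NameResolver {
        boolean exists(String name);
        void drop(String name);
    }

    static void dropNodegroup(NameResolver resolver, String name, boolean ifExists)
            throws AlgebricksException {
        if (!resolver.exists(name)) {
            if (!ifExists) {
                // Without IF EXISTS, a missing nodegroup is a compilation-level error.
                throw new AlgebricksException("There is no nodegroup with name " + name + ".");
            }
            // With IF EXISTS, a missing nodegroup is silently ignored.
            return;
        }
        resolver.drop(name);
    }
}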

Example 4 with AlgebricksException

Use of org.apache.hyracks.algebricks.common.exceptions.AlgebricksException in project asterixdb by apache.

The class QueryTranslator, method handleExternalDatasetRefreshStatement:

protected void handleExternalDatasetRefreshStatement(MetadataProvider metadataProvider, Statement stmt, IHyracksClientConnection hcc) throws Exception {
    RefreshExternalDatasetStatement stmtRefresh = (RefreshExternalDatasetStatement) stmt;
    String dataverseName = getActiveDataverse(stmtRefresh.getDataverseName());
    String datasetName = stmtRefresh.getDatasetName().getValue();
    TransactionState transactionState = TransactionState.COMMIT;
    MetadataTransactionContext mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
    boolean bActiveTxn = true;
    metadataProvider.setMetadataTxnContext(mdTxnCtx);
    JobSpecification spec = null;
    Dataset ds = null;
    List<ExternalFile> metadataFiles = null;
    List<ExternalFile> deletedFiles = null;
    List<ExternalFile> addedFiles = null;
    List<ExternalFile> appendedFiles = null;
    List<Index> indexes = null;
    Dataset transactionDataset = null;
    boolean lockAcquired = false;
    boolean success = false;
    MetadataLockManager.INSTANCE.refreshDatasetBegin(metadataProvider.getLocks(), dataverseName, dataverseName + "." + datasetName);
    try {
        ds = metadataProvider.findDataset(dataverseName, datasetName);
        // Does the dataset exist?
        if (ds == null) {
            throw new AlgebricksException("There is no dataset with name " + datasetName + " in dataverse " + dataverseName);
        }
        // Is the dataset external?
        if (ds.getDatasetType() != DatasetType.EXTERNAL) {
            throw new AlgebricksException("Dataset " + datasetName + " in dataverse " + dataverseName + " is not an external dataset");
        }
        // Does the dataset have any indexes?
        indexes = MetadataManager.INSTANCE.getDatasetIndexes(mdTxnCtx, dataverseName, datasetName);
        if (indexes.isEmpty()) {
            throw new AlgebricksException("External dataset " + datasetName + " in dataverse " + dataverseName + " doesn't have any index");
        }
        // Record transaction time
        Date txnTime = new Date();
        // Acquire the refresh lock here
        ExternalDatasetsRegistry.INSTANCE.refreshBegin(ds);
        lockAcquired = true;
        // Get internal files
        metadataFiles = MetadataManager.INSTANCE.getDatasetExternalFiles(mdTxnCtx, ds);
        deletedFiles = new ArrayList<>();
        addedFiles = new ArrayList<>();
        appendedFiles = new ArrayList<>();
        // Now we compare the snapshot with the external file system
        if (ExternalIndexingOperations.isDatasetUptodate(ds, metadataFiles, addedFiles, deletedFiles, appendedFiles)) {
            ((ExternalDatasetDetails) ds.getDatasetDetails()).setRefreshTimestamp(txnTime);
            MetadataManager.INSTANCE.updateDataset(mdTxnCtx, ds);
            MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            // latch will be released in the finally clause
            return;
        }
        // At this point we know data has changed in the external file system; record the
        // transaction in the metadata and start the refresh
        transactionDataset = ExternalIndexingOperations.createTransactionDataset(ds);
        /*
         * Remove the old dataset record and replace it with a new one
         */
        MetadataManager.INSTANCE.updateDataset(mdTxnCtx, transactionDataset);
        // Add delta files to the metadata
        for (ExternalFile file : addedFiles) {
            MetadataManager.INSTANCE.addExternalFile(mdTxnCtx, file);
        }
        for (ExternalFile file : appendedFiles) {
            MetadataManager.INSTANCE.addExternalFile(mdTxnCtx, file);
        }
        for (ExternalFile file : deletedFiles) {
            MetadataManager.INSTANCE.addExternalFile(mdTxnCtx, file);
        }
        // Create the files index update job
        spec = ExternalIndexingOperations.buildFilesIndexUpdateOp(ds, metadataFiles, addedFiles, appendedFiles, metadataProvider);
        MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
        bActiveTxn = false;
        transactionState = TransactionState.BEGIN;
        // run the files update job
        JobUtils.runJob(hcc, spec, true);
        for (Index index : indexes) {
            if (!ExternalIndexingOperations.isFileIndex(index)) {
                spec = ExternalIndexingOperations.buildIndexUpdateOp(ds, index, metadataFiles, addedFiles, appendedFiles, metadataProvider);
                // run the files update job
                JobUtils.runJob(hcc, spec, true);
            }
        }
        // all index updates have completed successfully; record the transaction state
        spec = ExternalIndexingOperations.buildCommitJob(ds, indexes, metadataProvider);
        // Acquire the write latch again -> start a transaction and record the decision to commit
        mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
        metadataProvider.setMetadataTxnContext(mdTxnCtx);
        bActiveTxn = true;
        ((ExternalDatasetDetails) transactionDataset.getDatasetDetails()).setState(TransactionState.READY_TO_COMMIT);
        ((ExternalDatasetDetails) transactionDataset.getDatasetDetails()).setRefreshTimestamp(txnTime);
        MetadataManager.INSTANCE.updateDataset(mdTxnCtx, transactionDataset);
        MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
        bActiveTxn = false;
        transactionState = TransactionState.READY_TO_COMMIT;
        // We don't release the latch since this job is expected to be quick
        JobUtils.runJob(hcc, spec, true);
        // Start a new metadata transaction to record the final state of the transaction
        mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
        metadataProvider.setMetadataTxnContext(mdTxnCtx);
        bActiveTxn = true;
        for (ExternalFile file : metadataFiles) {
            if (file.getPendingOp() == ExternalFilePendingOp.DROP_OP) {
                MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
            } else if (file.getPendingOp() == ExternalFilePendingOp.NO_OP) {
                Iterator<ExternalFile> iterator = appendedFiles.iterator();
                while (iterator.hasNext()) {
                    ExternalFile appendedFile = iterator.next();
                    if (file.getFileName().equals(appendedFile.getFileName())) {
                        // delete existing file
                        MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
                        // delete existing appended file
                        MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, appendedFile);
                        // add the original file with appended information
                        appendedFile.setFileNumber(file.getFileNumber());
                        appendedFile.setPendingOp(ExternalFilePendingOp.NO_OP);
                        MetadataManager.INSTANCE.addExternalFile(mdTxnCtx, appendedFile);
                        iterator.remove();
                    }
                }
            }
        }
        // remove the deleted files delta
        for (ExternalFile file : deletedFiles) {
            MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
        }
        // insert new files
        for (ExternalFile file : addedFiles) {
            MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
            file.setPendingOp(ExternalFilePendingOp.NO_OP);
            MetadataManager.INSTANCE.addExternalFile(mdTxnCtx, file);
        }
        // mark the transaction as complete
        ((ExternalDatasetDetails) transactionDataset.getDatasetDetails()).setState(TransactionState.COMMIT);
        MetadataManager.INSTANCE.updateDataset(mdTxnCtx, transactionDataset);
        // commit metadata transaction
        MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
        success = true;
    } catch (Exception e) {
        if (bActiveTxn) {
            abort(e, e, mdTxnCtx);
        }
        if (transactionState == TransactionState.READY_TO_COMMIT) {
            throw new IllegalStateException("System is inconsistent state: commit of (" + dataverseName + "." + datasetName + ") refresh couldn't carry out the commit phase", e);
        }
        if (transactionState == TransactionState.COMMIT) {
            // Nothing to do; everything should be clean
            throw e;
        }
        if (transactionState == TransactionState.BEGIN) {
            // The transaction failed; clean the NCs by removing the transaction components
            mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
            bActiveTxn = true;
            metadataProvider.setMetadataTxnContext(mdTxnCtx);
            spec = ExternalIndexingOperations.buildAbortOp(ds, indexes, metadataProvider);
            MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            bActiveTxn = false;
            try {
                JobUtils.runJob(hcc, spec, true);
            } catch (Exception e2) {
                // This should never happen; throw an IllegalStateException
                e.addSuppressed(e2);
                throw new IllegalStateException("System is in inconsistent state. Failed to abort refresh", e);
            }
            // return the state of the dataset to committed
            try {
                mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
                for (ExternalFile file : deletedFiles) {
                    MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
                }
                for (ExternalFile file : addedFiles) {
                    MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
                }
                for (ExternalFile file : appendedFiles) {
                    MetadataManager.INSTANCE.dropExternalFile(mdTxnCtx, file);
                }
                MetadataManager.INSTANCE.updateDataset(mdTxnCtx, ds);
                // commit metadata transaction
                MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            } catch (Exception e2) {
                abort(e, e2, mdTxnCtx);
                e.addSuppressed(e2);
                throw new IllegalStateException("System is in inconsistent state. Failed to drop delta files", e);
            }
        }
    } finally {
        if (lockAcquired) {
            ExternalDatasetsRegistry.INSTANCE.refreshEnd(ds, success);
        }
        metadataProvider.getLocks().unlock();
    }
}
Also used: TransactionState(org.apache.asterix.common.config.DatasetConfig.TransactionState) IHyracksDataset(org.apache.hyracks.api.dataset.IHyracksDataset) IDataset(org.apache.asterix.common.metadata.IDataset) Dataset(org.apache.asterix.metadata.entities.Dataset) AlgebricksException(org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) MetadataTransactionContext(org.apache.asterix.metadata.MetadataTransactionContext) Index(org.apache.asterix.metadata.entities.Index) ExternalFile(org.apache.asterix.external.indexing.ExternalFile) Date(java.util.Date) ACIDException(org.apache.asterix.common.exceptions.ACIDException) MetadataException(org.apache.asterix.metadata.MetadataException) HyracksDataException(org.apache.hyracks.api.exceptions.HyracksDataException) CompilationException(org.apache.asterix.common.exceptions.CompilationException) IOException(java.io.IOException) RemoteException(java.rmi.RemoteException) AsterixException(org.apache.asterix.common.exceptions.AsterixException) RefreshExternalDatasetStatement(org.apache.asterix.lang.common.statement.RefreshExternalDatasetStatement) ExternalDatasetDetails(org.apache.asterix.metadata.entities.ExternalDatasetDetails) Iterator(java.util.Iterator) JobSpecification(org.apache.hyracks.api.job.JobSpecification)
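
Underneath the file bookkeeping, the example follows a strict metadata-transaction discipline: commit the metadata transaction before running a (potentially long) Hyracks job, track whether a transaction is still active, and abort only an active transaction when an exception escapes. The sketch below reduces that discipline to its core; Txn and TxnManager are hypothetical stand-ins for MetadataTransactionContext and MetadataManager, not the real APIs.

public final class TxnDisciplineSketch {

    /** Hypothetical stand-in for MetadataTransactionContext. */
    interface Txn {
    }

    /** Hypothetical stand-in for the MetadataManager transaction calls. */
    interface TxnManager {
        Txn begin();
        void commit(Txn txn) throws Exception;
        void abort(Txn txn);
    }

    static void runWithTxn(TxnManager mgr, Runnable job) throws Exception {
        Txn txn = mgr.begin();
        boolean active = true; // plays the role of bActiveTxn in the example
        try {
            // ... metadata updates against txn would go here ...
            mgr.commit(txn);
            active = false; // commit before the long-running Hyracks job, as the example does
            job.run();
        } catch (Exception e) {
            if (active) {
                mgr.abort(txn); // mirrors abort(e, e, mdTxnCtx): only abort a live transaction
            }
            throw e;
        }
    }
}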

Example 5 with AlgebricksException

Use of org.apache.hyracks.algebricks.common.exceptions.AlgebricksException in project asterixdb by apache.

The class QueryTranslator, method handleIndexDropStatement:

protected void handleIndexDropStatement(MetadataProvider metadataProvider, Statement stmt, IHyracksClientConnection hcc) throws Exception {
    IndexDropStatement stmtIndexDrop = (IndexDropStatement) stmt;
    String datasetName = stmtIndexDrop.getDatasetName().getValue();
    String dataverseName = getActiveDataverse(stmtIndexDrop.getDataverseName());
    ProgressState progress = ProgressState.NO_PROGRESS;
    MetadataTransactionContext mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
    boolean bActiveTxn = true;
    metadataProvider.setMetadataTxnContext(mdTxnCtx);
    List<JobSpecification> jobsToExecute = new ArrayList<>();
    MetadataLockManager.INSTANCE.dropIndexBegin(metadataProvider.getLocks(), dataverseName, dataverseName + "." + datasetName);
    String indexName = null;
    // For external index
    boolean dropFilesIndex = false;
    try {
        Dataset ds = metadataProvider.findDataset(dataverseName, datasetName);
        if (ds == null) {
            throw new AlgebricksException("There is no dataset with this name " + datasetName + " in dataverse " + dataverseName);
        }
        ActiveLifecycleListener activeListener = (ActiveLifecycleListener) appCtx.getActiveLifecycleListener();
        ActiveJobNotificationHandler activeEventHandler = activeListener.getNotificationHandler();
        IActiveEntityEventsListener[] listeners = activeEventHandler.getEventListeners();
        StringBuilder builder = null;
        for (IActiveEntityEventsListener listener : listeners) {
            if (listener.isEntityUsingDataset(ds)) {
                if (builder == null) {
                    builder = new StringBuilder();
                }
                builder.append(new FeedConnectionId(listener.getEntityId(), datasetName) + "\n");
            }
        }
        if (builder != null) {
            throw new CompilationException("Dataset" + datasetName + " is currently being fed into by the following active entities: " + builder.toString());
        }
        if (ds.getDatasetType() == DatasetType.INTERNAL) {
            indexName = stmtIndexDrop.getIndexName().getValue();
            Index index = MetadataManager.INSTANCE.getIndex(mdTxnCtx, dataverseName, datasetName, indexName);
            if (index == null) {
                if (stmtIndexDrop.getIfExists()) {
                    MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
                    return;
                } else {
                    throw new AlgebricksException("There is no index with this name " + indexName + ".");
                }
            }
            // #. prepare a job to drop the index in NC.
            jobsToExecute.add(IndexUtil.buildDropIndexJobSpec(index, metadataProvider, ds));
            // #. mark PendingDropOp on the existing index
            MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, indexName);
            MetadataManager.INSTANCE.addIndex(mdTxnCtx, new Index(dataverseName, datasetName, indexName, index.getIndexType(), index.getKeyFieldNames(), index.getKeyFieldSourceIndicators(), index.getKeyFieldTypes(), index.isEnforcingKeyFileds(), index.isPrimaryIndex(), MetadataUtil.PENDING_DROP_OP));
            // #. commit the existing transaction before calling runJob.
            MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            bActiveTxn = false;
            progress = ProgressState.ADDED_PENDINGOP_RECORD_TO_METADATA;
            for (JobSpecification jobSpec : jobsToExecute) {
                JobUtils.runJob(hcc, jobSpec, true);
            }
            // #. begin a new transaction
            mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
            bActiveTxn = true;
            metadataProvider.setMetadataTxnContext(mdTxnCtx);
            // #. finally, delete the existing index
            MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, indexName);
        } else {
            // External dataset
            indexName = stmtIndexDrop.getIndexName().getValue();
            Index index = MetadataManager.INSTANCE.getIndex(mdTxnCtx, dataverseName, datasetName, indexName);
            if (index == null) {
                if (stmtIndexDrop.getIfExists()) {
                    MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
                    return;
                } else {
                    throw new AlgebricksException("There is no index with this name " + indexName + ".");
                }
            } else if (ExternalIndexingOperations.isFileIndex(index)) {
                throw new AlgebricksException("Dropping a dataset's files index is not allowed.");
            }
            // #. prepare a job to drop the index in NC.
            jobsToExecute.add(IndexUtil.buildDropIndexJobSpec(index, metadataProvider, ds));
            List<Index> datasetIndexes = MetadataManager.INSTANCE.getDatasetIndexes(mdTxnCtx, dataverseName, datasetName);
            if (datasetIndexes.size() == 2) {
                dropFilesIndex = true;
                // only one index remains besides the files index, so we need to delete both indexes
                for (Index externalIndex : datasetIndexes) {
                    if (ExternalIndexingOperations.isFileIndex(externalIndex)) {
                        jobsToExecute.add(ExternalIndexingOperations.buildDropFilesIndexJobSpec(metadataProvider, ds));
                        // #. mark PendingDropOp on the existing files index
                        MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, externalIndex.getIndexName());
                        MetadataManager.INSTANCE.addIndex(mdTxnCtx, new Index(dataverseName, datasetName, externalIndex.getIndexName(), externalIndex.getIndexType(), externalIndex.getKeyFieldNames(), externalIndex.getKeyFieldSourceIndicators(), externalIndex.getKeyFieldTypes(), externalIndex.isEnforcingKeyFileds(), externalIndex.isPrimaryIndex(), MetadataUtil.PENDING_DROP_OP));
                    }
                }
            }
            // #. mark PendingDropOp on the existing index
            MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, indexName);
            MetadataManager.INSTANCE.addIndex(mdTxnCtx, new Index(dataverseName, datasetName, indexName, index.getIndexType(), index.getKeyFieldNames(), index.getKeyFieldSourceIndicators(), index.getKeyFieldTypes(), index.isEnforcingKeyFileds(), index.isPrimaryIndex(), MetadataUtil.PENDING_DROP_OP));
            // #. commit the existing transaction before calling runJob.
            MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            bActiveTxn = false;
            progress = ProgressState.ADDED_PENDINGOP_RECORD_TO_METADATA;
            for (JobSpecification jobSpec : jobsToExecute) {
                JobUtils.runJob(hcc, jobSpec, true);
            }
            // #. begin a new transaction
            mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
            bActiveTxn = true;
            metadataProvider.setMetadataTxnContext(mdTxnCtx);
            // #. finally, delete the existing index
            MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, indexName);
            if (dropFilesIndex) {
                // delete the files index too
                MetadataManager.INSTANCE.dropIndex(mdTxnCtx, dataverseName, datasetName, IndexingConstants.getFilesIndexName(datasetName));
                MetadataManager.INSTANCE.dropDatasetExternalFiles(mdTxnCtx, ds);
                ExternalDatasetsRegistry.INSTANCE.removeDatasetInfo(ds);
            }
        }
        MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
    } catch (Exception e) {
        if (bActiveTxn) {
            abort(e, e, mdTxnCtx);
        }
        if (progress == ProgressState.ADDED_PENDINGOP_RECORD_TO_METADATA) {
            // remove all the indexes on the NCs
            try {
                for (JobSpecification jobSpec : jobsToExecute) {
                    JobUtils.runJob(hcc, jobSpec, true);
                }
            } catch (Exception e2) {
                // do not throw the exception since the metadata still needs to be compensated.
                e.addSuppressed(e2);
            }
            // remove the record from the metadata.
            mdTxnCtx = MetadataManager.INSTANCE.beginTransaction();
            metadataProvider.setMetadataTxnContext(mdTxnCtx);
            try {
                MetadataManager.INSTANCE.dropIndex(metadataProvider.getMetadataTxnContext(), dataverseName, datasetName, indexName);
                if (dropFilesIndex) {
                    MetadataManager.INSTANCE.dropIndex(metadataProvider.getMetadataTxnContext(), dataverseName, datasetName, IndexingConstants.getFilesIndexName(datasetName));
                }
                MetadataManager.INSTANCE.commitTransaction(mdTxnCtx);
            } catch (Exception e2) {
                e.addSuppressed(e2);
                abort(e, e2, mdTxnCtx);
                throw new IllegalStateException("System is inconsistent state: pending index(" + dataverseName + "." + datasetName + "." + indexName + ") couldn't be removed from the metadata", e);
            }
        }
        throw e;
    } finally {
        metadataProvider.getLocks().unlock();
        ExternalDatasetsRegistry.INSTANCE.releaseAcquiredLocks(metadataProvider);
    }
}
Also used: ProgressState(org.apache.asterix.common.utils.JobUtils.ProgressState) CompilationException(org.apache.asterix.common.exceptions.CompilationException) IHyracksDataset(org.apache.hyracks.api.dataset.IHyracksDataset) IDataset(org.apache.asterix.common.metadata.IDataset) Dataset(org.apache.asterix.metadata.entities.Dataset) ArrayList(java.util.ArrayList) AlgebricksException(org.apache.hyracks.algebricks.common.exceptions.AlgebricksException) MetadataTransactionContext(org.apache.asterix.metadata.MetadataTransactionContext) Index(org.apache.asterix.metadata.entities.Index) ACIDException(org.apache.asterix.common.exceptions.ACIDException) MetadataException(org.apache.asterix.metadata.MetadataException) HyracksDataException(org.apache.hyracks.api.exceptions.HyracksDataException) IOException(java.io.IOException) RemoteException(java.rmi.RemoteException) AsterixException(org.apache.asterix.common.exceptions.AsterixException) IActiveEntityEventsListener(org.apache.asterix.active.IActiveEntityEventsListener) ActiveLifecycleListener(org.apache.asterix.active.ActiveLifecycleListener) IndexDropStatement(org.apache.asterix.lang.common.statement.IndexDropStatement) FeedConnectionId(org.apache.asterix.external.feed.management.FeedConnectionId) JobSpecification(org.apache.hyracks.api.job.JobSpecification) ActiveJobNotificationHandler(org.apache.asterix.active.ActiveJobNotificationHandler)
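
The drop path uses a compensation scheme driven by ProgressState: once a PENDING_DROP_OP record has been written and the drop jobs have started, any failure must be compensated by removing the pending record in a fresh metadata transaction, otherwise the system is left inconsistent. A minimal sketch of that scheme follows; the Progress enum and Metadata interface are hypothetical stand-ins for ProgressState and the MetadataManager calls.

public final class CompensationSketch {

    /** Hypothetical stand-in for ProgressState. */
    enum Progress { NO_PROGRESS, ADDED_PENDING_OP }

    /** Hypothetical stand-in for the MetadataManager index record calls. */
    interface Metadata {
        void markPendingDrop(String indexName) throws Exception;
        void removeIndexRecord(String indexName) throws Exception;
    }

    static void dropIndex(Metadata md, Runnable physicalDrop, String indexName) throws Exception {
        Progress progress = Progress.NO_PROGRESS;
        try {
            md.markPendingDrop(indexName); // mark PENDING_DROP_OP in the metadata
            progress = Progress.ADDED_PENDING_OP;
            physicalDrop.run(); // run the drop job(s) on the NCs
            md.removeIndexRecord(indexName); // finally remove the metadata record
        } catch (Exception e) {
            if (progress == Progress.ADDED_PENDING_OP) {
                try {
                    // Compensate: the pending record must not survive the failure.
                    md.removeIndexRecord(indexName);
                } catch (Exception e2) {
                    e.addSuppressed(e2);
                    throw new IllegalStateException(
                            "System is in an inconsistent state: pending index could not be removed", e);
                }
            }
            throw e;
        }
    }
}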

Aggregations

AlgebricksException (org.apache.hyracks.algebricks.common.exceptions.AlgebricksException): 134
MetadataException (org.apache.asterix.metadata.MetadataException): 42
ArrayList (java.util.ArrayList): 39
HyracksDataException (org.apache.hyracks.api.exceptions.HyracksDataException): 39
ILogicalExpression (org.apache.hyracks.algebricks.core.algebra.base.ILogicalExpression): 38
LogicalVariable (org.apache.hyracks.algebricks.core.algebra.base.LogicalVariable): 37
AsterixException (org.apache.asterix.common.exceptions.AsterixException): 36
IOException (java.io.IOException): 35
CompilationException (org.apache.asterix.common.exceptions.CompilationException): 33
Dataset (org.apache.asterix.metadata.entities.Dataset): 31
IAType (org.apache.asterix.om.types.IAType): 31
Mutable (org.apache.commons.lang3.mutable.Mutable): 30
ILogicalOperator (org.apache.hyracks.algebricks.core.algebra.base.ILogicalOperator): 30
Pair (org.apache.hyracks.algebricks.common.utils.Pair): 28
Index (org.apache.asterix.metadata.entities.Index): 26
RemoteException (java.rmi.RemoteException): 25
ACIDException (org.apache.asterix.common.exceptions.ACIDException): 24
AlgebricksPartitionConstraint (org.apache.hyracks.algebricks.common.constraints.AlgebricksPartitionConstraint): 23
MetadataTransactionContext (org.apache.asterix.metadata.MetadataTransactionContext): 22
AString (org.apache.asterix.om.base.AString): 21