Search in sources:

Example 6 with ExecutionSetupException

Use of org.apache.drill.common.exceptions.ExecutionSetupException in project drill by apache.

The class StoragePluginRegistryImpl, method create.

private StoragePlugin create(String name, StoragePluginConfig pluginConfig) throws ExecutionSetupException {
    StoragePlugin plugin = null;
    Constructor<? extends StoragePlugin> c = availablePlugins.get(pluginConfig.getClass());
    if (c == null) {
        throw new ExecutionSetupException(String.format("Failure finding StoragePlugin constructor for config %s", pluginConfig));
    }
    try {
        plugin = c.newInstance(pluginConfig, context, name);
        plugin.start();
        return plugin;
    } catch (InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException | IOException e) {
        // InvocationTargetException wraps whatever the plugin constructor itself threw; unwrap it.
        Throwable t = e instanceof InvocationTargetException ? ((InvocationTargetException) e).getTargetException() : e;
        if (t instanceof ExecutionSetupException) {
            throw ((ExecutionSetupException) t);
        }
        throw new ExecutionSetupException(String.format("Failure setting up new storage plugin configuration for config %s", pluginConfig), t);
    }
}
Also used : ExecutionSetupException(org.apache.drill.common.exceptions.ExecutionSetupException) IOException(java.io.IOException) InfoSchemaStoragePlugin(org.apache.drill.exec.store.ischema.InfoSchemaStoragePlugin) InvocationTargetException(java.lang.reflect.InvocationTargetException)
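
The detail worth isolating here is the unwrap-and-rethrow idiom: InvocationTargetException hides whatever the plugin constructor threw, so the code extracts the target exception and rethrows it directly when it is already an ExecutionSetupException. Below is a minimal, self-contained sketch of the same idiom, where SetupException and ReflectiveFactory are hypothetical stand-ins rather than Drill classes:

import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

// Hypothetical stand-in for ExecutionSetupException, for illustration only.
class SetupException extends Exception {
    SetupException(String message, Throwable cause) {
        super(message, cause);
    }
}

final class ReflectiveFactory {
    // Instantiate through a cached constructor, surfacing the real cause of reflective failures.
    static <T> T instantiate(Constructor<? extends T> ctor, Object... args) throws SetupException {
        try {
            return ctor.newInstance(args);
        } catch (InstantiationException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            // InvocationTargetException wraps the exception thrown inside the constructor.
            Throwable cause = e instanceof InvocationTargetException
                ? ((InvocationTargetException) e).getTargetException() : e;
            if (cause instanceof SetupException) {
                // Preserve the original typed failure instead of double-wrapping it.
                throw (SetupException) cause;
            }
            throw new SetupException("Failed to construct " + ctor.getDeclaringClass().getName(), cause);
        }
    }
}

Callers then see either the original typed failure or a new one whose cause chain starts at the constructor's real exception, never a bare reflection wrapper.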

Example 7 with ExecutionSetupException

Use of org.apache.drill.common.exceptions.ExecutionSetupException in project drill by apache.

The class Foreman, method getQueryWorkUnit.

private QueryWorkUnit getQueryWorkUnit(final PhysicalPlan plan) throws ExecutionSetupException {
    final PhysicalOperator rootOperator = plan.getSortedOperators(false).iterator().next();
    final Fragment rootFragment = rootOperator.accept(MakeFragmentsVisitor.INSTANCE, null);
    final SimpleParallelizer parallelizer = new SimpleParallelizer(queryContext);
    final QueryWorkUnit queryWorkUnit = parallelizer.getFragments(queryContext.getOptions().getOptionList(), queryContext.getCurrentEndpoint(), queryId, queryContext.getActiveEndpoints(), drillbitContext.getPlanReader(), rootFragment, initiatingClient.getSession(), queryContext.getQueryContextInfo());
    if (logger.isTraceEnabled()) {
        final StringBuilder sb = new StringBuilder();
        sb.append("PlanFragments for query ");
        sb.append(queryId);
        sb.append('\n');
        final List<PlanFragment> planFragments = queryWorkUnit.getFragments();
        final int fragmentCount = planFragments.size();
        int fragmentIndex = 0;
        for (final PlanFragment planFragment : planFragments) {
            final FragmentHandle fragmentHandle = planFragment.getHandle();
            sb.append("PlanFragment(");
            sb.append(++fragmentIndex);
            sb.append('/');
            sb.append(fragmentCount);
            sb.append(") major_fragment_id ");
            sb.append(fragmentHandle.getMajorFragmentId());
            sb.append(" minor_fragment_id ");
            sb.append(fragmentHandle.getMinorFragmentId());
            sb.append('\n');
            final DrillbitEndpoint endpointAssignment = planFragment.getAssignment();
            sb.append("  DrillbitEndpoint address ");
            sb.append(endpointAssignment.getAddress());
            sb.append('\n');
            String jsonString = "<<malformed JSON>>";
            sb.append("  fragment_json: ");
            final ObjectMapper objectMapper = new ObjectMapper();
            try {
                final Object json = objectMapper.readValue(planFragment.getFragmentJson(), Object.class);
                jsonString = objectMapper.defaultPrettyPrintingWriter().writeValueAsString(json);
            } catch (final Exception e) {
                // we've already set jsonString to a fallback value
            }
            sb.append(jsonString);
        }
        // Log once, after the loop, so each fragment appears a single time in the trace output.
        logger.trace(sb.toString());
    }
    return queryWorkUnit;
}
Also used : QueryWorkUnit(org.apache.drill.exec.work.QueryWorkUnit) SimpleParallelizer(org.apache.drill.exec.planner.fragment.SimpleParallelizer) FragmentHandle(org.apache.drill.exec.proto.ExecProtos.FragmentHandle) PlanFragment(org.apache.drill.exec.proto.BitControl.PlanFragment) Fragment(org.apache.drill.exec.planner.fragment.Fragment) DrillbitEndpoint(org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint) UserException(org.apache.drill.common.exceptions.UserException) RpcException(org.apache.drill.exec.rpc.RpcException) InvalidProtocolBufferException(com.google.protobuf.InvalidProtocolBufferException) OptimizerException(org.apache.drill.exec.exception.OptimizerException) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException) ExecutionSetupException(org.apache.drill.common.exceptions.ExecutionSetupException) IOException(java.io.IOException) PhysicalOperator(org.apache.drill.exec.physical.base.PhysicalOperator) ObjectMapper(org.codehaus.jackson.map.ObjectMapper)
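
Two details in getQueryWorkUnit are worth noting: the logger.isTraceEnabled() guard skips building the potentially large description string entirely when tracing is off, and the JSON pretty-printing falls back to a marker string instead of failing the query. Below is a sketch of that fallback step; the class name is hypothetical, and it uses Jackson 2 (com.fasterxml.jackson), whereas the Drill code above uses the older Jackson 1 API (org.codehaus.jackson) and its defaultPrettyPrintingWriter():

import com.fasterxml.jackson.databind.ObjectMapper;

final class JsonTraceFormat {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Re-serialize a JSON string with default pretty printing; return a marker on bad input.
    static String prettyOrFallback(String rawJson) {
        try {
            Object tree = MAPPER.readValue(rawJson, Object.class);
            return MAPPER.writerWithDefaultPrettyPrinter().writeValueAsString(tree);
        } catch (Exception e) {
            return "<<malformed JSON>>"; // same fallback marker the Drill code uses
        }
    }
}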

Example 8 with ExecutionSetupException

Use of org.apache.drill.common.exceptions.ExecutionSetupException in project drill by apache.

The class MapRDBScanBatchCreator, method getBatch.

@Override
public ScanBatch getBatch(FragmentContext context, MapRDBSubScan subScan, List<RecordBatch> children) throws ExecutionSetupException {
    Preconditions.checkArgument(children.isEmpty());
    List<RecordReader> readers = Lists.newArrayList();
    for (MapRDBSubScanSpec scanSpec : subScan.getRegionScanSpecList()) {
        try {
            // Binary-encoded MapR-DB tables reuse the HBase reader; JSON tables use the native JSON reader.
            if (BinaryTableGroupScan.TABLE_BINARY.equals(subScan.getTableType())) {
                readers.add(new HBaseRecordReader(subScan.getFormatPlugin().getConnection(), getHBaseSubScanSpec(scanSpec), subScan.getColumns(), context));
            } else {
                readers.add(new MaprDBJsonRecordReader(scanSpec, subScan.getFormatPluginConfig(), subScan.getColumns(), context));
            }
        } catch (Exception e1) {
            throw new ExecutionSetupException(e1);
        }
    }
    return new ScanBatch(subScan, context, readers.iterator());
}
Also used : ExecutionSetupException(org.apache.drill.common.exceptions.ExecutionSetupException) HBaseRecordReader(org.apache.drill.exec.store.hbase.HBaseRecordReader) MaprDBJsonRecordReader(org.apache.drill.exec.store.mapr.db.json.MaprDBJsonRecordReader) RecordReader(org.apache.drill.exec.store.RecordReader) ScanBatch(org.apache.drill.exec.physical.impl.ScanBatch)
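
The loop above builds one reader per region sub-scan and converts any construction failure into ExecutionSetupException, so a single bad region aborts setup cleanly rather than producing a partial scan. A generic sketch of that build-or-abort loop, with hypothetical Reader and SetupFailure types standing in for Drill's RecordReader and ExecutionSetupException:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for Drill's RecordReader and ExecutionSetupException.
interface Reader {
}

class SetupFailure extends Exception {
    SetupFailure(Throwable cause) {
        super(cause);
    }
}

final class ReaderListBuilder {
    interface ReaderFactory<S> {
        Reader create(S spec) throws Exception;
    }

    // Build one reader per scan spec; any failure aborts the whole setup with a typed exception.
    static <S> List<Reader> build(List<S> specs, ReaderFactory<S> factory) throws SetupFailure {
        List<Reader> readers = new ArrayList<>();
        for (S spec : specs) {
            try {
                readers.add(factory.create(spec));
            } catch (Exception e) {
                throw new SetupFailure(e);
            }
        }
        return readers;
    }
}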

Example 9 with ExecutionSetupException

Use of org.apache.drill.common.exceptions.ExecutionSetupException in project drill by apache.

The class SequenceFileRecordReader, method setup.

@Override
public void setup(OperatorContext context, OutputMutator output) throws ExecutionSetupException {
    final SequenceFileAsBinaryInputFormat inputFormat = new SequenceFileAsBinaryInputFormat();
    final JobConf jobConf = new JobConf(dfs.getConf());
    jobConf.setInputFormat(inputFormat.getClass());
    reader = getRecordReader(inputFormat, jobConf);
    // Keys and values are surfaced to Drill as nullable VARBINARY columns.
    final MaterializedField keyField = MaterializedField.create(keySchema, KEY_TYPE);
    final MaterializedField valueField = MaterializedField.create(valueSchema, VALUE_TYPE);
    try {
        keyVector = output.addField(keyField, NullableVarBinaryVector.class);
        valueVector = output.addField(valueField, NullableVarBinaryVector.class);
    } catch (SchemaChangeException sce) {
        throw new ExecutionSetupException("Error in setting up sequencefile reader.", sce);
    }
}
Also used : ExecutionSetupException(org.apache.drill.common.exceptions.ExecutionSetupException) SchemaChangeException(org.apache.drill.exec.exception.SchemaChangeException) NullableVarBinaryVector(org.apache.drill.exec.vector.NullableVarBinaryVector) SequenceFileAsBinaryInputFormat(org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat) MaterializedField(org.apache.drill.exec.record.MaterializedField) JobConf(org.apache.hadoop.mapred.JobConf)
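
Setup here runs through Hadoop's legacy mapred API: the input format is registered on a JobConf and a per-split RecordReader is opened, with keys and values surfaced as raw bytes. A self-contained sketch of that wiring, assuming the Hadoop client jars are on the classpath; the SequenceFileOpen class and its first-split-only handling are illustrative, not Drill's getRecordReader helper:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat;

final class SequenceFileOpen {
    // Open the first split of a sequence file as raw key/value bytes via the mapred API.
    static RecordReader<BytesWritable, BytesWritable> open(JobConf jobConf, String path) throws Exception {
        SequenceFileAsBinaryInputFormat format = new SequenceFileAsBinaryInputFormat();
        jobConf.setInputFormat(format.getClass());
        FileInputFormat.setInputPaths(jobConf, new Path(path));
        InputSplit split = format.getSplits(jobConf, 1)[0]; // illustrative: take the first split only
        return format.getRecordReader(split, jobConf, Reporter.NULL);
    }
}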

Example 10 with ExecutionSetupException

Use of org.apache.drill.common.exceptions.ExecutionSetupException in project drill by apache.

The class ExtendedMockRecordReader, method setup.

@Override
public void setup(OperatorContext context, OutputMutator output) throws ExecutionSetupException {
    try {
        final int estimateRowSize = getEstimatedRecordSize();
        valueVectors = new ValueVector[fields.length];
        int batchSize = config.getBatchSize();
        if (batchSize == 0) {
            // No batch size configured; default to a 10 MB budget.
            batchSize = 10 * 1024 * 1024;
        }
        // Clamp the per-batch record count to [1, Character.MAX_VALUE].
        batchRecordCount = Math.max(1, batchSize / estimateRowSize);
        batchRecordCount = Math.min(batchRecordCount, Character.MAX_VALUE);
        for (int i = 0; i < fields.length; i++) {
            final ColumnDef col = fields[i];
            final MajorType type = col.getConfig().getMajorType();
            final MaterializedField field = MaterializedField.create(col.getName(), type);
            final Class<? extends ValueVector> vvClass = TypeHelper.getValueVectorClass(field.getType().getMinorType(), field.getDataMode());
            valueVectors[i] = output.addField(field, vvClass);
        }
    } catch (SchemaChangeException e) {
        throw new ExecutionSetupException("Failure while setting up fields", e);
    }
}
Also used : ExecutionSetupException(org.apache.drill.common.exceptions.ExecutionSetupException) SchemaChangeException(org.apache.drill.exec.exception.SchemaChangeException) MajorType(org.apache.drill.common.types.TypeProtos.MajorType) MaterializedField(org.apache.drill.exec.record.MaterializedField)
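
The sizing logic above is simple arithmetic worth stating on its own: divide the byte budget by the estimated row size, then clamp so a batch always holds at least one record and at most 65535 (Character.MAX_VALUE), keeping record indexes within an unsigned 16-bit range. A small sketch of just that calculation (BatchSizing is a hypothetical name):

final class BatchSizing {
    // Derive a per-batch record count from a byte budget and an estimated row size.
    static int recordsPerBatch(int batchSizeBytes, int estimatedRowSize) {
        if (batchSizeBytes == 0) {
            batchSizeBytes = 10 * 1024 * 1024; // default budget: 10 MB, as in the reader above
        }
        int count = Math.max(1, batchSizeBytes / estimatedRowSize);
        return Math.min(count, Character.MAX_VALUE); // 65535
    }
}

For example, a 10 MB budget with 512-byte rows yields 20480 records per batch.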

Aggregations

ExecutionSetupException (org.apache.drill.common.exceptions.ExecutionSetupException): 94
IOException (java.io.IOException): 43
ScanBatch (org.apache.drill.exec.physical.impl.ScanBatch): 26
SchemaPath (org.apache.drill.common.expression.SchemaPath): 25
RecordReader (org.apache.drill.exec.store.RecordReader): 24
SchemaChangeException (org.apache.drill.exec.exception.SchemaChangeException): 22
LinkedList (java.util.LinkedList): 16
Map (java.util.Map): 14
MaterializedField (org.apache.drill.exec.record.MaterializedField): 13
ExecutionException (java.util.concurrent.ExecutionException): 10
DrillRuntimeException (org.apache.drill.common.exceptions.DrillRuntimeException): 10
OperatorContext (org.apache.drill.exec.ops.OperatorContext): 8
UserException (org.apache.drill.common.exceptions.UserException): 7
MajorType (org.apache.drill.common.types.TypeProtos.MajorType): 7
JobConf (org.apache.hadoop.mapred.JobConf): 7
HashMap (java.util.HashMap): 6
List (java.util.List): 6
OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException): 6
VectorContainerWriter (org.apache.drill.exec.vector.complex.impl.VectorContainerWriter): 6
Path (org.apache.hadoop.fs.Path): 6