
Example 6 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in the Apache Drill project.

The class FragmentExecutor, method run().

@SuppressWarnings("resource")
@Override
public void run() {
    // if a cancel thread has already entered this executor, we have no reason to continue.
    if (!hasCloseoutThread.compareAndSet(false, true)) {
        return;
    }
    final Thread myThread = Thread.currentThread();
    myThreadRef.set(myThread);
    final String originalThreadName = myThread.getName();
    final FragmentHandle fragmentHandle = fragmentContext.getHandle();
    final DrillbitContext drillbitContext = fragmentContext.getDrillbitContext();
    final ClusterCoordinator clusterCoordinator = drillbitContext.getClusterCoordinator();
    final DrillbitStatusListener drillbitStatusListener = new FragmentDrillbitStatusListener();
    final String newThreadName = QueryIdHelper.getExecutorThreadName(fragmentHandle);
    try {
        myThread.setName(newThreadName);
        // if we didn't get the root operator when the executor was created, create it now.
        final FragmentRoot rootOperator = this.rootOperator != null ? this.rootOperator : drillbitContext.getPlanReader().readFragmentOperator(fragment.getFragmentJson());
        root = ImplCreator.getExec(fragmentContext, rootOperator);
        if (root == null) {
            return;
        }
        clusterCoordinator.addDrillbitStatusListener(drillbitStatusListener);
        updateState(FragmentState.RUNNING);
        eventProcessor.start();
        injector.injectPause(fragmentContext.getExecutionControls(), "fragment-running", logger);
        final DrillbitEndpoint endpoint = drillbitContext.getEndpoint();
        logger.debug("Starting fragment {}:{} on {}:{}", fragmentHandle.getMajorFragmentId(), fragmentHandle.getMinorFragmentId(), endpoint.getAddress(), endpoint.getUserPort());
        final UserGroupInformation queryUserUgi = fragmentContext.isImpersonationEnabled() ? ImpersonationUtil.createProxyUgi(fragmentContext.getQueryUserName()) : ImpersonationUtil.getProcessUserUGI();
        queryUserUgi.doAs(new PrivilegedExceptionAction<Void>() {

            @Override
            public Void run() throws Exception {
                injector.injectChecked(fragmentContext.getExecutionControls(), "fragment-execution", IOException.class);
                /*
                 * Run the query until root.next() returns false OR we no longer need to continue.
                 */
                while (shouldContinue() && root.next()) {
                // loop
                }
                return null;
            }
        });
    } catch (OutOfMemoryError | OutOfMemoryException e) {
        if (!(e instanceof OutOfMemoryError) || "Direct buffer memory".equals(e.getMessage())) {
            fail(UserException.memoryError(e).build(logger));
        } else {
            // we have a heap out-of-memory error. The JVM is unstable; exit.
            CatastrophicFailure.exit(e, "Unable to handle out of memory condition in FragmentExecutor.", -2);
        }
    } catch (AssertionError | Exception e) {
        fail(e);
    } finally {
        // No longer allow this thread to be interrupted. We synchronize so that a cancelling thread cannot set an
        // interruption after we have moved beyond this block.
        synchronized (myThreadRef) {
            myThreadRef.set(null);
            Thread.interrupted();
        }
        // Make sure the event processor is started at least once
        eventProcessor.start();
        // here we could be in FAILED, RUNNING, or CANCELLATION_REQUESTED
        cleanup(FragmentState.FINISHED);
        clusterCoordinator.removeDrillbitStatusListener(drillbitStatusListener);
        myThread.setName(originalThreadName);
    }
}
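The catch block above applies a simple classification: a java.lang.OutOfMemoryError whose message is "Direct buffer memory" means direct (off-heap) memory was exhausted and the failure can be reported back as a query error, while any other OutOfMemoryError means the heap itself is gone and the JVM can no longer be trusted, so the process exits. Drill's own OutOfMemoryException always falls on the recoverable side, since it signals allocator-limit exhaustion rather than JVM heap exhaustion. A minimal self-contained sketch of that rule (OomClassifier and isRecoverable are hypothetical names, not Drill code):

// Hypothetical helper mirroring the classification in FragmentExecutor's catch block.
public final class OomClassifier {

    // Recoverable: a non-Error throwable such as Drill's allocator-level OutOfMemoryException,
    // or a JVM OutOfMemoryError caused specifically by exhausted direct buffer memory.
    static boolean isRecoverable(Throwable t) {
        return !(t instanceof OutOfMemoryError)
                || "Direct buffer memory".equals(t.getMessage());
    }

    public static void main(String[] args) {
        System.out.println(isRecoverable(new OutOfMemoryError("Direct buffer memory"))); // true
        System.out.println(isRecoverable(new OutOfMemoryError("Java heap space")));      // false: exit the JVM
        System.out.println(isRecoverable(new RuntimeException("allocator limit")));      // true (stands in for OutOfMemoryException)
    }
}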

Example 7 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in the Apache Drill project.

The class IndirectRowSet, method makeSv2().

private static SelectionVector2 makeSv2(BufferAllocator allocator, VectorContainer container) {
    int rowCount = container.getRecordCount();
    SelectionVector2 sv2 = new SelectionVector2(allocator);
    if (!sv2.allocateNewSafe(rowCount)) {
        throw new OutOfMemoryException("Unable to allocate sv2 buffer");
    }
    for (int i = 0; i < rowCount; i++) {
        sv2.setIndex(i, (char) i);
    }
    sv2.setRecordCount(rowCount);
    container.buildSchema(SelectionVectorMode.TWO_BYTE);
    return sv2;
}
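makeSv2 builds an identity selection vector: entry i points at row i, and later operators can reorder or drop entries without touching the underlying row data. As a rough plain-Java analogy of that indirection (a char[] standing in for SelectionVector2, not the Drill API):

// Rough analogy of a two-byte selection vector: the vector holds row indices into
// an underlying batch, so sorting or filtering can permute the char[] instead of
// moving the rows themselves.
public final class Sv2Sketch {
    public static void main(String[] args) {
        String[] rows = {"c", "a", "b"};       // stands in for a VectorContainer's rows
        char[] sv2 = new char[rows.length];    // two-byte indices, like SelectionVector2
        for (int i = 0; i < sv2.length; i++) {
            sv2[i] = (char) i;                 // identity mapping, as in makeSv2
        }
        // "Sort" by permuting indices only; row data is untouched.
        sv2[0] = 1; sv2[1] = 2; sv2[2] = 0;
        for (char idx : sv2) {
            System.out.println(rows[idx]);     // prints a, b, c
        }
    }
}

The two-byte width is also why an SV2 can address at most 65,536 rows per batch.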

Example 8 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in the Apache Drill project.

The class PartitionSenderRootExec, method innerNext().

@Override
public boolean innerNext() {
    if (!ok) {
        return false;
    }
    IterOutcome out;
    if (!done) {
        out = next(incoming);
    } else {
        incoming.kill(true);
        out = IterOutcome.NONE;
    }
    logger.debug("Partitioner.next(): got next record batch with status {}", out);
    if (first && out == IterOutcome.OK) {
        out = IterOutcome.OK_NEW_SCHEMA;
    }
    switch(out) {
        case NONE:
            try {
                // send any pending batches
                if (partitioner != null) {
                    partitioner.flushOutgoingBatches(true, false);
                } else {
                    sendEmptyBatch(true);
                }
            } catch (IOException e) {
                incoming.kill(false);
                logger.error("Error while creating partitioning sender or flushing outgoing batches", e);
                context.fail(e);
            }
            return false;
        case OUT_OF_MEMORY:
            throw new OutOfMemoryException();
        case STOP:
            if (partitioner != null) {
                partitioner.clear();
            }
            return false;
        case OK_NEW_SCHEMA:
            try {
                // send all existing batches
                if (partitioner != null) {
                    partitioner.flushOutgoingBatches(false, true);
                    partitioner.clear();
                }
                createPartitioner();
                if (first) {
                    // Send an empty batch for fast schema
                    first = false;
                    sendEmptyBatch(false);
                }
            } catch (IOException e) {
                incoming.kill(false);
                logger.error("Error while flushing outgoing batches", e);
                context.fail(e);
                return false;
            } catch (SchemaChangeException e) {
                incoming.kill(false);
                logger.error("Error while setting up partitioner", e);
                context.fail(e);
                return false;
            }
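            // no break here: a batch that carries a new schema also carries rows, so fall through to OK and partition it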
        case OK:
            try {
                partitioner.partitionBatch(incoming);
            } catch (IOException e) {
                context.fail(e);
                incoming.kill(false);
                return false;
            }
            for (VectorWrapper<?> v : incoming) {
                v.clear();
            }
            return true;
        case NOT_YET:
        default:
            throw new IllegalStateException();
    }
}
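Sender operators like this one all follow the same dispatch on the upstream outcome: NONE flushes pending work and finishes, OUT_OF_MEMORY escalates as an exception, STOP releases resources, and OK_NEW_SCHEMA rebuilds schema-dependent state before handling the rows like OK. A bare skeleton of that protocol, with a stand-in enum rather than Drill's IterOutcome:

// Minimal sketch of the upstream-outcome dispatch both senders follow.
// Outcome is a stand-in for Drill's IterOutcome, not the real enum.
enum Outcome { NONE, OK, OK_NEW_SCHEMA, OUT_OF_MEMORY, STOP, NOT_YET }

abstract class SenderSkeleton {
    boolean innerNext(Outcome out) {
        switch (out) {
            case NONE:            // upstream exhausted: flush pending work, signal completion
                flushFinal();
                return false;
            case OUT_OF_MEMORY:   // escalate so the fragment can react (Drill throws OutOfMemoryException here)
                throw new RuntimeException("out of memory");
            case STOP:            // query is being torn down: release resources
                clear();
                return false;
            case OK_NEW_SCHEMA:   // rebuild schema-dependent state, then handle the rows like OK
                rebuild();
                // fall through
            case OK:
                sendBatch();
                return true;
            case NOT_YET:
            default:
                throw new IllegalStateException("unexpected outcome: " + out);
        }
    }
    abstract void flushFinal();
    abstract void clear();
    abstract void rebuild();
    abstract void sendBatch();
}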

Example 9 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in the Apache Drill project.

The class BroadcastSenderRootExec, method innerNext().

@Override
public boolean innerNext() {
    RecordBatch.IterOutcome out = next(incoming);
    logger.debug("Outcome of sender next {}", out);
    switch(out) {
        case OUT_OF_MEMORY:
            throw new OutOfMemoryException();
        case STOP:
        case NONE:
            for (int i = 0; i < tunnels.length; ++i) {
                FragmentWritableBatch b2 = FragmentWritableBatch.getEmptyLast(handle.getQueryId(), handle.getMajorFragmentId(), handle.getMinorFragmentId(), config.getOppositeMajorFragmentId(), receivingMinorFragments[i]);
                stats.startWait();
                try {
                    tunnels[i].sendRecordBatch(b2);
                } finally {
                    stats.stopWait();
                }
            }
            return false;
        case OK_NEW_SCHEMA:
        case OK:
            WritableBatch writableBatch = incoming.getWritableBatch().transfer(oContext.getAllocator());
            if (tunnels.length > 1) {
                writableBatch.retainBuffers(tunnels.length - 1);
            }
            for (int i = 0; i < tunnels.length; ++i) {
                FragmentWritableBatch batch = new FragmentWritableBatch(false, handle.getQueryId(), handle.getMajorFragmentId(), handle.getMinorFragmentId(), config.getOppositeMajorFragmentId(), receivingMinorFragments[i], writableBatch);
                updateStats(batch);
                stats.startWait();
                try {
                    tunnels[i].sendRecordBatch(batch);
                } finally {
                    stats.stopWait();
                }
            }
            return ok;
        case NOT_YET:
        default:
            throw new IllegalStateException();
    }
}
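The retainBuffers(tunnels.length - 1) call sizes the batch's buffer reference count to the number of consumers: the batch starts with one reference and each tunnel's send releases one, so the memory survives until the last send completes. A toy illustration of that accounting (not Netty's ByteBuf API, which provides the real retain/release):

// Toy reference-counted buffer, illustrating why the sender retains N - 1 extra
// references when broadcasting to N tunnels: each send releases once, and the
// last release frees the memory.
final class RefCountedBuffer {
    private int refCount = 1;                     // creator holds the first reference

    synchronized void retain(int increment) {
        refCount += increment;
    }

    synchronized void release() {
        if (--refCount == 0) {
            System.out.println("buffer freed");   // real code would return memory to the allocator
        }
    }
}

class BroadcastSketch {
    public static void main(String[] args) {
        int tunnels = 3;
        RefCountedBuffer buf = new RefCountedBuffer();
        buf.retain(tunnels - 1);                  // one extra reference per additional consumer
        for (int i = 0; i < tunnels; i++) {
            buf.release();                        // each send releases once; the last one frees
        }
    }
}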

Example 10 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in the Apache Drill project.

The class ExternalSortBatch, method innerNext().

@SuppressWarnings("resource")
@Override
public IterOutcome innerNext() {
    if (schema != null) {
        if (spillCount == 0) {
            return (getSelectionVector4().next()) ? IterOutcome.OK : IterOutcome.NONE;
        } else {
            Stopwatch w = Stopwatch.createStarted();
            int count = copier.next(targetRecordCount);
            if (count > 0) {
                long t = w.elapsed(TimeUnit.MICROSECONDS);
                logger.debug("Took {} us to merge {} records", t, count);
                container.setRecordCount(count);
                return IterOutcome.OK;
            } else {
                logger.debug("copier returned 0 records");
                return IterOutcome.NONE;
            }
        }
    }
    int totalCount = 0;
    // total number of batches received so far
    int totalBatches = 0;
    try {
        container.clear();
        outer: while (true) {
            IterOutcome upstream;
            if (first) {
                upstream = IterOutcome.OK_NEW_SCHEMA;
            } else {
                upstream = next(incoming);
            }
            if (upstream == IterOutcome.OK && sorter == null) {
                upstream = IterOutcome.OK_NEW_SCHEMA;
            }
            switch(upstream) {
                case NONE:
                    if (first) {
                        return upstream;
                    }
                    break outer;
                case NOT_YET:
                    throw new UnsupportedOperationException();
                case STOP:
                    return upstream;
                case OK_NEW_SCHEMA:
                case OK:
                    VectorContainer convertedBatch;
                    // only change in the case that the schema truly changes.  Artificial schema changes are ignored.
                    if (upstream == IterOutcome.OK_NEW_SCHEMA && !incoming.getSchema().equals(schema)) {
                        if (schema != null) {
                            if (unionTypeEnabled) {
                                this.schema = SchemaUtil.mergeSchemas(schema, incoming.getSchema());
                            } else {
                                throw SchemaChangeException.schemaChanged("Schema changes not supported in External Sort. Please enable Union type", schema, incoming.getSchema());
                            }
                        } else {
                            schema = incoming.getSchema();
                        }
                        convertedBatch = SchemaUtil.coerceContainer(incoming, schema, oContext);
                        for (BatchGroup b : batchGroups) {
                            b.setSchema(schema);
                        }
                        for (BatchGroup b : spilledBatchGroups) {
                            b.setSchema(schema);
                        }
                        this.sorter = createNewSorter(context, convertedBatch);
                    } else {
                        convertedBatch = SchemaUtil.coerceContainer(incoming, schema, oContext);
                    }
                    if (first) {
                        first = false;
                    }
                    if (convertedBatch.getRecordCount() == 0) {
                        for (VectorWrapper<?> w : convertedBatch) {
                            w.clear();
                        }
                        break;
                    }
                    SelectionVector2 sv2;
                    if (incoming.getSchema().getSelectionVectorMode() == BatchSchema.SelectionVectorMode.TWO_BYTE) {
                        sv2 = incoming.getSelectionVector2().clone();
                    } else {
                        try {
                            sv2 = newSV2();
                        } catch (InterruptedException e) {
                            return IterOutcome.STOP;
                        } catch (OutOfMemoryException e) {
                            throw new OutOfMemoryException(e);
                        }
                    }
                    int count = sv2.getCount();
                    totalCount += count;
                    totalBatches++;
                    sorter.setup(context, sv2, convertedBatch);
                    sorter.sort(sv2);
                    RecordBatchData rbd = new RecordBatchData(convertedBatch, oAllocator);
                    boolean success = false;
                    try {
                        rbd.setSv2(sv2);
                        batchGroups.add(new BatchGroup(rbd.getContainer(), rbd.getSv2(), oContext));
                        if (peakNumBatches < batchGroups.size()) {
                            peakNumBatches = batchGroups.size();
                            stats.setLongStat(Metric.PEAK_BATCHES_IN_MEMORY, peakNumBatches);
                        }
                        batchesSinceLastSpill++;
                        // Spill if any of the following holds:
                        // - we haven't spilled so far and don't have enough memory for MSorter
                        //   if this turns out to be the last incoming batch;
                        // - we haven't spilled so far and would exceed the maximum number of batches an SV4 can address;
                        // - current memory used is more than 95% of this operator's memory usage limit;
                        // - the number of buffered batch groups and the number of batches accumulated
                        //   since the last spill both exceed their defined limits.
                        if ((spillCount == 0 && !hasMemoryForInMemorySort(totalCount)) ||
                                (spillCount == 0 && totalBatches > Character.MAX_VALUE) ||
                                (oAllocator.getAllocatedMemory() > .95 * oAllocator.getLimit()) ||
                                (batchGroups.size() > SPILL_THRESHOLD && batchesSinceLastSpill >= SPILL_BATCH_GROUP_SIZE)) {
                            if (firstSpillBatchCount == 0) {
                                firstSpillBatchCount = batchGroups.size();
                            }
                            if (spilledBatchGroups.size() > firstSpillBatchCount / 2) {
                                logger.info("Merging spills");
                                final BatchGroup merged = mergeAndSpill(spilledBatchGroups);
                                if (merged != null) {
                                    spilledBatchGroups.addFirst(merged);
                                }
                            }
                            final BatchGroup merged = mergeAndSpill(batchGroups);
                            if (merged != null) {
                                // make sure we don't add null to spilledBatchGroups
                                spilledBatchGroups.add(merged);
                                batchesSinceLastSpill = 0;
                            }
                        }
                        success = true;
                    } finally {
                        if (!success) {
                            rbd.clear();
                        }
                    }
                    break;
                case OUT_OF_MEMORY:
                    logger.debug("received OUT_OF_MEMORY, trying to spill");
                    if (batchesSinceLastSpill > 2) {
                        final BatchGroup merged = mergeAndSpill(batchGroups);
                        if (merged != null) {
                            spilledBatchGroups.add(merged);
                            batchesSinceLastSpill = 0;
                        }
                    } else {
                        logger.debug("not enough batches to spill, sending OUT_OF_MEMORY downstream");
                        return IterOutcome.OUT_OF_MEMORY;
                    }
                    break;
                default:
                    throw new UnsupportedOperationException();
            }
        }
        if (totalCount == 0) {
            return IterOutcome.NONE;
        }
        if (spillCount == 0) {
            if (builder != null) {
                builder.clear();
                builder.close();
            }
            builder = new SortRecordBatchBuilder(oAllocator);
            for (BatchGroup group : batchGroups) {
                RecordBatchData rbd = new RecordBatchData(group.getContainer(), oAllocator);
                rbd.setSv2(group.getSv2());
                builder.add(rbd);
            }
            builder.build(context, container);
            sv4 = builder.getSv4();
            mSorter = createNewMSorter();
            mSorter.setup(context, oAllocator, getSelectionVector4(), this.container);
            // For testing memory-leak purpose, inject exception after mSorter finishes setup
            injector.injectUnchecked(context.getExecutionControls(), INTERRUPTION_AFTER_SETUP);
            mSorter.sort(this.container);
            // sort may have exited prematurely because shouldContinue() returned false.
            if (!context.shouldContinue()) {
                return IterOutcome.STOP;
            }
            // For testing memory-leak purpose, inject exception after mSorter finishes sorting
            injector.injectUnchecked(context.getExecutionControls(), INTERRUPTION_AFTER_SORT);
            sv4 = mSorter.getSV4();
            container.buildSchema(SelectionVectorMode.FOUR_BYTE);
        } else {
            // some batches were spilled
            final BatchGroup merged = mergeAndSpill(batchGroups);
            if (merged != null) {
                spilledBatchGroups.add(merged);
            }
            batchGroups.addAll(spilledBatchGroups);
            // no need to clean up spilledBatchGroups, all its batches are in batchGroups now
            spilledBatchGroups = null;
            logger.warn("Starting to merge. {} batch groups. Current allocated memory: {}", batchGroups.size(), oAllocator.getAllocatedMemory());
            VectorContainer hyperBatch = constructHyperBatch(batchGroups);
            createCopier(hyperBatch, batchGroups, container, false);
            int estimatedRecordSize = 0;
            for (VectorWrapper<?> w : batchGroups.get(0)) {
                try {
                    estimatedRecordSize += TypeHelper.getSize(w.getField().getType());
                } catch (UnsupportedOperationException e) {
                    estimatedRecordSize += 50;
                }
            }
            targetRecordCount = Math.min(MAX_BATCH_SIZE, Math.max(1, COPIER_BATCH_MEM_LIMIT / estimatedRecordSize));
            int count = copier.next(targetRecordCount);
            container.buildSchema(SelectionVectorMode.NONE);
            container.setRecordCount(count);
        }
        return IterOutcome.OK_NEW_SCHEMA;
    } catch (SchemaChangeException ex) {
        kill(false);
        context.fail(UserException.unsupportedError(ex).message("Sort doesn't currently support sorts with changing schemas").build(logger));
        return IterOutcome.STOP;
    } catch (ClassTransformationException | IOException ex) {
        kill(false);
        context.fail(ex);
        return IterOutcome.STOP;
    } catch (UnsupportedOperationException e) {
        throw new RuntimeException(e);
    }
}
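The spill trigger in the OK branch bundles four independent conditions. Factored into a standalone predicate (a sketch with hypothetical parameter names; the thresholds mirror the operator's SPILL_THRESHOLD and SPILL_BATCH_GROUP_SIZE constants), the decision reads:

// Sketch of the external sort's spill trigger, factored out of the OK branch above.
final class SpillDecision {

    static boolean shouldSpill(int spillCount, boolean memoryForInMemorySort, int totalBatches,
                               long allocatedMemory, long memoryLimit,
                               int batchGroupCount, int batchesSinceLastSpill,
                               int spillThreshold, int spillBatchGroupSize) {
        // Never spilled yet, but an in-memory sort of everything seen so far would not fit.
        if (spillCount == 0 && !memoryForInMemorySort) {
            return true;
        }
        // Never spilled yet, but an SV4 can only address Character.MAX_VALUE batches.
        if (spillCount == 0 && totalBatches > Character.MAX_VALUE) {
            return true;
        }
        // Allocated memory is above 95% of this operator's limit.
        if (allocatedMemory > 0.95 * memoryLimit) {
            return true;
        }
        // Too many batch groups are buffered and enough batches accumulated since the last spill.
        return batchGroupCount > spillThreshold && batchesSinceLastSpill >= spillBatchGroupSize;
    }
}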
