
Example 31 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by axbaretto.

the class HashJoinBatch method executeBuildPhase.

public void executeBuildPhase() throws SchemaChangeException, ClassTransformationException, IOException {
    // skip first batch if count is zero, as it may be an empty schema batch
    if (isFurtherProcessingRequired(rightUpstream) && right.getRecordCount() == 0) {
        for (final VectorWrapper<?> w : right) {
            w.clear();
        }
        rightUpstream = next(right);
        if (isFurtherProcessingRequired(rightUpstream) && right.getRecordCount() > 0 && hashTable == null) {
            setupHashTable();
        }
    }
    boolean moreData = true;
    while (moreData) {
        switch(rightUpstream) {
            case OUT_OF_MEMORY:
            case NONE:
            case NOT_YET:
            case STOP:
                moreData = false;
                continue;
            case OK_NEW_SCHEMA:
                if (rightSchema == null) {
                    rightSchema = right.getSchema();
                    if (rightSchema.getSelectionVectorMode() != BatchSchema.SelectionVectorMode.NONE) {
                        final String errorMsg = "Hash join does not support build batch with selection vectors. "
                                + "Build batch has selection mode = " + rightSchema.getSelectionVectorMode();
                        throw new SchemaChangeException(errorMsg);
                    }
                    setupHashTable();
                } else {
                    if (!rightSchema.equals(right.getSchema())) {
                        throw SchemaChangeException.schemaChanged("Hash join does not support schema changes in build side.", rightSchema, right.getSchema());
                    }
                    hashTable.updateBatches();
                }
            // Fall through
            case OK:
                final int currentRecordCount = right.getRecordCount();
                // For every new build batch, add corresponding state to the helper context
                hjHelper.addNewBatch(currentRecordCount);
                // Holder contains the global index where the key is hashed into using the hash table
                final IndexPointer htIndex = new IndexPointer();
                // For every record in the build batch , hash the key columns
                for (int i = 0; i < currentRecordCount; i++) {
                    int hashCode = hashTable.getHashCode(i);
                    try {
                        hashTable.put(i, htIndex, hashCode);
                    } catch (RetryAfterSpillException e) {
                        // Hash join cannot retry the put yet, so treat a spill request as an allocation failure
                        throw new OutOfMemoryException("HT put");
                    }
                    /* Use the global index returned by the hash table to store
                     * the current record index and batch index. This will be used
                     * later when we probe and find a match.
                     */
                    hjHelper.setCurrentIndex(htIndex.value, buildBatchIndex, i);
                }
                /* Completed hashing all records in this batch. Transfer the batch
                 * to the hyper vector container. It will be used when we want to retrieve
                 * records that have matching keys on the probe side.
                 */
                final RecordBatchData nextBatch = new RecordBatchData(right, oContext.getAllocator());
                boolean success = false;
                try {
                    if (hyperContainer == null) {
                        hyperContainer = new ExpandableHyperContainer(nextBatch.getContainer());
                    } else {
                        hyperContainer.addBatch(nextBatch.getContainer());
                    }
                    // completed processing a batch, increment batch index
                    buildBatchIndex++;
                    success = true;
                } finally {
                    if (!success) {
                        nextBatch.clear();
                    }
                }
                break;
        }
        // Get the next record batch
        rightUpstream = next(HashJoinHelper.RIGHT_INPUT, right);
    }
}
Also used : SchemaChangeException(org.apache.drill.exec.exception.SchemaChangeException) ExpandableHyperContainer(org.apache.drill.exec.record.ExpandableHyperContainer) RetryAfterSpillException(org.apache.drill.common.exceptions.RetryAfterSpillException) RecordBatchData(org.apache.drill.exec.physical.impl.sort.RecordBatchData) IndexPointer(org.apache.drill.exec.physical.impl.common.IndexPointer) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
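
The bookkeeping in this example is worth calling out: hashTable.put returns a global slot through htIndex, and hjHelper records which build batch and which record within it produced the key, so the probe phase can fetch the matching row later. A minimal standalone sketch of that idea (illustrative only, not Drill's actual HashJoinHelper):

import java.util.ArrayList;
import java.util.List;

class BuildSideIndex {

    // One entry per global hash table slot, packing (batchIndex, recordIndex) into a long
    private final List<Long> slots = new ArrayList<>();

    void setCurrentIndex(int htIndex, int batchIndex, int recordIndex) {
        while (slots.size() <= htIndex) {
            slots.add(-1L); // -1 marks an unused slot
        }
        slots.set(htIndex, ((long) batchIndex << 32) | (recordIndex & 0xFFFFFFFFL));
    }

    // Used by the probe phase to locate the matching build-side row
    int batchIndexOf(int htIndex) {
        return (int) (slots.get(htIndex) >>> 32);
    }

    int recordIndexOf(int htIndex) {
        return slots.get(htIndex).intValue();
    }
}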

Example 32 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by axbaretto.

the class SaslEncryptionHandler method encode.

public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
    if (!ctx.channel().isOpen()) {
        logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " + "So releasing msg memory before encryption.");
        msg.release();
        return;
    }
    try {
        // If encryption is enabled then this handler will always get ByteBuf of type Composite ByteBuf
        assert (msg instanceof CompositeByteBuf);
        final CompositeByteBuf cbb = (CompositeByteBuf) msg;
        final int numComponents = cbb.numComponents();
        // Get all the components inside the Composite ByteBuf for encryption
        for (int currentIndex = 0; currentIndex < numComponents; ++currentIndex) {
            final ByteBuf component = cbb.component(currentIndex);
            // Upstream handlers break the RPC message into chunks of at most wrapSizeLimit,
            // so a larger component indicates a framing bug.
            if (component.readableBytes() > wrapSizeLimit) {
                throw new RpcException(String.format("Component Chunk size: %d is greater than the wrapSizeLimit: %d", component.readableBytes(), wrapSizeLimit));
            }
            // Uncomment the code below if msg can contain both direct and heap ByteBufs. Currently Drill only
            // supports direct ByteBufs, so the condition would always be false. If the messages were always heap
            // ByteBufs then, in addition, the allocation of origMsgBuffer in the constructor should be removed.
            /*if (component.hasArray()) {
          origMsg = component.array();
        } else {

        if (RpcConstants.EXTRA_DEBUGGING) {
          logger.trace("The input bytebuf is not backed by a byte array so allocating a new one");
        }*/
            final byte[] origMsg = origMsgBuffer;
            component.getBytes(component.readerIndex(), origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Trying to encrypt chunk of size:{} with wrapSizeLimit:{}", component.readableBytes(), wrapSizeLimit);
            }
            // Length to encrypt will be component length not origMsg length since that can be greater.
            final byte[] wrappedMsg = saslCodec.wrap(origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Successfully encrypted message, original size: {} Final Size: {}", component.readableBytes(), wrappedMsg.length);
            }
            // Allocate a direct buffer for the encrypted byte array plus 4 octets for the length of the
            // encrypted message. A direct buffer is preferred because a heap buffer would be copied by the
            // channel into temporary direct memory, cached per thread and sized to the largest message sent.
            final ByteBuf encryptedBuf = ctx.alloc().buffer(wrappedMsg.length + RpcConstants.LENGTH_FIELD_LENGTH);
            // Per SASL RFCs 2222/4422, the first 4 octets must hold the length of the encrypted buffer in network
            // byte order. The SASL framework provided by the JDK doesn't do that by default and leaves it up to the
            // application, whereas the Cyrus SASL implementation of sasl_encode does take care of it.
            lengthOctets.putInt(wrappedMsg.length);
            encryptedBuf.writeBytes(lengthOctets.array());
            // reset the position for re-use in next round
            lengthOctets.rewind();
            // Write the encrypted bytes inside the buffer
            encryptedBuf.writeBytes(wrappedMsg);
            // Update the msg and component reader index
            msg.skipBytes(component.readableBytes());
            component.skipBytes(component.readableBytes());
            // Add the encrypted buffer into the output to send it on wire.
            out.add(encryptedBuf);
        }
    } catch (OutOfMemoryException e) {
        logger.warn("Failure allocating buffer on outgoing stream due to memory limits.");
        msg.resetReaderIndex();
        outOfMemoryHandler.handle();
    } catch (IOException e) {
        logger.error("Something went wrong while wrapping the message: {} with MaxRawWrapSize: {} and error: {}", msg, wrapSizeLimit, e.getMessage());
        throw e;
    }
}
Also used : CompositeByteBuf(io.netty.buffer.CompositeByteBuf) IOException(java.io.IOException) CompositeByteBuf(io.netty.buffer.CompositeByteBuf) ByteBuf(io.netty.buffer.ByteBuf) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
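
The core of this example is the framing rule spelled out in the comments: each encrypted token must be preceded by its length as four octets in network byte order, which the JDK's SASL framework leaves to the application. A standalone sketch of just that step (class and method names are illustrative):

import java.nio.ByteBuffer;

class SaslFraming {

    // Prefix a wrapped (encrypted) token with its 4-octet length.
    // ByteBuffer defaults to big-endian, which is network byte order.
    static byte[] frame(byte[] wrappedMsg) {
        ByteBuffer framed = ByteBuffer.allocate(4 + wrappedMsg.length);
        framed.putInt(wrappedMsg.length);
        framed.put(wrappedMsg);
        return framed.array();
    }
}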

Example 33 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by axbaretto.

the class ProtobufLengthDecoder method decode.

@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
    if (!ctx.channel().isOpen()) {
        if (in.readableBytes() > 0) {
            logger.info("Channel is closed, discarding remaining {} byte(s) in buffer.", in.readableBytes());
        }
        in.skipBytes(in.readableBytes());
        return;
    }
    in.markReaderIndex();
    final byte[] buf = new byte[5];
    for (int i = 0; i < buf.length; i++) {
        if (!in.isReadable()) {
            in.resetReaderIndex();
            return;
        }
        buf[i] = in.readByte();
        if (buf[i] >= 0) {
            int length = CodedInputStream.newInstance(buf, 0, i + 1).readRawVarint32();
            if (length < 0) {
                throw new CorruptedFrameException("negative length: " + length);
            }
            if (length == 0) {
                throw new CorruptedFrameException("Received a message of length 0.");
            }
            if (in.readableBytes() < length) {
                in.resetReaderIndex();
                return;
            } else {
                // need to make buffer copy, otherwise netty will try to refill this buffer if we move the readerIndex forward...
                // TODO: Can we avoid this copy?
                ByteBuf outBuf;
                try {
                    outBuf = allocator.buffer(length);
                } catch (OutOfMemoryException e) {
                    logger.warn("Failure allocating buffer on incoming stream due to memory limits.  Current Allocation: {}.", allocator.getAllocatedMemory());
                    in.resetReaderIndex();
                    outOfMemoryHandler.handle();
                    return;
                }
                outBuf.writeBytes(in, in.readerIndex(), length);
                in.skipBytes(length);
                if (RpcConstants.EXTRA_DEBUGGING) {
                    logger.debug(String.format("ReaderIndex is %d after length header of %d bytes and frame body of length %d bytes.", in.readerIndex(), i + 1, length));
                }
                out.add(outBuf);
                return;
            }
        }
    }
    // Couldn't find the byte whose MSB is off.
    throw new CorruptedFrameException("length wider than 32-bit");
}
Also used : CorruptedFrameException(io.netty.handler.codec.CorruptedFrameException) ByteBuf(io.netty.buffer.ByteBuf) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
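
The decode loop reads at most five bytes because a protobuf varint32 carries seven payload bits per byte and uses the high bit as a continuation flag; a non-negative byte (high bit clear) ends the length header. A standalone sketch of the same rule, standing in for CodedInputStream.readRawVarint32 (illustrative):

class Varint32 {

    // Decode a little-endian base-128 varint from the first len bytes of buf.
    static int readRawVarint32(byte[] buf, int len) {
        int result = 0;
        for (int i = 0; i < len && i < 5; i++) {
            result |= (buf[i] & 0x7F) << (7 * i);
            if (buf[i] >= 0) {
                // High bit clear: this is the last byte of the varint
                return result;
            }
        }
        throw new IllegalArgumentException("length wider than 32-bit");
    }
}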

Example 34 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by axbaretto.

the class RepeatedMapVector method allocateNew.

public void allocateNew(int groupCount, int innerValueCount) {
    clear();
    try {
        allocateOffsetsNew(groupCount);
        for (ValueVector v : getChildren()) {
            // 50 here is an assumed average width in bytes per value for variable-width children
            AllocationHelper.allocatePrecomputedChildCount(v, groupCount, 50, innerValueCount);
        }
    } catch (OutOfMemoryException e) {
        clear();
        throw e;
    }
    mutator.reset();
}
Also used : ValueVector(org.apache.drill.exec.vector.ValueVector) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
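
The pattern here is allocate-all-or-nothing: if any child allocation fails, everything allocated so far is cleared before the exception propagates, so callers never see a half-allocated vector. A minimal sketch of the same discipline using plain arrays (illustrative; Java's built-in OutOfMemoryError stands in for Drill's OutOfMemoryException):

class AllOrNothingAllocation {

    static int[][] allocateAll(int numChildren, int innerValueCount) {
        int[][] children = new int[numChildren][];
        try {
            for (int i = 0; i < numChildren; i++) {
                children[i] = new int[innerValueCount]; // may throw OutOfMemoryError
            }
        } catch (OutOfMemoryError e) {
            for (int i = 0; i < numChildren; i++) {
                children[i] = null; // drop partial allocations before rethrowing
            }
            throw e;
        }
        return children;
    }
}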

Example 35 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class BaseAllocator method buffer.

@Override
public DrillBuf buffer(final int initialRequestSize, BufferManager manager) {
    assertOpen();
    Preconditions.checkArgument(initialRequestSize >= 0, "the requested size must be non-negative");
    if (initialRequestSize == 0) {
        return empty;
    }
    // round to next largest power of two if we're within a chunk since that is how our allocator operates
    final int actualRequestSize = initialRequestSize < CHUNK_SIZE ? nextPowerOfTwo(initialRequestSize) : initialRequestSize;
    AllocationOutcome outcome = allocateBytes(actualRequestSize);
    if (!outcome.isOk()) {
        throw new OutOfMemoryException(createErrorMsg(this, actualRequestSize, initialRequestSize));
    }
    boolean success = false;
    try {
        DrillBuf buffer = bufferWithoutReservation(actualRequestSize, manager);
        success = true;
        return buffer;
    } finally {
        if (!success) {
            releaseBytes(actualRequestSize);
        }
    }
}
Also used : OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException) DrillBuf(io.netty.buffer.DrillBuf)
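
The rounding mentioned in the comment is the usual next-power-of-two computation that buddy-style allocators use so sub-chunk requests map onto a small set of size classes. A sketch of a typical implementation (Drill's own helper may differ in details):

class PowerOfTwo {

    // Round val up to the next power of two; values that already are a
    // power of two are returned unchanged. Assumes 0 < val <= 2^30.
    static int nextPowerOfTwo(int val) {
        int highestBit = Integer.highestOneBit(val);
        return highestBit == val ? val : highestBit << 1;
    }
}

For example, a request of 4000 bytes rounds up to 4096, while a request of exactly 4096 stays as-is.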

Aggregations

OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException): 44 usages
DrillBuf (io.netty.buffer.DrillBuf): 12 usages
SelectionVector2 (org.apache.drill.exec.record.selection.SelectionVector2): 10 usages
Test (org.junit.Test): 10 usages
IOException (java.io.IOException): 9 usages
SchemaChangeException (org.apache.drill.exec.exception.SchemaChangeException): 8 usages
ByteBuf (io.netty.buffer.ByteBuf): 6 usages
BufferAllocator (org.apache.drill.exec.memory.BufferAllocator): 6 usages
LogFixture (org.apache.drill.test.LogFixture): 6 usages
LogFixtureBuilder (org.apache.drill.test.LogFixture.LogFixtureBuilder): 6 usages
SubOperatorTest (org.apache.drill.test.SubOperatorTest): 6 usages
MemoryTest (org.apache.drill.categories.MemoryTest): 4 usages
RetryAfterSpillException (org.apache.drill.common.exceptions.RetryAfterSpillException): 4 usages
Accountant (org.apache.drill.exec.memory.Accountant): 4 usages
RecordBatchData (org.apache.drill.exec.physical.impl.sort.RecordBatchData): 3 usages
DrillbitEndpoint (org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint): 3 usages
ValueVector (org.apache.drill.exec.vector.ValueVector): 3 usages
Stopwatch (com.google.common.base.Stopwatch): 2 usages
CompositeByteBuf (io.netty.buffer.CompositeByteBuf): 2 usages
CorruptedFrameException (io.netty.handler.codec.CorruptedFrameException): 2 usages