
Example 11 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

From the class BaseAllocator, method buffer.

@Override
public DrillBuf buffer(final int initialRequestSize, BufferManager manager) {
    assertOpen();
    Preconditions.checkArgument(initialRequestSize >= 0, "the requested size must be non-negative");
    if (initialRequestSize == 0) {
        return empty;
    }
    // round to next largest power of two if we're within a chunk since that is how our allocator operates
    final int actualRequestSize = initialRequestSize < CHUNK_SIZE ? nextPowerOfTwo(initialRequestSize) : initialRequestSize;
    AllocationOutcome outcome = this.allocateBytes(actualRequestSize);
    if (!outcome.isOk()) {
        throw new OutOfMemoryException(createErrorMsg(this, actualRequestSize, initialRequestSize));
    }
    boolean success = false;
    try {
        DrillBuf buffer = bufferWithoutReservation(actualRequestSize, manager);
        success = true;
        return buffer;
    } finally {
        if (!success) {
            releaseBytes(actualRequestSize);
        }
    }
}
Also used: OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException), DrillBuf (io.netty.buffer.DrillBuf)
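
For context, here is a minimal caller-side sketch (not from the Drill sources) of reacting to this exception when requesting a DrillBuf. The helper name tryBuffer and the null-on-failure policy are illustrative assumptions, and the BufferAllocator interface is assumed to live in org.apache.drill.exec.memory.

import io.netty.buffer.DrillBuf;
import org.apache.drill.exec.exception.OutOfMemoryException;
import org.apache.drill.exec.memory.BufferAllocator;

public class BufferRequestSketch {
    // Hypothetical helper: ask the allocator for a buffer and return null when the
    // request cannot be satisfied within the allocator's limits, so the caller can
    // spill, retry with a smaller size, or fail the query gracefully.
    public static DrillBuf tryBuffer(BufferAllocator allocator, int size) {
        try {
            return allocator.buffer(size);
        } catch (OutOfMemoryException e) {
            return null;
        }
    }
}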

Example 12 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

From the class RepeatedMapVector, method allocateNew.

@Override
public void allocateNew(int groupCount, int innerValueCount) {
    clear();
    try {
        offsets.allocateNew(groupCount + 1);
        for (ValueVector v : getChildren()) {
            AllocationHelper.allocatePrecomputedChildCount(v, groupCount, 50, innerValueCount);
        }
    } catch (OutOfMemoryException e) {
        clear();
        throw e;
    }
    offsets.zeroVector();
    mutator.reset();
}
Also used: ValueVector (org.apache.drill.exec.vector.ValueVector), OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException)
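
The pattern worth noting in this example is clean-up-then-rethrow: if any child allocation fails, release the partially allocated memory before propagating the exception. Below is a short sketch of the same pattern seen from the caller's side, assuming an already constructed RepeatedMapVector (its package path is taken to be org.apache.drill.exec.vector.complex; the helper name is illustrative).

import org.apache.drill.exec.exception.OutOfMemoryException;
import org.apache.drill.exec.vector.complex.RepeatedMapVector;

public class AllocateOrClearSketch {
    // Attempt the two-level allocation shown above; on OutOfMemoryException make sure
    // nothing is left half-allocated, then let the exception reach the operator.
    public static void allocateOrClear(RepeatedMapVector vector, int groupCount, int innerValueCount) {
        try {
            vector.allocateNew(groupCount, innerValueCount);
        } catch (OutOfMemoryException e) {
            vector.clear();
            throw e;
        }
    }
}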

Example 13 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

From the class ProtobufLengthDecoder, method decode.

@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
    if (!ctx.channel().isOpen()) {
        if (in.readableBytes() > 0) {
            logger.info("Channel is closed, discarding remaining {} byte(s) in buffer.", in.readableBytes());
        }
        in.skipBytes(in.readableBytes());
        return;
    }
    in.markReaderIndex();
    final byte[] buf = new byte[5];
    for (int i = 0; i < buf.length; i++) {
        if (!in.isReadable()) {
            in.resetReaderIndex();
            return;
        }
        buf[i] = in.readByte();
        if (buf[i] >= 0) {
            int length = CodedInputStream.newInstance(buf, 0, i + 1).readRawVarint32();
            if (length < 0) {
                throw new CorruptedFrameException("negative length: " + length);
            }
            if (length == 0) {
                throw new CorruptedFrameException("Received a message of length 0.");
            }
            if (in.readableBytes() < length) {
                in.resetReaderIndex();
                return;
            } else {
                // need to make buffer copy, otherwise netty will try to refill this buffer if we move the readerIndex forward...
                // TODO: Can we avoid this copy?
                ByteBuf outBuf;
                try {
                    outBuf = allocator.buffer(length);
                } catch (OutOfMemoryException e) {
                    logger.warn("Failure allocating buffer on incoming stream due to memory limits.  Current Allocation: {}.", allocator.getAllocatedMemory());
                    in.resetReaderIndex();
                    outOfMemoryHandler.handle();
                    return;
                }
                outBuf.writeBytes(in, in.readerIndex(), length);
                in.skipBytes(length);
                if (RpcConstants.EXTRA_DEBUGGING) {
                    logger.debug(String.format("ReaderIndex is %d after length header of %d bytes and frame body of length %d bytes.", in.readerIndex(), i + 1, length));
                }
                out.add(outBuf);
                return;
            }
        }
    }
    // Couldn't find the byte whose MSB is off.
    throw new CorruptedFrameException("length wider than 32-bit");
}
Also used: CorruptedFrameException (io.netty.handler.codec.CorruptedFrameException), ByteBuf (io.netty.buffer.ByteBuf), OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException)
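
The framing trick in decode is to consume the protobuf varint32 length prefix one byte at a time until a byte with the most significant bit clear terminates it. Here is a self-contained sketch of just that step, using CodedInputStream the same way; the class name and sample bytes are made up for illustration.

import com.google.protobuf.CodedInputStream;
import java.io.IOException;

public class VarintFrameSketch {
    // Reads a varint32 length prefix from the head of 'bytes'. A varint byte whose
    // most significant bit is clear (value >= 0 as a signed byte) terminates the
    // prefix; at most 5 bytes are needed to encode a 32-bit length.
    public static int readLengthPrefix(byte[] bytes) throws IOException {
        for (int i = 0; i < Math.min(5, bytes.length); i++) {
            if (bytes[i] >= 0) {
                return CodedInputStream.newInstance(bytes, 0, i + 1).readRawVarint32();
            }
        }
        throw new IOException("length wider than 32-bit or prefix incomplete");
    }

    public static void main(String[] args) throws IOException {
        // 0x96 0x01 encodes 150 as a varint; the remaining bytes stand in for the frame body.
        byte[] sample = { (byte) 0x96, 0x01, 0x00, 0x00, 0x00 };
        System.out.println(readLengthPrefix(sample)); // prints 150
    }
}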

Example 14 with OutOfMemoryException

Use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

From the class SaslEncryptionHandler, method encode.

public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
    if (!ctx.channel().isOpen()) {
        logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " + "So releasing msg memory before encryption.");
        msg.release();
        return;
    }
    try {
        // If encryption is enabled then this handler will always get ByteBuf of type Composite ByteBuf
        assert (msg instanceof CompositeByteBuf);
        final CompositeByteBuf cbb = (CompositeByteBuf) msg;
        final int numComponents = cbb.numComponents();
        // Get all the components inside the Composite ByteBuf for encryption
        for (int currentIndex = 0; currentIndex < numComponents; ++currentIndex) {
            final ByteBuf component = cbb.component(currentIndex);
            // Each component is expected to already be chunked to at most wrapSizeLimit by the upstream encoder.
            if (component.readableBytes() > wrapSizeLimit) {
                throw new RpcException(String.format("Component Chunk size: %d is greater than the wrapSizeLimit: %d", component.readableBytes(), wrapSizeLimit));
            }
            // Uncomment the code below if msg can contain both Direct and Heap ByteBufs. Currently Drill only supports
            // DirectByteBuf, so the condition below will always be false. If the msg is always a HeapByteBuf then, in
            // addition, also remove the allocation of origMsgBuffer from the constructor.
            /*if (component.hasArray()) {
          origMsg = component.array();
        } else {

        if (RpcConstants.EXTRA_DEBUGGING) {
          logger.trace("The input bytebuf is not backed by a byte array so allocating a new one");
        }*/
            final byte[] origMsg = origMsgBuffer;
            component.getBytes(component.readerIndex(), origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Trying to encrypt chunk of size:{} with wrapSizeLimit:{} and chunkMode: {}", component.readableBytes(), wrapSizeLimit);
            }
            // Length to encrypt will be component length not origMsg length since that can be greater.
            final byte[] wrappedMsg = saslCodec.wrap(origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Successfully encrypted message, original size: {} Final Size: {}", component.readableBytes(), wrappedMsg.length);
            }
            // Allocate the buffer (a direct ByteBuf) to hold the encrypted byte array plus 4 octets for the length of
            // the encrypted message. This is preferred since, if the passed buffer were not in direct memory, the
            // channel would later copy it into a temporary direct buffer cached to the thread, sized to the largest
            // message sent.
            final ByteBuf encryptedBuf = ctx.alloc().buffer(wrappedMsg.length + RpcConstants.LENGTH_FIELD_LENGTH);
            // Per SASL RFCs 2222/4422, the wrapped buffer must be preceded by 4 octets giving its length in network
            // byte order. The SASL framework provided by the JDK does not add this prefix and leaves it up to the
            // application, whereas the Cyrus SASL implementation of sasl_encode does take care of it.
            lengthOctets.putInt(wrappedMsg.length);
            encryptedBuf.writeBytes(lengthOctets.array());
            // reset the position for re-use in next round
            lengthOctets.rewind();
            // Write the encrypted bytes inside the buffer
            encryptedBuf.writeBytes(wrappedMsg);
            // Update the msg and component reader index
            msg.skipBytes(component.readableBytes());
            component.skipBytes(component.readableBytes());
            // Add the encrypted buffer into the output to send it on wire.
            out.add(encryptedBuf);
        }
    } catch (OutOfMemoryException e) {
        logger.warn("Failure allocating buffer on incoming stream due to memory limits.");
        msg.resetReaderIndex();
        outOfMemoryHandler.handle();
    } catch (IOException e) {
        logger.error("Something went wrong while wrapping the message: {} with MaxRawWrapSize: {}, ChunkMode: {} " + "and error: {}", msg, wrapSizeLimit, e.getMessage());
        throw e;
    }
}
Also used: CompositeByteBuf (io.netty.buffer.CompositeByteBuf), IOException (java.io.IOException), ByteBuf (io.netty.buffer.ByteBuf), OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException)
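
The 4-octet prefix written before the wrapped bytes is simply the payload length in network byte order. A standalone sketch of that framing, independent of SASL and Netty, follows; the payload stands in for the output of saslCodec.wrap and the class name is illustrative.

import java.nio.ByteBuffer;

public class LengthPrefixSketch {
    // Prepends a 4-byte big-endian (network byte order) length to a payload,
    // matching the "length octets + wrapped bytes" layout used above.
    public static byte[] frame(byte[] wrapped) {
        ByteBuffer framed = ByteBuffer.allocate(4 + wrapped.length); // big-endian by default
        framed.putInt(wrapped.length);
        framed.put(wrapped);
        return framed.array();
    }

    public static void main(String[] args) {
        byte[] payload = { 1, 2, 3 };      // placeholder for wrapped (encrypted) bytes
        byte[] framed = frame(payload);
        System.out.println(framed.length); // 7: 4 length octets + 3 payload bytes
    }
}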

Aggregations

OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException): 14
IOException (java.io.IOException): 5
SelectionVector2 (org.apache.drill.exec.record.selection.SelectionVector2): 4
ByteBuf (io.netty.buffer.ByteBuf): 3
DrillBuf (io.netty.buffer.DrillBuf): 3
SchemaChangeException (org.apache.drill.exec.exception.SchemaChangeException): 3
DrillbitEndpoint (org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint): 2
DrillbitContext (org.apache.drill.exec.server.DrillbitContext): 2
Test (org.junit.Test): 2
Stopwatch (com.google.common.base.Stopwatch): 1
CompositeByteBuf (io.netty.buffer.CompositeByteBuf): 1
CorruptedFrameException (io.netty.handler.codec.CorruptedFrameException): 1
DeferredException (org.apache.drill.common.DeferredException): 1
DrillConfig (org.apache.drill.common.config.DrillConfig): 1
UserException (org.apache.drill.common.exceptions.UserException): 1
ClusterCoordinator (org.apache.drill.exec.coord.ClusterCoordinator): 1
ClassTransformationException (org.apache.drill.exec.exception.ClassTransformationException): 1
FunctionImplementationRegistry (org.apache.drill.exec.expr.fn.FunctionImplementationRegistry): 1
FragmentContext (org.apache.drill.exec.ops.FragmentContext): 1
OpProfileDef (org.apache.drill.exec.ops.OpProfileDef): 1