
Example 36 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class SaslEncryptionHandler method encode.

public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
    if (!ctx.channel().isOpen()) {
        logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " + "So releasing msg memory before encryption.");
        msg.release();
        return;
    }
    try {
        // If encryption is enabled then this handler will always receive a CompositeByteBuf
        assert (msg instanceof CompositeByteBuf);
        final CompositeByteBuf cbb = (CompositeByteBuf) msg;
        final int numComponents = cbb.numComponents();
        // Get all the components inside the Composite ByteBuf for encryption
        for (int currentIndex = 0; currentIndex < numComponents; ++currentIndex) {
            final ByteBuf component = cbb.component(currentIndex);
            // An earlier handler in the pipeline breaks the RPC message into chunks of wrapSizeLimit,
            // so no component should ever exceed that limit.
            if (component.readableBytes() > wrapSizeLimit) {
                throw new RpcException(String.format("Component Chunk size: %d is greater than the wrapSizeLimit: %d", component.readableBytes(), wrapSizeLimit));
            }
            // Uncomment the code below if msg can contain both direct and heap ByteBufs. Currently Drill only
            // supports DirectByteBuf, so the condition below will always be false. If the msgs are always
            // HeapByteBufs, then also remove the allocation of origMsgBuffer from the constructor.
            /*if (component.hasArray()) {
                origMsg = component.array();
            } else {
                if (RpcConstants.EXTRA_DEBUGGING) {
                    logger.trace("The input bytebuf is not backed by a byte array so allocating a new one");
                }
            }*/
            final byte[] origMsg = origMsgBuffer;
            component.getBytes(component.readerIndex(), origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Trying to encrypt chunk of size:{} with wrapSizeLimit:{}", component.readableBytes(), wrapSizeLimit);
            }
            // The length to encrypt is the component length, not the origMsg length, since the latter can be greater.
            final byte[] wrappedMsg = saslCodec.wrap(origMsg, 0, component.readableBytes());
            if (logger.isTraceEnabled()) {
                logger.trace("Successfully encrypted message, original size: {} Final Size: {}", component.readableBytes(), wrappedMsg.length);
            }
            // Allocate the buffer (a direct ByteBuf) to hold the encrypted byte array plus 4 octets for the length
            // of the encrypted message. Direct memory is preferred since, if the passed buffer were not in direct
            // memory, the channel would later copy it into a temporary direct buffer cached to the thread, and the
            // size of that temporary direct memory would be the size of the largest message sent.
            final ByteBuf encryptedBuf = ctx.alloc().buffer(wrappedMsg.length + RpcConstants.LENGTH_FIELD_LENGTH);
            // Per SASL RFCs 2222/4422, the first 4 octets must hold the length of the encrypted buffer in network
            // byte order. The SASL framework provided by the JDK doesn't do this by default and leaves it up to the
            // application, whereas the Cyrus SASL implementation of sasl_encode does take care of it.
            lengthOctets.putInt(wrappedMsg.length);
            encryptedBuf.writeBytes(lengthOctets.array());
            // Reset the position for reuse in the next iteration
            lengthOctets.rewind();
            // Write the encrypted bytes inside the buffer
            encryptedBuf.writeBytes(wrappedMsg);
            // Update the msg and component reader index
            msg.skipBytes(component.readableBytes());
            component.skipBytes(component.readableBytes());
            // Add the encrypted buffer into the output to send it on wire.
            out.add(encryptedBuf);
        }
    } catch (OutOfMemoryException e) {
        logger.warn("Failure allocating buffer on incoming stream due to memory limits.");
        msg.resetReaderIndex();
        outOfMemoryHandler.handle();
    } catch (IOException e) {
        logger.error("Something went wrong while wrapping the message: {} with MaxRawWrapSize: {}, " + "and error: {}", msg, wrapSizeLimit, e.getMessage());
        throw e;
    }
}
Also used : CompositeByteBuf(io.netty.buffer.CompositeByteBuf) IOException(java.io.IOException) CompositeByteBuf(io.netty.buffer.CompositeByteBuf) ByteBuf(io.netty.buffer.ByteBuf) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
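
The framing the encoder produces is easy to illustrate in isolation: a 4-octet big-endian length prefix followed by the SASL-wrapped payload. Below is a minimal, hypothetical sketch of that framing (the SaslFrameSketch class and frame helper are illustrative, not part of Drill's API):

import java.nio.ByteBuffer;

// Hypothetical helper illustrating the framing produced above: a 4-octet
// big-endian (network byte order) length prefix followed by the wrapped payload.
public class SaslFrameSketch {

    static byte[] frame(byte[] wrappedMsg) {
        // ByteBuffer defaults to big-endian, matching the lengthOctets buffer
        // used in SaslEncryptionHandler.encode().
        ByteBuffer framed = ByteBuffer.allocate(4 + wrappedMsg.length);
        framed.putInt(wrappedMsg.length); // 4-octet length prefix
        framed.put(wrappedMsg);           // encrypted payload
        return framed.array();
    }

    public static void main(String[] args) {
        byte[] framed = frame(new byte[] { 1, 2, 3 });
        System.out.println(framed.length); // 7: 4 length octets + 3 payload bytes
    }
}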

Example 37 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class ProtobufLengthDecoder method decode.

@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
    if (!ctx.channel().isOpen()) {
        if (in.readableBytes() > 0) {
            logger.info("Channel is closed, discarding remaining {} byte(s) in buffer.", in.readableBytes());
        }
        in.skipBytes(in.readableBytes());
        return;
    }
    in.markReaderIndex();
    final byte[] buf = new byte[5];
    for (int i = 0; i < buf.length; i++) {
        if (!in.isReadable()) {
            in.resetReaderIndex();
            return;
        }
        buf[i] = in.readByte();
        if (buf[i] >= 0) {
            int length = CodedInputStream.newInstance(buf, 0, i + 1).readRawVarint32();
            if (length < 0) {
                throw new CorruptedFrameException("negative length: " + length);
            }
            if (length == 0) {
                throw new CorruptedFrameException("Received a message of length 0.");
            }
            if (in.readableBytes() < length) {
                in.resetReaderIndex();
                return;
            } else {
                // need to make buffer copy, otherwise netty will try to refill this buffer if we move the readerIndex forward...
                // TODO: Can we avoid this copy?
                ByteBuf outBuf;
                try {
                    outBuf = allocator.buffer(length);
                } catch (OutOfMemoryException e) {
                    logger.warn("Failure allocating buffer on incoming stream due to memory limits.  Current Allocation: {}.", allocator.getAllocatedMemory());
                    in.resetReaderIndex();
                    outOfMemoryHandler.handle();
                    return;
                }
                outBuf.writeBytes(in, in.readerIndex(), length);
                in.skipBytes(length);
                if (RpcConstants.EXTRA_DEBUGGING) {
                    logger.debug(String.format("ReaderIndex is %d after length header of %d bytes and frame body of length %d bytes.", in.readerIndex(), i + 1, length));
                }
                out.add(outBuf);
                return;
            }
        }
    }
    // Couldn't find the byte whose MSB is off.
    throw new CorruptedFrameException("length wider than 32-bit");
}
Also used : CorruptedFrameException(io.netty.handler.codec.CorruptedFrameException) ByteBuf(io.netty.buffer.ByteBuf) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
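
The decoder scans at most five bytes for a protobuf varint32 length header; a byte whose most significant bit is clear (non-negative as a signed byte) terminates the header. A minimal standalone sketch of that decoding logic (illustrative only; the real code delegates to protobuf's CodedInputStream):

// Self-contained sketch of varint32 decoding: each byte contributes 7 bits,
// and a byte with its MSB clear terminates the header. Returns -1 when the
// header is incomplete. Illustrative only; not Drill code.
public class VarintSketch {

    static int readRawVarint32(byte[] buf, int available) {
        int result = 0;
        for (int i = 0; i < 5 && i < available; i++) {
            result |= (buf[i] & 0x7F) << (7 * i); // accumulate low 7 bits
            if (buf[i] >= 0) {                    // MSB clear: header complete
                return result;
            }
        }
        return -1; // incomplete header (or wider than 32-bit past 5 bytes)
    }

    public static void main(String[] args) {
        // 300 encodes as 0xAC 0x02 in varint32.
        System.out.println(readRawVarint32(new byte[] { (byte) 0xAC, 0x02 }, 2)); // 300
    }
}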

Example 38 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class RecordBatchSizerManager method allocate.

/**
 * Allocates value vectors for the current batch.
 *
 * @param vectorMap a collection of value vectors keyed by their field names
 * @throws OutOfMemoryException if a value vector cannot be allocated
 */
public void allocate(Map<String, ValueVector> vectorMap) throws OutOfMemoryException {
    if (columnPrecisionChanged) {
        // We need to divide the overall memory pool amongst all columns
        assignColumnsBatchMemory();
    }
    try {
        for (final ValueVector v : vectorMap.values()) {
            ColumnMemoryInfo columnMemoryInfo = columnMemoryInfoMap.get(v.getField().getName());
            if (columnMemoryInfo != null) {
                Preconditions.checkState(columnMemoryInfo.columnPrecision <= Integer.MAX_VALUE, "Column precision cannot exceed 2GB");
                AllocationHelper.allocate(v, recordsPerBatch, (int) columnMemoryInfo.columnPrecision, 0);
            } else {
                // This column was found in another Parquet file but not in the current one, so we inject
                // a null value. At this time we do not account for such columns; the right design is to
                // create a zero-byte, all-nulls value vector to handle them (there could be hundreds).
                // The helper will still use a precision of 1.
                AllocationHelper.allocate(v, recordsPerBatch, 0, 0);
            }
        }
    } catch (NullPointerException e) {
        throw new OutOfMemoryException();
    }
}
Also used : ValueVector(org.apache.drill.exec.vector.ValueVector) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException)
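
Before allocating, the sizer divides a fixed batch memory budget across columns (assignColumnsBatchMemory). A simplified, hypothetical sketch of that budgeting idea, assuming an even split (Drill's actual implementation weighs columns by their precision):

import java.util.List;

// Hypothetical sketch of dividing a batch memory budget across columns.
// Drill's real assignColumnsBatchMemory() is more elaborate; this only
// illustrates the budgeting idea with an even split.
public class ColumnBudgetSketch {

    static long perColumnBudget(long totalBatchMemory, List<String> columns) {
        if (columns.isEmpty()) {
            return 0;
        }
        return totalBatchMemory / columns.size(); // even split across columns
    }

    public static void main(String[] args) {
        long budget = perColumnBudget(16L * 1024 * 1024, List.of("a", "b", "c", "d"));
        System.out.println(budget); // 4194304 bytes, i.e. 4 MiB per column
    }
}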

Example 39 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class TestLenientAllocation method testLenientLimit.

@Test
public void testLenientLimit() {
    LogFixtureBuilder logBuilder = LogFixture.builder().logger(Accountant.class, Level.WARN);
    try (LogFixture logFixture = logBuilder.build()) {
        // Test can't run without assertions
        assertTrue(AssertionUtil.isAssertionsEnabled());
        // Create a child allocator
        BufferAllocator allocator = fixture.allocator().newChildAllocator("test", 10 * ONE_MEG, 128 * ONE_MEG);
        ((Accountant) allocator).forceLenient();
        // Allocate most of the available memory
        DrillBuf buf1 = allocator.buffer(64 * ONE_MEG);
        // Oops, we did our math wrong; allocate too large a buffer.
        DrillBuf buf2 = allocator.buffer(128 * ONE_MEG);
        try {
            allocator.buffer(64 * ONE_MEG);
            fail();
        } catch (OutOfMemoryException e) {
            // Expected
        }
        // Clean up
        buf1.close();
        buf2.close();
        allocator.close();
    }
}
Also used : LogFixture(org.apache.drill.test.LogFixture) Accountant(org.apache.drill.exec.memory.Accountant) LogFixtureBuilder(org.apache.drill.test.LogFixture.LogFixtureBuilder) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException) BufferAllocator(org.apache.drill.exec.memory.BufferAllocator) DrillBuf(io.netty.buffer.DrillBuf) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
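
The test relies on lenient allocation: after forceLenient(), the child allocator treats its configured maximum (128 MB here) as a soft limit that may be exceeded up to a bounded lenience ceiling before OutOfMemoryException is thrown. A toy two-tier accountant sketch with hypothetical names (Drill's Accountant adds lenience heuristics, parent delegation, and thread safety):

// Toy two-tier accountant: allocations may exceed the soft limit (with a
// warning) but never the hard lenience ceiling. Hypothetical; not Drill's API.
public class LenientAccountantSketch {

    private final long softLimit; // configured maximum
    private final long hardLimit; // lenience ceiling
    private long allocated;

    LenientAccountantSketch(long softLimit, long hardLimit) {
        this.softLimit = softLimit;
        this.hardLimit = hardLimit;
    }

    boolean allocate(long bytes) {
        if (allocated + bytes > hardLimit) {
            return false; // Drill would throw OutOfMemoryException here
        }
        if (allocated + bytes > softLimit) {
            System.out.println("WARN: exceeding soft limit leniently");
        }
        allocated += bytes;
        return true;
    }

    public static void main(String[] args) {
        LenientAccountantSketch a = new LenientAccountantSketch(128, 200);
        System.out.println(a.allocate(64));  // true: under the soft limit
        System.out.println(a.allocate(128)); // true with warning: 192 > 128 soft, <= 200 hard
        System.out.println(a.allocate(64));  // false: 256 would exceed the hard ceiling
    }
}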

Example 40 with OutOfMemoryException

use of org.apache.drill.exec.exception.OutOfMemoryException in project drill by apache.

the class TestLenientAllocation method testStrict.

/**
 * Test that the allocator is normally strict in debug mode.
 */
@Test
public void testStrict() {
    LogFixtureBuilder logBuilder = LogFixture.builder().logger(Accountant.class, Level.WARN);
    try (LogFixture logFixture = logBuilder.build()) {
        // Test can't run without assertions
        assertTrue(AssertionUtil.isAssertionsEnabled());
        // Create a child allocator
        BufferAllocator allocator = fixture.allocator().newChildAllocator("test", 10 * 1024, 128 * 1024);
        // Allocate most of the available memory
        DrillBuf buf1 = allocator.buffer(64 * 1024);
        try {
            allocator.buffer(128 * 1024);
            fail();
        } catch (OutOfMemoryException e) {
            // Expected
        }
        // Clean up
        buf1.close();
        allocator.close();
    }
}
Also used : LogFixture(org.apache.drill.test.LogFixture) LogFixtureBuilder(org.apache.drill.test.LogFixture.LogFixtureBuilder) OutOfMemoryException(org.apache.drill.exec.exception.OutOfMemoryException) BufferAllocator(org.apache.drill.exec.memory.BufferAllocator) DrillBuf(io.netty.buffer.DrillBuf) SubOperatorTest(org.apache.drill.test.SubOperatorTest) Test(org.junit.Test)
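
Strict mode is the degenerate case of the same two-tier check: the soft and hard limits coincide, so the first allocation past the configured maximum fails. Continuing the hypothetical sketch from Example 39:

// Strict mode: soft and hard limits coincide, so any allocation past the
// configured maximum fails immediately. Hypothetical; reuses the sketch above.
public class StrictAccountantSketch {

    public static void main(String[] args) {
        LenientAccountantSketch strict = new LenientAccountantSketch(128 * 1024, 128 * 1024);
        System.out.println(strict.allocate(64 * 1024));  // true
        System.out.println(strict.allocate(128 * 1024)); // false: 192 KB > 128 KB max
    }
}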

Aggregations

OutOfMemoryException (org.apache.drill.exec.exception.OutOfMemoryException) 44
DrillBuf (io.netty.buffer.DrillBuf) 12
SelectionVector2 (org.apache.drill.exec.record.selection.SelectionVector2) 10
Test (org.junit.Test) 10
IOException (java.io.IOException) 9
SchemaChangeException (org.apache.drill.exec.exception.SchemaChangeException) 8
ByteBuf (io.netty.buffer.ByteBuf) 6
BufferAllocator (org.apache.drill.exec.memory.BufferAllocator) 6
LogFixture (org.apache.drill.test.LogFixture) 6
LogFixtureBuilder (org.apache.drill.test.LogFixture.LogFixtureBuilder) 6
SubOperatorTest (org.apache.drill.test.SubOperatorTest) 6
MemoryTest (org.apache.drill.categories.MemoryTest) 4
RetryAfterSpillException (org.apache.drill.common.exceptions.RetryAfterSpillException) 4
Accountant (org.apache.drill.exec.memory.Accountant) 4
RecordBatchData (org.apache.drill.exec.physical.impl.sort.RecordBatchData) 3
DrillbitEndpoint (org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint) 3
ValueVector (org.apache.drill.exec.vector.ValueVector) 3
Stopwatch (com.google.common.base.Stopwatch) 2
CompositeByteBuf (io.netty.buffer.CompositeByteBuf) 2
CorruptedFrameException (io.netty.handler.codec.CorruptedFrameException) 2