
Example 1 with CorruptRecordException

Use of org.apache.kafka.common.errors.CorruptRecordException in the apache/kafka project.

From the class ByteBufferLogInputStream, method nextEntry:

public ByteBufferLogEntry nextEntry() throws IOException {
    int remaining = buffer.remaining();
    // Not even a full log header (offset + size) left: no complete entry remains.
    if (remaining < LOG_OVERHEAD)
        return null;
    // Peek at the size field without moving the buffer's position.
    int recordSize = buffer.getInt(buffer.position() + Records.SIZE_OFFSET);
    if (recordSize < Record.RECORD_OVERHEAD_V0)
        throw new CorruptRecordException(String.format("Record size is less than the minimum record overhead (%d)", Record.RECORD_OVERHEAD_V0));
    if (recordSize > maxMessageSize)
        throw new CorruptRecordException(String.format("Record size exceeds the largest allowable message size (%d).", maxMessageSize));
    int entrySize = recordSize + LOG_OVERHEAD;
    // The entry is only partially present in the buffer.
    if (remaining < entrySize)
        return null;
    // Hand out a slice covering exactly this entry and advance past it.
    ByteBuffer entrySlice = buffer.slice();
    entrySlice.limit(entrySize);
    buffer.position(buffer.position() + entrySize);
    return new ByteBufferLogEntry(entrySlice);
}
Also used: org.apache.kafka.common.errors.CorruptRecordException, java.nio.ByteBuffer
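The validation pattern above can be sketched in isolation. The following is a minimal, self-contained stand-in; the EntryScanner class, its constants, and the local CorruptRecordException are assumptions for illustration, not Kafka's actual classes:

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for org.apache.kafka.common.errors.CorruptRecordException,
// declared locally so the sketch compiles without the Kafka client jar.
class CorruptRecordException extends RuntimeException {
    CorruptRecordException(String message) { super(message); }
}

public class EntryScanner {
    // Assumed layout: an 8-byte offset followed by a 4-byte size field.
    static final int LOG_OVERHEAD = 12;
    static final int SIZE_OFFSET = 8;
    static final int RECORD_OVERHEAD_V0 = 14; // assumed minimum record overhead

    // Returns the next entry as a slice of `buffer`, or null if no complete entry remains.
    public static ByteBuffer nextEntry(ByteBuffer buffer, int maxMessageSize) {
        int remaining = buffer.remaining();
        if (remaining < LOG_OVERHEAD)
            return null;
        // Absolute read: peek at the size field without moving the position.
        int recordSize = buffer.getInt(buffer.position() + SIZE_OFFSET);
        if (recordSize < RECORD_OVERHEAD_V0)
            throw new CorruptRecordException("Record size " + recordSize
                    + " is less than the minimum record overhead (" + RECORD_OVERHEAD_V0 + ")");
        if (recordSize > maxMessageSize)
            throw new CorruptRecordException("Record size " + recordSize
                    + " exceeds the largest allowable message size (" + maxMessageSize + ")");
        int entrySize = recordSize + LOG_OVERHEAD;
        if (remaining < entrySize)
            return null;
        // Slice covers exactly this entry; the source buffer advances past it.
        ByteBuffer slice = buffer.slice();
        slice.limit(entrySize);
        buffer.position(buffer.position() + entrySize);
        return slice;
    }
}
```

Returning null for a partial entry (rather than throwing) lets the caller wait for more bytes, while a size outside the sane range fails fast as corruption.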

Example 2 with CorruptRecordException

Use of org.apache.kafka.common.errors.CorruptRecordException in the apache/kafka project.

From the class RecordSendTest, method testError:

/**
 * Test that an asynchronous request will eventually throw the right exception.
 */
@Test(expected = ExecutionException.class)
public void testError() throws Exception {
    FutureRecordMetadata future = new FutureRecordMetadata(asyncRequest(baseOffset, new CorruptRecordException(), 50L), relOffset, Record.NO_TIMESTAMP, 0, 0, 0);
    future.get();
}
Also used: org.apache.kafka.clients.producer.internals.FutureRecordMetadata, org.apache.kafka.common.errors.CorruptRecordException, org.junit.Test
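The test passes because a failed asynchronous send surfaces its error only when the future is resolved, wrapped in an ExecutionException. That unwrapping behavior can be sketched with a plain CompletableFuture; FutureErrorDemo and its nested exception class are hypothetical stand-ins, not Kafka API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class FutureErrorDemo {
    // Local stand-in for Kafka's CorruptRecordException, so the sketch is self-contained.
    public static class CorruptRecordException extends RuntimeException {
        public CorruptRecordException(String message) { super(message); }
    }

    // Simulates a send whose broker-side error surfaces only on future resolution.
    public static CompletableFuture<Long> failedSend() {
        CompletableFuture<Long> future = new CompletableFuture<>();
        future.completeExceptionally(new CorruptRecordException("record failed CRC check"));
        return future;
    }

    // Resolves the future and returns the class of the cause unwrapped
    // from the ExecutionException, or null if the future succeeded.
    public static Class<?> causeOf(CompletableFuture<Long> future) throws InterruptedException {
        try {
            future.get();
            return null;
        } catch (ExecutionException e) {
            // get() never rethrows the original exception directly; it always
            // wraps it, which is why the test expects ExecutionException.
            return e.getCause().getClass();
        }
    }
}
```

A caller that cares about the specific failure must inspect `getCause()` rather than catch the original exception type.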

Example 3 with CorruptRecordException

Use of org.apache.kafka.common.errors.CorruptRecordException in the apache/kafka project.

From the class FileLogInputStream, method nextEntry:

@Override
public FileChannelLogEntry nextEntry() throws IOException {
    // Not enough bytes left in the file segment for a log header.
    if (position + Records.LOG_OVERHEAD >= end)
        return null;
    logHeaderBuffer.rewind();
    Utils.readFullyOrFail(channel, logHeaderBuffer, position, "log header");
    logHeaderBuffer.rewind();
    long offset = logHeaderBuffer.getLong();
    int size = logHeaderBuffer.getInt();
    if (size < Record.RECORD_OVERHEAD_V0)
        throw new CorruptRecordException(String.format("Record size is smaller than minimum record overhead (%d).", Record.RECORD_OVERHEAD_V0));
    if (size > maxRecordSize)
        throw new CorruptRecordException(String.format("Record size exceeds the largest allowable message size (%d).", maxRecordSize));
    // The record extends past the end of this segment: no complete entry.
    if (position + Records.LOG_OVERHEAD + size > end)
        return null;
    // Lazily-loaded entry backed by the channel; advance past it.
    FileChannelLogEntry logEntry = new FileChannelLogEntry(offset, channel, position, size);
    position += logEntry.sizeInBytes();
    return logEntry;
}
Also used: org.apache.kafka.common.errors.CorruptRecordException
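The read-fully-or-fail pattern this example relies on can be sketched with plain java.nio. The readFullyOrFail and readHeader methods below are hypothetical reimplementations for illustration, not Kafka's actual Utils:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LogHeaderReader {
    // Reads exactly buffer.remaining() bytes starting at `position`, failing
    // loudly on a short read instead of returning a partially filled buffer.
    public static void readFullyOrFail(FileChannel channel, ByteBuffer buffer,
                                       long position, String description) throws IOException {
        long pos = position;
        while (buffer.hasRemaining()) {
            // Positional read: does not move the channel's own position.
            int read = channel.read(buffer, pos);
            if (read < 0)
                throw new IOException("Hit end of file while reading " + description);
            pos += read;
        }
    }

    // Reads an assumed 12-byte log header (8-byte offset, 4-byte size) at `position`
    // and returns { offset, size }.
    public static long[] readHeader(FileChannel channel, long position) throws IOException {
        ByteBuffer header = ByteBuffer.allocate(12);
        readFullyOrFail(channel, header, position, "log header");
        header.rewind();
        long offset = header.getLong();
        int size = header.getInt();
        return new long[] { offset, size };
    }
}
```

Insisting on a full header before parsing is what lets the size checks that follow be meaningful: a short read would otherwise produce garbage values that look like corruption.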

Aggregations

CorruptRecordException (org.apache.kafka.common.errors.CorruptRecordException): 3 usages
ByteBuffer (java.nio.ByteBuffer): 1 usage
FutureRecordMetadata (org.apache.kafka.clients.producer.internals.FutureRecordMetadata): 1 usage
Test (org.junit.Test): 1 usage