
Example 36 with ByteBufferAccessor

Use of org.apache.kafka.common.protocol.ByteBufferAccessor in the Apache Kafka project.

From the class MetadataRecordSerdeTest, method testParsingMalformedMessageTypeVarint.

/**
 * Test attempting to parse an event which has a malformed message type varint.
 */
@Test
public void testParsingMalformedMessageTypeVarint() {
    MetadataRecordSerde serde = new MetadataRecordSerde();
    ByteBuffer buffer = ByteBuffer.allocate(64);
    buffer.clear();
    // Frame version byte, followed by six continuation bytes (high bit set).
    // An unsigned varint terminates within five bytes, so the message type
    // varint below can never be decoded.
    buffer.put((byte) 0x01);
    buffer.put((byte) 0x80);
    buffer.put((byte) 0x80);
    buffer.put((byte) 0x80);
    buffer.put((byte) 0x80);
    buffer.put((byte) 0x80);
    buffer.put((byte) 0x80);
    buffer.position(0);
    buffer.limit(64);
    assertStartsWith("Error while reading type",
        assertThrows(MetadataParseException.class,
            () -> serde.read(new ByteBufferAccessor(buffer), buffer.remaining())).getMessage());
}
Also used: ByteBufferAccessor (org.apache.kafka.common.protocol.ByteBufferAccessor), ByteBuffer (java.nio.ByteBuffer), Test (org.junit.jupiter.api.Test)
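
For contrast, a well-formed unsigned varint terminates within five bytes (the high bit of its final byte is clear). The following is a minimal sketch, not taken from the Kafka sources, that round-trips a value through the same accessor:

ByteBuffer buf = ByteBuffer.allocate(16);
ByteBufferAccessor accessor = new ByteBufferAccessor(buf);
// 300 encodes as two bytes, 0xAC 0x02; the second byte's high bit is clear.
accessor.writeUnsignedVarint(300);
buf.flip();
int decoded = accessor.readUnsignedVarint(); // 300

In the malformed test above, every byte after the frame version has the continuation bit (0x80) set, so the type varint exceeds the five-byte limit and serde.read fails with "Error while reading type".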

Example 37 with ByteBufferAccessor

Use of org.apache.kafka.common.protocol.ByteBufferAccessor in the Apache Kafka project.

From the class SnapshotFileReader, method handleControlBatch.

private void handleControlBatch(FileChannelRecordBatch batch) {
    for (Iterator<Record> iter = batch.iterator(); iter.hasNext(); ) {
        Record record = iter.next();
        try {
            // The control record type is encoded in the record key.
            short typeId = ControlRecordType.parseTypeId(record.key());
            ControlRecordType type = ControlRecordType.fromTypeId(typeId);
            switch (type) {
                case LEADER_CHANGE:
                    // The record value carries a serialized LeaderChangeMessage (version 0).
                    LeaderChangeMessage message = new LeaderChangeMessage();
                    message.read(new ByteBufferAccessor(record.value()), (short) 0);
                    listener.handleLeaderChange(new LeaderAndEpoch(OptionalInt.of(message.leaderId()), batch.partitionLeaderEpoch()));
                    break;
                default:
                    log.error("Ignoring control record with type {} at offset {}", type, record.offset());
            }
        } catch (Throwable e) {
            log.error("Unable to read control record at offset {}", record.offset(), e);
        }
    }
    }
}
Also used: LeaderChangeMessage (org.apache.kafka.common.message.LeaderChangeMessage), Record (org.apache.kafka.common.record.Record), LeaderAndEpoch (org.apache.kafka.raft.LeaderAndEpoch), ByteBufferAccessor (org.apache.kafka.common.protocol.ByteBufferAccessor), ControlRecordType (org.apache.kafka.common.record.ControlRecordType)
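
For reference, the value parsed above can be produced by the inverse path. This is a hedged sketch using only the generated message's size/write/read methods and fluent setters; it is not from SnapshotFileReader itself:

LeaderChangeMessage message = new LeaderChangeMessage().setLeaderId(5);
ObjectSerializationCache cache = new ObjectSerializationCache();
// Size the buffer exactly, write at version 0, then flip for reading.
ByteBuffer value = ByteBuffer.allocate(message.size(cache, (short) 0));
message.write(new ByteBufferAccessor(value), cache, (short) 0);
value.flip();
// Reading it back mirrors the LEADER_CHANGE branch in handleControlBatch.
LeaderChangeMessage decoded = new LeaderChangeMessage();
decoded.read(new ByteBufferAccessor(value), (short) 0);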

Example 38 with ByteBufferAccessor

Use of org.apache.kafka.common.protocol.ByteBufferAccessor in the Apache Kafka project.

From the class BytesApiMessageSerde, method serialize.

public byte[] serialize(ApiMessageAndVersion messageAndVersion) {
    // Compute the exact serialized size, then write into a buffer of that size.
    ObjectSerializationCache cache = new ObjectSerializationCache();
    int size = apiMessageSerde.recordSize(messageAndVersion, cache);
    ByteBufferAccessor writable = new ByteBufferAccessor(ByteBuffer.allocate(size));
    apiMessageSerde.write(messageAndVersion, cache, writable);
    // The accessor exposes its backing buffer, whose array is fully written.
    return writable.buffer().array();
}
Also used: ObjectSerializationCache (org.apache.kafka.common.protocol.ObjectSerializationCache), ByteBufferAccessor (org.apache.kafka.common.protocol.ByteBufferAccessor)
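
The matching read path is symmetric: wrap the bytes in a ByteBufferAccessor and hand them back to the serde, just as serde.read is invoked in Example 36. A hedged sketch (the actual deserialize method in BytesApiMessageSerde may differ in detail):

public ApiMessageAndVersion deserialize(byte[] data) {
    // Wrap the serialized bytes and let the serde decode message and version.
    ByteBufferAccessor readable = new ByteBufferAccessor(ByteBuffer.wrap(data));
    return apiMessageSerde.read(readable, data.length);
}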

Example 39 with ByteBufferAccessor

Use of org.apache.kafka.common.protocol.ByteBufferAccessor in the Apache Kafka project.

From the class SubscriptionInfo, method decode.

/**
 * @throws TaskAssignmentException if method fails to decode the data
 */
public static SubscriptionInfo decode(final ByteBuffer data) {
    data.rewind();
    final int version = data.getInt();
    if (version > LATEST_SUPPORTED_VERSION) {
        // In this special case the payload schema is unknown, so we rely only
        // on the leading version and latest supported version fields.
        final int latestSupportedVersion = data.getInt();
        final SubscriptionInfoData subscriptionInfoData = new SubscriptionInfoData();
        subscriptionInfoData.setVersion(version);
        subscriptionInfoData.setLatestSupportedVersion(latestSupportedVersion);
        LOG.info("Unable to decode subscription data: used version: {}; latest supported version: {}", version, latestSupportedVersion);
        return new SubscriptionInfo(subscriptionInfoData);
    } else {
        data.rewind();
        final ByteBufferAccessor accessor = new ByteBufferAccessor(data);
        final SubscriptionInfoData subscriptionInfoData = new SubscriptionInfoData(accessor, (short) version);
        return new SubscriptionInfo(subscriptionInfoData);
    }
}
Also used: SubscriptionInfoData (org.apache.kafka.streams.internals.generated.SubscriptionInfoData), ByteBufferAccessor (org.apache.kafka.common.protocol.ByteBufferAccessor)
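
The corresponding encode path can be sketched along the same lines. This hypothetical helper assumes only the generated SubscriptionInfoData size/write methods and a version() getter, and elides any fields beyond those shown in decode:

public static ByteBuffer encode(final SubscriptionInfoData data) {
    final ObjectSerializationCache cache = new ObjectSerializationCache();
    // Allocate exactly the serialized size for the data's own version.
    final ByteBuffer buffer = ByteBuffer.allocate(data.size(cache, (short) data.version()));
    data.write(new ByteBufferAccessor(buffer), cache, (short) data.version());
    // Rewind so decode can start from the leading version field.
    buffer.rewind();
    return buffer;
}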

Aggregations

ByteBufferAccessor (org.apache.kafka.common.protocol.ByteBufferAccessor): 39
ByteBuffer (java.nio.ByteBuffer): 24
Test (org.junit.jupiter.api.Test): 23
ObjectSerializationCache (org.apache.kafka.common.protocol.ObjectSerializationCache): 13
TopicPartition (org.apache.kafka.common.TopicPartition): 6
ArrayList (java.util.ArrayList): 5
HashMap (java.util.HashMap): 4
Uuid (org.apache.kafka.common.Uuid): 4
UnsupportedVersionException (org.apache.kafka.common.errors.UnsupportedVersionException): 4
Collections (java.util.Collections): 3
List (java.util.List): 3
Map (java.util.Map): 3
LeaderChangeMessage (org.apache.kafka.common.message.LeaderChangeMessage): 3
UpdateMetadataEndpoint (org.apache.kafka.common.message.UpdateMetadataRequestData.UpdateMetadataEndpoint): 3
Send (org.apache.kafka.common.network.Send): 3
BufferUnderflowException (java.nio.BufferUnderflowException): 2
Arrays.asList (java.util.Arrays.asList): 2
Collections.emptyList (java.util.Collections.emptyList): 2
HashSet (java.util.HashSet): 2
Set (java.util.Set): 2