
Example 1 with UnsupportedCompressionTypeException

Use of org.apache.kafka.common.errors.UnsupportedCompressionTypeException in project kafka by apache.

From the class RecordsUtil, method downConvert:

/**
 * Down convert batches to the provided message format version. The first offset parameter is only relevant in the
 * conversion from uncompressed v2 or higher to v1 or lower. The reason is that uncompressed records in v0 and v1
 * are not batched (put another way, each batch always has 1 record).
 *
 * If a client requests records in v1 format starting from the middle of an uncompressed batch in v2 format, we
 * need to drop records from the batch during the conversion. Some versions of librdkafka rely on this for
 * correctness.
 *
 * The temporaryMemoryBytes computation assumes that the batches are not loaded into the heap
 * (via classes like FileChannelRecordBatch) before this method is called. This is the case in the broker (we
 * only load records into the heap when down converting), but it's not for the producer. However, down converting
 * in the producer is very uncommon and the extra complexity to handle that case is not worth it.
 */
protected static ConvertedRecords<MemoryRecords> downConvert(Iterable<? extends RecordBatch> batches, byte toMagic, long firstOffset, Time time) {
    // maintain the batch along with the decompressed records to avoid the need to decompress again
    List<RecordBatchAndRecords> recordBatchAndRecordsList = new ArrayList<>();
    int totalSizeEstimate = 0;
    long startNanos = time.nanoseconds();
    for (RecordBatch batch : batches) {
        if (toMagic < RecordBatch.MAGIC_VALUE_V2) {
            if (batch.isControlBatch())
                continue;
            if (batch.compressionType() == CompressionType.ZSTD)
                throw new UnsupportedCompressionTypeException("Down-conversion of zstandard-compressed batches is not supported");
        }
        if (batch.magic() <= toMagic) {
            totalSizeEstimate += batch.sizeInBytes();
            recordBatchAndRecordsList.add(new RecordBatchAndRecords(batch, null, null));
        } else {
            List<Record> records = new ArrayList<>();
            for (Record record : batch) {
                // See the method javadoc for an explanation
                if (toMagic > RecordBatch.MAGIC_VALUE_V1 || batch.isCompressed() || record.offset() >= firstOffset)
                    records.add(record);
            }
            if (records.isEmpty())
                continue;
            final long baseOffset;
            if (batch.magic() >= RecordBatch.MAGIC_VALUE_V2 && toMagic >= RecordBatch.MAGIC_VALUE_V2)
                baseOffset = batch.baseOffset();
            else
                baseOffset = records.get(0).offset();
            totalSizeEstimate += AbstractRecords.estimateSizeInBytes(toMagic, baseOffset, batch.compressionType(), records);
            recordBatchAndRecordsList.add(new RecordBatchAndRecords(batch, records, baseOffset));
        }
    }
    ByteBuffer buffer = ByteBuffer.allocate(totalSizeEstimate);
    long temporaryMemoryBytes = 0;
    int numRecordsConverted = 0;
    for (RecordBatchAndRecords recordBatchAndRecords : recordBatchAndRecordsList) {
        temporaryMemoryBytes += recordBatchAndRecords.batch.sizeInBytes();
        if (recordBatchAndRecords.batch.magic() <= toMagic) {
            buffer = Utils.ensureCapacity(buffer, buffer.position() + recordBatchAndRecords.batch.sizeInBytes());
            recordBatchAndRecords.batch.writeTo(buffer);
        } else {
            MemoryRecordsBuilder builder = convertRecordBatch(toMagic, buffer, recordBatchAndRecords);
            buffer = builder.buffer();
            temporaryMemoryBytes += builder.uncompressedBytesWritten();
            numRecordsConverted += builder.numRecords();
        }
    }
    buffer.flip();
    RecordConversionStats stats = new RecordConversionStats(temporaryMemoryBytes, numRecordsConverted, time.nanoseconds() - startNanos);
    return new ConvertedRecords<>(MemoryRecords.readableRecords(buffer), stats);
}
Also used: ArrayList (java.util.ArrayList), ByteBuffer (java.nio.ByteBuffer), UnsupportedCompressionTypeException (org.apache.kafka.common.errors.UnsupportedCompressionTypeException)
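
A minimal caller-side sketch of how this conversion path gets exercised. It assumes the public MemoryRecords.downConvert wrapper (which delegates to RecordsUtil.downConvert above) and the MemoryRecords.withRecords/SimpleRecord builders from the same Kafka version as the snippet, so treat it as an illustration rather than project code:

import org.apache.kafka.common.errors.UnsupportedCompressionTypeException;
import org.apache.kafka.common.record.CompressionType;
import org.apache.kafka.common.record.MemoryRecords;
import org.apache.kafka.common.record.RecordBatch;
import org.apache.kafka.common.record.SimpleRecord;
import org.apache.kafka.common.utils.Time;

public class DownConvertZstdSketch {
    public static void main(String[] args) {
        // Build a magic v2 batch compressed with zstd.
        MemoryRecords records = MemoryRecords.withRecords(RecordBatch.MAGIC_VALUE_V2,
                CompressionType.ZSTD, new SimpleRecord("value".getBytes()));
        try {
            // Converting down to magic v1 hits the zstd check in RecordsUtil.downConvert,
            // since formats older than v2 cannot carry zstd-compressed batches.
            records.downConvert(RecordBatch.MAGIC_VALUE_V1, 0L, Time.SYSTEM);
        } catch (UnsupportedCompressionTypeException e) {
            System.out.println("expected: " + e.getMessage());
        }
    }
}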

Example 2 with UnsupportedCompressionTypeException

Use of org.apache.kafka.common.errors.UnsupportedCompressionTypeException in project kafka by apache.

From the class ProduceRequest, method validateRecords:

public static void validateRecords(short version, BaseRecords baseRecords) {
    if (version >= 3) {
        if (baseRecords instanceof Records) {
            Records records = (Records) baseRecords;
            Iterator<? extends RecordBatch> iterator = records.batches().iterator();
            if (!iterator.hasNext())
                throw new InvalidRecordException("Produce requests with version " + version + " must have at least " + "one record batch");
            RecordBatch entry = iterator.next();
            if (entry.magic() != RecordBatch.MAGIC_VALUE_V2)
                throw new InvalidRecordException("Produce requests with version " + version + " are only allowed to " + "contain record batches with magic version 2");
            if (version < 7 && entry.compressionType() == CompressionType.ZSTD) {
                throw new UnsupportedCompressionTypeException("Produce requests with version " + version + " are not allowed to use ZStandard compression");
            }
            if (iterator.hasNext())
                throw new InvalidRecordException("Produce requests with version " + version + " are only allowed to " + "contain exactly one record batch");
        }
    }
    // Note that we do not do similar validation for older versions to ensure compatibility with
    // clients which send the wrong magic version in the wrong version of the produce request. The broker
    // did not do this validation before, so we maintain that behavior here.
}
Also used: UnsupportedCompressionTypeException (org.apache.kafka.common.errors.UnsupportedCompressionTypeException), RecordBatch (org.apache.kafka.common.record.RecordBatch), Records (org.apache.kafka.common.record.Records), BaseRecords (org.apache.kafka.common.record.BaseRecords), InvalidRecordException (org.apache.kafka.common.InvalidRecordException)
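
A minimal sketch of both sides of this check, assuming ProduceRequest.validateRecords is called directly with a MemoryRecords batch (MemoryRecords implements BaseRecords); the builder calls mirror the first sketch and are likewise illustrative:

import org.apache.kafka.common.errors.UnsupportedCompressionTypeException;
import org.apache.kafka.common.record.CompressionType;
import org.apache.kafka.common.record.MemoryRecords;
import org.apache.kafka.common.record.RecordBatch;
import org.apache.kafka.common.record.SimpleRecord;
import org.apache.kafka.common.requests.ProduceRequest;

public class ValidateRecordsSketch {
    public static void main(String[] args) {
        MemoryRecords zstdBatch = MemoryRecords.withRecords(RecordBatch.MAGIC_VALUE_V2,
                CompressionType.ZSTD, new SimpleRecord("value".getBytes()));
        // Produce request version 7 (Kafka 2.1) introduced zstd, so this passes.
        ProduceRequest.validateRecords((short) 7, zstdBatch);
        try {
            // Versions 3 through 6 require magic v2 batches but predate zstd, so this throws.
            ProduceRequest.validateRecords((short) 3, zstdBatch);
        } catch (UnsupportedCompressionTypeException e) {
            System.out.println("expected: " + e.getMessage());
        }
    }
}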

Aggregations

UnsupportedCompressionTypeException (org.apache.kafka.common.errors.UnsupportedCompressionTypeException): 2
ByteBuffer (java.nio.ByteBuffer): 1
ArrayList (java.util.ArrayList): 1
InvalidRecordException (org.apache.kafka.common.InvalidRecordException): 1
BaseRecords (org.apache.kafka.common.record.BaseRecords): 1
RecordBatch (org.apache.kafka.common.record.RecordBatch): 1
Records (org.apache.kafka.common.record.Records): 1