Use of org.apache.flink.runtime.io.network.buffer.BufferBuilder in project flink by apache.
Class SpanningRecordSerializationTest, method appendLeftOverBytes.
private static Buffer appendLeftOverBytes(Buffer buffer, byte[] leftOverBytes) {
    try (BufferBuilder bufferBuilder =
            new BufferBuilder(
                    MemorySegmentFactory.allocateUnpooledSegment(
                            buffer.readableBytes() + leftOverBytes.length),
                    FreeingBufferRecycler.INSTANCE)) {
        try (BufferConsumer bufferConsumer = bufferBuilder.createBufferConsumer()) {
            bufferBuilder.append(buffer.getNioBufferReadable());
            bufferBuilder.appendAndCommit(ByteBuffer.wrap(leftOverBytes));
            return bufferConsumer.build();
        }
    }
}
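The helper above copies the readable region of an existing buffer and appends the leftover bytes into a single freshly allocated segment. The same copy-and-append pattern can be sketched with plain java.nio buffers (a minimal sketch; `AppendLeftOverSketch` and `appendLeftOver` are illustrative names, not Flink API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class AppendLeftOverSketch {

    // Hypothetical stand-in for appendLeftOverBytes: copy the readable
    // bytes of `buffer`, then append `leftOverBytes`, into one new array
    // sized exactly like the unpooled segment in the Flink helper.
    static byte[] appendLeftOver(ByteBuffer buffer, byte[] leftOverBytes) {
        int readable = buffer.remaining();
        byte[] result = new byte[readable + leftOverBytes.length];
        buffer.get(result, 0, readable);                     // first: the existing readable bytes
        System.arraycopy(                                    // then: the leftover tail
                leftOverBytes, 0, result, readable, leftOverBytes.length);
        return result;
    }

    public static void main(String[] args) {
        byte[] joined = appendLeftOver(
                ByteBuffer.wrap(new byte[] {1, 2, 3}), new byte[] {4, 5});
        System.out.println(Arrays.toString(joined)); // [1, 2, 3, 4, 5]
    }
}
```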
Use of org.apache.flink.runtime.io.network.buffer.BufferBuilder in project flink by splunk.
Class SpanningRecordSerializationTest, method setNextBufferForSerializer.
private static BufferAndSerializerResult setNextBufferForSerializer(
        ByteBuffer serializedRecord, int segmentSize) throws IOException {
    // Create a BufferBuilder with a random starting offset to properly test
    // the handling of buffer slices in the deserialization code.
    int startingOffset = segmentSize > 2 ? RANDOM.nextInt(segmentSize / 2) : 0;
    BufferBuilder bufferBuilder =
            createFilledBufferBuilder(segmentSize + startingOffset, startingOffset);
    BufferConsumer bufferConsumer = bufferBuilder.createBufferConsumer();
    bufferConsumer.build().recycleBuffer();
    // Closing the BufferBuilder here just makes sure that the Buffer is
    // recovered once the BufferConsumer is closed.
    bufferBuilder.close();
    bufferBuilder.appendAndCommit(serializedRecord);
    return new BufferAndSerializerResult(
            bufferBuilder,
            bufferConsumer,
            bufferBuilder.isFull(),
            !serializedRecord.hasRemaining());
}
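The random starting offset exists so the deserializer is exercised against buffer slices that do not begin at position 0 of the underlying memory segment. That slicing effect can be sketched with a plain ByteBuffer (a sketch only; `sliceAtOffset` is a hypothetical helper, not Flink API):

```java
import java.nio.ByteBuffer;

public class OffsetSliceSketch {

    // Hypothetical helper: view `length` bytes of `segment` starting at
    // `startingOffset`. Indices inside the returned slice are relative and
    // start at 0 -- the property the deserialization code must handle when
    // a BufferBuilder wraps an offset segment.
    static ByteBuffer sliceAtOffset(ByteBuffer segment, int startingOffset, int length) {
        ByteBuffer view = segment.duplicate();
        view.position(startingOffset);
        view.limit(startingOffset + length);
        return view.slice();
    }

    public static void main(String[] args) {
        ByteBuffer segment = ByteBuffer.wrap(new byte[] {10, 11, 12, 13, 14, 15});
        ByteBuffer slice = sliceAtOffset(segment, 2, 3);
        System.out.println(slice.capacity()); // 3
        System.out.println(slice.get(0));     // 12 -- relative index 0 maps to absolute offset 2
    }
}
```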
Use of org.apache.flink.runtime.io.network.buffer.BufferBuilder in project flink by splunk.
Class HashBasedDataBuffer, method finish.
@Override
public void finish() {
    checkState(!isFull, "DataBuffer must not be full.");
    checkState(!isFinished, "DataBuffer is already finished.");
    isFull = true;
    isFinished = true;
    for (int channel = 0; channel < builders.length; ++channel) {
        BufferBuilder builder = builders[channel];
        if (builder != null) {
            builder.finish();
            buffers[channel].add(builder.createBufferConsumerFromBeginning());
            builder.close();
            builders[channel] = null;
        }
    }
}
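finish() drains each per-channel builder exactly once: finish it, snapshot a consumer from the beginning, close it, and null the slot so the builder can never be appended to again. The drain-and-clear loop can be sketched with StringBuilder standing in for BufferBuilder (an illustration of the pattern, not Flink code):

```java
import java.util.ArrayList;
import java.util.List;

public class FinishSketch {

    // Drain-and-clear loop mirroring HashBasedDataBuffer.finish(): for every
    // channel with a live builder, flush its accumulated contents to that
    // channel's output list, then clear the slot.
    static void finish(StringBuilder[] builders, List<List<String>> outputs) {
        for (int channel = 0; channel < builders.length; ++channel) {
            StringBuilder builder = builders[channel];
            if (builder != null) {
                outputs.get(channel).add(builder.toString()); // like createBufferConsumerFromBeginning()
                builders[channel] = null;                     // slot cleared; later use fails fast
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder[] builders = {new StringBuilder("a"), null, new StringBuilder("c")};
        List<List<String>> outputs = new ArrayList<>();
        for (int i = 0; i < builders.length; i++) {
            outputs.add(new ArrayList<>());
        }
        finish(builders, outputs);
        System.out.println(outputs); // [[a], [], [c]]
    }
}
```

Nulling the slot after the flush is the key detail: it guarantees the one-shot contract of finish(), since any later access to `builders[channel]` fails immediately instead of silently appending to a finished builder.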
Use of org.apache.flink.runtime.io.network.buffer.BufferBuilder in project flink by splunk.
Class BufferWritingResultPartition, method appendUnicastDataForRecordContinuation.
private BufferBuilder appendUnicastDataForRecordContinuation(
        final ByteBuffer remainingRecordBytes, final int targetSubpartition)
        throws IOException {
    final BufferBuilder buffer = requestNewUnicastBufferBuilder(targetSubpartition);
    // !! Be aware: when partialRecordBytes != 0, the partial length and data
    // have to be appended and committed before the consumer is created.
    // Otherwise this case could not be distinguished from a buffer that
    // starts with a complete record.
    // !! The next two lines must not change order.
    final int partialRecordBytes = buffer.appendAndCommit(remainingRecordBytes);
    addToSubpartition(buffer, targetSubpartition, partialRecordBytes, partialRecordBytes);
    return buffer;
}
Use of org.apache.flink.runtime.io.network.buffer.BufferBuilder in project flink by splunk.
Class BufferWritingResultPartition, method appendBroadcastDataForRecordContinuation.
private BufferBuilder appendBroadcastDataForRecordContinuation(
        final ByteBuffer remainingRecordBytes) throws IOException {
    final BufferBuilder buffer = requestNewBroadcastBufferBuilder();
    // !! Be aware: when partialRecordBytes != 0, the partial length and data
    // have to be appended and committed before the consumer is created.
    // Otherwise this case could not be distinguished from a buffer that
    // starts with a complete record.
    // !! The next two lines must not change order.
    final int partialRecordBytes = buffer.appendAndCommit(remainingRecordBytes);
    createBroadcastBufferConsumers(buffer, partialRecordBytes, partialRecordBytes);
    return buffer;
}
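Both continuation methods enforce the same ordering rule: the partial record bytes must be appended and committed before any consumer is created, otherwise the consumer could not tell a continuation apart from a buffer that begins with a complete record. The rule can be sketched with a plain ByteBuffer (a sketch under that assumption; `ContinuationOrderSketch` and `appendThenSnapshot` are hypothetical names):

```java
import java.nio.ByteBuffer;

public class ContinuationOrderSketch {

    // Step 1 must happen before step 2: first copy (commit) the remaining
    // record bytes into the target, and only then take the snapshot length
    // a consumer would be created over. Reversing the steps would yield a
    // snapshot that excludes the continuation bytes.
    static int appendThenSnapshot(ByteBuffer target, ByteBuffer remaining) {
        int before = target.position();
        target.put(remaining);             // step 1: append-and-commit the partial bytes
        int committed = target.position(); // step 2: snapshot now covers [0, committed)
        return committed - before;
    }

    public static void main(String[] args) {
        ByteBuffer target = ByteBuffer.allocate(8);
        int appended = appendThenSnapshot(target, ByteBuffer.wrap(new byte[] {7, 8, 9}));
        System.out.println(appended); // 3
    }
}
```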