
Example 11 with AbstractEvent

use of org.apache.flink.runtime.event.AbstractEvent in project flink by apache.

From the class SubtaskCheckpointCoordinatorTest, the method testForceAlignedCheckpointResultingInPriorityEvents:

@Test
public void testForceAlignedCheckpointResultingInPriorityEvents() throws Exception {
    final long checkpointId = 42L;
    MockEnvironment mockEnvironment = MockEnvironment.builder().build();
    try (SubtaskCheckpointCoordinator coordinator = new MockSubtaskCheckpointCoordinatorBuilder().setUnalignedCheckpointEnabled(true).setEnvironment(mockEnvironment).build()) {
        AtomicReference<Boolean> broadcastedPriorityEvent = new AtomicReference<>(null);
        final OperatorChain<?, ?> operatorChain = new RegularOperatorChain(new MockStreamTaskBuilder(mockEnvironment).build(), new NonRecordWriter<>()) {

            @Override
            public void broadcastEvent(AbstractEvent event, boolean isPriorityEvent) throws IOException {
                super.broadcastEvent(event, isPriorityEvent);
                broadcastedPriorityEvent.set(isPriorityEvent);
                // test if we can write output data
                coordinator.getChannelStateWriter().addOutputData(checkpointId, new ResultSubpartitionInfo(0, 0), 0, BufferBuilderTestUtils.buildSomeBuffer(500));
            }
        };
        CheckpointOptions forcedAlignedOptions = CheckpointOptions.unaligned(CheckpointType.CHECKPOINT, CheckpointStorageLocationReference.getDefault()).withUnalignedUnsupported();
        coordinator.checkpointState(new CheckpointMetaData(checkpointId, 0), forcedAlignedOptions, new CheckpointMetricsBuilder(), operatorChain, false, () -> true);
        assertEquals(true, broadcastedPriorityEvent.get());
    }
}
Also used: MockStreamTaskBuilder (org.apache.flink.streaming.util.MockStreamTaskBuilder), CheckpointMetricsBuilder (org.apache.flink.runtime.checkpoint.CheckpointMetricsBuilder), AtomicReference (java.util.concurrent.atomic.AtomicReference), AbstractEvent (org.apache.flink.runtime.event.AbstractEvent), CheckpointMetaData (org.apache.flink.runtime.checkpoint.CheckpointMetaData), MockEnvironment (org.apache.flink.runtime.operators.testutils.MockEnvironment), ResultSubpartitionInfo (org.apache.flink.runtime.checkpoint.channel.ResultSubpartitionInfo), CheckpointOptions (org.apache.flink.runtime.checkpoint.CheckpointOptions), Test (org.junit.Test)
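
The test drives a checkpoint whose unaligned options are marked as unsupported and asserts that the resulting broadcast was flagged as a priority event. The observation itself is done by overriding broadcastEvent in an anonymous OperatorChain subclass and capturing the flag in an AtomicReference. A minimal, Flink-free sketch of that capture-in-override pattern (all class and method names below are illustrative, not Flink API):

import java.util.concurrent.atomic.AtomicReference;

public class CaptureCallbackExample {

    // Stand-in for OperatorChain: a collaborator whose callback we want to observe.
    static class Broadcaster {
        void broadcastEvent(String event, boolean isPriorityEvent) {
            // real work would happen here
        }
    }

    public static void main(String[] args) {
        // Starts as null so the test can distinguish "never called" from "called with false".
        AtomicReference<Boolean> broadcastedPriority = new AtomicReference<>(null);

        Broadcaster broadcaster = new Broadcaster() {
            @Override
            void broadcastEvent(String event, boolean isPriorityEvent) {
                super.broadcastEvent(event, isPriorityEvent);
                broadcastedPriority.set(isPriorityEvent);
            }
        };

        broadcaster.broadcastEvent("barrier", true);

        if (!Boolean.TRUE.equals(broadcastedPriority.get())) {
            throw new AssertionError("expected a priority broadcast");
        }
        System.out.println("Priority broadcast observed.");
    }
}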

Example 12 with AbstractEvent

use of org.apache.flink.runtime.event.AbstractEvent in project flink by apache.

From the class EventSerializer, the method isEvent:

/**
	 * Identifies whether the given buffer encodes the given event.
	 *
	 * <p><strong>Pre-condition</strong>: This buffer must encode some event!</p>
	 *
	 * @param buffer the buffer to peek into
	 * @param eventClass the expected class of the event type
	 * @param classLoader the class loader to use for custom event classes
	 * @return whether the event class of the <tt>buffer</tt> matches the given <tt>eventClass</tt>
	 * @throws IOException if the buffer is too small, the event type is corrupt, or a custom event class cannot be loaded
	 */
private static boolean isEvent(ByteBuffer buffer, Class<?> eventClass, ClassLoader classLoader) throws IOException {
    if (buffer.remaining() < 4) {
        throw new IOException("Incomplete event");
    }
    final int bufferPos = buffer.position();
    final ByteOrder bufferOrder = buffer.order();
    buffer.order(ByteOrder.BIG_ENDIAN);
    try {
        int type = buffer.getInt();
        switch(type) {
            case END_OF_PARTITION_EVENT:
                return eventClass.equals(EndOfPartitionEvent.class);
            case CHECKPOINT_BARRIER_EVENT:
                return eventClass.equals(CheckpointBarrier.class);
            case END_OF_SUPERSTEP_EVENT:
                return eventClass.equals(EndOfSuperstepEvent.class);
            case CANCEL_CHECKPOINT_MARKER_EVENT:
                return eventClass.equals(CancelCheckpointMarker.class);
            case OTHER_EVENT:
                try {
                    final DataInputDeserializer deserializer = new DataInputDeserializer(buffer);
                    final String className = deserializer.readUTF();
                    final Class<? extends AbstractEvent> clazz;
                    try {
                        clazz = classLoader.loadClass(className).asSubclass(AbstractEvent.class);
                    } catch (ClassNotFoundException e) {
                        throw new IOException("Could not load event class '" + className + "'.", e);
                    } catch (ClassCastException e) {
                        throw new IOException("The class '" + className + "' is not a valid subclass of '" + AbstractEvent.class.getName() + "'.", e);
                    }
                    return eventClass.equals(clazz);
                } catch (Exception e) {
                    throw new IOException("Error while deserializing or instantiating event.", e);
                }
            default:
                throw new IOException("Corrupt byte stream for event");
        }
    } finally {
        buffer.order(bufferOrder);
        // restore the original position in the buffer (recall: we only peek into it!)
        buffer.position(bufferPos);
    }
}
Also used: IOException (java.io.IOException), ByteOrder (java.nio.ByteOrder), AbstractEvent (org.apache.flink.runtime.event.AbstractEvent), DataInputDeserializer (org.apache.flink.runtime.util.DataInputDeserializer)
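
The core of isEvent is a non-destructive peek: the buffer's byte order is forced to big-endian, the 4-byte type tag is read, and both the order and the position are restored in a finally block so the caller sees the buffer untouched. A minimal standalone sketch of that peek pattern using only java.nio (the tag constant and helper name are made up for illustration, they are not Flink constants):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PeekTagExample {

    // Hypothetical tag value, standing in for constants like END_OF_PARTITION_EVENT.
    private static final int SOME_EVENT_TAG = 0;

    // Peeks at the leading int tag without consuming the buffer.
    static int peekTag(ByteBuffer buffer) {
        final int pos = buffer.position();
        final ByteOrder order = buffer.order();
        buffer.order(ByteOrder.BIG_ENDIAN);
        try {
            return buffer.getInt();
        } finally {
            // Restore order and position so callers see the buffer untouched.
            buffer.order(order);
            buffer.position(pos);
        }
    }

    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(8);
        buffer.putInt(SOME_EVENT_TAG).putInt(42);
        buffer.flip();

        System.out.println(peekTag(buffer) == SOME_EVENT_TAG); // true
        System.out.println(buffer.position());                 // 0, still at the start
    }
}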

Example 13 with AbstractEvent

use of org.apache.flink.runtime.event.AbstractEvent in project flink by apache.

From the class StreamMockEnvironment, the method addBufferToOutputList:

/**
	 * Adds the object behind the given <tt>buffer</tt> to the <tt>outputList</tt>.
	 *
	 * @param recordDeserializer de-serializer to use for the buffer
	 * @param delegate de-serialization delegate to use for non-event buffers
	 * @param buffer the buffer to add
	 * @param outputList the output list to add the object to
	 * @param <T> type of the objects behind the non-event buffers
	 *
	 * @throws java.io.IOException if the buffer contents cannot be de-serialized
	 */
private <T> void addBufferToOutputList(RecordDeserializer<DeserializationDelegate<T>> recordDeserializer, NonReusingDeserializationDelegate<T> delegate, Buffer buffer, final Queue<Object> outputList) throws java.io.IOException {
    if (buffer.isBuffer()) {
        recordDeserializer.setNextBuffer(buffer);
        while (recordDeserializer.hasUnfinishedData()) {
            RecordDeserializer.DeserializationResult result = recordDeserializer.getNextRecord(delegate);
            if (result.isFullRecord()) {
                outputList.add(delegate.getInstance());
            }
            if (result == RecordDeserializer.DeserializationResult.LAST_RECORD_FROM_BUFFER || result == RecordDeserializer.DeserializationResult.PARTIAL_RECORD) {
                break;
            }
        }
    } else {
        // is event
        AbstractEvent event = EventSerializer.fromBuffer(buffer, getClass().getClassLoader());
        outputList.add(event);
    }
}
Also used: RecordDeserializer (org.apache.flink.runtime.io.network.api.serialization.RecordDeserializer), AdaptiveSpanningRecordDeserializer (org.apache.flink.runtime.io.network.api.serialization.AdaptiveSpanningRecordDeserializer), AbstractEvent (org.apache.flink.runtime.event.AbstractEvent)

Example 14 with AbstractEvent

use of org.apache.flink.runtime.event.AbstractEvent in project flink by apache.

From the class PipelinedSubpartitionTest, the method testProduceConsume:

private void testProduceConsume(boolean isSlowProducer, boolean isSlowConsumer) throws Exception {
    // Config
    final int producerNumberOfBuffersToProduce = 128;
    final int bufferSize = 32 * 1024;
    // Producer behaviour
    final TestProducerSource producerSource = new TestProducerSource() {

        private int numberOfBuffers;

        @Override
        public BufferAndChannel getNextBuffer() throws Exception {
            if (numberOfBuffers == producerNumberOfBuffersToProduce) {
                return null;
            }
            MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(bufferSize);
            int next = numberOfBuffers * (bufferSize / Integer.BYTES);
            for (int i = 0; i < bufferSize; i += 4) {
                segment.putInt(i, next);
                next++;
            }
            numberOfBuffers++;
            return new BufferAndChannel(segment.getArray(), 0);
        }
    };
    // Consumer behaviour
    final TestConsumerCallback consumerCallback = new TestConsumerCallback() {

        private int numberOfBuffers;

        @Override
        public void onBuffer(Buffer buffer) {
            final MemorySegment segment = buffer.getMemorySegment();
            assertEquals(segment.size(), buffer.getSize());
            int expected = numberOfBuffers * (segment.size() / 4);
            for (int i = 0; i < segment.size(); i += 4) {
                assertEquals(expected, segment.getInt(i));
                expected++;
            }
            numberOfBuffers++;
            buffer.recycleBuffer();
        }

        @Override
        public void onEvent(AbstractEvent event) {
            // Nothing to do in this test
        }
    };
    final PipelinedSubpartition subpartition = createSubpartition();
    TestSubpartitionProducer producer = new TestSubpartitionProducer(subpartition, isSlowProducer, producerSource);
    TestSubpartitionConsumer consumer = new TestSubpartitionConsumer(isSlowConsumer, consumerCallback);
    final PipelinedSubpartitionView view = subpartition.createReadView(consumer);
    consumer.setSubpartitionView(view);
    CompletableFuture<Boolean> producerResult = CompletableFuture.supplyAsync(CheckedSupplier.unchecked(producer::call), executorService);
    CompletableFuture<Boolean> consumerResult = CompletableFuture.supplyAsync(CheckedSupplier.unchecked(consumer::call), executorService);
    FutureUtils.waitForAll(Arrays.asList(producerResult, consumerResult)).get(60_000L, TimeUnit.MILLISECONDS);
}
Also used: Buffer (org.apache.flink.runtime.io.network.buffer.Buffer), TestConsumerCallback (org.apache.flink.runtime.io.network.util.TestConsumerCallback), TestProducerSource (org.apache.flink.runtime.io.network.util.TestProducerSource), AbstractEvent (org.apache.flink.runtime.event.AbstractEvent), MemorySegment (org.apache.flink.core.memory.MemorySegment), TestSubpartitionProducer (org.apache.flink.runtime.io.network.util.TestSubpartitionProducer), TestSubpartitionConsumer (org.apache.flink.runtime.io.network.util.TestSubpartitionConsumer)
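
The producer and consumer above agree on a simple data contract: buffer k carries consecutive ints starting at k * (bufferSize / 4), so the consumer can re-derive the expected values from nothing but its own buffer counter. A minimal single-threaded sketch of that fill-and-verify contract, using the same MemorySegment calls as the test but without the Flink network stack:

import org.apache.flink.core.memory.MemorySegment;
import org.apache.flink.core.memory.MemorySegmentFactory;

public class FillVerifyExample {

    public static void main(String[] args) {
        final int bufferSize = 32 * 1024;
        final int buffersToProduce = 4; // smaller than the test's 128, just for illustration

        for (int bufferIndex = 0; bufferIndex < buffersToProduce; bufferIndex++) {
            MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(bufferSize);

            // Producer side: write consecutive ints, continuing where the previous buffer stopped.
            int next = bufferIndex * (bufferSize / Integer.BYTES);
            for (int offset = 0; offset < bufferSize; offset += Integer.BYTES) {
                segment.putInt(offset, next++);
            }

            // Consumer side: re-derive the expected start value from the buffer index and verify.
            int expected = bufferIndex * (bufferSize / Integer.BYTES);
            for (int offset = 0; offset < segment.size(); offset += Integer.BYTES) {
                if (segment.getInt(offset) != expected++) {
                    throw new AssertionError("Corrupt data in buffer " + bufferIndex + " at offset " + offset);
                }
            }
        }
        System.out.println("All buffers verified.");
    }
}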

Example 15 with AbstractEvent

use of org.apache.flink.runtime.event.AbstractEvent in project flink by apache.

From the class TestSubpartitionConsumer, the method call:

@Override
public Boolean call() throws Exception {
    try {
        while (true) {
            if (Thread.interrupted()) {
                throw new InterruptedException();
            }
            synchronized (dataAvailableNotification) {
                while (!dataAvailableNotification.getAndSet(false)) {
                    dataAvailableNotification.wait();
                }
            }
            final BufferAndBacklog bufferAndBacklog = subpartitionView.getNextBuffer();
            if (isSlowConsumer) {
                Thread.sleep(random.nextInt(MAX_SLEEP_TIME_MS + 1));
            }
            if (bufferAndBacklog != null) {
                if (bufferAndBacklog.isDataAvailable()) {
                    dataAvailableNotification.set(true);
                }
                if (bufferAndBacklog.buffer().isBuffer()) {
                    callback.onBuffer(bufferAndBacklog.buffer());
                } else {
                    final AbstractEvent event = EventSerializer.fromBuffer(bufferAndBacklog.buffer(), getClass().getClassLoader());
                    callback.onEvent(event);
                    bufferAndBacklog.buffer().recycleBuffer();
                    if (event.getClass() == EndOfPartitionEvent.class) {
                        subpartitionView.releaseAllResources();
                        return true;
                    }
                }
            } else if (subpartitionView.isReleased()) {
                return true;
            }
        }
    } finally {
        subpartitionView.releaseAllResources();
    }
}
Also used: BufferAndBacklog (org.apache.flink.runtime.io.network.partition.ResultSubpartition.BufferAndBacklog), AbstractEvent (org.apache.flink.runtime.event.AbstractEvent)
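
The consumer loop is driven by a wait/notify handshake on an AtomicBoolean: the producer side sets the flag and notifies, while the consumer blocks until the flag flips and resets it with getAndSet(false) before pulling the next buffer. A minimal sketch of that notification pattern in plain Java (class and method names are illustrative, not Flink API):

import java.util.concurrent.atomic.AtomicBoolean;

public class AvailabilityNotificationExample {

    private final AtomicBoolean dataAvailableNotification = new AtomicBoolean(false);

    // Producer side: set the flag and wake up a waiting consumer.
    void notifyDataAvailable() {
        synchronized (dataAvailableNotification) {
            dataAvailableNotification.set(true);
            dataAvailableNotification.notifyAll();
        }
    }

    // Consumer side: block until data was signalled, then atomically reset the flag.
    void awaitDataAvailable() throws InterruptedException {
        synchronized (dataAvailableNotification) {
            while (!dataAvailableNotification.getAndSet(false)) {
                dataAvailableNotification.wait();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        AvailabilityNotificationExample example = new AvailabilityNotificationExample();

        Thread consumer = new Thread(() -> {
            try {
                example.awaitDataAvailable();
                System.out.println("Consumer woke up.");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        Thread.sleep(100); // give the consumer time to reach wait()
        example.notifyDataAvailable();
        consumer.join();
    }
}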

Aggregations

AbstractEvent (org.apache.flink.runtime.event.AbstractEvent): 24 usages
CheckpointBarrier (org.apache.flink.runtime.io.network.api.CheckpointBarrier): 6 usages
IOException (java.io.IOException): 5 usages
Buffer (org.apache.flink.runtime.io.network.buffer.Buffer): 5 usages
Test (org.junit.Test): 5 usages
EventAnnouncement (org.apache.flink.runtime.io.network.api.EventAnnouncement): 4 usages
BufferConsumer (org.apache.flink.runtime.io.network.buffer.BufferConsumer): 4 usages
ByteBuffer (java.nio.ByteBuffer): 3 usages
EndOfData (org.apache.flink.runtime.io.network.api.EndOfData): 3 usages
BufferOrEvent (org.apache.flink.runtime.io.network.partition.consumer.BufferOrEvent): 3 usages
StreamElement (org.apache.flink.streaming.runtime.streamrecord.StreamElement): 3 usages
ByteOrder (java.nio.ByteOrder): 2 usages
AtomicReference (java.util.concurrent.atomic.AtomicReference): 2 usages
Nullable (javax.annotation.Nullable): 2 usages
CheckpointMetaData (org.apache.flink.runtime.checkpoint.CheckpointMetaData): 2 usages
CheckpointMetricsBuilder (org.apache.flink.runtime.checkpoint.CheckpointMetricsBuilder): 2 usages
CheckpointOptions (org.apache.flink.runtime.checkpoint.CheckpointOptions): 2 usages
CancelCheckpointMarker (org.apache.flink.runtime.io.network.api.CancelCheckpointMarker): 2 usages
EndOfPartitionEvent (org.apache.flink.runtime.io.network.api.EndOfPartitionEvent): 2 usages
DeserializationResult (org.apache.flink.runtime.io.network.api.serialization.RecordDeserializer.DeserializationResult): 2 usages