Example 11 with SnapshotResult

Use of org.apache.flink.runtime.state.SnapshotResult in project flink by apache.

The class TtlStateTestBase, method testIncrementalCleanup.

@Test
public void testIncrementalCleanup() throws Exception {
    assumeTrue(incrementalCleanupSupported());
    initTest(getConfBuilder(TTL).cleanupIncrementally(5, true).build());
    final int keysToUpdate = (CopyOnWriteStateMap.DEFAULT_CAPACITY >> 3) * NUMBER_OF_KEY_GROUPS;
    timeProvider.time = 0;
    // create enough keys to trigger incremental rehash
    updateKeys(0, INC_CLEANUP_ALL_KEYS, ctx().updateEmpty);
    timeProvider.time = 50;
    // update some
    updateKeys(0, keysToUpdate, ctx().updateUnexpired);
    RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshotRunnableFuture = sbetc.triggerSnapshot();
    // update more concurrently with snapshotting
    updateKeys(keysToUpdate, keysToUpdate * 2, ctx().updateUnexpired);
    // expire rest
    timeProvider.time = 120;
    triggerMoreIncrementalCleanupByOtherOps();
    // check rest expired and cleanup updated
    checkExpiredKeys(keysToUpdate * 2, INC_CLEANUP_ALL_KEYS);
    KeyedStateHandle snapshot = snapshotRunnableFuture.get().getJobManagerOwnedSnapshot();
    // restore snapshot which should discard concurrent updates
    timeProvider.time = 50;
    restoreSnapshot(snapshot, NUMBER_OF_KEY_GROUPS);
    // check rest unexpired, also after restore which should discard concurrent updates
    checkUnexpiredKeys(keysToUpdate, INC_CLEANUP_ALL_KEYS, ctx().getUpdateEmpty);
    timeProvider.time = 120;
    // remove some
    for (int i = keysToUpdate >> 1; i < keysToUpdate + (keysToUpdate >> 2); i++) {
        sbetc.setCurrentKey(Integer.toString(i));
        ctx().ttlState.clear();
    }
    // check updated not expired
    checkUnexpiredKeys(0, keysToUpdate >> 1, ctx().getUnexpired);
    triggerMoreIncrementalCleanupByOtherOps();
    // check that concurrently updated and then restored with original values are expired
    checkExpiredKeys(keysToUpdate, keysToUpdate * 2);
    timeProvider.time = 170;
    // check rest expired and cleanup updated
    checkExpiredKeys(keysToUpdate >> 1, INC_CLEANUP_ALL_KEYS);
    // check updated expired
    checkExpiredKeys(0, keysToUpdate >> 1);
}
Also used : SnapshotResult(org.apache.flink.runtime.state.SnapshotResult) KeyedStateHandle(org.apache.flink.runtime.state.KeyedStateHandle) Test(org.junit.Test)
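
The test above drives the internal TTL cleanup path directly. In user code, the same incremental cleanup strategy is switched on through the public StateTtlConfig builder. A minimal sketch follows; it is not part of the Flink test sources, and the class name and state name are illustrative only.

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlIncrementalCleanupSketch {

    /** Builds a ValueStateDescriptor with the incremental cleanup strategy enabled. */
    public static ValueStateDescriptor<String> descriptorWithIncrementalCleanup() {
        StateTtlConfig ttlConfig =
                StateTtlConfig.newBuilder(Time.milliseconds(100))
                        // visit up to 5 stored entries per state access,
                        // and also run cleanup for every processed record
                        .cleanupIncrementally(5, true)
                        .build();

        // the state name "my-ttl-state" is illustrative, not taken from the test above
        ValueStateDescriptor<String> descriptor =
                new ValueStateDescriptor<>("my-ttl-state", String.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}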

Example 12 with SnapshotResult

Use of org.apache.flink.runtime.state.SnapshotResult in project flink by apache.

The class RocksDBAsyncSnapshotTest, method testCleanupOfSnapshotsInFailureCase.

/**
 * Test that the snapshot files are cleaned up in case of a failure during the snapshot
 * procedure.
 */
@Test
public void testCleanupOfSnapshotsInFailureCase() throws Exception {
    long checkpointId = 1L;
    long timestamp = 42L;
    MockEnvironment env = MockEnvironment.builder().build();
    final IOException testException = new IOException("Test exception");
    CheckpointStateOutputStream outputStream = spy(new FailingStream(testException));
    RocksDBStateBackend backend = new RocksDBStateBackend((StateBackend) new MemoryStateBackend());
    backend.setDbStoragePath(temporaryFolder.newFolder().toURI().toString());
    AbstractKeyedStateBackend<Void> keyedStateBackend = backend.createKeyedStateBackend(env, new JobID(), "test operator", VoidSerializer.INSTANCE, 1, new KeyGroupRange(0, 0), null, TtlTimeProvider.DEFAULT, new UnregisteredMetricsGroup(), Collections.emptyList(), new CloseableRegistry());
    try {
        // register a state so that the state backend has to checkpoint something
        keyedStateBackend.getPartitionedState("namespace", StringSerializer.INSTANCE, new ValueStateDescriptor<>("foobar", String.class));
        RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshotFuture = keyedStateBackend.snapshot(checkpointId, timestamp, new TestCheckpointStreamFactory(() -> outputStream), CheckpointOptions.forCheckpointWithDefaultLocation());
        try {
            FutureUtils.runIfNotDoneAndGet(snapshotFuture);
            fail("Expected an exception to be thrown here.");
        } catch (ExecutionException e) {
            Assert.assertEquals(testException, e.getCause());
        }
        verify(outputStream).close();
    } finally {
        IOUtils.closeQuietly(keyedStateBackend);
        keyedStateBackend.dispose();
        IOUtils.closeQuietly(env);
    }
}
Also used : UnregisteredMetricsGroup(org.apache.flink.metrics.groups.UnregisteredMetricsGroup) SnapshotResult(org.apache.flink.runtime.state.SnapshotResult) MemoryStateBackend(org.apache.flink.runtime.state.memory.MemoryStateBackend) KeyGroupRange(org.apache.flink.runtime.state.KeyGroupRange) IOException(java.io.IOException) CloseableRegistry(org.apache.flink.core.fs.CloseableRegistry) TestCheckpointStreamFactory(org.apache.flink.runtime.state.testutils.TestCheckpointStreamFactory) MockEnvironment(org.apache.flink.runtime.operators.testutils.MockEnvironment) StreamMockEnvironment(org.apache.flink.streaming.runtime.tasks.StreamMockEnvironment) CheckpointStateOutputStream(org.apache.flink.runtime.state.CheckpointStateOutputStream) ExecutionException(java.util.concurrent.ExecutionException) JobID(org.apache.flink.api.common.JobID) Test(org.junit.Test)
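
The helper below is a hedged sketch, not part of the Flink sources, of the calling pattern this test relies on: run the snapshot future, and on failure unwrap the ExecutionException so the caller sees the original I/O error. The class and method names are made up for illustration.

import java.util.concurrent.ExecutionException;
import java.util.concurrent.RunnableFuture;

import org.apache.flink.runtime.state.KeyedStateHandle;
import org.apache.flink.runtime.state.SnapshotResult;

public class SnapshotFailureHandlingSketch {

    /** Runs a snapshot future synchronously and surfaces the original failure, if any. */
    public static KeyedStateHandle runSnapshot(
            RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshotFuture) throws Exception {
        if (!snapshotFuture.isDone()) {
            // executes the snapshot on the calling thread; on failure the backend
            // is expected to close the streams it opened (what the test verifies)
            snapshotFuture.run();
        }
        try {
            SnapshotResult<KeyedStateHandle> result = snapshotFuture.get();
            // may be null if the backend produced an empty snapshot
            return result.getJobManagerOwnedSnapshot();
        } catch (ExecutionException e) {
            // unwrap the original error, e.g. the injected IOException from FailingStream above
            Throwable cause = e.getCause();
            throw cause instanceof Exception ? (Exception) cause : e;
        }
    }
}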

Example 13 with SnapshotResult

Use of org.apache.flink.runtime.state.SnapshotResult in project flink by apache.

The class EmbeddedRocksDBStateBackendTest, method testSharedIncrementalStateDeRegistration.

@Test
public void testSharedIncrementalStateDeRegistration() throws Exception {
    if (enableIncrementalCheckpointing) {
        CheckpointableKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);
        try {
            ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>("id", String.class, null);
            kvId.initializeSerializerUnlessSet(new ExecutionConfig());
            ValueState<String> state = backend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, kvId);
            Queue<IncrementalRemoteKeyedStateHandle> previousStateHandles = new LinkedList<>();
            SharedStateRegistry sharedStateRegistry = spy(new SharedStateRegistryImpl());
            for (int checkpointId = 0; checkpointId < 3; ++checkpointId) {
                reset(sharedStateRegistry);
                backend.setCurrentKey(checkpointId);
                state.update("Hello-" + checkpointId);
                RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshot = backend.snapshot(checkpointId, checkpointId, createStreamFactory(), CheckpointOptions.forCheckpointWithDefaultLocation());
                snapshot.run();
                SnapshotResult<KeyedStateHandle> snapshotResult = snapshot.get();
                IncrementalRemoteKeyedStateHandle stateHandle = (IncrementalRemoteKeyedStateHandle) snapshotResult.getJobManagerOwnedSnapshot();
                Map<StateHandleID, StreamStateHandle> sharedState = new HashMap<>(stateHandle.getSharedState());
                stateHandle.registerSharedStates(sharedStateRegistry, checkpointId);
                for (Map.Entry<StateHandleID, StreamStateHandle> e : sharedState.entrySet()) {
                    verify(sharedStateRegistry).registerReference(stateHandle.createSharedStateRegistryKeyFromFileName(e.getKey()), e.getValue(), checkpointId);
                }
                previousStateHandles.add(stateHandle);
                ((CheckpointListener) backend).notifyCheckpointComplete(checkpointId);
                if (previousStateHandles.size() > 1) {
                    previousStateHandles.remove().discardState();
                }
            }
            while (!previousStateHandles.isEmpty()) {
                reset(sharedStateRegistry);
                previousStateHandles.remove().discardState();
            }
        } finally {
            IOUtils.closeQuietly(backend);
            backend.dispose();
        }
    }
}
Also used : SnapshotResult(org.apache.flink.runtime.state.SnapshotResult) HashMap(java.util.HashMap) CheckpointListener(org.apache.flink.api.common.state.CheckpointListener) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig) IncrementalRemoteKeyedStateHandle(org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle) KeyedStateHandle(org.apache.flink.runtime.state.KeyedStateHandle) LinkedList(java.util.LinkedList) SharedStateRegistry(org.apache.flink.runtime.state.SharedStateRegistry) ValueStateDescriptor(org.apache.flink.api.common.state.ValueStateDescriptor) StreamStateHandle(org.apache.flink.runtime.state.StreamStateHandle) StateHandleID(org.apache.flink.runtime.state.StateHandleID) SharedStateRegistryImpl(org.apache.flink.runtime.state.SharedStateRegistryImpl) IncrementalRemoteKeyedStateHandle(org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle) Map(java.util.Map) HashMap(java.util.HashMap) Test(org.junit.Test)
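
The queue bookkeeping in this test mirrors how only the most recent handle is retained while older, superseded handles are discarded; for incremental handles, deletion of shared RocksDB files is governed by the SharedStateRegistry reference counts rather than by the discard call alone. A minimal sketch of that retention pattern, assuming shared-state registration happens elsewhere; the class and method names are illustrative.

import java.util.ArrayDeque;
import java.util.Queue;

import org.apache.flink.runtime.state.KeyedStateHandle;

public class RetainLatestHandleSketch {

    private final Queue<KeyedStateHandle> previousHandles = new ArrayDeque<>();

    /** Keeps only the most recent snapshot handle and discards handles it supersedes. */
    public void onSnapshotCompleted(KeyedStateHandle newHandle) throws Exception {
        previousHandles.add(newHandle);
        while (previousHandles.size() > 1) {
            // for incremental handles, shared files referenced by newer checkpoints
            // stay alive through the SharedStateRegistry
            previousHandles.remove().discardState();
        }
    }
}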

Example 14 with SnapshotResult

Use of org.apache.flink.runtime.state.SnapshotResult in project flink by apache.

The class EmbeddedRocksDBStateBackendTest, method testCancelRunningSnapshot.

@Test
public void testCancelRunningSnapshot() throws Exception {
    setupRocksKeyedStateBackend();
    try {
        RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshot = keyedStateBackend.snapshot(0L, 0L, testStreamFactory, CheckpointOptions.forCheckpointWithDefaultLocation());
        Thread asyncSnapshotThread = new Thread(snapshot);
        asyncSnapshotThread.start();
        // wait for snapshot to run
        waiter.await();
        waiter.reset();
        runStateUpdates();
        snapshot.cancel(true);
        // allow checkpointing to start writing
        blocker.trigger();
        for (BlockingCheckpointOutputStream stream : testStreamFactory.getAllCreatedStreams()) {
            assertTrue(stream.isClosed());
        }
        // wait for snapshot stream writing to run
        waiter.await();
        try {
            snapshot.get();
            fail();
        } catch (Exception ignored) {
        }
        asyncSnapshotThread.join();
        verifyRocksObjectsReleased();
    } finally {
        this.keyedStateBackend.dispose();
        this.keyedStateBackend = null;
    }
    verifyRocksDBStateUploaderClosed();
}
Also used : SnapshotResult(org.apache.flink.runtime.state.SnapshotResult) BlockingCheckpointOutputStream(org.apache.flink.runtime.util.BlockingCheckpointOutputStream) SupplierWithException(org.apache.flink.util.function.SupplierWithException) IOException(java.io.IOException) Test(org.junit.Test)
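
For orientation, a hedged sketch (not from the Flink sources) of the cancellation pattern the test exercises: cancel the in-flight future, tolerate the resulting failure of get(), and wait for the snapshot thread to finish. The class and method names are made up for illustration.

import java.util.concurrent.RunnableFuture;

import org.apache.flink.runtime.state.KeyedStateHandle;
import org.apache.flink.runtime.state.SnapshotResult;

public class SnapshotCancellationSketch {

    /** Cancels an in-flight snapshot and waits for the writer thread to terminate. */
    public static void cancelAndAwait(
            RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshot, Thread snapshotThread)
            throws InterruptedException {
        // interrupt the writer; the backend is expected to close its streams and
        // release native resources, which is what the test above asserts
        snapshot.cancel(true);
        try {
            snapshot.get();
        } catch (Exception expected) {
            // a cancelled snapshot is expected to fail; it must not yield a usable handle
        }
        snapshotThread.join();
    }
}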

Example 15 with SnapshotResult

Use of org.apache.flink.runtime.state.SnapshotResult in project flink by apache.

The class EmbeddedRocksDBStateBackendTest, method testCompletingSnapshot.

@Test
public void testCompletingSnapshot() throws Exception {
    setupRocksKeyedStateBackend();
    try {
        RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshot = keyedStateBackend.snapshot(0L, 0L, testStreamFactory, CheckpointOptions.forCheckpointWithDefaultLocation());
        Thread asyncSnapshotThread = new Thread(snapshot);
        asyncSnapshotThread.start();
        // wait for snapshot to run
        waiter.await();
        waiter.reset();
        runStateUpdates();
        // allow checkpointing to start writing
        blocker.trigger();
        // wait for snapshot stream writing to run
        waiter.await();
        SnapshotResult<KeyedStateHandle> snapshotResult = snapshot.get();
        KeyedStateHandle keyedStateHandle = snapshotResult.getJobManagerOwnedSnapshot();
        assertNotNull(keyedStateHandle);
        assertTrue(keyedStateHandle.getStateSize() > 0);
        assertEquals(2, keyedStateHandle.getKeyGroupRange().getNumberOfKeyGroups());
        for (BlockingCheckpointOutputStream stream : testStreamFactory.getAllCreatedStreams()) {
            assertTrue(stream.isClosed());
        }
        asyncSnapshotThread.join();
        verifyRocksObjectsReleased();
    } finally {
        this.keyedStateBackend.dispose();
        this.keyedStateBackend = null;
    }
    verifyRocksDBStateUploaderClosed();
}
Also used : SnapshotResult(org.apache.flink.runtime.state.SnapshotResult) BlockingCheckpointOutputStream(org.apache.flink.runtime.util.BlockingCheckpointOutputStream) IncrementalRemoteKeyedStateHandle(org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle) KeyedStateHandle(org.apache.flink.runtime.state.KeyedStateHandle) Test(org.junit.Test)
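
Common to all of these examples is the same consumption pattern: run the RunnableFuture, read the SnapshotResult, and pick the variant you need. A minimal, hedged sketch of that pattern; the class and method names are illustrative and not taken from the Flink sources.

import java.util.concurrent.RunnableFuture;

import org.apache.flink.runtime.state.KeyedStateHandle;
import org.apache.flink.runtime.state.SnapshotResult;

public final class SnapshotResultUsageSketch {

    /** Runs a snapshot future synchronously and inspects both parts of the SnapshotResult. */
    public static void consume(RunnableFuture<SnapshotResult<KeyedStateHandle>> snapshotFuture)
            throws Exception {
        snapshotFuture.run();
        SnapshotResult<KeyedStateHandle> result = snapshotFuture.get();

        // primary snapshot reported to the checkpoint coordinator; may be null if empty
        KeyedStateHandle jobManagerOwned = result.getJobManagerOwnedSnapshot();

        // optional secondary copy kept for task-local recovery; often null
        KeyedStateHandle taskLocal = result.getTaskLocalSnapshot();

        if (jobManagerOwned != null) {
            System.out.println("snapshot size: " + jobManagerOwned.getStateSize() + " bytes");
        }
        if (taskLocal == null) {
            System.out.println("no task-local copy was produced");
        }
    }
}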

Aggregations

SnapshotResult (org.apache.flink.runtime.state.SnapshotResult) 15
Test (org.junit.Test) 13
KeyedStateHandle (org.apache.flink.runtime.state.KeyedStateHandle) 8
IOException (java.io.IOException) 4
CloseableRegistry (org.apache.flink.core.fs.CloseableRegistry) 4
MemCheckpointStreamFactory (org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory) 4
Map (java.util.Map) 3
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig) 3
StateObjectCollection (org.apache.flink.runtime.checkpoint.StateObjectCollection) 3
InputChannelStateHandle (org.apache.flink.runtime.state.InputChannelStateHandle) 3
KeyGroupRange (org.apache.flink.runtime.state.KeyGroupRange) 3
OperatorStateHandle (org.apache.flink.runtime.state.OperatorStateHandle) 3
ResultSubpartitionStateHandle (org.apache.flink.runtime.state.ResultSubpartitionStateHandle) 3
SupplierWithException (org.apache.flink.util.function.SupplierWithException) 3
HashMap (java.util.HashMap) 2
ExecutionException (java.util.concurrent.ExecutionException) 2
TimeoutException (java.util.concurrent.TimeoutException) 2
ListStateDescriptor (org.apache.flink.api.common.state.ListStateDescriptor) 2
ValueStateDescriptor (org.apache.flink.api.common.state.ValueStateDescriptor) 2
CheckpointException (org.apache.flink.runtime.checkpoint.CheckpointException) 2