
Example 1 with SavepointMetadata

Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in the Apache Flink project.

From the class Savepoint, method load.

/**
 * Loads an existing savepoint. Useful if you want to query, modify, or extend the state of an
 * existing application.
 *
 * @param env The execution environment used to transform the savepoint.
 * @param path The path to an existing savepoint on disk.
 * @param stateBackend The state backend of the savepoint.
 * @see #load(ExecutionEnvironment, String)
 */
public static ExistingSavepoint load(ExecutionEnvironment env, String path, StateBackend stateBackend) throws IOException {
    Preconditions.checkNotNull(stateBackend, "The state backend must not be null");
    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException(
                    "Savepoint must contain at least one operator state."));
    SavepointMetadata savepointMetadata = new SavepointMetadata(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
    return new ExistingSavepoint(env, savepointMetadata, stateBackend);
}
Also used: SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata), OperatorState (org.apache.flink.runtime.checkpoint.OperatorState), CheckpointMetadata (org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata)
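
For context, a minimal usage sketch of this overload. The savepoint path, the operator uid "my-uid", and the state name "counts" are purely illustrative placeholders, not values from the Flink sources.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class ReadSavepointSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // "file:///tmp/savepoint-123", "my-uid", and "counts" are hypothetical placeholders.
        ExistingSavepoint savepoint =
                Savepoint.load(env, "file:///tmp/savepoint-123", new MemoryStateBackend());

        // Read an operator list state registered under uid "my-uid" with state name "counts".
        DataSet<Integer> counts = savepoint.readListState("my-uid", "counts", Types.INT);
        counts.print();
    }
}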

Example 2 with SavepointMetadata

Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in the Apache Flink project.

From the class Savepoint, method load.

/**
 * Loads an existing savepoint. Useful if you want to query, modify, or extend the state of an
 * existing application. The savepoint will be read using the state backend defined via the
 * cluster's configuration.
 *
 * @param env The execution environment used to transform the savepoint.
 * @param path The path to an existing savepoint on disk.
 * @see #load(ExecutionEnvironment, String, StateBackend)
 */
public static ExistingSavepoint load(ExecutionEnvironment env, String path) throws IOException {
    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException(
                    "Savepoint must contain at least one operator state."));
    SavepointMetadata savepointMetadata = new SavepointMetadata(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
    return new ExistingSavepoint(env, savepointMetadata, null);
}
Also used: SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata), OperatorState (org.apache.flink.runtime.checkpoint.OperatorState), CheckpointMetadata (org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata)
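
A short sketch of the same call without an explicit state backend, so the backend configured for the cluster is used; the savepoint path is again an illustrative placeholder.

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// No StateBackend argument: the backend from the cluster configuration applies.
ExistingSavepoint savepoint = Savepoint.load(env, "s3://my-bucket/savepoints/savepoint-456");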

Example 3 with SavepointMetadata

Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in the Apache Flink project.

From the class Savepoint, method create.

/**
 * Creates a new savepoint.
 *
 * @param stateBackend The state backend of the savepoint used for keyed state.
 * @param maxParallelism The max parallelism of the savepoint.
 * @return A new savepoint.
 * @see #create(int)
 */
public static NewSavepoint create(StateBackend stateBackend, int maxParallelism) {
    Preconditions.checkNotNull(stateBackend, "The state backend must not be null");
    Preconditions.checkArgument(maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM, "Maximum parallelism must be between 1 and " + UPPER_BOUND_MAX_PARALLELISM + ". Found: " + maxParallelism);
    SavepointMetadata metadata = new SavepointMetadata(maxParallelism, Collections.emptyList(), Collections.emptyList());
    return new NewSavepoint(metadata, stateBackend);
}
Also used: SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata)
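
A sketch of building and writing a new savepoint with this overload. SimpleBootstrapFunction, the uid "my-uid", the max parallelism of 128, and the output path are assumptions made for illustration, not part of the Flink sources.

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.StateBootstrapFunction;

public class WriteSavepointSketch {

    // A minimal bootstrap function that copies each input element into operator list state.
    public static class SimpleBootstrapFunction extends StateBootstrapFunction<Integer> {

        private ListState<Integer> state;

        @Override
        public void processElement(Integer value, Context ctx) throws Exception {
            state.add(value);
        }

        @Override
        public void snapshotState(FunctionSnapshotContext context) throws Exception {}

        @Override
        public void initializeState(FunctionInitializationContext context) throws Exception {
            state = context.getOperatorStateStore()
                    .getListState(new ListStateDescriptor<>("state", Types.INT));
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Integer> data = env.fromElements(1, 2, 3);

        // Bootstrap operator state from the data set using the function above.
        BootstrapTransformation<Integer> transformation =
                OperatorTransformation.bootstrapWith(data).transform(new SimpleBootstrapFunction());

        // Register the transformation under a uid and write the new savepoint to disk.
        Savepoint.create(new MemoryStateBackend(), 128)
                .withOperator("my-uid", transformation)
                .write("file:///tmp/new-savepoint");

        env.execute("write savepoint");
    }
}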

Example 4 with SavepointMetadata

Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in the Apache Flink project.

From the class Savepoint, method create.

/**
 * Creates a new savepoint. The savepoint will be written using the state backend defined via
 * the cluster's configuration.
 *
 * @param maxParallelism The max parallelism of the savepoint.
 * @return A new savepoint.
 * @see #create(StateBackend, int)
 */
public static NewSavepoint create(int maxParallelism) {
    Preconditions.checkArgument(maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM, "Maximum parallelism must be between 1 and " + UPPER_BOUND_MAX_PARALLELISM + ". Found: " + maxParallelism);
    SavepointMetadata metadata = new SavepointMetadata(maxParallelism, Collections.emptyList(), Collections.emptyList());
    return new NewSavepoint(metadata, null);
}
Also used: SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata)
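
A brief sketch with this overload; since no state backend is passed, keyed state in the new savepoint is handled by the backend from the cluster configuration. The uid, transformation, and output path are assumed for illustration.

// 'transformation' is assumed to be a BootstrapTransformation<Integer> built as in the
// previous sketch; "uid" and the target path are placeholders.
Savepoint.create(128)
        .withOperator("uid", transformation)
        .write("file:///tmp/new-savepoint");
env.execute("write savepoint");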

Example 5 with SavepointMetadata

Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in the Apache Flink project.

From the class SavepointTest, method testNewSavepointEnforceUniqueUIDs.

@Test(expected = IllegalArgumentException.class)
public void testNewSavepointEnforceUniqueUIDs() {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(10);
    DataSource<Integer> input = env.fromElements(0);
    BootstrapTransformation<Integer> transformation = OperatorTransformation.bootstrapWith(input).transform(new ExampleStateBootstrapFunction());
    SavepointMetadata metadata = new SavepointMetadata(1, Collections.emptyList(), Collections.emptyList());
    new NewSavepoint(metadata, new MemoryStateBackend()).withOperator(UID, transformation).withOperator(UID, transformation);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), MemoryStateBackend (org.apache.flink.runtime.state.memory.MemoryStateBackend), SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata), Test (org.junit.Test)
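
The test above asserts that registering two operators under the same uid fails with IllegalArgumentException. A small sketch of the valid pattern, assuming two already-built bootstrap transformations (transformationA, transformationB) and hypothetical uids:

SavepointMetadata metadata =
        new SavepointMetadata(1, Collections.emptyList(), Collections.emptyList());
// Each operator gets its own, unique uid; reusing a uid throws IllegalArgumentException.
new NewSavepoint(metadata, new MemoryStateBackend())
        .withOperator("uid-a", transformationA)
        .withOperator("uid-b", transformationB);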

Aggregations

SavepointMetadata (org.apache.flink.state.api.runtime.metadata.SavepointMetadata): 7 uses
OperatorState (org.apache.flink.runtime.checkpoint.OperatorState): 4 uses
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment): 3 uses
MemoryStateBackend (org.apache.flink.runtime.state.memory.MemoryStateBackend): 3 uses
Test (org.junit.Test): 3 uses
CheckpointMetadata (org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata): 2 uses