Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadata in project flink by apache: class Savepoint, method load.
/**
* Loads an existing savepoint. Useful if you want to query, modify, or extend the state of an
* existing application.
*
* @param env The execution environment used to transform the savepoint.
* @param path The path to an existing savepoint on disk.
* @param stateBackend The state backend of the savepoint.
* @see #load(ExecutionEnvironment, String)
*/
public static ExistingSavepoint load(ExecutionEnvironment env, String path, StateBackend stateBackend) throws IOException {
    Preconditions.checkNotNull(stateBackend, "The state backend must not be null");

    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);

    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException("Savepoint must contain at least one operator state."));

    SavepointMetadata savepointMetadata =
            new SavepointMetadata(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());

    return new ExistingSavepoint(env, savepointMetadata, stateBackend);
}
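The max-parallelism derivation inside load can be sketched in isolation with plain Java streams. The list of integers below is a hypothetical stand-in for the OperatorState#getMaxParallelism values read from the savepoint metadata; MaxParallelismSketch and deriveMaxParallelism are illustrative names, not part of the Flink API.

```java
import java.util.Comparator;
import java.util.List;

public class MaxParallelismSketch {

    // Hypothetical stand-in for the values returned by
    // OperatorState#getMaxParallelism across all operator states.
    static int deriveMaxParallelism(List<Integer> operatorMaxParallelisms) {
        // The savepoint-wide max parallelism is the largest per-operator
        // value; an empty savepoint is rejected, as in the method above.
        return operatorMaxParallelisms.stream()
                .max(Comparator.naturalOrder())
                .orElseThrow(() -> new RuntimeException(
                        "Savepoint must contain at least one operator state."));
    }

    public static void main(String[] args) {
        System.out.println(deriveMaxParallelism(List.of(128, 4096, 256)));
    }
}
```

This mirrors why load never asks the caller for a max parallelism: it is always recoverable from the operator states already in the metadata.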
Class Savepoint, method load (second overload).
/**
* Loads an existing savepoint. Useful if you want to query, modify, or extend the state of an
 * existing application. The savepoint will be read using the state backend defined via the
 * cluster's configuration.
*
* @param env The execution environment used to transform the savepoint.
* @param path The path to an existing savepoint on disk.
* @see #load(ExecutionEnvironment, String, StateBackend)
*/
public static ExistingSavepoint load(ExecutionEnvironment env, String path) throws IOException {
    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);

    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException("Savepoint must contain at least one operator state."));

    SavepointMetadata savepointMetadata =
            new SavepointMetadata(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());

    return new ExistingSavepoint(env, savepointMetadata, null);
}
Class Savepoint, method create.
/**
* Creates a new savepoint.
*
* @param stateBackend The state backend of the savepoint used for keyed state.
* @param maxParallelism The max parallelism of the savepoint.
* @return A new savepoint.
* @see #create(int)
*/
public static NewSavepoint create(StateBackend stateBackend, int maxParallelism) {
    Preconditions.checkNotNull(stateBackend, "The state backend must not be null");
    Preconditions.checkArgument(
            maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM,
            "Maximum parallelism must be between 1 and " + UPPER_BOUND_MAX_PARALLELISM + ". Found: " + maxParallelism);

    SavepointMetadata metadata =
            new SavepointMetadata(maxParallelism, Collections.emptyList(), Collections.emptyList());

    return new NewSavepoint(metadata, stateBackend);
}
Class Savepoint, method create (second overload).
/**
 * Creates a new savepoint. The savepoint will be written using the state backend defined via
 * the cluster's configuration.
*
* @param maxParallelism The max parallelism of the savepoint.
* @return A new savepoint.
* @see #create(StateBackend, int)
*/
public static NewSavepoint create(int maxParallelism) {
    Preconditions.checkArgument(
            maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM,
            "Maximum parallelism must be between 1 and " + UPPER_BOUND_MAX_PARALLELISM + ". Found: " + maxParallelism);

    SavepointMetadata metadata =
            new SavepointMetadata(maxParallelism, Collections.emptyList(), Collections.emptyList());

    return new NewSavepoint(metadata, null);
}
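The bounds check shared by both create overloads can be sketched without Flink on the classpath. This is a minimal stand-in for the Preconditions.checkArgument call: MaxParallelismCheck and checkMaxParallelism are illustrative names, and the 1 << 15 upper bound is an assumption mirroring Flink's KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM.

```java
public class MaxParallelismCheck {

    // Assumed to match Flink's KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM.
    static final int UPPER_BOUND_MAX_PARALLELISM = 1 << 15; // 32768

    // Returns the value unchanged when it is in range, otherwise rejects it
    // with the same message shape as the checkArgument call above.
    static int checkMaxParallelism(int maxParallelism) {
        if (maxParallelism <= 0 || maxParallelism > UPPER_BOUND_MAX_PARALLELISM) {
            throw new IllegalArgumentException(
                    "Maximum parallelism must be between 1 and "
                            + UPPER_BOUND_MAX_PARALLELISM
                            + ". Found: " + maxParallelism);
        }
        return maxParallelism;
    }
}
```

Validating eagerly here means a bad max parallelism fails at savepoint construction rather than later, when operators are assigned key groups.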
Class SavepointTest, method testNewSavepointEnforceUniqueUIDs.
@Test(expected = IllegalArgumentException.class)
public void testNewSavepointEnforceUniqueUIDs() {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(10);

    DataSource<Integer> input = env.fromElements(0);

    BootstrapTransformation<Integer> transformation = OperatorTransformation
            .bootstrapWith(input)
            .transform(new ExampleStateBootstrapFunction());

    SavepointMetadata metadata = new SavepointMetadata(1, Collections.emptyList(), Collections.emptyList());

    new NewSavepoint(metadata, new MemoryStateBackend())
            .withOperator(UID, transformation)
            .withOperator(UID, transformation);
}
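The uniqueness rule this test exercises can be sketched with a plain set. UidRegistry and register are hypothetical names, not part of the Flink API; the point is only that registering the same uid twice fails with IllegalArgumentException, matching what the second withOperator call is expected to do above.

```java
import java.util.HashSet;
import java.util.Set;

public class UidRegistry {

    private final Set<String> seen = new HashSet<>();

    // Set#add returns false when the element was already present, so a
    // repeated uid is detected in one call and rejected immediately.
    void register(String uid) {
        if (!seen.add(uid)) {
            throw new IllegalArgumentException("Duplicate uid: " + uid);
        }
    }
}
```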