Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadataV2 in project flink by apache.
The class SavepointWriter, method newSavepoint.
/**
* Creates a new savepoint. The savepoint will be written using the state backend defined via
* the cluster's configuration.
*
* @param maxParallelism The max parallelism of the savepoint.
* @return A {@link SavepointWriter}.
* @see #newSavepoint(StateBackend, int)
* @see #withConfiguration(ConfigOption, Object)
*/
public static SavepointWriter newSavepoint(int maxParallelism) {
Preconditions.checkArgument(
        maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM,
        "Maximum parallelism must be between 1 and "
                + UPPER_BOUND_MAX_PARALLELISM
                + ". Found: "
                + maxParallelism);
SavepointMetadataV2 metadata =
        new SavepointMetadataV2(
                maxParallelism, Collections.emptyList(), Collections.emptyList());
return new SavepointWriter(metadata, null);
}
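A minimal usage sketch for this variant (the state backend comes from the cluster configuration). The uid "my-uid", the target path, and bootstrapTransformation (a StateBootstrapTransformation built elsewhere, e.g. via OperatorTransformation) are placeholders, and the exact withOperator/write signatures may differ between Flink versions.
// Sketch only: all identifiers below are illustrative, not part of the snippet above.
SavepointWriter writer = SavepointWriter.newSavepoint(128);
writer.withOperator("my-uid", bootstrapTransformation) // StateBootstrapTransformation built elsewhere
        .write("file:///tmp/new-savepoint");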
Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadataV2 in project flink by apache.
The class SavepointReader, method read.
/**
* Loads an existing savepoint. Useful if you want to query the state of an existing
* application.
*
* @param env The execution environment used to transform the savepoint.
* @param path The path to an existing savepoint on disk.
* @param stateBackend The state backend of the savepoint.
* @return A {@link SavepointReader}.
*/
public static SavepointReader read(StreamExecutionEnvironment env, String path, StateBackend stateBackend) throws IOException {
CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
int maxParallelism =
        metadata.getOperatorStates().stream()
                .map(OperatorState::getMaxParallelism)
                .max(Comparator.naturalOrder())
                .orElseThrow(
                        () ->
                                new RuntimeException(
                                        "Savepoint must contain at least one operator state."));
SavepointMetadataV2 savepointMetadata =
        new SavepointMetadataV2(
                maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
return new SavepointReader(env, savepointMetadata, stateBackend);
}
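A usage sketch under assumptions: HashMapStateBackend, the uid "my-uid", MyState, and MyKeyedStateReaderFunction are illustrative placeholders, and the reader method shown may carry a different signature in other Flink versions.
// Sketch: load a savepoint with an explicit state backend and read keyed state from one operator.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
SavepointReader savepoint =
        SavepointReader.read(env, "file:///tmp/existing-savepoint", new HashMapStateBackend());
DataStream<MyState> keyedState =
        savepoint.readKeyedState("my-uid", new MyKeyedStateReaderFunction()); // user-defined reader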
Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadataV2 in project flink by apache.
The class SavepointReader, method read.
/**
* Loads an existing savepoint. Useful if you want to query the state of an existing
* application. The savepoint will be read using the state backend defined via the cluster's
* configuration.
*
* @param env The execution environment used to transform the savepoint.
* @param path The path to an existing savepoint on disk.
* @return A {@link SavepointReader}.
*/
public static SavepointReader read(StreamExecutionEnvironment env, String path) throws IOException {
CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
int maxParallelism =
        metadata.getOperatorStates().stream()
                .map(OperatorState::getMaxParallelism)
                .max(Comparator.naturalOrder())
                .orElseThrow(
                        () ->
                                new RuntimeException(
                                        "Savepoint must contain at least one operator state."));
SavepointMetadataV2 savepointMetadata =
        new SavepointMetadataV2(
                maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
return new SavepointReader(env, savepointMetadata, null);
}
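For comparison, a sketch of the same flow without an explicit backend (the cluster-configured backend is used); the uid, state name, and element type are placeholders.
// Sketch: read operator list state; "list-state-uid", "state-name", and Types.LONG are illustrative.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
SavepointReader savepoint = SavepointReader.read(env, "file:///tmp/existing-savepoint");
DataStream<Long> listState =
        savepoint.readListState("list-state-uid", "state-name", Types.LONG);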
Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadataV2 in project flink by apache.
The class SavepointWriter, method newSavepoint.
/**
* Creates a new savepoint.
*
* @param stateBackend The state backend of the savepoint used for keyed state.
* @param maxParallelism The max parallelism of the savepoint.
* @return A {@link SavepointWriter}.
* @see #newSavepoint(int)
*/
public static SavepointWriter newSavepoint(StateBackend stateBackend, int maxParallelism) {
Preconditions.checkArgument(
        maxParallelism > 0 && maxParallelism <= UPPER_BOUND_MAX_PARALLELISM,
        "Maximum parallelism must be between 1 and "
                + UPPER_BOUND_MAX_PARALLELISM
                + ". Found: "
                + maxParallelism);
SavepointMetadataV2 metadata =
        new SavepointMetadataV2(
                maxParallelism, Collections.emptyList(), Collections.emptyList());
return new SavepointWriter(metadata, stateBackend);
}
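A sketch of the explicit-backend variant; EmbeddedRocksDBStateBackend is an assumption (it requires the RocksDB state backend dependency), and any StateBackend implementation could be passed instead.
// Sketch: pin the keyed-state backend instead of relying on the cluster configuration.
SavepointWriter writer =
        SavepointWriter.newSavepoint(new EmbeddedRocksDBStateBackend(), 128);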
Use of org.apache.flink.state.api.runtime.metadata.SavepointMetadataV2 in project flink by apache.
The class SavepointWriter, method fromExistingSavepoint.
/**
* Loads an existing savepoint. Useful if you want to modify or extend the state of an existing
* application.
*
* @param path The path to an existing savepoint on disk.
* @param stateBackend The state backend of the savepoint.
* @return A {@link SavepointWriter}.
* @see #fromExistingSavepoint(String)
*/
public static SavepointWriter fromExistingSavepoint(String path, StateBackend stateBackend) throws IOException {
CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
int maxParallelism =
        metadata.getOperatorStates().stream()
                .map(OperatorState::getMaxParallelism)
                .max(Comparator.naturalOrder())
                .orElseThrow(
                        () ->
                                new RuntimeException(
                                        "Savepoint must contain at least one operator state."));
SavepointMetadataV2 savepointMetadata =
        new SavepointMetadataV2(
                maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
return new SavepointWriter(savepointMetadata, stateBackend);
}
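A sketch of patching an existing savepoint; "obsolete-uid", "new-uid", the paths, and bootstrapTransformation are placeholders, and removeOperator/withOperator/write are assumed to be available with these signatures in this Flink version.
// Sketch: drop one operator's state, bootstrap state for a new uid, and write a new savepoint.
SavepointWriter writer =
        SavepointWriter.fromExistingSavepoint("file:///tmp/old-savepoint", new HashMapStateBackend());
writer.removeOperator("obsolete-uid")
        .withOperator("new-uid", bootstrapTransformation) // built via OperatorTransformation elsewhere
        .write("file:///tmp/patched-savepoint");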