Use of org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata in project flink by apache.
The class OperatorCoordinatorSchedulerTest, method serializeAsCheckpointMetadata:
private static byte[] serializeAsCheckpointMetadata(OperatorID id, byte[] coordinatorState) throws IOException {
    final OperatorState state = createOperatorState(id, coordinatorState);
    final CheckpointMetadata metadata =
            new CheckpointMetadata(1337L, Collections.singletonList(state), Collections.emptyList());
    final ByteArrayOutputStream out = new ByteArrayOutputStream();
    Checkpoints.storeCheckpointMetadata(metadata, out);
    return out.toByteArray();
}
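The pattern above writes metadata into an in-memory stream and hands the resulting byte array around. A minimal, self-contained sketch of that round trip, using plain DataOutputStream/DataInputStream as a hypothetical stand-in for Checkpoints.storeCheckpointMetadata (the field layout and magic number here are illustrative, not Flink's actual on-disk format):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MetadataBytesDemo {
    public static void main(String[] args) throws IOException {
        // Serialize a header plus a checkpoint ID into a byte array.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeInt(0x4960672d); // illustrative magic number, not Flink's real header
        dos.writeLong(1337L);     // the checkpoint ID used in the test above
        dos.flush();

        // Reading the bytes back recovers the same fields, confirming the round trip.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(out.toByteArray()));
        System.out.println(Integer.toHexString(in.readInt()));
        System.out.println(in.readLong());
    }
}
```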
Use of org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata in project flink by apache.
The class SavepointOutputFormatTest, method createSavepoint:
private CheckpointMetadata createSavepoint() {
    OperatorState operatorState = new OperatorState(OperatorIDGenerator.fromUid("uid"), 1, 128);
    operatorState.putState(0, OperatorSubtaskState.builder().build());
    return new CheckpointMetadata(0, Collections.singleton(operatorState), Collections.emptyList());
}
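OperatorIDGenerator.fromUid derives the operator ID by hashing the uid string, so the same uid always maps to the same 16-byte ID. A hedged stand-in using SHA-256 (the real generator uses a different hash function; only the determinism property is being demonstrated):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class UidHashDemo {
    // Hypothetical stand-in for OperatorIDGenerator.fromUid: hash the uid and
    // keep 16 bytes, matching the width of a Flink operator ID.
    static String idFromUid(String uid) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(uid.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < 16; i++) {
            hex.append(String.format("%02x", digest[i]));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Determinism is the property that matters: identical uids, identical IDs.
        System.out.println(idFromUid("uid").equals(idFromUid("uid")));
        System.out.println(idFromUid("uid").equals(idFromUid("other")));
    }
}
```

Because the mapping is deterministic, state written for an operator with a given uid can later be matched back to that operator when the savepoint is read.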
Use of org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata in project flink by apache.
The class SavepointReader, method read:
/**
 * Loads an existing savepoint. Useful if you want to query the state of an existing
 * application. The savepoint will be read using the state backend defined via the
 * cluster's configuration.
 *
 * @param env The execution environment used to transform the savepoint.
 * @param path The path to an existing savepoint on disk.
 * @return A {@link SavepointReader}.
 */
public static SavepointReader read(StreamExecutionEnvironment env, String path) throws IOException {
    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException("Savepoint must contain at least one operator state."));
    SavepointMetadataV2 savepointMetadata =
            new SavepointMetadataV2(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
    return new SavepointReader(env, savepointMetadata, null);
}
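The max-parallelism computation reduces the per-operator values to a single maximum for the whole savepoint, failing fast when the savepoint is empty. A minimal sketch of that stream pipeline in isolation (the integer values are stand-ins for OperatorState#getMaxParallelism):

```java
import java.util.Comparator;
import java.util.List;

public class MaxParallelismDemo {
    public static void main(String[] args) {
        // Stand-in max-parallelism values for three operators; the reader
        // adopts the largest one for the whole savepoint.
        List<Integer> maxParallelisms = List.of(128, 256, 64);
        int max = maxParallelisms.stream()
                .max(Comparator.naturalOrder())
                .orElseThrow(() -> new RuntimeException(
                        "Savepoint must contain at least one operator state."));
        System.out.println(max); // 256
    }
}
```

An empty list would trigger the orElseThrow branch instead, which is exactly the "Savepoint must contain at least one operator state" error in the method above.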
Use of org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata in project flink by apache.
The class MergeOperatorStates, method reduce:
@Override
public void reduce(Iterable<OperatorState> values, Collector<CheckpointMetadata> out) {
    CheckpointMetadata metadata = new CheckpointMetadata(
            SnapshotUtils.CHECKPOINT_ID,
            StreamSupport.stream(values.spliterator(), false).collect(Collectors.toList()),
            masterStates);
    out.collect(metadata);
}
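The StreamSupport call is needed because an Iterable, unlike a Collection, has no stream() method of its own. A self-contained sketch of the same bridge, with strings standing in for the OperatorState values:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public class IterableToListDemo {
    public static void main(String[] args) {
        // reduce() receives a bare Iterable; StreamSupport turns its
        // spliterator into a (sequential) stream so it can be collected.
        Iterable<String> values = List.of("op-a", "op-b", "op-c");
        List<String> collected = StreamSupport
                .stream(values.spliterator(), false) // false = not parallel
                .collect(Collectors.toList());
        System.out.println(collected); // [op-a, op-b, op-c]
    }
}
```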
Use of org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata in project flink by apache.
The class SavepointWriter, method fromExistingSavepoint:
/**
* Loads an existing savepoint. Useful if you want to modify or extend the state of an existing
* application.
*
* @param path The path to an existing savepoint on disk.
* @param stateBackend The state backend of the savepoint.
* @return A {@link SavepointWriter}.
* @see #fromExistingSavepoint(String)
*/
public static SavepointWriter fromExistingSavepoint(String path, StateBackend stateBackend) throws IOException {
    CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path);
    int maxParallelism = metadata.getOperatorStates().stream()
            .map(OperatorState::getMaxParallelism)
            .max(Comparator.naturalOrder())
            .orElseThrow(() -> new RuntimeException("Savepoint must contain at least one operator state."));
    SavepointMetadataV2 savepointMetadata =
            new SavepointMetadataV2(maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates());
    return new SavepointWriter(savepointMetadata, stateBackend);
}