Use of org.apache.flink.core.fs.RecoverableFsDataOutputStream in project flink by apache.
The class HadoopViewFileSystemTruncateTest, method getOpenStreamToFileWithContent.
private RecoverableFsDataOutputStream getOpenStreamToFileWithContent(
        final RecoverableWriter writerUnderTest,
        final org.apache.flink.core.fs.Path path,
        final String expectedContent) throws IOException {
    final byte[] content = expectedContent.getBytes(UTF_8);
    final RecoverableFsDataOutputStream streamUnderTest = writerUnderTest.open(path);
    streamUnderTest.write(content);
    return streamUnderTest;
}
Use of org.apache.flink.core.fs.RecoverableFsDataOutputStream in project flink by apache.
The class HadoopS3RecoverableWriterExceptionITCase, method testExceptionWritingAfterCloseForCommit.
@Test(expected = IOException.class)
public void testExceptionWritingAfterCloseForCommit() throws Exception {
    final Path path = new Path(basePathForTest, "part-0");
    final RecoverableFsDataOutputStream stream =
            getFileSystem().createRecoverableWriter().open(path);
    stream.write(testData1.getBytes(StandardCharsets.UTF_8));
    stream.closeForCommit().getRecoverable();
    // Writing after closeForCommit() must fail with an IOException.
    stream.write(testData2.getBytes(StandardCharsets.UTF_8));
}
Use of org.apache.flink.core.fs.RecoverableFsDataOutputStream in project flink by apache.
The class HadoopS3RecoverableWriterExceptionITCase, method testResumeAfterCommit.
// IMPORTANT FOR THE FOLLOWING TWO TESTS:
// These tests illustrate a difference in the user-perceived behavior of the different writers.
// In HDFS this will fail when trying to recover the stream, while here it will fail at "commit",
// i.e. when we try to "publish" the multipart upload and realize that the MPU is no longer
// active.
@Test(expected = IOException.class)
public void testResumeAfterCommit() throws Exception {
    final RecoverableWriter writer = getFileSystem().createRecoverableWriter();
    final Path path = new Path(basePathForTest, "part-0");
    final RecoverableFsDataOutputStream stream = writer.open(path);
    stream.write(testData1.getBytes(StandardCharsets.UTF_8));
    final RecoverableWriter.ResumeRecoverable recoverable = stream.persist();
    stream.write(testData2.getBytes(StandardCharsets.UTF_8));
    stream.closeForCommit().commit();
    // Recovering to a resume point after the commit must fail with an IOException,
    // because the multipart upload backing the stream no longer exists.
    final RecoverableFsDataOutputStream recoveredStream = writer.recover(recoverable);
    recoveredStream.closeForCommit().commit();
}
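The persist/recover/commit contract exercised above can be illustrated with a small self-contained sketch. The class below is a hypothetical in-memory analogue, not the real Flink API: persist() hands back a resume offset, recover() truncates the buffer to that offset, and closeForCommit() publishes the contents, after which neither writing nor recovering is allowed (mirroring the S3 behavior where the multipart upload is no longer active after commit).

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory analogue of the RecoverableFsDataOutputStream contract.
public class RecoverableStreamSketch {

    static class InMemoryRecoverableStream {
        private final StringBuilder buffer = new StringBuilder();
        private final Map<String, String> fileSystem;
        private final String path;
        private boolean committed = false;

        InMemoryRecoverableStream(Map<String, String> fileSystem, String path) {
            this.fileSystem = fileSystem;
            this.path = path;
        }

        void write(String data) throws IOException {
            if (committed) {
                throw new IOException("stream already closed for commit");
            }
            buffer.append(data);
        }

        // Everything written so far becomes durable; the returned offset is the resume point.
        int persist() {
            return buffer.length();
        }

        // Roll the stream back to a previously persisted resume point.
        void recover(int resumePoint) throws IOException {
            if (committed) {
                throw new IOException("cannot recover: the upload is no longer active");
            }
            buffer.setLength(resumePoint);
        }

        // Atomically publish the written contents under the target path.
        void closeForCommit() {
            committed = true;
            fileSystem.put(path, buffer.toString());
        }
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> fs = new HashMap<>();
        InMemoryRecoverableStream stream = new InMemoryRecoverableStream(fs, "part-0");

        stream.write("abc");
        int recoverable = stream.persist();   // resume point after "abc"
        stream.write("def");
        stream.closeForCommit();
        System.out.println(fs.get("part-0")); // prints abcdef

        // As in testResumeAfterCommit: recovering after the commit fails.
        boolean recoverFailed = false;
        try {
            stream.recover(recoverable);
        } catch (IOException e) {
            recoverFailed = true;
        }
        System.out.println(recoverFailed);    // prints true
    }
}
```

The sketch only models the ordering rules of the contract; the real S3 writer backs persist() with completed multipart-upload parts rather than an in-memory buffer.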
Use of org.apache.flink.core.fs.RecoverableFsDataOutputStream in project flink by apache.
The class HadoopS3RecoverableWriterITCase, method testCommitAfterPersist.
@Test
public void testCommitAfterPersist() throws Exception {
    final RecoverableWriter writer = getRecoverableWriter();
    final Path path = new Path(basePathForTest, "part-0");
    final RecoverableFsDataOutputStream stream = writer.open(path);
    stream.write(bytesOf(testData1));
    stream.persist();
    stream.write(bytesOf(testData2));
    stream.closeForCommit().commit();
    Assert.assertEquals(testData1 + testData2, getContentsOfFile(path));
}
Use of org.apache.flink.core.fs.RecoverableFsDataOutputStream in project flink by apache.
The class HadoopS3RecoverableWriterITCase, method testCloseWithNoData.
// ----------------------- Test Normal Execution -----------------------
@Test
public void testCloseWithNoData() throws Exception {
    final RecoverableWriter writer = getRecoverableWriter();
    final Path path = new Path(basePathForTest, "part-0");
    final RecoverableFsDataOutputStream stream = writer.open(path);
    stream.closeForCommit().commit();
}