Use of org.apache.flink.formats.compress.extractor.DefaultExtractor in project flink by apache.
From the class CompressionFactoryITCase, method testWriteCompressedFile:
@Test
public void testWriteCompressedFile() throws Exception {
    final File folder = TEMPORARY_FOLDER.newFolder();
    final Path testPath = Path.fromLocalFile(folder);

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    env.enableCheckpointing(100);

    DataStream<String> stream =
            env.addSource(new FiniteTestSource<>(testData), TypeInformation.of(String.class));
    stream.map(str -> str)
            .addSink(
                    StreamingFileSink.forBulkFormat(
                                    testPath,
                                    CompressWriters.forExtractor(new DefaultExtractor<String>())
                                            .withHadoopCompression(TEST_CODEC_NAME))
                            .withBucketAssigner(new UniqueBucketAssigner<>("test"))
                            .build());
    env.execute();

    validateResults(folder, testData, new CompressionCodecFactory(configuration).getCodecByName(TEST_CODEC_NAME));
}
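DefaultExtractor serializes each record as element.toString() followed by a newline. Where that formatting is not sufficient, a custom extractor can be passed to CompressWriters.forExtractor instead. The sketch below is a minimal example, assuming the Extractor interface exposes a single extract(element) method returning the bytes to compress (which matches DefaultExtractor's observable behavior); the class name TupleLineExtractor and the tab-separated layout are hypothetical.

import java.nio.charset.StandardCharsets;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.formats.compress.extractor.Extractor;

/** Hypothetical extractor: renders each (key, count) tuple as one tab-separated line. */
public class TupleLineExtractor implements Extractor<Tuple2<String, Long>> {

    @Override
    public byte[] extract(Tuple2<String, Long> element) {
        // Emit "key<TAB>count\n" instead of relying on Tuple2.toString().
        return (element.f0 + "\t" + element.f1 + "\n").getBytes(StandardCharsets.UTF_8);
    }
}

Such an extractor would then be wired in the same way as DefaultExtractor above, e.g. CompressWriters.forExtractor(new TupleLineExtractor()).withHadoopCompression(TEST_CODEC_NAME).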
Use of org.apache.flink.formats.compress.extractor.DefaultExtractor in project flink by apache.
From the class CompressWriterFactoryTest, method testCompressByName:
private void testCompressByName(String codec, Configuration conf) throws Exception {
    CompressWriterFactory<String> writer =
            CompressWriters.forExtractor(new DefaultExtractor<String>())
                    .withHadoopCompression(codec, conf);
    List<String> lines = Arrays.asList("line1", "line2", "line3");

    File directory = prepareCompressedFile(writer, lines);

    validateResults(directory, lines, new CompressionCodecFactory(conf).getCodecByName(codec));
}
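The helpers prepareCompressedFile and validateResults are not shown above. For context, the following standalone sketch drives a CompressWriterFactory directly through Flink's BulkWriter interface, which is roughly what such a helper would do; the output path file:///tmp/compress-writer-sketch/part-0, the "Gzip" codec name, and the class name CompressWriterSketch are assumptions for illustration only.

import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.compress.CompressWriterFactory;
import org.apache.flink.formats.compress.CompressWriters;
import org.apache.flink.formats.compress.extractor.DefaultExtractor;

public class CompressWriterSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical local target file; adjust to the environment at hand.
        Path target = new Path("file:///tmp/compress-writer-sketch/part-0");

        // The factory turns each record into bytes via DefaultExtractor (toString plus newline)
        // and routes them through the named Hadoop codec.
        CompressWriterFactory<String> factory =
                CompressWriters.forExtractor(new DefaultExtractor<String>())
                        .withHadoopCompression("Gzip");

        List<String> lines = Arrays.asList("line1", "line2", "line3");

        FileSystem fs = target.getFileSystem();
        try (FSDataOutputStream out = fs.create(target, FileSystem.WriteMode.OVERWRITE)) {
            BulkWriter<String> writer = factory.create(out);
            for (String line : lines) {
                writer.addElement(line);
            }
            writer.finish(); // flushes and finalizes the compressed stream
        }
    }
}

Validation, as in the tests above, would read the file back through the matching Hadoop CompressionCodec and compare the decompressed lines against the input.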