
Example 1 with FileResult

Use of org.apache.beam.sdk.io.FileBasedSink.FileResult in project beam by apache.

From the class FileBasedSinkTest, method testFileBasedWriterWithWritableByteChannelFactory.

/**
   * {@link Writer} writes to the {@link WritableByteChannel} provided by {@link
   * DrunkWritableByteChannelFactory}.
   */
@Test
public void testFileBasedWriterWithWritableByteChannelFactory() throws Exception {
    final String testUid = "testId";
    ResourceId root = getBaseOutputDirectory();
    WriteOperation<String> writeOp = new SimpleSink(root, "file", "-SS-of-NN", "txt", new DrunkWritableByteChannelFactory()).createWriteOperation();
    final Writer<String> writer = writeOp.createWriter();
    final ResourceId expectedFile = writeOp.tempDirectory.get().resolve(testUid, StandardResolveOptions.RESOLVE_FILE);
    final List<String> expected = new ArrayList<>();
    expected.add("header");
    expected.add("header");
    expected.add("a");
    expected.add("a");
    expected.add("b");
    expected.add("b");
    expected.add("footer");
    expected.add("footer");
    writer.openUnwindowed(testUid, -1);
    writer.write("a");
    writer.write("b");
    final FileResult result = writer.close();
    assertEquals(expectedFile, result.getTempFilename());
    assertFileContains(expected, expectedFile);
}
Also used: FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult), ResourceId (org.apache.beam.sdk.io.fs.ResourceId), ArrayList (java.util.ArrayList), Test (org.junit.Test)
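
The expected list above contains every entry twice because DrunkWritableByteChannelFactory duplicates whatever is written through it. As a rough illustration of that behavior, here is a minimal, self-contained sketch of a duplicating channel in plain java.nio; the class name DuplicatingChannel is hypothetical and not part of Beam:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

/** Hypothetical wrapper channel that writes every buffer twice. */
final class DuplicatingChannel implements WritableByteChannel {
    private final WritableByteChannel delegate;

    DuplicatingChannel(WritableByteChannel delegate) {
        this.delegate = delegate;
    }

    @Override
    public int write(ByteBuffer src) throws IOException {
        // First pass writes a duplicate so the caller's buffer position is untouched.
        delegate.write(src.duplicate());
        // Second pass consumes the caller's buffer and reports the bytes consumed.
        return delegate.write(src);
    }

    @Override
    public boolean isOpen() {
        return delegate.isOpen();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}

Wrapping the sink's underlying channel in something like this is why "header", "a", "b", and "footer" each appear twice in the assertion.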

Example 2 with FileResult

Use of org.apache.beam.sdk.io.FileBasedSink.FileResult in project beam by apache.

From the class WriteFiles, method createWrite.

/**
   * A write is performed as a sequence of three {@link ParDo}'s.
   *
   * <p>The first is a do-once ParDo that initializes the {@link WriteOperation} and produces a
   * singleton collection containing it. This singleton collection is then used as a side
   * input to a ParDo over the PCollection of elements to write. In this bundle-writing phase,
   * {@link WriteOperation#createWriter} is called to obtain a {@link Writer}.
   * {@link Writer#open} and {@link Writer#close} are called in
   * {@link DoFn.StartBundle} and {@link DoFn.FinishBundle}, respectively, and the
   * {@link Writer#write} method is called for every element in the bundle. The output
   * of this ParDo is a PCollection of <i>writer result</i> objects (see {@link FileBasedSink}
   * for a description of writer results), one for each bundle.
   *
   * <p>The final do-once ParDo uses a singleton collection as input and the collection of writer
   * results as a side input. In this ParDo, {@link WriteOperation#finalize} is called
   * to finalize the write.
   *
   * <p>If the write of any element in the PCollection fails, {@link Writer#close} will be
   * called before the exception that caused the write to fail is propagated, and the write result
   * will be discarded.
   *
   * <p>Since the {@link WriteOperation} is serialized after the initialization ParDo and
   * deserialized in the bundle-writing and finalization phases, any state change to the
   * WriteOperation object that occurs during initialization is visible in the latter
   * phases. However, the WriteOperation is not serialized after the bundle-writing
   * phase. This is why implementations should guarantee that
   * {@link WriteOperation#createWriter} does not mutate the WriteOperation.
   */
private PDone createWrite(PCollection<T> input) {
    Pipeline p = input.getPipeline();
    if (!windowedWrites) {
        // Re-window the data into the global window and remove any existing triggers.
        input = input.apply(Window.<T>into(new GlobalWindows()).triggering(DefaultTrigger.of()).discardingFiredPanes());
    }
    // Perform the per-bundle writes as a ParDo on the input PCollection (with the
    // WriteOperation as a side input) and collect the results of the writes in a
    // PCollection. There is a dependency between this ParDo and the first (the
    // WriteOperation PCollection as a side input), so this will happen after the
    // initial ParDo.
    PCollection<FileResult> results;
    final PCollectionView<Integer> numShardsView;
    Coder<BoundedWindow> shardedWindowCoder = (Coder<BoundedWindow>) input.getWindowingStrategy().getWindowFn().windowCoder();
    if (computeNumShards == null && numShardsProvider == null) {
        numShardsView = null;
        results = input.apply("WriteBundles", ParDo.of(windowedWrites ? new WriteWindowedBundles() : new WriteUnwindowedBundles()));
    } else {
        List<PCollectionView<?>> sideInputs = Lists.newArrayList();
        if (computeNumShards != null) {
            numShardsView = input.apply(computeNumShards);
            sideInputs.add(numShardsView);
        } else {
            numShardsView = null;
        }
        PCollection<KV<Integer, Iterable<T>>> sharded = input.apply("ApplyShardLabel", ParDo.of(new ApplyShardingKey<T>(numShardsView, (numShardsView != null) ? null : numShardsProvider)).withSideInputs(sideInputs)).apply("GroupIntoShards", GroupByKey.<Integer, T>create());
        shardedWindowCoder = (Coder<BoundedWindow>) sharded.getWindowingStrategy().getWindowFn().windowCoder();
        results = sharded.apply("WriteShardedBundles", ParDo.of(new WriteShardedBundles()));
    }
    results.setCoder(FileResultCoder.of(shardedWindowCoder));
    if (windowedWrites) {
        // When processing streaming windowed writes, results will arrive multiple times. This
        // means we can't share the below implementation that turns the results into a side input,
        // as new data arriving into a side input does not trigger the listening DoFn. Instead
        // we aggregate the result set using a singleton GroupByKey, so the DoFn will be triggered
        // whenever new data arrives.
        PCollection<KV<Void, FileResult>> keyedResults = results.apply("AttachSingletonKey", WithKeys.<Void, FileResult>of((Void) null));
        keyedResults.setCoder(KvCoder.of(VoidCoder.of(), FileResultCoder.of(shardedWindowCoder)));
        // Is the continuation trigger sufficient?
        keyedResults.apply("FinalizeGroupByKey", GroupByKey.<Void, FileResult>create()).apply("Finalize", ParDo.of(new DoFn<KV<Void, Iterable<FileResult>>, Integer>() {

            @ProcessElement
            public void processElement(ProcessContext c) throws Exception {
                LOG.info("Finalizing write operation {}.", writeOperation);
                List<FileResult> results = Lists.newArrayList(c.element().getValue());
                writeOperation.finalize(results);
                LOG.debug("Done finalizing write operation");
            }
        }));
    } else {
        final PCollectionView<Iterable<FileResult>> resultsView = results.apply(View.<FileResult>asIterable());
        ImmutableList.Builder<PCollectionView<?>> sideInputs = ImmutableList.<PCollectionView<?>>builder().add(resultsView);
        if (numShardsView != null) {
            sideInputs.add(numShardsView);
        }
        // Finalize the write in another do-once ParDo on a singleton collection. The results
        // from the per-bundle writes are given as an Iterable side input. The WriteOperation's
        // state is the same as after its initialization in the first do-once ParDo. There is a
        // dependency between this ParDo and the parallel write (the writer results collection as
        // a side input), so it will happen after the parallel write. For the non-windowed case,
        // we guarantee that if no data is written but the user has set numShards, then all
        // shards will be written out as empty files. For this reason we use a side input here.
        PCollection<Void> singletonCollection = p.apply(Create.of((Void) null));
        singletonCollection.apply("Finalize", ParDo.of(new DoFn<Void, Integer>() {

            @ProcessElement
            public void processElement(ProcessContext c) throws Exception {
                LOG.info("Finalizing write operation {}.", writeOperation);
                List<FileResult> results = Lists.newArrayList(c.sideInput(resultsView));
                LOG.debug("Side input initialized to finalize write operation {}.", writeOperation);
                // We must always output at least 1 shard, and honor user-specified numShards if
                // set.
                int minShardsNeeded;
                if (numShardsView != null) {
                    minShardsNeeded = c.sideInput(numShardsView);
                } else if (numShardsProvider != null) {
                    minShardsNeeded = numShardsProvider.get();
                } else {
                    minShardsNeeded = 1;
                }
                int extraShardsNeeded = minShardsNeeded - results.size();
                if (extraShardsNeeded > 0) {
                    LOG.info("Creating {} empty output shards in addition to {} written for a total of {}.", extraShardsNeeded, results.size(), minShardsNeeded);
                    for (int i = 0; i < extraShardsNeeded; ++i) {
                        Writer<T> writer = writeOperation.createWriter();
                        writer.openUnwindowed(UUID.randomUUID().toString(), UNKNOWN_SHARDNUM);
                        FileResult emptyWrite = writer.close();
                        results.add(emptyWrite);
                    }
                    LOG.debug("Done creating extra shards.");
                }
                writeOperation.finalize(results);
                LOG.debug("Done finalizing write operation {}", writeOperation);
            }
        }).withSideInputs(sideInputs.build()));
    }
    return PDone.in(input.getPipeline());
}
Also used: ImmutableList (com.google.common.collect.ImmutableList), BoundedWindow (org.apache.beam.sdk.transforms.windowing.BoundedWindow), List (java.util.List), Coder (org.apache.beam.sdk.coders.Coder), KvCoder (org.apache.beam.sdk.coders.KvCoder), FileResultCoder (org.apache.beam.sdk.io.FileBasedSink.FileResultCoder), VoidCoder (org.apache.beam.sdk.coders.VoidCoder), GlobalWindows (org.apache.beam.sdk.transforms.windowing.GlobalWindows), KV (org.apache.beam.sdk.values.KV), Pipeline (org.apache.beam.sdk.Pipeline), PCollectionView (org.apache.beam.sdk.values.PCollectionView), DoFn (org.apache.beam.sdk.transforms.DoFn), FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult), Writer (org.apache.beam.sdk.io.FileBasedSink.Writer)
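
WriteFiles is normally applied indirectly: file-based transforms such as TextIO expand to WriteFiles over a FileBasedSink, which is how createWrite above gets exercised. A minimal usage sketch; the class name WriteFilesUsage and the output path are placeholders:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class WriteFilesUsage {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // TextIO.write() expands to WriteFiles over a text-file FileBasedSink.
        // withNumShards(3) takes the numShardsProvider branch of createWrite above,
        // so elements are grouped into three shards before the sharded bundle write.
        p.apply(Create.of("a", "b", "c"))
         .apply(TextIO.write().to("/tmp/output/part").withNumShards(3));
        p.run().waitUntilFinish();
    }
}

Leaving withNumShards unset instead takes the first branch of createWrite, where each bundle writes its own file and no sharding GroupByKey is inserted.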

Example 3 with FileResult

Use of org.apache.beam.sdk.io.FileBasedSink.FileResult in project beam by apache.

From the class FileBasedSinkTest, method testWriter.

/**
   * Writer opens the correct file, writes the header, footer, and elements in the correct
   * order, and returns the correct filename.
   */
@Test
public void testWriter() throws Exception {
    String testUid = "testId";
    ResourceId expectedTempFile = getBaseTempDirectory().resolve(testUid, StandardResolveOptions.RESOLVE_FILE);
    List<String> values = Arrays.asList("sympathetic vulture", "boresome hummingbird");
    List<String> expected = new ArrayList<>();
    expected.add(SimpleSink.SimpleWriter.HEADER);
    expected.addAll(values);
    expected.add(SimpleSink.SimpleWriter.FOOTER);
    SimpleSink.SimpleWriter writer = buildWriteOperationWithTempDir(getBaseTempDirectory()).createWriter();
    writer.openUnwindowed(testUid, -1);
    for (String value : values) {
        writer.write(value);
    }
    FileResult result = writer.close();
    assertEquals(expectedTempFile, result.getTempFilename());
    assertFileContains(expected, expectedTempFile);
}
Also used: FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult), ResourceId (org.apache.beam.sdk.io.fs.ResourceId), ArrayList (java.util.ArrayList), Test (org.junit.Test)
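
The ordering this test asserts (header first, elements in write order, footer last) is easy to state outside Beam. A minimal plain-Java sketch of the same discipline, assuming one line per record; the class name HeaderFooterWriter is hypothetical:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

final class HeaderFooterWriter {
    /** Writes the header, then each value, then the footer, one line each. */
    static void writeAll(Path file, String header, List<String> values, String footer)
            throws IOException {
        try (BufferedWriter out = Files.newBufferedWriter(file, StandardCharsets.UTF_8)) {
            out.write(header);
            out.newLine();
            for (String value : values) {
                out.write(value);
                out.newLine();
            }
            out.write(footer);
            out.newLine();
        }
    }
}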

Example 4 with FileResult

Use of org.apache.beam.sdk.io.FileBasedSink.FileResult in project beam by apache.

From the class FileBasedSinkTest, method runFinalize.

/** Finalize and verify that files are copied and temporary files are optionally removed. */
private void runFinalize(SimpleSink.SimpleWriteOperation writeOp, List<File> temporaryFiles) throws Exception {
    int numFiles = temporaryFiles.size();
    List<FileResult> fileResults = new ArrayList<>();
    // Create temporary output bundles and output File objects.
    for (int i = 0; i < numFiles; i++) {
        fileResults.add(new FileResult(LocalResources.fromFile(temporaryFiles.get(i), false), WriteFiles.UNKNOWN_SHARDNUM, null, null));
    }
    writeOp.finalize(fileResults);
    ResourceId outputDirectory = writeOp.getSink().getBaseOutputDirectoryProvider().get();
    for (int i = 0; i < numFiles; i++) {
        ResourceId outputFilename = writeOp.getSink().getFilenamePolicy().unwindowedFilename(outputDirectory, new Context(i, numFiles), "");
        assertTrue(new File(outputFilename.toString()).exists());
        assertFalse(temporaryFiles.get(i).exists());
    }
    assertFalse(new File(writeOp.tempDirectory.get().toString()).exists());
    // Test that repeated requests of the temp directory return a stable result.
    assertEquals(writeOp.tempDirectory.get(), writeOp.tempDirectory.get());
}
Also used: Context (org.apache.beam.sdk.io.FileBasedSink.FilenamePolicy.Context), FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult), ResourceId (org.apache.beam.sdk.io.fs.ResourceId), ArrayList (java.util.ArrayList), File (java.io.File)
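
These tests build sinks with shard templates like "-SS-of-NN" (see Example 1), and unwindowedFilename receives a Context carrying the shard index and shard count. A hypothetical helper showing how such a template expands; the names here are for illustration only, not the actual FilenamePolicy code:

final class ShardNames {
    /** Hypothetical expansion of the "-SS-of-NN" template used by these tests. */
    static String shardName(String prefix, int shard, int numShards, String extension) {
        // SS is the zero-padded shard index, NN is the zero-padded shard count.
        return String.format("%s-%02d-of-%02d.%s", prefix, shard, numShards, extension);
    }
}

// ShardNames.shardName("file", 0, 3, "txt") returns "file-00-of-03.txt"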

Example 5 with FileResult

Use of org.apache.beam.sdk.io.FileBasedSink.FileResult in project beam by apache.

From the class FileBasedSinkTest, method testCollidingOutputFilenames.

/** Reject non-distinct output filenames. */
@Test
public void testCollidingOutputFilenames() throws IOException {
    ResourceId root = getBaseOutputDirectory();
    SimpleSink sink = new SimpleSink(root, "file", "-NN", "test");
    SimpleSink.SimpleWriteOperation writeOp = new SimpleSink.SimpleWriteOperation(sink);
    ResourceId temp1 = root.resolve("temp1", StandardResolveOptions.RESOLVE_FILE);
    ResourceId temp2 = root.resolve("temp2", StandardResolveOptions.RESOLVE_FILE);
    ResourceId temp3 = root.resolve("temp3", StandardResolveOptions.RESOLVE_FILE);
    ResourceId output = root.resolve("file-03.test", StandardResolveOptions.RESOLVE_FILE);
    // More than one shard mapping to the same output filename must be rejected.
    try {
        Iterable<FileResult> results = Lists.newArrayList(new FileResult(temp1, 1, null, null), new FileResult(temp2, 1, null, null), new FileResult(temp3, 1, null, null));
        writeOp.buildOutputFilenames(results);
        fail("Should have failed.");
    } catch (IllegalStateException exn) {
        assertEquals("Only generated 1 distinct file names for 3 files.", exn.getMessage());
    }
}
Also used: FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult), ResourceId (org.apache.beam.sdk.io.fs.ResourceId), Test (org.junit.Test)
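
The collision arises because the sink's suffix template "-NN" contains no shard-index placeholder, so all three FileResults map to the same name, file-03.test. The check the test exercises can be sketched as follows (assumed logic, not the actual WriteOperation implementation; Guava's checkState produces the asserted message format):

import static com.google.common.base.Preconditions.checkState;

import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class FilenameChecks {
    /** Fails if the generated output filenames are not all distinct. */
    static void checkDistinct(List<String> outputFilenames) {
        Set<String> distinct = new HashSet<>(outputFilenames);
        checkState(distinct.size() == outputFilenames.size(),
            "Only generated %s distinct file names for %s files.",
            distinct.size(), outputFilenames.size());
    }
}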

Aggregations

FileResult (org.apache.beam.sdk.io.FileBasedSink.FileResult): 5
ResourceId (org.apache.beam.sdk.io.fs.ResourceId): 4
ArrayList (java.util.ArrayList): 3
Test (org.junit.Test): 3
ImmutableList (com.google.common.collect.ImmutableList): 1
File (java.io.File): 1
List (java.util.List): 1
Pipeline (org.apache.beam.sdk.Pipeline): 1
Coder (org.apache.beam.sdk.coders.Coder): 1
KvCoder (org.apache.beam.sdk.coders.KvCoder): 1
VoidCoder (org.apache.beam.sdk.coders.VoidCoder): 1
FileResultCoder (org.apache.beam.sdk.io.FileBasedSink.FileResultCoder): 1
Context (org.apache.beam.sdk.io.FileBasedSink.FilenamePolicy.Context): 1
Writer (org.apache.beam.sdk.io.FileBasedSink.Writer): 1
DoFn (org.apache.beam.sdk.transforms.DoFn): 1
BoundedWindow (org.apache.beam.sdk.transforms.windowing.BoundedWindow): 1
GlobalWindows (org.apache.beam.sdk.transforms.windowing.GlobalWindows): 1
KV (org.apache.beam.sdk.values.KV): 1
PCollectionView (org.apache.beam.sdk.values.PCollectionView): 1