
Example 21 with FileSourceSplit

Use of org.apache.flink.connector.file.src.FileSourceSplit in project flink by apache.

From class ContinuousFileSplitEnumeratorTest, method testRequestingReaderUnavailableWhenSplitDiscovered.

@Test
public void testRequestingReaderUnavailableWhenSplitDiscovered() throws Exception {
    final TestingFileEnumerator fileEnumerator = new TestingFileEnumerator();
    final TestingSplitEnumeratorContext<FileSourceSplit> context = new TestingSplitEnumeratorContext<>(4);
    final ContinuousFileSplitEnumerator enumerator = createEnumerator(fileEnumerator, context);
    // register one reader, and let it request a split
    context.registerReader(2, "localhost");
    enumerator.addReader(2);
    enumerator.handleSplitRequest(2, "localhost");
    // remove the reader (like in a failure)
    context.registeredReaders().remove(2);
    // make one split available and trigger the periodic discovery
    final FileSourceSplit split = createRandomSplit();
    fileEnumerator.addSplits(split);
    context.triggerAllActions();
    assertFalse(context.getSplitAssignments().containsKey(2));
    assertThat(enumerator.snapshotState(1L).getSplits(), contains(split));
}
Also used: TestingFileEnumerator (org.apache.flink.connector.file.src.testutils.TestingFileEnumerator), FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit), TestingSplitEnumeratorContext (org.apache.flink.connector.testutils.source.reader.TestingSplitEnumeratorContext), Test (org.junit.Test)

Example 22 with FileSourceSplit

Use of org.apache.flink.connector.file.src.FileSourceSplit in project flink by apache.

From class StaticFileSplitEnumeratorTest, method testSplitRequestForNonRegisteredReader.

@Test
public void testSplitRequestForNonRegisteredReader() throws Exception {
    final TestingSplitEnumeratorContext<FileSourceSplit> context = new TestingSplitEnumeratorContext<>(4);
    final FileSourceSplit split = createRandomSplit();
    final StaticFileSplitEnumerator enumerator = createEnumerator(context, split);
    enumerator.handleSplitRequest(3, "somehost");
    assertFalse(context.getSplitAssignments().containsKey(3));
    assertThat(enumerator.snapshotState(1L).getSplits(), contains(split));
}
Also used: FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit), TestingSplitEnumeratorContext (org.apache.flink.connector.testutils.source.reader.TestingSplitEnumeratorContext), Test (org.junit.Test)
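The behavior under test in Example 22 is that a split request from a subtask that never registered is simply dropped: no assignment is made, and the split stays in the enumerator's checkpointed state. A miniature, Flink-free sketch of that guard (the map-based bookkeeping here is a hypothetical stand-in for the enumerator's internal state, not Flink's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class GuardedAssignment {
    public static void main(String[] args) {
        // Hypothetical miniature of the enumerator's bookkeeping:
        // registered readers and the assignments handed out so far.
        Map<Integer, String> registeredReaders = new HashMap<>();
        Map<Integer, String> assignments = new HashMap<>();
        String pendingSplit = "split-1";

        int requestingSubtask = 3; // never registered, like subtask 3 in the test
        if (registeredReaders.containsKey(requestingSubtask)) {
            assignments.put(requestingSubtask, pendingSplit);
        }
        // The request is dropped: no assignment is made and the split stays pending.
        System.out.println(assignments.containsKey(requestingSubtask)); // prints false
    }
}
```

This mirrors the two assertions in the test: `getSplitAssignments()` has no entry for the unregistered subtask, while `snapshotState(...)` still contains the split.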

Example 23 with FileSourceSplit

Use of org.apache.flink.connector.file.src.FileSourceSplit in project flink by apache.

From class AdapterTestBase, method testClosesStreamIfReaderCreationFails.

@Test
public void testClosesStreamIfReaderCreationFails() throws Exception {
    // setup
    final Path testPath = new Path("testFs:///testpath-1");
    final CloseTestingInputStream in = new CloseTestingInputStream();
    final TestingFileSystem testFs = TestingFileSystem.createForFileStatus("testFs", TestingFileSystem.TestFileStatus.forFileWithStream(testPath, 1024, in));
    testFs.register();
    // test
    final BulkFormat<Integer, FileSourceSplit> adapter = wrapWithAdapter(createFormatFailingInInstantiation());
    try {
        adapter.createReader(new Configuration(), new FileSourceSplit("id", testPath, 0, 1024, 0, 1024));
    } catch (IOException ignored) {
    }
    // assertions
    assertTrue(in.closed);
    // cleanup
    testFs.unregister();
}
Also used: Path (org.apache.flink.core.fs.Path), FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit), Configuration (org.apache.flink.configuration.Configuration), TestingFileSystem (org.apache.flink.connector.file.src.testutils.TestingFileSystem), IOException (java.io.IOException), Test (org.junit.Test)
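The contract Example 23 verifies is a general resource-safety pattern: if reader creation fails partway through, the input stream that was handed to the factory must be closed rather than leaked. A self-contained sketch of the same pattern without Flink dependencies (`TrackingInputStream` and `createReader` are hypothetical stand-ins for `CloseTestingInputStream` and the adapter's `createReader`):

```java
import java.io.IOException;
import java.io.InputStream;

public class CloseOnFailureDemo {
    // Hypothetical stand-in for CloseTestingInputStream: records whether close() was called.
    static class TrackingInputStream extends InputStream {
        boolean closed = false;
        @Override public int read() { return -1; }
        @Override public void close() { closed = true; }
    }

    // Hypothetical factory that fails during reader instantiation;
    // the contract under test is that it closes the stream before propagating.
    static void createReader(InputStream in) throws IOException {
        try {
            throw new IOException("simulated instantiation failure");
        } catch (IOException e) {
            in.close(); // close on failure, then rethrow
            throw e;
        }
    }

    public static void main(String[] args) {
        TrackingInputStream in = new TrackingInputStream();
        try {
            createReader(in);
        } catch (IOException ignored) {
            // the test only cares about the side effect, not the exception
        }
        System.out.println(in.closed); // prints true
    }
}
```

The empty `catch (IOException ignored)` block in the original test follows the same shape: the exception is expected, and only the `in.closed` side effect is asserted.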

Example 24 with FileSourceSplit

Use of org.apache.flink.connector.file.src.FileSourceSplit in project flink by apache.

From class AdapterTestBase, method testReading.

private void testReading(FormatT format, int numSplits, int... recoverAfterRecords) throws IOException {
    // add the end boundary for recovery
    final int[] boundaries = Arrays.copyOf(recoverAfterRecords, recoverAfterRecords.length + 1);
    boundaries[boundaries.length - 1] = NUM_NUMBERS;
    // set a fetch size so that we get three records per fetch
    final Configuration config = new Configuration();
    config.set(StreamFormat.FETCH_IO_SIZE, new MemorySize(10));
    final BulkFormat<Integer, FileSourceSplit> adapter = wrapWithAdapter(format);
    final Queue<FileSourceSplit> splits = buildSplits(numSplits);
    final List<Integer> result = new ArrayList<>();
    FileSourceSplit currentSplit = null;
    BulkFormat.Reader<Integer> currentReader = null;
    for (int nextRecordToRecover : boundaries) {
        final FileSourceSplit toRecoverFrom = readNumbers(currentReader, currentSplit, adapter, splits, config, result, nextRecordToRecover - result.size());
        currentSplit = toRecoverFrom;
        currentReader = toRecoverFrom == null ? null : adapter.restoreReader(config, toRecoverFrom);
    }
    verifyIntListResult(result);
}
Also used: MemorySize (org.apache.flink.configuration.MemorySize), Configuration (org.apache.flink.configuration.Configuration), FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit), ArrayList (java.util.ArrayList), BulkFormat (org.apache.flink.connector.file.src.reader.BulkFormat)
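The first two lines of testReading use a small but reusable trick: `Arrays.copyOf` grows the recovery-point array by one slot, and the total record count is written into that new last slot so the final read always runs to the end. A standalone sketch (the value 100 for the total is assumed purely for illustration; the real constant `NUM_NUMBERS` lives in AdapterTestBase):

```java
import java.util.Arrays;

public class RecoveryBoundaries {
    public static void main(String[] args) {
        final int NUM_NUMBERS = 100; // assumed total; the real constant is defined elsewhere
        int[] recoverAfterRecords = {17, 42}; // simulate recovery after records 17 and 42

        // Copy and extend by one slot; copyOf zero-fills the new slot,
        // which is then overwritten with the end boundary.
        int[] boundaries = Arrays.copyOf(recoverAfterRecords, recoverAfterRecords.length + 1);
        boundaries[boundaries.length - 1] = NUM_NUMBERS;

        System.out.println(Arrays.toString(boundaries)); // prints [17, 42, 100]
    }
}
```

Each boundary then drives one loop iteration in testReading: read up to the boundary, snapshot the current split via `readNumbers`, and restore a reader from it with `adapter.restoreReader(...)` before continuing.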

Example 25 with FileSourceSplit

Use of org.apache.flink.connector.file.src.FileSourceSplit in project flink by apache.

From class AdapterTestBase, method testClosesStreamIfReaderRestoreFails.

@Test
public void testClosesStreamIfReaderRestoreFails() throws Exception {
    // setup
    final Path testPath = new Path("testFs:///testpath-1");
    final CloseTestingInputStream in = new CloseTestingInputStream();
    final TestingFileSystem testFs = TestingFileSystem.createForFileStatus("testFs", TestingFileSystem.TestFileStatus.forFileWithStream(testPath, 1024, in));
    testFs.register();
    // test
    final BulkFormat<Integer, FileSourceSplit> adapter = wrapWithAdapter(createFormatFailingInInstantiation());
    final FileSourceSplit split = new FileSourceSplit("id", testPath, 0, 1024, 0, 1024, new String[0], new CheckpointedPosition(0L, 5L));
    try {
        adapter.restoreReader(new Configuration(), split);
    } catch (IOException ignored) {
    }
    // assertions
    assertTrue(in.closed);
    // cleanup
    testFs.unregister();
}
Also used: Path (org.apache.flink.core.fs.Path), FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit), Configuration (org.apache.flink.configuration.Configuration), TestingFileSystem (org.apache.flink.connector.file.src.testutils.TestingFileSystem), CheckpointedPosition (org.apache.flink.connector.file.src.util.CheckpointedPosition), IOException (java.io.IOException), Test (org.junit.Test)

Aggregations

FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit): 50
Test (org.junit.Test): 32
Path (org.apache.flink.core.fs.Path): 20
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 11
BulkFormat (org.apache.flink.connector.file.src.reader.BulkFormat): 11
Configuration (org.apache.flink.configuration.Configuration): 10
ArrayList (java.util.ArrayList): 9
TestingSplitEnumeratorContext (org.apache.flink.connector.testutils.source.reader.TestingSplitEnumeratorContext): 7
IOException (java.io.IOException): 6
RowData (org.apache.flink.table.data.RowData): 6
LogicalType (org.apache.flink.table.types.logical.LogicalType): 6
LinkedHashMap (java.util.LinkedHashMap): 5
TestingFileSystem (org.apache.flink.connector.file.src.testutils.TestingFileSystem): 5
FileStatus (org.apache.flink.core.fs.FileStatus): 5
AtomicLong (java.util.concurrent.atomic.AtomicLong): 4
BigIntType (org.apache.flink.table.types.logical.BigIntType): 4
DoubleType (org.apache.flink.table.types.logical.DoubleType): 4
IntType (org.apache.flink.table.types.logical.IntType): 4
SmallIntType (org.apache.flink.table.types.logical.SmallIntType): 4
TinyIntType (org.apache.flink.table.types.logical.TinyIntType): 4