
Example 6 with ReadResult

Usage of com.datastax.oss.dsbulk.executor.api.result.ReadResult in the dsbulk project by DataStax.

Source: the toPage method of the ReadResultSubscription class.

@Override
Page toPage(AsyncResultSet rs, ExecutionContext local) {
    Iterator<Row> rows = rs.currentPage().iterator();
    // Lazily wrap each driver Row in a ReadResult as the consumer pulls it.
    Iterator<ReadResult> results = new AbstractIterator<ReadResult>() {

        @Override
        protected ReadResult computeNext() {
            if (rows.hasNext()) {
                Row row = rows.next();
                if (listener != null) {
                    listener.onRowReceived(row, local);
                }
                return new DefaultReadResult(statement, rs.getExecutionInfo(), row);
            }
            return endOfData();
        }
    };
    // If more pages remain, expose a callback that fetches the next one on demand.
    return new Page(results, rs.hasMorePages() ? rs::fetchNextPage : null);
}
Also used: ReadResult (com.datastax.oss.dsbulk.executor.api.result.ReadResult), DefaultReadResult (com.datastax.oss.dsbulk.executor.api.result.DefaultReadResult), Row (com.datastax.oss.driver.api.core.cql.Row), AbstractIterator (com.datastax.oss.driver.shaded.guava.common.collect.AbstractIterator)
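The toPage method above relies on a lazy-adapter pattern: the source iterator of rows is wrapped so that each row is converted to a result only when the consumer asks for it. The following is a minimal, self-contained sketch of that pattern in plain Java (LazyMappingIterator is a hypothetical name, not part of dsbulk or the driver; Guava's AbstractIterator is replaced by a plain Iterator implementation):

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Function;

// Wraps a source iterator and maps each element on demand, so "rows" are
// converted to "results" one at a time as the consumer pulls them.
public class LazyMappingIterator<S, T> implements Iterator<T> {
    private final Iterator<S> source;
    private final Function<S, T> mapper;

    public LazyMappingIterator(Iterator<S> source, Function<S, T> mapper) {
        this.source = source;
        this.mapper = mapper;
    }

    @Override
    public boolean hasNext() {
        return source.hasNext();
    }

    @Override
    public T next() {
        if (!source.hasNext()) {
            throw new NoSuchElementException();
        }
        return mapper.apply(source.next()); // conversion happens lazily, here
    }

    public static void main(String[] args) {
        Iterator<String> rows = List.of("row1", "row2").iterator();
        Iterator<String> results = new LazyMappingIterator<>(rows, r -> "result-" + r);
        while (results.hasNext()) {
            System.out.println(results.next()); // prints result-row1, then result-row2
        }
    }
}
```

The same idea lets toPage attach the listener callback at pull time, so side effects happen in step with consumption rather than all at once when the page arrives.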

Example 7 with ReadResult

Usage of com.datastax.oss.dsbulk.executor.api.result.ReadResult in the dsbulk project by DataStax.

Source: the should_not_stop_when_sample_size_is_not_met test method of the LogManagerTest class.

@Test
void should_not_stop_when_sample_size_is_not_met() throws Exception {
    Path outputDir = Files.createTempDirectory("test");
    LogManager logManager = new LogManager(session, outputDir, ErrorThreshold.forRatio(0.01f, 100), ErrorThreshold.forAbsoluteValue(0), true, statementFormatter, EXTENDED, rowFormatter);
    logManager.init();
    Flux<ReadResult> stmts = Flux.just(failedReadResult1, failedReadResult2, failedReadResult3);
    stmts.transform(logManager.newTotalItemsCounter()).transform(logManager.newFailedReadsHandler()).blockLast();
    logManager.close();
    Path errors = logManager.getOperationDirectory().resolve("unload-errors.log");
    assertThat(errors.toFile()).exists();
    assertThat(FileUtils.listAllFilesInDirectory(logManager.getOperationDirectory())).containsOnly(errors);
    List<String> lines = Files.readAllLines(errors, UTF_8);
    String content = String.join("\n", lines);
    assertThat(content)
        .doesNotContain("Resource: ")
        .doesNotContain("Source: ")
        .contains("SELECT 1")
        .containsOnlyOnce("com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException: Statement execution failed: SELECT 1 (error 1)")
        .contains("SELECT 2")
        .containsOnlyOnce("com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException: Statement execution failed: SELECT 2 (error 2)");
}
Also used: Path (java.nio.file.Path), ReadResult (com.datastax.oss.dsbulk.executor.api.result.ReadResult), DefaultReadResult (com.datastax.oss.dsbulk.executor.api.result.DefaultReadResult), Test (org.junit.jupiter.api.Test)
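The test above passes even though all three reads fail, because ErrorThreshold.forRatio(0.01f, 100) only evaluates the error ratio once a minimum sample size is reached. A minimal sketch of that ratio-threshold logic in plain Java (RatioThresholdSketch and exceeded are hypothetical names illustrating the assumed behavior, not dsbulk API):

```java
// Ratio-based error threshold: the check is inert until enough items have
// been seen, so small samples never trip it regardless of their error rate.
public class RatioThresholdSketch {
    static boolean exceeded(long errors, long total, float maxRatio, long minSample) {
        if (total < minSample) {
            return false; // not enough data to judge the ratio yet
        }
        return (float) errors / total > maxRatio;
    }

    public static void main(String[] args) {
        // 3 failed reads out of 3 total: 100% errors, but sample < 100
        System.out.println(exceeded(3, 3, 0.01f, 100));   // prints false
        // same error count once the minimum sample size is met: 3% > 1%
        System.out.println(exceeded(3, 100, 0.01f, 100)); // prints true
    }
}
```

This explains why blockLast() completes normally in the test: with only three results flowing through the handler, the 1% ratio threshold can never fire.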

Example 8 with ReadResult

Usage of com.datastax.oss.dsbulk.executor.api.result.ReadResult in the dsbulk project by DataStax.

Source: the should_stop_when_max_read_errors_reached test method of the LogManagerTest class.

@Test
void should_stop_when_max_read_errors_reached() throws Exception {
    Path outputDir = Files.createTempDirectory("test");
    LogManager logManager = new LogManager(session, outputDir, ErrorThreshold.forAbsoluteValue(2), ErrorThreshold.forAbsoluteValue(0), true, statementFormatter, EXTENDED, rowFormatter);
    logManager.init();
    Flux<ReadResult> stmts = Flux.just(failedReadResult1, failedReadResult2, failedReadResult3);
    try {
        stmts.transform(logManager.newFailedReadsHandler()).blockLast();
        fail("Expecting TooManyErrorsException to be thrown");
    } catch (TooManyErrorsException e) {
        assertThat(e).hasMessage("Too many errors, the maximum allowed is 2.");
        assertThat(((AbsoluteErrorThreshold) e.getThreshold()).getMaxErrors()).isEqualTo(2);
    }
    logManager.close();
    Path errors = logManager.getOperationDirectory().resolve("unload-errors.log");
    assertThat(errors.toFile()).exists();
    assertThat(FileUtils.listAllFilesInDirectory(logManager.getOperationDirectory())).containsOnly(errors);
    List<String> lines = Files.readAllLines(errors, UTF_8);
    String content = String.join("\n", lines);
    assertThat(content)
        .doesNotContain("Resource: ")
        .doesNotContain("Position: ")
        .doesNotContain("Source: ")
        .contains("SELECT 1")
        .containsOnlyOnce("com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException: Statement execution failed: SELECT 1 (error 1)")
        .contains("SELECT 2")
        .containsOnlyOnce("com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException: Statement execution failed: SELECT 2 (error 2)");
}
Also used: Path (java.nio.file.Path), TooManyErrorsException (com.datastax.oss.dsbulk.workflow.api.error.TooManyErrorsException), ReadResult (com.datastax.oss.dsbulk.executor.api.result.ReadResult), DefaultReadResult (com.datastax.oss.dsbulk.executor.api.result.DefaultReadResult), Test (org.junit.jupiter.api.Test)
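In contrast to the ratio threshold, ErrorThreshold.forAbsoluteValue(2) fires as soon as a fixed error count is exceeded, which is why the third failed read aborts the flow with TooManyErrorsException. A minimal sketch of that counter-and-throw behavior in plain Java (MaxErrorsSketch is a hypothetical name; IllegalStateException stands in for dsbulk's TooManyErrorsException):

```java
// Absolute error threshold: count errors and abort once the count
// exceeds a fixed maximum, mirroring the exception message in the test.
public class MaxErrorsSketch {
    static final int MAX_ERRORS = 2;
    static int errors = 0;

    static void recordError() {
        errors++;
        if (errors > MAX_ERRORS) {
            throw new IllegalStateException(
                "Too many errors, the maximum allowed is " + MAX_ERRORS + ".");
        }
    }

    public static void main(String[] args) {
        try {
            recordError(); // error 1: under the limit
            recordError(); // error 2: at the limit
            recordError(); // error 3: exceeds the limit and throws
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints "Too many errors, the maximum allowed is 2."
        }
    }
}
```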

Example 9 with ReadResult

Usage of com.datastax.oss.dsbulk.executor.api.result.ReadResult in the dsbulk project by DataStax.

Source: the setUp fixture method of the LogManagerTest class.

@BeforeEach
void setUp() throws Exception {
    session = mockSession();
    resource1 = new URI("file:///file1.csv");
    resource2 = new URI("file:///file2.csv");
    resource3 = new URI("file:///file3.csv");
    csvRecord1 = new DefaultErrorRecord(source1, resource1, 1, new RuntimeException("error 1"));
    csvRecord2 = new DefaultErrorRecord(source2, resource2, 2, new RuntimeException("error 2"));
    csvRecord3 = new DefaultErrorRecord(source3, resource3, 3, new RuntimeException("error 3"));
    unmappableStmt1 = new UnmappableStatement(csvRecord1, new RuntimeException("error 1"));
    unmappableStmt2 = new UnmappableStatement(csvRecord2, new RuntimeException("error 2"));
    unmappableStmt3 = new UnmappableStatement(csvRecord3, new RuntimeException("error 3"));
    failedWriteResult1 = new DefaultWriteResult(new BulkExecutionException(new DriverTimeoutException("error 1"), new MappedBoundStatement(csvRecord1, mockBoundStatement("INSERT 1"))));
    failedWriteResult2 = new DefaultWriteResult(new BulkExecutionException(new DriverTimeoutException("error 2"), new MappedBoundStatement(csvRecord2, mockBoundStatement("INSERT 2"))));
    failedWriteResult3 = new DefaultWriteResult(new BulkExecutionException(new DriverTimeoutException("error 3"), new MappedBoundStatement(csvRecord3, mockBoundStatement("INSERT 3"))));
    failedReadResult1 = new DefaultReadResult(new BulkExecutionException(new DriverTimeoutException("error 1"), mockBoundStatement("SELECT 1")));
    failedReadResult2 = new DefaultReadResult(new BulkExecutionException(new DriverTimeoutException("error 2"), mockBoundStatement("SELECT 2")));
    failedReadResult3 = new DefaultReadResult(new BulkExecutionException(new DriverTimeoutException("error 3"), mockBoundStatement("SELECT 3")));
    BatchStatement batch = BatchStatement.newInstance(
        DefaultBatchType.UNLOGGED,
        new MappedBoundStatement(csvRecord1, mockBoundStatement("INSERT 1", "foo", 42)),
        new MappedBoundStatement(csvRecord2, mockBoundStatement("INSERT 2", "bar", 43)),
        new MappedBoundStatement(csvRecord3, mockBoundStatement("INSERT 3", "qix", 44)));
    batchWriteResult = new DefaultWriteResult(new BulkExecutionException(new DriverTimeoutException("error batch"), batch));
    ExecutionInfo info = mock(ExecutionInfo.class);
    row1 = mockRow(1);
    Row row2 = mockRow(2);
    Row row3 = mockRow(3);
    Statement<?> stmt1 = SimpleStatement.newInstance("SELECT 1");
    Statement<?> stmt2 = SimpleStatement.newInstance("SELECT 2");
    Statement<?> stmt3 = SimpleStatement.newInstance("SELECT 3");
    successfulReadResult1 = new DefaultReadResult(stmt1, info, row1);
    ReadResult successfulReadResult2 = new DefaultReadResult(stmt2, info, row2);
    ReadResult successfulReadResult3 = new DefaultReadResult(stmt3, info, row3);
    rowRecord1 = new DefaultErrorRecord(successfulReadResult1, tableResource, 1, new RuntimeException("error 1"));
    rowRecord2 = new DefaultErrorRecord(successfulReadResult2, tableResource, 2, new RuntimeException("error 2"));
    rowRecord3 = new DefaultErrorRecord(successfulReadResult3, tableResource, 3, new RuntimeException("error 3"));
}
Also used: BulkExecutionException (com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException), DefaultWriteResult (com.datastax.oss.dsbulk.executor.api.result.DefaultWriteResult), DriverTimeoutException (com.datastax.oss.driver.api.core.DriverTimeoutException), ExecutionInfo (com.datastax.oss.driver.api.core.cql.ExecutionInfo), ReadResult (com.datastax.oss.dsbulk.executor.api.result.ReadResult), DefaultReadResult (com.datastax.oss.dsbulk.executor.api.result.DefaultReadResult), MappedBoundStatement (com.datastax.oss.dsbulk.workflow.commons.statement.MappedBoundStatement), URI (java.net.URI), UnmappableStatement (com.datastax.oss.dsbulk.workflow.commons.statement.UnmappableStatement), DefaultErrorRecord (com.datastax.oss.dsbulk.connectors.api.DefaultErrorRecord), BatchStatement (com.datastax.oss.driver.api.core.cql.BatchStatement), Row (com.datastax.oss.driver.api.core.cql.Row), DriverUtils.mockRow (com.datastax.oss.dsbulk.tests.driver.DriverUtils.mockRow), BeforeEach (org.junit.jupiter.api.BeforeEach)
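The setUp fixture above builds two kinds of ReadResult: failures wrapping a BulkExecutionException, and successes wrapping a Row with its ExecutionInfo. The underlying shape is a two-state result type. A minimal sketch of that shape in plain Java (ResultSketch and its members are hypothetical names for illustration, not dsbulk API):

```java
// A result is either a success carrying a value (the row) or a failure
// carrying the error; exactly one of the two fields is non-null.
public class ResultSketch {
    final String row;       // non-null only on success
    final Exception error;  // non-null only on failure

    private ResultSketch(String row, Exception error) {
        this.row = row;
        this.error = error;
    }

    static ResultSketch success(String row) { return new ResultSketch(row, null); }
    static ResultSketch failure(Exception e) { return new ResultSketch(null, e); }

    boolean isSuccess() { return error == null; }

    public static void main(String[] args) {
        ResultSketch ok = success("row 1");
        ResultSketch bad = failure(new RuntimeException("error 1"));
        System.out.println(ok.isSuccess());          // prints true
        System.out.println(bad.error.getMessage()); // prints error 1
    }
}
```

Modeling both outcomes as one type is what lets the tests above push failed and successful results through the same Flux pipeline and route them with handlers such as newFailedReadsHandler.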

Aggregations

ReadResult (com.datastax.oss.dsbulk.executor.api.result.ReadResult): 9 usages
DefaultReadResult (com.datastax.oss.dsbulk.executor.api.result.DefaultReadResult): 7 usages
Path (java.nio.file.Path): 5 usages
Test (org.junit.jupiter.api.Test): 5 usages
Row (com.datastax.oss.driver.api.core.cql.Row): 4 usages
BulkExecutionException (com.datastax.oss.dsbulk.executor.api.exception.BulkExecutionException): 4 usages
DriverTimeoutException (com.datastax.oss.driver.api.core.DriverTimeoutException): 3 usages
TooManyErrorsException (com.datastax.oss.dsbulk.workflow.api.error.TooManyErrorsException): 3 usages
BeforeEach (org.junit.jupiter.api.BeforeEach): 3 usages
DriverExecutionException (com.datastax.oss.driver.api.core.DriverExecutionException): 2 usages
BatchStatement (com.datastax.oss.driver.api.core.cql.BatchStatement): 2 usages
ExecutionInfo (com.datastax.oss.driver.api.core.cql.ExecutionInfo): 2 usages
SimpleStatement (com.datastax.oss.driver.api.core.cql.SimpleStatement): 2 usages
AbstractIterator (com.datastax.oss.driver.shaded.guava.common.collect.AbstractIterator): 2 usages
DefaultErrorRecord (com.datastax.oss.dsbulk.connectors.api.DefaultErrorRecord): 2 usages
DefaultWriteResult (com.datastax.oss.dsbulk.executor.api.result.DefaultWriteResult): 2 usages
WriteResult (com.datastax.oss.dsbulk.executor.api.result.WriteResult): 2 usages
List (java.util.List): 2 usages
Flux (reactor.core.publisher.Flux): 2 usages
AllNodesFailedException (com.datastax.oss.driver.api.core.AllNodesFailedException): 1 usage