
Example 11 with JobClient

Use of org.apache.flink.core.execution.JobClient in project flink by apache.

From class AdaptiveSchedulerITCase, method testStopWithSavepointFailOnCheckpoint.

@Test
public void testStopWithSavepointFailOnCheckpoint() throws Exception {
    StreamExecutionEnvironment env = getEnvWithSource(StopWithSavepointTestBehavior.FAIL_ON_CHECKPOINT);
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 0L));
    DummySource.resetForParallelism(PARALLELISM);
    JobClient client = env.executeAsync();
    DummySource.awaitRunning();
    try {
        client.stopWithSavepoint(false, tempFolder.newFolder("savepoint").getAbsolutePath(), SavepointFormatType.CANONICAL).get();
        fail("Expect exception");
    } catch (ExecutionException e) {
        assertThat(e, containsCause(FlinkException.class));
    }
    // expect the job to reach RUNNING again (possibly after a restart)
    CommonTestUtils.waitUntilCondition(() -> client.getJobStatus().get() == JobStatus.RUNNING, Deadline.fromNow(Duration.of(1, ChronoUnit.MINUTES)));
}
Also used: StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) ExecutionException(java.util.concurrent.ExecutionException) JobClient(org.apache.flink.core.execution.JobClient) Test(org.junit.Test)
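
For orientation, here is a minimal sketch of the JobClient calls these tests rely on, outside any test harness. The pipeline and the savepoint directory are illustrative, imports mirror the ones listed above, and failures surface as ExecutionException from get():

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements(1, 2, 3).addSink(new DiscardingSink<>());
// executeAsync() submits the job and returns immediately with a JobClient handle.
JobClient client = env.executeAsync();
// All JobClient operations are asynchronous and return CompletableFutures.
JobStatus status = client.getJobStatus().get();
// stopWithSavepoint(advanceToEndOfEventTime, targetDirectory, formatType) completes
// with the path of the written savepoint once the job has stopped.
String savepointPath = client.stopWithSavepoint(false, "/tmp/savepoints", SavepointFormatType.CANONICAL).get();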

Example 12 with JobClient

Use of org.apache.flink.core.execution.JobClient in project flink by apache.

From class AdaptiveSchedulerITCase, method testStopWithSavepointFailOnFirstSavepointSucceedOnSecond.

@Test
public void testStopWithSavepointFailOnFirstSavepointSucceedOnSecond() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, 0L));
    env.setParallelism(PARALLELISM);
    env.addSource(new DummySource(StopWithSavepointTestBehavior.FAIL_ON_FIRST_CHECKPOINT_ONLY)).addSink(new DiscardingSink<>());
    DummySource.resetForParallelism(PARALLELISM);
    JobClient client = env.executeAsync();
    DummySource.awaitRunning();
    DummySource.resetForParallelism(PARALLELISM);
    final File savepointDirectory = tempFolder.newFolder("savepoint");
    try {
        client.stopWithSavepoint(false, savepointDirectory.getAbsolutePath(), SavepointFormatType.CANONICAL).get();
        fail("Expect failure of operation");
    } catch (ExecutionException e) {
        assertThat(e, containsCause(FlinkException.class));
    }
    DummySource.awaitRunning();
    // ensure failed savepoint files have been removed from the directory.
    // We execute this in a retry loop with a timeout, because the savepoint deletion happens
    // asynchronously and is not bound to the job lifecycle. See FLINK-22493 for more details.
    CommonTestUtils.waitUntilCondition(() -> isDirectoryEmpty(savepointDirectory), Deadline.fromNow(Duration.ofSeconds(10)));
    // trigger second savepoint
    final String savepoint = client.stopWithSavepoint(false, savepointDirectory.getAbsolutePath(), SavepointFormatType.CANONICAL).get();
    assertThat(savepoint, containsString(savepointDirectory.getAbsolutePath()));
}
Also used: StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) ExecutionException(java.util.concurrent.ExecutionException) JobClient(org.apache.flink.core.execution.JobClient) File(java.io.File) Test(org.junit.Test)
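
The isDirectoryEmpty helper is defined elsewhere in AdaptiveSchedulerITCase and is not part of this excerpt; a plausible sketch of such a check (an assumption, not the actual Flink source) is:

private static boolean isDirectoryEmpty(File directory) {
    // listFiles() returns null when the path is not a readable directory;
    // treat that as "not empty" so the surrounding retry loop keeps polling.
    File[] files = directory.listFiles();
    return files != null && files.length == 0;
}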

Example 13 with JobClient

Use of org.apache.flink.core.execution.JobClient in project flink by apache.

From class ReactiveModeITCase, method testScaleUpOnAdditionalTaskManager.

/**
 * Test that a job scales up when a TaskManager gets added to the cluster.
 */
@Test
public void testScaleUpOnAdditionalTaskManager() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    final DataStream<String> input = env.addSource(new DummySource());
    input.addSink(new DiscardingSink<>());
    final JobClient jobClient = env.executeAsync();
    waitUntilParallelismForVertexReached(miniClusterResource.getRestClusterClient(), jobClient.getJobID(), NUMBER_SLOTS_PER_TASK_MANAGER * INITIAL_NUMBER_TASK_MANAGERS);
    // scale up to 2 TaskManagers:
    miniClusterResource.getMiniCluster().startTaskManager();
    waitUntilParallelismForVertexReached(miniClusterResource.getRestClusterClient(), jobClient.getJobID(), NUMBER_SLOTS_PER_TASK_MANAGER * (INITIAL_NUMBER_TASK_MANAGERS + 1));
}
Also used: StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) JobClient(org.apache.flink.core.execution.JobClient) Test(org.junit.Test)
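
These ReactiveModeITCase tests assume the MiniCluster was started in reactive mode; that configuration sits outside this excerpt. A minimal sketch of the relevant setting, using Flink's configuration API (a hedged reconstruction, not the test's actual setup code):

Configuration configuration = new Configuration();
// Reactive mode makes the adaptive scheduler use every slot the cluster offers,
// so adding a TaskManager scales the running job up automatically.
configuration.set(JobManagerOptions.SCHEDULER_MODE, SchedulerExecutionMode.REACTIVE);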

Example 14 with JobClient

Use of org.apache.flink.core.execution.JobClient in project flink by apache.

From class ReactiveModeITCase, method testScaleLimitByMaxParallelism.

/**
 * Users can set maxParallelism and reactive mode must not run with a parallelism higher than
 * maxParallelism.
 */
@Test
public void testScaleLimitByMaxParallelism() throws Exception {
    // test preparation: ensure we have 2 TaskManagers running
    startAdditionalTaskManager();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // set maxParallelism = 1 and assert that the parallelism never exceeds it
    final DataStream<String> input = env.addSource(new FailOnParallelExecutionSource()).setMaxParallelism(1);
    input.addSink(new DiscardingSink<>());
    final JobClient jobClient = env.executeAsync();
    waitUntilParallelismForVertexReached(miniClusterResource.getRestClusterClient(), jobClient.getJobID(), 1);
}
Also used: StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) JobClient(org.apache.flink.core.execution.JobClient) Test(org.junit.Test)
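
setMaxParallelism(1) above caps a single operator. The same bound can also be applied job-wide; a minimal sketch (the value is illustrative):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Upper bound for every operator in the job; in reactive mode the scheduler
// never runs an operator with a parallelism above this value.
env.setMaxParallelism(1);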

Example 15 with JobClient

Use of org.apache.flink.core.execution.JobClient in project flink by apache.

From class TableEnvironmentImpl, method executeQueryOperation.

private TableResultInternal executeQueryOperation(QueryOperation operation) {
    CollectModifyOperation sinkOperation = new CollectModifyOperation(operation);
    List<Transformation<?>> transformations = translate(Collections.singletonList(sinkOperation));
    final String defaultJobName = "collect";
    Pipeline pipeline = execEnv.createPipeline(transformations, tableConfig.getConfiguration(), defaultJobName);
    try {
        JobClient jobClient = execEnv.executeAsync(pipeline);
        ResultProvider resultProvider = sinkOperation.getSelectResultProvider();
        resultProvider.setJobClient(jobClient);
        return TableResultImpl.builder()
                .jobClient(jobClient)
                .resultKind(ResultKind.SUCCESS_WITH_CONTENT)
                .schema(operation.getResolvedSchema())
                .resultProvider(resultProvider)
                .setPrintStyle(
                        PrintStyle.tableauWithTypeInferredColumnWidths(
                                // sinkOperation.getConsumedDataType() handles legacy types
                                DataTypeUtils.expandCompositeTypeToSchema(
                                        sinkOperation.getConsumedDataType()),
                                resultProvider.getRowDataStringConverter(),
                                PrintStyle.DEFAULT_MAX_COLUMN_WIDTH,
                                false,
                                isStreamingMode))
                .build();
    } catch (Exception e) {
        throw new TableException("Failed to execute sql", e);
    }
}
Also used: Transformation(org.apache.flink.api.dag.Transformation) TableException(org.apache.flink.table.api.TableException) CollectModifyOperation(org.apache.flink.table.operations.CollectModifyOperation) JobClient(org.apache.flink.core.execution.JobClient) FunctionAlreadyExistException(org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException) DatabaseNotExistException(org.apache.flink.table.catalog.exceptions.DatabaseNotExistException) TableAlreadyExistException(org.apache.flink.table.catalog.exceptions.TableAlreadyExistException) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException) CatalogException(org.apache.flink.table.catalog.exceptions.CatalogException) FunctionNotExistException(org.apache.flink.table.catalog.exceptions.FunctionNotExistException) DatabaseNotEmptyException(org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException) DatabaseAlreadyExistException(org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException) SqlParserException(org.apache.flink.table.api.SqlParserException) ValidationException(org.apache.flink.table.api.ValidationException) TableNotExistException(org.apache.flink.table.catalog.exceptions.TableNotExistException) Pipeline(org.apache.flink.api.dag.Pipeline)
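
executeQueryOperation is internal to TableEnvironmentImpl; from user code this path is typically reached through executeSql followed by collect on the returned TableResult. A minimal sketch of that round trip (the query and table name are illustrative):

TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
// A SELECT statement is translated into a QueryOperation, which is executed by
// executeQueryOperation: the job is submitted via executeAsync and the resulting
// JobClient is wired into the returned TableResult.
TableResult result = tableEnv.executeSql("SELECT * FROM some_table");
try (CloseableIterator<Row> rows = result.collect()) {
    while (rows.hasNext()) {
        System.out.println(rows.next());
    }
}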

Aggregations

JobClient (org.apache.flink.core.execution.JobClient) 70
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) 36
Test (org.junit.Test) 32
JobExecutionResult (org.apache.flink.api.common.JobExecutionResult) 16
Configuration (org.apache.flink.configuration.Configuration) 16
JobListener (org.apache.flink.core.execution.JobListener) 14
ArrayList (java.util.ArrayList) 12
List (java.util.List) 10
JobID (org.apache.flink.api.common.JobID) 10
ExecutionException (java.util.concurrent.ExecutionException) 9
AtomicReference (java.util.concurrent.atomic.AtomicReference) 8
DEFAULT_COLLECT_DATA_TIMEOUT (org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT) 8
DEFAULT_JOB_STATUS_CHANGE_TIMEOUT (org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT) 8
IOException (java.io.IOException) 7
DisplayName (org.junit.jupiter.api.DisplayName) 7
TestTemplate (org.junit.jupiter.api.TestTemplate) 7
Iterator (java.util.Iterator) 6
CompletableFuture (java.util.concurrent.CompletableFuture) 6
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment) 6
Preconditions.checkNotNull (org.apache.flink.util.Preconditions.checkNotNull) 6