Example 6 with Pipeline

Use of org.apache.flink.api.dag.Pipeline in the apache/flink project.

Class DefaultPackagedProgramRetrieverTest, method retrieveJobGraph.

private JobGraph retrieveJobGraph(PackagedProgramRetriever retrieverUnderTest, Configuration configuration) throws FlinkException, ProgramInvocationException, MalformedURLException {
    final PackagedProgram packagedProgram = retrieverUnderTest.getPackagedProgram();
    final int defaultParallelism = configuration.getInteger(CoreOptions.DEFAULT_PARALLELISM);
    ConfigUtils.encodeCollectionToConfig(configuration, PipelineOptions.JARS, packagedProgram.getJobJarAndDependencies(), URL::toString);
    ConfigUtils.encodeCollectionToConfig(configuration, PipelineOptions.CLASSPATHS, packagedProgram.getClasspaths(), URL::toString);
    final Pipeline pipeline = PackagedProgramUtils.getPipelineFromProgram(packagedProgram, configuration, defaultParallelism, false);
    return PipelineExecutorUtils.getJobGraph(pipeline, configuration);
}
Also used: URL (java.net.URL), Pipeline (org.apache.flink.api.dag.Pipeline)
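
For context, a hedged sketch (not taken from the test above): PackagedProgramUtils also offers createJobGraph, which wraps the same getPipelineFromProgram step and then translates the resulting Pipeline into a JobGraph in one call. The jar file and parallelism below are placeholder inputs, and the jar is assumed to declare its main class in the manifest.

import java.io.File;

import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.client.program.PackagedProgramUtils;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.JobGraph;

public class JobGraphFromJarSketch {

    static JobGraph jobGraphFromJar(File userJar, int parallelism) throws Exception {
        // Build a PackagedProgram from a user jar; the entry point is read
        // from the jar manifest because no class name is set explicitly.
        final PackagedProgram program = PackagedProgram.newBuilder().setJarFile(userJar).build();
        // One-call variant of the getPipelineFromProgram + getJobGraph steps
        // used in retrieveJobGraph above.
        return PackagedProgramUtils.createJobGraph(
                program, new Configuration(), parallelism, /* suppressOutput */ false);
    }
}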

Example 7 with Pipeline

Use of org.apache.flink.api.dag.Pipeline in the apache/flink project.

Class TableEnvironmentImpl, method executeInternal.

private TableResultInternal executeInternal(List<Transformation<?>> transformations, List<String> sinkIdentifierNames) {
    final String defaultJobName = "insert-into_" + String.join(",", sinkIdentifierNames);
    Pipeline pipeline = execEnv.createPipeline(transformations, tableConfig.getConfiguration(), defaultJobName);
    try {
        JobClient jobClient = execEnv.executeAsync(pipeline);
        final List<Column> columns = new ArrayList<>();
        Long[] affectedRowCounts = new Long[transformations.size()];
        for (int i = 0; i < transformations.size(); ++i) {
            // use sink identifier name as field name
            columns.add(Column.physical(sinkIdentifierNames.get(i), DataTypes.BIGINT()));
            affectedRowCounts[i] = -1L;
        }
        return TableResultImpl.builder()
                .jobClient(jobClient)
                .resultKind(ResultKind.SUCCESS_WITH_CONTENT)
                .schema(ResolvedSchema.of(columns))
                .resultProvider(new InsertResultProvider(affectedRowCounts).setJobClient(jobClient))
                .build();
    } catch (Exception e) {
        throw new TableException("Failed to execute sql", e);
    }
}
Also used: TableException (org.apache.flink.table.api.TableException), Column (org.apache.flink.table.catalog.Column), ArrayList (java.util.ArrayList), JobClient (org.apache.flink.core.execution.JobClient), FunctionAlreadyExistException (org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException), DatabaseNotExistException (org.apache.flink.table.catalog.exceptions.DatabaseNotExistException), TableAlreadyExistException (org.apache.flink.table.catalog.exceptions.TableAlreadyExistException), IOException (java.io.IOException), ExecutionException (java.util.concurrent.ExecutionException), CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException), FunctionNotExistException (org.apache.flink.table.catalog.exceptions.FunctionNotExistException), DatabaseNotEmptyException (org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException), DatabaseAlreadyExistException (org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException), SqlParserException (org.apache.flink.table.api.SqlParserException), ValidationException (org.apache.flink.table.api.ValidationException), TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException), Pipeline (org.apache.flink.api.dag.Pipeline)
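
For orientation, a minimal sketch of the user-facing path that ends up in executeInternal: an INSERT INTO statement submitted through TableEnvironment.executeSql yields a TableResult whose schema carries the per-sink BIGINT affected-row-count column built above. The table names are hypothetical and assumed to be registered already.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class InsertIntoSketch {

    public static void main(String[] args) throws Exception {
        final TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // "source_table" and "sink_table" are placeholders for already registered tables.
        final TableResult result =
                tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table");

        // Block until the submitted pipeline finishes, then print the
        // affected-row-count column assembled in executeInternal.
        result.await();
        result.print();
    }
}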

Example 8 with Pipeline

Use of org.apache.flink.api.dag.Pipeline in the apache/flink project.

Class CliFrontend, method info.

/**
 * Executes the info action.
 *
 * @param args Command line arguments for the info action.
 */
protected void info(String[] args) throws Exception {
    LOG.info("Running 'info' command.");
    final Options commandOptions = CliFrontendParser.getInfoCommandOptions();
    final CommandLine commandLine = CliFrontendParser.parse(commandOptions, args, true);
    final ProgramOptions programOptions = ProgramOptions.create(commandLine);
    // evaluate help flag
    if (commandLine.hasOption(HELP_OPTION.getOpt())) {
        CliFrontendParser.printHelpForInfo();
        return;
    }
    // -------- build the packaged program -------------
    LOG.info("Building program from JAR file");
    PackagedProgram program = null;
    try {
        int parallelism = programOptions.getParallelism();
        if (ExecutionConfig.PARALLELISM_DEFAULT == parallelism) {
            parallelism = defaultParallelism;
        }
        LOG.info("Creating program plan dump");
        final CustomCommandLine activeCommandLine = validateAndGetActiveCommandLine(checkNotNull(commandLine));
        final Configuration effectiveConfiguration = getEffectiveConfiguration(activeCommandLine, commandLine, programOptions, getJobJarAndDependencies(programOptions));
        program = buildProgram(programOptions, effectiveConfiguration);
        Pipeline pipeline = PackagedProgramUtils.getPipelineFromProgram(program, effectiveConfiguration, parallelism, true);
        String jsonPlan = FlinkPipelineTranslationUtil.translateToJSONExecutionPlan(pipeline);
        if (jsonPlan != null) {
            System.out.println("----------------------- Execution Plan -----------------------");
            System.out.println(jsonPlan);
            System.out.println("--------------------------------------------------------------");
        } else {
            System.out.println("JSON plan could not be generated.");
        }
        String description = program.getDescription();
        if (description != null) {
            System.out.println();
            System.out.println(description);
        } else {
            System.out.println();
            System.out.println("No description provided.");
        }
    } finally {
        if (program != null) {
            program.close();
        }
    }
}
Also used: JobManagerOptions (org.apache.flink.configuration.JobManagerOptions), Options (org.apache.commons.cli.Options), RestOptions (org.apache.flink.configuration.RestOptions), CoreOptions (org.apache.flink.configuration.CoreOptions), PackagedProgram (org.apache.flink.client.program.PackagedProgram), CommandLine (org.apache.commons.cli.CommandLine), ApplicationConfiguration (org.apache.flink.client.deployment.application.ApplicationConfiguration), SecurityConfiguration (org.apache.flink.runtime.security.SecurityConfiguration), Configuration (org.apache.flink.configuration.Configuration), GlobalConfiguration (org.apache.flink.configuration.GlobalConfiguration), Pipeline (org.apache.flink.api.dag.Pipeline)
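
As a rough illustration (not part of CliFrontend): the same kind of JSON execution plan that the info action prints can be obtained for a locally built job via StreamExecutionEnvironment#getExecutionPlan. The trivial pipeline below exists only to give the translator something to plan.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExecutionPlanSketch {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A trivial pipeline, defined only so there is something to translate.
        env.fromElements(1, 2, 3).filter(i -> i > 1).print();

        // JSON plan of the job, comparable to the output of
        // FlinkPipelineTranslationUtil.translateToJSONExecutionPlan above.
        System.out.println(env.getExecutionPlan());
    }
}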

Example 9 with Pipeline

Use of org.apache.flink.api.dag.Pipeline in the apache/flink project.

Class DumpCompiledPlanTest, method verifyOptimizedPlan.

private void verifyOptimizedPlan(Class<?> entrypoint, String... args) throws Exception {
    final PackagedProgram program =
            PackagedProgram.newBuilder()
                    .setEntryPointClassName(entrypoint.getName())
                    .setArguments(args)
                    .build();
    final Pipeline pipeline = PackagedProgramUtils.getPipelineFromProgram(program, new Configuration(), 1, true);
    assertTrue(pipeline instanceof Plan);
    final Plan plan = (Plan) pipeline;
    final OptimizedPlan op = compileNoStats(plan);
    final PlanJSONDumpGenerator dumper = new PlanJSONDumpGenerator();
    final String json = dumper.getOptimizerPlanAsJSON(op);
    try (JsonParser parser = new JsonFactory().createParser(json)) {
        // Drain the token stream: any syntax error in the generated JSON plan
        // surfaces as a parse exception and fails the test.
        while (parser.nextToken() != null) {
        }
    }
}
Also used: PackagedProgram (org.apache.flink.client.program.PackagedProgram), PlanJSONDumpGenerator (org.apache.flink.optimizer.plandump.PlanJSONDumpGenerator), Configuration (org.apache.flink.configuration.Configuration), JsonFactory (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonFactory), Plan (org.apache.flink.api.common.Plan), OptimizedPlan (org.apache.flink.optimizer.plan.OptimizedPlan), Pipeline (org.apache.flink.api.dag.Pipeline), JsonParser (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser)
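
The empty while loop in the test is a cheap well-formedness check: draining the token stream forces the whole string through the parser, so any syntax error shows up as an exception. A standalone sketch of the same trick, assuming plain (non-shaded) Jackson on the classpath:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

public final class JsonWellFormedSketch {

    static void assertWellFormedJson(String json) throws Exception {
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            // nextToken() throws on malformed input; reaching null means the
            // entire document was consumed without errors.
            while (parser.nextToken() != null) {
                // intentionally empty: only successful parsing matters
            }
        }
    }
}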

Example 10 with Pipeline

Use of org.apache.flink.api.dag.Pipeline in the apache/flink project.

Class PreviewPlanDumpTest, method verifyPlanDump.

private static void verifyPlanDump(Class<?> entrypoint, String... args) throws Exception {
    final PackagedProgram program =
            PackagedProgram.newBuilder()
                    .setEntryPointClassName(entrypoint.getName())
                    .setArguments(args)
                    .build();
    final Pipeline pipeline = PackagedProgramUtils.getPipelineFromProgram(program, new Configuration(), 1, true);
    assertTrue(pipeline instanceof Plan);
    final Plan plan = (Plan) pipeline;
    final List<DataSinkNode> sinks = Optimizer.createPreOptimizedPlan(plan);
    final PlanJSONDumpGenerator dumper = new PlanJSONDumpGenerator();
    final String json = dumper.getPactPlanAsJSON(sinks);
    try (JsonParser parser = new JsonFactory().createParser(json)) {
        // Drain the token stream: any syntax error in the generated plan dump
        // surfaces as a parse exception and fails the test.
        while (parser.nextToken() != null) {
        }
    }
}
Also used: PackagedProgram (org.apache.flink.client.program.PackagedProgram), PlanJSONDumpGenerator (org.apache.flink.optimizer.plandump.PlanJSONDumpGenerator), Configuration (org.apache.flink.configuration.Configuration), DataSinkNode (org.apache.flink.optimizer.dag.DataSinkNode), JsonFactory (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonFactory), Plan (org.apache.flink.api.common.Plan), Pipeline (org.apache.flink.api.dag.Pipeline), JsonParser (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser)
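
A minimal sketch, assuming the legacy DataSet API is available, of where such a pre-optimization plan dump can come from outside a packaged jar: ExecutionEnvironment#getExecutionPlan returns the same kind of JSON preview for a locally defined batch program.

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;

public class DataSetPlanDumpSketch {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // A trivial batch pipeline, defined only so the plan has a sink.
        env.fromElements("a", "b", "c")
                .filter(s -> !s.isEmpty())
                .output(new DiscardingOutputFormat<>());

        // JSON preview of the not-yet-optimized plan, comparable to
        // dumper.getPactPlanAsJSON(sinks) in the test above.
        System.out.println(env.getExecutionPlan());
    }
}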

Aggregations

Pipeline (org.apache.flink.api.dag.Pipeline): 10
Configuration (org.apache.flink.configuration.Configuration): 7
PackagedProgram (org.apache.flink.client.program.PackagedProgram): 4
URL (java.net.URL): 3
JobClient (org.apache.flink.core.execution.JobClient): 3
Test (org.junit.Test): 3
IOException (java.io.IOException): 2
ExecutionException (java.util.concurrent.ExecutionException): 2
CommandLine (org.apache.commons.cli.CommandLine): 2
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 2
Plan (org.apache.flink.api.common.Plan): 2
PlanJSONDumpGenerator (org.apache.flink.optimizer.plandump.PlanJSONDumpGenerator): 2
JsonFactory (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonFactory): 2
JsonParser (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser): 2
SqlParserException (org.apache.flink.table.api.SqlParserException): 2
TableException (org.apache.flink.table.api.TableException): 2
ValidationException (org.apache.flink.table.api.ValidationException): 2
CatalogException (org.apache.flink.table.catalog.exceptions.CatalogException): 2
DatabaseAlreadyExistException (org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException): 2
DatabaseNotEmptyException (org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException): 2