
Example 1 with PySparkJob

Use of com.google.cloud.dataproc.v1.PySparkJob in the java-dataproc project by googleapis.

From the class Quickstart, method quickstart:

public static void quickstart(String projectId, String region, String clusterName, String jobFilePath) throws IOException, InterruptedException {
    String myEndpoint = String.format("%s-dataproc.googleapis.com:443", region);
    // Configure the settings for the cluster controller client.
    ClusterControllerSettings clusterControllerSettings = ClusterControllerSettings.newBuilder().setEndpoint(myEndpoint).build();
    // Configure the settings for the job controller client.
    JobControllerSettings jobControllerSettings = JobControllerSettings.newBuilder().setEndpoint(myEndpoint).build();
    // Create both clients with try-with-resources so they are closed automatically;
    // alternatively, they can be closed manually with the .close() method.
    try (ClusterControllerClient clusterControllerClient = ClusterControllerClient.create(clusterControllerSettings);
        JobControllerClient jobControllerClient = JobControllerClient.create(jobControllerSettings)) {
        // Configure the settings for our cluster.
        InstanceGroupConfig masterConfig = InstanceGroupConfig.newBuilder().setMachineTypeUri("n1-standard-2").setNumInstances(1).build();
        InstanceGroupConfig workerConfig = InstanceGroupConfig.newBuilder().setMachineTypeUri("n1-standard-2").setNumInstances(2).build();
        ClusterConfig clusterConfig = ClusterConfig.newBuilder().setMasterConfig(masterConfig).setWorkerConfig(workerConfig).build();
        // Create the cluster object with the desired cluster config.
        Cluster cluster = Cluster.newBuilder().setClusterName(clusterName).setConfig(clusterConfig).build();
        // Create the Cloud Dataproc cluster.
        OperationFuture<Cluster, ClusterOperationMetadata> createClusterAsyncRequest = clusterControllerClient.createClusterAsync(projectId, region, cluster);
        Cluster clusterResponse = createClusterAsyncRequest.get();
        System.out.println(String.format("Cluster created successfully: %s", clusterResponse.getClusterName()));
        // Configure the settings for our job.
        JobPlacement jobPlacement = JobPlacement.newBuilder().setClusterName(clusterName).build();
        PySparkJob pySparkJob = PySparkJob.newBuilder().setMainPythonFileUri(jobFilePath).build();
        Job job = Job.newBuilder().setPlacement(jobPlacement).setPysparkJob(pySparkJob).build();
        // Submit an asynchronous request to execute the job.
        OperationFuture<Job, JobMetadata> submitJobAsOperationAsyncRequest = jobControllerClient.submitJobAsOperationAsync(projectId, region, job);
        Job jobResponse = submitJobAsOperationAsyncRequest.get();
        // Print output from Google Cloud Storage.
        Matcher matches = Pattern.compile("gs://(.*?)/(.*)").matcher(jobResponse.getDriverOutputResourceUri());
        if (!matches.matches()) {
            throw new IllegalStateException("Unexpected driver output URI: " + jobResponse.getDriverOutputResourceUri());
        }
        Storage storage = StorageOptions.getDefaultInstance().getService();
        Blob blob = storage.get(matches.group(1), String.format("%s.000000000", matches.group(2)));
        System.out.println(String.format("Job finished successfully: %s", new String(blob.getContent())));
        // Delete the cluster.
        OperationFuture<Empty, ClusterOperationMetadata> deleteClusterAsyncRequest = clusterControllerClient.deleteClusterAsync(projectId, region, clusterName);
        deleteClusterAsyncRequest.get();
        System.out.println(String.format("Cluster \"%s\" successfully deleted.", clusterName));
    } catch (ExecutionException e) {
        System.err.println(String.format("quickstart: %s ", e.getMessage()));
    }
}
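The driver-output handling near the end of the method hinges on splitting a gs:// URI into its bucket and object components with the regex "gs://(.*?)/(.*)". That parsing can be exercised in isolation; below is a minimal, self-contained sketch (the class name DriverOutputUri and the sample URI are illustrative, not from the project):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DriverOutputUri {

    // Split a gs://bucket/object URI into { bucket, object }, mirroring the
    // quickstart's parsing of jobResponse.getDriverOutputResourceUri().
    // The lazy group (.*?) stops at the first "/" after "gs://", so the
    // bucket never contains a slash and the rest becomes the object name.
    static String[] parse(String uri) {
        Matcher m = Pattern.compile("gs://(.*?)/(.*)").matcher(uri);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not a gs:// URI: " + uri);
        }
        return new String[] { m.group(1), m.group(2) };
    }

    public static void main(String[] args) {
        // Hypothetical URI of the shape Dataproc reports for driver output.
        String[] parts = parse("gs://my-bucket/google-cloud-dataproc-metainfo/job-id/driveroutput");
        System.out.println(parts[0]); // the bucket: my-bucket
        System.out.println(parts[1]); // the object prefix after the bucket
    }
}
```

The quickstart then appends ".000000000" to the object name because Dataproc writes driver output in numbered chunks, the first of which has that suffix.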
Also used:
JobControllerSettings (com.google.cloud.dataproc.v1.JobControllerSettings)
JobMetadata (com.google.cloud.dataproc.v1.JobMetadata)
Blob (com.google.cloud.storage.Blob)
ClusterOperationMetadata (com.google.cloud.dataproc.v1.ClusterOperationMetadata)
Matcher (java.util.regex.Matcher)
Cluster (com.google.cloud.dataproc.v1.Cluster)
ClusterControllerSettings (com.google.cloud.dataproc.v1.ClusterControllerSettings)
PySparkJob (com.google.cloud.dataproc.v1.PySparkJob)
Empty (com.google.protobuf.Empty)
Storage (com.google.cloud.storage.Storage)
ClusterControllerClient (com.google.cloud.dataproc.v1.ClusterControllerClient)
JobPlacement (com.google.cloud.dataproc.v1.JobPlacement)
JobControllerClient (com.google.cloud.dataproc.v1.JobControllerClient)
Job (com.google.cloud.dataproc.v1.Job)
ExecutionException (java.util.concurrent.ExecutionException)
InstanceGroupConfig (com.google.cloud.dataproc.v1.InstanceGroupConfig)
ClusterConfig (com.google.cloud.dataproc.v1.ClusterConfig)
