Example 1 with DependencyUploader

Use of org.apache.storm.dependency.DependencyUploader in project storm by apache.

Class StormSubmitter, method submitTopologyAs:

/**
 * Submits a topology to run on the cluster as a particular user. A topology runs forever or until
 * explicitly killed.
 *
 * @param name the name of the topology
 * @param stormConf the topology-specific configuration; see {@link Config}
 * @param topology the processing to execute
 * @param opts options to manipulate the submission of the topology
 * @param progressListener listener to track the progress of the jar upload process
 * @param asUser the user as which this topology should be submitted
 * @throws AlreadyAliveException if a topology with this name is already running
 * @throws InvalidTopologyException if an invalid topology was submitted
 * @throws AuthorizationException if authorization fails
 * @throws IllegalArgumentException thrown if the configs will yield an unschedulable topology; validateConfs validates the confs
 * @throws SubmitterHookException if any Exception occurs during initialization or invocation of a registered {@link ISubmitterHook}
 */
public static void submitTopologyAs(String name, Map stormConf, StormTopology topology, SubmitOptions opts, ProgressListener progressListener, String asUser) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException, IllegalArgumentException {
    if (!Utils.isValidConf(stormConf)) {
        throw new IllegalArgumentException("Storm conf is not valid. Must be json-serializable");
    }
    stormConf = new HashMap(stormConf);
    stormConf.putAll(Utils.readCommandLineOpts());
    Map conf = Utils.readStormConfig();
    conf.putAll(stormConf);
    stormConf.putAll(prepareZookeeperAuthentication(conf));
    validateConfs(conf, topology);
    Map<String, String> passedCreds = new HashMap<>();
    if (opts != null) {
        Credentials tmpCreds = opts.get_creds();
        if (tmpCreds != null) {
            passedCreds = tmpCreds.get_creds();
        }
    }
    Map<String, String> fullCreds = populateCredentials(conf, passedCreds);
    if (!fullCreds.isEmpty()) {
        if (opts == null) {
            opts = new SubmitOptions(TopologyInitialStatus.ACTIVE);
        }
        opts.set_creds(new Credentials(fullCreds));
    }
    try {
        if (localNimbus != null) {
            LOG.info("Submitting topology " + name + " in local mode");
            if (opts != null) {
                localNimbus.submitTopologyWithOpts(name, stormConf, topology, opts);
            } else {
                // this is for backwards compatibility
                localNimbus.submitTopology(name, stormConf, topology);
            }
            LOG.info("Finished submitting topology: " + name);
        } else {
            String serConf = JSONValue.toJSONString(stormConf);
            try (NimbusClient client = NimbusClient.getConfiguredClientAs(conf, asUser)) {
                if (topologyNameExists(name, client)) {
                    throw new RuntimeException("Topology with name `" + name + "` already exists on cluster");
                }
                // Dependency uploading only makes sense for distributed mode
                List<String> jarsBlobKeys = Collections.emptyList();
                List<String> artifactsBlobKeys;
                DependencyUploader uploader = new DependencyUploader();
                try {
                    uploader.init();
                    jarsBlobKeys = uploadDependencyJarsToBlobStore(uploader);
                    artifactsBlobKeys = uploadDependencyArtifactsToBlobStore(uploader);
                } catch (Throwable e) {
                    // remove uploaded jars blobs, not artifacts since they're shared across the cluster
                    uploader.deleteBlobs(jarsBlobKeys);
                    uploader.shutdown();
                    throw e;
                }
                try {
                    setDependencyBlobsToTopology(topology, jarsBlobKeys, artifactsBlobKeys);
                    submitTopologyInDistributeMode(name, topology, opts, progressListener, asUser, conf, serConf, client);
                } catch (AlreadyAliveException | InvalidTopologyException | AuthorizationException e) {
                    // remove uploaded jars blobs, not artifacts since they're shared across the cluster
                    // Note that we don't handle TException to delete jars blobs
                    // because it's safer to leave some blobs instead of topology not running
                    uploader.deleteBlobs(jarsBlobKeys);
                    throw e;
                } finally {
                    uploader.shutdown();
                }
            }
        }
    } catch (TException e) {
        throw new RuntimeException(e);
    }
    invokeSubmitterHook(name, asUser, conf, topology);
}
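The config handling at the top of the method layers three sources: command-line options override the passed stormConf, and the merged stormConf overrides the base config read from storm.yaml. A minimal standalone sketch of that layering, using plain HashMaps rather than the Storm Utils helpers:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfLayering {
    // Later maps win, mirroring submitTopologyAs:
    // base storm.yaml config < user-supplied stormConf < command-line opts.
    static Map<String, Object> merge(Map<String, Object> base,
                                     Map<String, Object> stormConf,
                                     Map<String, Object> cmdLineOpts) {
        Map<String, Object> merged = new HashMap<>(stormConf);
        merged.putAll(cmdLineOpts);              // stormConf.putAll(Utils.readCommandLineOpts())
        Map<String, Object> conf = new HashMap<>(base); // Utils.readStormConfig()
        conf.putAll(merged);                     // conf.putAll(stormConf)
        return conf;
    }

    public static void main(String[] args) {
        Map<String, Object> base = new HashMap<>();
        base.put("topology.workers", 1);
        base.put("nimbus.seeds", "localhost");
        Map<String, Object> user = new HashMap<>();
        user.put("topology.workers", 4);
        Map<String, Object> cli = new HashMap<>();
        cli.put("topology.debug", true);

        Map<String, Object> conf = merge(base, user, cli);
        System.out.println(conf.get("topology.workers")); // 4 (user overrides base)
        System.out.println(conf.get("topology.debug"));   // true (from command line)
        System.out.println(conf.get("nimbus.seeds"));     // localhost (base survives)
    }
}
```

Note that the method also copies stormConf before mutating it (`stormConf = new HashMap(stormConf)`), so the caller's map is never modified.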
Also used: TException (org.apache.thrift.TException), HashMap (java.util.HashMap), Map (java.util.Map), NimbusClient (org.apache.storm.utils.NimbusClient), DependencyUploader (org.apache.storm.dependency.DependencyUploader), IAutoCredentials (org.apache.storm.security.auth.IAutoCredentials)
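The two try blocks around the DependencyUploader implement a keep-on-success, release-on-failure pattern: jar blobs are deleted on any failure, artifact blobs are deliberately left alone (they are shared across the cluster), and shutdown() always runs. A self-contained sketch of the same pattern follows; the Uploader class below is a hypothetical stand-in for the real DependencyUploader, not the Storm class:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UploadCleanup {
    // Hypothetical stand-in for org.apache.storm.dependency.DependencyUploader.
    static class Uploader {
        List<String> uploaded = new ArrayList<>();
        boolean shutDown = false;
        List<String> upload(List<String> jars, boolean fail) {
            if (fail) throw new RuntimeException("upload failed");
            uploaded.addAll(jars);
            return jars;
        }
        void deleteBlobs(List<String> keys) { uploaded.removeAll(keys); }
        void shutdown() { shutDown = true; }
    }

    // Mirrors submitTopologyAs: on upload failure, delete whatever was
    // uploaded and shut the uploader down before rethrowing; on the submit
    // path, shutdown() runs in a finally so it happens on every outcome.
    static Uploader submitWithCleanup(List<String> jars, boolean failUpload) {
        Uploader uploader = new Uploader();
        List<String> jarsBlobKeys = Collections.emptyList();
        try {
            jarsBlobKeys = uploader.upload(jars, failUpload);
        } catch (Throwable e) {
            uploader.deleteBlobs(jarsBlobKeys); // empty or partial key list
            uploader.shutdown();
            throw e; // precise rethrow: only unchecked exceptions possible here
        }
        try {
            // submit the topology here
        } finally {
            uploader.shutdown();
        }
        return uploader;
    }

    public static void main(String[] args) {
        Uploader ok = submitWithCleanup(List.of("jarA", "jarB"), false);
        System.out.println(ok.uploaded);  // blobs kept on success
        System.out.println(ok.shutDown);  // true: shutdown ran in finally

        try {
            submitWithCleanup(List.of("jarA"), true);
        } catch (RuntimeException e) {
            System.out.println("cleaned up after: " + e.getMessage());
        }
    }
}
```

The asymmetry is intentional, as the comments in the original code note: a leaked blob is cheaper than a topology that fails to run, so TException from the submit step does not trigger blob deletion.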

Aggregations

HashMap (java.util.HashMap): 1
Map (java.util.Map): 1
DependencyUploader (org.apache.storm.dependency.DependencyUploader): 1
IAutoCredentials (org.apache.storm.security.auth.IAutoCredentials): 1
NimbusClient (org.apache.storm.utils.NimbusClient): 1
TException (org.apache.thrift.TException): 1