
Example 71 with TException

use of org.apache.thrift.TException in project storm by apache.

From the class Nimbus, method cancelBlobUpload.

@SuppressWarnings("deprecation")
@Override
public void cancelBlobUpload(String session) throws AuthorizationException, TException {
    try {
        AtomicOutputStream os = (AtomicOutputStream) blobUploaders.get(session);
        if (os == null) {
            throw new RuntimeException("Blob for session " + session + " does not exist (or timed out)");
        }
        os.cancel();
        LOG.info("Canceled uploading blob for session {}. Closing session.", session);
        blobUploaders.remove(session);
    } catch (Exception e) {
        LOG.warn("finish blob upload exception.", e);
        if (e instanceof TException) {
            throw (TException) e;
        }
        throw new RuntimeException(e);
    }
}
Also used: TException (org.apache.thrift.TException), AtomicOutputStream (org.apache.storm.blobstore.AtomicOutputStream), AuthorizationException (org.apache.storm.generated.AuthorizationException), NotAliveException (org.apache.storm.generated.NotAliveException), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), AlreadyAliveException (org.apache.storm.generated.AlreadyAliveException), KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException), KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException), InvalidTopologyException (org.apache.storm.generated.InvalidTopologyException), BindException (java.net.BindException)
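
The session handle that cancelBlobUpload releases comes from an earlier beginCreateBlob or beginUpdateBlob call on the same Nimbus interface. As a rough caller-side sketch (not from the project; it assumes a Nimbus.Iface client obtained elsewhere, for example through NimbusClient, plus the ByteBuffer and TException imports), cancelling is what frees the server-side AtomicOutputStream when an upload fails midway:

// Hypothetical helper: write one chunk and finish, cancelling the session on failure.
static void uploadOrCancel(Nimbus.Iface client, String session, ByteBuffer chunk) throws TException {
    try {
        client.uploadBlobChunk(session, chunk);
        client.finishBlobUpload(session);
    } catch (TException e) {
        // Let Nimbus drop the AtomicOutputStream held for this session instead of waiting for a timeout.
        client.cancelBlobUpload(session);
        throw e;
    }
}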

Example 72 with TException

use of org.apache.thrift.TException in project storm by apache.

From the class Nimbus, method uploadBlobChunk.

@SuppressWarnings("deprecation")
@Override
public void uploadBlobChunk(String session, ByteBuffer chunk) throws AuthorizationException, TException {
    try {
        OutputStream os = blobUploaders.get(session);
        if (os == null) {
            throw new RuntimeException("Blob for session " + session + " does not exist (or timed out)");
        }
        byte[] array = chunk.array();
        int remaining = chunk.remaining();
        int offset = chunk.arrayOffset();
        int position = chunk.position();
        os.write(array, offset + position, remaining);
        blobUploaders.put(session, os);
    } catch (Exception e) {
        LOG.warn("upload blob chunk exception.", e);
        if (e instanceof TException) {
            throw (TException) e;
        }
        throw new RuntimeException(e);
    }
}
Also used: TException (org.apache.thrift.TException), FileOutputStream (java.io.FileOutputStream), AtomicOutputStream (org.apache.storm.blobstore.AtomicOutputStream), OutputStream (java.io.OutputStream), DataPoint (org.apache.storm.metric.api.DataPoint), AuthorizationException (org.apache.storm.generated.AuthorizationException), NotAliveException (org.apache.storm.generated.NotAliveException), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), AlreadyAliveException (org.apache.storm.generated.AlreadyAliveException), KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException), KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException), InvalidTopologyException (org.apache.storm.generated.InvalidTopologyException), BindException (java.net.BindException)
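
uploadBlobChunk reads the chunk through the ByteBuffer's backing array, using arrayOffset() + position() as the write offset, so heap buffers produced by ByteBuffer.wrap are consumed directly. A minimal caller-side sketch (a hypothetical helper, not part of the project; it assumes a Nimbus.Iface client and a session obtained from beginCreateBlob) that streams a local file in 64 KB chunks:

// Stream a file to Nimbus one chunk at a time, then close out the upload session.
static void uploadFile(Nimbus.Iface client, String session, File file) throws IOException, TException {
    byte[] buffer = new byte[64 * 1024];
    try (FileInputStream in = new FileInputStream(file)) {
        int read;
        while ((read = in.read(buffer)) > 0) {
            // ByteBuffer.wrap(...) yields a heap buffer, so the array()/arrayOffset()/position()/remaining()
            // arithmetic in the method above picks out exactly the bytes read from the file.
            client.uploadBlobChunk(session, ByteBuffer.wrap(buffer, 0, read));
        }
    }
    client.finishBlobUpload(session);
}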

Example 73 with TException

use of org.apache.thrift.TException in project storm by apache.

From the class Nimbus, method getTopologyInfoWithOpts.

@Override
public TopologyInfo getTopologyInfoWithOpts(String topoId, GetInfoOptions options) throws NotAliveException, AuthorizationException, TException {
    try {
        getTopologyInfoWithOptsCalls.mark();
        CommonTopoInfo common = getCommonTopoInfo(topoId, "getTopologyInfo");
        if (common.base == null) {
            throw new NotAliveException(topoId);
        }
        IStormClusterState state = stormClusterState;
        NumErrorsChoice numErrChoice = OR(options.get_num_err_choice(), NumErrorsChoice.ALL);
        Map<String, List<ErrorInfo>> errors = new HashMap<>();
        for (String component : common.allComponents) {
            switch(numErrChoice) {
                case NONE:
                    errors.put(component, Collections.emptyList());
                    break;
                case ONE:
                    List<ErrorInfo> errList = new ArrayList<>();
                    ErrorInfo info = state.lastError(topoId, component);
                    if (info != null) {
                        errList.add(info);
                    }
                    errors.put(component, errList);
                    break;
                case ALL:
                    errors.put(component, state.errors(topoId, component));
                    break;
                default:
                    LOG.warn("Got invalid NumErrorsChoice '{}'", numErrChoice);
                    errors.put(component, state.errors(topoId, component));
                    break;
            }
        }
        List<ExecutorSummary> summaries = new ArrayList<>();
        if (common.assignment != null) {
            for (Entry<List<Long>, NodeInfo> entry : common.assignment.get_executor_node_port().entrySet()) {
                NodeInfo ni = entry.getValue();
                ExecutorInfo execInfo = toExecInfo(entry.getKey());
                String host = entry.getValue().get_node();
                Map<String, Object> heartbeat = common.beats.get(StatsUtil.convertExecutor(entry.getKey()));
                if (heartbeat == null) {
                    heartbeat = Collections.emptyMap();
                }
                ExecutorSummary summ = new ExecutorSummary(execInfo, common.taskToComponent.get(execInfo.get_task_start()), ni.get_node(), ni.get_port_iterator().next().intValue(), (Integer) heartbeat.getOrDefault("uptime", 0));
                // executor stats live under the heartbeat's "heartbeat"/"stats" keys
                Map<String, Object> hb = (Map<String, Object>) heartbeat.get("heartbeat");
                if (hb != null) {
                    Map ex = (Map) hb.get("stats");
                    if (ex != null) {
                        ExecutorStats stats = StatsUtil.thriftifyExecutorStats(ex);
                        summ.set_stats(stats);
                    }
                }
                summaries.add(summ);
            }
        }
        TopologyInfo topoInfo = new TopologyInfo(topoId, common.topoName, Time.deltaSecs(common.launchTimeSecs), summaries, extractStatusStr(common.base), errors);
        if (common.base.is_set_owner()) {
            topoInfo.set_owner(common.base.get_owner());
        }
        String schedStatus = idToSchedStatus.get().get(topoId);
        if (schedStatus != null) {
            topoInfo.set_sched_status(schedStatus);
        }
        TopologyResources resources = getResourcesForTopology(topoId, common.base);
        if (resources != null) {
            topoInfo.set_requested_memonheap(resources.getRequestedMemOnHeap());
            topoInfo.set_requested_memoffheap(resources.getRequestedMemOffHeap());
            topoInfo.set_requested_cpu(resources.getRequestedCpu());
            topoInfo.set_assigned_memonheap(resources.getAssignedMemOnHeap());
            topoInfo.set_assigned_memoffheap(resources.getAssignedMemOffHeap());
            topoInfo.set_assigned_cpu(resources.getAssignedCpu());
        }
        if (common.base.is_set_component_debug()) {
            topoInfo.set_component_debug(common.base.get_component_debug());
        }
        topoInfo.set_replication_count(getBlobReplicationCount(ConfigUtils.masterStormCodeKey(topoId)));
        return topoInfo;
    } catch (Exception e) {
        LOG.warn("Get topo info exception. (topology id='{}')", topoId, e);
        if (e instanceof TException) {
            throw (TException) e;
        }
        throw new RuntimeException(e);
    }
}
Also used: TException (org.apache.thrift.TException), HashMap (java.util.HashMap), ExecutorStats (org.apache.storm.generated.ExecutorStats), ArrayList (java.util.ArrayList), ExecutorSummary (org.apache.storm.generated.ExecutorSummary), NotAliveException (org.apache.storm.generated.NotAliveException), ExecutorInfo (org.apache.storm.generated.ExecutorInfo), List (java.util.List), IStormClusterState (org.apache.storm.cluster.IStormClusterState), ErrorInfo (org.apache.storm.generated.ErrorInfo), AuthorizationException (org.apache.storm.generated.AuthorizationException), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), AlreadyAliveException (org.apache.storm.generated.AlreadyAliveException), KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException), KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException), InvalidTopologyException (org.apache.storm.generated.InvalidTopologyException), BindException (java.net.BindException), NodeInfo (org.apache.storm.generated.NodeInfo), NumErrorsChoice (org.apache.storm.generated.NumErrorsChoice), Map (java.util.Map), TimeCacheMap (org.apache.storm.utils.TimeCacheMap), ImmutableMap (com.google.common.collect.ImmutableMap), TopologyInfo (org.apache.storm.generated.TopologyInfo)
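
The GetInfoOptions argument is what drives the NumErrorsChoice switch above. A caller-side sketch (assumptions: client is a Nimbus.Iface obtained elsewhere, topoId is a known topology id, and the org.apache.storm.generated imports match the list above) asking for only the most recent error per component:

// Request topology info with at most one error per component.
GetInfoOptions options = new GetInfoOptions();
options.set_num_err_choice(NumErrorsChoice.ONE);
TopologyInfo info = client.getTopologyInfoWithOpts(topoId, options);
for (Entry<String, List<ErrorInfo>> entry : info.get_errors().entrySet()) {
    if (!entry.getValue().isEmpty()) {
        System.out.println(entry.getKey() + ": " + entry.getValue().get(0).get_error());
    }
}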

Example 74 with TException

use of org.apache.thrift.TException in project storm by apache.

From the class Nimbus, method listBlobs.

@SuppressWarnings("deprecation")
@Override
public ListBlobsResult listBlobs(String session) throws TException {
    try {
        Iterator<String> keyIt;
        // No session yet: issue a new session id and start listing keys from the beginning.
        if (session == null || session.isEmpty()) {
            keyIt = blobStore.listKeys();
            session = Utils.uuid();
        } else {
            keyIt = blobListers.get(session);
        }
        if (keyIt == null) {
            throw new RuntimeException("Blob list for session " + session + " does not exist (or timed out)");
        }
        if (!keyIt.hasNext()) {
            blobListers.remove(session);
            LOG.info("No more blobs to list for session {}", session);
            // A blank result communicates that there are no more blobs.
            return new ListBlobsResult(Collections.emptyList(), session);
        }
        ArrayList<String> listChunk = new ArrayList<>();
        for (int i = 0; i < 100 && keyIt.hasNext(); i++) {
            listChunk.add(keyIt.next());
        }
        blobListers.put(session, keyIt);
        LOG.info("Downloading {} entries", listChunk.size());
        return new ListBlobsResult(listChunk, session);
    } catch (Exception e) {
        LOG.warn("list blobs exception.", e);
        if (e instanceof TException) {
            throw (TException) e;
        }
        throw new RuntimeException(e);
    }
}
Also used: TException (org.apache.thrift.TException), ListBlobsResult (org.apache.storm.generated.ListBlobsResult), ArrayList (java.util.ArrayList), DataPoint (org.apache.storm.metric.api.DataPoint), AuthorizationException (org.apache.storm.generated.AuthorizationException), NotAliveException (org.apache.storm.generated.NotAliveException), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), AlreadyAliveException (org.apache.storm.generated.AlreadyAliveException), KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException), KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException), InvalidTopologyException (org.apache.storm.generated.InvalidTopologyException), BindException (java.net.BindException)
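
Because listBlobs hands back at most 100 keys per call, a caller pages through the listing by echoing the returned session back until an empty result arrives. A minimal sketch (assumption: client is a Nimbus.Iface obtained elsewhere):

// Collect every blob key by paging through listBlobs until the blank result is returned.
List<String> allKeys = new ArrayList<>();
String session = "";
while (true) {
    ListBlobsResult chunk = client.listBlobs(session);
    if (chunk.get_keys().isEmpty()) {
        break; // an empty key list signals the end of the listing
    }
    allKeys.addAll(chunk.get_keys());
    session = chunk.get_session(); // reuse the server-issued session for the next page
}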

Example 75 with TException

use of org.apache.thrift.TException in project storm by apache.

From the class ReturnResults, method execute.

@Override
public void execute(Tuple input) {
    String result = (String) input.getValue(0);
    String returnInfo = (String) input.getValue(1);
    if (returnInfo != null) {
        Map retMap = null;
        try {
            retMap = (Map) JSONValue.parseWithException(returnInfo);
        } catch (ParseException e) {
            LOG.error("Parseing returnInfo failed", e);
            _collector.fail(input);
            return;
        }
        final String host = (String) retMap.get("host");
        final int port = Utils.getInt(retMap.get("port"));
        String id = (String) retMap.get("id");
        DistributedRPCInvocations.Iface client;
        if (local) {
            client = (DistributedRPCInvocations.Iface) ServiceRegistry.getService(host);
        } else {
            List server = new ArrayList() {

                {
                    add(host);
                    add(port);
                }
            };
            if (!_clients.containsKey(server)) {
                try {
                    _clients.put(server, new DRPCInvocationsClient(_conf, host, port));
                } catch (TTransportException ex) {
                    throw new RuntimeException(ex);
                }
            }
            client = _clients.get(server);
        }
        int retryCnt = 0;
        int maxRetries = 3;
        while (retryCnt < maxRetries) {
            retryCnt++;
            try {
                client.result(id, result);
                _collector.ack(input);
                break;
            } catch (AuthorizationException aze) {
                LOG.error("Not authorized to return results to DRPC server", aze);
                _collector.fail(input);
                throw new RuntimeException(aze);
            } catch (TException tex) {
                if (retryCnt >= maxRetries) {
                    LOG.error("Failed to return results to DRPC server", tex);
                    _collector.fail(input);
                }
                reconnectClient((DRPCInvocationsClient) client);
            }
        }
    }
}
Also used: TException (org.apache.thrift.TException), AuthorizationException (org.apache.storm.generated.AuthorizationException), ArrayList (java.util.ArrayList), DistributedRPCInvocations (org.apache.storm.generated.DistributedRPCInvocations), TTransportException (org.apache.thrift.transport.TTransportException), List (java.util.List), ParseException (org.json.simple.parser.ParseException), HashMap (java.util.HashMap), Map (java.util.Map)
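
The returnInfo string parsed above is the JSON that the DRPC machinery attaches to each request so the topology knows where to deliver the result. Purely for illustration (the host, port, and id values are made up, and 3773 is only the commonly used drpc.invocations.port default), a matching string could be built like this:

// Build a returnInfo payload with the three fields this bolt reads back out.
Map<String, Object> retInfo = new HashMap<>();
retInfo.put("host", "drpc1.example.com"); // hypothetical DRPC server host
retInfo.put("port", 3773);                // assumed drpc.invocations.port
retInfo.put("id", "request-42");          // hypothetical request id
String returnInfo = JSONValue.toJSONString(retInfo);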

Aggregations

TException (org.apache.thrift.TException) 381
IOException (java.io.IOException) 164
MetaException (org.apache.hadoop.hive.metastore.api.MetaException) 57
NoSuchObjectException (org.apache.hadoop.hive.metastore.api.NoSuchObjectException) 48
ArrayList (java.util.ArrayList) 42
HashMap (java.util.HashMap) 40
Table (org.apache.hadoop.hive.metastore.api.Table) 38
Map (java.util.Map) 30
TBinaryProtocol (org.apache.thrift.protocol.TBinaryProtocol) 29
AuthorizationException (org.apache.storm.generated.AuthorizationException) 27
Test (org.junit.Test) 26
List (java.util.List) 25
InvalidObjectException (org.apache.hadoop.hive.metastore.api.InvalidObjectException) 24
UnknownHostException (java.net.UnknownHostException) 23
TProtocol (org.apache.thrift.protocol.TProtocol) 23
FileNotFoundException (java.io.FileNotFoundException) 22
HiveSQLException (org.apache.hive.service.cli.HiveSQLException) 22
InvalidMetaException (com.netflix.metacat.common.server.connectors.exception.InvalidMetaException) 21
LoginException (javax.security.auth.login.LoginException) 21
ConnectorException (com.netflix.metacat.common.server.connectors.exception.ConnectorException) 20