
Example 6 with UnknownNodeException

Use of org.apache.nifi.cluster.manager.exception.UnknownNodeException in project nifi by apache.

From the class ThreadPoolRequestReplicator, method replicate:

/**
 * Replicates the request to all nodes in the given set of node identifiers
 *
 * @param nodeIds             the NodeIdentifiers that identify which nodes to send the request to
 * @param method              the HTTP method to use
 * @param uri                 the URI to send the request to
 * @param entity              the entity to use
 * @param headers             the HTTP Headers
 * @param performVerification whether or not to verify that all nodes in the cluster are connected and that all nodes can perform the request. Ignored if the request is not mutable.
 * @param response            the response to update with the results
 * @param executionPhase      <code>true</code> if this is the execution phase, <code>false</code> otherwise
 * @param merge               whether or not the responses from the nodes should be merged into a single response
 * @param monitor             a monitor that will be notified when the request completes (successfully or otherwise)
 * @return an AsyncClusterResponse that can be used to obtain the result
 */
AsyncClusterResponse replicate(final Set<NodeIdentifier> nodeIds, final String method, final URI uri, final Object entity, final Map<String, String> headers, final boolean performVerification, StandardAsyncClusterResponse response, final boolean executionPhase, final boolean merge, final Object monitor) {
    try {
        // state validation
        Objects.requireNonNull(nodeIds);
        Objects.requireNonNull(method);
        Objects.requireNonNull(uri);
        Objects.requireNonNull(entity);
        Objects.requireNonNull(headers);
        if (nodeIds.isEmpty()) {
            throw new IllegalArgumentException("Cannot replicate request to 0 nodes");
        }
        // verify all of the nodes exist and are in the proper state
        for (final NodeIdentifier nodeId : nodeIds) {
            final NodeConnectionStatus status = clusterCoordinator.getConnectionStatus(nodeId);
            if (status == null) {
                throw new UnknownNodeException("Node " + nodeId + " does not exist in this cluster");
            }
            if (status.getState() != NodeConnectionState.CONNECTED) {
                throw new IllegalClusterStateException("Cannot replicate request to Node " + nodeId + " because the node is not connected");
            }
        }
        logger.debug("Replicating request {} {} with entity {} to {}; response is {}", method, uri, entity, nodeIds, response);
        // Update headers to indicate the current revision so that we can
        // prevent multiple users changing the flow at the same time
        final Map<String, String> updatedHeaders = new HashMap<>(headers);
        final String requestId = updatedHeaders.computeIfAbsent(REQUEST_TRANSACTION_ID_HEADER, key -> UUID.randomUUID().toString());
        long verifyClusterStateNanos = -1;
        if (performVerification) {
            final long start = System.nanoTime();
            verifyClusterState(method, uri.getPath());
            verifyClusterStateNanos = System.nanoTime() - start;
        }
        int numRequests = responseMap.size();
        if (numRequests >= maxConcurrentRequests) {
            numRequests = purgeExpiredRequests();
        }
        if (numRequests >= maxConcurrentRequests) {
            final Map<String, Long> countsByUri = responseMap.values().stream().collect(Collectors.groupingBy(StandardAsyncClusterResponse::getURIPath, Collectors.counting()));
            logger.error("Cannot replicate request {} {} because there are {} outstanding HTTP Requests already. Request Counts Per URI = {}", method, uri.getPath(), numRequests, countsByUri);
            throw new IllegalStateException("There are too many outstanding HTTP requests with a total " + numRequests + " outstanding requests");
        }
        // create a response object if one was not already passed to us
        if (response == null) {
            // create the request objects and replicate to all nodes.
            // When the request has completed, we need to ensure that we notify the monitor, if there is one.
            final CompletionCallback completionCallback = clusterResponse -> {
                try {
                    onCompletedResponse(requestId);
                } finally {
                    if (monitor != null) {
                        synchronized (monitor) {
                            monitor.notify();
                        }
                        logger.debug("Notified monitor {} because request {} {} has completed", monitor, method, uri);
                    }
                }
            };
            final Runnable responseConsumedCallback = () -> onResponseConsumed(requestId);
            response = new StandardAsyncClusterResponse(requestId, uri, method, nodeIds, responseMapper, completionCallback, responseConsumedCallback, merge);
            responseMap.put(requestId, response);
        }
        if (verifyClusterStateNanos > -1) {
            response.addTiming("Verify Cluster State", "All Nodes", verifyClusterStateNanos);
        }
        logger.debug("For Request ID {}, response object is {}", requestId, response);
        // if mutable request, we have to do a two-phase commit where we ask each node to verify
        // that the request can take place and then, if all nodes agree that it can, we can actually
        // issue the request. This is all handled by calling performVerification, which will replicate
        // the 'vote' request to all nodes and then if successful will call back into this method to
        // replicate the actual request.
        final boolean mutableRequest = isMutableRequest(method, uri.getPath());
        if (mutableRequest && performVerification) {
            logger.debug("Performing verification (first phase of two-phase commit) for Request ID {}", requestId);
            performVerification(nodeIds, method, uri, entity, updatedHeaders, response, merge, monitor);
            return response;
        } else if (mutableRequest) {
            response.setPhase(StandardAsyncClusterResponse.COMMIT_PHASE);
        }
        // Callback function for generating a NodeHttpRequestCallable that can be used to perform the work
        final StandardAsyncClusterResponse finalResponse = response;
        NodeRequestCompletionCallback nodeCompletionCallback = nodeResponse -> {
            logger.debug("Received response from {} for {} {}", nodeResponse.getNodeId(), method, uri.getPath());
            finalResponse.add(nodeResponse);
        };
        // instruct the node to actually perform the underlying action
        if (mutableRequest && executionPhase) {
            updatedHeaders.put(REQUEST_EXECUTION_HTTP_HEADER, "true");
        }
        // replicate the request to all nodes
        final Function<NodeIdentifier, NodeHttpRequest> requestFactory = nodeId -> new NodeHttpRequest(nodeId, method, createURI(uri, nodeId), entity, updatedHeaders, nodeCompletionCallback, finalResponse);
        submitAsyncRequest(nodeIds, uri.getScheme(), uri.getPath(), requestFactory, updatedHeaders);
        return response;
    } catch (final Throwable t) {
        if (monitor != null) {
            synchronized (monitor) {
                monitor.notify();
            }
            logger.debug("Notified monitor {} because request {} {} has failed with Throwable {}", monitor, method, uri, t);
        }
        if (response != null) {
            final RuntimeException failure = (t instanceof RuntimeException) ? (RuntimeException) t : new RuntimeException("Failed to submit Replication Request to background thread", t);
            response.setFailure(failure, new NodeIdentifier());
        }
        throw t;
    }
}
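The monitor parameter gives callers a way to block until replication either completes or fails: both the completion callback and the catch block above call monitor.notify() while holding the monitor's lock. Below is a minimal caller-side sketch of that wait/notify pattern. It assumes access to the package-private overload shown above (or an equivalent public overload); the helper method name, argument values, and wait bound are illustrative only, and isComplete()/getMergedResponse() refer to the AsyncClusterResponse API in NiFi 1.x.

// Sketch only: replicate a request to a set of nodes and wait on the monitor until the
// response is complete. nodeIds, uri, entity, and headers are assumed to come from the
// surrounding context; the 5-second wait bound per iteration is arbitrary.
NodeResponse replicateAndWait(final ThreadPoolRequestReplicator replicator, final Set<NodeIdentifier> nodeIds,
        final URI uri, final Object entity, final Map<String, String> headers) throws InterruptedException {
    final Object monitor = new Object();
    final AsyncClusterResponse clusterResponse;
    synchronized (monitor) {
        // performVerification=true triggers the two-phase commit for mutable requests;
        // response=null lets replicate() create its own StandardAsyncClusterResponse;
        // executionPhase=false, merge=true, and the monitor is notified on completion or failure
        clusterResponse = replicator.replicate(nodeIds, HttpMethod.PUT, uri, entity, headers, true, null, false, true, monitor);
        while (!clusterResponse.isComplete()) {
            // wait() releases the lock so the completion callback can acquire it and call monitor.notify()
            monitor.wait(TimeUnit.SECONDS.toMillis(5));
        }
    }
    return clusterResponse.getMergedResponse();
}

In NiFi itself, callers typically go through the public replicate overloads and block with awaitMergedResponse() rather than supplying a monitor directly; the sketch simply mirrors the notification contract described in the Javadoc above.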

Example 7 with UnknownNodeException

Use of org.apache.nifi.cluster.manager.exception.UnknownNodeException in project nifi by apache.

From the class FlowFileQueueResource, method getFlowFile:

/**
 * Gets the specified flowfile from the specified connection.
 *
 * @param connectionId  The connection id
 * @param flowFileUuid  The flowfile uuid
 * @param clusterNodeId The cluster node id where the flowfile resides
 * @return a flowFileDTO
 * @throws InterruptedException if interrupted
 */
@GET
@Consumes(MediaType.WILDCARD)
@Produces(MediaType.APPLICATION_JSON)
@Path("{id}/flowfiles/{flowfile-uuid}")
@ApiOperation(value = "Gets a FlowFile from a Connection.", response = FlowFileEntity.class, authorizations = { @Authorization(value = "Read Source Data - /data/{component-type}/{uuid}") })
@ApiResponses(value = { @ApiResponse(code = 400, message = "NiFi was unable to complete the request because it was invalid. The request should not be retried without modification."), @ApiResponse(code = 401, message = "Client could not be authenticated."), @ApiResponse(code = 403, message = "Client is not authorized to make this request."), @ApiResponse(code = 404, message = "The specified resource could not be found."), @ApiResponse(code = 409, message = "The request was valid but NiFi was not in the appropriate state to process it. Retrying the same request later may be successful.") })
public Response getFlowFile(@ApiParam(value = "The connection id.", required = true) @PathParam("id") final String connectionId, @ApiParam(value = "The flowfile uuid.", required = true) @PathParam("flowfile-uuid") final String flowFileUuid, @ApiParam(value = "The id of the node where the content exists if clustered.", required = false) @QueryParam("clusterNodeId") final String clusterNodeId) throws InterruptedException {
    // replicate if cluster manager
    if (isReplicateRequest()) {
        // determine where this request should be sent
        if (clusterNodeId == null) {
            throw new IllegalArgumentException("The id of the node in the cluster is required.");
        } else {
            // get the target node and ensure it exists
            final NodeIdentifier targetNode = getClusterCoordinator().getNodeIdentifier(clusterNodeId);
            if (targetNode == null) {
                throw new UnknownNodeException("The specified cluster node does not exist.");
            }
            return replicate(HttpMethod.GET, targetNode);
        }
    }
    // NOTE - deferred authorization so we can consider flowfile attributes in the access decision
    // get the flowfile
    final FlowFileDTO flowfileDto = serviceFacade.getFlowFile(connectionId, flowFileUuid);
    populateRemainingFlowFileContent(connectionId, flowfileDto);
    // create the response entity
    final FlowFileEntity entity = new FlowFileEntity();
    entity.setFlowFile(flowfileDto);
    return generateOkResponse(entity).build();
}
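For illustration, here is a client-side sketch of calling this endpoint with the JAX-RS client API. Only the relative path {id}/flowfiles/{flowfile-uuid} and the clusterNodeId query parameter come from the annotations above; the base URL, the /nifi-api/flowfile-queues mount point, and the identifier values are assumptions.

// Hypothetical client call for the endpoint above; base URL, resource mount point,
// and identifier values are placeholders, not taken from the example.
final Client client = ClientBuilder.newClient();
try {
    final FlowFileEntity flowFileEntity = client
            .target("https://nifi.example.com/nifi-api/flowfile-queues")   // assumed mount point
            .path("{id}/flowfiles/{flowfile-uuid}")
            .resolveTemplate("id", "my-connection-id")
            .resolveTemplate("flowfile-uuid", "my-flowfile-uuid")
            .queryParam("clusterNodeId", "my-cluster-node-id")              // node holding the FlowFile, if clustered
            .request(MediaType.APPLICATION_JSON)
            .get(FlowFileEntity.class);
    System.out.println(flowFileEntity.getFlowFile().getUuid());
} finally {
    client.close();
}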

Aggregations

UnknownNodeException (org.apache.nifi.cluster.manager.exception.UnknownNodeException): 7 uses
NodeIdentifier (org.apache.nifi.cluster.protocol.NodeIdentifier): 7 uses
ApiOperation (io.swagger.annotations.ApiOperation): 3 uses
ApiResponses (io.swagger.annotations.ApiResponses): 3 uses
Consumes (javax.ws.rs.Consumes): 3 uses
GET (javax.ws.rs.GET): 3 uses
Path (javax.ws.rs.Path): 3 uses
Produces (javax.ws.rs.Produces): 3 uses
NiFiUser (org.apache.nifi.authorization.user.NiFiUser): 3 uses
WebApplicationException (javax.ws.rs.WebApplicationException): 2 uses
NodeConnectionStatus (org.apache.nifi.cluster.coordination.node.NodeConnectionStatus): 2 uses
NodeResponse (org.apache.nifi.cluster.manager.NodeResponse): 2 uses
InputStream (java.io.InputStream): 1 use
OutputStream (java.io.OutputStream): 1 use
URI (java.net.URI): 1 use
URISyntaxException (java.net.URISyntaxException): 1 use
Collections (java.util.Collections): 1 use
HashMap (java.util.HashMap): 1 use
HashSet (java.util.HashSet): 1 use
List (java.util.List): 1 use