Search in sources :

Example 36 with NodeID

use of org.jivesoftware.openfire.cluster.NodeID in project Openfire by igniterealtime.

In class SessionManager, the method leftCluster:

@Override
public void leftCluster(byte[] nodeID) {
    // Another node left the cluster.
    final NodeID nodeIDOfLostNode = NodeID.getInstance(nodeID);
    Log.debug("Cluster node {} just left the cluster.", nodeIDOfLostNode);
    // When the local node drops out of the cluster (for example, due to a network failure), then from the perspective
    // of that node, all other nodes leave the cluster. This method is invoked for each of them. In certain
    // circumstances, this can mean that the local node no longer has access to all data (or its backups) that is
    // maintained in the clustered caches. From the perspective of the remaining node, this data is lost. (OF-2297/OF-2300).
    // To prevent this being an issue, most caches have supporting local data structures that maintain a copy of the most
    // critical bits of the data stored in the clustered cache, which is to be used to detect and/or correct such a
    // loss in data. This is done in the next few lines of this method.
    detectAndFixBrokenCaches();
    // When a peer server leaves the cluster, any remote sessions that were associated with the defunct node must be
    // dropped from the session caches (and supporting data structures) that are shared by the remaining cluster member(s).
    // Note: All remaining cluster nodes will be in a race to clean up the same data. We cannot depend on cluster
    // seniority to appoint a 'single' cleanup node, because for a small moment we may not have a senior cluster member.
    // Remove incoming server sessions hosted in node that left the cluster
    final Set<StreamID> removedServerSessions = incomingServerSessionInfoByClusterNode.remove(nodeIDOfLostNode);
    if (removedServerSessions != null) {
        removedServerSessions.forEach(streamID -> {
            try {
                // Remove all the domains that were registered for this server session.
                unregisterIncomingServerSession(streamID);
            } catch (Exception e) {
                Log.error("Node {} left the cluster. Incoming server sessions on that node are no longer available. To reflect this, we're deleting these sessions. While doing this for '{}', this caused an exception to occur.", nodeIDOfLostNode, streamID, e);
            }
        });
    }
    // For componentSessionsCache and multiplexerSessionsCache there is no clean up to be done, except for removing
    // the value from the cache. Therefore it is unnecessary to create a reverse lookup tracking state per (remote)
    // node.
    CacheUtil.removeValueFromMultiValuedCache(componentSessionsCache, NodeID.getInstance(nodeID));
    CacheUtil.removeValueFromCache(multiplexerSessionsCache, NodeID.getInstance(nodeID));
    // Remove client sessions hosted in node that left the cluster
    final Set<String> removedSessionInfo = sessionInfoKeysByClusterNode.remove(nodeIDOfLostNode);
    if (removedSessionInfo != null) {
        removedSessionInfo.forEach(fullJID -> {
            final JID offlineJID = new JID(fullJID);
            boolean sessionIsAnonymous = false;
            final ClientSessionInfo clientSessionInfoAboutToBeRemoved = sessionInfoCache.remove(fullJID);
            if (clientSessionInfoAboutToBeRemoved != null) {
                sessionIsAnonymous = clientSessionInfoAboutToBeRemoved.isAnonymous();
            } else {
                // Apparently there is an inconsistency between sessionInfoKeysByClusterNode and sessionInfoCache.
                // That's troublesome, so log a warning. For the session removal we can't do more than just assume
                // the session was not anonymous (which has the highest probability for most use cases).
                Log.warn("Session information for {} is not available from sessionInfoCache, while it was still expected to be there", fullJID);
            }
            removeSession(null, offlineJID, sessionIsAnonymous, true);
        });
    }
    // In some cache implementations, the entry-set is unmodifiable. To guard against potential
    // future changes of this implementation (that would make the implementation incompatible with
    // these cache implementations), the entry-set that's operated on in this implementation is
    // explicitly wrapped in an unmodifiable collection. That forces this implementation to be
    // compatible with the 'lowest common denominator'.
    final Set<Map.Entry<String, ClientSessionInfo>> entries = Collections.unmodifiableSet(sessionInfoCache.entrySet());
    for (final Map.Entry<String, ClientSessionInfo> entry : entries) {
        if (entry.getValue().getNodeID().equals(NodeID.getInstance(nodeID))) {
            sessionInfoCache.remove(entry.getKey());
        }
    }
}
Also used : JID(org.xmpp.packet.JID) UnauthorizedException(org.jivesoftware.openfire.auth.UnauthorizedException) UnknownHostException(java.net.UnknownHostException) NodeID(org.jivesoftware.openfire.cluster.NodeID) Map(java.util.Map) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) ConcurrentMap(java.util.concurrent.ConcurrentMap)
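
The comments in leftCluster describe the pattern used here: next to each clustered cache, a local reverse lookup (such as incomingServerSessionInfoByClusterNode or sessionInfoKeysByClusterNode) records which entries each cluster node owns, so that the entries of a defunct node can be dropped without scanning the whole cache. The following is a minimal, self-contained sketch of that pattern; the class and field names are hypothetical, and plain java.util collections stand in for Openfire's clustered caches.

import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a clustered cache plus its per-node reverse lookup.
public class NodeOwnedEntryCleanup {

    // "Clustered" cache: stream ID -> owning node (stand-in for the incoming server sessions cache).
    private final Map<String, byte[]> sessionsByStreamId = new ConcurrentHashMap<>();

    // Local reverse lookup: node -> stream IDs it owns (stand-in for incomingServerSessionInfoByClusterNode).
    private final Map<String, Set<String>> streamIdsByNode = new ConcurrentHashMap<>();

    public void register(String streamId, byte[] nodeId) {
        sessionsByStreamId.put(streamId, nodeId);
        streamIdsByNode.computeIfAbsent(asKey(nodeId), k -> ConcurrentHashMap.newKeySet()).add(streamId);
    }

    // Invoked when a node leaves: drop everything it owned without iterating the full cache.
    public void leftCluster(byte[] nodeId) {
        final Set<String> owned = streamIdsByNode.remove(asKey(nodeId));
        if (owned != null) {
            owned.forEach(sessionsByStreamId::remove);
        }
    }

    // byte[] uses identity-based equals, so use a String form as the map key.
    private static String asKey(byte[] nodeId) {
        return new String(nodeId, StandardCharsets.UTF_8);
    }
}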

Example 37 with NodeID

use of org.jivesoftware.openfire.cluster.NodeID in project Openfire by igniterealtime.

In class SessionManager, the method getComponentSessions:

/**
 * Returns a collection with the established sessions from external components.
 *
 * @return a collection with the established sessions from external components.
 */
public Collection<ComponentSession> getComponentSessions() {
    List<ComponentSession> sessions = new ArrayList<>();
    // Add sessions of external components connected to this JVM
    sessions.addAll(localSessionManager.getComponentsSessions());
    // Add sessions of external components connected to other cluster nodes
    RemoteSessionLocator locator = server.getRemoteSessionLocator();
    if (locator != null) {
        for (Map.Entry<String, HashSet<NodeID>> entry : componentSessionsCache.entrySet()) {
            for (NodeID nodeID : entry.getValue()) {
                if (!server.getNodeID().equals(nodeID)) {
                    sessions.add(locator.getComponentSession(nodeID.toByteArray(), new JID(entry.getKey())));
                }
            }
        }
    }
    return sessions;
}
Also used : JID(org.xmpp.packet.JID) ArrayList(java.util.ArrayList) NodeID(org.jivesoftware.openfire.cluster.NodeID) Map(java.util.Map) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) ConcurrentMap(java.util.concurrent.ConcurrentMap) HashSet(java.util.HashSet)
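
The method builds its result in two steps: first the sessions of components connected to the local node, then, if a remote session locator is set (hence the null check), a located session for every address/node pair in the cache that does not belong to the local node. A compact, self-contained sketch of that "merge local with remote, skip self" pattern follows; every name in it is hypothetical.

import java.util.*;

public class SessionMergeSketch {

    // Stand-in for a remote session locator: resolves a session owned by another node.
    interface RemoteLocator {
        String locate(String ownerNode, String address);
    }

    static List<String> allSessions(String localNode,
                                    List<String> localSessions,
                                    Map<String, Set<String>> ownerNodesByAddress,
                                    RemoteLocator locator) {
        final List<String> result = new ArrayList<>(localSessions);
        if (locator == null) {
            return result;                       // nothing remote to add
        }
        for (Map.Entry<String, Set<String>> entry : ownerNodesByAddress.entrySet()) {
            for (String owner : entry.getValue()) {
                if (!localNode.equals(owner)) {  // never ask the locator for sessions we host ourselves
                    result.add(locator.locate(owner, entry.getKey()));
                }
            }
        }
        return result;
    }
}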

Example 38 with NodeID

use of org.jivesoftware.openfire.cluster.NodeID in project Openfire by igniterealtime.

In class ReverseLookupComputingCacheEntryListenerTest, the method testUpdate:

/**
 * Simulates a scenario where one cluster node adds an entry to an otherwise empty cache, followed by another
 * cluster node updating that entry, adding itself as another 'owner' of the entry. In this scenario, the
 * event listeners are fired in the same order as the order in which the insertions occur. Due to the asynchronous
 * behavior, this is not guaranteed to occur (see #testUpdateEventsInWrongOrder).
 */
@Test
public void testUpdate() throws Exception {
    // Set up test fixture, simulating a cache with this signature: Cache<String, Set<NodeID>> cache;
    final Map<NodeID, Set<String>> reverseLookupMap = new HashMap<>();
    final Function<HashSet<NodeID>, Set<NodeID>> deducer = nodeIDS -> nodeIDS;
    final ReverseLookupComputingCacheEntryListener<String, HashSet<NodeID>> listener = new ReverseLookupComputingCacheEntryListener<>(reverseLookupMap, deducer);
    final NodeID clusterNodeA = NodeID.getInstance(UUID.randomUUID().toString().getBytes());
    final NodeID clusterNodeB = NodeID.getInstance(UUID.randomUUID().toString().getBytes());
    // Execute system under test.
    listener.entryAdded("somekey", new HashSet<>(Arrays.asList(clusterNodeA)), clusterNodeA);
    listener.entryUpdated("somekey", new HashSet<>(Arrays.asList(clusterNodeA)), new HashSet<>(Arrays.asList(clusterNodeA, clusterNodeB)), clusterNodeB);
    // Assert result
    assertTrue(reverseLookupMap.containsKey(clusterNodeA));
    assertTrue(reverseLookupMap.get(clusterNodeA).contains("somekey"));
    assertTrue(reverseLookupMap.containsKey(clusterNodeB));
    assertTrue(reverseLookupMap.get(clusterNodeB).contains("somekey"));
}
Also used : java.util(java.util) NodeID(org.jivesoftware.openfire.cluster.NodeID) Test(org.junit.Test) Assert(org.junit.Assert) Function(java.util.function.Function)
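
The deducer handed to ReverseLookupComputingCacheEntryListener extracts the owning NodeIDs from a cache value, which lets the listener keep a NodeID-to-keys reverse lookup up to date as entries are added and updated. The sketch below illustrates that idea only; it is not the actual Openfire class, uses String node ids instead of NodeID, and omits the argument identifying the node that fired the event.

import java.util.*;
import java.util.function.Function;

// Sketch: derive a "node -> keys" reverse lookup from cache events whose values carry the owning nodes.
class ReverseLookupSketch<K, V> {

    private final Map<String, Set<K>> reverseLookup;        // node id -> keys owned by that node
    private final Function<V, Set<String>> ownerDeducer;    // extracts owner node ids from a cache value

    ReverseLookupSketch(Map<String, Set<K>> reverseLookup, Function<V, Set<String>> ownerDeducer) {
        this.reverseLookup = reverseLookup;
        this.ownerDeducer = ownerDeducer;
    }

    void entryAdded(K key, V newValue) {
        register(key, newValue);
    }

    void entryUpdated(K key, V oldValue, V newValue) {
        register(key, newValue);                             // every owner present in the new value gains the key
    }

    private void register(K key, V value) {
        for (String node : ownerDeducer.apply(value)) {
            reverseLookup.computeIfAbsent(node, n -> new HashSet<>()).add(key);
        }
    }
}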

Example 39 with NodeID

use of org.jivesoftware.openfire.cluster.NodeID in project Openfire by igniterealtime.

In class ReverseLookupComputingCacheEntryListenerTest, the method testUpdateEventsInWrongOrder:

/**
 * Simulates a scenario where one cluster node adds an entry to an otherwise empty cache, followed by another
 * cluster node updating that entry, adding itself as another 'owner' of the entry, where the events that are
 * generated by these actions arrive in the reversed order (which, as this is an async operation, can occur).
 */
@Test
public void testUpdateEventsInWrongOrder() throws Exception {
    // Set up test fixture, simulating a cache with this signature: Cache<String, Set<NodeID>> cache;
    final Map<NodeID, Set<String>> reverseLookupMap = new HashMap<>();
    final Function<HashSet<NodeID>, Set<NodeID>> deducer = nodeIDS -> nodeIDS;
    final ReverseLookupComputingCacheEntryListener<String, HashSet<NodeID>> listener = new ReverseLookupComputingCacheEntryListener<>(reverseLookupMap, deducer);
    final NodeID clusterNodeA = NodeID.getInstance(UUID.randomUUID().toString().getBytes());
    final NodeID clusterNodeB = NodeID.getInstance(UUID.randomUUID().toString().getBytes());
    // Execute system under test.
    listener.entryUpdated("somekey", new HashSet<>(Arrays.asList(clusterNodeA)), new HashSet<>(Arrays.asList(clusterNodeA, clusterNodeB)), clusterNodeB);
    listener.entryAdded("somekey", new HashSet<>(Arrays.asList(clusterNodeA)), clusterNodeA);
    // Assert result
    assertTrue(reverseLookupMap.containsKey(clusterNodeA));
    assertTrue(reverseLookupMap.get(clusterNodeA).contains("somekey"));
    assertTrue(reverseLookupMap.containsKey(clusterNodeB));
    assertTrue(reverseLookupMap.get(clusterNodeB).contains("somekey"));
}
Also used : java.util(java.util) NodeID(org.jivesoftware.openfire.cluster.NodeID) Test(org.junit.Test) Assert(org.junit.Assert) Function(java.util.function.Function)
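
This variant only passes when the listener's bookkeeping is insensitive to the order in which the events arrive. With union-style accumulation, as in the sketch after Example 38, applying the add and the update in either order produces the same reverse map. A tiny check of that property (hypothetical names, reusing ReverseLookupSketch from above):

import java.util.*;

public class OrderIndependenceCheck {

    public static void main(String[] args) {
        // Apply "add then update" and "update then add", and compare the resulting reverse maps.
        final Map<String, Set<String>> inOrder = apply(true);
        final Map<String, Set<String>> reversedOrder = apply(false);
        System.out.println(inOrder.equals(reversedOrder)); // expected: true for union-style accumulation
    }

    private static Map<String, Set<String>> apply(boolean addFirst) {
        final Map<String, Set<String>> reverseLookup = new HashMap<>();
        final ReverseLookupSketch<String, Set<String>> listener =
            new ReverseLookupSketch<>(reverseLookup, owners -> owners);
        final Set<String> afterAdd = new HashSet<>(Collections.singletonList("nodeA"));
        final Set<String> afterUpdate = new HashSet<>(Arrays.asList("nodeA", "nodeB"));
        if (addFirst) {
            listener.entryAdded("somekey", afterAdd);
            listener.entryUpdated("somekey", afterAdd, afterUpdate);
        } else {
            listener.entryUpdated("somekey", afterAdd, afterUpdate);
            listener.entryAdded("somekey", afterAdd);
        }
        return reverseLookup;
    }
}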

Example 40 with NodeID

use of org.jivesoftware.openfire.cluster.NodeID in project Openfire by igniterealtime.

In class OccupantManager, the method replaceOccupant:

/**
 * Registers disappearance of an existing occupant, and/or appearance of a new occupant, on a specific node.
 *
 * This method maintains the three different occupant lookup tables, and keeps them in sync.
 *
 * @param oldOccupant An occupant that is to be removed from the registration of the referred node (nullable)
 * @param newOccupant An occupant that is to be added to the registration of the referred node (nullable)
 * @param nodeIDToReplaceOccupantFor The id of the node that the old/new occupant need to be (de)registered under. If null then the occupant is (de)registered for each node.
 */
private void replaceOccupant(Occupant oldOccupant, Occupant newOccupant, NodeID nodeIDToReplaceOccupantFor) {
    Set<NodeID> nodeIDsToReplaceOccupantFor = new HashSet<>();
    if (nodeIDToReplaceOccupantFor == null) {
        // all node ids
        nodeIDsToReplaceOccupantFor = occupantsByNode.keySet();
    } else {
        // just the one
        nodeIDsToReplaceOccupantFor.add(nodeIDToReplaceOccupantFor);
    }
    for (NodeID nodeID : nodeIDsToReplaceOccupantFor) {
        synchronized (nodeID) {
            // Step 1: remove old occupant, if there is any
            deleteOccupantFromNode(oldOccupant, nodeID);
            // Step 2: add new occupant, if there is any
            if (newOccupant != null) {
                occupantsByNode.computeIfAbsent(nodeID, (n) -> new HashSet<>()).add(newOccupant);
                nodesByOccupant.computeIfAbsent(newOccupant, (n) -> new HashSet<>()).add(nodeID);
            }
            Log.debug("Replaced occupant {} with {} for node {}", oldOccupant, newOccupant, nodeID);
        }
    }
    Log.debug("Occupants remaining after replace: {}", nodesByOccupant);
}
Also used : java.util(java.util) OccupantUpdatedTask(org.jivesoftware.openfire.muc.cluster.OccupantUpdatedTask) OccupantKickedForNicknameTask(org.jivesoftware.openfire.muc.cluster.OccupantKickedForNicknameTask) MultiUserChatService(org.jivesoftware.openfire.muc.MultiUserChatService) CacheFactory(org.jivesoftware.util.cache.CacheFactory) TaskEngine(org.jivesoftware.util.TaskEngine) LoggerFactory(org.slf4j.LoggerFactory) OccupantRemovedTask(org.jivesoftware.openfire.muc.cluster.OccupantRemovedTask) JID(org.xmpp.packet.JID) ConcurrentMap(java.util.concurrent.ConcurrentMap) NodeID(org.jivesoftware.openfire.cluster.NodeID) Message(org.xmpp.packet.Message) OccupantAddedTask(org.jivesoftware.openfire.muc.cluster.OccupantAddedTask) SystemProperty(org.jivesoftware.util.SystemProperty) LocalTime(java.time.LocalTime) XMPPServer(org.jivesoftware.openfire.XMPPServer) SyncLocalOccupantsAndSendJoinPresenceTask(org.jivesoftware.openfire.muc.cluster.SyncLocalOccupantsAndSendJoinPresenceTask) Nonnull(javax.annotation.Nonnull) Nullable(javax.annotation.Nullable) Logger(org.slf4j.Logger) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) MUCRoom(org.jivesoftware.openfire.muc.MUCRoom) Instant(java.time.Instant) Collectors(java.util.stream.Collectors) MUCRole(org.jivesoftware.openfire.muc.MUCRole) DateTimeFormatter(java.time.format.DateTimeFormatter) MUCEventListener(org.jivesoftware.openfire.muc.MUCEventListener)
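
replaceOccupant relies on deleteOccupantFromNode (not shown in this snippet) to keep occupantsByNode and nodesByOccupant mirrored when an occupant disappears. A hedged, self-contained sketch of what that removal half can look like is shown below; it uses plain collections, String stand-ins for NodeID and Occupant, and hypothetical names throughout.

import java.util.*;

// Sketch of the removal half: keep "node -> occupants" and "occupant -> nodes" mirrored.
class OccupantIndexSketch {

    final Map<String, Set<String>> occupantsByNode = new HashMap<>();
    final Map<String, Set<String>> nodesByOccupant = new HashMap<>();

    void deleteOccupantFromNode(String occupant, String nodeId) {
        if (occupant == null) {
            return;                                   // nothing to deregister
        }
        final Set<String> occupants = occupantsByNode.get(nodeId);
        if (occupants != null) {
            occupants.remove(occupant);
            if (occupants.isEmpty()) {
                occupantsByNode.remove(nodeId);       // drop empty buckets so lookups stay small
            }
        }
        final Set<String> nodes = nodesByOccupant.get(occupant);
        if (nodes != null) {
            nodes.remove(nodeId);
            if (nodes.isEmpty()) {
                nodesByOccupant.remove(occupant);
            }
        }
    }
}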

Aggregations

NodeID (org.jivesoftware.openfire.cluster.NodeID): 41 usages
JID (org.xmpp.packet.JID): 18 usages
Lock (java.util.concurrent.locks.Lock): 15 usages
java.util (java.util): 12 usages
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 12 usages
ConcurrentMap (java.util.concurrent.ConcurrentMap): 12 usages
Collectors (java.util.stream.Collectors): 11 usages
Nonnull (javax.annotation.Nonnull): 10 usages
XMPPServer (org.jivesoftware.openfire.XMPPServer): 10 usages
UnauthorizedException (org.jivesoftware.openfire.auth.UnauthorizedException): 9 usages
MUCRole (org.jivesoftware.openfire.muc.MUCRole): 8 usages
MUCRoom (org.jivesoftware.openfire.muc.MUCRoom): 8 usages
Logger (org.slf4j.Logger): 8 usages
LoggerFactory (org.slf4j.LoggerFactory): 8 usages
Nullable (javax.annotation.Nullable): 7 usages
ClusterManager (org.jivesoftware.openfire.cluster.ClusterManager): 7 usages
CacheFactory (org.jivesoftware.util.cache.CacheFactory): 7 usages
Map (java.util.Map): 6 usages
PacketException (org.jivesoftware.openfire.PacketException): 6 usages
DomainPair (org.jivesoftware.openfire.session.DomainPair): 6 usages