
Example 1 with MockReplicaId

Use of com.github.ambry.clustermap.MockReplicaId in project ambry by linkedin.

The class CloudStorageManagerTest, method addStartAndRemoveBlobStoreTest.

/**
 * Test {@code CloudStorageManager#addBlobStore}, {@code CloudStorageManager#startBlobStore}, and {@code CloudStorageManager#removeBlobStore}
 * @throws IOException
 */
@Test
public void addStartAndRemoveBlobStoreTest() throws IOException {
    CloudStorageManager cloudStorageManager = createNewCloudStorageManager();
    ReplicaId mockReplicaId = clusterMap.getReplicaIds(clusterMap.getDataNodeIds().get(0)).get(0);
    PartitionId partitionId = mockReplicaId.getPartitionId();
    // start store for a PartitionId not yet added to the manager
    Assert.assertFalse(cloudStorageManager.startBlobStore(partitionId));
    // remove store for a PartitionId not yet added to the manager
    Assert.assertFalse(cloudStorageManager.removeBlobStore(partitionId));
    // add a replica to the store
    Assert.assertTrue(cloudStorageManager.addBlobStore(mockReplicaId));
    // adding an already added replica succeeds as well
    Assert.assertTrue(cloudStorageManager.addBlobStore(mockReplicaId));
    // try start for the added partition
    Assert.assertTrue(cloudStorageManager.startBlobStore(partitionId));
    // try remove for an added partition
    Assert.assertTrue(cloudStorageManager.removeBlobStore(partitionId));
}
Also used: MockPartitionId(com.github.ambry.clustermap.MockPartitionId) PartitionId(com.github.ambry.clustermap.PartitionId) MockReplicaId(com.github.ambry.clustermap.MockReplicaId) ReplicaId(com.github.ambry.clustermap.ReplicaId) Test(org.junit.Test)
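The lifecycle contract this test pins down is easy to miss among the assertions: addBlobStore is idempotent, while startBlobStore and removeBlobStore fail for a partition that was never added. A minimal restatement of that contract, assuming only the CloudStorageManager API and the test helpers visible above:

// Sketch only: createNewCloudStorageManager() and clusterMap are the helper/field of CloudStorageManagerTest shown above.
CloudStorageManager manager = createNewCloudStorageManager();
ReplicaId replica = clusterMap.getReplicaIds(clusterMap.getDataNodeIds().get(0)).get(0);
PartitionId partition = replica.getPartitionId();
Assert.assertFalse(manager.startBlobStore(partition));  // unknown partition: start fails
Assert.assertFalse(manager.removeBlobStore(partition)); // unknown partition: remove fails
Assert.assertTrue(manager.addBlobStore(replica));       // first add succeeds
Assert.assertTrue(manager.addBlobStore(replica));       // re-adding is also reported as success
Assert.assertTrue(manager.startBlobStore(partition));   // start succeeds once added
Assert.assertTrue(manager.removeBlobStore(partition));  // remove succeeds once added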

Example 2 with MockReplicaId

Use of com.github.ambry.clustermap.MockReplicaId in project ambry by linkedin.

The class ResponseHandlerTest, method basicTest.

@Test
public void basicTest() {
    DummyMap mockClusterMap = new DummyMap();
    ResponseHandler handler = new ResponseHandler(mockClusterMap);
    Map<Object, ReplicaEventType[]> expectedEventTypes = new HashMap<>();
    expectedEventTypes.put(new SocketException(), new ReplicaEventType[] { ReplicaEventType.Node_Timeout });
    expectedEventTypes.put(new IOException(), new ReplicaEventType[] { ReplicaEventType.Node_Timeout });
    expectedEventTypes.put(new ConnectionPoolTimeoutException(""), new ReplicaEventType[] { ReplicaEventType.Node_Timeout });
    expectedEventTypes.put(ServerErrorCode.IO_Error, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Error });
    expectedEventTypes.put(ServerErrorCode.Disk_Unavailable, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Error });
    expectedEventTypes.put(ServerErrorCode.Partition_ReadOnly, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Ok, ReplicaEventType.Partition_ReadOnly, ReplicaEventType.Replica_Available });
    expectedEventTypes.put(ServerErrorCode.Replica_Unavailable, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Ok, ReplicaEventType.Replica_Unavailable });
    expectedEventTypes.put(ServerErrorCode.Temporarily_Disabled, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Ok, ReplicaEventType.Replica_Unavailable });
    expectedEventTypes.put(ServerErrorCode.Unknown_Error, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Ok, ReplicaEventType.Replica_Available });
    expectedEventTypes.put(ServerErrorCode.No_Error, new ReplicaEventType[] { ReplicaEventType.Node_Response, ReplicaEventType.Disk_Ok, ReplicaEventType.Replica_Available });
    expectedEventTypes.put(NetworkClientErrorCode.NetworkError, new ReplicaEventType[] { ReplicaEventType.Node_Timeout });
    expectedEventTypes.put(NetworkClientErrorCode.ConnectionUnavailable, new ReplicaEventType[] {});
    expectedEventTypes.put(new RouterException("", RouterErrorCode.UnexpectedInternalError), new ReplicaEventType[] {});
    expectedEventTypes.put(RouterErrorCode.AmbryUnavailable, new ReplicaEventType[] {});
    for (Map.Entry<Object, ReplicaEventType[]> entry : expectedEventTypes.entrySet()) {
        mockClusterMap.reset();
        handler.onEvent(new MockReplicaId(ReplicaType.DISK_BACKED), entry.getKey());
        Set<ReplicaEventType> expectedEvents = new HashSet<>(Arrays.asList(entry.getValue()));
        Set<ReplicaEventType> generatedEvents = mockClusterMap.getLastReplicaEvents();
        Assert.assertEquals("Unexpected generated event for event " + entry.getKey() + " \nExpected: " + expectedEvents + " \nReceived: " + generatedEvents, expectedEvents, generatedEvents);
    }
}
Also used: SocketException(java.net.SocketException) RouterException(com.github.ambry.router.RouterException) HashMap(java.util.HashMap) IOException(java.io.IOException) ConnectionPoolTimeoutException(com.github.ambry.network.ConnectionPoolTimeoutException) ReplicaEventType(com.github.ambry.clustermap.ReplicaEventType) MockReplicaId(com.github.ambry.clustermap.MockReplicaId) JSONObject(org.json.JSONObject) Map(java.util.Map) ClusterMap(com.github.ambry.clustermap.ClusterMap) HashSet(java.util.HashSet) Test(org.junit.Test)
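For a single error, the table-driven loop above collapses to one onEvent call. A minimal sketch of that pattern, reusing only names that appear in the test body (DummyMap, MockReplicaId, getLastReplicaEvents):

DummyMap map = new DummyMap();
ResponseHandler handler = new ResponseHandler(map);
// A SocketException is a connection-level failure, so the only event recorded should be Node_Timeout.
handler.onEvent(new MockReplicaId(ReplicaType.DISK_BACKED), new SocketException());
Set<ReplicaEventType> expected = new HashSet<>(Arrays.asList(ReplicaEventType.Node_Timeout));
Assert.assertEquals(expected, map.getLastReplicaEvents());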

Example 3 with MockReplicaId

Use of com.github.ambry.clustermap.MockReplicaId in project ambry by linkedin.

The class LeaderBasedReplicationTest, method onRoutingTableUpdateCallbackTest.

/**
 * Test cluster map change callback in {@link ReplicationManager} for routing table updates.
 * Test setup: when creating partitions, have one replica in LEADER state and the rest in STANDBY state in each data
 * center, then switch the states of replicas (LEADER to STANDBY and STANDBY to LEADER) in one of the DCs during the test.
 * Test condition: when the replication manager receives the onRoutingTableUpdate() indication after the remote replica
 * states were updated, the map of partition to peer leader replicas stored in the replication manager should be updated correctly.
 * @throws Exception
 */
@Test
public void onRoutingTableUpdateCallbackTest() throws Exception {
    Pair<StorageManager, ReplicationManager> managers = createStorageManagerAndReplicationManager(clusterMap, clusterMapConfig, mockHelixParticipant);
    StorageManager storageManager = managers.getFirst();
    MockReplicationManager replicationManager = (MockReplicationManager) managers.getSecond();
    replicationManager.start();
    // Trigger PartitionStateChangeListener callback to replication manager to notify that a local replica state has changed from STANDBY to LEADER
    List<? extends ReplicaId> replicaIds = clusterMap.getReplicaIds(replicationManager.dataNodeId);
    for (ReplicaId replicaId : replicaIds) {
        MockReplicaId mockReplicaId = (MockReplicaId) replicaId;
        if (mockReplicaId.getReplicaState() == ReplicaState.LEADER) {
            MockPartitionId existingPartition = (MockPartitionId) mockReplicaId.getPartitionId();
            mockHelixParticipant.onPartitionBecomeLeaderFromStandby(existingPartition.toPathString());
            // verify that map of peerLeaderReplicasByPartition in PartitionLeaderInfo is updated correctly
            Set<ReplicaId> peerLeaderReplicasInPartitionLeaderInfo = replicationManager.leaderBasedReplicationAdmin.getLeaderPartitionToPeerLeaderReplicas().get(existingPartition.toPathString());
            Set<ReplicaId> peerLeaderReplicasInClusterMap = new HashSet<>(existingPartition.getReplicaIdsByState(ReplicaState.LEADER, null));
            peerLeaderReplicasInClusterMap.remove(mockReplicaId);
            assertThat("Mismatch in list of leader peer replicas stored by partition in replication manager with cluster map", peerLeaderReplicasInPartitionLeaderInfo, is(peerLeaderReplicasInClusterMap));
            // Switch the LEADER/STANDBY states for remote replicas on one of the remote data centers
            ReplicaId peerLeaderReplica = peerLeaderReplicasInClusterMap.iterator().next();
            ReplicaId peerStandByReplica = existingPartition.getReplicaIdsByState(ReplicaState.STANDBY, peerLeaderReplica.getDataNodeId().getDatacenterName()).get(0);
            existingPartition.setReplicaState(peerLeaderReplica, ReplicaState.STANDBY);
            existingPartition.setReplicaState(peerStandByReplica, ReplicaState.LEADER);
            // Trigger routing table change callback to replication manager
            ClusterMapChangeListener clusterMapChangeListener = clusterMap.getClusterMapChangeListener();
            clusterMapChangeListener.onRoutingTableChange();
            // verify that new remote leader is reflected in the peerLeaderReplicasByPartition map
            peerLeaderReplicasInPartitionLeaderInfo = replicationManager.leaderBasedReplicationAdmin.getLeaderPartitionToPeerLeaderReplicas().get(existingPartition.toPathString());
            peerLeaderReplicasInClusterMap = new HashSet<>(existingPartition.getReplicaIdsByState(ReplicaState.LEADER, null));
            peerLeaderReplicasInClusterMap.remove(mockReplicaId);
            assertThat("Mismatch in map of peer leader replicas stored by partition in replication manager with cluster map after routing table update", peerLeaderReplicasInPartitionLeaderInfo, is(peerLeaderReplicasInClusterMap));
        }
    }
    storageManager.shutdown();
}
Also used: ClusterMapChangeListener(com.github.ambry.clustermap.ClusterMapChangeListener) MockPartitionId(com.github.ambry.clustermap.MockPartitionId) StorageManager(com.github.ambry.store.StorageManager) MockReplicaId(com.github.ambry.clustermap.MockReplicaId) ReplicaId(com.github.ambry.clustermap.ReplicaId) HashSet(java.util.HashSet) Test(org.junit.Test)
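Both assertions above compute their expected value the same way: collect every LEADER replica of the partition across all data centers, then drop the local replica, since a replica is not its own peer. Distilled, with the names used in the test:

// All LEADER replicas of the partition in every data center (null means no DC filter)...
Set<ReplicaId> expectedPeers = new HashSet<>(existingPartition.getReplicaIdsByState(ReplicaState.LEADER, null));
// ...minus the local leader itself.
expectedPeers.remove(mockReplicaId);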

Example 4 with MockReplicaId

Use of com.github.ambry.clustermap.MockReplicaId in project ambry by linkedin.

The class LeaderBasedReplicationTest, method replicaThreadLeaderBasedReplicationTokenCatchUpForStandbyToLeaderTest.

/**
 * Test leader-based replication to verify that the remote token is caught up for standby replicas and that the
 * updated token is used when their state transitions to leader.
 * @throws Exception
 */
@Test
public void replicaThreadLeaderBasedReplicationTokenCatchUpForStandbyToLeaderTest() throws Exception {
    /*
      Setup:
      we have 3 nodes that have replicas belonging to the same partitions:
      a) localNode (local node that hosts partitions)
      b) remoteNodeInLocalDC (remote node in the local data center that shares the partitions)
      c) remoteNodeInRemoteDC (remote node in a remote data center that shares the partitions)

      Each node has a few of its partitions as leaders and the rest as standby; leadership is randomly assigned
      during the creation of replicas for mock partitions.
     */
    Map<DataNodeId, MockHost> hosts = new HashMap<>();
    hosts.put(remoteNodeInLocalDC, remoteHostInLocalDC);
    hosts.put(remoteNodeInRemoteDC, remoteHostInRemoteDC);
    int batchSize = 5;
    ConnectionPool mockConnectionPool = new MockConnectionPool(hosts, clusterMap, batchSize);
    Pair<StorageManager, ReplicationManager> managers = createStorageManagerAndReplicationManager(clusterMap, clusterMapConfig, mockHelixParticipant, mockConnectionPool);
    StorageManager storageManager = managers.getFirst();
    MockReplicationManager replicationManager = (MockReplicationManager) managers.getSecond();
    // set mock local stores on all remoteReplicaInfos; these will be used during replication.
    for (PartitionId partitionId : replicationManager.partitionToPartitionInfo.keySet()) {
        localHost.addStore(partitionId, null);
        Store localStore = localHost.getStore(partitionId);
        localStore.start();
        List<RemoteReplicaInfo> remoteReplicaInfos = replicationManager.partitionToPartitionInfo.get(partitionId).getRemoteReplicaInfos();
        remoteReplicaInfos.forEach(remoteReplicaInfo -> remoteReplicaInfo.setLocalStore(localStore));
    }
    // get remote replicas and replica thread for remote host on local datacenter
    ReplicaThread intraColoReplicaThread = replicationManager.dataNodeIdToReplicaThread.get(remoteNodeInLocalDC);
    List<RemoteReplicaInfo> remoteReplicaInfosForLocalDC = intraColoReplicaThread.getRemoteReplicaInfos().get(remoteNodeInLocalDC);
    // get remote replicas and replica thread for remote host on remote datacenter
    ReplicaThread crossColoReplicaThread = replicationManager.dataNodeIdToReplicaThread.get(remoteNodeInRemoteDC);
    List<RemoteReplicaInfo> remoteReplicaInfosForRemoteDC = crossColoReplicaThread.getRemoteReplicaInfos().get(remoteNodeInRemoteDC);
    // mock the Helix STANDBY -> LEADER transition for local leader partitions
    List<? extends ReplicaId> replicaIds = clusterMap.getReplicaIds(replicationManager.dataNodeId);
    for (ReplicaId replicaId : replicaIds) {
        MockReplicaId mockReplicaId = (MockReplicaId) replicaId;
        if (mockReplicaId.getReplicaState() == ReplicaState.LEADER) {
            MockPartitionId mockPartitionId = (MockPartitionId) replicaId.getPartitionId();
            mockHelixParticipant.onPartitionBecomeLeaderFromStandby(mockPartitionId.toPathString());
        }
    }
    // Add put messages to all partitions on remoteHostInLocalDC and remoteHostInRemoteDC
    List<PartitionId> partitionIds = clusterMap.getWritablePartitionIds(null);
    for (PartitionId partitionId : partitionIds) {
        // add 2 * batchSize messages to remoteHostInLocalDC and remoteHostInRemoteDC, from which the local host will replicate.
        addPutMessagesToReplicasOfPartition(partitionId, Arrays.asList(remoteHostInLocalDC, remoteHostInRemoteDC), batchSize + batchSize);
    }
    // Choose partitions that are leaders on both local and remote nodes
    Set<ReplicaId> leaderReplicasOnLocalAndRemoteNodes = new HashSet<>();
    // Track a standby replica whose partition is a leader on the remote node. We will update this replica's state to
    // leader after one cycle of replication and verify that replication resumes from the saved remote token.
    MockReplicaId localStandbyReplicaWithLeaderPartitionOnRemoteNode = null;
    List<? extends ReplicaId> localReplicas = clusterMap.getReplicaIds(replicationManager.dataNodeId);
    List<? extends ReplicaId> remoteReplicas = clusterMap.getReplicaIds(remoteNodeInRemoteDC);
    for (int i = 0; i < localReplicas.size(); i++) {
        MockReplicaId localReplica = (MockReplicaId) localReplicas.get(i);
        MockReplicaId remoteReplica = (MockReplicaId) remoteReplicas.get(i);
        if (localReplica.getReplicaState() == ReplicaState.LEADER && remoteReplica.getReplicaState() == ReplicaState.LEADER) {
            leaderReplicasOnLocalAndRemoteNodes.add(remoteReplicas.get(i));
        }
        if (localReplica.getReplicaState() == ReplicaState.STANDBY && remoteReplica.getReplicaState() == ReplicaState.LEADER && localStandbyReplicaWithLeaderPartitionOnRemoteNode == null) {
            localStandbyReplicaWithLeaderPartitionOnRemoteNode = localReplica;
        }
    }
    // replicate with remote node in remote DC
    crossColoReplicaThread.replicate();
    // verify that the remote token moved only for leader replicas; standby replica tokens stay at 0 since missing messages are not fetched yet
    for (RemoteReplicaInfo remoteReplicaInfo : remoteReplicaInfosForRemoteDC) {
        if (leaderReplicasOnLocalAndRemoteNodes.contains(remoteReplicaInfo.getReplicaId())) {
            assertEquals("remote token mismatch for leader replicas", ((MockFindToken) remoteReplicaInfo.getToken()).getIndex(), batchSize - 1);
        } else {
            assertEquals("remote token should not move forward for standby replicas until missing keys are fetched", ((MockFindToken) remoteReplicaInfo.getToken()).getIndex(), 0);
        }
    }
    // Replicate with remote node in local dc
    intraColoReplicaThread.replicate();
    // verify that remote token will be moved for all intra-colo replicas with token index = batchSize-1
    for (RemoteReplicaInfo replicaInfo : remoteReplicaInfosForLocalDC) {
        assertEquals("mismatch in remote token set for intra colo replicas", ((MockFindToken) replicaInfo.getToken()).getIndex(), batchSize - 1);
    }
    // process missing keys for cross colo replicas from previous metadata exchange
    for (RemoteReplicaInfo remoteReplicaInfo : remoteReplicaInfosForRemoteDC) {
        crossColoReplicaThread.processMissingKeysFromPreviousMetadataResponse(remoteReplicaInfo);
    }
    // verify that remote tokens for cross-colo standby replicas have now advanced, as missing keys were obtained via intra-dc replication
    for (RemoteReplicaInfo replicaInfo : remoteReplicaInfosForRemoteDC) {
        assertEquals("mismatch in remote token set for inter colo replicas", ((MockFindToken) replicaInfo.getToken()).getIndex(), batchSize - 1);
    }
    // If we have a local standby replica with leader partition on remote node, change its state to leader
    if (localStandbyReplicaWithLeaderPartitionOnRemoteNode != null) {
        MockPartitionId mockPartitionId = (MockPartitionId) localStandbyReplicaWithLeaderPartitionOnRemoteNode.getPartitionId();
        mockHelixParticipant.onPartitionBecomeLeaderFromStandby(mockPartitionId.toPathString());
    }
    // Trigger replication again with remote node in remote DC
    crossColoReplicaThread.replicate();
    // verify that replication resumed for leader replicas and for the newly promoted localStandbyReplicaWithLeaderPartitionOnRemoteNode, starting from remote token index = batchSize - 1
    for (RemoteReplicaInfo remoteReplicaInfo : remoteReplicaInfosForRemoteDC) {
        if (leaderReplicasOnLocalAndRemoteNodes.contains(remoteReplicaInfo.getReplicaId()) || (remoteReplicaInfo.getLocalReplicaId().equals(localStandbyReplicaWithLeaderPartitionOnRemoteNode))) {
            assertEquals("remote token mismatch for leader replicas", ((MockFindToken) remoteReplicaInfo.getToken()).getIndex(), batchSize * 2 - 2);
        } else {
            assertEquals("remote token should not move forward for standby replicas until missing keys are fetched", ((MockFindToken) remoteReplicaInfo.getToken()).getIndex(), batchSize - 1);
        }
    }
    // Trigger replication again with remote node in local DC
    intraColoReplicaThread.replicate();
    // verify that remote token is moved forward for all intra-colo replicas.
    for (RemoteReplicaInfo replicaInfo : remoteReplicaInfosForLocalDC) {
        assertEquals("mismatch in remote token set for intra colo replicas", ((MockFindToken) replicaInfo.getToken()).getIndex(), batchSize * 2 - 2);
    }
    // process missing keys for cross colo replicas from previous metadata exchange
    for (RemoteReplicaInfo remoteReplicaInfo : remoteReplicaInfosForRemoteDC) {
        crossColoReplicaThread.processMissingKeysFromPreviousMetadataResponse(remoteReplicaInfo);
    }
    // verify that remote tokens for cross-colo standby replicas have advanced again, with missing keys obtained via intra-dc replication
    for (RemoteReplicaInfo remoteReplicaInfo : remoteReplicaInfosForRemoteDC) {
        assertEquals("mismatch in remote token set for intra colo replicas", ((MockFindToken) remoteReplicaInfo.getToken()).getIndex(), batchSize * 2 - 2);
    }
    storageManager.shutdown();
}
Also used: ConnectionPool(com.github.ambry.network.ConnectionPool) HashMap(java.util.HashMap) MockPartitionId(com.github.ambry.clustermap.MockPartitionId) StorageManager(com.github.ambry.store.StorageManager) Store(com.github.ambry.store.Store) PartitionId(com.github.ambry.clustermap.PartitionId) MockReplicaId(com.github.ambry.clustermap.MockReplicaId) ReplicaId(com.github.ambry.clustermap.ReplicaId) DataNodeId(com.github.ambry.clustermap.DataNodeId) HashSet(java.util.HashSet) Test(org.junit.Test)
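The constants in the token assertions all follow one rule of this mock setup: each replicate() call that actually fetches data advances a replica's MockFindToken index by batchSize - 1 from its previous value. A sketch of that arithmetic (a property of the mocks here, not of production tokens):

int batchSize = 5;
int afterFirstCycle = batchSize - 1;                    // 0 -> 4
int afterSecondCycle = afterFirstCycle + batchSize - 1; // 4 -> 8, i.e. batchSize * 2 - 2
// Standby replicas lag one cycle behind: their tokens only reach these values
// after processMissingKeysFromPreviousMetadataResponse() completes the exchange.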

Example 5 with MockReplicaId

Use of com.github.ambry.clustermap.MockReplicaId in project ambry by linkedin.

The class ReplicationTest, method replicaFromStandbyToLeaderTest.

/**
 * Test the state transition in the replication manager from STANDBY to LEADER.
 * Test setup: when creating partitions, make sure there is exactly one replica in LEADER state in each data center.
 * Test condition: when a partition on the current node moves from STANDBY to LEADER, verify that the in-memory map of
 * partition to peer leader replicas is updated correctly.
 * @throws Exception
 */
@Test
public void replicaFromStandbyToLeaderTest() throws Exception {
    MockClusterMap clusterMap = new MockClusterMap();
    ClusterMapConfig clusterMapConfig = new ClusterMapConfig(verifiableProperties);
    MockHelixParticipant.metricRegistry = new MetricRegistry();
    MockHelixParticipant mockHelixParticipant = new MockHelixParticipant(clusterMapConfig);
    ReplicationConfig initialReplicationConfig = replicationConfig;
    properties.setProperty("replication.model.across.datacenters", "LEADER_BASED");
    replicationConfig = new ReplicationConfig(new VerifiableProperties(properties));
    Pair<StorageManager, ReplicationManager> managers = createStorageManagerAndReplicationManager(clusterMap, clusterMapConfig, mockHelixParticipant);
    StorageManager storageManager = managers.getFirst();
    MockReplicationManager replicationManager = (MockReplicationManager) managers.getSecond();
    List<ReplicaId> replicaIds = clusterMap.getReplicaIds(replicationManager.dataNodeId);
    for (ReplicaId replicaId : replicaIds) {
        MockReplicaId mockReplicaId = (MockReplicaId) replicaId;
        if (mockReplicaId.getReplicaState() == ReplicaState.LEADER) {
            PartitionId existingPartition = mockReplicaId.getPartitionId();
            mockHelixParticipant.onPartitionBecomeLeaderFromStandby(existingPartition.toPathString());
            Set<ReplicaId> peerLeaderReplicasInReplicationManager = replicationManager.leaderBasedReplicationAdmin.getLeaderPartitionToPeerLeaderReplicas().get(existingPartition.toPathString());
            Set<ReplicaId> peerLeaderReplicasInClusterMap = new HashSet<>(existingPartition.getReplicaIdsByState(ReplicaState.LEADER, null));
            peerLeaderReplicasInClusterMap.remove(mockReplicaId);
            assertThat("Mismatch in list of leader peer replicas stored by partition in replication manager and cluster map", peerLeaderReplicasInReplicationManager, is(peerLeaderReplicasInClusterMap));
        }
    }
    storageManager.shutdown();
    replicationConfig = initialReplicationConfig;
}
Also used: ReplicationConfig(com.github.ambry.config.ReplicationConfig) VerifiableProperties(com.github.ambry.config.VerifiableProperties) MetricRegistry(com.codahale.metrics.MetricRegistry) StorageManager(com.github.ambry.store.StorageManager) MockPartitionId(com.github.ambry.clustermap.MockPartitionId) PartitionId(com.github.ambry.clustermap.PartitionId) ClusterMapConfig(com.github.ambry.config.ClusterMapConfig) MockReplicaId(com.github.ambry.clustermap.MockReplicaId) ReplicaId(com.github.ambry.clustermap.ReplicaId) MockHelixParticipant(com.github.ambry.clustermap.MockHelixParticipant) MockClusterMap(com.github.ambry.clustermap.MockClusterMap) HashSet(java.util.HashSet) Test(org.junit.Test)
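Leader-based mode is driven by the single property the test flips, and the original ReplicationConfig is restored afterwards so other tests are unaffected. In isolation, with the property name and classes exactly as used in the test above:

Properties properties = new Properties();
properties.setProperty("replication.model.across.datacenters", "LEADER_BASED");
ReplicationConfig replicationConfig = new ReplicationConfig(new VerifiableProperties(properties));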

Aggregations

MockReplicaId (com.github.ambry.clustermap.MockReplicaId): 27
ReplicaId (com.github.ambry.clustermap.ReplicaId): 20
MockPartitionId (com.github.ambry.clustermap.MockPartitionId): 19
Test (org.junit.Test): 19
PartitionId (com.github.ambry.clustermap.PartitionId): 13
ArrayList (java.util.ArrayList): 10
MockClusterMap (com.github.ambry.clustermap.MockClusterMap): 8
StorageManager (com.github.ambry.store.StorageManager): 8
HashSet (java.util.HashSet): 7
Port (com.github.ambry.network.Port): 6
Store (com.github.ambry.store.Store): 6
HashMap (java.util.HashMap): 6
MockDataNodeId (com.github.ambry.clustermap.MockDataNodeId): 5
VerifiableProperties (com.github.ambry.config.VerifiableProperties): 5
MetricRegistry (com.codahale.metrics.MetricRegistry): 4
ClusterMapConfig (com.github.ambry.config.ClusterMapConfig): 4
StoreKey (com.github.ambry.store.StoreKey): 4
IOException (java.io.IOException): 4
List (java.util.List): 4
Counter (com.codahale.metrics.Counter): 3