Example 1 with ClusterMap

Use of com.github.ambry.clustermap.ClusterMap in project ambry by LinkedIn.

From the class AmbryBlobStorageServiceFactoryTest, method getAmbryBlobStorageServiceFactoryWithBadInputTest.

/**
 * Tests instantiation of {@link AmbryBlobStorageServiceFactory} with bad input.
 * @throws Exception if setting up the test dependencies fails.
 */
@Test
public void getAmbryBlobStorageServiceFactoryWithBadInputTest() throws Exception {
    // Dud properties; the server should pick up defaults.
    Properties properties = new Properties();
    VerifiableProperties verifiableProperties = new VerifiableProperties(properties);
    ClusterMap clusterMap = new MockClusterMap();
    RestResponseHandler restResponseHandler = new MockRestRequestResponseHandler();
    Router router = new InMemoryRouter(verifiableProperties, clusterMap);
    // VerifiableProperties null.
    try {
        new AmbryBlobStorageServiceFactory(null, clusterMap, restResponseHandler, router, new MockNotifier());
        fail("Instantiation should have failed because one of the arguments was null");
    } catch (IllegalArgumentException e) {
        // expected. Nothing to do.
    }
    // ClusterMap null.
    try {
        new AmbryBlobStorageServiceFactory(verifiableProperties, null, restResponseHandler, router, new MockNotifier());
        fail("Instantiation should have failed because one of the arguments was null");
    } catch (IllegalArgumentException e) {
        // expected. Nothing to do.
    }
    // RestResponseHandler null.
    try {
        new AmbryBlobStorageServiceFactory(verifiableProperties, clusterMap, null, router, new MockNotifier());
        fail("Instantiation should have failed because one of the arguments was null");
    } catch (IllegalArgumentException e) {
        // expected. Nothing to do.
    }
    // Router null.
    try {
        new AmbryBlobStorageServiceFactory(verifiableProperties, clusterMap, restResponseHandler, null, new MockNotifier());
        fail("Instantiation should have failed because one of the arguments was null");
    } catch (IllegalArgumentException e) {
        // expected. Nothing to do.
    }
}
Also used : ClusterMap(com.github.ambry.clustermap.ClusterMap) MockClusterMap(com.github.ambry.clustermap.MockClusterMap) InMemoryRouter(com.github.ambry.router.InMemoryRouter) Router(com.github.ambry.router.Router) MockNotifier(com.github.ambry.account.MockNotifier) RestResponseHandler(com.github.ambry.rest.RestResponseHandler) MockRestRequestResponseHandler(com.github.ambry.rest.MockRestRequestResponseHandler) Properties(java.util.Properties) VerifiableProperties(com.github.ambry.config.VerifiableProperties) Test(org.junit.Test)
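The behavior this test pins down is a fail-fast null check on every constructor argument. A minimal hedged sketch of that pattern follows; ExampleFactory is hypothetical and only illustrates the contract the test asserts, it is not Ambry code:

// Hypothetical sketch of the null-guard pattern the test above exercises:
// reject any null dependency at construction time with IllegalArgumentException.
public class ExampleFactory {
    private final Object verifiableProperties;
    private final Object clusterMap;

    public ExampleFactory(Object verifiableProperties, Object clusterMap) {
        if (verifiableProperties == null || clusterMap == null) {
            // Fail fast so misconfiguration surfaces at construction time,
            // which is exactly what each try/fail/catch block above asserts.
            throw new IllegalArgumentException("One or more required arguments is null");
        }
        this.verifiableProperties = verifiableProperties;
        this.clusterMap = clusterMap;
    }
}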

Example 2 with ClusterMap

Use of com.github.ambry.clustermap.ClusterMap in project ambry by LinkedIn.

From the class ConsistencyCheckerTool, method main.

public static void main(String[] args) throws Exception {
    VerifiableProperties properties = ToolUtils.getVerifiableProperties(args);
    ConsistencyCheckerToolConfig config = new ConsistencyCheckerToolConfig(properties);
    ClusterMapConfig clusterMapConfig = new ClusterMapConfig(properties);
    ServerConfig serverConfig = new ServerConfig(properties);
    try (ClusterMap clusterMap = new StaticClusterAgentsFactory(clusterMapConfig, config.hardwareLayoutFilePath, config.partitionLayoutFilePath).getClusterMap()) {
        StoreToolsMetrics metrics = new StoreToolsMetrics(clusterMap.getMetricRegistry());
        StoreConfig storeConfig = new StoreConfig(properties);
        // This tool supports only blob IDs; it can become generic if StoreKeyFactory provides a deserFromString method.
        BlobIdFactory blobIdFactory = new BlobIdFactory(clusterMap);
        Set<StoreKey> filterKeySet = new HashSet<>();
        for (String key : config.filterSet) {
            filterKeySet.add(new BlobId(key, clusterMap));
        }
        Time time = SystemTime.getInstance();
        Throttler throttler = new Throttler(config.indexEntriesToProcessPerSec, 1000, true, time);
        StoreKeyConverterFactory storeKeyConverterFactory = Utils.getObj(serverConfig.serverStoreKeyConverterFactory, properties, clusterMap.getMetricRegistry());
        ConsistencyCheckerTool consistencyCheckerTool = new ConsistencyCheckerTool(clusterMap, blobIdFactory, storeConfig, filterKeySet, throttler, metrics, time, storeKeyConverterFactory.getStoreKeyConverter());
        boolean success = consistencyCheckerTool.checkConsistency(config.pathOfInput.listFiles(File::isDirectory)).getFirst();
        System.exit(success ? 0 : 1);
    }
}
Also used : ClusterMap(com.github.ambry.clustermap.ClusterMap) VerifiableProperties(com.github.ambry.config.VerifiableProperties) SystemTime(com.github.ambry.utils.SystemTime) Time(com.github.ambry.utils.Time) ClusterMapConfig(com.github.ambry.config.ClusterMapConfig) BlobIdFactory(com.github.ambry.commons.BlobIdFactory) ServerConfig(com.github.ambry.config.ServerConfig) StaticClusterAgentsFactory(com.github.ambry.clustermap.StaticClusterAgentsFactory) StoreConfig(com.github.ambry.config.StoreConfig) BlobId(com.github.ambry.commons.BlobId) File(java.io.File) HashSet(java.util.HashSet) Throttler(com.github.ambry.utils.Throttler)
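The loop above that builds filterKeySet parses each configured blob ID string against the cluster map. If the same conversion is needed elsewhere, it generalizes into a small helper. A minimal sketch, assuming the same Ambry imports as the example plus java.util.Collection; the helper name toStoreKeys is ours, not Ambry's:

// Hypothetical helper mirroring the filter-set construction above.
static Set<StoreKey> toStoreKeys(Collection<String> blobIdStrings, ClusterMap clusterMap) throws Exception {
    Set<StoreKey> keys = new HashSet<>();
    for (String id : blobIdStrings) {
        // BlobId's (String, ClusterMap) constructor deserializes the ID and
        // resolves its partition against the cluster map, as in the loop above.
        keys.add(new BlobId(id, clusterMap));
    }
    return keys;
}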

Example 3 with ClusterMap

Use of com.github.ambry.clustermap.ClusterMap in project ambry by LinkedIn.

From the class DumpCompactionLogTool, method main.

public static void main(String[] args) throws Exception {
    VerifiableProperties verifiableProperties = ToolUtils.getVerifiableProperties(args);
    DumpCompactionLogConfig config = new DumpCompactionLogConfig(verifiableProperties);
    ClusterMapConfig clusterMapConfig = new ClusterMapConfig(verifiableProperties);
    try (ClusterMap clusterMap = ((ClusterAgentsFactory) Utils.getObj(clusterMapConfig.clusterMapClusterAgentsFactory, clusterMapConfig, config.hardwareLayoutFilePath, config.partitionLayoutFilePath)).getClusterMap()) {
        File file = new File(config.compactionLogFilePath);
        BlobIdFactory blobIdFactory = new BlobIdFactory(clusterMap);
        StoreConfig storeConfig = new StoreConfig(verifiableProperties);
        Time time = SystemTime.getInstance();
        CompactionLog compactionLog = new CompactionLog(file, blobIdFactory, time, storeConfig);
        System.out.println(compactionLog);
    }
}
Also used : ClusterMap(com.github.ambry.clustermap.ClusterMap) VerifiableProperties(com.github.ambry.config.VerifiableProperties) StoreConfig(com.github.ambry.config.StoreConfig) SystemTime(com.github.ambry.utils.SystemTime) Time(com.github.ambry.utils.Time) ClusterAgentsFactory(com.github.ambry.clustermap.ClusterAgentsFactory) File(java.io.File) ClusterMapConfig(com.github.ambry.config.ClusterMapConfig) BlobIdFactory(com.github.ambry.commons.BlobIdFactory)
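This tool, like the ServerAdminTool example below, obtains its ClusterAgentsFactory reflectively via Utils.getObj, with the implementation class name taken from clusterMapConfig.clusterMapClusterAgentsFactory. A minimal sketch of that reflective loading pattern, matching constructors by arity only (an assumption for brevity; a production version such as Ambry's would also check parameter types):

import java.lang.reflect.Constructor;

// Hypothetical sketch of reflective factory loading in the style of Utils.getObj.
@SuppressWarnings("unchecked")
static <T> T newInstance(String className, Object... args) throws Exception {
    for (Constructor<?> ctor : Class.forName(className).getConstructors()) {
        if (ctor.getParameterCount() == args.length) {
            // First constructor with matching arity wins; real code should
            // also verify that each argument is assignable to its parameter.
            return (T) ctor.newInstance(args);
        }
    }
    throw new NoSuchMethodException("No matching constructor on " + className);
}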

Example 4 with ClusterMap

Use of com.github.ambry.clustermap.ClusterMap in project ambry by LinkedIn.

From the class IndexWritePerformance, method main.

public static void main(String[] args) {
    FileWriter writer = null;
    try {
        OptionParser parser = new OptionParser();
        ArgumentAcceptingOptionSpec<Integer> numberOfIndexesOpt = parser.accepts("numberOfIndexes", "The number of indexes to create").withRequiredArg().describedAs("number_of_indexes").ofType(Integer.class);
        ArgumentAcceptingOptionSpec<String> hardwareLayoutOpt = parser.accepts("hardwareLayout", "The path of the hardware layout file").withRequiredArg().describedAs("hardware_layout").ofType(String.class);
        ArgumentAcceptingOptionSpec<String> partitionLayoutOpt = parser.accepts("partitionLayout", "The path of the partition layout file").withRequiredArg().describedAs("partition_layout").ofType(String.class);
        ArgumentAcceptingOptionSpec<Integer> numberOfWritersOpt = parser.accepts("numberOfWriters", "The number of writers that write to a random index concurrently").withRequiredArg().describedAs("The number of writers").ofType(Integer.class).defaultsTo(4);
        ArgumentAcceptingOptionSpec<Integer> writesPerSecondOpt = parser.accepts("writesPerSecond", "The rate at which writes need to be performed").withRequiredArg().describedAs("The number of writes per second").ofType(Integer.class).defaultsTo(1000);
        ArgumentAcceptingOptionSpec<Boolean> verboseLoggingOpt = parser.accepts("enableVerboseLogging", "Enables verbose logging").withOptionalArg().describedAs("Enable verbose logging").ofType(Boolean.class).defaultsTo(false);
        OptionSet options = parser.parse(args);
        ArrayList<OptionSpec> listOpt = new ArrayList<>();
        listOpt.add(numberOfIndexesOpt);
        listOpt.add(hardwareLayoutOpt);
        listOpt.add(partitionLayoutOpt);
        ToolUtils.ensureOrExit(listOpt, options, parser);
        int numberOfIndexes = options.valueOf(numberOfIndexesOpt);
        int numberOfWriters = options.valueOf(numberOfWritersOpt);
        int writesPerSecond = options.valueOf(writesPerSecondOpt);
        boolean enableVerboseLogging = options.has(verboseLoggingOpt);
        if (enableVerboseLogging) {
            System.out.println("Enabled verbose logging");
        }
        final AtomicLong totalTimeTakenInNs = new AtomicLong(0);
        final AtomicLong totalWrites = new AtomicLong(0);
        String hardwareLayoutPath = options.valueOf(hardwareLayoutOpt);
        String partitionLayoutPath = options.valueOf(partitionLayoutOpt);
        ClusterMapConfig clusterMapConfig = new ClusterMapConfig(new VerifiableProperties(new Properties()));
        ClusterMap map = ((ClusterAgentsFactory) Utils.getObj(clusterMapConfig.clusterMapClusterAgentsFactory, clusterMapConfig, hardwareLayoutPath, partitionLayoutPath)).getClusterMap();
        StoreKeyFactory factory = new BlobIdFactory(map);
        File logFile = new File(System.getProperty("user.dir"), "writeperflog");
        writer = new FileWriter(logFile);
        MetricRegistry metricRegistry = new MetricRegistry();
        StoreMetrics metrics = new StoreMetrics(metricRegistry);
        DiskSpaceAllocator diskSpaceAllocator = new DiskSpaceAllocator(false, null, 0, new StorageManagerMetrics(metricRegistry));
        Properties props = new Properties();
        props.setProperty("store.index.memory.size.bytes", "2097152");
        props.setProperty("store.segment.size.in.bytes", "10");
        StoreConfig config = new StoreConfig(new VerifiableProperties(props));
        Log log = new Log(System.getProperty("user.dir"), 10, diskSpaceAllocator, config, metrics, null);
        ScheduledExecutorService s = Utils.newScheduler(numberOfWriters, "index", false);
        ArrayList<BlobIndexMetrics> indexWithMetrics = new ArrayList<>(numberOfIndexes);
        for (int i = 0; i < numberOfIndexes; i++) {
            File indexFile = new File(System.getProperty("user.dir"), Integer.toString(i));
            if (indexFile.exists()) {
                for (File c : indexFile.listFiles()) {
                    c.delete();
                }
            } else {
                indexFile.mkdir();
            }
            System.out.println("Creating index folder " + indexFile.getAbsolutePath());
            writer.write("logdir-" + indexFile.getAbsolutePath() + "\n");
            indexWithMetrics.add(new BlobIndexMetrics(indexFile.getAbsolutePath(), s, log, enableVerboseLogging, totalWrites, totalTimeTakenInNs, totalWrites, config, writer, factory));
        }
        final CountDownLatch latch = new CountDownLatch(numberOfWriters);
        final AtomicBoolean shutdown = new AtomicBoolean(false);
        // Attach a shutdown handler to catch Ctrl-C.
        Runtime.getRuntime().addShutdownHook(new Thread() {

            public void run() {
                try {
                    System.out.println("Shutdown invoked");
                    shutdown.set(true);
                    latch.await();
                    System.out.println("Total writes : " + totalWrites.get() + "  Total time taken : " + totalTimeTakenInNs.get() + " Nano Seconds  Average time taken per write " + ((double) totalWrites.get() / totalTimeTakenInNs.get()) / SystemTime.NsPerSec + " Seconds");
                } catch (Exception e) {
                    System.out.println("Error while shutting down " + e);
                }
            }
        });
        Throttler throttler = new Throttler(writesPerSecond, 100, true, SystemTime.getInstance());
        Thread[] threadIndexPerf = new Thread[numberOfWriters];
        for (int i = 0; i < numberOfWriters; i++) {
            threadIndexPerf[i] = new Thread(new IndexWritePerfRun(indexWithMetrics, throttler, shutdown, latch, map));
            threadIndexPerf[i].start();
        }
        for (int i = 0; i < numberOfWriters; i++) {
            threadIndexPerf[i].join();
        }
    } catch (StoreException e) {
        System.err.println("Index creation error on exit " + e.getMessage());
    } catch (Exception e) {
        System.err.println("Error on exit " + e);
    } finally {
        if (writer != null) {
            try {
                writer.close();
            } catch (Exception e) {
                System.out.println("Error when closing the writer");
            }
        }
    }
}
Also used : OptionSpec(joptsimple.OptionSpec) ArgumentAcceptingOptionSpec(joptsimple.ArgumentAcceptingOptionSpec) ClusterMap(com.github.ambry.clustermap.ClusterMap) FileWriter(java.io.FileWriter) ArrayList(java.util.ArrayList) Properties(java.util.Properties) VerifiableProperties(com.github.ambry.config.VerifiableProperties) OptionParser(joptsimple.OptionParser) ClusterAgentsFactory(com.github.ambry.clustermap.ClusterAgentsFactory) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) Throttler(com.github.ambry.utils.Throttler) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) MetricRegistry(com.codahale.metrics.MetricRegistry) CountDownLatch(java.util.concurrent.CountDownLatch) ClusterMapConfig(com.github.ambry.config.ClusterMapConfig) BlobIdFactory(com.github.ambry.commons.BlobIdFactory) AtomicLong(java.util.concurrent.atomic.AtomicLong) StoreConfig(com.github.ambry.config.StoreConfig) OptionSet(joptsimple.OptionSet) File(java.io.File)
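The shutdown coordination in this example is a reusable pattern: the shutdown hook flips a shared AtomicBoolean, then blocks on a CountDownLatch until every writer thread has observed the flag and counted down, so the final statistics are printed only after all writers stop. A stripped-down, self-contained sketch of the same idea (the worker body is a stand-in, not Ambry code):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class GracefulShutdownSketch {
    public static void main(String[] args) {
        final int numWorkers = 4;
        final CountDownLatch latch = new CountDownLatch(numWorkers);
        final AtomicBoolean shutdown = new AtomicBoolean(false);
        // The hook runs on Ctrl-C / SIGTERM: signal the workers, then wait for all of them.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            shutdown.set(true);
            try {
                latch.await();
                System.out.println("All workers drained; safe to report totals");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
        for (int i = 0; i < numWorkers; i++) {
            new Thread(() -> {
                while (!shutdown.get()) {
                    Thread.onSpinWait(); // stand-in for one throttled write
                }
                latch.countDown(); // this worker has finished cleanly
            }).start();
        }
    }
}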

Example 5 with ClusterMap

Use of com.github.ambry.clustermap.ClusterMap in project ambry by LinkedIn.

From the class ServerAdminTool, method main.

/**
 * Runs the server admin tool.
 * @param args associated arguments.
 * @throws Exception if the tool fails to set up or execute the requested operation.
 */
public static void main(String[] args) throws Exception {
    VerifiableProperties verifiableProperties = ToolUtils.getVerifiableProperties(args);
    ServerAdminToolConfig config = new ServerAdminToolConfig(verifiableProperties);
    ClusterMapConfig clusterMapConfig = new ClusterMapConfig(verifiableProperties);
    ClusterMap clusterMap = ((ClusterAgentsFactory) Utils.getObj(clusterMapConfig.clusterMapClusterAgentsFactory, clusterMapConfig, config.hardwareLayoutFilePath, config.partitionLayoutFilePath)).getClusterMap();
    SSLFactory sslFactory = !clusterMapConfig.clusterMapSslEnabledDatacenters.isEmpty() ? SSLFactory.getNewInstance(new SSLConfig(verifiableProperties)) : null;
    ServerAdminTool serverAdminTool = new ServerAdminTool(clusterMap, sslFactory, verifiableProperties);
    File file = new File(config.dataOutputFilePath);
    if (!file.exists() && !file.createNewFile()) {
        throw new IllegalStateException("Could not create " + file);
    }
    FileOutputStream outputFileStream = new FileOutputStream(config.dataOutputFilePath);
    DataNodeId dataNodeId = clusterMap.getDataNodeId(config.hostname, config.port);
    if (dataNodeId == null) {
        throw new IllegalArgumentException("Could not find a data node corresponding to " + config.hostname + ":" + config.port);
    }
    switch(config.typeOfOperation) {
        case GetBlobProperties:
            BlobId blobId = new BlobId(config.blobId, clusterMap);
            Pair<ServerErrorCode, BlobProperties> bpResponse = serverAdminTool.getBlobProperties(dataNodeId, blobId, config.getOption, clusterMap);
            if (bpResponse.getFirst() == ServerErrorCode.No_Error) {
                LOGGER.info("Blob properties for {} from {}: {}", blobId, dataNodeId, bpResponse.getSecond());
            } else {
                LOGGER.error("Failed to get blob properties for {} from {} with option {}. Error code is {}", blobId, dataNodeId, config.getOption, bpResponse.getFirst());
            }
            break;
        case GetUserMetadata:
            blobId = new BlobId(config.blobId, clusterMap);
            Pair<ServerErrorCode, ByteBuffer> umResponse = serverAdminTool.getUserMetadata(dataNodeId, blobId, config.getOption, clusterMap);
            if (umResponse.getFirst() == ServerErrorCode.No_Error) {
                writeBufferToFile(umResponse.getSecond(), outputFileStream);
                LOGGER.info("User metadata for {} from {} written to {}", blobId, dataNodeId, config.dataOutputFilePath);
            } else {
                LOGGER.error("Failed to get user metadata for {} from {} with option {}. Error code is {}", blobId, dataNodeId, config.getOption, umResponse.getFirst());
            }
            break;
        case GetBlob:
            blobId = new BlobId(config.blobId, clusterMap);
            Pair<ServerErrorCode, BlobData> bResponse = serverAdminTool.getBlob(dataNodeId, blobId, config.getOption, clusterMap);
            if (bResponse.getFirst() == ServerErrorCode.No_Error) {
                LOGGER.info("Blob type of {} from {} is {}", blobId, dataNodeId, bResponse.getSecond().getBlobType());
                ByteBuf buffer = bResponse.getSecond().content();
                try {
                    writeByteBufToFile(buffer, outputFileStream);
                } finally {
                    buffer.release();
                }
                LOGGER.info("Blob data for {} from {} written to {}", blobId, dataNodeId, config.dataOutputFilePath);
            } else {
                LOGGER.error("Failed to get blob data for {} from {} with option {}. Error code is {}", blobId, dataNodeId, config.getOption, bResponse.getFirst());
            }
            break;
        case TriggerCompaction:
            if (config.partitionIds.length > 0 && !config.partitionIds[0].isEmpty()) {
                for (String partitionIdStr : config.partitionIds) {
                    PartitionId partitionId = getPartitionIdFromStr(partitionIdStr, clusterMap);
                    ServerErrorCode errorCode = serverAdminTool.triggerCompaction(dataNodeId, partitionId);
                    if (errorCode == ServerErrorCode.No_Error) {
                        LOGGER.info("Compaction has been triggered for {} on {}", partitionId, dataNodeId);
                    } else {
                        LOGGER.error("From {}, received server error code {} for trigger compaction request on {}", dataNodeId, errorCode, partitionId);
                    }
                }
            } else {
                LOGGER.error("There were no partitions provided to trigger compaction on");
            }
            break;
        case RequestControl:
            if (config.partitionIds.length > 0 && !config.partitionIds[0].isEmpty()) {
                for (String partitionIdStr : config.partitionIds) {
                    PartitionId partitionId = getPartitionIdFromStr(partitionIdStr, clusterMap);
                    sendRequestControlRequest(serverAdminTool, dataNodeId, partitionId, config.requestTypeToControl, config.enableState);
                }
            } else {
                LOGGER.info("No partition list provided. Requesting enable status of {} to be set to {} on all partitions", config.requestTypeToControl, config.enableState);
                sendRequestControlRequest(serverAdminTool, dataNodeId, null, config.requestTypeToControl, config.enableState);
            }
            break;
        case ReplicationControl:
            List<String> origins = Collections.emptyList();
            if (config.origins.length > 0 && !config.origins[0].isEmpty()) {
                origins = Arrays.asList(config.origins);
            }
            if (config.partitionIds.length > 0 && !config.partitionIds[0].isEmpty()) {
                for (String partitionIdStr : config.partitionIds) {
                    PartitionId partitionId = getPartitionIdFromStr(partitionIdStr, clusterMap);
                    sendReplicationControlRequest(serverAdminTool, dataNodeId, partitionId, origins, config.enableState);
                }
            } else {
                LOGGER.info("No partition list provided. Requesting enable status for replication from {} to be set to {} on " + "all partitions", origins.isEmpty() ? "all DCs" : origins, config.enableState);
                sendReplicationControlRequest(serverAdminTool, dataNodeId, null, origins, config.enableState);
            }
            break;
        case CatchupStatus:
            if (config.partitionIds.length > 0 && !config.partitionIds[0].isEmpty()) {
                for (String partitionIdStr : config.partitionIds) {
                    PartitionId partitionId = getPartitionIdFromStr(partitionIdStr, clusterMap);
                    Pair<ServerErrorCode, Boolean> response = serverAdminTool.isCaughtUp(dataNodeId, partitionId, config.acceptableLagInBytes, config.numReplicasCaughtUpPerPartition);
                    if (response.getFirst() == ServerErrorCode.No_Error) {
                        LOGGER.info("Replicas are {} within {} bytes for {}", response.getSecond() ? "" : "NOT", config.acceptableLagInBytes, partitionId);
                    } else {
                        LOGGER.error("From {}, received server error code {} for request for catchup status of {}", dataNodeId, response.getFirst(), partitionId);
                    }
                }
            } else {
                Pair<ServerErrorCode, Boolean> response = serverAdminTool.isCaughtUp(dataNodeId, null, config.acceptableLagInBytes, config.numReplicasCaughtUpPerPartition);
                if (response.getFirst() == ServerErrorCode.No_Error) {
                    LOGGER.info("Replicas are {} within {} bytes for all partitions", response.getSecond() ? "" : "NOT", config.acceptableLagInBytes);
                } else {
                    LOGGER.error("From {}, received server error code {} for request for catchup status of all partitions", dataNodeId, response.getFirst());
                }
            }
            break;
        case BlobStoreControl:
            if (config.partitionIds.length > 0 && !config.partitionIds[0].isEmpty()) {
                for (String partitionIdStr : config.partitionIds) {
                    PartitionId partitionId = getPartitionIdFromStr(partitionIdStr, clusterMap);
                    sendBlobStoreControlRequest(serverAdminTool, dataNodeId, partitionId, config.numReplicasCaughtUpPerPartition, config.storeControlRequestType);
                }
            } else {
                LOGGER.error("There were no partitions provided to be controlled (Start/Stop)");
            }
            break;
        default:
            throw new IllegalStateException("Recognized but unsupported operation: " + config.typeOfOperation);
    }
    serverAdminTool.close();
    outputFileStream.close();
    clusterMap.close();
    System.out.println("Server admin tool is safely closed");
    System.exit(0);
}
Also used : ClusterMap(com.github.ambry.clustermap.ClusterMap) SSLFactory(com.github.ambry.commons.SSLFactory) ByteBuf(io.netty.buffer.ByteBuf) BlobData(com.github.ambry.messageformat.BlobData) ClusterAgentsFactory(com.github.ambry.clustermap.ClusterAgentsFactory) SSLConfig(com.github.ambry.config.SSLConfig) VerifiableProperties(com.github.ambry.config.VerifiableProperties) PartitionId(com.github.ambry.clustermap.PartitionId) ByteBuffer(java.nio.ByteBuffer) ClusterMapConfig(com.github.ambry.config.ClusterMapConfig) ServerErrorCode(com.github.ambry.server.ServerErrorCode) FileOutputStream(java.io.FileOutputStream) BlobProperties(com.github.ambry.messageformat.BlobProperties) File(java.io.File) DataNodeId(com.github.ambry.clustermap.DataNodeId) BlobId(com.github.ambry.commons.BlobId)
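Each switch branch above repeats the same response-handling shape: compare Pair.getFirst() against ServerErrorCode.No_Error, consume Pair.getSecond() on success, and log the error code otherwise. A hedged sketch of how that could be factored out; the handleResponse helper is ours, assuming Ambry's Pair with the getFirst/getSecond accessors used above and the tool's static LOGGER:

import java.util.function.Consumer;

// Hypothetical generic handler for the Pair<ServerErrorCode, T> responses above.
static <T> void handleResponse(Pair<ServerErrorCode, T> response, String operation, Consumer<T> onSuccess) {
    if (response.getFirst() == ServerErrorCode.No_Error) {
        onSuccess.accept(response.getSecond());
    } else {
        LOGGER.error("{} failed with server error code {}", operation, response.getFirst());
    }
}

// Example use for the GetBlobProperties branch:
// handleResponse(bpResponse, "GetBlobProperties",
//     props -> LOGGER.info("Blob properties: {}", props));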

Aggregations

ClusterMap (com.github.ambry.clustermap.ClusterMap): 51
VerifiableProperties (com.github.ambry.config.VerifiableProperties): 37
MockClusterMap (com.github.ambry.clustermap.MockClusterMap): 32
ArrayList (java.util.ArrayList): 29
Properties (java.util.Properties): 27
ClusterMapConfig (com.github.ambry.config.ClusterMapConfig): 25
HashMap (java.util.HashMap): 25
Map (java.util.Map): 24
Test (org.junit.Test): 24
PartitionId (com.github.ambry.clustermap.PartitionId): 23
BlobIdFactory (com.github.ambry.commons.BlobIdFactory): 23
List (java.util.List): 23
MockPartitionId (com.github.ambry.clustermap.MockPartitionId): 20
DataNodeId (com.github.ambry.clustermap.DataNodeId): 19
StoreKeyFactory (com.github.ambry.store.StoreKeyFactory): 18
ClusterAgentsFactory (com.github.ambry.clustermap.ClusterAgentsFactory): 17
BlobId (com.github.ambry.commons.BlobId): 17
MetricRegistry (com.codahale.metrics.MetricRegistry): 16
MockStoreKeyConverterFactory (com.github.ambry.store.MockStoreKeyConverterFactory): 16
File (java.io.File): 16