
Example 1 with NamespaceInfo

Use of org.apache.hadoop.hdfs.server.protocol.NamespaceInfo in project hadoop by apache.

The class DataStorage, method prepareVolume.

/**
   * Prepare a storage directory. It creates a builder that can be used to add
   * the volume to DataStorage; if the volume cannot be added, it is safe to
   * discard the builder later.
   *
   * Note that if an IOException is thrown, the state of DataStorage is not
   * modified.
   *
   * @param datanode DataNode object.
   * @param location the StorageLocation for the storage directory.
   * @param nsInfos a list of namespace infos.
   * @return a VolumeBuilder that holds the metadata of this storage directory
   * and can be added to DataStorage later.
   * @throws IOException if it encounters I/O errors.
   */
public VolumeBuilder prepareVolume(DataNode datanode, StorageLocation location, List<NamespaceInfo> nsInfos) throws IOException {
    if (containsStorageDir(location)) {
        final String errorMessage = "Storage directory is in use";
        LOG.warn(errorMessage + ".");
        throw new IOException(errorMessage);
    }
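    // Only the volume-level storage is loaded here; the first namespace's
    // info is sufficient for that. Per-block-pool directories follow below.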
    StorageDirectory sd = loadStorageDirectory(datanode, nsInfos.get(0), location, StartupOption.HOTSWAP, null);
    VolumeBuilder builder = new VolumeBuilder(this, sd);
    for (NamespaceInfo nsInfo : nsInfos) {
        location.makeBlockPoolDir(nsInfo.getBlockPoolID(), null);
        final BlockPoolSliceStorage bpStorage = getBlockPoolSliceStorage(nsInfo);
        final List<StorageDirectory> dirs = bpStorage.loadBpStorageDirectories(nsInfo, location, StartupOption.HOTSWAP, null, datanode.getConf());
        builder.addBpStorageDirectories(nsInfo.getBlockPoolID(), dirs);
    }
    return builder;
}
Also used : IOException(java.io.IOException) NamespaceInfo(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo)
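
The important contract above is the two-phase add: prepareVolume only stages the storage directory, and the returned VolumeBuilder commits it later. A minimal sketch of that pattern, assuming a DataStorage, DataNode, and StorageLocation are already in scope (Example 4 below shows the real caller in FsDatasetImpl):

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.DataStorage;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;

public class PrepareVolumeSketch {
    // Stage the volume first and commit it with build() only after the
    // follow-up work succeeds. If anything throws before build(), the
    // builder can simply be dropped, because prepareVolume leaves the
    // state of DataStorage unmodified.
    static void addPreparedVolume(DataStorage storage, DataNode datanode,
            StorageLocation location, List<NamespaceInfo> nsInfos)
            throws IOException {
        DataStorage.VolumeBuilder builder =
                storage.prepareVolume(datanode, location, nsInfos);
        // ... per-block-pool setup would go here ...
        builder.build();
    }
}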

Example 2 with NamespaceInfo

Use of org.apache.hadoop.hdfs.server.protocol.NamespaceInfo in project hadoop by apache.

The class NameNode, method initializeSharedEdits.

/**
   * Format a new shared edits dir and copy in enough edit log segments so that
   * the standby NN can start up.
   * 
   * @param conf configuration
   * @param force format regardless of whether the shared edits dir exists
   * @param interactive prompt the user when a dir exists
   * @return true if the command aborts, false otherwise
   */
private static boolean initializeSharedEdits(Configuration conf, boolean force, boolean interactive) throws IOException {
    String nsId = DFSUtil.getNamenodeNameServiceId(conf);
    String namenodeId = HAUtil.getNameNodeId(conf, nsId);
    initializeGenericKeys(conf, nsId, namenodeId);
    if (conf.get(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY) == null) {
        LOG.error("No shared edits directory configured for namespace " + nsId + " namenode " + namenodeId);
        return false;
    }
    if (UserGroupInformation.isSecurityEnabled()) {
        InetSocketAddress socAddr = DFSUtilClient.getNNAddress(conf);
        SecurityUtil.login(conf, DFS_NAMENODE_KEYTAB_FILE_KEY, DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, socAddr.getHostName());
    }
    NNStorage existingStorage = null;
    FSImage sharedEditsImage = null;
    try {
        FSNamesystem fsns = FSNamesystem.loadFromDisk(getConfigurationWithoutSharedEdits(conf));
        existingStorage = fsns.getFSImage().getStorage();
        NamespaceInfo nsInfo = existingStorage.getNamespaceInfo();
        List<URI> sharedEditsDirs = FSNamesystem.getSharedEditsDirs(conf);
        sharedEditsImage = new FSImage(conf, Lists.<URI>newArrayList(), sharedEditsDirs);
        sharedEditsImage.getEditLog().initJournalsForWrite();
        if (!sharedEditsImage.confirmFormat(force, interactive)) {
            // abort
            return true;
        }
        NNStorage newSharedStorage = sharedEditsImage.getStorage();
        // Call Storage.format instead of FSImage.format here, since we don't
        // actually want to save a checkpoint - just prime the dirs with
        // the existing namespace info
        newSharedStorage.format(nsInfo);
        sharedEditsImage.getEditLog().formatNonFileJournals(nsInfo);
        // Need to make sure the edit log segments are in good shape to initialize
        // the shared edits dir.
        fsns.getFSImage().getEditLog().close();
        fsns.getFSImage().getEditLog().initJournalsForWrite();
        fsns.getFSImage().getEditLog().recoverUnclosedStreams();
        copyEditLogSegmentsToSharedDir(fsns, sharedEditsDirs, newSharedStorage, conf);
    } catch (IOException ioe) {
        LOG.error("Could not initialize shared edits dir", ioe);
        // aborted
        return true;
    } finally {
        if (sharedEditsImage != null) {
            try {
                sharedEditsImage.close();
            } catch (IOException ioe) {
                LOG.warn("Could not close sharedEditsImage", ioe);
            }
        }
        // Have to unlock storage explicitly for the case when we're running in a
        // unit test, which runs in the same JVM as NNs.
        if (existingStorage != null) {
            try {
                existingStorage.unlockAll();
            } catch (IOException ioe) {
                LOG.warn("Could not unlock storage directories", ioe);
                // aborted
                return true;
            }
        }
    }
    // did not abort
    return false;
}
Also used : InetSocketAddress(java.net.InetSocketAddress) IOException(java.io.IOException) NamespaceInfo(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo) URI(java.net.URI)
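
This method backs the hdfs namenode -initializeSharedEdits startup option, and its first check is the shared edits dir key seen above. A minimal sketch of the configuration it expects; the quorum-journal URI is only a placeholder, not a real cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class SharedEditsConfSketch {
    static Configuration withSharedEditsDir() {
        Configuration conf = new Configuration();
        // Without this key, initializeSharedEdits logs an error and returns.
        conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
                "qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster");
        return conf;
    }
}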

Example 3 with NamespaceInfo

Use of org.apache.hadoop.hdfs.server.protocol.NamespaceInfo in project hadoop by apache.

The class TestFsDatasetImpl, method getDfsUsedValueOfNewVolume.

private long getDfsUsedValueOfNewVolume(long cacheDfsUsed, long waitIntervalTime) throws IOException, InterruptedException {
    List<NamespaceInfo> nsInfos = Lists.newArrayList();
    nsInfos.add(new NamespaceInfo(0, CLUSTER_ID, BLOCK_POOL_IDS[0], 1));
    String CURRENT_DIR = "current";
    String DU_CACHE_FILE = BlockPoolSlice.DU_CACHE_FILE;
    String path = BASE_DIR + "/newData0";
    String pathUri = new Path(path).toUri().toString();
    StorageLocation loc = StorageLocation.parse(pathUri);
    Storage.StorageDirectory sd = createStorageDirectory(new File(path));
    DataStorage.VolumeBuilder builder = new DataStorage.VolumeBuilder(storage, sd);
    when(storage.prepareVolume(eq(datanode), eq(loc), anyListOf(NamespaceInfo.class))).thenReturn(builder);
    String cacheFilePath = String.format("%s/%s/%s/%s/%s", path, CURRENT_DIR, BLOCK_POOL_IDS[0], CURRENT_DIR, DU_CACHE_FILE);
    File outFile = new File(cacheFilePath);
    if (!outFile.getParentFile().exists()) {
        outFile.getParentFile().mkdirs();
    }
    if (outFile.exists()) {
        outFile.delete();
    }
    FakeTimer timer = new FakeTimer();
    try {
        try (Writer out = new OutputStreamWriter(new FileOutputStream(outFile), StandardCharsets.UTF_8)) {
            // Write the dfsUsed value and the timestamp to the cache file
            out.write(Long.toString(cacheDfsUsed) + " " + Long.toString(timer.now()));
            out.flush();
        }
    } catch (IOException ioe) {
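        // Deliberately ignored: without a readable cache file, the new
        // volume falls back to recomputing dfsUsed.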
    }
    dataset.setTimer(timer);
    timer.advance(waitIntervalTime);
    dataset.addVolume(loc, nsInfos);
    // Get the last volume, which is the one that was just added
    FsVolumeImpl newVolume;
    try (FsDatasetSpi.FsVolumeReferences volumes = dataset.getFsVolumeReferences()) {
        newVolume = (FsVolumeImpl) volumes.get(volumes.size() - 1);
    }
    long dfsUsed = newVolume.getDfsUsed();
    return dfsUsed;
}
Also used : Path(org.apache.hadoop.fs.Path) DataStorage(org.apache.hadoop.hdfs.server.datanode.DataStorage) StorageDirectory(org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory) FsDatasetSpi(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi) Matchers.anyString(org.mockito.Matchers.anyString) IOException(java.io.IOException) MultipleIOException(org.apache.hadoop.io.MultipleIOException) FsVolumeReferences(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi.FsVolumeReferences) Storage(org.apache.hadoop.hdfs.server.common.Storage) FileOutputStream(java.io.FileOutputStream) OutputStreamWriter(java.io.OutputStreamWriter) NamespaceInfo(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo) StorageLocation(org.apache.hadoop.hdfs.server.datanode.StorageLocation) File(java.io.File) FakeTimer(org.apache.hadoop.util.FakeTimer) Writer(java.io.Writer)
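
The cache file written above holds two space-separated longs: the dfsUsed value and the time it was recorded, and timer.advance(waitIntervalTime) decides whether the new volume treats that entry as fresh or stale. A minimal sketch of reading the format back; this loosely mirrors, but is not, BlockPoolSlice's own parsing:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DuCacheFileSketch {
    // Returns the cached dfsUsed value, or -1 if the entry is too old to trust.
    static long readCachedDfsUsed(String cacheFilePath, long now, long maxAgeMs)
            throws IOException {
        String contents = new String(
                Files.readAllBytes(Paths.get(cacheFilePath)),
                StandardCharsets.UTF_8).trim();
        String[] parts = contents.split(" ");
        long dfsUsed = Long.parseLong(parts[0]);
        long writtenAt = Long.parseLong(parts[1]);
        return (now - writtenAt <= maxAgeMs) ? dfsUsed : -1;
    }
}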

Example 4 with NamespaceInfo

Use of org.apache.hadoop.hdfs.server.protocol.NamespaceInfo in project hadoop by apache.

The class FsDatasetImpl, method addVolume.

@Override
public void addVolume(final StorageLocation location, final List<NamespaceInfo> nsInfos) throws IOException {
    // Prepare volume in DataStorage
    final DataStorage.VolumeBuilder builder;
    try {
        builder = dataStorage.prepareVolume(datanode, location, nsInfos);
    } catch (IOException e) {
        volumes.addVolumeFailureInfo(new VolumeFailureInfo(location, Time.now()));
        throw e;
    }
    final Storage.StorageDirectory sd = builder.getStorageDirectory();
    StorageType storageType = location.getStorageType();
    final FsVolumeImpl fsVolume = createFsVolume(sd.getStorageUuid(), sd, location);
    final ReplicaMap tempVolumeMap = new ReplicaMap(new AutoCloseableLock());
    ArrayList<IOException> exceptions = Lists.newArrayList();
    for (final NamespaceInfo nsInfo : nsInfos) {
        String bpid = nsInfo.getBlockPoolID();
        try {
            fsVolume.addBlockPool(bpid, this.conf, this.timer);
            fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
        } catch (IOException e) {
            LOG.warn("Caught exception when adding " + fsVolume + ". Will throw later.", e);
            exceptions.add(e);
        }
    }
    if (!exceptions.isEmpty()) {
        try {
            sd.unlock();
        } catch (IOException e) {
            exceptions.add(e);
        }
        throw MultipleIOException.createIOException(exceptions);
    }
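    // All block pools attached cleanly; commit the prepared volume and
    // make it visible to the dataset.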
    final FsVolumeReference ref = fsVolume.obtainReference();
    setupAsyncLazyPersistThread(fsVolume);
    builder.build();
    activateVolume(tempVolumeMap, sd, storageType, ref);
    LOG.info("Added volume - " + location + ", StorageType: " + storageType);
}
Also used : DataStorage(org.apache.hadoop.hdfs.server.datanode.DataStorage) StorageType(org.apache.hadoop.fs.StorageType) IOException(java.io.IOException) MultipleIOException(org.apache.hadoop.io.MultipleIOException) Storage(org.apache.hadoop.hdfs.server.common.Storage) DatanodeStorage(org.apache.hadoop.hdfs.server.protocol.DatanodeStorage) FsVolumeReference(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference) AutoCloseableLock(org.apache.hadoop.util.AutoCloseableLock) NamespaceInfo(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo)
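
Note how addVolume keeps attaching the remaining block pools after one fails and only raises everything at the end through MultipleIOException. A minimal sketch of that collect-then-throw pattern; the Step interface is an illustration, not Hadoop's API:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.MultipleIOException;

public class CollectThenThrowSketch {
    interface Step {
        void run() throws IOException;
    }

    // Attempt every step and remember each failure, so one bad step does
    // not prevent the others from being tried.
    static void runAll(List<Step> steps) throws IOException {
        List<IOException> failures = new ArrayList<>();
        for (Step step : steps) {
            try {
                step.run();
            } catch (IOException e) {
                failures.add(e);
            }
        }
        if (!failures.isEmpty()) {
            // Returns a single exception as-is, or wraps several into one.
            throw MultipleIOException.createIOException(failures);
        }
    }
}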

Example 5 with NamespaceInfo

Use of org.apache.hadoop.hdfs.server.protocol.NamespaceInfo in project hadoop by apache.

The class FSImage, method format.

void format(FSNamesystem fsn, String clusterId) throws IOException {
    long fileCount = fsn.getFilesTotal();
    // Expect 1 file, which is the root inode
    Preconditions.checkState(fileCount == 1, "FSImage.format should be called with an uninitialized namesystem, has " + fileCount + " files");
    NamespaceInfo ns = NNStorage.newNamespaceInfo();
    LOG.info("Allocated new BlockPoolId: " + ns.getBlockPoolID());
    ns.clusterID = clusterId;
    storage.format(ns);
    editLog.formatNonFileJournals(ns);
    saveFSImageInAllDirs(fsn, 0);
}
Also used : NamespaceInfo(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo)
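
For reference, the identifiers FSImage.format stamps into storage are exactly what a NamespaceInfo carries. A minimal hand-built instance with illustrative values; a real one is allocated by NNStorage.newNamespaceInfo() as above, and Example 3 constructs one the same way in a test:

import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;

public class NamespaceInfoSketch {
    public static void main(String[] args) {
        NamespaceInfo ns = new NamespaceInfo(
                0,                  // namespace ID
                "CID-test",         // cluster ID (format() then overwrites this)
                "BP-1-127.0.0.1-1", // block pool ID
                1L);                // namesystem creation time (cTime)
        System.out.println("Block pool: " + ns.getBlockPoolID());
    }
}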

Aggregations

NamespaceInfo (org.apache.hadoop.hdfs.server.protocol.NamespaceInfo): 35 uses
IOException (java.io.IOException): 13 uses
Test (org.junit.Test): 13 uses
File (java.io.File): 8 uses
InetSocketAddress (java.net.InetSocketAddress): 7 uses
Storage (org.apache.hadoop.hdfs.server.common.Storage): 6 uses
StorageDirectory (org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory): 6 uses
ArrayList (java.util.ArrayList): 5 uses
DatanodeProtocolClientSideTranslatorPB (org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB): 5 uses
DataStorage (org.apache.hadoop.hdfs.server.datanode.DataStorage): 5 uses
Configuration (org.apache.hadoop.conf.Configuration): 4 uses
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 4 uses
StorageLocation (org.apache.hadoop.hdfs.server.datanode.StorageLocation): 4 uses
DatanodeRegistration (org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration): 4 uses
NNHAStatusHeartbeat (org.apache.hadoop.hdfs.server.protocol.NNHAStatusHeartbeat): 4 uses
SlowPeerReports (org.apache.hadoop.hdfs.server.protocol.SlowPeerReports): 4 uses
VolumeFailureSummary (org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary): 4 uses
HeartbeatResponse (org.apache.hadoop.hdfs.server.protocol.HeartbeatResponse): 3 uses
MultipleIOException (org.apache.hadoop.io.MultipleIOException): 3 uses
Matchers.anyString (org.mockito.Matchers.anyString): 3 uses