Example 1 with BestEffortLongFile

Use of org.apache.hadoop.hdfs.util.BestEffortLongFile in the Apache Hadoop project.

From the class Journal, method refreshCachedData:

/**
   * Reload any data that may have been cached. This is necessary
   * when we first load the Journal, but also after any formatting
   * operation, since the cached data is no longer relevant.
   */
private synchronized void refreshCachedData() {
    IOUtils.closeStream(committedTxnId);
    File currentDir = storage.getSingularStorageDir().getCurrentDir();
    this.lastPromisedEpoch = new PersistentLongFile(new File(currentDir, LAST_PROMISED_FILENAME), 0);
    this.lastWriterEpoch = new PersistentLongFile(new File(currentDir, LAST_WRITER_EPOCH), 0);
    this.committedTxnId = new BestEffortLongFile(new File(currentDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
}
Also used: File (java.io.File), EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile), PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile)
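In the example above, committedTxnId is treated as best-effort persistent state: reads fall back to a default value when the backing file is missing or unreadable. A minimal standalone sketch of that idea follows; the class and method names here are illustrative, not Hadoop's actual BestEffortLongFile implementation (which additionally handles atomic writes and partial-write detection):

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

// Illustrative "best-effort" long file: set() persists a value,
// get() returns the stored value, or the default if the file is
// absent or cannot be parsed.
class BestEffortLong {
    private final File file;
    private final long defaultVal;

    BestEffortLong(File file, long defaultVal) {
        this.file = file;
        this.defaultVal = defaultVal;
    }

    long get() {
        try {
            String s = new String(Files.readAllBytes(file.toPath()),
                    StandardCharsets.UTF_8).trim();
            return Long.parseLong(s);
        } catch (IOException | NumberFormatException e) {
            // Best effort: any read failure falls back to the default.
            return defaultVal;
        }
    }

    void set(long value) throws IOException {
        Files.write(file.toPath(),
                Long.toString(value).getBytes(StandardCharsets.UTF_8));
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("txid", ".long");
        f.delete(); // start with no file on disk
        BestEffortLong txid = new BestEffortLong(f, -12345L);
        System.out.println(txid.get()); // prints -12345 (missing file -> default)
        txid.set(42L);
        System.out.println(txid.get()); // prints 42
        f.delete();
    }
}
```

The default (Hadoop passes HdfsServerConstants.INVALID_TXID) acts as a sentinel, so callers can distinguish "no committed txid recorded yet" from a real value.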

Example 2 with BestEffortLongFile

Use of org.apache.hadoop.hdfs.util.BestEffortLongFile in the Apache Hadoop project.

From the class Journal, method doUpgrade:

public synchronized void doUpgrade(StorageInfo sInfo) throws IOException {
    long oldCTime = storage.getCTime();
    storage.cTime = sInfo.cTime;
    int oldLV = storage.getLayoutVersion();
    storage.layoutVersion = sInfo.layoutVersion;
    LOG.info("Starting upgrade of edits directory: " + ".\n   old LV = " + oldLV + "; old CTime = " + oldCTime + ".\n   new LV = " + storage.getLayoutVersion() + "; new CTime = " + storage.getCTime());
    storage.getJournalManager().doUpgrade(storage);
    storage.createPaxosDir();
    // Copy over the contents of the epoch data files to the new dir.
    File currentDir = storage.getSingularStorageDir().getCurrentDir();
    File previousDir = storage.getSingularStorageDir().getPreviousDir();
    PersistentLongFile prevLastPromisedEpoch = new PersistentLongFile(new File(previousDir, LAST_PROMISED_FILENAME), 0);
    PersistentLongFile prevLastWriterEpoch = new PersistentLongFile(new File(previousDir, LAST_WRITER_EPOCH), 0);
    BestEffortLongFile prevCommittedTxnId = new BestEffortLongFile(new File(previousDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
    lastPromisedEpoch = new PersistentLongFile(new File(currentDir, LAST_PROMISED_FILENAME), 0);
    lastWriterEpoch = new PersistentLongFile(new File(currentDir, LAST_WRITER_EPOCH), 0);
    committedTxnId = new BestEffortLongFile(new File(currentDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
    try {
        lastPromisedEpoch.set(prevLastPromisedEpoch.get());
        lastWriterEpoch.set(prevLastWriterEpoch.get());
        committedTxnId.set(prevCommittedTxnId.get());
    } finally {
        IOUtils.cleanup(LOG, prevCommittedTxnId);
    }
}
Also used: File (java.io.File), EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile), PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile)
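The finally block in doUpgrade uses IOUtils.cleanup to close the previous-directory handle even when one of the copy steps throws. A small sketch of that quiet-close idiom; the helper below is illustrative, written in the spirit of Hadoop's IOUtils.cleanup rather than copied from it:

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.StringReader;

public class CleanupDemo {
    // Illustrative quiet-close helper: close each resource, never
    // propagating an IOException (here it is just logged to stderr).
    static void cleanup(Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) continue;
            try {
                c.close();
            } catch (IOException e) {
                System.err.println("Exception closing resource: " + e);
            }
        }
    }

    public static void main(String[] args) {
        StringReader r = new StringReader("data");
        try {
            // ... use the resource ...
        } finally {
            cleanup(r); // safe in finally: never throws, tolerates nulls
        }
        System.out.println("done");
    }
}
```

Because the helper swallows close-time exceptions, it is safe to call from a finally block without masking the original exception from the try body, which is exactly why doUpgrade uses it there.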

Example 3 with BestEffortLongFile

Use of org.apache.hadoop.hdfs.util.BestEffortLongFile in the Apache Hadoop project.

From the class TestDFSUpgradeWithHA, method getCommittedTxnIdValue:

private long getCommittedTxnIdValue(MiniQJMHACluster qjCluster) throws IOException {
    Journal journal1 = qjCluster.getJournalCluster().getJournalNode(0).getOrCreateJournal(MiniQJMHACluster.NAMESERVICE);
    BestEffortLongFile committedTxnId = (BestEffortLongFile) Whitebox.getInternalState(journal1, "committedTxnId");
    return committedTxnId != null ? committedTxnId.get() : HdfsServerConstants.INVALID_TXID;
}
Also used: Journal (org.apache.hadoop.hdfs.qjournal.server.Journal), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile)
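The test above uses Whitebox.getInternalState to read the Journal's private committedTxnId field. Under the hood that is plain core reflection; a self-contained equivalent looks like the following (the Holder class and field value here are made up for illustration):

```java
import java.lang.reflect.Field;

class Holder {
    private long committedTxnId = 99L;
}

public class ReflectDemo {
    // Illustrative equivalent of Whitebox.getInternalState: look up a
    // private field by name and read its value via reflection.
    static Object getInternalState(Object target, String fieldName)
            throws ReflectiveOperationException {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true); // bypass the private modifier
        return f.get(target);
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        long v = (Long) getInternalState(new Holder(), "committedTxnId");
        System.out.println(v); // prints 99
    }
}
```

Tests reach for this pattern when internal state must be inspected without widening the production class's API, at the cost of coupling the test to a field name.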

Aggregations

BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile): 3 uses
File (java.io.File): 2 uses
EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile): 2 uses
PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile): 2 uses
Journal (org.apache.hadoop.hdfs.qjournal.server.Journal): 1 use