Example 1 with PersistentLongFile

Use of org.apache.hadoop.hdfs.util.PersistentLongFile in the apache/hadoop project.

From the class TestDFSUpgradeWithHA, method assertEpochFilesCopied:

private static void assertEpochFilesCopied(MiniQJMHACluster jnCluster) throws IOException {
    for (int i = 0; i < 3; i++) {
        File journalDir = jnCluster.getJournalCluster().getJournalDir(i, "ns1");
        File currDir = new File(journalDir, "current");
        File prevDir = new File(journalDir, "previous");
        for (String fileName : new String[] { Journal.LAST_PROMISED_FILENAME, Journal.LAST_WRITER_EPOCH }) {
            File prevFile = new File(prevDir, fileName);
            // The previous file may not exist, e.g. if there has never been a
            // writer before the upgrade.
            if (prevFile.exists()) {
                PersistentLongFile prevLongFile = new PersistentLongFile(prevFile, -10);
                PersistentLongFile currLongFile = new PersistentLongFile(new File(currDir, fileName), -11);
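                // Distinct sentinel defaults: if the copy in 'current' were
                // missing, currLongFile.get() would return -11 and the
                // assertion below would fail.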
                assertTrue("Value in " + fileName + " has decreased on upgrade in " + journalDir, prevLongFile.get() <= currLongFile.get());
            }
        }
    }
}
Also used: PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile), File (java.io.File)
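
The assertion above relies on two PersistentLongFile behaviors: get() falls back to the constructor's default when the backing file is absent, and set() writes the value through to disk. A minimal round-trip sketch of that contract; the class name and temp path are illustrative, not part of Hadoop:

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class PersistentLongFileRoundTrip {
    public static void main(String[] args) throws IOException {
        // Hypothetical location; any writable path works.
        File f = new File("/tmp/last-promised-epoch");
        PersistentLongFile epoch = new PersistentLongFile(f, 0);
        System.out.println(epoch.get()); // 0: file absent, default returned
        epoch.set(42);                   // persists the value to disk
        System.out.println(epoch.get()); // 42: now backed by the file
    }
}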

Example 2 with PersistentLongFile

Use of org.apache.hadoop.hdfs.util.PersistentLongFile in the apache/hadoop project.

From the class Journal, method refreshCachedData:

/**
   * Reload any data that may have been cached. This is necessary
   * when we first load the Journal, but also after any formatting
   * operation, since the cached data is no longer relevant.
   */
private synchronized void refreshCachedData() {
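    // Close the old committedTxnId handle: BestEffortLongFile keeps an open
    // file channel. The PersistentLongFile fields hold no open resources and
    // are simply re-created below.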
    IOUtils.closeStream(committedTxnId);
    File currentDir = storage.getSingularStorageDir().getCurrentDir();
    this.lastPromisedEpoch = new PersistentLongFile(new File(currentDir, LAST_PROMISED_FILENAME), 0);
    this.lastWriterEpoch = new PersistentLongFile(new File(currentDir, LAST_WRITER_EPOCH), 0);
    this.committedTxnId = new BestEffortLongFile(new File(currentDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
}
Also used: PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile), EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile), File (java.io.File)
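
The javadoc above says cached data must be reloaded; the reason the wrappers are re-created rather than re-read is that PersistentLongFile caches its value after the first get(), so an out-of-band change to the backing file (such as a format) is invisible to an existing instance. A sketch of that staleness, assuming the static PersistentLongFile.writeFile(File, long) helper; the class name and path are illustrative:

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class StaleCacheDemo {
    public static void main(String[] args) throws IOException {
        File f = new File("/tmp/epoch-demo"); // hypothetical path
        PersistentLongFile cached = new PersistentLongFile(f, 0);
        cached.set(1);                      // 1 on disk and in the cache
        PersistentLongFile.writeFile(f, 7); // out-of-band update to the file
        System.out.println(cached.get());   // 1: served from the stale cache
        System.out.println(new PersistentLongFile(f, 0).get()); // 7: fresh read
    }
}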

Example 3 with PersistentLongFile

Use of org.apache.hadoop.hdfs.util.PersistentLongFile in the apache/hadoop project.

From the class Journal, method doUpgrade:

public synchronized void doUpgrade(StorageInfo sInfo) throws IOException {
    long oldCTime = storage.getCTime();
    storage.cTime = sInfo.cTime;
    int oldLV = storage.getLayoutVersion();
    storage.layoutVersion = sInfo.layoutVersion;
    LOG.info("Starting upgrade of edits directory: " + ".\n   old LV = " + oldLV + "; old CTime = " + oldCTime + ".\n   new LV = " + storage.getLayoutVersion() + "; new CTime = " + storage.getCTime());
    storage.getJournalManager().doUpgrade(storage);
    storage.createPaxosDir();
    // Copy over the contents of the epoch data files to the new dir.
    File currentDir = storage.getSingularStorageDir().getCurrentDir();
    File previousDir = storage.getSingularStorageDir().getPreviousDir();
    PersistentLongFile prevLastPromisedEpoch = new PersistentLongFile(new File(previousDir, LAST_PROMISED_FILENAME), 0);
    PersistentLongFile prevLastWriterEpoch = new PersistentLongFile(new File(previousDir, LAST_WRITER_EPOCH), 0);
    BestEffortLongFile prevCommittedTxnId = new BestEffortLongFile(new File(previousDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
    lastPromisedEpoch = new PersistentLongFile(new File(currentDir, LAST_PROMISED_FILENAME), 0);
    lastWriterEpoch = new PersistentLongFile(new File(currentDir, LAST_WRITER_EPOCH), 0);
    committedTxnId = new BestEffortLongFile(new File(currentDir, COMMITTED_TXID_FILENAME), HdfsServerConstants.INVALID_TXID);
    try {
        lastPromisedEpoch.set(prevLastPromisedEpoch.get());
        lastWriterEpoch.set(prevLastWriterEpoch.get());
        committedTxnId.set(prevCommittedTxnId.get());
    } finally {
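        // Only the BestEffortLongFile holds an open channel and needs
        // closing; the PersistentLongFile wrappers require no cleanup.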
        IOUtils.cleanup(LOG, prevCommittedTxnId);
    }
}
Also used: PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile), EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile), BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile), File (java.io.File)
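
Stripped of the upgrade bookkeeping, the epoch-copy step reduces to reading each long from the previous directory (with the same default it was written with) and writing it into current, since set() persists immediately. A sketch of that step for a single file; the class, method, and parameter names are illustrative:

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class EpochCopyStep {
    static void copyLongFile(File previousDir, File currentDir, String fileName)
            throws IOException {
        PersistentLongFile prev =
                new PersistentLongFile(new File(previousDir, fileName), 0);
        PersistentLongFile curr =
                new PersistentLongFile(new File(currentDir, fileName), 0);
        curr.set(prev.get()); // set() writes through to disk immediately
    }
}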

Aggregations

File (java.io.File): 3 examples
BestEffortLongFile (org.apache.hadoop.hdfs.util.BestEffortLongFile): 3 examples
PersistentLongFile (org.apache.hadoop.hdfs.util.PersistentLongFile): 3 examples
EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile): 2 examples