Example 6 with BackupImage

Use of org.apache.hadoop.hbase.backup.impl.BackupManifest.BackupImage in the Apache HBase project.

From the class BackupManager, method getAncestors:

/**
   * Get direct ancestors of the current backup.
   * @param backupInfo The backup info for the current backup
   * @return The ancestors for the current backup
   * @throws IOException if an I/O error occurs while reading the backup history or manifests
   * @throws BackupException if a backup-specific failure occurs
   */
public ArrayList<BackupImage> getAncestors(BackupInfo backupInfo) throws IOException, BackupException {
    LOG.debug("Getting the direct ancestors of the current backup " + backupInfo.getBackupId());
    ArrayList<BackupImage> ancestors = new ArrayList<BackupImage>();
    // full backup does not have ancestor
    if (backupInfo.getType() == BackupType.FULL) {
        LOG.debug("Current backup is a full backup, no direct ancestor for it.");
        return ancestors;
    }
    // get all backup history list in descending order
    ArrayList<BackupInfo> allHistoryList = getBackupHistory(true);
    for (BackupInfo backup : allHistoryList) {
        BackupImage.Builder builder = BackupImage.newBuilder();
        BackupImage image = builder.withBackupId(backup.getBackupId())
                .withType(backup.getType())
                .withRootDir(backup.getBackupRootDir())
                .withTableList(backup.getTableNames())
                .withStartTime(backup.getStartTs())
                .withCompleteTime(backup.getCompleteTs())
                .build();
        // add the full backup image as an ancestor until the last incremental backup
        if (backup.getType().equals(BackupType.FULL)) {
            // then no need to add
            if (!BackupManifest.canCoverImage(ancestors, image)) {
                ancestors.add(image);
            }
        } else {
            // incremental backup
            if (BackupManifest.canCoverImage(ancestors, image)) {
                LOG.debug("Met the backup boundary of the current table set:");
                for (BackupImage image1 : ancestors) {
                    LOG.debug("  BackupID=" + image1.getBackupId() + ", BackupDir=" + image1.getRootDir());
                }
            } else {
                Path logBackupPath = HBackupFileSystem.getLogBackupPath(backup.getBackupRootDir(), backup.getBackupId());
                LOG.debug("Current backup has an incremental backup ancestor, " + "touching its image manifest in " + logBackupPath.toString() + " to construct the dependency.");
                BackupManifest lastIncrImgManifest = new BackupManifest(conf, logBackupPath);
                BackupImage lastIncrImage = lastIncrImgManifest.getBackupImage();
                ancestors.add(lastIncrImage);
                LOG.debug("Last dependent incremental backup image: " + "{BackupID=" + lastIncrImage.getBackupId() + "," + "BackupDir=" + lastIncrImage.getRootDir() + "}");
            }
        }
    }
    LOG.debug("Got " + ancestors.size() + " ancestors for the current backup.");
    return ancestors;
}
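The loop above hinges on BackupManifest.canCoverImage: ancestors stop accumulating once the images collected so far already cover the tables of the next image in history. The sketch below is a simplified, hypothetical model of that coverage test (it is not the real BackupManifest API, which operates on BackupImage objects); it only illustrates the idea that an image is covered when the union of tables in the ancestor set contains all of its tables.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the coverage check, using plain table-name sets
// instead of BackupImage instances.
public class CoverageSketch {

    // Returns true when every table of `image` already appears in some
    // ancestor's table set, i.e. the ancestors "cover" the image.
    static boolean canCoverImage(List<Set<String>> ancestors, Set<String> image) {
        Set<String> covered = new HashSet<>();
        for (Set<String> a : ancestors) {
            covered.addAll(a);
        }
        return covered.containsAll(image);
    }

    public static void main(String[] args) {
        List<Set<String>> ancestors = new ArrayList<>();
        // A full backup of t1 and t2 has been collected as an ancestor.
        ancestors.add(new HashSet<>(Arrays.asList("t1", "t2")));

        // An older image over t1 only is already covered: skip it.
        System.out.println(canCoverImage(ancestors, new HashSet<>(Arrays.asList("t1"))));
        // An image touching t3 is not covered: it must still be added.
        System.out.println(canCoverImage(ancestors, new HashSet<>(Arrays.asList("t3"))));
    }
}
```

In getAncestors this same test is applied in both branches: a full backup image is added only if it is not yet covered, while hitting a covered incremental image marks the boundary where the dependency chain is complete.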
Also used :
BackupInfo (org.apache.hadoop.hbase.backup.BackupInfo)
Path (org.apache.hadoop.fs.Path)
BackupImage (org.apache.hadoop.hbase.backup.impl.BackupManifest.BackupImage)
ArrayList (java.util.ArrayList)

Aggregations

BackupImage (org.apache.hadoop.hbase.backup.impl.BackupManifest.BackupImage): 6
ArrayList (java.util.ArrayList): 4
TableName (org.apache.hadoop.hbase.TableName): 3
IOException (java.io.IOException): 2
TreeSet (java.util.TreeSet): 2
Path (org.apache.hadoop.fs.Path): 2
HashMap (java.util.HashMap): 1
BackupInfo (org.apache.hadoop.hbase.backup.BackupInfo): 1
BackupManifest (org.apache.hadoop.hbase.backup.impl.BackupManifest): 1
RestoreTool (org.apache.hadoop.hbase.backup.util.RestoreTool): 1