Example 1 with WALPlayer

Use of org.apache.hadoop.hbase.mapreduce.WALPlayer in project hbase by apache.

The run method of the class MapReduceRestoreJob, which replays a full or incremental backup with the selected MapReduce job and then bulk loads the generated HFiles into the target tables.

@Override
public void run(Path[] dirPaths, TableName[] tableNames, TableName[] newTableNames, boolean fullBackupRestore) throws IOException {
    String bulkOutputConfKey;
    // Pick the MapReduce job for this restore type ('player' is a field of the
    // enclosing class): HFileSplitterJob for a full backup, WALPlayer for an
    // incremental backup replayed from WAL files.
    if (fullBackupRestore) {
        player = new HFileSplitterJob();
        bulkOutputConfKey = HFileSplitterJob.BULK_OUTPUT_CONF_KEY;
    } else {
        player = new WALPlayer();
        bulkOutputConfKey = WALPlayer.BULK_OUTPUT_CONF_KEY;
    }
    // Player reads all files in arbitrary directory structure and creates
    // a Map task for each file
    String dirs = StringUtils.join(dirPaths, ",");
    if (LOG.isDebugEnabled()) {
        LOG.debug("Restore " + (fullBackupRestore ? "full" : "incremental") + " backup from directory " + dirs + " from hbase tables " + StringUtils.join(tableNames, BackupRestoreConstants.TABLENAME_DELIMITER_IN_COMMAND) + " to tables " + StringUtils.join(newTableNames, BackupRestoreConstants.TABLENAME_DELIMITER_IN_COMMAND));
    }
    for (int i = 0; i < tableNames.length; i++) {
        LOG.info("Restore " + tableNames[i] + " into " + newTableNames[i]);
        Path bulkOutputPath = getBulkOutputDir(getFileNameCompatibleString(newTableNames[i]));
        Configuration conf = getConf();
        // Tell the player job where to write the generated HFiles
        conf.set(bulkOutputConfKey, bulkOutputPath.toString());
        String[] playerArgs = { dirs, tableNames[i].getNameAsString() };
        int result = 0;
        int loaderResult = 0;
        try {
            player.setConf(getConf());
            result = player.run(playerArgs);
            if (succeeded(result)) {
                // do bulk load
                LoadIncrementalHFiles loader = createLoader();
                if (LOG.isDebugEnabled()) {
                    LOG.debug("Restoring HFiles from directory " + bulkOutputPath);
                }
                String[] args = { bulkOutputPath.toString(), newTableNames[i].getNameAsString() };
                loaderResult = loader.run(args);
                if (failed(loaderResult)) {
                    throw new IOException("Can not restore from backup directory " + dirs + " (check Hadoop and HBase logs). Bulk loader return code =" + loaderResult);
                }
            } else {
                throw new IOException("Can not restore from backup directory " + dirs + " (check Hadoop/MR and HBase logs). Player return code =" + result);
            }
            LOG.debug("Restore Job finished:" + result);
        } catch (Exception e) {
            throw new IOException("Can not restore from backup directory " + dirs + " (check Hadoop and HBase logs) ", e);
        }
    }
}
Also used: Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) LoadIncrementalHFiles(org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles) WALPlayer(org.apache.hadoop.hbase.mapreduce.WALPlayer) IOException(java.io.IOException)
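
For context, the snippet above follows WALPlayer's standard two-step pattern: run the player with its bulk-output key set so it writes HFiles to a staging directory, then bulk load those HFiles into the target table. The sketch below is a minimal standalone driver illustrating that pattern; it is not code from the hbase project, and the directory and table names are placeholders chosen for the example.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.mapreduce.WALPlayer;
import org.apache.hadoop.util.ToolRunner;

public class WalReplayDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Placeholder locations, for illustration only
        String walDirs = "/backup/WALs";        // comma-separated WAL input directories
        String tableName = "usertable";         // table recorded in the WALs
        String bulkOutput = "/tmp/wal-restore"; // staging directory for generated HFiles

        // Ask WALPlayer to emit HFiles instead of issuing live Puts,
        // as MapReduceRestoreJob does by setting BULK_OUTPUT_CONF_KEY above
        conf.set(WALPlayer.BULK_OUTPUT_CONF_KEY, bulkOutput);
        int rc = ToolRunner.run(conf, new WALPlayer(), new String[] { walDirs, tableName });
        if (rc != 0) {
            throw new IOException("WALPlayer failed, return code=" + rc);
        }

        // Bulk load the generated HFiles into the target table
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        rc = loader.run(new String[] { bulkOutput, tableName });
        if (rc != 0) {
            throw new IOException("Bulk load failed, return code=" + rc);
        }
    }
}

This mirrors the per-table steps performed inside the loop of MapReduceRestoreJob.run, only without the backup-specific path handling and error wrapping.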

Aggregations

IOException (java.io.IOException)1 Configuration (org.apache.hadoop.conf.Configuration)1 Path (org.apache.hadoop.fs.Path)1 LoadIncrementalHFiles (org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles)1 WALPlayer (org.apache.hadoop.hbase.mapreduce.WALPlayer)1