
Example 1 with RestoreJob

Use of org.apache.hadoop.hbase.backup.RestoreJob in project hbase by apache.

From the class RestoreTool, method incrementalRestoreTable:

/**
   * During an incremental backup restore operation, calls WalPlayer to replay the WALs in the
   * backup image. Currently tableNames and newTableNames only contain a single table; this will be
   * expanded to multiple tables in the future.
   * @param conn HBase connection
   * @param tableBackupPath backup path
   * @param logDirs incremental backup folders, which contain the WALs
   * @param tableNames source table names (table names that were backed up)
   * @param newTableNames target table names (table names to be restored to)
   * @param incrBackupId incremental backup id
   * @throws IOException exception
   */
public void incrementalRestoreTable(Connection conn, Path tableBackupPath, Path[] logDirs, TableName[] tableNames, TableName[] newTableNames, String incrBackupId) throws IOException {
    try (Admin admin = conn.getAdmin()) {
        if (tableNames.length != newTableNames.length) {
            throw new IOException("Number of source tables and target tables does not match!");
        }
        FileSystem fileSys = tableBackupPath.getFileSystem(this.conf);
        // Check that all target tables exist; they should have been created beforehand,
        // e.g. by restoring a full backup.
        for (TableName tableName : newTableNames) {
            if (!admin.tableExists(tableName)) {
                throw new IOException("HBase table " + tableName + " does not exist. Create the table first, e.g. by restoring a full backup.");
            }
        }
        // adjust table schema
        for (int i = 0; i < tableNames.length; i++) {
            TableName tableName = tableNames[i];
            HTableDescriptor tableDescriptor = getTableDescriptor(fileSys, tableName, incrBackupId);
            LOG.debug("Found descriptor " + tableDescriptor + " through " + incrBackupId);
            TableName newTableName = newTableNames[i];
            HTableDescriptor newTableDescriptor = admin.getTableDescriptor(newTableName);
            List<HColumnDescriptor> families = Arrays.asList(tableDescriptor.getColumnFamilies());
            List<HColumnDescriptor> existingFamilies = Arrays.asList(newTableDescriptor.getColumnFamilies());
            boolean schemaChangeNeeded = false;
            for (HColumnDescriptor family : families) {
                if (!existingFamilies.contains(family)) {
                    newTableDescriptor.addFamily(family);
                    schemaChangeNeeded = true;
                }
            }
            for (HColumnDescriptor family : existingFamilies) {
                if (!families.contains(family)) {
                    newTableDescriptor.removeFamily(family.getName());
                    schemaChangeNeeded = true;
                }
            }
            if (schemaChangeNeeded) {
                modifyTableSync(conn, newTableDescriptor);
                LOG.info("Changed " + newTableDescriptor.getTableName() + " to: " + newTableDescriptor);
            }
        }
        RestoreJob restoreService = BackupRestoreFactory.getRestoreJob(conf);
        restoreService.run(logDirs, tableNames, newTableNames, false);
    }
}
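The schema-adjustment loop above reconciles the target table's column families with those of the backup image: families present in the backup but missing from the target are added, and families the target has but the backup lacks are removed. The same diff logic can be sketched standalone with plain Java collections (the family names `cf1`, `cf2`, `cf3` below are illustrative, not taken from the source):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class FamilyDiffSketch {

    // Returns the target's family set reconciled to match the backup image's families,
    // mirroring the two loops in incrementalRestoreTable: add missing, then drop extras.
    static Set<String> reconcile(Set<String> backupFamilies, Set<String> targetFamilies) {
        Set<String> result = new LinkedHashSet<>(targetFamilies);
        // Add families the backup image has but the target lacks.
        for (String family : backupFamilies) {
            if (!result.contains(family)) {
                result.add(family);
            }
        }
        // Remove families the target has but the backup image lacks.
        result.retainAll(backupFamilies);
        return result;
    }

    public static void main(String[] args) {
        Set<String> backup = new LinkedHashSet<>(List.of("cf1", "cf2"));
        Set<String> target = new LinkedHashSet<>(List.of("cf2", "cf3"));
        System.out.println(reconcile(backup, target)); // prints [cf2, cf1]
    }
}
```

Note that the real method compares full HColumnDescriptor objects (so a changed family configuration also counts as a difference), whereas this sketch compares only family names.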
Also used: RestoreJob (org.apache.hadoop.hbase.backup.RestoreJob), TableName (org.apache.hadoop.hbase.TableName), HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor), FileSystem (org.apache.hadoop.fs.FileSystem), HRegionFileSystem (org.apache.hadoop.hbase.regionserver.HRegionFileSystem), HBackupFileSystem (org.apache.hadoop.hbase.backup.HBackupFileSystem), IOException (java.io.IOException), Admin (org.apache.hadoop.hbase.client.Admin), HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor)
