Example 6 with HRegion

use of org.apache.hadoop.hbase.regionserver.HRegion in project hbase by apache.

the class TestHFileArchiving method testRemovesRegionDirOnArchive.

@Test
public void testRemovesRegionDirOnArchive() throws Exception {
    final TableName tableName = TableName.valueOf(name.getMethodName());
    UTIL.createTable(tableName, TEST_FAM);
    final Admin admin = UTIL.getAdmin();
    // get the current store files for the region
    List<HRegion> servingRegions = UTIL.getHBaseCluster().getRegions(tableName);
    // make sure we only have 1 region serving this table
    assertEquals(1, servingRegions.size());
    HRegion region = servingRegions.get(0);
    // and load the table
    UTIL.loadRegion(region, TEST_FAM);
    // shutdown the table so we can manipulate the files
    admin.disableTable(tableName);
    FileSystem fs = UTIL.getTestFileSystem();
    // now attempt to depose the region
    Path rootDir = region.getRegionFileSystem().getTableDir().getParent();
    Path regionDir = HRegion.getRegionDir(rootDir, region.getRegionInfo());
    HFileArchiver.archiveRegion(UTIL.getConfiguration(), fs, region.getRegionInfo());
    // check for the existence of the archive directory and some files in it
    Path archiveDir = HFileArchiveTestingUtil.getRegionArchiveDir(UTIL.getConfiguration(), region);
    assertTrue(fs.exists(archiveDir));
    // check to make sure the store directory was copied
    FileStatus[] stores = fs.listStatus(archiveDir, new PathFilter() {

        @Override
        public boolean accept(Path p) {
            return !p.getName().contains(HConstants.RECOVERED_EDITS_DIR);
        }
    });
    assertEquals(1, stores.length);
    // make sure we archived the store files
    FileStatus[] storeFiles = fs.listStatus(stores[0].getPath());
    assertTrue(storeFiles.length > 0);
    // then ensure the region's directory isn't present
    assertFalse(fs.exists(regionDir));
    UTIL.deleteTable(tableName);
}
Also used : Path(org.apache.hadoop.fs.Path) HRegion(org.apache.hadoop.hbase.regionserver.HRegion) PathFilter(org.apache.hadoop.fs.PathFilter) FileStatus(org.apache.hadoop.fs.FileStatus) FileSystem(org.apache.hadoop.fs.FileSystem) Admin(org.apache.hadoop.hbase.client.Admin) Test(org.junit.Test)
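The anonymous PathFilter in the test simply excludes the recovered-edits directory so that only store (column family) directories are counted. The same exclusion can be sketched with a plain string predicate; note that the literal "recovered.edits" stands in for HConstants.RECOVERED_EDITS_DIR here and is an assumption for illustration:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class StoreDirFilter {
    // HConstants.RECOVERED_EDITS_DIR resolves to "recovered.edits" in HBase;
    // hard-coded here only to keep the sketch self-contained.
    static final String RECOVERED_EDITS_DIR = "recovered.edits";

    // Mirrors the anonymous PathFilter: accept everything except entries
    // whose name contains the recovered-edits marker.
    public static Predicate<String> storeDirsOnly() {
        return name -> !name.contains(RECOVERED_EDITS_DIR);
    }

    public static List<String> filter(List<String> names) {
        return names.stream().filter(storeDirsOnly()).collect(Collectors.toList());
    }
}
```

Applied to a listing of an archived region directory, only the family directories survive the filter, which is why the test then asserts exactly one store remains.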

Example 7 with HRegion

use of org.apache.hadoop.hbase.regionserver.HRegion in project hbase by apache.

the class OpenRegionHandler method process.

@Override
public void process() throws IOException {
    boolean openSuccessful = false;
    final String regionName = regionInfo.getRegionNameAsString();
    HRegion region = null;
    try {
        if (this.server.isStopped() || this.rsServices.isStopping()) {
            return;
        }
        final String encodedName = regionInfo.getEncodedName();
        // Check that this region is not already online
        if (this.rsServices.getFromOnlineRegions(encodedName) != null) {
            LOG.error("Region " + encodedName + " was already online when we started processing the opening. " + "Marking this new attempt as failed");
            return;
        }
        // If fails, just return.  Someone stole the region from under us.
        if (!isRegionStillOpening()) {
            LOG.error("Region " + encodedName + " opening cancelled");
            return;
        }
        // Open region.  After a successful open, failures in subsequent
        // processing needs to do a close as part of cleanup.
        region = openRegion();
        if (region == null) {
            return;
        }
        if (!updateMeta(region, masterSystemTime) || this.server.isStopped() || this.rsServices.isStopping()) {
            return;
        }
        if (!isRegionStillOpening()) {
            return;
        }
        // Successful region open, and add it to OnlineRegions
        this.rsServices.addToOnlineRegions(region);
        openSuccessful = true;
        // Done!  Successful region open
        LOG.debug("Opened " + regionName + " on " + this.server.getServerName());
    } finally {
        // Do all clean up here
        if (!openSuccessful) {
            doCleanUpOnFailedOpen(region);
        }
        final Boolean current = this.rsServices.getRegionsInTransitionInRS().remove(this.regionInfo.getEncodedNameAsBytes());
        if (openSuccessful) {
            if (current == null) {
                // Should NEVER happen, but let's be paranoid.
                LOG.error("Bad state: we've just opened a region that was NOT in transition. Region=" + regionName);
            } else if (Boolean.FALSE.equals(current)) {
                // Can happen, if we're really unlucky.
                LOG.error("Race condition: we've finished to open a region, while a close was requested on region=" + regionName + ". It can be a critical error, as a region that" + " should be closed is now opened. Closing it now");
                cleanupFailedOpen(region);
            }
        }
    }
}
Also used : HRegion(org.apache.hadoop.hbase.regionserver.HRegion) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean)
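The finally block above hinges on the semantics of the regions-in-transition map: remove() returns the Boolean previously stored for the region, null if the region was never in transition, and Boolean.FALSE if a close was requested while the open was in flight. A stripped-down sketch of that three-way check (the map and method names here are illustrative stand-ins, not HBase API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RitCheck {
    // Stand-in for rsServices.getRegionsInTransitionInRS():
    // TRUE = open in progress, FALSE = a close was requested mid-open.
    static final ConcurrentMap<String, Boolean> regionsInTransition = new ConcurrentHashMap<>();

    /** Returns a status mirroring the three branches in process()'s finally block. */
    public static String finishOpen(String encodedName) {
        Boolean current = regionsInTransition.remove(encodedName);
        if (current == null) {
            return "BAD_STATE";  // opened a region that was never in transition
        } else if (Boolean.FALSE.equals(current)) {
            return "RACE_CLOSE"; // close requested mid-open; must clean up the open
        }
        return "OK";
    }
}
```

The single remove() both clears the transition entry and reads its prior value atomically, which is what lets the handler detect the race without extra locking.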

Example 8 with HRegion

use of org.apache.hadoop.hbase.regionserver.HRegion in project hbase by apache.

the class HBaseFsck method adoptHdfsOrphan.

/**
   * Orphaned regions are regions without a .regioninfo file in them.  We "adopt"
   * these orphans by creating a new region and moving the column families,
   * recovered edits, and WALs into the new region dir.  We determine the region's
   * start and end keys by looking at all of the hfiles inside the column
   * families to identify the min and max keys. The resulting region will
   * likely violate table integrity, but that will be dealt with by merging
   * overlapping regions.
   */
@SuppressWarnings("deprecation")
private void adoptHdfsOrphan(HbckInfo hi) throws IOException {
    Path p = hi.getHdfsRegionDir();
    FileSystem fs = p.getFileSystem(getConf());
    FileStatus[] dirs = fs.listStatus(p);
    if (dirs == null) {
        LOG.warn("Attempt to adopt orphan hdfs region skipped because no files present in " + p + ". This dir could probably be deleted.");
        return;
    }
    TableName tableName = hi.getTableName();
    TableInfo tableInfo = tablesInfo.get(tableName);
    Preconditions.checkNotNull(tableInfo, "Table '" + tableName + "' not present!");
    HTableDescriptor template = tableInfo.getHTD();
    // find min and max key values
    Pair<byte[], byte[]> orphanRegionRange = null;
    for (FileStatus cf : dirs) {
        String cfName = cf.getPath().getName();
        // TODO Figure out what the special dirs are
        if (cfName.startsWith(".") || cfName.equals(HConstants.SPLIT_LOGDIR_NAME))
            continue;
        FileStatus[] hfiles = fs.listStatus(cf.getPath());
        for (FileStatus hfile : hfiles) {
            byte[] start, end;
            HFile.Reader hf = null;
            try {
                CacheConfig cacheConf = new CacheConfig(getConf());
                hf = HFile.createReader(fs, hfile.getPath(), cacheConf, getConf());
                hf.loadFileInfo();
                Cell startKv = hf.getFirstKey();
                start = CellUtil.cloneRow(startKv);
                Cell endKv = hf.getLastKey();
                end = CellUtil.cloneRow(endKv);
            } catch (IOException ioe) {
                LOG.warn("Problem reading orphan file " + hfile + ", skipping");
                continue;
            } catch (NullPointerException npe) {
                LOG.warn("Orphan file " + hfile + " is possibly a corrupted HFile, skipping");
                continue;
            } finally {
                if (hf != null) {
                    hf.close();
                }
            }
            // expand the range to include the range of all hfiles
            if (orphanRegionRange == null) {
                // first range
                orphanRegionRange = new Pair<>(start, end);
            } else {
                // expand range only if the hfile is wider.
                if (Bytes.compareTo(orphanRegionRange.getFirst(), start) > 0) {
                    orphanRegionRange.setFirst(start);
                }
                if (Bytes.compareTo(orphanRegionRange.getSecond(), end) < 0) {
                    orphanRegionRange.setSecond(end);
                }
            }
        }
    }
    if (orphanRegionRange == null) {
        LOG.warn("No data in dir " + p + ", sidelining data");
        fixes++;
        sidelineRegionDir(fs, hi);
        return;
    }
    LOG.info("Min max keys are : [" + Bytes.toString(orphanRegionRange.getFirst()) + ", " + Bytes.toString(orphanRegionRange.getSecond()) + ")");
    // create new region on hdfs. move data into place.
    HRegionInfo hri = new HRegionInfo(template.getTableName(), orphanRegionRange.getFirst(), Bytes.add(orphanRegionRange.getSecond(), new byte[1]));
    LOG.info("Creating new region : " + hri);
    HRegion region = HBaseFsckRepair.createHDFSRegionDir(getConf(), hri, template);
    Path target = region.getRegionFileSystem().getRegionDir();
    // rename all the data to new region
    mergeRegionDirs(target, hi);
    fixes++;
}
Also used : Path(org.apache.hadoop.fs.Path) FileStatus(org.apache.hadoop.fs.FileStatus) InterruptedIOException(java.io.InterruptedIOException) IOException(java.io.IOException) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) HRegionInfo(org.apache.hadoop.hbase.HRegionInfo) TableName(org.apache.hadoop.hbase.TableName) HRegion(org.apache.hadoop.hbase.regionserver.HRegion) FileSystem(org.apache.hadoop.fs.FileSystem) MasterFileSystem(org.apache.hadoop.hbase.master.MasterFileSystem) HRegionFileSystem(org.apache.hadoop.hbase.regionserver.HRegionFileSystem) HFile(org.apache.hadoop.hbase.io.hfile.HFile) CacheConfig(org.apache.hadoop.hbase.io.hfile.CacheConfig) Cell(org.apache.hadoop.hbase.Cell)
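The range-expansion loop in adoptHdfsOrphan is a lexicographic min/max over the first and last row keys of every HFile. HBase's Bytes.compareTo performs unsigned byte-wise comparison, which java.util.Arrays.compareUnsigned (Java 9+) reproduces; a minimal standalone version of the widening step, with a plain pair class standing in for HBase's Pair:

```java
import java.util.Arrays;

public class KeyRange {
    public byte[] first;  // smallest start key seen so far
    public byte[] second; // largest end key seen so far

    public KeyRange(byte[] first, byte[] second) {
        this.first = first;
        this.second = second;
    }

    /** Widen this range so it also covers [start, end], using the same
     *  unsigned lexicographic order as HBase's Bytes.compareTo. */
    public void widen(byte[] start, byte[] end) {
        if (Arrays.compareUnsigned(this.first, start) > 0) {
            this.first = start;   // hfile starts earlier than any seen so far
        }
        if (Arrays.compareUnsigned(this.second, end) < 0) {
            this.second = end;    // hfile ends later than any seen so far
        }
    }
}
```

After folding every hfile through widen(), [first, second] covers all data in the orphan, which is why the method can build the new HRegionInfo from that pair (appending one byte to make the end key exclusive).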

Example 9 with HRegion

use of org.apache.hadoop.hbase.regionserver.HRegion in project hbase by apache.

the class MetaUtils method shutdown.

/**
   * Closes catalog regions if open. Also closes and deletes the WAL. You
   * must call this method if you want to persist changes made during a
   * MetaUtils edit session.
   */
public synchronized void shutdown() {
    if (this.metaRegion != null) {
        try {
            this.metaRegion.close();
        } catch (IOException e) {
            LOG.error("closing meta region", e);
        } finally {
            this.metaRegion = null;
        }
    }
    try {
        for (HRegion r : metaRegions.values()) {
            LOG.info("CLOSING hbase:meta " + r.toString());
            r.close();
        }
    } catch (IOException e) {
        LOG.error("closing meta region", e);
    } finally {
        metaRegions.clear();
    }
    try {
        if (this.walFactory != null) {
            this.walFactory.close();
        }
    } catch (IOException e) {
        LOG.error("closing WAL", e);
    }
}
Also used : HRegion(org.apache.hadoop.hbase.regionserver.HRegion) IOException(java.io.IOException)
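shutdown() follows a close-quietly pattern: each close is attempted, failures are logged rather than propagated, and the bookkeeping (nulling the field, clearing the map) happens in a finally block so state stays consistent even on error. The same shape can be sketched generically over any collection of AutoCloseable resources (this helper is hypothetical, not HBase API):

```java
import java.util.Map;

public class QuietShutdown {
    /** Close every resource, counting rather than propagating failures,
     *  and always clear the map so nothing is closed twice. */
    public static int closeAll(Map<String, ? extends AutoCloseable> resources) {
        int failures = 0;
        try {
            for (AutoCloseable r : resources.values()) {
                try {
                    r.close();
                } catch (Exception e) {
                    failures++; // MetaUtils does LOG.error(...) here instead
                }
            }
        } finally {
            resources.clear(); // mirrors metaRegions.clear() in the finally block
        }
        return failures;
    }
}
```

Clearing in finally matters: if one close throws past the per-item catch, the map is still emptied, so a second shutdown() call cannot attempt to close an already-closed region.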

Example 10 with HRegion

use of org.apache.hadoop.hbase.regionserver.HRegion in project hbase by apache.

the class ModifyRegionUtils method createRegion.

/**
   * Create new set of regions on the specified file-system.
   * @param conf {@link Configuration}
   * @param rootDir Root directory for HBase instance
   * @param hTableDescriptor description of the table
   * @param newRegion {@link HRegionInfo} that describes the region to create
   * @param task {@link RegionFillTask} custom code to populate region after creation
   * @throws IOException
   */
public static HRegionInfo createRegion(final Configuration conf, final Path rootDir, final HTableDescriptor hTableDescriptor, final HRegionInfo newRegion, final RegionFillTask task) throws IOException {
    // 1. Create HRegion
    // The WAL subsystem will use the default rootDir rather than the passed in rootDir
    // unless I pass along via the conf.
    Configuration confForWAL = new Configuration(conf);
    confForWAL.set(HConstants.HBASE_DIR, rootDir.toString());
    HRegion region = HRegion.createHRegion(newRegion, rootDir, conf, hTableDescriptor, null, false);
    try {
        // 2. Custom user code to interact with the created region
        if (task != null) {
            task.fillRegion(region);
        }
    } finally {
        // 3. Close the new region to flush to disk. Close log file too.
        region.close();
    }
    return region.getRegionInfo();
}
Also used : HRegion(org.apache.hadoop.hbase.regionserver.HRegion) Configuration(org.apache.hadoop.conf.Configuration)
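createRegion is a create/fill/close lifecycle: the region is always closed in finally so its data is flushed to disk even when the fill callback fails, and only then is the region info returned. The shape can be sketched generically, with an interface standing in for RegionFillTask (all names here are illustrative, not HBase API):

```java
public class CreateFillClose {
    /** Post-creation callback, mirroring RegionFillTask.fillRegion. */
    public interface FillTask<R> {
        void fill(R resource) throws Exception;
    }

    /** Stand-in for an HRegion: closeable, with some retrievable metadata. */
    public interface Resource extends AutoCloseable {
        String info();
    }

    /** 1. take the created resource, 2. run custom fill code if any,
     *  3. always close to flush, then return the resource's metadata. */
    public static <R extends Resource> String createAndFill(R resource, FillTask<R> task) throws Exception {
        try {
            if (task != null) {
                task.fill(resource); // custom user code, step 2
            }
        } finally {
            resource.close();        // flush to disk, step 3
        }
        return resource.info();      // like region.getRegionInfo()
    }
}
```

Returning the lightweight info object rather than the closed region itself is the key design choice: callers never hold a handle to a region whose resources have already been released.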

Aggregations

HRegion (org.apache.hadoop.hbase.regionserver.HRegion): 148 uses
Test (org.junit.Test): 88 uses
Put (org.apache.hadoop.hbase.client.Put): 56 uses
Path (org.apache.hadoop.fs.Path): 40 uses
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 40 uses
Scan (org.apache.hadoop.hbase.client.Scan): 37 uses
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo): 36 uses
Cell (org.apache.hadoop.hbase.Cell): 35 uses
TableId (co.cask.cdap.data2.util.TableId): 32 uses
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 28 uses
IOException (java.io.IOException): 26 uses
WAL (org.apache.hadoop.hbase.wal.WAL): 25 uses
FileSystem (org.apache.hadoop.fs.FileSystem): 24 uses
ArrayList (java.util.ArrayList): 22 uses
TableName (org.apache.hadoop.hbase.TableName): 22 uses
Configuration (org.apache.hadoop.conf.Configuration): 21 uses
Result (org.apache.hadoop.hbase.client.Result): 21 uses
Region (org.apache.hadoop.hbase.regionserver.Region): 21 uses
MiniHBaseCluster (org.apache.hadoop.hbase.MiniHBaseCluster): 19 uses
RegionScanner (org.apache.hadoop.hbase.regionserver.RegionScanner): 19 uses