
Example 11 with StoppableImplementation

use of org.apache.hadoop.hbase.util.StoppableImplementation in project hbase by apache.

the class TestCleanerChore method testNoExceptionFromDirectoryWithRacyChildren.

/**
   * The cleaner runs in a loop, where it first checks to see whether all the files under a
   * directory can be deleted. If they all can, then we try to delete the directory. However, a
   * file may be added to that directory after the original check. This test ensures that we don't
   * accidentally delete that directory and don't get spurious IOExceptions.
   * <p>
   * This was from HBASE-7465.
   * @throws Exception on failure
   */
@Test
public void testNoExceptionFromDirectoryWithRacyChildren() throws Exception {
    Stoppable stop = new StoppableImplementation();
    // need to use a local util so we don't break the rest of the tests that run on the local FS,
    // which gets hosed when we start to use a minicluster.
    HBaseTestingUtility localUtil = new HBaseTestingUtility();
    Configuration conf = localUtil.getConfiguration();
    final Path testDir = UTIL.getDataTestDir();
    final FileSystem fs = UTIL.getTestFileSystem();
    LOG.debug("Writing test data to: " + testDir);
    String confKey = "hbase.test.cleaner.delegates";
    conf.set(confKey, AlwaysDelete.class.getName());
    AllValidPaths chore = new AllValidPaths("test-file-cleaner", stop, conf, fs, testDir, confKey);
    // spy on the delegate so we can hook the file-deletable check
    AlwaysDelete delegate = (AlwaysDelete) chore.cleanersChain.get(0);
    AlwaysDelete spy = Mockito.spy(delegate);
    chore.cleanersChain.set(0, spy);
    // create the directory layout in the directory to clean
    final Path parent = new Path(testDir, "parent");
    Path file = new Path(parent, "someFile");
    fs.mkdirs(parent);
    // touch a new file
    fs.create(file).close();
    assertTrue("Test file didn't get created.", fs.exists(file));
    final Path racyFile = new Path(parent, "addedFile");
    // when we attempt to delete the original file, add another file in the same directory
    Mockito.doAnswer(new Answer<Boolean>() {

        @Override
        public Boolean answer(InvocationOnMock invocation) throws Throwable {
            fs.create(racyFile).close();
            FSUtils.logFileSystemState(fs, testDir, LOG);
            return (Boolean) invocation.callRealMethod();
        }
    }).when(spy).isFileDeletable(Mockito.any(FileStatus.class));
    // attempt to delete the directory; this should fail because a file is added mid-iteration
    if (chore.checkAndDeleteDirectory(parent)) {
        throw new Exception("Reported success deleting directory, should have failed when adding file mid-iteration");
    }
    // make sure all the directories + added file exist, but the original file is deleted
    assertTrue("Added file unexpectedly deleted", fs.exists(racyFile));
    assertTrue("Parent directory deleted unexpectedly", fs.exists(parent));
    assertFalse("Original file unexpectedly retained", fs.exists(file));
    Mockito.verify(spy, Mockito.times(1)).isFileDeletable(Mockito.any(FileStatus.class));
}
Also used : Path(org.apache.hadoop.fs.Path) FileStatus(org.apache.hadoop.fs.FileStatus) Configuration(org.apache.hadoop.conf.Configuration) StoppableImplementation(org.apache.hadoop.hbase.util.StoppableImplementation) Stoppable(org.apache.hadoop.hbase.Stoppable) IOException(java.io.IOException) HBaseTestingUtility(org.apache.hadoop.hbase.HBaseTestingUtility) InvocationOnMock(org.mockito.invocation.InvocationOnMock) FileSystem(org.apache.hadoop.fs.FileSystem) Test(org.junit.Test)
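
The core of this test is the Mockito doAnswer hook: intercept the spy's isFileDeletable call, perform a side effect that simulates the race, then fall through to the real method. A stripped-down sketch of that pattern, where FileCheck is a hypothetical stand-in for the test's delegate (not an HBase class):

// Sketch only: FileCheck stands in for the real cleaner delegate.
class FileCheck {
    public boolean isFileDeletable(FileStatus status) {
        return true;
    }
}

FileCheck spy = Mockito.spy(new FileCheck());
Mockito.doAnswer(new Answer<Boolean>() {

    @Override
    public Boolean answer(InvocationOnMock invocation) throws Throwable {
        // side effect simulating a concurrent change (the test creates a file here)...
        // ...then delegate to the real check so the chore otherwise behaves normally
        return (Boolean) invocation.callRealMethod();
    }
}).when(spy).isFileDeletable(Mockito.any(FileStatus.class));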

Example 12 with StoppableImplementation

use of org.apache.hadoop.hbase.util.StoppableImplementation in project hbase by apache.

the class TestZooKeeperTableArchiveClient method testMultipleTables.

/**
   * Test archiving/cleaning across multiple tables, where some are retained and others aren't.
   * @throws Exception on failure
   */
@Test(timeout = 300000)
public void testMultipleTables() throws Exception {
    createArchiveDirectory();
    String otherTable = "otherTable";
    FileSystem fs = UTIL.getTestFileSystem();
    Path archiveDir = getArchiveDir();
    Path tableDir = getTableDir(STRING_TABLE_NAME);
    Path otherTableDir = getTableDir(otherTable);
    // register cleanup for the created directories
    toCleanup.add(archiveDir);
    toCleanup.add(tableDir);
    toCleanup.add(otherTableDir);
    Configuration conf = UTIL.getConfiguration();
    // setup the delegate
    Stoppable stop = new StoppableImplementation();
    final ChoreService choreService = new ChoreService("TEST_SERVER_NAME");
    HFileCleaner cleaner = setupAndCreateCleaner(conf, fs, archiveDir, stop);
    List<BaseHFileCleanerDelegate> cleaners = turnOnArchiving(STRING_TABLE_NAME, cleaner);
    final LongTermArchivingHFileCleaner delegate = (LongTermArchivingHFileCleaner) cleaners.get(0);
    // create the region
    HColumnDescriptor hcd = new HColumnDescriptor(TEST_FAM);
    HRegion region = UTIL.createTestRegion(STRING_TABLE_NAME, hcd);
    List<Region> regions = new ArrayList<>();
    regions.add(region);
    when(rss.getOnlineRegions()).thenReturn(regions);
    final CompactedHFilesDischarger compactionCleaner = new CompactedHFilesDischarger(100, stop, rss, false);
    loadFlushAndCompact(region, TEST_FAM);
    compactionCleaner.chore();
    // create another table that we don't archive
    hcd = new HColumnDescriptor(TEST_FAM);
    HRegion otherRegion = UTIL.createTestRegion(otherTable, hcd);
    regions = new ArrayList<>();
    regions.add(otherRegion);
    when(rss.getOnlineRegions()).thenReturn(regions);
    final CompactedHFilesDischarger compactionCleaner1 = new CompactedHFilesDischarger(100, stop, rss, false);
    loadFlushAndCompact(otherRegion, TEST_FAM);
    compactionCleaner1.chore();
    // get the current hfiles in the archive directory; these should have been archived
    List<Path> files = getAllFiles(fs, archiveDir);
    if (files == null) {
        FSUtils.logFileSystemState(fs, archiveDir, LOG);
        throw new RuntimeException("Didn't load archive any files!");
    }
    // make sure we have files from both tables
    int initialCountForPrimary = 0;
    int initialCountForOtherTable = 0;
    for (Path file : files) {
        String tableName = file.getParent().getParent().getParent().getName();
        // check to which table this file belongs
        if (tableName.equals(otherTable))
            initialCountForOtherTable++;
        else if (tableName.equals(STRING_TABLE_NAME))
            initialCountForPrimary++;
    }
    assertTrue("Didn't archive files for:" + STRING_TABLE_NAME, initialCountForPrimary > 0);
    assertTrue("Didn't archive files for:" + otherTable, initialCountForOtherTable > 0);
    // run the cleaners, checking each of the directories and files in 'otherTable' (both should be
    // deleted) and the files in the primary table (which should be retained)
    CountDownLatch finished = setupCleanerWatching(delegate, cleaners, files.size() + 3);
    // run the cleaner
    choreService.scheduleChore(cleaner);
    // wait for the cleaner to check all the files
    finished.await();
    // stop the cleaner
    stop.stop("");
    // we know the cleaner ran, so now check all the files again to make sure they are still there
    List<Path> archivedFiles = getAllFiles(fs, archiveDir);
    int archivedForPrimary = 0;
    for (Path file : archivedFiles) {
        String tableName = file.getParent().getParent().getParent().getName();
        // ensure we don't have files from the non-archived table
        assertFalse("Have a file from the non-archived table: " + file, tableName.equals(otherTable));
        if (tableName.equals(STRING_TABLE_NAME))
            archivedForPrimary++;
    }
    assertEquals("Not all archived files for the primary table were retained.", initialCountForPrimary, archivedForPrimary);
    // but we still have the archive directory
    assertTrue("Archive directory was deleted via archiver", fs.exists(archiveDir));
}
Also used : Path(org.apache.hadoop.fs.Path) BaseHFileCleanerDelegate(org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate) Configuration(org.apache.hadoop.conf.Configuration) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) StoppableImplementation(org.apache.hadoop.hbase.util.StoppableImplementation) ArrayList(java.util.ArrayList) Stoppable(org.apache.hadoop.hbase.Stoppable) HFileCleaner(org.apache.hadoop.hbase.master.cleaner.HFileCleaner) CountDownLatch(java.util.concurrent.CountDownLatch) HRegion(org.apache.hadoop.hbase.regionserver.HRegion) ChoreService(org.apache.hadoop.hbase.ChoreService) CompactedHFilesDischarger(org.apache.hadoop.hbase.regionserver.CompactedHFilesDischarger) FileSystem(org.apache.hadoop.fs.FileSystem) HRegion(org.apache.hadoop.hbase.regionserver.HRegion) Region(org.apache.hadoop.hbase.regionserver.Region) Test(org.junit.Test)
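
The scheduling half of this test is the ChoreService/Stoppable pairing: the cleaner chore is registered with the service, and shutdown is signalled through the shared stopper. A minimal sketch of that lifecycle, assuming the standard ScheduledChore(name, stopper, period) constructor; the chore name and the 100 ms period are illustrative:

ChoreService choreService = new ChoreService("EXAMPLE_SERVICE");
Stoppable stop = new StoppableImplementation();
// anonymous chore; the service keeps invoking chore() every 100 ms
// until the stopper reports stopped
ScheduledChore chore = new ScheduledChore("example-chore", stop, 100) {

    @Override
    protected void chore() {
        // periodic work goes here
    }
};
choreService.scheduleChore(chore);
// ... later: signal the stopper, then shut the service down
stop.stop("test finished");
choreService.shutdown();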

Example 13 with StoppableImplementation

use of org.apache.hadoop.hbase.util.StoppableImplementation in project hbase by apache.

the class TestStoreFileRefresherChore method testIsStale.

@Test
public void testIsStale() throws IOException {
    int period = 0;
    byte[][] families = new byte[][] { Bytes.toBytes("cf") };
    byte[] qf = Bytes.toBytes("cq");
    HRegionServer regionServer = mock(HRegionServer.class);
    List<Region> regions = new ArrayList<>();
    when(regionServer.getOnlineRegionsLocalContext()).thenReturn(regions);
    when(regionServer.getConfiguration()).thenReturn(TEST_UTIL.getConfiguration());
    HTableDescriptor htd = getTableDesc(TableName.valueOf(name.getMethodName()), families);
    Region primary = initHRegion(htd, HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW, 0);
    Region replica1 = initHRegion(htd, HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW, 1);
    regions.add(primary);
    regions.add(replica1);
    StaleStorefileRefresherChore chore = new StaleStorefileRefresherChore(period, regionServer, new StoppableImplementation());
    // write some data to primary and flush
    putData(primary, 0, 100, qf, families);
    primary.flush(true);
    verifyData(primary, 0, 100, qf, families);
    try {
        verifyData(replica1, 0, 100, qf, families);
        Assert.fail("should have failed");
    } catch (AssertionError ex) {
    // expected
    }
    chore.chore();
    verifyData(replica1, 0, 100, qf, families);
    // simulate an FS failure where we cannot refresh the store files for the replica
    ((FailingHRegionFileSystem) ((HRegion) replica1).getRegionFileSystem()).fail = true;
    // write some more data to primary and flush
    putData(primary, 100, 100, qf, families);
    primary.flush(true);
    verifyData(primary, 0, 200, qf, families);
    // should not throw ex, but we cannot refresh the store files
    chore.chore();
    verifyData(replica1, 0, 100, qf, families);
    try {
        verifyData(replica1, 100, 100, qf, families);
        Assert.fail("should have failed");
    } catch (AssertionError ex) {
    // expected
    }
    chore.isStale = true;
    // now, after this, we cannot read back any value
    chore.chore();
    try {
        verifyData(replica1, 0, 100, qf, families);
        Assert.fail("should have failed with IOException");
    } catch (IOException ex) {
    // expected
    }
}
Also used : StoppableImplementation(org.apache.hadoop.hbase.util.StoppableImplementation) ArrayList(java.util.ArrayList) IOException(java.io.IOException) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) Test(org.junit.Test)
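
Common to all three examples is the test-only stopper handed to each chore. For orientation, a minimal sketch of what such a Stoppable looks like, assuming the two-method contract stop(String)/isStopped(); the real org.apache.hadoop.hbase.util.StoppableImplementation may differ in detail:

import org.apache.hadoop.hbase.Stoppable;

// Minimal test-only stopper: flips a flag on stop() and reports it via isStopped().
public class SketchStoppable implements Stoppable {

    private volatile boolean stopped = false;

    @Override
    public void stop(String why) {
        this.stopped = true;
    }

    @Override
    public boolean isStopped() {
        return this.stopped;
    }
}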

Aggregations

StoppableImplementation (org.apache.hadoop.hbase.util.StoppableImplementation): 13 usages
Test (org.junit.Test): 13 usages
Configuration (org.apache.hadoop.conf.Configuration): 11 usages
FileSystem (org.apache.hadoop.fs.FileSystem): 11 usages
Path (org.apache.hadoop.fs.Path): 11 usages
Stoppable (org.apache.hadoop.hbase.Stoppable): 11 usages
IOException (java.io.IOException): 3 usages
ArrayList (java.util.ArrayList): 3 usages
FileStatus (org.apache.hadoop.fs.FileStatus): 3 usages
HFileCleaner (org.apache.hadoop.hbase.master.cleaner.HFileCleaner): 3 usages
CountDownLatch (java.util.concurrent.CountDownLatch): 2 usages
ChoreService (org.apache.hadoop.hbase.ChoreService): 2 usages
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 2 usages
BaseHFileCleanerDelegate (org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate): 2 usages
CompactedHFilesDischarger (org.apache.hadoop.hbase.regionserver.CompactedHFilesDischarger): 2 usages
HRegion (org.apache.hadoop.hbase.regionserver.HRegion): 2 usages
Region (org.apache.hadoop.hbase.regionserver.Region): 2 usages
InvocationOnMock (org.mockito.invocation.InvocationOnMock): 2 usages
HBaseTestingUtility (org.apache.hadoop.hbase.HBaseTestingUtility): 1 usage
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 1 usage