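EnvironmentEdge is the hook HBase uses to read the clock: code reads time through EnvironmentEdgeManager.currentTime() rather than System.currentTimeMillis(), so tests can substitute a controllable time source. All four examples below follow the same inject-and-restore pattern. A minimal standalone sketch of that pattern (the frozen timestamp and class name are illustrative):

import org.apache.hadoop.hbase.util.EnvironmentEdge;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

public class FrozenClockSketch {

    public static void main(String[] args) {
        // Remember the real delegate so it can be restored afterwards.
        final EnvironmentEdge original = EnvironmentEdgeManager.getDelegate();
        // Illustrative fixed instant; any code that reads the clock through
        // EnvironmentEdgeManager.currentTime() will now see it.
        final long frozen = 1_000_000L;
        EnvironmentEdgeManager.injectEdge(new EnvironmentEdge() {

            @Override
            public long currentTime() {
                return frozen;
            }
        });
        try {
            System.out.println(EnvironmentEdgeManager.currentTime()); // prints 1000000
        } finally {
            // Restore the original edge so later code sees real time again.
            EnvironmentEdgeManager.injectEdge(original);
        }
    }
}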

Example 1 with EnvironmentEdge

Use of org.apache.hadoop.hbase.util.EnvironmentEdge in project hbase by apache.

From class TestHFileCleaner, method testHFileCleaning:

@Test(timeout = 60 * 1000)
public void testHFileCleaning() throws Exception {
    final EnvironmentEdge originalEdge = EnvironmentEdgeManager.getDelegate();
    String prefix = "someHFileThatWouldBeAUUID";
    Configuration conf = UTIL.getConfiguration();
    // set TTL
    long ttl = 2000;
    conf.set(HFileCleaner.MASTER_HFILE_CLEANER_PLUGINS, "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    conf.setLong(TimeToLiveHFileCleaner.TTL_CONF_KEY, ttl);
    Server server = new DummyServer();
    Path archivedHfileDir = new Path(UTIL.getDataTestDirOnTestFS(), HConstants.HFILE_ARCHIVE_DIRECTORY);
    FileSystem fs = FileSystem.get(conf);
    HFileCleaner cleaner = new HFileCleaner(1000, server, conf, fs, archivedHfileDir);
    // Create 1 invalid file, 31 old files, and 1 "recent" file that must survive (33 in total)
    final long createTime = System.currentTimeMillis();
    fs.delete(archivedHfileDir, true);
    fs.mkdirs(archivedHfileDir);
    // Case 1: 1 invalid file, which should be deleted directly
    fs.createNewFile(new Path(archivedHfileDir, "dfd-dfd"));
    LOG.debug("Now is: " + createTime);
    for (int i = 1; i < 32; i++) {
        // Case 3: old files that the first HFile cleaner in the chain
        // (TimeToLiveHFileCleaner) can delete
        Path fileName = new Path(archivedHfileDir, (prefix + "." + (createTime + i)));
        fs.createNewFile(fileName);
        // backdate the modification time past the TTL so the file gets removed
        fs.setTimes(fileName, createTime - ttl - 1, -1);
        LOG.debug("Creating " + getFileStats(fileName, fs));
    }
    // Case 2: 1 "recent" file, not deletable even by the first HFile cleaner
    // (TimeToLiveHFileCleaner), so the rest of the chain is never consulted
    Path saved = new Path(archivedHfileDir, prefix + ".00000000000");
    fs.createNewFile(saved);
    // backdate the modification time by only ttl/2, keeping the file within the TTL
    fs.setTimes(saved, createTime - ttl / 2, -1);
    LOG.debug("Creating " + getFileStats(saved, fs));
    for (FileStatus stat : fs.listStatus(archivedHfileDir)) {
        LOG.debug(stat.getPath().toString());
    }
    assertEquals(33, fs.listStatus(archivedHfileDir).length);
    // inject a custom edge so all time checks see a fixed "now"
    EnvironmentEdge setTime = new EnvironmentEdge() {

        @Override
        public long currentTime() {
            return createTime;
        }
    };
    EnvironmentEdgeManager.injectEdge(setTime);
    // run the chore
    cleaner.chore();
    // ensure we only end up with the saved file
    assertEquals(1, fs.listStatus(archivedHfileDir).length);
    for (FileStatus file : fs.listStatus(archivedHfileDir)) {
        LOG.debug("Kept hfiles: " + file.getPath().getName());
    }
    // reset the edge back to the original edge
    EnvironmentEdgeManager.injectEdge(originalEdge);
}
Also used: Path (org.apache.hadoop.fs.Path), FileStatus (org.apache.hadoop.fs.FileStatus), Configuration (org.apache.hadoop.conf.Configuration), Server (org.apache.hadoop.hbase.Server), EnvironmentEdge (org.apache.hadoop.hbase.util.EnvironmentEdge), FileSystem (org.apache.hadoop.fs.FileSystem), Test (org.junit.Test)
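The test works because TimeToLiveHFileCleaner reads the clock through EnvironmentEdgeManager, and the injected edge pins "now" at createTime: the 31 backdated files are older than the TTL and get deleted, while the saved file is only ttl/2 old and survives. A hedged paraphrase of the age check such a TTL cleaner performs (the class and field names here are illustrative, not the actual HBase implementation):

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

class TtlCheckSketch {

    private final long ttl; // milliseconds a file must age before it is deletable

    TtlCheckSketch(long ttl) {
        this.ttl = ttl;
    }

    boolean isFileDeletable(FileStatus fStat) {
        // The injected edge controls this clock, which is what makes the
        // expiry decision in the test above deterministic.
        long life = EnvironmentEdgeManager.currentTime() - fStat.getModificationTime();
        return life > ttl;
    }
}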

Example 2 with EnvironmentEdge

Use of org.apache.hadoop.hbase.util.EnvironmentEdge in project hbase by apache.

From class TestFIFOCompactionPolicy, method setEnvironmentEdge:

@BeforeClass
public static void setEnvironmentEdge() {
    EnvironmentEdge ee = new TimeOffsetEnvironmentEdge();
    EnvironmentEdgeManager.injectEdge(ee);
}
Also used: TimeOffsetEnvironmentEdge (org.apache.hadoop.hbase.util.TimeOffsetEnvironmentEdge), EnvironmentEdge (org.apache.hadoop.hbase.util.EnvironmentEdge), BeforeClass (org.junit.BeforeClass)
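TimeOffsetEnvironmentEdge lets the FIFO compaction tests jump the clock forward so TTL-expired store files show up without real waiting. The same effect can be sketched with an anonymous edge and an adjustable offset (this snippet is an illustration of the idea, not the TimeOffsetEnvironmentEdge source):

import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.hbase.util.EnvironmentEdge;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

class OffsetEdgeSketch {

    public static void main(String[] args) {
        // Offset added on top of the real clock; bumping it "fast-forwards" time.
        final AtomicLong offsetMillis = new AtomicLong(0);
        EnvironmentEdgeManager.injectEdge(new EnvironmentEdge() {

            @Override
            public long currentTime() {
                return System.currentTimeMillis() + offsetMillis.get();
            }
        });
        long before = EnvironmentEdgeManager.currentTime();
        // Jump one hour ahead without sleeping, e.g. past a TTL boundary.
        offsetMillis.addAndGet(3_600_000L);
        long after = EnvironmentEdgeManager.currentTime();
        System.out.println(after - before >= 3_600_000L); // true
    }
}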

Example 3 with EnvironmentEdge

Use of org.apache.hadoop.hbase.util.EnvironmentEdge in project hbase by apache.

From class AbstractTestFSWAL, method testFlushSequenceIdIsGreaterThanAllEditsInHFile:

/**
   * Test that a flush gets a sequence id beyond the last edit appended. We do this by slowing
   * appends in the background ring-buffer thread while in the foreground we call flush. The
   * addition of the sync over HRegion in flush should fix an issue where flush returned before
   * all of its appends had made it out to the WAL (HBASE-11109).
   * @throws IOException
   * @see <a href="https://issues.apache.org/jira/browse/HBASE-11109">HBASE-11109</a>
   */
@Test
public void testFlushSequenceIdIsGreaterThanAllEditsInHFile() throws IOException {
    String testName = currentTest.getMethodName();
    final TableName tableName = TableName.valueOf(testName);
    final HRegionInfo hri = new HRegionInfo(tableName);
    final byte[] rowName = tableName.getName();
    final HTableDescriptor htd = new HTableDescriptor(tableName);
    htd.addFamily(new HColumnDescriptor("f"));
    HRegion r = HBaseTestingUtility.createRegionAndWAL(hri, TEST_UTIL.getDefaultRootDirPath(), TEST_UTIL.getConfiguration(), htd);
    HBaseTestingUtility.closeRegionAndWAL(r);
    final int countPerFamily = 10;
    final AtomicBoolean goslow = new AtomicBoolean(false);
    NavigableMap<byte[], Integer> scopes = new TreeMap<>(Bytes.BYTES_COMPARATOR);
    for (byte[] fam : htd.getFamiliesKeys()) {
        scopes.put(fam, 0);
    }
    // Subclass the WAL and doctor its append path to run the Runnable below.
    AbstractFSWAL<?> wal = newSlowWAL(FS, FSUtils.getWALRootDir(CONF), DIR.toString(), testName, CONF, null, true, null, null, new Runnable() {

        @Override
        public void run() {
            if (goslow.get()) {
                LOG.debug("Sleeping 100ms before appending");
                Threads.sleep(100);
            }
        }
    });
    HRegion region = HRegion.openHRegion(TEST_UTIL.getConfiguration(), TEST_UTIL.getTestFileSystem(), TEST_UTIL.getDefaultRootDirPath(), hri, htd, wal);
    EnvironmentEdge ee = EnvironmentEdgeManager.getDelegate();
    try {
        List<Put> puts = null;
        for (HColumnDescriptor hcd : htd.getFamilies()) {
            puts = TestWALReplay.addRegionEdits(rowName, hcd.getName(), countPerFamily, ee, region, "x");
        }
        // Now assert edits made it in.
        final Get g = new Get(rowName);
        Result result = region.get(g);
        assertEquals(countPerFamily * htd.getFamilies().size(), result.size());
        // Construct a WALEdit and add it a few times to the WAL.
        WALEdit edits = new WALEdit();
        for (Put p : puts) {
            CellScanner cs = p.cellScanner();
            while (cs.advance()) {
                edits.add(cs.current());
            }
        }
        // Add any old cluster id.
        List<UUID> clusterIds = new ArrayList<>(1);
        clusterIds.add(UUID.randomUUID());
        // Now make appends run slow.
        goslow.set(true);
        for (int i = 0; i < countPerFamily; i++) {
            final HRegionInfo info = region.getRegionInfo();
            final WALKey logkey = new WALKey(info.getEncodedNameAsBytes(), tableName, System.currentTimeMillis(), clusterIds, -1, -1, region.getMVCC(), scopes);
            wal.append(info, logkey, edits, true);
            region.getMVCC().completeAndWait(logkey.getWriteEntry());
        }
        region.flush(true);
        // FlushResult.flushSequenceId is not visible here so go get the current sequence id.
        long currentSequenceId = region.getReadPoint(null);
        // Now release the appends
        goslow.set(false);
        assertTrue(currentSequenceId >= region.getReadPoint(null));
    } finally {
        region.close(true);
        wal.close();
    }
}
Also used: ArrayList (java.util.ArrayList), CellScanner (org.apache.hadoop.hbase.CellScanner), Result (org.apache.hadoop.hbase.client.Result), HRegionInfo (org.apache.hadoop.hbase.HRegionInfo), WALKey (org.apache.hadoop.hbase.wal.WALKey), UUID (java.util.UUID), HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor), EnvironmentEdge (org.apache.hadoop.hbase.util.EnvironmentEdge), TreeMap (java.util.TreeMap), Put (org.apache.hadoop.hbase.client.Put), HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor), TableName (org.apache.hadoop.hbase.TableName), HRegion (org.apache.hadoop.hbase.regionserver.HRegion), AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), Get (org.apache.hadoop.hbase.client.Get), Test (org.junit.Test)
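The interesting part here is less the edge itself than the gate around the injected Runnable: a shared AtomicBoolean toggles a delay inside a hook the WAL runs on its append path, widening the race window between append and flush. The gating pattern in isolation (class and hook names are illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

class SlowHookSketch {

    static final AtomicBoolean goSlow = new AtomicBoolean(false);

    // Hook that the code under test would invoke before each append.
    static final Runnable preAppendHook = () -> {
        if (goSlow.get()) {
            try {
                Thread.sleep(100); // deterministic ~100ms stall per append
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    };

    public static void main(String[] args) {
        preAppendHook.run(); // fast path: the flag is off
        goSlow.set(true);
        preAppendHook.run(); // now stalls ~100ms, as in the test above
    }
}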

Example 4 with EnvironmentEdge

Use of org.apache.hadoop.hbase.util.EnvironmentEdge in project hbase by apache.

From class TestStoreScanner, method testDeleteMarkerLongevity:

@Test
public void testDeleteMarkerLongevity() throws Exception {
    try {
        final long now = System.currentTimeMillis();
        EnvironmentEdgeManagerTestHelper.injectEdge(new EnvironmentEdge() {

            @Override
            public long currentTime() {
                return now;
            }
        });
        KeyValue[] kvs = new KeyValue[] {
            /* 0 */ new KeyValue(Bytes.toBytes("R1"), Bytes.toBytes("cf"), null, now - 100,
                        KeyValue.Type.DeleteFamily),                                                  // live
            /* 1 */ new KeyValue(Bytes.toBytes("R1"), Bytes.toBytes("cf"), null, now - 1000,
                        KeyValue.Type.DeleteFamily),                                                  // expired
            /* 2 */ KeyValueTestUtil.create("R1", "cf", "a", now - 50, KeyValue.Type.Put, "v3"),      // live
            /* 3 */ KeyValueTestUtil.create("R1", "cf", "a", now - 55, KeyValue.Type.Delete, "dontcare"), // live
            /* 4 */ KeyValueTestUtil.create("R1", "cf", "a", now - 55, KeyValue.Type.Put, "deleted-version v2"), // deleted
            /* 5 */ KeyValueTestUtil.create("R1", "cf", "a", now - 60, KeyValue.Type.Put, "v1"),      // live
            /* 6 */ KeyValueTestUtil.create("R1", "cf", "a", now - 65, KeyValue.Type.Put, "v0"),      // max-version reached
            /* 7 */ KeyValueTestUtil.create("R1", "cf", "a", now - 100, KeyValue.Type.DeleteColumn, "dont-care"), // max-version
            /* 8 */ KeyValueTestUtil.create("R1", "cf", "b", now - 600, KeyValue.Type.DeleteColumn, "dont-care"), // expired
            /* 9 */ KeyValueTestUtil.create("R1", "cf", "b", now - 70, KeyValue.Type.Put, "v2"),      // live
            /* 10 */ KeyValueTestUtil.create("R1", "cf", "b", now - 750, KeyValue.Type.Put, "v1"),    // expired
            /* 11 */ KeyValueTestUtil.create("R1", "cf", "c", now - 500, KeyValue.Type.Delete, "dontcare"), // expired
            /* 12 */ KeyValueTestUtil.create("R1", "cf", "c", now - 600, KeyValue.Type.Put, "v1"),    // expired
            /* 13 */ KeyValueTestUtil.create("R1", "cf", "c", now - 1000, KeyValue.Type.Delete, "dontcare"), // expired
            /* 14 */ KeyValueTestUtil.create("R1", "cf", "d", now - 60, KeyValue.Type.Put, "expired put"), // live
            /* 15 */ KeyValueTestUtil.create("R1", "cf", "d", now - 100, KeyValue.Type.Delete, "not-expired delete") // live
        };
        List<KeyValueScanner> scanners = scanFixture(kvs);
        Scan scan = new Scan();
        scan.setMaxVersions(2);
        ScanInfo scanInfo = new ScanInfo(CONF, Bytes.toBytes("cf"),
            0 /* minVersions */,
            2 /* maxVersions */,
            500 /* ttl */,
            KeepDeletedCells.FALSE /* keepDeletedCells */,
            200 /* timeToPurgeDeletes */,
            CellComparator.COMPARATOR);
        try (StoreScanner scanner = new StoreScanner(scan, scanInfo, ScanType.COMPACT_DROP_DELETES, null, scanners, HConstants.OLDEST_TIMESTAMP)) {
            List<Cell> results = new ArrayList<>();
            Assert.assertTrue(scanner.next(results));
            Assert.assertEquals(kvs[0], results.get(0));
            Assert.assertEquals(kvs[2], results.get(1));
            Assert.assertEquals(kvs[3], results.get(2));
            Assert.assertEquals(kvs[5], results.get(3));
            Assert.assertEquals(kvs[9], results.get(4));
            Assert.assertEquals(kvs[14], results.get(5));
            Assert.assertEquals(kvs[15], results.get(6));
            Assert.assertEquals(7, results.size());
        }
    } finally {
        EnvironmentEdgeManagerTestHelper.reset();
    }
}
Also used: KeyValue (org.apache.hadoop.hbase.KeyValue), EnvironmentEdge (org.apache.hadoop.hbase.util.EnvironmentEdge), ArrayList (java.util.ArrayList), Scan (org.apache.hadoop.hbase.client.Scan), Cell (org.apache.hadoop.hbase.Cell), Test (org.junit.Test)
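Freezing the clock at now is what keeps the expected result list stable: with ttl = 500 in the ScanInfo, a cell is expired exactly when its timestamp falls before now - 500, so kvs[12] (timestamp now - 600) drops out while kvs[0] (now - 100) survives. A hedged sketch of that arithmetic (a paraphrase, not the StoreScanner internals):

import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

class TtlExpirySketch {

    // A cell is past its TTL when it is older than (now - ttl).
    static boolean isExpired(long cellTimestamp, long ttlMillis) {
        long oldestAllowed = EnvironmentEdgeManager.currentTime() - ttlMillis;
        return cellTimestamp < oldestAllowed;
    }

    public static void main(String[] args) {
        long now = EnvironmentEdgeManager.currentTime();
        System.out.println(isExpired(now - 600, 500)); // true:  like kvs[12]
        System.out.println(isExpired(now - 100, 500)); // false: like kvs[0]
    }
}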

Aggregations

EnvironmentEdge (org.apache.hadoop.hbase.util.EnvironmentEdge)4 Test (org.junit.Test)3 ArrayList (java.util.ArrayList)2 TreeMap (java.util.TreeMap)1 UUID (java.util.UUID)1 AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean)1 Configuration (org.apache.hadoop.conf.Configuration)1 FileStatus (org.apache.hadoop.fs.FileStatus)1 FileSystem (org.apache.hadoop.fs.FileSystem)1 Path (org.apache.hadoop.fs.Path)1 Cell (org.apache.hadoop.hbase.Cell)1 CellScanner (org.apache.hadoop.hbase.CellScanner)1 HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor)1 HRegionInfo (org.apache.hadoop.hbase.HRegionInfo)1 HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor)1 KeyValue (org.apache.hadoop.hbase.KeyValue)1 Server (org.apache.hadoop.hbase.Server)1 TableName (org.apache.hadoop.hbase.TableName)1 Get (org.apache.hadoop.hbase.client.Get)1 Put (org.apache.hadoop.hbase.client.Put)1