Example 1 with FinishRegionRecoveringHandler

Use of org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler in the Apache HBase project.

From the class RecoveringRegionWatcher, method nodeDeleted:

/**
 * Called when a node has been deleted.
 * @param path full path of the deleted node
 */
@Override
public void nodeDeleted(String path) {
    if (this.server.isStopped() || this.server.isStopping()) {
        return;
    }
    // derive the parent znode path and the leaf (region) name from the deleted path
    String parentPath = path.substring(0, path.lastIndexOf('/'));
    if (!this.watcher.znodePaths.recoveringRegionsZNode.equalsIgnoreCase(parentPath)) {
        // the deleted node is not under the recovering-regions znode; ignore it
        return;
    }
    String regionName = path.substring(parentPath.length() + 1);
    server.getExecutorService().submit(new FinishRegionRecoveringHandler(server, regionName, path));
}
Also used: FinishRegionRecoveringHandler(org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler)
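
The handler here is driven purely by znode path structure: the parent path and the leaf (region) name fall out of two substring calls. Below is a minimal, self-contained sketch of just that parsing step; the class name and the example path are invented for illustration and are not HBase API.

public class ZNodePathDemo {

    /** Splits a full znode path into its parent path and its leaf name. */
    static String[] splitZNodePath(String path) {
        String parent = path.substring(0, path.lastIndexOf('/'));
        String leaf = path.substring(parent.length() + 1);
        return new String[] { parent, leaf };
    }

    public static void main(String[] args) {
        // illustrative path only; not a guaranteed HBase znode layout
        String[] parts = splitZNodePath("/hbase/recovering-regions/1588230740");
        System.out.println("parent = " + parts[0]); // /hbase/recovering-regions
        System.out.println("region = " + parts[1]); // 1588230740
    }
}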

Example 2 with FinishRegionRecoveringHandler

Use of org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler in the Apache HBase project.

From the class ZkSplitLogWorkerCoordination, method taskLoop:

/**
 * Wait for tasks to become available at the /hbase/splitlog znode. Grab a task one at a time.
 * This policy puts an upper limit on the number of simultaneous log-splitting tasks that can be
 * running in a cluster.
 * <p>
 * Synchronization using <code>taskReadyLock</code> ensures that the worker will try to grab
 * every task that has been put up.
 * @throws InterruptedException
 */
@Override
public void taskLoop() throws InterruptedException {
    while (!shouldStop) {
        int seq_start = taskReadySeq.get();
        List<String> paths = getTaskList();
        if (paths == null) {
            LOG.warn("Could not get tasks, did someone remove " + watcher.znodePaths.splitLogZNode + " ... worker thread exiting.");
            return;
        }
        // start at a random offset so competing workers spread across the task list,
        // but prefer the meta WAL if one is present
        int offset = (int) (Math.random() * paths.size());
        for (int i = 0; i < paths.size(); i++) {
            if (AbstractFSWALProvider.isMetaFile(paths.get(i))) {
                offset = i;
                break;
            }
        }
        int numTasks = paths.size();
        for (int i = 0; i < numTasks; i++) {
            int idx = (i + offset) % paths.size();
            // don't call ZKSplitLog.getNodeName() here; that would double-encode the path name
            if (this.calculateAvailableSplitters(numTasks) > 0) {
                grabTask(ZKUtil.joinZNode(watcher.znodePaths.splitLogZNode, paths.get(idx)));
            } else {
                LOG.debug("Current region server " + server.getServerName() + " has " + this.tasksInProgress.get() + " tasks in progress and can't take more.");
                break;
            }
            if (shouldStop) {
                return;
            }
        }
        SplitLogCounters.tot_wkr_task_grabing.incrementAndGet();
        synchronized (taskReadyLock) {
            while (seq_start == taskReadySeq.get()) {
                taskReadyLock.wait(checkInterval);
                if (server != null) {
                    // check to see if we have stale recovering regions in our internal memory state
                    Map<String, Region> recoveringRegions = server.getRecoveringRegions();
                    if (!recoveringRegions.isEmpty()) {
                        // Make a local copy to prevent ConcurrentModificationException when other threads
                        // modify recoveringRegions
                        List<String> tmpCopy = new ArrayList<>(recoveringRegions.keySet());
                        int listSize = tmpCopy.size();
                        for (int i = 0; i < listSize; i++) {
                            String region = tmpCopy.get(i);
                            String nodePath = ZKUtil.joinZNode(watcher.znodePaths.recoveringRegionsZNode, region);
                            try {
                                if (ZKUtil.checkExists(watcher, nodePath) == -1) {
                                    server.getExecutorService().submit(new FinishRegionRecoveringHandler(server, region, nodePath));
                                } else {
                                    // checking the first one is good enough.
                                    break;
                                }
                            } catch (KeeperException e) {
                                // ignore ZooKeeper errors
                                LOG.debug("Got a ZooKeeper exception when trying to open a recovering region", e);
                                break;
                            }
                        }
                    }
                }
            }
        }
    }
}
Also used: FinishRegionRecoveringHandler(org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler) ArrayList(java.util.ArrayList) Region(org.apache.hadoop.hbase.regionserver.Region) KeeperException(org.apache.zookeeper.KeeperException)
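
The task-selection policy in taskLoop is worth isolating: each worker starts at a random offset so that competing workers spread across the task list, but a meta WAL, when present, overrides the random choice. The sketch below is a minimal, self-contained rendering of that policy; the class name is invented, and isMetaTask (with its ".meta" suffix check) is an assumed stand-in for AbstractFSWALProvider.isMetaFile.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class TaskPickDemo {

    // Assumed stand-in for AbstractFSWALProvider.isMetaFile().
    static boolean isMetaTask(String task) {
        return task.endsWith(".meta");
    }

    /**
     * Returns the index to start grabbing tasks at: a random offset spreads
     * competing workers across the list, but a meta task always wins.
     */
    static int pickStartOffset(List<String> tasks) {
        int offset = ThreadLocalRandom.current().nextInt(tasks.size());
        for (int i = 0; i < tasks.size(); i++) {
            if (isMetaTask(tasks.get(i))) {
                return i; // the meta WAL takes priority over the random choice
            }
        }
        return offset;
    }

    public static void main(String[] args) {
        List<String> tasks = List.of("wal.1", "wal.2.meta", "wal.3");
        int start = pickStartOffset(tasks);
        // walk every task exactly once, starting at the chosen offset and wrapping around
        for (int i = 0; i < tasks.size(); i++) {
            System.out.println(tasks.get((i + start) % tasks.size()));
        }
    }
}

The sequence-guarded wait at the end of taskLoop serves a different purpose: the worker sleeps on taskReadyLock only while taskReadySeq still equals the value sampled at the top of the iteration, so a task posted during the scan bumps the sequence and sends the loop around for another pass instead of blocking.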

Example 3 with FinishRegionRecoveringHandler

Use of org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler in the Apache HBase project.

From the class TestHRegion, method testOpenRegionWrittenToWALForLogReplay:

@Test
public void testOpenRegionWrittenToWALForLogReplay() throws Exception {
    // similar to the above test but with distributed log replay
    final ServerName serverName = ServerName.valueOf(name.getMethodName(), 100, 42);
    final RegionServerServices rss = spy(TEST_UTIL.createMockRegionServerService(serverName));
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name.getMethodName()));
    htd.addFamily(new HColumnDescriptor(fam1));
    htd.addFamily(new HColumnDescriptor(fam2));
    HRegionInfo hri = new HRegionInfo(htd.getTableName(), HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY);
    // open the region w/o rss and wal and flush some files
    HRegion region = HBaseTestingUtility.createRegionAndWAL(hri, TEST_UTIL.getDataTestDir(), TEST_UTIL.getConfiguration(), htd);
    assertNotNull(region);
    // create a file in fam1 for the region before opening in OpenRegionHandler
    region.put(new Put(Bytes.toBytes("a")).addColumn(fam1, fam1, fam1));
    region.flush(true);
    HBaseTestingUtility.closeRegionAndWAL(region);
    ArgumentCaptor<WALEdit> editCaptor = ArgumentCaptor.forClass(WALEdit.class);
    // capture append() calls
    WAL wal = mockWAL();
    when(rss.getWAL((HRegionInfo) any())).thenReturn(wal);
    // add the region to recovering regions
    HashMap<String, Region> recoveringRegions = Maps.newHashMap();
    recoveringRegions.put(region.getRegionInfo().getEncodedName(), null);
    when(rss.getRecoveringRegions()).thenReturn(recoveringRegions);
    try {
        Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
        conf.set(HConstants.REGION_IMPL, HRegionWithSeqId.class.getName());
        region = HRegion.openHRegion(hri, htd, rss.getWAL(hri), conf, rss, null);
        // verify that we have not appended region open event to WAL because this region is still
        // recovering
        verify(wal, times(0)).append((HRegionInfo) any(), (WALKey) any(), editCaptor.capture(), anyBoolean());
        // now put the region out of recovering state
        new FinishRegionRecoveringHandler(rss, region.getRegionInfo().getEncodedName(), "/foo").prepare().process();
        // now we should have put the entry
        verify(wal, times(1)).append((HRegionInfo) any(), (WALKey) any(), editCaptor.capture(), anyBoolean());
        WALEdit edit = editCaptor.getValue();
        assertNotNull(edit);
        assertNotNull(edit.getCells());
        assertEquals(1, edit.getCells().size());
        RegionEventDescriptor desc = WALEdit.getRegionEventDescriptor(edit.getCells().get(0));
        assertNotNull(desc);
        LOG.info("RegionEventDescriptor from WAL: " + desc);
        assertEquals(RegionEventDescriptor.EventType.REGION_OPEN, desc.getEventType());
        assertTrue(Bytes.equals(desc.getTableName().toByteArray(), htd.getName()));
        assertTrue(Bytes.equals(desc.getEncodedRegionName().toByteArray(), hri.getEncodedNameAsBytes()));
        assertTrue(desc.getLogSequenceNumber() > 0);
        assertEquals(serverName, ProtobufUtil.toServerName(desc.getServer()));
        assertEquals(2, desc.getStoresCount());
        StoreDescriptor store = desc.getStores(0);
        assertTrue(Bytes.equals(store.getFamilyName().toByteArray(), fam1));
        assertEquals(store.getStoreHomeDir(), Bytes.toString(fam1));
        // 1 store file
        assertEquals(1, store.getStoreFileCount());
        // ensure path is relative
        assertFalse(store.getStoreFile(0).contains("/"));
        store = desc.getStores(1);
        assertTrue(Bytes.equals(store.getFamilyName().toByteArray(), fam2));
        assertEquals(store.getStoreHomeDir(), Bytes.toString(fam2));
        // no store files
        assertEquals(0, store.getStoreFileCount());
    } finally {
        HBaseTestingUtility.closeRegionAndWAL(region);
    }
}
Also used: WAL(org.apache.hadoop.hbase.wal.WAL) MetricsWAL(org.apache.hadoop.hbase.regionserver.wal.MetricsWAL) Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) ByteString(org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString) Put(org.apache.hadoop.hbase.client.Put) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) StoreDescriptor(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.StoreDescriptor) HRegionInfo(org.apache.hadoop.hbase.HRegionInfo) WALEdit(org.apache.hadoop.hbase.regionserver.wal.WALEdit) FinishRegionRecoveringHandler(org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler) ServerName(org.apache.hadoop.hbase.ServerName) RegionEventDescriptor(org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor) Test(org.junit.Test)
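
The WAL assertions in this test lean on a standard Mockito pattern: verify with times(0) before the event, then verify again with times(1) plus an ArgumentCaptor to inspect the captured argument. Below is a minimal, self-contained sketch of that verify-then-capture pattern; the Greeter interface is invented for illustration and simply plays the role of the mocked WAL.

import static org.mockito.Mockito.*;

import org.mockito.ArgumentCaptor;

public class CaptorDemo {

    // Invented for this sketch; stands in for the mocked collaborator.
    interface Greeter {
        void greet(String name);
    }

    public static void main(String[] args) {
        Greeter greeter = mock(Greeter.class);

        // before the event: the method must not have been invoked yet
        verify(greeter, times(0)).greet(anyString());

        greeter.greet("hbase");

        // after the event: capture the argument and inspect it
        ArgumentCaptor<String> captor = ArgumentCaptor.forClass(String.class);
        verify(greeter, times(1)).greet(captor.capture());
        System.out.println("captured: " + captor.getValue()); // prints: captured: hbase
    }
}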

Aggregations

FinishRegionRecoveringHandler (org.apache.hadoop.hbase.regionserver.handler.FinishRegionRecoveringHandler): 3 uses
ArrayList (java.util.ArrayList): 1 use
Configuration (org.apache.hadoop.conf.Configuration): 1 use
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration): 1 use
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 1 use
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo): 1 use
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 1 use
ServerName (org.apache.hadoop.hbase.ServerName): 1 use
Put (org.apache.hadoop.hbase.client.Put): 1 use
Region (org.apache.hadoop.hbase.regionserver.Region): 1 use
MetricsWAL (org.apache.hadoop.hbase.regionserver.wal.MetricsWAL): 1 use
WALEdit (org.apache.hadoop.hbase.regionserver.wal.WALEdit): 1 use
ByteString (org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString): 1 use
RegionEventDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor): 1 use
StoreDescriptor (org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.StoreDescriptor): 1 use
WAL (org.apache.hadoop.hbase.wal.WAL): 1 use
KeeperException (org.apache.zookeeper.KeeperException): 1 use
Test (org.junit.Test): 1 use