
Example 1 with TableDescriptors

Use of org.apache.hadoop.hbase.TableDescriptors in project hbase by apache.

From the class TestFSTableDescriptors, method testUpdates.

@Test
public void testUpdates() throws IOException {
    final String name = "testUpdates";
    FileSystem fs = FileSystem.get(UTIL.getConfiguration());
    // Clean up old tests if there is any detritus lying around.
    Path rootdir = new Path(UTIL.getDataTestDir(), name);
    TableDescriptors htds = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir);
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name));
    // Repeated add() calls for the same table are expected to succeed,
    // each one updating the stored descriptor.
    htds.add(htd);
    htds.add(htd);
    htds.add(htd);
}
Also used: Path(org.apache.hadoop.fs.Path) TableDescriptors(org.apache.hadoop.hbase.TableDescriptors) FileSystem(org.apache.hadoop.fs.FileSystem) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) Test(org.junit.Test)
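
The test above only exercises add() and never reads the descriptor back. A minimal sketch of how the stored descriptor could be verified through the same TableDescriptors interface is shown below; the get() lookup and the extra assertions are illustrative additions (assuming the usual org.junit.Assert static imports), not part of the original test.

    // Hypothetical follow-up check: read the descriptor back through the
    // TableDescriptors interface to confirm it was stored.
    HTableDescriptor fetched = htds.get(TableName.valueOf(name));
    assertNotNull(fetched);
    assertEquals(htd, fetched);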

Example 2 with TableDescriptors

Use of org.apache.hadoop.hbase.TableDescriptors in project hbase by apache.

From the class TestFSTableDescriptors, method testRemoves.

@Test
public void testRemoves() throws IOException {
    final String name = this.name.getMethodName();
    FileSystem fs = FileSystem.get(UTIL.getConfiguration());
    // Clean up old tests if there is any detritus lying around.
    Path rootdir = new Path(UTIL.getDataTestDir(), name);
    TableDescriptors htds = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir);
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name));
    htds.add(htd);
    assertNotNull(htds.remove(htd.getTableName()));
    assertNull(htds.remove(htd.getTableName()));
}
Also used: Path(org.apache.hadoop.fs.Path) TableDescriptors(org.apache.hadoop.hbase.TableDescriptors) FileSystem(org.apache.hadoop.fs.FileSystem) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor) Test(org.junit.Test)
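
A natural extension, sketched here as an assumption rather than part of the original test, is to confirm that the removed table can no longer be resolved through the interface:

    // Hypothetical extra assertion: after removal, get() should no longer
    // return a descriptor for the table.
    assertNull(htds.get(htd.getTableName()));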

Example 3 with TableDescriptors

Use of org.apache.hadoop.hbase.TableDescriptors in project hbase by apache.

From the class ExpiredMobFileCleanerChore, method chore.

@Override
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION", justification = "Intentional")
protected void chore() {
    try {
        TableDescriptors htds = master.getTableDescriptors();
        Map<String, HTableDescriptor> map = htds.getAll();
        for (HTableDescriptor htd : map.values()) {
            for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
                if (hcd.isMobEnabled() && hcd.getMinVersions() == 0) {
                    // clean only for mob-enabled column.
                    // obtain a read table lock before cleaning, synchronize with MobFileCompactionChore.
                    final LockManager.MasterLock lock = master.getLockManager().createMasterLock(MobUtils.getTableLockName(htd.getTableName()), LockProcedure.LockType.SHARED, this.getClass().getSimpleName() + ": Cleaning expired mob files");
                    try {
                        lock.acquire();
                        cleaner.cleanExpiredMobFiles(htd.getTableName().getNameAsString(), hcd);
                    } finally {
                        lock.release();
                    }
                }
            }
        }
    } catch (Exception e) {
        LOG.error("Fail to clean the expired mob files", e);
    }
}
Also used: LockManager(org.apache.hadoop.hbase.master.locking.LockManager) TableDescriptors(org.apache.hadoop.hbase.TableDescriptors) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor)
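
The chore only touches column families for which isMobEnabled() returns true (and whose min versions is 0). For context, a MOB-enabled family is declared on the table descriptor itself; the sketch below shows one way such a family could be set up. The table and family names are made up for illustration and the threshold value is arbitrary.

    // Illustrative only: a table descriptor with a MOB-enabled column family.
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("demo_table"));
    HColumnDescriptor hcd = new HColumnDescriptor("cf");
    hcd.setMobEnabled(true);
    // Cell values larger than this threshold (in bytes) are stored as MOB files.
    hcd.setMobThreshold(102400L);
    htd.addFamily(hcd);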

Example 4 with TableDescriptors

Use of org.apache.hadoop.hbase.TableDescriptors in project hbase by apache.

From the class MobCompactionChore, method chore.

@Override
protected void chore() {
    try {
        TableDescriptors htds = master.getTableDescriptors();
        Map<String, HTableDescriptor> map = htds.getAll();
        for (HTableDescriptor htd : map.values()) {
            if (!master.getTableStateManager().isTableState(htd.getTableName(), TableState.State.ENABLED)) {
                continue;
            }
            boolean reported = false;
            try {
                final LockManager.MasterLock lock = master.getLockManager().createMasterLock(MobUtils.getTableLockName(htd.getTableName()), LockProcedure.LockType.EXCLUSIVE, this.getClass().getName() + ": mob compaction");
                for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
                    if (!hcd.isMobEnabled()) {
                        continue;
                    }
                    if (!reported) {
                        master.reportMobCompactionStart(htd.getTableName());
                        reported = true;
                    }
                    MobUtils.doMobCompaction(master.getConfiguration(), master.getFileSystem(), htd.getTableName(), hcd, pool, false, lock);
                }
            } finally {
                if (reported) {
                    master.reportMobCompactionEnd(htd.getTableName());
                }
            }
        }
    } catch (Exception e) {
        LOG.error("Failed to compact mob files", e);
    }
}
Also used: LockManager(org.apache.hadoop.hbase.master.locking.LockManager) TableDescriptors(org.apache.hadoop.hbase.TableDescriptors) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor)

Example 5 with TableDescriptors

Use of org.apache.hadoop.hbase.TableDescriptors in project hbase by apache.

From the class TableStateManager, method start.

public void start() throws IOException {
    TableDescriptors tableDescriptors = master.getTableDescriptors();
    Connection connection = master.getConnection();
    fixTableStates(tableDescriptors, connection);
}
Also used: TableDescriptors(org.apache.hadoop.hbase.TableDescriptors) Connection(org.apache.hadoop.hbase.client.Connection)

Aggregations

TableDescriptors (org.apache.hadoop.hbase.TableDescriptors) 8
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor) 4
FileSystem (org.apache.hadoop.fs.FileSystem) 3
Path (org.apache.hadoop.fs.Path) 3
Test (org.junit.Test) 3
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor) 2
LockManager (org.apache.hadoop.hbase.master.locking.LockManager) 2
IOException (java.io.IOException) 1
RejectedExecutionException (java.util.concurrent.RejectedExecutionException) 1
Connection (org.apache.hadoop.hbase.client.Connection) 1
AssignmentManager (org.apache.hadoop.hbase.master.AssignmentManager) 1
MasterServices (org.apache.hadoop.hbase.master.MasterServices) 1
HRegionServer (org.apache.hadoop.hbase.regionserver.HRegionServer) 1
RegionServerCoprocessorHost (org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost) 1
ReplicationEndpoint (org.apache.hadoop.hbase.replication.ReplicationEndpoint) 1
ReplicationException (org.apache.hadoop.hbase.replication.ReplicationException) 1