
Example 51 with TableName

use of org.apache.hadoop.hbase.TableName in project hbase by apache.

the class MobUtils method doMobCompaction.

/**
   * Performs the mob compaction.
   * @param conf the Configuration
   * @param fs the file system
   * @param tableName the table to compact
   * @param hcd the column descriptor
   * @param pool the thread pool
   * @param allFiles Whether to include all mob files in the compaction.
   * @param lock The master lock to hold while the compaction runs.
   */
public static void doMobCompaction(Configuration conf, FileSystem fs, TableName tableName, HColumnDescriptor hcd, ExecutorService pool, boolean allFiles, LockManager.MasterLock lock) throws IOException {
    String className = conf.get(MobConstants.MOB_COMPACTOR_CLASS_KEY, PartitionedMobCompactor.class.getName());
    // instantiate the mob compactor.
    MobCompactor compactor = null;
    try {
        compactor = ReflectionUtils.instantiateWithCustomCtor(className, new Class[] { Configuration.class, FileSystem.class, TableName.class, HColumnDescriptor.class, ExecutorService.class }, new Object[] { conf, fs, tableName, hcd, pool });
    } catch (Exception e) {
        throw new IOException("Unable to load configured mob file compactor '" + className + "'", e);
    }
    // Acquire the lock before compacting to avoid a race condition with a major compaction on the mob-enabled column.
    try {
        lock.acquire();
        compactor.compact(allFiles);
    } catch (Exception e) {
        LOG.error("Failed to compact the mob files for the column " + hcd.getNameAsString() + " in the table " + tableName.getNameAsString(), e);
    } finally {
        lock.release();
    }
}
Also used : TableName(org.apache.hadoop.hbase.TableName) Configuration(org.apache.hadoop.conf.Configuration) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) FileSystem(org.apache.hadoop.fs.FileSystem) ExecutorService(java.util.concurrent.ExecutorService) MobCompactor(org.apache.hadoop.hbase.mob.compactions.MobCompactor) PartitionedMobCompactor(org.apache.hadoop.hbase.mob.compactions.PartitionedMobCompactor) IOException(java.io.IOException) ParseException(java.text.ParseException) FileNotFoundException(java.io.FileNotFoundException) RejectedExecutionException(java.util.concurrent.RejectedExecutionException)
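
A hedged usage sketch, not part of the HBase sources: it assumes the caller has already obtained a LockManager.MasterLock for the table from the master's LockManager (here passed in as masterLock), and that the thread-pool size is arbitrary.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.master.locking.LockManager;
import org.apache.hadoop.hbase.mob.MobUtils;

public static void compactAllMobFiles(Configuration conf, TableName tableName,
        HColumnDescriptor hcd, LockManager.MasterLock masterLock) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
        // allFiles = true: include every mob file of the family in this compaction
        MobUtils.doMobCompaction(conf, fs, tableName, hcd, pool, true, masterLock);
    } finally {
        pool.shutdown();
    }
}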

Example 52 with TableName

use of org.apache.hadoop.hbase.TableName in project hbase by apache.

the class RegionServerQuotaManager method checkQuota.

/**
   * Check the quota for the current (rpc-context) user.
   * Returns the OperationQuota used to get the available quota and
   * to report the data/usage of the operation.
   * @param region the region where the operation will be performed
   * @param numWrites number of writes to perform
   * @param numReads number of short-reads to perform
   * @param numScans number of scans to perform
   * @return the OperationQuota
   * @throws ThrottlingException if the operation cannot be executed because the quota has been exceeded.
   */
private OperationQuota checkQuota(final Region region, final int numWrites, final int numReads, final int numScans) throws IOException, ThrottlingException {
    User user = RpcServer.getRequestUser();
    UserGroupInformation ugi;
    if (user != null) {
        ugi = user.getUGI();
    } else {
        ugi = User.getCurrent().getUGI();
    }
    TableName table = region.getTableDesc().getTableName();
    OperationQuota quota = getQuota(ugi, table);
    try {
        quota.checkQuota(numWrites, numReads, numScans);
    } catch (ThrottlingException e) {
        LOG.debug("Throttling exception for user=" + ugi.getUserName() + " table=" + table + " numWrites=" + numWrites + " numReads=" + numReads + " numScans=" + numScans + ": " + e.getMessage());
        throw e;
    }
    return quota;
}
Also used : TableName(org.apache.hadoop.hbase.TableName) User(org.apache.hadoop.hbase.security.User) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation)
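
This checkQuota overload is private; region-server callers go through the public checkQuota overloads and then report actual usage on the returned OperationQuota. A hedged sketch of that pattern, assuming quotaManager, region and a prepared Get named get are already in scope:

// Reserve quota for one short read, perform it, report the real size, then release.
OperationQuota quota = quotaManager.checkQuota(region, OperationQuota.OperationType.GET);
try {
    Result result = region.get(get);   // 'get' is a prepared Get for a single row
    quota.addGetResult(result);        // account for the data actually returned
} finally {
    quota.close();                     // release any unused portion of the reservation
}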

Example 53 with TableName

use of org.apache.hadoop.hbase.TableName in project hbase by apache.

the class ExpiredMobFileCleaner method cleanExpiredMobFiles.

/**
   * Cleans the MOB files when they're expired and their min versions are 0.
   * If the latest timestamp of Cells in a MOB file is older than the TTL in the column family,
   * it's regarded as expired. This cleaner deletes them.
   * For example, suppose the cells in a MOB file M0 expire at time T0. A scan started before T0
   * can still see those MOB cells even if it keeps running past T0. If the cleaner runs at a later
   * time T1, M0 is archived rather than deleted outright, so the scan can still read it from the
   * archive directory.
   * @param tableName The current table name.
   * @param family The current family.
   */
public void cleanExpiredMobFiles(String tableName, HColumnDescriptor family) throws IOException {
    Configuration conf = getConf();
    TableName tn = TableName.valueOf(tableName);
    FileSystem fs = FileSystem.get(conf);
    LOG.info("Cleaning the expired MOB files of " + family.getNameAsString() + " in " + tableName);
    // disable the block cache.
    Configuration copyOfConf = new Configuration(conf);
    copyOfConf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
    CacheConfig cacheConfig = new CacheConfig(copyOfConf);
    MobUtils.cleanExpiredMobFiles(fs, conf, tn, family, cacheConfig, EnvironmentEdgeManager.currentTime());
}
Also used : TableName(org.apache.hadoop.hbase.TableName) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem) CacheConfig(org.apache.hadoop.hbase.io.hfile.CacheConfig)
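
A hedged sketch of calling this method directly, outside the command-line tool; the table and family names are placeholders, and admin is assumed to be an open Admin instance.

Configuration conf = HBaseConfiguration.create();
ExpiredMobFileCleaner cleaner = new ExpiredMobFileCleaner();
cleaner.setConf(conf);   // the cleaner reads its Configuration via getConf()
HColumnDescriptor family = admin.getTableDescriptor(TableName.valueOf("example_mob_table"))
        .getFamily(Bytes.toBytes("f"));   // placeholder family name
// Only MOB-enabled families with minVersions == 0 are eligible, as enforced by run() below.
if (family != null && family.isMobEnabled() && family.getMinVersions() == 0) {
    cleaner.cleanExpiredMobFiles("example_mob_table", family);
}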

Example 54 with TableName

use of org.apache.hadoop.hbase.TableName in project hbase by apache.

the class ExpiredMobFileCleaner method run.

@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION", justification = "Intentional")
public int run(String[] args) throws Exception {
    if (args.length != 2) {
        printUsage();
        return 1;
    }
    String tableName = args[0];
    String familyName = args[1];
    TableName tn = TableName.valueOf(tableName);
    HBaseAdmin.available(getConf());
    Connection connection = ConnectionFactory.createConnection(getConf());
    Admin admin = connection.getAdmin();
    try {
        HTableDescriptor htd = admin.getTableDescriptor(tn);
        HColumnDescriptor family = htd.getFamily(Bytes.toBytes(familyName));
        if (family == null || !family.isMobEnabled()) {
            throw new IOException("Column family " + familyName + " is not a MOB column family");
        }
        if (family.getMinVersions() > 0) {
            throw new IOException("The minVersions of the column family is not 0, could not be handled by this cleaner");
        }
        cleanExpiredMobFiles(tableName, family);
        return 0;
    } finally {
        try {
            admin.close();
        } catch (IOException e) {
            LOG.error("Failed to close the HBaseAdmin.", e);
        }
        try {
            connection.close();
        } catch (IOException e) {
            LOG.error("Failed to close the connection.", e);
        }
    }
}
Also used : TableName(org.apache.hadoop.hbase.TableName) HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor) Connection(org.apache.hadoop.hbase.client.Connection) IOException(java.io.IOException) HBaseAdmin(org.apache.hadoop.hbase.client.HBaseAdmin) Admin(org.apache.hadoop.hbase.client.Admin) HTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor)
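
A hedged sketch of launching the same tool programmatically through Hadoop's ToolRunner; the argument values are placeholders.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
import org.apache.hadoop.util.ToolRunner;

public static void main(String[] args) throws Exception {
    // Arguments to the cleaner: <tableName> <familyName>
    int exitCode = ToolRunner.run(HBaseConfiguration.create(),
        new ExpiredMobFileCleaner(), new String[] { "example_mob_table", "f" });
    System.exit(exitCode);
}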

Example 55 with TableName

use of org.apache.hadoop.hbase.TableName in project hbase by apache.

the class TruncateTableProcedure method preTruncate.

private boolean preTruncate(final MasterProcedureEnv env) throws IOException, InterruptedException {
    final MasterCoprocessorHost cpHost = env.getMasterCoprocessorHost();
    if (cpHost != null) {
        final TableName tableName = getTableName();
        cpHost.preTruncateTableAction(tableName, getUser());
    }
    return true;
}
Also used : TableName(org.apache.hadoop.hbase.TableName) MasterCoprocessorHost(org.apache.hadoop.hbase.master.MasterCoprocessorHost)
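
For context, a hedged sketch of what triggers this code path from the client side: truncating a table through the Admin API schedules TruncateTableProcedure, and preTruncate() gives master coprocessors a chance to intervene before any data is removed. Here connection is assumed to be an open Connection and the table name is a placeholder.

Admin admin = connection.getAdmin();
try {
    TableName tn = TableName.valueOf("example_table");   // placeholder name
    admin.disableTable(tn);                               // the table must be disabled first
    admin.truncateTable(tn, true);                        // true = preserve region split points
} finally {
    admin.close();
}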

Aggregations

TableName (org.apache.hadoop.hbase.TableName): 1033
Test (org.junit.Test): 695
HTableDescriptor (org.apache.hadoop.hbase.HTableDescriptor): 257
Table (org.apache.hadoop.hbase.client.Table): 228
IOException (java.io.IOException): 225
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo): 215
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor): 203
Result (org.apache.hadoop.hbase.client.Result): 125
ArrayList (java.util.ArrayList): 120
Put (org.apache.hadoop.hbase.client.Put): 118
Path (org.apache.hadoop.fs.Path): 113
Connection (org.apache.hadoop.hbase.client.Connection): 103
Scan (org.apache.hadoop.hbase.client.Scan): 98
ResultScanner (org.apache.hadoop.hbase.client.ResultScanner): 89
ServerName (org.apache.hadoop.hbase.ServerName): 85
Admin (org.apache.hadoop.hbase.client.Admin): 85
Cell (org.apache.hadoop.hbase.Cell): 77
HashMap (java.util.HashMap): 75
Delete (org.apache.hadoop.hbase.client.Delete): 66
InterruptedIOException (java.io.InterruptedIOException): 63