
Example 6 with HFileSystem

Use of org.apache.hadoop.hbase.fs.HFileSystem in project hbase by apache.

From the class TestHRegionFileSystem, the method getHRegionFS:

private HRegionFileSystem getHRegionFS(HTable table, Configuration conf) throws IOException {
    FileSystem fs = TEST_UTIL.getDFSCluster().getFileSystem();
    Path tableDir = FSUtils.getTableDir(TEST_UTIL.getDefaultRootDirPath(), table.getName());
    List<Path> regionDirs = FSUtils.getRegionDirs(fs, tableDir);
    assertEquals(1, regionDirs.size());
    List<Path> familyDirs = FSUtils.getFamilyDirs(fs, regionDirs.get(0));
    assertEquals(2, familyDirs.size());
    HRegionInfo hri = table.getRegionLocator().getAllRegionLocations().get(0).getRegionInfo();
    HRegionFileSystem regionFs = new HRegionFileSystem(conf, new HFileSystem(fs), tableDir, hri);
    return regionFs;
}
Also used : Path(org.apache.hadoop.fs.Path) HRegionInfo(org.apache.hadoop.hbase.HRegionInfo) FileSystem(org.apache.hadoop.fs.FileSystem) HFileSystem(org.apache.hadoop.hbase.fs.HFileSystem)
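The assertions in getHRegionFS rely on HBase's on-disk layout: one directory per region under the table directory, and one subdirectory per column family under each region directory. A minimal stand-alone sketch of walking that layout (plain java.nio.file, no HBase dependencies; the directory names and the childDirs helper are illustrative, not HBase APIs):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class RegionLayout {
    // Hypothetical stand-in for FSUtils.getRegionDirs/getFamilyDirs:
    // list the immediate subdirectories of a directory.
    static List<Path> childDirs(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.filter(Files::isDirectory).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build tableDir/<region>/<family> locally to mirror the HDFS layout.
        Path tableDir = Files.createTempDirectory("tableDir");
        Path regionDir = Files.createDirectory(tableDir.resolve("region-0"));
        Files.createDirectory(regionDir.resolve("cf1"));
        Files.createDirectory(regionDir.resolve("cf2"));

        List<Path> regionDirs = childDirs(tableDir);
        List<Path> familyDirs = childDirs(regionDirs.get(0));
        // Mirrors the test's assertions: one region, two families.
        System.out.println(regionDirs.size() + " " + familyDirs.size()); // prints "1 2"
    }
}
```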

Example 7 with HFileSystem

Use of org.apache.hadoop.hbase.fs.HFileSystem in project hbase by apache.

From the class HRegionFileSystem, the method bulkLoadStoreFile:

/**
   * Bulk load: Add a specified store file to the specified family.
   * If the source file is on the same file-system as the destination, it is
   * moved from the source location to the destination location; otherwise it is copied over.
   *
   * @param familyName Family that will gain the file
   * @param srcPath {@link Path} to the file to import
   * @param seqNum Bulk Load sequence number
   * @return The destination {@link Path} of the bulk loaded file
   * @throws IOException
   */
Pair<Path, Path> bulkLoadStoreFile(final String familyName, Path srcPath, long seqNum) throws IOException {
    // Copy the file if it's on another filesystem
    FileSystem srcFs = srcPath.getFileSystem(conf);
    srcPath = srcFs.resolvePath(srcPath);
    FileSystem realSrcFs = srcPath.getFileSystem(conf);
    FileSystem desFs = fs instanceof HFileSystem ? ((HFileSystem) fs).getBackingFs() : fs;
    // TODO deal with viewFS
    if (!FSHDFSUtils.isSameHdfs(conf, realSrcFs, desFs)) {
        LOG.info("Bulk-load file " + srcPath + " is on different filesystem than " + "the destination store. Copying file over to destination filesystem.");
        Path tmpPath = createTempName();
        FileUtil.copy(realSrcFs, srcPath, fs, tmpPath, false, conf);
        LOG.info("Copied " + srcPath + " to temporary path on destination filesystem: " + tmpPath);
        srcPath = tmpPath;
    }
    return new Pair<>(srcPath, preCommitStoreFile(familyName, srcPath, seqNum, true));
}
Also used : Path(org.apache.hadoop.fs.Path) FileSystem(org.apache.hadoop.fs.FileSystem) HFileSystem(org.apache.hadoop.hbase.fs.HFileSystem) Pair(org.apache.hadoop.hbase.util.Pair)
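The copy-versus-move decision above hinges on FSHDFSUtils.isSameHdfs. A minimal sketch of the underlying idea, comparing the scheme and authority of the two filesystem URIs (sameFileSystem is a hypothetical helper, not an HBase API; the real check also handles HA nameservice resolution):

```java
import java.net.URI;

public class FsCompare {
    // Hypothetical helper: treat two filesystem URIs as the same filesystem
    // when their scheme and authority match. FSHDFSUtils.isSameHdfs does
    // more work (canonicalizing HA nameservice IDs, resolving ports).
    static boolean sameFileSystem(URI a, URI b) {
        if (!a.getScheme().equalsIgnoreCase(b.getScheme())) {
            return false;
        }
        String authA = a.getAuthority() == null ? "" : a.getAuthority();
        String authB = b.getAuthority() == null ? "" : b.getAuthority();
        return authA.equalsIgnoreCase(authB);
    }

    public static void main(String[] args) {
        URI src = URI.create("hdfs://nn1:8020/staging/hfile");
        URI dst = URI.create("hdfs://nn2:8020/hbase/data");
        // Different authority: the bulk-load path would copy, not move.
        System.out.println(sameFileSystem(src, dst)); // prints "false"
        // Same scheme and authority: the file can simply be moved.
        System.out.println(sameFileSystem(src, URI.create("hdfs://nn1:8020/hbase"))); // prints "true"
    }
}
```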

Aggregations

HFileSystem (org.apache.hadoop.hbase.fs.HFileSystem) 7
Path (org.apache.hadoop.fs.Path) 6
FileSystem (org.apache.hadoop.fs.FileSystem) 4
Test (org.junit.Test) 3
IOException (java.io.IOException) 2
CacheConfig (org.apache.hadoop.hbase.io.hfile.CacheConfig) 2
HFileContext (org.apache.hadoop.hbase.io.hfile.HFileContext) 2
HFileContextBuilder (org.apache.hadoop.hbase.io.hfile.HFileContextBuilder) 2
ByteArrayInputStream (java.io.ByteArrayInputStream) 1
DataInputStream (java.io.DataInputStream) 1
DataOutputStream (java.io.DataOutputStream) 1
Configuration (org.apache.hadoop.conf.Configuration) 1
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream) 1
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream) 1
FileStatus (org.apache.hadoop.fs.FileStatus) 1
FilterFileSystem (org.apache.hadoop.fs.FilterFileSystem) 1
HBaseTestingUtility (org.apache.hadoop.hbase.HBaseTestingUtility) 1
HColumnDescriptor (org.apache.hadoop.hbase.HColumnDescriptor) 1
HRegionInfo (org.apache.hadoop.hbase.HRegionInfo) 1
Admin (org.apache.hadoop.hbase.client.Admin) 1