Use of org.apache.hadoop.hbase.util.FSTableDescriptors in project hbase by apache.
From the class HBaseTestCase, method createMetaRegion().
/**
* You must call {@link #closeRootAndMeta()} when done after calling this
* method. It does cleanup.
* @throws IOException
*/
protected void createMetaRegion() throws IOException {
  FSTableDescriptors fsTableDescriptors = new FSTableDescriptors(conf);
  meta = HBaseTestingUtility.createRegionAndWAL(HRegionInfo.FIRST_META_REGIONINFO, testDir, conf,
    fsTableDescriptors.get(TableName.META_TABLE_NAME));
}
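A minimal usage sketch follows (the subclass and test body are hypothetical, not from the source); per the javadoc contract above, every call to createMetaRegion() must be paired with closeRootAndMeta().

// Hypothetical subclass of HBaseTestCase; only createMetaRegion() and
// closeRootAndMeta() come from the snippet's javadoc contract.
public class MetaRegionUsageTest extends HBaseTestCase {
  public void testWithMetaRegion() throws IOException {
    createMetaRegion();     // populates the inherited 'meta' HRegion field
    try {
      // ... exercise code that needs the hbase:meta region here ...
    } finally {
      closeRootAndMeta();   // required cleanup after createMetaRegion()
    }
  }
}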
Use of org.apache.hadoop.hbase.util.FSTableDescriptors in project hbase by apache.
From the class SnapshotManifest, method consolidate().
public void consolidate() throws IOException {
  if (getSnapshotFormat(desc) == SnapshotManifestV1.DESCRIPTOR_VERSION) {
    Path rootDir = FSUtils.getRootDir(conf);
    LOG.info("Using old Snapshot Format");
    // write a copy of descriptor to the snapshot directory
    new FSTableDescriptors(conf, fs, rootDir)
      .createTableDescriptorForTableDirectory(workingDir, htd, false);
  } else {
    LOG.debug("Convert to Single Snapshot Manifest");
    convertToV2SingleManifest();
  }
}
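For context, here is a standalone sketch of the descriptor-copy step performed by the V1 branch (the helper name, parameters, and target directory are illustrative assumptions, not from the source).

// Hypothetical helper: copies a table descriptor into a target directory,
// mirroring the V1 branch above. The final 'false' argument means an existing
// descriptor file in that directory is not overwritten.
void copyDescriptorToDirectory(Configuration conf, FileSystem fs, Path targetDir, TableDescriptor htd)
    throws IOException {
  Path rootDir = FSUtils.getRootDir(conf);
  new FSTableDescriptors(conf, fs, rootDir)
    .createTableDescriptorForTableDirectory(targetDir, htd, false);
}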
Use of org.apache.hadoop.hbase.util.FSTableDescriptors in project hbase by apache.
From the class TestHRegionInfo, method testReadAndWriteHRegionInfoFile().
@Test
public void testReadAndWriteHRegionInfoFile() throws IOException, InterruptedException {
  HBaseTestingUtility htu = new HBaseTestingUtility();
  HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO;
  Path basedir = htu.getDataTestDir();
  // Create a region. That'll write the .regioninfo file.
  FSTableDescriptors fsTableDescriptors = new FSTableDescriptors(htu.getConfiguration());
  HRegion r = HBaseTestingUtility.createRegionAndWAL(hri, basedir, htu.getConfiguration(),
    fsTableDescriptors.get(TableName.META_TABLE_NAME));
  // Get modtime on the file.
  long modtime = getModTime(r);
  HBaseTestingUtility.closeRegionAndWAL(r);
  Thread.sleep(1001);
  r = HRegion.openHRegion(basedir, hri, fsTableDescriptors.get(TableName.META_TABLE_NAME), null,
    htu.getConfiguration());
  // Ensure the file is not written for a second time.
  long modtime2 = getModTime(r);
  assertEquals(modtime, modtime2);
  // Now load the file.
  HRegionInfo deserializedHri = HRegionFileSystem.loadRegionInfoFileContent(
    r.getRegionFileSystem().getFileSystem(), r.getRegionFileSystem().getRegionDir());
  assertTrue(hri.equals(deserializedHri));
  HBaseTestingUtility.closeRegionAndWAL(r);
}
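The getModTime(r) helper is not shown in this excerpt; a plausible sketch (an assumption, not necessarily the test's actual helper) reads the modification time of the region's .regioninfo file.

// Assumed helper: list the .regioninfo file under the region directory and
// return its filesystem modification time.
private long getModTime(final HRegion r) throws IOException {
  FileStatus[] statuses = r.getRegionFileSystem().getFileSystem().listStatus(
    new Path(r.getRegionFileSystem().getRegionDir(), HRegionFileSystem.REGION_INFO_FILE));
  assertEquals(1, statuses.length);
  return statuses[0].getModificationTime();
}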
Use of org.apache.hadoop.hbase.util.FSTableDescriptors in project hbase by apache.
From the class TestFSTableDescriptorForceCreation, method testShouldCreateNewTableDescriptorIfForcefulCreationIsFalse().
@Test
public void testShouldCreateNewTableDescriptorIfForcefulCreationIsFalse() throws IOException {
  final String name = this.name.getMethodName();
  FileSystem fs = FileSystem.get(UTIL.getConfiguration());
  Path rootdir = new Path(UTIL.getDataTestDir(), name);
  FSTableDescriptors fstd = new FSTableDescriptors(fs, rootdir);
  assertTrue("Should create new table descriptor",
    fstd.createTableDescriptor(TableDescriptorBuilder.newBuilder(TableName.valueOf(name)).build(), false));
}
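A natural follow-up check (an added sketch, not part of the original test) is to read the freshly written descriptor back through the same FSTableDescriptors instance.

// Added sketch: the descriptor just created should be retrievable by table name.
TableDescriptor readBack = fstd.get(TableName.valueOf(name));
assertEquals(TableName.valueOf(name), readBack.getTableName());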
Use of org.apache.hadoop.hbase.util.FSTableDescriptors in project hbase by apache.
From the class TestFSTableDescriptorForceCreation, method testShouldAllowForcefulCreationOfAlreadyExistingTableDescriptor().
@Test
public void testShouldAllowForcefulCreationOfAlreadyExistingTableDescriptor() throws Exception {
  final String name = this.name.getMethodName();
  FileSystem fs = FileSystem.get(UTIL.getConfiguration());
  Path rootdir = new Path(UTIL.getDataTestDir(), name);
  FSTableDescriptors fstd = new FSTableDescriptors(fs, rootdir);
  TableDescriptor htd = TableDescriptorBuilder.newBuilder(TableName.valueOf(name)).build();
  fstd.createTableDescriptor(htd, false);
  assertTrue("Should create new table descriptor", fstd.createTableDescriptor(htd, true));
}
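The complementary case (an added sketch, not part of the original test as excerpted): once the descriptor exists, a non-forceful create is expected to leave it untouched and return false.

// Added sketch: with forceCreation == false, an existing descriptor is not overwritten.
assertFalse("Should not overwrite an existing table descriptor without force",
  fstd.createTableDescriptor(htd, false));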