Example 36 with FsShell

Use of org.apache.hadoop.fs.FsShell in the Apache Hadoop project, from the class TestDistCpSyncReverseBase, method testSync4.

/**
   * Test a case where multiple level dirs are renamed.
   */
@Test
public void testSync4() throws Exception {
    if (isSrcNotSameAsTgt) {
        initData4(source);
    }
    initData4(target);
    enableAndCreateFirstSnapshot();
    final FsShell shell = new FsShell(conf);
    lsr("Before change target: ", shell, target);
    // make changes under target
    int numDeletedAndModified = changeData4(target);
    createSecondSnapshotAtTarget();
    SnapshotDiffReport report = dfs.getSnapshotDiffReport(target, "s2", "s1");
    System.out.println(report);
    testAndVerify(numDeletedAndModified);
}
Also used: FsShell (org.apache.hadoop.fs.FsShell), SnapshotDiffReport (org.apache.hadoop.hdfs.protocol.SnapshotDiffReport), Test (org.junit.Test)
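The test above drives FsShell through helpers such as lsr (defined elsewhere in the test base class). A minimal standalone sketch of the same machinery, assuming hadoop-common is on the classpath and pointing the shell at the local filesystem so no cluster is needed (the class name FsShellDemo is mine):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;

public class FsShellDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Use the local filesystem instead of HDFS so this runs without a cluster.
        conf.set("fs.defaultFS", "file:///");
        FsShell shell = new FsShell(conf);
        // FsShell.run takes the same argv as `hadoop fs ...` and returns its exit code.
        int exit = shell.run(new String[] { "-ls", System.getProperty("java.io.tmpdir") });
        System.out.println("exit=" + exit);
        shell.close();
    }
}
```

Since FsShell implements the Tool interface, it can equally be invoked through ToolRunner.run, which also parses generic options such as -conf and -D.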

Example 37 with FsShell

Use of org.apache.hadoop.fs.FsShell in the Apache Hadoop project, from the class TestDistCh, method testDistCh.

public void testDistCh() throws Exception {
    final Configuration conf = new Configuration();
    conf.set(CapacitySchedulerConfiguration.PREFIX + CapacitySchedulerConfiguration.ROOT + "." + CapacitySchedulerConfiguration.QUEUES, "default");
    conf.set(CapacitySchedulerConfiguration.PREFIX + CapacitySchedulerConfiguration.ROOT + ".default." + CapacitySchedulerConfiguration.CAPACITY, "100");
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).format(true).build();
    final FileSystem fs = cluster.getFileSystem();
    final FsShell shell = new FsShell(conf);
    try {
        final FileTree tree = new FileTree(fs, "testDistCh");
        final FileStatus rootstatus = fs.getFileStatus(tree.rootdir);
        runLsr(shell, tree.root, 0);
        final String[] args = new String[NUN_SUBS];
        final ChPermissionStatus[] newstatus = new ChPermissionStatus[NUN_SUBS];
        args[0] = "/test/testDistCh/sub0:sub1::";
        newstatus[0] = new ChPermissionStatus(rootstatus, "sub1", "", "");
        args[1] = "/test/testDistCh/sub1::sub2:";
        newstatus[1] = new ChPermissionStatus(rootstatus, "", "sub2", "");
        args[2] = "/test/testDistCh/sub2:::437";
        newstatus[2] = new ChPermissionStatus(rootstatus, "", "", "437");
        args[3] = "/test/testDistCh/sub3:sub1:sub2:447";
        newstatus[3] = new ChPermissionStatus(rootstatus, "sub1", "sub2", "447");
        args[4] = "/test/testDistCh/sub4::sub5:437";
        newstatus[4] = new ChPermissionStatus(rootstatus, "", "sub5", "437");
        args[5] = "/test/testDistCh/sub5:sub1:sub5:";
        newstatus[5] = new ChPermissionStatus(rootstatus, "sub1", "sub5", "");
        args[6] = "/test/testDistCh/sub6:sub3::437";
        newstatus[6] = new ChPermissionStatus(rootstatus, "sub3", "", "437");
        System.out.println("args=" + Arrays.asList(args).toString().replace(",", ",\n  "));
        System.out.println("newstatus=" + Arrays.asList(newstatus).toString().replace(",", ",\n  "));
        //run DistCh
        new DistCh(MiniMRClientClusterFactory.create(this.getClass(), 2, conf).getConfig()).run(args);
        runLsr(shell, tree.root, 0);
        //check results
        for (int i = 0; i < NUN_SUBS; i++) {
            Path sub = new Path(tree.root + "/sub" + i);
            checkFileStatus(newstatus[i], fs.getFileStatus(sub));
            for (FileStatus status : fs.listStatus(sub)) {
                checkFileStatus(newstatus[i], status);
            }
        }
    } finally {
        cluster.shutdown();
    }
}
Also used: Path (org.apache.hadoop.fs.Path), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), FileStatus (org.apache.hadoop.fs.FileStatus), CapacitySchedulerConfiguration (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration), Configuration (org.apache.hadoop.conf.Configuration), FsShell (org.apache.hadoop.fs.FsShell), FileSystem (org.apache.hadoop.fs.FileSystem)
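Each DistCh argument in the test above encodes one change request as a colon-separated spec, path:owner:group:permission, where an empty field leaves that attribute unchanged (e.g. sub0:sub1:: changes only the owner, sub2:::437 changes only the mode). A minimal plain-Java sketch of how one spec decomposes (the class name DistChSpecDemo is mine):

```java
public class DistChSpecDemo {
    public static void main(String[] args) {
        // DistCh change specs are colon-separated: path:owner:group:permission.
        // An empty field means "do not change that attribute".
        String spec = "/test/testDistCh/sub3:sub1:sub2:447";
        // Limit of 4 keeps trailing empty fields and any colons inside the path intact.
        String[] parts = spec.split(":", 4);
        System.out.println("path=" + parts[0]);
        System.out.println("owner=" + (parts[1].isEmpty() ? "(unchanged)" : parts[1]));
        System.out.println("group=" + (parts[2].isEmpty() ? "(unchanged)" : parts[2]));
        System.out.println("mode=" + (parts[3].isEmpty() ? "(unchanged)" : parts[3]));
        // prints path=/test/testDistCh/sub3, owner=sub1, group=sub2, mode=447
    }
}
```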

Aggregations

Usage counts across the indexed examples:

FsShell (org.apache.hadoop.fs.FsShell): 37
Path (org.apache.hadoop.fs.Path): 27
Test (org.junit.Test): 26
Configuration (org.apache.hadoop.conf.Configuration): 18
FileSystem (org.apache.hadoop.fs.FileSystem): 10
FileStatus (org.apache.hadoop.fs.FileStatus): 9
HdfsAdmin (org.apache.hadoop.hdfs.client.HdfsAdmin): 6
IOException (java.io.IOException): 5
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 4
Mockito.anyString (org.mockito.Mockito.anyString): 4
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 3
PrintStream (java.io.PrintStream): 3
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 3
SnapshotDiffReport (org.apache.hadoop.hdfs.protocol.SnapshotDiffReport): 3
WebHdfsFileSystem (org.apache.hadoop.hdfs.web.WebHdfsFileSystem): 3
File (java.io.File): 2
FileNotFoundException (java.io.FileNotFoundException): 2
HashMap (java.util.HashMap): 2
Map (java.util.Map): 2
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 2