Example 1 with FileAlreadyExistsException

Use of org.apache.hadoop.mapred.FileAlreadyExistsException in project hadoop by apache.

From the class TestFileOutputFormat, method testCheckOutputSpecsException:

public void testCheckOutputSpecsException() throws Exception {
    Job job = Job.getInstance();
    Path outDir = new Path(System.getProperty("test.build.data", "/tmp"), "output");
    FileSystem fs = outDir.getFileSystem(new Configuration());
    // Create the output dir so it already exists and set it for the job
    fs.mkdirs(outDir);
    FileOutputFormat.setOutputPath(job, outDir);
    // We don't need a "full" implementation of FileOutputFormat for this test
    FileOutputFormat fof = new FileOutputFormat() {

        @Override
        public RecordWriter getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
            return null;
        }
    };
    try {
        try {
            // This should throw a FileAlreadyExistsException because the outputDir
            // already exists
            fof.checkOutputSpecs(job);
            fail("Should have thrown a FileAlreadyExistsException");
        } catch (FileAlreadyExistsException re) {
            // correct behavior
        }
    } finally {
        // Cleanup
        if (fs.exists(outDir)) {
            fs.delete(outDir, true);
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException), Configuration (org.apache.hadoop.conf.Configuration), FileSystem (org.apache.hadoop.fs.FileSystem), TaskAttemptContext (org.apache.hadoop.mapreduce.TaskAttemptContext), Job (org.apache.hadoop.mapreduce.Job)
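
For reference, the exception in this test comes from the existence check inside FileOutputFormat.checkOutputSpecs. The sketch below is a minimal reconstruction of that guard, not the full implementation (which also verifies that an output path was configured at all); OutputSpecGuard and ensureOutputDirIsFree are names invented here for illustration.

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileAlreadyExistsException;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutputSpecGuard {

    // Minimal sketch of the check checkOutputSpecs performs: refuse to run
    // if the configured output directory already exists.
    public static void ensureOutputDirIsFree(JobContext job) throws IOException {
        Path outDir = FileOutputFormat.getOutputPath(job);
        FileSystem fs = outDir.getFileSystem(job.getConfiguration());
        if (fs.exists(outDir)) {
            throw new FileAlreadyExistsException("Output directory " + outDir + " already exists");
        }
    }
}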

Example 2 with FileAlreadyExistsException

Use of org.apache.hadoop.mapred.FileAlreadyExistsException in project phoenix by apache.

From the class RegexBulkLoadToolIT, method testAlreadyExistsOutputPath:

@Test
public void testAlreadyExistsOutputPath() {
    String tableName = "TABLE9";
    String outputPath = "/tmp/output/tabl9";
    try {
        Statement stmt = conn.createStatement();
        stmt.execute("CREATE TABLE " + tableName + "(ID INTEGER NOT NULL PRIMARY KEY, " + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR)");
        FileSystem fs = FileSystem.get(getUtility().getConfiguration());
        fs.create(new Path(outputPath));
        FSDataOutputStream outputStream = fs.create(new Path("/tmp/input9.csv"));
        PrintWriter printWriter = new PrintWriter(outputStream);
        printWriter.println("1,FirstName 1,LastName 1");
        printWriter.println("2,FirstName 2,LastName 2");
        printWriter.close();
        RegexBulkLoadTool regexBulkLoadTool = new RegexBulkLoadTool();
        regexBulkLoadTool.setConf(getUtility().getConfiguration());
        regexBulkLoadTool.run(new String[] { "--input", "/tmp/input9.csv", "--output", outputPath, "--table", tableName, "--regex", "([^,]*),([^,]*),([^,]*)", "--zookeeper", zkQuorum });
        fail(String.format("Output path %s already exists, so the bulk load should have failed", outputPath));
    } catch (Exception ex) {
        assertTrue(ex instanceof FileAlreadyExistsException);
    }
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException), Statement (java.sql.Statement), FileSystem (org.apache.hadoop.fs.FileSystem), RegexBulkLoadTool (org.apache.phoenix.mapreduce.RegexBulkLoadTool), FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream), PrintWriter (java.io.PrintWriter), Test (org.junit.Test)

Example 3 with FileAlreadyExistsException

Use of org.apache.hadoop.mapred.FileAlreadyExistsException in project phoenix by apache.

From the class CsvBulkLoadToolIT, method testAlreadyExistsOutputPath:

@Test
public void testAlreadyExistsOutputPath() {
    String tableName = "TABLE9";
    String outputPath = "/tmp/output/tabl9";
    try {
        Statement stmt = conn.createStatement();
        stmt.execute("CREATE TABLE " + tableName + "(ID INTEGER NOT NULL PRIMARY KEY, " + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR)");
        FileSystem fs = FileSystem.get(getUtility().getConfiguration());
        fs.create(new Path(outputPath));
        FSDataOutputStream outputStream = fs.create(new Path("/tmp/input9.csv"));
        PrintWriter printWriter = new PrintWriter(outputStream);
        printWriter.println("1,FirstName 1,LastName 1");
        printWriter.println("2,FirstName 2,LastName 2");
        printWriter.close();
        CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
        csvBulkLoadTool.setConf(getUtility().getConfiguration());
        csvBulkLoadTool.run(new String[] { "--input", "/tmp/input9.csv", "--output", outputPath, "--table", tableName, "--zookeeper", zkQuorum });
        fail(String.format("Output path %s already exists, so the bulk load should have failed", outputPath));
    } catch (Exception ex) {
        assertTrue(ex instanceof FileAlreadyExistsException);
    }
}
Also used: Path (org.apache.hadoop.fs.Path), CsvBulkLoadTool (org.apache.phoenix.mapreduce.CsvBulkLoadTool), FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException), Statement (java.sql.Statement), FileSystem (org.apache.hadoop.fs.FileSystem), FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream), PrintWriter (java.io.PrintWriter), Test (org.junit.Test)
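
Examples 2 and 3 are identical apart from the tool under test: each pre-creates the output path, runs the bulk loader, and expects FileAlreadyExistsException to surface. A hedged sketch of how that shared setup could be factored out; runToolExpectingOutputExists is a hypothetical helper, not Phoenix API, and it assumes the tools implement org.apache.hadoop.util.Tool so the exception propagates out of run().

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileAlreadyExistsException;
import org.apache.hadoop.util.Tool;

public class BulkLoadFailureHelper {

    // Hypothetical helper: pre-create the output path, run the tool, and
    // verify that it fails with FileAlreadyExistsException.
    public static void runToolExpectingOutputExists(Tool tool, Configuration conf,
            String outputPath, String... args) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        // Create the output path up front so the job's output check must reject it.
        fs.create(new Path(outputPath)).close();
        tool.setConf(conf);
        try {
            tool.run(args);
            throw new AssertionError("Expected FileAlreadyExistsException for " + outputPath);
        } catch (FileAlreadyExistsException expected) {
            // Correct behavior: the output path was already taken.
        }
    }
}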

Example 4 with FileAlreadyExistsException

Use of org.apache.hadoop.mapred.FileAlreadyExistsException in project tez by apache.

From the class TestOrderedWordCount, method run:

@Override
public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    boolean generateSplitsInClient;
    SplitsInClientOptionParser splitCmdLineParser = new SplitsInClientOptionParser();
    try {
        generateSplitsInClient = splitCmdLineParser.parse(otherArgs, false);
        otherArgs = splitCmdLineParser.getRemainingArgs();
    } catch (ParseException e1) {
        System.err.println("Invalid options");
        printUsage();
        return 2;
    }
    boolean useTezSession = conf.getBoolean("USE_TEZ_SESSION", true);
    long interJobSleepTimeout = conf.getInt("INTER_JOB_SLEEP_INTERVAL", 0) * 1000;
    boolean retainStagingDir = conf.getBoolean("RETAIN_STAGING_DIR", false);
    boolean useMRSettings = conf.getBoolean("USE_MR_CONFIGS", true);
    // TODO needs to use auto reduce parallelism
    int intermediateNumReduceTasks = conf.getInt("IREDUCE_NUM_TASKS", 2);
    int maxDataLengthThroughIPC = conf.getInt(MAX_IPC_DATA_LENGTH, -1);
    int exceedDataLimit = conf.getInt(EXCEED_IPC_DATA_LIMIT, 3);
    if (maxDataLengthThroughIPC > 0) {
        conf.setInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH, maxDataLengthThroughIPC * 1024 * 1024);
    }
    if (((otherArgs.length % 2) != 0) || (!useTezSession && otherArgs.length != 2)) {
        printUsage();
        return 2;
    }
    List<String> inputPaths = new ArrayList<String>();
    List<String> outputPaths = new ArrayList<String>();
    TezConfiguration tezConf = new TezConfiguration(conf);
    for (int i = 0; i < otherArgs.length; i += 2) {
        FileSystem inputPathFs = new Path(otherArgs[i]).getFileSystem(tezConf);
        inputPaths.add(inputPathFs.makeQualified(new Path(otherArgs[i])).toString());
        FileSystem outputPathFs = new Path(otherArgs[i + 1]).getFileSystem(tezConf);
        outputPaths.add(outputPathFs.makeQualified(new Path(otherArgs[i + 1])).toString());
    }
    UserGroupInformation.setConfiguration(conf);
    HadoopShim hadoopShim = new HadoopShimsLoader(tezConf).getHadoopShim();
    TestOrderedWordCount instance = new TestOrderedWordCount();
    FileSystem fs = FileSystem.get(conf);
    String stagingDirStr = conf.get(TezConfiguration.TEZ_AM_STAGING_DIR, TezConfiguration.TEZ_AM_STAGING_DIR_DEFAULT) + Path.SEPARATOR + Long.toString(System.currentTimeMillis());
    Path stagingDir = new Path(stagingDirStr);
    FileSystem pathFs = stagingDir.getFileSystem(tezConf);
    pathFs.mkdirs(new Path(stagingDirStr));
    tezConf.set(TezConfiguration.TEZ_AM_STAGING_DIR, stagingDirStr);
    stagingDir = pathFs.makeQualified(new Path(stagingDirStr));
    TokenCache.obtainTokensForNamenodes(instance.credentials, new Path[] { stagingDir }, conf);
    TezClientUtils.ensureStagingDirExists(tezConf, stagingDir);
    if (useTezSession) {
        LOG.info("Creating Tez Session");
        tezConf.setBoolean(TezConfiguration.TEZ_AM_SESSION_MODE, true);
    } else {
        tezConf.setBoolean(TezConfiguration.TEZ_AM_SESSION_MODE, false);
    }
    TezClient tezSession = TezClient.create("OrderedWordCountSession", tezConf, null, instance.credentials);
    tezSession.start();
    if (tezSession.getAppMasterApplicationId() != null) {
        TezUtilsInternal.setHadoopCallerContext(hadoopShim, tezSession.getAppMasterApplicationId());
    }
    DAGStatus dagStatus = null;
    DAGClient dagClient = null;
    String[] vNames = { "initialmap", "intermediate_reducer", "finalreduce" };
    Set<StatusGetOpts> statusGetOpts = EnumSet.of(StatusGetOpts.GET_COUNTERS);
    try {
        for (int dagIndex = 1; dagIndex <= inputPaths.size(); ++dagIndex) {
            if (dagIndex != 1 && interJobSleepTimeout > 0) {
                try {
                    LOG.info("Sleeping between jobs, sleepInterval=" + (interJobSleepTimeout / 1000));
                    Thread.sleep(interJobSleepTimeout);
                } catch (InterruptedException e) {
                    LOG.info("Main thread interrupted. Breaking out of job loop");
                    break;
                }
            }
            String inputPath = inputPaths.get(dagIndex - 1);
            String outputPath = outputPaths.get(dagIndex - 1);
            if (fs.exists(new Path(outputPath))) {
                throw new FileAlreadyExistsException("Output directory " + outputPath + " already exists");
            }
            LOG.info("Running OrderedWordCount DAG" + ", dagIndex=" + dagIndex + ", inputPath=" + inputPath + ", outputPath=" + outputPath);
            Map<String, LocalResource> localResources = new TreeMap<String, LocalResource>();
            DAG dag = instance.createDAG(fs, tezConf, localResources, stagingDir, dagIndex, inputPath, outputPath, generateSplitsInClient, useMRSettings, intermediateNumReduceTasks, maxDataLengthThroughIPC, exceedDataLimit);
            String callerType = "TestOrderedWordCount";
            String callerId = tezSession.getAppMasterApplicationId() == null ? ("UnknownApp_" + System.currentTimeMillis() + dagIndex) : (tezSession.getAppMasterApplicationId().toString() + "_" + dagIndex);
            dag.setCallerContext(CallerContext.create("Tez", callerId, callerType, "TestOrderedWordCount Job"));
            boolean doPreWarm = dagIndex == 1 && useTezSession && conf.getBoolean("PRE_WARM_SESSION", true);
            int preWarmNumContainers = 0;
            if (doPreWarm) {
                preWarmNumContainers = conf.getInt("PRE_WARM_NUM_CONTAINERS", 0);
                if (preWarmNumContainers <= 0) {
                    doPreWarm = false;
                }
            }
            if (doPreWarm) {
                LOG.info("Pre-warming Session");
                PreWarmVertex preWarmVertex = PreWarmVertex.create("PreWarm", preWarmNumContainers, dag.getVertex("initialmap").getTaskResource());
                preWarmVertex.addTaskLocalFiles(dag.getVertex("initialmap").getTaskLocalFiles());
                preWarmVertex.setTaskEnvironment(dag.getVertex("initialmap").getTaskEnvironment());
                preWarmVertex.setTaskLaunchCmdOpts(dag.getVertex("initialmap").getTaskLaunchCmdOpts());
                tezSession.preWarm(preWarmVertex);
            }
            if (useTezSession) {
                LOG.info("Waiting for TezSession to get into ready state");
                waitForTezSessionReady(tezSession);
                LOG.info("Submitting DAG to Tez Session, dagIndex=" + dagIndex);
                dagClient = tezSession.submitDAG(dag);
                LOG.info("Submitted DAG to Tez Session, dagIndex=" + dagIndex);
            } else {
                LOG.info("Submitting DAG as a new Tez Application");
                dagClient = tezSession.submitDAG(dag);
            }
            while (true) {
                dagStatus = dagClient.getDAGStatus(statusGetOpts);
                if (dagStatus.getState() == DAGStatus.State.RUNNING || dagStatus.getState() == DAGStatus.State.SUCCEEDED || dagStatus.getState() == DAGStatus.State.FAILED || dagStatus.getState() == DAGStatus.State.KILLED || dagStatus.getState() == DAGStatus.State.ERROR) {
                    break;
                }
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    // ignore and poll again
                }
            }
            while (dagStatus.getState() != DAGStatus.State.SUCCEEDED && dagStatus.getState() != DAGStatus.State.FAILED && dagStatus.getState() != DAGStatus.State.KILLED && dagStatus.getState() != DAGStatus.State.ERROR) {
                if (dagStatus.getState() == DAGStatus.State.RUNNING) {
                    ExampleDriver.printDAGStatus(dagClient, vNames);
                }
                try {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        // ignore and poll again
                    }
                    dagStatus = dagClient.getDAGStatus(statusGetOpts);
                } catch (TezException e) {
                    LOG.error("Failed to get application progress. Exiting");
                    return -1;
                }
            }
            ExampleDriver.printDAGStatus(dagClient, vNames, true, true);
            LOG.info("DAG " + dagIndex + " completed. " + "FinalState=" + dagStatus.getState());
            if (dagStatus.getState() != DAGStatus.State.SUCCEEDED) {
                LOG.info("DAG " + dagIndex + " diagnostics: " + dagStatus.getDiagnostics());
            }
        }
    } catch (Exception e) {
        LOG.error("Error occurred when submitting/running DAGs", e);
        throw e;
    } finally {
        if (!retainStagingDir) {
            pathFs.delete(stagingDir, true);
        }
        LOG.info("Shutting down session");
        tezSession.stop();
    }
    if (!useTezSession) {
        ExampleDriver.printDAGStatus(dagClient, vNames);
        LOG.info("Application completed. " + "FinalState=" + dagStatus.getState());
    }
    return dagStatus.getState() == DAGStatus.State.SUCCEEDED ? 0 : 1;
}
Also used: TezException (org.apache.tez.dag.api.TezException), FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException), Configuration (org.apache.hadoop.conf.Configuration), TezConfiguration (org.apache.tez.dag.api.TezConfiguration), TezRuntimeConfiguration (org.apache.tez.runtime.library.api.TezRuntimeConfiguration), HadoopShim (org.apache.tez.hadoop.shim.HadoopShim), ArrayList (java.util.ArrayList), HadoopShimsLoader (org.apache.tez.hadoop.shim.HadoopShimsLoader), TezClient (org.apache.tez.client.TezClient), PreWarmVertex (org.apache.tez.dag.api.PreWarmVertex), FileSystem (org.apache.hadoop.fs.FileSystem), DAGStatus (org.apache.tez.dag.api.client.DAGStatus), Path (org.apache.hadoop.fs.Path), DAG (org.apache.tez.dag.api.DAG), TreeMap (java.util.TreeMap), ParseException (org.apache.commons.cli.ParseException), IOException (java.io.IOException), LocalResource (org.apache.hadoop.yarn.api.records.LocalResource), StatusGetOpts (org.apache.tez.dag.api.client.StatusGetOpts), SplitsInClientOptionParser (org.apache.tez.mapreduce.examples.helpers.SplitsInClientOptionParser), DAGClient (org.apache.tez.dag.api.client.DAGClient), GenericOptionsParser (org.apache.hadoop.util.GenericOptionsParser)
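
Buried in the long run() method above is the part relevant to this page: before submitting each DAG, the driver checks the output directory and throws FileAlreadyExistsException to fail fast (the fs.exists check inside the job loop). Extracted on its own, that pre-flight check looks like the sketch below; OutputPathPreflight and validateOutputPaths are invented names, not Tez API.

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileAlreadyExistsException;

public class OutputPathPreflight {

    // Fail fast before submitting work whose output directory is already taken.
    public static void validateOutputPaths(FileSystem fs, List<String> outputPaths)
            throws IOException {
        for (String outputPath : outputPaths) {
            if (fs.exists(new Path(outputPath))) {
                throw new FileAlreadyExistsException("Output directory " + outputPath + " already exists");
            }
        }
    }
}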

Example 5 with FileAlreadyExistsException

Use of org.apache.hadoop.mapred.FileAlreadyExistsException in project tez by apache.

From the class MapUtils, method configureLocalDirs:

public static void configureLocalDirs(Configuration conf, String localDir) throws IOException {
    String[] localSysDirs = new String[1];
    localSysDirs[0] = localDir;
    conf.setStrings(TezRuntimeFrameworkConfigs.LOCAL_DIRS, localSysDirs);
    conf.set(MRFrameworkConfigs.TASK_LOCAL_RESOURCE_DIR, localDir);
    LOG.info(TezRuntimeFrameworkConfigs.LOCAL_DIRS + " for child: " + conf.get(TezRuntimeFrameworkConfigs.LOCAL_DIRS));
    LOG.info(MRFrameworkConfigs.TASK_LOCAL_RESOURCE_DIR + " for child: " + conf.get(MRFrameworkConfigs.TASK_LOCAL_RESOURCE_DIR));
    LocalDirAllocator lDirAlloc = new LocalDirAllocator(TezRuntimeFrameworkConfigs.LOCAL_DIRS);
    Path workDir = null;
    // First, try to find the JOB_LOCAL_DIR on this host.
    try {
        workDir = lDirAlloc.getLocalPathToRead("work", conf);
    } catch (DiskErrorException e) {
        // DiskErrorException means the dir was not found; it will be created below.
    }
    if (workDir == null) {
        // JOB_LOCAL_DIR doesn't exist on this host -- Create it.
        workDir = lDirAlloc.getLocalPathForWrite("work", conf);
        FileSystem lfs = FileSystem.getLocal(conf).getRaw();
        boolean madeDir = false;
        try {
            madeDir = lfs.mkdirs(workDir);
        } catch (FileAlreadyExistsException e) {
            // Since all tasks will be running in their own JVM, the race condition
            // exists where multiple tasks could be trying to create this directory
            // at the same time. If this task loses the race, it's okay because
            // the directory already exists.
            madeDir = true;
            workDir = lDirAlloc.getLocalPathToRead("work", conf);
        }
        if (!madeDir) {
            throw new IOException("Mkdirs failed to create " + workDir.toString());
        }
    }
    conf.set(MRFrameworkConfigs.JOB_LOCAL_DIR, workDir.toString());
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException), DiskErrorException (org.apache.hadoop.util.DiskChecker.DiskErrorException), FileSystem (org.apache.hadoop.fs.FileSystem), LocalDirAllocator (org.apache.hadoop.fs.LocalDirAllocator), IOException (java.io.IOException)
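
The pattern in configureLocalDirs generalizes: when several JVMs may race to create the same directory, a FileAlreadyExistsException from mkdirs is a benign outcome, because the directory exists either way once the race resolves. A minimal sketch of that idiom on the raw local filesystem, mirroring the catch clause above; RaceTolerantMkdirs and mkdirsTolerateRace are invented names.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileAlreadyExistsException;

public class RaceTolerantMkdirs {

    // Treat a losing racer's FileAlreadyExistsException as success:
    // another task created the directory first, which is all we need.
    public static void mkdirsTolerateRace(Configuration conf, Path dir) throws IOException {
        FileSystem lfs = FileSystem.getLocal(conf).getRaw();
        try {
            if (!lfs.mkdirs(dir)) {
                throw new IOException("Mkdirs failed to create " + dir);
            }
        } catch (FileAlreadyExistsException e) {
            // The directory already exists; nothing more to do.
        }
    }
}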

Aggregations

Path (org.apache.hadoop.fs.Path): 10 usages
FileAlreadyExistsException (org.apache.hadoop.mapred.FileAlreadyExistsException): 10 usages
FileSystem (org.apache.hadoop.fs.FileSystem): 9 usages
Configuration (org.apache.hadoop.conf.Configuration): 4 usages
IOException (java.io.IOException): 3 usages
PrintWriter (java.io.PrintWriter): 2 usages
Statement (java.sql.Statement): 2 usages
TreeMap (java.util.TreeMap): 2 usages
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 2 usages
FileStatus (org.apache.hadoop.fs.FileStatus): 2 usages
LocalDirAllocator (org.apache.hadoop.fs.LocalDirAllocator): 2 usages
InvalidJobConfException (org.apache.hadoop.mapred.InvalidJobConfException): 2 usages
DiskErrorException (org.apache.hadoop.util.DiskChecker.DiskErrorException): 2 usages
LocalResource (org.apache.hadoop.yarn.api.records.LocalResource): 2 usages
TezClient (org.apache.tez.client.TezClient): 2 usages
DAG (org.apache.tez.dag.api.DAG): 2 usages
TezConfiguration (org.apache.tez.dag.api.TezConfiguration): 2 usages
DAGClient (org.apache.tez.dag.api.client.DAGClient): 2 usages
DAGStatus (org.apache.tez.dag.api.client.DAGStatus): 2 usages
Test (org.junit.Test): 2 usages