
Example 1 with FileExistsException

Use of org.apache.commons.io.FileExistsException in project hadoop by apache, in the class FsDatasetImplTestUtils, method injectCorruptReplica.

@Override
public void injectCorruptReplica(ExtendedBlock block) throws IOException {
    Preconditions.checkState(!dataset.contains(block), "Block " + block + " already exists on dataset.");
    try (FsVolumeReferences volRef = dataset.getFsVolumeReferences()) {
        FsVolumeImpl volume = (FsVolumeImpl) volRef.get(0);
        FinalizedReplica finalized = new FinalizedReplica(block.getLocalBlock(), volume, volume.getFinalizedDir(block.getBlockPoolId()));
        File blockFile = finalized.getBlockFile();
        if (!blockFile.createNewFile()) {
            throw new FileExistsException("Block file " + blockFile + " already exists.");
        }
        File metaFile = FsDatasetUtil.getMetaFile(blockFile, 1000);
        if (!metaFile.createNewFile()) {
            throw new FileExistsException("Meta file " + metaFile + " already exists.");
        }
        dataset.volumeMap.add(block.getBlockPoolId(), finalized);
    }
}
Also used: FsVolumeReferences (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi.FsVolumeReferences), RandomAccessFile (java.io.RandomAccessFile), File (java.io.File), FinalizedReplica (org.apache.hadoop.hdfs.server.datanode.FinalizedReplica), FileExistsException (org.apache.commons.io.FileExistsException)
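The trick in Example 1 is that java.io.File.createNewFile() atomically creates the file only if it does not already exist, and signals an existing file by returning false rather than throwing; the Hadoop snippet converts that false result into a FileExistsException. A minimal JDK-only sketch of that behavior (class name is illustrative):

```java
import java.io.File;
import java.io.IOException;

public class CreateNewFileDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("block", ".tmp");
        // createNewFile() returns false instead of throwing when the file
        // already exists; callers that want an exception must raise it
        // themselves, as the Hadoop test utility does.
        boolean createdAgain = f.createNewFile();
        System.out.println("created again: " + createdAgain);
        f.delete();
    }
}
```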

Example 2 with FileExistsException

Use of org.apache.commons.io.FileExistsException in project hive by apache, in the class SparkHashTableSinkOperator, method flushToFile.

protected void flushToFile(MapJoinPersistableTableContainer tableContainer, byte tag) throws Exception {
    MapredLocalWork localWork = getExecContext().getLocalWork();
    BucketMapJoinContext mapJoinCtx = localWork.getBucketMapjoinContext();
    Path inputPath = getExecContext().getCurrentInputPath();
    String bigInputPath = null;
    if (inputPath != null && mapJoinCtx != null) {
        Set<String> aliases = ((SparkBucketMapJoinContext) mapJoinCtx).getPosToAliasMap().get((int) tag);
        bigInputPath = mapJoinCtx.getMappingBigFile(aliases.iterator().next(), inputPath.toString());
    }
    // get tmp file URI
    Path tmpURI = localWork.getTmpHDFSPath();
    LOG.info("Temp URI for side table: " + tmpURI);
    // get current bucket file name
    String fileName = localWork.getBucketFileName(bigInputPath);
    // get the tmp URI path; it will be an HDFS path when not in local mode
    String dumpFilePrefix = conf.getDumpFilePrefix();
    Path path = Utilities.generatePath(tmpURI, dumpFilePrefix, tag, fileName);
    FileSystem fs = path.getFileSystem(htsOperator.getConfiguration());
    short replication = fs.getDefaultReplication(path);
    // Create the folder and its parents if not there
    fs.mkdirs(path);
    while (true) {
        path = new Path(path, getOperatorId() + "-" + Math.abs(Utilities.randGen.nextInt()));
        try {
            // This will guarantee file name uniqueness.
            if (fs.createNewFile(path)) {
                break;
            }
        } catch (FileExistsException e) {
            // No problem: name collision, loop and try a new random name
        }
    }
    // TODO find out numOfPartitions for the big table
    int numOfPartitions = replication;
    replication = (short) Math.max(minReplication, numOfPartitions);
    htsOperator.console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag + " with group count: " + tableContainer.size() + " into file: " + path);
    try {
        // get the hashtable file and path
        OutputStream os = null;
        ObjectOutputStream out = null;
        MapJoinTableContainerSerDe mapJoinTableSerde = htsOperator.mapJoinTableSerdes[tag];
        try {
            os = fs.create(path, replication);
            out = new ObjectOutputStream(new BufferedOutputStream(os, 4096));
            mapJoinTableSerde.persist(out, tableContainer);
        } finally {
            if (out != null) {
                out.close();
            } else if (os != null) {
                os.close();
            }
        }
        FileStatus status = fs.getFileStatus(path);
        htsOperator.console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path + " (" + status.getLen() + " bytes)");
    } catch (Exception e) {
        // Failed to dump the side-table, remove the partial file
        try {
            fs.delete(path, false);
        } catch (Exception ex) {
            LOG.warn("Got exception in deleting partial side-table dump for tag: " + tag + ", file " + path, ex);
        }
        throw e;
    }
    tableContainer.clear();
}
Also used: Path (org.apache.hadoop.fs.Path), FileStatus (org.apache.hadoop.fs.FileStatus), OutputStream (java.io.OutputStream), BufferedOutputStream (java.io.BufferedOutputStream), ObjectOutputStream (java.io.ObjectOutputStream), FileExistsException (org.apache.commons.io.FileExistsException), HiveException (org.apache.hadoop.hive.ql.metadata.HiveException), MapJoinTableContainerSerDe (org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe), SparkBucketMapJoinContext (org.apache.hadoop.hive.ql.plan.SparkBucketMapJoinContext), BucketMapJoinContext (org.apache.hadoop.hive.ql.plan.BucketMapJoinContext), FileSystem (org.apache.hadoop.fs.FileSystem), MapredLocalWork (org.apache.hadoop.hive.ql.plan.MapredLocalWork)
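Example 2 uses a retry loop for file-name uniqueness: attempt an atomic create with a random suffix and, on a collision, simply try another name. A self-contained sketch of the same loop using the JDK's java.nio.file API, where Files.createFile throws FileAlreadyExistsException (the NIO analogue of the commons-io exception caught in the Hive code; names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Random;

public class UniqueFileDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dump");
        Random rand = new Random();
        Path file;
        while (true) {
            file = dir.resolve("op-" + Math.abs(rand.nextInt()));
            try {
                // Atomic create: throws FileAlreadyExistsException on a
                // name collision, so the loop retries with a new name.
                Files.createFile(file);
                break;
            } catch (FileAlreadyExistsException e) {
                // collision: try another random name
            }
        }
        System.out.println("unique file created: " + Files.exists(file));
        Files.delete(file);
        Files.delete(dir);
    }
}
```

Note that the atomic create-then-break loop avoids the check-then-act race of testing existence first and creating afterwards.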

Example 3 with FileExistsException

Use of org.apache.commons.io.FileExistsException in project jstorm by alibaba, in the class SyncSupervisorEvent, method downloadDistributeStormCode.

/**
     * No synchronization is needed here, because EventManager executes events serially.
     * 
     * @param conf
     * @param topologyId
     * @param masterCodeDir
     * @throws IOException
     * @throws TException
     */
private void downloadDistributeStormCode(Map conf, String topologyId, String masterCodeDir) throws IOException, TException {
    String tmproot = null;
    try {
        // STORM_LOCAL_DIR/supervisor/tmp/(UUID)
        tmproot = StormConfig.supervisorTmpDir(conf) + File.separator + UUID.randomUUID().toString();
        // STORM_LOCAL_DIR/supervisor/stormdist/topologyId
        String stormroot = StormConfig.supervisor_stormdist_root(conf, topologyId);
        //        JStormServerUtils.downloadCodeFromMaster(conf, tmproot, masterCodeDir, topologyId, true);
        JStormServerUtils.downloadCodeFromBlobStore(conf, tmproot, topologyId);
        // tmproot/stormjar.jar
        String localFileJarTmp = StormConfig.stormjar_path(tmproot);
        // extract dir from jar
        JStormUtils.extractDirFromJar(localFileJarTmp, StormConfig.RESOURCES_SUBDIR, tmproot);
        File srcDir = new File(tmproot);
        File destDir = new File(stormroot);
        try {
            FileUtils.moveDirectory(srcDir, destDir);
        } catch (FileExistsException e) {
            FileUtils.copyDirectory(srcDir, destDir);
            FileUtils.deleteQuietly(srcDir);
        }
    } finally {
        if (tmproot != null) {
            File srcDir = new File(tmproot);
            FileUtils.deleteQuietly(srcDir);
        }
    }
}
Also used: File (java.io.File), FileExistsException (org.apache.commons.io.FileExistsException)
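The pattern in Example 3 is "move, and on FileExistsException fall back to copy plus delete": FileUtils.moveDirectory refuses to move onto an existing target, so the code copies the contents into it and removes the source instead. A JDK-only sketch of the same fallback, where Files.move likewise throws FileAlreadyExistsException when the destination directory already exists (class and file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MoveDirFallbackDemo {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("tmproot");
        Path dest = Files.createTempDirectory("stormroot"); // target already exists
        Files.write(src.resolve("stormjar.jar"), new byte[] {1, 2, 3});
        try {
            // Without REPLACE_EXISTING, move onto an existing target throws.
            Files.move(src, dest);
        } catch (FileAlreadyExistsException e) {
            // Fallback: copy each entry into the existing target, then
            // remove the source, mirroring copyDirectory + deleteQuietly.
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(src)) {
                for (Path entry : entries) {
                    Files.copy(entry, dest.resolve(entry.getFileName().toString()));
                }
            }
            Files.delete(src.resolve("stormjar.jar"));
            Files.delete(src);
        }
        System.out.println("moved via fallback: " + Files.exists(dest.resolve("stormjar.jar")));
    }
}
```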

Example 4 with FileExistsException

Use of org.apache.commons.io.FileExistsException in project jstorm by alibaba, in the class SyncSupervisorEvent, method downloadLocalStormCode.

private void downloadLocalStormCode(Map conf, String topologyId, String masterCodeDir) throws IOException, TException {
    // STORM_LOCAL_DIR/supervisor/tmp/(UUID)
    String tmproot = StormConfig.supervisorTmpDir(conf) + File.separator + UUID.randomUUID().toString();
    // STORM_LOCAL_DIR/supervisor/stormdist/topologyId
    String stormroot = StormConfig.supervisor_stormdist_root(conf, topologyId);
    BlobStore blobStore = null;
    try {
        blobStore = BlobStoreUtils.getNimbusBlobStore(conf, masterCodeDir, null);
        FileUtils.forceMkdir(new File(tmproot));
        blobStore.readBlobTo(StormConfig.master_stormcode_key(topologyId), new FileOutputStream(StormConfig.stormcode_path(tmproot)));
        blobStore.readBlobTo(StormConfig.master_stormconf_key(topologyId), new FileOutputStream(StormConfig.stormconf_path(tmproot)));
    } finally {
        if (blobStore != null)
            blobStore.shutdown();
    }
    File srcDir = new File(tmproot);
    File destDir = new File(stormroot);
    try {
        FileUtils.moveDirectory(srcDir, destDir);
    } catch (FileExistsException e) {
        FileUtils.copyDirectory(srcDir, destDir);
        FileUtils.deleteQuietly(srcDir);
    }
    ClassLoader classloader = Thread.currentThread().getContextClassLoader();
    String resourcesJar = resourcesJar();
    URL url = classloader.getResource(StormConfig.RESOURCES_SUBDIR);
    String targetDir = stormroot + '/' + StormConfig.RESOURCES_SUBDIR;
    if (resourcesJar != null) {
        LOG.info("Extracting resources from jar at " + resourcesJar + " to " + targetDir);
        // extract the resources dir from the jar
        JStormUtils.extractDirFromJar(resourcesJar, StormConfig.RESOURCES_SUBDIR, stormroot);
    } else if (url != null) {
        LOG.info("Copying resources at " + url.toString() + " to " + targetDir);
        FileUtils.copyDirectory(new File(url.getFile()), (new File(targetDir)));
    }
}
Also used: FileOutputStream (java.io.FileOutputStream), File (java.io.File), BlobStore (com.alibaba.jstorm.blobstore.BlobStore), URL (java.net.URL), FileExistsException (org.apache.commons.io.FileExistsException)

Example 5 with FileExistsException

Use of org.apache.commons.io.FileExistsException in project syncany by syncany, in the class FileSystemAction, method moveFileToFinalLocation.

protected File moveFileToFinalLocation(File reconstructedFileInCache, FileVersion targetFileVersion) throws IOException {
    NormalizedPath originalPath = new NormalizedPath(config.getLocalDir(), targetFileVersion.getPath());
    NormalizedPath targetPath = originalPath;
    try {
        // Clean filename
        if (targetPath.hasIllegalChars()) {
            targetPath = targetPath.toCreatable("filename conflict", true);
        }
        // Try creating folder
        createFolder(targetPath.getParent());
    } catch (Exception e) {
        throw new RuntimeException("What to do here?!");
    }
    // Try moving file to final destination
    try {
        FileUtils.moveFile(reconstructedFileInCache, targetPath.toFile());
    } catch (FileExistsException e) {
        logger.log(Level.FINE, "File already existed", e);
        moveToConflictFile(targetPath);
    } catch (Exception e) {
        throw new RuntimeException("What to do here?!");
    }
    return targetPath.toFile();
}
Also used: NormalizedPath (org.syncany.util.NormalizedPath), IOException (java.io.IOException), FileNotFoundException (java.io.FileNotFoundException), FileExistsException (org.apache.commons.io.FileExistsException)
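Example 5 catches FileExistsException from FileUtils.moveFile to trigger conflict handling (syncany's moveToConflictFile, not shown here). A minimal JDK-only sketch of the same idea, keeping both versions by renaming the incoming file when the target already exists; the conflict-naming scheme below is purely illustrative, not syncany's actual one:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ConflictFileDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("sync");
        Path target = dir.resolve("report.txt");
        Files.write(target, "local edit".getBytes());
        Path reconstructed = dir.resolve("report.txt.cache");
        Files.write(reconstructed, "remote version".getBytes());
        try {
            // Without REPLACE_EXISTING this throws when the target exists,
            // like FileUtils.moveFile throwing FileExistsException.
            Files.move(reconstructed, target);
        } catch (FileAlreadyExistsException e) {
            // Keep both versions: rename the incoming file instead of
            // overwriting, a stand-in for moveToConflictFile().
            Path conflict = dir.resolve("report (conflict).txt");
            Files.move(reconstructed, conflict);
            System.out.println("conflict copy created: " + Files.exists(conflict));
        }
    }
}
```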

Aggregations

FileExistsException (org.apache.commons.io.FileExistsException): 6
File (java.io.File): 3
FileNotFoundException (java.io.FileNotFoundException): 2
IOException (java.io.IOException): 2
NormalizedPath (org.syncany.util.NormalizedPath): 2
BlobStore (com.alibaba.jstorm.blobstore.BlobStore): 1
BufferedOutputStream (java.io.BufferedOutputStream): 1
FileOutputStream (java.io.FileOutputStream): 1
ObjectOutputStream (java.io.ObjectOutputStream): 1
OutputStream (java.io.OutputStream): 1
RandomAccessFile (java.io.RandomAccessFile): 1
URL (java.net.URL): 1
FileStatus (org.apache.hadoop.fs.FileStatus): 1
FileSystem (org.apache.hadoop.fs.FileSystem): 1
Path (org.apache.hadoop.fs.Path): 1
FinalizedReplica (org.apache.hadoop.hdfs.server.datanode.FinalizedReplica): 1
FsVolumeReferences (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi.FsVolumeReferences): 1
MapJoinTableContainerSerDe (org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe): 1
HiveException (org.apache.hadoop.hive.ql.metadata.HiveException): 1
BucketMapJoinContext (org.apache.hadoop.hive.ql.plan.BucketMapJoinContext): 1