Example 11 with Files

Use of java.nio.file.Files in project nifi by apache.

The class FlowConfigurationArchiveManager, method archive.

/**
 * Archives the current flow configuration file by copying the original file to the archive directory.
 * Before creating the new archive file, this method removes old archives so that the following conditions hold:
 * <ul>
 * <li>The number of archive files is less than or equal to maxCount</li>
 * <li>Archive files have a last-modified timestamp no older than the current system time minus maxTimeMillis</li>
 * <li>The total size of the archive files is less than or equal to maxStorageBytes</li>
 * </ul>
 * This method leaves other files intact, so users can preserve a particular archive by copying it under a different name.
 * Whether a given file is an archive file is determined by its filename.
 * Since the archive filename carries a timestamp with one-second resolution, calling archive multiple times
 * within the same second overwrites the existing archive file of the same name.
 * @return The newly created archive file. The archive filename is computed by prefixing the original
 * filename with an ISO 8601 timestamp, e.g. 20160706T160719+0900_flow.xml.gz
 * @throws IOException If creating the new archive file fails. IOExceptions thrown while removing
 * expired archive files are logged but not rethrown.
 */
public File archive() throws IOException {
    final String originalFlowConfigFileName = flowConfigFile.getFileName().toString();
    final File archiveFile = setupArchiveFile();
    // Collect archive files, matching them by filename
    final long now = System.currentTimeMillis();
    final AtomicLong totalArchiveSize = new AtomicLong(0);
    final List<Path> archives = Files.walk(archiveDir, 1).filter(p -> {
        final String filename = p.getFileName().toString();
        if (Files.isRegularFile(p) && filename.endsWith("_" + originalFlowConfigFileName)) {
            final Matcher matcher = archiveFilenamePattern.matcher(filename);
            if (matcher.matches() && filename.equals(matcher.group(1) + "_" + originalFlowConfigFileName)) {
                try {
                    totalArchiveSize.getAndAdd(Files.size(p));
                } catch (IOException e) {
                    logger.warn("Failed to get file size of {} due to {}", p, e);
                }
                return true;
            }
        }
        return false;
    }).collect(Collectors.toList());
    // Sort by last-modified timestamp, oldest first.
    archives.sort(Comparator.comparingLong(path -> path.toFile().lastModified()));
    logger.debug("archives={}", archives);
    final int archiveCount = archives.size();
    final long flowConfigFileSize = Files.size(flowConfigFile);
    IntStream.range(0, archiveCount).filter(i -> {
        // If maxCount is specified, remove the oldest archives, keeping room for the new one
        boolean old = maxCount != null && maxCount > 0 && (archiveCount - i) > maxCount - 1;
        // If maxTime is specified, remove archives older than maxTimeMillis
        final File archive = archives.get(i).toFile();
        old = old || (maxTimeMillis != null && maxTimeMillis > 0 && (now - archive.lastModified()) > maxTimeMillis);
        // If maxStorage is specified, remove the oldest archives until the new file fits within maxStorageBytes
        old = old || (maxStorageBytes != null && maxStorageBytes > 0 && (totalArchiveSize.get() + flowConfigFileSize > maxStorageBytes));
        if (old) {
            totalArchiveSize.getAndAdd(-archive.length());
            logger.info("Removing old archive file {} to reduce storage usage. currentSize={}", archive, totalArchiveSize);
        }
        return old;
    }).forEach(i -> {
        try {
            Files.delete(archives.get(i));
        } catch (IOException e) {
            logger.warn("Failed to delete {} to reduce storage usage, due to {}", archives.get(i), e);
        }
    });
    // Create new archive file.
    Files.copy(flowConfigFile, archiveFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
    if (maxStorageBytes != null && maxStorageBytes > 0 && flowConfigFileSize > maxStorageBytes) {
        logger.warn("Size of {} ({}) exceeds configured maxStorage size ({}). Archive won't be able to keep old files.", flowConfigFile, flowConfigFileSize, maxStorageBytes);
    }
    return archiveFile;
}
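
Stripped of the NiFi specifics, the Files pattern above is: list the archive directory with a depth-1 Files.walk, sort by last-modified time, delete the oldest entries, then copy the new file into place with REPLACE_EXISTING. Below is a minimal, self-contained sketch of that pattern; ArchivePruneSketch and archiveWithPrune are hypothetical names, and only a simple maxCount policy is shown.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ArchivePruneSketch {

    // Keep at most maxCount files in archiveDir (counting the new one), oldest deleted first.
    static Path archiveWithPrune(Path source, Path archiveDir, int maxCount) throws IOException {
        // Files.walk with maxDepth 1 yields the directory itself plus its direct children;
        // the isRegularFile filter drops the directory entry. The stream must be closed.
        List<Path> archives;
        try (Stream<Path> walk = Files.walk(archiveDir, 1)) {
            archives = walk.filter(Files::isRegularFile)
                    .sorted(Comparator.comparingLong(p -> p.toFile().lastModified()))
                    .collect(Collectors.toList());
        }
        // Delete the oldest entries, leaving room for the archive about to be created.
        for (int i = 0; i < archives.size() - (maxCount - 1); i++) {
            Files.deleteIfExists(archives.get(i));
        }
        Path target = archiveDir.resolve(System.currentTimeMillis() + "_" + source.getFileName());
        return Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }
}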

Example 12 with Files

Use of java.nio.file.Files in project jabref by JabRef.

The class WriteXMPEntryEditorAction, method actionPerformed.

@Override
public void actionPerformed(ActionEvent actionEvent) {
    setEnabled(false);
    panel.output(Localization.lang("Writing XMP-metadata..."));
    panel.frame().setProgressBarIndeterminate(true);
    panel.frame().setProgressBarVisible(true);
    BibEntry entry = editor.getEntry();
    // Make a list of all PDFs linked from this entry:
    List<Path> files = entry.getFiles().stream()
            .filter(file -> file.getFileType().equalsIgnoreCase("pdf"))
            .map(file -> file.findIn(panel.getBibDatabaseContext(), Globals.prefs.getFileDirectoryPreferences()))
            .filter(Optional::isPresent)
            .map(Optional::get)
            .collect(Collectors.toList());
    // Offload the actual work to a background worker thread:
    AbstractWorker worker = new WriteXMPWorker(files, entry);
    // Spin runs the worker on its own thread but blocks this method until the work completes:
    worker.getWorker().run();
    // Once the worker has finished, we are unblocked and can print the status message:
    panel.output(message);
    panel.frame().setProgressBarVisible(false);
    setEnabled(true);
}
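
The Files-related idiom here is resolving each linked file to an Optional<Path> and keeping only the entries that resolve. A self-contained sketch of the same filter-then-unwrap pattern follows; findPdf is a hypothetical resolver standing in for findIn.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OptionalStreamSketch {

    // Hypothetical resolver: the path if the file exists on disk, empty otherwise.
    static Optional<Path> findPdf(String name) {
        Path p = Paths.get(name);
        return Files.exists(p) ? Optional.of(p) : Optional.empty();
    }

    public static void main(String[] args) {
        List<Path> existing = Stream.of("a.pdf", "b.pdf", "missing.pdf")
                .map(OptionalStreamSketch::findPdf)
                .filter(Optional::isPresent) // drop entries that did not resolve
                .map(Optional::get)          // unwrap (Java 9+ could use .flatMap(Optional::stream))
                .collect(Collectors.toList());
        System.out.println(existing);
    }
}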

Example 13 with Files

Use of java.nio.file.Files in project gatk by broadinstitute.

The class ReadsSparkSinkUnitTest, method readsSinkShardedTest.

@Test(dataProvider = "loadReadsBAM", groups = "spark")
public void readsSinkShardedTest(String inputBam, String outputFileName, String referenceFile, String outputFileExtension) throws IOException {
    final File outputFile = createTempFile(outputFileName, outputFileExtension);
    JavaSparkContext ctx = SparkContextFactory.getTestSparkContext();
    ReadsSparkSource readSource = new ReadsSparkSource(ctx);
    JavaRDD<GATKRead> rddParallelReads = readSource.getParallelReads(inputBam, referenceFile);
    // ensure that the output is in two shards
    rddParallelReads = rddParallelReads.repartition(2);
    SAMFileHeader header = readSource.getHeader(inputBam, referenceFile);
    ReadsSparkSink.writeReads(ctx, outputFile.getAbsolutePath(), referenceFile, rddParallelReads, header, ReadsWriteFormat.SHARDED);
    int shards = outputFile.listFiles((dir, name) -> !name.startsWith(".") && !name.startsWith("_")).length;
    Assert.assertEquals(shards, 2);
    // check that no local .crc files are created
    int crcs = outputFile.listFiles((dir, name) -> name.startsWith(".") && name.endsWith(".crc")).length;
    Assert.assertEquals(crcs, 0);
    JavaRDD<GATKRead> rddParallelReads2 = readSource.getParallelReads(outputFile.getAbsolutePath(), referenceFile);
    // reads are not globally sorted, so compare only the counts
    Assert.assertEquals(rddParallelReads.count(), rddParallelReads2.count());
}
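
The shard and .crc assertions above use File.listFiles with FilenameFilter lambdas; the equivalent with java.nio.file.Files, the class these examples focus on, is a filtered Files.list. A small sketch, with countShards as a hypothetical helper:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ShardCountSketch {

    // Count visible output shards, skipping hidden files and Hadoop markers such as _SUCCESS.
    static long countShards(Path outputDir) throws IOException {
        try (Stream<Path> entries = Files.list(outputDir)) { // close to release the directory handle
            return entries.map(p -> p.getFileName().toString())
                    .filter(name -> !name.startsWith(".") && !name.startsWith("_"))
                    .count();
        }
    }
}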

Example 14 with Files

Use of java.nio.file.Files in project jabref by JabRef.

The class CiteKeyBasedFileFinder, method findFilesByExtension.

/**
 * Returns the set of all files in the given directories that have one of the given extensions.
 */
public Set<Path> findFilesByExtension(List<Path> directories, List<String> extensions) {
    Objects.requireNonNull(extensions, "Extensions must not be null!");
    BiPredicate<Path, BasicFileAttributes> isFileWithCorrectExtension = (path, attributes) ->
            !Files.isDirectory(path) && extensions.contains(FileHelper.getFileExtension(path).orElse(""));
    Set<Path> result = new HashSet<>();
    for (Path directory : directories) {
        try (Stream<Path> files = Files.find(directory, Integer.MAX_VALUE, isFileWithCorrectExtension)) {
            result.addAll(files.collect(Collectors.toSet()));
        } catch (IOException e) {
            LOGGER.error("Problem in finding files", e);
        }
    }
    return result;
}
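
Files.find walks the tree up to the given depth and applies the BiPredicate to each entry's path and basic attributes; the try-with-resources block is what closes the underlying directory streams. A minimal standalone use of the same call (the docs directory and .pdf extension are hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FindByExtensionSketch {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get("docs"); // hypothetical search root
        // Match regular files whose name ends in .pdf, at any depth under root.
        try (Stream<Path> matches = Files.find(root, Integer.MAX_VALUE,
                (path, attrs) -> attrs.isRegularFile() && path.toString().endsWith(".pdf"))) {
            List<Path> pdfs = matches.collect(Collectors.toList());
            pdfs.forEach(System.out::println);
        }
    }
}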

Example 15 with Files

Use of java.nio.file.Files in project winery by eclipse.

The class WriterUtils, method storeTypes.

public static void storeTypes(Path path, String namespace, String id) {
    LOGGER.debug("Store type: {}", id);
    try {
        MediaType mediaType = MediaTypes.MEDIATYPE_XSD;
        TImport.Builder builder = new TImport.Builder(Namespaces.XML_NS);
        builder.setNamespace(namespace);
        builder.setLocation(id + ".xsd");
        GenericImportId rid = new XSDImportId(namespace, id, false);
        TDefinitions definitions = BackendUtils.createWrapperDefinitions(rid);
        definitions.getImport().add(builder.build());
        CsarImporter.storeDefinitions(rid, definitions);
        RepositoryFileReference ref = BackendUtils.getRefOfDefinitions(rid);
        // Files.list must be closed to release its directory handle, hence the try-with-resources.
        List<File> files;
        try (Stream<Path> entries = Files.list(path)) {
            files = entries.filter(Files::isRegularFile).map(Path::toFile).collect(Collectors.toList());
        }
        for (File file : files) {
            RepositoryFileReference fileRef = new RepositoryFileReference(ref.getParent(), file.getName());
            try (BufferedInputStream stream = new BufferedInputStream(new FileInputStream(file))) {
                RepositoryFactory.getRepository().putContentToFile(fileRef, stream, mediaType);
            }
        }
    } catch (IllegalArgumentException | IOException e) {
        throw new IllegalStateException(e);
    }
}
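
One more Files idiom worth noting: Files.newInputStream is the java.nio.file replacement for new FileInputStream(file) and pairs naturally with try-with-resources. The sketch below rewrites the upload loop in that style; consume is a hypothetical stand-in for putContentToFile, and the schemas directory is made up for illustration.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UploadLoopSketch {

    // Hypothetical stand-in for a repository upload such as putContentToFile.
    static void consume(String name, InputStream in) throws IOException {
        System.out.println(name + ": " + in.available() + " bytes available");
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("schemas"); // hypothetical directory of .xsd files
        List<Path> files;
        try (Stream<Path> entries = Files.list(dir)) {
            files = entries.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        for (Path p : files) {
            // Both the raw stream and the buffer are closed by try-with-resources.
            try (InputStream in = new BufferedInputStream(Files.newInputStream(p))) {
                consume(p.getFileName().toString(), in);
            }
        }
    }
}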

Aggregations

Across the 247 snippets using Files (java.nio.file.Files), the most common co-occurring imports are:

- IOException (java.io.IOException): 213
- Path (java.nio.file.Path): 199
- List (java.util.List): 177
- Collectors (java.util.stream.Collectors): 157
- Paths (java.nio.file.Paths): 135
- File (java.io.File): 130
- ArrayList (java.util.ArrayList): 117
- Map (java.util.Map): 111
- Set (java.util.Set): 97
- Collections (java.util.Collections): 89
- Arrays (java.util.Arrays): 81
- Stream (java.util.stream.Stream): 78
- HashMap (java.util.HashMap): 75
- HashSet (java.util.HashSet): 58
- InputStream (java.io.InputStream): 56
- Collection (java.util.Collection): 55
- Logger (org.slf4j.Logger): 54
- Pattern (java.util.regex.Pattern): 53
- Optional (java.util.Optional): 51