
Example 1 with JobService

Use of org.haiku.haikudepotserver.job.model.JobService in project haikudepotserver by haiku.

The class PkgProminenceAndUserRatingSpreadsheetJobRunner, method run:

@Override
public void run(JobService jobService, PkgProminenceAndUserRatingSpreadsheetJobSpecification specification) throws IOException {
    Preconditions.checkArgument(null != jobService);
    Preconditions.checkArgument(null != specification);
    final ObjectContext context = serverRuntime.newContext();
    // this will register the outbound data against the job.
    JobDataWithByteSink jobDataWithByteSink = jobService.storeGeneratedData(specification.getGuid(), "download", MediaType.CSV_UTF_8.toString());
    try (OutputStream outputStream = jobDataWithByteSink.getByteSink().openBufferedStream();
        OutputStreamWriter outputStreamWriter = new OutputStreamWriter(outputStream);
        CSVWriter writer = new CSVWriter(outputStreamWriter, ',')) {
        writer.writeNext(new String[] { "pkg-name", "repository-code", "prominence-name", "prominence-ordering", "derived-rating", "derived-rating-sample-size" });
        // stream out the packages.
        long startMs = System.currentTimeMillis();
        LOGGER.info("will produce prominence spreadsheet report");
        long count = pkgService.eachPkg(context, false, pkg -> {
            List<PkgProminence> pkgProminences = PkgProminence.findByPkg(context, pkg);
            List<PkgUserRatingAggregate> pkgUserRatingAggregates = PkgUserRatingAggregate.findByPkg(context, pkg);
            List<Repository> repositories = Stream.concat(
                    pkgProminences.stream().map(PkgProminence::getRepository),
                    pkgUserRatingAggregates.stream().map(PkgUserRatingAggregate::getRepository))
                .distinct()
                .sorted()
                .collect(Collectors.toList());
            if (repositories.isEmpty()) {
                writer.writeNext(new String[] { pkg.getName(), "", "", "", "", "" });
            } else {
                for (Repository repository : repositories) {
                    Optional<PkgProminence> pkgProminenceOptional = pkgProminences.stream()
                            .filter(pp -> pp.getRepository().equals(repository))
                            .collect(SingleCollector.optional());
                    Optional<PkgUserRatingAggregate> pkgUserRatingAggregateOptional = pkgUserRatingAggregates.stream()
                            .filter(pura -> pura.getRepository().equals(repository))
                            .collect(SingleCollector.optional());
                    writer.writeNext(new String[] {
                            pkg.getName(),
                            repository.getCode(),
                            pkgProminenceOptional.map(p -> p.getProminence().getName()).orElse(""),
                            pkgProminenceOptional.map(p -> p.getProminence().getOrdering().toString()).orElse(""),
                            pkgUserRatingAggregateOptional.map(p -> p.getDerivedRating().toString()).orElse(""),
                            pkgUserRatingAggregateOptional.map(p -> p.getDerivedRatingSampleSize().toString()).orElse("")
                    });
                }
            }
            return true;
        });
        LOGGER.info("did produce prominence spreadsheet report for {} packages in {}ms", count, System.currentTimeMillis() - startMs);
    }
}
Also used : OutputStream(java.io.OutputStream) MediaType(com.google.common.net.MediaType) PkgService(org.haiku.haikudepotserver.pkg.model.PkgService) ObjectContext(org.apache.cayenne.ObjectContext) Logger(org.slf4j.Logger) SingleCollector(org.haiku.haikudepotserver.support.SingleCollector) PkgProminenceAndUserRatingSpreadsheetJobSpecification(org.haiku.haikudepotserver.pkg.model.PkgProminenceAndUserRatingSpreadsheetJobSpecification) AbstractJobRunner(org.haiku.haikudepotserver.job.AbstractJobRunner) LoggerFactory(org.slf4j.LoggerFactory) CSVWriter(com.opencsv.CSVWriter) PkgUserRatingAggregate(org.haiku.haikudepotserver.dataobjects.PkgUserRatingAggregate) IOException(java.io.IOException) Collectors(java.util.stream.Collectors) Component(org.springframework.stereotype.Component) List(java.util.List) PkgProminence(org.haiku.haikudepotserver.dataobjects.PkgProminence) Stream(java.util.stream.Stream) Repository(org.haiku.haikudepotserver.dataobjects.Repository) JobDataWithByteSink(org.haiku.haikudepotserver.job.model.JobDataWithByteSink) OutputStreamWriter(java.io.OutputStreamWriter) Optional(java.util.Optional) Preconditions(com.google.common.base.Preconditions) JobService(org.haiku.haikudepotserver.job.model.JobService) ServerRuntime(org.apache.cayenne.configuration.server.ServerRuntime)
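The runner merges the repositories referenced by the prominence and user-rating rows into one distinct, sorted list via Stream.concat. A standalone sketch of that merge idiom, with plain strings standing in for Repository (which is Comparable in the real code):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class RepositoryMergeSketch {

    // Mirrors the Stream.concat(...).distinct().sorted() idiom from the runner;
    // strings stand in for the Repository data objects.
    static List<String> mergeDistinctSorted(List<String> prominenceRepos,
                                            List<String> ratingRepos) {
        return Stream.concat(prominenceRepos.stream(), ratingRepos.stream())
                .distinct()
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "haikuports" appears in both inputs but only once in the output
        List<String> merged = mergeDistinctSorted(
                List.of("haikuports", "besly"),
                List.of("haikuports", "clasqm"));
        System.out.println(merged); // [besly, clasqm, haikuports]
    }
}
```

The distinct-then-sorted ordering guarantees one spreadsheet row per package/repository pair in a stable order.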

Example 2 with JobService

Use of org.haiku.haikudepotserver.job.model.JobService in project haikudepotserver by haiku.

The class PkgScreenshotImportArchiveJobRunner, method run:

@Override
public void run(JobService jobService, PkgScreenshotImportArchiveJobSpecification specification) throws IOException, JobRunnerException {
    Preconditions.checkArgument(null != jobService);
    Preconditions.checkArgument(null != specification);
    Preconditions.checkArgument(null != specification.getInputDataGuid(), "missing input data guid on specification");
    Preconditions.checkArgument(null != specification.getImportStrategy(), "missing import strategy on specification");
    // this will register the outbound data against the job.
    JobDataWithByteSink jobDataWithByteSink = jobService.storeGeneratedData(specification.getGuid(), "download", MediaType.CSV_UTF_8.toString());
    Optional<JobDataWithByteSource> jobDataWithByteSourceOptional = jobService.tryObtainData(specification.getInputDataGuid());
    if (!jobDataWithByteSourceOptional.isPresent()) {
        throw new IllegalStateException("the job data was not able to be found for guid; " + specification.getInputDataGuid());
    }
    if (!serverRuntime.performInTransaction(() -> {
        try (OutputStream outputStream = jobDataWithByteSink.getByteSink().openBufferedStream();
            OutputStreamWriter outputStreamWriter = new OutputStreamWriter(outputStream);
            CSVWriter writer = new CSVWriter(outputStreamWriter, ',')) {
            Map<String, ScreenshotImportMetadatas> metadatas = new HashMap<>();
            writer.writeNext(new String[] { "path", "pkg-name", "action", "message", "code" });
            // sweep through and collect meta-data about the packages in the tar file.
            LOGGER.info("will collect data about packages' screenshots from the archive");
            consumeScreenshotArchiveEntries(
                    jobDataWithByteSourceOptional.get().getByteSource(),
                    (ae) -> collectScreenshotMetadataFromArchive(
                            metadatas,
                            ae.getArchiveInputStream(),
                            ae.getArchiveEntry(),
                            ae.getPkgName(),
                            ae.getOrder()));
            LOGGER.info("did collect data about {} packages' screenshots from the archive", metadatas.size());
            LOGGER.info("will collect data about persisted packages' screenshots");
            collectPersistedScreenshotMetadata(metadatas);
            LOGGER.info("did collect data about persisted packages' screenshots");
            if (specification.getImportStrategy() == PkgScreenshotImportArchiveJobSpecification.ImportStrategy.REPLACE) {
                LOGGER.info("will delete persisted screenshots that are absent from the archive");
                int deleted = deletePersistedScreenshotsThatAreNotPresentInArchiveAndReport(writer, metadatas.values());
                LOGGER.info("did delete {} persisted screenshots that are absent from the archive", deleted);
            }
            blendInArtificialOrderings(metadatas.values());
            // sweep through the archive again and load in those screenshots that are not already present.
            // The ordering of the inbound data should be preserved.
            LOGGER.info("will load screenshots from archive for {} packages", metadatas.size());
            consumeScreenshotArchiveEntries(
                    jobDataWithByteSourceOptional.get().getByteSource(),
                    (ae) -> importScreenshotsFromArchiveAndReport(
                            writer,
                            metadatas.get(ae.getPkgName()),
                            ae.getArchiveInputStream(),
                            ae.getArchiveEntry(),
                            ae.getPkgName(),
                            ae.getOrder()));
            LOGGER.info("did load screenshots from archive for {} packages", metadatas.size());
            return true;
        } catch (IOException e) {
            LOGGER.error("unable to complete the job", e);
        }
        return false;
    })) {
        throw new JobRunnerException("unable to complete job");
    }
}
Also used : ObjectContext(org.apache.cayenne.ObjectContext) java.util(java.util) GZIPInputStream(java.util.zip.GZIPInputStream) ArchiveEntry(org.apache.commons.compress.archivers.ArchiveEntry) TarArchiveInputStream(org.apache.commons.compress.archivers.tar.TarArchiveInputStream) PkgScreenshotImage(org.haiku.haikudepotserver.dataobjects.PkgScreenshotImage) PkgScreenshotService(org.haiku.haikudepotserver.pkg.model.PkgScreenshotService) LoggerFactory(org.slf4j.LoggerFactory) Hashing(com.google.common.hash.Hashing) HashingInputStream(com.google.common.hash.HashingInputStream) BadPkgScreenshotException(org.haiku.haikudepotserver.pkg.model.BadPkgScreenshotException) Matcher(java.util.regex.Matcher) JobDataWithByteSource(org.haiku.haikudepotserver.job.model.JobDataWithByteSource) JobDataWithByteSink(org.haiku.haikudepotserver.job.model.JobDataWithByteSink) ArchiveInputStream(org.apache.commons.compress.archivers.ArchiveInputStream) JobService(org.haiku.haikudepotserver.job.model.JobService) ByteSource(com.google.common.io.ByteSource) MediaType(com.google.common.net.MediaType) Pkg(org.haiku.haikudepotserver.dataobjects.Pkg) Logger(org.slf4j.Logger) HashCode(com.google.common.hash.HashCode) AbstractJobRunner(org.haiku.haikudepotserver.job.AbstractJobRunner) CSVWriter(com.opencsv.CSVWriter) PkgScreenshotImportArchiveJobSpecification(org.haiku.haikudepotserver.pkg.model.PkgScreenshotImportArchiveJobSpecification) PkgScreenshot(org.haiku.haikudepotserver.dataobjects.PkgScreenshot) Consumer(java.util.function.Consumer) Component(org.springframework.stereotype.Component) java.io(java.io) ByteStreams(com.google.common.io.ByteStreams) Preconditions(com.google.common.base.Preconditions) Pattern(java.util.regex.Pattern) HashFunction(com.google.common.hash.HashFunction) JobRunnerException(org.haiku.haikudepotserver.job.model.JobRunnerException) ServerRuntime(org.apache.cayenne.configuration.server.ServerRuntime)
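Note the control flow here: the transactional body catches IOException, logs it, and returns false, and the outer runner converts that false into a JobRunnerException. A minimal standalone sketch of that pattern, with a hypothetical inTransaction helper and JobFailedException standing in for serverRuntime.performInTransaction and JobRunnerException:

```java
import java.util.function.Supplier;

public class TransactionSketch {

    // Hypothetical stand-in for JobRunnerException.
    static class JobFailedException extends Exception {
        JobFailedException(String message) { super(message); }
    }

    // Hypothetical stand-in for serverRuntime.performInTransaction; here it
    // simply evaluates the body and reports its boolean outcome.
    static boolean inTransaction(Supplier<Boolean> body) {
        return body.get();
    }

    // Mirrors the runner's shape: the transactional body returns false when a
    // failure was caught, and the caller escalates that to a checked exception.
    static void runJob(boolean simulateIoFailure) throws JobFailedException {
        if (!inTransaction(() -> {
            if (simulateIoFailure) {
                // in the real runner: LOGGER.error("unable to complete the job", e)
                return false;
            }
            return true;
        })) {
            throw new JobFailedException("unable to complete job");
        }
    }

    // Convenience probe: did the job complete without an exception?
    static boolean completed(boolean simulateIoFailure) {
        try {
            runJob(simulateIoFailure);
            return true;
        } catch (JobFailedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("clean run completed: " + completed(false));
        System.out.println("failing run completed: " + completed(true));
    }
}
```

Returning false rather than rethrowing inside the lambda keeps the exception handling at one place in the runner while still letting the transaction roll back.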

Example 3 with JobService

Use of org.haiku.haikudepotserver.job.model.JobService in project haikudepotserver by haiku.

The class PkgVersionPayloadLengthPopulationJobRunner, method run:

@Override
public void run(JobService jobService, PkgVersionPayloadLengthPopulationJobSpecification specification) throws IOException {
    Preconditions.checkArgument(null != jobService);
    Preconditions.checkArgument(null != specification);
    ObjectContext context = serverRuntime.newContext();
    // we want to fetch the ObjectIds of PkgVersions that need to be handled.
    List<PkgVersion> pkgVersions = ObjectSelect.query(PkgVersion.class)
            .where(PkgVersion.ACTIVE.isTrue())
            .and(PkgVersion.PKG.dot(Pkg.ACTIVE).isTrue())
            .and(PkgVersion.IS_LATEST.isTrue())
            .and(PkgVersion.PAYLOAD_LENGTH.isNull())
            .pageSize(50)
            .select(context);
    LOGGER.info("did find {} package versions that need payload lengths to be populated", pkgVersions.size());
    for (int i = 0; i < pkgVersions.size(); i++) {
        PkgVersion pkgVersion = pkgVersions.get(i);
        Optional<URL> urlOptional = pkgVersion.tryGetHpkgURL(ExposureType.INTERNAL_FACING);
        if (urlOptional.isPresent()) {
            try {
                urlHelperService.tryGetPayloadLength(urlOptional.get()).filter(l -> l > 0L).ifPresent(l -> {
                    pkgVersion.setPayloadLength(l);
                    context.commitChanges();
                });
            } catch (IOException ioe) {
                LOGGER.error("unable to get the payload length for " + pkgVersion.toString(), ioe);
            }
        } else {
            LOGGER.info("unable to get the length of [{}] because no hpkg url was able to be obtained", pkgVersion);
        }
        jobService.setJobProgressPercent(specification.getGuid(), i * 100 / pkgVersions.size());
    }
}
Also used : ObjectContext(org.apache.cayenne.ObjectContext) Pkg(org.haiku.haikudepotserver.dataobjects.Pkg) Logger(org.slf4j.Logger) URL(java.net.URL) AbstractJobRunner(org.haiku.haikudepotserver.job.AbstractJobRunner) PkgVersionPayloadLengthPopulationJobSpecification(org.haiku.haikudepotserver.pkg.model.PkgVersionPayloadLengthPopulationJobSpecification) LoggerFactory(org.slf4j.LoggerFactory) IOException(java.io.IOException) ExposureType(org.haiku.haikudepotserver.support.ExposureType) PkgVersion(org.haiku.haikudepotserver.dataobjects.PkgVersion) URLHelperService(org.haiku.haikudepotserver.support.URLHelperService) Component(org.springframework.stereotype.Component) List(java.util.List) Optional(java.util.Optional) Preconditions(com.google.common.base.Preconditions) ObjectSelect(org.apache.cayenne.query.ObjectSelect) JobService(org.haiku.haikudepotserver.job.model.JobService) ServerRuntime(org.apache.cayenne.configuration.server.ServerRuntime)
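The progress reporting in this runner relies on integer arithmetic: i * 100 / pkgVersions.size() truncates toward zero, so the first item reports 0% and the last reports just under 100% until the loop finishes. A small sketch of the same calculation that setJobProgressPercent receives:

```java
public class ProgressSketch {

    // Same integer division as the argument passed to setJobProgressPercent.
    static int progressPercent(int index, int total) {
        return index * 100 / total;
    }

    public static void main(String[] args) {
        int total = 8;
        for (int i = 0; i < total; i++) {
            // e.g. item 7 of 8 reports 87%, not 87.5% or 100%
            System.out.println("item " + i + " -> " + progressPercent(i, total) + "%");
        }
    }
}
```

Reporting before processing each item (as the runner does, at the end of each iteration with the current index) means 100% is only implied by the job completing, which is a reasonable convention for a progress bar.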

Example 4 with JobService

Use of org.haiku.haikudepotserver.job.model.JobService in project haikudepotserver by haiku.

The class RepositoryHpkrIngressJobRunner, method run:

@Override
public void run(JobService jobService, RepositoryHpkrIngressJobSpecification specification) {
    Preconditions.checkNotNull(specification);
    ObjectContext mainContext = serverRuntime.newContext();
    Set<String> allowedRepositorySourceCodes = specification.getRepositorySourceCodes();
    RepositorySource.findActiveByRepository(
            mainContext,
            Repository.getByCode(mainContext, specification.getRepositoryCode()))
        .stream()
        .filter(rs -> null == allowedRepositorySourceCodes || allowedRepositorySourceCodes.contains(rs.getCode()))
        .forEach(rs -> serverRuntime.performInTransaction(() -> {
        try {
            runForRepositorySource(mainContext, rs);
        } catch (Throwable e) {
            LOGGER.error("a problem has arisen processing a repository file for repository source [{}]", rs.getCode(), e);
        }
        return null;
    }));
}
Also used : PkgService(org.haiku.haikudepotserver.pkg.model.PkgService) ObjectContext(org.apache.cayenne.ObjectContext) URL(java.net.URL) LoggerFactory(org.slf4j.LoggerFactory) DriverSettings(org.haiku.driversettings.DriverSettings) StringUtils(org.apache.commons.lang3.StringUtils) Value(org.springframework.beans.factory.annotation.Value) ObjectUtils(org.apache.commons.lang3.ObjectUtils) Pkg(org.haiku.pkg.model.Pkg) JobService(org.haiku.haikudepotserver.job.model.JobService) PkgImportService(org.haiku.haikudepotserver.pkg.model.PkgImportService) Charsets(com.google.common.base.Charsets) Logger(org.slf4j.Logger) RepositorySource(org.haiku.haikudepotserver.dataobjects.RepositorySource) RepositoryHpkrIngressException(org.haiku.haikudepotserver.repository.model.RepositoryHpkrIngressException) AbstractJobRunner(org.haiku.haikudepotserver.job.AbstractJobRunner) Architecture(org.haiku.haikudepotserver.dataobjects.Architecture) Set(java.util.Set) IOException(java.io.IOException) FileInputStream(java.io.FileInputStream) InputStreamReader(java.io.InputStreamReader) Sets(com.google.common.collect.Sets) FileHelper(org.haiku.haikudepotserver.support.FileHelper) File(java.io.File) RepositoryHpkrIngressJobSpecification(org.haiku.haikudepotserver.repository.model.RepositoryHpkrIngressJobSpecification) Parameter(org.haiku.driversettings.Parameter) Objects(java.util.Objects) TimeUnit(java.util.concurrent.TimeUnit) Component(org.springframework.stereotype.Component) List(java.util.List) Repository(org.haiku.haikudepotserver.dataobjects.Repository) DriverSettingsException(org.haiku.driversettings.DriverSettingsException) HpkrFileExtractor(org.haiku.pkg.HpkrFileExtractor) Optional(java.util.Optional) Preconditions(com.google.common.base.Preconditions) BufferedReader(java.io.BufferedReader) ServerRuntime(org.apache.cayenne.configuration.server.ServerRuntime) PkgIterator(org.haiku.pkg.PkgIterator) InputStream(java.io.InputStream)
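The filter in this runner treats a null set of allowed repository source codes as "no restriction": every active source is processed unless the specification names a subset. A plain-Java sketch of that selection logic, with strings standing in for RepositorySource:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SourceFilterSketch {

    // null allowedCodes means every source is eligible, mirroring the runner's
    // `null == allowedRepositorySourceCodes || allowedRepositorySourceCodes.contains(...)`.
    static List<String> selectSources(List<String> sourceCodes, Set<String> allowedCodes) {
        return sourceCodes.stream()
                .filter(code -> null == allowedCodes || allowedCodes.contains(code))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // hypothetical source codes for illustration
        List<String> all = List.of("haikuports_x86_64", "haikuports_x86_gcc2");
        System.out.println(selectSources(all, null));                        // both pass
        System.out.println(selectSources(all, Set.of("haikuports_x86_64"))); // one passes
    }
}
```

Putting the null check first inside the predicate keeps a single stream pipeline instead of branching before the loop.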

Example 5 with JobService

Use of org.haiku.haikudepotserver.job.model.JobService in project haikudepotserver by haiku.

The class AuthorizationRulesSpreadsheetJobRunner, method run:

@Override
public void run(JobService jobService, AuthorizationRulesSpreadsheetJobSpecification specification) throws IOException, JobRunnerException {
    final ObjectContext context = serverRuntime.newContext();
    DateTimeFormatter dateTimeFormatter = DateTimeHelper.createStandardDateTimeFormat();
    // this will register the outbound data against the job.
    JobDataWithByteSink jobDataWithByteSink = jobService.storeGeneratedData(specification.getGuid(), "download", MediaType.CSV_UTF_8.toString());
    try (OutputStream outputStream = jobDataWithByteSink.getByteSink().openBufferedStream();
        OutputStreamWriter outputStreamWriter = new OutputStreamWriter(outputStream);
        CSVWriter writer = new CSVWriter(outputStreamWriter, ',')) {
        writer.writeNext(new String[] { "create-timestamp", "user-nickname", "user-active", "permission-code", "permission-name", "pkg-name" });
        ObjectSelect<PermissionUserPkg> objectSelect = ObjectSelect.query(PermissionUserPkg.class)
                .orderBy(
                        PermissionUserPkg.USER.dot(User.NICKNAME).asc(),
                        PermissionUserPkg.PERMISSION.dot(Permission.CODE).asc());
        try (ResultBatchIterator<PermissionUserPkg> batchIterator = objectSelect.batchIterator(context, 50)) {
            batchIterator.forEach((pups) -> pups.forEach((pup) -> writer.writeNext(new String[] {
                    dateTimeFormatter.format(Instant.ofEpochMilli(pup.getCreateTimestamp().getTime())),
                    pup.getUser().getNickname(),
                    Boolean.toString(pup.getUser().getActive()),
                    pup.getPermission().getCode(),
                    pup.getPermission().getName(),
                    null != pup.getPkg() ? pup.getPkg().getName() : ""
            })));
        }
        writer.flush();
        outputStreamWriter.flush();
    }
}
Also used : JobDataWithByteSink(org.haiku.haikudepotserver.job.model.JobDataWithByteSink) OutputStream(java.io.OutputStream) MediaType(com.google.common.net.MediaType) ObjectContext(org.apache.cayenne.ObjectContext) AbstractJobRunner(org.haiku.haikudepotserver.job.AbstractJobRunner) Resource(javax.annotation.Resource) CSVWriter(com.opencsv.CSVWriter) ResultBatchIterator(org.apache.cayenne.ResultBatchIterator) IOException(java.io.IOException) Instant(java.time.Instant) AuthorizationRulesSpreadsheetJobSpecification(org.haiku.haikudepotserver.security.model.AuthorizationRulesSpreadsheetJobSpecification) Component(org.springframework.stereotype.Component) Permission(org.haiku.haikudepotserver.dataobjects.Permission) DateTimeFormatter(java.time.format.DateTimeFormatter) PermissionUserPkg(org.haiku.haikudepotserver.dataobjects.PermissionUserPkg) OutputStreamWriter(java.io.OutputStreamWriter) Preconditions(com.google.common.base.Preconditions) ObjectSelect(org.apache.cayenne.query.ObjectSelect) User(org.haiku.haikudepotserver.dataobjects.User) JobService(org.haiku.haikudepotserver.job.model.JobService) JobRunnerException(org.haiku.haikudepotserver.job.model.JobRunnerException) ServerRuntime(org.apache.cayenne.configuration.server.ServerRuntime) DateTimeHelper(org.haiku.haikudepotserver.support.DateTimeHelper)
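ResultBatchIterator hands the query results to the CSV writer in batches (here of 50) rather than materialising every PermissionUserPkg row at once. The batching semantics can be sketched over a plain list, with partition as a hypothetical helper written for this illustration (not a haikudepotserver or Cayenne API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {

    // Hypothetical helper mirroring ResultBatchIterator's chunking semantics:
    // consecutive slices of at most batchSize elements, last one possibly short.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 120; i++) {
            rows.add(i);
        }
        // 120 rows in batches of 50 -> sizes 50, 50, 20
        partition(rows, 50).forEach(batch -> System.out.println(batch.size()));
    }
}
```

In the real runner the benefit is memory: only one batch of entities is resolved at a time while the writer streams rows out to the byte sink.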

Aggregations

Preconditions (com.google.common.base.Preconditions): 5 usages
ObjectContext (org.apache.cayenne.ObjectContext): 5 usages
ServerRuntime (org.apache.cayenne.configuration.server.ServerRuntime): 5 usages
AbstractJobRunner (org.haiku.haikudepotserver.job.AbstractJobRunner): 5 usages
JobService (org.haiku.haikudepotserver.job.model.JobService): 5 usages
Component (org.springframework.stereotype.Component): 5 usages
IOException (java.io.IOException): 4 usages
Logger (org.slf4j.Logger): 4 usages
LoggerFactory (org.slf4j.LoggerFactory): 4 usages
MediaType (com.google.common.net.MediaType): 3 usages
CSVWriter (com.opencsv.CSVWriter): 3 usages
List (java.util.List): 3 usages
Optional (java.util.Optional): 3 usages
JobDataWithByteSink (org.haiku.haikudepotserver.job.model.JobDataWithByteSink): 3 usages
OutputStream (java.io.OutputStream): 2 usages
OutputStreamWriter (java.io.OutputStreamWriter): 2 usages
URL (java.net.URL): 2 usages
Pkg (org.haiku.haikudepotserver.dataobjects.Pkg): 2 usages
Repository (org.haiku.haikudepotserver.dataobjects.Repository): 2 usages
PkgService (org.haiku.haikudepotserver.pkg.model.PkgService): 2 usages