
Example 1 with PipelineJob

Use of gov.cms.bfd.pipeline.sharedutils.PipelineJob in project beneficiary-fhir-data by CMSgov.

From the class DirectRdaLoadApp, method main:

public static void main(String[] args) throws Exception {
    if (args.length != 2) {
        System.err.printf("usage: %s configfile claimType%n", DirectRdaLoadApp.class.getSimpleName());
        System.exit(1);
    }
    final ConfigLoader options =
        ConfigLoader.builder()
            .addPropertiesFile(new File(args[0]))
            .addSystemProperties()
            .build();
    final String claimType = Strings.nullToEmpty(args[1]);
    final MetricRegistry metrics = new MetricRegistry();
    final Slf4jReporter reporter =
        Slf4jReporter.forRegistry(metrics)
            .outputTo(LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME))
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build();
    reporter.start(5, TimeUnit.SECONDS);
    final RdaLoadOptions jobConfig = readRdaLoadOptionsFromProperties(options);
    final DatabaseOptions databaseConfig = readDatabaseOptions(options, jobConfig.getJobConfig().getWriteThreads());
    HikariDataSource pooledDataSource = PipelineApplicationState.createPooledDataSource(databaseConfig, metrics);
    System.out.printf("thread count is %d%n", jobConfig.getJobConfig().getWriteThreads());
    System.out.printf("database pool size %d%n", pooledDataSource.getMaximumPoolSize());
    DatabaseSchemaManager.createOrUpdateSchema(pooledDataSource);
    try (PipelineApplicationState appState =
        new PipelineApplicationState(
            metrics,
            pooledDataSource,
            PipelineApplicationState.RDA_PERSISTENCE_UNIT_NAME,
            Clock.systemUTC())) {
        final Optional<PipelineJob<?>> job = createPipelineJob(jobConfig, appState, claimType);
        if (!job.isPresent()) {
            System.err.printf("error: invalid claim type: '%s' expected 'fiss' or 'mcs'%n", claimType);
            System.exit(1);
        }
        try {
            job.get().call();
        } finally {
            reporter.report();
        }
    }
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState), PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob), HikariDataSource (com.zaxxer.hikari.HikariDataSource), ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader), MetricRegistry (com.codahale.metrics.MetricRegistry), Slf4jReporter (com.codahale.metrics.Slf4jReporter), DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions), File (java.io.File)
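
The body of createPipelineJob is not shown here. A minimal sketch of the claim-type dispatch it implies follows; the factory methods createFissClaimsLoadJob and createMcsClaimsLoadJob are assumed names used for illustration, not confirmed RdaLoadOptions API.

private static Optional<PipelineJob<?>> createPipelineJob(
        RdaLoadOptions jobConfig, PipelineApplicationState appState, String claimType) {
    // NOTE: the two factory calls below are assumptions; check RdaLoadOptions for the real method names.
    switch (claimType.toLowerCase()) {
        case "fiss":
            return Optional.of(jobConfig.createFissClaimsLoadJob(appState));
        case "mcs":
            return Optional.of(jobConfig.createMcsClaimsLoadJob(appState));
        default:
            return Optional.empty();
    }
}

Used as in main above, any claim type other than "fiss" or "mcs" yields Optional.empty() and the usage error message.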

Example 2 with PipelineJob

Use of gov.cms.bfd.pipeline.sharedutils.PipelineJob in project beneficiary-fhir-data by CMSgov.

From the class LoadRdaJsonApp, method main:

public static void main(String[] args) throws Exception {
    final ConfigLoader.Builder options = ConfigLoader.builder();
    if (args.length == 1) {
        options.addPropertiesFile(new File(args[0]));
    } else if (System.getProperty("config.properties", "").length() > 0) {
        options.addPropertiesFile(new File(System.getProperty("config.properties")));
    }
    options.addSystemProperties();
    final Config config = new Config(options.build());
    final MetricRegistry metrics = new MetricRegistry();
    final Slf4jReporter reporter =
        Slf4jReporter.forRegistry(metrics)
            .outputTo(LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME))
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build();
    reporter.start(5, TimeUnit.SECONDS);
    try {
        logger.info("starting RDA API local server");
        RdaServer.LocalConfig.builder()
            .fissSourceFactory(config::createFissClaimsSource)
            .mcsSourceFactory(config::createMcsClaimsSource)
            .build()
            .runWithPortParam(port -> {
            final RdaLoadOptions jobConfig = config.createRdaLoadOptions(port);
            final DatabaseOptions databaseConfig = config.createDatabaseOptions();
            final HikariDataSource pooledDataSource = PipelineApplicationState.createPooledDataSource(databaseConfig, metrics);
            if (config.runSchemaMigration) {
                logger.info("running database migration");
                DatabaseSchemaManager.createOrUpdateSchema(pooledDataSource);
            }
            try (PipelineApplicationState appState =
                new PipelineApplicationState(
                    metrics,
                    pooledDataSource,
                    PipelineApplicationState.RDA_PERSISTENCE_UNIT_NAME,
                    Clock.systemUTC())) {
                final List<PipelineJob<?>> jobs = config.createPipelineJobs(jobConfig, appState);
                for (PipelineJob<?> job : jobs) {
                    logger.info("starting job {}", job.getClass().getSimpleName());
                    job.call();
                }
            }
        });
    } finally {
        reporter.report();
        reporter.close();
    }
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), HikariDataSource (com.zaxxer.hikari.HikariDataSource), ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader), MetricRegistry (com.codahale.metrics.MetricRegistry), PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState), PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob), Slf4jReporter (com.codahale.metrics.Slf4jReporter), DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions), File (java.io.File)
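
The runWithPortParam call above runs the local RDA API server, hands its port to the callback, and shuts the server down once the callback returns. A self-contained sketch of that run-with-port pattern is below; it uses a plain ServerSocket as a simplified stand-in and is not the real RdaServer API.

import java.net.ServerSocket;

public class RunWithPortExample {

    @FunctionalInterface
    interface PortCallback {
        void accept(int port) throws Exception;
    }

    /** Opens a socket on an ephemeral port, invokes the callback with it, and always releases the port. */
    static void runWithPort(PortCallback callback) throws Exception {
        try (ServerSocket socket = new ServerSocket(0)) {
            callback.accept(socket.getLocalPort());
        }
    }

    public static void main(String[] args) throws Exception {
        runWithPort(port -> System.out.printf("callback received ephemeral port %d%n", port));
    }
}

The try-with-resources guarantees the port is released even if the callback throws, mirroring how the example closes PipelineApplicationState.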

Example 3 with PipelineJob

Use of gov.cms.bfd.pipeline.sharedutils.PipelineJob in project beneficiary-fhir-data by CMSgov.

From the class SchedulerJob, method call:

/**
 * @see gov.cms.bfd.pipeline.sharedutils.PipelineJob#call()
 */
@Override
public PipelineJobOutcome call() throws Exception {
    boolean scheduledAJob = false;
    while (true) {
        try (Timer.Context timer =
            appMetrics
                .timer(MetricRegistry.name(getClass().getSimpleName(), "call", "iteration"))
                .time()) {
            Instant now = Instant.now();
            Set<PipelineJob<NullPipelineJobArguments>> scheduledJobs = pipelineManager.getScheduledJobs();
            for (PipelineJob<NullPipelineJobArguments> scheduledJob : scheduledJobs) {
                PipelineJobSchedule jobSchedule = scheduledJob.getSchedule().get();
                Optional<PipelineJobRecord<NullPipelineJobArguments>> mostRecentExecution =
                    jobRecordsStore.findMostRecent(scheduledJob.getType());
                /* Decide whether we should trigger the next execution of this job. */
                boolean shouldTriggerJob;
                if (!mostRecentExecution.isPresent()) {
                    // If the job has never run, we'll always trigger it now, regardless of schedule.
                    shouldTriggerJob = true;
                } else {
                    if (!mostRecentExecution.get().isCompleted()) {
                        // If the job's still pending or running, don't double-trigger it.
                        shouldTriggerJob = false;
                    } else {
                        if (mostRecentExecution.get().isCompletedSuccessfully()) {
                            // If the job's not running, check to see if it's time to trigger it again.
                            // Note: This calculation is based on the job's start time, not its submission or completion time.
                            Instant nextExecution =
                                mostRecentExecution
                                    .get()
                                    .getStartedTime()
                                    .get()
                                    .plus(jobSchedule.getRepeatDelay(), jobSchedule.getRepeatDelayUnit());
                            shouldTriggerJob = now.equals(nextExecution) || now.isAfter(nextExecution);
                        } else {
                            // We don't re-run failed jobs.
                            shouldTriggerJob = false;
                        }
                    }
                }
                // If we shouldn't trigger this job, move on to the next.
                if (!shouldTriggerJob) {
                    continue;
                }
                // Trigger the job (for future execution, when VolunteerJob picks it up)!
                jobRecordsStore.submitPendingJob(scheduledJob.getType(), null);
                scheduledAJob = true;
            }
        }
        try {
            Thread.sleep(SCHEDULER_TICK_MILLIS);
        } catch (InterruptedException e) {
            /*
             * Jobs are only interrupted/cancelled as part of application shutdown, so when
             * encountered, we'll break out of our scheduling loop and close up shop here.
             */
            break;
        }
    }
    /*
     * Did we schedule at least one job? If we ever move to an autoscaled version of this
     * application, it will be important to ensure that we "collude" with the PipelineJobRecordStore
     * to ignore this PipelineJobOutcome and ensure that the record doesn't get marked as completed,
     * even when the application shuts down. (If that happened, then scheduled triggers would stop
     * firing.)
     */
    return scheduledAJob ? PipelineJobOutcome.WORK_DONE : PipelineJobOutcome.NOTHING_TO_DO;
}
Also used: PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob), Timer (com.codahale.metrics.Timer), PipelineJobSchedule (gov.cms.bfd.pipeline.sharedutils.PipelineJobSchedule), Instant (java.time.Instant), NullPipelineJobArguments (gov.cms.bfd.pipeline.sharedutils.NullPipelineJobArguments), PipelineJobRecord (gov.cms.bfd.pipeline.sharedutils.jobs.store.PipelineJobRecord)
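
The scheduling decision above reduces to: take the start time of the most recent successful run, add the schedule's repeat delay, and trigger once the current time reaches that instant. A standalone version of the calculation, using only java.time and example inputs (not values from the project), is below.

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TriggerCalcExample {

    /** Mirrors the nextExecution comparison in SchedulerJob.call() with plain java.time types. */
    static boolean shouldTriggerAgain(Instant now, Instant lastStart, long repeatDelay, ChronoUnit repeatDelayUnit) {
        Instant nextExecution = lastStart.plus(repeatDelay, repeatDelayUnit);
        // Equivalent to now.equals(nextExecution) || now.isAfter(nextExecution)
        return !now.isBefore(nextExecution);
    }

    public static void main(String[] args) {
        Instant lastStart = Instant.parse("2024-01-01T00:00:00Z");
        System.out.println(shouldTriggerAgain(Instant.parse("2024-01-01T00:04:00Z"), lastStart, 5, ChronoUnit.MINUTES)); // false
        System.out.println(shouldTriggerAgain(Instant.parse("2024-01-01T00:05:00Z"), lastStart, 5, ChronoUnit.MINUTES)); // true
    }
}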

Aggregations

PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob): 3
MetricRegistry (com.codahale.metrics.MetricRegistry): 2
Slf4jReporter (com.codahale.metrics.Slf4jReporter): 2
HikariDataSource (com.zaxxer.hikari.HikariDataSource): 2
RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions): 2
DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions): 2
PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState): 2
ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader): 2
File (java.io.File): 2
Timer (com.codahale.metrics.Timer): 1
NullPipelineJobArguments (gov.cms.bfd.pipeline.sharedutils.NullPipelineJobArguments): 1
PipelineJobSchedule (gov.cms.bfd.pipeline.sharedutils.PipelineJobSchedule): 1
PipelineJobRecord (gov.cms.bfd.pipeline.sharedutils.jobs.store.PipelineJobRecord): 1
Instant (java.time.Instant): 1