Example 1 with RdaLoadOptions

Use of gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions in project beneficiary-fhir-data by CMSgov.

From the class DirectRdaLoadApp, the main method:

public static void main(String[] args) throws Exception {
    if (args.length != 2) {
        System.err.printf("usage: %s configfile claimType%n", DirectRdaLoadApp.class.getSimpleName());
        System.exit(1);
    }
    final ConfigLoader options =
        ConfigLoader.builder()
            .addPropertiesFile(new File(args[0]))
            .addSystemProperties()
            .build();
    final String claimType = Strings.nullToEmpty(args[1]);
    final MetricRegistry metrics = new MetricRegistry();
    final Slf4jReporter reporter =
        Slf4jReporter.forRegistry(metrics)
            .outputTo(LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME))
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build();
    reporter.start(5, TimeUnit.SECONDS);
    final RdaLoadOptions jobConfig = readRdaLoadOptionsFromProperties(options);
    final DatabaseOptions databaseConfig = readDatabaseOptions(options, jobConfig.getJobConfig().getWriteThreads());
    HikariDataSource pooledDataSource = PipelineApplicationState.createPooledDataSource(databaseConfig, metrics);
    System.out.printf("thread count is %d%n", jobConfig.getJobConfig().getWriteThreads());
    System.out.printf("database pool size %d%n", pooledDataSource.getMaximumPoolSize());
    DatabaseSchemaManager.createOrUpdateSchema(pooledDataSource);
    try (PipelineApplicationState appState = new PipelineApplicationState(metrics, pooledDataSource, PipelineApplicationState.RDA_PERSISTENCE_UNIT_NAME, Clock.systemUTC())) {
        final Optional<PipelineJob<?>> job = createPipelineJob(jobConfig, appState, claimType);
        if (!job.isPresent()) {
            System.err.printf("error: invalid claim type: '%s' expected 'fiss' or 'mcs'%n", claimType);
            System.exit(1);
        }
        try {
            job.get().call();
        } finally {
            reporter.report();
        }
    }
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState), PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob), HikariDataSource (com.zaxxer.hikari.HikariDataSource), ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader), MetricRegistry (com.codahale.metrics.MetricRegistry), Slf4jReporter (com.codahale.metrics.Slf4jReporter), DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions), File (java.io.File)
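The claim-type dispatch above (createPipelineJob returning an empty Optional for anything other than "fiss" or "mcs") can be sketched in isolation. The createJob helper and job names below are hypothetical stand-ins, not the actual pipeline classes:

```java
import java.util.Optional;

public class ClaimTypeDispatch {
    // Hypothetical stand-in for createPipelineJob: maps a claim type string to a
    // job name, or returns empty for anything other than "fiss" or "mcs".
    static Optional<String> createJob(String claimType) {
        switch (claimType) {
            case "fiss":
                return Optional.of("RdaFissClaimLoadJob");
            case "mcs":
                return Optional.of("RdaMcsClaimLoadJob");
            default:
                return Optional.empty();
        }
    }

    public static void main(String[] args) {
        String claimType = args.length > 0 ? args[0] : "";
        Optional<String> job = createJob(claimType);
        if (!job.isPresent()) {
            System.err.printf("error: invalid claim type: '%s' expected 'fiss' or 'mcs'%n", claimType);
            System.exit(1);
        }
        System.out.println("would run " + job.get());
    }
}
```

Returning Optional instead of throwing lets the caller decide how to report the bad input, which is exactly what DirectRdaLoadApp does before exiting with status 1.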

Example 2 with RdaLoadOptions

Use of gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions in project beneficiary-fhir-data by CMSgov.

From the class LoadRdaJsonApp, the main method:

public static void main(String[] args) throws Exception {
    final ConfigLoader.Builder options = ConfigLoader.builder();
    if (args.length == 1) {
        options.addPropertiesFile(new File(args[0]));
    } else if (System.getProperty("config.properties", "").length() > 0) {
        options.addPropertiesFile(new File(System.getProperty("config.properties")));
    }
    options.addSystemProperties();
    final Config config = new Config(options.build());
    final MetricRegistry metrics = new MetricRegistry();
    final Slf4jReporter reporter =
        Slf4jReporter.forRegistry(metrics)
            .outputTo(LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME))
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build();
    reporter.start(5, TimeUnit.SECONDS);
    try {
        logger.info("starting RDA API local server");
        RdaServer.LocalConfig.builder()
            .fissSourceFactory(config::createFissClaimsSource)
            .mcsSourceFactory(config::createMcsClaimsSource)
            .build()
            .runWithPortParam(port -> {
            final RdaLoadOptions jobConfig = config.createRdaLoadOptions(port);
            final DatabaseOptions databaseConfig = config.createDatabaseOptions();
            final HikariDataSource pooledDataSource = PipelineApplicationState.createPooledDataSource(databaseConfig, metrics);
            if (config.runSchemaMigration) {
                logger.info("running database migration");
                DatabaseSchemaManager.createOrUpdateSchema(pooledDataSource);
            }
            try (PipelineApplicationState appState = new PipelineApplicationState(metrics, pooledDataSource, PipelineApplicationState.RDA_PERSISTENCE_UNIT_NAME, Clock.systemUTC())) {
                final List<PipelineJob<?>> jobs = config.createPipelineJobs(jobConfig, appState);
                for (PipelineJob<?> job : jobs) {
                    logger.info("starting job {}", job.getClass().getSimpleName());
                    job.call();
                }
            }
        });
    } finally {
        reporter.report();
        reporter.close();
    }
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), HikariDataSource (com.zaxxer.hikari.HikariDataSource), ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader), MetricRegistry (com.codahale.metrics.MetricRegistry), PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState), PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob), Slf4jReporter (com.codahale.metrics.Slf4jReporter), DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions), File (java.io.File)
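Both example apps layer configuration sources, with later sources overriding earlier ones: a properties file first, then system properties. That precedence can be sketched with plain java.util.Properties; the merge helper below is an assumption for illustration, not ConfigLoader's actual implementation:

```java
import java.util.Properties;

public class ConfigLayering {
    // Later layers win: overlay values replace base values for the same key,
    // mirroring ConfigLoader.builder().addPropertiesFile(...).addSystemProperties().
    static Properties merge(Properties base, Properties overlay) {
        Properties merged = new Properties();
        merged.putAll(base);
        merged.putAll(overlay);
        return merged;
    }

    public static void main(String[] args) {
        Properties fileProps = new Properties();
        fileProps.setProperty("database.url", "jdbc:postgresql://localhost/fhirdb");
        fileProps.setProperty("writeThreads", "4");

        Properties systemProps = new Properties();
        systemProps.setProperty("writeThreads", "8"); // overrides the file value

        Properties merged = merge(fileProps, systemProps);
        System.out.println(merged.getProperty("writeThreads")); // prints "8"
    }
}
```

This layering is why LoadRdaJsonApp can accept the properties file either as a command-line argument or via the config.properties system property and still let individual system properties override file entries.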

Example 3 with RdaLoadOptions

Use of gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions in project beneficiary-fhir-data by CMSgov.

From the class PipelineApplication, the main method:

/**
 * This method is the one that will get called when users launch the application from the command
 * line.
 *
 * @param args (should be empty, as this application accepts configuration via environment
 *     variables)
 * @throws Exception any unhandled checked {@link Exception}s that are encountered will cause the
 *     application to halt
 */
public static void main(String[] args) throws Exception {
    LOGGER.info("Application starting up!");
    configureUnexpectedExceptionHandlers();
    AppConfiguration appConfig = null;
    try {
        appConfig = AppConfiguration.readConfigFromEnvironmentVariables();
        LOGGER.info("Application configured: '{}'", appConfig);
    } catch (AppConfigurationException e) {
        System.err.println(e.getMessage());
        LOGGER.warn("Invalid app configuration.", e);
        System.exit(EXIT_CODE_BAD_CONFIG);
    }
    MetricRegistry appMetrics = new MetricRegistry();
    appMetrics.registerAll(new MemoryUsageGaugeSet());
    appMetrics.registerAll(new GarbageCollectorMetricSet());
    Slf4jReporter appMetricsReporter = Slf4jReporter.forRegistry(appMetrics).outputTo(LOGGER).build();
    MetricOptions metricOptions = appConfig.getMetricOptions();
    if (metricOptions.getNewRelicMetricKey().isPresent()) {
        SenderConfiguration configuration =
            SenderConfiguration.builder(
                    metricOptions.getNewRelicMetricHost().orElse(null),
                    metricOptions.getNewRelicMetricPath().orElse(null))
                .httpPoster(new OkHttpPoster())
                .apiKey(metricOptions.getNewRelicMetricKey().orElse(null))
                .build();
        MetricBatchSender metricBatchSender = MetricBatchSender.create(configuration);
        Attributes commonAttributes =
            new Attributes()
                .put("host", metricOptions.getHostname().orElse("unknown"))
                .put("appName", metricOptions.getNewRelicAppName().orElse(null));
        NewRelicReporter newRelicReporter = NewRelicReporter.build(appMetrics, metricBatchSender).commonAttributes(commonAttributes).build();
        newRelicReporter.start(metricOptions.getNewRelicMetricPeriod().orElse(15), TimeUnit.SECONDS);
    }
    appMetricsReporter.start(1, TimeUnit.HOURS);
    /*
     * Create the PipelineManager that will be responsible for running and managing the various
     * jobs.
     */
    PipelineJobRecordStore jobRecordStore = new PipelineJobRecordStore(appMetrics);
    PipelineManager pipelineManager = new PipelineManager(appMetrics, jobRecordStore);
    registerShutdownHook(appMetrics, pipelineManager);
    LOGGER.info("Job processing started.");
    // Create a pooled data source for use by the DatabaseSchemaUpdateJob.
    final HikariDataSource pooledDataSource = PipelineApplicationState.createPooledDataSource(appConfig.getDatabaseOptions(), appMetrics);
    /*
     * Register and wait for the database schema job to run, so that we don't have to worry about
     * declaring it as a dependency (since it is for pretty much everything right now).
     */
    pipelineManager.registerJob(new DatabaseSchemaUpdateJob(pooledDataSource));
    PipelineJobRecord<NullPipelineJobArguments> dbSchemaJobRecord = jobRecordStore.submitPendingJob(DatabaseSchemaUpdateJob.JOB_TYPE, null);
    try {
        jobRecordStore.waitForJobs(dbSchemaJobRecord);
    } catch (InterruptedException e) {
        pooledDataSource.close();
        // Rethrow the original exception rather than discarding it.
        throw e;
    }
    /*
     * Create and register the other jobs.
     */
    if (appConfig.getCcwRifLoadOptions().isPresent()) {
        // Create an application state that reuses the existing pooled data source with the ccw/rif
        // persistence unit.
        final PipelineApplicationState appState = new PipelineApplicationState(appMetrics, pooledDataSource, PipelineApplicationState.PERSISTENCE_UNIT_NAME, Clock.systemUTC());
        pipelineManager.registerJob(createCcwRifLoadJob(appConfig.getCcwRifLoadOptions().get(), appState));
        LOGGER.info("Registered CcwRifLoadJob.");
    } else {
        LOGGER.warn("CcwRifLoadJob is disabled in app configuration.");
    }
    if (appConfig.getRdaLoadOptions().isPresent()) {
        LOGGER.info("RDA API jobs are enabled in app configuration.");
        // Create an application state that reuses the existing pooled data source with the rda
        // persistence unit.
        final PipelineApplicationState rdaAppState = new PipelineApplicationState(appMetrics, pooledDataSource, PipelineApplicationState.RDA_PERSISTENCE_UNIT_NAME, Clock.systemUTC());
        final RdaLoadOptions rdaLoadOptions = appConfig.getRdaLoadOptions().get();
        final Optional<RdaServerJob> mockServerJob = rdaLoadOptions.createRdaServerJob();
        if (mockServerJob.isPresent()) {
            pipelineManager.registerJob(mockServerJob.get());
            LOGGER.warn("Registered RdaServerJob.");
        } else {
            LOGGER.info("Skipping RdaServerJob registration - not enabled in app configuration.");
        }
        pipelineManager.registerJob(rdaLoadOptions.createFissClaimsLoadJob(rdaAppState));
        LOGGER.info("Registered RdaFissClaimLoadJob.");
        pipelineManager.registerJob(rdaLoadOptions.createMcsClaimsLoadJob(rdaAppState));
        LOGGER.info("Registered RdaMcsClaimLoadJob.");
    } else {
        LOGGER.info("RDA API jobs are not enabled in app configuration.");
    }
    /*
     * At this point, we're done here with the main thread. From now on, the PipelineManager's
     * executor service should be the only non-daemon thread running (and whatever it kicks off).
     * Once/if that thread stops, the application will run all registered shutdown hooks, wait
     * for the PipelineManager to stop running jobs, and then check to see if we should exit
     * normally with 0 or abnormally with a non-0 exit code because a job failed.
     */
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), PipelineJobRecordStore (gov.cms.bfd.pipeline.sharedutils.jobs.store.PipelineJobRecordStore), HikariDataSource (com.zaxxer.hikari.HikariDataSource), MetricRegistry (com.codahale.metrics.MetricRegistry), Attributes (com.newrelic.telemetry.Attributes), SenderConfiguration (com.newrelic.telemetry.SenderConfiguration), NullPipelineJobArguments (gov.cms.bfd.pipeline.sharedutils.NullPipelineJobArguments), MetricBatchSender (com.newrelic.telemetry.metrics.MetricBatchSender), PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState), MemoryUsageGaugeSet (com.codahale.metrics.jvm.MemoryUsageGaugeSet), NewRelicReporter (com.codahale.metrics.newrelic.NewRelicReporter), RdaServerJob (gov.cms.bfd.pipeline.rda.grpc.RdaServerJob), Slf4jReporter (com.codahale.metrics.Slf4jReporter), OkHttpPoster (com.newrelic.telemetry.OkHttpPoster), DatabaseSchemaUpdateJob (gov.cms.bfd.pipeline.sharedutils.databaseschema.DatabaseSchemaUpdateJob), GarbageCollectorMetricSet (com.codahale.metrics.jvm.GarbageCollectorMetricSet)
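PipelineApplication runs the schema-migration job to completion before registering the load jobs, rather than declaring it as a dependency of every other job. That gating can be sketched with a CountDownLatch; PipelineJobRecordStore.waitForJobs presumably does something richer, so treat this purely as an illustration of the ordering:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SchemaFirst {
    // Run a "schema migration" task and block until it completes before
    // submitting the remaining jobs, analogous to waitForJobs(dbSchemaJobRecord).
    static String runSchemaThenJob(ExecutorService executor) {
        try {
            CountDownLatch schemaDone = new CountDownLatch(1);
            executor.submit(() -> {
                // DatabaseSchemaUpdateJob stand-in: migrate, then signal completion.
                schemaDone.countDown();
            });
            schemaDone.await(); // nothing else starts until the schema is in place
            Future<String> loadJob = executor.submit(() -> "load job ran after migration");
            return loadJob.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            System.out.println(runSchemaThenJob(executor));
        } finally {
            executor.shutdown();
        }
    }
}
```

Blocking the main thread on the migration keeps every later job free of schema-readiness checks, at the cost of serializing startup on that one job.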

Example 4 with RdaLoadOptions

Use of gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions in project beneficiary-fhir-data by CMSgov.

From the class AppConfiguration, the readConfigFromEnvironmentVariables method:

/**
 * Per <code>/dev/design-decisions-readme.md</code>, this application accepts its configuration
 * via environment variables. Read those in, and build an {@link AppConfiguration} instance from
 * them.
 *
 * <p>As a convenience, this method will also verify that AWS credentials were provided, such that
 * {@link DefaultAWSCredentialsProviderChain} can load them. If not, an {@link
 * AppConfigurationException} will be thrown.
 *
 * @return the {@link AppConfiguration} instance represented by the configuration provided to this
 *     application via the environment variables
 * @throws AppConfigurationException An {@link AppConfigurationException} will be thrown if the
 *     configuration passed to the application is incomplete or incorrect.
 */
static AppConfiguration readConfigFromEnvironmentVariables() {
    int hicnHashIterations = readEnvIntPositiveRequired(ENV_VAR_KEY_HICN_HASH_ITERATIONS);
    byte[] hicnHashPepper = readEnvBytesRequired(ENV_VAR_KEY_HICN_HASH_PEPPER);
    int hicnHashCacheSize = readEnvIntOptional(ENV_VAR_KEY_HICN_HASH_CACHE_SIZE).orElse(DEFAULT_HICN_HASH_CACHE_SIZE);
    String databaseUrl = readEnvStringRequired(ENV_VAR_KEY_DATABASE_URL);
    String databaseUsername = readEnvStringRequired(ENV_VAR_KEY_DATABASE_USERNAME);
    String databasePassword = readEnvStringRequired(ENV_VAR_KEY_DATABASE_PASSWORD);
    int loaderThreads = readEnvIntPositiveRequired(ENV_VAR_KEY_LOADER_THREADS);
    boolean idempotencyRequired = readEnvBooleanRequired(ENV_VAR_KEY_IDEMPOTENCY_REQUIRED);
    boolean filteringNonNullAndNon2022Benes =
        readEnvBooleanOptional(ENV_VAR_KEY_RIF_FILTERING_NON_NULL_AND_NON_2022_BENES)
            .orElse(DEFAULT_RIF_FILTERING_NON_NULL_AND_NON_2022_BENES);
    Optional<String> newRelicMetricKey = readEnvStringOptional(ENV_VAR_NEW_RELIC_METRIC_KEY);
    Optional<String> newRelicAppName = readEnvStringOptional(ENV_VAR_NEW_RELIC_APP_NAME);
    Optional<String> newRelicMetricHost = readEnvStringOptional(ENV_VAR_NEW_RELIC_METRIC_HOST);
    Optional<String> newRelicMetricPath = readEnvStringOptional(ENV_VAR_NEW_RELIC_METRIC_PATH);
    Optional<Integer> newRelicMetricPeriod = readEnvIntOptional(ENV_VAR_NEW_RELIC_METRIC_PERIOD);
    /*
     * Note: For CcwRifLoadJob, databaseMaxPoolSize needs to be double the number of loader threads
     * when idempotent loads are being used. Apparently, the queries need a separate Connection?
     */
    Optional<Integer> databaseMaxPoolSize = readEnvIntOptional(ENV_VAR_KEY_DATABASE_MAX_POOL_SIZE);
    if (databaseMaxPoolSize.isPresent() && databaseMaxPoolSize.get() < 1) {
        throw new AppConfigurationException(
            String.format(
                "Invalid value for configuration environment variable '%s': '%s'",
                ENV_VAR_KEY_DATABASE_MAX_POOL_SIZE, databaseMaxPoolSize));
    }
    if (!databaseMaxPoolSize.isPresent()) {
        databaseMaxPoolSize = Optional.of(loaderThreads * 2);
    }
    Optional<String> hostname;
    try {
        hostname = Optional.of(InetAddress.getLocalHost().getHostName());
    } catch (UnknownHostException e) {
        hostname = Optional.empty();
    }
    MetricOptions metricOptions = new MetricOptions(newRelicMetricKey, newRelicAppName, newRelicMetricHost, newRelicMetricPath, newRelicMetricPeriod, hostname);
    DatabaseOptions databaseOptions = new DatabaseOptions(databaseUrl, databaseUsername, databasePassword, databaseMaxPoolSize.get());
    LoadAppOptions loadOptions =
        new LoadAppOptions(
            IdHasher.Config.builder()
                .hashIterations(hicnHashIterations)
                .hashPepper(hicnHashPepper)
                .cacheSize(hicnHashCacheSize)
                .build(),
            loaderThreads,
            idempotencyRequired,
            filteringNonNullAndNon2022Benes);
    CcwRifLoadOptions ccwRifLoadOptions = readCcwRifLoadOptionsFromEnvironmentVariables(loadOptions);
    RdaLoadOptions rdaLoadOptions = readRdaLoadOptionsFromEnvironmentVariables(loadOptions.getIdHasherConfig());
    return new AppConfiguration(metricOptions, databaseOptions, ccwRifLoadOptions, rdaLoadOptions);
}
Also used: RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions), UnknownHostException (java.net.UnknownHostException), LoadAppOptions (gov.cms.bfd.pipeline.ccw.rif.load.LoadAppOptions), CcwRifLoadOptions (gov.cms.bfd.pipeline.ccw.rif.CcwRifLoadOptions), DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions)
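The databaseMaxPoolSize handling above (reject configured values below 1, otherwise default to twice the loader threads) can be sketched in isolation. The lookup is parameterized on a function so it can be exercised without real environment variables; readIntOptional and poolSize below are hypothetical helpers, not the actual AppConfiguration methods:

```java
import java.util.Optional;
import java.util.function.Function;

public class PoolSizeConfig {
    // Hypothetical analog of readEnvIntOptional, with the env lookup injected.
    static Optional<Integer> readIntOptional(Function<String, String> env, String key) {
        return Optional.ofNullable(env.apply(key)).map(Integer::parseInt);
    }

    // Mirrors the source: a configured value below 1 is rejected; when unset,
    // default to loaderThreads * 2, since idempotent CcwRifLoadJob loads may
    // need a second Connection per thread (per the comment in the source).
    static int poolSize(Function<String, String> env, int loaderThreads) {
        Optional<Integer> configured = readIntOptional(env, "DATABASE_MAX_POOL_SIZE");
        if (configured.isPresent() && configured.get() < 1) {
            throw new IllegalArgumentException(
                "Invalid value for configuration environment variable 'DATABASE_MAX_POOL_SIZE'");
        }
        return configured.orElse(loaderThreads * 2);
    }

    public static void main(String[] args) {
        System.out.println(poolSize(key -> null, 10)); // prints "20"
        System.out.println(poolSize(key -> "5", 10)); // prints "5"
    }
}
```

Injecting the lookup function also makes the defaulting rule trivially unit-testable, which the static System.getenv-based readers in the source are not.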

Aggregations

RdaLoadOptions (gov.cms.bfd.pipeline.rda.grpc.RdaLoadOptions): 4 uses
MetricRegistry (com.codahale.metrics.MetricRegistry): 3 uses
Slf4jReporter (com.codahale.metrics.Slf4jReporter): 3 uses
HikariDataSource (com.zaxxer.hikari.HikariDataSource): 3 uses
DatabaseOptions (gov.cms.bfd.pipeline.sharedutils.DatabaseOptions): 3 uses
PipelineApplicationState (gov.cms.bfd.pipeline.sharedutils.PipelineApplicationState): 3 uses
PipelineJob (gov.cms.bfd.pipeline.sharedutils.PipelineJob): 2 uses
ConfigLoader (gov.cms.bfd.sharedutils.config.ConfigLoader): 2 uses
File (java.io.File): 2 uses
GarbageCollectorMetricSet (com.codahale.metrics.jvm.GarbageCollectorMetricSet): 1 use
MemoryUsageGaugeSet (com.codahale.metrics.jvm.MemoryUsageGaugeSet): 1 use
NewRelicReporter (com.codahale.metrics.newrelic.NewRelicReporter): 1 use
Attributes (com.newrelic.telemetry.Attributes): 1 use
OkHttpPoster (com.newrelic.telemetry.OkHttpPoster): 1 use
SenderConfiguration (com.newrelic.telemetry.SenderConfiguration): 1 use
MetricBatchSender (com.newrelic.telemetry.metrics.MetricBatchSender): 1 use
CcwRifLoadOptions (gov.cms.bfd.pipeline.ccw.rif.CcwRifLoadOptions): 1 use
LoadAppOptions (gov.cms.bfd.pipeline.ccw.rif.load.LoadAppOptions): 1 use
RdaServerJob (gov.cms.bfd.pipeline.rda.grpc.RdaServerJob): 1 use
NullPipelineJobArguments (gov.cms.bfd.pipeline.sharedutils.NullPipelineJobArguments): 1 use