
Example 1 with ScraperException

Use of org.apache.plc4x.java.scraper.exception.ScraperException in project plc4x by apache.

From the class Plc4XConsumer, the method startTriggered:

private void startTriggered() throws ScraperException {
    ScraperConfiguration configuration = getScraperConfig(validateTags());
    TriggerCollector collector = new TriggerCollectorImpl(plc4XEndpoint.getPlcDriverManager());
    TriggeredScraperImpl scraper = new TriggeredScraperImpl(configuration, (job, alias, response) -> {
        try {
            Exchange exchange = plc4XEndpoint.createExchange();
            exchange.getIn().setBody(response);
            getProcessor().process(exchange);
        } catch (Exception e) {
            getExceptionHandler().handleException(e);
        }
    }, collector);
    scraper.start();
    collector.start();
}
Also used:
- org.apache.plc4x.java.scraper.config.ScraperConfiguration
- org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl
- org.apache.camel.Exchange
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollector
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollectorImpl
- org.apache.plc4x.java.scraper.exception.ScraperException
- org.apache.plc4x.java.api.exceptions.PlcIncompatibleDatatypeException
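Example 1 simply declares the checked exception and leaves the handling to the caller. A minimal, self-contained sketch of that propagate-or-handle pattern; the ScraperException here is a local stand-in for the plc4x class, and startTriggered is a hypothetical simplification of the method above:

```java
public class StartDemo {
    // Local stand-in for org.apache.plc4x.java.scraper.exception.ScraperException;
    // the real class is a checked exception, so callers must handle or declare it.
    static class ScraperException extends Exception {
        ScraperException(String message) { super(message); }
    }

    // Mirrors the startTriggered pattern: declare the checked exception
    // and let the caller decide how to react to a failed start.
    static void startTriggered(boolean configValid) throws ScraperException {
        if (!configValid) {
            throw new ScraperException("Invalid scraper configuration");
        }
        // ... build configuration, collector and scraper, then start them
    }

    public static void main(String[] args) {
        try {
            startTriggered(false);
        } catch (ScraperException e) {
            System.out.println("Start failed: " + e.getMessage());
        }
    }
}
```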

Example 2 with ScraperException

Use of org.apache.plc4x.java.scraper.exception.ScraperException in project plc4x by apache.

From the class TriggeredScraperImpl, the method start:

/**
 * Start the scraping.
 */
// ToDo code-refactoring and improved testing --> PLC4X-90
@Override
public void start() {
    // Schedule all jobs
    LOGGER.info("Starting jobs...");
    // start iterating over all available jobs
    for (ScrapeJob job : jobs) {
        // iterate over all sources the job shall be performed on
        for (Map.Entry<String, String> sourceEntry : job.getSourceConnections().entrySet()) {
            if (LOGGER.isDebugEnabled()) {
                LOGGER.debug("Register task for job {} for conn {} ({}) at rate {} ms", job.getJobName(), sourceEntry.getKey(), sourceEntry.getValue(), job.getScrapeRate());
            }
            // create the regarding triggered scraper task
            TriggeredScraperTask triggeredScraperTask;
            try {
                triggeredScraperTask = new TriggeredScraperTask(driverManager, job.getJobName(), sourceEntry.getKey(), sourceEntry.getValue(), job.getFields(), futureTimeOut, executorService, resultHandler, (TriggeredScrapeJobImpl) job, triggerCollector);
                // Add task to internal list
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info("Task {} added to scheduling", triggeredScraperTask);
                }
                registerTaskMBean(triggeredScraperTask);
                tasks.put(job, triggeredScraperTask);
                ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(triggeredScraperTask, 0, job.getScrapeRate(), TimeUnit.MILLISECONDS);
                // Store the handle for stopping, etc.
                scraperTaskMap.put(triggeredScraperTask, future);
            } catch (ScraperException e) {
                LOGGER.warn("Error executing the job {} for conn {} ({}) at rate {} ms", job.getJobName(), sourceEntry.getKey(), sourceEntry.getValue(), job.getScrapeRate(), e);
            }
        }
    }
    // Add statistics tracker
    statisticsLogger = scheduler.scheduleAtFixedRate(() -> {
        for (Map.Entry<ScrapeJob, ScraperTask> entry : tasks.entries()) {
            DescriptiveStatistics statistics = entry.getValue().getLatencyStatistics();
            String msg = String.format(Locale.ENGLISH, "Job statistics (%s, %s) number of requests: %d (%d success, %.1f %% failed, %.1f %% too slow), min latency: %.2f ms, mean latency: %.2f ms, median: %.2f ms", entry.getValue().getJobName(), entry.getValue().getConnectionAlias(), entry.getValue().getRequestCounter(), entry.getValue().getSuccessfullRequestCounter(), entry.getValue().getPercentageFailed(), statistics.apply(new PercentageAboveThreshold(entry.getKey().getScrapeRate() * 1e6)), statistics.getMin() * 1e-6, statistics.getMean() * 1e-6, statistics.getPercentile(50) * 1e-6);
            if (LOGGER.isDebugEnabled()) {
                LOGGER.debug(msg);
            }
        }
    }, 1_000, 1_000, TimeUnit.MILLISECONDS);
}
Also used:
- org.apache.commons.math3.stat.descriptive.DescriptiveStatistics
- org.apache.plc4x.java.scraper.exception.ScraperException
- org.apache.plc4x.java.scraper.util.PercentageAboveThreshold
- org.apache.commons.collections4.multimap.ArrayListValuedHashMap
- java.util.Map
- org.apache.commons.collections4.MultiValuedMap
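The statistics logger above works in nanoseconds internally: latencies are scaled by 1e-6 when reported in milliseconds, and the "too slow" threshold is the scrape rate scaled by 1e6 up to nanoseconds. A self-contained sketch of that arithmetic without commons-math; percentageAbove stands in for PercentageAboveThreshold, and the names here are illustrative, not plc4x API:

```java
import java.util.Arrays;
import java.util.Locale;

public class LatencyStats {
    // Percentage of samples (in ns) above a threshold (in ns),
    // mirroring what PercentageAboveThreshold computes over the statistics.
    static double percentageAbove(long[] latenciesNs, double thresholdNs) {
        long above = Arrays.stream(latenciesNs).filter(l -> l > thresholdNs).count();
        return 100.0 * above / latenciesNs.length;
    }

    // Same ns-to-ms conversion the logger applies with the 1e-6 factor.
    static double nsToMs(double ns) {
        return ns * 1e-6;
    }

    public static void main(String[] args) {
        long[] latencies = {2_000_000L, 5_000_000L, 12_000_000L, 40_000_000L}; // 2, 5, 12, 40 ms
        long scrapeRateMs = 10;
        double thresholdNs = scrapeRateMs * 1e6; // scrape rate converted to ns, as in the scraper
        System.out.printf(Locale.ENGLISH, "%.1f %% too slow%n",
                percentageAbove(latencies, thresholdNs));
        System.out.printf(Locale.ENGLISH, "min latency: %.2f ms%n",
                nsToMs(Arrays.stream(latencies).min().getAsLong()));
    }
}
```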

Example 3 with ScraperException

Use of org.apache.plc4x.java.scraper.exception.ScraperException in project plc4x by apache.

From the class Plc4xSchemaFactory, the method create:

@Override
public Schema create(SchemaPlus parentSchema, String name, Map<String, Object> operand) {
    // Fetch config
    Object config = operand.get("config");
    Validate.notNull(config, "No configuration file given. Please specify operand 'config'.");
    // Load configuration from file
    ScraperConfiguration configuration;
    try {
        configuration = ScraperConfiguration.fromFile(config.toString(), ScraperConfigurationTriggeredImpl.class);
    } catch (IOException e) {
        throw new IllegalArgumentException("Unable to load configuration file!", e);
    }
    // Fetch limit
    Object limit = operand.get("limit");
    Validate.notNull(limit, "No limit for the number of rows for a table. Please specify operand 'limit'.");
    long parsedLimit;
    try {
        parsedLimit = Long.parseLong(limit.toString());
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("Given limit '" + limit + "' cannot be parsed to valid long!", e);
    }
    // Pass the configuration to the Schema
    try {
        return new Plc4xSchema(configuration, parsedLimit);
    } catch (ScraperException e) {
        LOGGER.warn("Could not evaluate Plc4xSchema", e);
        // ToDo Exception, but interface does not accept ... null is fishy
        return null;
    }
}
Also used:
- org.apache.plc4x.java.scraper.config.ScraperConfiguration
- org.apache.plc4x.java.scraper.config.triggeredscraper.ScraperConfigurationTriggeredImpl
- org.apache.plc4x.java.scraper.exception.ScraperException
- java.io.IOException
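The method validates each operand before use: a missing value fails fast, and a non-numeric limit is rethrown as an IllegalArgumentException that names the offending input. A self-contained sketch of that pattern without the Calcite and commons-lang dependencies; parseLimit is a hypothetical helper, not plc4x API:

```java
import java.util.Map;

public class OperandParser {
    // Minimal stand-in for the Validate.notNull + Long.parseLong pattern
    // used by Plc4xSchemaFactory.create.
    static long parseLimit(Map<String, Object> operand) {
        Object limit = operand.get("limit");
        if (limit == null) {
            // Fail fast with a message naming the missing operand.
            throw new IllegalArgumentException(
                "No limit for the number of rows for a table. Please specify operand 'limit'.");
        }
        try {
            return Long.parseLong(limit.toString());
        } catch (NumberFormatException e) {
            // Wrap the low-level parse error, keeping the offending input and cause.
            throw new IllegalArgumentException(
                "Given limit '" + limit + "' cannot be parsed to valid long!", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseLimit(Map.of("limit", "100")));
    }
}
```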

Example 4 with ScraperException

Use of org.apache.plc4x.java.scraper.exception.ScraperException in project plc4x by apache.

From the class Plc4xSourceTask, the method start:

@Override
public void start(Map<String, String> props) {
    AbstractConfig config = new AbstractConfig(CONFIG_DEF, props);
    String connectionName = config.getString(Constants.CONNECTION_NAME_CONFIG);
    String plc4xConnectionString = config.getString(Constants.CONNECTION_STRING_CONFIG);
    pollReturnInterval = config.getInt(Constants.KAFKA_POLL_RETURN_CONFIG);
    Integer bufferSize = config.getInt(Constants.BUFFER_SIZE_CONFIG);
    Map<String, String> topics = new HashMap<>();
    // Create a buffer with a capacity of BUFFER_SIZE_CONFIG elements which schedules access in a fair way.
    buffer = new ArrayBlockingQueue<>(bufferSize, true);
    ScraperConfigurationTriggeredImplBuilder builder = new ScraperConfigurationTriggeredImplBuilder();
    builder.addSource(connectionName, plc4xConnectionString);
    List<String> jobConfigs = config.getList(Constants.QUERIES_CONFIG);
    for (String jobConfig : jobConfigs) {
        String[] jobConfigSegments = jobConfig.split("\\|");
        if (jobConfigSegments.length < 4) {
            log.warn("Error in job configuration '{}'. " + "The configuration expects at least 4 segments: " + "{job-name}|{topic}|{rate}(|{field-alias}#{field-address})+", jobConfig);
            continue;
        }
        String jobName = jobConfigSegments[0];
        String topic = jobConfigSegments[1];
        Integer rate = Integer.valueOf(jobConfigSegments[2]);
        JobConfigurationTriggeredImplBuilder jobBuilder = builder.job(jobName, String.format("(SCHEDULED,%s)", rate)).source(connectionName);
        for (int i = 3; i < jobConfigSegments.length; i++) {
            String[] fieldSegments = jobConfigSegments[i].split("#");
            if (fieldSegments.length != 2) {
                log.warn("Error in job configuration '{}'. " + "The field segment expects a format {field-alias}#{field-address}, but got '{}'", jobName, jobConfigSegments[i]);
                continue;
            }
            String fieldAlias = fieldSegments[0];
            String fieldAddress = fieldSegments[1];
            jobBuilder.field(fieldAlias, fieldAddress);
            topics.put(jobName, topic);
        }
        jobBuilder.build();
    }
    ScraperConfigurationTriggeredImpl scraperConfig = builder.build();
    try {
        PlcDriverManager plcDriverManager = new PooledPlcDriverManager();
        TriggerCollector triggerCollector = new TriggerCollectorImpl(plcDriverManager);
        scraper = new TriggeredScraperImpl(scraperConfig, (jobName, sourceName, results) -> {
            try {
                Long timestamp = System.currentTimeMillis();
                Map<String, String> sourcePartition = new HashMap<>();
                sourcePartition.put("sourceName", sourceName);
                sourcePartition.put("jobName", jobName);
                Map<String, Long> sourceOffset = Collections.singletonMap("offset", timestamp);
                String topic = topics.get(jobName);
                // Prepare the key structure.
                Struct key = new Struct(KEY_SCHEMA).put(Constants.SOURCE_NAME_FIELD, sourceName).put(Constants.JOB_NAME_FIELD, jobName);
                // Build the Schema for the result struct.
                SchemaBuilder fieldSchemaBuilder = SchemaBuilder.struct().name("org.apache.plc4x.kafka.schema.Field");
                for (Map.Entry<String, Object> result : results.entrySet()) {
                    // Get field-name and -value from the results.
                    String fieldName = result.getKey();
                    Object fieldValue = result.getValue();
                    // Get the schema for the given value type.
                    Schema valueSchema = getSchema(fieldValue);
                    // Add the schema description for the current field.
                    fieldSchemaBuilder.field(fieldName, valueSchema);
                }
                Schema fieldSchema = fieldSchemaBuilder.build();
                Schema recordSchema = SchemaBuilder.struct().name("org.apache.plc4x.kafka.schema.JobResult").doc("PLC Job result. This contains all of the received PLCValues as well as a received timestamp").field(Constants.FIELDS_CONFIG, fieldSchema).field(Constants.TIMESTAMP_CONFIG, Schema.INT64_SCHEMA).field(Constants.EXPIRES_CONFIG, Schema.OPTIONAL_INT64_SCHEMA).build();
                // Build the struct itself.
                Struct fieldStruct = new Struct(fieldSchema);
                for (Map.Entry<String, Object> result : results.entrySet()) {
                    // Get field-name and -value from the results.
                    String fieldName = result.getKey();
                    Object fieldValue = result.getValue();
                    if (fieldSchema.field(fieldName).schema().type() == Schema.Type.ARRAY) {
                        fieldStruct.put(fieldName, ((List) fieldValue).stream().map(p -> ((PlcValue) p).getObject()).collect(Collectors.toList()));
                    } else {
                        fieldStruct.put(fieldName, fieldValue);
                    }
                }
                Struct recordStruct = new Struct(recordSchema).put(Constants.FIELDS_CONFIG, fieldStruct).put(Constants.TIMESTAMP_CONFIG, timestamp);
                // Prepare the source-record element.
                SourceRecord sourceRecord = new SourceRecord(sourcePartition, sourceOffset, topic, KEY_SCHEMA, key, recordSchema, recordStruct);
                // Add the new source-record to the buffer.
                buffer.add(sourceRecord);
            } catch (Exception e) {
                log.error("Error while parsing returned values", e);
            }
        }, triggerCollector);
        scraper.start();
        triggerCollector.start();
    } catch (ScraperException e) {
        log.error("Error starting the scraper", e);
    }
}
Also used:
- org.apache.plc4x.java.PlcDriverManager
- org.apache.kafka.connect.data.Date
- java.util
- org.apache.plc4x.java.scraper.config.triggeredscraper.ScraperConfigurationTriggeredImpl
- org.slf4j.LoggerFactory
- java.time.LocalDateTime
- org.apache.plc4x.java.api.value.PlcValue
- org.apache.plc4x.java.scraper.config.triggeredscraper.JobConfigurationTriggeredImplBuilder
- org.apache.plc4x.java.scraper.exception.ScraperException
- java.security.SecureRandom
- java.math.BigDecimal
- org.apache.plc4x.java.scraper.config.triggeredscraper.ScraperConfigurationTriggeredImplBuilder
- java.time.LocalTime
- java.math.BigInteger
- org.apache.kafka.connect.data
- org.apache.plc4x.java.utils.connectionpool.PooledPlcDriverManager
- org.apache.kafka.common.config.ConfigDef
- org.apache.plc4x.kafka.util.VersionUtil
- org.slf4j.Logger
- java.util.concurrent
- org.apache.kafka.connect.source.SourceRecord
- java.util.stream.Collectors
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollectorImpl
- java.util.concurrent.TimeUnit
- org.apache.plc4x.kafka.config.Constants
- org.apache.kafka.common.config.AbstractConfig
- org.apache.kafka.connect.errors.ConnectException
- java.time.LocalDate
- org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollector
- org.apache.kafka.connect.source.SourceTask
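Each entry in the queries config above is split on '|' into job name, topic, and rate, with the remaining segments split on '#' into field alias and address. A self-contained sketch of just that parsing step; JobConfigParser and parseFields are hypothetical names, and malformed field segments are skipped as in the connector:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JobConfigParser {
    // Parses one "{job-name}|{topic}|{rate}(|{field-alias}#{field-address})+"
    // entry into an alias-to-address map, mirroring the split logic
    // in Plc4xSourceTask.start.
    static Map<String, String> parseFields(String jobConfig) {
        String[] segments = jobConfig.split("\\|");
        if (segments.length < 4) {
            throw new IllegalArgumentException(
                "Expected at least 4 segments: {job-name}|{topic}|{rate}(|{field-alias}#{field-address})+");
        }
        Map<String, String> fields = new LinkedHashMap<>();
        for (int i = 3; i < segments.length; i++) {
            String[] field = segments[i].split("#");
            if (field.length != 2) {
                continue; // skip malformed field segments, as the connector does
            }
            fields.put(field[0], field[1]);
        }
        return fields;
    }

    public static void main(String[] args) {
        String cfg = "job1|topicA|1000|alias1#%DB1:DBW0:INT|alias2#%Q0.0:BOOL";
        System.out.println(parseFields(cfg));
    }
}
```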

Aggregations

- org.apache.plc4x.java.scraper.exception.ScraperException: 4 uses
- org.apache.plc4x.java.scraper.config.ScraperConfiguration: 2 uses
- org.apache.plc4x.java.scraper.config.triggeredscraper.ScraperConfigurationTriggeredImpl: 2 uses
- org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl: 2 uses
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollector: 2 uses
- org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.collector.TriggerCollectorImpl: 2 uses
- java.io.IOException: 1 use
- java.math.BigDecimal: 1 use
- java.math.BigInteger: 1 use
- java.security.SecureRandom: 1 use
- java.time.LocalDate: 1 use
- java.time.LocalDateTime: 1 use
- java.time.LocalTime: 1 use
- java.util: 1 use
- java.util.Map: 1 use
- java.util.concurrent: 1 use
- java.util.concurrent.TimeUnit: 1 use
- java.util.stream.Collectors: 1 use
- org.apache.camel.Exchange: 1 use
- org.apache.commons.collections4.MultiValuedMap: 1 use