Example 26 with Event

use of org.folio.rest.jaxrs.model.Event in project mod-source-record-manager by folio-org.

the class DataImportKafkaHandler method handle.

@Override
public Future<String> handle(KafkaConsumerRecord<String, String> record) {
    try {
        Promise<String> result = Promise.promise();
        List<KafkaHeader> kafkaHeaders = record.headers();
        OkapiConnectionParams okapiConnectionParams = new OkapiConnectionParams(KafkaHeaderUtils.kafkaHeadersToMap(kafkaHeaders), vertx);
        String recordId = okapiConnectionParams.getHeaders().get(RECORD_ID_HEADER);
        Event event = Json.decodeValue(record.value(), Event.class);
        String jobExecutionId = extractJobExecutionId(kafkaHeaders);
        LOGGER.info("Event was received with recordId: '{}' event type: '{}' with jobExecutionId: '{}'", recordId, event.getEventType(), jobExecutionId);
        if (StringUtils.isBlank(recordId)) {
            handleLocalEvent(result, okapiConnectionParams, event);
            return result.future();
        }
        // deduplication guard: collectData records (handlerId, eventId, tenantId) and
        // fails with DuplicateEventException if this event was already processed
        eventProcessedService.collectData(DATA_IMPORT_KAFKA_HANDLER_UUID, event.getId(), okapiConnectionParams.getTenantId())
            .onSuccess(res -> handleLocalEvent(result, okapiConnectionParams, event))
            .onFailure(e -> {
                if (e instanceof DuplicateEventException) {
                    LOGGER.info(e.getMessage());
                    result.complete();
                } else {
                    LOGGER.error("Database error while collecting deduplication info for handlerId: {}, eventId: {}", DATA_IMPORT_KAFKA_HANDLER_UUID, event.getId(), e);
                    result.fail(e);
                }
            });
        return result.future();
    } catch (Exception e) {
        LOGGER.error("Error during processing data-import result", e);
        return Future.failedFuture(e);
    }
}
Also used : Event(org.folio.rest.jaxrs.model.Event) Json(io.vertx.core.json.Json) DuplicateEventException(org.folio.kafka.exception.DuplicateEventException) Promise(io.vertx.core.Promise) Vertx(io.vertx.core.Vertx) Autowired(org.springframework.beans.factory.annotation.Autowired) EventHandlingService(org.folio.services.EventHandlingService) AsyncRecordHandler(org.folio.kafka.AsyncRecordHandler) Future(io.vertx.core.Future) StringUtils(org.apache.commons.lang3.StringUtils) OkapiConnectionParams(org.folio.dataimport.util.OkapiConnectionParams) Component(org.springframework.stereotype.Component) List(java.util.List) Logger(org.apache.logging.log4j.Logger) KafkaConsumerRecord(io.vertx.kafka.client.consumer.KafkaConsumerRecord) Qualifier(org.springframework.beans.factory.annotation.Qualifier) EventProcessedService(org.folio.services.EventProcessedService) KafkaHeader(io.vertx.kafka.client.producer.KafkaHeader) LogManager(org.apache.logging.log4j.LogManager) KafkaHeaderUtils(org.folio.kafka.KafkaHeaderUtils) RawRecordsFlowControlService(org.folio.services.flowcontrol.RawRecordsFlowControlService)
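
For orientation, here is a hedged sketch (not from the project) of how this handler's deduplication path could be exercised in a unit test with a mocked consumer record. The handler instance, the "recordId" header name, the standard x-okapi-* header names, and the builder-style setters on the generated Event model are assumptions based on the excerpt above.

// Minimal sketch, assuming Mockito on the classpath and a wired-up handler.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.UUID;
import io.vertx.core.json.Json;
import io.vertx.kafka.client.consumer.KafkaConsumerRecord;
import io.vertx.kafka.client.producer.KafkaHeader;
import org.folio.rest.jaxrs.model.Event;

@SuppressWarnings("unchecked")
void deliverTwice(DataImportKafkaHandler handler) {
    KafkaConsumerRecord<String, String> record = mock(KafkaConsumerRecord.class);
    Event event = new Event().withId(UUID.randomUUID().toString()).withEventType("DI_COMPLETED");
    when(record.value()).thenReturn(Json.encode(event));
    // a recordId header routes the event through the deduplication check
    when(record.headers()).thenReturn(List.of(
        KafkaHeader.header("recordId", UUID.randomUUID().toString()),
        KafkaHeader.header("x-okapi-tenant", "diku"),
        KafkaHeader.header("x-okapi-url", "http://localhost:9130")));
    handler.handle(record);
    // a second delivery of the same event id makes collectData fail with
    // DuplicateEventException, and the future completes without reprocessing
    handler.handle(record);
}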

Example 27 with Event

use of org.folio.rest.jaxrs.model.Event in project mod-source-record-manager by folio-org.

the class RawMarcChunksErrorHandler method handle.

@Override
public void handle(Throwable throwable, KafkaConsumerRecord<String, String> record) {
    Event event = Json.decodeValue(record.value(), Event.class);
    List<KafkaHeader> kafkaHeaders = record.headers();
    OkapiConnectionParams okapiParams = new OkapiConnectionParams(KafkaHeaderUtils.kafkaHeadersToMap(kafkaHeaders), vertx);
    String jobExecutionId = okapiParams.getHeaders().get(JOB_EXECUTION_ID_HEADER);
    String chunkId = okapiParams.getHeaders().get(CHUNK_ID_HEADER);
    String tenantId = okapiParams.getTenantId();
    String lastChunk = okapiParams.getHeaders().get(LAST_CHUNK_HEADER);
    if (StringUtils.isNotBlank(lastChunk)) {
        LOGGER.error("Source chunk with jobExecutionId: {}, tenantId: {}, chunkId: {} is marked as last; skipping DI_ERROR sending", jobExecutionId, tenantId, chunkId, throwable);
    } else if (throwable instanceof RecordsPublishingException) {
        List<Record> failedRecords = ((RecordsPublishingException) throwable).getFailedRecords();
        for (Record failedRecord : failedRecords) {
            sendDiErrorEvent(throwable, okapiParams, jobExecutionId, tenantId, failedRecord);
        }
    } else if (throwable instanceof DuplicateEventException) {
        RawRecordsDto rawRecordsDto = Json.decodeValue(event.getEventPayload(), RawRecordsDto.class);
        LOGGER.info("Duplicate event received, skipping parsing for jobExecutionId: {}, tenantId: {}, chunkId: {}, totalRecords: {}, cause: {}", jobExecutionId, tenantId, chunkId, rawRecordsDto.getInitialRecords().size(), throwable.getMessage());
    } else if (throwable instanceof RawChunkRecordsParsingException) {
        RawChunkRecordsParsingException exception = (RawChunkRecordsParsingException) throwable;
        parsedRecordsErrorProvider.getParsedRecordsFromInitialRecords(okapiParams, jobExecutionId, exception.getRawRecordsDto()).onComplete(ar -> {
            List<Record> parsedRecords = ar.result();
            if (CollectionUtils.isNotEmpty(parsedRecords)) {
                // send a DI_ERROR per record that could be recovered from the raw chunk
                for (Record rec : parsedRecords) {
                    sendDiError(throwable, jobExecutionId, okapiParams, rec);
                }
            } else {
                // nothing recoverable (including a failed lookup, where ar.result() is null):
                // send a single record-less DI_ERROR
                sendDiError(throwable, jobExecutionId, okapiParams, null);
            }
        });
    } else {
        sendDiErrorEvent(throwable, okapiParams, jobExecutionId, tenantId, null);
    }
}
Also used : DuplicateEventException(org.folio.kafka.exception.DuplicateEventException) Json(io.vertx.core.json.Json) ProcessRecordErrorHandler(org.folio.kafka.ProcessRecordErrorHandler) Autowired(org.springframework.beans.factory.annotation.Autowired) HashMap(java.util.HashMap) RecordsPublishingException(org.folio.services.exceptions.RecordsPublishingException) StringUtils(org.apache.commons.lang3.StringUtils) CollectionUtils(org.apache.commons.collections4.CollectionUtils) RawChunkRecordsParsingException(org.folio.services.exceptions.RawChunkRecordsParsingException) Qualifier(org.springframework.beans.factory.annotation.Qualifier) DI_ERROR(org.folio.rest.jaxrs.model.DataImportEventTypes.DI_ERROR) DiErrorPayloadBuilder(org.folio.verticle.consumers.errorhandlers.payloadbuilders.DiErrorPayloadBuilder) Event(org.folio.rest.jaxrs.model.Event) EventHandlingUtil(org.folio.services.util.EventHandlingUtil) Record(org.folio.rest.jaxrs.model.Record) RecordConversionUtil(org.folio.services.util.RecordConversionUtil) Vertx(io.vertx.core.Vertx) DataImportEventPayload(org.folio.DataImportEventPayload) OkapiConnectionParams(org.folio.dataimport.util.OkapiConnectionParams) RawRecordsDto(org.folio.rest.jaxrs.model.RawRecordsDto) Component(org.springframework.stereotype.Component) List(java.util.List) Logger(org.apache.logging.log4j.Logger) KafkaConsumerRecord(io.vertx.kafka.client.consumer.KafkaConsumerRecord) KafkaHeader(io.vertx.kafka.client.producer.KafkaHeader) LogManager(org.apache.logging.log4j.LogManager) KafkaHeaderUtils(org.folio.kafka.KafkaHeaderUtils) KafkaConfig(org.folio.kafka.KafkaConfig)
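
The handler above unwraps two layers of JSON: the Kafka record value is an Event envelope, and the Event's eventPayload is itself a serialized DTO. A minimal sketch of that unwrapping, using only names that appear in the example:

// "record" is the incoming KafkaConsumerRecord<String, String> from the signature above
Event event = Json.decodeValue(record.value(), Event.class);
RawRecordsDto chunk = Json.decodeValue(event.getEventPayload(), RawRecordsDto.class);
LOGGER.info("chunk {} carries {} initial records", chunk.getId(), chunk.getInitialRecords().size());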

Example 28 with Event

use of org.folio.rest.jaxrs.model.Event in project mod-source-record-manager by folio-org.

the class StoredRecordChunksErrorHandler method handle.

@Override
public void handle(Throwable throwable, KafkaConsumerRecord<String, String> kafkaConsumerRecord) {
    List<KafkaHeader> kafkaHeaders = kafkaConsumerRecord.headers();
    OkapiConnectionParams okapiParams = new OkapiConnectionParams(KafkaHeaderUtils.kafkaHeadersToMap(kafkaHeaders), vertx);
    String jobExecutionId = okapiParams.getHeaders().get(JOB_EXECUTION_ID_HEADER);
    // send a DI_ERROR for each specific record that failed, taken from the exception body
    if (throwable instanceof RecordsPublishingException) {
        List<Record> failedRecords = ((RecordsPublishingException) throwable).getFailedRecords();
        for (Record failedRecord : failedRecords) {
            sendDiErrorForRecord(jobExecutionId, failedRecord, okapiParams, failedRecord.getErrorRecord().getDescription());
        }
    } else if (throwable instanceof DuplicateEventException) {
        LOGGER.info(throwable.getMessage());
    } else {
        // for any other cause, decode the batch from the event payload and fail every record in it
        Event event = Json.decodeValue(kafkaConsumerRecord.value(), Event.class);
        RecordsBatchResponse recordCollection = Json.decodeValue(event.getEventPayload(), RecordsBatchResponse.class);
        for (Record targetRecord : recordCollection.getRecords()) {
            sendDiErrorForRecord(jobExecutionId, targetRecord, okapiParams, throwable.getMessage());
        }
    }
}
Also used : DuplicateEventException(org.folio.kafka.exception.DuplicateEventException) RecordsBatchResponse(org.folio.rest.jaxrs.model.RecordsBatchResponse) RecordsPublishingException(org.folio.services.exceptions.RecordsPublishingException) Event(org.folio.rest.jaxrs.model.Event) Record(org.folio.rest.jaxrs.model.Record) KafkaConsumerRecord(io.vertx.kafka.client.consumer.KafkaConsumerRecord) OkapiConnectionParams(org.folio.dataimport.util.OkapiConnectionParams) KafkaHeader(io.vertx.kafka.client.producer.KafkaHeader)
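
All three error handlers repeat the same two-step decode, so a small generic helper (hypothetical, not part of the project) would make the envelope explicit:

// Hypothetical helper; the type names come from the examples above.
static <T> T decodePayload(KafkaConsumerRecord<String, String> consumerRecord, Class<T> payloadType) {
    Event event = Json.decodeValue(consumerRecord.value(), Event.class);
    return Json.decodeValue(event.getEventPayload(), payloadType);
}

// usage, equivalent to the else-branch above:
// RecordsBatchResponse batch = decodePayload(kafkaConsumerRecord, RecordsBatchResponse.class);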

Example 29 with Event

use of org.folio.rest.jaxrs.model.Event in project mod-source-record-manager by folio-org.

the class EventHandlingUtil method sendEventToKafka.

/**
 * Prepares and sends an event with the given payload to Kafka.
 *
 * @param tenantId     tenant id
 * @param eventPayload event payload in String representation
 * @param eventType    event type
 * @param kafkaHeaders kafka headers
 * @param kafkaConfig  kafka config
 * @param key          key for the Kafka producer record
 * @return future completed with true if the event was sent successfully
 */
public static Future<Boolean> sendEventToKafka(String tenantId, String eventPayload, String eventType, List<KafkaHeader> kafkaHeaders, KafkaConfig kafkaConfig, String key) {
    LOGGER.debug("Starting to send event to Kafka for eventType: {}", eventType);
    Event event = createEvent(eventPayload, eventType, tenantId);
    String topicName = createTopicName(eventType, tenantId, kafkaConfig);
    KafkaProducerRecord<String, String> record = createProducerRecord(event, key, topicName, kafkaHeaders);
    Promise<Boolean> promise = Promise.promise();
    String chunkId = extractHeader(kafkaHeaders, "chunkId");
    String recordId = extractHeader(kafkaHeaders, "recordId");
    String producerName = eventType + "_Producer";
    KafkaProducer<String, String> producer = KafkaProducer.createShared(Vertx.currentContext().owner(), producerName, kafkaConfig.getProducerProps());
    producer.write(record, war -> {
        // release this caller's reference to the shared, reference-counted producer once the write settles
        producer.end(ear -> producer.close());
        if (war.succeeded()) {
            logSendingSucceeded(eventType, chunkId, recordId);
            promise.complete(true);
        } else {
            Throwable cause = war.cause();
            LOGGER.error("{} write error for event {}:", producerName, eventType, cause);
            handleKafkaPublishingErrors(promise, eventPayload, cause);
        }
    });
    return promise.future();
}
Also used : Event(org.folio.rest.jaxrs.model.Event)
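
A hedged usage sketch for the utility above: the payload object and the kafkaHeaders/kafkaConfig variables are illustrative assumptions, while the sendEventToKafka signature and the DI_ERROR constant come from the earlier examples.

// Sketch: publish a DI_ERROR event for tenant "diku"
Future<Boolean> sent = EventHandlingUtil.sendEventToKafka(
    "diku",                               // tenantId
    Json.encode(dataImportEventPayload),  // eventPayload as String (assumed in scope)
    DI_ERROR.value(),                     // eventType
    kafkaHeaders,                         // propagated okapi/chunkId/recordId headers (assumed in scope)
    kafkaConfig,                          // assumed in scope
    UUID.randomUUID().toString());        // key
sent.onSuccess(ok -> LOGGER.info("event delivered"))
    .onFailure(e -> LOGGER.error("event delivery failed", e));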

Example 30 with Event

use of org.folio.rest.jaxrs.model.Event in project mod-source-record-manager by folio-org.

the class ChangeManagerAPITest method fillInRecordOrderIfAtLeastOneRecordHasNoOrder.

private void fillInRecordOrderIfAtLeastOneRecordHasNoOrder(String rawRecord) throws InterruptedException {
    RawRecordsDto rawRecordsDto = new RawRecordsDto()
        .withId(UUID.randomUUID().toString())
        .withRecordsMetadata(new RecordsMetadata()
            .withLast(true)
            .withCounter(7)
            .withContentType(RecordsMetadata.ContentType.MARC_RAW))
        .withInitialRecords(asList(
            new InitialRecord().withRecord(CORRECT_RAW_RECORD_1),
            new InitialRecord().withRecord(CORRECT_RAW_RECORD_2).withOrder(5),
            new InitialRecord().withRecord(rawRecord).withOrder(6)));
    InitJobExecutionsRsDto response = constructAndPostInitJobExecutionRqDto(1);
    List<JobExecution> createdJobExecutions = response.getJobExecutions();
    assertThat(createdJobExecutions.size(), is(1));
    JobExecution jobExec = createdJobExecutions.get(0);
    RestAssured.given().spec(spec)
        .body(new JobProfileInfo()
            .withName("MARC records")
            .withId(DEFAULT_JOB_PROFILE_ID)
            .withDataType(DataType.MARC))
        .when().put(JOB_EXECUTION_PATH + jobExec.getId() + JOB_PROFILE_PATH)
        .then().statusCode(HttpStatus.SC_OK);
    RestAssured.given().spec(spec)
        .body(rawRecordsDto)
        .when().post(JOB_EXECUTION_PATH + jobExec.getId() + RECORDS_PATH)
        .then().statusCode(HttpStatus.SC_NO_CONTENT);
    String topicToObserve = formatToKafkaTopicName(DI_RAW_RECORDS_CHUNK_PARSED.value());
    List<String> observedValues = kafkaCluster.observeValues(ObserveKeyValues.on(topicToObserve, 1).observeFor(30, TimeUnit.SECONDS).build());
    Event obtainedEvent = Json.decodeValue(observedValues.get(0), Event.class);
    assertEquals(DI_RAW_RECORDS_CHUNK_PARSED.value(), obtainedEvent.getEventType());
    RecordCollection processedRecords = Json.decodeValue(obtainedEvent.getEventPayload(), RecordCollection.class);
    assertEquals(3, processedRecords.getRecords().size());
    assertEquals(4, processedRecords.getRecords().get(0).getOrder().intValue());
    assertEquals(5, processedRecords.getRecords().get(1).getOrder().intValue());
    assertEquals(6, processedRecords.getRecords().get(2).getOrder().intValue());
}
Also used : JobExecution(org.folio.rest.jaxrs.model.JobExecution) InitialRecord(org.folio.rest.jaxrs.model.InitialRecord) JobProfileInfo(org.folio.rest.jaxrs.model.JobProfileInfo) RawRecordsDto(org.folio.rest.jaxrs.model.RawRecordsDto) RecordCollection(org.folio.rest.jaxrs.model.RecordCollection) RecordsMetadata(org.folio.rest.jaxrs.model.RecordsMetadata) Event(org.folio.rest.jaxrs.model.Event) InitJobExecutionsRsDto(org.folio.rest.jaxrs.model.InitJobExecutionsRsDto) Matchers.containsString(org.hamcrest.Matchers.containsString)
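
The assertions imply a back-fill rule for missing orders: with a chunk counter of 7 and three records where only orders 5 and 6 are declared, the first record ends up with order 4. A hypothetical sketch of one rule consistent with those assertions (the real implementation lives in the module's chunk-processing code):

// Hypothetical illustration: a missing order becomes counter - chunkSize + index
int counter = 7;
Integer[] declaredOrders = {null, 5, 6};
for (int i = 0; i < declaredOrders.length; i++) {
    int order = declaredOrders[i] != null
        ? declaredOrders[i]
        : counter - declaredOrders.length + i;
    System.out.println(order); // prints 4, then 5, then 6
}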

Aggregations

Event (org.folio.rest.jaxrs.model.Event): 90
Test (org.junit.Test): 41
HashMap (java.util.HashMap): 32
ArgumentMatchers.anyString (org.mockito.ArgumentMatchers.anyString): 28
DataImportEventPayload (org.folio.DataImportEventPayload): 27
KeyValue (net.mguenther.kafka.junit.KeyValue): 22
Async (io.vertx.ext.unit.Async): 21
KafkaConsumerRecord (io.vertx.kafka.client.consumer.KafkaConsumerRecord): 16
OkapiConnectionParams (org.folio.dataimport.util.OkapiConnectionParams): 16
Record (org.folio.rest.jaxrs.model.Record): 14
LogManager (org.apache.logging.log4j.LogManager): 12
Logger (org.apache.logging.log4j.Logger): 12
ProfileSnapshotWrapper (org.folio.rest.jaxrs.model.ProfileSnapshotWrapper): 12
Test (org.junit.jupiter.api.Test): 12
KafkaHeader (io.vertx.kafka.client.producer.KafkaHeader): 11
DuplicateEventException (org.folio.kafka.exception.DuplicateEventException): 10
AbstractRestTest (org.folio.rest.impl.AbstractRestTest): 10
RecordCollection (org.folio.rest.jaxrs.model.RecordCollection): 10
RecordsBatchResponse (org.folio.rest.jaxrs.model.RecordsBatchResponse): 10
Future (io.vertx.core.Future): 9