Example 11 with MappingMetadataDto

Use of org.folio.MappingMetadataDto in project mod-inventory by folio-org.

In class UpdateAuthorityEventHandlerTest, method setUp:

@Before
public void setUp() throws IOException {
    MockitoAnnotations.openMocks(this);
    MappingManager.clearReaderFactories();
    MappingMetadataCache mappingMetadataCache = new MappingMetadataCache(vertx, vertx.createHttpClient(), 3600);
    eventHandler = new UpdateAuthorityEventHandler(storage, mappingMetadataCache, publisher);
    JsonObject mappingRules = new JsonObject(TestUtil.readFileFromPath(MAPPING_RULES_PATH));
    doAnswer(invocationOnMock -> {
        Consumer<Success<Void>> successHandler = invocationOnMock.getArgument(1);
        successHandler.accept(new Success<>(null));
        return null;
    }).when(authorityCollection).update(any(), any(), any());
    WireMock.stubFor(get(new UrlPathPattern(new RegexPattern(MAPPING_METADATA_URL + "/.*"), true))
        .willReturn(WireMock.ok()
            .withBody(Json.encode(new MappingMetadataDto()
                .withMappingParams(Json.encode(new MappingParameters()))
                .withMappingRules(mappingRules.encode())))));
}
Also used : UrlPathPattern(com.github.tomakehurst.wiremock.matching.UrlPathPattern) MappingMetadataCache(org.folio.inventory.dataimport.cache.MappingMetadataCache) RegexPattern(com.github.tomakehurst.wiremock.matching.RegexPattern) JsonObject(io.vertx.core.json.JsonObject) MappingMetadataDto(org.folio.MappingMetadataDto) MappingParameters(org.folio.processing.mapping.defaultmapper.processor.parameters.MappingParameters) Success(org.folio.inventory.common.domain.Success) Before(org.junit.Before)
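A note for readers reproducing this setup: the stub double-encodes its payload, because MappingMetadataDto carries the mapping parameters and rules as JSON strings inside the outer JSON body. A minimal plain-Java sketch of that body shape follows; the field names "mappingParams" and "mappingRules" are assumptions based on the withMappingParams/withMappingRules builder methods, not taken from the FOLIO schema:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: approximates the body the WireMock stub above serves.
// Both values are themselves JSON-encoded strings, which is why the
// test setup above encodes MappingParameters and the rules separately
// before encoding the whole DTO.
public class MappingMetadataStubBody {

    static Map<String, String> stubBody(String mappingRulesJson) {
        Map<String, String> dto = new LinkedHashMap<>();
        dto.put("mappingParams", "{}");            // stands in for Json.encode(new MappingParameters())
        dto.put("mappingRules", mappingRulesJson); // stands in for mappingRules.encode()
        return dto;
    }

    public static void main(String[] args) {
        Map<String, String> body = stubBody("{\"rules\":[]}");
        System.out.println(body);
    }
}
```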

Example 12 with MappingMetadataDto

Use of org.folio.MappingMetadataDto in project mod-inventory by folio-org.

In class MappingMetadataCacheTest, method shouldReturnMappingMetadata:

@Test
public void shouldReturnMappingMetadata(TestContext context) {
    Async async = context.async();
    Future<Optional<MappingMetadataDto>> optionalFuture = mappingMetadataCache.get(mappingMetadata.getJobExecutionId(), this.context);
    optionalFuture.onComplete(ar -> {
        context.assertTrue(ar.succeeded());
        context.assertTrue(ar.result().isPresent());
        MappingMetadataDto actualMappingMetadata = ar.result().get();
        context.assertEquals(mappingMetadata.getJobExecutionId(), actualMappingMetadata.getJobExecutionId());
        context.assertNotNull(actualMappingMetadata.getMappingParams());
        context.assertNotNull(actualMappingMetadata.getMappingRules());
        context.assertEquals(mappingMetadata.getMappingParams(), actualMappingMetadata.getMappingParams());
        context.assertEquals(mappingMetadata.getMappingRules(), actualMappingMetadata.getMappingRules());
        async.complete();
    });
}
Also used : Optional(java.util.Optional) Async(io.vertx.ext.unit.Async) MappingMetadataDto(org.folio.MappingMetadataDto) Test(org.junit.Test)
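The behavior this test verifies, lookups keyed by jobExecutionId with a TTL (the 3600 passed to the constructor in the previous example), can be sketched in plain Java. This is an illustrative stand-in, not the FOLIO implementation: the real MappingMetadataCache loads missing entries over HTTP and returns a Vert.x Future<Optional<MappingMetadataDto>>.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative TTL cache keyed by jobExecutionId. Only the
// expiry-and-lookup behavior is modeled here.
public class TtlCacheSketch<V> {

    private static final class Entry<W> {
        final W value;
        final long expiresAtMillis;
        Entry(W value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCacheSketch(long ttlSeconds) {
        this.ttlMillis = ttlSeconds * 1000;
    }

    public void put(String jobExecutionId, V value) {
        entries.put(jobExecutionId, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Empty result means "expired or never loaded"; the real cache
    // would fall through to an HTTP fetch at this point.
    public Optional<V> get(String jobExecutionId) {
        Entry<V> e = entries.get(jobExecutionId);
        if (e == null || e.expiresAtMillis < System.currentTimeMillis()) {
            entries.remove(jobExecutionId);
            return Optional.empty();
        }
        return Optional.of(e.value);
    }
}
```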

Example 13 with MappingMetadataDto

Use of org.folio.MappingMetadataDto in project mod-inventory by folio-org.

In class MarcBibInstanceHridSetKafkaHandler, method handle:

@Override
public Future<String> handle(KafkaConsumerRecord<String, String> record) {
    try {
        Promise<String> promise = Promise.promise();
        Event event = OBJECT_MAPPER.readValue(record.value(), Event.class);
        @SuppressWarnings("unchecked") HashMap<String, String> eventPayload = OBJECT_MAPPER.readValue(event.getEventPayload(), HashMap.class);
        Map<String, String> headersMap = KafkaHeaderUtils.kafkaHeadersToMap(record.headers());
        String recordId = headersMap.get(RECORD_ID_HEADER);
        String chunkId = headersMap.get(CHUNK_ID_HEADER);
        String jobExecutionId = eventPayload.get(JOB_EXECUTION_ID_HEADER);
        LOGGER.info("Event payload has been received with event type: {}, recordId: {} by jobExecution: {} and chunkId: {}", event.getEventType(), recordId, jobExecutionId, chunkId);
        if (isEmpty(eventPayload.get(MARC_KEY))) {
            String message = format("Event payload does not contain required data to update Instance with event type: '%s', recordId: '%s' by jobExecution: '%s' and chunkId: '%s'", event.getEventType(), recordId, jobExecutionId, chunkId);
            LOGGER.error(message);
            return Future.failedFuture(message);
        }
        Context context = EventHandlingUtil.constructContext(headersMap.get(OKAPI_TENANT_HEADER), headersMap.get(OKAPI_TOKEN_HEADER), headersMap.get(OKAPI_URL_HEADER));
        Record marcRecord = new JsonObject(eventPayload.get(MARC_KEY)).mapTo(Record.class);
        mappingMetadataCache.get(jobExecutionId, context)
            .map(metadataOptional -> metadataOptional.orElseThrow(() ->
                new EventProcessingException(format(MAPPING_METADATA_NOT_FOUND_MSG, jobExecutionId))))
            .onSuccess(mappingMetadataDto -> ensureEventPayloadWithMappingMetadata(eventPayload, mappingMetadataDto))
            .compose(v -> instanceUpdateDelegate.handle(eventPayload, marcRecord, context))
            .onComplete(ar -> {
            if (ar.succeeded()) {
                eventPayload.remove(CURRENT_RETRY_NUMBER);
                promise.complete(record.key());
            } else {
                if (ar.cause() instanceof OptimisticLockingException) {
                    processOLError(record, promise, eventPayload, ar);
                } else {
                    eventPayload.remove(CURRENT_RETRY_NUMBER);
                    LOGGER.error("Failed to set MarcBib Hrid by jobExecutionId {}:{}", jobExecutionId, ar.cause());
                    promise.fail(ar.cause());
                }
            }
        });
        return promise.future();
    } catch (Exception e) {
        LOGGER.error(format("Failed to process data import kafka record from topic %s", record.topic()), e);
        return Future.failedFuture(e);
    }
}
Also used : Context(org.folio.inventory.common.Context) MappingMetadataDto(org.folio.MappingMetadataDto) OKAPI_TENANT_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_TENANT_HEADER) HashMap(java.util.HashMap) OKAPI_URL_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_URL_HEADER) ObjectMapperTool(org.folio.dbschema.ObjectMapperTool) Map(java.util.Map) JsonObject(io.vertx.core.json.JsonObject) AsyncResult(io.vertx.core.AsyncResult) StringUtils.isEmpty(org.apache.commons.lang3.StringUtils.isEmpty) Event(org.folio.rest.jaxrs.model.Event) Record(org.folio.rest.jaxrs.model.Record) Promise(io.vertx.core.Promise) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper) AsyncRecordHandler(org.folio.kafka.AsyncRecordHandler) Future(io.vertx.core.Future) String.format(java.lang.String.format) InstanceUpdateDelegate(org.folio.inventory.dataimport.handlers.actions.InstanceUpdateDelegate) Logger(org.apache.logging.log4j.Logger) EventProcessingException(org.folio.processing.exceptions.EventProcessingException) KafkaConsumerRecord(io.vertx.kafka.client.consumer.KafkaConsumerRecord) EventHandlingUtil(org.folio.inventory.dataimport.handlers.matching.util.EventHandlingUtil) OKAPI_TOKEN_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_TOKEN_HEADER) Instance(org.folio.inventory.domain.instances.Instance) OptimisticLockingException(org.folio.inventory.dataimport.exceptions.OptimisticLockingException) LogManager(org.apache.logging.log4j.LogManager) KafkaHeaderUtils(org.folio.kafka.KafkaHeaderUtils) MappingMetadataCache(org.folio.inventory.dataimport.cache.MappingMetadataCache)
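The retry branch above (processOLError on an OptimisticLockingException, otherwise removing CURRENT_RETRY_NUMBER) suggests the following bookkeeping. This is a hedged sketch: the counter key name mirrors the constant used above, but the max-retries limit is an assumption for illustration, not a value taken from mod-inventory.

```java
import java.util.Map;

// Sketch of retry bookkeeping for optimistic-locking conflicts: the
// retry counter travels inside the event payload itself, so it survives
// re-delivery; any terminal outcome clears it.
public class OlRetrySketch {
    static final String CURRENT_RETRY_NUMBER = "CURRENT_RETRY_NUMBER";
    static final int MAX_RETRIES = 1; // assumed limit, for illustration only

    /** Returns true if the event should be retried after an OL conflict. */
    static boolean shouldRetry(Map<String, String> eventPayload) {
        int retries = Integer.parseInt(eventPayload.getOrDefault(CURRENT_RETRY_NUMBER, "0"));
        if (retries < MAX_RETRIES) {
            eventPayload.put(CURRENT_RETRY_NUMBER, String.valueOf(retries + 1));
            return true;
        }
        // Give up: clear the counter so a later event starts fresh.
        eventPayload.remove(CURRENT_RETRY_NUMBER);
        return false;
    }
}
```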

Example 14 with MappingMetadataDto

Use of org.folio.MappingMetadataDto in project mod-inventory by folio-org.

In class MarcHoldingsRecordHridSetKafkaHandler, method handle:

@Override
public Future<String> handle(KafkaConsumerRecord<String, String> record) {
    try {
        Promise<String> promise = Promise.promise();
        Event event = OBJECT_MAPPER.readValue(record.value(), Event.class);
        @SuppressWarnings("unchecked") HashMap<String, String> eventPayload = OBJECT_MAPPER.readValue(event.getEventPayload(), HashMap.class);
        Map<String, String> headersMap = KafkaHeaderUtils.kafkaHeadersToMap(record.headers());
        String recordId = headersMap.get(RECORD_ID_HEADER);
        String chunkId = headersMap.get(CHUNK_ID_HEADER);
        String jobExecutionId = eventPayload.get(JOB_EXECUTION_ID_HEADER);
        LOGGER.info("Event payload has been received with event type: {}, recordId: {} by jobExecution: {} and chunkId: {}", event.getEventType(), recordId, jobExecutionId, chunkId);
        if (isEmpty(eventPayload.get(MARC_KEY))) {
            String message = String.format("Event payload does not contain required data to update Holdings with event type: '%s', recordId: '%s' by jobExecution: '%s' and chunkId: '%s'", event.getEventType(), recordId, jobExecutionId, chunkId);
            LOGGER.error(message);
            return Future.failedFuture(message);
        }
        Context context = constructContext(headersMap.get(OKAPI_TENANT_HEADER), headersMap.get(OKAPI_TOKEN_HEADER), headersMap.get(OKAPI_URL_HEADER));
        Record marcRecord = Json.decodeValue(eventPayload.get(MARC_KEY), Record.class);
        mappingMetadataCache.get(jobExecutionId, context)
            .map(metadataOptional -> metadataOptional.orElseThrow(() ->
                new EventProcessingException(format(MAPPING_METADATA_NOT_FOUND_MSG, jobExecutionId))))
            .onSuccess(mappingMetadataDto -> ensureEventPayloadWithMappingMetadata(eventPayload, mappingMetadataDto))
            .compose(v -> holdingsRecordUpdateDelegate.handle(eventPayload, marcRecord, context))
            .onComplete(ar -> {
            if (ar.succeeded()) {
                eventPayload.remove(CURRENT_RETRY_NUMBER);
                promise.complete(record.key());
            } else {
                if (ar.cause() instanceof OptimisticLockingException) {
                    processOLError(record, promise, eventPayload, ar);
                } else {
                    eventPayload.remove(CURRENT_RETRY_NUMBER);
                    LOGGER.error("Failed to process data import event payload ", ar.cause());
                    promise.fail(ar.cause());
                }
            }
        });
        return promise.future();
    } catch (Exception e) {
        LOGGER.error(format("Failed to process data import kafka record from topic %s ", record.topic()), e);
        return Future.failedFuture(e);
    }
}
Also used : Context(org.folio.inventory.common.Context) EventHandlingUtil.constructContext(org.folio.inventory.dataimport.handlers.matching.util.EventHandlingUtil.constructContext) Json(io.vertx.core.json.Json) MappingMetadataDto(org.folio.MappingMetadataDto) OKAPI_TENANT_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_TENANT_HEADER) HashMap(java.util.HashMap) OKAPI_URL_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_URL_HEADER) HoldingsUpdateDelegate(org.folio.inventory.dataimport.handlers.actions.HoldingsUpdateDelegate) ObjectMapperTool(org.folio.dbschema.ObjectMapperTool) Map(java.util.Map) AsyncResult(io.vertx.core.AsyncResult) StringUtils.isEmpty(org.apache.commons.lang3.StringUtils.isEmpty) Event(org.folio.rest.jaxrs.model.Event) Record(org.folio.rest.jaxrs.model.Record) Promise(io.vertx.core.Promise) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper) AsyncRecordHandler(org.folio.kafka.AsyncRecordHandler) Future(io.vertx.core.Future) HoldingsRecord(org.folio.HoldingsRecord) String.format(java.lang.String.format) Logger(org.apache.logging.log4j.Logger) EventProcessingException(org.folio.processing.exceptions.EventProcessingException) KafkaConsumerRecord(io.vertx.kafka.client.consumer.KafkaConsumerRecord) OKAPI_TOKEN_HEADER(org.folio.rest.util.OkapiConnectionParams.OKAPI_TOKEN_HEADER) OptimisticLockingException(org.folio.inventory.dataimport.exceptions.OptimisticLockingException) LogManager(org.apache.logging.log4j.LogManager) KafkaHeaderUtils(org.folio.kafka.KafkaHeaderUtils) MappingMetadataCache(org.folio.inventory.dataimport.cache.MappingMetadataCache)

Example 15 with MappingMetadataDto

Use of org.folio.MappingMetadataDto in project mod-inventory by folio-org.

In class UpdateItemEventHandler, method handle:

@Override
public CompletableFuture<DataImportEventPayload> handle(DataImportEventPayload dataImportEventPayload) {
    CompletableFuture<DataImportEventPayload> future = new CompletableFuture<>();
    try {
        dataImportEventPayload.setEventType(DI_INVENTORY_ITEM_UPDATED.value());
        HashMap<String, String> payloadContext = dataImportEventPayload.getContext();
        if (isNull(payloadContext) || isBlank(payloadContext.get(MARC_BIBLIOGRAPHIC.value())) || isBlank(payloadContext.get(ITEM.value()))) {
            LOG.error(PAYLOAD_HAS_NO_DATA_MSG);
            return CompletableFuture.failedFuture(new EventProcessingException(PAYLOAD_HAS_NO_DATA_MSG));
        }
        if (dataImportEventPayload.getCurrentNode().getChildSnapshotWrappers().isEmpty()) {
            LOG.error(ACTION_HAS_NO_MAPPING_MSG);
            return CompletableFuture.failedFuture(new EventProcessingException(ACTION_HAS_NO_MAPPING_MSG));
        }
        LOG.info("Processing UpdateItemEventHandler starting with jobExecutionId: {}.", dataImportEventPayload.getJobExecutionId());
        AtomicBoolean isProtectedStatusChanged = new AtomicBoolean();
        Context context = EventHandlingUtil.constructContext(dataImportEventPayload.getTenant(), dataImportEventPayload.getToken(), dataImportEventPayload.getOkapiUrl());
        String jobExecutionId = dataImportEventPayload.getJobExecutionId();
        String recordId = dataImportEventPayload.getContext().get(RECORD_ID_HEADER);
        String chunkId = dataImportEventPayload.getContext().get(CHUNK_ID_HEADER);
        mappingMetadataCache.get(jobExecutionId, context)
            .map(parametersOptional -> parametersOptional.orElseThrow(() ->
                new EventProcessingException(format(MAPPING_METADATA_NOT_FOUND_MSG, jobExecutionId, recordId, chunkId))))
            .compose(mappingMetadataDto -> {
            String oldItemStatus = preparePayloadAndGetStatus(dataImportEventPayload, payloadContext, mappingMetadataDto);
            JsonObject mappedItemAsJson = new JsonObject(payloadContext.get(ITEM.value()));
            mappedItemAsJson = mappedItemAsJson.containsKey(ITEM_PATH_FIELD) ? mappedItemAsJson.getJsonObject(ITEM_PATH_FIELD) : mappedItemAsJson;
            List<String> errors = validateItem(mappedItemAsJson, requiredFields);
            if (!errors.isEmpty()) {
                String msg = format("Mapped Item is invalid: %s, by jobExecutionId: '%s' and recordId: '%s' and chunkId: '%s' ", errors, jobExecutionId, recordId, chunkId);
                LOG.error(msg);
                return Future.failedFuture(msg);
            }
            String newItemStatus = mappedItemAsJson.getJsonObject(STATUS_KEY).getString("name");
            isProtectedStatusChanged.set(isProtectedStatusChanged(oldItemStatus, newItemStatus));
            if (isProtectedStatusChanged.get()) {
                mappedItemAsJson.getJsonObject(STATUS_KEY).put("name", oldItemStatus);
            }
            ItemCollection itemCollection = storage.getItemCollection(context);
            Item itemToUpdate = ItemUtil.jsonToItem(mappedItemAsJson);
            return verifyItemBarcodeUniqueness(itemToUpdate, itemCollection).compose(v -> updateItemAndRetryIfOLExists(itemToUpdate, itemCollection, dataImportEventPayload)).onSuccess(updatedItem -> {
                if (isProtectedStatusChanged.get()) {
                    String msg = String.format(STATUS_UPDATE_ERROR_MSG, oldItemStatus, newItemStatus);
                    LOG.warn(msg);
                    dataImportEventPayload.getContext().put(ITEM.value(), ItemUtil.mapToJson(updatedItem).encode());
                    future.completeExceptionally(new EventProcessingException(msg));
                } else {
                    addHoldingToPayloadIfNeeded(dataImportEventPayload, context, updatedItem).onComplete(item -> {
                        dataImportEventPayload.getContext().put(ITEM.value(), ItemUtil.mapToJson(updatedItem).encode());
                        future.complete(dataImportEventPayload);
                    });
                }
            });
        }).onFailure(e -> {
            LOG.error("Failed to update inventory Item by jobExecutionId: '{}' and recordId: '{}' and chunkId: '{}' ", jobExecutionId, recordId, chunkId, e);
            future.completeExceptionally(e);
        });
    } catch (Exception e) {
        LOG.error("Error updating inventory Item", e);
        future.completeExceptionally(e);
    }
    return future;
}
Also used : Context(org.folio.inventory.common.Context) MappingContext(org.folio.processing.mapping.mapper.MappingContext) Arrays(java.util.Arrays) MappingMetadataDto(org.folio.MappingMetadataDto) EventHandler(org.folio.processing.events.services.handler.EventHandler) ItemUtil(org.folio.inventory.support.ItemUtil) JsonHelper(org.folio.inventory.support.JsonHelper) ZonedDateTime(java.time.ZonedDateTime) HttpStatus(org.apache.http.HttpStatus) Failure(org.folio.inventory.common.domain.Failure) Item(org.folio.inventory.domain.items.Item) ACTION_PROFILE(org.folio.rest.jaxrs.model.ProfileSnapshotWrapper.ContentType.ACTION_PROFILE) StringUtils(org.apache.commons.lang3.StringUtils) ITEM(org.folio.rest.jaxrs.model.EntityType.ITEM) HoldingsRecordCollection(org.folio.inventory.domain.HoldingsRecordCollection) ProfileSnapshotWrapper(org.folio.rest.jaxrs.model.ProfileSnapshotWrapper) ObjectMapperTool(org.folio.dbschema.ObjectMapperTool) Objects.isNull(java.util.Objects.isNull) ItemCollection(org.folio.inventory.domain.items.ItemCollection) JsonObject(io.vertx.core.json.JsonObject) ZoneOffset(java.time.ZoneOffset) StringUtils.isEmpty(org.apache.commons.lang3.StringUtils.isEmpty) DataImportEventPayload(org.folio.DataImportEventPayload) Set(java.util.Set) UUID(java.util.UUID) Future(io.vertx.core.Future) String.format(java.lang.String.format) Storage(org.folio.inventory.storage.Storage) List(java.util.List) Logger(org.apache.logging.log4j.Logger) EventHandlingUtil(org.folio.inventory.dataimport.handlers.matching.util.EventHandlingUtil) UnsupportedEncodingException(java.io.UnsupportedEncodingException) PagingParameters(org.folio.inventory.common.api.request.PagingParameters) Json(io.vertx.core.json.Json) MappingManager(org.folio.processing.mapping.MappingManager) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) DI_INVENTORY_ITEM_UPDATED(org.folio.DataImportEventTypes.DI_INVENTORY_ITEM_UPDATED) HashSet(java.util.HashSet) MARC_BIBLIOGRAPHIC(org.folio.rest.jaxrs.model.EntityType.MARC_BIBLIOGRAPHIC) ActionProfile(org.folio.ActionProfile) MappingParameters(org.folio.processing.mapping.defaultmapper.processor.parameters.MappingParameters) HOLDINGS(org.folio.rest.jaxrs.model.EntityType.HOLDINGS) ItemStatusName(org.folio.inventory.domain.items.ItemStatusName) CqlHelper(org.folio.inventory.support.CqlHelper) Promise(io.vertx.core.Promise) JsonProcessingException(com.fasterxml.jackson.core.JsonProcessingException) UPDATE(org.folio.ActionProfile.Action.UPDATE) EventProcessingException(org.folio.processing.exceptions.EventProcessingException) StringUtils.isBlank(org.apache.commons.lang3.StringUtils.isBlank) STATUS_KEY(org.folio.inventory.domain.items.Item.STATUS_KEY) DateTimeFormatter(java.time.format.DateTimeFormatter) LogManager(org.apache.logging.log4j.LogManager) MappingMetadataCache(org.folio.inventory.dataimport.cache.MappingMetadataCache)
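The protected-status handling in this handler (isProtectedStatusChanged plus the revert of the status name before persisting) can be sketched as below. The concrete set of protected statuses is hypothetical, chosen only for illustration; mod-inventory defines its own list via ItemStatusName.

```java
import java.util.Set;

// Sketch of the protected-status rule: if the item's current status is
// protected and the mapped record would change it, the update keeps the
// old status (and the handler above then completes exceptionally with
// STATUS_UPDATE_ERROR_MSG).
public class ProtectedStatusSketch {

    // Hypothetical protected statuses, for illustration only.
    static final Set<String> PROTECTED = Set.of("Checked out", "Paged", "Aged to lost");

    static boolean isProtectedStatusChanged(String oldStatus, String newStatus) {
        return PROTECTED.contains(oldStatus) && !oldStatus.equals(newStatus);
    }

    /** The status name to persist: keep the old one when it is protected. */
    static String statusToPersist(String oldStatus, String newStatus) {
        return isProtectedStatusChanged(oldStatus, newStatus) ? oldStatus : newStatus;
    }
}
```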

Aggregations

MappingMetadataDto (org.folio.MappingMetadataDto): 23
JsonObject (io.vertx.core.json.JsonObject): 21
Context (org.folio.inventory.common.Context): 17
Before (org.junit.Before): 16
MappingParameters (org.folio.processing.mapping.defaultmapper.processor.parameters.MappingParameters): 13
Success (org.folio.inventory.common.domain.Success): 12
MappingMetadataCache (org.folio.inventory.dataimport.cache.MappingMetadataCache): 11
Consumer (java.util.function.Consumer): 8
DataImportEventPayload (org.folio.DataImportEventPayload): 8
TestContext (io.vertx.ext.unit.TestContext): 7
HoldingsRecord (org.folio.HoldingsRecord): 7
Instance (org.folio.inventory.domain.instances.Instance): 7
RegexPattern (com.github.tomakehurst.wiremock.matching.RegexPattern): 6
UrlPathPattern (com.github.tomakehurst.wiremock.matching.UrlPathPattern): 6
Record (org.folio.rest.jaxrs.model.Record): 6
EventProcessingException (org.folio.processing.exceptions.EventProcessingException): 5
Promise (io.vertx.core.Promise): 4
Json (io.vertx.core.json.Json): 4
KafkaConsumerRecord (io.vertx.kafka.client.consumer.KafkaConsumerRecord): 4
String.format (java.lang.String.format): 4