Example 6 with StoreException

Use of io.confluent.kafka.schemaregistry.storage.exceptions.StoreException in project schema-registry by confluentinc.

From the class KafkaSchemaRegistry, method get:

public SchemaString get(int id, String subject, String format, boolean fetchMaxId) throws SchemaRegistryException {
    SchemaValue schema = null;
    try {
        SchemaKey subjectVersionKey = getSchemaKeyUsingContexts(id, subject);
        if (subjectVersionKey == null) {
            return null;
        }
        schema = (SchemaValue) kafkaStore.get(subjectVersionKey);
        if (schema == null) {
            return null;
        }
    } catch (StoreException e) {
        throw new SchemaRegistryStoreException("Error while retrieving schema with id " + id + " from the backend Kafka" + " store", e);
    }
    SchemaString schemaString = new SchemaString();
    schemaString.setSchemaType(schema.getSchemaType());
    List<io.confluent.kafka.schemaregistry.client.rest.entities.SchemaReference> refs =
        schema.getReferences() != null
            ? schema.getReferences().stream()
                .map(ref -> new io.confluent.kafka.schemaregistry.client.rest.entities.SchemaReference(
                    ref.getName(), ref.getSubject(), ref.getVersion()))
                .collect(Collectors.toList())
            : null;
    schemaString.setReferences(refs);
    if (format != null && !format.trim().isEmpty()) {
        ParsedSchema parsedSchema = parseSchema(schema.getSchemaType(), schema.getSchema(), refs, false);
        schemaString.setSchemaString(parsedSchema.formattedString(format));
    } else {
        schemaString.setSchemaString(schema.getSchema());
    }
    if (fetchMaxId) {
        schemaString.setMaxId(idGenerator.getMaxId(schema));
    }
    return schemaString;
}
Also used : SchemaString(io.confluent.kafka.schemaregistry.client.rest.entities.SchemaString) SchemaRegistryStoreException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException) StoreException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreException) ParsedSchema(io.confluent.kafka.schemaregistry.ParsedSchema)
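
For context, a minimal caller sketch for the method above. It is hypothetical (the wrapper class, helper name, and registry variable are illustrative, not from the project) and only shows how the wrapped StoreException reaches callers as a SchemaRegistryStoreException; the getSchemaString() getter is assumed from the setter used above:

import io.confluent.kafka.schemaregistry.client.rest.entities.SchemaString;
import io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException;
import io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException;
import io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry;

public class GetSchemaSketch {

    // Look up a schema by id, separating "not found" (null) from backend failure.
    static String fetchSchemaString(KafkaSchemaRegistry registry, int id, String subject)
            throws SchemaRegistryException {
        try {
            // format = null returns the stored schema text unchanged;
            // fetchMaxId = false skips the idGenerator.getMaxId() lookup shown above
            SchemaString result = registry.get(id, subject, null, false);
            return result == null ? null : result.getSchemaString();
        } catch (SchemaRegistryStoreException e) {
            // This is the StoreException from kafkaStore.get(), wrapped by the registry
            System.err.println("Backend Kafka store failure: " + e.getMessage());
            throw e;
        }
    }
}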

Example 7 with StoreException

Use of io.confluent.kafka.schemaregistry.storage.exceptions.StoreException in project schema-registry by confluentinc.

From the class KafkaSchemaRegistry, method deleteSubject:

@Override
public List<Integer> deleteSubject(String subject, boolean permanentDelete) throws SchemaRegistryException {
    // Ensure cache is up-to-date before any potential writes
    try {
        if (isReadOnlyMode(subject)) {
            throw new OperationNotPermittedException("Subject " + subject + " is in read-only mode");
        }
        kafkaStore.waitUntilKafkaReaderReachesLastOffset(subject, kafkaStoreTimeoutMs);
        List<Integer> deletedVersions = new ArrayList<>();
        int deleteWatermarkVersion = 0;
        Iterator<Schema> schemasToBeDeleted = getAllVersions(subject, permanentDelete);
        while (schemasToBeDeleted.hasNext()) {
            deleteWatermarkVersion = schemasToBeDeleted.next().getVersion();
            SchemaKey key = new SchemaKey(subject, deleteWatermarkVersion);
            if (!lookupCache.referencesSchema(key).isEmpty()) {
                throw new ReferenceExistsException(key.toString());
            }
            if (permanentDelete) {
                SchemaValue schemaValue = (SchemaValue) lookupCache.get(key);
                if (schemaValue != null && !schemaValue.isDeleted()) {
                    throw new SubjectNotSoftDeletedException(subject);
                }
            }
            deletedVersions.add(deleteWatermarkVersion);
        }
        if (!permanentDelete) {
            DeleteSubjectKey key = new DeleteSubjectKey(subject);
            DeleteSubjectValue value = new DeleteSubjectValue(subject, deleteWatermarkVersion);
            kafkaStore.put(key, value);
            if (getMode(subject) != null) {
                deleteMode(subject);
            }
            if (getCompatibilityLevel(subject) != null) {
                deleteCompatibility(subject);
            }
        } else {
            for (Integer version : deletedVersions) {
                kafkaStore.put(new SchemaKey(subject, version), null);
            }
        }
        return deletedVersions;
    } catch (StoreTimeoutException te) {
        throw new SchemaRegistryTimeoutException("Write to the Kafka store timed out while deleting the subject", te);
    } catch (StoreException e) {
        throw new SchemaRegistryStoreException("Error while deleting the subject in the" + " backend Kafka store", e);
    }
}
Also used : ReferenceExistsException(io.confluent.kafka.schemaregistry.exceptions.ReferenceExistsException) ParsedSchema(io.confluent.kafka.schemaregistry.ParsedSchema) Schema(io.confluent.kafka.schemaregistry.client.rest.entities.Schema) AvroSchema(io.confluent.kafka.schemaregistry.avro.AvroSchema) ArrayList(java.util.ArrayList) SchemaRegistryStoreException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException) SubjectNotSoftDeletedException(io.confluent.kafka.schemaregistry.exceptions.SubjectNotSoftDeletedException) StoreException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreException) StoreTimeoutException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException) OperationNotPermittedException(io.confluent.kafka.schemaregistry.exceptions.OperationNotPermittedException) SchemaRegistryTimeoutException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException)
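
A hedged usage sketch for deleteSubject: because the permanent-delete path above throws SubjectNotSoftDeletedException for versions that were never soft-deleted, removing a subject entirely is a two-step call. The wrapper class and variable names are illustrative:

import java.util.List;

import io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException;
import io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry;

public class DeleteSubjectSketch {

    // Soft-delete first (writes a DeleteSubjectValue watermark and clears any
    // per-subject mode and compatibility), then permanently delete (writes null
    // tombstones for each already soft-deleted SchemaKey).
    static void deleteSubjectCompletely(KafkaSchemaRegistry registry, String subject)
            throws SchemaRegistryException {
        List<Integer> softDeleted = registry.deleteSubject(subject, false);
        System.out.println("Soft-deleted versions of " + subject + ": " + softDeleted);

        List<Integer> hardDeleted = registry.deleteSubject(subject, true);
        System.out.println("Permanently deleted versions of " + subject + ": " + hardDeleted);
    }
}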

Example 8 with StoreException

Use of io.confluent.kafka.schemaregistry.storage.exceptions.StoreException in project schema-registry by confluentinc.

From the class KafkaSchemaRegistry, method register:

@Override
public int register(String subject, Schema schema, boolean normalize) throws SchemaRegistryException {
    try {
        checkRegisterMode(subject, schema);
        // Ensure cache is up-to-date before any potential writes
        kafkaStore.waitUntilKafkaReaderReachesLastOffset(subject, kafkaStoreTimeoutMs);
        int schemaId = schema.getId();
        ParsedSchema parsedSchema = canonicalizeSchema(schema, schemaId < 0, normalize);
        // see if the schema to be registered already exists
        SchemaIdAndSubjects schemaIdAndSubjects = this.lookupCache.schemaIdAndSubjects(schema);
        if (schemaIdAndSubjects != null) {
            if (schemaId >= 0 && schemaId != schemaIdAndSubjects.getSchemaId()) {
                throw new IdDoesNotMatchException(schemaIdAndSubjects.getSchemaId(), schema.getId());
            }
            if (schemaIdAndSubjects.hasSubject(subject) && !isSubjectVersionDeleted(subject, schemaIdAndSubjects.getVersion(subject))) {
                // return only if the schema was previously registered under the input subject
                return schemaIdAndSubjects.getSchemaId();
            } else {
                // need to register schema under the input subject
                schemaId = schemaIdAndSubjects.getSchemaId();
            }
        }
        // determine the latest version of the schema in the subject
        List<SchemaValue> allVersions = getAllSchemaValues(subject);
        Collections.reverse(allVersions);
        List<SchemaValue> deletedVersions = new ArrayList<>();
        List<ParsedSchema> undeletedVersions = new ArrayList<>();
        int newVersion = MIN_VERSION;
        for (SchemaValue schemaValue : allVersions) {
            newVersion = Math.max(newVersion, schemaValue.getVersion() + 1);
            if (schemaValue.isDeleted()) {
                deletedVersions.add(schemaValue);
            } else {
                ParsedSchema undeletedSchema = parseSchema(getSchemaEntityFromSchemaValue(schemaValue));
                if (parsedSchema.references().isEmpty() && !undeletedSchema.references().isEmpty() && parsedSchema.deepEquals(undeletedSchema)) {
                    // This handles the case where a schema is sent with all references resolved
                    return schemaValue.getId();
                }
                undeletedVersions.add(undeletedSchema);
            }
        }
        Collections.reverse(undeletedVersions);
        final List<String> compatibilityErrorLogs = isCompatibleWithPrevious(subject, parsedSchema, undeletedVersions);
        final boolean isCompatible = compatibilityErrorLogs.isEmpty();
        if (normalize) {
            parsedSchema = parsedSchema.normalize();
        }
        // Allow schema providers to modify the schema during compatibility checks
        schema.setSchema(parsedSchema.canonicalString());
        schema.setReferences(parsedSchema.references());
        if (isCompatible) {
            // save the context key
            QualifiedSubject qs = QualifiedSubject.create(tenant(), subject);
            if (qs != null && !DEFAULT_CONTEXT.equals(qs.getContext())) {
                ContextKey contextKey = new ContextKey(qs.getTenant(), qs.getContext());
                if (kafkaStore.get(contextKey) == null) {
                    ContextValue contextValue = new ContextValue(qs.getTenant(), qs.getContext());
                    kafkaStore.put(contextKey, contextValue);
                }
            }
            // assign a guid and put the schema in the kafka store
            if (schema.getVersion() <= 0) {
                schema.setVersion(newVersion);
            }
            SchemaKey schemaKey = new SchemaKey(subject, schema.getVersion());
            if (schemaId >= 0) {
                checkIfSchemaWithIdExist(schemaId, schema);
                schema.setId(schemaId);
                kafkaStore.put(schemaKey, new SchemaValue(schema));
            } else {
                int retries = 0;
                while (retries++ < kafkaStoreMaxRetries) {
                    int newId = idGenerator.id(new SchemaValue(schema));
                    // Verify id is not already in use
                    if (lookupCache.schemaKeyById(newId, subject) == null) {
                        schema.setId(newId);
                        if (retries > 1) {
                            log.warn(String.format("Retrying to register the schema with ID %s", newId));
                        }
                        kafkaStore.put(schemaKey, new SchemaValue(schema));
                        break;
                    }
                }
                if (retries >= kafkaStoreMaxRetries) {
                    throw new SchemaRegistryStoreException("Error while registering the schema due " + "to generating an ID that is already in use.");
                }
            }
            for (SchemaValue deleted : deletedVersions) {
                if (deleted.getId().equals(schema.getId()) && deleted.getVersion().compareTo(schema.getVersion()) < 0) {
                    // Tombstone previous version with the same ID
                    SchemaKey key = new SchemaKey(deleted.getSubject(), deleted.getVersion());
                    kafkaStore.put(key, null);
                }
            }
            return schema.getId();
        } else {
            throw new IncompatibleSchemaException(compatibilityErrorLogs.toString());
        }
    } catch (StoreTimeoutException te) {
        throw new SchemaRegistryTimeoutException("Write to the Kafka store timed out while registering the schema", te);
    } catch (StoreException e) {
        throw new SchemaRegistryStoreException("Error while registering the schema in the" + " backend Kafka store", e);
    }
}
Also used : QualifiedSubject(io.confluent.kafka.schemaregistry.utils.QualifiedSubject) ArrayList(java.util.ArrayList) SchemaRegistryStoreException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException) SchemaString(io.confluent.kafka.schemaregistry.client.rest.entities.SchemaString) StoreException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreException) IncompatibleSchemaException(io.confluent.kafka.schemaregistry.exceptions.IncompatibleSchemaException) IdDoesNotMatchException(io.confluent.kafka.schemaregistry.exceptions.IdDoesNotMatchException) StoreTimeoutException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException) ParsedSchema(io.confluent.kafka.schemaregistry.ParsedSchema) SchemaRegistryTimeoutException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException)
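
A sketch of registering a new Avro schema through the method above. The Schema constructor arguments shown (subject, version, id, schema type, references, schema text) are an assumption about this version of the client entities; version 0 and id -1 ask the registry to assign both, matching the schema.getVersion() <= 0 and schemaId < 0 branches in the code:

import java.util.Collections;

import io.confluent.kafka.schemaregistry.client.rest.entities.Schema;
import io.confluent.kafka.schemaregistry.exceptions.IncompatibleSchemaException;
import io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException;
import io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry;

public class RegisterSketch {

    static int registerAvro(KafkaSchemaRegistry registry, String subject, String avroSchemaText)
            throws SchemaRegistryException {
        // version = 0 and id = -1 let the registry pick the next version and a fresh id
        Schema schema = new Schema(subject, 0, -1, "AVRO", Collections.emptyList(), avroSchemaText);
        try {
            // normalize = false registers the canonicalized, not normalized, schema text
            return registry.register(subject, schema, false);
        } catch (IncompatibleSchemaException e) {
            // the compatibilityErrorLogs from isCompatibleWithPrevious end up in this message
            System.err.println("Schema rejected as incompatible: " + e.getMessage());
            throw e;
        }
    }
}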

Example 9 with StoreException

Use of io.confluent.kafka.schemaregistry.storage.exceptions.StoreException in project schema-registry by confluentinc.

From the class KafkaStoreReaderThread, method doWork:

@Override
public void doWork() {
    try {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(Long.MAX_VALUE);
        storeUpdateHandler.startBatch(records.count());
        for (ConsumerRecord<byte[], byte[]> record : records) {
            K messageKey = null;
            try {
                messageKey = this.serializer.deserializeKey(record.key());
            } catch (SerializationException e) {
                log.error("Failed to deserialize the schema or config key at offset " + record.offset(), e);
                continue;
            }
            if (messageKey.equals(noopKey)) {
                // If it's a noop, update local offset counter and do nothing else
                try {
                    offsetUpdateLock.lock();
                    offsetInSchemasTopic = record.offset();
                    offsetReachedThreshold.signalAll();
                } finally {
                    offsetUpdateLock.unlock();
                }
            } else {
                V message = null;
                try {
                    message = record.value() == null ? null : serializer.deserializeValue(messageKey, record.value());
                } catch (SerializationException e) {
                    log.error("Failed to deserialize a schema or config update at offset " + record.offset(), e);
                    continue;
                }
                try {
                    log.trace("Applying update (" + messageKey + "," + message + ") to the local store");
                    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                    long offset = record.offset();
                    long timestamp = record.timestamp();
                    ValidationStatus status = this.storeUpdateHandler.validateUpdate(messageKey, message, tp, offset, timestamp);
                    V oldMessage;
                    switch(status) {
                        case SUCCESS:
                            if (message == null) {
                                oldMessage = localStore.delete(messageKey);
                            } else {
                                oldMessage = localStore.put(messageKey, message);
                            }
                            this.storeUpdateHandler.handleUpdate(messageKey, message, oldMessage, tp, offset, timestamp);
                            break;
                        case ROLLBACK_FAILURE:
                            oldMessage = localStore.get(messageKey);
                            try {
                                ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(topic, record.key(), oldMessage == null ? null : serializer.serializeValue(oldMessage));
                                producer.send(producerRecord);
                                log.warn("Rollback invalid update to key {}", messageKey);
                            } catch (KafkaException | SerializationException ke) {
                                log.error("Failed to recover from invalid update to key {}", messageKey, ke);
                            }
                            break;
                        case IGNORE_FAILURE:
                        default:
                            log.warn("Ignore invalid update to key {}", messageKey);
                            break;
                    }
                    try {
                        offsetUpdateLock.lock();
                        offsetInSchemasTopic = record.offset();
                        offsetReachedThreshold.signalAll();
                    } finally {
                        offsetUpdateLock.unlock();
                    }
                } catch (Exception se) {
                    log.error("Failed to add record from the Kafka topic" + topic + " the local store", se);
                }
            }
        }
        if (localStore.isPersistent() && initialized.get()) {
            try {
                localStore.flush();
                Map<TopicPartition, Long> offsets = storeUpdateHandler.checkpoint(records.count());
                checkpointOffsets(offsets);
            } catch (StoreException se) {
                log.warn("Failed to flush", se);
            }
        }
        storeUpdateHandler.endBatch(records.count());
    } catch (WakeupException we) {
    // do nothing because the thread is closing -- see shutdown()
    } catch (RecordTooLargeException rtle) {
        throw new IllegalStateException("Consumer threw RecordTooLargeException. A schema has been written that " + "exceeds the default maximum fetch size.", rtle);
    } catch (RuntimeException e) {
        log.error("KafkaStoreReader thread has died for an unknown reason.", e);
        throw new RuntimeException(e);
    }
}
Also used : SerializationException(io.confluent.kafka.schemaregistry.storage.exceptions.SerializationException) WakeupException(org.apache.kafka.common.errors.WakeupException) KafkaException(org.apache.kafka.common.KafkaException) StoreTimeoutException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException) IOException(java.io.IOException) StoreException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreException) RecordTooLargeException(org.apache.kafka.common.errors.RecordTooLargeException) ValidationStatus(io.confluent.kafka.schemaregistry.storage.StoreUpdateHandler.ValidationStatus) TopicPartition(org.apache.kafka.common.TopicPartition) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord)
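
The noop-key branch above is the producer side of the offset hand-off that waitUntilKafkaReaderReachesLastOffset (used by the write paths in the earlier examples) blocks on. A stripped-down, hypothetical sketch of that lock/condition pattern, not code from the project:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class OffsetBarrierSketch {

    private final ReentrantLock offsetUpdateLock = new ReentrantLock();
    private final Condition offsetReachedThreshold = offsetUpdateLock.newCondition();
    private long offsetInSchemasTopic = -1L;

    // Reader thread: record how far the local store has been updated and wake waiters.
    void recordAppliedOffset(long offset) {
        offsetUpdateLock.lock();
        try {
            offsetInSchemasTopic = offset;
            offsetReachedThreshold.signalAll();
        } finally {
            offsetUpdateLock.unlock();
        }
    }

    // Writer thread: block until the reader has applied everything up to targetOffset.
    // Returns false on timeout; a caller would map that to a StoreTimeoutException.
    boolean waitUntilOffsetReached(long targetOffset, long timeoutMs) throws InterruptedException {
        long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        offsetUpdateLock.lock();
        try {
            while (offsetInSchemasTopic < targetOffset) {
                long remainingNanos = deadlineNanos - System.nanoTime();
                if (remainingNanos <= 0) {
                    return false;
                }
                offsetReachedThreshold.awaitNanos(remainingNanos);
            }
            return true;
        } finally {
            offsetUpdateLock.unlock();
        }
    }
}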

Example 10 with StoreException

Use of io.confluent.kafka.schemaregistry.storage.exceptions.StoreException in project schema-registry by confluentinc.

From the class KafkaSchemaRegistry, method getModeInScope:

public Mode getModeInScope(String subject) throws SchemaRegistryStoreException {
    try {
        Mode globalMode = lookupCache.mode(null, true, defaultMode);
        Mode subjectMode = lookupCache.mode(subject, true, defaultMode);
        return globalMode == Mode.READONLY_OVERRIDE ? globalMode : subjectMode;
    } catch (StoreException e) {
        throw new SchemaRegistryStoreException("Failed to read the mode from the backend Kafka store", e);
    }
}
Also used : SchemaRegistryStoreException(io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException) StoreException(io.confluent.kafka.schemaregistry.storage.exceptions.StoreException)
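
A small hedged sketch of how a write path might consult getModeInScope before mutating state; the helper is hypothetical, and the Mode constants READONLY and READONLY_OVERRIDE are assumed from the comparison in the method above:

import io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException;
import io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry;
import io.confluent.kafka.schemaregistry.storage.Mode;

public class ModeCheckSketch {

    // Returns true if writes to the subject should be allowed.
    static boolean isWritable(KafkaSchemaRegistry registry, String subject)
            throws SchemaRegistryStoreException {
        Mode mode = registry.getModeInScope(subject);
        // a global READONLY_OVERRIDE wins over any per-subject mode, per the method above
        return mode != Mode.READONLY && mode != Mode.READONLY_OVERRIDE;
    }
}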

Aggregations

StoreException (io.confluent.kafka.schemaregistry.storage.exceptions.StoreException): 22 uses
SchemaRegistryStoreException (io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryStoreException): 16 uses
OperationNotPermittedException (io.confluent.kafka.schemaregistry.exceptions.OperationNotPermittedException): 7 uses
StoreTimeoutException (io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException): 7 uses
ParsedSchema (io.confluent.kafka.schemaregistry.ParsedSchema): 5 uses
SchemaString (io.confluent.kafka.schemaregistry.client.rest.entities.SchemaString): 4 uses
SchemaRegistryTimeoutException (io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException): 4 uses
ArrayList (java.util.ArrayList): 4 uses
AvroSchema (io.confluent.kafka.schemaregistry.avro.AvroSchema): 3 uses
Schema (io.confluent.kafka.schemaregistry.client.rest.entities.Schema): 3 uses
ReferenceExistsException (io.confluent.kafka.schemaregistry.exceptions.ReferenceExistsException): 3 uses
StoreInitializationException (io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException): 3 uses
ExecutionException (java.util.concurrent.ExecutionException): 3 uses
TimeoutException (java.util.concurrent.TimeoutException): 3 uses
SchemaProvider (io.confluent.kafka.schemaregistry.SchemaProvider): 2 uses
RestService (io.confluent.kafka.schemaregistry.client.rest.RestService): 2 uses
IdDoesNotMatchException (io.confluent.kafka.schemaregistry.exceptions.IdDoesNotMatchException): 2 uses
IncompatibleSchemaException (io.confluent.kafka.schemaregistry.exceptions.IncompatibleSchemaException): 2 uses
SchemaRegistryException (io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException): 2 uses
SchemaVersionNotSoftDeletedException (io.confluent.kafka.schemaregistry.exceptions.SchemaVersionNotSoftDeletedException): 2 uses