
Example 51 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

In the class SolrIndex, the method register:

/**
 * Unlike the ElasticSearch index, which is schema-free, Solr requires a schema to
 * support searching. This means you will need to add the appropriate field
 * definitions to the Solr schema for queries to work properly. If you modify the
 * schema of a running Solr instance with new fields, don't forget to re-index!
 * @param store Index store
 * @param key New key to register
 * @param information data type to register for the key
 * @param tx enclosing transaction
 * @throws org.janusgraph.diskstorage.BackendException in case an exception is thrown when
 * creating a collection.
 */
@SuppressWarnings("unchecked")
@Override
public void register(String store, String key, KeyInformation information, BaseTransaction tx) throws BackendException {
    if (mode == Mode.CLOUD) {
        final CloudSolrClient client = (CloudSolrClient) solrClient;
        try {
            createCollectionIfNotExists(client, configuration, store);
        } catch (final IOException | SolrServerException | InterruptedException | KeeperException e) {
            throw new PermanentBackendException(e);
        }
    }
    // Since all data types must be defined in schema.xml, pre-registering a type is a no-op,
    // but we can still validate any configured analyzer.
    String analyzer = ParameterType.STRING_ANALYZER.findParameter(information.getParameters(), null);
    if (analyzer != null) {
        // If the key has an analyzer configured, verify it can be instantiated via reflection
        try {
            ((Constructor<Tokenizer>) ClassLoader.getSystemClassLoader().loadClass(analyzer).getConstructor()).newInstance();
        } catch (final ReflectiveOperationException e) {
            throw new PermanentBackendException(e.getMessage(), e);
        }
    }
    analyzer = ParameterType.TEXT_ANALYZER.findParameter(information.getParameters(), null);
    if (analyzer != null) {
        // If the key has an analyzer configured, verify it can be instantiated via reflection
        try {
            ((Constructor<Tokenizer>) ClassLoader.getSystemClassLoader().loadClass(analyzer).getConstructor()).newInstance();
        } catch (final ReflectiveOperationException e) {
            throw new PermanentBackendException(e.getMessage(), e);
        }
    }
}
Also used: PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), Constructor (java.lang.reflect.Constructor), SolrServerException (org.apache.solr.client.solrj.SolrServerException), UncheckedIOException (java.io.UncheckedIOException), IOException (java.io.IOException), KeeperException (org.apache.zookeeper.KeeperException), CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient)
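The analyzer validation in register() boils down to loading the configured class by name and invoking its no-arg constructor, so a misconfigured analyzer fails fast at registration time rather than at query time. A minimal stdlib-only sketch of that reflective pattern; the class name java.util.ArrayList is just a stand-in for a real Tokenizer implementation, and instantiateByName is a hypothetical helper, not part of SolrIndex:

```java
import java.lang.reflect.Constructor;

public class AnalyzerCheck {
    // Mirrors the register() validation: load the class named by the
    // STRING_ANALYZER/TEXT_ANALYZER parameter and instantiate it via its
    // public no-arg constructor.
    static Object instantiateByName(String className) throws ReflectiveOperationException {
        Constructor<?> ctor = ClassLoader.getSystemClassLoader()
                .loadClass(className)
                .getConstructor();
        return ctor.newInstance();
    }

    public static void main(String[] args) throws Exception {
        // A stand-in class with a public no-arg constructor.
        Object ok = instantiateByName("java.util.ArrayList");
        System.out.println(ok.getClass().getName()); // java.util.ArrayList

        // A missing class surfaces as a ReflectiveOperationException,
        // which register() wraps in PermanentBackendException.
        try {
            instantiateByName("com.example.NoSuchAnalyzer");
        } catch (ReflectiveOperationException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```

Any failure here (missing class, no accessible no-arg constructor) is permanent rather than transient, which is why register() maps it to PermanentBackendException instead of TemporaryBackendException.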

Example 52 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

In the class SolrIndex, the method mutate:

@Override
public void mutate(Map<String, Map<String, IndexMutation>> mutations, KeyInformation.IndexRetriever information, BaseTransaction tx) throws BackendException {
    logger.debug("Mutating SOLR");
    try {
        for (final Map.Entry<String, Map<String, IndexMutation>> stores : mutations.entrySet()) {
            final String collectionName = stores.getKey();
            final String keyIdField = getKeyFieldId(collectionName);
            final List<String> deleteIds = new ArrayList<>();
            final Collection<SolrInputDocument> changes = new ArrayList<>();
            for (final Map.Entry<String, IndexMutation> entry : stores.getValue().entrySet()) {
                final String docId = entry.getKey();
                final IndexMutation mutation = entry.getValue();
                Preconditions.checkArgument(!(mutation.isNew() && mutation.isDeleted()));
                Preconditions.checkArgument(!mutation.isNew() || !mutation.hasDeletions());
                Preconditions.checkArgument(!mutation.isDeleted() || !mutation.hasAdditions());
                // Handle any deletions
                if (mutation.hasDeletions()) {
                    if (mutation.isDeleted()) {
                        logger.trace("Deleting entire document {}", docId);
                        deleteIds.add(docId);
                    } else {
                        final List<IndexEntry> fieldDeletions = new ArrayList<>(mutation.getDeletions());
                        if (mutation.hasAdditions()) {
                            for (final IndexEntry indexEntry : mutation.getAdditions()) {
                                fieldDeletions.remove(indexEntry);
                            }
                        }
                        handleRemovalsFromIndex(collectionName, keyIdField, docId, fieldDeletions, information);
                    }
                }
                if (mutation.hasAdditions()) {
                    final int ttl = mutation.determineTTL();
                    final SolrInputDocument doc = new SolrInputDocument();
                    doc.setField(keyIdField, docId);
                    final boolean isNewDoc = mutation.isNew();
                    if (isNewDoc)
                        logger.trace("Adding new document {}", docId);
                    final Map<String, Object> adds = collectFieldValues(mutation.getAdditions(), collectionName, information);
                    // If cardinality is not single then we should use the "add" operation to update
                    // the index so we don't overwrite existing values.
                    adds.keySet().forEach(v -> {
                        final KeyInformation keyInformation = information.get(collectionName, v);
                        final String solrOp = keyInformation.getCardinality() == Cardinality.SINGLE ? "set" : "add";
                        doc.setField(v, isNewDoc ? adds.get(v) : new HashMap<String, Object>(1) {

                            {
                                put(solrOp, adds.get(v));
                            }
                        });
                    });
                    if (ttl > 0) {
                        Preconditions.checkArgument(isNewDoc, "Solr only supports TTL on new documents [%s]", docId);
                        doc.setField(ttlField, String.format("+%dSECONDS", ttl));
                    }
                    changes.add(doc);
                }
            }
            commitDeletes(collectionName, deleteIds);
            commitChanges(collectionName, changes);
        }
    } catch (final IllegalArgumentException e) {
        throw new PermanentBackendException("Unable to complete query on Solr.", e);
    } catch (final Exception e) {
        throw storageException(e);
    }
}
Also used: HashMap (java.util.HashMap), PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), ArrayList (java.util.ArrayList), IndexEntry (org.janusgraph.diskstorage.indexing.IndexEntry), SolrServerException (org.apache.solr.client.solrj.SolrServerException), UncheckedIOException (java.io.UncheckedIOException), IOException (java.io.IOException), TemporaryBackendException (org.janusgraph.diskstorage.TemporaryBackendException), BackendException (org.janusgraph.diskstorage.BackendException), KeeperException (org.apache.zookeeper.KeeperException), KeyInformation (org.janusgraph.diskstorage.indexing.KeyInformation), SolrInputDocument (org.apache.solr.common.SolrInputDocument), IndexMutation (org.janusgraph.diskstorage.indexing.IndexMutation), Map (java.util.Map)
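The cardinality branch in mutate() builds Solr's atomic-update payload: a new document carries raw field values, while an update wraps each value in a one-entry map whose key is the update operation, "set" for SINGLE cardinality (overwrite) and "add" otherwise (append without clobbering existing values). A stdlib-only sketch of that payload construction, using plain Maps in place of SolrInputDocument; fieldValue is a hypothetical helper and singleCardinality stands in for the KeyInformation cardinality lookup:

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicUpdatePayload {
    // Builds the per-field value for a Solr update document. New documents
    // carry the raw value; updates wrap it as {"set"|"add": value} so Solr
    // performs an atomic update instead of replacing the whole document.
    static Object fieldValue(boolean isNewDoc, boolean singleCardinality, Object value) {
        if (isNewDoc) {
            return value;
        }
        Map<String, Object> op = new HashMap<>(1);
        op.put(singleCardinality ? "set" : "add", value);
        return op;
    }

    public static void main(String[] args) {
        System.out.println(fieldValue(true, true, "v"));   // v
        System.out.println(fieldValue(false, true, "v"));  // {set=v}
        System.out.println(fieldValue(false, false, "v")); // {add=v}
    }
}
```

The same distinction explains the TTL guard in mutate(): the expiration field value "+%dSECONDS" is only meaningful when the whole document is freshly indexed, hence the precondition that TTL applies to new documents only.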

Example 53 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

In the class SolrIndex, the method exists:

@Override
public boolean exists() throws BackendException {
    if (mode != Mode.CLOUD)
        throw new UnsupportedOperationException("Operation only supported for SolrCloud");
    final CloudSolrClient server = (CloudSolrClient) solrClient;
    try {
        final ZkStateReader zkStateReader = server.getZkStateReader();
        zkStateReader.forciblyRefreshAllClusterStateSlow();
        final ClusterState clusterState = zkStateReader.getClusterState();
        final Map<String, DocCollection> collections = clusterState.getCollectionsMap();
        return collections != null && !collections.isEmpty();
    } catch (KeeperException | InterruptedException e) {
        throw new PermanentBackendException("Unable to check if index exists", e);
    }
}
Also used: ZkStateReader (org.apache.solr.common.cloud.ZkStateReader), ClusterState (org.apache.solr.common.cloud.ClusterState), PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), DocCollection (org.apache.solr.common.cloud.DocCollection), KeeperException (org.apache.zookeeper.KeeperException), CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient)
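After forcing a refresh of the ZooKeeper-backed cluster state, exists() reports true exactly when at least one collection is registered; the return expression is a null-safe non-empty check. A trivial stdlib sketch of that final guard, with a plain Map standing in for the collections map returned by getCollectionsMap() and indexExists as a hypothetical helper:

```java
import java.util.Collections;
import java.util.Map;

public class IndexExistsCheck {
    // Null-safe "at least one collection" test, mirroring the return
    // expression of exists(): collections != null && !collections.isEmpty().
    static boolean indexExists(Map<String, ?> collections) {
        return collections != null && !collections.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(indexExists(null));                   // false
        System.out.println(indexExists(Collections.emptyMap())); // false
        System.out.println(indexExists(Map.of("c1", new Object()))); // true
    }
}
```

Note that the ZooKeeper failure modes (KeeperException, InterruptedException) are wrapped in PermanentBackendException, and the method refuses to run at all outside SolrCloud mode, since a standalone Solr instance has no cluster state to inspect.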

Aggregations

PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException): 53
IOException (java.io.IOException): 24
TemporaryBackendException (org.janusgraph.diskstorage.TemporaryBackendException): 16
UncheckedIOException (java.io.UncheckedIOException): 12
BackendException (org.janusgraph.diskstorage.BackendException): 12
Configuration (org.janusgraph.diskstorage.configuration.Configuration): 8
ConnectionException (com.netflix.astyanax.connectionpool.exceptions.ConnectionException): 7
DatabaseException (com.sleepycat.je.DatabaseException): 7
BaseTransactionConfig (org.janusgraph.diskstorage.BaseTransactionConfig): 7
Duration (java.time.Duration): 6
ArrayList (java.util.ArrayList): 6
Map (java.util.Map): 6
StaticBuffer (org.janusgraph.diskstorage.StaticBuffer): 6
StoreTransaction (org.janusgraph.diskstorage.keycolumnvalue.StoreTransaction): 6
SolrServerException (org.apache.solr.client.solrj.SolrServerException): 5
KeeperException (org.apache.zookeeper.KeeperException): 5
Transaction (com.sleepycat.je.Transaction): 4
HashMap (java.util.HashMap): 4
CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient): 4
KeyInformation (org.janusgraph.diskstorage.indexing.KeyInformation): 4