
Example 26 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

The totals method of the class SolrIndex.

@Override
public Long totals(RawQuery query, KeyInformation.IndexRetriever information, BaseTransaction tx) throws BackendException {
    try {
        final String collection = query.getStore();
        final String keyIdField = getKeyFieldId(collection);
        final QueryResponse response = solrClient.query(collection, runCommonQuery(query, information, tx, collection, keyIdField));
        logger.debug("Executed query [{}] in {} ms", query.getQuery(), response.getElapsedTime());
        return response.getResults().getNumFound();
    } catch (final IOException e) {
        logger.error("Query did not complete : ", e);
        throw new PermanentBackendException(e);
    } catch (final SolrServerException e) {
        logger.error("Unable to query Solr index.", e);
        throw new PermanentBackendException(e);
    }
}
Also used: PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), QueryResponse (org.apache.solr.client.solrj.response.QueryResponse), SolrServerException (org.apache.solr.client.solrj.SolrServerException), UncheckedIOException (java.io.UncheckedIOException), IOException (java.io.IOException)
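
The totals implementation above follows the error-translation pattern shared by the rest of these examples: SolrJ's checked IOException and SolrServerException are caught at the index boundary and rethrown as JanusGraph's PermanentBackendException. The following is a minimal, self-contained sketch of that pattern; the class name SolrErrorTranslation, the countDocuments helper, and its parameters are illustrative and not part of the JanusGraph source.

import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.janusgraph.diskstorage.BackendException;
import org.janusgraph.diskstorage.PermanentBackendException;

public final class SolrErrorTranslation {

    // Counts matching documents and maps low-level Solr failures to a backend
    // exception that JanusGraph's transaction layer understands.
    static long countDocuments(SolrClient client, String collection, SolrQuery solrQuery) throws BackendException {
        try {
            return client.query(collection, solrQuery).getResults().getNumFound();
        } catch (final IOException | SolrServerException e) {
            // Both transport and server-side failures are treated as permanent,
            // mirroring the totals() method above.
            throw new PermanentBackendException(e);
        }
    }
}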

Example 27 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

The mutate method of the class SolrIndex.

@Override
public void mutate(Map<String, Map<String, IndexMutation>> mutations, KeyInformation.IndexRetriever information, BaseTransaction tx) throws BackendException {
    logger.debug("Mutating SOLR");
    try {
        for (final Map.Entry<String, Map<String, IndexMutation>> stores : mutations.entrySet()) {
            final String collectionName = stores.getKey();
            final String keyIdField = getKeyFieldId(collectionName);
            final List<String> deleteIds = new ArrayList<>();
            final Collection<SolrInputDocument> changes = new ArrayList<>();
            for (final Map.Entry<String, IndexMutation> entry : stores.getValue().entrySet()) {
                final String docId = entry.getKey();
                final IndexMutation mutation = entry.getValue();
                Preconditions.checkArgument(!(mutation.isNew() && mutation.isDeleted()));
                Preconditions.checkArgument(!mutation.isNew() || !mutation.hasDeletions());
                Preconditions.checkArgument(!mutation.isDeleted() || !mutation.hasAdditions());
                // Handle any deletions
                if (mutation.hasDeletions()) {
                    if (mutation.isDeleted()) {
                        logger.trace("Deleting entire document {}", docId);
                        deleteIds.add(docId);
                    } else {
                        final List<IndexEntry> fieldDeletions = new ArrayList<>(mutation.getDeletions());
                        if (mutation.hasAdditions()) {
                            for (final IndexEntry indexEntry : mutation.getAdditions()) {
                                fieldDeletions.remove(indexEntry);
                            }
                        }
                        handleRemovalsFromIndex(collectionName, keyIdField, docId, fieldDeletions, information);
                    }
                }
                if (mutation.hasAdditions()) {
                    final int ttl = mutation.determineTTL();
                    final SolrInputDocument doc = new SolrInputDocument();
                    doc.setField(keyIdField, docId);
                    final boolean isNewDoc = mutation.isNew();
                    if (isNewDoc)
                        logger.trace("Adding new document {}", docId);
                    final Map<String, Object> adds = collectFieldValues(mutation.getAdditions(), collectionName, information);
                    // If cardinality is not single then we should use the "add" operation to update
                    // the index so we don't overwrite existing values.
                    adds.keySet().forEach(v -> {
                        final KeyInformation keyInformation = information.get(collectionName, v);
                        final String solrOp = keyInformation.getCardinality() == Cardinality.SINGLE ? "set" : "add";
                        doc.setField(v, isNewDoc ? adds.get(v) : new HashMap<String, Object>(1) {

                            {
                                put(solrOp, adds.get(v));
                            }
                        });
                    });
                    if (ttl > 0) {
                        Preconditions.checkArgument(isNewDoc, "Solr only supports TTL on new documents [%s]", docId);
                        doc.setField(ttlField, String.format("+%dSECONDS", ttl));
                    }
                    changes.add(doc);
                }
            }
            commitDeletes(collectionName, deleteIds);
            commitChanges(collectionName, changes);
        }
    } catch (final IllegalArgumentException e) {
        throw new PermanentBackendException("Unable to complete query on Solr.", e);
    } catch (final Exception e) {
        throw storageException(e);
    }
}
Also used: HashMap (java.util.HashMap), PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), ArrayList (java.util.ArrayList), IndexEntry (org.janusgraph.diskstorage.indexing.IndexEntry), SolrServerException (org.apache.solr.client.solrj.SolrServerException), UncheckedIOException (java.io.UncheckedIOException), TemporaryBackendException (org.janusgraph.diskstorage.TemporaryBackendException), BackendException (org.janusgraph.diskstorage.BackendException), KeeperException (org.apache.zookeeper.KeeperException), IOException (java.io.IOException), KeyInformation (org.janusgraph.diskstorage.indexing.KeyInformation), SolrInputDocument (org.apache.solr.common.SolrInputDocument), IndexMutation (org.janusgraph.diskstorage.indexing.IndexMutation), Map (java.util.Map)
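
The subtle step in mutate is how additions are written: for an existing document, each field value is wrapped in a single-entry map whose key is the Solr atomic-update operation, "set" for SINGLE-cardinality keys and "add" otherwise, so multi-valued fields are appended to rather than overwritten. A hedged sketch of just that step follows; the class name AtomicUpdateSketch and the updateDoc helper are illustrative names, not JanusGraph code.

import java.util.Collections;

import org.apache.solr.common.SolrInputDocument;
import org.janusgraph.core.Cardinality;

public final class AtomicUpdateSketch {

    // Builds a Solr input document for one field, using an atomic-update
    // modifier map when the target document already exists.
    static SolrInputDocument updateDoc(String keyIdField, String docId, String field,
                                       Object value, Cardinality cardinality, boolean isNewDoc) {
        final SolrInputDocument doc = new SolrInputDocument();
        doc.setField(keyIdField, docId);
        if (isNewDoc) {
            // New documents can be written with plain field values.
            doc.setField(field, value);
        } else {
            // Existing documents receive {"set": value} or {"add": value} instead of a raw value.
            final String solrOp = cardinality == Cardinality.SINGLE ? "set" : "add";
            doc.setField(field, Collections.singletonMap(solrOp, value));
        }
        return doc;
    }
}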

Example 28 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

The exists method of the class SolrIndex.

@Override
public boolean exists() throws BackendException {
    if (mode != Mode.CLOUD)
        throw new UnsupportedOperationException("Operation only supported for SolrCloud");
    final CloudSolrClient server = (CloudSolrClient) solrClient;
    try {
        final ZkStateReader zkStateReader = server.getZkStateReader();
        zkStateReader.forciblyRefreshAllClusterStateSlow();
        final ClusterState clusterState = zkStateReader.getClusterState();
        final Map<String, DocCollection> collections = clusterState.getCollectionsMap();
        return collections != null && !collections.isEmpty();
    } catch (KeeperException | InterruptedException e) {
        throw new PermanentBackendException("Unable to check if index exists", e);
    }
}
Also used: ZkStateReader (org.apache.solr.common.cloud.ZkStateReader), ClusterState (org.apache.solr.common.cloud.ClusterState), PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), DocCollection (org.apache.solr.common.cloud.DocCollection), KeeperException (org.apache.zookeeper.KeeperException), CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient)
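
exists() is only meaningful when the index runs against SolrCloud; for any other mode it throws UnsupportedOperationException before touching ZooKeeper. Below is a caller-side sketch of how that contract might be handled, assuming the SolrIndex instance is held behind the org.janusgraph.diskstorage.indexing.IndexProvider interface it implements; the ExistsCheckSketch class and indexExistsOrFalse helper are illustrative.

import org.janusgraph.diskstorage.BackendException;
import org.janusgraph.diskstorage.indexing.IndexProvider;

public final class ExistsCheckSketch {

    // Returns true only when the cluster state could be read and reports at
    // least one collection; any failure or unsupported mode degrades to false.
    static boolean indexExistsOrFalse(IndexProvider index) {
        try {
            return index.exists();
        } catch (final UnsupportedOperationException e) {
            // Not running against SolrCloud, so the check is unavailable.
            return false;
        } catch (final BackendException e) {
            // The ZooKeeper/cluster-state lookup failed (wrapped as PermanentBackendException above).
            return false;
        }
    }
}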

Example 29 with PermanentBackendException

Use of org.janusgraph.diskstorage.PermanentBackendException in project janusgraph by JanusGraph.

The register method of the class SolrIndex.

/**
 * Unlike the Elasticsearch index, which is schema-free, Solr requires a schema to
 * support searching. This means that you will need to modify the Solr schema with the
 * appropriate field definitions for the index to work properly. If you have a running
 * instance of Solr and you modify its schema with new fields, don't forget to re-index!
 * @param store Index store
 * @param key New key to register
 * @param information data type to register for the key
 * @param tx enclosing transaction
 * @throws org.janusgraph.diskstorage.BackendException in case an exception is thrown when
 * creating a collection.
 */
@SuppressWarnings("unchecked")
@Override
public void register(String store, String key, KeyInformation information, BaseTransaction tx) throws BackendException {
    if (mode == Mode.CLOUD) {
        final CloudSolrClient client = (CloudSolrClient) solrClient;
        try {
            createCollectionIfNotExists(client, configuration, store);
        } catch (final IOException | SolrServerException | InterruptedException | KeeperException e) {
            throw new PermanentBackendException(e);
        }
    }
    // Since all data types must be defined in schema.xml, pre-registering a type does not work,
    // but we do validate any configured analyzer classes below.
    String analyzer = ParameterType.STRING_ANALYZER.findParameter(information.getParameters(), null);
    if (analyzer != null) {
        // If the key has a string analyzer configured, try to instantiate it by reflection to verify it exists
        try {
            ((Constructor<Tokenizer>) ClassLoader.getSystemClassLoader().loadClass(analyzer).getConstructor()).newInstance();
        } catch (final ReflectiveOperationException e) {
            throw new PermanentBackendException(e.getMessage(), e);
        }
    }
    analyzer = ParameterType.TEXT_ANALYZER.findParameter(information.getParameters(), null);
    if (analyzer != null) {
        // If the key has a text analyzer configured, try to instantiate it by reflection to verify it exists
        try {
            ((Constructor<Tokenizer>) ClassLoader.getSystemClassLoader().loadClass(analyzer).getConstructor()).newInstance();
        } catch (final ReflectiveOperationException e) {
            throw new PermanentBackendException(e.getMessage(), e);
        }
    }
}
Also used: PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException), Constructor (java.lang.reflect.Constructor), SolrServerException (org.apache.solr.client.solrj.SolrServerException), UncheckedIOException (java.io.UncheckedIOException), IOException (java.io.IOException), KeeperException (org.apache.zookeeper.KeeperException), CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient)
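
Because Solr field types live in schema.xml, register() cannot create them; all it can do is verify that any STRING_ANALYZER or TEXT_ANALYZER class named in the key's parameters is actually loadable. A compact sketch of that reflection check follows; the AnalyzerCheckSketch class and verifyAnalyzerClass helper are illustrative names.

import org.janusgraph.diskstorage.PermanentBackendException;

public final class AnalyzerCheckSketch {

    // Loads the configured analyzer class and instantiates it through its
    // no-arg constructor purely to fail fast if it is missing from the classpath.
    static void verifyAnalyzerClass(String className) throws PermanentBackendException {
        if (className == null) {
            return; // no analyzer configured for this key
        }
        try {
            ClassLoader.getSystemClassLoader().loadClass(className).getConstructor().newInstance();
        } catch (final ReflectiveOperationException e) {
            throw new PermanentBackendException(e.getMessage(), e);
        }
    }
}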

Aggregations

PermanentBackendException (org.janusgraph.diskstorage.PermanentBackendException): 29
IOException (java.io.IOException): 12
BackendException (org.janusgraph.diskstorage.BackendException): 9
TemporaryBackendException (org.janusgraph.diskstorage.TemporaryBackendException): 9
UncheckedIOException (java.io.UncheckedIOException): 8
ConnectionException (com.netflix.astyanax.connectionpool.exceptions.ConnectionException): 7
Map (java.util.Map): 5
StoreTransaction (org.janusgraph.diskstorage.keycolumnvalue.StoreTransaction): 5
SolrServerException (org.apache.solr.client.solrj.SolrServerException): 4
KeeperException (org.apache.zookeeper.KeeperException): 4
Rows (com.netflix.astyanax.model.Rows): 3
CloudSolrClient (org.apache.solr.client.solrj.impl.CloudSolrClient): 3
BiMap (com.google.common.collect.BiMap): 2
Keyspace (com.netflix.astyanax.Keyspace): 2
ColumnFamilyDefinition (com.netflix.astyanax.ddl.ColumnFamilyDefinition): 2
KeyspaceDefinition (com.netflix.astyanax.ddl.KeyspaceDefinition): 2
RowSliceQuery (com.netflix.astyanax.query.RowSliceQuery): 2
ByteBuffer (java.nio.ByteBuffer): 2
Duration (java.time.Duration): 2
ConfigurationException (org.apache.cassandra.exceptions.ConfigurationException): 2