
Example 6 with UpdateOp

Use of org.apache.jackrabbit.oak.plugins.document.UpdateOp in project jackrabbit-oak by apache.

The class MongoDocumentStore, method createOrUpdate.

/**
     * Try to apply all the {@link UpdateOp}s with as few MongoDB requests as
     * possible. The return value is the list of the old documents (before
     * applying changes). The mechanism is as follows:
     *
     * <ol>
     * <li>For each UpdateOp try to read the assigned document from the cache.
     *     Add them to {@code oldDocs}.</li>
     * <li>Prepare a list of all UpdateOps that don't have their documents yet
     *     and read them in one find() call. Add results to {@code oldDocs}.</li>
     * <li>Prepare a bulk update. For each remaining UpdateOp add the following
     *     operations:
     *   <ul>
     *   <li>Find the document with the same id and the same mod_count as in
     *       {@code oldDocs}.</li>
     *   <li>Apply the changes from the UpdateOp.</li>
     *   </ul>
     * </li>
     * <li>Execute the bulk update.</li>
     * </ol>
     *
     * If some other process modifies the target documents between points 2 and
     * 3, the mod_count will be increased as well and the bulk update will fail
     * for the concurrently modified docs. The method then removes the
     * failed documents from {@code oldDocs} and restarts the process from
     * point 2. It stops after the 3rd iteration.
     */
@SuppressWarnings("unchecked")
@CheckForNull
@Override
public <T extends Document> List<T> createOrUpdate(Collection<T> collection, List<UpdateOp> updateOps) {
    log("createOrUpdate", updateOps);
    Map<String, UpdateOp> operationsToCover = new LinkedHashMap<String, UpdateOp>();
    List<UpdateOp> duplicates = new ArrayList<UpdateOp>();
    Map<UpdateOp, T> results = new LinkedHashMap<UpdateOp, T>();
    final Stopwatch watch = startWatch();
    try {
        for (UpdateOp updateOp : updateOps) {
            UpdateUtils.assertUnconditional(updateOp);
            UpdateOp clone = updateOp.copy();
            if (operationsToCover.containsKey(updateOp.getId())) {
                duplicates.add(clone);
            } else {
                operationsToCover.put(updateOp.getId(), clone);
            }
            results.put(clone, null);
        }
        Map<String, T> oldDocs = new HashMap<String, T>();
        if (collection == Collection.NODES) {
            oldDocs.putAll((Map<String, T>) getCachedNodes(operationsToCover.keySet()));
        }
        for (int i = 0; i <= bulkRetries; i++) {
            if (operationsToCover.size() <= 2) {
                // fewer than 3 operations: a bulk update wouldn't bring any performance gain
                break;
            }
            for (List<UpdateOp> partition : Lists.partition(Lists.newArrayList(operationsToCover.values()), bulkSize)) {
                Map<UpdateOp, T> successfulUpdates = bulkUpdate(collection, partition, oldDocs);
                results.putAll(successfulUpdates);
                operationsToCover.values().removeAll(successfulUpdates.keySet());
            }
        }
        // if there are some changes left, we'll apply them one after another
        Iterator<UpdateOp> it = Iterators.concat(operationsToCover.values().iterator(), duplicates.iterator());
        while (it.hasNext()) {
            UpdateOp op = it.next();
            it.remove();
            T oldDoc = createOrUpdate(collection, op);
            if (oldDoc != null) {
                results.put(op, oldDoc);
            }
        }
    } catch (MongoException e) {
        throw handleException(e, collection, Iterables.transform(updateOps, new Function<UpdateOp, String>() {

            @Override
            public String apply(UpdateOp input) {
                return input.getId();
            }
        }));
    } finally {
        stats.doneCreateOrUpdate(watch.elapsed(TimeUnit.NANOSECONDS), collection, Lists.transform(updateOps, new Function<UpdateOp, String>() {

            @Override
            public String apply(UpdateOp input) {
                return input.getId();
            }
        }));
    }
    List<T> resultList = new ArrayList<T>(results.values());
    log("createOrUpdate returns", resultList);
    return resultList;
}
Also used : MongoException(com.mongodb.MongoException) UpdateOp(org.apache.jackrabbit.oak.plugins.document.UpdateOp) HashMap(java.util.HashMap) LinkedHashMap(java.util.LinkedHashMap) ArrayList(java.util.ArrayList) Stopwatch(com.google.common.base.Stopwatch) LinkedHashMap(java.util.LinkedHashMap) Function(com.google.common.base.Function) CheckForNull(javax.annotation.CheckForNull)
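
A minimal caller-side sketch of what feeds the bulk path described above: several UpdateOps for different ids passed to createOrUpdate in one call. This is not Oak code; the class name, the sample paths and the use of MemoryDocumentStore are assumptions chosen only to keep the snippet self-contained, while the UpdateOp and DocumentStore calls are the same public API used in the examples on this page.

import java.util.ArrayList;
import java.util.List;

import org.apache.jackrabbit.oak.plugins.document.Collection;
import org.apache.jackrabbit.oak.plugins.document.DocumentStore;
import org.apache.jackrabbit.oak.plugins.document.NodeDocument;
import org.apache.jackrabbit.oak.plugins.document.UpdateOp;
import org.apache.jackrabbit.oak.plugins.document.memory.MemoryDocumentStore;
import org.apache.jackrabbit.oak.plugins.document.util.Utils;

public class BulkCreateOrUpdateSketch {

    public static void main(String[] args) {
        // MemoryDocumentStore stands in for MongoDocumentStore here (assumption)
        DocumentStore store = new MemoryDocumentStore();
        List<UpdateOp> ops = new ArrayList<UpdateOp>();
        for (int i = 0; i < 5; i++) {
            String id = Utils.getIdFromPath("/test/node-" + i);
            // isNew = true: the op may create the document if it does not exist
            UpdateOp op = new UpdateOp(id, true);
            op.set("p", 1L);
            ops.add(op);
        }
        // one call covers all ops; the returned list holds the previous state
        // of each document (entries may be null for documents that did not exist)
        List<NodeDocument> oldDocs = store.createOrUpdate(Collection.NODES, ops);
        System.out.println("old documents: " + oldDocs);
    }
}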

Example 7 with UpdateOp

Use of org.apache.jackrabbit.oak.plugins.document.UpdateOp in project jackrabbit-oak by apache.

The class MongoDocumentStore, method sendBulkUpdate.

private <T extends Document> BulkUpdateResult sendBulkUpdate(Collection<T> collection, java.util.Collection<UpdateOp> updateOps, Map<String, T> oldDocs) {
    DBCollection dbCollection = getDBCollection(collection);
    BulkWriteOperation bulk = dbCollection.initializeUnorderedBulkOperation();
    String[] bulkIds = new String[updateOps.size()];
    int i = 0;
    for (UpdateOp updateOp : updateOps) {
        String id = updateOp.getId();
        QueryBuilder query = createQueryForUpdate(id, updateOp.getConditions());
        T oldDoc = oldDocs.get(id);
        DBObject update;
        if (oldDoc == null || oldDoc == NodeDocument.NULL) {
            query.and(Document.MOD_COUNT).exists(false);
            update = createUpdate(updateOp, true);
        } else {
            query.and(Document.MOD_COUNT).is(oldDoc.getModCount());
            update = createUpdate(updateOp, false);
        }
        bulk.find(query.get()).upsert().updateOne(update);
        bulkIds[i++] = id;
    }
    BulkWriteResult bulkResult;
    Set<String> failedUpdates = new HashSet<String>();
    Set<String> upserts = new HashSet<String>();
    try {
        bulkResult = bulk.execute();
    } catch (BulkWriteException e) {
        bulkResult = e.getWriteResult();
        for (BulkWriteError err : e.getWriteErrors()) {
            failedUpdates.add(bulkIds[err.getIndex()]);
        }
    }
    for (BulkWriteUpsert upsert : bulkResult.getUpserts()) {
        upserts.add(bulkIds[upsert.getIndex()]);
    }
    return new BulkUpdateResult(failedUpdates, upserts);
}
Also used : BulkWriteOperation(com.mongodb.BulkWriteOperation) BulkWriteUpsert(com.mongodb.BulkWriteUpsert) UpdateOp(org.apache.jackrabbit.oak.plugins.document.UpdateOp) QueryBuilder(com.mongodb.QueryBuilder) BulkWriteError(com.mongodb.BulkWriteError) DBObject(com.mongodb.DBObject) BasicDBObject(com.mongodb.BasicDBObject) BulkWriteException(com.mongodb.BulkWriteException) BulkWriteResult(com.mongodb.BulkWriteResult) DBCollection(com.mongodb.DBCollection) HashSet(java.util.HashSet)
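
The key idea in sendBulkUpdate is the optimistic check on Document.MOD_COUNT: each bulk entry only matches its document if the mod count still has the value read beforehand, so a concurrent writer makes the entry fail instead of being silently overwritten. Below is a minimal standalone sketch of that pattern against the legacy mongo-java-driver API used above, without Oak internals; the database name, collection name, field names and the expectedModCount value are illustrative assumptions.

import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.BulkWriteResult;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.QueryBuilder;

public class OptimisticBulkSketch {

    public static void main(String[] args) {
        MongoClient client = new MongoClient();
        try {
            DBCollection nodes = client.getDB("oak").getCollection("nodes");
            BulkWriteOperation bulk = nodes.initializeUnorderedBulkOperation();

            String id = "1:/test";
            // mod count read before the bulk was prepared (assumed value)
            long expectedModCount = 7L;

            // the update only matches while _modCount still has the value we read
            DBObject query = QueryBuilder.start("_id").is(id)
                    .and("_modCount").is(expectedModCount).get();
            DBObject update = new BasicDBObject("$set", new BasicDBObject("p", 42L))
                    .append("$inc", new BasicDBObject("_modCount", 1L));
            bulk.find(query).updateOne(update);

            BulkWriteResult result = bulk.execute();
            // 0 matches means a concurrent writer bumped _modCount first and
            // this entry has to be retried with a freshly read document
            System.out.println("matched: " + result.getMatchedCount());
        } finally {
            client.close();
        }
    }
}

sendBulkUpdate goes one step further: it also upserts when no old document is known, and it detects conflicting entries through the write errors of the BulkWriteException, collecting their ids into failedUpdates for the caller to retry.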

Example 8 with UpdateOp

Use of org.apache.jackrabbit.oak.plugins.document.UpdateOp in project jackrabbit-oak by apache.

The class RDBDocumentStore, method internalUpdate.

@CheckForNull
private <T extends Document> void internalUpdate(Collection<T> collection, List<String> ids, UpdateOp update) {
    if (isAppendableUpdate(update, true) && !requiresPreviousState(update)) {
        Operation modOperation = update.getChanges().get(MODIFIEDKEY);
        long modified = getModifiedFromOperation(modOperation);
        boolean modifiedIsConditional = modOperation == null || modOperation.type != UpdateOp.Operation.Type.SET;
        String appendData = ser.asString(update);
        for (List<String> chunkedIds : Lists.partition(ids, CHUNKSIZE)) {
            if (collection == Collection.NODES) {
                for (String key : chunkedIds) {
                    nodesCache.invalidate(key);
                }
            }
            Connection connection = null;
            RDBTableMetaData tmd = getTable(collection);
            boolean success = false;
            try {
                Stopwatch watch = startWatch();
                connection = this.ch.getRWConnection();
                success = db.batchedAppendingUpdate(connection, tmd, chunkedIds, modified, modifiedIsConditional, appendData);
                connection.commit();
                // internally 'db' may make multiple remote calls and the
                // number of those calls would not be captured
                stats.doneUpdate(watch.elapsed(TimeUnit.NANOSECONDS), collection, chunkedIds.size());
            } catch (SQLException ex) {
                success = false;
                this.ch.rollbackConnection(connection);
            } finally {
                this.ch.closeConnection(connection);
            }
            if (success) {
                if (collection == Collection.NODES) {
                    for (String id : chunkedIds) {
                        nodesCache.invalidate(id);
                    }
                }
            } else {
                for (String id : chunkedIds) {
                    UpdateOp up = update.copy();
                    up = up.shallowCopy(id);
                    internalCreateOrUpdate(collection, up, false, true);
                }
            }
        }
    } else {
        Stopwatch watch = startWatch();
        for (String id : ids) {
            UpdateOp up = update.copy();
            up = up.shallowCopy(id);
            internalCreateOrUpdate(collection, up, false, true);
        }
        stats.doneUpdate(watch.elapsed(TimeUnit.NANOSECONDS), collection, ids.size());
    }
}
Also used : SQLException(java.sql.SQLException) UpdateOp(org.apache.jackrabbit.oak.plugins.document.UpdateOp) Connection(java.sql.Connection) Stopwatch(com.google.common.base.Stopwatch) Operation(org.apache.jackrabbit.oak.plugins.document.UpdateOp.Operation) CheckForNull(javax.annotation.CheckForNull)
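
Besides the non-appendable fallback, the interesting part here is the chunking: the id list is split with Guava's Lists.partition so that each batched appending update stays bounded in size, and every id of a failed chunk is retried individually via a shallowCopy of the UpdateOp. The following sketch only illustrates that partition-and-fallback shape; CHUNK_SIZE, the sample ids and the placeholder methods are assumptions, not RDBDocumentStore code.

import java.util.Arrays;
import java.util.List;

import com.google.common.collect.Lists;

public class ChunkedUpdateSketch {

    private static final int CHUNK_SIZE = 64; // assumed value; Oak uses its own CHUNKSIZE

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("1:/a", "1:/b", "1:/c", "1:/d");
        for (List<String> chunk : Lists.partition(ids, CHUNK_SIZE)) {
            boolean success = sendBatch(chunk);
            if (!success) {
                // on failure every id of the chunk is handled on its own,
                // mirroring the per-id createOrUpdate fallback above
                for (String id : chunk) {
                    sendSingle(id);
                }
            }
        }
    }

    // placeholder for the batched appending update (assumption)
    private static boolean sendBatch(List<String> chunk) {
        System.out.println("batched update for " + chunk);
        return true;
    }

    // placeholder for the per-id fallback (assumption)
    private static void sendSingle(String id) {
        System.out.println("single update for " + id);
    }
}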

Example 9 with UpdateOp

Use of org.apache.jackrabbit.oak.plugins.document.UpdateOp in project jackrabbit-oak by apache.

The class CacheConsistencyIT, method runTest.

private void runTest() throws Throwable {
    addNodes(null, "/test", "/test/foo");
    final List<Throwable> exceptions = Collections.synchronizedList(new ArrayList<Throwable>());
    Thread t1 = new Thread(new Runnable() {

        @Override
        public void run() {
            String id = Utils.getIdFromPath("/test/foo");
            List<String> ids = Lists.newArrayList();
            ids.add(id);
            long v = 0;
            while (exceptions.isEmpty()) {
                try {
                    UpdateOp op = new UpdateOp(ids.get(0), false);
                    op.set("p", ++v);
                    store.update(NODES, ids, op);
                    NodeDocument doc = store.find(NODES, id);
                    Object p = doc.get("p");
                    assertEquals(v, ((Long) p).longValue());
                } catch (Throwable e) {
                    exceptions.add(e);
                }
            }
        }
    }, "update");
    t1.start();
    Thread t2 = new Thread(new Runnable() {

        @Override
        public void run() {
            String id = Utils.getIdFromPath("/test/foo");
            long v = 0;
            while (exceptions.isEmpty()) {
                try {
                    UpdateOp op = new UpdateOp(id, false);
                    op.set("q", ++v);
                    NodeDocument old = store.findAndUpdate(NODES, op);
                    Object q = old.get("q");
                    if (q != null) {
                        assertEquals(v - 1, ((Long) q).longValue());
                    }
                } catch (Throwable e) {
                    exceptions.add(e);
                }
            }
        }
    }, "findAndUpdate");
    t2.start();
    Thread t3 = new Thread(new Runnable() {

        @Override
        public void run() {
            String id = Utils.getIdFromPath("/test/foo");
            long p = 0;
            long q = 0;
            while (exceptions.isEmpty()) {
                try {
                    NodeDocument doc = store.find(NODES, id);
                    if (doc != null) {
                        Object value = doc.get("p");
                        if (value != null) {
                            assertTrue((Long) value >= p);
                            p = (Long) value;
                        }
                        value = doc.get("q");
                        if (value != null) {
                            assertTrue("previous: " + q + ", now: " + value, (Long) value >= q);
                            q = (Long) value;
                        }
                    }
                } catch (Throwable e) {
                    exceptions.add(e);
                }
            }
        }
    }, "reader");
    t3.start();
    NodeDocumentCache cache = store.getNodeDocumentCache();
    // run for at most one second
    long end = System.currentTimeMillis() + 1000;
    String id = Utils.getIdFromPath("/test/foo");
    while (t1.isAlive() && t2.isAlive() && t3.isAlive() && System.currentTimeMillis() < end) {
        if (cache.getIfPresent(id) != null) {
            Thread.sleep(0, (int) (Math.random() * 100));
            // simulate eviction
            cache.invalidate(id);
        }
    }
    for (Throwable e : exceptions) {
        throw e;
    }
    exceptions.add(new Exception("end"));
    t1.join();
    t2.join();
    t3.join();
}
Also used : NodeDocumentCache(org.apache.jackrabbit.oak.plugins.document.cache.NodeDocumentCache) UpdateOp(org.apache.jackrabbit.oak.plugins.document.UpdateOp) NodeDocument(org.apache.jackrabbit.oak.plugins.document.NodeDocument) ArrayList(java.util.ArrayList) List(java.util.List)
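
The test races three threads, one per DocumentStore operation (update, findAndUpdate, find), while the main thread keeps invalidating the cached document to simulate eviction. The single-threaded sketch below is not part of the test; it just walks through those three operations once on a MemoryDocumentStore (an assumption to keep it self-contained), using the same property names as the test.

import java.util.Collections;

import org.apache.jackrabbit.oak.plugins.document.Collection;
import org.apache.jackrabbit.oak.plugins.document.DocumentStore;
import org.apache.jackrabbit.oak.plugins.document.NodeDocument;
import org.apache.jackrabbit.oak.plugins.document.UpdateOp;
import org.apache.jackrabbit.oak.plugins.document.memory.MemoryDocumentStore;
import org.apache.jackrabbit.oak.plugins.document.util.Utils;

public class CacheConsistencySketch {

    public static void main(String[] args) {
        DocumentStore store = new MemoryDocumentStore();
        String id = Utils.getIdFromPath("/test/foo");

        // create the document (stands in for addNodes() in the test)
        UpdateOp create = new UpdateOp(id, true);
        create.set("p", 0L);
        store.create(Collection.NODES, Collections.singletonList(create));

        // thread "update": update() does not return the old document
        UpdateOp setP = new UpdateOp(id, false);
        setP.set("p", 1L);
        store.update(Collection.NODES, Collections.singletonList(id), setP);

        // thread "findAndUpdate": the previous state comes back
        UpdateOp setQ = new UpdateOp(id, false);
        setQ.set("q", 1L);
        NodeDocument before = store.findAndUpdate(Collection.NODES, setQ);
        System.out.println("p before q was set: " + before.get("p"));

        // thread "reader": a plain find() sees both properties
        NodeDocument doc = store.find(Collection.NODES, id);
        System.out.println("p=" + doc.get("p") + ", q=" + doc.get("q"));
    }
}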

Example 10 with UpdateOp

Use of org.apache.jackrabbit.oak.plugins.document.UpdateOp in project jackrabbit-oak by apache.

The class TimingDocumentStoreWrapperTest, method createOrUpdate.

@Test
public void createOrUpdate() {
    DocumentStore store = new TimingDocumentStoreWrapper(new MemoryDocumentStore());
    UpdateOp op = new UpdateOp("foo", true);
    store.createOrUpdate(Collection.NODES, Collections.singletonList(op));
}
Also used : DocumentStore(org.apache.jackrabbit.oak.plugins.document.DocumentStore) MemoryDocumentStore(org.apache.jackrabbit.oak.plugins.document.memory.MemoryDocumentStore) MemoryDocumentStore(org.apache.jackrabbit.oak.plugins.document.memory.MemoryDocumentStore) UpdateOp(org.apache.jackrabbit.oak.plugins.document.UpdateOp) Test(org.junit.Test)

Aggregations

UpdateOp (org.apache.jackrabbit.oak.plugins.document.UpdateOp): 21
ArrayList (java.util.ArrayList): 12
NodeDocument (org.apache.jackrabbit.oak.plugins.document.NodeDocument): 8
Test (org.junit.Test): 8
Stopwatch (com.google.common.base.Stopwatch): 5
Connection (java.sql.Connection): 4
HashMap (java.util.HashMap): 4
LinkedHashMap (java.util.LinkedHashMap): 4
AbstractDocumentStoreTest (org.apache.jackrabbit.oak.plugins.document.AbstractDocumentStoreTest): 4
Revision (org.apache.jackrabbit.oak.plugins.document.Revision): 4
Lists.newArrayList (com.google.common.collect.Lists.newArrayList): 3
HashSet (java.util.HashSet): 3
CheckForNull (javax.annotation.CheckForNull): 3
DocumentStoreException (org.apache.jackrabbit.oak.plugins.document.DocumentStoreException): 3
QueryCondition (org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.QueryCondition): 3
Function (com.google.common.base.Function): 2
BasicDBObject (com.mongodb.BasicDBObject): 2
BulkWriteOperation (com.mongodb.BulkWriteOperation): 2
DBCollection (com.mongodb.DBCollection): 2
DBObject (com.mongodb.DBObject): 2