Search in sources:

Example 1 with Graph

Use of org.apache.tinkerpop.gremlin.structure.Graph in project titan by thinkaurelius.

The class AbstractTitanGraphProvider, method clear:

//    @Override
//    public <ID> ID reconstituteGraphSONIdentifier(final Class<? extends Element> clazz, final Object id) {
//        if (Edge.class.isAssignableFrom(clazz)) {
//            // TitanGraphSONModule toStrings the edgeid - expect a String value for the id
//            if (!(id instanceof String)) throw new RuntimeException("Expected a String value for the RelationIdentifier");
//            return (ID) RelationIdentifier.parse((String) id);
//        } else {
//            return (ID) id;
//        }
//    }
@Override
public void clear(Graph g, final Configuration configuration) throws Exception {
    if (null != g) {
        while (g instanceof WrappedGraph) g = ((WrappedGraph<? extends Graph>) g).getBaseGraph();
        TitanGraph graph = (TitanGraph) g;
        if (graph.isOpen()) {
            if (g.tx().isOpen())
                g.tx().rollback();
            g.close();
        }
    }
    WriteConfiguration config = new CommonsConfiguration(configuration);
    BasicConfiguration readConfig = new BasicConfiguration(GraphDatabaseConfiguration.ROOT_NS, config, BasicConfiguration.Restriction.NONE);
    if (readConfig.has(GraphDatabaseConfiguration.STORAGE_BACKEND)) {
        TitanGraphBaseTest.clearGraph(config);
    }
}
Also used: StandardTitanGraph(com.thinkaurelius.titan.graphdb.database.StandardTitanGraph) TitanGraph(com.thinkaurelius.titan.core.TitanGraph) Graph(org.apache.tinkerpop.gremlin.structure.Graph) WrappedGraph(org.apache.tinkerpop.gremlin.structure.util.wrapped.WrappedGraph) CommonsConfiguration(com.thinkaurelius.titan.diskstorage.configuration.backend.CommonsConfiguration) WriteConfiguration(com.thinkaurelius.titan.diskstorage.configuration.WriteConfiguration) BasicConfiguration(com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration)
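
The detail worth copying from this provider is the unwrapping loop: the TinkerPop test suite may hand clear() a Graph that is wrapped one or more times, so the provider peels off WrappedGraph layers before casting to TitanGraph. A minimal standalone sketch of that pattern (the helper class name GraphUnwrapper is hypothetical; the loop itself is the one used above):

import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.wrapped.WrappedGraph;

public final class GraphUnwrapper {

    // Peel off any WrappedGraph layers to reach the base implementation,
    // mirroring the while-loop in AbstractTitanGraphProvider#clear above.
    public static Graph unwrap(Graph g) {
        while (g instanceof WrappedGraph) {
            g = ((WrappedGraph<? extends Graph>) g).getBaseGraph();
        }
        return g;
    }
}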

Example 2 with Graph

Use of org.apache.tinkerpop.gremlin.structure.Graph in project titan by thinkaurelius.

The class CassandraInputFormatIT, method testReadGraphOfTheGods:

@Test
public void testReadGraphOfTheGods() {
    GraphOfTheGodsFactory.load(graph, null, true);
    assertEquals(12L, (long) graph.traversal().V().count().next());
    Graph g = GraphFactory.open("target/test-classes/cassandra-read.properties");
    GraphTraversalSource t = g.traversal(GraphTraversalSource.computer(SparkGraphComputer.class));
    assertEquals(12L, (long) t.V().count().next());
}
Also used: GraphTraversalSource(org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource) Graph(org.apache.tinkerpop.gremlin.structure.Graph) SparkGraphComputer(org.apache.tinkerpop.gremlin.hadoop.process.computer.spark.SparkGraphComputer) Test(org.junit.Test) TitanGraphBaseTest(com.thinkaurelius.titan.graphdb.TitanGraphBaseTest)
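
One thing to note: the Graph opened through GraphFactory is never closed, which is fine for a short-lived test but would leak resources in application code. A hedged variant of the same test with an explicit close (the method name is hypothetical; every API call is one already used above):

@Test
public void testReadGraphOfTheGodsAndClose() throws Exception {
    GraphOfTheGodsFactory.load(graph, null, true);
    assertEquals(12L, (long) graph.traversal().V().count().next());
    // Open the Hadoop/Cassandra-backed read graph, run the OLAP count, then release it.
    Graph readGraph = GraphFactory.open("target/test-classes/cassandra-read.properties");
    try {
        GraphTraversalSource t = readGraph.traversal(GraphTraversalSource.computer(SparkGraphComputer.class));
        assertEquals(12L, (long) t.V().count().next());
    } finally {
        readGraph.close();
    }
}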

Example 3 with Graph

Use of org.apache.tinkerpop.gremlin.structure.Graph in project titan by thinkaurelius.

The class FulgoraGraphComputer, method submit:

@Override
public Future<ComputerResult> submit() {
    if (executed)
        throw Exceptions.computerHasAlreadyBeenSubmittedAVertexProgram();
    else
        executed = true;
    // a computer cannot be executed if it has neither a vertex program nor mapreducers
    if (null == vertexProgram && mapReduces.isEmpty())
        throw GraphComputer.Exceptions.computerHasNoVertexProgramNorMapReducers();
    // it is possible to run mapreducers without a vertex program
    if (null != vertexProgram) {
        GraphComputerHelper.validateProgramOnComputer(this, vertexProgram);
        this.mapReduces.addAll(this.vertexProgram.getMapReducers());
    }
    // if the user didn't set the desired persistence/result graph mode, take it from the vertex program; otherwise default to no persistence
    this.persistMode = GraphComputerHelper.getPersistState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.persistMode));
    this.resultGraphMode = GraphComputerHelper.getResultGraphState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.resultGraphMode));
    // check that the combination of persistence and result graph options is legal
    if (!this.features().supportsResultGraphPersistCombination(this.resultGraphMode, this.persistMode))
        throw GraphComputer.Exceptions.resultGraphPersistCombinationNotSupported(this.resultGraphMode, this.persistMode);
    memory = new FulgoraMemory(vertexProgram, mapReduces);
    return CompletableFuture.<ComputerResult>supplyAsync(() -> {
        final long time = System.currentTimeMillis();
        if (null != vertexProgram) {
            // ##### Execute vertex program
            vertexMemory = new FulgoraVertexMemory(expectedNumVertices, graph.getIDManager(), vertexProgram);
            // execute the vertex program
            vertexProgram.setup(memory);
            memory.completeSubRound();
            for (int iteration = 1; ; iteration++) {
                vertexMemory.nextIteration(vertexProgram.getMessageScopes(memory));
                jobId = name + "#" + iteration;
                VertexProgramScanJob.Executor job = VertexProgramScanJob.getVertexProgramScanJob(graph, memory, vertexMemory, vertexProgram);
                StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
                scanBuilder.setJobId(jobId);
                scanBuilder.setNumProcessingThreads(numThreads);
                scanBuilder.setWorkBlockSize(readBatchSize);
                scanBuilder.setJob(job);
                PartitionedVertexProgramExecutor pvpe = new PartitionedVertexProgramExecutor(graph, memory, vertexMemory, vertexProgram);
                try {
                    // Iterates over all vertices and runs the vertex program on the non-partitioned ones; data for partitioned vertices is aggregated and handled below
                    ScanMetrics jobResult = scanBuilder.execute().get();
                    long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
                    if (failures > 0) {
                        throw new TitanException("Failed to process [" + failures + "] vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                    }
                    //Runs the vertex program on all aggregated, partitioned vertices.
                    pvpe.run(numThreads, jobResult);
                    failures = jobResult.getCustom(PartitionedVertexProgramExecutor.PARTITION_VERTEX_POSTFAIL);
                    if (failures > 0) {
                        throw new TitanException("Failed to process [" + failures + "] partitioned vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                    }
                } catch (Exception e) {
                    throw new TitanException(e);
                }
                vertexMemory.completeIteration();
                memory.completeSubRound();
                try {
                    if (this.vertexProgram.terminate(this.memory)) {
                        break;
                    }
                } finally {
                    memory.incrIteration();
                    memory.completeSubRound();
                }
            }
        }
        // ##### Execute mapreduce jobs
        // Collect map jobs
        Map<MapReduce, FulgoraMapEmitter> mapJobs = new HashMap<>(mapReduces.size());
        for (MapReduce mapReduce : mapReduces) {
            if (mapReduce.doStage(MapReduce.Stage.MAP)) {
                FulgoraMapEmitter mapEmitter = new FulgoraMapEmitter<>(mapReduce.doStage(MapReduce.Stage.REDUCE));
                mapJobs.put(mapReduce, mapEmitter);
            }
        }
        // Execute map jobs
        jobId = name + "#map";
        VertexMapJob.Executor job = VertexMapJob.getVertexMapJob(graph, vertexMemory, mapJobs);
        StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
        scanBuilder.setJobId(jobId);
        scanBuilder.setNumProcessingThreads(numThreads);
        scanBuilder.setWorkBlockSize(readBatchSize);
        scanBuilder.setJob(job);
        try {
            ScanMetrics jobResult = scanBuilder.execute().get();
            long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
            if (failures > 0) {
                throw new TitanException("Failed to process [" + failures + "] vertices in map phase. Computer is aborting.");
            }
            failures = jobResult.getCustom(VertexMapJob.MAP_JOB_FAILURE);
            if (failures > 0) {
                throw new TitanException("Failed to process [" + failures + "] individual map jobs. Computer is aborting.");
            }
        } catch (Exception e) {
            throw new TitanException(e);
        }
        // Execute reduce phase and add to memory
        for (Map.Entry<MapReduce, FulgoraMapEmitter> mapJob : mapJobs.entrySet()) {
            FulgoraMapEmitter<?, ?> mapEmitter = mapJob.getValue();
            MapReduce mapReduce = mapJob.getKey();
            // sort results if a map output sort is defined
            mapEmitter.complete(mapReduce);
            if (mapReduce.doStage(MapReduce.Stage.REDUCE)) {
                final FulgoraReduceEmitter<?, ?> reduceEmitter = new FulgoraReduceEmitter<>();
                try (WorkerPool workers = new WorkerPool(numThreads)) {
                    workers.submit(() -> mapReduce.workerStart(MapReduce.Stage.REDUCE));
                    for (final Map.Entry queueEntry : mapEmitter.reduceMap.entrySet()) {
                        workers.submit(() -> mapReduce.reduce(queueEntry.getKey(), ((Iterable) queueEntry.getValue()).iterator(), reduceEmitter));
                    }
                    workers.submit(() -> mapReduce.workerEnd(MapReduce.Stage.REDUCE));
                } catch (Exception e) {
                    throw new TitanException("Exception while executing reduce phase", e);
                }
                //                    mapEmitter.reduceMap.entrySet().parallelStream().forEach(entry -> mapReduce.reduce(entry.getKey(), entry.getValue().iterator(), reduceEmitter));
                // sort results if a reduce output sort is defined
                reduceEmitter.complete(mapReduce);
                mapReduce.addResultToMemory(this.memory, reduceEmitter.reduceQueue.iterator());
            } else {
                mapReduce.addResultToMemory(this.memory, mapEmitter.mapQueue.iterator());
            }
        }
        // #### Write mutated properties back into graph
        Graph resultgraph = graph;
        if (persistMode == Persist.NOTHING && resultGraphMode == ResultGraph.NEW) {
            resultgraph = EmptyGraph.instance();
        } else if (persistMode != Persist.NOTHING && vertexProgram != null && !vertexProgram.getElementComputeKeys().isEmpty()) {
            //First, create property keys in graph if they don't already exist
            TitanManagement mgmt = graph.openManagement();
            try {
                for (String key : vertexProgram.getElementComputeKeys()) {
                    if (!mgmt.containsPropertyKey(key))
                        log.warn("Property key [{}] is not part of the schema and will be created. It is advised to initialize all keys.", key);
                    mgmt.getOrCreatePropertyKey(key);
                }
                mgmt.commit();
            } finally {
                if (mgmt != null && mgmt.isOpen())
                    mgmt.rollback();
            }
            //TODO: Filter based on VertexProgram
            Map<Long, Map<String, Object>> mutatedProperties = Maps.transformValues(vertexMemory.getMutableVertexProperties(), new Function<Map<String, Object>, Map<String, Object>>() {

                @Nullable
                @Override
                public Map<String, Object> apply(@Nullable Map<String, Object> o) {
                    return Maps.filterKeys(o, s -> !NON_PERSISTING_KEYS.contains(s));
                }
            });
            if (resultGraphMode == ResultGraph.ORIGINAL) {
                AtomicInteger failures = new AtomicInteger(0);
                try (WorkerPool workers = new WorkerPool(numThreads)) {
                    List<Map.Entry<Long, Map<String, Object>>> subset = new ArrayList<>(writeBatchSize / vertexProgram.getElementComputeKeys().size());
                    int currentSize = 0;
                    for (Map.Entry<Long, Map<String, Object>> entry : mutatedProperties.entrySet()) {
                        subset.add(entry);
                        currentSize += entry.getValue().size();
                        if (currentSize >= writeBatchSize) {
                            workers.submit(new VertexPropertyWriter(subset, failures));
                            subset = new ArrayList<>(subset.size());
                            currentSize = 0;
                        }
                    }
                    if (!subset.isEmpty())
                        workers.submit(new VertexPropertyWriter(subset, failures));
                } catch (Exception e) {
                    throw new TitanException("Exception while attempting to persist result into graph", e);
                }
                if (failures.get() > 0)
                    throw new TitanException("Could not persist program results to graph. Check log for details.");
            } else if (resultGraphMode == ResultGraph.NEW) {
                resultgraph = graph.newTransaction();
                for (Map.Entry<Long, Map<String, Object>> vprop : mutatedProperties.entrySet()) {
                    Vertex v = resultgraph.vertices(vprop.getKey()).next();
                    for (Map.Entry<String, Object> prop : vprop.getValue().entrySet()) {
                        v.property(VertexProperty.Cardinality.single, prop.getKey(), prop.getValue());
                    }
                }
            }
        }
        // update runtime and return the newly computed graph
        this.memory.setRuntime(System.currentTimeMillis() - time);
        this.memory.complete();
        return new DefaultComputerResult(resultgraph, this.memory);
    });
}
Also used: Vertex(org.apache.tinkerpop.gremlin.structure.Vertex) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) ScanMetrics(com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.ScanMetrics) MapReduce(org.apache.tinkerpop.gremlin.process.computer.MapReduce) Function(com.google.common.base.Function) ComputerResult(org.apache.tinkerpop.gremlin.process.computer.ComputerResult) DefaultComputerResult(org.apache.tinkerpop.gremlin.process.computer.util.DefaultComputerResult) List(java.util.List) TitanException(com.thinkaurelius.titan.core.TitanException) WorkerPool(com.thinkaurelius.titan.graphdb.util.WorkerPool) Graph(org.apache.tinkerpop.gremlin.structure.Graph) EmptyGraph(org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph) StandardTitanGraph(com.thinkaurelius.titan.graphdb.database.StandardTitanGraph) StandardScanner(com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.StandardScanner) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Map(java.util.Map) TitanManagement(com.thinkaurelius.titan.core.schema.TitanManagement) Nullable(javax.annotation.Nullable)
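
From the caller's side all of this machinery sits behind the standard TinkerPop GraphComputer contract: obtain a computer from the graph, hand it a vertex program and/or map-reduce jobs, and block on the returned Future. A minimal sketch using TinkerPop's built-in PageRankVertexProgram (the wrapper class and method names are hypothetical, and the builder signature may differ slightly between TinkerPop releases):

import java.util.concurrent.Future;

import org.apache.tinkerpop.gremlin.process.computer.ComputerResult;
import org.apache.tinkerpop.gremlin.process.computer.ranking.pagerank.PageRankVertexProgram;
import org.apache.tinkerpop.gremlin.structure.Graph;

import com.thinkaurelius.titan.core.TitanGraph;

public final class FulgoraSubmitSketch {

    // Run a vertex program against an already-open TitanGraph. graph.compute()
    // returns Titan's FulgoraGraphComputer, whose submit() is shown above.
    public static void runPageRank(TitanGraph graph) throws Exception {
        Future<ComputerResult> future = graph.compute()
                .program(PageRankVertexProgram.build().create(graph))
                .submit();
        ComputerResult result = future.get();   // blocks until the final iteration terminates
        Graph resultGraph = result.graph();     // ORIGINAL or NEW graph, per the ResultGraph mode
        System.out.println("iterations: " + result.memory().getIteration()
                + ", runtime ms: " + result.memory().getRuntime());
    }
}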

Example 4 with Graph

Use of org.apache.tinkerpop.gremlin.structure.Graph in project titan by thinkaurelius.

The class TitanLocalQueryOptimizerStrategy, method apply:

@Override
public void apply(final Traversal.Admin<?, ?> traversal) {
    if (!traversal.getGraph().isPresent())
        return;
    Graph graph = traversal.getGraph().get();
    //If this is a compute graph then we can't apply local traversal optimisation at this stage.
    StandardTitanGraph titanGraph = graph instanceof StandardTitanTx ? ((StandardTitanTx) graph).getGraph() : (StandardTitanGraph) graph;
    final boolean useMultiQuery = traversal.getEngine().isStandard() && titanGraph.getConfiguration().useMultiQuery();
    /* ====== VERTEX STEP ====== */
    TraversalHelper.getStepsOfClass(VertexStep.class, traversal).forEach(originalStep -> {
        TitanVertexStep vstep = new TitanVertexStep(originalStep);
        TraversalHelper.replaceStep(originalStep, vstep, traversal);
        if (TitanTraversalUtil.isEdgeReturnStep(vstep)) {
            HasStepFolder.foldInHasContainer(vstep, traversal);
        }
        assert TitanTraversalUtil.isEdgeReturnStep(vstep) || TitanTraversalUtil.isVertexReturnStep(vstep);
        Step nextStep = TitanTraversalUtil.getNextNonIdentityStep(vstep);
        if (nextStep instanceof RangeGlobalStep) {
            int limit = QueryUtil.convertLimit(((RangeGlobalStep) nextStep).getHighRange());
            vstep.setLimit(QueryUtil.mergeLimits(limit, vstep.getLimit()));
        }
        if (useMultiQuery) {
            vstep.setUseMultiQuery(true);
        }
    });
    /* ====== PROPERTIES STEP ====== */
    TraversalHelper.getStepsOfClass(PropertiesStep.class, traversal).forEach(originalStep -> {
        TitanPropertiesStep vstep = new TitanPropertiesStep(originalStep);
        TraversalHelper.replaceStep(originalStep, vstep, traversal);
        if (vstep.getReturnType().forProperties()) {
            HasStepFolder.foldInHasContainer(vstep, traversal);
        }
        if (useMultiQuery) {
            vstep.setUseMultiQuery(true);
        }
    });
    /* ====== EITHER INSIDE LOCAL ====== */
    TraversalHelper.getStepsOfClass(LocalStep.class, traversal).forEach(localStep -> {
        Traversal.Admin localTraversal = ((LocalStep<?, ?>) localStep).getLocalChildren().get(0);
        Step localStart = localTraversal.getStartStep();
        if (localStart instanceof VertexStep) {
            TitanVertexStep vstep = new TitanVertexStep((VertexStep) localStart);
            TraversalHelper.replaceStep(localStart, vstep, localTraversal);
            if (TitanTraversalUtil.isEdgeReturnStep(vstep)) {
                HasStepFolder.foldInHasContainer(vstep, localTraversal);
                HasStepFolder.foldInOrder(vstep, localTraversal, traversal, false);
            }
            HasStepFolder.foldInRange(vstep, localTraversal);
            unfoldLocalTraversal(traversal, localStep, localTraversal, vstep, useMultiQuery);
        }
        if (localStart instanceof PropertiesStep) {
            TitanPropertiesStep vstep = new TitanPropertiesStep((PropertiesStep) localStart);
            TraversalHelper.replaceStep(localStart, vstep, localTraversal);
            if (vstep.getReturnType().forProperties()) {
                HasStepFolder.foldInHasContainer(vstep, localTraversal);
                HasStepFolder.foldInOrder(vstep, localTraversal, traversal, false);
            }
            HasStepFolder.foldInRange(vstep, localTraversal);
            unfoldLocalTraversal(traversal, localStep, localTraversal, vstep, useMultiQuery);
        }
    });
}
Also used: PropertiesStep(org.apache.tinkerpop.gremlin.process.traversal.step.map.PropertiesStep) Traversal(org.apache.tinkerpop.gremlin.process.traversal.Traversal) Step(org.apache.tinkerpop.gremlin.process.traversal.Step) LocalStep(org.apache.tinkerpop.gremlin.process.traversal.step.branch.LocalStep) RangeGlobalStep(org.apache.tinkerpop.gremlin.process.traversal.step.filter.RangeGlobalStep) VertexStep(org.apache.tinkerpop.gremlin.process.traversal.step.map.VertexStep) StandardTitanTx(com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx) StandardTitanGraph(com.thinkaurelius.titan.graphdb.database.StandardTitanGraph) Graph(org.apache.tinkerpop.gremlin.structure.Graph)
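
To see what the strategy buys you, consider a plain traversal over a Titan-backed graph: the VertexStep behind out("knows") is replaced with a TitanVertexStep, and because a limit() (a RangeGlobalStep) follows it, the limit is folded into the vertex-centric query so the storage backend is asked for at most that many edges. A hedged illustration (the label, property key, and class name are hypothetical):

import java.util.List;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public final class MultiQueryExample {

    // The strategy rewrites out("knows") into a TitanVertexStep and folds the
    // limit of 10 into it; with useMultiQuery enabled the adjacency lists of
    // all vertices reaching this step are fetched in one backend batch.
    public static List<Vertex> tenFriends(Graph graph) {
        GraphTraversalSource g = graph.traversal();
        return g.V().has("name", "hercules").out("knows").limit(10).toList();
    }
}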

Example 5 with Graph

Use of org.apache.tinkerpop.gremlin.structure.Graph in project grakn by graknlabs.

The class TxFactoryJanusTest, method testBuildIndexedGraphWithoutCommit:

@Test
public void testBuildIndexedGraphWithoutCommit() throws Exception {
    Graph graph = getGraph();
    addConcepts(graph);
    assertIndexCorrect(graph);
}
Also used: Graph(org.apache.tinkerpop.gremlin.structure.Graph) StandardJanusGraph(org.janusgraph.graphdb.database.StandardJanusGraph) JanusGraph(org.janusgraph.core.JanusGraph) Test(org.junit.Test)
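
The helpers getGraph(), addConcepts(), and assertIndexCorrect() are defined elsewhere in the Grakn test class and are not shown here. For orientation, a JanusGraph suitable for this kind of test can be opened in memory roughly as follows (a sketch, not Grakn's actual factory code):

import org.apache.tinkerpop.gremlin.structure.Graph;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public final class InMemoryJanusExample {

    // Open a throwaway in-memory JanusGraph, as test factories commonly do;
    // the returned instance can be used wherever a TinkerPop Graph is expected.
    public static Graph openTestGraph() {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "inmemory")
                .open();
        return graph;
    }
}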

Aggregations

Graph (org.apache.tinkerpop.gremlin.structure.Graph): 125
Test (org.junit.Test): 64
Vertex (org.apache.tinkerpop.gremlin.structure.Vertex): 55
GraphTraversalSource (org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource): 39
BaseTest (org.umlg.sqlg.test.BaseTest): 35
DefaultGraphTraversal (org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal): 30
Test (org.junit.jupiter.api.Test): 23
SqlgGraph (org.umlg.sqlg.structure.SqlgGraph): 22
List (java.util.List): 14
StandardJanusGraph (org.janusgraph.graphdb.database.StandardJanusGraph): 14
SqlgVertex (org.umlg.sqlg.structure.SqlgVertex): 14
TestGraphBuilder.newGraph (nl.knaw.huygens.timbuctoo.util.TestGraphBuilder.newGraph): 12
Edge (org.apache.tinkerpop.gremlin.structure.Edge): 12
HashSet (java.util.HashSet): 11
IOException (java.io.IOException): 10
MatcherAssert.assertThat (org.hamcrest.MatcherAssert.assertThat): 10
Traversal (org.apache.tinkerpop.gremlin.process.traversal.Traversal): 9
GraphTraversal (org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal): 9
EmptyGraph (org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph): 9
JanusGraphBaseTest (org.janusgraph.graphdb.JanusGraphBaseTest): 9