
Example 1 with JanusGraphException

use of org.janusgraph.core.JanusGraphException in project janusgraph by JanusGraph.

The class FulgoraGraphComputer, method submit.

@Override
public Future<ComputerResult> submit() {
    if (executed)
        throw Exceptions.computerHasAlreadyBeenSubmittedAVertexProgram();
    else
        executed = true;
    // it is not possible to execute a computer if it has neither a vertex program nor map-reducers
    if (null == vertexProgram && mapReduces.isEmpty())
        throw GraphComputer.Exceptions.computerHasNoVertexProgramNorMapReducers();
    // it is possible to run map-reducers without a vertex program
    if (null != vertexProgram) {
        GraphComputerHelper.validateProgramOnComputer(this, vertexProgram);
        this.mapReduces.addAll(this.vertexProgram.getMapReducers());
    }
    // if the user didn't set the desired persistence/result graph, take it from the vertex program; otherwise, use no persistence
    this.persistMode = GraphComputerHelper.getPersistState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.persistMode));
    this.resultGraphMode = GraphComputerHelper.getResultGraphState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.resultGraphMode));
    // determine the legality of the persistence and result graph options
    if (!this.features().supportsResultGraphPersistCombination(this.resultGraphMode, this.persistMode))
        throw GraphComputer.Exceptions.resultGraphPersistCombinationNotSupported(this.resultGraphMode, this.persistMode);
    // ensure the requested number of workers does not exceed the supported maximum
    if (this.numThreads > this.features().getMaxWorkers())
        throw GraphComputer.Exceptions.computerRequiresMoreWorkersThanSupported(this.numThreads, this.features().getMaxWorkers());
    memory = new FulgoraMemory(vertexProgram, mapReduces);
    return CompletableFuture.supplyAsync(() -> {
        final long time = System.currentTimeMillis();
        if (null != vertexProgram) {
            // ##### Execute vertex program
            vertexMemory = new FulgoraVertexMemory(expectedNumVertices, graph.getIDManager(), vertexProgram);
            // execute the vertex program
            vertexProgram.setup(memory);
            try (VertexProgramScanJob.Executor job = VertexProgramScanJob.getVertexProgramScanJob(graph, memory, vertexMemory, vertexProgram)) {
                for (int iteration = 1; ; iteration++) {
                    memory.completeSubRound();
                    vertexMemory.nextIteration(vertexProgram.getMessageScopes(memory));
                    jobId = name + "#" + iteration;
                    StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
                    scanBuilder.setJobId(jobId);
                    scanBuilder.setNumProcessingThreads(numThreads);
                    scanBuilder.setWorkBlockSize(readBatchSize);
                    scanBuilder.setJob(job);
                    PartitionedVertexProgramExecutor programExecutor = new PartitionedVertexProgramExecutor(graph, memory, vertexMemory, vertexProgram);
                    try {
                        // Iterates over all vertices and runs the vertex program on all non-partitioned vertices; for partitioned vertices, the data is aggregated.
                        ScanMetrics jobResult = scanBuilder.execute().get();
                        long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
                        if (failures > 0) {
                            throw new JanusGraphException("Failed to process [" + failures + "] vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                        }
                        // Runs the vertex program on all aggregated, partitioned vertices.
                        programExecutor.run(numThreads, jobResult);
                        failures = jobResult.getCustom(PartitionedVertexProgramExecutor.PARTITION_VERTEX_POSTFAIL);
                        if (failures > 0) {
                            throw new JanusGraphException("Failed to process [" + failures + "] partitioned vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                        }
                    } catch (Exception e) {
                        throw new JanusGraphException(e);
                    }
                    vertexMemory.completeIteration();
                    memory.completeSubRound();
                    try {
                        if (this.vertexProgram.terminate(this.memory)) {
                            break;
                        }
                    } finally {
                        memory.incrIteration();
                    }
                }
            }
        }
        // ##### Execute map-reduce jobs
        // Collect map jobs
        Map<MapReduce, FulgoraMapEmitter> mapJobs = new HashMap<>(mapReduces.size());
        for (MapReduce mapReduce : mapReduces) {
            if (mapReduce.doStage(MapReduce.Stage.MAP)) {
                FulgoraMapEmitter mapEmitter = new FulgoraMapEmitter<>(mapReduce.doStage(MapReduce.Stage.REDUCE));
                mapJobs.put(mapReduce, mapEmitter);
            }
        }
        // Execute map jobs
        jobId = name + "#map";
        try (VertexMapJob.Executor job = VertexMapJob.getVertexMapJob(graph, vertexMemory, mapJobs)) {
            StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
            scanBuilder.setJobId(jobId);
            scanBuilder.setNumProcessingThreads(numThreads);
            scanBuilder.setWorkBlockSize(readBatchSize);
            scanBuilder.setJob(job);
            try {
                ScanMetrics jobResult = scanBuilder.execute().get();
                long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
                if (failures > 0) {
                    throw new JanusGraphException("Failed to process [" + failures + "] vertices in map phase. Computer is aborting.");
                }
                failures = jobResult.getCustom(VertexMapJob.MAP_JOB_FAILURE);
                if (failures > 0) {
                    throw new JanusGraphException("Failed to process [" + failures + "] individual map jobs. Computer is aborting.");
                }
            } catch (Exception e) {
                throw new JanusGraphException(e);
            }
            // Execute reduce phase and add to memory
            for (Map.Entry<MapReduce, FulgoraMapEmitter> mapJob : mapJobs.entrySet()) {
                FulgoraMapEmitter<?, ?> mapEmitter = mapJob.getValue();
                MapReduce mapReduce = mapJob.getKey();
                // sort results if a map output sort is defined
                mapEmitter.complete(mapReduce);
                if (mapReduce.doStage(MapReduce.Stage.REDUCE)) {
                    final FulgoraReduceEmitter<?, ?> reduceEmitter = new FulgoraReduceEmitter<>();
                    try (WorkerPool workers = new WorkerPool(numThreads)) {
                        workers.submit(() -> mapReduce.workerStart(MapReduce.Stage.REDUCE));
                        for (final Map.Entry queueEntry : mapEmitter.reduceMap.entrySet()) {
                            if (null == queueEntry)
                                break;
                            workers.submit(() -> mapReduce.reduce(queueEntry.getKey(), ((Iterable) queueEntry.getValue()).iterator(), reduceEmitter));
                        }
                        workers.submit(() -> mapReduce.workerEnd(MapReduce.Stage.REDUCE));
                    } catch (Exception e) {
                        throw new JanusGraphException("Exception while executing reduce phase", e);
                    }
                    // mapEmitter.reduceMap.entrySet().parallelStream().forEach(entry -> mapReduce.reduce(entry.getKey(), entry.getValue().iterator(), reduceEmitter));
                    // sort results if a reduce output sort is defined
                    reduceEmitter.complete(mapReduce);
                    mapReduce.addResultToMemory(this.memory, reduceEmitter.reduceQueue.iterator());
                } else {
                    mapReduce.addResultToMemory(this.memory, mapEmitter.mapQueue.iterator());
                }
            }
        }
        memory.attachReferenceElements(graph);
        // #### Write mutated properties back into graph
        Graph resultgraph = graph;
        if (persistMode == Persist.NOTHING && resultGraphMode == ResultGraph.NEW) {
            resultgraph = EmptyGraph.instance();
        } else if (persistMode != Persist.NOTHING && vertexProgram != null && !vertexProgram.getVertexComputeKeys().isEmpty()) {
            // First, create property keys in graph if they don't already exist
            JanusGraphManagement management = graph.openManagement();
            try {
                for (VertexComputeKey key : vertexProgram.getVertexComputeKeys()) {
                    if (!management.containsPropertyKey(key.getKey()))
                        log.warn("Property key [{}] is not part of the schema and will be created. It is advised to initialize all keys.", key.getKey());
                    management.getOrCreatePropertyKey(key.getKey());
                }
                management.commit();
            } finally {
                if (management != null && management.isOpen())
                    management.rollback();
            }
            // TODO: Filter based on VertexProgram
            Map<Long, Map<String, Object>> mutatedProperties = Maps.transformValues(vertexMemory.getMutableVertexProperties(), new Function<Map<String, Object>, Map<String, Object>>() {

                @Nullable
                @Override
                public Map<String, Object> apply(final Map<String, Object> o) {
                    return Maps.filterKeys(o, s -> !VertexProgramHelper.isTransientVertexComputeKey(s, vertexProgram.getVertexComputeKeys()));
                }
            });
            if (resultGraphMode == ResultGraph.ORIGINAL) {
                AtomicInteger failures = new AtomicInteger(0);
                try (WorkerPool workers = new WorkerPool(numThreads)) {
                    List<Map.Entry<Long, Map<String, Object>>> subset = new ArrayList<>(writeBatchSize / vertexProgram.getVertexComputeKeys().size());
                    int currentSize = 0;
                    for (Map.Entry<Long, Map<String, Object>> entry : mutatedProperties.entrySet()) {
                        subset.add(entry);
                        currentSize += entry.getValue().size();
                        if (currentSize >= writeBatchSize) {
                            workers.submit(new VertexPropertyWriter(subset, failures));
                            subset = new ArrayList<>(subset.size());
                            currentSize = 0;
                        }
                    }
                    if (!subset.isEmpty())
                        workers.submit(new VertexPropertyWriter(subset, failures));
                } catch (Exception e) {
                    throw new JanusGraphException("Exception while attempting to persist result into graph", e);
                }
                if (failures.get() > 0)
                    throw new JanusGraphException("Could not persist program results to graph. Check log for details.");
            } else if (resultGraphMode == ResultGraph.NEW) {
                resultgraph = graph.newTransaction();
                for (Map.Entry<Long, Map<String, Object>> vertexProperty : mutatedProperties.entrySet()) {
                    Vertex v = resultgraph.vertices(vertexProperty.getKey()).next();
                    for (Map.Entry<String, Object> prop : vertexProperty.getValue().entrySet()) {
                        if (prop.getValue() instanceof List) {
                            ((List) prop.getValue()).forEach(value -> v.property(VertexProperty.Cardinality.list, prop.getKey(), value));
                        } else {
                            v.property(VertexProperty.Cardinality.single, prop.getKey(), prop.getValue());
                        }
                    }
                }
            }
        }
        // update runtime and return the newly computed graph
        this.memory.setRuntime(System.currentTimeMillis() - time);
        this.memory.complete();
        return new DefaultComputerResult(resultgraph, this.memory);
    });
}
Also used : JanusGraphManagement(org.janusgraph.core.schema.JanusGraphManagement) Vertex(org.apache.tinkerpop.gremlin.structure.Vertex) HashMap(java.util.HashMap) JanusGraphException(org.janusgraph.core.JanusGraphException) ArrayList(java.util.ArrayList) ScanMetrics(org.janusgraph.diskstorage.keycolumnvalue.scan.ScanMetrics) MapReduce(org.apache.tinkerpop.gremlin.process.computer.MapReduce) Function(com.google.common.base.Function) List(java.util.List) WorkerPool(org.janusgraph.graphdb.util.WorkerPool) Graph(org.apache.tinkerpop.gremlin.structure.Graph) StandardJanusGraph(org.janusgraph.graphdb.database.StandardJanusGraph) EmptyGraph(org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph) StandardScanner(org.janusgraph.diskstorage.keycolumnvalue.scan.StandardScanner) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) DefaultComputerResult(org.apache.tinkerpop.gremlin.process.computer.util.DefaultComputerResult) VertexComputeKey(org.apache.tinkerpop.gremlin.process.computer.VertexComputeKey) Map(java.util.Map)
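
Because submit() runs the computation inside a CompletableFuture, the JanusGraphException thrown above reaches a caller as the cause of the ExecutionException raised by Future.get(). The caller-side sketch below is illustrative only and is not part of the JanusGraph sources; the in-memory backend and the PageRank vertex program are assumptions chosen for brevity.

import java.util.concurrent.ExecutionException;
import org.apache.tinkerpop.gremlin.process.computer.ComputerResult;
import org.apache.tinkerpop.gremlin.process.computer.ranking.pagerank.PageRankVertexProgram;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphException;
import org.janusgraph.core.JanusGraphFactory;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("inmemory");
        try {
            // graph.compute() returns JanusGraph's FulgoraGraphComputer by default
            ComputerResult result = graph.compute()
                    .program(PageRankVertexProgram.build().create(graph))
                    .submit()
                    .get();
            System.out.println("OLAP runtime (ms): " + result.memory().getRuntime());
        } catch (ExecutionException e) {
            // Failures raised inside the asynchronous job (the JanusGraphException
            // throws shown above) arrive here wrapped in the ExecutionException.
            if (e.getCause() instanceof JanusGraphException) {
                System.err.println("Computation failed: " + e.getCause().getMessage());
            }
        } finally {
            graph.close();
        }
    }
}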

Example 2 with JanusGraphException

use of org.janusgraph.core.JanusGraphException in project janusgraph by JanusGraph.

The class Backend, method getConfiguration.

private static KCVSConfiguration getConfiguration(final BackendOperation.TransactionalProvider txProvider, final KeyColumnValueStore store, final String identifier, final Configuration config) {
    try {
        KCVSConfiguration keyColumnValueStoreConfiguration = new KCVSConfiguration(txProvider, config, store, identifier);
        keyColumnValueStoreConfiguration.setMaxOperationWaitTime(config.get(SETUP_WAITTIME));
        return keyColumnValueStoreConfiguration;
    } catch (BackendException e) {
        throw new JanusGraphException("Could not open global configuration", e);
    }
}
Also used : KCVSConfiguration(org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration) JanusGraphException(org.janusgraph.core.JanusGraphException)
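
The method above shows JanusGraph's recurring pattern of converting the checked BackendException from the storage layer into the unchecked JanusGraphException. The minimal sketch below reproduces the same wrap-and-rethrow shape outside the JanusGraph code base; the helper names loadSetting and fetchFromStore are hypothetical placeholders.

import org.janusgraph.core.JanusGraphException;
import org.janusgraph.diskstorage.BackendException;

final class ConfigHelper {
    String loadSetting(String key) {
        try {
            return fetchFromStore(key);   // may fail with a checked storage-layer error
        } catch (BackendException e) {
            // Convert the checked storage failure into JanusGraph's unchecked
            // exception so callers are not forced to declare or handle it.
            throw new JanusGraphException("Could not read setting [" + key + "]", e);
        }
    }

    private String fetchFromStore(String key) throws BackendException {
        // hypothetical storage access; a real implementation would hit the backend
        return key;
    }
}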

Example 3 with JanusGraphException

use of org.janusgraph.core.JanusGraphException in project janusgraph by JanusGraph.

The class IndexRepairJob, method process.

@Override
public void process(JanusGraphVertex vertex, ScanMetrics metrics) {
    try {
        BackendTransaction mutator = writeTx.getTxHandle();
        if (index instanceof RelationTypeIndex) {
            RelationTypeIndexWrapper wrapper = (RelationTypeIndexWrapper) index;
            InternalRelationType wrappedType = wrapper.getWrappedType();
            EdgeSerializer edgeSerializer = writeTx.getEdgeSerializer();
            List<Entry> outAdditions = new ArrayList<>();
            Map<StaticBuffer, List<Entry>> inAdditionsMap = new HashMap<>();
            for (Object relation : vertex.query().types(indexRelationTypeName).direction(Direction.OUT).relations()) {
                InternalRelation janusgraphRelation = (InternalRelation) relation;
                for (int pos = 0; pos < janusgraphRelation.getArity(); pos++) {
                    if (!wrappedType.isUnidirected(Direction.BOTH) && !wrappedType.isUnidirected(EdgeDirection.fromPosition(pos)))
                        // Directionality is not covered
                        continue;
                    Entry entry = edgeSerializer.writeRelation(janusgraphRelation, wrappedType, pos, writeTx);
                    if (pos == 0) {
                        outAdditions.add(entry);
                    } else {
                        assert pos == 1;
                        InternalVertex otherVertex = janusgraphRelation.getVertex(1);
                        StaticBuffer otherVertexKey = writeTx.getIdInspector().getKey(otherVertex.longId());
                        inAdditionsMap.computeIfAbsent(otherVertexKey, k -> new ArrayList<>()).add(entry);
                    }
                }
            }
            // Mutating all OUT relationships for the current vertex
            StaticBuffer vertexKey = writeTx.getIdInspector().getKey(vertex.longId());
            mutator.mutateEdges(vertexKey, outAdditions, KCVSCache.NO_DELETIONS);
            // Mutating all IN relationships for the current vertex
            int totalInAdditions = 0;
            for (Map.Entry<StaticBuffer, List<Entry>> entry : inAdditionsMap.entrySet()) {
                StaticBuffer otherVertexKey = entry.getKey();
                List<Entry> inAdditions = entry.getValue();
                totalInAdditions += inAdditions.size();
                mutator.mutateEdges(otherVertexKey, inAdditions, KCVSCache.NO_DELETIONS);
            }
            metrics.incrementCustom(ADDED_RECORDS_COUNT, outAdditions.size() + totalInAdditions);
        } else if (index instanceof JanusGraphIndex) {
            IndexType indexType = managementSystem.getSchemaVertex(index).asIndexType();
            assert indexType != null;
            IndexSerializer indexSerializer = graph.getIndexSerializer();
            // Gather elements to index
            List<JanusGraphElement> elements;
            switch(indexType.getElement()) {
                case VERTEX:
                    elements = Collections.singletonList(vertex);
                    break;
                case PROPERTY:
                    elements = new ArrayList<>();
                    addIndexSchemaConstraint(vertex.query(), indexType).properties().forEach(elements::add);
                    break;
                case EDGE:
                    elements = new ArrayList<>();
                    addIndexSchemaConstraint(vertex.query().direction(Direction.OUT), indexType).edges().forEach(elements::add);
                    break;
                default:
                    throw new AssertionError("Unexpected category: " + indexType.getElement());
            }
            if (indexType.isCompositeIndex()) {
                for (JanusGraphElement element : elements) {
                    Set<IndexSerializer.IndexUpdate<StaticBuffer, Entry>> updates = indexSerializer.reindexElement(element, (CompositeIndexType) indexType);
                    for (IndexSerializer.IndexUpdate<StaticBuffer, Entry> update : updates) {
                        log.debug("Mutating index {}: {}", indexType, update.getEntry());
                        mutator.mutateIndex(update.getKey(), new ArrayList<Entry>(1) {

                            {
                                add(update.getEntry());
                            }
                        }, KCVSCache.NO_DELETIONS);
                        metrics.incrementCustom(ADDED_RECORDS_COUNT);
                    }
                }
            } else {
                assert indexType.isMixedIndex();
                for (JanusGraphElement element : elements) {
                    if (indexSerializer.reindexElement(element, (MixedIndexType) indexType, documentsPerStore)) {
                        metrics.incrementCustom(DOCUMENT_UPDATES_COUNT);
                    }
                }
            }
        } else
            throw new UnsupportedOperationException("Unsupported index found: " + index);
    } catch (final Exception e) {
        managementSystem.rollback();
        writeTx.rollback();
        metrics.incrementCustom(FAILED_TX);
        throw new JanusGraphException(e.getMessage(), e);
    }
}
Also used : InternalVertex(org.janusgraph.graphdb.internal.InternalVertex) JanusGraphVertex(org.janusgraph.core.JanusGraphVertex) StringUtils(org.janusgraph.util.StringUtils) BaseVertexQuery(org.janusgraph.core.BaseVertexQuery) BackendTransaction(org.janusgraph.diskstorage.BackendTransaction) HashMap(java.util.HashMap) SchemaAction(org.janusgraph.core.schema.SchemaAction) IndexSerializer(org.janusgraph.graphdb.database.IndexSerializer) ArrayList(java.util.ArrayList) ScanMetrics(org.janusgraph.diskstorage.keycolumnvalue.scan.ScanMetrics) IndexEntry(org.janusgraph.diskstorage.indexing.IndexEntry) SchemaStatus(org.janusgraph.core.schema.SchemaStatus) QueryContainer(org.janusgraph.graphdb.olap.QueryContainer) JanusGraphIndex(org.janusgraph.core.schema.JanusGraphIndex) InternalRelation(org.janusgraph.graphdb.internal.InternalRelation) Map(java.util.Map) StaticBuffer(org.janusgraph.diskstorage.StaticBuffer) MixedIndexType(org.janusgraph.graphdb.types.MixedIndexType) JanusGraphException(org.janusgraph.core.JanusGraphException) IndexType(org.janusgraph.graphdb.types.IndexType) JanusGraphElement(org.janusgraph.core.JanusGraphElement) JanusGraphSchemaType(org.janusgraph.core.schema.JanusGraphSchemaType) BackendException(org.janusgraph.diskstorage.BackendException) RelationType(org.janusgraph.core.RelationType) VertexScanJob(org.janusgraph.graphdb.olap.VertexScanJob) CompositeIndexType(org.janusgraph.graphdb.types.CompositeIndexType) PropertyKey(org.janusgraph.core.PropertyKey) KCVSCache(org.janusgraph.diskstorage.keycolumnvalue.cache.KCVSCache) BaseLabel(org.janusgraph.graphdb.types.system.BaseLabel) Set(java.util.Set) JanusGraphSchemaVertex(org.janusgraph.graphdb.types.vertices.JanusGraphSchemaVertex) EdgeSerializer(org.janusgraph.graphdb.database.EdgeSerializer) EdgeDirection(org.janusgraph.graphdb.relations.EdgeDirection) Direction(org.apache.tinkerpop.gremlin.structure.Direction) List(java.util.List) Entry(org.janusgraph.diskstorage.Entry) Preconditions(com.google.common.base.Preconditions) RelationTypeIndex(org.janusgraph.core.schema.RelationTypeIndex) RelationTypeIndexWrapper(org.janusgraph.graphdb.database.management.RelationTypeIndexWrapper) InternalRelationType(org.janusgraph.graphdb.internal.InternalRelationType) Collections(java.util.Collections)
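
IndexRepairJob.process is not usually invoked directly; it runs inside a REINDEX scan job started through the management API, and any per-vertex failure is rethrown as the JanusGraphException seen in the catch block above. The sketch below shows how such a job is typically triggered; the index name "byName" and the in-memory backend are assumptions made for illustration.

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;

public class ReindexExample {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("inmemory");
        JanusGraphManagement mgmt = graph.openManagement();
        try {
            // Blocks until the scan job backed by IndexRepairJob has finished;
            // failures inside process() typically surface with a JanusGraphException
            // in the cause chain of the exception thrown here.
            mgmt.updateIndex(mgmt.getGraphIndex("byName"), SchemaAction.REINDEX).get();
            mgmt.commit();
        } catch (Exception e) {
            if (mgmt.isOpen())
                mgmt.rollback();
            throw e;
        } finally {
            graph.close();
        }
    }
}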

Example 4 with JanusGraphException

use of org.janusgraph.core.JanusGraphException in project janusgraph by JanusGraph.

The class FulgoraGraphComputer, method executeReducePhase.

private void executeReducePhase(Map<MapReduce, FulgoraMapEmitter> mapJobs) {
    for (Map.Entry<MapReduce, FulgoraMapEmitter> mapJob : mapJobs.entrySet()) {
        FulgoraMapEmitter<?, ?> mapEmitter = mapJob.getValue();
        MapReduce mapReduce = mapJob.getKey();
        // sort results if a map output sort is defined
        mapEmitter.complete(mapReduce);
        if (mapReduce.doStage(MapReduce.Stage.REDUCE)) {
            final FulgoraReduceEmitter<?, ?> reduceEmitter = new FulgoraReduceEmitter<>();
            try (WorkerPool workers = new WorkerPool(numThreads)) {
                workers.submit(() -> mapReduce.workerStart(MapReduce.Stage.REDUCE));
                for (final Map.Entry queueEntry : mapEmitter.reduceMap.entrySet()) {
                    if (null == queueEntry)
                        break;
                    workers.submit(() -> mapReduce.reduce(queueEntry.getKey(), ((Iterable) queueEntry.getValue()).iterator(), reduceEmitter));
                }
                workers.submit(() -> mapReduce.workerEnd(MapReduce.Stage.REDUCE));
            } catch (Exception e) {
                throw new JanusGraphException("Exception while executing reduce phase", e);
            }
            // mapEmitter.reduceMap.entrySet().parallelStream().forEach(entry -> mapReduce.reduce(entry.getKey(), entry.getValue().iterator(), reduceEmitter));
            // sort results if a reduce output sort is defined
            reduceEmitter.complete(mapReduce);
            mapReduce.addResultToMemory(this.memory, reduceEmitter.reduceQueue.iterator());
        } else {
            mapReduce.addResultToMemory(this.memory, mapEmitter.mapQueue.iterator());
        }
    }
}
Also used : WorkerPool(org.janusgraph.graphdb.util.WorkerPool) JanusGraphException(org.janusgraph.core.JanusGraphException) Map(java.util.Map) HashMap(java.util.HashMap) BackendException(org.janusgraph.diskstorage.BackendException) ExecutionException(java.util.concurrent.ExecutionException) MapReduce(org.apache.tinkerpop.gremlin.process.computer.MapReduce)
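
This helper mirrors the reduce block embedded in submit() above. For readers outside the JanusGraph code base, the sketch below reproduces the same wrap-and-rethrow shape with a plain ExecutorService standing in for the internal WorkerPool; the reduceMap layout and the reduceOne helper are illustrative placeholders, not JanusGraph APIs.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.janusgraph.core.JanusGraphException;

final class ReducePhaseSketch {
    void runReduce(Map<String, List<Integer>> reduceMap, int numThreads) {
        ExecutorService workers = Executors.newFixedThreadPool(numThreads);
        try {
            for (Map.Entry<String, List<Integer>> entry : reduceMap.entrySet()) {
                // one reduce task per aggregated key, as in the loop above
                workers.submit(() -> reduceOne(entry.getKey(), entry.getValue()));
            }
            workers.shutdown();
            if (!workers.awaitTermination(1, TimeUnit.HOURS))
                throw new IllegalStateException("Reduce workers did not finish in time");
        } catch (Exception e) {
            // mirror the example above: any reduce-phase failure becomes unchecked
            throw new JanusGraphException("Exception while executing reduce phase", e);
        } finally {
            workers.shutdownNow();
        }
    }

    private void reduceOne(String key, List<Integer> values) {
        // hypothetical per-key reduction
    }
}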

Example 5 with JanusGraphException

use of org.janusgraph.core.JanusGraphException in project janusgraph by JanusGraph.

The class FulgoraGraphComputer, method executeOnNonPartitionedVertices.

private ScanMetrics executeOnNonPartitionedVertices(int iteration, StandardScanner.Builder scanBuilder) throws InterruptedException, ExecutionException, BackendException {
    ScanMetrics jobResult = scanBuilder.execute().get();
    long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
    if (failures > 0) {
        throw new JanusGraphException("Failed to process [" + failures + "] vertices in vertex program iteration " + "[" + iteration + "]. Computer is aborting.");
    }
    return jobResult;
}
Also used : JanusGraphException(org.janusgraph.core.JanusGraphException) ScanMetrics(org.janusgraph.diskstorage.keycolumnvalue.scan.ScanMetrics)
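
The helper simply inspects the FAILURE counter of the returned ScanMetrics and aborts with a JanusGraphException if any vertex failed. The small utility below sketches the same check for any completed scan job; the method name and the job-name parameter are illustrative, not part of the JanusGraph API.

import org.janusgraph.core.JanusGraphException;
import org.janusgraph.diskstorage.keycolumnvalue.scan.ScanMetrics;

final class ScanMetricsCheck {
    static void requireNoFailures(ScanMetrics metrics, String jobName) {
        long failures = metrics.get(ScanMetrics.Metric.FAILURE);
        long successes = metrics.get(ScanMetrics.Metric.SUCCESS);
        if (failures > 0) {
            throw new JanusGraphException(
                    "Job [" + jobName + "] failed on [" + failures + "] vertices. Aborting.");
        }
        System.out.println("Job [" + jobName + "] processed " + successes + " vertices successfully");
    }
}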

Aggregations

JanusGraphException (org.janusgraph.core.JanusGraphException): 44 usages
BackendException (org.janusgraph.diskstorage.BackendException): 18 usages
PropertyKey (org.janusgraph.core.PropertyKey): 9 usages
Test (org.junit.jupiter.api.Test): 7 usages
JanusGraphIndex (org.janusgraph.core.schema.JanusGraphIndex): 6 usages
StandardScanner (org.janusgraph.diskstorage.keycolumnvalue.scan.StandardScanner): 6 usages
Set (java.util.Set): 5 usages
JanusGraphManagement (org.janusgraph.core.schema.JanusGraphManagement): 5 usages
CompositeIndexType (org.janusgraph.graphdb.types.CompositeIndexType): 5 usages
IndexType (org.janusgraph.graphdb.types.IndexType): 5 usages
MixedIndexType (org.janusgraph.graphdb.types.MixedIndexType): 5 usages
RepeatedIfExceptionsTest (io.github.artsok.RepeatedIfExceptionsTest): 4 usages
ArrayList (java.util.ArrayList): 4 usages
HashMap (java.util.HashMap): 4 usages
Map (java.util.Map): 4 usages
ExecutionException (java.util.concurrent.ExecutionException): 4 usages
JanusGraphVertex (org.janusgraph.core.JanusGraphVertex): 4 usages
RelationTypeIndex (org.janusgraph.core.schema.RelationTypeIndex): 4 usages
ScanMetrics (org.janusgraph.diskstorage.keycolumnvalue.scan.ScanMetrics): 4 usages
JanusGraphSchemaVertex (org.janusgraph.graphdb.types.vertices.JanusGraphSchemaVertex): 4 usages