Example 6 with TitanException

Use of com.thinkaurelius.titan.core.TitanException in project titan by thinkaurelius.

From the class Backend, method initialize.

/**
 * Initializes this backend with the given configuration. Must be called before this Backend can be used.
 *
 * @param config the configuration with which to initialize this backend
 */
public void initialize(Configuration config) {
    try {
        //EdgeStore & VertexIndexStore
        KeyColumnValueStore idStore = storeManager.openDatabase(ID_STORE_NAME);
        idAuthority = null;
        if (storeFeatures.isKeyConsistent()) {
            idAuthority = new ConsistentKeyIDAuthority(idStore, storeManager, config);
        } else {
            throw new IllegalStateException("Store needs to support consistent key or transactional operations for ID manager to guarantee proper id allocations");
        }
        KeyColumnValueStore edgeStoreRaw = storeManagerLocking.openDatabase(EDGESTORE_NAME);
        KeyColumnValueStore indexStoreRaw = storeManagerLocking.openDatabase(INDEXSTORE_NAME);
        //Configure caches
        if (cacheEnabled) {
            long expirationTime = configuration.get(DB_CACHE_TIME);
            Preconditions.checkArgument(expirationTime >= 0, "Invalid cache expiration time: %s", expirationTime);
            if (expirationTime == 0)
                expirationTime = ETERNAL_CACHE_EXPIRATION;
            long cacheSizeBytes;
            double cachesize = configuration.get(DB_CACHE_SIZE);
            Preconditions.checkArgument(cachesize > 0.0, "Invalid cache size specified: %s", cachesize);
            if (cachesize < 1.0) {
                //It's a percentage of the available heap (max memory minus memory currently in use)
                Runtime runtime = Runtime.getRuntime();
                cacheSizeBytes = (long) ((runtime.maxMemory() - (runtime.totalMemory() - runtime.freeMemory())) * cachesize);
            } else {
                Preconditions.checkArgument(cachesize > 1000, "Cache size is too small: %s", cachesize);
                cacheSizeBytes = (long) cachesize;
            }
            log.info("Configuring total store cache size: {}", cacheSizeBytes);
            long cleanWaitTime = configuration.get(DB_CACHE_CLEAN_WAIT);
            Preconditions.checkArgument(EDGESTORE_CACHE_PERCENT + INDEXSTORE_CACHE_PERCENT == 1.0, "Cache percentages don't add up!");
            long edgeStoreCacheSize = Math.round(cacheSizeBytes * EDGESTORE_CACHE_PERCENT);
            long indexStoreCacheSize = Math.round(cacheSizeBytes * INDEXSTORE_CACHE_PERCENT);
            edgeStore = new ExpirationKCVSCache(edgeStoreRaw, getMetricsCacheName(EDGESTORE_NAME), expirationTime, cleanWaitTime, edgeStoreCacheSize);
            indexStore = new ExpirationKCVSCache(indexStoreRaw, getMetricsCacheName(INDEXSTORE_NAME), expirationTime, cleanWaitTime, indexStoreCacheSize);
        } else {
            edgeStore = new NoKCVSCache(edgeStoreRaw);
            indexStore = new NoKCVSCache(indexStoreRaw);
        }
        //Just open them so that they are cached
        txLogManager.openLog(SYSTEM_TX_LOG_NAME);
        mgmtLogManager.openLog(SYSTEM_MGMT_LOG_NAME);
        txLogStore = new NoKCVSCache(storeManager.openDatabase(SYSTEM_TX_LOG_NAME));
        //Open global configuration
        KeyColumnValueStore systemConfigStore = storeManagerLocking.openDatabase(SYSTEM_PROPERTIES_STORE_NAME);
        systemConfig = getGlobalConfiguration(new BackendOperation.TransactionalProvider() {

            @Override
            public StoreTransaction openTx() throws BackendException {
                return storeManagerLocking.beginTransaction(StandardBaseTransactionConfig.of(configuration.get(TIMESTAMP_PROVIDER), storeFeatures.getKeyConsistentTxConfig()));
            }

            @Override
            public void close() throws BackendException {
            //Do nothing, storeManager is closed explicitly by Backend
            }
        }, systemConfigStore, configuration);
        userConfig = getConfiguration(new BackendOperation.TransactionalProvider() {

            @Override
            public StoreTransaction openTx() throws BackendException {
                return storeManagerLocking.beginTransaction(StandardBaseTransactionConfig.of(configuration.get(TIMESTAMP_PROVIDER)));
            }

            @Override
            public void close() throws BackendException {
            //Do nothing, storeManager is closed explicitly by Backend
            }
        }, systemConfigStore, USER_CONFIGURATION_IDENTIFIER, configuration);
    } catch (BackendException e) {
        throw new TitanException("Could not initialize backend", e);
    }
}
Also used: ConsistentKeyIDAuthority (com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDAuthority), NoKCVSCache (com.thinkaurelius.titan.diskstorage.keycolumnvalue.cache.NoKCVSCache), ExpirationKCVSCache (com.thinkaurelius.titan.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache), TitanException (com.thinkaurelius.titan.core.TitanException)
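
The catch clause above is the standard Titan pattern: the checked BackendException is rethrown as the unchecked TitanException, which is what user code observes. A minimal sketch of catching it at the call site, assuming the "inmemory" shorthand backend (TitanFactory.open() initializes the Backend internally):

import com.thinkaurelius.titan.core.TitanException;
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class BackendInitExample {
    public static void main(String[] args) throws Exception {
        try {
            // TitanFactory.open() constructs and initializes the Backend shown above;
            // any storage failure surfaces as the unchecked TitanException
            TitanGraph graph = TitanFactory.open("inmemory");
            graph.close();
        } catch (TitanException e) {
            // the original BackendException is preserved as the cause
            System.err.println("Backend initialization failed: " + e.getMessage());
            if (e.getCause() != null) {
                System.err.println("Underlying storage error: " + e.getCause());
            }
        }
    }
}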

Example 7 with TitanException

Use of com.thinkaurelius.titan.core.TitanException in project titan by thinkaurelius.

From the class ElasticSearchIndex, method checkForOrCreateIndex.

/**
 * If ES already contains this instance's target index, then do nothing.
 * Otherwise, create the index, then wait {@link #CREATE_SLEEP}.
 * <p>
 * The {@code client} field must point to a live, connected client.
 * The {@code indexName} field must be non-null and point to the name
 * of the index to check for existence or create.
 *
 * @param config the config for this ElasticSearchIndex
 * @throws java.lang.IllegalArgumentException if the index could not be created
 */
private void checkForOrCreateIndex(Configuration config) {
    Preconditions.checkState(null != client);
    //Create index if it does not already exist
    IndicesExistsResponse response = client.admin().indices().exists(new IndicesExistsRequest(indexName)).actionGet();
    if (!response.isExists()) {
        ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
        ElasticSearchSetup.applySettingsFromTitanConf(settings, config, ES_CREATE_EXTRAS_NS);
        CreateIndexResponse create = client.admin().indices().prepareCreate(indexName).setSettings(settings.build()).execute().actionGet();
        try {
            final long sleep = config.get(CREATE_SLEEP);
            log.debug("Sleeping {} ms after {} index creation returned from actionGet()", sleep, indexName);
            Thread.sleep(sleep);
        } catch (InterruptedException e) {
            throw new TitanException("Interrupted while waiting for index to settle in", e);
        }
        if (!create.isAcknowledged())
            throw new IllegalArgumentException("Could not create index: " + indexName);
    }
}
Also used: IndicesExistsResponse (org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse), TitanException (com.thinkaurelius.titan.core.TitanException), CreateIndexResponse (org.elasticsearch.action.admin.indices.create.CreateIndexResponse), ImmutableSettings (org.elasticsearch.common.settings.ImmutableSettings), IndicesExistsRequest (org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest)
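
One detail worth noting in the sleep handling above: the InterruptedException is converted into a TitanException without restoring the thread's interrupt status. A commonly recommended variant (a sketch, not the project's actual code) re-sets the flag before rethrowing:

try {
    Thread.sleep(sleep);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // preserve the interrupt status for callers
    throw new TitanException("Interrupted while waiting for index to settle in", e);
}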

Example 8 with TitanException

Use of com.thinkaurelius.titan.core.TitanException in project titan by thinkaurelius.

From the class FulgoraGraphComputer, method submit.

@Override
public Future<ComputerResult> submit() {
    if (executed)
        throw Exceptions.computerHasAlreadyBeenSubmittedAVertexProgram();
    else
        executed = true;
    // it is not possible to execute a computer that has neither a vertex program nor mapreducers
    if (null == vertexProgram && mapReduces.isEmpty())
        throw GraphComputer.Exceptions.computerHasNoVertexProgramNorMapReducers();
    // it is possible to run mapreducers without a vertex program
    if (null != vertexProgram) {
        GraphComputerHelper.validateProgramOnComputer(this, vertexProgram);
        this.mapReduces.addAll(this.vertexProgram.getMapReducers());
    }
    // if the user didn't set the desired persistence/result graph options, take them from the vertex program; otherwise default to no persistence
    this.persistMode = GraphComputerHelper.getPersistState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.persistMode));
    this.resultGraphMode = GraphComputerHelper.getResultGraphState(Optional.ofNullable(this.vertexProgram), Optional.ofNullable(this.resultGraphMode));
    // determine the legality of the persistence and result graph options
    if (!this.features().supportsResultGraphPersistCombination(this.resultGraphMode, this.persistMode))
        throw GraphComputer.Exceptions.resultGraphPersistCombinationNotSupported(this.resultGraphMode, this.persistMode);
    memory = new FulgoraMemory(vertexProgram, mapReduces);
    return CompletableFuture.<ComputerResult>supplyAsync(() -> {
        final long time = System.currentTimeMillis();
        if (null != vertexProgram) {
            // ##### Execute vertex program
            vertexMemory = new FulgoraVertexMemory(expectedNumVertices, graph.getIDManager(), vertexProgram);
            // execute the vertex program
            vertexProgram.setup(memory);
            memory.completeSubRound();
            for (int iteration = 1; ; iteration++) {
                vertexMemory.nextIteration(vertexProgram.getMessageScopes(memory));
                jobId = name + "#" + iteration;
                VertexProgramScanJob.Executor job = VertexProgramScanJob.getVertexProgramScanJob(graph, memory, vertexMemory, vertexProgram);
                StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
                scanBuilder.setJobId(jobId);
                scanBuilder.setNumProcessingThreads(numThreads);
                scanBuilder.setWorkBlockSize(readBatchSize);
                scanBuilder.setJob(job);
                PartitionedVertexProgramExecutor pvpe = new PartitionedVertexProgramExecutor(graph, memory, vertexMemory, vertexProgram);
                try {
                    //Iterates over all vertices and computes the vertex program on all non-partitioned vertices. For partitioned ones, the data is aggregated
                    ScanMetrics jobResult = scanBuilder.execute().get();
                    long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
                    if (failures > 0) {
                        throw new TitanException("Failed to process [" + failures + "] vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                    }
                    //Runs the vertex program on all aggregated, partitioned vertices.
                    pvpe.run(numThreads, jobResult);
                    failures = jobResult.getCustom(PartitionedVertexProgramExecutor.PARTITION_VERTEX_POSTFAIL);
                    if (failures > 0) {
                        throw new TitanException("Failed to process [" + failures + "] partitioned vertices in vertex program iteration [" + iteration + "]. Computer is aborting.");
                    }
                } catch (Exception e) {
                    throw new TitanException(e);
                }
                vertexMemory.completeIteration();
                memory.completeSubRound();
                try {
                    if (this.vertexProgram.terminate(this.memory)) {
                        break;
                    }
                } finally {
                    memory.incrIteration();
                    memory.completeSubRound();
                }
            }
        }
        // ##### Execute mapreduce jobs
        // Collect map jobs
        Map<MapReduce, FulgoraMapEmitter> mapJobs = new HashMap<>(mapReduces.size());
        for (MapReduce mapReduce : mapReduces) {
            if (mapReduce.doStage(MapReduce.Stage.MAP)) {
                FulgoraMapEmitter mapEmitter = new FulgoraMapEmitter<>(mapReduce.doStage(MapReduce.Stage.REDUCE));
                mapJobs.put(mapReduce, mapEmitter);
            }
        }
        // Execute map jobs
        jobId = name + "#map";
        VertexMapJob.Executor job = VertexMapJob.getVertexMapJob(graph, vertexMemory, mapJobs);
        StandardScanner.Builder scanBuilder = graph.getBackend().buildEdgeScanJob();
        scanBuilder.setJobId(jobId);
        scanBuilder.setNumProcessingThreads(numThreads);
        scanBuilder.setWorkBlockSize(readBatchSize);
        scanBuilder.setJob(job);
        try {
            ScanMetrics jobResult = scanBuilder.execute().get();
            long failures = jobResult.get(ScanMetrics.Metric.FAILURE);
            if (failures > 0) {
                throw new TitanException("Failed to process [" + failures + "] vertices in map phase. Computer is aborting.");
            }
            failures = jobResult.getCustom(VertexMapJob.MAP_JOB_FAILURE);
            if (failures > 0) {
                throw new TitanException("Failed to process [" + failures + "] individual map jobs. Computer is aborting.");
            }
        } catch (Exception e) {
            throw new TitanException(e);
        }
        // Execute reduce phase and add to memory
        for (Map.Entry<MapReduce, FulgoraMapEmitter> mapJob : mapJobs.entrySet()) {
            FulgoraMapEmitter<?, ?> mapEmitter = mapJob.getValue();
            MapReduce mapReduce = mapJob.getKey();
            // sort results if a map output sort is defined
            mapEmitter.complete(mapReduce);
            if (mapReduce.doStage(MapReduce.Stage.REDUCE)) {
                final FulgoraReduceEmitter<?, ?> reduceEmitter = new FulgoraReduceEmitter<>();
                try (WorkerPool workers = new WorkerPool(numThreads)) {
                    workers.submit(() -> mapReduce.workerStart(MapReduce.Stage.REDUCE));
                    for (final Map.Entry queueEntry : mapEmitter.reduceMap.entrySet()) {
                        workers.submit(() -> mapReduce.reduce(queueEntry.getKey(), ((Iterable) queueEntry.getValue()).iterator(), reduceEmitter));
                    }
                    workers.submit(() -> mapReduce.workerEnd(MapReduce.Stage.REDUCE));
                } catch (Exception e) {
                    throw new TitanException("Exception while executing reduce phase", e);
                }
                // sort results if a reduce output sort is defined
                reduceEmitter.complete(mapReduce);
                mapReduce.addResultToMemory(this.memory, reduceEmitter.reduceQueue.iterator());
            } else {
                mapReduce.addResultToMemory(this.memory, mapEmitter.mapQueue.iterator());
            }
        }
        // #### Write mutated properties back into graph
        Graph resultgraph = graph;
        if (persistMode == Persist.NOTHING && resultGraphMode == ResultGraph.NEW) {
            resultgraph = EmptyGraph.instance();
        } else if (persistMode != Persist.NOTHING && vertexProgram != null && !vertexProgram.getElementComputeKeys().isEmpty()) {
            //First, create property keys in graph if they don't already exist
            TitanManagement mgmt = graph.openManagement();
            try {
                for (String key : vertexProgram.getElementComputeKeys()) {
                    if (!mgmt.containsPropertyKey(key))
                        log.warn("Property key [{}] is not part of the schema and will be created. It is advised to initialize all keys.", key);
                    mgmt.getOrCreatePropertyKey(key);
                }
                mgmt.commit();
            } finally {
                if (mgmt != null && mgmt.isOpen())
                    mgmt.rollback();
            }
            //TODO: Filter based on VertexProgram
            Map<Long, Map<String, Object>> mutatedProperties = Maps.transformValues(vertexMemory.getMutableVertexProperties(), new Function<Map<String, Object>, Map<String, Object>>() {

                @Nullable
                @Override
                public Map<String, Object> apply(@Nullable Map<String, Object> o) {
                    return Maps.filterKeys(o, s -> !NON_PERSISTING_KEYS.contains(s));
                }
            });
            if (resultGraphMode == ResultGraph.ORIGINAL) {
                AtomicInteger failures = new AtomicInteger(0);
                try (WorkerPool workers = new WorkerPool(numThreads)) {
                    List<Map.Entry<Long, Map<String, Object>>> subset = new ArrayList<>(writeBatchSize / vertexProgram.getElementComputeKeys().size());
                    int currentSize = 0;
                    for (Map.Entry<Long, Map<String, Object>> entry : mutatedProperties.entrySet()) {
                        subset.add(entry);
                        currentSize += entry.getValue().size();
                        if (currentSize >= writeBatchSize) {
                            workers.submit(new VertexPropertyWriter(subset, failures));
                            subset = new ArrayList<>(subset.size());
                            currentSize = 0;
                        }
                    }
                    if (!subset.isEmpty())
                        workers.submit(new VertexPropertyWriter(subset, failures));
                } catch (Exception e) {
                    throw new TitanException("Exception while attempting to persist result into graph", e);
                }
                if (failures.get() > 0)
                    throw new TitanException("Could not persist program results to graph. Check log for details.");
            } else if (resultGraphMode == ResultGraph.NEW) {
                resultgraph = graph.newTransaction();
                for (Map.Entry<Long, Map<String, Object>> vprop : mutatedProperties.entrySet()) {
                    Vertex v = resultgraph.vertices(vprop.getKey()).next();
                    for (Map.Entry<String, Object> prop : vprop.getValue().entrySet()) {
                        v.property(VertexProperty.Cardinality.single, prop.getKey(), prop.getValue());
                    }
                }
            }
        }
        // update runtime and return the newly computed graph
        this.memory.setRuntime(System.currentTimeMillis() - time);
        this.memory.complete();
        return new DefaultComputerResult(resultgraph, this.memory);
    });
}
Also used: Vertex (org.apache.tinkerpop.gremlin.structure.Vertex), HashMap (java.util.HashMap), ArrayList (java.util.ArrayList), ScanMetrics (com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.ScanMetrics), MapReduce (org.apache.tinkerpop.gremlin.process.computer.MapReduce), Function (com.google.common.base.Function), ComputerResult (org.apache.tinkerpop.gremlin.process.computer.ComputerResult), DefaultComputerResult (org.apache.tinkerpop.gremlin.process.computer.util.DefaultComputerResult), List (java.util.List), TitanException (com.thinkaurelius.titan.core.TitanException), WorkerPool (com.thinkaurelius.titan.graphdb.util.WorkerPool), Graph (org.apache.tinkerpop.gremlin.structure.Graph), EmptyGraph (org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph), StandardTitanGraph (com.thinkaurelius.titan.graphdb.database.StandardTitanGraph), StandardScanner (com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.StandardScanner), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), Map (java.util.Map), TitanManagement (com.thinkaurelius.titan.core.schema.TitanManagement), Nullable (javax.annotation.Nullable)
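
From a caller's perspective, submit() is reached through TinkerPop's GraphComputer API and runs asynchronously on a CompletableFuture. A minimal driving sketch, assuming the "inmemory" shorthand backend and TinkerPop's PageRankVertexProgram (builder signatures vary across TinkerPop versions, so treat this as illustrative):

import java.util.concurrent.ExecutionException;

import org.apache.tinkerpop.gremlin.process.computer.ComputerResult;
import org.apache.tinkerpop.gremlin.process.computer.ranking.pagerank.PageRankVertexProgram;

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class ComputerSubmitExample {
    public static void main(String[] args) throws Exception {
        TitanGraph graph = TitanFactory.open("inmemory");
        try {
            ComputerResult result = graph.compute()                  // FulgoraGraphComputer
                    .program(PageRankVertexProgram.build().create()) // any VertexProgram works
                    .submit()                                        // kicks off the pipeline above
                    .get();                                          // block until completion
            result.graph().close();
        } catch (ExecutionException e) {
            // TitanExceptions thrown inside the CompletableFuture above
            // arrive here wrapped as the cause of the ExecutionException
            System.err.println("Computation failed: " + e.getCause());
        } finally {
            graph.close();
        }
    }
}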

Example 9 with TitanException

Use of com.thinkaurelius.titan.core.TitanException in project titan by thinkaurelius.

From the class TitanIndexTest, method testIndexing.

private void testIndexing(Cardinality cardinality) {
    if (supportsCollections()) {
        PropertyKey stringProperty = mgmt.makePropertyKey("name").dataType(String.class).cardinality(cardinality).make();
        PropertyKey intProperty = mgmt.makePropertyKey("age").dataType(Integer.class).cardinality(cardinality).make();
        PropertyKey longProperty = mgmt.makePropertyKey("long").dataType(Long.class).cardinality(cardinality).make();
        PropertyKey uuidProperty = mgmt.makePropertyKey("uuid").dataType(UUID.class).cardinality(cardinality).make();
        PropertyKey geoProperty = mgmt.makePropertyKey("geo").dataType(Geoshape.class).cardinality(cardinality).make();
        mgmt.buildIndex("collectionIndex", Vertex.class).addKey(stringProperty, getStringMapping()).addKey(intProperty).addKey(longProperty).addKey(uuidProperty).addKey(geoProperty).buildMixedIndex(INDEX);
        finishSchema();
        testCollection(cardinality, "name", "Totoro", "Hiro");
        testCollection(cardinality, "age", 1, 2);
        testCollection(cardinality, "long", 1L, 2L);
        testCollection(cardinality, "uuid", UUID.randomUUID(), UUID.randomUUID());
        testCollection(cardinality, "geo", Geoshape.point(1.0, 1.0), Geoshape.point(2.0, 2.0));
    } else {
        try {
            PropertyKey stringProperty = mgmt.makePropertyKey("name").dataType(String.class).cardinality(cardinality).make();
            //This should throw an exception
            mgmt.buildIndex("collectionIndex", Vertex.class).addKey(stringProperty, getStringMapping()).buildMixedIndex(INDEX);
            Assert.fail("Should have thrown an exception");
        } catch (TitanException e) {
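            // expected: the index backend does not support collection cardinality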
        }
    }
}
Also used: TitanException (com.thinkaurelius.titan.core.TitanException), PropertyKey (com.thinkaurelius.titan.core.PropertyKey)
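
The empty catch block swallows the expected failure silently. Since this is a JUnit test, the negative case could be stated more directly; a sketch with a hypothetical test name, assuming com.thinkaurelius.titan.core.Cardinality.LIST and the surrounding test class's mgmt, getStringMapping(), and INDEX members:

@Test(expected = TitanException.class)
public void buildingCollectionIndexWithoutBackendSupportFails() {
    // hypothetical standalone variant of the else-branch above
    PropertyKey stringProperty = mgmt.makePropertyKey("name")
            .dataType(String.class).cardinality(Cardinality.LIST).make();
    mgmt.buildIndex("collectionIndex", Vertex.class)
            .addKey(stringProperty, getStringMapping())
            .buildMixedIndex(INDEX); // should throw when collections are unsupported
}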

Example 10 with TitanException

Use of com.thinkaurelius.titan.core.TitanException in project titan by thinkaurelius.

From the class IDPoolTest, method testAllocationTimeout.

@Test
public void testAllocationTimeout() {
    final MockIDAuthority idauth = new MockIDAuthority(10000, Integer.MAX_VALUE, 5000);
    StandardIDPool pool = new StandardIDPool(idauth, 1, 1, Integer.MAX_VALUE, Duration.ofMillis(4000), 0.1);
    try {
        pool.nextID();
        fail();
    } catch (TitanException e) {
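        // expected: block acquisition took longer than the pool's 4000 ms renew timeout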
    }
}
Also used: StandardIDPool (com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool), TitanException (com.thinkaurelius.titan.core.TitanException), Test (org.junit.Test)
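
What makes the timeout deterministic: the MockIDAuthority's third constructor argument (presumably an artificial acquisition delay of 5000 ms) exceeds the Duration.ofMillis(4000) renew timeout passed to the StandardIDPool, so nextID() cannot obtain an ID block in time and the failure surfaces as a TitanException.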

Aggregations

TitanException (com.thinkaurelius.titan.core.TitanException): 19
BackendException (com.thinkaurelius.titan.diskstorage.BackendException): 4
PropertyKey (com.thinkaurelius.titan.core.PropertyKey): 3
RelationTypeIndex (com.thinkaurelius.titan.core.schema.RelationTypeIndex): 3
TitanGraphIndex (com.thinkaurelius.titan.core.schema.TitanGraphIndex): 3
StandardTitanGraph (com.thinkaurelius.titan.graphdb.database.StandardTitanGraph): 3
CompositeIndexType (com.thinkaurelius.titan.graphdb.types.CompositeIndexType): 3
IndexType (com.thinkaurelius.titan.graphdb.types.IndexType): 3
TitanSchemaVertex (com.thinkaurelius.titan.graphdb.types.vertices.TitanSchemaVertex): 3
List (java.util.List): 3
TitanManagement (com.thinkaurelius.titan.core.schema.TitanManagement): 2
ScanMetrics (com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.ScanMetrics): 2
StandardScanner (com.thinkaurelius.titan.diskstorage.keycolumnvalue.scan.StandardScanner): 2
StandardIDPool (com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool): 2
HashMap (java.util.HashMap): 2
Map (java.util.Map): 2
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 2
Function (com.google.common.base.Function): 1
Preconditions (com.google.common.base.Preconditions): 1
ImmutableList (com.google.common.collect.ImmutableList): 1