
Example 1 with Span

Use of org.apache.accumulo.core.trace.Span in project accumulo by apache.

Class TracerRecoversAfterOfflineTableIT, method test().

@Test
public void test() throws Exception {
    Process tracer = null;
    Connector conn = getConnector();
    if (!conn.tableOperations().exists("trace")) {
        MiniAccumuloClusterImpl mac = cluster;
        tracer = mac.exec(TraceServer.class);
        while (!conn.tableOperations().exists("trace")) {
            sleepUninterruptibly(1, TimeUnit.SECONDS);
        }
        sleepUninterruptibly(5, TimeUnit.SECONDS);
    }
    log.info("Taking table offline");
    conn.tableOperations().offline("trace", true);
    String tableName = getUniqueNames(1)[0];
    conn.tableOperations().create(tableName);
    log.info("Start a distributed trace span");
    DistributedTrace.enable("localhost", "testTrace", getClientConfig());
    Span root = Trace.on("traceTest");
    BatchWriter bw = conn.createBatchWriter(tableName, null);
    Mutation m = new Mutation("m");
    m.put("a", "b", "c");
    bw.addMutation(m);
    bw.close();
    root.stop();
    log.info("Bringing trace table back online");
    conn.tableOperations().online("trace", true);
    log.info("Trace table is online, should be able to find trace");
    try (Scanner scanner = conn.createScanner("trace", Authorizations.EMPTY)) {
        scanner.setRange(new Range(new Text(Long.toHexString(root.traceId()))));
        while (true) {
            final StringBuilder finalBuffer = new StringBuilder();
            int traceCount = TraceDump.printTrace(scanner, new Printer() {

                @Override
                public void print(final String line) {
                    try {
                        finalBuffer.append(line).append("\n");
                    } catch (Exception ex) {
                        throw new RuntimeException(ex);
                    }
                }
            });
            String traceOutput = finalBuffer.toString();
            log.info("Trace output:{}", traceOutput);
            if (traceCount > 0) {
                int lastPos = 0;
                for (String part : "traceTest,close,binMutations".split(",")) {
                    log.info("Looking in trace output for '{}'", part);
                    int pos = traceOutput.indexOf(part);
                    assertTrue("Did not find '" + part + "' in output", pos > 0);
                    assertTrue("'" + part + "' occurred earlier than the previous element unexpectedly", pos > lastPos);
                    lastPos = pos;
                }
                break;
            } else {
                log.info("Ignoring trace output as traceCount not greater than zero: {}", traceCount);
                Thread.sleep(1000);
            }
        }
        if (tracer != null) {
            tracer.destroy();
        }
    }
}
Also used: Connector (org.apache.accumulo.core.client.Connector), Scanner (org.apache.accumulo.core.client.Scanner), Text (org.apache.hadoop.io.Text), Range (org.apache.accumulo.core.data.Range), Printer (org.apache.accumulo.tracer.TraceDump.Printer), Span (org.apache.accumulo.core.trace.Span), TraceServer (org.apache.accumulo.tracer.TraceServer), BatchWriter (org.apache.accumulo.core.client.BatchWriter), Mutation (org.apache.accumulo.core.data.Mutation), MiniAccumuloClusterImpl (org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl), Test (org.junit.Test)
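
Stripped of the retry loop and the test scaffolding, the client-side pattern this test exercises is roughly the sketch below. The host, service, table and row values are placeholders, and only calls already visible in the example (DistributedTrace.enable, Trace.on, Span.stop, Span.traceId, TraceDump.printTrace) are assumed; imports match the "Also used" list above.

// Enable distributed tracing for this client, then start a root span.
DistributedTrace.enable("localhost", "myApp", clientConfig);
Span root = Trace.on("myOperation");

// Client operations performed while the span is active are traced under it.
BatchWriter bw = conn.createBatchWriter("myTable", null);
Mutation m = new Mutation("row");
m.put("cf", "cq", "value");
bw.addMutation(m);
bw.close();

// Stop the root span so it is flushed to the trace table.
root.stop();

// The stored trace can later be read back from the "trace" table, keyed by the hex trace id.
try (Scanner scanner = conn.createScanner("trace", Authorizations.EMPTY)) {
    scanner.setRange(new Range(new Text(Long.toHexString(root.traceId()))));
    TraceDump.printTrace(scanner, new Printer() {
        @Override
        public void print(String line) {
            System.out.println(line);
        }
    });
}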

Example 2 with Span

Use of org.apache.accumulo.core.trace.Span in project accumulo by apache.

Class GarbageCollectionAlgorithm, method deleteConfirmed().

private void deleteConfirmed(GarbageCollectionEnvironment gce, SortedMap<String, String> candidateMap) throws IOException, AccumuloException, AccumuloSecurityException, TableNotFoundException {
    Span deleteSpan = Trace.start("deleteFiles");
    try {
        gce.delete(candidateMap);
    } finally {
        deleteSpan.stop();
    }
    cleanUpDeletedTableDirs(gce, candidateMap);
}
Also used: Span (org.apache.accumulo.core.trace.Span)
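
The start/stop-in-finally idiom above can be factored into a small helper. This is a hypothetical utility, not part of Accumulo; it assumes only Trace.start(String) and Span.stop() from org.apache.accumulo.core.trace.

// Hypothetical helper: run a unit of work inside a span and stop the span even
// if the work throws, so no trace is left open.
static <T> T traced(String description, java.util.concurrent.Callable<T> work) throws Exception {
    Span span = Trace.start(description);
    try {
        return work.call();
    } finally {
        span.stop();
    }
}

A caller could then wrap just the traced section, for example traced("deleteFiles", () -> { gce.delete(candidateMap); return null; }), at the cost of widening the enclosing method's throws clause to Exception.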

Example 3 with Span

Use of org.apache.accumulo.core.trace.Span in project accumulo by apache.

Class CloseWriteAheadLogReferences, method run().

@Override
public void run() {
    // As long as we depend on a newer Guava than Hadoop uses, we have to make sure we're compatible with
    // what the version they bundle uses.
    Stopwatch sw = new Stopwatch();
    Connector conn;
    try {
        conn = context.getConnector();
    } catch (Exception e) {
        log.error("Could not create connector", e);
        throw new RuntimeException(e);
    }
    if (!ReplicationTable.isOnline(conn)) {
        log.debug("Replication table isn't online, not attempting to clean up wals");
        return;
    }
    Span findWalsSpan = Trace.start("findReferencedWals");
    HashSet<String> closed = null;
    try {
        sw.start();
        closed = getClosedLogs(conn);
    } finally {
        sw.stop();
        findWalsSpan.stop();
    }
    log.info("Found {} WALs referenced in metadata in {}", closed.size(), sw.toString());
    sw.reset();
    Span updateReplicationSpan = Trace.start("updateReplicationTable");
    long recordsClosed = 0;
    try {
        sw.start();
        recordsClosed = updateReplicationEntries(conn, closed);
    } finally {
        sw.stop();
        updateReplicationSpan.stop();
    }
    log.info("Closed {} WAL replication references in replication table in {}", recordsClosed, sw.toString());
}
Also used: Connector (org.apache.accumulo.core.client.Connector), Stopwatch (com.google.common.base.Stopwatch), Span (org.apache.accumulo.core.trace.Span), TableNotFoundException (org.apache.accumulo.core.client.TableNotFoundException), WalMarkerException (org.apache.accumulo.server.log.WalStateManager.WalMarkerException), InvalidProtocolBufferException (com.google.protobuf.InvalidProtocolBufferException), MutationsRejectedException (org.apache.accumulo.core.client.MutationsRejectedException), TException (org.apache.thrift.TException)

Example 4 with Span

Use of org.apache.accumulo.core.trace.Span in project vertexium by visallo.

Class AccumuloGraph, method getExtendedData().

@Override
public Iterable<ExtendedDataRow> getExtendedData(ElementType elementType, String elementId, String tableName, Authorizations authorizations) {
    try {
        Span trace = Trace.start("getExtendedData");
        trace.data("elementType", elementType.name());
        trace.data("elementId", elementId);
        trace.data("tableName", tableName);
        org.apache.accumulo.core.data.Range range = org.apache.accumulo.core.data.Range.prefix(KeyHelper.createExtendedDataRowKey(elementType, elementId, tableName, ""));
        return getExtendedDataRowsInRange(trace, Lists.newArrayList(range), FetchHints.ALL, authorizations);
    } catch (IllegalStateException ex) {
        throw new VertexiumException("Failed to get extended data: " + elementType + ":" + elementId + ":" + tableName, ex);
    } catch (RuntimeException ex) {
        if (ex.getCause() instanceof AccumuloSecurityException) {
            throw new SecurityVertexiumException("Could not get extended data " + elementType + ":" + elementId + ":" + tableName + " with authorizations: " + authorizations, authorizations, ex.getCause());
        }
        throw ex;
    }
}
Also used: Span (org.apache.accumulo.core.trace.Span)
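
One detail worth noting: the span is handed to getExtendedDataRowsInRange, which presumably stops it when iteration completes, but an exception thrown before that hand-off leaves the span open. A more defensive sketch (not Vertexium's code; error translation omitted, same calls only) stops the span itself in that case:

Span trace = Trace.start("getExtendedData");
boolean handedOff = false;
try {
    trace.data("elementType", elementType.name());
    trace.data("elementId", elementId);
    trace.data("tableName", tableName);
    org.apache.accumulo.core.data.Range range = org.apache.accumulo.core.data.Range.prefix(KeyHelper.createExtendedDataRowKey(elementType, elementId, tableName, ""));
    Iterable<ExtendedDataRow> rows = getExtendedDataRowsInRange(trace, Lists.newArrayList(range), FetchHints.ALL, authorizations);
    // From here on, the returned iterable owns the span and is responsible for stopping it.
    handedOff = true;
    return rows;
} finally {
    if (!handedOff) {
        // An exception escaped before the hand-off; close the span so the trace is not left dangling.
        trace.stop();
    }
}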

Example 5 with Span

Use of org.apache.accumulo.core.trace.Span in project vertexium by visallo.

Class AccumuloGraph, method findRelatedEdgeIds().

@Override
public Iterable<String> findRelatedEdgeIds(Iterable<String> vertexIds, Long endTime, Authorizations authorizations) {
    Set<String> vertexIdsSet = IterableUtils.toSet(vertexIds);
    Span trace = Trace.start("findRelatedEdges");
    try {
        if (LOGGER.isTraceEnabled()) {
            LOGGER.trace("findRelatedEdges:\n  %s", IterableUtils.join(vertexIdsSet, "\n  "));
        }
        if (vertexIdsSet.size() == 0) {
            return new HashSet<>();
        }
        List<org.apache.accumulo.core.data.Range> ranges = new ArrayList<>();
        for (String vertexId : vertexIdsSet) {
            ranges.add(RangeUtils.createRangeFromString(vertexId));
        }
        Long startTime = null;
        int maxVersions = 1;
        FetchHints fetchHints = FetchHints.builder().setIncludeOutEdgeRefs(true).build();
        ScannerBase scanner = createElementScanner(fetchHints, ElementType.VERTEX, maxVersions, startTime, endTime, ranges, false, authorizations);
        IteratorSetting edgeRefFilterSettings = new IteratorSetting(1000, EdgeRefFilter.class.getSimpleName(), EdgeRefFilter.class);
        EdgeRefFilter.setVertexIds(edgeRefFilterSettings, vertexIdsSet);
        scanner.addScanIterator(edgeRefFilterSettings);
        IteratorSetting vertexEdgeIdIteratorSettings = new IteratorSetting(1001, VertexEdgeIdIterator.class.getSimpleName(), VertexEdgeIdIterator.class);
        scanner.addScanIterator(vertexEdgeIdIteratorSettings);
        final long timerStartTime = System.currentTimeMillis();
        try {
            Iterator<Map.Entry<Key, Value>> it = scanner.iterator();
            List<String> edgeIds = new ArrayList<>();
            while (it.hasNext()) {
                Map.Entry<Key, Value> c = it.next();
                for (ByteArrayWrapper edgeId : VertexEdgeIdIterator.decodeValue(c.getValue())) {
                    edgeIds.add(new Text(edgeId.getData()).toString());
                }
            }
            return edgeIds;
        } finally {
            scanner.close();
            GRAPH_LOGGER.logEndIterator(System.currentTimeMillis() - timerStartTime);
        }
    } finally {
        trace.stop();
    }
}
Also used: Span (org.apache.accumulo.core.trace.Span), ByteArrayWrapper (org.vertexium.accumulo.iterator.util.ByteArrayWrapper), IteratorFetchHints (org.vertexium.accumulo.iterator.model.IteratorFetchHints), Text (org.apache.hadoop.io.Text), IndexHint (org.vertexium.search.IndexHint), Value (org.apache.accumulo.core.data.Value), StreamingPropertyValue (org.vertexium.property.StreamingPropertyValue), Key (org.apache.accumulo.core.data.Key), PartialKey (org.apache.accumulo.core.data.PartialKey)

Aggregations

Span (org.apache.accumulo.core.trace.Span): 56
Key (org.apache.accumulo.core.data.Key): 12
Value (org.apache.accumulo.core.data.Value): 12
IOException (java.io.IOException): 11
ColumnVisibility (org.apache.accumulo.core.security.ColumnVisibility): 10
Text (org.apache.hadoop.io.Text): 9
StreamingPropertyValue (org.vertexium.property.StreamingPropertyValue): 8
AccumuloException (org.apache.accumulo.core.client.AccumuloException): 7
AccumuloSecurityException (org.apache.accumulo.core.client.AccumuloSecurityException): 7
TableNotFoundException (org.apache.accumulo.core.client.TableNotFoundException): 6
PartialKey (org.apache.accumulo.core.data.PartialKey): 6
Mutation (org.apache.accumulo.core.data.Mutation): 5
IndexHint (org.vertexium.search.IndexHint): 5
InvalidProtocolBufferException (com.google.protobuf.InvalidProtocolBufferException): 4
Connector (org.apache.accumulo.core.client.Connector): 4
Scanner (org.apache.accumulo.core.client.Scanner): 4
ReplicationTableOfflineException (org.apache.accumulo.core.replication.ReplicationTableOfflineException): 4
Status (org.apache.accumulo.server.replication.proto.Replication.Status): 4
Test (org.junit.Test): 4
FileNotFoundException (java.io.FileNotFoundException): 3