
Example 1 with SpanId

Use of org.apache.htrace.core.SpanId in project hadoop by apache.

Class Sender, method requestShortCircuitShm.

@Override
public void requestShortCircuitShm(String clientName) throws IOException {
    ShortCircuitShmRequestProto.Builder builder = ShortCircuitShmRequestProto.newBuilder().setClientName(clientName);
    SpanId spanId = Tracer.getCurrentSpanId();
    if (spanId.isValid()) {
        builder.setTraceInfo(DataTransferTraceInfoProto.newBuilder().setTraceId(spanId.getHigh()).setParentId(spanId.getLow()));
    }
    ShortCircuitShmRequestProto proto = builder.build();
    send(out, Op.REQUEST_SHORT_CIRCUIT_SHM, proto);
}
Also used: ShortCircuitShmRequestProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmRequestProto), SpanId (org.apache.htrace.core.SpanId)
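
The sender splits the SpanId into its two 64-bit halves: getHigh() goes into the proto's traceId field and getLow() into its parentId field. Below is a minimal sketch of the inverse mapping on the receiving side; the helper name is illustrative and not part of Hadoop, and it assumes the generated protobuf getters getTraceId() and getParentId() that correspond to the setters used above.

// Hypothetical helper (name is illustrative, not part of Hadoop): rebuilds the
// SpanId that the sender packed into the trace-info proto. Assumes the generated
// protobuf getters getTraceId()/getParentId() matching the setters used above.
private static SpanId spanIdFromTraceInfo(DataTransferTraceInfoProto traceInfo) {
    // traceId carries getHigh() and parentId carries getLow(), so the two longs
    // recombine into the original span id
    return new SpanId(traceInfo.getTraceId(), traceInfo.getParentId());
}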

Example 2 with SpanId

Use of org.apache.htrace.core.SpanId in project hadoop by apache.

Class TestDFSPacket, method testAddParentsGetParents.

@Test
public void testAddParentsGetParents() throws Exception {
    DFSPacket p = new DFSPacket(null, maxChunksPerPacket, 0, 0, checksumSize, false);
    SpanId[] parents = p.getTraceParents();
    Assert.assertEquals(0, parents.length);
    p.addTraceParent(new SpanId(0, 123));
    p.addTraceParent(new SpanId(0, 123));
    parents = p.getTraceParents();
    Assert.assertEquals(1, parents.length);
    Assert.assertEquals(new SpanId(0, 123), parents[0]);
    // test calling 'get' again.
    parents = p.getTraceParents();
    Assert.assertEquals(1, parents.length);
    Assert.assertEquals(new SpanId(0, 123), parents[0]);
    p.addTraceParent(new SpanId(0, 1));
    p.addTraceParent(new SpanId(0, 456));
    p.addTraceParent(new SpanId(0, 789));
    parents = p.getTraceParents();
    Assert.assertEquals(4, parents.length);
    Assert.assertEquals(new SpanId(0, 1), parents[0]);
    Assert.assertEquals(new SpanId(0, 123), parents[1]);
    Assert.assertEquals(new SpanId(0, 456), parents[2]);
    Assert.assertEquals(new SpanId(0, 789), parents[3]);
}
Also used: SpanId (org.apache.htrace.core.SpanId), Test (org.junit.Test)
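
The assertions imply that DFSPacket stores trace parents deduplicated and that getTraceParents() returns them in ascending order. The sketch below is hypothetical, not the actual DFSPacket code; it reproduces that behavior with a sorted set ordered on the two SpanId halves.

import java.util.Comparator;
import java.util.TreeSet;
import org.apache.htrace.core.SpanId;

// Hypothetical sketch of the behavior the test asserts (not the actual
// DFSPacket implementation): duplicates are dropped and getTraceParents()
// returns the ids in ascending order.
class TraceParents {

    private final TreeSet<SpanId> parents = new TreeSet<>(
            Comparator.<SpanId>comparingLong(SpanId::getHigh).thenComparingLong(SpanId::getLow));

    void addTraceParent(SpanId id) {
        // a TreeSet ignores duplicates, matching the "added twice, stored once" assertions
        parents.add(id);
    }

    SpanId[] getTraceParents() {
        // TreeSet iteration order is ascending, matching the ordering the test expects
        return parents.toArray(new SpanId[0]);
    }
}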

Example 3 with SpanId

Use of org.apache.htrace.core.SpanId in project hadoop by apache.

Class DataStreamer, method run.

/*
   * streamer thread is the only thread that opens streams to datanode,
   * and closes them. Any error recovery is also done by this thread.
   */
@Override
public void run() {
    long lastPacket = Time.monotonicNow();
    TraceScope scope = null;
    while (!streamerClosed && dfsClient.clientRunning) {
        // if the Responder encountered an error, shutdown Responder
        if (errorState.hasError()) {
            closeResponder();
        }
        DFSPacket one;
        try {
            // process datanode IO errors if any
            boolean doSleep = processDatanodeOrExternalError();
            final int halfSocketTimeout = dfsClient.getConf().getSocketTimeout() / 2;
            synchronized (dataQueue) {
                // wait for a packet to be sent.
                long now = Time.monotonicNow();
                while ((!shouldStop() && dataQueue.size() == 0
                        && (stage != BlockConstructionStage.DATA_STREAMING
                            || stage == BlockConstructionStage.DATA_STREAMING
                               && now - lastPacket < halfSocketTimeout))
                        || doSleep) {
                    long timeout = halfSocketTimeout - (now - lastPacket);
                    timeout = timeout <= 0 ? 1000 : timeout;
                    timeout = (stage == BlockConstructionStage.DATA_STREAMING) ? timeout : 1000;
                    try {
                        dataQueue.wait(timeout);
                    } catch (InterruptedException e) {
                        LOG.warn("Caught exception", e);
                    }
                    doSleep = false;
                    now = Time.monotonicNow();
                }
                if (shouldStop()) {
                    continue;
                }
                // get packet to be sent.
                if (dataQueue.isEmpty()) {
                    one = createHeartbeatPacket();
                } else {
                    try {
                        backOffIfNecessary();
                    } catch (InterruptedException e) {
                        LOG.warn("Caught exception", e);
                    }
                    // regular data packet
                    one = dataQueue.getFirst();
                    SpanId[] parents = one.getTraceParents();
                    if (parents.length > 0) {
                        scope = dfsClient.getTracer().newScope("dataStreamer", parents[0]);
                        scope.getSpan().setParents(parents);
                    }
                }
            }
            // get new block from namenode.
            if (LOG.isDebugEnabled()) {
                LOG.debug("stage=" + stage + ", " + this);
            }
            if (stage == BlockConstructionStage.PIPELINE_SETUP_CREATE) {
                LOG.debug("Allocating new block: {}", this);
                setPipeline(nextBlockOutputStream());
                initDataStreaming();
            } else if (stage == BlockConstructionStage.PIPELINE_SETUP_APPEND) {
                LOG.debug("Append to block {}", block);
                setupPipelineForAppendOrRecovery();
                if (streamerClosed) {
                    continue;
                }
                initDataStreaming();
            }
            long lastByteOffsetInBlock = one.getLastByteOffsetBlock();
            if (lastByteOffsetInBlock > stat.getBlockSize()) {
                throw new IOException("BlockSize " + stat.getBlockSize() + " < lastByteOffsetInBlock, " + this + ", " + one);
            }
            if (one.isLastPacketInBlock()) {
                // wait until all data packets have been successfully acked
                synchronized (dataQueue) {
                    while (!shouldStop() && ackQueue.size() != 0) {
                        try {
                            // wait for acks to arrive from datanodes
                            dataQueue.wait(1000);
                        } catch (InterruptedException e) {
                            LOG.warn("Caught exception", e);
                        }
                    }
                }
                if (shouldStop()) {
                    continue;
                }
                stage = BlockConstructionStage.PIPELINE_CLOSE;
            }
            // send the packet
            SpanId spanId = SpanId.INVALID;
            synchronized (dataQueue) {
                // move packet from dataQueue to ackQueue
                if (!one.isHeartbeatPacket()) {
                    if (scope != null) {
                        spanId = scope.getSpanId();
                        scope.detach();
                        one.setTraceScope(scope);
                    }
                    scope = null;
                    dataQueue.removeFirst();
                    ackQueue.addLast(one);
                    packetSendTime.put(one.getSeqno(), Time.monotonicNow());
                    dataQueue.notifyAll();
                }
            }
            LOG.debug("{} sending {}", this, one);
            // write out data to remote datanode
            try (TraceScope ignored = dfsClient.getTracer().newScope("DataStreamer#writeTo", spanId)) {
                one.writeTo(blockStream);
                blockStream.flush();
            } catch (IOException e) {
                // HDFS-3398 treat primary DN as down since client is unable to
                // write to primary DN. If a failed or restarting node has already
                // been recorded by the responder, the following call will have no
                // effect. Pipeline recovery can handle only one node error at a
                // time. If the primary node fails again during the recovery, it
                // will be taken out then.
                errorState.markFirstNodeIfNotMarked();
                throw e;
            }
            lastPacket = Time.monotonicNow();
            // update bytesSent
            long tmpBytesSent = one.getLastByteOffsetBlock();
            if (bytesSent < tmpBytesSent) {
                bytesSent = tmpBytesSent;
            }
            if (shouldStop()) {
                continue;
            }
            // Is this block full?
            if (one.isLastPacketInBlock()) {
                // wait until the close packet has been acked
                synchronized (dataQueue) {
                    while (!shouldStop() && ackQueue.size() != 0) {
                        // wait for acks to arrive from datanodes
                        dataQueue.wait(1000);
                    }
                }
                if (shouldStop()) {
                    continue;
                }
                endBlock();
            }
            if (progress != null) {
                progress.progress();
            }
            // This is used by unit tests to trigger race conditions.
            if (artificialSlowdown != 0 && dfsClient.clientRunning) {
                Thread.sleep(artificialSlowdown);
            }
        } catch (Throwable e) {
            // Log warning if there was a real error.
            if (!errorState.isRestartingNode()) {
                // Quota exception messages are descriptive enough, so do not
                // log a verbose stack-trace WARN for quota exceptions.
                if (e instanceof QuotaExceededException) {
                    LOG.debug("DataStreamer Quota Exception", e);
                } else {
                    LOG.warn("DataStreamer Exception", e);
                }
            }
            lastException.set(e);
            assert !(e instanceof NullPointerException);
            errorState.setInternalError();
            if (!errorState.isNodeMarked()) {
                // Not a datanode issue
                streamerClosed = true;
            }
        } finally {
            if (scope != null) {
                scope.close();
                scope = null;
            }
        }
    }
    closeInternal();
}
Also used: QuotaExceededException (org.apache.hadoop.hdfs.protocol.QuotaExceededException), TraceScope (org.apache.htrace.core.TraceScope), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), MultipleIOException (org.apache.hadoop.io.MultipleIOException), SpanId (org.apache.htrace.core.SpanId)
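
Before the packet is written to the pipeline, run() detaches the TraceScope and stashes it on the packet with setTraceScope(), so the span can be finished on whatever thread processes the ack. The snippet below is a hedged sketch of that ack-side counterpart, not the actual ResponseProcessor code; it assumes DFSPacket exposes a matching getTraceScope() and that TraceScope provides reattach() for binding a detached scope to the current thread.

// Hypothetical ack-side counterpart (assumes DFSPacket#getTraceScope and
// TraceScope#reattach): once the datanodes have acked the packet, re-bind the
// detached scope to this thread and close it so the dataStreamer span ends.
DFSPacket acked = ackQueue.getFirst();
TraceScope packetScope = acked.getTraceScope();
if (packetScope != null) {
    packetScope.reattach();     // attach the detached scope to the ack-processing thread
    packetScope.close();        // closing it finishes the span started in run()
    acked.setTraceScope(null);  // drop the reference so the scope is not closed twice
}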

Example 4 with SpanId

Use of org.apache.htrace.core.SpanId in project hadoop by apache.

Class Receiver, method continueTraceSpan.

private TraceScope continueTraceSpan(DataTransferTraceInfoProto proto, String description) {
    TraceScope scope = null;
    SpanId spanId = fromProto(proto);
    if (spanId != null) {
        scope = tracer.newScope(description, spanId);
    }
    return scope;
}
Also used: TraceScope (org.apache.htrace.core.TraceScope), SpanId (org.apache.htrace.core.SpanId)
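
fromProto is the inverse of the encoding shown in Example 1. A minimal sketch of what it might look like is given below; the actual Receiver implementation may differ, and it assumes proto2-style hasTraceId()/getTraceId()/getParentId() accessors on DataTransferTraceInfoProto.

// Hedged sketch of fromProto (the real Receiver code may differ): return null
// when no trace info was sent, otherwise rebuild the SpanId from the
// traceId/parentId longs the sender packed into the proto.
private static SpanId fromProto(DataTransferTraceInfoProto proto) {
    if (proto == null || !proto.hasTraceId()) {
        return null;
    }
    return new SpanId(proto.getTraceId(), proto.getParentId());
}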

Example 5 with SpanId

Use of org.apache.htrace.core.SpanId in project cxf by apache.

Class HTraceTracingCustomHeadersTest, method testThatNewSpanIsCreated.

@Test
public void testThatNewSpanIsCreated() {
    final SpanId spanId = SpanId.fromRandom();
    final Response r = createWebClient("/bookstore/books").header(CUSTOM_HEADER_SPAN_ID, spanId.toString()).get();
    assertEquals(Status.OK.getStatusCode(), r.getStatus());
    assertThat((String) r.getHeaders().getFirst(CUSTOM_HEADER_SPAN_ID), equalTo(spanId.toString()));
}
Also used: Response (javax.ws.rs.core.Response), SpanId (org.apache.htrace.core.SpanId), Test (org.junit.Test)
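
The test propagates the span id as plain text through a custom HTTP header and checks that the service echoes it back unchanged. A small sketch of the text round trip follows, assuming SpanId.fromString() parses the format produced by toString():

// Hedged sketch: the span id survives the text round trip through the header.
// Assumes SpanId.fromString() accepts the output of SpanId.toString().
SpanId original = SpanId.fromRandom();
SpanId parsed = SpanId.fromString(original.toString());
assertEquals(original, parsed);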

Aggregations

SpanId (org.apache.htrace.core.SpanId): 16
Test (org.junit.Test): 9
Response (javax.ws.rs.core.Response): 7
TraceScope (org.apache.htrace.core.TraceScope): 3
IOException (java.io.IOException): 1
InterruptedIOException (java.io.InterruptedIOException): 1
HashMap (java.util.HashMap): 1
List (java.util.List): 1
BookStoreService (org.apache.cxf.systest.jaxws.tracing.BookStoreService): 1
QuotaExceededException (org.apache.hadoop.hdfs.protocol.QuotaExceededException): 1
BaseHeaderProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BaseHeaderProto): 1
ReleaseShortCircuitAccessRequestProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ReleaseShortCircuitAccessRequestProto): 1
ShortCircuitShmRequestProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmRequestProto): 1
MultipleIOException (org.apache.hadoop.io.MultipleIOException): 1