Example 1 with DataTransferThrottler

Use of org.apache.hadoop.hdfs.util.DataTransferThrottler in project hadoop by apache.

The class TestBlockReplacement, method testThrottler:

@Test
public void testThrottler() throws IOException {
    Configuration conf = new HdfsConfiguration();
    FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
    long bandwidthPerSec = 1024 * 1024L;
    final long TOTAL_BYTES = 6 * bandwidthPerSec;
    long bytesToSend = TOTAL_BYTES;
    long start = Time.monotonicNow();
    DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
    // 0.5MB
    long bytesSent = 1024 * 512L;
    throttler.throttle(bytesSent);
    bytesToSend -= bytesSent;
    // 0.75MB
    bytesSent = 1024 * 768L;
    throttler.throttle(bytesSent);
    bytesToSend -= bytesSent;
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ignored) {
    }
    throttler.throttle(bytesToSend);
    long end = Time.monotonicNow();
    assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler), Test (org.junit.Test)
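The test above depends on throttle() blocking once the per-period byte budget is spent, which is what keeps the measured rate at or under bandwidthPerSec. A minimal, self-contained sketch of that token-bucket idea follows; the class name SimpleThrottler and the 500 ms period are illustrative choices, not Hadoop's actual DataTransferThrottler implementation.

```java
// Minimal token-bucket sketch of bandwidth throttling (illustrative only;
// Hadoop's real implementation is org.apache.hadoop.hdfs.util.DataTransferThrottler).
public class SimpleThrottler {
    private final long bytesPerPeriod; // byte budget per accounting period
    private final long periodMillis;   // length of one accounting period
    private long curPeriodStart;       // start time of the current period
    private long curReserve;           // bytes still allowed in this period

    public SimpleThrottler(long bandwidthPerSec) {
        this.periodMillis = 500; // assumed period length for this sketch
        this.bytesPerPeriod = bandwidthPerSec * periodMillis / 1000;
        this.curPeriodStart = System.currentTimeMillis();
        this.curReserve = bytesPerPeriod;
    }

    /** Blocks until sending numBytes stays within the configured bandwidth. */
    public synchronized void throttle(long numBytes) throws InterruptedException {
        curReserve -= numBytes;
        while (curReserve <= 0) {
            long periodEnd = curPeriodStart + periodMillis;
            long now = System.currentTimeMillis();
            if (now < periodEnd) {
                Thread.sleep(periodEnd - now); // wait out the rest of the period
            }
            curPeriodStart = periodEnd;
            curReserve += bytesPerPeriod;      // refill the budget for the next period
        }
    }
}
```

With a 1 MB/s cap, a request to send 1.5 MB forces roughly 1.5 seconds of sleeping, which is why the elapsed-time assertion at the end of testThrottler holds.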

Example 2 with DataTransferThrottler

Use of org.apache.hadoop.hdfs.util.DataTransferThrottler in project hadoop by apache.

The class ImageServlet, method getThrottlerForBootstrapStandby:

private static DataTransferThrottler getThrottlerForBootstrapStandby(Configuration conf) {
    long transferBandwidth = conf.getLong(DFSConfigKeys.DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY, DFSConfigKeys.DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_DEFAULT);
    DataTransferThrottler throttler = null;
    if (transferBandwidth > 0) {
        throttler = new DataTransferThrottler(transferBandwidth);
    }
    return throttler;
}
Also used: DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler)
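This example (and Example 4 below) follows the same pattern: read a rate from configuration, and return null rather than a throttler when the rate is unset or non-positive, so null means "unthrottled" and callers must accept it. A standalone sketch of that pattern, using plain java.util.Properties in place of Hadoop's Configuration (the class and method names here are illustrative):

```java
import java.util.Properties;

// Sketch of the "optional throttler from configuration" pattern above,
// with java.util.Properties standing in for Hadoop's Configuration.
public class ThrottlerConfig {
    /**
     * Returns the configured transfer rate, or null when the rate is unset
     * or non-positive. Null means "do not throttle", so callers must handle
     * a null result, as the servlets above do.
     */
    public static Long getTransferRateOrNull(Properties conf, String key, long dflt) {
        long rate = Long.parseLong(conf.getProperty(key, Long.toString(dflt)));
        return rate > 0 ? rate : null;
    }
}
```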

Example 3 with DataTransferThrottler

Use of org.apache.hadoop.hdfs.util.DataTransferThrottler in project hadoop by apache.

The class GetJournalEditServlet, method doGet:

@Override
public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
    FileInputStream editFileIn = null;
    try {
        final ServletContext context = getServletContext();
        final Configuration conf = (Configuration) getServletContext().getAttribute(JspHelper.CURRENT_CONF);
        final String journalId = request.getParameter(JOURNAL_ID_PARAM);
        QuorumJournalManager.checkJournalId(journalId);
        final JNStorage storage = JournalNodeHttpServer.getJournalFromContext(context, journalId).getStorage();
        // Check security
        if (!checkRequestorOrSendError(conf, request, response)) {
            return;
        }
        // Check that the namespace info is correct
        if (!checkStorageInfoOrSendError(storage, request, response)) {
            return;
        }
        long segmentTxId = ServletUtil.parseLongParam(request, SEGMENT_TXID_PARAM);
        FileJournalManager fjm = storage.getJournalManager();
        File editFile;
        synchronized (fjm) {
            // Synchronize on the FJM so that the file doesn't get finalized
            // out from underneath us while we're in the process of opening
            // it up.
            EditLogFile elf = fjm.getLogFile(segmentTxId);
            if (elf == null) {
                response.sendError(HttpServletResponse.SC_NOT_FOUND, "No edit log found starting at txid " + segmentTxId);
                return;
            }
            editFile = elf.getFile();
            ImageServlet.setVerificationHeadersForGet(response, editFile);
            ImageServlet.setFileNameHeaders(response, editFile);
            editFileIn = new FileInputStream(editFile);
        }
        DataTransferThrottler throttler = ImageServlet.getThrottler(conf);
        // send edits
        TransferFsImage.copyFileToStream(response.getOutputStream(), editFile, editFileIn, throttler);
    } catch (Throwable t) {
        String errMsg = "getedit failed. " + StringUtils.stringifyException(t);
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, errMsg);
        throw new IOException(errMsg);
    } finally {
        IOUtils.closeStream(editFileIn);
    }
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile), ServletContext (javax.servlet.ServletContext), FileJournalManager (org.apache.hadoop.hdfs.server.namenode.FileJournalManager), DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler), IOException (java.io.IOException), File (java.io.File), FileInputStream (java.io.FileInputStream)
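Both servlets hand the file stream and the (possibly null) throttler to TransferFsImage.copyFileToStream. A sketch of the throttled copy loop such a helper presumably runs: write a chunk, then let the throttler block until the transfer is back under the bandwidth cap. The Throttle interface here is illustrative, not Hadoop's API; a null throttler disables throttling, mirroring the null convention above.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative throttled copy loop; not Hadoop's TransferFsImage.
public class ThrottledCopy {
    public interface Throttle {
        void acquire(long bytes) throws InterruptedException;
    }

    public static long copy(InputStream in, OutputStream out, Throttle throttler)
            throws IOException, InterruptedException {
        byte[] buf = new byte[64 * 1024]; // chunk size is an arbitrary choice here
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            if (throttler != null) {
                throttler.acquire(n); // block until this chunk fits the budget
            }
            total += n;
        }
        return total;
    }
}
```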

Example 4 with DataTransferThrottler

Use of org.apache.hadoop.hdfs.util.DataTransferThrottler in project hadoop by apache.

The class JournalNodeSyncer, method getThrottler:

private static DataTransferThrottler getThrottler(Configuration conf) {
    long transferBandwidth = conf.getLong(DFSConfigKeys.DFS_EDIT_LOG_TRANSFER_RATE_KEY, DFSConfigKeys.DFS_EDIT_LOG_TRANSFER_RATE_DEFAULT);
    DataTransferThrottler throttler = null;
    if (transferBandwidth > 0) {
        throttler = new DataTransferThrottler(transferBandwidth);
    }
    return throttler;
}
Also used: DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler)

Example 5 with DataTransferThrottler

Use of org.apache.hadoop.hdfs.util.DataTransferThrottler in project hadoop by apache.

The class ImageServlet, method doGet:

@Override
public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
    try {
        final ServletContext context = getServletContext();
        final FSImage nnImage = NameNodeHttpServer.getFsImageFromContext(context);
        final GetImageParams parsedParams = new GetImageParams(request, response);
        final Configuration conf = (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
        final NameNodeMetrics metrics = NameNode.getNameNodeMetrics();
        validateRequest(context, conf, request, response, nnImage, parsedParams.getStorageInfoString());
        UserGroupInformation.getCurrentUser().doAs(new PrivilegedExceptionAction<Void>() {

            @Override
            public Void run() throws Exception {
                if (parsedParams.isGetImage()) {
                    long txid = parsedParams.getTxId();
                    File imageFile = null;
                    String errorMessage = "Could not find image";
                    if (parsedParams.shouldFetchLatest()) {
                        imageFile = nnImage.getStorage().getHighestFsImageName();
                    } else {
                        errorMessage += " with txid " + txid;
                        imageFile = nnImage.getStorage().getFsImage(txid, EnumSet.of(NameNodeFile.IMAGE, NameNodeFile.IMAGE_ROLLBACK));
                    }
                    if (imageFile == null) {
                        throw new IOException(errorMessage);
                    }
                    CheckpointFaultInjector.getInstance().beforeGetImageSetsHeaders();
                    long start = monotonicNow();
                    serveFile(imageFile);
                    if (metrics != null) {
                        // Metrics non-null only when used inside name node
                        long elapsed = monotonicNow() - start;
                        metrics.addGetImage(elapsed);
                    }
                } else if (parsedParams.isGetEdit()) {
                    long startTxId = parsedParams.getStartTxId();
                    long endTxId = parsedParams.getEndTxId();
                    File editFile = nnImage.getStorage().findFinalizedEditsFile(startTxId, endTxId);
                    long start = monotonicNow();
                    serveFile(editFile);
                    if (metrics != null) {
                        // Metrics non-null only when used inside name node
                        long elapsed = monotonicNow() - start;
                        metrics.addGetEdit(elapsed);
                    }
                }
                return null;
            }

            private void serveFile(File file) throws IOException {
                FileInputStream fis = new FileInputStream(file);
                try {
                    setVerificationHeadersForGet(response, file);
                    setFileNameHeaders(response, file);
                    if (!file.exists()) {
                        // Potential race: the file may have been deleted while
                        // we were in the process of setting headers!
                        throw new FileNotFoundException(file.toString());
                        // It's possible the file could be deleted after this point, but
                        // we've already opened the 'fis' stream.
                        // It's also possible length could change, but this would be
                        // detected by the client side as an inaccurate length header.
                    }
                    // send file
                    DataTransferThrottler throttler = parsedParams.isBootstrapStandby ? getThrottlerForBootstrapStandby(conf) : getThrottler(conf);
                    TransferFsImage.copyFileToStream(response.getOutputStream(), file, fis, throttler);
                } finally {
                    IOUtils.closeStream(fis);
                }
            }
        });
    } catch (Throwable t) {
        String errMsg = "GetImage failed. " + StringUtils.stringifyException(t);
        response.sendError(HttpServletResponse.SC_GONE, errMsg);
        throw new IOException(errMsg);
    } finally {
        response.getOutputStream().close();
    }
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler), ServletException (javax.servlet.ServletException), ServletContext (javax.servlet.ServletContext), NameNodeMetrics (org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics), NameNodeFile (org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile)

Aggregations

DataTransferThrottler (org.apache.hadoop.hdfs.util.DataTransferThrottler): 6 uses
Configuration (org.apache.hadoop.conf.Configuration): 3 uses
ServletContext (javax.servlet.ServletContext): 2 uses
File (java.io.File): 1 use
FileInputStream (java.io.FileInputStream): 1 use
IOException (java.io.IOException): 1 use
ServletException (javax.servlet.ServletException): 1 use
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1 use
FileJournalManager (org.apache.hadoop.hdfs.server.namenode.FileJournalManager): 1 use
EditLogFile (org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile): 1 use
NameNodeFile (org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile): 1 use
NameNodeMetrics (org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics): 1 use
Test (org.junit.Test): 1 use