
Example 16 with Size

use of com.github.joschi.jadconfig.util.Size in project graylog2-server by Graylog2.

In the class LocalKafkaJournalTest, the method maxMessageSize:

@Test
public void maxMessageSize() throws Exception {
    final Size segmentSize = Size.kilobytes(1L);
    final LocalKafkaJournal journal = new LocalKafkaJournal(journalDirectory.toPath(),
            scheduler,
            segmentSize,
            Duration.standardHours(1),
            Size.kilobytes(10L),
            Duration.standardDays(1),
            1_000_000,
            Duration.standardMinutes(1),
            100,
            new MetricRegistry(),
            serverStatus);
    long size = 0L;
    long maxSize = segmentSize.toBytes();
    final List<Journal.Entry> list = Lists.newArrayList();
    // Build a message that is twice the segment size and therefore exceeds the max message size
    final String largeMessage1 = randomAlphanumeric(Ints.saturatedCast(segmentSize.toBytes() * 2));
    list.add(journal.createEntry(randomAlphanumeric(6).getBytes(UTF_8), largeMessage1.getBytes(UTF_8)));
    final byte[] idBytes0 = randomAlphanumeric(6).getBytes(UTF_8);
    // Build a message that has exactly the max segment size
    final String largeMessage2 = randomAlphanumeric(Ints.saturatedCast(segmentSize.toBytes() - MessageSet.LogOverhead() - Message.MessageOverhead() - idBytes0.length));
    list.add(journal.createEntry(idBytes0, largeMessage2.getBytes(UTF_8)));
    // Fill the batch with small messages until more than one segment's worth of data is queued
    while (size <= maxSize) {
        final byte[] idBytes = randomAlphanumeric(6).getBytes(UTF_8);
        final byte[] messageBytes = "the-message".getBytes(UTF_8);
        size += idBytes.length + messageBytes.length;
        list.add(journal.createEntry(idBytes, messageBytes));
    }
    // Make sure all messages except the two large ones have been written
    assertThat(journal.write(list)).isEqualTo(list.size() - 2);
}
Also used : Size(com.github.joschi.jadconfig.util.Size) MetricRegistry(com.codahale.metrics.MetricRegistry) Test(org.junit.Test)
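The payload of the boundary message above is derived from the segment budget: the segment size in bytes minus Kafka's per-entry log and message overhead minus the length of the id. A minimal standalone sketch of that arithmetic, with the overhead values stubbed as placeholder assumptions (the test reads the real numbers from MessageSet.LogOverhead() and Message.MessageOverhead(), which depend on the Kafka message format):

import com.github.joschi.jadconfig.util.Size;

public class SegmentBudgetSketch {
    public static void main(String[] args) {
        // Same segment size the test uses.
        final Size segmentSize = Size.kilobytes(1L);
        // Placeholder overhead values (assumptions, not the real constants).
        final int logOverhead = 12;
        final int messageOverhead = 26;
        final int idLength = 6; // matches randomAlphanumeric(6) in the test
        // Largest payload that still fits into a single log segment entry.
        final long maxPayload = segmentSize.toBytes() - logOverhead - messageOverhead - idLength;
        System.out.println("segment bytes = " + segmentSize.toBytes());
        System.out.println("max payload   = " + maxPayload);
    }
}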

Example 17 with Size

use of com.github.joschi.jadconfig.util.Size in project graylog2-server by Graylog2.

In the class LocalKafkaJournalTest, the method segmentRotation:

@Test
public void segmentRotation() throws Exception {
    final Size segmentSize = Size.kilobytes(1L);
    final LocalKafkaJournal journal = new LocalKafkaJournal(journalDirectory.toPath(),
            scheduler,
            segmentSize,
            Duration.standardHours(1),
            Size.kilobytes(10L),
            Duration.standardDays(1),
            1_000_000,
            Duration.standardMinutes(1),
            100,
            new MetricRegistry(),
            serverStatus);
    // Write enough data to fill three journal segments (see the helper sketch after this example)
    createBulkChunks(journal, segmentSize, 3);
    final File[] files = journalDirectory.listFiles();
    assertNotNull(files);
    assertTrue("there should be files in the journal directory", files.length > 0);
    final File[] messageJournalDir = journalDirectory.listFiles((FileFilter) and(directoryFileFilter(), nameFileFilter("messagejournal-0")));
    assertEquals(1, messageJournalDir.length);
    final File[] logFiles = messageJournalDir[0].listFiles((FileFilter) and(fileFileFilter(), suffixFileFilter(".log")));
    assertEquals("should have three journal segments", 3, logFiles.length);
}
Also used : Size(com.github.joschi.jadconfig.util.Size) MetricRegistry(com.codahale.metrics.MetricRegistry) File(java.io.File) Test(org.junit.Test)
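createBulkChunks(...) is a test helper from LocalKafkaJournalTest that this listing does not show. A hypothetical reconstruction of what such a helper could look like, written as if it sat next to the test above so that LocalKafkaJournal, Journal, List, Lists, randomAlphanumeric and UTF_8 are already in scope; the real helper may differ:

private void createBulkChunks(LocalKafkaJournal journal, Size segmentSize, int chunkCount) {
    for (int chunk = 0; chunk < chunkCount; chunk++) {
        final List<Journal.Entry> entries = Lists.newArrayList();
        long bytes = 0L;
        // Queue slightly more than one segment's worth of entries per chunk so that
        // each write forces the journal to roll over to a new log segment.
        while (bytes <= segmentSize.toBytes()) {
            final byte[] idBytes = randomAlphanumeric(6).getBytes(UTF_8);
            final byte[] messageBytes = randomAlphanumeric(100).getBytes(UTF_8);
            bytes += idBytes.length + messageBytes.length;
            entries.add(journal.createEntry(idBytes, messageBytes));
        }
        journal.write(entries);
    }
}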

Example 18 with Size

use of com.github.joschi.jadconfig.util.Size in project graylog2-server by Graylog2.

In the class TrafficCounterCalculator, the method doRun:

@Override
public void doRun() {
    final DateTime now = Tools.nowUTC();
    final int secondOfMinute = now.getSecondOfMinute();
    // at the top of every minute, flush the traffic counted during the previous minute
    if (secondOfMinute == 0) {
        LOG.trace("Calculating input and output traffic for the previous minute");
        final long currentInputBytes = inputCounter.getCount();
        final long currentOutputBytes = outputCounter.getCount();
        final long currentDecodedBytes = decodedCounter.getCount();
        final long inputLastMinute = currentInputBytes - previousInputBytes;
        previousInputBytes = currentInputBytes;
        final long outputBytesLastMinute = currentOutputBytes - previousOutputBytes;
        previousOutputBytes = currentOutputBytes;
        final long decodedBytesLastMinute = currentDecodedBytes - previousDecodedBytes;
        previousDecodedBytes = currentDecodedBytes;
        if (LOG.isDebugEnabled()) {
            final Size in = Size.bytes(inputLastMinute);
            final Size out = Size.bytes(outputBytesLastMinute);
            final Size decoded = Size.bytes(decodedBytesLastMinute);
            LOG.debug("Traffic in the last minute: in: {} bytes ({} MB), out: {} bytes ({} MB), decoded: {} bytes ({} MB)", in, in.toMegabytes(), out, out.toMegabytes(), decoded, decoded.toMegabytes());
        }
        final DateTime previousMinute = now.minusMinutes(1);
        trafficService.updateTraffic(previousMinute, nodeId, inputLastMinute, outputBytesLastMinute, decodedBytesLastMinute);
    }
}
Also used : Size(com.github.joschi.jadconfig.util.Size) DateTime(org.joda.time.DateTime)
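doRun() keeps the previous counter readings and reports only the per-minute delta; Size.bytes(...) is used purely to make the debug log human readable. A small, self-contained sketch of that delta-and-format pattern outside the scheduler context (the counter values in main are invented for illustration, and this is not Graylog's API, just the same pattern in isolation):

import com.github.joschi.jadconfig.util.Size;

public class TrafficDeltaSketch {
    private long previousInputBytes = 0L;

    // Compute the bytes counted since the last flush from a monotonically
    // increasing counter value and print it the way the calculator logs it.
    long flushMinute(long currentInputBytes) {
        final long inputLastMinute = currentInputBytes - previousInputBytes;
        previousInputBytes = currentInputBytes;
        final Size in = Size.bytes(inputLastMinute);
        System.out.println("in: " + in.toBytes() + " bytes (" + in.toMegabytes() + " MB)");
        return inputLastMinute;
    }

    public static void main(String[] args) {
        final TrafficDeltaSketch sketch = new TrafficDeltaSketch();
        sketch.flushMinute(5_242_880L); // counter after minute one -> 5 MB delta
        sketch.flushMinute(7_340_032L); // counter after minute two -> 2 MB delta
    }
}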

Aggregations

Size (com.github.joschi.jadconfig.util.Size) 18
MetricRegistry (com.codahale.metrics.MetricRegistry) 17
Test (org.junit.Test) 17
File (java.io.File) 9
InstantMillisProvider (org.graylog2.plugin.InstantMillisProvider) 2
FileOutputStream (java.io.FileOutputStream) 1
FileChannel (java.nio.channels.FileChannel) 1
Path (java.nio.file.Path) 1
LogSegment (kafka.log.LogSegment) 1
LogSegment (org.graylog.shaded.kafka09.log.LogSegment) 1
DateTime (org.joda.time.DateTime) 1