
Example 1 with LogSegment

Use of kafka.log.LogSegment in project graylog2-server by Graylog2.

From class JournalShow, method appendSegmentDetails:

private void appendSegmentDetails(KafkaJournal journal, StringBuffer sb) {
    final Iterable<LogSegment> segments = journal.getSegments();
    int i = 1;
    for (LogSegment segment : segments) {
        sb.append("\t\t").append("Segment ").append(i++).append("\n");
        sb.append("\t\t\t").append("Base offset: ").append(segment.baseOffset()).append("\n");
        sb.append("\t\t\t").append("Size in bytes: ").append(segment.size()).append("\n");
        sb.append("\t\t\t").append("Created at: ").append(new DateTime(segment.created(), DateTimeZone.UTC)).append("\n");
        sb.append("\t\t\t").append("Last modified: ").append(new DateTime(segment.lastModified(), DateTimeZone.UTC)).append("\n");
    }
}
Also used: LogSegment (kafka.log.LogSegment), DateTime (org.joda.time.DateTime)
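The same iteration pattern works for ad-hoc inspection outside of JournalShow. A minimal sketch, assuming a KafkaJournal instance named journal is in scope (the variable name is illustrative, not from the original code):

// Sum the on-disk size of all journal segments.
long totalBytes = 0;
for (final LogSegment segment : journal.getSegments()) {
    // size() reports the byte size of the segment's log file
    totalBytes += segment.size();
}
System.out.println("journal occupies " + totalBytes + " bytes across its segments");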

Example 2 with LogSegment

Use of kafka.log.LogSegment in project graylog2-server by Graylog2.

From class KafkaJournal, method getLogStartOffset:

/**
 * Returns the first valid offset in the entire journal.
 *
 * @return first offset
 */
public long getLogStartOffset() {
    final Iterable<LogSegment> logSegments = JavaConversions.asJavaIterable(kafkaLog.logSegments());
    final LogSegment segment = Iterables.getFirst(logSegments, null);
    if (segment == null) {
        return 0;
    }
    return segment.baseOffset();
}
Also used: LogSegment (kafka.log.LogSegment)
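A typical consumer of getLogStartOffset() is offset validation after retention cleanup: if a reader's saved position falls before the first valid offset, the segments backing it are gone. A hedged sketch, assuming a KafkaJournal named journal and a previously persisted readOffset (the helper and variable names are illustrative):

long readOffset = loadSavedReadOffset(); // hypothetical helper, not part of KafkaJournal
final long firstOffset = journal.getLogStartOffset();
if (readOffset < firstOffset) {
    // The segments containing readOffset were already cleaned up; skip forward.
    readOffset = firstOffset;
}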

Example 3 with LogSegment

Use of kafka.log.LogSegment in project graylog2-server by Graylog2.

From class KafkaJournal, method setupKafkaLogMetrics:

private void setupKafkaLogMetrics(final MetricRegistry metricRegistry) {
    metricRegistry.register(name(KafkaJournal.class, "size"), (Gauge<Long>) kafkaLog::size);
    metricRegistry.register(name(KafkaJournal.class, "logEndOffset"), (Gauge<Long>) kafkaLog::logEndOffset);
    metricRegistry.register(name(KafkaJournal.class, "numberOfSegments"), (Gauge<Integer>) kafkaLog::numberOfSegments);
    metricRegistry.register(name(KafkaJournal.class, "unflushedMessages"), (Gauge<Long>) kafkaLog::unflushedMessages);
    metricRegistry.register(name(KafkaJournal.class, "recoveryPoint"), (Gauge<Long>) kafkaLog::recoveryPoint);
    metricRegistry.register(name(KafkaJournal.class, "lastFlushTime"), (Gauge<Long>) kafkaLog::lastFlushTime);
    // must not be a lambda, because the serialization cannot determine the proper Metric type :(
    metricRegistry.register(GlobalMetricNames.JOURNAL_OLDEST_SEGMENT, (Gauge<Date>) new Gauge<Date>() {

        @Override
        public Date getValue() {
            long oldestSegment = Long.MAX_VALUE;
            for (final LogSegment segment : KafkaJournal.this.getSegments()) {
                oldestSegment = Math.min(oldestSegment, segment.created());
            }
            return new Date(oldestSegment);
        }
    });
}
Also used: AtomicInteger (java.util.concurrent.atomic.AtomicInteger), LogSegment (kafka.log.LogSegment), AtomicLong (java.util.concurrent.atomic.AtomicLong), Date (java.util.Date), Gauge (com.codahale.metrics.Gauge)
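Gauges registered this way are pull-based: the registry invokes getValue() whenever a reporter reads the metric. They can also be read back by name through the standard Codahale API; a minimal sketch, assuming the same metricRegistry instance as above:

// name() is the static com.codahale.metrics.MetricRegistry.name helper used during registration.
final Gauge<?> sizeGauge = metricRegistry.getGauges().get(name(KafkaJournal.class, "size"));
System.out.println("journal size in bytes: " + sizeGauge.getValue());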

Example 4 with LogSegment

Use of kafka.log.LogSegment in project graylog2-server by Graylog2.

From class KafkaJournalTest, method segmentAgeCleanup:

@Test
public void segmentAgeCleanup() throws Exception {
    final InstantMillisProvider clock = new InstantMillisProvider(DateTime.now(DateTimeZone.UTC));
    DateTimeUtils.setCurrentMillisProvider(clock);
    try {
        final Size segmentSize = Size.kilobytes(1L);
        final KafkaJournal journal = new KafkaJournal(journalDirectory, scheduler, segmentSize, Duration.standardHours(1), Size.kilobytes(10L), Duration.standardMinutes(1), 1_000_000, Duration.standardMinutes(1), 100, new MetricRegistry(), serverStatus);
        final File messageJournalDir = new File(journalDirectory, "messagejournal-0");
        assertTrue(messageJournalDir.exists());
        // we need to fix up the last modified times of the actual files.
        long[] lastModifiedTs = new long[2];
        // create two chunks, 30 seconds apart
        createBulkChunks(journal, segmentSize, 1);
        journal.flushDirtyLogs();
        lastModifiedTs[0] = clock.getMillis();
        clock.tick(Period.seconds(30));
        createBulkChunks(journal, segmentSize, 1);
        journal.flushDirtyLogs();
        lastModifiedTs[1] = clock.getMillis();
        int i = 0;
        for (final LogSegment segment : journal.getSegments()) {
            assertTrue(i < 2);
            segment.lastModified_$eq(lastModifiedTs[i]);
            i++;
        }
        int cleanedLogs = journal.cleanupLogs();
        assertEquals("no segments should've been cleaned", cleanedLogs, 0);
        assertEquals("two segments segment should remain", countSegmentsInDir(messageJournalDir), 2);
        // move clock beyond the retention period and clean again
        clock.tick(Period.seconds(120));
        cleanedLogs = journal.cleanupLogs();
        assertEquals("two segments should've been cleaned (only one will actually be removed...)", cleanedLogs, 2);
        assertEquals("one segment should remain", countSegmentsInDir(messageJournalDir), 1);
    } finally {
        DateTimeUtils.setCurrentMillisSystem();
    }
}
Also used: LogSegment (kafka.log.LogSegment), Size (com.github.joschi.jadconfig.util.Size), InstantMillisProvider (org.graylog2.plugin.InstantMillisProvider), MetricRegistry (com.codahale.metrics.MetricRegistry), File (java.io.File), Test (org.junit.Test)
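The InstantMillisProvider above is Graylog's test clock: the test freezes Joda-Time's notion of "now", advances it deterministically with tick(), and restores the real clock in the finally block. Conceptually it is just an org.joda.time.DateTimeUtils.MillisProvider with a settable instant; a sketch of the pattern (class and method names here are illustrative, not the actual Graylog implementation):

class FakeClock implements DateTimeUtils.MillisProvider {
    private long millis;

    FakeClock(DateTime start) {
        this.millis = start.getMillis();
    }

    @Override
    public long getMillis() {
        return millis; // Joda-Time asks this provider for the current time
    }

    void tick(Period period) {
        // advance the frozen clock by the given period
        millis = new DateTime(millis).plus(period).getMillis();
    }
}

Installing it with DateTimeUtils.setCurrentMillisProvider(...) makes every DateTime.now() call in the code under test observe the fake instant, which is why segment age cleanup can be exercised without real waiting.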

Aggregations

LogSegment (kafka.log.LogSegment): 4
Gauge (com.codahale.metrics.Gauge): 1
MetricRegistry (com.codahale.metrics.MetricRegistry): 1
Size (com.github.joschi.jadconfig.util.Size): 1
File (java.io.File): 1
Date (java.util.Date): 1
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 1
AtomicLong (java.util.concurrent.atomic.AtomicLong): 1
InstantMillisProvider (org.graylog2.plugin.InstantMillisProvider): 1
DateTime (org.joda.time.DateTime): 1
Test (org.junit.Test): 1