
Example 1 with HdfsTaskLogs

Use of io.druid.storage.hdfs.tasklog.HdfsTaskLogs in project druid by druid-io.

From the class HdfsTaskLogsTest, method testStream.

@Test
public void testStream() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    Files.write("blah", logFile, Charsets.UTF_8);
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    taskLogs.pushTaskLog("foo", logFile);
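    // A non-negative offset reads from that position to the end of the log; a negative offset
    // reads the last |offset| bytes, clamped to the start (so -5 on the 4-byte log returns it all).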
    final Map<Long, String> expected = ImmutableMap.of(0L, "blah", 1L, "lah", -2L, "ah", -5L, "blah");
    for (Map.Entry<Long, String> entry : expected.entrySet()) {
        final String string = readLog(taskLogs, "foo", entry.getKey());
        Assert.assertEquals(String.format("Read with offset %,d", entry.getKey()), entry.getValue(), string);
    }
}
Also used : TaskLogs(io.druid.tasklogs.TaskLogs) HdfsTaskLogs(io.druid.storage.hdfs.tasklog.HdfsTaskLogs) Configuration(org.apache.hadoop.conf.Configuration) HdfsTaskLogsConfig(io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig) File(java.io.File) ImmutableMap(com.google.common.collect.ImmutableMap) Map(java.util.Map) Test(org.junit.Test)
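
All three examples on this page rely on a JUnit TemporaryFolder rule named tempFolder and a readLog helper that are defined elsewhere in HdfsTaskLogsTest but not shown here. A minimal sketch of both, assuming streamTaskLog returns a Guava Optional<ByteSource> (imports included for completeness):

import com.google.common.io.ByteSource;
import com.google.common.io.ByteStreams;
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;

@Rule
public final TemporaryFolder tempFolder = new TemporaryFolder();

private String readLog(TaskLogs taskLogs, String logId, long offset) throws Exception {
    // Open the stored log (assumed present) at the given offset and read it fully as UTF-8.
    return new String(
        ByteStreams.toByteArray(taskLogs.streamTaskLog(logId, offset).get().openStream()),
        Charsets.UTF_8
    );
}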

Example 2 with HdfsTaskLogs

Use of io.druid.storage.hdfs.tasklog.HdfsTaskLogs in project druid by druid-io.

From the class HdfsTaskLogsTest, method testOverwrite.

@Test
public void testOverwrite() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    Files.write("blah", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("foo", logFile);
    Assert.assertEquals("blah", readLog(taskLogs, "foo", 0));
    Files.write("blah blah", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("foo", logFile);
    Assert.assertEquals("blah blah", readLog(taskLogs, "foo", 0));
}
Also used : TaskLogs(io.druid.tasklogs.TaskLogs) HdfsTaskLogs(io.druid.storage.hdfs.tasklog.HdfsTaskLogs) Configuration(org.apache.hadoop.conf.Configuration) HdfsTaskLogsConfig(io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig) File(java.io.File) Test(org.junit.Test)
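
Where a full read is not needed, streamTaskLog can also be called directly with an offset. A minimal sketch, not part of the original test and again assuming the Guava Optional<ByteSource> return type (reusing the imports sketched above plus com.google.common.base.Optional), that reads only the last two bytes of the log pushed in the overwrite example:

Optional<ByteSource> tail = taskLogs.streamTaskLog("foo", -2);
if (tail.isPresent()) {
    // With "blah blah" stored for task "foo", this prints "ah".
    System.out.println(new String(ByteStreams.toByteArray(tail.get().openStream()), Charsets.UTF_8));
}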

Example 3 with HdfsTaskLogs

Use of io.druid.storage.hdfs.tasklog.HdfsTaskLogs in project druid by druid-io.

From the class HdfsTaskLogsTest, method testKill.

@Test
public void testKill() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    final Path logDirPath = new Path(logDir.toString());
    final FileSystem fs = logDirPath.getFileSystem(new Configuration());
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    Files.write("log1content", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("log1", logFile);
    Assert.assertEquals("log1content", readLog(taskLogs, "log1", 0));
    // File modification timestamps are kept only at second resolution, so an artificial delay
    // is needed to separate the two file creations by a timestamp that causes only one
    // of them to be deleted.
    Thread.sleep(1500);
    long time = (System.currentTimeMillis() / 1000) * 1000;
    Assert.assertTrue(fs.getFileStatus(new Path(logDirPath, "log1")).getModificationTime() < time);
    Files.write("log2content", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("log2", logFile);
    Assert.assertEquals("log2content", readLog(taskLogs, "log2", 0));
    Assert.assertTrue(fs.getFileStatus(new Path(logDirPath, "log2")).getModificationTime() >= time);
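    // Logs last modified before 'time' should be removed; newer logs should survive.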
    taskLogs.killOlderThan(time);
    Assert.assertFalse(taskLogs.streamTaskLog("log1", 0).isPresent());
    Assert.assertEquals("log2content", readLog(taskLogs, "log2", 0));
}
Also used : Path(org.apache.hadoop.fs.Path) TaskLogs(io.druid.tasklogs.TaskLogs) HdfsTaskLogs(io.druid.storage.hdfs.tasklog.HdfsTaskLogs) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem) HdfsTaskLogsConfig(io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig) File(java.io.File) Test(org.junit.Test)
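
Outside of a test, the same killOlderThan call can back a simple retention sweep. A minimal sketch with a hypothetical seven-day retention period (not taken from the Druid codebase):

// Delete HDFS task logs whose files were last modified more than seven days ago.
final long retentionMs = 7L * 24 * 60 * 60 * 1000;
taskLogs.killOlderThan(System.currentTimeMillis() - retentionMs);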

Aggregations

HdfsTaskLogs (io.druid.storage.hdfs.tasklog.HdfsTaskLogs): 3 uses
HdfsTaskLogsConfig (io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig): 3 uses
TaskLogs (io.druid.tasklogs.TaskLogs): 3 uses
File (java.io.File): 3 uses
Configuration (org.apache.hadoop.conf.Configuration): 3 uses
Test (org.junit.Test): 3 uses
ImmutableMap (com.google.common.collect.ImmutableMap): 1 use
Map (java.util.Map): 1 use
FileSystem (org.apache.hadoop.fs.FileSystem): 1 use
Path (org.apache.hadoop.fs.Path): 1 use