Example 1 with TaskLogs

Use of io.druid.tasklogs.TaskLogs in project druid by druid-io.

From class HdfsTaskLogsTest, method testStream.

@Test
public void testStream() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    Files.write("blah", logFile, Charsets.UTF_8);
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    taskLogs.pushTaskLog("foo", logFile);
    final Map<Long, String> expected = ImmutableMap.of(0L, "blah", 1L, "lah", -2L, "ah", -5L, "blah");
    for (Map.Entry<Long, String> entry : expected.entrySet()) {
        final String string = readLog(taskLogs, "foo", entry.getKey());
        Assert.assertEquals(String.format("Read with offset %,d", entry.getKey()), entry.getValue(), string);
    }
}
Also used: TaskLogs (io.druid.tasklogs.TaskLogs), HdfsTaskLogs (io.druid.storage.hdfs.tasklog.HdfsTaskLogs), HdfsTaskLogsConfig (io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig), Configuration (org.apache.hadoop.conf.Configuration), File (java.io.File), ImmutableMap (com.google.common.collect.ImmutableMap), Map (java.util.Map), Test (org.junit.Test)
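The readLog helper these assertions call is not shown on this page; judging from Example 5, which inlines the same logic, it presumably wraps taskLogs.streamTaskLog(taskId, offset).get().getInput() and decodes the bytes. The offset behavior the expected map encodes (a non-negative offset skips that many bytes from the start of the log; a negative offset reads the trailing |offset| bytes, clamped to the log's length) can be sketched standalone. readWithOffset is a hypothetical name for illustration, not part of Druid's API:

```java
import java.nio.charset.StandardCharsets;

public class OffsetReadSketch {
    // Models the offset handling the test's expected map implies:
    // 0 -> "blah", 1 -> "lah", -2 -> "ah", -5 (past the start) -> "blah".
    static String readWithOffset(byte[] log, long offset) {
        final int start;
        if (offset >= 0) {
            // Positive offset: skip from the beginning, clamped to the log length.
            start = (int) Math.min(offset, log.length);
        } else {
            // Negative offset: read the trailing |offset| bytes, clamped to the start.
            start = (int) Math.max(log.length + offset, 0);
        }
        return new String(log, start, log.length - start, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] log = "blah".getBytes(StandardCharsets.UTF_8);
        System.out.println(readWithOffset(log, 0));   // blah
        System.out.println(readWithOffset(log, 1));   // lah
        System.out.println(readWithOffset(log, -2));  // ah
        System.out.println(readWithOffset(log, -5));  // blah
    }
}
```

Note that an offset more negative than the log length (here -5 against a 4-byte log) clamps to the start, which is why the test expects the full "blah" back.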

Example 2 with TaskLogs

Use of io.druid.tasklogs.TaskLogs in project druid by druid-io.

From class HdfsTaskLogsTest, method testOverwrite.

@Test
public void testOverwrite() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    Files.write("blah", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("foo", logFile);
    Assert.assertEquals("blah", readLog(taskLogs, "foo", 0));
    Files.write("blah blah", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("foo", logFile);
    Assert.assertEquals("blah blah", readLog(taskLogs, "foo", 0));
}
Also used: TaskLogs (io.druid.tasklogs.TaskLogs), HdfsTaskLogs (io.druid.storage.hdfs.tasklog.HdfsTaskLogs), HdfsTaskLogsConfig (io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig), Configuration (org.apache.hadoop.conf.Configuration), File (java.io.File), Test (org.junit.Test)

Example 3 with TaskLogs

Use of io.druid.tasklogs.TaskLogs in project druid by druid-io.

From class HdfsTaskLogsTest, method testKill.

@Test
public void testKill() throws Exception {
    final File tmpDir = tempFolder.newFolder();
    final File logDir = new File(tmpDir, "logs");
    final File logFile = new File(tmpDir, "log");
    final Path logDirPath = new Path(logDir.toString());
    final FileSystem fs = logDirPath.getFileSystem(new Configuration());
    final TaskLogs taskLogs = new HdfsTaskLogs(new HdfsTaskLogsConfig(logDir.toString()), new Configuration());
    Files.write("log1content", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("log1", logFile);
    Assert.assertEquals("log1content", readLog(taskLogs, "log1", 0));
    // File modification timestamps are only maintained at second resolution, so an
    // artificial delay is necessary to separate the two file creations by a timestamp
    // gap that results in only one of them getting deleted.
    Thread.sleep(1500);
    long time = (System.currentTimeMillis() / 1000) * 1000;
    Assert.assertTrue(fs.getFileStatus(new Path(logDirPath, "log1")).getModificationTime() < time);
    Files.write("log2content", logFile, Charsets.UTF_8);
    taskLogs.pushTaskLog("log2", logFile);
    Assert.assertEquals("log2content", readLog(taskLogs, "log2", 0));
    Assert.assertTrue(fs.getFileStatus(new Path(logDirPath, "log2")).getModificationTime() >= time);
    taskLogs.killOlderThan(time);
    Assert.assertFalse(taskLogs.streamTaskLog("log1", 0).isPresent());
    Assert.assertEquals("log2content", readLog(taskLogs, "log2", 0));
}
Also used: Path (org.apache.hadoop.fs.Path), TaskLogs (io.druid.tasklogs.TaskLogs), HdfsTaskLogs (io.druid.storage.hdfs.tasklog.HdfsTaskLogs), HdfsTaskLogsConfig (io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig), Configuration (org.apache.hadoop.conf.Configuration), FileSystem (org.apache.hadoop.fs.FileSystem), File (java.io.File), Test (org.junit.Test)
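The Thread.sleep(1500) and the cutoff expression `(System.currentTimeMillis() / 1000) * 1000` both exist because HDFS reports modification times at whole-second resolution: log1's truncated timestamp must land strictly below the cutoff while log2's lands at or above it, so killOlderThan(time) deletes only log1. The arithmetic can be sketched standalone; truncateToSeconds is a hypothetical helper mirroring the test's expression:

```java
public class CutoffSketch {
    // Integer division drops the sub-second part, mirroring the test's
    // (System.currentTimeMillis() / 1000) * 1000 cutoff computation.
    static long truncateToSeconds(long millis) {
        return (millis / 1000) * 1000;
    }

    public static void main(String[] args) {
        long cutoff = truncateToSeconds(1700000123456L); // 1700000123000
        // log1, written over a second earlier, truncates to a value below the
        // cutoff, so the strict "< cutoff" check marks it for deletion.
        System.out.println(1700000121000L < cutoff);
        // log2, written at or after the cutoff second, is kept.
        System.out.println(1700000123000L < cutoff);
    }
}
```

Without the 1.5 s sleep, both pushes could truncate to the same second and either both survive or both be deleted, making the assertions flaky.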

Example 4 with TaskLogs

Use of io.druid.tasklogs.TaskLogs in project druid by druid-io.

From class FileTaskLogsTest, method testPushTaskLogDirCreationFails.

@Test
public void testPushTaskLogDirCreationFails() throws Exception {
    final File tmpDir = temporaryFolder.newFolder();
    final File logDir = new File(tmpDir, "druid/logs");
    final File logFile = new File(tmpDir, "log");
    Files.write("blah", logFile, Charsets.UTF_8);
    if (!tmpDir.setWritable(false)) {
        throw new RuntimeException("failed to make tmp dir read-only");
    }
    final TaskLogs taskLogs = new FileTaskLogs(new FileTaskLogsConfig(logDir));
    expectedException.expect(IOException.class);
    expectedException.expectMessage("Unable to create task log dir");
    taskLogs.pushTaskLog("foo", logFile);
}
Also used: TaskLogs (io.druid.tasklogs.TaskLogs), File (java.io.File), FileTaskLogsConfig (io.druid.indexing.common.config.FileTaskLogsConfig), Test (org.junit.Test)

Example 5 with TaskLogs

Use of io.druid.tasklogs.TaskLogs in project druid by druid-io.

From class FileTaskLogsTest, method testSimple.

@Test
public void testSimple() throws Exception {
    final File tmpDir = temporaryFolder.newFolder();
    try {
        final File logDir = new File(tmpDir, "druid/logs");
        final File logFile = new File(tmpDir, "log");
        Files.write("blah", logFile, Charsets.UTF_8);
        final TaskLogs taskLogs = new FileTaskLogs(new FileTaskLogsConfig(logDir));
        taskLogs.pushTaskLog("foo", logFile);
        final Map<Long, String> expected = ImmutableMap.of(0L, "blah", 1L, "lah", -2L, "ah", -5L, "blah");
        for (Map.Entry<Long, String> entry : expected.entrySet()) {
            final byte[] bytes = ByteStreams.toByteArray(taskLogs.streamTaskLog("foo", entry.getKey()).get().getInput());
            final String string = new String(bytes, Charsets.UTF_8);
            Assert.assertEquals(String.format("Read with offset %,d", entry.getKey()), entry.getValue(), string);
        }
    } finally {
        FileUtils.deleteDirectory(tmpDir);
    }
}
Also used: TaskLogs (io.druid.tasklogs.TaskLogs), File (java.io.File), ImmutableMap (com.google.common.collect.ImmutableMap), Map (java.util.Map), FileTaskLogsConfig (io.druid.indexing.common.config.FileTaskLogsConfig), Test (org.junit.Test)

Aggregations

TaskLogs (io.druid.tasklogs.TaskLogs): 6
File (java.io.File): 6
Test (org.junit.Test): 6
FileTaskLogsConfig (io.druid.indexing.common.config.FileTaskLogsConfig): 3
HdfsTaskLogs (io.druid.storage.hdfs.tasklog.HdfsTaskLogs): 3
HdfsTaskLogsConfig (io.druid.storage.hdfs.tasklog.HdfsTaskLogsConfig): 3
Configuration (org.apache.hadoop.conf.Configuration): 3
ImmutableMap (com.google.common.collect.ImmutableMap): 2
Map (java.util.Map): 2
FileSystem (org.apache.hadoop.fs.FileSystem): 1
Path (org.apache.hadoop.fs.Path): 1