
Example 6 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project camel by apache.

The class HdfsProducerSplitTest, method tearDown.

@Override
public void tearDown() throws Exception {
    if (!canTest()) {
        return;
    }
    super.tearDown();
    // give the producer a moment to finish flushing before the directory is removed
    Thread.sleep(100);
    // delete the local test output directory recursively
    Configuration conf = new Configuration();
    Path dir = new Path("target/test");
    FileSystem fs = FileSystem.get(dir.toUri(), conf);
    fs.delete(dir, true);
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem)
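
The same cleanup pattern can be used outside the Camel test harness. Below is a minimal sketch under the assumption of a local default file system; the class name and the "target/test" directory are only illustrative, and the explicit exists() check is optional because delete() simply returns false for a missing path.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "target/test" is just an illustrative directory, as in the test above
        Path dir = new Path("target/test");
        FileSystem fs = FileSystem.get(dir.toUri(), conf);
        // delete(path, recursive) returns false when the path does not exist,
        // so the exists() guard is merely to make the outcome explicit
        if (fs.exists(dir)) {
            boolean deleted = fs.delete(dir, true);
            System.out.println("deleted: " + deleted);
        }
    }
}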

Example 7 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project camel by apache.

The class HdfsProducerTest, method testBloomMapWriteText.

@Test
public void testBloomMapWriteText() throws Exception {
    if (!canTest()) {
        return;
    }
    String txtKey = "THEKEY";
    String txtValue = "CIAO MONDO !";
    // send one key/value pair through the route that writes a BloomMapFile
    template.sendBodyAndHeader("direct:write_text5", txtValue, "KEY", txtKey);
    // read the file back with the Hadoop BloomMapFile API and verify the pair
    Configuration conf = new Configuration();
    Path file1 = new Path("file:///" + TEMP_DIR.toUri() + "/test-camel-text5");
    FileSystem fs1 = FileSystem.get(file1.toUri(), conf);
    BloomMapFile.Reader reader = new BloomMapFile.Reader(fs1, "file:///" + TEMP_DIR.toUri() + "/test-camel-text5", conf);
    Text key = (Text) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
    Text value = (Text) ReflectionUtils.newInstance(reader.getValueClass(), conf);
    reader.next(key, value);
    assertEquals(txtKey, key.toString());
    assertEquals(txtValue, value.toString());
    IOHelper.close(reader);
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem) BloomMapFile(org.apache.hadoop.io.BloomMapFile) Text(org.apache.hadoop.io.Text) Test(org.junit.Test)
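
The test only exercises the read side; the BloomMapFile itself is produced by the Camel route. For context, here is a hedged sketch of writing and reading a BloomMapFile directly with the Hadoop API. It assumes the older (Configuration, FileSystem, String, keyClass, valClass) Writer constructor that is still present (though deprecated) in Hadoop 2.x; the class name and the file:///tmp path are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BloomMapFile;
import org.apache.hadoop.io.Text;

public class BloomMapFileSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // illustrative local path; the Camel test writes under TEMP_DIR instead
        String dirName = "file:///tmp/bloom-sketch";
        FileSystem fs = FileSystem.get(new Path(dirName).toUri(), conf);
        // write one key/value pair (keys must be appended in sorted order)
        BloomMapFile.Writer writer = new BloomMapFile.Writer(conf, fs, dirName, Text.class, Text.class);
        writer.append(new Text("THEKEY"), new Text("CIAO MONDO !"));
        writer.close();
        // read it back, as the test above does
        BloomMapFile.Reader reader = new BloomMapFile.Reader(fs, dirName, conf);
        Text key = new Text();
        Text value = new Text();
        while (reader.next(key, value)) {
            System.out.println(key + " -> " + value);
        }
        reader.close();
    }
}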

Example 8 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project camel by apache.

The class HdfsProducerTest, method testMapWriteTextWithKey.

@Test
public void testMapWriteTextWithKey() throws Exception {
    if (!canTest()) {
        return;
    }
    String txtKey = "THEKEY";
    String txtValue = "CIAO MONDO !";
    // send one key/value pair through the route that writes a MapFile
    template.sendBodyAndHeader("direct:write_text3", txtValue, "KEY", txtKey);
    // read the file back with the Hadoop MapFile API and verify the pair
    Configuration conf = new Configuration();
    Path file1 = new Path("file:///" + TEMP_DIR.toUri() + "/test-camel-text3");
    FileSystem fs1 = FileSystem.get(file1.toUri(), conf);
    MapFile.Reader reader = new MapFile.Reader(fs1, "file:///" + TEMP_DIR.toUri() + "/test-camel-text3", conf);
    Text key = (Text) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
    Text value = (Text) ReflectionUtils.newInstance(reader.getValueClass(), conf);
    reader.next(key, value);
    assertEquals(txtKey, key.toString());
    assertEquals(txtValue, value.toString());
    IOHelper.close(reader);
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem) BloomMapFile(org.apache.hadoop.io.BloomMapFile) MapFile(org.apache.hadoop.io.MapFile) Text(org.apache.hadoop.io.Text) Test(org.junit.Test)
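
The test scans sequentially with next(), but a MapFile also supports random access by key through Reader.get(). A minimal sketch follows; it assumes a MapFile already exists at the illustrative file:///tmp path (in the test above the equivalent path would be TEMP_DIR + "/test-camel-text3"), and the class name is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class MapFileLookupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // illustrative path; replace with the directory the MapFile was written to
        String dirName = "file:///tmp/mapfile-sketch";
        FileSystem fs = FileSystem.get(new Path(dirName).toUri(), conf);
        MapFile.Reader reader = new MapFile.Reader(fs, dirName, conf);
        // get() seeks directly to a key instead of scanning with next()
        Text value = new Text();
        Writable found = reader.get(new Text("THEKEY"), value);
        System.out.println(found != null ? value.toString() : "key not found");
        reader.close();
    }
}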

Example 9 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project camel by apache.

The class HdfsProducerTest, method testProducerClose.

@Test
public void testProducerClose() throws Exception {
    if (!canTest()) {
        return;
    }
    for (int i = 0; i < 10; ++i) {
        // send 10 messages, and mark the last one to close the writer
        template.sendBodyAndHeader("direct:start1", "PAPPO" + i, HdfsConstants.HDFS_CLOSE, i == 9);
    }
    // read the sequence file back and verify all 10 bodies were written in order
    Configuration conf = new Configuration();
    Path file1 = new Path("file:///" + TEMP_DIR.toUri() + "/test-camel1");
    FileSystem fs1 = FileSystem.get(file1.toUri(), conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs1, file1, conf);
    Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
    Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
    int i = 0;
    while (reader.next(key, value)) {
        Text txt = (Text) value;
        assertEquals("PAPPO" + i, txt.toString());
        ++i;
    }
    IOHelper.close(reader);
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) SequenceFile(org.apache.hadoop.io.SequenceFile) FileSystem(org.apache.hadoop.fs.FileSystem) Writable(org.apache.hadoop.io.Writable) DoubleWritable(org.apache.hadoop.io.DoubleWritable) LongWritable(org.apache.hadoop.io.LongWritable) ByteWritable(org.apache.hadoop.io.ByteWritable) IntWritable(org.apache.hadoop.io.IntWritable) BooleanWritable(org.apache.hadoop.io.BooleanWritable) FloatWritable(org.apache.hadoop.io.FloatWritable) Text(org.apache.hadoop.io.Text) Test(org.junit.Test)
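
The SequenceFile.Reader(FileSystem, Path, Configuration) constructor used above is deprecated in Hadoop 2.x. A hedged sketch of the same read loop with the option-based constructor is shown below; the class name and the file:///tmp path are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SequenceFileReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // illustrative path; the test above reads TEMP_DIR + "/test-camel1"
        Path file = new Path("file:///tmp/test-camel1");
        // option-based constructor, the non-deprecated alternative in Hadoop 2.x
        SequenceFile.Reader reader = new SequenceFile.Reader(conf, SequenceFile.Reader.file(file));
        Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
        Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
        while (reader.next(key, value)) {
            System.out.println(key + "\t" + value);
        }
        reader.close();
    }
}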

Example 10 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project camel by apache.

The class FromFileToHdfsTest, method tearDown.

@Override
public void tearDown() throws Exception {
    if (!canTest()) {
        return;
    }
    super.tearDown();
    Configuration conf = new Configuration();
    Path dir = new Path("target/outbox");
    FileSystem fs = FileSystem.get(dir.toUri(), conf);
    fs.delete(dir, true);
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem)
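
Note that FileSystem.get() returns a cached instance shared across the JVM, which is why the tearDown above does not close it. If an isolated handle is wanted, FileSystem.newInstance() returns an uncached one that can be closed safely. A minimal sketch follows; the class name is hypothetical and listing the directory before deleting it is purely illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutboxCleanupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path dir = new Path("target/outbox");
        // newInstance() returns an uncached FileSystem, so closing it here
        // cannot affect other code holding the cached instance from FileSystem.get()
        FileSystem fs = FileSystem.newInstance(dir.toUri(), conf);
        try {
            if (fs.exists(dir)) {
                // optionally inspect what the route produced before deleting it
                for (FileStatus status : fs.listStatus(dir)) {
                    System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
                }
                fs.delete(dir, true);
            }
        } finally {
            fs.close();
        }
    }
}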

Aggregations

FileSystem (org.apache.hadoop.fs.FileSystem) 2611
Path (org.apache.hadoop.fs.Path) 2199
Test (org.junit.Test) 1034
Configuration (org.apache.hadoop.conf.Configuration) 890
IOException (java.io.IOException) 757
FileStatus (org.apache.hadoop.fs.FileStatus) 419
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream) 264
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster) 227
ArrayList (java.util.ArrayList) 208
File (java.io.File) 181
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem) 165
JobConf (org.apache.hadoop.mapred.JobConf) 163
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream) 151
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration) 145
URI (java.net.URI) 135
SequenceFile (org.apache.hadoop.io.SequenceFile) 118
Text (org.apache.hadoop.io.Text) 112
FileNotFoundException (java.io.FileNotFoundException) 102
FsPermission (org.apache.hadoop.fs.permission.FsPermission) 94
Job (org.apache.hadoop.mapreduce.Job) 81