
Example 1 with Writer

Use of org.apache.hadoop.io.MapFile.Writer in project accumulo by apache.

The class MultiReaderTest, method setUp:

@Before
public void setUp() throws Exception {
    // 'root' here is a JUnit TemporaryFolder rule declared on the test class.
    root.create();
    String path = root.getRoot().getAbsolutePath() + "/manyMaps";
    fs = VolumeManagerImpl.getLocal(path);
    // This local Path deliberately shadows the TemporaryFolder field above.
    Path root = new Path("file://" + path);
    fs.mkdirs(root);
    // Empty marker file signalling that the sorted map files are complete.
    fs.create(new Path(root, "finished")).close();
    FileSystem ns = fs.getVolumeByPath(root).getFileSystem();
    // Write all odd keys in [1, 999] to one MapFile...
    Writer oddWriter = new Writer(ns.getConf(), ns.makeQualified(new Path(root, "odd")), Writer.keyClass(IntWritable.class), Writer.valueClass(BytesWritable.class));
    BytesWritable value = new BytesWritable("someValue".getBytes());
    for (int i = 1; i < 1000; i += 2) {
        oddWriter.append(new IntWritable(i), value);
    }
    oddWriter.close();
    // ...and all even keys in [0, 998], except 10, to another.
    Writer evenWriter = new Writer(ns.getConf(), ns.makeQualified(new Path(root, "even")), Writer.keyClass(IntWritable.class), Writer.valueClass(BytesWritable.class));
    for (int i = 0; i < 1000; i += 2) {
        if (i == 10)
            continue;
        evenWriter.append(new IntWritable(i), value);
    }
    evenWriter.close();
}
Also used: Path (org.apache.hadoop.fs.Path), FileSystem (org.apache.hadoop.fs.FileSystem), BytesWritable (org.apache.hadoop.io.BytesWritable), Writer (org.apache.hadoop.io.MapFile.Writer), IntWritable (org.apache.hadoop.io.IntWritable), Before (org.junit.Before)
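The odd/even MapFiles above are later read back through Accumulo's MultiReader, which merges several sorted files into a single ordered key stream. The merge semantics can be sketched without Hadoop using plain Java collections; this is only an illustration under that assumption, and the class and method names below are invented for the sketch, not Accumulo or Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class MergeSketch {
    /** Merge two already-sorted key lists into one sorted stream. */
    static List<Integer> merge(List<Integer> a, List<Integer> b) {
        List<Integer> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.size() || j < b.size()) {
            // Take from 'a' while 'b' is exhausted or a's head is smaller.
            if (j >= b.size() || (i < a.size() && a.get(i) <= b.get(j))) {
                out.add(a.get(i++));
            } else {
                out.add(b.get(j++));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Same key layout as the test: odds 1..999, evens 0..998 with 10 skipped.
        List<Integer> odds = new ArrayList<>();
        for (int k = 1; k < 1000; k += 2) odds.add(k);
        List<Integer> evens = new ArrayList<>();
        for (int k = 0; k < 1000; k += 2) if (k != 10) evens.add(k);
        List<Integer> merged = merge(odds, evens);
        System.out.println(merged.size()); // 999 keys: 0..999 minus 10
    }
}
```

Splitting keys across two sorted files and merging on read is exactly what makes the test's "skip key 10" case interesting: the gap must survive the merge.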

Example 2 with Writer

Use of org.apache.hadoop.io.MapFile.Writer in project accumulo by apache.

The class SortedLogRecoveryTest, method recover:

private static List<Mutation> recover(Map<String, KeyValue[]> logs, Set<String> files, KeyExtent extent) throws IOException {
    TemporaryFolder root = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
    root.create();
    final String workdir = root.getRoot().getAbsolutePath() + "/workdir";
    VolumeManager fs = VolumeManagerImpl.getLocal(workdir);
    final Path workdirPath = new Path("file://" + workdir);
    fs.deleteRecursively(workdirPath);
    ArrayList<Path> dirs = new ArrayList<>();
    try {
        // Write each simulated log out as a sorted MapFile under its own directory.
        for (Entry<String, KeyValue[]> entry : logs.entrySet()) {
            String path = workdir + "/" + entry.getKey();
            FileSystem ns = fs.getVolumeByPath(new Path(path)).getFileSystem();
            // This Writer constructor is deprecated, hence the suppression.
            @SuppressWarnings("deprecation")
            Writer map = new MapFile.Writer(ns.getConf(), ns, path + "/log1", LogFileKey.class, LogFileValue.class);
            for (KeyValue lfe : entry.getValue()) {
                map.append(lfe.key, lfe.value);
            }
            map.close();
            // Create the "finished" marker so recovery treats this log as fully sorted.
            ns.create(SortedLogState.getFinishedMarkerPath(path)).close();
            dirs.add(new Path(path));
        }
        // Recover the mutations from the sorted logs.
        SortedLogRecovery recovery = new SortedLogRecovery(fs);
        CaptureMutations capture = new CaptureMutations();
        recovery.recover(extent, dirs, files, capture);
        return capture.result;
    } finally {
        root.delete();
    }
}
Also used: Path (org.apache.hadoop.fs.Path), VolumeManager (org.apache.accumulo.server.fs.VolumeManager), ArrayList (java.util.ArrayList), FileSystem (org.apache.hadoop.fs.FileSystem), TemporaryFolder (org.junit.rules.TemporaryFolder), MapFile (org.apache.hadoop.io.MapFile), File (java.io.File), Writer (org.apache.hadoop.io.MapFile.Writer)
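Both examples publish a completed log by creating an empty "finished" marker file only after the MapFile writer is closed, so a reader can distinguish fully written logs from partial ones. The pattern itself is independent of Hadoop; here is a minimal java.nio sketch of the idea (the class, method, and file names are illustrative, not Accumulo's actual layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FinishedMarkerSketch {
    /** Write the data file first, then create the marker to publish completeness. */
    static void writeLog(Path dir, byte[] data) throws IOException {
        Files.createDirectories(dir);
        Files.write(dir.resolve("log1"), data);
        // Only after the data is fully on disk is the marker created.
        Files.createFile(dir.resolve("finished"));
    }

    /** A reader trusts only directories that carry the marker. */
    static boolean isFinished(Path dir) {
        return Files.exists(dir.resolve("finished"));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("marker-sketch").resolve("wal0");
        writeLog(dir, "entries".getBytes());
        System.out.println(isFinished(dir)); // prints "true"
    }
}
```

Ordering is the whole point of the design: if the process crashes between writing log1 and creating the marker, recovery simply ignores the directory rather than reading a possibly truncated file.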

Aggregations

- FileSystem (org.apache.hadoop.fs.FileSystem): 2
- Path (org.apache.hadoop.fs.Path): 2
- Writer (org.apache.hadoop.io.MapFile.Writer): 2
- File (java.io.File): 1
- ArrayList (java.util.ArrayList): 1
- VolumeManager (org.apache.accumulo.server.fs.VolumeManager): 1
- BytesWritable (org.apache.hadoop.io.BytesWritable): 1
- IntWritable (org.apache.hadoop.io.IntWritable): 1
- MapFile (org.apache.hadoop.io.MapFile): 1
- Before (org.junit.Before): 1
- TemporaryFolder (org.junit.rules.TemporaryFolder): 1