Example 1 with MultiReader

Use of org.apache.accumulo.tserver.log.MultiReader in project accumulo by apache.

The example below is the main method of the class LogReader, a utility that dumps write-ahead log files to stdout.

/**
 * Dump a Log File (Map or Sequence) to stdout. Will read from HDFS or local file system.
 *
 * @param args
 *          - the write-ahead log files to print; options can filter by row, extent, or regexp
 */
public static void main(String[] args) throws IOException {
    Opts opts = new Opts();
    opts.parseArgs(LogReader.class.getName(), args);
    VolumeManager fs = VolumeManagerImpl.get();
    Matcher rowMatcher = null;
    KeyExtent ke = null;
    Text row = null;
    if (opts.files.isEmpty()) {
        new JCommander(opts).usage();
        return;
    }
    if (opts.row != null) {
        row = new Text(opts.row);
    }
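    // the extent argument is expected in the form "<tableId>;<endRow>;<prevEndRow>"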
    if (opts.extent != null) {
        String[] sa = opts.extent.split(";");
        ke = new KeyExtent(Table.ID.of(sa[0]), new Text(sa[1]), new Text(sa[2]));
    }
    if (opts.regexp != null) {
        Pattern pattern = Pattern.compile(opts.regexp);
        rowMatcher = pattern.matcher("");
    }
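    // tablet ids collected from matching DEFINE_TABLET events; printLogEvent uses them to scope output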
    Set<Integer> tabletIds = new HashSet<>();
    for (String file : opts.files) {
        Path path = new Path(file);
        LogFileKey key = new LogFileKey();
        LogFileValue value = new LogFileValue();
        if (fs.isFile(path)) {
            try (final FSDataInputStream fsinput = fs.open(path)) {
                // read log entries from a simple hdfs file
                DFSLoggerInputStreams streams;
                try {
                    streams = DfsLogger.readHeaderAndReturnStream(fsinput, SiteConfiguration.getInstance());
                } catch (LogHeaderIncompleteException e) {
                    log.warn("Could not read header for {} . Ignoring...", path);
                    continue;
                }
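                // the returned stream transparently decrypts if the header declared an encrypted log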
                try (DataInputStream input = streams.getDecryptingInputStream()) {
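                    // read key/value pairs until EOF marks the end of the log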
                    while (true) {
                        try {
                            key.readFields(input);
                            value.readFields(input);
                        } catch (EOFException ex) {
                            break;
                        }
                        printLogEvent(key, value, row, rowMatcher, ke, tabletIds, opts.maxMutations);
                    }
                }
            }
        } else {
            // read the log entries sorted in a map file
            MultiReader input = new MultiReader(fs, path);
            try {
                while (input.next(key, value)) {
                    printLogEvent(key, value, row, rowMatcher, ke, tabletIds, opts.maxMutations);
                }
            } finally {
                // release the underlying map file readers
                input.close();
            }
        }
    }
}
Also used :

Path (org.apache.hadoop.fs.Path)
VolumeManager (org.apache.accumulo.server.fs.VolumeManager)
Pattern (java.util.regex.Pattern)
Matcher (java.util.regex.Matcher)
MultiReader (org.apache.accumulo.tserver.log.MultiReader)
Text (org.apache.hadoop.io.Text)
DataInputStream (java.io.DataInputStream)
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream)
KeyExtent (org.apache.accumulo.core.data.impl.KeyExtent)
LogHeaderIncompleteException (org.apache.accumulo.tserver.log.DfsLogger.LogHeaderIncompleteException)
JCommander (com.beust.jcommander.JCommander)
DFSLoggerInputStreams (org.apache.accumulo.tserver.log.DfsLogger.DFSLoggerInputStreams)
EOFException (java.io.EOFException)
HashSet (java.util.HashSet)
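For readers who want the MultiReader pattern in isolation, here is a minimal standalone sketch built around the same classes. It is a hypothetical driver, not Accumulo code: the class name and the recovery-directory argument are placeholders, and it assumes the 1.8-era MultiReader(VolumeManager, Path), next(), and close() signatures seen in the example above.

import org.apache.accumulo.server.fs.VolumeManager;
import org.apache.accumulo.server.fs.VolumeManagerImpl;
import org.apache.accumulo.tserver.log.MultiReader;
import org.apache.accumulo.tserver.logger.LogFileKey;
import org.apache.accumulo.tserver.logger.LogFileValue;
import org.apache.hadoop.fs.Path;

public class MultiReaderDump {

    public static void main(String[] args) throws Exception {
        // args[0] is a placeholder: a directory of sorted map files produced by log recovery
        VolumeManager fs = VolumeManagerImpl.get();
        MultiReader input = new MultiReader(fs, new Path(args[0]));
        LogFileKey key = new LogFileKey();
        LogFileValue value = new LogFileValue();
        try {
            // next() returns entries from the merged, sorted readers until exhausted
            while (input.next(key, value)) {
                System.out.println(key + " " + value);
            }
        } finally {
            input.close();
        }
    }
}

The design point this illustrates is that MultiReader merges several underlying sorted map file readers into a single stream, which is why one next() loop suffices even when recovery has split a log across multiple files.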
