
Example 26 with Counter

Use of org.apache.hadoop.mapreduce.Counter in project parquet-mr by apache.

From class TestInputOutputFormat, method value().

private static long value(Job job, String groupName, String name) throws Exception {
    // getGroup moved to AbstractCounters
    Method getGroup = org.apache.hadoop.mapreduce.Counters.class.getMethod("getGroup", String.class);
    // CounterGroup changed to an interface
    Method findCounter = org.apache.hadoop.mapreduce.CounterGroup.class.getMethod("findCounter", String.class);
    // Counter changed to an interface
    Method getValue = org.apache.hadoop.mapreduce.Counter.class.getMethod("getValue");
    CounterGroup group = (CounterGroup) getGroup.invoke(job.getCounters(), groupName);
    Counter counter = (Counter) findCounter.invoke(group, name);
    return (Long) getValue.invoke(counter);
}
Also used : Counter(org.apache.hadoop.mapreduce.Counter) CounterGroup(org.apache.hadoop.mapreduce.CounterGroup) Method(java.lang.reflect.Method)
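
The reflection above lets the test compile and run against Hadoop versions where Counters, CounterGroup and Counter were concrete classes as well as versions where they became interfaces. Against a fixed Hadoop 2.x or later dependency, the same lookup can be written directly; a minimal sketch (the method name valueDirect is made up for illustration):

private static long valueDirect(Job job, String groupName, String name) throws Exception {
    // getCounters() returns the aggregated counters for the finished job
    Counters counters = job.getCounters();
    // look up the named counter inside the named group and read its value
    return counters.getGroup(groupName).findCounter(name).getValue();
}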

Example 27 with Counter

Use of org.apache.hadoop.mapreduce.Counter in project tez by apache.

From class TezTypeConverters, method fromTez().

public static Counters fromTez(TezCounters tezCounters) {
    if (tezCounters == null) {
        return null;
    }
    Counters counters = new Counters();
    for (CounterGroup xGrp : tezCounters) {
        counters.addGroup(xGrp.getName(), xGrp.getDisplayName());
        for (TezCounter xCounter : xGrp) {
            Counter counter = counters.findCounter(xGrp.getName(), xCounter.getName());
            counter.setValue(xCounter.getValue());
        }
    }
    return counters;
}
Also used : TezCounter(org.apache.tez.common.counters.TezCounter) Counter(org.apache.hadoop.mapreduce.Counter) CounterGroup(org.apache.tez.common.counters.CounterGroup) TezCounters(org.apache.tez.common.counters.TezCounters) Counters(org.apache.hadoop.mapreduce.Counters)
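
A minimal usage sketch of the conversion above (the group and counter names are hypothetical, chosen only for illustration):

TezCounters tezCounters = new TezCounters();
// record a value on the Tez side
tezCounters.findCounter("example-group", "example-counter").setValue(42L);
// convert to Hadoop MapReduce counters
Counters mrCounters = TezTypeConverters.fromTez(tezCounters);
// the converted counter carries the same value (42)
long value = mrCounters.findCounter("example-group", "example-counter").getValue();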

Example 28 with Counter

Use of org.apache.hadoop.mapreduce.Counter in project cdap by caskdata.

From class BasicWorkflowToken, method setMapReduceCounters().

public synchronized void setMapReduceCounters(Counters counters) {
    ImmutableMap.Builder<String, Map<String, Long>> countersBuilder = ImmutableMap.builder();
    for (CounterGroup group : counters) {
        ImmutableMap.Builder<String, Long> groupBuilder = ImmutableMap.builder();
        for (Counter counter : group) {
            groupBuilder.put(counter.getName(), counter.getValue());
            // Also put the counter to system scope.
            put(group.getName() + "." + counter.getName(), Value.of(counter.getValue()), WorkflowToken.Scope.SYSTEM);
        }
        countersBuilder.put(group.getName(), groupBuilder.build());
    }
    this.mapReduceCounters = countersBuilder.build();
}
Also used : Counter(org.apache.hadoop.mapreduce.Counter) CounterGroup(org.apache.hadoop.mapreduce.CounterGroup) ImmutableMap(com.google.common.collect.ImmutableMap) EnumMap(java.util.EnumMap) HashMap(java.util.HashMap) Map(java.util.Map)
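
The method flattens the job counters into a map of group name to (counter name, value) pairs, and additionally exposes each counter in the workflow token's system scope under the key "<group>.<counter>". A standalone sketch of the same flattening using only plain java.util maps and the Hadoop API (the method name is made up for illustration):

public static Map<String, Map<String, Long>> toNestedMap(Counters counters) {
    Map<String, Map<String, Long>> result = new HashMap<>();
    for (CounterGroup group : counters) {
        Map<String, Long> values = new HashMap<>();
        for (Counter counter : group) {
            // counter name -> counter value within this group
            values.put(counter.getName(), counter.getValue());
        }
        result.put(group.getName(), values);
    }
    return result;
}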

Example 29 with Counter

Use of org.apache.hadoop.mapreduce.Counter in project hbase by apache.

From class TestTableMapReduce, method verifyJobCountersAreEmitted().

/**
 * Verify that scan counters are emitted from the job.
 * @param job the completed MapReduce job whose counters are checked
 * @throws IOException if the job counters cannot be retrieved
 */
private void verifyJobCountersAreEmitted(Job job) throws IOException {
    Counters counters = job.getCounters();
    Counter counter = counters.findCounter(TableRecordReaderImpl.HBASE_COUNTER_GROUP_NAME, "RPC_CALLS");
    assertNotNull("Unable to find Job counter for HBase scan metrics, RPC_CALLS", counter);
    assertTrue("Counter value for RPC_CALLS should be larger than 0", counter.getValue() > 0);
}
Also used : Counter(org.apache.hadoop.mapreduce.Counter) Counters(org.apache.hadoop.mapreduce.Counters)
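
The scan-metrics group contains more counters than just RPC_CALLS. A minimal debugging sketch, assuming the same group-name constant and an available LOG field, that prints every counter emitted for the scan:

private void dumpScanMetrics(Job job) throws IOException {
    CounterGroup group = job.getCounters().getGroup(TableRecordReaderImpl.HBASE_COUNTER_GROUP_NAME);
    for (Counter counter : group) {
        // logs each emitted scan metric, e.g. RPC_CALLS
        LOG.debug(counter.getName() + " = " + counter.getValue());
    }
}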

Example 30 with Counter

Use of org.apache.hadoop.mapreduce.Counter in project hbase by apache.

From class TestRowCounter, method runCreateSubmittableJobWithArgs().

/**
 * Run the RowCounter map reduce job and verify the row count.
 *
 * @param args the command line arguments to be used for rowcounter job.
 * @param expectedCount the expected row count (result of map reduce job).
 * @throws Exception in case of any unexpected error.
 */
private void runCreateSubmittableJobWithArgs(String[] args, int expectedCount) throws Exception {
    Job job = RowCounter.createSubmittableJob(TEST_UTIL.getConfiguration(), args);
    long start = EnvironmentEdgeManager.currentTime();
    job.waitForCompletion(true);
    long duration = EnvironmentEdgeManager.currentTime() - start;
    LOG.debug("row count duration (ms): " + duration);
    assertTrue(job.isSuccessful());
    Counter counter = job.getCounters().findCounter(RowCounter.RowCounterMapper.Counters.ROWS);
    assertEquals(expectedCount, counter.getValue());
}
Also used : Counter(org.apache.hadoop.mapreduce.Counter) Job(org.apache.hadoop.mapreduce.Job)
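
Enum-based lookup works the same way for the framework's built-in counters; a minimal sketch, assuming a completed Job handle named job and the standard TaskCounter enum (org.apache.hadoop.mapreduce.TaskCounter):

// number of records read by all map tasks
long mapInputRecords = job.getCounters().findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
// number of records emitted by all map tasks
long mapOutputRecords = job.getCounters().findCounter(TaskCounter.MAP_OUTPUT_RECORDS).getValue();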

Aggregations

Counter (org.apache.hadoop.mapreduce.Counter): 51
Configuration (org.apache.hadoop.conf.Configuration): 15
CounterGroup (org.apache.hadoop.mapreduce.CounterGroup): 13
Job (org.apache.hadoop.mapreduce.Job): 12
Counters (org.apache.hadoop.mapreduce.Counters): 11
IOException (java.io.IOException): 8
Path (org.apache.hadoop.fs.Path): 7
Map (java.util.Map): 4
FileSystem (org.apache.hadoop.fs.FileSystem): 4
Test (org.junit.Test): 4
TaskCounter (org.apache.hadoop.mapreduce.TaskCounter): 3
FileNotFoundException (java.io.FileNotFoundException): 2
SimpleDateFormat (java.text.SimpleDateFormat): 2
ArrayList (java.util.ArrayList): 2
ExecutionException (java.util.concurrent.ExecutionException): 2
RejectedExecutionException (java.util.concurrent.RejectedExecutionException): 2
TimeoutException (java.util.concurrent.TimeoutException): 2
Schema (org.apache.avro.Schema): 2
CustomOutputCommitter (org.apache.hadoop.CustomOutputCommitter): 2
BytesWritable (org.apache.hadoop.io.BytesWritable): 2