
Example 71 with DataInputBuffer

Use of org.apache.hadoop.io.DataInputBuffer in project hadoop by apache.

Class TestCryptoStreams, method getInputStream:

@Override
protected InputStream getInputStream(int bufferSize, byte[] key, byte[] iv) throws IOException {
    DataInputBuffer in = new DataInputBuffer();
    in.reset(buf, 0, bufLen);
    return new CryptoInputStream(new FakeInputStream(in), codec, bufferSize, key, iv);
}
Also used: DataInputBuffer (org.apache.hadoop.io.DataInputBuffer)
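
Here buf and bufLen are presumably fields of the enclosing TestCryptoStreams test (the bytes written earlier by the matching output stream), and FakeInputStream is a helper defined in the same test. Independently of the crypto machinery, the snippet below is a minimal, self-contained sketch of the write-then-read pattern that DataInputBuffer.reset is built around; the class name BufferRoundTrip is just for illustration.

import java.io.IOException;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;

public class BufferRoundTrip {
    public static void main(String[] args) throws IOException {
        // Serialize into a growable in-memory byte array.
        DataOutputBuffer out = new DataOutputBuffer();
        out.writeInt(42);
        out.writeUTF("hello");
        // Wrap the backing array without copying it; only the first getLength() bytes are valid data.
        DataInputBuffer in = new DataInputBuffer();
        in.reset(out.getData(), 0, out.getLength());
        System.out.println(in.readInt());   // 42
        System.out.println(in.readUTF());   // hello
    }
}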

Example 72 with DataInputBuffer

Use of org.apache.hadoop.io.DataInputBuffer in project hadoop by apache.

Class TestDelegationToken, method testSerialization:

@Test
public void testSerialization() throws Exception {
    TestDelegationTokenIdentifier origToken = new TestDelegationTokenIdentifier(new Text("alice"), new Text("bob"), new Text("colin"));
    TestDelegationTokenIdentifier newToken = new TestDelegationTokenIdentifier();
    origToken.setIssueDate(123);
    origToken.setMasterKeyId(321);
    origToken.setMaxDate(314);
    origToken.setSequenceNumber(12345);
    // clone origToken into newToken
    DataInputBuffer inBuf = new DataInputBuffer();
    DataOutputBuffer outBuf = new DataOutputBuffer();
    origToken.write(outBuf);
    inBuf.reset(outBuf.getData(), 0, outBuf.getLength());
    newToken.readFields(inBuf);
    // now test the fields
    assertEquals("alice", newToken.getUser().getUserName());
    assertEquals(new Text("bob"), newToken.getRenewer());
    assertEquals("colin", newToken.getUser().getRealUser().getUserName());
    assertEquals(123, newToken.getIssueDate());
    assertEquals(321, newToken.getMasterKeyId());
    assertEquals(314, newToken.getMaxDate());
    assertEquals(12345, newToken.getSequenceNumber());
    assertEquals(origToken, newToken);
}
Also used: DataInputBuffer (org.apache.hadoop.io.DataInputBuffer), DataOutputBuffer (org.apache.hadoop.io.DataOutputBuffer), Text (org.apache.hadoop.io.Text), Test (org.junit.Test)
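
The write/reset/readFields sequence above is the standard way to deep-copy a Writable through its own serialization. A generic helper along these lines (a sketch, not part of the test) captures the pattern; for Writables with a usable no-argument constructor, org.apache.hadoop.io.WritableUtils.clone(orig, conf) performs the same round trip.

import java.io.IOException;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Writable;

public final class WritableCopies {
    /** Populate dst with a copy of src by round-tripping src through its own serialization. */
    public static <T extends Writable> T copyInto(Writable src, T dst) throws IOException {
        DataOutputBuffer out = new DataOutputBuffer();
        src.write(out);                               // serialize the source
        DataInputBuffer in = new DataInputBuffer();
        in.reset(out.getData(), 0, out.getLength());  // view the same bytes without copying
        dst.readFields(in);                           // rebuild the copy
        return dst;
    }
}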

Example 73 with DataInputBuffer

Use of org.apache.hadoop.io.DataInputBuffer in project hadoop by apache.

Class ReduceContextImpl, method nextKeyValue:

/**
   * Advance to the next key/value pair.
   */
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
    if (!hasMore) {
        key = null;
        value = null;
        return false;
    }
    firstValue = !nextKeyIsSame;
    // Point the shared buffer at the raw key bytes and deserialize the typed key.
    DataInputBuffer nextKey = input.getKey();
    currentRawKey.set(nextKey.getData(), nextKey.getPosition(), nextKey.getLength() - nextKey.getPosition());
    buffer.reset(currentRawKey.getBytes(), 0, currentRawKey.getLength());
    key = keyDeserializer.deserialize(key);
    // Do the same for the value, reusing the same buffer.
    DataInputBuffer nextVal = input.getValue();
    buffer.reset(nextVal.getData(), nextVal.getPosition(), nextVal.getLength() - nextVal.getPosition());
    value = valueDeserializer.deserialize(value);
    currentKeyLength = nextKey.getLength() - nextKey.getPosition();
    currentValueLength = nextVal.getLength() - nextVal.getPosition();
    if (isMarked) {
        // A mark is active on the values iterator, so keep the raw pair for replay.
        backupStore.write(nextKey, nextVal);
    }
    // Peek at the following record to decide whether it shares the current key.
    hasMore = input.next();
    if (hasMore) {
        nextKey = input.getKey();
        nextKeyIsSame = comparator.compare(currentRawKey.getBytes(), 0, currentRawKey.getLength(), nextKey.getData(), nextKey.getPosition(), nextKey.getLength() - nextKey.getPosition()) == 0;
    } else {
        nextKeyIsSame = false;
    }
    inputValueCounter.increment(1);
    return true;
}
Also used: DataInputBuffer (org.apache.hadoop.io.DataInputBuffer)
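
nextKeyValue re-points a single shared DataInputBuffer (buffer) at the raw key bytes, deserializes the typed key, then does the same for the value; the deserializers were opened on that buffer when the context was constructed. The runnable sketch below shows that wiring in isolation, using SerializationFactory and an IntWritable as a stand-in record type. It illustrates the pattern only and is not the actual ReduceContextImpl setup.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.serializer.Deserializer;
import org.apache.hadoop.io.serializer.SerializationFactory;

public class DeserializerWiring {
    public static void main(String[] args) throws IOException {
        // Produce some raw record bytes to stand in for input.getKey()/getValue().
        DataOutputBuffer out = new DataOutputBuffer();
        new IntWritable(7).write(out);

        // Open the deserializer once on a shared buffer, as ReduceContextImpl does.
        Configuration conf = new Configuration();
        DataInputBuffer buffer = new DataInputBuffer();
        Deserializer<IntWritable> deser =
            new SerializationFactory(conf).getDeserializer(IntWritable.class);
        deser.open(buffer);

        // Per record: point the buffer at the raw bytes, then deserialize.
        buffer.reset(out.getData(), 0, out.getLength());
        IntWritable key = deser.deserialize(null);   // passing null asks for a fresh instance
        System.out.println(key.get());               // 7
        deser.close();
    }
}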

Example 74 with DataInputBuffer

Use of org.apache.hadoop.io.DataInputBuffer in project hadoop by apache.

Class TestFSCheckpointID, method testFSCheckpointIDSerialization:

@Test
public void testFSCheckpointIDSerialization() throws IOException {
    Path inpath = new Path("/tmp/blah");
    FSCheckpointID cidin = new FSCheckpointID(inpath);
    DataOutputBuffer out = new DataOutputBuffer();
    cidin.write(out);
    out.close();
    // Start from an empty ID and rebuild it from the serialized bytes.
    FSCheckpointID cidout = new FSCheckpointID(null);
    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), 0, out.getLength());
    cidout.readFields(in);
    in.close();
    assert cidin.equals(cidout);
}
Also used: Path (org.apache.hadoop.fs.Path), DataInputBuffer (org.apache.hadoop.io.DataInputBuffer), DataOutputBuffer (org.apache.hadoop.io.DataOutputBuffer), Test (org.junit.Test)
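
FSCheckpointID only survives this round trip because it implements Writable's write/readFields contract symmetrically; note also that the bare assert above fires only when the JVM runs with -ea. The hypothetical class below (PathLikeID is an invented name, not the real FSCheckpointID) shows the minimal shape such a class needs to pass this kind of buffer-based serialization test.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class PathLikeID implements Writable {
    private final Text path = new Text();

    public PathLikeID() { }                       // needed for construct-then-readFields use
    public PathLikeID(String p) { path.set(p); }

    @Override
    public void write(DataOutput out) throws IOException {
        path.write(out);                          // serialize the single field
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        path.readFields(in);                      // restore fields in the order they were written
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof PathLikeID && path.equals(((PathLikeID) o).path);
    }

    @Override
    public int hashCode() {
        return path.hashCode();
    }
}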

Example 75 with DataInputBuffer

Use of org.apache.hadoop.io.DataInputBuffer in project hadoop by apache.

Class TestMerger, method getValueAnswer:

private Answer<?> getValueAnswer(final String segmentName) {
    return new Answer<Void>() {

        int i = 0;

        public Void answer(InvocationOnMock invocation) {
            Object[] args = invocation.getArguments();
            // args[0] is the caller-supplied buffer; fill it with fake bytes for this segment.
            DataInputBuffer key = (DataInputBuffer) args[0];
            key.reset(("Segment Value " + segmentName + i).getBytes(), 20);
            return null;
        }
    };
}
Also used: Answer (org.mockito.stubbing.Answer), Mockito.doAnswer (org.mockito.Mockito.doAnswer), DataInputBuffer (org.apache.hadoop.io.DataInputBuffer), InvocationOnMock (org.mockito.invocation.InvocationOnMock)
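
This Answer fills whatever DataInputBuffer the caller passes to the mocked method, which is how TestMerger fakes segment contents without touching disk. Below is a self-contained sketch of wiring such an Answer with Mockito.doAnswer; the RawReader interface is hypothetical, invented here only so the example compiles on its own.

import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;

import org.apache.hadoop.io.DataInputBuffer;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

public class MockBufferFill {
    // Hypothetical collaborator: something that fills a caller-supplied buffer.
    interface RawReader {
        void nextRawValue(DataInputBuffer value);
    }

    public static void main(String[] args) {
        Answer<Void> fillValue = new Answer<Void>() {
            public Void answer(InvocationOnMock invocation) {
                // The first argument is the buffer the caller handed in; point it at fake bytes.
                DataInputBuffer value = (DataInputBuffer) invocation.getArguments()[0];
                byte[] bytes = "fake value".getBytes();
                value.reset(bytes, bytes.length);
                return null;
            }
        };

        RawReader reader = mock(RawReader.class);
        doAnswer(fillValue).when(reader).nextRawValue(any(DataInputBuffer.class));

        DataInputBuffer buf = new DataInputBuffer();
        reader.nextRawValue(buf);
        System.out.println(buf.getLength());   // 10, the length of "fake value"
    }
}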

Aggregations

DataInputBuffer (org.apache.hadoop.io.DataInputBuffer): 112
Test (org.junit.Test): 49
DataOutputBuffer (org.apache.hadoop.io.DataOutputBuffer): 45
IOException (java.io.IOException): 24
Text (org.apache.hadoop.io.Text): 20
Path (org.apache.hadoop.fs.Path): 16
Configuration (org.apache.hadoop.conf.Configuration): 13
IntWritable (org.apache.hadoop.io.IntWritable): 11
Random (java.util.Random): 10
DataInputStream (java.io.DataInputStream): 9
BufferedInputStream (java.io.BufferedInputStream): 8
HashMap (java.util.HashMap): 8
DataOutputStream (java.io.DataOutputStream): 6
LongWritable (org.apache.hadoop.io.LongWritable): 6
SerializationFactory (org.apache.hadoop.io.serializer.SerializationFactory): 6
IFile (org.apache.tez.runtime.library.common.sort.impl.IFile): 6
BufferedOutputStream (java.io.BufferedOutputStream): 5
BytesWritable (org.apache.hadoop.io.BytesWritable): 5
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream): 4
Credentials (org.apache.hadoop.security.Credentials): 4