
Example 46 with OzoneInputStream

use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

the class TestCloseContainerHandlingByClient method testMultiBlockWrites3.

@Test
public void testMultiBlockWrites3() throws Exception {
    String keyName = getKeyName();
    int keyLen = 4 * blockSize;
    OzoneOutputStream key = createKey(keyName, ReplicationType.RATIS, keyLen);
    KeyOutputStream keyOutputStream = (KeyOutputStream) key.getOutputStream();
    // With the initial size provided, it should have preallocated 4 blocks
    Assert.assertEquals(4, keyOutputStream.getStreamEntries().size());
    // write data 3 blocks and one more chunk
    byte[] writtenData = ContainerTestHelper.getFixedLengthString(keyString, keyLen).getBytes(UTF_8);
    byte[] data = Arrays.copyOfRange(writtenData, 0, 3 * blockSize + chunkSize);
    Assert.assertEquals(data.length, 3 * blockSize + chunkSize);
    key.write(data);
    Assert.assertTrue(key.getOutputStream() instanceof KeyOutputStream);
    // get the name of a valid container
    OmKeyArgs keyArgs = new OmKeyArgs.Builder()
        .setVolumeName(volumeName)
        .setBucketName(bucketName)
        .setReplicationConfig(RatisReplicationConfig.getInstance(THREE))
        .setKeyName(keyName)
        .setRefreshPipeline(true)
        .build();
    waitForContainerClose(key);
    // write 3 more chunks worth of data. The write will fail and a new block
    // will be allocated. This write completes 4 blocks worth of data written to the key.
    data = Arrays.copyOfRange(writtenData, 3 * blockSize + chunkSize, keyLen);
    key.write(data);
    key.close();
    // read the key from OM again and match the length and data.
    OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
    List<OmKeyLocationInfo> keyLocationInfos = keyInfo.getKeyLocationVersions().get(0).getBlocksLatestVersionOnly();
    OzoneVolume volume = objectStore.getVolume(volumeName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    OzoneInputStream inputStream = bucket.readKey(keyName);
    byte[] readData = new byte[keyLen];
    inputStream.read(readData);
    Assert.assertArrayEquals(writtenData, readData);
    // Though we have written only a block initially, the close will hit
    // closeContainerException and the remaining data in the chunkOutputStream
    // buffer will be copied into a different allocated block and will be
    // committed.
    long length = 0;
    for (OmKeyLocationInfo locationInfo : keyLocationInfos) {
        length += locationInfo.getLength();
    }
    Assert.assertEquals(4 * blockSize, length);
}
Also used : OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) OmKeyArgs(org.apache.hadoop.ozone.om.helpers.OmKeyArgs) OmKeyLocationInfo(org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo) OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OmKeyInfo(org.apache.hadoop.ozone.om.helpers.OmKeyInfo) KeyOutputStream(org.apache.hadoop.ozone.client.io.KeyOutputStream) Test(org.junit.Test)
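
Note that InputStream.read(byte[]) is not guaranteed to fill the buffer in a single call, so the inputStream.read(readData) above relies on the whole key coming back in one read. A more defensive pattern loops until the buffer is full; the helper below is a small sketch along those lines (ReadUtil and readFully are illustrative names, not part of the Ozone client API).

import java.io.IOException;
import java.io.InputStream;

final class ReadUtil {
    private ReadUtil() {
    }

    // Reads exactly buf.length bytes, or throws if the stream ends early.
    static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) {
                throw new IOException("Stream ended after " + off
                    + " bytes, expected " + buf.length);
            }
            off += n;
        }
    }
}

With such a helper, the read above would become ReadUtil.readFully(inputStream, readData), and the subsequent assertArrayEquals would no longer depend on a single read returning all of the data.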

Example 47 with OzoneInputStream

use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

the class TestOzoneFSWithObjectStoreCreate method createKey.

private void createKey(OzoneBucket ozoneBucket, String key, int length, byte[] input) throws Exception {
    OzoneOutputStream ozoneOutputStream = ozoneBucket.createKey(key, length);
    ozoneOutputStream.write(input);
    ozoneOutputStream.write(input, 0, 10);
    ozoneOutputStream.close();
    // Read the key with given key name.
    OzoneInputStream ozoneInputStream = ozoneBucket.readKey(key);
    byte[] read = new byte[length];
    ozoneInputStream.read(read, 0, length);
    ozoneInputStream.close();
    String inputString = new String(input, UTF_8);
    Assert.assertEquals(inputString, new String(read, UTF_8));
    // Read using filesystem.
    FSDataInputStream fsDataInputStream = o3fs.open(new Path(key));
    read = new byte[length];
    fsDataInputStream.read(read, 0, length);
    fsDataInputStream.close();
    Assert.assertEquals(inputString, new String(read, UTF_8));
}
Also used : Path(org.apache.hadoop.fs.Path) OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) FSDataInputStream(org.apache.hadoop.fs.FSDataInputStream) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream)
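
The o3fs field used in fsDataInputStream = o3fs.open(new Path(key)) is an Ozone-backed Hadoop FileSystem created by the test setup. Outside a test, such a handle is typically obtained from FileSystem.get with an o3fs URI; the sketch below assumes a reachable cluster and uses placeholder volume, bucket, and Ozone Manager host names, so treat it as an outline rather than code from the project.

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class O3fsOpenSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder names: an o3fs://<bucket>.<volume>.<om-host> URI points
        // the client at a specific bucket; adjust to the running cluster.
        try (FileSystem fs = FileSystem.get(new URI("o3fs://bucket.volume.om-host"), conf)) {
            try (InputStream in = fs.open(new Path("/key"))) {
                byte[] buf = new byte[4096];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes");
            }
        }
    }
}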

Aggregations

OzoneInputStream (org.apache.hadoop.ozone.client.io.OzoneInputStream) 47
OzoneOutputStream (org.apache.hadoop.ozone.client.io.OzoneOutputStream) 33
OzoneBucket (org.apache.hadoop.ozone.client.OzoneBucket) 26
Test (org.junit.Test) 26
OzoneVolume (org.apache.hadoop.ozone.client.OzoneVolume) 22
OzoneKey (org.apache.hadoop.ozone.client.OzoneKey) 17
IOException (java.io.IOException) 15
OzoneKeyDetails (org.apache.hadoop.ozone.client.OzoneKeyDetails) 13
Instant (java.time.Instant) 12
HashMap (java.util.HashMap) 11
LinkedHashMap (java.util.LinkedHashMap) 10
HddsDatanodeService (org.apache.hadoop.ozone.HddsDatanodeService) 8
ArrayList (java.util.ArrayList) 7
OMException (org.apache.hadoop.ozone.om.exceptions.OMException) 7
OmKeyArgs (org.apache.hadoop.ozone.om.helpers.OmKeyArgs) 7
OmKeyInfo (org.apache.hadoop.ozone.om.helpers.OmKeyInfo) 7
OmKeyLocationInfo (org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo) 6
RepeatedOmKeyInfo (org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo) 6
File (java.io.File) 5
HttpHeaders (javax.ws.rs.core.HttpHeaders) 5
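
Taken together, the aggregations show that OzoneInputStream almost always appears next to OzoneOutputStream, OzoneBucket, and OzoneVolume, i.e. a key is written and then read back through the same bucket handle. The following is a minimal round trip in that style; the volume, bucket, and key names are placeholders and the client is assumed to pick up its Ozone Manager address from an ozone-site.xml on the classpath, so this is a sketch rather than code taken from the project.

import static java.nio.charset.StandardCharsets.UTF_8;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.OzoneVolume;
import org.apache.hadoop.ozone.client.io.OzoneInputStream;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

public class OzoneKeyRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello ozone".getBytes(UTF_8);
        try (OzoneClient client = OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
            // Placeholder volume and bucket names; both are assumed to exist.
            OzoneVolume volume = client.getObjectStore().getVolume("vol1");
            OzoneBucket bucket = volume.getBucket("bucket1");

            // Write the key; the size passed to createKey is used for initial
            // block preallocation, as in the examples above.
            try (OzoneOutputStream out = bucket.createKey("key1", payload.length)) {
                out.write(payload);
            }

            // Read it back through OzoneInputStream (a single read, as in the
            // examples above; a production reader would loop until full).
            byte[] read = new byte[payload.length];
            try (OzoneInputStream in = bucket.readKey("key1")) {
                in.read(read, 0, read.length);
            }
            System.out.println(new String(read, UTF_8));
        }
    }
}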