Example 36 with OzoneInputStream

Use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

The class TestOzoneRpcClientAbstract, method testZReadKeyWithUnhealthyContainerReplica.

// Run this test last (hence the 'Z' prefix in the name), as it has side effects on other UTs
@Test
public void testZReadKeyWithUnhealthyContainerReplica() throws Exception {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String value = "sample value";
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    String keyName1 = UUID.randomUUID().toString();
    // Write first key
    OzoneOutputStream out = bucket.createKey(keyName1, value.getBytes(UTF_8).length, ReplicationType.RATIS, THREE, new HashMap<>());
    out.write(value.getBytes(UTF_8));
    out.close();
    // Write second key
    String keyName2 = UUID.randomUUID().toString();
    value = "unhealthy container replica";
    out = bucket.createKey(keyName2, value.getBytes(UTF_8).length, ReplicationType.RATIS, THREE, new HashMap<>());
    out.write(value.getBytes(UTF_8));
    out.close();
    // Find container ID
    OzoneKey key = bucket.getKey(keyName2);
    long containerID = ((OzoneKeyDetails) key).getOzoneKeyLocations().get(0).getContainerID();
    // Set container replica to UNHEALTHY
    Container container;
    int index = 1;
    List<HddsDatanodeService> involvedDNs = new ArrayList<>();
    for (HddsDatanodeService hddsDatanode : cluster.getHddsDatanodes()) {
        container = hddsDatanode.getDatanodeStateMachine().getContainer().getContainerSet().getContainer(containerID);
        if (container == null) {
            continue;
        }
        container.markContainerUnhealthy();
        // Change first and second replica commit sequenceId
        if (index < 3) {
            long newBCSID = container.getBlockCommitSequenceId() - 1;
            try (ReferenceCountedDB db = BlockUtils.getDB((KeyValueContainerData) container.getContainerData(), cluster.getConf())) {
                db.getStore().getMetadataTable().put(OzoneConsts.BLOCK_COMMIT_SEQUENCE_ID, newBCSID);
            }
            container.updateBlockCommitSequenceId(newBCSID);
            index++;
        }
        involvedDNs.add(hddsDatanode);
    }
    // Restart DNs
    int dnCount = involvedDNs.size();
    for (index = 0; index < dnCount; index++) {
        if (index == dnCount - 1) {
            cluster.restartHddsDatanode(involvedDNs.get(index).getDatanodeDetails(), true);
        } else {
            cluster.restartHddsDatanode(involvedDNs.get(index).getDatanodeDetails(), false);
        }
    }
    StorageContainerManager scm = cluster.getStorageContainerManager();
    GenericTestUtils.waitFor(() -> {
        try {
            ContainerInfo containerInfo = scm.getContainerInfo(containerID);
            System.out.println("state " + containerInfo.getState());
            return containerInfo.getState() == HddsProtos.LifeCycleState.CLOSING;
        } catch (IOException e) {
            fail("Failed to get container info for " + e.getMessage());
            return false;
        }
    }, 1000, 10000);
    // Try reading keyName2
    try {
        GenericTestUtils.setLogLevel(XceiverClientGrpc.getLogger(), DEBUG);
        OzoneInputStream is = bucket.readKey(keyName2);
        byte[] content = new byte[100];
        // read() may fill only part of the buffer; trim() below strips the NUL padding
        is.read(content);
        is.close();
        String retValue = new String(content, UTF_8);
        Assert.assertEquals(value, retValue.trim());
    } catch (IOException e) {
        fail("Reading unhealthy replica should succeed.");
    }
}
Also used : OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) StorageContainerManager(org.apache.hadoop.hdds.scm.server.StorageContainerManager) LinkedHashMap(java.util.LinkedHashMap) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) HddsDatanodeService(org.apache.hadoop.ozone.HddsDatanodeService) IOException(java.io.IOException) ReferenceCountedDB(org.apache.hadoop.ozone.container.common.utils.ReferenceCountedDB) OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) Container(org.apache.hadoop.ozone.container.common.interfaces.Container) OzoneKeyDetails(org.apache.hadoop.ozone.client.OzoneKeyDetails) OzoneKey(org.apache.hadoop.ozone.client.OzoneKey) ContainerInfo(org.apache.hadoop.hdds.scm.container.ContainerInfo) Test(org.junit.Test)
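The test above sizes its read buffer at 100 bytes and trims the unused tail, which only works because the value is short. A minimal sketch of a more defensive read (readFully is a hypothetical helper, not part of the original test class): try-with-resources guarantees the stream is closed, and the loop handles the short reads that InputStream.read is allowed to return.

private static byte[] readFully(OzoneBucket bucket, String keyName, int length) throws IOException {
    byte[] buffer = new byte[length];
    try (OzoneInputStream is = bucket.readKey(keyName)) {
        int offset = 0;
        while (offset < length) {
            // read() may return fewer bytes than requested; keep reading until done
            int n = is.read(buffer, offset, length - offset);
            if (n < 0) {
                // end of stream reached before the expected length
                break;
            }
            offset += n;
        }
    }
    return buffer;
}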

Example 37 with OzoneInputStream

Use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

The class TestOzoneRpcClientAbstract, method testKeyReadWriteForGDPR.

/**
 * Tests GDPR encryption/decryption.
 * 1. Create GDPR Enabled bucket.
 * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
 * 3. Read key and validate the content/metadata is as expected because the
 * readKey will decrypt using the GDPR Symmetric Key with details from KeyInfo
 * Metadata.
 * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 * gdprEnabled flag.
 * 5. When we now read the key, {@link RpcClient} checks for GDPR Flag in
 * method createInputStream. If the gdprEnabled flag in metadata is set to
 * true, it decrypts using the GDPRSymmetricKey. Since we removed that flag
 * from metadata for this key, it will read the encrypted data as-is.
 * 6. Thus, when we compare this content with expected text, it should
 * not match as the decryption has not been performed.
 * @throws Exception
 */
@Test
public void testKeyReadWriteForGDPR() throws Exception {
    // Step 1
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String keyName = UUID.randomUUID().toString();
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    BucketArgs args = BucketArgs.newBuilder().addMetadata(OzoneConsts.GDPR_FLAG, "true").build();
    volume.createBucket(bucketName, args);
    OzoneBucket bucket = volume.getBucket(bucketName);
    Assert.assertEquals(bucketName, bucket.getName());
    Assert.assertNotNull(bucket.getMetadata());
    Assert.assertEquals("true", bucket.getMetadata().get(OzoneConsts.GDPR_FLAG));
    // Step 2
    String text = "hello world";
    Map<String, String> keyMetadata = new HashMap<>();
    keyMetadata.put(OzoneConsts.GDPR_FLAG, "true");
    OzoneOutputStream out = bucket.createKey(keyName, text.getBytes(UTF_8).length, RATIS, ONE, keyMetadata);
    out.write(text.getBytes(UTF_8));
    out.close();
    Assert.assertNull(keyMetadata.get(OzoneConsts.GDPR_SECRET));
    // Step 3
    OzoneKeyDetails key = bucket.getKey(keyName);
    Assert.assertEquals(keyName, key.getName());
    Assert.assertEquals("true", key.getMetadata().get(OzoneConsts.GDPR_FLAG));
    Assert.assertEquals("AES", key.getMetadata().get(OzoneConsts.GDPR_ALGORITHM));
    Assert.assertNotNull(key.getMetadata().get(OzoneConsts.GDPR_SECRET));
    OzoneInputStream is = bucket.readKey(keyName);
    byte[] fileContent = new byte[text.getBytes(UTF_8).length];
    is.read(fileContent);
    is.close();
    Assert.assertTrue(verifyRatisReplication(volumeName, bucketName, keyName, RATIS, ONE));
    Assert.assertEquals(text, new String(fileContent, UTF_8));
    // Step 4
    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
    OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable(getBucketLayout()).get(omMetadataManager.getOzoneKey(volumeName, bucketName, keyName));
    omKeyInfo.getMetadata().remove(OzoneConsts.GDPR_FLAG);
    omMetadataManager.getKeyTable(getBucketLayout()).put(omMetadataManager.getOzoneKey(volumeName, bucketName, keyName), omKeyInfo);
    // Step 5
    key = bucket.getKey(keyName);
    Assert.assertEquals(keyName, key.getName());
    Assert.assertNull(key.getMetadata().get(OzoneConsts.GDPR_FLAG));
    is = bucket.readKey(keyName);
    fileContent = new byte[text.getBytes(UTF_8).length];
    is.read(fileContent);
    is.close();
    // Step 6
    Assert.assertNotEquals(text, new String(fileContent, UTF_8));
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) OzoneKeyDetails(org.apache.hadoop.ozone.client.OzoneKeyDetails) LinkedHashMap(java.util.LinkedHashMap) HashMap(java.util.HashMap) BucketArgs(org.apache.hadoop.ozone.client.BucketArgs) OMMetadataManager(org.apache.hadoop.ozone.om.OMMetadataManager) OmKeyInfo(org.apache.hadoop.ozone.om.helpers.OmKeyInfo) RepeatedOmKeyInfo(org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) Test(org.junit.Test)
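Step 6 hinges on the client skipping decryption once the GDPR flag is gone from the key metadata. A plain-JCE sketch of that symmetric round trip, illustrative only and not Ozone's GDPRSymmetricKey implementation (it assumes javax.crypto.Cipher and javax.crypto.spec.SecretKeySpec imports and hard-codes a 16-byte key, where Ozone uses the GDPR secret stored in the key metadata):

@Test
public void gdprCipherSketch() throws Exception {
    // Illustrative stand-in: a fixed 16-byte key gives AES-128; Ozone builds
    // its cipher from the GDPR secret and algorithm kept in the key metadata.
    SecretKeySpec key = new SecretKeySpec("0123456789abcdef".getBytes(UTF_8), "AES");
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, key);
    byte[] encrypted = cipher.doFinal("hello world".getBytes(UTF_8));
    // Reading the stored bytes without the decrypt step (Steps 5-6) yields ciphertext:
    Assert.assertNotEquals("hello world", new String(encrypted, UTF_8));
    cipher.init(Cipher.DECRYPT_MODE, key);
    Assert.assertEquals("hello world", new String(cipher.doFinal(encrypted), UTF_8));
}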

Example 38 with OzoneInputStream

Use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

The class TestOzoneRpcClientAbstract, method testPutKeyRatisOneNode.

@Test
public void testPutKeyRatisOneNode() throws IOException {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    Instant testStartTime = Instant.now();
    String value = "sample value";
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    for (int i = 0; i < 10; i++) {
        String keyName = UUID.randomUUID().toString();
        OzoneOutputStream out = bucket.createKey(keyName, value.getBytes(UTF_8).length, ReplicationType.RATIS, ONE, new HashMap<>());
        out.write(value.getBytes(UTF_8));
        out.close();
        OzoneKey key = bucket.getKey(keyName);
        Assert.assertEquals(keyName, key.getName());
        OzoneInputStream is = bucket.readKey(keyName);
        byte[] fileContent = new byte[value.getBytes(UTF_8).length];
        is.read(fileContent);
        is.close();
        Assert.assertTrue(verifyRatisReplication(volumeName, bucketName, keyName, ReplicationType.RATIS, ONE));
        Assert.assertEquals(value, new String(fileContent, UTF_8));
        Assert.assertFalse(key.getCreationTime().isBefore(testStartTime));
        Assert.assertFalse(key.getModificationTime().isBefore(testStartTime));
    }
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) Instant(java.time.Instant) OzoneKey(org.apache.hadoop.ozone.client.OzoneKey) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) Test(org.junit.Test)

Example 39 with OzoneInputStream

Use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

The class TestOzoneRpcClientAbstract, method testPutKey.

@Test
public void testPutKey() throws IOException {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    Instant testStartTime = Instant.now();
    String value = "sample value";
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    for (int i = 0; i < 10; i++) {
        String keyName = UUID.randomUUID().toString();
        OzoneOutputStream out = bucket.createKey(keyName, value.getBytes(UTF_8).length, RATIS, ONE, new HashMap<>());
        out.write(value.getBytes(UTF_8));
        out.close();
        OzoneKey key = bucket.getKey(keyName);
        Assert.assertEquals(keyName, key.getName());
        OzoneInputStream is = bucket.readKey(keyName);
        byte[] fileContent = new byte[value.getBytes(UTF_8).length];
        is.read(fileContent);
        is.close();
        Assert.assertTrue(verifyRatisReplication(volumeName, bucketName, keyName, RATIS, ONE));
        Assert.assertEquals(value, new String(fileContent, UTF_8));
        Assert.assertFalse(key.getCreationTime().isBefore(testStartTime));
        Assert.assertFalse(key.getModificationTime().isBefore(testStartTime));
    }
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) Instant(java.time.Instant) OzoneKey(org.apache.hadoop.ozone.client.OzoneKey) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) Test(org.junit.Test)
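testPutKey and testPutKeyRatisOneNode differ only in how the RATIS constants are referenced. A condensed sketch of their shared write/read/verify pattern (assuming the same bucket fixture and static imports as the tests above), using try-with-resources so both streams close even if an assertion throws:

String keyName = UUID.randomUUID().toString();
byte[] data = "sample value".getBytes(UTF_8);
try (OzoneOutputStream out = bucket.createKey(keyName, data.length, ReplicationType.RATIS, ONE, new HashMap<>())) {
    out.write(data);
}
byte[] fileContent = new byte[data.length];
try (OzoneInputStream is = bucket.readKey(keyName)) {
    is.read(fileContent);
}
Assert.assertArrayEquals(data, fileContent);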

Example 40 with OzoneInputStream

Use of org.apache.hadoop.ozone.client.io.OzoneInputStream in project ozone by apache.

The class TestOzoneRpcClientAbstract, method readKey.

private void readKey(OzoneBucket bucket, String keyName, String data) throws IOException {
    OzoneKey key = bucket.getKey(keyName);
    Assert.assertEquals(keyName, key.getName());
    OzoneInputStream is = bucket.readKey(keyName);
    byte[] fileContent = new byte[data.getBytes(UTF_8).length];
    is.read(fileContent);
    is.close();
}
Also used : OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream) OzoneKey(org.apache.hadoop.ozone.client.OzoneKey)
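This helper only exercises the read path and discards what it reads. A hypothetical companion (readKeyAndVerify is not in the original class) that also checks the content; for keys larger than these short test values, a read loop like the one sketched under Example 36 would be safer than a single read():

private void readKeyAndVerify(OzoneBucket bucket, String keyName, String data) throws IOException {
    byte[] expected = data.getBytes(UTF_8);
    byte[] fileContent = new byte[expected.length];
    try (OzoneInputStream is = bucket.readKey(keyName)) {
        // a single read() is adequate for the short values these tests write
        is.read(fileContent);
    }
    Assert.assertArrayEquals(expected, fileContent);
}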

Aggregations

OzoneInputStream (org.apache.hadoop.ozone.client.io.OzoneInputStream): 47 uses
OzoneOutputStream (org.apache.hadoop.ozone.client.io.OzoneOutputStream): 33 uses
OzoneBucket (org.apache.hadoop.ozone.client.OzoneBucket): 26 uses
Test (org.junit.Test): 26 uses
OzoneVolume (org.apache.hadoop.ozone.client.OzoneVolume): 22 uses
OzoneKey (org.apache.hadoop.ozone.client.OzoneKey): 17 uses
IOException (java.io.IOException): 15 uses
OzoneKeyDetails (org.apache.hadoop.ozone.client.OzoneKeyDetails): 13 uses
Instant (java.time.Instant): 12 uses
HashMap (java.util.HashMap): 11 uses
LinkedHashMap (java.util.LinkedHashMap): 10 uses
HddsDatanodeService (org.apache.hadoop.ozone.HddsDatanodeService): 8 uses
ArrayList (java.util.ArrayList): 7 uses
OMException (org.apache.hadoop.ozone.om.exceptions.OMException): 7 uses
OmKeyArgs (org.apache.hadoop.ozone.om.helpers.OmKeyArgs): 7 uses
OmKeyInfo (org.apache.hadoop.ozone.om.helpers.OmKeyInfo): 7 uses
OmKeyLocationInfo (org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo): 6 uses
RepeatedOmKeyInfo (org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo): 6 uses
File (java.io.File): 5 uses
HttpHeaders (javax.ws.rs.core.HttpHeaders): 5 uses