
Example 6 with Flaky

Use of org.apache.ozone.test.tag.Flaky in the Apache Ozone project.

From the class TestRootedOzoneFileSystem, method testRenameToTrashEnabled.

/**
 * Check that files are moved to trash, since trash is enabled, via
 * fs.rename(src, dst, options).
 */
@Test
@Flaky({ "HDDS-5819", "HDDS-6451" })
public void testRenameToTrashEnabled() throws IOException {
    // Create a file
    String testKeyName = "testKey2";
    Path path = new Path(bucketPath, testKeyName);
    try (FSDataOutputStream stream = fs.create(path)) {
        stream.write(1);
    }
    // Call moveToTrash. We can't call protected fs.rename() directly
    trash.moveToTrash(path);
    // Construct paths
    String username = UserGroupInformation.getCurrentUser().getShortUserName();
    Path trashRoot = new Path(bucketPath, TRASH_PREFIX);
    Path userTrash = new Path(trashRoot, username);
    Path userTrashCurrent = new Path(userTrash, "Current");
    String key = path.toString().substring(1);
    Path trashPath = new Path(userTrashCurrent, key);
    // The trash Current directory should have been created.
    Assert.assertTrue(ofs.exists(userTrashCurrent));
    // Check under trash, the key should be present
    Assert.assertTrue(ofs.exists(trashPath));
    // Cleanup
    ofs.delete(trashRoot, true);
}
Also used: Path(org.apache.hadoop.fs.Path), OFSPath(org.apache.hadoop.ozone.OFSPath), FSDataOutputStream(org.apache.hadoop.fs.FSDataOutputStream), Test(org.junit.Test), Flaky(org.apache.ozone.test.tag.Flaky)
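The @Flaky tag above attaches JIRA issue IDs to the test so a runner can identify known-flaky cases. The actual annotation definition is not shown on this page, but a minimal stand-in (hypothetical names, stdlib reflection only) could look like:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of a Flaky-style marker annotation. The real
// org.apache.ozone.test.tag.Flaky is not reproduced here; this only shows
// how such an annotation can carry JIRA issue IDs readable via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Flaky {
    String[] value();
}

public class FlakySketch {

    @Flaky({"HDDS-5819", "HDDS-6451"})
    void testRenameToTrashEnabled() {
        // test body omitted
    }

    // Read the issue IDs attached to the annotated test method.
    static String[] flakyIssues() {
        try {
            return FlakySketch.class.getDeclaredMethod("testRenameToTrashEnabled")
                    .getAnnotation(Flaky.class).value();
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // A test runner could use these IDs to retry, skip, or report the test.
        System.out.println(String.join(",", flakyIssues())); // prints "HDDS-5819,HDDS-6451"
    }
}
```

Because the annotation is retained at runtime, build tooling can also filter on it, which is presumably why the aggregation below counts 16 usages across the test suite.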

Example 7 with Flaky

Use of org.apache.ozone.test.tag.Flaky in the Apache Ozone project.

From the class TestContainerStateMachineFailures, method testApplyTransactionIdempotencyWithClosedContainer.

@Test
@Flaky("HDDS-6115")
public void testApplyTransactionIdempotencyWithClosedContainer() throws Exception {
    OzoneOutputStream key = objectStore.getVolume(volumeName)
            .getBucket(bucketName)
            .createKey("ratis", 1024, ReplicationType.RATIS, ReplicationFactor.ONE, new HashMap<>());
    // First write and flush creates a container in the datanode
    key.write("ratis".getBytes(UTF_8));
    key.flush();
    key.write("ratis".getBytes(UTF_8));
    KeyOutputStream groupOutputStream = (KeyOutputStream) key.getOutputStream();
    List<OmKeyLocationInfo> locationInfoList = groupOutputStream.getLocationInfoList();
    Assert.assertEquals(1, locationInfoList.size());
    OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
    HddsDatanodeService dn = TestHelper.getDatanodeService(omKeyLocationInfo, cluster);
    ContainerData containerData = dn.getDatanodeStateMachine().getContainer().getContainerSet().getContainer(omKeyLocationInfo.getContainerID()).getContainerData();
    Assert.assertTrue(containerData instanceof KeyValueContainerData);
    key.close();
    ContainerStateMachine stateMachine = (ContainerStateMachine) TestHelper.getStateMachine(dn, omKeyLocationInfo.getPipeline());
    SimpleStateMachineStorage storage = (SimpleStateMachineStorage) stateMachine.getStateMachineStorage();
    Path parentPath = storage.findLatestSnapshot().getFile().getPath();
    stateMachine.takeSnapshot();
    Assert.assertTrue(parentPath.getParent().toFile().listFiles().length > 0);
    FileInfo snapshot = storage.findLatestSnapshot().getFile();
    Assert.assertNotNull(snapshot);
    long containerID = omKeyLocationInfo.getContainerID();
    Pipeline pipeline = cluster.getStorageContainerLocationClient().getContainerWithPipeline(containerID).getPipeline();
    XceiverClientSpi xceiverClient = xceiverClientManager.acquireClient(pipeline);
    ContainerProtos.ContainerCommandRequestProto.Builder request = ContainerProtos.ContainerCommandRequestProto.newBuilder();
    request.setDatanodeUuid(pipeline.getFirstNode().getUuidString());
    request.setCmdType(ContainerProtos.Type.CloseContainer);
    request.setContainerID(containerID);
    request.setCloseContainer(ContainerProtos.CloseContainerRequestProto.getDefaultInstance());
    try {
        xceiverClient.sendCommand(request.build());
    } catch (IOException e) {
        Assert.fail("Exception should not be thrown");
    }
    Assert.assertEquals(ContainerProtos.ContainerDataProto.State.CLOSED,
            TestHelper.getDatanodeService(omKeyLocationInfo, cluster).getDatanodeStateMachine()
                    .getContainer().getContainerSet().getContainer(containerID).getContainerState());
    Assert.assertTrue(stateMachine.isStateMachineHealthy());
    try {
        stateMachine.takeSnapshot();
    } catch (IOException ioe) {
        Assert.fail("Exception should not be thrown");
    }
    FileInfo latestSnapshot = storage.findLatestSnapshot().getFile();
    Assert.assertFalse(snapshot.getPath().equals(latestSnapshot.getPath()));
}
Also used: Path(java.nio.file.Path), OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream), HddsDatanodeService(org.apache.hadoop.ozone.HddsDatanodeService), IOException(java.io.IOException), XceiverClientSpi(org.apache.hadoop.hdds.scm.XceiverClientSpi), OmKeyLocationInfo(org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo), KeyValueContainerData(org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData), Pipeline(org.apache.hadoop.hdds.scm.pipeline.Pipeline), ContainerStateMachine(org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine), FileInfo(org.apache.ratis.server.storage.FileInfo), SimpleStateMachineStorage(org.apache.ratis.statemachine.impl.SimpleStateMachineStorage), KeyOutputStream(org.apache.hadoop.ozone.client.io.KeyOutputStream), ContainerData(org.apache.hadoop.ozone.container.common.impl.ContainerData), Test(org.junit.jupiter.api.Test), Flaky(org.apache.ozone.test.tag.Flaky)
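The path comparison at the end of this test works because the state machine writes each snapshot to a file derived from the Raft term and applied log index, so a snapshot taken after the close-container transaction lands in a new file. A tiny sketch of that naming idea (the exact Ratis file-name scheme is an assumption here, not taken from this page):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SnapshotNaming {

    // Illustrative sketch: Ratis' SimpleStateMachineStorage names snapshot
    // files by term and log index (the exact scheme is assumed here). Because
    // the close-container transaction advances the applied index, the second
    // takeSnapshot() produces a different file than the first.
    static Path snapshotFile(Path stateMachineDir, long term, long index) {
        return stateMachineDir.resolve("snapshot." + term + "_" + index);
    }

    public static void main(String[] args) {
        Path before = snapshotFile(Paths.get("sm"), 1, 10);
        Path after = snapshotFile(Paths.get("sm"), 1, 42); // index advanced by later transactions
        System.out.println(before.equals(after)); // prints "false"
    }
}
```

This is why the test can assert idempotency simply by checking that snapshot.getPath() and latestSnapshot.getPath() differ after the second snapshot.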

Example 8 with Flaky

Use of org.apache.ozone.test.tag.Flaky in the Apache Ozone project.

From the class TestRandomKeyGenerator, method cleanObjectsTest.

@Test
@Flaky("HDDS-5993")
void cleanObjectsTest() {
    RandomKeyGenerator randomKeyGenerator = new RandomKeyGenerator(cluster.getConf());
    CommandLine cmd = new CommandLine(randomKeyGenerator);
    cmd.execute("--num-of-volumes", "2",
            "--num-of-buckets", "5",
            "--num-of-keys", "10",
            "--num-of-threads", "10",
            "--factor", "THREE",
            "--type", "RATIS",
            "--clean-objects");
    Assert.assertEquals(2, randomKeyGenerator.getNumberOfVolumesCreated());
    Assert.assertEquals(10, randomKeyGenerator.getNumberOfBucketsCreated());
    Assert.assertEquals(100, randomKeyGenerator.getNumberOfKeysAdded());
    Assert.assertEquals(2, randomKeyGenerator.getNumberOfVolumesCleaned());
    Assert.assertEquals(10, randomKeyGenerator.getNumberOfBucketsCleaned());
}
Also used: CommandLine(picocli.CommandLine), Test(org.junit.jupiter.api.Test), Flaky(org.apache.ozone.test.tag.Flaky)
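The expected counts follow directly from the CLI options: 2 volumes × 5 buckets per volume = 10 buckets, and 10 buckets × 10 keys per bucket = 100 keys. A trivial sketch of that arithmetic:

```java
public class KeyGenCounts {

    // Buckets are created per volume, keys per bucket, so totals multiply out.
    static int totalBuckets(int volumes, int bucketsPerVolume) {
        return volumes * bucketsPerVolume;
    }

    static int totalKeys(int volumes, int bucketsPerVolume, int keysPerBucket) {
        return totalBuckets(volumes, bucketsPerVolume) * keysPerBucket;
    }

    public static void main(String[] args) {
        // Mirrors --num-of-volumes 2 --num-of-buckets 5 --num-of-keys 10
        System.out.println(totalBuckets(2, 5));  // prints "10"
        System.out.println(totalKeys(2, 5, 10)); // prints "100"
    }
}
```

This is why the test asserts 2 volumes created, 10 buckets created, and 100 keys added, and (with --clean-objects) the same 2 volumes and 10 buckets cleaned.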

Example 9 with Flaky

Use of org.apache.ozone.test.tag.Flaky in the Apache Ozone project.

From the class TestOzoneRpcClientAbstract, method testZReadKeyWithUnhealthyContainerReplica.

// Run this test last (hence the Z in the name), since it has side effects on other unit tests
@Test
@Flaky("HDDS-6151")
public void testZReadKeyWithUnhealthyContainerReplica() throws Exception {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String value = "sample value";
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    String keyName1 = UUID.randomUUID().toString();
    // Write first key
    OzoneOutputStream out = bucket.createKey(keyName1, value.getBytes(UTF_8).length, ReplicationType.RATIS, THREE, new HashMap<>());
    out.write(value.getBytes(UTF_8));
    out.close();
    // Write second key
    String keyName2 = UUID.randomUUID().toString();
    value = "unhealthy container replica";
    out = bucket.createKey(keyName2, value.getBytes(UTF_8).length, ReplicationType.RATIS, THREE, new HashMap<>());
    out.write(value.getBytes(UTF_8));
    out.close();
    // Find container ID
    OzoneKey key = bucket.getKey(keyName2);
    long containerID = ((OzoneKeyDetails) key).getOzoneKeyLocations().get(0).getContainerID();
    // Set container replica to UNHEALTHY
    Container container;
    int index = 1;
    List<HddsDatanodeService> involvedDNs = new ArrayList<>();
    for (HddsDatanodeService hddsDatanode : cluster.getHddsDatanodes()) {
        container = hddsDatanode.getDatanodeStateMachine().getContainer().getContainerSet().getContainer(containerID);
        if (container == null) {
            continue;
        }
        container.markContainerUnhealthy();
        // Change first and second replica commit sequenceId
        if (index < 3) {
            long newBCSID = container.getBlockCommitSequenceId() - 1;
            KeyValueContainerData cData = (KeyValueContainerData) container.getContainerData();
            try (DBHandle db = BlockUtils.getDB(cData, cluster.getConf())) {
                db.getStore().getMetadataTable().put(cData.bcsIdKey(), newBCSID);
            }
            container.updateBlockCommitSequenceId(newBCSID);
            index++;
        }
        involvedDNs.add(hddsDatanode);
    }
    // Restart DNs
    int dnCount = involvedDNs.size();
    for (index = 0; index < dnCount; index++) {
        if (index == dnCount - 1) {
            cluster.restartHddsDatanode(involvedDNs.get(index).getDatanodeDetails(), true);
        } else {
            cluster.restartHddsDatanode(involvedDNs.get(index).getDatanodeDetails(), false);
        }
    }
    StorageContainerManager scm = cluster.getStorageContainerManager();
    GenericTestUtils.waitFor(() -> {
        try {
            ContainerInfo containerInfo = scm.getContainerInfo(containerID);
            System.out.println("state " + containerInfo.getState());
            return containerInfo.getState() == HddsProtos.LifeCycleState.CLOSING;
        } catch (IOException e) {
            fail("Failed to get container info: " + e.getMessage());
            return false;
        }
    }, 1000, 10000);
    // Try reading keyName2
    try {
        GenericTestUtils.setLogLevel(XceiverClientGrpc.getLogger(), DEBUG);
        OzoneInputStream is = bucket.readKey(keyName2);
        byte[] content = new byte[100];
        is.read(content);
        String retValue = new String(content, UTF_8);
        Assert.assertEquals(value, retValue.trim());
    } catch (IOException e) {
        fail("Reading unhealthy replica should succeed.");
    }
}
Also used: OzoneInputStream(org.apache.hadoop.ozone.client.io.OzoneInputStream), StorageContainerManager(org.apache.hadoop.hdds.scm.server.StorageContainerManager), LinkedHashMap(java.util.LinkedHashMap), HashMap(java.util.HashMap), ArrayList(java.util.ArrayList), OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream), HddsDatanodeService(org.apache.hadoop.ozone.HddsDatanodeService), IOException(java.io.IOException), KeyValueContainerData(org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData), OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume), OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket), Container(org.apache.hadoop.ozone.container.common.interfaces.Container), OzoneKeyDetails(org.apache.hadoop.ozone.client.OzoneKeyDetails), DBHandle(org.apache.hadoop.ozone.container.common.interfaces.DBHandle), OzoneKey(org.apache.hadoop.ozone.client.OzoneKey), ContainerInfo(org.apache.hadoop.hdds.scm.container.ContainerInfo), ParameterizedTest(org.junit.jupiter.params.ParameterizedTest), Test(org.junit.jupiter.api.Test), Flaky(org.apache.ozone.test.tag.Flaky)
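GenericTestUtils.waitFor(check, intervalMillis, timeoutMillis), used above to wait for the container to reach CLOSING, polls a condition until it holds or the timeout expires. A minimal self-contained sketch of such a helper (illustrative, not the actual Hadoop implementation):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {

    // Poll `check` every intervalMillis until it returns true, failing with an
    // exception if timeoutMillis elapses first. This mirrors the shape of
    // GenericTestUtils.waitFor without depending on Hadoop test utilities.
    static void waitFor(BooleanSupplier check, long intervalMillis, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Timed out waiting for condition");
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting", e);
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 50 ms; poll every 10 ms,
        // give up after 1 second.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println("condition met");
    }
}
```

Polling with a bounded timeout, as in the test's 1000 ms interval and 10000 ms limit, is what keeps a state-transition check like this from hanging forever when the cluster misbehaves.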

Example 10 with Flaky

Use of org.apache.ozone.test.tag.Flaky in the Apache Ozone project.

From the class TestFailureHandlingByClient, method testDatanodeExclusionWithMajorityCommit.

@Test
@Flaky("HDDS-3298")
public void testDatanodeExclusionWithMajorityCommit() throws Exception {
    startCluster();
    String keyName = UUID.randomUUID().toString();
    OzoneOutputStream key = createKey(keyName, ReplicationType.RATIS, blockSize);
    String data = ContainerTestHelper.getFixedLengthString(keyString, chunkSize);
    // get the name of a valid container
    Assert.assertTrue(key.getOutputStream() instanceof KeyOutputStream);
    KeyOutputStream keyOutputStream = (KeyOutputStream) key.getOutputStream();
    List<BlockOutputStreamEntry> streamEntryList = keyOutputStream.getStreamEntries();
    // Assert that 1 block will be preallocated
    Assert.assertEquals(1, streamEntryList.size());
    key.write(data.getBytes(UTF_8));
    key.flush();
    long containerId = streamEntryList.get(0).getBlockID().getContainerID();
    BlockID blockId = streamEntryList.get(0).getBlockID();
    ContainerInfo container = cluster.getStorageContainerManager().getContainerManager().getContainer(ContainerID.valueOf(containerId));
    Pipeline pipeline = cluster.getStorageContainerManager().getPipelineManager().getPipeline(container.getPipelineID());
    List<DatanodeDetails> datanodes = pipeline.getNodes();
    // Shut down 1 datanode. This ensures the 2-way (majority) commit happens
    // for subsequent write ops.
    cluster.shutdownHddsDatanode(datanodes.get(0));
    key.write(data.getBytes(UTF_8));
    key.write(data.getBytes(UTF_8));
    key.flush();
    Assert.assertTrue(keyOutputStream.getExcludeList().getDatanodes().contains(datanodes.get(0)));
    Assert.assertTrue(keyOutputStream.getExcludeList().getContainerIds().isEmpty());
    Assert.assertTrue(keyOutputStream.getExcludeList().getPipelineIds().isEmpty());
    // The close will just write to the buffer
    key.close();
    OmKeyArgs keyArgs = new OmKeyArgs.Builder()
            .setVolumeName(volumeName)
            .setBucketName(bucketName)
            .setReplicationConfig(RatisReplicationConfig.getInstance(THREE))
            .setKeyName(keyName)
            .setRefreshPipeline(true)
            .build();
    OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
    // Make sure a new block is written
    Assert.assertNotEquals(keyInfo.getLatestVersionLocations().getBlocksLatestVersionOnly().get(0).getBlockID(), blockId);
    Assert.assertEquals(3 * data.getBytes(UTF_8).length, keyInfo.getDataSize());
    validateData(keyName, data.concat(data).concat(data).getBytes(UTF_8));
}
Also used: OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream), OmKeyArgs(org.apache.hadoop.ozone.om.helpers.OmKeyArgs), Pipeline(org.apache.hadoop.hdds.scm.pipeline.Pipeline), BlockOutputStreamEntry(org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry), DatanodeDetails(org.apache.hadoop.hdds.protocol.DatanodeDetails), ContainerInfo(org.apache.hadoop.hdds.scm.container.ContainerInfo), OmKeyInfo(org.apache.hadoop.ozone.om.helpers.OmKeyInfo), BlockID(org.apache.hadoop.hdds.client.BlockID), KeyOutputStream(org.apache.hadoop.ozone.client.io.KeyOutputStream), Test(org.junit.jupiter.api.Test), Flaky(org.apache.ozone.test.tag.Flaky)
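The three exclude-list assertions check that after a majority (2-of-3) commit, only the dead datanode is excluded while the container and pipeline remain usable for new block allocations. A minimal stand-in for such an exclude-list structure (illustrative names, not Ozone's actual ExcludeList API):

```java
import java.util.HashSet;
import java.util.Set;

public class ExcludeListSketch {

    // Illustrative stand-in for a client-side exclude list: it tracks
    // datanodes, container IDs, and pipeline IDs that saw failures, so the
    // client can ask for new blocks that avoid them.
    private final Set<String> datanodes = new HashSet<>();
    private final Set<Long> containerIds = new HashSet<>();
    private final Set<String> pipelineIds = new HashSet<>();

    void addDatanode(String uuid) { datanodes.add(uuid); }
    Set<String> getDatanodes() { return datanodes; }
    Set<Long> getContainerIds() { return containerIds; }
    Set<String> getPipelineIds() { return pipelineIds; }

    public static void main(String[] args) {
        ExcludeListSketch excludeList = new ExcludeListSketch();
        // With a majority commit, only the unresponsive datanode is excluded;
        // the container and pipeline stay usable, matching the test's asserts.
        excludeList.addDatanode("dn-0");
        System.out.println(excludeList.getDatanodes().contains("dn-0")
                && excludeList.getContainerIds().isEmpty()
                && excludeList.getPipelineIds().isEmpty()); // prints "true"
    }
}
```

Had the whole pipeline failed instead of a single node, one would expect the pipeline (and possibly container) sets to be populated as well, which is exactly what the empty-set assertions rule out.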

Aggregations

Flaky (org.apache.ozone.test.tag.Flaky): 16
Test (org.junit.jupiter.api.Test): 13
IOException (java.io.IOException): 4
Pipeline (org.apache.hadoop.hdds.scm.pipeline.Pipeline): 4
OzoneOutputStream (org.apache.hadoop.ozone.client.io.OzoneOutputStream): 4
Path (org.apache.hadoop.fs.Path): 3
DatanodeDetails (org.apache.hadoop.hdds.protocol.DatanodeDetails): 3
ContainerInfo (org.apache.hadoop.hdds.scm.container.ContainerInfo): 3
StorageContainerManager (org.apache.hadoop.hdds.scm.server.StorageContainerManager): 3
ObjectStore (org.apache.hadoop.ozone.client.ObjectStore): 3
OzoneVolume (org.apache.hadoop.ozone.client.OzoneVolume): 3
KeyOutputStream (org.apache.hadoop.ozone.client.io.KeyOutputStream): 3
Test (org.junit.Test): 3
ArrayList (java.util.ArrayList): 2
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 2
ContainerWithPipeline (org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline): 2
HddsDatanodeService (org.apache.hadoop.ozone.HddsDatanodeService): 2
OzoneBucket (org.apache.hadoop.ozone.client.OzoneBucket): 2
VolumeArgs (org.apache.hadoop.ozone.client.VolumeArgs): 2
KeyValueContainerData (org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData): 2