
Example 1 with OMVolumeCreateResponse

Use of org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse in project ozone by apache.

From class TestOzoneManagerDoubleBufferWithOMResponse, method testDoubleBufferWithMixOfTransactions.

/**
 * This test first creates a volume, then performs a mix of transactions
 * (bucket creates and deletes) and adds them to the double buffer. It then
 * verifies that the OM DB entries match the responses that were added to
 * the double buffer.
 */
@Test
public void testDoubleBufferWithMixOfTransactions() throws Exception {
    // This test checks that the row counts and data in the tables are correct.
    Queue<OMBucketCreateResponse> bucketQueue = new ConcurrentLinkedQueue<>();
    Queue<OMBucketDeleteResponse> deleteBucketQueue = new ConcurrentLinkedQueue<>();
    String volumeName = UUID.randomUUID().toString();
    OMVolumeCreateResponse omVolumeCreateResponse = (OMVolumeCreateResponse) createVolume(volumeName, trxId.incrementAndGet());
    int bucketCount = 10;
    doMixTransactions(volumeName, bucketCount, deleteBucketQueue, bucketQueue);
    // doMixTransactions adds one bucket-delete transaction for every two
    // bucket-create transactions, so 10 creates produce 5 deletes.
    final int deleteCount = 5;
    // The +1 accounts for the volume-create transaction: 10 + 5 + 1 = 16 flushed transactions.
    GenericTestUtils.waitFor(() -> doubleBuffer.getFlushedTransactionCount() == (bucketCount + deleteCount + 1), 100, 120000);
    Assert.assertEquals(1, omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable()));
    Assert.assertEquals(5, omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable()));
    // At this point the DB should contain one volume and 5 buckets (10 created, 5 deleted).
    checkVolume(volumeName, omVolumeCreateResponse);
    checkCreateBuckets(bucketQueue);
    checkDeletedBuckets(deleteBucketQueue);
    // Check that lastAppliedIndex has been updated to the total transaction count.
    GenericTestUtils.waitFor(() -> bucketCount + deleteCount + 1 == lastAppliedIndex, 100, 30000);
    TransactionInfo transactionInfo = omMetadataManager.getTransactionInfoTable().get(TRANSACTION_INFO_KEY);
    assertNotNull(transactionInfo);
    Assert.assertEquals(lastAppliedIndex, transactionInfo.getTransactionIndex());
    Assert.assertEquals(term, transactionInfo.getTerm());
}
Also used: OMBucketDeleteResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketDeleteResponse), OMBucketCreateResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse), TransactionInfo (org.apache.hadoop.hdds.utils.TransactionInfo), ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue), OMVolumeCreateResponse (org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse), Test (org.junit.Test)
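
The doMixTransactions helper called above is not shown in this excerpt. A minimal sketch of the shape implied by the comments (one bucket-delete transaction for every two bucket-create transactions) could look like the following; createBucket and deleteBucket are hypothetical helper names standing in for the test's real ones, and are assumed to add their responses to the double buffer before returning them.

private void doMixTransactionsSketch(String volumeName, int bucketCount, Queue<OMBucketDeleteResponse> deleteBucketQueue, Queue<OMBucketCreateResponse> bucketQueue) {
    for (int i = 0; i < bucketCount; i++) {
        String bucketName = UUID.randomUUID().toString();
        // Hypothetical helper: builds an OMBucketCreateResponse, adds it to the
        // double buffer, and returns it.
        OMBucketCreateResponse createResponse = createBucket(volumeName, bucketName, trxId.incrementAndGet());
        bucketQueue.add(createResponse);
        if (i % 2 == 0) {
            // Hypothetical helper: builds an OMBucketDeleteResponse for the bucket
            // just created, adds it to the double buffer, and returns it.
            OMBucketDeleteResponse deleteResponse = deleteBucket(volumeName, bucketName, trxId.incrementAndGet());
            deleteBucketQueue.add(deleteResponse);
        }
    }
}

With bucketCount = 10 this yields 10 creates and 5 deletes, which matches the expected flushed-transaction count of 16 (including the volume create) and the 5 remaining bucket rows asserted above.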

Example 2 with OMVolumeCreateResponse

Use of org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse in project ozone by apache.

From class TestOMVolumeCreateRequest, method testValidateAndUpdateCacheWithZeroMaxUserVolumeCount.

@Test
public void testValidateAndUpdateCacheWithZeroMaxUserVolumeCount() throws Exception {
    when(ozoneManager.getMaxUserVolumeCount()).thenReturn(0L);
    String volumeName = UUID.randomUUID().toString();
    String adminName = "user1";
    String ownerName = "user1";
    long txLogIndex = 1;
    long expectedObjId = ozoneManager.getObjectIdFromTxId(txLogIndex);
    OMRequest originalRequest = createVolumeRequest(volumeName, adminName, ownerName);
    OMVolumeCreateRequest omVolumeCreateRequest = new OMVolumeCreateRequest(originalRequest);
    omVolumeCreateRequest.preExecute(ozoneManager);
    try {
        OMClientResponse omClientResponse = omVolumeCreateRequest.validateAndUpdateCache(ozoneManager, txLogIndex, ozoneManagerDoubleBufferHelper);
        Assert.assertTrue(omClientResponse instanceof OMVolumeCreateResponse);
        OMVolumeCreateResponse response = (OMVolumeCreateResponse) omClientResponse;
        Assert.assertEquals(expectedObjId, response.getOmVolumeArgs().getObjectID());
        Assert.assertEquals(txLogIndex, response.getOmVolumeArgs().getUpdateID());
    } catch (IllegalArgumentException ex) {
        GenericTestUtils.assertExceptionContains("should be greater than zero", ex);
    }
}
Also used: OMRequest (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest), OMClientResponse (org.apache.hadoop.ozone.om.response.OMClientResponse), OMVolumeCreateResponse (org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse), Test (org.junit.Test)
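
The createVolumeRequest helper used above is also not shown in this excerpt. A minimal sketch, assuming the standard OzoneManagerProtocolProtos builders for VolumeInfo, CreateVolumeRequest, and OMRequest (the real test helper may set additional fields), could be:

private OMRequest createVolumeRequest(String volumeName, String adminName, String ownerName) {
    // Build the VolumeInfo payload for the create-volume request.
    VolumeInfo volumeInfo = VolumeInfo.newBuilder()
        .setVolume(volumeName)
        .setAdminName(adminName)
        .setOwnerName(ownerName)
        .build();
    // Wrap it in an OMRequest carrying the CreateVolume command type.
    return OMRequest.newBuilder()
        .setCmdType(OzoneManagerProtocolProtos.Type.CreateVolume)
        .setClientId(UUID.randomUUID().toString())
        .setCreateVolumeRequest(CreateVolumeRequest.newBuilder()
            .setVolumeInfo(volumeInfo)
            .build())
        .build();
}

The request built this way is what OMVolumeCreateRequest later unpacks in validateAndUpdateCache via getOmRequest().getCreateVolumeRequest().getVolumeInfo(), as shown in Example 3 below.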

Example 3 with OMVolumeCreateResponse

Use of org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse in project ozone by apache.

From class OMVolumeCreateRequest, method validateAndUpdateCache.

@Override
public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, long transactionLogIndex, OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper) {
    CreateVolumeRequest createVolumeRequest = getOmRequest().getCreateVolumeRequest();
    Preconditions.checkNotNull(createVolumeRequest);
    VolumeInfo volumeInfo = createVolumeRequest.getVolumeInfo();
    OMMetrics omMetrics = ozoneManager.getMetrics();
    omMetrics.incNumVolumeCreates();
    String volume = volumeInfo.getVolume();
    String owner = volumeInfo.getOwnerName();
    OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(getOmRequest());
    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
    // Doing this here, so we can do protobuf conversion outside of lock.
    boolean acquiredVolumeLock = false;
    boolean acquiredUserLock = false;
    IOException exception = null;
    OMClientResponse omClientResponse = null;
    OmVolumeArgs omVolumeArgs = null;
    Map<String, String> auditMap = null;
    try {
        omVolumeArgs = OmVolumeArgs.getFromProtobuf(volumeInfo);
        // When a volume is created, both the object ID and the update ID are
        // set. The object ID never changes, but the update ID is set to the
        // transaction ID each time the object is updated.
        omVolumeArgs.setObjectID(ozoneManager.getObjectIdFromTxId(transactionLogIndex));
        omVolumeArgs.setUpdateID(transactionLogIndex, ozoneManager.isRatisEnabled());
        auditMap = omVolumeArgs.toAuditMap();
        // check acl
        if (ozoneManager.getAclsEnabled()) {
            checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME, OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.CREATE, volume, null, null);
        }
        // acquire lock.
        acquiredVolumeLock = omMetadataManager.getLock().acquireWriteLock(VOLUME_LOCK, volume);
        acquiredUserLock = omMetadataManager.getLock().acquireWriteLock(USER_LOCK, owner);
        String dbVolumeKey = omMetadataManager.getVolumeKey(volume);
        PersistedUserVolumeInfo volumeList = null;
        if (omMetadataManager.getVolumeTable().isExist(dbVolumeKey)) {
            LOG.debug("volume:{} already exists", omVolumeArgs.getVolume());
            throw new OMException("Volume already exists", OMException.ResultCodes.VOLUME_ALREADY_EXISTS);
        } else {
            String dbUserKey = omMetadataManager.getUserKey(owner);
            volumeList = omMetadataManager.getUserTable().get(dbUserKey);
            volumeList = addVolumeToOwnerList(volumeList, volume, owner, ozoneManager.getMaxUserVolumeCount(), transactionLogIndex);
            createVolume(omMetadataManager, omVolumeArgs, volumeList, dbVolumeKey, dbUserKey, transactionLogIndex);
            omResponse.setCreateVolumeResponse(CreateVolumeResponse.newBuilder().build());
            omClientResponse = new OMVolumeCreateResponse(omResponse.build(), omVolumeArgs, volumeList);
            LOG.debug("volume:{} successfully created", omVolumeArgs.getVolume());
        }
    } catch (IOException ex) {
        exception = ex;
        omClientResponse = new OMVolumeCreateResponse(createErrorOMResponse(omResponse, exception));
    } finally {
        if (omClientResponse != null) {
            omClientResponse.setFlushFuture(ozoneManagerDoubleBufferHelper.add(omClientResponse, transactionLogIndex));
        }
        if (acquiredUserLock) {
            omMetadataManager.getLock().releaseWriteLock(USER_LOCK, owner);
        }
        if (acquiredVolumeLock) {
            omMetadataManager.getLock().releaseWriteLock(VOLUME_LOCK, volume);
        }
    }
    // Performing audit logging outside of the lock.
    auditLog(ozoneManager.getAuditLogger(), buildAuditMessage(OMAction.CREATE_VOLUME, auditMap, exception, getOmRequest().getUserInfo()));
    // return response after releasing lock.
    if (exception == null) {
        LOG.info("created volume:{} for user:{}", volume, owner);
        omMetrics.incNumVolumes();
    } else {
        LOG.error("Volume creation failed for user:{} volume:{}", owner, volume, exception);
        omMetrics.incNumVolumeCreateFails();
    }
    return omClientResponse;
}
Also used: CreateVolumeRequest (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.CreateVolumeRequest), OMClientResponse (org.apache.hadoop.ozone.om.response.OMClientResponse), OmVolumeArgs (org.apache.hadoop.ozone.om.helpers.OmVolumeArgs), VolumeInfo (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeInfo), PersistedUserVolumeInfo (org.apache.hadoop.ozone.storage.proto.OzoneManagerStorageProtos.PersistedUserVolumeInfo), IOException (java.io.IOException), OMMetrics (org.apache.hadoop.ozone.om.OMMetrics), OMResponse (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse), OMVolumeCreateResponse (org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse), OMMetadataManager (org.apache.hadoop.ozone.om.OMMetadataManager), OMException (org.apache.hadoop.ozone.om.exceptions.OMException)
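
Note the ordering enforced by the finally block and the code after it: the response is handed to the double-buffer helper (which sets its flush future) before the user and volume write locks are released, the audit message is logged only after the locks are dropped, and the success/failure metrics are updated last, just before the OMVolumeCreateResponse is returned to the caller.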

Example 4 with OMVolumeCreateResponse

Use of org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse in project ozone by apache.

From class TestOzoneManagerDoubleBufferWithOMResponse, method testDoubleBufferWithMixOfTransactionsParallel.

/**
 * This test first creates two volumes, then performs a mix of transactions
 * (bucket creates and deletes) in parallel and adds them to the double
 * buffer. It then verifies that the OM DB entries match the responses that
 * were added to the double buffer.
 */
@Test
public void testDoubleBufferWithMixOfTransactionsParallel() throws Exception {
    // This test checks that the row counts and data in the tables are correct.
    Queue<OMBucketCreateResponse> bucketQueue = new ConcurrentLinkedQueue<>();
    Queue<OMBucketDeleteResponse> deleteBucketQueue = new ConcurrentLinkedQueue<>();
    String volumeName1 = UUID.randomUUID().toString();
    OMVolumeCreateResponse omVolumeCreateResponse1 = (OMVolumeCreateResponse) createVolume(volumeName1, trxId.incrementAndGet());
    String volumeName2 = UUID.randomUUID().toString();
    OMVolumeCreateResponse omVolumeCreateResponse2 = (OMVolumeCreateResponse) createVolume(volumeName2, trxId.incrementAndGet());
    int bucketsPerVolume = 10;
    Daemon daemon1 = new Daemon(() -> doMixTransactions(volumeName1, bucketsPerVolume, deleteBucketQueue, bucketQueue));
    Daemon daemon2 = new Daemon(() -> doMixTransactions(volumeName2, bucketsPerVolume, deleteBucketQueue, bucketQueue));
    daemon1.start();
    daemon2.start();
    int bucketCount = 2 * bucketsPerVolume;
    // doMixTransactions adds one bucket-delete transaction for every two
    // bucket-create transactions, so 20 creates produce 10 deletes.
    final int deleteCount = 10;
    // The +2 accounts for the two volume-create transactions: 20 + 10 + 2 = 32 flushed transactions.
    GenericTestUtils.waitFor(() -> doubleBuffer.getFlushedTransactionCount() == (bucketCount + deleteCount + 2), 100, 120000);
    Assert.assertEquals(2, omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable()));
    Assert.assertEquals(10, omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable()));
    // At this point the DB should contain two volumes and 10 buckets (20 created, 10 deleted).
    checkVolume(volumeName1, omVolumeCreateResponse1);
    checkVolume(volumeName2, omVolumeCreateResponse2);
    checkCreateBuckets(bucketQueue);
    checkDeletedBuckets(deleteBucketQueue);
    // Not asserting an exact value for lastAppliedIndex here: the two daemon
    // threads run in parallel, so lastAppliedIndex is not guaranteed to equal
    // the total transaction count at this point. Just check that it does not
    // exceed the total.
    Assert.assertTrue(lastAppliedIndex <= bucketCount + deleteCount + 2);
}
Also used: Daemon (org.apache.hadoop.util.Daemon), OMBucketDeleteResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketDeleteResponse), OMBucketCreateResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse), ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue), OMVolumeCreateResponse (org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse), Test (org.junit.Test)

Aggregations

OMVolumeCreateResponse (org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse): 4 uses
Test (org.junit.Test): 3 uses
ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue): 2 uses
OMClientResponse (org.apache.hadoop.ozone.om.response.OMClientResponse): 2 uses
OMBucketCreateResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse): 2 uses
OMBucketDeleteResponse (org.apache.hadoop.ozone.om.response.bucket.OMBucketDeleteResponse): 2 uses
IOException (java.io.IOException): 1 use
TransactionInfo (org.apache.hadoop.hdds.utils.TransactionInfo): 1 use
OMMetadataManager (org.apache.hadoop.ozone.om.OMMetadataManager): 1 use
OMMetrics (org.apache.hadoop.ozone.om.OMMetrics): 1 use
OMException (org.apache.hadoop.ozone.om.exceptions.OMException): 1 use
OmVolumeArgs (org.apache.hadoop.ozone.om.helpers.OmVolumeArgs): 1 use
CreateVolumeRequest (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.CreateVolumeRequest): 1 use
OMRequest (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest): 1 use
OMResponse (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse): 1 use
VolumeInfo (org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeInfo): 1 use
PersistedUserVolumeInfo (org.apache.hadoop.ozone.storage.proto.OzoneManagerStorageProtos.PersistedUserVolumeInfo): 1 use
Daemon (org.apache.hadoop.util.Daemon): 1 use