
Example 11 with OmMultipartInfo

use of org.apache.hadoop.ozone.om.helpers.OmMultipartInfo in project ozone by apache.

the class TestOzoneRpcClientAbstract method testAbortUploadFailWithInProgressPartUpload.

@Test
public void testAbortUploadFailWithInProgressPartUpload() throws Exception {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String keyName = UUID.randomUUID().toString();
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    OmMultipartInfo omMultipartInfo = bucket.initiateMultipartUpload(keyName, RATIS, ONE);
    Assert.assertNotNull(omMultipartInfo.getUploadID());
    // Do not close output stream.
    byte[] data = "data".getBytes(UTF_8);
    OzoneOutputStream ozoneOutputStream = bucket.createMultipartKey(keyName, data.length, 1, omMultipartInfo.getUploadID());
    ozoneOutputStream.write(data, 0, data.length);
    // Abort before completing part upload.
    bucket.abortMultipartUpload(keyName, omMultipartInfo.getUploadID());
    try {
        ozoneOutputStream.close();
        fail("testAbortUploadFailWithInProgressPartUpload failed");
    } catch (IOException ex) {
        assertTrue(ex instanceof OMException);
        assertEquals(NO_SUCH_MULTIPART_UPLOAD_ERROR, ((OMException) ex).getResult());
    }
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) IOException(java.io.IOException) OmMultipartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartInfo) OMException(org.apache.hadoop.ozone.om.exceptions.OMException) Test(org.junit.Test)
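
For contrast with the abort case above, the sketch below shows the normal completion path, assuming the same fixtures (store, volume, bucket, and static imports) as the test: the part stream is closed first so its part name can be collected, and the upload is then completed. It is an illustrative sketch, not code taken from the test class.

OmMultipartInfo info = bucket.initiateMultipartUpload(keyName, RATIS, ONE);
String uploadID = info.getUploadID();
byte[] data = "data".getBytes(UTF_8);
OzoneOutputStream partStream =
    bucket.createMultipartKey(keyName, data.length, 1, uploadID);
partStream.write(data, 0, data.length);
// Closing the stream commits the part; its part name becomes available.
partStream.close();
Map<Integer, String> partsMap = new LinkedHashMap<>();
partsMap.put(1, partStream.getCommitUploadPartInfo().getPartName());
// Completing the upload stitches the committed parts into the final key.
OmMultipartUploadCompleteInfo completeInfo =
    bucket.completeMultipartUpload(keyName, uploadID, partsMap);
Assert.assertNotNull(completeInfo);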

Example 12 with OmMultipartInfo

use of org.apache.hadoop.ozone.om.helpers.OmMultipartInfo in project ozone by apache.

the class TestOzoneRpcClientAbstract method initiateMultipartUpload.

private String initiateMultipartUpload(OzoneBucket bucket, String keyName, ReplicationType replicationType, ReplicationFactor replicationFactor) throws Exception {
    OmMultipartInfo multipartInfo = bucket.initiateMultipartUpload(keyName, replicationType, replicationFactor);
    String uploadID = multipartInfo.getUploadID();
    Assert.assertNotNull(uploadID);
    return uploadID;
}
Also used : OmMultipartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartInfo)
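
Tests of this kind usually pair the initiate helper with a part-upload helper. The method below is a sketch of such a companion helper; its name and shape are assumptions, not quoted from the class.

private String uploadPart(OzoneBucket bucket, String keyName, String uploadID,
    int partNumber, byte[] data) throws Exception {
    OzoneOutputStream out =
        bucket.createMultipartKey(keyName, data.length, partNumber, uploadID);
    out.write(data, 0, data.length);
    out.close();
    // The committed part name is only available after the stream is closed.
    OmMultipartCommitUploadPartInfo commitInfo = out.getCommitUploadPartInfo();
    Assert.assertNotNull(commitInfo);
    return commitInfo.getPartName();
}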

Example 13 with OmMultipartInfo

use of org.apache.hadoop.ozone.om.helpers.OmMultipartInfo in project ozone by apache.

the class TestOzoneRpcClientAbstract method testUploadPartOverrideWithStandAlone.

@Test
public void testUploadPartOverrideWithStandAlone() throws IOException {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String keyName = UUID.randomUUID().toString();
    String sampleData = "sample Value";
    int partNumber = 1;
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    OmMultipartInfo multipartInfo = bucket.initiateMultipartUpload(keyName, RATIS, ONE);
    assertNotNull(multipartInfo);
    String uploadID = multipartInfo.getUploadID();
    Assert.assertEquals(volumeName, multipartInfo.getVolumeName());
    Assert.assertEquals(bucketName, multipartInfo.getBucketName());
    Assert.assertEquals(keyName, multipartInfo.getKeyName());
    assertNotNull(multipartInfo.getUploadID());
    OzoneOutputStream ozoneOutputStream = bucket.createMultipartKey(keyName, sampleData.length(), partNumber, uploadID);
    ozoneOutputStream.write(string2Bytes(sampleData), 0, sampleData.length());
    ozoneOutputStream.close();
    OmMultipartCommitUploadPartInfo commitUploadPartInfo = ozoneOutputStream.getCommitUploadPartInfo();
    assertNotNull(commitUploadPartInfo);
    String partName = commitUploadPartInfo.getPartName();
    assertNotNull(commitUploadPartInfo.getPartName());
    // Overwrite the part by creating part key with same part number
    // and different content.
    sampleData = "sample Data Changed";
    ozoneOutputStream = bucket.createMultipartKey(keyName, sampleData.length(), partNumber, uploadID);
    ozoneOutputStream.write(string2Bytes(sampleData), 0, sampleData.length());
    ozoneOutputStream.close();
    commitUploadPartInfo = ozoneOutputStream.getCommitUploadPartInfo();
    assertNotNull(commitUploadPartInfo);
    assertNotNull(commitUploadPartInfo.getPartName());
    // AWS S3 for same content generates same partName during upload part.
    // In AWS S3 ETag is generated from md5sum. In Ozone right now we
    // don't do this. For now to make things work for large file upload
    // through aws s3 cp, the partName are generated in a predictable fashion.
    // So, when a part is override partNames will still be same irrespective
    // of content in ozone s3. This will make S3 Mpu completeMPU pass when
    // comparing part names and large file uploads work using aws cp.
    assertEquals("Part names should be same", partName, commitUploadPartInfo.getPartName());
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OmMultipartCommitUploadPartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartCommitUploadPartInfo) OzoneOutputStream(org.apache.hadoop.ozone.client.io.OzoneOutputStream) OmMultipartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartInfo) Test(org.junit.Test)
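
To observe which bytes the override actually leaves behind, one could complete the upload with the overriding part and read the key back. The continuation below is a sketch under the same assumptions as the test above (same bucket, uploadID and commitUploadPartInfo); it is not part of the original test.

Map<Integer, String> partsMap = new LinkedHashMap<>();
partsMap.put(partNumber, commitUploadPartInfo.getPartName());
// Completing the upload makes the key readable; only the content of the
// last committed part 1 is retained.
bucket.completeMultipartUpload(keyName, uploadID, partsMap);
try (OzoneInputStream is = bucket.readKey(keyName)) {
    byte[] buffer = new byte[64];
    int read = is.read(buffer, 0, buffer.length);
    assertTrue(read > 0);
}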

Example 14 with OmMultipartInfo

use of org.apache.hadoop.ozone.om.helpers.OmMultipartInfo in project ozone by apache.

the class TestOzoneRpcClientAbstract method testInitiateMultipartUploadWithDefaultReplication.

@Test
public void testInitiateMultipartUploadWithDefaultReplication() throws IOException {
    String volumeName = UUID.randomUUID().toString();
    String bucketName = UUID.randomUUID().toString();
    String keyName = UUID.randomUUID().toString();
    store.createVolume(volumeName);
    OzoneVolume volume = store.getVolume(volumeName);
    volume.createBucket(bucketName);
    OzoneBucket bucket = volume.getBucket(bucketName);
    OmMultipartInfo multipartInfo = bucket.initiateMultipartUpload(keyName);
    assertNotNull(multipartInfo);
    String uploadID = multipartInfo.getUploadID();
    Assert.assertEquals(volumeName, multipartInfo.getVolumeName());
    Assert.assertEquals(bucketName, multipartInfo.getBucketName());
    Assert.assertEquals(keyName, multipartInfo.getKeyName());
    assertNotNull(multipartInfo.getUploadID());
    // Call initiate multipart upload for the same key again, this should
    // generate a new uploadID.
    multipartInfo = bucket.initiateMultipartUpload(keyName);
    assertNotNull(multipartInfo);
    Assert.assertEquals(volumeName, multipartInfo.getVolumeName());
    Assert.assertEquals(bucketName, multipartInfo.getBucketName());
    Assert.assertEquals(keyName, multipartInfo.getKeyName());
    assertNotEquals(multipartInfo.getUploadID(), uploadID);
    assertNotNull(multipartInfo.getUploadID());
}
Also used : OzoneVolume(org.apache.hadoop.ozone.client.OzoneVolume) OzoneBucket(org.apache.hadoop.ozone.client.OzoneBucket) OmMultipartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartInfo) Test(org.junit.Test)
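
Each initiate call above leaves the previous upload open under its own uploadID. A minimal follow-up sketch, assuming the same variables as the test above, aborts the superseded upload rather than leaving it in progress:

// The first uploadID still refers to an open multipart upload; abort it
// explicitly and keep working with the uploadID from the second initiate.
String staleUploadID = uploadID;                      // from the first initiate
String activeUploadID = multipartInfo.getUploadID();  // from the second initiate
bucket.abortMultipartUpload(keyName, staleUploadID);
// Subsequent createMultipartKey / completeMultipartUpload calls would use
// activeUploadID.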

Example 15 with OmMultipartInfo

use of org.apache.hadoop.ozone.om.helpers.OmMultipartInfo in project ozone by apache.

the class OzoneManagerProtocolClientSideTranslatorPB method initiateMultipartUpload.

/**
 * Initiate a multipart upload for the given key.
 *
 * @return an {@link OmMultipartInfo} holding the volume, bucket and key
 *         names together with the upload ID of the newly initiated
 *         multipart upload.
 */
@Override
public OmMultipartInfo initiateMultipartUpload(OmKeyArgs omKeyArgs) throws IOException {
    MultipartInfoInitiateRequest.Builder multipartInfoInitiateRequest = MultipartInfoInitiateRequest.newBuilder();
    KeyArgs.Builder keyArgs = KeyArgs.newBuilder().setVolumeName(omKeyArgs.getVolumeName()).setBucketName(omKeyArgs.getBucketName()).setKeyName(omKeyArgs.getKeyName()).addAllAcls(omKeyArgs.getAcls().stream().map(a -> OzoneAcl.toProtobuf(a)).collect(Collectors.toList()));
    if (omKeyArgs.getReplicationConfig() != null) {
        keyArgs.setFactor(ReplicationConfig.getLegacyFactor(omKeyArgs.getReplicationConfig()));
        keyArgs.setType(omKeyArgs.getReplicationConfig().getReplicationType());
    }
    multipartInfoInitiateRequest.setKeyArgs(keyArgs.build());
    OMRequest omRequest = createOMRequest(Type.InitiateMultiPartUpload).setInitiateMultiPartUploadRequest(multipartInfoInitiateRequest.build()).build();
    MultipartInfoInitiateResponse resp = handleError(submitRequest(omRequest)).getInitiateMultiPartUploadResponse();
    return new OmMultipartInfo(resp.getVolumeName(), resp.getBucketName(), resp.getKeyName(), resp.getMultipartUploadID());
}
Also used : OMRequest(org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest) OmKeyArgs(org.apache.hadoop.ozone.om.helpers.OmKeyArgs) DeleteKeyArgs(org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.DeleteKeyArgs) KeyArgs(org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.KeyArgs) OmMultipartInfo(org.apache.hadoop.ozone.om.helpers.OmMultipartInfo) MultipartInfoInitiateRequest(org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.MultipartInfoInitiateRequest) MultipartInfoInitiateResponse(org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.MultipartInfoInitiateResponse)
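
On the caller side, the OmKeyArgs handed to this method is normally assembled with its builder. The wiring below is a sketch: ozoneManagerClient stands in for whatever OzoneManagerProtocol handle the caller holds, and the empty ACL list is an assumption so that the getAcls() stream above has a non-null list to map.

OmKeyArgs keyArgs = new OmKeyArgs.Builder()
    .setVolumeName(volumeName)
    .setBucketName(bucketName)
    .setKeyName(keyName)
    // Pass an explicit (possibly empty) ACL list; the translator streams
    // over getAcls() when building the protobuf KeyArgs.
    .setAcls(new ArrayList<>())
    .build();
OmMultipartInfo multipartInfo = ozoneManagerClient.initiateMultipartUpload(keyArgs);
String uploadID = multipartInfo.getUploadID();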

Aggregations

OmMultipartInfo (org.apache.hadoop.ozone.om.helpers.OmMultipartInfo) 26
Test (org.junit.Test) 17
OzoneBucket (org.apache.hadoop.ozone.client.OzoneBucket) 16
OzoneVolume (org.apache.hadoop.ozone.client.OzoneVolume) 15
OzoneOutputStream (org.apache.hadoop.ozone.client.io.OzoneOutputStream) 11
OmMultipartCommitUploadPartInfo (org.apache.hadoop.ozone.om.helpers.OmMultipartCommitUploadPartInfo) 7
OMException (org.apache.hadoop.ozone.om.exceptions.OMException) 6
IOException (java.io.IOException) 4
OmKeyArgs (org.apache.hadoop.ozone.om.helpers.OmKeyArgs) 3
HashMap (java.util.HashMap) 2
LinkedHashMap (java.util.LinkedHashMap) 2
OzoneInputStream (org.apache.hadoop.ozone.client.io.OzoneInputStream) 2
OmMultipartUploadCompleteInfo (org.apache.hadoop.ozone.om.helpers.OmMultipartUploadCompleteInfo) 2
CacheBuilder (com.google.common.cache.CacheBuilder) 1
ArrayList (java.util.ArrayList) 1
AtomicInteger (java.util.concurrent.atomic.AtomicInteger) 1
Consumes (javax.ws.rs.Consumes) 1
POST (javax.ws.rs.POST) 1
Produces (javax.ws.rs.Produces) 1
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream) 1