
Example 1 with S3AsyncClient

use of org.jclouds.s3.S3AsyncClient in project legacy-jclouds-examples by jclouds.

the class MainApp method main.

public static void main(String[] args) throws IOException {
    if (args.length < PARAMETERS)
        throw new IllegalArgumentException(INVALID_SYNTAX);
    // Args
    String provider = args[0];
    // note that you can check if a provider is present ahead of time
    checkArgument(contains(allKeys, provider), "provider %s not in supported list: %s", provider, allKeys);
    String identity = args[1];
    String credential = args[2];
    String containerName = args[3];
    // Init
    BlobStoreContext context = ContextBuilder.newBuilder(provider).credentials(identity, credential).buildView(BlobStoreContext.class);
    try {
        // Create Container
        BlobStore blobStore = context.getBlobStore();
        blobStore.createContainerInLocation(null, containerName);
        // Add Blob
        Blob blob = blobStore.blobBuilder("test").payload("testdata").build();
        blobStore.putBlob(containerName, blob);
        // List Container
        for (StorageMetadata resourceMd : blobStore.list()) {
            if (resourceMd.getType() == StorageType.CONTAINER || resourceMd.getType() == StorageType.FOLDER) {
                // Use Map API
                Map<String, InputStream> containerMap = context.createInputStreamMap(resourceMd.getName());
                System.out.printf("  %s: %s entries%n", resourceMd.getName(), containerMap.size());
            }
        }
        // Use Provider API
        if (context.getBackendType().getRawType().equals(RestContext.class)) {
            RestContext<?, ?> rest = context.unwrap();
            if (rest.getApi() instanceof S3Client) {
                RestContext<S3Client, S3AsyncClient> providerContext = context.unwrap();
                providerContext.getApi().getBucketLogging(containerName);
            } else if (rest.getApi() instanceof SwiftClient) {
                RestContext<SwiftClient, SwiftAsyncClient> providerContext = context.unwrap();
                providerContext.getApi().getObjectInfo(containerName, "test");
            } else if (rest.getApi() instanceof AzureBlobClient) {
                RestContext<AzureBlobClient, AzureBlobAsyncClient> providerContext = context.unwrap();
                providerContext.getApi().getBlobProperties(containerName, "test");
            } else if (rest.getApi() instanceof AtmosClient) {
                RestContext<AtmosClient, AtmosAsyncClient> providerContext = context.unwrap();
                providerContext.getApi().getSystemMetadata(containerName + "/test");
            }
        }
    } finally {
        // Close connection
        context.close();
        System.exit(0);
    }
}
Also used : StorageMetadata(org.jclouds.blobstore.domain.StorageMetadata) Blob(org.jclouds.blobstore.domain.Blob) SwiftClient(org.jclouds.openstack.swift.SwiftClient) InputStream(java.io.InputStream) RestContext(org.jclouds.rest.RestContext) AtmosClient(org.jclouds.atmos.AtmosClient) S3AsyncClient(org.jclouds.s3.S3AsyncClient) BlobStoreContext(org.jclouds.blobstore.BlobStoreContext) S3Client(org.jclouds.s3.S3Client) AzureBlobClient(org.jclouds.azureblob.AzureBlobClient) BlobStore(org.jclouds.blobstore.BlobStore) AzureBlobAsyncClient(org.jclouds.azureblob.AzureBlobAsyncClient)

Example 2 with S3AsyncClient

use of software.amazon.awssdk.services.s3.S3AsyncClient in project flink by apache.

the class AWSServicesTestUtils method listBucketObjects.

public static List<S3Object> listBucketObjects(S3AsyncClient s3, String bucketName) throws ExecutionException, InterruptedException {
    ListObjectsRequest listObjects = ListObjectsRequest.builder().bucket(bucketName).build();
    CompletableFuture<ListObjectsResponse> res = s3.listObjects(listObjects);
    return res.get().contents();
}
Also used : ListObjectsRequest(software.amazon.awssdk.services.s3.model.ListObjectsRequest) ListObjectsResponse(software.amazon.awssdk.services.s3.model.ListObjectsResponse)
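The example above blocks on `res.get()` until the `CompletableFuture` completes. A minimal, self-contained sketch of the same pattern (stdlib only; `fetchKeys` and `countKeys` are hypothetical stand-ins for the SDK call and a downstream consumer, not AWS SDK API) shows the non-blocking alternative with `thenApply`:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncListSketch {

    // Stand-in for s3.listObjects(request): completes on another thread.
    static CompletableFuture<List<String>> fetchKeys() {
        return CompletableFuture.supplyAsync(() -> List.of("a.txt", "b.txt"));
    }

    // Non-blocking composition: transform the future's result instead of
    // calling get() right away, so the caller's thread stays free.
    static CompletableFuture<Integer> countKeys() {
        return fetchKeys().thenApply(List::size);
    }

    public static void main(String[] args) {
        // join() here only so a demo main can print the final value.
        System.out.println(countKeys().join());
    }
}
```

Blocking with `get()` is fine in a test helper like `listBucketObjects`, but in request-handling code composing with `thenApply`/`thenCompose` avoids tying up a thread per pending S3 call.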

Example 3 with S3AsyncClient

use of software.amazon.awssdk.services.s3.S3AsyncClient in project uploader by smoketurner.

the class UploaderApplication method run.

@Override
public void run(@Nonnull final UploaderConfiguration configuration, @Nonnull final Environment environment) throws Exception {
    final NettyConfiguration nettyConfig = configuration.getNetty();
    final AwsConfiguration awsConfig = configuration.getAws();
    // we create the event loop groups first so we can share them between
    // the Netty server receiving the requests and the AWS S3 client
    // uploading the batches to S3.
    final EventLoopGroup bossGroup = Netty.newBossEventLoopGroup();
    final EventLoopGroup workerGroup = Netty.newWorkerEventLoopGroup();
    environment.lifecycle().manage(new EventLoopGroupManager(bossGroup));
    environment.lifecycle().manage(new EventLoopGroupManager(workerGroup));
    final Size maxUploadSize = awsConfig.getMaxUploadSize();
    final EventLoopGroupConfiguration eventLoopConfig = EventLoopGroupConfiguration.builder().eventLoopGroup(workerGroup).build();
    final NettySdkHttpClientFactory nettyFactory = NettySdkHttpClientFactory.builder().eventLoopGroupConfiguration(eventLoopConfig).build();
    final ClientAsyncHttpConfiguration httpConfig = ClientAsyncHttpConfiguration.builder().httpClientFactory(nettyFactory).build();
    // build the asynchronous S3 client with the configured credentials
    // provider and region and use the same Netty event group as the server.
    final S3AsyncClient s3 = S3AsyncClient.builder().credentialsProvider(awsConfig.getCredentials()).region(awsConfig.getRegion()).asyncHttpConfiguration(httpConfig).build();
    environment.lifecycle().manage(new AutoCloseableManager(s3));
    final Uploader uploader = new Uploader(s3, awsConfig);
    final UploadInitializer initializer = new UploadInitializer(nettyConfig, uploader, maxUploadSize.toBytes());
    final ServerBootstrap bootstrap = new ServerBootstrap();
    // Start the server
    final ChannelFuture future = bootstrap.group(bossGroup, workerGroup).handler(new LoggingHandler(LogLevel.INFO)).option(ChannelOption.SO_BACKLOG, 128).channel(Netty.serverChannelType()).childOption(ChannelOption.SO_KEEPALIVE, true).childHandler(initializer).bind(nettyConfig.getListenPort());
    environment.lifecycle().manage(new ChannelFutureManager(future));
    // Resources
    environment.jersey().register(new BatchResource(uploader));
    environment.jersey().register(new PingResource());
    environment.jersey().register(new VersionResource());
}
Also used : ChannelFuture(io.netty.channel.ChannelFuture) AwsConfiguration(com.smoketurner.uploader.config.AwsConfiguration) NettySdkHttpClientFactory(software.amazon.awssdk.http.nio.netty.NettySdkHttpClientFactory) LoggingHandler(io.netty.handler.logging.LoggingHandler) EventLoopGroupConfiguration(software.amazon.awssdk.http.nio.netty.EventLoopGroupConfiguration) AutoCloseableManager(io.dropwizard.lifecycle.AutoCloseableManager) Size(io.dropwizard.util.Size) ChannelFutureManager(com.smoketurner.uploader.managed.ChannelFutureManager) VersionResource(com.smoketurner.uploader.resources.VersionResource) EventLoopGroupManager(com.smoketurner.uploader.managed.EventLoopGroupManager) ServerBootstrap(io.netty.bootstrap.ServerBootstrap) PingResource(com.smoketurner.uploader.resources.PingResource) S3AsyncClient(software.amazon.awssdk.services.s3.S3AsyncClient) ClientAsyncHttpConfiguration(software.amazon.awssdk.core.client.builder.ClientAsyncHttpConfiguration) EventLoopGroup(io.netty.channel.EventLoopGroup) UploadInitializer(com.smoketurner.uploader.handler.UploadInitializer) NettyConfiguration(com.smoketurner.uploader.config.NettyConfiguration) BatchResource(com.smoketurner.uploader.resources.BatchResource) Uploader(com.smoketurner.uploader.core.Uploader)

Example 4 with S3AsyncClient

use of software.amazon.awssdk.services.s3.S3AsyncClient in project flink by apache.

the class KinesisFirehoseTableITTest method readFromS3.

private List<Order> readFromS3() throws Exception {
    Deadline deadline = Deadline.fromNow(Duration.ofMinutes(1));
    List<S3Object> ordersObjects;
    List<Order> orders;
    do {
        Thread.sleep(1000);
        ordersObjects = listBucketObjects(s3AsyncClient, BUCKET_NAME);
        orders = readObjectsFromS3Bucket(s3AsyncClient, ordersObjects, BUCKET_NAME, responseBytes -> fromJson(new String(responseBytes.asByteArrayUnsafe()), Order.class));
    } while (deadline.hasTimeLeft() && orders.size() < NUM_ELEMENTS);
    return orders;
}
Also used : IntStream(java.util.stream.IntStream) JsonProperty(com.fasterxml.jackson.annotation.JsonProperty) Deadline(org.apache.flink.api.common.time.Deadline) BeforeClass(org.junit.BeforeClass) DockerImageName(org.testcontainers.utility.DockerImageName) S3Object(software.amazon.awssdk.services.s3.model.S3Object) AWSServicesTestUtils.createBucket(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createBucket) DockerImageVersions(org.apache.flink.util.DockerImageVersions) LoggerFactory(org.slf4j.LoggerFactory) IamAsyncClient(software.amazon.awssdk.services.iam.IamAsyncClient) LocalstackContainer(org.apache.flink.connector.aws.testutils.LocalstackContainer) Network(org.testcontainers.containers.Network) TestUtils(org.apache.flink.tests.util.TestUtils) SdkSystemSetting(software.amazon.awssdk.core.SdkSystemSetting) KinesisFirehoseTestUtils.createFirehoseClient(org.apache.flink.connector.firehose.sink.testutils.KinesisFirehoseTestUtils.createFirehoseClient) KinesisFirehoseTestUtils.createDeliveryStream(org.apache.flink.connector.firehose.sink.testutils.KinesisFirehoseTestUtils.createDeliveryStream) SQLJobSubmission(org.apache.flink.tests.util.flink.SQLJobSubmission) After(org.junit.After) Duration(java.time.Duration) TestLogger(org.apache.flink.util.TestLogger) Timeout(org.junit.rules.Timeout) Assertions(org.assertj.core.api.Assertions) ClassRule(org.junit.ClassRule) SdkAsyncHttpClient(software.amazon.awssdk.http.async.SdkAsyncHttpClient) Path(java.nio.file.Path) Before(org.junit.Before) AWSServicesTestUtils.createIAMRole(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createIAMRole) AfterClass(org.junit.AfterClass) AWSServicesTestUtils.createS3Client(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createS3Client) Logger(org.slf4j.Logger) AWSServicesTestUtils.listBucketObjects(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.listBucketObjects) S3AsyncClient(software.amazon.awssdk.services.s3.S3AsyncClient) Files(java.nio.file.Files) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper) JsonProcessingException(com.fasterxml.jackson.core.JsonProcessingException) Test(org.junit.Test) Collectors(java.util.stream.Collectors) Objects(java.util.Objects) TimeUnit(java.util.concurrent.TimeUnit) List(java.util.List) AWSServicesTestUtils.createHttpClient(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createHttpClient) Paths(java.nio.file.Paths) FirehoseAsyncClient(software.amazon.awssdk.services.firehose.FirehoseAsyncClient) FlinkContainers(org.apache.flink.tests.util.flink.container.FlinkContainers) AWSServicesTestUtils.readObjectsFromS3Bucket(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.readObjectsFromS3Bucket) AWSServicesTestUtils.createIamClient(org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createIamClient)
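`readFromS3` above polls until either the deadline expires or enough records have arrived. The same pattern can be sketched with plain `java.time` in place of Flink's `Deadline`; the `Supplier`, `pollUntil`, and the short sleep are illustrative stand-ins for the S3 read, not code from the source project:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.function.Supplier;

public class PollUntil {

    // Re-fetch via the supplier until the deadline passes or we have
    // at least `expected` results, sleeping briefly between attempts
    // (the original uses Thread.sleep(1000) and a one-minute deadline).
    static <T> List<T> pollUntil(Supplier<List<T>> fetch, int expected, Duration timeout) {
        Instant deadline = Instant.now().plus(timeout);
        List<T> results = List.of();
        do {
            try {
                Thread.sleep(10); // back off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            results = fetch.get();
        } while (Instant.now().isBefore(deadline) && results.size() < expected);
        return results;
    }

    public static void main(String[] args) {
        List<Integer> out = pollUntil(() -> List.of(1, 2, 3), 3, Duration.ofSeconds(5));
        System.out.println(out.size());
    }
}
```

Note the loop exits on whichever comes first, so a caller must still check the result size: an undersized list after the deadline means the data never fully arrived, which is why the test asserts on `orders.size()` downstream.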

Example 5 with S3AsyncClient

use of software.amazon.awssdk.services.s3.S3AsyncClient in project flink by apache.

the class KinesisFirehoseSinkITCase method firehoseSinkWritesCorrectDataToMockAWSServices.

@Test
public void firehoseSinkWritesCorrectDataToMockAWSServices() throws Exception {
    LOG.info("1 - Creating the bucket for Firehose to deliver into...");
    createBucket(s3AsyncClient, BUCKET_NAME);
    LOG.info("2 - Creating the IAM Role for Firehose to write into the s3 bucket...");
    createIAMRole(iamAsyncClient, ROLE_NAME);
    LOG.info("3 - Creating the Firehose delivery stream...");
    createDeliveryStream(STREAM_NAME, BUCKET_NAME, ROLE_ARN, firehoseAsyncClient);
    KinesisFirehoseSink<String> kdsSink = KinesisFirehoseSink.<String>builder().setSerializationSchema(new SimpleStringSchema()).setDeliveryStreamName(STREAM_NAME).setMaxBatchSize(1).setFirehoseClientProperties(createConfig(mockFirehoseContainer.getEndpoint())).build();
    KinesisFirehoseTestUtils.getSampleDataGenerator(env, NUMBER_OF_ELEMENTS).sinkTo(kdsSink);
    env.execute("Integration Test");
    List<S3Object> objects = listBucketObjects(createS3Client(mockFirehoseContainer.getEndpoint(), httpClient), BUCKET_NAME);
    assertThat(objects.size()).isEqualTo(NUMBER_OF_ELEMENTS);
    assertThat(readObjectsFromS3Bucket(s3AsyncClient, objects, BUCKET_NAME, response -> new String(response.asByteArrayUnsafe()))).containsAll(KinesisFirehoseTestUtils.getSampleData(NUMBER_OF_ELEMENTS));
}
Also used : SimpleStringSchema(org.apache.flink.api.common.serialization.SimpleStringSchema) S3Object(software.amazon.awssdk.services.s3.model.S3Object) Test(org.junit.Test)

Aggregations

Test (org.junit.Test) 2
S3AsyncClient (software.amazon.awssdk.services.s3.S3AsyncClient) 2
S3Object (software.amazon.awssdk.services.s3.model.S3Object) 2
JsonProperty (com.fasterxml.jackson.annotation.JsonProperty) 1
JsonProcessingException (com.fasterxml.jackson.core.JsonProcessingException) 1
ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper) 1
AwsConfiguration (com.smoketurner.uploader.config.AwsConfiguration) 1
NettyConfiguration (com.smoketurner.uploader.config.NettyConfiguration) 1
Uploader (com.smoketurner.uploader.core.Uploader) 1
UploadInitializer (com.smoketurner.uploader.handler.UploadInitializer) 1
ChannelFutureManager (com.smoketurner.uploader.managed.ChannelFutureManager) 1
EventLoopGroupManager (com.smoketurner.uploader.managed.EventLoopGroupManager) 1
BatchResource (com.smoketurner.uploader.resources.BatchResource) 1
PingResource (com.smoketurner.uploader.resources.PingResource) 1
VersionResource (com.smoketurner.uploader.resources.VersionResource) 1
AutoCloseableManager (io.dropwizard.lifecycle.AutoCloseableManager) 1
Size (io.dropwizard.util.Size) 1
ServerBootstrap (io.netty.bootstrap.ServerBootstrap) 1
ChannelFuture (io.netty.channel.ChannelFuture) 1
EventLoopGroup (io.netty.channel.EventLoopGroup) 1