Search in sources:

Example 1 with VideoContext

Use of com.google.cloud.videointelligence.v1beta1.VideoContext in project google-cloud-java by GoogleCloudPlatform.

The class VideoIntelligenceServiceClientTest, method annotateVideoExceptionTest.

@Test
@SuppressWarnings("all")
public void annotateVideoExceptionTest() throws Exception {
    StatusRuntimeException exception = new StatusRuntimeException(Status.INVALID_ARGUMENT);
    mockVideoIntelligenceService.addException(exception);
    try {
        String inputUri = "inputUri1707300727";
        List<Feature> features = new ArrayList<>();
        VideoContext videoContext = VideoContext.newBuilder().build();
        String outputUri = "outputUri-1273518802";
        String locationId = "locationId552319461";
        client.annotateVideoAsync(inputUri, features, videoContext, outputUri, locationId).get();
        Assert.fail("No exception raised");
    } catch (ExecutionException e) {
        Assert.assertEquals(ApiException.class, e.getCause().getClass());
        ApiException apiException = (ApiException) e.getCause();
        Assert.assertEquals(Status.INVALID_ARGUMENT.getCode(), apiException.getStatusCode());
    }
}
Also used: StatusRuntimeException(io.grpc.StatusRuntimeException), ArrayList(java.util.ArrayList), VideoContext(com.google.cloud.videointelligence.v1beta1.VideoContext), ExecutionException(java.util.concurrent.ExecutionException), Feature(com.google.cloud.videointelligence.v1beta1.Feature), ApiException(com.google.api.gax.grpc.ApiException), Test(org.junit.Test)
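
For contrast with the failure path above, here is a minimal happy-path sketch (not from the source) using the same five-argument annotateVideoAsync overload; the gs:// URI, the LABEL_DETECTION feature, and the empty outputUri/locationId values are placeholder assumptions.

import com.google.cloud.videointelligence.v1beta1.AnnotateVideoResponse;
import com.google.cloud.videointelligence.v1beta1.Feature;
import com.google.cloud.videointelligence.v1beta1.VideoContext;
import com.google.cloud.videointelligence.v1beta1.VideoIntelligenceServiceClient;
import java.util.ArrayList;
import java.util.List;

public class AnnotateVideoSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Cloud Storage URI; replace with a video your credentials can read.
        String inputUri = "gs://my-bucket/my-video.mp4";
        List<Feature> features = new ArrayList<>();
        features.add(Feature.LABEL_DETECTION);
        // An empty VideoContext is valid; per-feature configs can be added to it.
        VideoContext videoContext = VideoContext.newBuilder().build();
        try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
            // Same overload as in the test: inputUri, features, videoContext, outputUri, locationId.
            AnnotateVideoResponse response =
                client.annotateVideoAsync(inputUri, features, videoContext, "", "").get();
            System.out.println("Annotation result sets: " + response.getAnnotationResultsCount());
        }
    }
}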

Example 2 with VideoContext

Use of com.google.cloud.videointelligence.v1p1beta1.VideoContext in project java-docs-samples by GoogleCloudPlatform.

The class Detect, method analyzeFacesBoundingBoxes.

// [START video_face_bounding_boxes]
/**
 * Detects faces' bounding boxes on the video at the provided Cloud Storage path.
 *
 * @param gcsUri the path to the video file to analyze.
 */
public static void analyzeFacesBoundingBoxes(String gcsUri) throws Exception {
    // Instantiate a com.google.cloud.videointelligence.v1p1beta1.VideoIntelligenceServiceClient
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Set the configuration to include bounding boxes
        FaceConfig config = FaceConfig.newBuilder().setIncludeBoundingBoxes(true).build();
        // Set the video context with the above configuration
        VideoContext context = VideoContext.newBuilder().setFaceDetectionConfig(config).build();
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.FACE_DETECTION)
                .setVideoContext(context)
                .build();
        // asynchronously perform facial analysis on videos
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> response =
            client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        boolean faceFound = false;
        // Display the results
        for (VideoAnnotationResults results :
            response.get(900, TimeUnit.SECONDS).getAnnotationResultsList()) {
            int faceCount = 0;
            // Display the results for each face
            for (FaceDetectionAnnotation faceAnnotation : results.getFaceDetectionAnnotationsList()) {
                faceFound = true;
                System.out.println("\nFace: " + ++faceCount);
                // Each FaceDetectionAnnotation has only one segment.
                for (FaceSegment segment : faceAnnotation.getSegmentsList()) {
                    double startTime =
                        segment.getSegment().getStartTimeOffset().getSeconds()
                            + segment.getSegment().getStartTimeOffset().getNanos() / 1e9;
                    double endTime =
                        segment.getSegment().getEndTimeOffset().getSeconds()
                            + segment.getSegment().getEndTimeOffset().getNanos() / 1e9;
                    System.out.printf("Segment location: %.3fs to %.3fs\n", startTime, endTime);
                }
                // There are typically many frames for each face;
                // here we process only the first frame.
                try {
                    if (faceAnnotation.getFramesCount() > 0) {
                        // get the first frame
                        FaceDetectionFrame frame = faceAnnotation.getFrames(0);
                        double timeOffset = frame.getTimeOffset().getSeconds() + frame.getTimeOffset().getNanos() / 1e9;
                        System.out.printf("First frame time offset: %.3fs\n", timeOffset);
                        // print info on the first normalized bounding box
                        NormalizedBoundingBox box = frame.getAttributes(0).getNormalizedBoundingBox();
                        System.out.printf("\tLeft: %.3f\n", box.getLeft());
                        System.out.printf("\tTop: %.3f\n", box.getTop());
                        System.out.printf("\tBottom: %.3f\n", box.getBottom());
                        System.out.printf("\tRight: %.3f\n", box.getRight());
                    } else {
                        System.out.println("No frames found in annotation");
                    }
                } catch (IndexOutOfBoundsException ioe) {
                    System.out.println("Could not retrieve frame: " + ioe.getMessage());
                }
            }
        }
        if (!faceFound) {
            System.out.println("No faces detected in " + gcsUri);
        }
    }
}
Also used: FaceDetectionAnnotation(com.google.cloud.videointelligence.v1p1beta1.FaceDetectionAnnotation), AnnotateVideoRequest(com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest), VideoContext(com.google.cloud.videointelligence.v1p1beta1.VideoContext), FaceDetectionFrame(com.google.cloud.videointelligence.v1p1beta1.FaceDetectionFrame), FaceConfig(com.google.cloud.videointelligence.v1p1beta1.FaceConfig), VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1p1beta1.VideoIntelligenceServiceClient), AnnotateVideoProgress(com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoProgress), FaceSegment(com.google.cloud.videointelligence.v1p1beta1.FaceSegment), NormalizedBoundingBox(com.google.cloud.videointelligence.v1p1beta1.NormalizedBoundingBox), VideoAnnotationResults(com.google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults), AnnotateVideoResponse(com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse)
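
A minimal caller sketch (not from the source); the bucket and object name are placeholders, and the real sample's main method may differ.

public static void main(String[] args) throws Exception {
    // Placeholder path; pass any gs:// URI your service account can read.
    analyzeFacesBoundingBoxes("gs://my-bucket/my-video.mp4");
}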

Example 3 with VideoContext

Use of com.google.cloud.videointelligence.v1.VideoContext in project beam by apache.

The class VideoIntelligenceIT, method annotateVideoFromURIWithContext.

@Test
public void annotateVideoFromURIWithContext() {
    VideoContext context =
        VideoContext.newBuilder()
            .setLabelDetectionConfig(LabelDetectionConfig.newBuilder().setModel("builtin/latest"))
            .build();
    PCollection<List<VideoAnnotationResults>> annotationResults =
        testPipeline
            .apply(Create.of(KV.of(VIDEO_URI, context)))
            .apply("Annotate video", VideoIntelligence.annotateFromUriWithContext(featureList));
    PAssert.that(annotationResults).satisfies(new VerifyVideoAnnotationResult());
    testPipeline.run().waitUntilFinish();
}
Also used: VideoContext (com.google.cloud.videointelligence.v1.VideoContext), ArrayList (java.util.ArrayList), List (java.util.List), Test (org.junit.Test)
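
The snippet references VIDEO_URI and featureList without showing their declarations; a hedged sketch of how such fixtures could be declared (the Beam test's actual values may differ):

// Assumed imports: com.google.cloud.videointelligence.v1.Feature, java.util.Collections, java.util.List
// Hypothetical fixture values; the real test may use a different video and feature set.
private static final String VIDEO_URI = "gs://my-bucket/my-video.mp4";
private static final List<Feature> featureList =
        Collections.singletonList(Feature.LABEL_DETECTION);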

Example 4 with VideoContext

Use of com.google.cloud.videointelligence.v1.VideoContext in project beam by apache.

The class AnnotateVideoBytesWithContextFn, method processElement.

/**
 * ProcessElement implementation.
 */
@Override
public void processElement(ProcessContext context) throws ExecutionException, InterruptedException {
    ByteString element = context.element().getKey();
    VideoContext videoContext = context.element().getValue();
    List<VideoAnnotationResults> videoAnnotationResults = getVideoAnnotationResults(null, element, videoContext);
    context.output(videoAnnotationResults);
}
Also used: ByteString(com.google.protobuf.ByteString), VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults), VideoContext(com.google.cloud.videointelligence.v1.VideoContext)
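
Upstream of this DoFn, each element is expected to already be a KV pairing a video's bytes with its VideoContext. A short sketch (not from the Beam source) of building one such pair; the file path and the label-detection model are placeholder assumptions:

// Assumed imports: com.google.protobuf.ByteString, java.nio.file.Files, java.nio.file.Paths,
// com.google.cloud.videointelligence.v1.LabelDetectionConfig,
// com.google.cloud.videointelligence.v1.VideoContext, org.apache.beam.sdk.values.KV
ByteString videoBytes =
        ByteString.copyFrom(Files.readAllBytes(Paths.get("/tmp/my-video.mp4"))); // placeholder path
VideoContext videoContext =
        VideoContext.newBuilder()
            .setLabelDetectionConfig(LabelDetectionConfig.newBuilder().setModel("builtin/latest"))
            .build();
KV<ByteString, VideoContext> element = KV.of(videoBytes, videoContext);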

Example 5 with VideoContext

Use of com.google.cloud.videointelligence.v1.VideoContext in project beam by apache.

The class AnnotateVideoFromBytesFn, method processElement.

/**
 * Implementation of ProcessElement.
 */
@Override
public void processElement(ProcessContext context) throws ExecutionException, InterruptedException {
    ByteString element = context.element();
    VideoContext videoContext = null;
    if (contextSideInput != null) {
        videoContext = context.sideInput(contextSideInput).get(element);
    }
    List<VideoAnnotationResults> videoAnnotationResults = getVideoAnnotationResults(null, element, videoContext);
    context.output(videoAnnotationResults);
}
Also used: ByteString(com.google.protobuf.ByteString), VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults), VideoContext(com.google.cloud.videointelligence.v1.VideoContext)
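
Here contextSideInput is a PCollectionView that maps each video's bytes to its VideoContext. A sketch (the pipeline, videoBytes, and videoContext names are illustrative assumptions) of materializing such a side input with View.asMap(); the view would then be supplied to the transform wrapping this DoFn and wired in via withSideInputs:

// Assumed imports: org.apache.beam.sdk.transforms.Create, org.apache.beam.sdk.transforms.View,
// org.apache.beam.sdk.coders.KvCoder, org.apache.beam.sdk.extensions.protobuf.ByteStringCoder,
// org.apache.beam.sdk.extensions.protobuf.ProtoCoder, org.apache.beam.sdk.values.*, java.util.Map
PCollection<KV<ByteString, VideoContext>> contexts =
        pipeline.apply(
            Create.of(KV.of(videoBytes, videoContext))
                .withCoder(KvCoder.of(ByteStringCoder.of(), ProtoCoder.of(VideoContext.class))));
PCollectionView<Map<ByteString, VideoContext>> contextSideInput =
        contexts.apply(View.asMap());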

Aggregations

VideoContext (com.google.cloud.videointelligence.v1.VideoContext) 5
VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults) 4
AnnotateVideoProgress (com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoProgress) 3
AnnotateVideoRequest (com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest) 3
AnnotateVideoResponse (com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse) 3
VideoAnnotationResults (com.google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults) 3
VideoContext (com.google.cloud.videointelligence.v1p1beta1.VideoContext) 3
VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1p1beta1.VideoIntelligenceServiceClient) 3
ArrayList (java.util.ArrayList) 3
Test (org.junit.Test) 3
Feature (com.google.cloud.videointelligence.v1beta1.Feature) 2
VideoContext (com.google.cloud.videointelligence.v1beta1.VideoContext) 2
FaceConfig (com.google.cloud.videointelligence.v1p1beta1.FaceConfig) 2
FaceDetectionAnnotation (com.google.cloud.videointelligence.v1p1beta1.FaceDetectionAnnotation) 2
FaceDetectionFrame (com.google.cloud.videointelligence.v1p1beta1.FaceDetectionFrame) 2
FaceSegment (com.google.cloud.videointelligence.v1p1beta1.FaceSegment) 2
ByteString (com.google.protobuf.ByteString) 2
ApiException (com.google.api.gax.grpc.ApiException) 1
AnnotateVideoRequest (com.google.cloud.videointelligence.v1beta1.AnnotateVideoRequest) 1
AnnotateVideoResponse (com.google.cloud.videointelligence.v1beta1.AnnotateVideoResponse) 1