Example 46 with VideoAnnotationResults

use of com.google.cloud.videointelligence.v1.VideoAnnotationResults in project java-video-intelligence by googleapis.

the class DetectFacesGcs method detectFacesGcs.

// Detects faces in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.
public static void detectFacesGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient = VideoIntelligenceServiceClient.create()) {
        FaceDetectionConfig faceDetectionConfig =
            FaceDetectionConfig.newBuilder()
                .setIncludeBoundingBoxes(true)
                .setIncludeAttributes(true)
                .build();
        VideoContext videoContext =
            VideoContext.newBuilder().setFaceDetectionConfig(faceDetectionConfig).build();
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.FACE_DETECTION)
                .setVideoContext(videoContext)
                .build();
        // Detects faces in a video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = videoIntelligenceServiceClient.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get();
        // Gets annotations for video
        VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);
        // Annotations for the list of faces detected, tracked and recognized in the video.
        for (FaceDetectionAnnotation faceDetectionAnnotation : annotationResult.getFaceDetectionAnnotationsList()) {
            System.out.print("Face detected:\n");
            for (Track track : faceDetectionAnnotation.getTracksList()) {
                VideoSegment segment = track.getSegment();
                System.out.printf("\tStart: %d.%.0fs\n", segment.getStartTimeOffset().getSeconds(), segment.getStartTimeOffset().getNanos() / 1e6);
                System.out.printf("\tEnd: %d.%.0fs\n", segment.getEndTimeOffset().getSeconds(), segment.getEndTimeOffset().getNanos() / 1e6);
                // Each segment includes timestamped objects that
                // include characteristics of the face detected.
                TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);
                for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
                    // Attributes include glasses, headwear, smiling, direction of gaze
                    System.out.printf("\tAttribute %s: %s %s\n", attribute.getName(), attribute.getValue(), attribute.getConfidence());
                }
            }
        }
    }
}
Also used : FaceDetectionAnnotation(com.google.cloud.videointelligence.v1.FaceDetectionAnnotation) AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) VideoContext(com.google.cloud.videointelligence.v1.VideoContext) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) TimestampedObject(com.google.cloud.videointelligence.v1.TimestampedObject) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) FaceDetectionConfig(com.google.cloud.videointelligence.v1.FaceDetectionConfig) DetectedAttribute(com.google.cloud.videointelligence.v1.DetectedAttribute) Track(com.google.cloud.videointelligence.v1.Track) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
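
For completeness, a minimal caller for the sample above; the bucket and object name are placeholders, and the client assumes Application Default Credentials (for example via GOOGLE_APPLICATION_CREDENTIALS) are available.

public static void main(String[] args) throws Exception {
    // Hypothetical GCS URI; replace with a video your credentials can read.
    String gcsUri = "gs://your-bucket/your-video.mp4";
    detectFacesGcs(gcsUri);
}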

Example 47 with VideoAnnotationResults

use of com.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults in project java-video-intelligence by googleapis.

the class TextDetection method detectTextGcs.

// [END video_detect_text_beta]
// [START video_detect_text_gcs_beta]
/**
 * Detect text in a video stored on GCS.
 *
 * @param gcsUri the Cloud Storage URI of the video to analyze.
 */
public static VideoAnnotationResults detectTextGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.TEXT_DETECTION)
                .build();
        // Asynchronously perform text detection on the video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
            client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get(600, TimeUnit.SECONDS);
        // The first result is retrieved because a single video was processed.
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        TextAnnotation annotation = results.getTextAnnotations(0);
        System.out.println("Text: " + annotation.getText());
        // Get the first text segment.
        TextSegment textSegment = annotation.getSegments(0);
        System.out.println("Confidence: " + textSegment.getConfidence());
        // For the text segment, display its time offset
        VideoSegment videoSegment = textSegment.getSegment();
        Duration startTimeOffset = videoSegment.getStartTimeOffset();
        Duration endTimeOffset = videoSegment.getEndTimeOffset();
        // Display the offset times in seconds; dividing nanos by 1e9 converts them to seconds
        System.out.println(String.format("Start time: %.2f", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9));
        System.out.println(String.format("End time: %.2f", endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        // Show the first result for the first frame in the segment.
        TextFrame textFrame = textSegment.getFrames(0);
        Duration timeOffset = textFrame.getTimeOffset();
        System.out.println(String.format("Time offset for the first frame: %.2f", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the rotated bounding box for where the text is on the frame.
        System.out.println("Rotated Bounding Box Vertices:");
        List<NormalizedVertex> vertices = textFrame.getRotatedBoundingBox().getVerticesList();
        for (NormalizedVertex normalizedVertex : vertices) {
            System.out.println(String.format("\tVertex.x: %.2f, Vertex.y: %.2f", normalizedVertex.getX(), normalizedVertex.getY()));
        }
        return results;
    }
}
Also used : AnnotateVideoRequest(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest) Duration(com.google.protobuf.Duration) NormalizedVertex(com.google.cloud.videointelligence.v1p2beta1.NormalizedVertex) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1p2beta1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress) VideoSegment(com.google.cloud.videointelligence.v1p2beta1.VideoSegment) VideoAnnotationResults(com.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults) TextFrame(com.google.cloud.videointelligence.v1p2beta1.TextFrame) TextSegment(com.google.cloud.videointelligence.v1p2beta1.TextSegment) TextAnnotation(com.google.cloud.videointelligence.v1p2beta1.TextAnnotation) AnnotateVideoResponse(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse)
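
The samples in this section repeatedly compute getSeconds() + getNanos() / 1e9 inline; a small helper makes the intent explicit. This is a sketch with our own method name (toSeconds), not part of the client library:

// Convert a protobuf Duration to fractional seconds. A Duration stores whole
// seconds plus a 0..999,999,999 nanosecond remainder, so nanos / 1e9 supplies
// the fractional part. "toSeconds" is a hypothetical helper name.
static double toSeconds(com.google.protobuf.Duration d) {
    return d.getSeconds() + d.getNanos() / 1e9;
}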

Example 48 with VideoAnnotationResults

use of com.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults in project java-video-intelligence by googleapis.

the class TrackObjects method trackObjectsGcs.

// [END video_object_tracking_beta]
// [START video_object_tracking_gcs_beta]
/**
 * Track objects in a video stored on GCS.
 *
 * @param gcsUri the Cloud Storage URI of the video to analyze.
 */
public static VideoAnnotationResults trackObjectsGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.OBJECT_TRACKING)
                .setLocationId("us-east1")
                .build();
        // Asynchronously perform object tracking on the video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
            client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get(450, TimeUnit.SECONDS);
        // The first result is retrieved because a single video was processed.
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        ObjectTrackingAnnotation annotation = results.getObjectAnnotations(0);
        System.out.println("Confidence: " + annotation.getConfidence());
        if (annotation.hasEntity()) {
            Entity entity = annotation.getEntity();
            System.out.println("Entity description: " + entity.getDescription());
            System.out.println("Entity id:: " + entity.getEntityId());
        }
        if (annotation.hasSegment()) {
            VideoSegment videoSegment = annotation.getSegment();
            Duration startTimeOffset = videoSegment.getStartTimeOffset();
            Duration endTimeOffset = videoSegment.getEndTimeOffset();
            // Display the segment time in seconds, 1e9 converts nanos to seconds
            System.out.println(String.format("Segment: %.2fs to %.2fs", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9, endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        }
        // Here we print only the bounding box of the first frame in this segment.
        ObjectTrackingFrame frame = annotation.getFrames(0);
        // Display the offset time in seconds, 1e9 converts nanos to seconds
        Duration timeOffset = frame.getTimeOffset();
        System.out.println(String.format("Time offset of the first frame: %.2fs", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the bounding box of the detected object
        NormalizedBoundingBox normalizedBoundingBox = frame.getNormalizedBoundingBox();
        System.out.println("Bounding box position:");
        System.out.println("\tleft: " + normalizedBoundingBox.getLeft());
        System.out.println("\ttop: " + normalizedBoundingBox.getTop());
        System.out.println("\tright: " + normalizedBoundingBox.getRight());
        System.out.println("\tbottom: " + normalizedBoundingBox.getBottom());
        return results;
    }
}
Also used : VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1p2beta1.VideoIntelligenceServiceClient) ObjectTrackingFrame(com.google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame) AnnotateVideoProgress(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress) Entity(com.google.cloud.videointelligence.v1p2beta1.Entity) NormalizedBoundingBox(com.google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox) ObjectTrackingAnnotation(com.google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation) VideoSegment(com.google.cloud.videointelligence.v1p2beta1.VideoSegment) AnnotateVideoRequest(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest) VideoAnnotationResults(com.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults) Duration(com.google.protobuf.Duration) AnnotateVideoResponse(com.google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse)
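
Since trackObjectsGcs returns the VideoAnnotationResults, callers can inspect more than the first annotation; a minimal sketch, assuming a placeholder gs:// URI readable by the caller's credentials:

public static void main(String[] args) throws Exception {
    // Hypothetical GCS URI; replace with a video in your own bucket.
    VideoAnnotationResults results = trackObjectsGcs("gs://your-bucket/your-video.mp4");
    // getObjectAnnotationsCount() is the generated accessor for the repeated
    // object_annotations field, so this reports how many objects were tracked.
    System.out.println("Objects tracked: " + results.getObjectAnnotationsCount());
}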

Example 49 with VideoAnnotationResults

use of com.google.cloud.videointelligence.v1.VideoAnnotationResults in project java-video-intelligence by googleapis.

the class Detect method speechTranscription.

/**
 * Transcribe speech from a video stored on GCS.
 *
 * @param gcsUri the Cloud Storage URI of the video to analyze.
 */
public static void speechTranscription(String gcsUri) throws Exception {
    // Instantiate a com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Set the language code
        SpeechTranscriptionConfig config =
            SpeechTranscriptionConfig.newBuilder()
                .setLanguageCode("en-US")
                .setEnableAutomaticPunctuation(true)
                .build();
        // Set the video context with the above configuration
        VideoContext context = VideoContext.newBuilder().setSpeechTranscriptionConfig(config).build();
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.SPEECH_TRANSCRIPTION)
                .setVideoContext(context)
                .build();
        // asynchronously perform speech transcription on videos
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> response = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // Display the results
        for (VideoAnnotationResults results : response.get(600, TimeUnit.SECONDS).getAnnotationResultsList()) {
            for (SpeechTranscription speechTranscription : results.getSpeechTranscriptionsList()) {
                try {
                    // Print the transcription
                    if (speechTranscription.getAlternativesCount() > 0) {
                        SpeechRecognitionAlternative alternative = speechTranscription.getAlternatives(0);
                        System.out.printf("Transcript: %s\n", alternative.getTranscript());
                        System.out.printf("Confidence: %.2f\n", alternative.getConfidence());
                        System.out.println("Word level information:");
                        for (WordInfo wordInfo : alternative.getWordsList()) {
                            double startTime = wordInfo.getStartTime().getSeconds() + wordInfo.getStartTime().getNanos() / 1e9;
                            double endTime = wordInfo.getEndTime().getSeconds() + wordInfo.getEndTime().getNanos() / 1e9;
                            System.out.printf("\t%4.2fs - %4.2fs: %s\n", startTime, endTime, wordInfo.getWord());
                        }
                    } else {
                        System.out.println("No transcription found");
                    }
                } catch (IndexOutOfBoundsException ioe) {
                    System.out.println("Could not retrieve frame: " + ioe.getMessage());
                }
            }
        }
    }
// [END video_speech_transcription_gcs]
}
Also used : AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) VideoContext(com.google.cloud.videointelligence.v1.VideoContext) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) SpeechRecognitionAlternative(com.google.cloud.videointelligence.v1.SpeechRecognitionAlternative) SpeechTranscriptionConfig(com.google.cloud.videointelligence.v1.SpeechTranscriptionConfig) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) SpeechTranscription(com.google.cloud.videointelligence.v1.SpeechTranscription) WordInfo(com.google.cloud.videointelligence.v1.WordInfo) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
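
Because annotateVideoAsync returns a long-running OperationFuture, callers typically bound the wait, as the sample does with future.get(600, TimeUnit.SECONDS). A hedged sketch of handling the expiry, assuming the same client and request as above:

OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
    client.annotateVideoAsync(request);
try {
    AnnotateVideoResponse response = future.get(600, TimeUnit.SECONDS);
    // ... walk response.getAnnotationResultsList() as shown above ...
} catch (java.util.concurrent.TimeoutException e) {
    // Stop waiting client-side; the server-side operation may still complete.
    future.cancel(true);
    System.out.println("Timed out waiting for speech transcription.");
}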

Example 50 with VideoAnnotationResults

use of com.google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults in project java-video-intelligence by googleapis.

the class DetectLogo method detectLogo.

public static void detectLogo(String localFilePath) throws IOException, ExecutionException, InterruptedException {
    // the "close" method on the client to safely clean up any remaining background resources.
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Read the file's contents
        Path path = Paths.get(localFilePath);
        byte[] data = Files.readAllBytes(path);
        ByteString inputContent = ByteString.copyFrom(data);
        // Build the request with the inputContent and set the Feature
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputContent(inputContent)
                .addFeatures(Feature.LOGO_RECOGNITION)
                .build();
        // Make the asynchronous request and block until the operation completes
        AnnotateVideoResponse response = client.annotateVideoAsync(request).get();
        // Get the first response, since we sent only one video.
        VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);
        // Annotations for list of logos detected, tracked and recognized in the video.
        for (LogoRecognitionAnnotation logoRecognitionAnnotation : annotationResult.getLogoRecognitionAnnotationsList()) {
            Entity entity = logoRecognitionAnnotation.getEntity();
            // Opaque entity ID. Some IDs may be available in [Google Knowledge Graph Search
            // API](https://developers.google.com/knowledge-graph/).
            System.out.printf("Entity Id: %s\n", entity.getEntityId());
            System.out.printf("Description: %s\n", entity.getDescription());
            // All logo tracks where the recognized logo appears. Each track corresponds to one logo
            // instance appearing in consecutive frames.
            for (Track track : logoRecognitionAnnotation.getTracksList()) {
                // Video segment of a track.
                VideoSegment segment = track.getSegment();
                Duration segmentStartTimeOffset = segment.getStartTimeOffset();
                Duration segmentEndTimeOffset = segment.getEndTimeOffset();
                // Print the track segment boundaries as fractional seconds.
                System.out.printf("\n\tStart Time Offset: %.2fs\n", segmentStartTimeOffset.getSeconds() + segmentStartTimeOffset.getNanos() / 1e9);
                System.out.printf("\tEnd Time Offset: %.2fs\n", segmentEndTimeOffset.getSeconds() + segmentEndTimeOffset.getNanos() / 1e9);
                System.out.printf("\tConfidence: %s\n", track.getConfidence());
                // The object with timestamp and attributes per frame in the track.
                for (TimestampedObject timestampedObject : track.getTimestampedObjectsList()) {
                    // Normalized Bounding box in a frame, where the object is located.
                    NormalizedBoundingBox normalizedBoundingBox = timestampedObject.getNormalizedBoundingBox();
                    System.out.printf("\n\t\tLeft: %s\n", normalizedBoundingBox.getLeft());
                    System.out.printf("\t\tTop: %s\n", normalizedBoundingBox.getTop());
                    System.out.printf("\t\tRight: %s\n", normalizedBoundingBox.getRight());
                    System.out.printf("\t\tBottom: %s\n", normalizedBoundingBox.getBottom());
                    // Optional. The attributes of the object in the bounding box.
                    for (DetectedAttribute attribute : timestampedObject.getAttributesList()) {
                        System.out.printf("\n\t\t\tName: %s\n", attribute.getName());
                        System.out.printf("\t\t\tConfidence: %s\n", attribute.getConfidence());
                        System.out.printf("\t\t\tValue: %s\n", attribute.getValue());
                    }
                }
                // Optional. Attributes in the track level.
                for (DetectedAttribute trackAttribute : track.getAttributesList()) {
                    System.out.printf("\n\t\tName : %s\n", trackAttribute.getName());
                    System.out.printf("\t\tConfidence : %s\n", trackAttribute.getConfidence());
                    System.out.printf("\t\tValue : %s\n", trackAttribute.getValue());
                }
            }
            // All video segments where the recognized logo appears. There might be multiple instances
            // of the same logo class appearing in one VideoSegment.
            for (VideoSegment logoRecognitionAnnotationSegment : logoRecognitionAnnotation.getSegmentsList()) {
                Duration segmentStart = logoRecognitionAnnotationSegment.getStartTimeOffset();
                Duration segmentEnd = logoRecognitionAnnotationSegment.getEndTimeOffset();
                // Print the annotation segment boundaries as fractional seconds.
                System.out.printf("\n\tStart Time Offset: %.2fs\n", segmentStart.getSeconds() + segmentStart.getNanos() / 1e9);
                System.out.printf("\tEnd Time Offset: %.2fs\n", segmentEnd.getSeconds() + segmentEnd.getNanos() / 1e9);
            }
        }
    }
}
Also used : Path(java.nio.file.Path) Entity(com.google.cloud.videointelligence.v1p3beta1.Entity) AnnotateVideoRequest(com.google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest) ByteString(com.google.protobuf.ByteString) Duration(com.google.protobuf.Duration) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1p3beta1.VideoIntelligenceServiceClient) NormalizedBoundingBox(com.google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox) VideoSegment(com.google.cloud.videointelligence.v1p3beta1.VideoSegment) TimestampedObject(com.google.cloud.videointelligence.v1p3beta1.TimestampedObject) VideoAnnotationResults(com.google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults) LogoRecognitionAnnotation(com.google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation) DetectedAttribute(com.google.cloud.videointelligence.v1p3beta1.DetectedAttribute) Track(com.google.cloud.videointelligence.v1p3beta1.Track) AnnotateVideoResponse(com.google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse)
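
The logo sample uploads the video bytes inline with setInputContent; the same feature works against Cloud Storage by swapping in setInputUri, as the other samples do. A sketch with a placeholder URI:

// Variant of the request above that references a GCS object instead of
// inlining the bytes; the gs:// URI is a placeholder.
AnnotateVideoRequest gcsRequest =
    AnnotateVideoRequest.newBuilder()
        .setInputUri("gs://your-bucket/your-video.mp4")
        .addFeatures(Feature.LOGO_RECOGNITION)
        .build();
AnnotateVideoResponse gcsResponse = client.annotateVideoAsync(gcsRequest).get();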

Aggregations

VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults): 44 usages
AnnotateVideoResponse (com.google.cloud.videointelligence.v1.AnnotateVideoResponse): 34 usages
AnnotateVideoProgress (com.google.cloud.videointelligence.v1.AnnotateVideoProgress): 33 usages
AnnotateVideoRequest (com.google.cloud.videointelligence.v1.AnnotateVideoRequest): 33 usages
VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient): 33 usages
VideoSegment (com.google.cloud.videointelligence.v1.VideoSegment): 19 usages
Duration (com.google.protobuf.Duration): 18 usages
Entity (com.google.cloud.videointelligence.v1.Entity): 17 usages
Path (java.nio.file.Path): 14 usages
Test (org.junit.Test): 14 usages
VideoContext (com.google.cloud.videointelligence.v1.VideoContext): 10 usages
LabelAnnotation (com.google.cloud.videointelligence.v1.LabelAnnotation): 9 usages
LabelSegment (com.google.cloud.videointelligence.v1.LabelSegment): 9 usages
DetectedAttribute (com.google.cloud.videointelligence.v1.DetectedAttribute): 8 usages
NormalizedBoundingBox (com.google.cloud.videointelligence.v1.NormalizedBoundingBox): 8 usages
TextAnnotation (com.google.cloud.videointelligence.v1.TextAnnotation): 8 usages
TimestampedObject (com.google.cloud.videointelligence.v1.TimestampedObject): 8 usages
Track (com.google.cloud.videointelligence.v1.Track): 8 usages
VideoAnnotationResults (com.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults): 8 usages
AnnotateVideoResponse (com.google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse): 5 usages