Example 6 with VideoSegment

use of com.google.cloud.videointelligence.v1.VideoSegment in project java-video-intelligence by googleapis.

the class TextDetection method detectTextGcs.

// [END video_detect_text]
// [START video_detect_text_gcs]
/**
 * Detect text in a video stored in Cloud Storage.
 *
 * @param gcsUri the Cloud Storage URI of the video file to analyze.
 */
public static VideoAnnotationResults detectTextGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.TEXT_DETECTION)
                .build();
        // Asynchronously perform text detection on the video.
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // The first result is retrieved because a single video was processed.
        AnnotateVideoResponse response = future.get(300, TimeUnit.SECONDS);
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        TextAnnotation annotation = results.getTextAnnotations(0);
        System.out.println("Text: " + annotation.getText());
        // Get the first text segment.
        TextSegment textSegment = annotation.getSegments(0);
        System.out.println("Confidence: " + textSegment.getConfidence());
        // For the text segment, display its time offset.
        VideoSegment videoSegment = textSegment.getSegment();
        Duration startTimeOffset = videoSegment.getStartTimeOffset();
        Duration endTimeOffset = videoSegment.getEndTimeOffset();
        // Display the offset times in seconds; dividing the nanos by 1e9 converts them to seconds.
        System.out.println(String.format("Start time: %.2f", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9));
        System.out.println(String.format("End time: %.2f", endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        // Show the first result for the first frame in the segment.
        TextFrame textFrame = textSegment.getFrames(0);
        Duration timeOffset = textFrame.getTimeOffset();
        System.out.println(String.format("Time offset for the first frame: %.2f", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the rotated bounding box for where the text is on the frame.
        System.out.println("Rotated Bounding Box Vertices:");
        List<NormalizedVertex> vertices = textFrame.getRotatedBoundingBox().getVerticesList();
        for (NormalizedVertex normalizedVertex : vertices) {
            System.out.println(String.format("\tVertex.x: %.2f, Vertex.y: %.2f", normalizedVertex.getX(), normalizedVertex.getY()));
        }
        return results;
    }
}
Also used : AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) Duration(com.google.protobuf.Duration) NormalizedVertex(com.google.cloud.videointelligence.v1.NormalizedVertex) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) TextFrame(com.google.cloud.videointelligence.v1.TextFrame) TextSegment(com.google.cloud.videointelligence.v1.TextSegment) TextAnnotation(com.google.cloud.videointelligence.v1.TextAnnotation) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
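
As a usage sketch (not part of the original sample), detectTextGcs can be driven from a small main method; the bucket and object name below are placeholders:

public class TextDetectionGcsDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder URI; point this at a video in your own Cloud Storage bucket.
        String gcsUri = "gs://my-bucket/my-video.mp4";
        VideoAnnotationResults results = TextDetection.detectTextGcs(gcsUri);
        // getTextAnnotationsCount() is the generated accessor for the repeated field.
        System.out.println("Text annotations found: " + results.getTextAnnotationsCount());
    }
}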

Example 7 with VideoSegment

use of com.google.cloud.videointelligence.v1.VideoSegment in project java-video-intelligence by googleapis.

the class TextDetection method detectText.

// [START video_detect_text]
/**
 * Detect text in a video.
 *
 * @param filePath the path to the video file to analyze.
 */
public static VideoAnnotationResults detectText(String filePath) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Read file
        Path path = Paths.get(filePath);
        byte[] data = Files.readAllBytes(path);
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputContent(ByteString.copyFrom(data))
                .addFeatures(Feature.TEXT_DETECTION)
                .build();
        // Asynchronously perform text detection on the video.
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // The first result is retrieved because a single video was processed.
        AnnotateVideoResponse response = future.get(300, TimeUnit.SECONDS);
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        TextAnnotation annotation = results.getTextAnnotations(0);
        System.out.println("Text: " + annotation.getText());
        // Get the first text segment.
        TextSegment textSegment = annotation.getSegments(0);
        System.out.println("Confidence: " + textSegment.getConfidence());
        // For the text segment, display its time offset.
        VideoSegment videoSegment = textSegment.getSegment();
        Duration startTimeOffset = videoSegment.getStartTimeOffset();
        Duration endTimeOffset = videoSegment.getEndTimeOffset();
        // Display the offset times in seconds; dividing the nanos by 1e9 converts them to seconds.
        System.out.println(String.format("Start time: %.2f", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9));
        System.out.println(String.format("End time: %.2f", endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        // Show the first result for the first frame in the segment.
        TextFrame textFrame = textSegment.getFrames(0);
        Duration timeOffset = textFrame.getTimeOffset();
        System.out.println(String.format("Time offset for the first frame: %.2f", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the rotated bounding box for where the text is on the frame.
        System.out.println("Rotated Bounding Box Vertices:");
        List<NormalizedVertex> vertices = textFrame.getRotatedBoundingBox().getVerticesList();
        for (NormalizedVertex normalizedVertex : vertices) {
            System.out.println(String.format("\tVertex.x: %.2f, Vertex.y: %.2f", normalizedVertex.getX(), normalizedVertex.getY()));
        }
        return results;
    }
}
Also used : Path(java.nio.file.Path) AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) Duration(com.google.protobuf.Duration) NormalizedVertex(com.google.cloud.videointelligence.v1.NormalizedVertex) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) TextFrame(com.google.cloud.videointelligence.v1.TextFrame) TextSegment(com.google.cloud.videointelligence.v1.TextSegment) TextAnnotation(com.google.cloud.videointelligence.v1.TextAnnotation) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
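
The seconds-plus-nanos arithmetic above recurs in every sample. A minimal helper, shown here as a sketch rather than anything in the repository, would factor it out (com.google.protobuf.util.Durations in protobuf-java-util offers similar conversions):

import com.google.protobuf.Duration;

public final class DurationUtil {

    private DurationUtil() {}

    // Convert a protobuf Duration to fractional seconds using the same
    // seconds + nanos / 1e9 formula the samples inline above.
    public static double toSeconds(Duration duration) {
        return duration.getSeconds() + duration.getNanos() / 1e9;
    }
}

With that helper, the prints above reduce to String.format("Start time: %.2f", DurationUtil.toSeconds(startTimeOffset)).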

Example 8 with VideoSegment

use of com.google.cloud.videointelligence.v1.VideoSegment in project java-video-intelligence by googleapis.

the class TrackObjects method trackObjects.

// [START video_object_tracking]
/**
 * Track objects in a video.
 *
 * @param filePath the path to the video file to analyze.
 */
public static VideoAnnotationResults trackObjects(String filePath) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Read file
        Path path = Paths.get(filePath);
        byte[] data = Files.readAllBytes(path);
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputContent(ByteString.copyFrom(data))
                .addFeatures(Feature.OBJECT_TRACKING)
                .setLocationId("us-east1")
                .build();
        // Asynchronously perform object tracking on the video.
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // The first result is retrieved because a single video was processed.
        AnnotateVideoResponse response = future.get(450, TimeUnit.SECONDS);
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        ObjectTrackingAnnotation annotation = results.getObjectAnnotations(0);
        System.out.println("Confidence: " + annotation.getConfidence());
        if (annotation.hasEntity()) {
            Entity entity = annotation.getEntity();
            System.out.println("Entity description: " + entity.getDescription());
            System.out.println("Entity id:: " + entity.getEntityId());
        }
        if (annotation.hasSegment()) {
            VideoSegment videoSegment = annotation.getSegment();
            Duration startTimeOffset = videoSegment.getStartTimeOffset();
            Duration endTimeOffset = videoSegment.getEndTimeOffset();
            // Display the segment time in seconds; dividing nanos by 1e9 converts them to seconds.
            System.out.println(String.format("Segment: %.2fs to %.2fs", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9, endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        }
        // Here we print only the bounding box of the first frame in this segment.
        ObjectTrackingFrame frame = annotation.getFrames(0);
        // Display the offset time in seconds; dividing nanos by 1e9 converts them to seconds.
        Duration timeOffset = frame.getTimeOffset();
        System.out.println(String.format("Time offset of the first frame: %.2fs", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the bounding box of the detected object
        NormalizedBoundingBox normalizedBoundingBox = frame.getNormalizedBoundingBox();
        System.out.println("Bounding box position:");
        System.out.println("\tleft: " + normalizedBoundingBox.getLeft());
        System.out.println("\ttop: " + normalizedBoundingBox.getTop());
        System.out.println("\tright: " + normalizedBoundingBox.getRight());
        System.out.println("\tbottom: " + normalizedBoundingBox.getBottom());
        return results;
    }
}
Also used : Path(java.nio.file.Path) Entity(com.google.cloud.videointelligence.v1.Entity) ObjectTrackingAnnotation(com.google.cloud.videointelligence.v1.ObjectTrackingAnnotation) AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) Duration(com.google.protobuf.Duration) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) ObjectTrackingFrame(com.google.cloud.videointelligence.v1.ObjectTrackingFrame) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) NormalizedBoundingBox(com.google.cloud.videointelligence.v1.NormalizedBoundingBox) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
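
trackObjects prints only the first frame of the annotation. A hypothetical extension, placed inside the method after the first-frame block and using the same generated accessors, would walk every tracked frame:

// Sketch: print every frame of the annotation instead of only the first.
for (ObjectTrackingFrame trackedFrame : annotation.getFramesList()) {
    double seconds = trackedFrame.getTimeOffset().getSeconds() + trackedFrame.getTimeOffset().getNanos() / 1e9;
    NormalizedBoundingBox box = trackedFrame.getNormalizedBoundingBox();
    System.out.println(String.format("t=%.2fs box=[left=%.2f, top=%.2f, right=%.2f, bottom=%.2f]",
            seconds, box.getLeft(), box.getTop(), box.getRight(), box.getBottom()));
}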

Example 9 with VideoSegment

use of com.google.cloud.videointelligence.v1.VideoSegment in project java-video-intelligence by googleapis.

the class TrackObjects method trackObjectsGcs.

// [END video_object_tracking]
// [START video_object_tracking_gcs]
/**
 * Track objects in a video.
 *
 * @param gcsUri the Cloud Storage URI of the video file to analyze.
 */
public static VideoAnnotationResults trackObjectsGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.OBJECT_TRACKING)
                .setLocationId("us-east1")
                .build();
        // Asynchronously perform object tracking on the video.
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // The first result is retrieved because a single video was processed.
        AnnotateVideoResponse response = future.get(450, TimeUnit.SECONDS);
        VideoAnnotationResults results = response.getAnnotationResults(0);
        // Get only the first annotation for demo purposes.
        ObjectTrackingAnnotation annotation = results.getObjectAnnotations(0);
        System.out.println("Confidence: " + annotation.getConfidence());
        if (annotation.hasEntity()) {
            Entity entity = annotation.getEntity();
            System.out.println("Entity description: " + entity.getDescription());
            System.out.println("Entity id:: " + entity.getEntityId());
        }
        if (annotation.hasSegment()) {
            VideoSegment videoSegment = annotation.getSegment();
            Duration startTimeOffset = videoSegment.getStartTimeOffset();
            Duration endTimeOffset = videoSegment.getEndTimeOffset();
            // Display the segment time in seconds; dividing nanos by 1e9 converts them to seconds.
            System.out.println(String.format("Segment: %.2fs to %.2fs", startTimeOffset.getSeconds() + startTimeOffset.getNanos() / 1e9, endTimeOffset.getSeconds() + endTimeOffset.getNanos() / 1e9));
        }
        // Here we print only the bounding box of the first frame in this segment.
        ObjectTrackingFrame frame = annotation.getFrames(0);
        // Display the offset time in seconds; dividing nanos by 1e9 converts them to seconds.
        Duration timeOffset = frame.getTimeOffset();
        System.out.println(String.format("Time offset of the first frame: %.2fs", timeOffset.getSeconds() + timeOffset.getNanos() / 1e9));
        // Display the bounding box of the detected object
        NormalizedBoundingBox normalizedBoundingBox = frame.getNormalizedBoundingBox();
        System.out.println("Bounding box position:");
        System.out.println("\tleft: " + normalizedBoundingBox.getLeft());
        System.out.println("\ttop: " + normalizedBoundingBox.getTop());
        System.out.println("\tright: " + normalizedBoundingBox.getRight());
        System.out.println("\tbottom: " + normalizedBoundingBox.getBottom());
        return results;
    }
}
Also used : VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) ObjectTrackingFrame(com.google.cloud.videointelligence.v1.ObjectTrackingFrame) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) Entity(com.google.cloud.videointelligence.v1.Entity) NormalizedBoundingBox(com.google.cloud.videointelligence.v1.NormalizedBoundingBox) ObjectTrackingAnnotation(com.google.cloud.videointelligence.v1.ObjectTrackingAnnotation) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) Duration(com.google.protobuf.Duration) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
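
A minimal invocation sketch for the GCS variant; the URI is a placeholder, and note that the request is pinned to "us-east1" via setLocationId:

public class TrackObjectsGcsDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder URI; replace with a video in your own Cloud Storage bucket.
        String gcsUri = "gs://my-bucket/my-video.mp4";
        VideoAnnotationResults results = TrackObjects.trackObjectsGcs(gcsUri);
        // getObjectAnnotationsCount() is the generated accessor for the repeated field.
        System.out.println("Objects tracked: " + results.getObjectAnnotationsCount());
    }
}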

Example 10 with VideoSegment

use of com.google.cloud.videointelligence.v1.VideoSegment in project java-video-intelligence by googleapis.

the class DetectPerson method detectPerson.

// Detects people in a video stored in a local file using the Cloud Video Intelligence API.
public static void detectPerson(String localFilePath) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient = VideoIntelligenceServiceClient.create()) {
        // Reads a local video file and converts it to base64.
        Path path = Paths.get(localFilePath);
        byte[] data = Files.readAllBytes(path);
        ByteString inputContent = ByteString.copyFrom(data);
        PersonDetectionConfig personDetectionConfig =
            PersonDetectionConfig.newBuilder()
                .setIncludeBoundingBoxes(true)
                .setIncludePoseLandmarks(true)
                .setIncludeAttributes(true)
                .build();
        VideoContext videoContext =
            VideoContext.newBuilder().setPersonDetectionConfig(personDetectionConfig).build();
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputContent(inputContent)
                .addFeatures(Feature.PERSON_DETECTION)
                .setVideoContext(videoContext)
                .build();
        // Detects people in a video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future = videoIntelligenceServiceClient.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get();
        // Get the annotations for the video. We take the first result because only
        // one video was processed.
        VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);
        // Iterate over the annotations for each person detected, tracked, and recognized in the video.
        for (PersonDetectionAnnotation personDetectionAnnotation : annotationResult.getPersonDetectionAnnotationsList()) {
            System.out.print("Person detected:\n");
            for (Track track : personDetectionAnnotation.getTracksList()) {
                VideoSegment segment = track.getSegment();
                System.out.printf("\tStart: %d.%.0fs\n", segment.getStartTimeOffset().getSeconds(), segment.getStartTimeOffset().getNanos() / 1e6);
                System.out.printf("\tEnd: %d.%.0fs\n", segment.getEndTimeOffset().getSeconds(), segment.getEndTimeOffset().getNanos() / 1e6);
                // Each segment includes timestamped objects that capture characteristics of the
                // person detected, e.g. clothes and posture. Grab the first timestamped object.
                TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);
                // Attributes include unique pieces of clothing or the posture of the person
                // detected.
                for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
                    System.out.printf("\tAttribute: %s; Value: %s\n", attribute.getName(), attribute.getValue());
                }
                // Landmarks in person detection include body parts.
                for (DetectedLandmark attribute : firstTimestampedObject.getLandmarksList()) {
                    System.out.printf("\tLandmark: %s; Vertex: %f, %f\n", attribute.getName(), attribute.getPoint().getX(), attribute.getPoint().getY());
                }
            }
        }
    }
}
Also used : Path(java.nio.file.Path) AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) ByteString(com.google.protobuf.ByteString) VideoContext(com.google.cloud.videointelligence.v1.VideoContext) PersonDetectionAnnotation(com.google.cloud.videointelligence.v1.PersonDetectionAnnotation) PersonDetectionConfig(com.google.cloud.videointelligence.v1.PersonDetectionConfig) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) VideoSegment(com.google.cloud.videointelligence.v1.VideoSegment) DetectedLandmark(com.google.cloud.videointelligence.v1.DetectedLandmark) TimestampedObject(com.google.cloud.videointelligence.v1.TimestampedObject) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) DetectedAttribute(com.google.cloud.videointelligence.v1.DetectedAttribute) Track(com.google.cloud.videointelligence.v1.Track) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
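
detectPerson inspects only the first timestamped object per track. A hypothetical extension using the generated list accessors would summarize them all:

// Sketch: summarize every timestamped object in a track rather than only the first.
for (TimestampedObject timestampedObject : track.getTimestampedObjectsList()) {
    double seconds = timestampedObject.getTimeOffset().getSeconds() + timestampedObject.getTimeOffset().getNanos() / 1e9;
    System.out.printf("\tObject at %.2fs: %d attributes, %d landmarks\n",
            seconds, timestampedObject.getAttributesCount(), timestampedObject.getLandmarksCount());
}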

Aggregations

AnnotateVideoProgress (com.google.cloud.videointelligence.v1.AnnotateVideoProgress): 19
AnnotateVideoRequest (com.google.cloud.videointelligence.v1.AnnotateVideoRequest): 19
AnnotateVideoResponse (com.google.cloud.videointelligence.v1.AnnotateVideoResponse): 19
VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults): 19
VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient): 19
VideoSegment (com.google.cloud.videointelligence.v1.VideoSegment): 19
Duration (com.google.protobuf.Duration): 18
Path (java.nio.file.Path): 12
DetectedAttribute (com.google.cloud.videointelligence.v1.DetectedAttribute): 8
Entity (com.google.cloud.videointelligence.v1.Entity): 8
NormalizedBoundingBox (com.google.cloud.videointelligence.v1.NormalizedBoundingBox): 8
TimestampedObject (com.google.cloud.videointelligence.v1.TimestampedObject): 8
Track (com.google.cloud.videointelligence.v1.Track): 8
LogoRecognitionAnnotation (com.google.cloud.videointelligence.v1.LogoRecognitionAnnotation): 4
NormalizedVertex (com.google.cloud.videointelligence.v1.NormalizedVertex): 4
ObjectTrackingAnnotation (com.google.cloud.videointelligence.v1.ObjectTrackingAnnotation): 4
ObjectTrackingFrame (com.google.cloud.videointelligence.v1.ObjectTrackingFrame): 4
TextAnnotation (com.google.cloud.videointelligence.v1.TextAnnotation): 4
TextFrame (com.google.cloud.videointelligence.v1.TextFrame): 4
TextSegment (com.google.cloud.videointelligence.v1.TextSegment): 4