
Example 1 with FaceDetectionConfig

Use of com.google.cloud.videointelligence.v1.FaceDetectionConfig in the project java-video-intelligence by googleapis, from the method detectFaces of the class DetectFaces.

// Detects faces in a video stored in a local file using the Cloud Video Intelligence API.
public static void detectFaces(String localFilePath) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient =
            VideoIntelligenceServiceClient.create()) {
        // Reads the local video file into a ByteString (raw bytes; no base64 encoding is needed).
        Path path = Paths.get(localFilePath);
        byte[] data = Files.readAllBytes(path);
        ByteString inputContent = ByteString.copyFrom(data);
        FaceDetectionConfig faceDetectionConfig =
            FaceDetectionConfig.newBuilder()
                .setIncludeBoundingBoxes(true)
                .setIncludeAttributes(true)
                .build();
        VideoContext videoContext =
            VideoContext.newBuilder().setFaceDetectionConfig(faceDetectionConfig).build();
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputContent(inputContent)
                .addFeatures(Feature.FACE_DETECTION)
                .setVideoContext(videoContext)
                .build();
        // Detects faces in a video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
            videoIntelligenceServiceClient.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get();
        // Gets annotations for video
        VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);
        // Annotations for list of faces detected, tracked and recognized in video.
        for (FaceDetectionAnnotation faceDetectionAnnotation : annotationResult.getFaceDetectionAnnotationsList()) {
            System.out.print("Face detected:\n");
            for (Track track : faceDetectionAnnotation.getTracksList()) {
                VideoSegment segment = track.getSegment();
                System.out.printf(
                    "\tStart: %d.%.0fs\n",
                    segment.getStartTimeOffset().getSeconds(),
                    segment.getStartTimeOffset().getNanos() / 1e6);
                System.out.printf(
                    "\tEnd: %d.%.0fs\n",
                    segment.getEndTimeOffset().getSeconds(),
                    segment.getEndTimeOffset().getNanos() / 1e6);
                // Each segment includes timestamped objects that
                // include characteristics of the face detected.
                TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);
                for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
                    // Attributes include glasses, headwear, smiling, direction of gaze
                    System.out.printf(
                        "\tAttribute %s: %s %s\n",
                        attribute.getName(), attribute.getValue(), attribute.getConfidence());
                }
            }
        }
    }
}
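The two printf calls above split a protobuf-Duration-style offset into whole seconds plus a fractional part built from the nanos field. A standalone sketch of that same "%d.%.0fs" formatting, using plain long/int parameters instead of the client library's getStartTimeOffset() so it runs with no Cloud dependency; the OffsetFormat class and format method names are illustrative, not part of the sample:

```java
// Standalone sketch (no Cloud dependency) of the timestamp formatting
// used by the printf calls in the sample above.
public class OffsetFormat {

    // seconds/nanos mirror Duration.getSeconds()/getNanos() on the offsets.
    static String format(long seconds, int nanos) {
        // nanos / 1e6 converts nanoseconds to milliseconds; %.0f rounds
        // that to a whole number of milliseconds after the decimal point.
        return String.format("%d.%.0fs", seconds, nanos / 1e6);
    }

    public static void main(String[] args) {
        System.out.println(format(12, 500_000_000)); // prints "12.500s"
    }
}
```

One quirk worth knowing: because %.0f rounds half-up, an offset of 0 s and 999,600,000 ns prints as "0.1000s" rather than carrying over into the seconds field; the official samples share this behavior.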
Also used:
  Path (java.nio.file.Path)
  FaceDetectionAnnotation (com.google.cloud.videointelligence.v1.FaceDetectionAnnotation)
  AnnotateVideoRequest (com.google.cloud.videointelligence.v1.AnnotateVideoRequest)
  ByteString (com.google.protobuf.ByteString)
  VideoContext (com.google.cloud.videointelligence.v1.VideoContext)
  VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient)
  AnnotateVideoProgress (com.google.cloud.videointelligence.v1.AnnotateVideoProgress)
  VideoSegment (com.google.cloud.videointelligence.v1.VideoSegment)
  TimestampedObject (com.google.cloud.videointelligence.v1.TimestampedObject)
  VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults)
  FaceDetectionConfig (com.google.cloud.videointelligence.v1.FaceDetectionConfig)
  DetectedAttribute (com.google.cloud.videointelligence.v1.DetectedAttribute)
  Track (com.google.cloud.videointelligence.v1.Track)
  AnnotateVideoResponse (com.google.cloud.videointelligence.v1.AnnotateVideoResponse)

Example 2 with FaceDetectionConfig

Use of com.google.cloud.videointelligence.v1.FaceDetectionConfig in the project java-video-intelligence by googleapis, from the method detectFacesGcs of the class DetectFacesGcs.

// Detects faces in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.
public static void detectFacesGcs(String gcsUri) throws Exception {
    try (VideoIntelligenceServiceClient videoIntelligenceServiceClient =
            VideoIntelligenceServiceClient.create()) {
        FaceDetectionConfig faceDetectionConfig =
            FaceDetectionConfig.newBuilder()
                .setIncludeBoundingBoxes(true)
                .setIncludeAttributes(true)
                .build();
        VideoContext videoContext =
            VideoContext.newBuilder().setFaceDetectionConfig(faceDetectionConfig).build();
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.FACE_DETECTION)
                .setVideoContext(videoContext)
                .build();
        // Detects faces in a video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> future =
            videoIntelligenceServiceClient.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        AnnotateVideoResponse response = future.get();
        // Gets annotations for video
        VideoAnnotationResults annotationResult = response.getAnnotationResultsList().get(0);
        // Annotations for list of faces detected, tracked and recognized in video.
        for (FaceDetectionAnnotation faceDetectionAnnotation : annotationResult.getFaceDetectionAnnotationsList()) {
            System.out.print("Face detected:\n");
            for (Track track : faceDetectionAnnotation.getTracksList()) {
                VideoSegment segment = track.getSegment();
                System.out.printf(
                    "\tStart: %d.%.0fs\n",
                    segment.getStartTimeOffset().getSeconds(),
                    segment.getStartTimeOffset().getNanos() / 1e6);
                System.out.printf(
                    "\tEnd: %d.%.0fs\n",
                    segment.getEndTimeOffset().getSeconds(),
                    segment.getEndTimeOffset().getNanos() / 1e6);
                // Each segment includes timestamped objects that
                // include characteristics of the face detected.
                TimestampedObject firstTimestampedObject = track.getTimestampedObjects(0);
                for (DetectedAttribute attribute : firstTimestampedObject.getAttributesList()) {
                    // Attributes include glasses, headwear, smiling, direction of gaze
                    System.out.printf(
                        "\tAttribute %s: %s %s\n",
                        attribute.getName(), attribute.getValue(), attribute.getConfidence());
                }
            }
        }
    }
}
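The two examples differ only in how the video reaches the API: Example 1 sends the bytes inline with setInputContent, while Example 2 points at a Cloud Storage object with setInputUri. A sketch of a dispatcher that routes an input string to the appropriate entry point; the DetectFacesDispatch class, the isGcsUri helper, and the placeholder paths are hypothetical conveniences, not part of the googleapis samples:

```java
// Hypothetical wrapper around the two sample entry points above.
public class DetectFacesDispatch {

    // Cloud Storage objects are addressed with gs://bucket/object URIs,
    // which is the form AnnotateVideoRequest.setInputUri expects.
    static boolean isGcsUri(String input) {
        return input.startsWith("gs://");
    }

    public static void main(String[] args) throws Exception {
        // Placeholder default; replace with a real local path or gs:// URI.
        String input = args.length > 0 ? args[0] : "gs://your-bucket/your-video.mp4";
        if (isGcsUri(input)) {
            // DetectFacesGcs.detectFacesGcs(input);  // video already in GCS
            System.out.println("Would annotate GCS object: " + input);
        } else {
            // DetectFaces.detectFaces(input);        // local file, sent inline
            System.out.println("Would annotate local file: " + input);
        }
    }
}
```

Inline content is convenient for small local files because it avoids a separate upload to Cloud Storage; for larger videos, referencing a gs:// URI keeps the request small.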
Also used:
  FaceDetectionAnnotation (com.google.cloud.videointelligence.v1.FaceDetectionAnnotation)
  AnnotateVideoRequest (com.google.cloud.videointelligence.v1.AnnotateVideoRequest)
  VideoContext (com.google.cloud.videointelligence.v1.VideoContext)
  VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient)
  AnnotateVideoProgress (com.google.cloud.videointelligence.v1.AnnotateVideoProgress)
  VideoSegment (com.google.cloud.videointelligence.v1.VideoSegment)
  TimestampedObject (com.google.cloud.videointelligence.v1.TimestampedObject)
  VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults)
  FaceDetectionConfig (com.google.cloud.videointelligence.v1.FaceDetectionConfig)
  DetectedAttribute (com.google.cloud.videointelligence.v1.DetectedAttribute)
  Track (com.google.cloud.videointelligence.v1.Track)
  AnnotateVideoResponse (com.google.cloud.videointelligence.v1.AnnotateVideoResponse)

Aggregations

AnnotateVideoProgress (com.google.cloud.videointelligence.v1.AnnotateVideoProgress) 2
AnnotateVideoRequest (com.google.cloud.videointelligence.v1.AnnotateVideoRequest) 2
AnnotateVideoResponse (com.google.cloud.videointelligence.v1.AnnotateVideoResponse) 2
DetectedAttribute (com.google.cloud.videointelligence.v1.DetectedAttribute) 2
FaceDetectionAnnotation (com.google.cloud.videointelligence.v1.FaceDetectionAnnotation) 2
FaceDetectionConfig (com.google.cloud.videointelligence.v1.FaceDetectionConfig) 2
TimestampedObject (com.google.cloud.videointelligence.v1.TimestampedObject) 2
Track (com.google.cloud.videointelligence.v1.Track) 2
VideoAnnotationResults (com.google.cloud.videointelligence.v1.VideoAnnotationResults) 2
VideoContext (com.google.cloud.videointelligence.v1.VideoContext) 2
VideoIntelligenceServiceClient (com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) 2
VideoSegment (com.google.cloud.videointelligence.v1.VideoSegment) 2
ByteString (com.google.protobuf.ByteString) 1
Path (java.nio.file.Path) 1