
Example 96 with Feature

use of com.google.cloud.vision.v1.Feature in project spring-cloud-gcp by spring-cloud.

the class CloudVisionTemplate method analyzeImage.

/**
 * Analyze an image and extract the features of the image specified by
 * {@code featureTypes}.
 * <p>A feature describes the kind of Cloud Vision analysis one wishes to perform on an
 * image, such as text detection, image labelling, facial detection, etc. A full list of
 * feature types can be found in {@link Feature.Type}.
 * @param imageResource the image one wishes to analyze. The Cloud Vision APIs support
 *     image formats described here: https://cloud.google.com/vision/docs/supported-files
 * @param imageContext the image context used to customize the Vision API request
 * @param featureTypes the types of image analysis to perform on the image
 * @return the results of image analyses
 * @throws CloudVisionException if the image could not be read or if a malformed response
 *     is received from the Cloud Vision APIs
 */
public AnnotateImageResponse analyzeImage(Resource imageResource, ImageContext imageContext, Feature.Type... featureTypes) {
    ByteString imgBytes;
    try {
        imgBytes = ByteString.readFrom(imageResource.getInputStream());
    } catch (IOException ex) {
        throw new CloudVisionException("Failed to read image bytes from provided resource.", ex);
    }
    Image image = Image.newBuilder().setContent(imgBytes).build();
    List<Feature> featureList = Arrays.stream(featureTypes).map((featureType) -> Feature.newBuilder().setType(featureType).build()).collect(Collectors.toList());
    BatchAnnotateImagesRequest request = BatchAnnotateImagesRequest.newBuilder().addRequests(AnnotateImageRequest.newBuilder().addAllFeatures(featureList).setImageContext(imageContext).setImage(image)).build();
    BatchAnnotateImagesResponse batchResponse = this.imageAnnotatorClient.batchAnnotateImages(request);
    List<AnnotateImageResponse> annotateImageResponses = batchResponse.getResponsesList();
    if (!annotateImageResponses.isEmpty()) {
        return annotateImageResponses.get(0);
    } else {
        throw new CloudVisionException("Failed to receive valid response from Vision APIs; empty response received.");
    }
}
Also used : Arrays(java.util.Arrays) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) Type(com.google.cloud.vision.v1.Feature.Type) IOException(java.io.IOException) Collectors(java.util.stream.Collectors) Feature(com.google.cloud.vision.v1.Feature) ByteString(com.google.protobuf.ByteString) List(java.util.List) Image(com.google.cloud.vision.v1.Image) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ImageContext(com.google.cloud.vision.v1.ImageContext) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse) BatchAnnotateImagesRequest(com.google.cloud.vision.v1.BatchAnnotateImagesRequest) Code(com.google.rpc.Code) Resource(org.springframework.core.io.Resource) Assert(org.springframework.util.Assert)
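The analyzeImage method above turns a varargs array of Feature.Type values into a list of Feature messages with a stream. The same varargs-to-list mapping can be sketched with only the JDK; the FeatureType enum and Feature record here are hypothetical stand-ins for the Vision API builder types, not the real classes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FeatureMapping {
    // Hypothetical stand-ins for com.google.cloud.vision.v1.Feature and Feature.Type.
    enum FeatureType { TEXT_DETECTION, LABEL_DETECTION, FACE_DETECTION }

    record Feature(FeatureType type) { }

    // Mirrors the Arrays.stream(...).map(...).collect(...) idiom from analyzeImage:
    // one Feature message per requested analysis type.
    static List<Feature> toFeatures(FeatureType... types) {
        return Arrays.stream(types)
                .map(Feature::new)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Feature> features =
                toFeatures(FeatureType.TEXT_DETECTION, FeatureType.LABEL_DETECTION);
        System.out.println(features.size()); // 2
    }
}
```

In the real template each mapped element is built with Feature.newBuilder().setType(featureType).build() rather than a record constructor.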

Example 97 with Feature

use of de.micromata.opengis.kml.v_2_2_0.Feature in project java-mapollage by trixon.

the class Operation method addPolygons.

private void addPolygons(Folder polygonParent, List<Feature> features) {
    for (Feature feature : features) {
        if (feature instanceof Folder) {
            Folder folder = (Folder) feature;
            if (folder != mPathFolder && folder != mPathGapFolder && folder != mPolygonFolder) {
                Folder polygonFolder = polygonParent.createAndAddFolder().withName(folder.getName()).withOpen(true);
                mFolderPolygonInputs.put(polygonFolder, new ArrayList<>());
                addPolygons(polygonFolder, folder.getFeature());
                if (mFolderPolygonInputs.get(polygonFolder) != null) {
                    addPolygon(folder.getName(), mFolderPolygonInputs.get(polygonFolder), polygonParent);
                }
            }
        }
        if (feature instanceof Placemark) {
            Placemark placemark = (Placemark) feature;
            Point point = (Point) placemark.getGeometry();
            ArrayList<Coordinate> coordinates = mFolderPolygonInputs.computeIfAbsent(polygonParent, k -> new ArrayList<>());
            coordinates.addAll(point.getCoordinates());
        }
    }
    ArrayList<Coordinate> rootCoordinates = mFolderPolygonInputs.get(mPolygonFolder);
    if (polygonParent == mPolygonFolder && rootCoordinates != null) {
        addPolygon(mPolygonFolder.getName(), rootCoordinates, polygonParent);
    }
}
Also used : ProfilePlacemark(se.trixon.mapollage.profile.ProfilePlacemark) Placemark(de.micromata.opengis.kml.v_2_2_0.Placemark) Coordinate(de.micromata.opengis.kml.v_2_2_0.Coordinate) Point(de.micromata.opengis.kml.v_2_2_0.Point) Folder(de.micromata.opengis.kml.v_2_2_0.Folder) ProfileFolder(se.trixon.mapollage.profile.ProfileFolder) Feature(de.micromata.opengis.kml.v_2_2_0.Feature)
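addPolygons walks the KML feature tree depth-first: folders recurse, placemarks contribute coordinates to their parent folder's list. The traversal shape can be sketched with plain collections; the Branch/Leaf types below are hypothetical stand-ins for the KML Folder and Placemark classes:

```java
import java.util.ArrayList;
import java.util.List;

public class TreeWalk {
    // Hypothetical stand-ins for the KML Feature hierarchy.
    interface Node { }
    record Branch(String name, List<Node> children) implements Node { }
    record Leaf(double coordinate) implements Node { }

    // Depth-first collection of leaf values, mirroring addPolygons' recursion
    // over folders and its accumulation of placemark coordinates.
    static List<Double> collect(Node node) {
        List<Double> out = new ArrayList<>();
        if (node instanceof Leaf leaf) {
            out.add(leaf.coordinate());
        } else if (node instanceof Branch branch) {
            for (Node child : branch.children()) {
                out.addAll(collect(child));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Node tree = new Branch("root", List.of(
                new Leaf(1.0),
                new Branch("inner", List.of(new Leaf(2.0), new Leaf(3.0)))));
        System.out.println(collect(tree)); // [1.0, 2.0, 3.0]
    }
}
```

The original additionally skips three reserved folders and keeps a per-folder map (mFolderPolygonInputs) instead of returning a flat list.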

Example 98 with Feature

use of com.google.cloud.vision.v1.Feature in project java-docs-samples by GoogleCloudPlatform.

the class OcrProcessImage method detectText.

// [END functions_ocr_process]
// [START functions_ocr_detect]
private void detectText(String bucket, String filename) {
    logger.info("Looking for text in image " + filename);
    List<AnnotateImageRequest> visionRequests = new ArrayList<>();
    String gcsPath = String.format("gs://%s/%s", bucket, filename);
    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature textFeature = Feature.newBuilder().setType(Feature.Type.TEXT_DETECTION).build();
    AnnotateImageRequest visionRequest = AnnotateImageRequest.newBuilder().addFeatures(textFeature).setImage(img).build();
    visionRequests.add(visionRequest);
    // Detect text in an image using the Cloud Vision API
    AnnotateImageResponse visionResponse;
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        visionResponse = client.batchAnnotateImages(visionRequests).getResponses(0);
        if (visionResponse == null) {
            logger.info(String.format("Image %s contains no text", filename));
            return;
        }
        // Check for an API error before inspecting the annotation; an error
        // response carries no text annotation and would otherwise be
        // misreported as "no text".
        if (visionResponse.hasError()) {
            logger.log(Level.SEVERE, "Error in vision API call: " + visionResponse.getError().getMessage());
            return;
        }
        if (!visionResponse.hasFullTextAnnotation()) {
            logger.info(String.format("Image %s contains no text", filename));
            return;
        }
    } catch (IOException e) {
        // Log error (since IOException cannot be thrown by a Cloud Function)
        logger.log(Level.SEVERE, "Error detecting text: " + e.getMessage(), e);
        return;
    }
    String text = visionResponse.getFullTextAnnotation().getText();
    logger.info("Extracted text from image: " + text);
    // Detect language using the Cloud Translation API
    DetectLanguageRequest languageRequest = DetectLanguageRequest.newBuilder().setParent(LOCATION_NAME).setMimeType("text/plain").setContent(text).build();
    DetectLanguageResponse languageResponse;
    try (TranslationServiceClient client = TranslationServiceClient.create()) {
        languageResponse = client.detectLanguage(languageRequest);
    } catch (IOException e) {
        // Log error (since IOException cannot be thrown by a function)
        logger.log(Level.SEVERE, "Error detecting language: " + e.getMessage(), e);
        return;
    }
    if (languageResponse.getLanguagesCount() == 0) {
        logger.info("No languages were detected for text: " + text);
        return;
    }
    String languageCode = languageResponse.getLanguages(0).getLanguageCode();
    logger.info(String.format("Detected language %s for file %s", languageCode, filename));
    // Send a Pub/Sub translation request for every language we're going to translate to
    for (String targetLanguage : TO_LANGS) {
        logger.info("Sending translation request for language " + targetLanguage);
        OcrTranslateApiMessage message = new OcrTranslateApiMessage(text, filename, targetLanguage);
        ByteString byteStr = ByteString.copyFrom(message.toPubsubData());
        PubsubMessage pubsubApiMessage = PubsubMessage.newBuilder().setData(byteStr).build();
        try {
            publisher.publish(pubsubApiMessage).get();
        } catch (InterruptedException | ExecutionException e) {
            // Log error
            logger.log(Level.SEVERE, "Error publishing translation request: " + e.getMessage(), e);
            return;
        }
    }
}
Also used : TranslationServiceClient(com.google.cloud.translate.v3.TranslationServiceClient) DetectLanguageResponse(com.google.cloud.translate.v3.DetectLanguageResponse) ByteString(com.google.protobuf.ByteString) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) IOException(java.io.IOException) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) DetectLanguageRequest(com.google.cloud.translate.v3.DetectLanguageRequest) PubsubMessage(com.google.pubsub.v1.PubsubMessage) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) ImageSource(com.google.cloud.vision.v1.ImageSource) ExecutionException(java.util.concurrent.ExecutionException)
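The final loop in detectText fans out one Pub/Sub message per target language: the same extracted text and filename, crossed with each entry of TO_LANGS. The fan-out itself is independent of Pub/Sub and can be sketched with the JDK alone; the Message record here is a hypothetical stand-in for OcrTranslateApiMessage:

```java
import java.util.ArrayList;
import java.util.List;

public class TranslationFanOut {
    // Hypothetical stand-in for OcrTranslateApiMessage.
    record Message(String text, String filename, String targetLanguage) { }

    // Builds one message per target language, as the publish loop in detectText does.
    static List<Message> fanOut(String text, String filename, List<String> targetLanguages) {
        List<Message> messages = new ArrayList<>();
        for (String lang : targetLanguages) {
            messages.add(new Message(text, filename, lang));
        }
        return messages;
    }

    public static void main(String[] args) {
        List<Message> msgs = fanOut("hello", "scan.png", List.of("es", "fr"));
        System.out.println(msgs.size()); // 2
    }
}
```

In the sample, each message is then serialized and published; the blocking publish(...).get() means one failed publish aborts the remaining languages.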

Example 99 with Feature

use of com.google.cloud.vision.v1.Feature in project java-docs-samples by GoogleCloudPlatform.

the class ImageMagick method blurOffensiveImages.

// [END run_imageproc_handler_setup]
// [END cloudrun_imageproc_handler_setup]
// [START cloudrun_imageproc_handler_analyze]
// [START run_imageproc_handler_analyze]
// Blurs uploaded images that are flagged as Adult or Violence.
public static void blurOffensiveImages(JsonObject data) {
    String fileName = data.get("name").getAsString();
    String bucketName = data.get("bucket").getAsString();
    BlobInfo blobInfo = BlobInfo.newBuilder(bucketName, fileName).build();
    // Construct URI to GCS bucket and file.
    String gcsPath = String.format("gs://%s/%s", bucketName, fileName);
    System.out.println(String.format("Analyzing %s", fileName));
    // Construct request.
    List<AnnotateImageRequest> requests = new ArrayList<>();
    ImageSource imgSource = ImageSource.newBuilder().setImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feature = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feature).setImage(img).build();
    requests.add(request);
    // Send request to the Vision API.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                System.out.println(String.format("Error: %s\n", res.getError().getMessage()));
                return;
            }
            // Get Safe Search Annotations
            SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
            if (annotation.getAdultValue() == 5 || annotation.getViolenceValue() == 5) {
                System.out.println(String.format("Detected %s as inappropriate.", fileName));
                blur(blobInfo);
            } else {
                System.out.println(String.format("Detected %s as OK.", fileName));
            }
        }
    } catch (Exception e) {
        System.out.println(String.format("Error with Vision API: %s", e.getMessage()));
    }
}
Also used : SafeSearchAnnotation(com.google.cloud.vision.v1.SafeSearchAnnotation) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) BlobInfo(com.google.cloud.storage.BlobInfo) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) IOException(java.io.IOException) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) ImageSource(com.google.cloud.vision.v1.ImageSource) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse)
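The == 5 comparisons in blurOffensiveImages rely on the wire values of the Vision API's Likelihood enum, where 5 is VERY_LIKELY. A small sketch of that check with the enum values written out (values as defined in the v1 Likelihood proto):

```java
public class SafeSearchCheck {
    // Wire values from the Cloud Vision v1 Likelihood proto.
    enum Likelihood {
        UNKNOWN(0), VERY_UNLIKELY(1), UNLIKELY(2), POSSIBLE(3), LIKELY(4), VERY_LIKELY(5);

        final int value;

        Likelihood(int value) {
            this.value = value;
        }
    }

    // Mirrors blurOffensiveImages: blur only when adult or violence
    // is rated VERY_LIKELY (wire value 5).
    static boolean shouldBlur(Likelihood adult, Likelihood violence) {
        return adult.value == 5 || violence.value == 5;
    }

    public static void main(String[] args) {
        System.out.println(shouldBlur(Likelihood.VERY_LIKELY, Likelihood.UNLIKELY)); // true
        System.out.println(shouldBlur(Likelihood.LIKELY, Likelihood.POSSIBLE)); // false
    }
}
```

Comparing against the named constant (Likelihood.VERY_LIKELY) rather than the raw integer would make the sample's intent clearer.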

Example 100 with Feature

use of com.google.cloud.vision.v1.Feature in project java-vision by googleapis.

the class ITSystemTest method detectLandmarksUrlTest.

@Test
public void detectLandmarksUrlTest() throws Exception {
    ImageSource imgSource = ImageSource.newBuilder().setImageUri(SAMPLE_URI + "landmark/pofa.jpg").build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Type.LANDMARK_DETECTION).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    List<String> actual = new ArrayList<>();
    int tryCount = 0;
    int maxTries = 3;
    while (tryCount < maxTries) {
        try {
            actual = addResponsesToList(request);
            break;
        } catch (StatusRuntimeException ex) {
            tryCount++;
            System.out.println("retrying due to request throttling or DOS prevention...");
            TimeUnit.SECONDS.sleep(30);
        }
    }
    assertThat(actual).contains("Palace of Fine Arts");
}
Also used : AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) ArrayList(java.util.ArrayList) StatusRuntimeException(io.grpc.StatusRuntimeException) ImageSource(com.google.cloud.vision.v1.ImageSource) ByteString(com.google.protobuf.ByteString) ReferenceImage(com.google.cloud.vision.v1.ReferenceImage) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) CropHint(com.google.cloud.vision.v1.CropHint) Test(org.junit.Test)
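The bounded retry loop in detectLandmarksUrlTest is a general pattern: attempt the call, catch the throttling exception, back off, and give up after maxTries. A generic JDK-only sketch of the same loop; the Supplier and the RuntimeException catch are placeholders for the test's addResponsesToList call and StatusRuntimeException:

```java
import java.util.function.Supplier;

public class Retry {
    // Runs the supplier up to maxTries times, retrying on RuntimeException;
    // returns the fallback if every attempt fails.
    static <T> T withRetries(Supplier<T> action, int maxTries, T fallback) {
        for (int tryCount = 0; tryCount < maxTries; tryCount++) {
            try {
                return action.get();
            } catch (RuntimeException ex) {
                // In the test above, this is where the 30-second back-off sleep goes.
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        String result = withRetries(() -> {
            if (++attempts[0] < 3) {
                throw new RuntimeException("throttled");
            }
            return "ok";
        }, 3, "gave up");
        System.out.println(result); // ok
    }
}
```

Note that the original test silently falls through to the assertion when all retries fail; returning an explicit fallback (or rethrowing the last exception) makes the failure mode easier to diagnose.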

Aggregations

Feature (org.osate.aadl2.Feature): 82
ArrayList (java.util.ArrayList): 78
Feature (com.google.cloud.vision.v1.Feature): 72
AnnotateImageRequest (com.google.cloud.vision.v1.AnnotateImageRequest): 69
Image (com.google.cloud.vision.v1.Image): 69
BatchAnnotateImagesResponse (com.google.cloud.vision.v1.BatchAnnotateImagesResponse): 66
ImageAnnotatorClient (com.google.cloud.vision.v1.ImageAnnotatorClient): 66
AnnotateImageResponse (com.google.cloud.vision.v1.AnnotateImageResponse): 63
ByteString (com.google.protobuf.ByteString): 47
ImageSource (com.google.cloud.vision.v1.ImageSource): 38
FileInputStream (java.io.FileInputStream): 30
Subcomponent (org.osate.aadl2.Subcomponent): 29
EntityAnnotation (com.google.cloud.vision.v1.EntityAnnotation): 27
Classifier (org.osate.aadl2.Classifier): 27
WebImage (com.google.cloud.vision.v1.WebDetection.WebImage): 26
ComponentClassifier (org.osate.aadl2.ComponentClassifier): 22
NamedElement (org.osate.aadl2.NamedElement): 22
FeatureGroup (org.osate.aadl2.FeatureGroup): 18
FeatureInstance (org.osate.aadl2.instance.FeatureInstance): 17
FeatureGroupType (org.osate.aadl2.FeatureGroupType): 16