
Example 11 with Page

use of com.adobe.cq.wcm.core.components.it.seljup.util.components.page.v1.Page in project aem-core-wcm-components by adobe.

the class PageTests method setupBeforeEach.

public void setupBeforeEach(CQClient adminClient, String rootPage, String pageRT, String segmentPath) throws ClientException {
    // create the test page
    testPage = Commons.createPage(adminClient, Commons.template, rootPage, "testPage", pageTitle, pageRT, "Test Page", 200);
    setupResources(segmentPath, adminClient, rootPage);
    page = new Page();
}
Also used : Page(com.adobe.cq.wcm.core.components.it.seljup.util.components.page.v1.Page) PropertiesPage(com.adobe.cq.testing.selenium.pageobject.cq.sites.PropertiesPage)
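For orientation, a concrete Selenium test class might call this setup from its own JUnit 5 @BeforeEach. The following is only a sketch: the class name, the placeholder adminClient field, and the rootPage, pageRT, and segmentPath values are illustrative assumptions, not taken from the repository.

import org.junit.jupiter.api.BeforeEach;
import org.apache.sling.testing.clients.ClientException;
import com.adobe.cq.testing.client.CQClient;

public class PageV1IT extends PageTests {

    // In the real test suite the admin client is provided by the test environment;
    // this field is only a placeholder for the sketch.
    private CQClient adminClient;

    @BeforeEach
    public void beforeEach() throws ClientException {
        // Illustrative values; actual tests resolve these from their configuration.
        String rootPage = "/content/core-components";
        String pageRT = "core-components/components/page";
        String segmentPath = "/conf/core-components/settings/wcm/segments";
        setupBeforeEach(adminClient, rootPage, pageRT, segmentPath);
    }
}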

Example 12 with Page

use of com.google.cloud.vision.v1p3beta1.Page in project java-vision by googleapis.

the class DetectBeta method detectHandwrittenOcr.

// [END vision_localize_objects_gcs_beta]
// [START vision_handwritten_ocr_beta]
/**
 * Performs handwritten text detection on a local image file.
 *
 * @param filePath The path to the local file to detect handwritten text on.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectHandwrittenOcr(String filePath, PrintStream out) throws Exception {
    List<AnnotateImageRequest> requests = new ArrayList<>();
    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));
    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();
    // Set the Language Hint codes for handwritten OCR
    ImageContext imageContext = ImageContext.newBuilder().addLanguageHints("en-t-i0-handwrit").build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).setImageContext(imageContext).build();
    requests.add(request);
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        // No explicit close() is needed; the try-with-resources statement closes the client.
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                out.printf("Error: %s\n", res.getError().getMessage());
                return;
            }
            // For full list of available annotations, see http://g.co/cloud/vision/docs
            TextAnnotation annotation = res.getFullTextAnnotation();
            for (Page page : annotation.getPagesList()) {
                String pageText = "";
                for (Block block : page.getBlocksList()) {
                    String blockText = "";
                    for (Paragraph para : block.getParagraphsList()) {
                        String paraText = "";
                        for (Word word : para.getWordsList()) {
                            String wordText = "";
                            for (Symbol symbol : word.getSymbolsList()) {
                                wordText = wordText + symbol.getText();
                                out.format("Symbol text: %s (confidence: %f)\n", symbol.getText(), symbol.getConfidence());
                            }
                            out.format("Word text: %s (confidence: %f)\n\n", wordText, word.getConfidence());
                            paraText = String.format("%s %s", paraText, wordText);
                        }
                        // Output Example using Paragraph:
                        out.println("\nParagraph: \n" + paraText);
                        out.format("Paragraph Confidence: %f\n", para.getConfidence());
                        blockText = blockText + paraText;
                    }
                    pageText = pageText + blockText;
                }
            }
            out.println("\nComplete annotation:");
            out.println(annotation.getText());
        }
    }
}
Also used : Word(com.google.cloud.vision.v1p3beta1.Word) ByteString(com.google.protobuf.ByteString) Symbol(com.google.cloud.vision.v1p3beta1.Symbol) ImageAnnotatorClient(com.google.cloud.vision.v1p3beta1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) Page(com.google.cloud.vision.v1p3beta1.Page) ByteString(com.google.protobuf.ByteString) Image(com.google.cloud.vision.v1p3beta1.Image) Feature(com.google.cloud.vision.v1p3beta1.Feature) FileInputStream(java.io.FileInputStream) Paragraph(com.google.cloud.vision.v1p3beta1.Paragraph) AnnotateImageRequest(com.google.cloud.vision.v1p3beta1.AnnotateImageRequest) AnnotateImageResponse(com.google.cloud.vision.v1p3beta1.AnnotateImageResponse) Block(com.google.cloud.vision.v1p3beta1.Block) TextAnnotation(com.google.cloud.vision.v1p3beta1.TextAnnotation) ImageContext(com.google.cloud.vision.v1p3beta1.ImageContext) BatchAnnotateImagesResponse(com.google.cloud.vision.v1p3beta1.BatchAnnotateImagesResponse)
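A minimal way to invoke this sample, assuming the DetectBeta class from the snippet above is on the classpath (or in the same package), could look like the following; the image path is a placeholder, not a file from the project:

public class DetectBetaDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder path to a local image that contains handwriting.
        String filePath = "resources/handwritten.jpg";
        // System.out is already a PrintStream, so it can be passed directly as the output target.
        DetectBeta.detectHandwrittenOcr(filePath, System.out);
    }
}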

Example 13 with Page

use of com.google.cloud.vision.v1.Page in project java-vision by googleapis.

the class BatchAnnotateFilesGcs method batchAnnotateFilesGcs.

public static void batchAnnotateFilesGcs(String gcsUri) throws IOException {
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient imageAnnotatorClient = ImageAnnotatorClient.create()) {
        // You can send multiple files to be annotated; this sample demonstrates how to do it with
        // one file. If you want to annotate multiple files, you have to create an
        // `AnnotateFileRequest` object for each file that you want annotated.
        // First specify where the vision api can find the image
        GcsSource gcsSource = GcsSource.newBuilder().setUri(gcsUri).build();
        // Specify the input config with the file's URI and its type.
        // Supported mime_type: application/pdf, image/tiff, image/gif
        // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#inputconfig
        InputConfig inputConfig = InputConfig.newBuilder().setMimeType("application/pdf").setGcsSource(gcsSource).build();
        // Set the type of annotation you want to perform on the file
        // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Feature.Type
        Feature feature = Feature.newBuilder().setType(Feature.Type.DOCUMENT_TEXT_DETECTION).build();
        // Build the request object for that one file. Note: for additional files you have to
        // create additional `AnnotateFileRequest` objects and store them in a list to be used below.
        // Since we are sending a file of type `application/pdf`, we can use the `pages` field to
        // specify which pages to process. The service can process up to 5 pages per document file.
        // https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateFileRequest
        AnnotateFileRequest fileRequest =
            AnnotateFileRequest.newBuilder()
                .setInputConfig(inputConfig)
                .addFeatures(feature)
                // Process the first page
                .addPages(1)
                // Process the second page
                .addPages(2)
                // Process the last page
                .addPages(-1)
                .build();
        // Add each `AnnotateFileRequest` object to the batch request.
        BatchAnnotateFilesRequest request = BatchAnnotateFilesRequest.newBuilder().addRequests(fileRequest).build();
        // Make the synchronous batch request.
        BatchAnnotateFilesResponse response = imageAnnotatorClient.batchAnnotateFiles(request);
        // Process the results; just get the first result, since only one file was sent in this
        // sample.
        for (AnnotateImageResponse imageResponse : response.getResponsesList().get(0).getResponsesList()) {
            System.out.format("Full text: %s%n", imageResponse.getFullTextAnnotation().getText());
            for (Page page : imageResponse.getFullTextAnnotation().getPagesList()) {
                for (Block block : page.getBlocksList()) {
                    System.out.format("%nBlock confidence: %s%n", block.getConfidence());
                    for (Paragraph par : block.getParagraphsList()) {
                        System.out.format("\tParagraph confidence: %s%n", par.getConfidence());
                        for (Word word : par.getWordsList()) {
                            System.out.format("\t\tWord confidence: %s%n", word.getConfidence());
                            for (Symbol symbol : word.getSymbolsList()) {
                                System.out.format("\t\t\tSymbol: %s, (confidence: %s)%n", symbol.getText(), symbol.getConfidence());
                            }
                        }
                    }
                }
            }
        }
    }
}
Also used : GcsSource(com.google.cloud.vision.v1.GcsSource) Word(com.google.cloud.vision.v1.Word) BatchAnnotateFilesRequest(com.google.cloud.vision.v1.BatchAnnotateFilesRequest) Symbol(com.google.cloud.vision.v1.Symbol) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) Page(com.google.cloud.vision.v1.Page) Feature(com.google.cloud.vision.v1.Feature) Paragraph(com.google.cloud.vision.v1.Paragraph) BatchAnnotateFilesResponse(com.google.cloud.vision.v1.BatchAnnotateFilesResponse) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) AnnotateFileRequest(com.google.cloud.vision.v1.AnnotateFileRequest) Block(com.google.cloud.vision.v1.Block) InputConfig(com.google.cloud.vision.v1.InputConfig)
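To exercise this sample end to end, the caller only needs a gs:// URI to a supported file type. A minimal sketch, with a placeholder bucket and object name, might look like this:

public class BatchAnnotateFilesGcsDemo {

    public static void main(String[] args) throws java.io.IOException {
        // Placeholder URI; the object should be a PDF, TIFF, or GIF per the supported MIME types above.
        String gcsUri = "gs://my-bucket/documents/sample.pdf";
        BatchAnnotateFilesGcs.batchAnnotateFilesGcs(gcsUri);
    }
}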

Example 14 with Page

use of com.google.cloud.vision.v1.Page in project java-vision by googleapis.

the class Detect method detectDocumentTextGcs.

// [END vision_fulltext_detection]
/**
 * Performs document text detection on a remote image on Google Cloud Storage.
 *
 * @param gcsPath The path to the remote file on Google Cloud Storage to detect document text on.
 * @throws IOException on Input/Output errors.
 */
// [START vision_fulltext_detection_gcs]
public static void detectDocumentTextGcs(String gcsPath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();
    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        // No explicit close() is needed; the try-with-resources statement closes the client.
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                System.out.format("Error: %s%n", res.getError().getMessage());
                return;
            }
            // For full list of available annotations, see http://g.co/cloud/vision/docs
            TextAnnotation annotation = res.getFullTextAnnotation();
            for (Page page : annotation.getPagesList()) {
                String pageText = "";
                for (Block block : page.getBlocksList()) {
                    String blockText = "";
                    for (Paragraph para : block.getParagraphsList()) {
                        String paraText = "";
                        for (Word word : para.getWordsList()) {
                            String wordText = "";
                            for (Symbol symbol : word.getSymbolsList()) {
                                wordText = wordText + symbol.getText();
                                System.out.format("Symbol text: %s (confidence: %f)%n", symbol.getText(), symbol.getConfidence());
                            }
                            System.out.format("Word text: %s (confidence: %f)%n%n", wordText, word.getConfidence());
                            paraText = String.format("%s %s", paraText, wordText);
                        }
                        // Output Example using Paragraph:
                        System.out.println("%nParagraph: %n" + paraText);
                        System.out.format("Paragraph Confidence: %f%n", para.getConfidence());
                        blockText = blockText + paraText;
                    }
                    pageText = pageText + blockText;
                }
            }
            System.out.println("%nComplete annotation:");
            System.out.println(annotation.getText());
        }
    }
}
Also used : Word(com.google.cloud.vision.v1.Word) Symbol(com.google.cloud.vision.v1.Symbol) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) Page(com.google.cloud.vision.v1.Page) ByteString(com.google.protobuf.ByteString) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) Paragraph(com.google.cloud.vision.v1.Paragraph) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) Block(com.google.cloud.vision.v1.Block) ImageSource(com.google.cloud.vision.v1.ImageSource) TextAnnotation(com.google.cloud.vision.v1.TextAnnotation) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse)
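Note that the loops above build pageText and blockText but never print or return them. If a caller needs the reassembled text rather than console output, the same traversal can collect it into a string; the helper below is a sketch under that assumption, reusing the com.google.cloud.vision.v1 types already imported in the snippet.

// Rebuilds the recognized text from a TextAnnotation by walking pages, blocks, paragraphs,
// words, and symbols, instead of printing each level.
static String collectText(TextAnnotation annotation) {
    StringBuilder text = new StringBuilder();
    for (Page page : annotation.getPagesList()) {
        for (Block block : page.getBlocksList()) {
            for (Paragraph para : block.getParagraphsList()) {
                for (Word word : para.getWordsList()) {
                    for (Symbol symbol : word.getSymbolsList()) {
                        text.append(symbol.getText());
                    }
                    // Separate words with a space, mirroring how paraText is assembled above.
                    text.append(' ');
                }
            }
        }
    }
    return text.toString().trim();
}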

Example 15 with Page

use of com.adobe.cq.wcm.core.components.it.seljup.util.components.page.v1.Page in project aem-core-wcm-components by Adobe-Marketing-Cloud.

the class PageTests method setupBeforeEach.

public void setupBeforeEach(CQClient adminClient, String rootPage, String pageRT, String segmentPath) throws ClientException {
    // create the test page
    testPage = Commons.createPage(adminClient, Commons.template, rootPage, "testPage", pageTitle, pageRT, "Test Page", 200);
    setupResources(segmentPath, adminClient, rootPage);
    page = new Page();
}
Also used : Page(com.adobe.cq.wcm.core.components.it.seljup.util.components.page.v1.Page) PropertiesPage(com.adobe.cq.testing.selenium.pageobject.cq.sites.PropertiesPage)

Aggregations

ArrayList (java.util.ArrayList) 9
Test (org.junit.Test) 9
ByteString (com.google.protobuf.ByteString) 8
Page (com.google.cloud.dialogflow.cx.v3beta1.Page) 7
AnnotateImageResponse (com.google.cloud.vision.v1.AnnotateImageResponse) 6
Block (com.google.cloud.vision.v1.Block) 6
Feature (com.google.cloud.vision.v1.Feature) 6
ImageAnnotatorClient (com.google.cloud.vision.v1.ImageAnnotatorClient) 6
Page (com.google.cloud.vision.v1.Page) 6
Paragraph (com.google.cloud.vision.v1.Paragraph) 6
Symbol (com.google.cloud.vision.v1.Symbol) 6
Word (com.google.cloud.vision.v1.Word) 6
Page (com.google.cloud.dialogflow.cx.v3.Page) 4
FileInputStream (java.io.FileInputStream) 4
PagesClient (com.google.cloud.dialogflow.cx.v3.PagesClient) 3
AnnotateImageRequest (com.google.cloud.vision.v1.AnnotateImageRequest) 3
BatchAnnotateImagesResponse (com.google.cloud.vision.v1.BatchAnnotateImagesResponse) 3
Image (com.google.cloud.vision.v1.Image) 3
TextAnnotation (com.google.cloud.vision.v1.TextAnnotation) 3
PropertiesPage (com.adobe.cq.testing.selenium.pageobject.cq.sites.PropertiesPage) 2