
Example 11 with Image

use of com.google.cloud.vision.v1.Image in project java-docs-samples by GoogleCloudPlatform.

the class Detect method detectProperties.

/**
 * Detects image properties such as color frequency from the specified local image.
 *
 * @param filePath The path to the local file on which to detect image properties.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectProperties(String filePath, PrintStream out) throws Exception, IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();
    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));
    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Type.IMAGE_PROPERTIES).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                out.printf("Error: %s\n", res.getError().getMessage());
                return;
            }
            // For full list of available annotations, see http://g.co/cloud/vision/docs
            DominantColorsAnnotation colors = res.getImagePropertiesAnnotation().getDominantColors();
            for (ColorInfo color : colors.getColorsList()) {
                out.printf("fraction: %f\nr: %f, g: %f, b: %f\n", color.getPixelFraction(), color.getColor().getRed(), color.getColor().getGreen(), color.getColor().getBlue());
            }
        }
    }
}
Also used : ByteString(com.google.protobuf.ByteString) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) WebImage(com.google.cloud.vision.v1.WebDetection.WebImage) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) FileInputStream(java.io.FileInputStream) ColorInfo(com.google.cloud.vision.v1.ColorInfo) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) DominantColorsAnnotation(com.google.cloud.vision.v1.DominantColorsAnnotation) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse)
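A minimal caller sketch for the snippet above; the class name Detect matches the sample, but the main method and the file path are illustrative assumptions, not part of the original source:

public static void main(String[] args) throws Exception {
    // Hypothetical path to a local image; replace with a real file on disk.
    String filePath = "./resources/landmark.jpg";
    // Prints the dominant colors and their pixel fractions to standard out.
    Detect.detectProperties(filePath, System.out);
}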

Example 12 with Image

use of com.google.cloud.vision.v1.Image in project java-docs-samples by GoogleCloudPlatform.

the class Detect method detectCropHints.

// [END vision_web_entities_include_geo_results_uri]
/**
 * Suggests a region to crop to for a local file.
 *
 * @param filePath The path to the local file used for crop hints detection.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectCropHints(String filePath, PrintStream out) throws Exception, IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();
    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));
    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Type.CROP_HINTS).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                out.printf("Error: %s\n", res.getError().getMessage());
                return;
            }
            // For full list of available annotations, see http://g.co/cloud/vision/docs
            CropHintsAnnotation annotation = res.getCropHintsAnnotation();
            for (CropHint hint : annotation.getCropHintsList()) {
                out.println(hint.getBoundingPoly());
            }
        }
    }
}
Also used : ByteString(com.google.protobuf.ByteString) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) ArrayList(java.util.ArrayList) WebImage(com.google.cloud.vision.v1.WebDetection.WebImage) Image(com.google.cloud.vision.v1.Image) Feature(com.google.cloud.vision.v1.Feature) FileInputStream(java.io.FileInputStream) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) CropHintsAnnotation(com.google.cloud.vision.v1.CropHintsAnnotation) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) CropHint(com.google.cloud.vision.v1.CropHint) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse)
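If the individual corner coordinates are wanted rather than the printed BoundingPoly, each hint's vertices can be read directly. A hedged sketch, assuming an additional import of com.google.cloud.vision.v1.Vertex alongside the imports listed above:

for (CropHint hint : annotation.getCropHintsList()) {
    // Each vertex is an absolute pixel coordinate in the source image.
    for (Vertex vertex : hint.getBoundingPoly().getVerticesList()) {
        out.printf("(%d, %d)\n", vertex.getX(), vertex.getY());
    }
    out.printf("confidence: %f\n", hint.getConfidence());
}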

Example 13 with Image

use of com.google.cloud.vision.v1.Image in project java-docs-samples by GoogleCloudPlatform.

the class Detect method detectWebEntitiesIncludeGeoResults.

// [START vision_web_entities_include_geo_results]
/**
 * Find web entities given a local image.
 * @param filePath The path of the local image on which to detect web entities.
 * @param out A {@link PrintStream} to write the results to.
 * @throws Exception on errors while closing the client.
 * @throws IOException on Input/Output errors.
 */
public static void detectWebEntitiesIncludeGeoResults(String filePath, PrintStream out) throws Exception, IOException {
    // Instantiates a client
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        // Read in the local image
        ByteString contents = ByteString.readFrom(new FileInputStream(filePath));
        // Build the image
        Image image = Image.newBuilder().setContent(contents).build();
        // Enable `IncludeGeoResults`
        WebDetectionParams webDetectionParams = WebDetectionParams.newBuilder().setIncludeGeoResults(true).build();
        // Set the parameters for the image
        ImageContext imageContext = ImageContext.newBuilder().setWebDetectionParams(webDetectionParams).build();
        // Create the request with the image, imageContext, and the specified feature: web detection
        AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
            .addFeatures(Feature.newBuilder().setType(Type.WEB_DETECTION))
            .setImage(image)
            .setImageContext(imageContext)
            .build();
        // Perform the request
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(Arrays.asList(request));
        // Display the results
        response.getResponsesList().stream()
            .forEach(r -> r.getWebDetection().getWebEntitiesList().stream()
                .forEach(entity -> {
                    out.format("Description: %s\n", entity.getDescription());
                    out.format("Score: %f\n", entity.getScore());
                }));
    }
}
Also used : WebDetectionParams(com.google.cloud.vision.v1.WebDetectionParams) Arrays(java.util.Arrays) WebPage(com.google.cloud.vision.v1.WebDetection.WebPage) Paragraph(com.google.cloud.vision.v1.Paragraph) ArrayList(java.util.ArrayList) ImageAnnotatorClient(com.google.cloud.vision.v1.ImageAnnotatorClient) WebImage(com.google.cloud.vision.v1.WebDetection.WebImage) AnnotateImageRequest(com.google.cloud.vision.v1.AnnotateImageRequest) BatchAnnotateImagesResponse(com.google.cloud.vision.v1.BatchAnnotateImagesResponse) EntityAnnotation(com.google.cloud.vision.v1.EntityAnnotation) WebLabel(com.google.cloud.vision.v1.WebDetection.WebLabel) CropHint(com.google.cloud.vision.v1.CropHint) PrintStream(java.io.PrintStream) AnnotateImageResponse(com.google.cloud.vision.v1.AnnotateImageResponse) ImageSource(com.google.cloud.vision.v1.ImageSource) WebDetection(com.google.cloud.vision.v1.WebDetection) Type(com.google.cloud.vision.v1.Feature.Type) LocationInfo(com.google.cloud.vision.v1.LocationInfo) FaceAnnotation(com.google.cloud.vision.v1.FaceAnnotation) Block(com.google.cloud.vision.v1.Block) CropHintsAnnotation(com.google.cloud.vision.v1.CropHintsAnnotation) IOException(java.io.IOException) FileInputStream(java.io.FileInputStream) Feature(com.google.cloud.vision.v1.Feature) Symbol(com.google.cloud.vision.v1.Symbol) ColorInfo(com.google.cloud.vision.v1.ColorInfo) ByteString(com.google.protobuf.ByteString) List(java.util.List) TextAnnotation(com.google.cloud.vision.v1.TextAnnotation) Image(com.google.cloud.vision.v1.Image) WebEntity(com.google.cloud.vision.v1.WebDetection.WebEntity) ImageContext(com.google.cloud.vision.v1.ImageContext) SafeSearchAnnotation(com.google.cloud.vision.v1.SafeSearchAnnotation) Page(com.google.cloud.vision.v1.Page) DominantColorsAnnotation(com.google.cloud.vision.v1.DominantColorsAnnotation) Word(com.google.cloud.vision.v1.Word)
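The same web-detection request can reference a Cloud Storage object instead of local bytes by building the Image from an ImageSource (already among the imports above). A sketch under that assumption; the gs:// URI is a placeholder:

// Build the image from a Cloud Storage URI instead of local file bytes.
ImageSource imageSource = ImageSource.newBuilder()
    .setImageUri("gs://your-bucket/your-image.jpg")  // placeholder URI
    .build();
Image image = Image.newBuilder().setSource(imageSource).build();
// The feature, image context, and batchAnnotateImages call stay exactly as shown above.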

Example 14 with Image

use of com.ibm.dtfj.image.j9.Image in project openj9 by eclipse.

the class XMLIndexReader method setJ9DumpData.

public void setJ9DumpData(long environ, String osType, String osSubType, String cpuType, int cpuCount, long bytesMem, int pointerSize, Image[] imageRef, ImageAddressSpace[] addressSpaceRef, ImageProcess[] processRef) {
    Builder builder = null;
    if (_stream == null) {
        // extract directly from the file
        builder = new Builder(_coreFile, _reader, environ, _fileResolvingAgent);
    } else {
        // extract using the data stream
        builder = new Builder(_coreFile, _stream, environ, _fileResolvingAgent);
    }
    _coreFile.extract(builder);
    // Jazz 4961 : chamlain : NumberFormatException opening corrupt dump
    if (cpuType == null)
        cpuType = builder.getCPUType();
    String cpuSubType = builder.getCPUSubType();
    if (osType == null)
        osType = builder.getOSType();
    long creationTime = builder.getCreationTime();
    _coreImage = new Image(osType, osSubType, cpuType, cpuSubType, cpuCount, bytesMem, creationTime);
    ImageAddressSpace addressSpace = (ImageAddressSpace) builder.getAddressSpaces().next();
    ImageProcess process = (ImageProcess) addressSpace.getCurrentProcess();
    // If not sure, use the first address space/process pair found
    for (Iterator it = builder.getAddressSpaces(); it.hasNext(); ) {
        ImageAddressSpace addressSpace1 = (ImageAddressSpace) it.next();
        final boolean vb = false;
        if (vb)
            System.out.println("address space " + addressSpace1);
        _coreImage.addAddressSpace(addressSpace1);
        for (Iterator it2 = addressSpace1.getProcesses(); it2.hasNext(); ) {
            ImageProcess process1 = (ImageProcess) it2.next();
            if (vb)
                try {
                    System.out.println("process " + process1.getID());
                } catch (DataUnavailable e) {
                } catch (CorruptDataException e) {
                }
            if (process == null || isProcessForEnvironment(environ, addressSpace1, process1)) {
                addressSpace = addressSpace1;
                process = process1;
                if (vb)
                    System.out.println("default process for Runtime");
            }
        }
    }
    if (null != process) {
        // z/OS can have 64-bit or 31-bit processes, Java only reports 64-bit or 32-bit.
        if (process.getPointerSize() != pointerSize && !(process.getPointerSize() == 31 && pointerSize == 32)) {
            System.out.println("XML and core file pointer sizes differ " + process.getPointerSize() + "!=" + pointerSize);
        }
    } else {
        throw new IllegalStateException("No process found in the dump.");
    }
    imageRef[0] = _coreImage;
    addressSpaceRef[0] = addressSpace;
    processRef[0] = process;
}
Also used : ImageAddressSpace(com.ibm.dtfj.image.j9.ImageAddressSpace) ImageProcess(com.ibm.dtfj.image.j9.ImageProcess) Builder(com.ibm.dtfj.image.j9.Builder) Iterator(java.util.Iterator) DataUnavailable(com.ibm.dtfj.image.DataUnavailable) CorruptDataException(com.ibm.dtfj.image.CorruptDataException) Image(com.ibm.dtfj.image.j9.Image)
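setJ9DumpData hands its three results back through single-element arrays, a pre-generics out-parameter idiom. A hypothetical caller sketch; the indexReader instance and the scalar argument values are assumptions standing in for data parsed from the dump's XML index:

Image[] imageRef = new Image[1];
ImageAddressSpace[] addressSpaceRef = new ImageAddressSpace[1];
ImageProcess[] processRef = new ImageProcess[1];
// Placeholder scalar arguments; in practice these come from the XML index and the core reader.
indexReader.setJ9DumpData(environ, osType, osSubType, cpuType, cpuCount, bytesMem, pointerSize,
        imageRef, addressSpaceRef, processRef);
Image coreImage = imageRef[0];               // populated on return
ImageAddressSpace defaultSpace = addressSpaceRef[0];
ImageProcess defaultProcess = processRef[0];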

Example 15 with Image

use of com.ibm.dtfj.image.j9.Image in project openj9 by eclipse.

the class JavaObject method getSections.

/* (non-Javadoc)
	 * @see com.ibm.dtfj.java.JavaObject#getSections()
	 */
public Iterator getSections() {
    // (not initialized so that code paths will be compiler-validated)
    List sections;
    // arraylets have a more complicated scheme so handle them differently
    if (isArraylet()) {
        try {
            JavaArrayClass arrayForm = (JavaArrayClass) getJavaClass();
            // the first element comes immediately after the header so the offset to it is the size of the header
            // NOTE:  this header size does NOT count arraylet leaves
            int objectHeaderSize = arrayForm.getFirstElementOffset();
            // we require the pointer size in order to walk the leaf pointers in the spine
            int bytesPerPointer = _javaVM.bytesPerPointer();
            try {
                int instanceSize = arrayForm.getInstanceSize(this);
                // the instance size will include the header and the actual data inside the array so separate them
                long contentDataSize = (long) (instanceSize - objectHeaderSize);
                // get the number of leaves, excluding the tail leaf (the tail leaf is the final leaf which points back into the spine).  There won't be one if there isn't a remainder in this calculation since it would be empty
                int fullSizeLeaves = (int) (contentDataSize / _arrayletLeafSize);
                // find out how big the tail leaf would be
                long tailLeafSize = contentDataSize % _arrayletLeafSize;
                // if it is non-zero, we know that there must be one (bear in mind the fact that all arraylets have at least 1 leaf pointer - consider empty arrays)
                int totalLeafCount = (0 == tailLeafSize) ? fullSizeLeaves : (fullSizeLeaves + 1);
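                // Worked example (hypothetical numbers, not from any dump): with contentDataSize = 10000 bytes
                // and _arrayletLeafSize = 4096, fullSizeLeaves = 10000 / 4096 = 2 and tailLeafSize = 10000 % 4096 = 1808,
                // so totalLeafCount = 3 (two full-size leaves plus one tail leaf).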
                // CMVC 153943 : DTFJ fix for zero-length arraylets - remove code to add 1 to the leaf count in the event that it is 0.
                // Always assuming there is a leaf means that when the image sections are determined it will cause an error, as there
                // is no space allocated in this instance beyond the size of the spine.
                String nestedType = arrayForm.getLeafClass().getName();
                // 4-byte object alignment in realtime requires that long and double arraylets have padding, which may need to be placed before the array data or after, depending on whether the alignment succeeded at a natural boundary or not
                boolean alignmentCandidate = (4 == _objectAlignment) && ("double".equals(nestedType) || "long".equals(nestedType));
                // we will need a size for the section which includes the spine (and potentially the tail leaf or even all the leaves (in immortal))
                // start with the object header and the leaves
                long headerAndLeafPointers = objectHeaderSize + (totalLeafCount * bytesPerPointer);
                long spineSectionSize = headerAndLeafPointers;
                // we will now walk the leaves to see if this is an inline arraylet
                // first off, see if we would need padding to align the first inline data element
                long nextExpectedInteriorLeafAddress = _basePointer.getAddress() + headerAndLeafPointers;
                boolean doesHaveTailPadding = false;
                if (alignmentCandidate && (totalLeafCount > 0)) {
                    // alignment candidates need to have at least 1 leaf otherwise there is nothing to align
                    if (0 == (nextExpectedInteriorLeafAddress % 8)) {
                        // no need to add extra space here so the extra slot will be at the tail
                        doesHaveTailPadding = true;
                    } else {
                        // we need to bump up our expected location for alignment
                        nextExpectedInteriorLeafAddress += 4;
                        spineSectionSize += 4;
                        if (0 != (nextExpectedInteriorLeafAddress % 8)) {
                            // this can't happen so the core is corrupt
                            throw new CorruptDataException(new CorruptData("Arraylet leaf pointer misaligned for object", _basePointer));
                        }
                    }
                }
                Vector externalSections = null;
                for (int i = 0; i < totalLeafCount; i++) {
                    ImagePointer leafPointer = _basePointer.getPointerAt(objectHeaderSize + (i * bytesPerPointer));
                    if (leafPointer.getAddress() == nextExpectedInteriorLeafAddress) {
                        // this pointer is interior so add it to the spine section
                        long internalLeafSize = _arrayletLeafSize;
                        if (fullSizeLeaves == i) {
                            // this is the last leaf so get the tail leaf size
                            internalLeafSize = tailLeafSize;
                        }
                        spineSectionSize += internalLeafSize;
                        nextExpectedInteriorLeafAddress += internalLeafSize;
                    } else {
                        // this pointer is exterior so make it its own section
                        if (null == externalSections) {
                            externalSections = new Vector();
                        }
                        externalSections.add(new JavaObjectImageSection(leafPointer, _arrayletLeafSize));
                    }
                }
                if (doesHaveTailPadding) {
                    // now, add the extra 4 bytes to the end
                    spineSectionSize += 4;
                }
                // ensure that we are at least the minimum object size
                spineSectionSize = Math.max(spineSectionSize, _arrayletSpineSize);
                JavaObjectImageSection spineSection = new JavaObjectImageSection(_basePointer, spineSectionSize);
                if (null == externalSections) {
                    // create the section list, with the spine first (other parts of our implementation use the knowledge that the spine is first to reduce logic duplication)
                    sections = Collections.singletonList(spineSection);
                } else {
                    sections = new Vector();
                    sections.add(spineSection);
                    sections.addAll(externalSections);
                }
            } catch (MemoryAccessException e) {
                // if we had a memory access exception, the spine must be corrupt, or something
                sections = Collections.singletonList(new CorruptData("failed to walk arraylet spine", e.getPointer()));
            }
        } catch (CorruptDataException e) {
            sections = Collections.singletonList(e.getCorruptData());
        }
    } else {
        // currently J9 objects are atomic extents of memory but that changes with metronome and that will probably extend to other VM configurations, as well
        long size = 0;
        try {
            size = ((com.ibm.dtfj.java.j9.JavaAbstractClass) getJavaClass()).getInstanceSize(this);
            JavaObjectImageSection section = new JavaObjectImageSection(_basePointer, size);
            sections = Collections.singletonList(section);
        } catch (CorruptDataException e) {
            sections = Collections.singletonList(e.getCorruptData());
        }
    // XXX - handle the case of this corrupt data better (may require API change)
    }
    return sections.iterator();
}
Also used : CorruptDataException(com.ibm.dtfj.image.CorruptDataException) ImagePointer(com.ibm.dtfj.image.ImagePointer) ArrayList(java.util.ArrayList) List(java.util.List) CorruptData(com.ibm.dtfj.image.j9.CorruptData) Vector(java.util.Vector) MemoryAccessException(com.ibm.dtfj.image.MemoryAccessException)
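Callers of getSections() have to be prepared for either ImageSection entries or a CorruptData marker, since corruption is folded into the returned list rather than thrown. A hedged consumer sketch, assuming the standard DTFJ interfaces com.ibm.dtfj.image.ImageSection and com.ibm.dtfj.image.CorruptData and a javaObject instance obtained elsewhere:

for (Iterator it = javaObject.getSections(); it.hasNext(); ) {
    Object next = it.next();
    if (next instanceof CorruptData) {
        // The object (or its arraylet spine) could not be walked.
        System.err.println("corrupt section: " + next);
        continue;
    }
    ImageSection section = (ImageSection) next;
    System.out.println("base=0x" + Long.toHexString(section.getBaseAddress().getAddress())
            + " size=" + section.getSize());
}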

Aggregations

AnnotateImageRequest (com.google.cloud.vision.v1.AnnotateImageRequest) 28
AnnotateImageResponse (com.google.cloud.vision.v1.AnnotateImageResponse) 28
BatchAnnotateImagesResponse (com.google.cloud.vision.v1.BatchAnnotateImagesResponse) 28
Feature (com.google.cloud.vision.v1.Feature) 28
Image (com.google.cloud.vision.v1.Image) 28
ArrayList (java.util.ArrayList) 28
ImageAnnotatorClient (com.google.cloud.vision.v1.ImageAnnotatorClient) 27
WebImage (com.google.cloud.vision.v1.WebDetection.WebImage) 25
ByteString (com.google.protobuf.ByteString) 17
EntityAnnotation (com.google.cloud.vision.v1.EntityAnnotation) 16
ImageSource (com.google.cloud.vision.v1.ImageSource) 15
FileInputStream (java.io.FileInputStream) 14
WebPage (com.google.cloud.vision.v1.WebDetection.WebPage) 8
Block (com.google.cloud.vision.v1.Block) 6
ColorInfo (com.google.cloud.vision.v1.ColorInfo) 6
CropHint (com.google.cloud.vision.v1.CropHint) 6
CropHintsAnnotation (com.google.cloud.vision.v1.CropHintsAnnotation) 6
DominantColorsAnnotation (com.google.cloud.vision.v1.DominantColorsAnnotation) 6
FaceAnnotation (com.google.cloud.vision.v1.FaceAnnotation) 6
LocationInfo (com.google.cloud.vision.v1.LocationInfo) 6