Example 1 with Face

use of com.google.android.gms.vision.face.Face in project android-vision by googlesamples.

From class GooglyEyesActivity, method createFaceDetector:

//==============================================================================================
// Detector
//==============================================================================================
/**
 * Creates the face detector and associated processing pipeline to support either front facing
 * mode or rear facing mode.  Checks if the detector is ready to use, and displays a low storage
 * warning if it was not possible to download the face library.
 */
@NonNull
private FaceDetector createFaceDetector(Context context) {
    // For both front facing and rear facing modes, the detector is initialized to do landmark
    // detection (to find the eyes), classification (to determine if the eyes are open), and
    // tracking.
    //
    // Use of "fast mode" enables faster detection for frontward faces, at the expense of not
    // attempting to detect faces at more varied angles (e.g., faces in profile).  Therefore,
    // faces that are turned too far won't be detected under fast mode.
    //
    // For front facing mode only, the detector will use the "prominent face only" setting,
    // which is optimized for tracking a single relatively large face.  This setting allows the
    // detector to take some shortcuts to make tracking faster, at the expense of not being able
    // to track multiple faces.
    //
    // Setting the minimum face size not only controls how large faces must be in order to be
    // detected, it also affects performance.  Since it takes longer to scan for smaller faces,
    // we increase the minimum face size for the rear facing mode a little bit in order to make
    // tracking faster (at the expense of missing smaller faces).  But this optimization is less
    // important for the front facing case, because when "prominent face only" is enabled, the
    // detector stops scanning for faces after it has found the first (large) face.
    FaceDetector detector = new FaceDetector.Builder(context)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setTrackingEnabled(true)
            .setMode(FaceDetector.FAST_MODE)
            .setProminentFaceOnly(mIsFrontFacing)
            .setMinFaceSize(mIsFrontFacing ? 0.35f : 0.15f)
            .build();
    Detector.Processor<Face> processor;
    if (mIsFrontFacing) {
        // For front facing mode, a single tracker instance is used with an associated focusing
        // processor.  This configuration allows the face detector to take some shortcuts to
        // speed up detection, in that it can quit after finding a single face and can assume
        // that the next face position is usually relatively close to the last seen face
        // position.
        Tracker<Face> tracker = new GooglyFaceTracker(mGraphicOverlay);
        processor = new LargestFaceFocusingProcessor.Builder(detector, tracker).build();
    } else {
        // For rear facing mode, a factory is used to create per-face tracker instances.  A
        // tracker is created for each face and is maintained as long as the same face is
        // visible, enabling per-face state to be maintained over time.  This is used to store
        // the iris position and velocity for each face independently, simulating the motion of
        // the eyes of any number of faces over time.
        //
        // Both the front facing mode and the rear facing mode use the same tracker
        // implementation, avoiding the need for any additional code.  The only difference
        // between these cases is the choice of Processor: one that is specialized for tracking
        // a single face or one that can handle multiple faces.  Here, we use MultiProcessor,
        // which is a standard component of the mobile vision API for managing multiple items.
        MultiProcessor.Factory<Face> factory = new MultiProcessor.Factory<Face>() {

            @Override
            public Tracker<Face> create(Face face) {
                return new GooglyFaceTracker(mGraphicOverlay);
            }
        };
        processor = new MultiProcessor.Builder<>(factory).build();
    }
    detector.setProcessor(processor);
    if (!detector.isOperational()) {
        // Note: The first time that an app using face API is installed on a device, GMS will
        // download a native library to the device in order to do detection.  Usually this
        // completes before the app is run for the first time.  But if that download has not yet
        // completed, then the above call will not detect any faces.
        //
        // isOperational() can be used to check if the required native library is currently
        // available.  The detector will automatically become operational once the library
        // download completes on device.
        Log.w(TAG, "Face detector dependencies are not yet available.");
        // Check for low storage.  If there is low storage, the native library will not be
        // downloaded, so detection will not become operational.
        IntentFilter lowStorageFilter = new IntentFilter(Intent.ACTION_DEVICE_STORAGE_LOW);
        boolean hasLowStorage = registerReceiver(null, lowStorageFilter) != null;
        if (hasLowStorage) {
            Toast.makeText(this, R.string.low_storage_error, Toast.LENGTH_LONG).show();
            Log.w(TAG, getString(R.string.low_storage_error));
        }
    }
    return detector;
}
Also used : IntentFilter(android.content.IntentFilter), MultiProcessor(com.google.android.gms.vision.MultiProcessor), Detector(com.google.android.gms.vision.Detector), FaceDetector(com.google.android.gms.vision.face.FaceDetector), Face(com.google.android.gms.vision.face.Face), NonNull(android.support.annotation.NonNull)
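The facing-dependent choices in createFaceDetector above (prominent-face-only tracking and a larger minimum face size for the front camera) can be captured in a small, framework-free sketch. The class name `DetectorConfig` is illustrative; only the two float values (0.35f front, 0.15f rear) and the prominent-face flag come from the sample itself.

```java
// A minimal sketch of the facing-dependent detector settings used above:
// front facing favors a single prominent face with a larger minimum face
// size, while rear facing scans for multiple, possibly smaller faces.
final class DetectorConfig {
    final boolean prominentFaceOnly;  // true only for the front camera
    final float minFaceSize;          // fraction of frame width

    DetectorConfig(boolean isFrontFacing) {
        prominentFaceOnly = isFrontFacing;
        minFaceSize = isFrontFacing ? 0.35f : 0.15f;  // values from the sample
    }

    public static void main(String[] args) {
        DetectorConfig front = new DetectorConfig(true);
        DetectorConfig rear = new DetectorConfig(false);
        System.out.println(front.prominentFaceOnly + " " + front.minFaceSize);
        System.out.println(rear.prominentFaceOnly + " " + rear.minFaceSize);
    }
}
```

In the real sample these values feed `FaceDetector.Builder.setProminentFaceOnly` and `setMinFaceSize`; a larger minimum face size speeds up scanning at the cost of missing smaller faces.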

Example 2 with Face

use of com.google.android.gms.vision.face.Face in project android-vision by googlesamples.

From class FaceGraphic, method draw:

/**
 * Draws the face annotations for position on the supplied canvas.
 */
@Override
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }
    // Draws a circle at the position of the detected face, with the face's track id below.
    float x = translateX(face.getPosition().x + face.getWidth() / 2);
    float y = translateY(face.getPosition().y + face.getHeight() / 2);
    canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
    canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
    canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
    canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
    canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET * 2, y - ID_Y_OFFSET * 2, mIdPaint);
    // Draws a bounding box around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = x - xOffset;
    float top = y - yOffset;
    float right = x + xOffset;
    float bottom = y + yOffset;
    canvas.drawRect(left, top, right, bottom, mBoxPaint);
}
Also used : Face(com.google.android.gms.vision.face.Face)
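The bounding-box arithmetic in draw() above is easy to verify in isolation: the detector reports a face's top-left corner plus width/height, so the center is corner + half-size and the box spans center ± half-size. This sketch treats the sample's translateX/translateY and scaleX/scaleY as identity mappings for simplicity; in the app they convert detector coordinates to view coordinates.

```java
// Framework-free sketch of FaceGraphic's box math, with identity
// translation/scaling. With identity mappings, the computed box is simply
// the original face rectangle.
final class FaceBox {
    final float left, top, right, bottom;

    FaceBox(float posX, float posY, float width, float height) {
        float cx = posX + width / 2;    // face center x (translateX omitted)
        float cy = posY + height / 2;   // face center y (translateY omitted)
        float xOffset = width / 2.0f;   // half-width (scaleX omitted)
        float yOffset = height / 2.0f;  // half-height (scaleY omitted)
        left = cx - xOffset;
        top = cy - yOffset;
        right = cx + xOffset;
        bottom = cy + yOffset;
    }

    public static void main(String[] args) {
        FaceBox box = new FaceBox(100f, 50f, 200f, 240f);
        // left=100.0, top=50.0, right=300.0, bottom=290.0
        System.out.println(box.left + " " + box.top + " " + box.right + " " + box.bottom);
    }
}
```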

Example 3 with Face

use of com.google.android.gms.vision.face.Face in project HomeMirror by HannahMitt.

From class MoodModule, method createCameraSource:

/**
 * Creates and starts the camera, with a face detector configured for classification only,
 * so that a smile probability is computed for each detected face.
 */
private void createCameraSource() {
    Context context = mContextWeakReference.get();
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .build();
    detector.setProcessor(new Detector.Processor<Face>() {

        @Override
        public void release() {
        }

        @Override
        public void receiveDetections(final Detector.Detections<Face> detections) {
            final SparseArray<Face> detectedItems = detections.getDetectedItems();
            if (detectedItems.size() != 0) {
                final int key = detectedItems.keyAt(0);
                final Face face = detectedItems.get(key);
                final float isSmilingProbability = face.getIsSmilingProbability();
                String feedback = getFeedbackForSmileProbability(isSmilingProbability);
                mCallBacks.onShouldGivePositiveAffirmation(feedback);
            }
        }
    });
    if (!detector.isOperational()) {
        // Note: The first time that an app using face API is installed on a device, GMS will
        // download a native library to the device in order to do detection.  Usually this
        // completes before the app is run for the first time.  But if that download has not yet
        // completed, then the above call will not detect any faces.
        //
        // isOperational() can be used to check if the required native library is currently
        // available.  The detector will automatically become operational once the library
        // download completes on device.
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }
    try {
        mCameraSource = new CameraSource.Builder(context, detector)
                .setRequestedPreviewSize(640, 480)
                .setFacing(CameraSource.CAMERA_FACING_FRONT)
                .setRequestedFps(30.0f)
                .build();
        mCameraSource.start();
    } catch (IOException | RuntimeException e) {
        Log.e(TAG, "Something went horribly wrong, with your face.", e);
    }
}
Also used : Context(android.content.Context), CameraSource(com.google.android.gms.vision.CameraSource), IOException(java.io.IOException), SparseArray(android.util.SparseArray), Detector(com.google.android.gms.vision.Detector), FaceDetector(com.google.android.gms.vision.face.FaceDetector), Face(com.google.android.gms.vision.face.Face)
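The helper getFeedbackForSmileProbability called in receiveDetections above is not shown in this snippet. A hypothetical, framework-free sketch of such a mapping is below; the thresholds and messages are illustrative assumptions, not HomeMirror's actual values. The one detail taken from the real API is that getIsSmilingProbability() returns a negative sentinel (Face.UNCOMPUTED_PROBABILITY) when classification was not computed.

```java
// Hypothetical sketch: map a smile probability in [0,1] (or a negative
// sentinel for "not computed") to a feedback string. Thresholds and
// messages are made up for illustration.
final class SmileFeedback {
    static String feedbackFor(float smilingProbability) {
        if (smilingProbability < 0f) {
            return "";                            // classification unavailable
        } else if (smilingProbability > 0.8f) {
            return "Love that smile!";            // clearly smiling
        } else if (smilingProbability > 0.3f) {
            return "Looking good today!";         // mild smile
        } else {
            return "Hope your day gets better!";  // not smiling
        }
    }

    public static void main(String[] args) {
        System.out.println(feedbackFor(0.95f));
        System.out.println(feedbackFor(-1f).isEmpty());
    }
}
```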

Example 4 with Face

use of com.google.android.gms.vision.face.Face in project android-vision by googlesamples.

From class FaceGraphic, method draw:

/**
 * Draws the face annotations for position, size, and ID on the supplied canvas.
 */
@Override
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }
    // Draws a circle at the position of the detected face, with the face's track id below.
    float cx = translateX(face.getPosition().x + face.getWidth() / 2);
    float cy = translateY(face.getPosition().y + face.getHeight() / 2);
    canvas.drawCircle(cx, cy, FACE_POSITION_RADIUS, mFacePositionPaint);
    canvas.drawText("id: " + getId(), cx + ID_X_OFFSET, cy + ID_Y_OFFSET, mIdPaint);
    // Draws an oval around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = cx - xOffset;
    float top = cy - yOffset;
    float right = cx + xOffset;
    float bottom = cy + yOffset;
    canvas.drawOval(left, top, right, bottom, mBoxPaint);
}
Also used : Face(com.google.android.gms.vision.face.Face)

Example 5 with Face

use of com.google.android.gms.vision.face.Face in project android-vision by googlesamples.

From class FaceView, method drawFaceAnnotations:

/**
 * Draws a small circle for each detected landmark, centered at the detected landmark position.
 * <p>
 * Note that eye landmarks are defined to be the midpoint between the detected eye corner
 * positions, which tends to place the eye landmarks at the lower eyelid rather than at the
 * pupil position.
 */
private void drawFaceAnnotations(Canvas canvas, double scale) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);
    for (int i = 0; i < mFaces.size(); ++i) {
        Face face = mFaces.valueAt(i);
        for (Landmark landmark : face.getLandmarks()) {
            int cx = (int) (landmark.getPosition().x * scale);
            int cy = (int) (landmark.getPosition().y * scale);
            canvas.drawCircle(cx, cy, 10, paint);
        }
    }
}
Also used : Landmark(com.google.android.gms.vision.face.Landmark), Paint(android.graphics.Paint), Face(com.google.android.gms.vision.face.Face)
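The coordinate handling in drawFaceAnnotations above is just a scale-and-truncate: landmark positions come back in the coordinates of the frame given to the detector, so drawing at a different resolution means multiplying each position by a scale factor and casting to int pixel coordinates. A minimal sketch (the class name is illustrative):

```java
// Framework-free sketch of the landmark scaling in drawFaceAnnotations:
// multiply each coordinate by the scale factor, then truncate toward zero,
// exactly as the (int) casts in the sample do.
final class LandmarkScaler {
    static int[] scalePoint(float x, float y, double scale) {
        int cx = (int) (x * scale);
        int cy = (int) (y * scale);
        return new int[] { cx, cy };
    }

    public static void main(String[] args) {
        int[] p = scalePoint(120.5f, 80.25f, 2.0);
        System.out.println(p[0] + "," + p[1]); // prints 241,160
    }
}
```

Note the truncation: 80.25 × 2.0 = 160.5 becomes 160, so sub-pixel precision is lost; this is fine for centering a 10-pixel circle.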

Aggregations

Face (com.google.android.gms.vision.face.Face): 6
FaceDetector (com.google.android.gms.vision.face.FaceDetector): 3
IntentFilter (android.content.IntentFilter): 2
Detector (com.google.android.gms.vision.Detector): 2
Context (android.content.Context): 1
Bitmap (android.graphics.Bitmap): 1
Paint (android.graphics.Paint): 1
NonNull (android.support.annotation.NonNull): 1
SparseArray (android.util.SparseArray): 1
SafeFaceDetector (com.google.android.gms.samples.vision.face.patch.SafeFaceDetector): 1
CameraSource (com.google.android.gms.vision.CameraSource): 1
Frame (com.google.android.gms.vision.Frame): 1
MultiProcessor (com.google.android.gms.vision.MultiProcessor): 1
Landmark (com.google.android.gms.vision.face.Landmark): 1
IOException (java.io.IOException): 1
InputStream (java.io.InputStream): 1