
Example 1 with ImageMotion2D

use of boofcv.abst.sfm.d2.ImageMotion2D in project BoofCV by lessthanoptimal.

the class FactoryMotion2D method createMotion2D.

/**
 * Estimates the 2D motion of an image using different models.
 *
 * @param ransacIterations Number of RANSAC iterations
 * @param inlierThreshold Threshold which defines an inlier.
 * @param outlierPrune If a feature is an outlier for this many frames in a row it is dropped. Try 2
 * @param absoluteMinimumTracks New features will be respawned if the number of inliers drops below this number.
 * @param respawnTrackFraction If the fraction of current inliers to the original number of inliers drops below
 *                             this fraction then new features are spawned. Try 0.3
 * @param respawnCoverageFraction If the area covered drops by this fraction then spawn more features. Try 0.8
 * @param refineEstimate Should it refine the model estimate using all inliers.
 * @param tracker Point feature tracker.
 * @param motionModel Instance of the motion model used: Affine2D_F64, Homography2D_F64, or Se2_F64
 * @param <I> Image input type.
 * @param <IT> Motion model type.
 * @return ImageMotion2D
 */
public static <I extends ImageBase<I>, IT extends InvertibleTransform> ImageMotion2D<I, IT> createMotion2D(int ransacIterations, double inlierThreshold, int outlierPrune, int absoluteMinimumTracks, double respawnTrackFraction, double respawnCoverageFraction, boolean refineEstimate, PointTracker<I> tracker, IT motionModel) {
    ModelManager<IT> manager;
    ModelGenerator<IT, AssociatedPair> fitter;
    DistanceFromModel<IT, AssociatedPair> distance;
    ModelFitter<IT, AssociatedPair> modelRefiner = null;
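    // Select the model manager, generator, and distance function that match the requested motion model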
    if (motionModel instanceof Homography2D_F64) {
        GenerateHomographyLinear mf = new GenerateHomographyLinear(true);
        manager = (ModelManager) new ModelManagerHomography2D_F64();
        fitter = (ModelGenerator) mf;
        if (refineEstimate)
            modelRefiner = (ModelFitter) mf;
        distance = (DistanceFromModel) new DistanceHomographySq();
    } else if (motionModel instanceof Affine2D_F64) {
        manager = (ModelManager) new ModelManagerAffine2D_F64();
        GenerateAffine2D mf = new GenerateAffine2D();
        fitter = (ModelGenerator) mf;
        if (refineEstimate)
            modelRefiner = (ModelFitter) mf;
        distance = (DistanceFromModel) new DistanceAffine2DSq();
    } else if (motionModel instanceof Se2_F64) {
        manager = (ModelManager) new ModelManagerSe2_F64();
        MotionTransformPoint<Se2_F64, Point2D_F64> alg = new MotionSe2PointSVD_F64();
        GenerateSe2_AssociatedPair mf = new GenerateSe2_AssociatedPair(alg);
        fitter = (ModelGenerator) mf;
        distance = (DistanceFromModel) new DistanceSe2Sq();
        // no refine, already optimal
    } else {
        throw new RuntimeException("Unknown model type: " + motionModel.getClass().getSimpleName());
    }
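    // RANSAC robustly fits the selected model to associated point pairs; the fixed seed makes runs repeatable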
    ModelMatcher<IT, AssociatedPair> modelMatcher = new Ransac(123123, manager, fitter, distance, ransacIterations, inlierThreshold);
    ImageMotionPointTrackerKey<I, IT> lowlevel = new ImageMotionPointTrackerKey<>(tracker, modelMatcher, modelRefiner, motionModel, outlierPrune);
    ImageMotionPtkSmartRespawn<I, IT> smartRespawn = new ImageMotionPtkSmartRespawn<>(lowlevel, absoluteMinimumTracks, respawnTrackFraction, respawnCoverageFraction);
    return new WrapImageMotionPtkSmartRespawn<>(smartRespawn);
}
Also used : Homography2D_F64(georegression.struct.homography.Homography2D_F64) ModelManagerHomography2D_F64(georegression.fitting.homography.ModelManagerHomography2D_F64) Ransac(org.ddogleg.fitting.modelset.ransac.Ransac) MotionSe2PointSVD_F64(georegression.fitting.se.MotionSe2PointSVD_F64) ModelManagerAffine2D_F64(georegression.fitting.affine.ModelManagerAffine2D_F64) AssociatedPair(boofcv.struct.geo.AssociatedPair) WrapImageMotionPtkSmartRespawn(boofcv.abst.sfm.d2.WrapImageMotionPtkSmartRespawn) ModelManagerSe2_F64(georegression.fitting.se.ModelManagerSe2_F64) Se2_F64(georegression.struct.se.Se2_F64) Affine2D_F64(georegression.struct.affine.Affine2D_F64) Point2D_F64(georegression.struct.point.Point2D_F64)
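
The Se2_F64 branch above is not exercised by any of the examples on this page. A minimal sketch of calling the factory with that rigid-body (rotation plus translation) model, reusing the KLT tracker configuration from the examples below; the RANSAC and respawn values here are illustrative assumptions, not tuned settings:

ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
confDetector.threshold = 10;
confDetector.maxFeatures = 300;
confDetector.radius = 2;
// KLT tracker over a 4-level image pyramid, as in the mosaic and stabilization examples
PointTracker<GrayF32> tracker = FactoryPointTracker.klt(new int[] { 1, 2, 4, 8 }, confDetector, 3, GrayF32.class, GrayF32.class);
// rigid-body motion model; parameter values are assumptions for illustration only
ImageMotion2D<GrayF32, Se2_F64> motion2D = FactoryMotion2D.createMotion2D(200, 3, 2, 30, 0.6, 0.5, false, tracker, new Se2_F64());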

Example 2 with ImageMotion2D

use of boofcv.abst.sfm.d2.ImageMotion2D in project BoofCV by lessthanoptimal.

the class ExampleVideoMosaic method main.

public static void main(String[] args) {
    // Configure the feature detector
    ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
    confDetector.threshold = 1;
    confDetector.maxFeatures = 300;
    confDetector.radius = 3;
    // Use a KLT tracker
    PointTracker<GrayF32> tracker = FactoryPointTracker.klt(new int[] { 1, 2, 4, 8 }, confDetector, 3, GrayF32.class, GrayF32.class);
    // This estimates the 2D image motion
    // An Affine2D_F64 model also works quite well.
    ImageMotion2D<GrayF32, Homography2D_F64> motion2D = FactoryMotion2D.createMotion2D(220, 3, 2, 30, 0.6, 0.5, false, tracker, new Homography2D_F64());
    // wrap it so it outputs color images while estimating motion from gray
    ImageMotion2D<Planar<GrayF32>, Homography2D_F64> motion2DColor = new PlToGrayMotion2D<>(motion2D, GrayF32.class);
    // This fuses the images together
    StitchingFromMotion2D<Planar<GrayF32>, Homography2D_F64> stitch = FactoryMotion2D.createVideoStitch(0.5, motion2DColor, ImageType.pl(3, GrayF32.class));
    // Load an image sequence
    MediaManager media = DefaultMediaManager.INSTANCE;
    String fileName = UtilIO.pathExample("mosaic/airplane01.mjpeg");
    SimpleImageSequence<Planar<GrayF32>> video = media.openVideo(fileName, ImageType.pl(3, GrayF32.class));
    Planar<GrayF32> frame = video.next();
    // shrink the input image and center it
    Homography2D_F64 shrink = new Homography2D_F64(0.5, 0, frame.width / 4, 0, 0.5, frame.height / 4, 0, 0, 1);
    shrink = shrink.invert(null);
    // The mosaic will be larger in terms of pixels but the image will be scaled down.
    // To change this into stabilization just make it the same size as the input with no shrink.
    stitch.configure(frame.width, frame.height, shrink);
    // process the first frame
    stitch.process(frame);
    // Create the GUI for displaying the results + input image
    ImageGridPanel gui = new ImageGridPanel(1, 2);
    gui.setImage(0, 0, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.setImage(0, 1, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.setPreferredSize(new Dimension(3 * frame.width, frame.height * 2));
    ShowImages.showWindow(gui, "Example Mosaic", true);
    boolean enlarged = false;
    // process the video sequence one frame at a time
    while (video.hasNext()) {
        frame = video.next();
        if (!stitch.process(frame))
            throw new RuntimeException("You should handle failures");
        // if the current image is close to the image border, recenter the mosaic (see the nearBorder() sketch after this example)
        StitchingFromMotion2D.Corners corners = stitch.getImageCorners(frame.width, frame.height, null);
        if (nearBorder(corners.p0, stitch) || nearBorder(corners.p1, stitch) || nearBorder(corners.p2, stitch) || nearBorder(corners.p3, stitch)) {
            stitch.setOriginToCurrent();
            // only enlarge the image once
            if (!enlarged) {
                enlarged = true;
                // double the image size and shift it over to keep it centered
                int widthOld = stitch.getStitchedImage().width;
                int heightOld = stitch.getStitchedImage().height;
                int widthNew = widthOld * 2;
                int heightNew = heightOld * 2;
                int tranX = (widthNew - widthOld) / 2;
                int tranY = (heightNew - heightOld) / 2;
                Homography2D_F64 newToOldStitch = new Homography2D_F64(1, 0, -tranX, 0, 1, -tranY, 0, 0, 1);
                stitch.resizeStitchImage(widthNew, heightNew, newToOldStitch);
                gui.setImage(0, 1, new BufferedImage(widthNew, heightNew, BufferedImage.TYPE_INT_RGB));
            }
            corners = stitch.getImageCorners(frame.width, frame.height, null);
        }
        // display the mosaic
        ConvertBufferedImage.convertTo(frame, gui.getImage(0, 0), true);
        ConvertBufferedImage.convertTo(stitch.getStitchedImage(), gui.getImage(0, 1), true);
        // draw a red quadrilateral around the current frame in the mosaic
        Graphics2D g2 = gui.getImage(0, 1).createGraphics();
        g2.setColor(Color.RED);
        g2.drawLine((int) corners.p0.x, (int) corners.p0.y, (int) corners.p1.x, (int) corners.p1.y);
        g2.drawLine((int) corners.p1.x, (int) corners.p1.y, (int) corners.p2.x, (int) corners.p2.y);
        g2.drawLine((int) corners.p2.x, (int) corners.p2.y, (int) corners.p3.x, (int) corners.p3.y);
        g2.drawLine((int) corners.p3.x, (int) corners.p3.y, (int) corners.p0.x, (int) corners.p0.y);
        gui.repaint();
        // throttle the speed just in case it's on a fast computer
        BoofMiscOps.pause(50);
    }
}
Also used : StitchingFromMotion2D(boofcv.alg.sfm.d2.StitchingFromMotion2D) PlToGrayMotion2D(boofcv.abst.sfm.d2.PlToGrayMotion2D) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) Homography2D_F64(georegression.struct.homography.Homography2D_F64) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) GrayF32(boofcv.struct.image.GrayF32) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) Planar(boofcv.struct.image.Planar) ImageGridPanel(boofcv.gui.image.ImageGridPanel)
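
The nearBorder() helper called in the loop above is not part of this snippet. A minimal sketch, assuming it only checks whether a corner lies within a small pixel margin of the stitched image's border (the margin value is an assumption):

private static boolean nearBorder(Point2D_F64 p, StitchingFromMotion2D<?, ?> stitch) {
    // margin in pixels; assumed value for illustration
    int r = 10;
    if (p.x < r || p.y < r)
        return true;
    if (p.x >= stitch.getStitchedImage().width - r)
        return true;
    if (p.y >= stitch.getStitchedImage().height - r)
        return true;
    return false;
}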

Example 3 with ImageMotion2D

use of boofcv.abst.sfm.d2.ImageMotion2D in project BoofCV by lessthanoptimal.

the class ExampleVideoStabilization method main.

public static void main(String[] args) {
    // Configure the feature detector
    ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
    confDetector.threshold = 10;
    confDetector.maxFeatures = 300;
    confDetector.radius = 2;
    // Use a KLT tracker
    PointTracker<GrayF32> tracker = FactoryPointTracker.klt(new int[] { 1, 2, 4, 8 }, confDetector, 3, GrayF32.class, GrayF32.class);
    // This estimates the 2D image motion
    // An Affine2D_F64 model also works quite well.
    ImageMotion2D<GrayF32, Homography2D_F64> motion2D = FactoryMotion2D.createMotion2D(200, 3, 2, 30, 0.6, 0.5, false, tracker, new Homography2D_F64());
    // wrap it so it outputs color images while estimating motion from gray
    ImageMotion2D<Planar<GrayF32>, Homography2D_F64> motion2DColor = new PlToGrayMotion2D<>(motion2D, GrayF32.class);
    // This fuses the images together
    StitchingFromMotion2D<Planar<GrayF32>, Homography2D_F64> stabilize = FactoryMotion2D.createVideoStitch(0.5, motion2DColor, ImageType.pl(3, GrayF32.class));
    // Load an image sequence
    MediaManager media = DefaultMediaManager.INSTANCE;
    String fileName = UtilIO.pathExample("shake.mjpeg");
    SimpleImageSequence<Planar<GrayF32>> video = media.openVideo(fileName, ImageType.pl(3, GrayF32.class));
    Planar<GrayF32> frame = video.next();
    // The output image size is the same as the input image size
    stabilize.configure(frame.width, frame.height, null);
    // process the first frame
    stabilize.process(frame);
    // Create the GUI for displaying the results + input image
    ImageGridPanel gui = new ImageGridPanel(1, 2);
    gui.setImage(0, 0, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.setImage(0, 1, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.autoSetPreferredSize();
    ShowImages.showWindow(gui, "Example Stabilization", true);
    // process the video sequence one frame at a time
    while (video.hasNext()) {
        frame = video.next();
        if (!stabilize.process(frame))
            throw new RuntimeException("Don't forget to handle failures!");
        // display the stabilized image
        ConvertBufferedImage.convertTo(frame, gui.getImage(0, 0), true);
        ConvertBufferedImage.convertTo(stabilize.getStitchedImage(), gui.getImage(0, 1), true);
        gui.repaint();
        // throttle the speed just in case it's on a fast computer
        BoofMiscOps.pause(50);
    }
}
Also used : PlToGrayMotion2D(boofcv.abst.sfm.d2.PlToGrayMotion2D) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) Homography2D_F64(georegression.struct.homography.Homography2D_F64) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) GrayF32(boofcv.struct.image.GrayF32) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) Planar(boofcv.struct.image.Planar) ImageGridPanel(boofcv.gui.image.ImageGridPanel)
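
Both video examples simply throw when process() returns false. One possible recovery, sketched under the assumption that StitchingFromMotion2D exposes a reset() that discards the accumulated motion estimate, is to restart stitching from the current frame instead of aborting:

frame = video.next();
if (!stabilize.process(frame)) {
    // assumed API: clear the accumulated transform and re-initialize from this frame
    stabilize.reset();
    stabilize.process(frame);
}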

Example 4 with ImageMotion2D

use of boofcv.abst.sfm.d2.ImageMotion2D in project BoofCV by lessthanoptimal.

the class VideoStitchBaseApp method createAlgorithm.

protected StitchingFromMotion2D createAlgorithm(PointTracker<I> tracker) {
    if (imageType.getFamily() == ImageType.Family.PLANAR) {
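        // planar (color) input: convert to gray for motion estimation, then wrap so color frames can be stitched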
        Class imageClass = this.imageType.getImageClass();
        ImageMotion2D<I, IT> motion = FactoryMotion2D.createMotion2D(maxIterations, inlierThreshold, 2, absoluteMinimumTracks, respawnTrackFraction, respawnCoverageFraction, false, tracker, fitModel);
        ImageMotion2D motion2DColor = new PlToGrayMotion2D(motion, imageClass);
        return FactoryMotion2D.createVideoStitch(maxJumpFraction, motion2DColor, imageType);
    } else {
        ImageMotion2D motion = FactoryMotion2D.createMotion2D(maxIterations, inlierThreshold, 2, absoluteMinimumTracks, respawnTrackFraction, respawnCoverageFraction, false, tracker, fitModel);
        return FactoryMotion2D.createVideoStitch(maxJumpFraction, motion, imageType);
    }
}
Also used : ImageMotion2D(boofcv.abst.sfm.d2.ImageMotion2D) PlToGrayMotion2D(boofcv.abst.sfm.d2.PlToGrayMotion2D)

Aggregations

PlToGrayMotion2D (boofcv.abst.sfm.d2.PlToGrayMotion2D)3 Homography2D_F64 (georegression.struct.homography.Homography2D_F64)3 ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector)2 ImageGridPanel (boofcv.gui.image.ImageGridPanel)2 MediaManager (boofcv.io.MediaManager)2 ConvertBufferedImage (boofcv.io.image.ConvertBufferedImage)2 DefaultMediaManager (boofcv.io.wrapper.DefaultMediaManager)2 GrayF32 (boofcv.struct.image.GrayF32)2 Planar (boofcv.struct.image.Planar)2 BufferedImage (java.awt.image.BufferedImage)2 ImageMotion2D (boofcv.abst.sfm.d2.ImageMotion2D)1 WrapImageMotionPtkSmartRespawn (boofcv.abst.sfm.d2.WrapImageMotionPtkSmartRespawn)1 StitchingFromMotion2D (boofcv.alg.sfm.d2.StitchingFromMotion2D)1 AssociatedPair (boofcv.struct.geo.AssociatedPair)1 ModelManagerAffine2D_F64 (georegression.fitting.affine.ModelManagerAffine2D_F64)1 ModelManagerHomography2D_F64 (georegression.fitting.homography.ModelManagerHomography2D_F64)1 ModelManagerSe2_F64 (georegression.fitting.se.ModelManagerSe2_F64)1 MotionSe2PointSVD_F64 (georegression.fitting.se.MotionSe2PointSVD_F64)1 Affine2D_F64 (georegression.struct.affine.Affine2D_F64)1 Point2D_F64 (georegression.struct.point.Point2D_F64)1