Example 1 with ConfigPointDetector

Use of boofcv.abst.feature.detect.interest.ConfigPointDetector in project BoofCV by lessthanoptimal.

From the class TestMonoOverhead_to_MonocularPlaneVisualOdometry, method createAlgorithm.

protected MonocularPlaneVisualOdometry<GrayU8> createAlgorithm() {
    // Configure the pyramidal KLT tracker
    ConfigPKlt config = new ConfigPKlt();
    config.pyramidLevels = ConfigDiscreteLevels.levels(4);
    config.templateRadius = 3;
    // Seed tracks with Shi-Tomasi corners
    ConfigPointDetector configDetector = new ConfigPointDetector();
    configDetector.type = PointDetectorTypes.SHI_TOMASI;
    configDetector.general.maxFeatures = 600;
    configDetector.general.radius = 3;
    configDetector.general.threshold = 1;
    PointTracker<GrayU8> tracker = FactoryPointTracker.klt(config, configDetector, GrayU8.class, GrayS16.class);
    // size of each overhead map cell in world units
    double cellSize = 0.015;
    // RANSAC inlier tolerance
    double ransacTol = 0.2;
    return FactoryVisualOdometry.monoPlaneOverhead(cellSize, 25, 0.5, ransacTol, 300, 2, 30, 0.5, 0.3, tracker, ImageType.single(GrayU8.class));
}
Also used : ConfigPKlt(boofcv.alg.tracker.klt.ConfigPKlt) GrayU8(boofcv.struct.image.GrayU8) ConfigPointDetector(boofcv.abst.feature.detect.interest.ConfigPointDetector)
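
The returned algorithm is driven one frame at a time. Below is a minimal usage sketch, assuming the standard MonocularPlaneVisualOdometry interface (setCalibration taking a MonoPlaneParameters, process, getCameraToWorld); the calibration object param and the video sequence are hypothetical stand-ins:

MonocularPlaneVisualOdometry<GrayU8> alg = createAlgorithm();
// param (hypothetical) bundles the camera intrinsics with the plane-to-camera transform
alg.setCalibration(param);
while (video.hasNext()) {
    GrayU8 frame = video.next();
    // process() returns false on a fault, e.g. when too few tracks survive
    if (!alg.process(frame))
        continue;
    // estimated camera pose in the world frame
    Se3_F64 cameraToWorld = alg.getCameraToWorld();
    System.out.println("location = " + cameraToWorld.getT());
}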

Example 2 with ConfigPointDetector

Use of boofcv.abst.feature.detect.interest.ConfigPointDetector in project BoofCV by lessthanoptimal.

From the class TestFactoryDetectPoint, method checkBlowUp.

/**
 * Go through every detector type and see if any of them blow up
 */
@Test
void checkBlowUp() {
    var config = new ConfigPointDetector();
    for (PointDetectorTypes type : PointDetectorTypes.values()) {
        config.type = type;
        // a null derivative type tells the factory to pick the default for each image type
        FactoryDetectPoint.create(config, GrayU8.class, null);
        FactoryDetectPoint.create(config, GrayF32.class, null);
    }
}
Also used : PointDetectorTypes(boofcv.abst.feature.detect.interest.PointDetectorTypes) ConfigPointDetector(boofcv.abst.feature.detect.interest.ConfigPointDetector) Test(org.junit.jupiter.api.Test)
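
For context, a detector created this way can also be run directly. A sketch under the assumption that FactoryDetectPoint returns a GeneralFeatureDetector whose process() takes the image plus precomputed derivatives; Shi-Tomasi only needs the first-order gradient, so the second-derivative arguments are passed as null:

var config = new ConfigPointDetector();
config.type = PointDetectorTypes.SHI_TOMASI;
GeneralFeatureDetector<GrayU8, GrayS16> detector = FactoryDetectPoint.create(config, GrayU8.class, GrayS16.class);
// synthetic test input; any GrayU8 image works here
GrayU8 image = new GrayU8(640, 480);
ImageMiscOps.fillUniform(image, new Random(234), 0, 255);
// precompute the gradient the detector expects
GrayS16 derivX = new GrayS16(image.width, image.height);
GrayS16 derivY = new GrayS16(image.width, image.height);
GImageDerivativeOps.gradient(DerivativeType.SOBEL, image, derivX, derivY, BorderType.EXTENDED);
detector.process(image, derivX, derivY, null, null, null);
QueueCorner found = detector.getMaximums();
System.out.println("detected " + found.size + " corners");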

Example 3 with ConfigPointDetector

Use of boofcv.abst.feature.detect.interest.ConfigPointDetector in project BoofCV by lessthanoptimal.

From the class ExampleBackgroundRemovalMoving, method main.

public static void main(String[] args) {
    // Example with a moving camera; highlights why motion estimation is sometimes required
    String fileName = UtilIO.pathExample("tracking/chipmunk.mjpeg");
    // The camera has a bit of jitter. A static model sort of works, but modeling the motion reduces false positives
    // String fileName = UtilIO.pathExample("background/horse_jitter.mp4");
    // Comment/Uncomment to switch input image type
    ImageType imageType = ImageType.single(GrayF32.class);
    // ImageType imageType = ImageType.il(3, InterleavedF32.class);
    // ImageType imageType = ImageType.il(3, InterleavedU8.class);
    // Configure the feature detector
    ConfigPointDetector configDetector = new ConfigPointDetector();
    configDetector.type = PointDetectorTypes.SHI_TOMASI;
    configDetector.general.maxFeatures = 300;
    configDetector.general.radius = 6;
    configDetector.general.threshold = 10;
    // Use a KLT tracker
    PointTracker tracker = FactoryPointTracker.klt(4, configDetector, 3, GrayF32.class, null);
    // This estimates the 2D image motion
    ImageMotion2D<GrayF32, Homography2D_F64> motion2D = FactoryMotion2D.createMotion2D(500, 0.5, 3, 100, 0.6, 0.5, false, tracker, new Homography2D_F64());
    ConfigBackgroundBasic configBasic = new ConfigBackgroundBasic(30, 0.005f);
    // Configuration for the Gaussian model. Note that the threshold changes with the number of image bands:
    // 12 for gray scale and 40 for color
    ConfigBackgroundGaussian configGaussian = new ConfigBackgroundGaussian(12, 0.001f);
    configGaussian.initialVariance = 64;
    configGaussian.minimumDifference = 5;
    // Note that GMM doesn't interpolate the input image, making it harder to model object edges.
    // However, it runs faster because of this.
    ConfigBackgroundGmm configGmm = new ConfigBackgroundGmm();
    configGmm.initialVariance = 1600;
    configGmm.significantWeight = 1e-1f;
    // Comment/Uncomment to switch background mode
    BackgroundModelMoving background = FactoryBackgroundModel.movingBasic(configBasic, new PointTransformHomography_F32(), imageType);
    // FactoryBackgroundModel.movingGaussian(configGaussian, new PointTransformHomography_F32(), imageType);
    // FactoryBackgroundModel.movingGmm(configGmm,new PointTransformHomography_F32(), imageType);
    background.setUnknownValue(1);
    MediaManager media = DefaultMediaManager.INSTANCE;
    SimpleImageSequence video = media.openVideo(fileName, background.getImageType());
    // media.openCamera(null,640,480,background.getImageType());
    // ====== Initialize Images
    // storage for segmented image. Background = 0, Foreground = 1
    GrayU8 segmented = new GrayU8(video.getWidth(), video.getHeight());
    // Grey scale image that's the input for motion estimation
    GrayF32 grey = new GrayF32(segmented.width, segmented.height);
    // coordinate frames
    Homography2D_F32 firstToCurrent32 = new Homography2D_F32();
    Homography2D_F32 homeToWorld = new Homography2D_F32();
    homeToWorld.a13 = grey.width / 2;
    homeToWorld.a23 = grey.height / 2;
    // Create a background image twice the size of the input image. Tell it that the home is in the center
    background.initialize(grey.width * 2, grey.height * 2, homeToWorld);
    BufferedImage visualized = new BufferedImage(segmented.width, segmented.height, BufferedImage.TYPE_INT_RGB);
    ImageGridPanel gui = new ImageGridPanel(1, 2);
    gui.setImages(visualized, visualized);
    ShowImages.showWindow(gui, "Detections", true);
    double fps = 0;
    // smoothing factor for FPS
    double alpha = 0.01;
    while (video.hasNext()) {
        ImageBase input = video.next();
        long before = System.nanoTime();
        GConvertImage.convert(input, grey);
        if (!motion2D.process(grey)) {
            throw new RuntimeException("Should handle this scenario");
        }
        Homography2D_F64 firstToCurrent64 = motion2D.getFirstToCurrent();
        ConvertMatrixData.convert(firstToCurrent64, firstToCurrent32);
        background.segment(firstToCurrent32, input, segmented);
        background.updateBackground(firstToCurrent32, input);
        long after = System.nanoTime();
        fps = (1.0 - alpha) * fps + alpha * (1.0 / ((after - before) / 1e9));
        VisualizeBinaryData.renderBinary(segmented, false, visualized);
        gui.setImage(0, 0, (BufferedImage) video.getGuiImage());
        gui.setImage(0, 1, visualized);
        gui.repaint();
        System.out.println("FPS = " + fps);
        BoofMiscOps.sleep(5);
    }
}
Also used : ConfigBackgroundBasic(boofcv.factory.background.ConfigBackgroundBasic) BackgroundModelMoving(boofcv.alg.background.BackgroundModelMoving) SimpleImageSequence(boofcv.io.image.SimpleImageSequence) PointTransformHomography_F32(boofcv.alg.distort.PointTransformHomography_F32) Homography2D_F32(georegression.struct.homography.Homography2D_F32) Homography2D_F64(georegression.struct.homography.Homography2D_F64) BufferedImage(java.awt.image.BufferedImage) ImageType(boofcv.struct.image.ImageType) ConfigPointDetector(boofcv.abst.feature.detect.interest.ConfigPointDetector) ConfigBackgroundGaussian(boofcv.factory.background.ConfigBackgroundGaussian) GrayF32(boofcv.struct.image.GrayF32) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) ConfigBackgroundGmm(boofcv.factory.background.ConfigBackgroundGmm) GrayU8(boofcv.struct.image.GrayU8) ImageGridPanel(boofcv.gui.image.ImageGridPanel) PointTracker(boofcv.abst.tracker.PointTracker) FactoryPointTracker(boofcv.factory.tracker.FactoryPointTracker) ImageBase(boofcv.struct.image.ImageBase)
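
When the camera really is static, the motion-estimation machinery above can be dropped entirely. A sketch of the stationary counterpart, assuming FactoryBackgroundModel.stationaryBasic and BackgroundModelStationary's segment/updateBackground pair (the video sequence is a hypothetical stand-in):

BackgroundModelStationary<GrayF32> background =
        FactoryBackgroundModel.stationaryBasic(new ConfigBackgroundBasic(30, 0.005f), ImageType.single(GrayF32.class));
GrayU8 segmented = new GrayU8(video.getWidth(), video.getHeight());
while (video.hasNext()) {
    GrayF32 frame = video.next();
    // classify each pixel against the model: background = 0, foreground = 1
    background.segment(frame, segmented);
    // no homography needed; pixels line up from frame to frame
    background.updateBackground(frame);
}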

Example 4 with ConfigPointDetector

Use of boofcv.abst.feature.detect.interest.ConfigPointDetector in project BoofCV by lessthanoptimal.

From the class ExampleTrackingKlt, method main.

public static void main(String[] args) {
    // tune the tracker for the image size and visual appearance
    ConfigPointDetector configDetector = new ConfigPointDetector();
    configDetector.type = PointDetectorTypes.SHI_TOMASI;
    configDetector.general.radius = 8;
    configDetector.general.threshold = 1;
    ConfigPKlt configKlt = new ConfigPKlt(3);
    PointTracker<GrayF32> tracker = FactoryPointTracker.klt(configKlt, configDetector, GrayF32.class, null);
    // Open a webcam at a resolution close to 640x480
    Webcam webcam = UtilWebcamCapture.openDefault(640, 480);
    // Create the panel used to display the image and feature tracks
    ImagePanel gui = new ImagePanel();
    gui.setPreferredSize(webcam.getViewSize());
    ShowImages.showWindow(gui, "KLT Tracker", true);
    int minimumTracks = 100;
    while (true) {
        BufferedImage image = webcam.getImage();
        GrayF32 gray = ConvertBufferedImage.convertFrom(image, (GrayF32) null);
        tracker.process(gray);
        List<PointTrack> tracks = tracker.getActiveTracks(null);
        // Spawn tracks if there are too few
        if (tracks.size() < minimumTracks) {
            tracker.spawnTracks();
            tracks = tracker.getActiveTracks(null);
            minimumTracks = tracks.size() / 2;
        }
        // Draw the tracks
        Graphics2D g2 = image.createGraphics();
        for (PointTrack t : tracks) {
            VisualizeFeatures.drawPoint(g2, (int) t.pixel.x, (int) t.pixel.y, Color.RED);
        }
        gui.setImageUI(image);
    }
}
Also used : GrayF32(boofcv.struct.image.GrayF32) PointTrack(boofcv.abst.tracker.PointTrack) Webcam(com.github.sarxos.webcam.Webcam) ConfigPKlt(boofcv.alg.tracker.klt.ConfigPKlt) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) ConfigPointDetector(boofcv.abst.feature.detect.interest.ConfigPointDetector) ImagePanel(boofcv.gui.image.ImagePanel)
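
Newly spawned tracks can be drawn differently from mature ones. A small sketch, assuming PointTracker's getNewTracks accessor; it would replace the red-only drawing loop above:

// tracks spawned this frame are reported by getNewTracks()
List<PointTrack> spawned = tracker.getNewTracks(null);
Graphics2D g2 = image.createGraphics();
for (PointTrack t : tracks) {
    // mature tracks in red, freshly spawned tracks in green
    Color c = spawned.contains(t) ? Color.GREEN : Color.RED;
    VisualizeFeatures.drawPoint(g2, (int) t.pixel.x, (int) t.pixel.y, c);
}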

Example 5 with ConfigPointDetector

Use of boofcv.abst.feature.detect.interest.ConfigPointDetector in project BoofCV by lessthanoptimal.

From the class ExampleVideoMosaic, method main.

public static void main(String[] args) {
    // Configure the feature detector
    ConfigPointDetector configDetector = new ConfigPointDetector();
    configDetector.type = PointDetectorTypes.SHI_TOMASI;
    configDetector.general.maxFeatures = 300;
    configDetector.general.radius = 3;
    configDetector.general.threshold = 1;
    // Use a KLT tracker
    PointTracker<GrayF32> tracker = FactoryPointTracker.klt(4, configDetector, 3, GrayF32.class, GrayF32.class);
    // This estimates the 2D image motion
    // An Affine2D_F64 model also works quite well.
    ImageMotion2D<GrayF32, Homography2D_F64> motion2D = FactoryMotion2D.createMotion2D(220, 3, 2, 30, 0.6, 0.5, false, tracker, new Homography2D_F64());
    // wrap it so it outputs color images while estimating motion from a gray image
    ImageMotion2D<Planar<GrayF32>, Homography2D_F64> motion2DColor = new PlToGrayMotion2D<>(motion2D, GrayF32.class);
    // This fuses the images together
    StitchingFromMotion2D<Planar<GrayF32>, Homography2D_F64> stitch = FactoryMotion2D.createVideoStitch(0.5, motion2DColor, ImageType.pl(3, GrayF32.class));
    // Load an image sequence
    MediaManager media = DefaultMediaManager.INSTANCE;
    String fileName = UtilIO.pathExample("mosaic/airplane01.mjpeg");
    SimpleImageSequence<Planar<GrayF32>> video = media.openVideo(fileName, ImageType.pl(3, GrayF32.class));
    Planar<GrayF32> frame = video.next();
    // shrink the input image and center it
    Homography2D_F64 shrink = new Homography2D_F64(0.5, 0, frame.width / 4, 0, 0.5, frame.height / 4, 0, 0, 1);
    shrink = shrink.invert(null);
    // The mosaic will be larger in terms of pixels but the image will be scaled down.
    // To change this into stabilization just make it the same size as the input with no shrink.
    stitch.configure(frame.width, frame.height, shrink);
    // process the first frame
    stitch.process(frame);
    // Create the GUI for displaying the results + input image
    ImageGridPanel gui = new ImageGridPanel(1, 2);
    gui.setImage(0, 0, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.setImage(0, 1, new BufferedImage(frame.width, frame.height, BufferedImage.TYPE_INT_RGB));
    gui.setPreferredSize(new Dimension(3 * frame.width, frame.height * 2));
    ShowImages.showWindow(gui, "Example Mosaic", true);
    boolean enlarged = false;
    // process the video sequence one frame at a time
    while (video.hasNext()) {
        frame = video.next();
        if (!stitch.process(frame))
            throw new RuntimeException("You should handle failures");
        // if the current image is close to the image border recenter the mosaic
        Quadrilateral_F64 corners = stitch.getImageCorners(frame.width, frame.height, null);
        if (nearBorder(corners.a, stitch) || nearBorder(corners.b, stitch) || nearBorder(corners.c, stitch) || nearBorder(corners.d, stitch)) {
            stitch.setOriginToCurrent();
            // only enlarge the image once
            if (!enlarged) {
                enlarged = true;
                // double the image size and shift it over to keep it centered
                int widthOld = stitch.getStitchedImage().width;
                int heightOld = stitch.getStitchedImage().height;
                int widthNew = widthOld * 2;
                int heightNew = heightOld * 2;
                int tranX = (widthNew - widthOld) / 2;
                int tranY = (heightNew - heightOld) / 2;
                Homography2D_F64 newToOldStitch = new Homography2D_F64(1, 0, -tranX, 0, 1, -tranY, 0, 0, 1);
                stitch.resizeStitchImage(widthNew, heightNew, newToOldStitch);
                gui.setImage(0, 1, new BufferedImage(widthNew, heightNew, BufferedImage.TYPE_INT_RGB));
            }
            corners = stitch.getImageCorners(frame.width, frame.height, null);
        }
        // display the mosaic
        ConvertBufferedImage.convertTo(frame, gui.getImage(0, 0), true);
        ConvertBufferedImage.convertTo(stitch.getStitchedImage(), gui.getImage(0, 1), true);
        // draw a red quadrilateral around the current frame in the mosaic
        Graphics2D g2 = gui.getImage(0, 1).createGraphics();
        g2.setColor(Color.RED);
        g2.drawLine((int) corners.a.x, (int) corners.a.y, (int) corners.b.x, (int) corners.b.y);
        g2.drawLine((int) corners.b.x, (int) corners.b.y, (int) corners.c.x, (int) corners.c.y);
        g2.drawLine((int) corners.c.x, (int) corners.c.y, (int) corners.d.x, (int) corners.d.y);
        g2.drawLine((int) corners.d.x, (int) corners.d.y, (int) corners.a.x, (int) corners.a.y);
        gui.repaint();
        // throttle the speed just in case it's on a fast computer
        BoofMiscOps.pause(50);
    }
}
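
The nearBorder helper called above is not shown in this snippet. A sketch consistent with its call sites, flagging a corner that falls within a fixed margin of the stitched image's border (the 10 pixel margin is an assumption):

private static boolean nearBorder( Point2D_F64 p, StitchingFromMotion2D<?, ?> stitch ) {
    // margin in pixels; the exact value is an assumption
    int r = 10;
    if (p.x < r || p.y < r)
        return true;
    if (p.x >= stitch.getStitchedImage().width - r)
        return true;
    if (p.y >= stitch.getStitchedImage().height - r)
        return true;
    return false;
}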
Also used : PlToGrayMotion2D(boofcv.abst.sfm.d2.PlToGrayMotion2D) Quadrilateral_F64(georegression.struct.shapes.Quadrilateral_F64) Homography2D_F64(georegression.struct.homography.Homography2D_F64) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) ConfigPointDetector(boofcv.abst.feature.detect.interest.ConfigPointDetector) GrayF32(boofcv.struct.image.GrayF32) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) Planar(boofcv.struct.image.Planar) ImageGridPanel(boofcv.gui.image.ImageGridPanel)

Aggregations

ConfigPointDetector (boofcv.abst.feature.detect.interest.ConfigPointDetector): 8
GrayF32 (boofcv.struct.image.GrayF32): 4
BufferedImage (java.awt.image.BufferedImage): 4
ConfigPKlt (boofcv.alg.tracker.klt.ConfigPKlt): 3
ImageGridPanel (boofcv.gui.image.ImageGridPanel): 3
MediaManager (boofcv.io.MediaManager): 3
ConvertBufferedImage (boofcv.io.image.ConvertBufferedImage): 3
DefaultMediaManager (boofcv.io.wrapper.DefaultMediaManager): 3
Homography2D_F64 (georegression.struct.homography.Homography2D_F64): 3
PlToGrayMotion2D (boofcv.abst.sfm.d2.PlToGrayMotion2D): 2
PointTracker (boofcv.abst.tracker.PointTracker): 2
FactoryPointTracker (boofcv.factory.tracker.FactoryPointTracker): 2
GrayU8 (boofcv.struct.image.GrayU8): 2
Planar (boofcv.struct.image.Planar): 2
PointDetectorTypes (boofcv.abst.feature.detect.interest.PointDetectorTypes): 1
PointTrack (boofcv.abst.tracker.PointTrack): 1
BackgroundModelMoving (boofcv.alg.background.BackgroundModelMoving): 1
PointTransformHomography_F32 (boofcv.alg.distort.PointTransformHomography_F32): 1
ConfigBackgroundBasic (boofcv.factory.background.ConfigBackgroundBasic): 1
ConfigBackgroundGaussian (boofcv.factory.background.ConfigBackgroundGaussian): 1