
Example 1 with PointTracker

Use of boofcv.abst.feature.tracker.PointTracker in the project BoofCV by lessthanoptimal.

From the class ExampleBackgroundRemovalMoving, method main.

public static void main(String[] args) {
    // Example with a moving camera.  Highlights why motion estimation is sometimes required
    String fileName = UtilIO.pathExample("tracking/chipmunk.mjpeg");
    // Camera has a bit of jitter in it.  Static kinda works but motion reduces false positives
    // String fileName = UtilIO.pathExample("background/horse_jitter.mp4");
    // Comment/Uncomment to switch input image type
    ImageType imageType = ImageType.single(GrayF32.class);
    // ImageType imageType = ImageType.il(3, InterleavedF32.class);
    // ImageType imageType = ImageType.il(3, InterleavedU8.class);
    // Configure the feature detector
    ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
    confDetector.threshold = 10;
    confDetector.maxFeatures = 300;
    confDetector.radius = 6;
    // Use a KLT tracker
    PointTracker tracker = FactoryPointTracker.klt(new int[] { 1, 2, 4, 8 }, confDetector, 3, GrayF32.class, null);
    // This estimates the 2D image motion
    ImageMotion2D<GrayF32, Homography2D_F64> motion2D = FactoryMotion2D.createMotion2D(500, 0.5, 3, 100, 0.6, 0.5, false, tracker, new Homography2D_F64());
    ConfigBackgroundBasic configBasic = new ConfigBackgroundBasic(30, 0.005f);
    // Configuration for Gaussian model.  Note that the threshold changes depending on the number of image bands
    // 12 = gray scale and 40 = color
    ConfigBackgroundGaussian configGaussian = new ConfigBackgroundGaussian(12, 0.001f);
    configGaussian.initialVariance = 64;
    configGaussian.minimumDifference = 5;
    // Note that GMM doesn't interpolate the input image, which makes it harder to model object edges.
    // However, it runs faster because of this.
    ConfigBackgroundGmm configGmm = new ConfigBackgroundGmm();
    configGmm.initialVariance = 1600;
    configGmm.significantWeight = 1e-1f;
    // Comment/Uncomment to switch background mode
    BackgroundModelMoving background = FactoryBackgroundModel.movingBasic(configBasic, new PointTransformHomography_F32(), imageType);
    // FactoryBackgroundModel.movingGaussian(configGaussian, new PointTransformHomography_F32(), imageType);
    // FactoryBackgroundModel.movingGmm(configGmm,new PointTransformHomography_F32(), imageType);
    // Pixels with no background model are labeled as 1 (foreground)
    background.setUnknownValue(1);
    MediaManager media = DefaultMediaManager.INSTANCE;
    SimpleImageSequence video = media.openVideo(fileName, background.getImageType());
    // media.openCamera(null,640,480,background.getImageType());
    // ====== Initialize Images
    // storage for segmented image.  Background = 0, Foreground = 1
    GrayU8 segmented = new GrayU8(video.getNextWidth(), video.getNextHeight());
    // Grey scale image that's the input for motion estimation
    GrayF32 grey = new GrayF32(segmented.width, segmented.height);
    // coordinate frames
    Homography2D_F32 firstToCurrent32 = new Homography2D_F32();
    Homography2D_F32 homeToWorld = new Homography2D_F32();
    homeToWorld.a13 = grey.width / 2;
    homeToWorld.a23 = grey.height / 2;
    // Create a background image twice the size of the input image.  Tell it that the home is in the center
    background.initialize(grey.width * 2, grey.height * 2, homeToWorld);
    BufferedImage visualized = new BufferedImage(segmented.width, segmented.height, BufferedImage.TYPE_INT_RGB);
    ImageGridPanel gui = new ImageGridPanel(1, 2);
    gui.setImages(visualized, visualized);
    ShowImages.showWindow(gui, "Detections", true);
    double fps = 0;
    // smoothing factor for FPS
    double alpha = 0.01;
    while (video.hasNext()) {
        ImageBase input = video.next();
        long before = System.nanoTime();
        GConvertImage.convert(input, grey);
        if (!motion2D.process(grey)) {
            throw new RuntimeException("Should handle this scenario");
        }
        Homography2D_F64 firstToCurrent64 = motion2D.getFirstToCurrent();
        ConvertMatrixData.convert(firstToCurrent64, firstToCurrent32);
        background.segment(firstToCurrent32, input, segmented);
        background.updateBackground(firstToCurrent32, input);
        long after = System.nanoTime();
        fps = (1.0 - alpha) * fps + alpha * (1.0 / ((after - before) / 1e9));
        VisualizeBinaryData.renderBinary(segmented, false, visualized);
        gui.setImage(0, 0, (BufferedImage) video.getGuiImage());
        gui.setImage(0, 1, visualized);
        gui.repaint();
        System.out.println("FPS = " + fps);
        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            // ignored: the sleep only throttles the display loop
        }
    }
}
Also used:
ConfigBackgroundBasic (boofcv.factory.background.ConfigBackgroundBasic)
BackgroundModelMoving (boofcv.alg.background.BackgroundModelMoving)
SimpleImageSequence (boofcv.io.image.SimpleImageSequence)
PointTransformHomography_F32 (boofcv.alg.distort.PointTransformHomography_F32)
ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector)
Homography2D_F32 (georegression.struct.homography.Homography2D_F32)
Homography2D_F64 (georegression.struct.homography.Homography2D_F64)
BufferedImage (java.awt.image.BufferedImage)
ImageType (boofcv.struct.image.ImageType)
ConfigBackgroundGaussian (boofcv.factory.background.ConfigBackgroundGaussian)
GrayF32 (boofcv.struct.image.GrayF32)
MediaManager (boofcv.io.MediaManager)
DefaultMediaManager (boofcv.io.wrapper.DefaultMediaManager)
ConfigBackgroundGmm (boofcv.factory.background.ConfigBackgroundGmm)
GrayU8 (boofcv.struct.image.GrayU8)
ImageGridPanel (boofcv.gui.image.ImageGridPanel)
PointTracker (boofcv.abst.feature.tracker.PointTracker)
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker)
ImageBase (boofcv.struct.image.ImageBase)
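
Example 1 only exercises the tracker indirectly, through ImageMotion2D. For reference, below is a minimal sketch of driving the same KLT PointTracker by hand; the class name SketchDirectPointTracker and the frames iterable are placeholders, while process, spawnTracks, and getActiveTracks are standard PointTracker interface methods.

import boofcv.abst.feature.detect.interest.ConfigGeneralDetector;
import boofcv.abst.feature.tracker.PointTrack;
import boofcv.abst.feature.tracker.PointTracker;
import boofcv.factory.feature.tracker.FactoryPointTracker;
import boofcv.struct.image.GrayF32;
import java.util.ArrayList;
import java.util.List;

public class SketchDirectPointTracker {
    public static void trackSequence(Iterable<GrayF32> frames) {
        // Same detector settings as the example above
        ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
        confDetector.threshold = 10;
        confDetector.maxFeatures = 300;
        confDetector.radius = 6;
        // Pyramidal KLT tracker with a 4-level pyramid, as in the example
        PointTracker<GrayF32> tracker =
                FactoryPointTracker.klt(new int[] { 1, 2, 4, 8 }, confDetector, 3, GrayF32.class, null);
        List<PointTrack> active = new ArrayList<>();
        for (GrayF32 frame : frames) {
            // Update existing tracks against the new frame
            tracker.process(frame);
            // Detect new features to replace tracks that were dropped
            tracker.spawnTracks();
            // Collect the tracks that were successfully followed into this frame
            active.clear();
            tracker.getActiveTracks(active);
            System.out.println("active tracks: " + active.size());
        }
    }
}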

Example 2 with PointTracker

Use of boofcv.abst.feature.tracker.PointTracker in the project BoofCV by lessthanoptimal.

From the class VisualizeStereoVisualOdometryApp, method createStereoDepth.

private StereoVisualOdometry<I> createStereoDepth(int whichAlg) {
    Class derivType = GImageDerivativeOps.getDerivativeType(imageType);
    StereoDisparitySparse<I> disparity = FactoryStereoDisparity.regionSparseWta(2, 150, 3, 3, 30, -1, true, imageType);
    PkltConfig kltConfig = new PkltConfig();
    kltConfig.templateRadius = 3;
    kltConfig.pyramidScaling = new int[] { 1, 2, 4, 8 };
    if (whichAlg == 0) {
        // 0: pyramidal KLT point tracker
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTrackerTwoPass<I> tracker = FactoryPointTrackerTwoPass.klt(kltConfig, configDetector, imageType, derivType);
        return FactoryVisualOdometry.stereoDepth(1.5, 120, 2, 200, 50, false, disparity, tracker, imageType);
    } else if (whichAlg == 1) {
        // 1: Shi-Tomasi corners with BRIEF descriptors, associated frame to frame (DDA tracker)
        ConfigGeneralDetector configExtract = new ConfigGeneralDetector(600, 3, 1);
        GeneralFeatureDetector detector = FactoryPointTracker.createShiTomasi(configExtract, derivType);
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.brief(null, imageType);
        ScoreAssociateHamming_B score = new ScoreAssociateHamming_B();
        AssociateDescription2D<TupleDesc_B> associate = new AssociateDescTo2D<>(FactoryAssociation.greedy(score, 150, true));
        PointTrackerTwoPass tracker = FactoryPointTrackerTwoPass.dda(detector, describe, associate, null, 1, imageType);
        return FactoryVisualOdometry.stereoDepth(1.5, 80, 3, 200, 50, false, disparity, tracker, imageType);
    } else if (whichAlg == 2) {
        // 2: combined KLT + SURF tracker, adapted to the two-pass tracker interface
        PointTracker<I> tracker = FactoryPointTracker.combined_ST_SURF_KLT(new ConfigGeneralDetector(600, 3, 0), kltConfig, 50, null, null, imageType, derivType);
        PointTrackerTwoPass<I> twopass = new PointTrackerToTwoPass<>(tracker);
        return FactoryVisualOdometry.stereoDepth(1.5, 80, 3, 200, 50, false, disparity, twopass, imageType);
    } else if (whichAlg == 3) {
        // 3: independent KLT trackers in each camera, matched with SURF descriptors
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTracker<I> trackerLeft = FactoryPointTracker.klt(kltConfig, configDetector, imageType, derivType);
        PointTracker<I> trackerRight = FactoryPointTracker.klt(kltConfig, configDetector, imageType, derivType);
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.surfFast(null, imageType);
        return FactoryVisualOdometry.stereoDualTrackerPnP(90, 2, 1.5, 1.5, 200, 50, trackerLeft, trackerRight, describe, imageType);
    } else if (whichAlg == 4) {
        // 4: quad PnP from a multi-scale Shi-Tomasi detector with SURF descriptors
        // GeneralFeatureIntensity intensity =
        // FactoryIntensityPoint.hessian(HessianBlobIntensity.Type.TRACE,defaultType);
        GeneralFeatureIntensity intensity = FactoryIntensityPoint.shiTomasi(1, false, imageType);
        NonMaxSuppression nonmax = FactoryFeatureExtractor.nonmax(new ConfigExtract(2, 50, 0, true, false, true));
        GeneralFeatureDetector general = new GeneralFeatureDetector(intensity, nonmax);
        general.setMaxFeatures(600);
        DetectorInterestPointMulti detector = new GeneralToInterestMulti(general, 2, imageType, derivType);
        // DescribeRegionPoint describe = FactoryDescribeRegionPoint.brief(new ConfigBrief(true),defaultType);
        // DescribeRegionPoint describe = FactoryDescribeRegionPoint.pixelNCC(5,5,defaultType);
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.surfFast(null, imageType);
        DetectDescribeMulti detDescMulti = new DetectDescribeMultiFusion(detector, null, describe);
        return FactoryVisualOdometry.stereoQuadPnP(1.5, 0.5, 75, Double.MAX_VALUE, 300, 50, detDescMulti, imageType);
    } else {
        throw new RuntimeException("Unknown selection");
    }
}
Also used:
NonMaxSuppression (boofcv.abst.feature.detect.extract.NonMaxSuppression)
GeneralToInterestMulti (boofcv.abst.feature.detect.interest.GeneralToInterestMulti)
PkltConfig (boofcv.alg.tracker.klt.PkltConfig)
FactoryDescribeRegionPoint (boofcv.factory.feature.describe.FactoryDescribeRegionPoint)
DescribeRegionPoint (boofcv.abst.feature.describe.DescribeRegionPoint)
ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector)
DetectDescribeMultiFusion (boofcv.abst.feature.detdesc.DetectDescribeMultiFusion)
DetectorInterestPointMulti (boofcv.abst.feature.detect.interest.DetectorInterestPointMulti)
ScoreAssociateHamming_B (boofcv.abst.feature.associate.ScoreAssociateHamming_B)
FactoryPointTrackerTwoPass (boofcv.factory.feature.tracker.FactoryPointTrackerTwoPass)
PointTrackerTwoPass (boofcv.abst.feature.tracker.PointTrackerTwoPass)
ConfigExtract (boofcv.abst.feature.detect.extract.ConfigExtract)
DetectDescribeMulti (boofcv.abst.feature.detdesc.DetectDescribeMulti)
GeneralFeatureDetector (boofcv.alg.feature.detect.interest.GeneralFeatureDetector)
PointTrackerToTwoPass (boofcv.abst.feature.tracker.PointTrackerToTwoPass)
AssociateDescription2D (boofcv.abst.feature.associate.AssociateDescription2D)
PointTracker (boofcv.abst.feature.tracker.PointTracker)
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker)
GeneralFeatureIntensity (boofcv.abst.feature.detect.intensity.GeneralFeatureIntensity)
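
createStereoDepth only constructs the estimator; running it follows the usual BoofCV pattern of setCalibration, then process per frame pair. Below is a minimal sketch assuming pre-rectified GrayU8 inputs; the class name SketchStereoVoLoop and the stereoFrames iterable are placeholders, while setCalibration, process, and getCameraToWorld belong to the StereoVisualOdometry interface.

import boofcv.abst.sfm.d3.StereoVisualOdometry;
import boofcv.struct.calib.StereoParameters;
import boofcv.struct.image.GrayU8;
import georegression.struct.se.Se3_F64;

public class SketchStereoVoLoop {
    // Drives a configured StereoVisualOdometry over rectified stereo pairs
    public static void run(StereoVisualOdometry<GrayU8> visualOdometry,
                           StereoParameters calibration,
                           Iterable<GrayU8[]> stereoFrames) {
        visualOdometry.setCalibration(calibration);
        for (GrayU8[] pair : stereoFrames) {
            // process() returns false when motion could not be estimated for this pair
            if (!visualOdometry.process(pair[0], pair[1])) {
                System.out.println("VO failed; pose was not updated this frame");
                continue;
            }
            // Pose of the left camera in the world frame
            Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
            System.out.printf("x=%.2f y=%.2f z=%.2f%n",
                    leftToWorld.T.x, leftToWorld.T.y, leftToWorld.T.z);
        }
    }
}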

Example 3 with PointTracker

Use of boofcv.abst.feature.tracker.PointTracker in the project BoofCV by lessthanoptimal.

From the class VisualizeDepthVisualOdometryApp, method changeSelectedAlgortihm.

private void changeSelectedAlgortihm(int whichAlg) {
    this.whichAlg = whichAlg;
    AlgType prevAlgType = this.algType;
    Class imageType = GrayU8.class;
    Class derivType = GImageDerivativeOps.getDerivativeType(imageType);
    // The 1e-3 scale converts integer depth values (e.g. Kinect millimeters) into meters
    DepthSparse3D<GrayU16> sparseDepth = new DepthSparse3D.I<>(1e-3);
    PkltConfig pkltConfig = new PkltConfig();
    pkltConfig.templateRadius = 3;
    pkltConfig.pyramidScaling = new int[] { 1, 2, 4, 8 };
    algType = AlgType.UNKNOWN;
    if (whichAlg == 0) {
        // 0: pyramidal KLT point tracker
        algType = AlgType.FEATURE;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTrackerTwoPass tracker = FactoryPointTrackerTwoPass.klt(pkltConfig, configDetector, imageType, derivType);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 120, 2, 200, 50, false, sparseDepth, tracker, imageType, GrayU16.class);
    } else if (whichAlg == 1) {
        // 1: Shi-Tomasi corners with BRIEF descriptors, associated frame to frame (DDA tracker)
        algType = AlgType.FEATURE;
        ConfigGeneralDetector configExtract = new ConfigGeneralDetector(600, 3, 1);
        GeneralFeatureDetector detector = FactoryPointTracker.createShiTomasi(configExtract, derivType);
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.brief(null, imageType);
        ScoreAssociateHamming_B score = new ScoreAssociateHamming_B();
        AssociateDescription2D<TupleDesc_B> associate = new AssociateDescTo2D<>(FactoryAssociation.greedy(score, 150, true));
        PointTrackerTwoPass tracker = FactoryPointTrackerTwoPass.dda(detector, describe, associate, null, 1, imageType);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 80, 3, 200, 50, false, sparseDepth, tracker, imageType, GrayU16.class);
    } else if (whichAlg == 2) {
        // 2: combined KLT + SURF tracker, adapted to the two-pass tracker interface
        algType = AlgType.FEATURE;
        PointTracker tracker = FactoryPointTracker.combined_ST_SURF_KLT(new ConfigGeneralDetector(600, 3, 1), pkltConfig, 50, null, null, imageType, derivType);
        PointTrackerTwoPass twopass = new PointTrackerToTwoPass<>(tracker);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 120, 3, 200, 50, false, sparseDepth, twopass, imageType, GrayU16.class);
    } else if (whichAlg == 3) {
        // 3: direct (dense) method, no feature tracking
        algType = AlgType.DIRECT;
        alg = FactoryVisualOdometry.depthDirect(sparseDepth, ImageType.pl(3, GrayF32.class), GrayU16.class);
    } else {
        throw new RuntimeException("Unknown selection");
    }
    if (algType != prevAlgType) {
        switch(prevAlgType) {
            case FEATURE:
                mainPanel.remove(featurePanel);
                break;
            case DIRECT:
                mainPanel.remove(directPanel);
                break;
            default:
                mainPanel.remove(algorithmPanel);
                break;
        }
        switch(algType) {
            case FEATURE:
                mainPanel.add(featurePanel, BorderLayout.NORTH);
                break;
            case DIRECT:
                mainPanel.add(directPanel, BorderLayout.NORTH);
                break;
            default:
                mainPanel.add(algorithmPanel, BorderLayout.NORTH);
                break;
        }
        mainPanel.invalidate();
    }
    setImageTypes(alg.getVisualType(), ImageType.single(alg.getDepthType()));
}
Also used:
PkltConfig (boofcv.alg.tracker.klt.PkltConfig)
FactoryDescribeRegionPoint (boofcv.factory.feature.describe.FactoryDescribeRegionPoint)
DescribeRegionPoint (boofcv.abst.feature.describe.DescribeRegionPoint)
ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector)
ScoreAssociateHamming_B (boofcv.abst.feature.associate.ScoreAssociateHamming_B)
FactoryPointTrackerTwoPass (boofcv.factory.feature.tracker.FactoryPointTrackerTwoPass)
PointTrackerTwoPass (boofcv.abst.feature.tracker.PointTrackerTwoPass)
GeneralFeatureDetector (boofcv.alg.feature.detect.interest.GeneralFeatureDetector)
PointTrackerToTwoPass (boofcv.abst.feature.tracker.PointTrackerToTwoPass)
AssociateDescription2D (boofcv.abst.feature.associate.AssociateDescription2D)
PointTracker (boofcv.abst.feature.tracker.PointTracker)
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker)
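
As above, changeSelectedAlgortihm only constructs the estimator. A minimal sketch of feeding it RGB-D frames follows, assuming the same BoofCV generation as this code, where DepthVisualOdometry.setCalibration takes the visual camera's intrinsics and a visual-to-depth pixel transform (DoNothingPixelTransform_F32 when the two images are already pixel aligned). The class name and frame iterables are placeholders.

import boofcv.abst.sfm.d3.DepthVisualOdometry;
import boofcv.alg.distort.DoNothingPixelTransform_F32;
import boofcv.struct.calib.CameraPinholeRadial;
import boofcv.struct.image.GrayU16;
import boofcv.struct.image.GrayU8;
import georegression.struct.se.Se3_F64;
import java.util.Iterator;

public class SketchDepthVoLoop {
    // Drives an RGB-D visual odometry estimator frame by frame
    public static void run(DepthVisualOdometry<GrayU8, GrayU16> alg,
                           CameraPinholeRadial visualParam,
                           Iterable<GrayU8> visualFrames,
                           Iterable<GrayU16> depthFrames) {
        // Visual and depth images are assumed to be pixel aligned already
        alg.setCalibration(visualParam, new DoNothingPixelTransform_F32());
        Iterator<GrayU16> depth = depthFrames.iterator();
        for (GrayU8 visual : visualFrames) {
            if (!alg.process(visual, depth.next())) {
                System.out.println("tracking failed on this frame");
                continue;
            }
            Se3_F64 cameraToWorld = alg.getCameraToWorld();
            System.out.println("translation: " + cameraToWorld.T);
        }
    }
}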

Example 4 with PointTracker

Use of boofcv.abst.feature.tracker.PointTracker in the project BoofCV by lessthanoptimal.

From the class VisualizeMonocularPlaneVisualOdometryApp, method createVisualOdometry.

private MonocularPlaneVisualOdometry<I> createVisualOdometry(int whichAlg) {
    Class derivType = GImageDerivativeOps.getDerivativeType(imageClass);
    if (whichAlg == 0) {
        // 0: plane constraint plus points at infinity for rotation
        PkltConfig config = new PkltConfig();
        config.pyramidScaling = new int[] { 1, 2, 4, 8 };
        config.templateRadius = 3;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTracker<I> tracker = FactoryPointTracker.klt(config, configDetector, imageClass, derivType);
        return FactoryVisualOdometry.monoPlaneInfinity(75, 2, 1.5, 200, tracker, imageType);
    } else if (whichAlg == 1) {
        // 1: overhead approach, tracking the plane in a synthesized top-down view
        PkltConfig config = new PkltConfig();
        config.pyramidScaling = new int[] { 1, 2, 4, 8 };
        config.templateRadius = 3;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTracker<I> tracker = FactoryPointTracker.klt(config, configDetector, imageClass, derivType);
        double cellSize = 0.06;
        double inlierGroundTol = 1.5;
        return FactoryVisualOdometry.monoPlaneOverhead(cellSize, 25, 0.7, inlierGroundTol, 300, 2, 100, 0.5, 0.6, tracker, imageType);
    } else {
        throw new RuntimeException("Unknown selection");
    }
}
Also used:
PkltConfig (boofcv.alg.tracker.klt.PkltConfig)
ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector)
PointTracker (boofcv.abst.feature.tracker.PointTracker)
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker)
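
For completeness, a sketch of running the configured monocular plane estimator. The class name and frames iterable are placeholders, and it assumes calibration arrives as a MonoPlaneParameters (intrinsics plus the plane-to-camera transform), as in BoofCV's monocular-plane examples.

import boofcv.abst.sfm.d3.MonocularPlaneVisualOdometry;
import boofcv.struct.calib.MonoPlaneParameters;
import boofcv.struct.image.GrayU8;
import georegression.struct.se.Se3_F64;

public class SketchMonoPlaneVoLoop {
    // Feeds monocular frames to a plane-based visual odometry estimator
    public static void run(MonocularPlaneVisualOdometry<GrayU8> alg,
                           MonoPlaneParameters calibration,
                           Iterable<GrayU8> frames) {
        // Calibration bundles the camera intrinsics with the plane-to-camera transform
        alg.setCalibration(calibration);
        for (GrayU8 frame : frames) {
            if (!alg.process(frame)) {
                System.out.println("motion estimate failed for this frame");
                continue;
            }
            Se3_F64 cameraToWorld = alg.getCameraToWorld();
            System.out.println("location: " + cameraToWorld.T);
        }
    }
}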

Aggregations

ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector): 4 usages
PointTracker (boofcv.abst.feature.tracker.PointTracker): 4 usages
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker): 4 usages
PkltConfig (boofcv.alg.tracker.klt.PkltConfig): 3 usages
AssociateDescription2D (boofcv.abst.feature.associate.AssociateDescription2D): 2 usages
ScoreAssociateHamming_B (boofcv.abst.feature.associate.ScoreAssociateHamming_B): 2 usages
DescribeRegionPoint (boofcv.abst.feature.describe.DescribeRegionPoint): 2 usages
PointTrackerToTwoPass (boofcv.abst.feature.tracker.PointTrackerToTwoPass): 2 usages
PointTrackerTwoPass (boofcv.abst.feature.tracker.PointTrackerTwoPass): 2 usages
GeneralFeatureDetector (boofcv.alg.feature.detect.interest.GeneralFeatureDetector): 2 usages
FactoryDescribeRegionPoint (boofcv.factory.feature.describe.FactoryDescribeRegionPoint): 2 usages
FactoryPointTrackerTwoPass (boofcv.factory.feature.tracker.FactoryPointTrackerTwoPass): 2 usages
DetectDescribeMulti (boofcv.abst.feature.detdesc.DetectDescribeMulti): 1 usage
DetectDescribeMultiFusion (boofcv.abst.feature.detdesc.DetectDescribeMultiFusion): 1 usage
ConfigExtract (boofcv.abst.feature.detect.extract.ConfigExtract): 1 usage
NonMaxSuppression (boofcv.abst.feature.detect.extract.NonMaxSuppression): 1 usage
GeneralFeatureIntensity (boofcv.abst.feature.detect.intensity.GeneralFeatureIntensity): 1 usage
DetectorInterestPointMulti (boofcv.abst.feature.detect.interest.DetectorInterestPointMulti): 1 usage
GeneralToInterestMulti (boofcv.abst.feature.detect.interest.GeneralToInterestMulti): 1 usage
BackgroundModelMoving (boofcv.alg.background.BackgroundModelMoving): 1 usage