
Example 11 with PkltConfig

Use of boofcv.alg.tracker.klt.PkltConfig in project BoofCV by lessthanoptimal.

From the class ExampleVisualOdometryDepth, method main:

public static void main(String[] args) throws IOException {
    MediaManager media = DefaultMediaManager.INSTANCE;
    String directory = UtilIO.pathExample("kinect/straight");
    // load camera description and the video sequence
    VisualDepthParameters param = CalibrationIO.load(media.openFile(directory + "visualdepth.yaml"));
    // specify how the image features are going to be tracked
    PkltConfig configKlt = new PkltConfig();
    configKlt.pyramidScaling = new int[] { 1, 2, 4, 8 };
    configKlt.templateRadius = 3;
    PointTrackerTwoPass<GrayU8> tracker = FactoryPointTrackerTwoPass.klt(configKlt, new ConfigGeneralDetector(600, 3, 1), GrayU8.class, GrayS16.class);
    DepthSparse3D<GrayU16> sparseDepth = new DepthSparse3D.I<>(1e-3);
    // declares the algorithm
    DepthVisualOdometry<GrayU8, GrayU16> visualOdometry = FactoryVisualOdometry.depthDepthPnP(1.5, 120, 2, 200, 50, true, sparseDepth, tracker, GrayU8.class, GrayU16.class);
    // Pass in intrinsic/extrinsic calibration.  This can be changed in the future.
    visualOdometry.setCalibration(param.visualParam, new DoNothing2Transform2_F32());
    // Process the video sequence and output the location plus number of inliers
    SimpleImageSequence<GrayU8> videoVisual = media.openVideo(directory + "rgb.mjpeg", ImageType.single(GrayU8.class));
    SimpleImageSequence<GrayU16> videoDepth = media.openVideo(directory + "depth.mpng", ImageType.single(GrayU16.class));
    while (videoVisual.hasNext()) {
        GrayU8 visual = videoVisual.next();
        GrayU16 depth = videoDepth.next();
        if (!visualOdometry.process(visual, depth)) {
            throw new RuntimeException("VO Failed!");
        }
        Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
        Vector3D_F64 T = leftToWorld.getT();
        System.out.printf("Location %8.2f %8.2f %8.2f      inliers %s\n", T.x, T.y, T.z, inlierPercent(visualOdometry));
    }
}
Also used: VisualDepthParameters(boofcv.struct.calib.VisualDepthParameters) GrayU16(boofcv.struct.image.GrayU16) PkltConfig(boofcv.alg.tracker.klt.PkltConfig) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) DoNothing2Transform2_F32(boofcv.struct.distort.DoNothing2Transform2_F32) Vector3D_F64(georegression.struct.point.Vector3D_F64) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) GrayU8(boofcv.struct.image.GrayU8) Se3_F64(georegression.struct.se.Se3_F64)
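This example and the ones below print an inlier percentage via an inlierPercent(...) helper that is not reproduced in these snippets; in the BoofCV examples it derives the figure from the tracker's inlier flags. A minimal stand-alone sketch of just the formatting arithmetic, with a hypothetical method name and counts supplied by the caller:

```java
public class InlierPercent {

    // Fraction of tracked features that were inliers in the last update,
    // formatted for the "%s" slot in the printf calls above. The counts
    // themselves would come from the visual odometry's track list.
    static String inlierPercent(int inliers, int totalTracks) {
        if (totalTracks == 0)
            return "0%";
        return (100 * inliers / totalTracks) + "%";
    }

    public static void main(String[] args) {
        // 45 inliers out of 60 tracks
        System.out.println(inlierPercent(45, 60)); // prints "75%"
    }
}
```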

Example 12 with PkltConfig

Use of boofcv.alg.tracker.klt.PkltConfig in project BoofCV by lessthanoptimal.

From the class ExampleVisualOdometryMonocularPlane, method main:

public static void main(String[] args) {
    MediaManager media = DefaultMediaManager.INSTANCE;
    String directory = UtilIO.pathExample("vo/drc/");
    // load camera description and the video sequence
    MonoPlaneParameters calibration = CalibrationIO.load(media.openFile(directory + "mono_plane.yaml"));
    SimpleImageSequence<GrayU8> video = media.openVideo(directory + "left.mjpeg", ImageType.single(GrayU8.class));
    // specify how the image features are going to be tracked
    PkltConfig configKlt = new PkltConfig();
    configKlt.pyramidScaling = new int[] { 1, 2, 4, 8 };
    configKlt.templateRadius = 3;
    ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
    PointTracker<GrayU8> tracker = FactoryPointTracker.klt(configKlt, configDetector, GrayU8.class, null);
    // declares the algorithm
    MonocularPlaneVisualOdometry<GrayU8> visualOdometry = FactoryVisualOdometry.monoPlaneInfinity(75, 2, 1.5, 200, tracker, ImageType.single(GrayU8.class));
    // Pass in intrinsic/extrinsic calibration.  This can be changed in the future.
    visualOdometry.setCalibration(calibration);
    // Process the video sequence and output the location plus number of inliers
    while (video.hasNext()) {
        GrayU8 image = video.next();
        if (!visualOdometry.process(image)) {
            System.out.println("Fault!");
            visualOdometry.reset();
        }
        Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
        Vector3D_F64 T = leftToWorld.getT();
        System.out.printf("Location %8.2f %8.2f %8.2f      inliers %s\n", T.x, T.y, T.z, inlierPercent(visualOdometry));
    }
}
Also used: Vector3D_F64(georegression.struct.point.Vector3D_F64) PkltConfig(boofcv.alg.tracker.klt.PkltConfig) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) GrayU8(boofcv.struct.image.GrayU8) MonoPlaneParameters(boofcv.struct.calib.MonoPlaneParameters) Se3_F64(georegression.struct.se.Se3_F64)
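In all of these examples configKlt.pyramidScaling = {1, 2, 4, 8} lists the downsampling factor of each level of the image pyramid the KLT tracker operates on; coarse levels let the tracker handle larger frame-to-frame motion. A small sketch of what those factors mean for the per-level image size, assuming a 640x480 input frame (the frame size is illustrative, not taken from the example data):

```java
public class PyramidLevels {

    // Width and height of one pyramid level: the input image
    // downsampled by the given factor.
    static int[] levelSize(int width, int height, int scale) {
        return new int[] { width / scale, height / scale };
    }

    public static void main(String[] args) {
        int[] pyramidScaling = { 1, 2, 4, 8 }; // same factors as configKlt above
        for (int s : pyramidScaling) {
            int[] dims = levelSize(640, 480, s);
            // A feature displacement of N pixels at full resolution
            // appears as N/s pixels at this level.
            System.out.println("scale " + s + " -> " + dims[0] + "x" + dims[1]);
        }
    }
}
```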

Example 13 with PkltConfig

Use of boofcv.alg.tracker.klt.PkltConfig in project BoofCV by lessthanoptimal.

From the class ExampleVisualOdometryStereo, method main:

public static void main(String[] args) {
    MediaManager media = DefaultMediaManager.INSTANCE;
    String directory = UtilIO.pathExample("vo/backyard/");
    // load camera description and the video sequence
    StereoParameters stereoParam = CalibrationIO.load(media.openFile(directory + "stereo.yaml"));
    SimpleImageSequence<GrayU8> video1 = media.openVideo(directory + "left.mjpeg", ImageType.single(GrayU8.class));
    SimpleImageSequence<GrayU8> video2 = media.openVideo(directory + "right.mjpeg", ImageType.single(GrayU8.class));
    // specify how the image features are going to be tracked
    PkltConfig configKlt = new PkltConfig();
    configKlt.pyramidScaling = new int[] { 1, 2, 4, 8 };
    configKlt.templateRadius = 3;
    PointTrackerTwoPass<GrayU8> tracker = FactoryPointTrackerTwoPass.klt(configKlt, new ConfigGeneralDetector(600, 3, 1), GrayU8.class, GrayS16.class);
    // computes the depth of each point
    StereoDisparitySparse<GrayU8> disparity = FactoryStereoDisparity.regionSparseWta(0, 150, 3, 3, 30, -1, true, GrayU8.class);
    // declares the algorithm
    StereoVisualOdometry<GrayU8> visualOdometry = FactoryVisualOdometry.stereoDepth(1.5, 120, 2, 200, 50, true, disparity, tracker, GrayU8.class);
    // Pass in intrinsic/extrinsic calibration.  This can be changed in the future.
    visualOdometry.setCalibration(stereoParam);
    // Process the video sequence and output the location plus number of inliers
    while (video1.hasNext()) {
        GrayU8 left = video1.next();
        GrayU8 right = video2.next();
        if (!visualOdometry.process(left, right)) {
            throw new RuntimeException("VO Failed!");
        }
        Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
        Vector3D_F64 T = leftToWorld.getT();
        System.out.printf("Location %8.2f %8.2f %8.2f      inliers %s\n", T.x, T.y, T.z, inlierPercent(visualOdometry));
    }
}
Also used: PkltConfig(boofcv.alg.tracker.klt.PkltConfig) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) Vector3D_F64(georegression.struct.point.Vector3D_F64) MediaManager(boofcv.io.MediaManager) DefaultMediaManager(boofcv.io.wrapper.DefaultMediaManager) GrayU8(boofcv.struct.image.GrayU8) StereoParameters(boofcv.struct.calib.StereoParameters) Se3_F64(georegression.struct.se.Se3_F64)
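The sparse disparity computed above is what gives each tracked feature its depth: for a rectified pinhole stereo pair, Z = f * B / d, with focal length f in pixels, baseline B in meters, and disparity d in pixels. A stand-alone sketch of that relation with illustrative numbers (f and B here are assumed values, not the calibration loaded from stereo.yaml):

```java
public class DisparityToDepth {

    // Rectified pinhole stereo: Z = f * B / d.
    static double depth(double focalPx, double baselineM, double disparityPx) {
        return focalPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        double f = 500.0; // assumed focal length [pixels]
        double B = 0.12;  // assumed baseline [meters]
        // Larger disparity means the point is closer to the camera.
        System.out.println(depth(f, B, 10.0));  // distant point
        System.out.println(depth(f, B, 100.0)); // nearby point
    }
}
```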

Example 14 with PkltConfig

Use of boofcv.alg.tracker.klt.PkltConfig in project BoofCV by lessthanoptimal.

From the class VisualizeDepthVisualOdometryApp, method changeSelectedAlgortihm:

private void changeSelectedAlgortihm(int whichAlg) {
    this.whichAlg = whichAlg;
    AlgType prevAlgType = this.algType;
    Class imageType = GrayU8.class;
    Class derivType = GImageDerivativeOps.getDerivativeType(imageType);
    DepthSparse3D<GrayU16> sparseDepth = new DepthSparse3D.I<>(1e-3);
    PkltConfig pkltConfig = new PkltConfig();
    pkltConfig.templateRadius = 3;
    pkltConfig.pyramidScaling = new int[] { 1, 2, 4, 8 };
    algType = AlgType.UNKNOWN;
    if (whichAlg == 0) {
        algType = AlgType.FEATURE;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTrackerTwoPass tracker = FactoryPointTrackerTwoPass.klt(pkltConfig, configDetector, imageType, derivType);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 120, 2, 200, 50, false, sparseDepth, tracker, imageType, GrayU16.class);
    } else if (whichAlg == 1) {
        algType = AlgType.FEATURE;
        ConfigGeneralDetector configExtract = new ConfigGeneralDetector(600, 3, 1);
        GeneralFeatureDetector detector = FactoryPointTracker.createShiTomasi(configExtract, derivType);
        DescribeRegionPoint describe = FactoryDescribeRegionPoint.brief(null, imageType);
        ScoreAssociateHamming_B score = new ScoreAssociateHamming_B();
        AssociateDescription2D<TupleDesc_B> associate = new AssociateDescTo2D<>(FactoryAssociation.greedy(score, 150, true));
        PointTrackerTwoPass tracker = FactoryPointTrackerTwoPass.dda(detector, describe, associate, null, 1, imageType);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 80, 3, 200, 50, false, sparseDepth, tracker, imageType, GrayU16.class);
    } else if (whichAlg == 2) {
        algType = AlgType.FEATURE;
        PointTracker tracker = FactoryPointTracker.combined_ST_SURF_KLT(new ConfigGeneralDetector(600, 3, 1), pkltConfig, 50, null, null, imageType, derivType);
        PointTrackerTwoPass twopass = new PointTrackerToTwoPass<>(tracker);
        alg = FactoryVisualOdometry.depthDepthPnP(1.5, 120, 3, 200, 50, false, sparseDepth, twopass, imageType, GrayU16.class);
    } else if (whichAlg == 3) {
        algType = AlgType.DIRECT;
        alg = FactoryVisualOdometry.depthDirect(sparseDepth, ImageType.pl(3, GrayF32.class), GrayU16.class);
    } else {
        throw new RuntimeException("Unknown selection");
    }
    if (algType != prevAlgType) {
        switch(prevAlgType) {
            case FEATURE:
                mainPanel.remove(featurePanel);
                break;
            case DIRECT:
                mainPanel.remove(directPanel);
                break;
            default:
                mainPanel.remove(algorithmPanel);
                break;
        }
        switch(algType) {
            case FEATURE:
                mainPanel.add(featurePanel, BorderLayout.NORTH);
                break;
            case DIRECT:
                mainPanel.add(directPanel, BorderLayout.NORTH);
                break;
            default:
                mainPanel.add(algorithmPanel, BorderLayout.NORTH);
                break;
        }
        mainPanel.invalidate();
    }
    setImageTypes(alg.getVisualType(), ImageType.single(alg.getDepthType()));
}
Also used: PkltConfig(boofcv.alg.tracker.klt.PkltConfig) FactoryDescribeRegionPoint(boofcv.factory.feature.describe.FactoryDescribeRegionPoint) DescribeRegionPoint(boofcv.abst.feature.describe.DescribeRegionPoint) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) ScoreAssociateHamming_B(boofcv.abst.feature.associate.ScoreAssociateHamming_B) FactoryPointTrackerTwoPass(boofcv.factory.feature.tracker.FactoryPointTrackerTwoPass) PointTrackerTwoPass(boofcv.abst.feature.tracker.PointTrackerTwoPass) GeneralFeatureDetector(boofcv.alg.feature.detect.interest.GeneralFeatureDetector) PointTrackerToTwoPass(boofcv.abst.feature.tracker.PointTrackerToTwoPass) AssociateDescription2D(boofcv.abst.feature.associate.AssociateDescription2D) PointTracker(boofcv.abst.feature.tracker.PointTracker) FactoryPointTracker(boofcv.factory.feature.tracker.FactoryPointTracker)
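In the whichAlg == 1 branch, ScoreAssociateHamming_B scores pairs of binary BRIEF descriptors by Hamming distance, and the greedy associator rejects candidate matches whose score exceeds its second argument (150 here). A minimal stand-alone sketch of the distance itself on packed 64-bit words, with made-up descriptor values:

```java
public class HammingScore {

    // Hamming distance between two packed binary descriptors:
    // number of differing bits, accumulated per 64-bit word.
    static int hamming(long[] a, long[] b) {
        int total = 0;
        for (int i = 0; i < a.length; i++)
            total += Long.bitCount(a[i] ^ b[i]);
        return total;
    }

    public static void main(String[] args) {
        long[] descA = { 0b1011L, 0L };
        long[] descB = { 0b0001L, 1L };
        // 0b1011 ^ 0b0001 = 0b1010 (2 bits), plus 0 ^ 1 (1 bit) = 3
        System.out.println(hamming(descA, descB)); // prints 3
    }
}
```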

Example 15 with PkltConfig

Use of boofcv.alg.tracker.klt.PkltConfig in project BoofCV by lessthanoptimal.

From the class VisualizeMonocularPlaneVisualOdometryApp, method createVisualOdometry:

private MonocularPlaneVisualOdometry<I> createVisualOdometry(int whichAlg) {
    Class derivType = GImageDerivativeOps.getDerivativeType(imageClass);
    if (whichAlg == 0) {
        PkltConfig config = new PkltConfig();
        config.pyramidScaling = new int[] { 1, 2, 4, 8 };
        config.templateRadius = 3;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTracker<I> tracker = FactoryPointTracker.klt(config, configDetector, imageClass, derivType);
        return FactoryVisualOdometry.monoPlaneInfinity(75, 2, 1.5, 200, tracker, imageType);
    } else if (whichAlg == 1) {
        PkltConfig config = new PkltConfig();
        config.pyramidScaling = new int[] { 1, 2, 4, 8 };
        config.templateRadius = 3;
        ConfigGeneralDetector configDetector = new ConfigGeneralDetector(600, 3, 1);
        PointTracker<I> tracker = FactoryPointTracker.klt(config, configDetector, imageClass, derivType);
        double cellSize = 0.06;
        double inlierGroundTol = 1.5;
        return FactoryVisualOdometry.monoPlaneOverhead(cellSize, 25, 0.7, inlierGroundTol, 300, 2, 100, 0.5, 0.6, tracker, imageType);
    } else {
        throw new RuntimeException("Unknown selection");
    }
}
Also used: PkltConfig(boofcv.alg.tracker.klt.PkltConfig) ConfigGeneralDetector(boofcv.abst.feature.detect.interest.ConfigGeneralDetector) PointTracker(boofcv.abst.feature.tracker.PointTracker) FactoryPointTracker(boofcv.factory.feature.tracker.FactoryPointTracker)
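In the second branch above, cellSize = 0.06 sets the resolution of the synthetic overhead view of the ground plane that monoPlaneOverhead renders, presumably in meters per cell. A small arithmetic sketch of the trade-off, with an assumed map extent (the 3 m figure is illustrative only):

```java
public class OverheadCells {

    // Number of overhead-view cells needed to span a ground extent
    // at the given resolution; finer cells mean a larger grid.
    static long cellsFor(double extentM, double cellSizeM) {
        return Math.round(extentM / cellSizeM);
    }

    public static void main(String[] args) {
        double cellSize = 0.06; // meters per cell, as in the example above
        System.out.println(cellsFor(3.0, cellSize)); // prints 50
    }
}
```

Halving cellSize doubles the cell count per axis, so memory and rendering cost grow quadratically with resolution.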

Aggregations

PkltConfig (boofcv.alg.tracker.klt.PkltConfig): 18
ConfigGeneralDetector (boofcv.abst.feature.detect.interest.ConfigGeneralDetector): 16
GrayU8 (boofcv.struct.image.GrayU8): 7
Vector3D_F64 (georegression.struct.point.Vector3D_F64): 4
Se3_F64 (georegression.struct.se.Se3_F64): 4
DescribeRegionPoint (boofcv.abst.feature.describe.DescribeRegionPoint): 3
PointTracker (boofcv.abst.feature.tracker.PointTracker): 3
FactoryDescribeRegionPoint (boofcv.factory.feature.describe.FactoryDescribeRegionPoint): 3
FactoryPointTracker (boofcv.factory.feature.tracker.FactoryPointTracker): 3
MediaManager (boofcv.io.MediaManager): 3
DefaultMediaManager (boofcv.io.wrapper.DefaultMediaManager): 3
GrayF32 (boofcv.struct.image.GrayF32): 3
GrayU16 (boofcv.struct.image.GrayU16): 3
BufferedImage (java.awt.image.BufferedImage): 3
AssociateDescription2D (boofcv.abst.feature.associate.AssociateDescription2D): 2
ScoreAssociateHamming_B (boofcv.abst.feature.associate.ScoreAssociateHamming_B): 2
PointTrack (boofcv.abst.feature.tracker.PointTrack): 2
PointTrackerToTwoPass (boofcv.abst.feature.tracker.PointTrackerToTwoPass): 2
PointTrackerTwoPass (boofcv.abst.feature.tracker.PointTrackerTwoPass): 2
GeneralFeatureDetector (boofcv.alg.feature.detect.interest.GeneralFeatureDetector): 2