Example 56 with CameraPinholeBrown

use of boofcv.struct.calib.CameraPinholeBrown in project BoofCV by lessthanoptimal.

the class ExampleMultiBaselineStereo method main.

public static void main(String[] args) {
    // Compute a sparse reconstruction. This will give us intrinsic and extrinsic parameters for all views
    var example = new ExampleMultiViewSparseReconstruction();
    // Specifies the "center" frame to use
    int centerViewIdx = 15;
    example.compute("tree_snow_01.mp4", true);
    // example.compute("ditch_02.mp4", true);
    // example.compute("holiday_display_01.mp4"", true);
    // example.compute("log_building_02.mp4"", true);
    // example.compute("drone_park_01.mp4", false);
    // example.compute("stone_sign.mp4", true);
    // We need a way to load images based on their ID. In this particular case the ID encodes the array index.
    var imageLookup = new LookUpImageFilesByIndex(example.imageFiles);
    // Next we tell it which view to use as the "center", which acts as the common view for all disparity images.
    // The process of selecting the best views to use as centers is a problem all its own. To keep things
    // simple we just pick a frame.
    SceneWorkingGraph.View center = example.working.getAllViews().get(centerViewIdx);
    // The final scene refined by bundle adjustment is created by the Working graph. However, the 3D relationship
    // between views is contained in the pairwise graph. A View in the working graph has a reference to the view
    // in the pairwise graph. Using that we will find all connected views that have a 3D relationship
    var pairedViewIdxs = new DogArray_I32();
    var sbaIndexToImageID = new TIntObjectHashMap<String>();
    // This relationship between pairwise and working graphs might seem (and is) a bit convoluted. The Pairwise
    // graph is the initial crude sketch of what might be connected. The working graph is an intermediate
    // data structure for computing the metric scene. SBA is a refinement of the working graph.
    // Iterate through all connected views in the pairwise graph and mark their indexes in the working graph
    center.pview.connections.forEach((m) -> {
        // if there isn't a 3D relationship just skip it
        if (!m.is3D)
            return;
        String connectedID = m.other(center.pview).id;
        SceneWorkingGraph.View connected = example.working.views.get(connectedID);
        // Make sure the pairwise view exists in the working graph too
        if (connected == null)
            return;
        // Add this view to the index to name/ID lookup table
        sbaIndexToImageID.put(connected.index, connectedID);
        // Note that this view is one which acts as the second image in the stereo pair
        pairedViewIdxs.add(connected.index);
    });
    // Add the center camera image to the ID look up table
    sbaIndexToImageID.put(centerViewIdx, center.pview.id);
    // Configure the stereo disparity algorithm which is used
    var configDisparity = new ConfigDisparityBMBest5();
    configDisparity.validateRtoL = 1;
    configDisparity.texture = 0.5;
    configDisparity.regionRadiusX = configDisparity.regionRadiusY = 4;
    configDisparity.disparityRange = 120;
    // This is the actual multi-baseline stereo (MBS) algorithm. It selects the best disparity for each pixel
    // in the original image using a median filter.
    var multiBaseline = new MultiBaselineStereoIndependent<>(imageLookup, ImageType.SB_U8);
    multiBaseline.setStereoDisparity(FactoryStereoDisparity.blockMatchBest5(configDisparity, GrayU8.class, GrayF32.class));
    // Print out verbose debugging and profile information
    multiBaseline.setVerbose(System.out, null);
    multiBaseline.setVerboseProfiling(System.out);
    // Improve stereo by removing small regions, which tend to be noise. Consider adjusting the region size.
    multiBaseline.setDisparitySmoother(FactoryStereoDisparity.removeSpeckle(null, GrayF32.class));
    // Print out debugging information from the smoother
    // Objects.requireNonNull(multiBaseline.getDisparitySmoother()).setVerbose(System.out,null);
    // Creates a list where you can switch between different images/visualizations
    var listDisplay = new ListDisplayPanel();
    listDisplay.setPreferredSize(new Dimension(1000, 300));
    ShowImages.showWindow(listDisplay, "Intermediate Results", true);
    // We will display intermediate results as they come in
    multiBaseline.setListener((leftView, rightView, rectLeft, rectRight, disparity, mask, parameters, rect) -> {
        // Visualize the rectified stereo pair. You can interact with this window and verify
        // that the y-axis is aligned
        var rectified = new RectifiedPairPanel(true);
        rectified.setImages(ConvertBufferedImage.convertTo(rectLeft, null), ConvertBufferedImage.convertTo(rectRight, null));
        // Cleans up the disparity image by zeroing out pixels that are outside the original image bounds
        RectifyImageOps.applyMask(disparity, mask, 0);
        // Display the colorized disparity
        BufferedImage colorized = VisualizeImageData.disparity(disparity, null, parameters.disparityRange, 0);
        SwingUtilities.invokeLater(() -> {
            listDisplay.addItem(rectified, "Rectified " + leftView + " " + rightView);
            listDisplay.addImage(colorized, leftView + " " + rightView);
        });
    });
    // Process the images and compute a single combined disparity image
    if (!multiBaseline.process(example.scene, center.index, pairedViewIdxs, sbaIndexToImageID::get)) {
        throw new RuntimeException("Failed to fuse stereo views");
    }
    // Extract the point cloud from the fused disparity image
    GrayF32 fusedDisparity = multiBaseline.getFusedDisparity();
    DisparityParameters fusedParam = multiBaseline.getFusedParam();
    BufferedImage colorizedDisp = VisualizeImageData.disparity(fusedDisparity, null, fusedParam.disparityRange, 0);
    ShowImages.showWindow(colorizedDisp, "Fused Disparity");
    // Now compute the point cloud it represents and the color of each pixel.
    // For the fused image, instead of being in rectified image coordinates it's in the original image coordinates,
    // which makes extracting color much easier.
    var cloud = new DogArray<>(Point3D_F64::new);
    var cloudRgb = new DogArray_I32(cloud.size);
    // Load the center image in color
    var colorImage = new InterleavedU8(1, 1, 3);
    imageLookup.loadImage(center.pview.id, colorImage);
    // Since the fused image is in the original (i.e. distorted) pixel coordinates and is not rectified,
    // that needs to be taken into account by undistorting the image to create the point cloud.
    CameraPinholeBrown intrinsic = BundleAdjustmentOps.convert(example.scene.cameras.get(center.cameraIdx).model, colorImage.width, colorImage.height, null);
    Point2Transform2_F64 pixel_to_norm = new LensDistortionBrown(intrinsic).undistort_F64(true, false);
    MultiViewStereoOps.disparityToCloud(fusedDisparity, fusedParam, new PointToPixelTransform_F64(pixel_to_norm), (pixX, pixY, x, y, z) -> {
        cloud.grow().setTo(x, y, z);
        cloudRgb.add(colorImage.get24(pixX, pixY));
    });
    // Configure the point cloud viewer
    PointCloudViewer pcv = VisualizeData.createPointCloudViewer();
    pcv.setCameraHFov(UtilAngle.radian(70));
    pcv.setTranslationStep(0.15);
    pcv.addCloud(cloud.toList(), cloudRgb.data);
    // pcv.setColorizer(new SingleAxisRgb.Z().fperiod(30.0));
    JComponent viewer = pcv.getComponent();
    viewer.setPreferredSize(new Dimension(600, 600));
    ShowImages.showWindow(viewer, "Point Cloud", true);
    System.out.println("Done");
}
Also used : Point3D_F64(georegression.struct.point.Point3D_F64) InterleavedU8(boofcv.struct.image.InterleavedU8) ListDisplayPanel(boofcv.gui.ListDisplayPanel) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) ConfigDisparityBMBest5(boofcv.factory.disparity.ConfigDisparityBMBest5) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) RectifiedPairPanel(boofcv.gui.stereo.RectifiedPairPanel) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) PointCloudViewer(boofcv.visualize.PointCloudViewer) PointToPixelTransform_F64(boofcv.struct.distort.PointToPixelTransform_F64) LookUpImageFilesByIndex(boofcv.io.image.LookUpImageFilesByIndex) GrayU8(boofcv.struct.image.GrayU8) SceneWorkingGraph(boofcv.alg.structure.SceneWorkingGraph) MultiBaselineStereoIndependent(boofcv.alg.mvs.MultiBaselineStereoIndependent) Point2Transform2_F64(boofcv.struct.distort.Point2Transform2_F64) DogArray_I32(org.ddogleg.struct.DogArray_I32) DogArray(org.ddogleg.struct.DogArray) GrayF32(boofcv.struct.image.GrayF32) TIntObjectHashMap(gnu.trove.map.hash.TIntObjectHashMap) DisparityParameters(boofcv.alg.mvs.DisparityParameters)
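
The pixel-to-normalized transform built just before disparityToCloud is easy to get backwards, so here is a minimal standalone sketch of both directions and their round trip. The API calls are the same ones used in the examples on this page; the intrinsic values are placeholders.

public static void main(String[] args) {
    // Placeholder intrinsics. In the example above these come from bundle adjustment instead.
    var intrinsic = new CameraPinholeBrown(2);
    intrinsic.fx = 500;
    intrinsic.fy = 500;
    intrinsic.cx = 320;
    intrinsic.cy = 240;
    intrinsic.width = 640;
    intrinsic.height = 480;
    intrinsic.radial[0] = 0.01;
    intrinsic.radial[1] = -0.002;
    // undistort_F64(true, false): pixel in, normalized out. Removes lens distortion.
    Point2Transform2_F64 pixelToNorm = new LensDistortionBrown(intrinsic).undistort_F64(true, false);
    // distort_F64(false, true): normalized in, pixel out. Applies lens distortion.
    Point2Transform2_F64 normToPixel = new LensDistortionBrown(intrinsic).distort_F64(false, true);
    var norm = new Point2D_F64();
    var pixel = new Point2D_F64();
    pixelToNorm.compute(400, 300, norm);
    normToPixel.compute(norm.x, norm.y, pixel);
    // The round trip should land back on (400, 300) to within numerical precision
    System.out.printf("(400,300) -> (%.4f, %.4f)%n", pixel.x, pixel.y);
}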

Example 57 with CameraPinholeBrown

use of boofcv.struct.calib.CameraPinholeBrown in project BoofCV by lessthanoptimal.

the class ExampleStereoTwoViewsOneCamera method main.

public static void main(String[] args) {
    // specify location of images and calibration
    String calibDir = UtilIO.pathExample("calibration/mono/Sony_DSC-HX5V_Chess/");
    String imageDir = UtilIO.pathExample("stereo/");
    // Camera parameters
    CameraPinholeBrown intrinsic = CalibrationIO.load(new File(calibDir, "intrinsic.yaml"));
    // Input images from the camera moving left to right
    BufferedImage origLeft = UtilImageIO.loadImage(imageDir, "mono_wall_01.jpg");
    BufferedImage origRight = UtilImageIO.loadImage(imageDir, "mono_wall_02.jpg");
    // Input images with lens distortion
    GrayU8 distortedLeft = ConvertBufferedImage.convertFrom(origLeft, (GrayU8) null);
    GrayU8 distortedRight = ConvertBufferedImage.convertFrom(origRight, (GrayU8) null);
    // matched features between the two images
    List<AssociatedPair> matchedFeatures = ExampleComputeFundamentalMatrix.computeMatches(origLeft, origRight);
    // convert from pixel coordinates into normalized image coordinates
    List<AssociatedPair> matchedCalibrated = convertToNormalizedCoordinates(matchedFeatures, intrinsic);
    // Robustly estimate camera motion
    List<AssociatedPair> inliers = new ArrayList<>();
    Se3_F64 leftToRight = estimateCameraMotion(intrinsic, matchedCalibrated, inliers);
    drawInliers(origLeft, origRight, intrinsic, inliers);
    // Rectify and remove lens distortion for stereo processing
    DMatrixRMaj rectifiedK = new DMatrixRMaj(3, 3);
    DMatrixRMaj rectifiedR = new DMatrixRMaj(3, 3);
    GrayU8 rectifiedLeft = distortedLeft.createSameShape();
    GrayU8 rectifiedRight = distortedRight.createSameShape();
    GrayU8 rectifiedMask = distortedLeft.createSameShape();
    rectifyImages(distortedLeft, distortedRight, leftToRight, intrinsic, intrinsic, rectifiedLeft, rectifiedRight, rectifiedMask, rectifiedK, rectifiedR);
    // compute disparity
    ConfigDisparityBMBest5 config = new ConfigDisparityBMBest5();
    config.errorType = DisparityError.CENSUS;
    config.disparityMin = disparityMin;
    config.disparityRange = disparityRange;
    config.subpixel = true;
    config.regionRadiusX = config.regionRadiusY = 5;
    config.maxPerPixelError = 20;
    config.validateRtoL = 1;
    config.texture = 0.1;
    StereoDisparity<GrayU8, GrayF32> disparityAlg = FactoryStereoDisparity.blockMatchBest5(config, GrayU8.class, GrayF32.class);
    // process and return the results
    disparityAlg.process(rectifiedLeft, rectifiedRight);
    GrayF32 disparity = disparityAlg.getDisparity();
    RectifyImageOps.applyMask(disparity, rectifiedMask, 0);
    // show results
    BufferedImage visualized = VisualizeImageData.disparity(disparity, null, disparityRange, 0);
    BufferedImage outLeft = ConvertBufferedImage.convertTo(rectifiedLeft, null);
    BufferedImage outRight = ConvertBufferedImage.convertTo(rectifiedRight, null);
    ShowImages.showWindow(new RectifiedPairPanel(true, outLeft, outRight), "Rectification", true);
    ShowImages.showWindow(visualized, "Disparity", true);
    showPointCloud(disparity, outLeft, leftToRight, rectifiedK, rectifiedR, disparityMin, disparityRange);
    System.out.println("Total found " + matchedCalibrated.size());
    System.out.println("Total Inliers " + inliers.size());
}
Also used : AssociatedPair(boofcv.struct.geo.AssociatedPair) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) ConfigDisparityBMBest5(boofcv.factory.disparity.ConfigDisparityBMBest5) ArrayList(java.util.ArrayList) DMatrixRMaj(org.ejml.data.DMatrixRMaj) RectifiedPairPanel(boofcv.gui.stereo.RectifiedPairPanel) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) File(java.io.File) Se3_F64(georegression.struct.se.Se3_F64)
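
The helper convertToNormalizedCoordinates is called above but not shown in this excerpt. Below is a minimal sketch of what that conversion could look like, reusing the LensDistortionFactory and Point2Transform2_F64 calls seen elsewhere on this page; the actual helper in ExampleStereoTwoViewsOneCamera may differ in detail.

public static List<AssociatedPair> convertToNormalizedCoordinates(List<AssociatedPair> matches, CameraPinholeBrown intrinsic) {
    // pixel in, normalized out: removes lens distortion along the way
    Point2Transform2_F64 p_to_n = LensDistortionFactory.narrow(intrinsic).undistort_F64(true, false);
    List<AssociatedPair> calibrated = new ArrayList<>();
    for (AssociatedPair m : matches) {
        var c = new AssociatedPair();
        p_to_n.compute(m.p1.x, m.p1.y, c.p1);
        p_to_n.compute(m.p2.x, m.p2.y, c.p2);
        calibrated.add(c);
    }
    return calibrated;
}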

Example 58 with CameraPinholeBrown

use of boofcv.struct.calib.CameraPinholeBrown in project BoofCV by lessthanoptimal.

the class ExampleFiducialRandomDots method main.

public static void main(String[] args) {
    // The definitions file specifies where the dots are on each marker and other bits of metadata
    RandomDotDefinition defs = FiducialIO.loadRandomDotYaml(UtilIO.fileExample("fiducial/random_dots/descriptions.yaml"));
    // The only parameters that you have to set are the marker's width and height. They're used to compute bounding
    // boxes and similar. If you don't know what the size is just set it to 1.0
    var config = new ConfigUchiyaMarker();
    config.markerWidth = defs.markerWidth;
    config.markerHeight = defs.markerHeight;
    Uchiya_to_FiducialDetector<GrayU8> detector = FactoryFiducial.randomDots(config, GrayU8.class);
    // Load / learn all the markers. This can take a few seconds if there are a lot of markers
    for (List<Point2D_F64> marker : defs.markers) {
        detector.addMarker(marker);
    }
    // If you want 3D pose information then the camera needs to be calibrated. If you don't provide
    // this information then only things like the bounding box will be returned
    CameraPinholeBrown intrinsic = CalibrationIO.load(UtilIO.fileExample("fiducial/random_dots/intrinsic.yaml"));
    detector.setLensDistortion(LensDistortionFactory.narrow(intrinsic), intrinsic.width, intrinsic.height);
    // It's now ready to start processing images. Let's load an image
    BufferedImage image = UtilImageIO.loadImageNotNull(UtilIO.pathExample("fiducial/random_dots/image02.jpg"));
    GrayU8 gray = ConvertBufferedImage.convertFrom(image, false, ImageType.SB_U8);
    detector.detect(gray);
    // Time to visualize the results
    Graphics2D g2 = image.createGraphics();
    var targetToSensor = new Se3_F64();
    var bounds = new Polygon2D_F64();
    var center = new Point2D_F64();
    for (int i = 0; i < detector.totalFound(); i++) {
        detector.getBounds(i, bounds);
        detector.getCenter(i, center);
        g2.setColor(new Color(50, 50, 255));
        g2.setStroke(new BasicStroke(10));
        VisualizeShapes.drawPolygon(bounds, true, 1.0, g2);
        VisualizeFiducial.drawLabel(center, "" + detector.getId(i), g2);
        System.out.println("Target ID = " + detector.getId(i));
        if (detector.is3D()) {
            detector.getFiducialToCamera(i, targetToSensor);
            VisualizeFiducial.drawCube(targetToSensor, intrinsic, detector.getWidth(i), 3, g2);
        }
    }
    ShowImages.showWindow(image, "Random Dot Markers", true);
}
Also used : CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) RandomDotDefinition(boofcv.io.fiducial.RandomDotDefinition) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) Polygon2D_F64(georegression.struct.shapes.Polygon2D_F64) Point2D_F64(georegression.struct.point.Point2D_F64) GrayU8(boofcv.struct.image.GrayU8) ConfigUchiyaMarker(boofcv.factory.fiducial.ConfigUchiyaMarker) Se3_F64(georegression.struct.se.Se3_F64)
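
If no saved intrinsic.yaml is available, a CameraPinholeBrown can be populated by hand and handed to the detector through the same setLensDistortion call. The values below are placeholders, not a real calibration.

// Placeholder intrinsics; substitute the numbers from your own camera calibration
var intrinsic = new CameraPinholeBrown(2);
intrinsic.fx = 800;
intrinsic.fy = 800;
intrinsic.cx = 640;
intrinsic.cy = 360;
intrinsic.width = 1280;
intrinsic.height = 720;
intrinsic.radial[0] = -0.10;
intrinsic.radial[1] = 0.02;
// Same call as in the example above
detector.setLensDistortion(LensDistortionFactory.narrow(intrinsic), intrinsic.width, intrinsic.height);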

Example 59 with CameraPinholeBrown

use of boofcv.struct.calib.CameraPinholeBrown in project BoofCV by lessthanoptimal.

the class TestBundlePinholeBrown method compareForward.

@Test
void compareForward() {
    CameraPinholeBrown cam = new CameraPinholeBrown(1);
    cam.fx = 300;
    cam.fy = 200;
    cam.cx = cam.cy = 400;
    cam.radial[0] = 0.01;
    cam.skew = 0.001;
    cam.t1 = 0.01;
    cam.t2 = -0.01;
    BundlePinholeBrown alg = BundleAdjustmentOps.convert(cam, (BundlePinholeBrown) null);
    Point2Transform2_F64 n2p = LensDistortionFactory.narrow(cam).distort_F64(false, true);
    Point2D_F64 found = new Point2D_F64();
    double X = 0.1, Y = -0.2, Z = 2;
    alg.project(X, Y, Z, found);
    Point2D_F64 expected = new Point2D_F64();
    n2p.compute(X / Z, Y / Z, expected);
    assertEquals(0.0, found.distance(expected), UtilEjml.TEST_F64);
}
Also used : CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) Point2D_F64(georegression.struct.point.Point2D_F64) Point2Transform2_F64(boofcv.struct.distort.Point2Transform2_F64) Test(org.junit.jupiter.api.Test)
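
BundleAdjustmentOps.convert also runs in the opposite direction, as Example 56 shows when it recovers a CameraPinholeBrown from the bundle adjustment scene. Here is a short sketch of round-tripping a camera model, assuming the convert overload used in Example 56 accepts a BundlePinholeBrown plus image width and height:

CameraPinholeBrown original = new CameraPinholeBrown(1);
original.fx = 300;
original.fy = 200;
original.cx = original.cy = 400;
original.radial[0] = 0.01;
original.t1 = 0.01;
original.t2 = -0.01;
// forward: pinhole-brown into the bundle adjustment model
BundlePinholeBrown bundle = BundleAdjustmentOps.convert(original, (BundlePinholeBrown) null);
// back again. Width and height are not stored in the bundle model, so they must be supplied.
CameraPinholeBrown recovered = BundleAdjustmentOps.convert(bundle, 800, 600, (CameraPinholeBrown) null);
assertEquals(original.fx, recovered.fx, UtilEjml.TEST_F64);
assertEquals(original.radial[0], recovered.radial[0], UtilEjml.TEST_F64);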

Example 60 with CameraPinholeBrown

use of boofcv.struct.calib.CameraPinholeBrown in project BoofCV by lessthanoptimal.

the class TestBundlePinholeBrown method variousRadialLengths.

@Test
void variousRadialLengths() {
    for (int i = 0; i <= 3; i++) {
        CameraPinholeBrown cam = new CameraPinholeBrown(i);
        cam.fx = 300;
        cam.fy = 200;
        cam.cx = cam.cy = 400;
        for (int j = 0; j < i; j++) {
            cam.radial[j] = 0.02 - j * 0.001;
        }
        cam.t1 = -0.001;
        cam.t2 = 0.002;
        BundlePinholeBrown alg = BundleAdjustmentOps.convert(cam, (BundlePinholeBrown) null);
        double[][] parameters = new double[1][alg.getIntrinsicCount()];
        alg.getIntrinsic(parameters[0], 0);
        new GenericChecksBundleAdjustmentCamera(alg, 0.02) {
        }.setParameters(parameters).checkAll();
    }
}
Also used : CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) Test(org.junit.jupiter.api.Test)

Aggregations

CameraPinholeBrown (boofcv.struct.calib.CameraPinholeBrown)99 Test (org.junit.jupiter.api.Test)62 Se3_F64 (georegression.struct.se.Se3_F64)39 GrayF32 (boofcv.struct.image.GrayF32)29 Point2D_F64 (georegression.struct.point.Point2D_F64)19 SimulatePlanarWorld (boofcv.simulation.SimulatePlanarWorld)16 LensDistortionBrown (boofcv.alg.distort.brown.LensDistortionBrown)14 ConvertBufferedImage (boofcv.io.image.ConvertBufferedImage)14 BufferedImage (java.awt.image.BufferedImage)14 ArrayList (java.util.ArrayList)12 CameraPinhole (boofcv.struct.calib.CameraPinhole)11 Point2Transform2_F64 (boofcv.struct.distort.Point2Transform2_F64)10 DMatrixRMaj (org.ejml.data.DMatrixRMaj)10 File (java.io.File)9 StereoParameters (boofcv.struct.calib.StereoParameters)6 GrayU8 (boofcv.struct.image.GrayU8)6 Point3D_F64 (georegression.struct.point.Point3D_F64)6 ImageBase (boofcv.struct.image.ImageBase)5 ImageType (boofcv.struct.image.ImageType)5 DogArray (org.ddogleg.struct.DogArray)5