Example 6 with PointCloudViewer

use of boofcv.visualize.PointCloudViewer in project BoofCV by lessthanoptimal.

The class ExampleMultiViewDenseReconstruction, method visualizeInPointCloud:

public static void visualizeInPointCloud(List<Point3D_F64> cloud, DogArray_I32 colorsRgb, SceneStructureMetric structure) {
    PointCloudViewer viewer = VisualizeData.createPointCloudViewer();
    viewer.setFog(true);
    viewer.setDotSize(1);
    viewer.setTranslationStep(0.15);
    viewer.addCloud((idx, p) -> p.setTo(cloud.get(idx)), colorsRgb::get, cloud.size());
    viewer.setCameraHFov(UtilAngle.radian(60));
    SwingUtilities.invokeLater(() -> {
        // Show where the cameras are
        BoofSwingUtil.visualizeCameras(structure, viewer);
        // Display the point cloud
        viewer.getComponent().setPreferredSize(new Dimension(600, 600));
        ShowImages.showWindow(viewer.getComponent(), "Dense Reconstruction Cloud", true);
    });
}
Also used : PointCloudViewer(boofcv.visualize.PointCloudViewer)
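The accessor-based addCloud overload above hands the viewer two callbacks instead of copying the data up front. A minimal pure-Java sketch of that callback pattern (the interface names here are hypothetical stand-ins, not BoofCV's actual boofcv.visualize types):

```java
import java.util.ArrayList;
import java.util.List;

public class AccessorSketch {
    // Hypothetical stand-ins for the viewer's accessor callbacks
    interface PointAccess { void get(int index, double[] point); }
    interface ColorAccess { int get(int index); }

    // Reads a cloud through the accessors, the way addCloud() consumes them
    static List<double[]> readAll(PointAccess pts, ColorAccess colors, int size) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            double[] p = new double[4];
            pts.get(i, p);          // caller fills x, y, z
            p[3] = colors.get(i);   // store the packed RGB alongside
            out.add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] cloud = {{1, 2, 3}, {4, 5, 6}};
        int[] rgb = {0xFF0000, 0x00FF00};
        List<double[]> result = readAll(
                (i, p) -> { p[0] = cloud[i][0]; p[1] = cloud[i][1]; p[2] = cloud[i][2]; },
                i -> rgb[i], cloud.length);
        System.out.println(result.size());          // 2
        System.out.println((int) result.get(1)[3]); // 65280
    }
}
```

The benefit of this design is that the cloud can live in any container (a List, a DogArray, a flat float array) without an intermediate copy; the viewer pulls each point on demand.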

Example 7 with PointCloudViewer

use of boofcv.visualize.PointCloudViewer in project BoofCV by lessthanoptimal.

The class ExamplePoseOfCalibrationTarget, method main:

public static void main(String[] args) {
    // Load camera calibration
    CameraPinholeBrown intrinsic = CalibrationIO.load(UtilIO.pathExample("calibration/mono/Sony_DSC-HX5V_Chess/intrinsic.yaml"));
    LensDistortionNarrowFOV lensDistortion = new LensDistortionBrown(intrinsic);
    // load the video file
    String fileName = UtilIO.pathExample("tracking/chessboard_SonyDSC_01.mjpeg");
    SimpleImageSequence<GrayF32> video = DefaultMediaManager.INSTANCE.openVideo(fileName, ImageType.single(GrayF32.class));
    // DefaultMediaManager.INSTANCE.openCamera(null, 640, 480, ImageType.single(GrayF32.class));
    // Let's use the FiducialDetector interface since it is much easier than coding up
    // the entire thing ourselves. Look at FiducialDetector's code if you want to understand how it works.
    CalibrationFiducialDetector<GrayF32> detector = FactoryFiducial.calibChessboardX(null, new ConfigGridDimen(4, 5, 0.03), GrayF32.class);
    detector.setLensDistortion(lensDistortion, intrinsic.width, intrinsic.height);
    // Get the 2D coordinate of calibration points for visualization purposes
    List<Point2D_F64> calibPts = detector.getCalibrationPoints();
    // Set up visualization
    PointCloudViewer viewer = VisualizeData.createPointCloudViewer();
    viewer.setCameraHFov(PerspectiveOps.computeHFov(intrinsic));
    viewer.setTranslationStep(0.01);
    // white background
    viewer.setBackgroundColor(0xFFFFFF);
    // Make the view more interesting: look at the scene from the side
    DMatrixRMaj rotY = ConvertRotation3D_F64.rotY(-Math.PI / 2.0, null);
    viewer.setCameraToWorld(new Se3_F64(rotY, new Vector3D_F64(0.75, 0, 1.25)).invert(null));
    var imagePanel = new ImagePanel(intrinsic.width, intrinsic.height);
    var viewerComponent = viewer.getComponent();
    viewerComponent.setPreferredSize(new Dimension(intrinsic.width, intrinsic.height));
    var gui = new PanelGridPanel(1, imagePanel, viewerComponent);
    gui.setMaximumSize(gui.getPreferredSize());
    ShowImages.showWindow(gui, "Calibration Target Pose", true);
    // Allows the user to click on the image and pause
    var pauseHelper = new MousePauseHelper(gui);
    // saves the target's center location
    var path = new ArrayList<Point3D_F64>();
    // Process each frame in the video sequence
    var targetToCamera = new Se3_F64();
    while (video.hasNext()) {
        // detect calibration points
        detector.detect(video.next());
        if (detector.totalFound() == 1) {
            detector.getFiducialToCamera(0, targetToCamera);
            // Visualization. Show a path with green points and the calibration points in black
            viewer.clearPoints();
            // The target's center is the origin in its own coordinate system; transform it into the camera frame
            Point3D_F64 center = new Point3D_F64();
            SePointOps_F64.transform(targetToCamera, center, center);
            path.add(center);
            for (Point3D_F64 p : path) {
                viewer.addPoint(p.x, p.y, p.z, 0x00FF00);
            }
            for (int j = 0; j < calibPts.size(); j++) {
                Point2D_F64 p = calibPts.get(j);
                Point3D_F64 p3 = new Point3D_F64(p.x, p.y, 0);
                SePointOps_F64.transform(targetToCamera, p3, p3);
                viewer.addPoint(p3.x, p3.y, p3.z, 0);
            }
        }
        imagePanel.setImage((BufferedImage) video.getGuiImage());
        viewerComponent.repaint();
        imagePanel.repaint();
        BoofMiscOps.pause(30);
        while (pauseHelper.isPaused()) {
            BoofMiscOps.pause(30);
        }
    }
}
Also used : ConfigGridDimen(boofcv.abst.fiducial.calib.ConfigGridDimen) Point3D_F64(georegression.struct.point.Point3D_F64) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) LensDistortionNarrowFOV(boofcv.alg.distort.LensDistortionNarrowFOV) DMatrixRMaj(org.ejml.data.DMatrixRMaj) ArrayList(java.util.ArrayList) Vector3D_F64(georegression.struct.point.Vector3D_F64) PanelGridPanel(boofcv.gui.PanelGridPanel) GrayF32(boofcv.struct.image.GrayF32) Point2D_F64(georegression.struct.point.Point2D_F64) PointCloudViewer(boofcv.visualize.PointCloudViewer) Se3_F64(georegression.struct.se.Se3_F64) ImagePanel(boofcv.gui.image.ImagePanel) MousePauseHelper(boofcv.gui.MousePauseHelper)
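The viewer pose above is built as a camera-to-world transform and then inverted with Se3_F64.invert(). A minimal sketch of what that inversion computes, written in pure Java rather than with georegression (which is what real code should use):

```java
public class Se3InvertSketch {
    // A rigid transform maps p' = R*p + T; its inverse maps p = R^T*(p' - T),
    // so Rinv = R^T and Tinv = -R^T*T. A stand-in for Se3_F64.invert().
    static void invert(double[][] R, double[] T, double[][] Rinv, double[] Tinv) {
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                Rinv[r][c] = R[c][r]; // a rotation's inverse is its transpose
        for (int r = 0; r < 3; r++) {
            Tinv[r] = 0;
            for (int c = 0; c < 3; c++)
                Tinv[r] -= Rinv[r][c] * T[c]; // -R^T * T
        }
    }

    public static void main(String[] args) {
        // rotY(-PI/2), matching ConvertRotation3D_F64.rotY(-Math.PI/2.0, null)
        double[][] R = {{0, 0, -1}, {0, 1, 0}, {1, 0, 0}};
        double[] T = {0.75, 0, 1.25};
        double[][] Rinv = new double[3][3];
        double[] Tinv = new double[3];
        invert(R, T, Rinv, Tinv);

        // Round trip: apply the forward transform to a point, then the inverse
        double[] p = {1, 2, 3};
        double[] q = new double[3], back = new double[3];
        for (int r = 0; r < 3; r++)
            q[r] = R[r][0]*p[0] + R[r][1]*p[1] + R[r][2]*p[2] + T[r];
        for (int r = 0; r < 3; r++)
            back[r] = Rinv[r][0]*q[0] + Rinv[r][1]*q[1] + Rinv[r][2]*q[2] + Tinv[r];
        System.out.printf("%.1f %.1f %.1f%n", back[0], back[1], back[2]); // 1.0 2.0 3.0
    }
}
```

The inversion is why the example can describe the pose intuitively (camera sitting off to the side, looking at the target) and still hand the viewer the transform direction it expects.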

Example 8 with PointCloudViewer

use of boofcv.visualize.PointCloudViewer in project BoofCV by lessthanoptimal.

The class ExampleMultiBaselineStereo, method main:

public static void main(String[] args) {
    // Compute a sparse reconstruction. This will give us intrinsic and extrinsic for all views
    var example = new ExampleMultiViewSparseReconstruction();
    // Specifies the "center" frame to use
    int centerViewIdx = 15;
    example.compute("tree_snow_01.mp4", true);
    // example.compute("ditch_02.mp4", true);
    // example.compute("holiday_display_01.mp4"", true);
    // example.compute("log_building_02.mp4"", true);
    // example.compute("drone_park_01.mp4", false);
    // example.compute("stone_sign.mp4", true);
    // We need a way to load images based on their ID. In this particular case the ID encodes the array index.
    var imageLookup = new LookUpImageFilesByIndex(example.imageFiles);
    // Next we tell it which view to use as the "center", which acts as the common view for all disparity images.
    // The process of selecting the best views to use as centers is a problem all its own. To keep things
    // simple we just pick a frame.
    SceneWorkingGraph.View center = example.working.getAllViews().get(centerViewIdx);
    // The final scene refined by bundle adjustment is created by the working graph. However, the 3D relationship
    // between views is contained in the pairwise graph. A View in the working graph has a reference to the view
    // in the pairwise graph. Using that we will find all connected views that have a 3D relationship
    var pairedViewIdxs = new DogArray_I32();
    var sbaIndexToImageID = new TIntObjectHashMap<String>();
    // This relationship between pairwise and working graphs might seem (and is) a bit convoluted. The Pairwise
    // graph is the initial crude sketch of what might be connected. The working graph is an intermediate
    // data structure for computing the metric scene. SBA is a refinement of the working graph.
    // Iterate through all connected views in the pairwise graph and mark their indexes in the working graph
    center.pview.connections.forEach((m) -> {
        // if there isn't a 3D relationship just skip it
        if (!m.is3D)
            return;
        String connectedID = m.other(center.pview).id;
        SceneWorkingGraph.View connected = example.working.views.get(connectedID);
        // Make sure the pairwise view exists in the working graph too
        if (connected == null)
            return;
        // Add this view to the index to name/ID lookup table
        sbaIndexToImageID.put(connected.index, connectedID);
        // Note that this view is one which acts as the second image in the stereo pair
        pairedViewIdxs.add(connected.index);
    });
    // Add the center camera image to the ID look up table
    sbaIndexToImageID.put(centerViewIdx, center.pview.id);
    // Configure the stereo disparity algorithm which is used
    var configDisparity = new ConfigDisparityBMBest5();
    configDisparity.validateRtoL = 1;
    configDisparity.texture = 0.5;
    configDisparity.regionRadiusX = configDisparity.regionRadiusY = 4;
    configDisparity.disparityRange = 120;
    // This is the actual MBS algorithm mentioned previously. It selects the best disparity for each pixel
    // in the original image using a median filter.
    var multiBaseline = new MultiBaselineStereoIndependent<>(imageLookup, ImageType.SB_U8);
    multiBaseline.setStereoDisparity(FactoryStereoDisparity.blockMatchBest5(configDisparity, GrayU8.class, GrayF32.class));
    // Print out verbose debugging and profile information
    multiBaseline.setVerbose(System.out, null);
    multiBaseline.setVerboseProfiling(System.out);
    // Improve stereo by removing small regions, which tends to be noise. Consider adjusting the region size.
    multiBaseline.setDisparitySmoother(FactoryStereoDisparity.removeSpeckle(null, GrayF32.class));
    // Print out debugging information from the smoother
    // Objects.requireNonNull(multiBaseline.getDisparitySmoother()).setVerbose(System.out,null);
    // Creates a list where you can switch between different images/visualizations
    var listDisplay = new ListDisplayPanel();
    listDisplay.setPreferredSize(new Dimension(1000, 300));
    ShowImages.showWindow(listDisplay, "Intermediate Results", true);
    // We will display intermediate results as they come in
    multiBaseline.setListener((leftView, rightView, rectLeft, rectRight, disparity, mask, parameters, rect) -> {
        // Visualize the rectified stereo pair. You can interact with this window and verify
        // that the y-axis is aligned
        var rectified = new RectifiedPairPanel(true);
        rectified.setImages(ConvertBufferedImage.convertTo(rectLeft, null), ConvertBufferedImage.convertTo(rectRight, null));
        // Cleans up the disparity image by zeroing out pixels that are outside the original image bounds
        RectifyImageOps.applyMask(disparity, mask, 0);
        // Display the colorized disparity
        BufferedImage colorized = VisualizeImageData.disparity(disparity, null, parameters.disparityRange, 0);
        SwingUtilities.invokeLater(() -> {
            listDisplay.addItem(rectified, "Rectified " + leftView + " " + rightView);
            listDisplay.addImage(colorized, leftView + " " + rightView);
        });
    });
    // Process the images and compute a single combined disparity image
    if (!multiBaseline.process(example.scene, center.index, pairedViewIdxs, sbaIndexToImageID::get)) {
        throw new RuntimeException("Failed to fuse stereo views");
    }
    // Extract the point cloud from the fused disparity image
    GrayF32 fusedDisparity = multiBaseline.getFusedDisparity();
    DisparityParameters fusedParam = multiBaseline.getFusedParam();
    BufferedImage colorizedDisp = VisualizeImageData.disparity(fusedDisparity, null, fusedParam.disparityRange, 0);
    ShowImages.showWindow(colorizedDisp, "Fused Disparity");
    // Now compute the point cloud it represents and the color of each pixel.
    // The fused image is in the original image coordinates instead of rectified image coordinates,
    // which makes extracting color much easier.
    var cloud = new DogArray<>(Point3D_F64::new);
    var cloudRgb = new DogArray_I32(cloud.size);
    // Load the center image in color
    var colorImage = new InterleavedU8(1, 1, 3);
    imageLookup.loadImage(center.pview.id, colorImage);
    // Since the fused image is in the original (i.e. distorted) pixel coordinates and is not rectified,
    // that needs to be taken into account by undistorting the image to create the point cloud.
    CameraPinholeBrown intrinsic = BundleAdjustmentOps.convert(example.scene.cameras.get(center.cameraIdx).model, colorImage.width, colorImage.height, null);
    Point2Transform2_F64 pixel_to_norm = new LensDistortionBrown(intrinsic).distort_F64(true, false);
    MultiViewStereoOps.disparityToCloud(fusedDisparity, fusedParam, new PointToPixelTransform_F64(pixel_to_norm), (pixX, pixY, x, y, z) -> {
        cloud.grow().setTo(x, y, z);
        cloudRgb.add(colorImage.get24(pixX, pixY));
    });
    // Configure the point cloud viewer
    PointCloudViewer pcv = VisualizeData.createPointCloudViewer();
    pcv.setCameraHFov(UtilAngle.radian(70));
    pcv.setTranslationStep(0.15);
    pcv.addCloud(cloud.toList(), cloudRgb.data);
    // pcv.setColorizer(new SingleAxisRgb.Z().fperiod(30.0));
    JComponent viewer = pcv.getComponent();
    viewer.setPreferredSize(new Dimension(600, 600));
    ShowImages.showWindow(viewer, "Point Cloud", true);
    System.out.println("Done");
}
Also used : Point3D_F64(georegression.struct.point.Point3D_F64) InterleavedU8(boofcv.struct.image.InterleavedU8) ListDisplayPanel(boofcv.gui.ListDisplayPanel) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) ConfigDisparityBMBest5(boofcv.factory.disparity.ConfigDisparityBMBest5) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) RectifiedPairPanel(boofcv.gui.stereo.RectifiedPairPanel) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) PointCloudViewer(boofcv.visualize.PointCloudViewer) PointToPixelTransform_F64(boofcv.struct.distort.PointToPixelTransform_F64) LookUpImageFilesByIndex(boofcv.io.image.LookUpImageFilesByIndex) GrayU8(boofcv.struct.image.GrayU8) SceneWorkingGraph(boofcv.alg.structure.SceneWorkingGraph) MultiBaselineStereoIndependent(boofcv.alg.mvs.MultiBaselineStereoIndependent) Point2Transform2_F64(boofcv.struct.distort.Point2Transform2_F64) DogArray_I32(org.ddogleg.struct.DogArray_I32) DogArray(org.ddogleg.struct.DogArray) GrayF32(boofcv.struct.image.GrayF32) TIntObjectHashMap(gnu.trove.map.hash.TIntObjectHashMap) DisparityParameters(boofcv.alg.mvs.DisparityParameters)
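Block matching produces a disparity image, and depth follows from the standard rectified-stereo relation. This is the textbook formula rather than BoofCV's internal code, and the numbers below are made up for illustration:

```java
public class DisparityDepthSketch {
    // For a rectified stereo pair with focal length f (pixels) and baseline B
    // (meters), a pixel with measured disparity d lies at depth
    // z = f * B / (d + minDisparity). Larger disparity means closer; this is
    // why disparityRange bounds the nearest depth the matcher can represent.
    static double disparityToDepth(double f, double baseline, double d, double minDisparity) {
        return f * baseline / (d + minDisparity);
    }

    public static void main(String[] args) {
        // f = 500 px, B = 0.1 m, disparity 25 px with minDisparity 0
        System.out.println(disparityToDepth(500, 0.1, 25, 0)); // 2.0 (meters)
    }
}
```

The same relation explains the disparityRange = 120 setting above: it caps how large a disparity is searched, and therefore how close to the camera a surface can be before matching fails.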

Example 9 with PointCloudViewer

use of boofcv.visualize.PointCloudViewer in project BoofCV by lessthanoptimal.

The class ExampleBundleAdjustment, method visualizeInPointCloud:

private static void visualizeInPointCloud(SceneStructureMetric structure) {
    List<Point3D_F64> cloudXyz = new ArrayList<>();
    Point3D_F64 world = new Point3D_F64();
    Point3D_F64 camera = new Point3D_F64();
    for (int i = 0; i < structure.points.size; i++) {
        // Get 3D location
        SceneStructureCommon.Point p = structure.points.get(i);
        p.get(world);
        // Transform into an arbitrary view that observes the point. Note that the
        // camera-frame coordinate is computed but not used below; the world
        // coordinate is what gets added to the cloud.
        for (int j = 0; j < p.views.size; j++) {
            int viewIdx = p.views.get(j);
            SePointOps_F64.transform(structure.getParentToView(viewIdx), world, camera);
            cloudXyz.add(world.copy());
            break;
        }
    }
    PointCloudViewer viewer = VisualizeData.createPointCloudViewer();
    viewer.setFog(true);
    // makes it easier to see points without RGB color
    viewer.setColorizer(new SingleAxisRgb.Z().fperiod(20));
    // done for visualization to make it easier to separate objects with the fog
    viewer.setClipDistance(70);
    viewer.setDotSize(1);
    viewer.setTranslationStep(0.05);
    viewer.addCloud(cloudXyz);
    viewer.setCameraHFov(UtilAngle.radian(60));
    // Give it a good initial pose. This was determined through trial and error
    var cameraToWorld = new Se3_F64();
    cameraToWorld.T.setTo(-10.848385, -6.957626, 2.9747992);
    ConvertRotation3D_F64.eulerToMatrix(EulerType.XYZ, -2.734419, -0.27446, -0.24310, cameraToWorld.R);
    viewer.setCameraToWorld(cameraToWorld);
    SwingUtilities.invokeLater(() -> {
        viewer.getComponent().setPreferredSize(new Dimension(600, 600));
        ShowImages.showWindow(viewer.getComponent(), "Refined Scene", true);
    });
}
Also used : Point3D_F64(georegression.struct.point.Point3D_F64) ArrayList(java.util.ArrayList) PointCloudViewer(boofcv.visualize.PointCloudViewer) SceneStructureCommon(boofcv.abst.geo.bundle.SceneStructureCommon) Se3_F64(georegression.struct.se.Se3_F64)
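SePointOps_F64.transform() applies a rigid transform to a point. As a minimal pure-Java stand-in for what it computes (real code should use georegression, not this sketch):

```java
public class TransformSketch {
    // What SePointOps_F64.transform(parentToView, world, camera) computes:
    // camera = R * world + T, where (R, T) is the parent-to-view transform.
    static double[] transform(double[][] R, double[] T, double[] p) {
        double[] out = new double[3];
        for (int r = 0; r < 3; r++)
            out[r] = R[r][0]*p[0] + R[r][1]*p[1] + R[r][2]*p[2] + T[r];
        return out;
    }

    public static void main(String[] args) {
        // Identity rotation with the camera translated 5 units along z:
        // the point keeps its x,y and its z is shifted by 5
        double[][] identity = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] T = {0, 0, 5};
        double[] camera = transform(identity, T, new double[]{1, 2, 3});
        System.out.println(camera[2]); // 8.0
    }
}
```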

Example 10 with PointCloudViewer

use of boofcv.visualize.PointCloudViewer in project BoofCV by lessthanoptimal.

The class ExampleDepthPointCloud, method main:

public static void main(String[] args) {
    String nameRgb = UtilIO.pathExample("kinect/basket/basket_rgb.png");
    String nameDepth = UtilIO.pathExample("kinect/basket/basket_depth.png");
    String nameCalib = UtilIO.pathExample("kinect/basket/visualdepth.yaml");
    VisualDepthParameters param = CalibrationIO.load(nameCalib);
    BufferedImage buffered = UtilImageIO.loadImageNotNull(nameRgb);
    Planar<GrayU8> rgb = ConvertBufferedImage.convertFromPlanar(buffered, null, true, GrayU8.class);
    GrayU16 depth = ConvertBufferedImage.convertFrom(UtilImageIO.loadImageNotNull(nameDepth), null, GrayU16.class);
    var cloud = new DogArray<>(Point3D_F64::new);
    var cloudColor = new DogArray<>(() -> new int[3]);
    VisualDepthOps.depthTo3D(param.visualParam, rgb, depth, cloud, cloudColor);
    PointCloudViewer viewer = VisualizeData.createPointCloudViewer();
    viewer.setCameraHFov(PerspectiveOps.computeHFov(param.visualParam));
    viewer.setTranslationStep(15);
    for (int i = 0; i < cloud.size; i++) {
        Point3D_F64 p = cloud.get(i);
        int[] color = cloudColor.get(i);
        int c = (color[0] << 16) | (color[1] << 8) | color[2];
        viewer.addPoint(p.x, p.y, p.z, c);
    }
    viewer.getComponent().setPreferredSize(new Dimension(rgb.width, rgb.height));
    // ---------- Display depth image
    // use the actual max value in the image to maximize its appearance
    int maxValue = ImageStatistics.max(depth);
    BufferedImage depthOut = VisualizeImageData.disparity(depth, null, maxValue, 0);
    ShowImages.showWindow(depthOut, "Depth Image", true);
    // ---------- Display colorized point cloud
    ShowImages.showWindow(viewer.getComponent(), "Point Cloud", true);
    System.out.println("Total points = " + cloud.size);
}
Also used : VisualDepthParameters(boofcv.struct.calib.VisualDepthParameters) Point3D_F64(georegression.struct.point.Point3D_F64) GrayU16(boofcv.struct.image.GrayU16) DogArray(org.ddogleg.struct.DogArray) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) PointCloudViewer(boofcv.visualize.PointCloudViewer) GrayU8(boofcv.struct.image.GrayU8)
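The loop above packs the three 8-bit color channels into a single 0xRRGGBB int before handing it to addPoint(). A self-contained sketch of that packing and its inverse:

```java
public class PackRgbSketch {
    // Pack three 8-bit channels into one int in 0xRRGGBB order, the color
    // format PointCloudViewer.addPoint() consumes
    static int packRgb(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Recover the channels by shifting and masking each byte
    static int[] unpackRgb(int c) {
        return new int[]{(c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF};
    }

    public static void main(String[] args) {
        int c = packRgb(0x12, 0x34, 0x56);
        System.out.println(Integer.toHexString(c)); // 123456
        int[] back = unpackRgb(c);
        System.out.println(back[0] + "," + back[1] + "," + back[2]); // 18,52,86
    }
}
```

This is the same layout InterleavedU8.get24() returns in Example 8, which is why its value can be passed to the viewer without any conversion.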

Aggregations

PointCloudViewer (boofcv.visualize.PointCloudViewer)10 Point3D_F64 (georegression.struct.point.Point3D_F64)5 DogArray (org.ddogleg.struct.DogArray)4 GrayF32 (boofcv.struct.image.GrayF32)3 Se3_F64 (georegression.struct.se.Se3_F64)3 ArrayList (java.util.ArrayList)3 LensDistortionBrown (boofcv.alg.distort.brown.LensDistortionBrown)2 DisparityParameters (boofcv.alg.mvs.DisparityParameters)2 ConvertBufferedImage (boofcv.io.image.ConvertBufferedImage)2 LookUpImageFilesByIndex (boofcv.io.image.LookUpImageFilesByIndex)2 CameraPinholeBrown (boofcv.struct.calib.CameraPinholeBrown)2 GrayU8 (boofcv.struct.image.GrayU8)2 BufferedImage (java.awt.image.BufferedImage)2 DogArray_I32 (org.ddogleg.struct.DogArray_I32)2 DMatrixRMaj (org.ejml.data.DMatrixRMaj)2 ConfigGridDimen (boofcv.abst.fiducial.calib.ConfigGridDimen)1 SceneStructureCommon (boofcv.abst.geo.bundle.SceneStructureCommon)1 DisparityToColorPointCloud (boofcv.alg.cloud.DisparityToColorPointCloud)1 PointCloudWriter (boofcv.alg.cloud.PointCloudWriter)1 LensDistortionNarrowFOV (boofcv.alg.distort.LensDistortionNarrowFOV)1