
Example 16 with LensDistortionBrown

use of boofcv.alg.distort.brown.LensDistortionBrown in project BoofCV by lessthanoptimal.

The class TestBaseDetectFiducialSquare, method checkDetectRender.

private void checkDetectRender(int width, int height, CameraPinholeBrown intrinsic, boolean applyLens) {
    SimulatePlanarWorld simulator = new SimulatePlanarWorld();
    simulator.setCamera(intrinsic);
    double simulatedTargetWidth = 0.4;
    Se3_F64 markerToWorld = SpecialEuclideanOps_F64.eulerXyz(0, 0, 0.32, 0, Math.PI, 0, null);
    GrayF32 pattern = new GrayF32(100, 100);
    ImageMiscOps.fill(pattern, 0);
    ImageMiscOps.fillRectangle(pattern, 255, 25, 25, 50, 50);
    simulator.setBackground(255);
    simulator.resetScene();
    simulator.addSurface(markerToWorld, simulatedTargetWidth, pattern);
    simulator.render();
    // ShowImages.showWindow(simulator.getOutput(),"Simulated");
    // BoofMiscOps.sleep(10000);
    GrayU8 grayU8 = new GrayU8(width, height);
    ConvertImage.convert(simulator.getOutput(), grayU8);
    DetectCorner detector = new DetectCorner();
    if (applyLens)
        detector.configure(new LensDistortionBrown(intrinsic), width, height, false);
    detector.process(grayU8);
    assertEquals(1, detector.getFound().size);
}
Also used : SimulatePlanarWorld(boofcv.simulation.SimulatePlanarWorld) GrayF32(boofcv.struct.image.GrayF32) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) GrayU8(boofcv.struct.image.GrayU8) Se3_F64(georegression.struct.se.Se3_F64)
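The test above passes a LensDistortionBrown built from the pinhole intrinsics into the detector so it can account for lens distortion. As a rough, self-contained illustration of the Brown (radial-tangential) model that this class encapsulates, here is the forward mapping from a normalized image coordinate to a distorted pixel. The class and method names below are made up for this sketch and are not BoofCV API:

```java
// Self-contained sketch of the Brown radial-tangential distortion model.
// BrownDistortSketch and distortNormToPixel are illustrative names, not BoofCV API.
public class BrownDistortSketch {
    // Applies radial (k1, k2) and tangential (t1, t2) distortion to a normalized
    // image coordinate (x, y), then projects into pixels with pinhole intrinsics.
    public static double[] distortNormToPixel(double x, double y,
                                              double fx, double fy, double cx, double cy,
                                              double k1, double k2, double t1, double t2) {
        double r2 = x*x + y*y;
        double radial = 1.0 + k1*r2 + k2*r2*r2;
        double xd = x*radial + 2.0*t1*x*y + t2*(r2 + 2.0*x*x);
        double yd = y*radial + t1*(r2 + 2.0*y*y) + 2.0*t2*x*y;
        return new double[]{fx*xd + cx, fy*yd + cy};
    }

    public static void main(String[] args) {
        // With all distortion coefficients zero this reduces to plain pinhole projection
        double[] p = distortNormToPixel(0.1, -0.05, 500, 500, 320, 240, 0, 0, 0, 0);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

With nonzero coefficients the same call bends the projection: for example, k1 = 0.1 pushes the point at x = 0.1 outward by the radial factor 1 + k1*r².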

Example 17 with LensDistortionBrown

use of boofcv.alg.distort.brown.LensDistortionBrown in project BoofCV by lessthanoptimal.

The class ExampleFiducialHamming, method main.

public static void main(String[] args) {
    String directory = UtilIO.pathExample("fiducial/square_hamming/aruco_25h7");
    // load the lens distortion parameters and the input image
    CameraPinholeBrown param = CalibrationIO.load(new File(directory, "intrinsic.yaml"));
    LensDistortionNarrowFOV lensDistortion = new LensDistortionBrown(param);
    // You need to create a different configuration for each dictionary type
    ConfigHammingMarker configMarker = ConfigHammingMarker.loadDictionary(HammingDictionary.ARUCO_MIP_25h7);
    FiducialDetector<GrayF32> detector = FactoryFiducial.squareHamming(configMarker, /*detector*/ null, GrayF32.class);
    // Provide it lens parameters so that a 3D pose estimate is possible
    detector.setLensDistortion(lensDistortion, param.width, param.height);
    // Load and process all example images
    ListDisplayPanel gui = new ListDisplayPanel();
    for (int imageID = 1; imageID <= 3; imageID++) {
        String name = String.format("image%02d.jpg", imageID);
        System.out.println("processing: " + name);
        // Load the image
        BufferedImage buffered = UtilImageIO.loadImageNotNull(new File(directory, name).getPath());
        // Convert to a BoofCV format
        GrayF32 input = ConvertBufferedImage.convertFrom(buffered, (GrayF32) null);
        // Run the detector
        detector.detect(input);
        // Render a 3D cube on top of all detections
        Graphics2D g2 = buffered.createGraphics();
        var targetToSensor = new Se3_F64();
        var locationPixel = new Point2D_F64();
        var bounds = new Polygon2D_F64();
        for (int i = 0; i < detector.totalFound(); i++) {
            detector.getCenter(i, locationPixel);
            detector.getBounds(i, bounds);
            g2.setColor(new Color(50, 50, 255));
            g2.setStroke(new BasicStroke(10));
            VisualizeShapes.drawPolygon(bounds, true, 1.0, g2);
            if (detector.hasID())
                System.out.println("Target ID = " + detector.getId(i));
            if (detector.hasMessage())
                System.out.println("Message   = " + detector.getMessage(i));
            System.out.println("2D Image Location = " + locationPixel);
            if (detector.is3D()) {
                detector.getFiducialToCamera(i, targetToSensor);
                System.out.println("3D Location:");
                System.out.println(targetToSensor);
                VisualizeFiducial.drawCube(targetToSensor, param, detector.getWidth(i), 3, g2);
                VisualizeFiducial.drawLabelCenter(targetToSensor, param, "" + detector.getId(i), g2);
            } else {
                VisualizeFiducial.drawLabel(locationPixel, "" + detector.getId(i), g2);
            }
        }
        gui.addImage(buffered, name, ScaleOptions.ALL);
    }
    ShowImages.showWindow(gui, "Example Fiducial Hamming", true);
}
Also used : ListDisplayPanel(boofcv.gui.ListDisplayPanel) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) ConfigHammingMarker(boofcv.factory.fiducial.ConfigHammingMarker) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) LensDistortionNarrowFOV(boofcv.alg.distort.LensDistortionNarrowFOV) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) Polygon2D_F64(georegression.struct.shapes.Polygon2D_F64) GrayF32(boofcv.struct.image.GrayF32) Point2D_F64(georegression.struct.point.Point2D_F64) File(java.io.File) Se3_F64(georegression.struct.se.Se3_F64)

Example 18 with LensDistortionBrown

use of boofcv.alg.distort.brown.LensDistortionBrown in project BoofCV by lessthanoptimal.

The class ExampleFiducialBinary, method main.

public static void main(String[] args) {
    String directory = UtilIO.pathExample("fiducial/binary");
    // load the lens distortion parameters and the input image
    CameraPinholeBrown param = CalibrationIO.load(new File(directory, "intrinsic.yaml"));
    var lensDistortion = new LensDistortionBrown(param);
    BufferedImage input = UtilImageIO.loadImageNotNull(directory, "image0000.jpg");
    // BufferedImage input = UtilImageIO.loadImageNotNull(directory, "image0001.jpg");
    // BufferedImage input = UtilImageIO.loadImageNotNull(directory, "image0002.jpg");
    GrayF32 original = ConvertBufferedImage.convertFrom(input, true, ImageType.single(GrayF32.class));
    // Detect the fiducial
    FiducialDetector<GrayF32> detector = FactoryFiducial.squareBinary(new ConfigFiducialBinary(0.1), ConfigThreshold.local(ThresholdType.LOCAL_MEAN, 21), GrayF32.class);
    // new ConfigFiducialBinary(0.1), ConfigThreshold.fixed(100),GrayF32.class);
    detector.setLensDistortion(lensDistortion, param.width, param.height);
    detector.detect(original);
    // print the results
    Graphics2D g2 = input.createGraphics();
    var targetToSensor = new Se3_F64();
    var locationPixel = new Point2D_F64();
    var bounds = new Polygon2D_F64();
    for (int i = 0; i < detector.totalFound(); i++) {
        detector.getCenter(i, locationPixel);
        detector.getBounds(i, bounds);
        g2.setColor(new Color(50, 50, 255));
        g2.setStroke(new BasicStroke(10));
        VisualizeShapes.drawPolygon(bounds, true, 1.0, g2);
        if (detector.hasID())
            System.out.println("Target ID = " + detector.getId(i));
        if (detector.hasMessage())
            System.out.println("Message   = " + detector.getMessage(i));
        System.out.println("2D Image Location = " + locationPixel);
        if (detector.is3D()) {
            detector.getFiducialToCamera(i, targetToSensor);
            System.out.println("3D Location:");
            System.out.println(targetToSensor);
            VisualizeFiducial.drawCube(targetToSensor, param, detector.getWidth(i), 3, g2);
            VisualizeFiducial.drawLabelCenter(targetToSensor, param, "" + detector.getId(i), g2);
        } else {
            VisualizeFiducial.drawLabel(locationPixel, "" + detector.getId(i), g2);
        }
    }
    ShowImages.showWindow(input, "Fiducials", true);
}
Also used : ConfigFiducialBinary(boofcv.factory.fiducial.ConfigFiducialBinary) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) Polygon2D_F64(georegression.struct.shapes.Polygon2D_F64) GrayF32(boofcv.struct.image.GrayF32) Point2D_F64(georegression.struct.point.Point2D_F64) File(java.io.File) Se3_F64(georegression.struct.se.Se3_F64)

Example 19 with LensDistortionBrown

use of boofcv.alg.distort.brown.LensDistortionBrown in project BoofCV by lessthanoptimal.

The class ColorizeMultiViewStereoResults, method processMvsCloud.

/**
 * Extracts color information for the point cloud on a view by view basis.
 *
 * @param scene (Input) Geometric description of the scene
 * @param mvs (Input) Contains the 3D point cloud
 * @param indexColor (Output) RGB values are passed through to this function.
 */
public void processMvsCloud(SceneStructureMetric scene, MultiViewStereoFromKnownSceneStructure<?> mvs, BoofLambdas.IndexRgbConsumer indexColor) {
    // Get a list of views that were used as "centers"
    List<ViewInfo> centers = mvs.getListCenters();
    // Get the point cloud
    DogArray<Point3D_F64> cloud = mvs.getDisparityCloud().getCloud();
    // Step through each "center" view
    for (int centerIdx = 0; centerIdx < centers.size(); centerIdx++) {
        ViewInfo center = centers.get(centerIdx);
        if (!lookupImages.loadImage(center.relations.id, image))
            throw new RuntimeException("Couldn't find image: " + center.relations.id);
        // Which points came from this view/center
        int idx0 = mvs.getDisparityCloud().viewPointIdx.get(centerIdx);
        int idx1 = mvs.getDisparityCloud().viewPointIdx.get(centerIdx + 1);
        // Setup the camera projection model using bundle adjustment model directly
        BundleAdjustmentOps.convert(scene.getViewCamera(center.metric).model, image.width, image.height, intrinsic);
        Point2Transform2_F64 norm_to_pixel = new LensDistortionBrown(intrinsic).distort_F64(false, true);
        // Get the transform from world/cloud to this view
        scene.getWorldToView(center.metric, world_to_view, tmp);
        // Grab the colorized points from this view
        colorizer.process3(image, cloud.toList(), idx0, idx1, world_to_view, norm_to_pixel, indexColor);
    }
}
Also used : Point3D_F64(georegression.struct.point.Point3D_F64) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) Point2Transform2_F64(boofcv.struct.distort.Point2Transform2_F64) ViewInfo(boofcv.alg.mvs.MultiViewStereoFromKnownSceneStructure.ViewInfo)
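The colorizer.process3 call above boils down to transforming each cloud point into the view, projecting it through norm_to_pixel, and sampling the pixel it lands on. A minimal, self-contained sketch of that projection step, assuming a distortion-free pinhole model; CloudColorSketch and projectToPixel are made-up names, not BoofCV API:

```java
// Sketch of projecting a world point into a view's pixel coordinates, the core
// of per-point color lookup. Illustrative names, not BoofCV API.
public class CloudColorSketch {
    // R is a row-major 3x3 world-to-view rotation, T the translation.
    // Returns null when the point is behind the camera (no color available).
    public static double[] projectToPixel(double[] R, double[] T,
                                          double px, double py, double pz,
                                          double fx, double fy, double cx, double cy) {
        double x = R[0]*px + R[1]*py + R[2]*pz + T[0];
        double y = R[3]*px + R[4]*py + R[5]*pz + T[1];
        double z = R[6]*px + R[7]*py + R[8]*pz + T[2];
        if (z <= 0) return null; // behind the camera
        // Pinhole projection of the normalized coordinate (x/z, y/z)
        return new double[]{fx*(x/z) + cx, fy*(y/z) + cy};
    }

    public static void main(String[] args) {
        double[] ident = {1,0,0, 0,1,0, 0,0,1};
        double[] pix = projectToPixel(ident, new double[]{0,0,0},
                0.2, -0.1, 2.0, 500, 500, 320, 240);
        System.out.println(pix[0] + ", " + pix[1]);
    }
}
```

In the real method the pinhole step is replaced by the distorting norm_to_pixel transform built from LensDistortionBrown, so colors are sampled from the original (distorted) image.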

Example 20 with LensDistortionBrown

use of boofcv.alg.distort.brown.LensDistortionBrown in project BoofCV by lessthanoptimal.

The class ExampleMultiBaselineStereo, method main.

public static void main(String[] args) {
    // Compute a sparse reconstruction. This will give us intrinsic and extrinsic for all views
    var example = new ExampleMultiViewSparseReconstruction();
    // Specifies the "center" frame to use
    int centerViewIdx = 15;
    example.compute("tree_snow_01.mp4", true);
    // example.compute("ditch_02.mp4", true);
    // example.compute("holiday_display_01.mp4"", true);
    // example.compute("log_building_02.mp4"", true);
    // example.compute("drone_park_01.mp4", false);
    // example.compute("stone_sign.mp4", true);
    // We need a way to load images based on their ID. In this particular case the ID encodes the array index.
    var imageLookup = new LookUpImageFilesByIndex(example.imageFiles);
    // Next we tell it which view to use as the "center", which acts as the common view for all disparity images.
    // The process of selecting the best views to use as centers is a problem all its own. To keep things
    // simple we just pick a frame.
    SceneWorkingGraph.View center = example.working.getAllViews().get(centerViewIdx);
    // The final scene refined by bundle adjustment is created by the Working graph. However, the 3D relationship
    // between views is contained in the pairwise graph. A View in the working graph has a reference to the view
    // in the pairwise graph. Using that we will find all connected views that have a 3D relationship
    var pairedViewIdxs = new DogArray_I32();
    var sbaIndexToImageID = new TIntObjectHashMap<String>();
    // This relationship between pairwise and working graphs might seem (and is) a bit convoluted. The Pairwise
    // graph is the initial crude sketch of what might be connected. The working graph is an intermediate
    // data structure for computing the metric scene. SBA is a refinement of the working graph.
    // Iterate through all connected views in the pairwise graph and mark their indexes in the working graph
    center.pview.connections.forEach((m) -> {
        // if there isn't a 3D relationship just skip it
        if (!m.is3D)
            return;
        String connectedID = m.other(center.pview).id;
        SceneWorkingGraph.View connected = example.working.views.get(connectedID);
        // Make sure the pairwise view exists in the working graph too
        if (connected == null)
            return;
        // Add this view to the index to name/ID lookup table
        sbaIndexToImageID.put(connected.index, connectedID);
        // Note that this view is one which acts as the second image in the stereo pair
        pairedViewIdxs.add(connected.index);
    });
    // Add the center camera image to the ID look up table
    sbaIndexToImageID.put(centerViewIdx, center.pview.id);
    // Configure the stereo disparity algorithm which is used
    var configDisparity = new ConfigDisparityBMBest5();
    configDisparity.validateRtoL = 1;
    configDisparity.texture = 0.5;
    configDisparity.regionRadiusX = configDisparity.regionRadiusY = 4;
    configDisparity.disparityRange = 120;
    // This is the actual MBS algorithm mentioned previously. It selects the best disparity for each pixel
    // in the original image using a median filter.
    var multiBaseline = new MultiBaselineStereoIndependent<>(imageLookup, ImageType.SB_U8);
    multiBaseline.setStereoDisparity(FactoryStereoDisparity.blockMatchBest5(configDisparity, GrayU8.class, GrayF32.class));
    // Print out verbose debugging and profile information
    multiBaseline.setVerbose(System.out, null);
    multiBaseline.setVerboseProfiling(System.out);
    // Improve stereo by removing small regions, which tends to be noise. Consider adjusting the region size.
    multiBaseline.setDisparitySmoother(FactoryStereoDisparity.removeSpeckle(null, GrayF32.class));
    // Print out debugging information from the smoother
    // Objects.requireNonNull(multiBaseline.getDisparitySmoother()).setVerbose(System.out,null);
    // Creates a list where you can switch between different images/visualizations
    var listDisplay = new ListDisplayPanel();
    listDisplay.setPreferredSize(new Dimension(1000, 300));
    ShowImages.showWindow(listDisplay, "Intermediate Results", true);
    // We will display intermediate results as they come in
    multiBaseline.setListener((leftView, rightView, rectLeft, rectRight, disparity, mask, parameters, rect) -> {
        // Visualize the rectified stereo pair. You can interact with this window and verify
        // that the y-axis is aligned
        var rectified = new RectifiedPairPanel(true);
        rectified.setImages(ConvertBufferedImage.convertTo(rectLeft, null), ConvertBufferedImage.convertTo(rectRight, null));
        // Cleans up the disparity image by zeroing out pixels that are outside the original image bounds
        RectifyImageOps.applyMask(disparity, mask, 0);
        // Display the colorized disparity
        BufferedImage colorized = VisualizeImageData.disparity(disparity, null, parameters.disparityRange, 0);
        SwingUtilities.invokeLater(() -> {
            listDisplay.addItem(rectified, "Rectified " + leftView + " " + rightView);
            listDisplay.addImage(colorized, leftView + " " + rightView);
        });
    });
    // Process the images and compute a single combined disparity image
    if (!multiBaseline.process(example.scene, center.index, pairedViewIdxs, sbaIndexToImageID::get)) {
        throw new RuntimeException("Failed to fuse stereo views");
    }
    // Extract the point cloud from the fused disparity image
    GrayF32 fusedDisparity = multiBaseline.getFusedDisparity();
    DisparityParameters fusedParam = multiBaseline.getFusedParam();
    BufferedImage colorizedDisp = VisualizeImageData.disparity(fusedDisparity, null, fusedParam.disparityRange, 0);
    ShowImages.showWindow(colorizedDisp, "Fused Disparity");
    // Now compute the point cloud it represents and the color of each pixel.
    // For the fused image, instead of being in rectified image coordinates it's in the original image coordinates,
    // which makes extracting color much easier.
    var cloud = new DogArray<>(Point3D_F64::new);
    var cloudRgb = new DogArray_I32(cloud.size);
    // Load the center image in color
    var colorImage = new InterleavedU8(1, 1, 3);
    imageLookup.loadImage(center.pview.id, colorImage);
    // Since the fused image is in the original (i.e. distorted) pixel coordinates and is not rectified,
    // that needs to be taken into account by undistorting the image to create the point cloud.
    CameraPinholeBrown intrinsic = BundleAdjustmentOps.convert(example.scene.cameras.get(center.cameraIdx).model, colorImage.width, colorImage.height, null);
    Point2Transform2_F64 pixel_to_norm = new LensDistortionBrown(intrinsic).distort_F64(true, false);
    MultiViewStereoOps.disparityToCloud(fusedDisparity, fusedParam, new PointToPixelTransform_F64(pixel_to_norm), (pixX, pixY, x, y, z) -> {
        cloud.grow().setTo(x, y, z);
        cloudRgb.add(colorImage.get24(pixX, pixY));
    });
    // Configure the point cloud viewer
    PointCloudViewer pcv = VisualizeData.createPointCloudViewer();
    pcv.setCameraHFov(UtilAngle.radian(70));
    pcv.setTranslationStep(0.15);
    pcv.addCloud(cloud.toList(), cloudRgb.data);
    // pcv.setColorizer(new SingleAxisRgb.Z().fperiod(30.0));
    JComponent viewer = pcv.getComponent();
    viewer.setPreferredSize(new Dimension(600, 600));
    ShowImages.showWindow(viewer, "Point Cloud", true);
    System.out.println("Done");
}
Also used : Point3D_F64(georegression.struct.point.Point3D_F64) InterleavedU8(boofcv.struct.image.InterleavedU8) ListDisplayPanel(boofcv.gui.ListDisplayPanel) CameraPinholeBrown(boofcv.struct.calib.CameraPinholeBrown) ConfigDisparityBMBest5(boofcv.factory.disparity.ConfigDisparityBMBest5) LensDistortionBrown(boofcv.alg.distort.brown.LensDistortionBrown) RectifiedPairPanel(boofcv.gui.stereo.RectifiedPairPanel) BufferedImage(java.awt.image.BufferedImage) ConvertBufferedImage(boofcv.io.image.ConvertBufferedImage) PointCloudViewer(boofcv.visualize.PointCloudViewer) PointToPixelTransform_F64(boofcv.struct.distort.PointToPixelTransform_F64) LookUpImageFilesByIndex(boofcv.io.image.LookUpImageFilesByIndex) GrayU8(boofcv.struct.image.GrayU8) SceneWorkingGraph(boofcv.alg.structure.SceneWorkingGraph) MultiBaselineStereoIndependent(boofcv.alg.mvs.MultiBaselineStereoIndependent) Point2Transform2_F64(boofcv.struct.distort.Point2Transform2_F64) DogArray_I32(org.ddogleg.struct.DogArray_I32) DogArray(org.ddogleg.struct.DogArray) GrayF32(boofcv.struct.image.GrayF32) TIntObjectHashMap(gnu.trove.map.hash.TIntObjectHashMap) DisparityParameters(boofcv.alg.mvs.DisparityParameters)
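The disparityToCloud call in this example converts each valid disparity value into a 3D point. A hedged, self-contained sketch of that geometry, assuming a distortion-free pinhole model (in the real code pixel_to_norm from LensDistortionBrown handles distortion); DisparityCloudSketch and its parameter names are illustrative, not BoofCV API:

```java
// Sketch of disparity-to-3D-point conversion: depth z = f*B/(d + d0), then
// back-projection through the pinhole model. Illustrative names, not BoofCV API.
public class DisparityCloudSketch {
    // d is the value stored in the disparity image; minDisparity (d0) is the
    // offset from the disparity search range. Returns null for invalid disparity.
    public static double[] disparityToPoint(double px, double py, double d,
                                            double f, double baseline, double minDisparity,
                                            double cx, double cy) {
        double total = d + minDisparity;
        if (total <= 0) return null; // invalid disparity, no 3D point
        double z = f*baseline/total; // depth from focal length and stereo baseline
        // Back-project the pixel through the (distortion-free) pinhole model
        return new double[]{(px - cx)*z/f, (py - cy)*z/f, z};
    }

    public static void main(String[] args) {
        // f = 500 px, baseline = 0.1 m, disparity 25 -> depth 2 m
        double[] p = disparityToPoint(370, 215, 25, 500, 0.1, 0, 320, 240);
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```

Note the inverse relationship: larger disparity means the point is closer, and a disparity of zero (at the minimum of the search range) corresponds to a point at infinity, which is why such pixels are rejected.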

Aggregations

LensDistortionBrown (boofcv.alg.distort.brown.LensDistortionBrown): 25 usages
CameraPinholeBrown (boofcv.struct.calib.CameraPinholeBrown): 14 usages
Se3_F64 (georegression.struct.se.Se3_F64): 13 usages
Test (org.junit.jupiter.api.Test): 10 usages
ImageBase (boofcv.struct.image.ImageBase): 9 usages
ImageType (boofcv.struct.image.ImageType): 9 usages
Point2Transform2_F64 (boofcv.struct.distort.Point2Transform2_F64): 8 usages
GrayF32 (boofcv.struct.image.GrayF32): 8 usages
ConvertBufferedImage (boofcv.io.image.ConvertBufferedImage): 7 usages
BufferedImage (java.awt.image.BufferedImage): 7 usages
Point2D_F64 (georegression.struct.point.Point2D_F64): 6 usages
LensDistortionNarrowFOV (boofcv.alg.distort.LensDistortionNarrowFOV): 4 usages
ListDisplayPanel (boofcv.gui.ListDisplayPanel): 4 usages
Point3D_F64 (georegression.struct.point.Point3D_F64): 4 usages
PointToPixelTransform_F64 (boofcv.struct.distort.PointToPixelTransform_F64): 3 usages
GrayU8 (boofcv.struct.image.GrayU8): 3 usages
Polygon2D_F64 (georegression.struct.shapes.Polygon2D_F64): 3 usages
File (java.io.File): 3 usages
FoundFiducial (boofcv.alg.fiducial.square.FoundFiducial): 2 usages
ConfigPolygonDetector (boofcv.factory.shape.ConfigPolygonDetector): 2 usages