Use of boofcv.struct.geo.Point2D3D in project BoofCV by lessthanoptimal.
From the class CheckEstimateNofPnP, method checkZerosInput:
@Test
public void checkZerosInput() {
    List<Point2D3D> inputs = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
        inputs.add(new Point2D3D());
    }
    // All-zero points are geometrically degenerate, so estimation should fail
    // without producing any candidate solutions
    assertFalse(alg.process(inputs, solutions));
    assertEquals(0, solutions.size());
}
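For context, alg and solutions are fixtures declared by the surrounding test class. Below is a minimal standalone sketch of how such a harness might be wired up; the choice of P3P_GRUNERT and the FastQueue construction are assumptions, not taken from the test above.

import java.util.ArrayList;
import java.util.List;

import org.ddogleg.struct.FastQueue;

import boofcv.abst.geo.EstimateNofPnP;
import boofcv.factory.geo.EnumPNP;
import boofcv.factory.geo.FactoryMultiView;
import boofcv.struct.geo.Point2D3D;
import georegression.struct.se.Se3_F64;

public class ZeroInputSketch {
    public static void main(String[] args) {
        // PnP estimator that may return multiple hypotheses (P3P yields up to four)
        EstimateNofPnP alg = FactoryMultiView.computePnP_N(EnumPNP.P3P_GRUNERT, -1);
        // Pre-declared storage for the candidate motions
        FastQueue<Se3_F64> solutions = new FastQueue<>(Se3_F64.class, true);

        List<Point2D3D> inputs = new ArrayList<>();
        for (int i = 0; i < 3; i++)
            inputs.add(new Point2D3D()); // all zeros: degenerate input

        // Expect a graceful failure rather than an exception or bogus solutions
        System.out.println("success = " + alg.process(inputs, solutions));
        System.out.println("solutions = " + solutions.size());
    }
}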
Use of boofcv.struct.geo.Point2D3D in project BoofCV by lessthanoptimal.
From the class VisOdomPixelDepthPnP, method performSecondPass:
private boolean performSecondPass(List<PointTrack> active, List<Point2D3D> obs) {
    Se3_F64 keyToCurr = motionEstimator.getModelParameters();

    Point3D_F64 cameraPt = new Point3D_F64();
    Point2D_F64 predicted = new Point2D_F64();

    // Predict where each track should appear given the just-estimated motion
    List<PointTrack> all = tracker.getAllTracks(null);
    for (PointTrack t : all) {
        Point2D3D p = t.getCookie();
        // Move the track's 3D location from the key frame into the current frame
        SePointOps_F64.transform(keyToCurr, p.location, cameraPt);
        // Project to normalized image coordinates, then convert into pixels
        normToPixel.compute(cameraPt.x / cameraPt.z, cameraPt.y / cameraPt.z, predicted);
        tracker.setHint(predicted.x, predicted.y, t);
    }

    // Redo the tracking using the predicted locations as hints
    tracker.performSecondPass();

    active.clear();
    obs.clear();
    tracker.getActiveTracks(active);
    for (PointTrack t : active) {
        Point2D3D p = t.getCookie();
        // Refresh each track's normalized observation from its new pixel location
        pixelToNorm.compute(t.x, t.y, p.observation);
        obs.add(p);
    }

    // Re-estimate the motion from the refined observations
    return motionEstimator.process(obs);
}
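The prediction step above is just a rigid-body transform followed by a pinhole projection. A self-contained sketch of that arithmetic; the intrinsics fx, fy, cx, cy and all numeric values are invented for illustration.

import georegression.geometry.ConvertRotation3D_F64;
import georegression.struct.EulerType;
import georegression.struct.point.Point2D_F64;
import georegression.struct.point.Point3D_F64;
import georegression.struct.se.Se3_F64;
import georegression.transform.se.SePointOps_F64;

public class PredictTrackSketch {
    public static void main(String[] args) {
        // Hypothetical pinhole intrinsics with no lens distortion
        double fx = 500, fy = 500, cx = 320, cy = 240;

        // Estimated motion from the key frame to the current frame
        Se3_F64 keyToCurr = new Se3_F64();
        ConvertRotation3D_F64.eulerToMatrix(EulerType.XYZ, 0, 0.02, 0, keyToCurr.getR());
        keyToCurr.getT().set(0.1, 0, 0.05);

        // A track's known 3D location in the key frame
        Point3D_F64 location = new Point3D_F64(0.5, -0.2, 3.0);

        // Step 1: rigid-body transform into the current camera frame
        Point3D_F64 cameraPt = new Point3D_F64();
        SePointOps_F64.transform(keyToCurr, location, cameraPt);

        // Step 2: pinhole projection to normalized coordinates, then to pixels
        double nx = cameraPt.x / cameraPt.z;
        double ny = cameraPt.y / cameraPt.z;
        Point2D_F64 predicted = new Point2D_F64(fx * nx + cx, fy * ny + cy);

        System.out.println("predicted pixel = " + predicted);
    }
}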
Use of boofcv.struct.geo.Point2D3D in project BoofCV by lessthanoptimal.
From the class FactoryVisualOdometry, method stereoDepth:
/**
 * Stereo vision based visual odometry algorithm which runs a sparse feature tracker in the left camera and
 * estimates the range of tracks once, when they are first detected, using the disparity between the left
 * and right cameras.
 *
 * @see VisOdomPixelDepthPnP
 *
 * @param inlierPixelTol Pixel tolerance for a reprojection error to be counted as a RANSAC inlier.
 * @param thresholdAdd Add new tracks when fewer than this number are in the inlier set. Tracker dependent.
 *                     Set to a value ≤ 0 to add features every frame.
 * @param thresholdRetire Discard a track if it is not in the inlier set after this many updates. Try 2.
 * @param ransacIterations Number of RANSAC iterations to perform.
 * @param refineIterations Number of iterations used to refine the motion estimate. Set to ≤ 0 to disable refinement.
 * @param doublePass If true, tracking is run a second time, seeded with the estimated motion.
 * @param sparseDisparity Estimates the 3D location of features.
 * @param tracker Two-pass feature tracker run on the left camera's image.
 * @param imageType Type of image being processed.
 * @return StereoVisualOdometry
 */
public static <T extends ImageGray<T>> StereoVisualOdometry<T> stereoDepth(double inlierPixelTol, int thresholdAdd,
        int thresholdRetire, int ransacIterations, int refineIterations, boolean doublePass,
        StereoDisparitySparse<T> sparseDisparity, PointTrackerTwoPass<T> tracker, Class<T> imageType) {
    // Estimate the range of each feature from sparse stereo disparity
    StereoSparse3D<T> pixelTo3D = new StereoSparse3D<>(sparseDisparity, imageType);

    Estimate1ofPnP estimator = FactoryMultiView.computePnP_1(EnumPNP.P3P_FINSTERWALDER, -1, 2);
    final DistanceModelMonoPixels<Se3_F64, Point2D3D> distance = new PnPDistanceReprojectionSq();

    ModelManagerSe3_F64 manager = new ModelManagerSe3_F64();
    EstimatorToGenerator<Se3_F64, Point2D3D> generator = new EstimatorToGenerator<>(estimator);

    // The distance function returns squared reprojection error, so square the pixel tolerance
    double ransacTOL = inlierPixelTol * inlierPixelTol;

    ModelMatcher<Se3_F64, Point2D3D> motion =
            new Ransac<>(2323, manager, generator, distance, ransacIterations, ransacTOL);

    // Optional non-linear refinement of the RANSAC estimate
    RefinePnP refine = null;
    if (refineIterations > 0) {
        refine = FactoryMultiView.refinePnP(1e-12, refineIterations);
    }

    VisOdomPixelDepthPnP<T> alg = new VisOdomPixelDepthPnP<>(thresholdAdd, thresholdRetire, doublePass,
            motion, pixelTo3D, refine, tracker, null, null);

    return new WrapVisOdomPixelDepthPnP<>(alg, pixelTo3D, distance, imageType);
}
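A sketch of how this factory might be invoked, loosely following the pattern of BoofCV's stereo visual odometry example code. The tracker/disparity configuration values and exact factory signatures are assumptions for this era of the library, and stereoParam, left, and right are assumed to exist in the surrounding code.

// Feature tracker: pyramidal KLT on the left camera (configuration values are guesses)
PkltConfig configKlt = new PkltConfig();
configKlt.pyramidScaling = new int[]{1, 2, 4, 8};
configKlt.templateRadius = 3;
PointTrackerTwoPass<GrayU8> tracker = FactoryPointTrackerTwoPass.klt(configKlt,
        new ConfigGeneralDetector(600, 3, 1), GrayU8.class, GrayS16.class);

// Sparse disparity used to give newly spawned tracks a 3D location
StereoDisparitySparse<GrayU8> disparity =
        FactoryStereoDisparity.regionSparseWta(0, 150, 3, 3, 30, -1, true, GrayU8.class);

StereoVisualOdometry<GrayU8> visualOdometry = FactoryVisualOdometry.stereoDepth(
        1.5, 120, 2, 200, 50, true, disparity, tracker, GrayU8.class);

// stereoParam: the stereo calibration; left/right: rectified input images
visualOdometry.setCalibration(stereoParam);
if (!visualOdometry.process(left, right))
    throw new RuntimeException("Visual odometry failed!");
Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();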
Use of boofcv.struct.geo.Point2D3D in project BoofCV by lessthanoptimal.
From the class WrapPnPLepetitEPnP, method process:
@Override
public boolean process(List<Point2D3D> inputs, Se3_F64 solution) {
    // Split each Point2D3D into the separate world-point and observation lists
    // expected by the underlying EPnP implementation
    for (int i = 0; i < inputs.size(); i++) {
        Point2D3D pp = inputs.get(i);
        worldPts.add(pp.location);
        observed.add(pp.observation);
    }
    alg.process(worldPts, observed, solution);
    // Clear the work lists so the wrapper can be reused on the next call
    worldPts.clear();
    observed.clear();
    return true;
}
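In application code this wrapper is normally reached through the factory rather than constructed directly. A minimal sketch, assuming EnumPNP.EPNP routes to this Lepetit EPnP implementation and that 10 iterations with an unused test count of 0 are acceptable arguments:

// Hypothetical usage fragment; inputs is a List<Point2D3D> prepared elsewhere
Estimate1ofPnP epnp = FactoryMultiView.computePnP_1(EnumPNP.EPNP, 10, 0);

Se3_F64 worldToCamera = new Se3_F64();
if (!epnp.process(inputs, worldToCamera))
    throw new RuntimeException("EPnP estimation failed");
worldToCamera.print();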
Use of boofcv.struct.geo.Point2D3D in project BoofCV by lessthanoptimal.
From the class ExamplePnP, method main:
public static void main(String[] args) {
    // Create an arbitrary transform from world to camera reference frames
    Se3_F64 worldToCamera = new Se3_F64();
    worldToCamera.getT().set(5, 10, -7);
    ConvertRotation3D_F64.eulerToMatrix(EulerType.XYZ, 0.1, -0.3, 0, worldToCamera.getR());

    ExamplePnP app = new ExamplePnP();

    // Generate observations with no outliers
    // NOTE: Image observations are in normalized image coordinates NOT pixels
    List<Point2D3D> observations = app.createObservations(worldToCamera, 100);

    System.out.println("Truth:");
    worldToCamera.print();
    System.out.println();
    System.out.println("Estimated, assumed no outliers:");
    app.estimateNoOutliers(observations).print();
    System.out.println("Estimated, assumed that there are outliers:");
    app.estimateOutliers(observations).print();

    System.out.println();
    System.out.println("Adding outliers");
    System.out.println();

    // Add a bunch of outliers
    app.addOutliers(observations, 50);

    System.out.println("Estimated, assumed no outliers:");
    app.estimateNoOutliers(observations).print();
    System.out.println("Estimated, assumed that there are outliers:");
    app.estimateOutliers(observations).print();
}
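The helper methods are not shown in this excerpt. Below is a plausible reconstruction of createObservations, illustrating why the observations end up in normalized image coordinates; the point distribution and random seed are invented, and Random, ArrayList, and the georegression types are assumed to be imported.

// Hypothetical reconstruction of the createObservations helper
public List<Point2D3D> createObservations(Se3_F64 worldToCamera, int total) {
    Se3_F64 cameraToWorld = worldToCamera.invert(null);
    Random rand = new Random(234);
    List<Point2D3D> observations = new ArrayList<>();
    for (int i = 0; i < total; i++) {
        // Pick a point in front of the camera, 5 to 10 units away
        Point3D_F64 cameraPt = new Point3D_F64(
                rand.nextGaussian(), rand.nextGaussian(), 5 + 5 * rand.nextDouble());
        Point2D3D o = new Point2D3D();
        // Normalized image coordinates: projection with the intrinsics factored out
        o.observation.set(cameraPt.x / cameraPt.z, cameraPt.y / cameraPt.z);
        // Store the same point expressed in the world frame
        SePointOps_F64.transform(cameraToWorld, cameraPt, o.location);
        observations.add(o);
    }
    return observations;
}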