Use of boofcv.struct.distort.Point2Transform2_F64 in project BoofCV by lessthanoptimal.
The class ImplRectifyImageOps_F64, method transformRectToPixel.

public static Point2Transform2_F64 transformRectToPixel( CameraPinholeBrown param, DMatrixRMaj rectify ) {
    // forward transform that re-applies the lens distortion of the original camera
    Point2Transform2_F64 add_p_to_p = narrow(param).distort_F64(true, true);

    // invert the rectification homography so it maps rectified pixels back to unrectified pixels
    DMatrixRMaj rectifyInv = new DMatrixRMaj(3, 3);
    CommonOps_DDRM.invert(rectify, rectifyInv);
    PointTransformHomography_F64 removeRect = new PointTransformHomography_F64(rectifyInv);

    // first remove rectification, then add lens distortion
    return new SequencePoint2Transform2_F64(removeRect, add_p_to_p);
}
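A minimal usage sketch for the transform returned above: given a pixel coordinate in the rectified image, it looks up the corresponding location in the original distorted image. The transform is assumed to have already been created by transformRectToPixel(); the class and method names below are hypothetical.

import boofcv.struct.distort.Point2Transform2_F64;
import georegression.struct.point.Point2D_F64;

public class RectToPixelUsage {
    /** Maps a rectified pixel coordinate back into the original distorted image. */
    public static Point2D_F64 rectifiedToOriginal( Point2Transform2_F64 rectToPixel,
                                                   double rectX, double rectY ) {
        Point2D_F64 original = new Point2D_F64();
        // The sequence transform first removes rectification, then re-applies lens distortion
        rectToPixel.compute(rectX, rectY, original);
        return original;
    }
}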
Use of boofcv.struct.distort.Point2Transform2_F64 in project BoofCV by lessthanoptimal.
The class ImplRectifyImageOps_F64, method allInsideLeft.

public static void allInsideLeft( CameraPinholeBrown paramLeft, DMatrixRMaj rectifyLeft, DMatrixRMaj rectifyRight,
                                  DMatrixRMaj rectifyK, ImageDimension rectifiedSize ) {
    // need to take into account the order in which image distort will remove rectification later on
    paramLeft = new CameraPinholeBrown(paramLeft);

    Point2Transform2_F64 tranLeft = transformPixelToRect(paramLeft, rectifyLeft);

    Point2D_F64 work = new Point2D_F64();
    RectangleLength2D_F64 bound = LensDistortionOps_F64.boundBoxInside(paramLeft.width, paramLeft.height,
            new PointToPixelTransform_F64(tranLeft), work);
    LensDistortionOps_F64.roundInside(bound);

    // Select the scale to maintain the same number of pixels
    double scale = Math.sqrt((paramLeft.width * paramLeft.height) / (bound.width * bound.height));

    rectifiedSize.width = (int) (scale * bound.width + 0.5);
    rectifiedSize.height = (int) (scale * bound.height + 0.5);

    adjustCalibrated(rectifyLeft, rectifyRight, rectifyK, bound, scale);
}
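A usage sketch for allInsideLeft(), assuming the rectification matrices were produced by an earlier calibrated rectification step that is not shown here; the import path for ImplRectifyImageOps_F64 and the helper name cropToValidRegion are assumptions.

import boofcv.alg.geo.impl.ImplRectifyImageOps_F64; // package path assumed; may differ between BoofCV versions
import boofcv.struct.calib.CameraPinholeBrown;
import boofcv.struct.image.ImageDimension;
import org.ejml.data.DMatrixRMaj;

public class AllInsideLeftUsage {
    /**
     * Adjusts the rectification in place so only valid left-image pixels remain,
     * and returns the rectified image size selected by allInsideLeft().
     */
    public static ImageDimension cropToValidRegion( CameraPinholeBrown paramLeft,
                                                    DMatrixRMaj rectifyLeft,
                                                    DMatrixRMaj rectifyRight,
                                                    DMatrixRMaj rectifyK ) {
        ImageDimension rectifiedSize = new ImageDimension();
        ImplRectifyImageOps_F64.allInsideLeft(paramLeft, rectifyLeft, rectifyRight, rectifyK, rectifiedSize);
        return rectifiedSize;
    }
}

In application code the RectifyImageOps facade typically wraps these Impl methods; which overloads are available depends on the BoofCV version.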
Use of boofcv.struct.distort.Point2Transform2_F64 in project BoofCV by lessthanoptimal.
The class ImplRectifyImageOps_F64, method fullViewLeft.

public static void fullViewLeft( int imageWidth, int imageHeight, DMatrixRMaj rectifyLeft, DMatrixRMaj rectifyRight ) {
    // bounding box of the left image after the rectification homography has been applied
    Point2Transform2_F64 tranLeft = new PointTransformHomography_F64(rectifyLeft);

    Point2D_F64 work = new Point2D_F64();
    RectangleLength2D_F64 bound = DistortImageOps.boundBox_F64(imageWidth, imageHeight,
            new PointToPixelTransform_F64(tranLeft), work);

    // use the smaller scale so the entire original view remains visible
    double scaleX = imageWidth / bound.width;
    double scaleY = imageHeight / bound.height;
    double scale = Math.min(scaleX, scaleY);

    adjustUncalibrated(rectifyLeft, rectifyRight, bound, scale);
}
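A corresponding usage sketch for the uncalibrated variant, again assuming the two rectification homographies come from an earlier step; the import path and helper name are assumptions.

import boofcv.alg.geo.impl.ImplRectifyImageOps_F64; // package path assumed
import org.ejml.data.DMatrixRMaj;

public class FullViewLeftUsage {
    /** Rescales both rectification homographies in place so the entire left image stays visible. */
    public static void keepFullLeftView( int imageWidth, int imageHeight,
                                         DMatrixRMaj rectifyLeft, DMatrixRMaj rectifyRight ) {
        // min(scaleX, scaleY) inside fullViewLeft() guarantees no part of the original view is cropped
        ImplRectifyImageOps_F64.fullViewLeft(imageWidth, imageHeight, rectifyLeft, rectifyRight);
    }
}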
Use of boofcv.struct.distort.Point2Transform2_F64 in project BoofCV by lessthanoptimal.
The class ImplRectifyImageOps_F64, method fullViewLeft.

public static void fullViewLeft( CameraPinholeBrown paramLeft, DMatrixRMaj rectifyLeft, DMatrixRMaj rectifyRight,
                                 DMatrixRMaj rectifyK, ImageDimension rectifiedSize ) {
    // need to take into account the order in which image distort will remove rectification later on
    paramLeft = new CameraPinholeBrown(paramLeft);

    Point2Transform2_F64 tranLeft = transformPixelToRect(paramLeft, rectifyLeft);

    Point2D_F64 work = new Point2D_F64();
    RectangleLength2D_F64 bound = DistortImageOps.boundBox_F64(paramLeft.width, paramLeft.height,
            new PointToPixelTransform_F64(tranLeft), work);

    // Select the scale to maintain the same number of pixels
    double scale = Math.sqrt((paramLeft.width * paramLeft.height) / (bound.width * bound.height));

    rectifiedSize.width = (int) (scale * bound.width + 0.5);
    rectifiedSize.height = (int) (scale * bound.height + 0.5);

    adjustCalibrated(rectifyLeft, rectifyRight, rectifyK, bound, scale);
}
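The calibrated variants above pick their scale so the rectified view keeps roughly the same pixel count as the input. A small sketch of that arithmetic, mirroring the lines above; the class and method names are hypothetical.

import boofcv.struct.image.ImageDimension;
import georegression.struct.shapes.RectangleLength2D_F64;

public class PixelPreservingScale {
    /** Selects a rectified image size with approximately the same number of pixels as the input. */
    public static ImageDimension selectRectifiedSize( int width, int height, RectangleLength2D_F64 bound ) {
        double scale = Math.sqrt((width * height) / (bound.width * bound.height));
        ImageDimension size = new ImageDimension();
        size.width = (int) (scale * bound.width + 0.5);
        size.height = (int) (scale * bound.height + 0.5);
        // Example: a 640x480 input with an 800x500 bound gives scale ~ 0.876 and a 701x438 output,
        // which again holds roughly 640*480 = 307,200 pixels
        return size;
    }
}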
Use of boofcv.struct.distort.Point2Transform2_F64 in project BoofCV by lessthanoptimal.
The class VisOdomMonoDepthPnP, method estimateMotion.

/**
 * Estimates motion from the set of tracks and their 3D locations.
 *
 * @return true if successful.
 */
private boolean estimateMotion( List<PointTrack> active ) {
    Point2Transform2_F64 pixelToNorm = cameraModels.get(0).pixelToNorm;
    Objects.requireNonNull(framePrevious).frame_to_world.invert(world_to_prev);

    // Create a list of observations for PnP:
    // normalized image coordinates and 3D locations in the previous keyframe's reference frame
    observationsPnP.reset();
    for (int activeIdx = 0; activeIdx < active.size(); activeIdx++) {
        PointTrack pt = active.get(activeIdx);

        // Build the list of tracks which are currently visible
        initialVisible.add((Track) pt.cookie);

        // Extract the info needed to estimate motion
        Point2D3D p = observationsPnP.grow();
        pixelToNorm.compute(pt.pixel.x, pt.pixel.y, p.observation);

        Track bt = pt.getCookie();

        // Go from world coordinates to the previous frame
        SePointOps_F64.transform(world_to_prev, bt.worldLoc, prevLoc4);

        // Go from homogeneous coordinates into 3D coordinates
        PerspectiveOps.homogenousTo3dPositiveZ(prevLoc4, 1e8, 1e-7, p.location);
    }

    // Estimate the motion up to a scale factor in translation
    if (!motionEstimator.process(observationsPnP.toList()))
        return false;

    Se3_F64 previous_to_current;
    if (refine != null) {
        previous_to_current = new Se3_F64();
        refine.fitModel(motionEstimator.getMatchSet(), motionEstimator.getModelParameters(), previous_to_current);
    } else {
        previous_to_current = motionEstimator.getModelParameters();
    }

    // Change everything back into the world frame
    previous_to_current.invert(current_to_previous);
    current_to_previous.concat(framePrevious.frame_to_world, frameCurrent.frame_to_world);

    return true;
}
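The closing lines of estimateMotion() compose the estimated frame-to-frame motion with the previous frame's pose to obtain the current pose in world coordinates. A standalone sketch of that composition using the same georegression calls; the helper and variable names are hypothetical.

import georegression.struct.se.Se3_F64;

public class MotionComposition {
    /** Computes the current frame's pose in the world frame from the estimated frame-to-frame motion. */
    public static Se3_F64 currentToWorld( Se3_F64 previous_to_current, Se3_F64 previous_to_world ) {
        Se3_F64 current_to_previous = new Se3_F64();
        previous_to_current.invert(current_to_previous);

        // current -> previous -> world
        Se3_F64 current_to_world = new Se3_F64();
        current_to_previous.concat(previous_to_world, current_to_world);
        return current_to_world;
    }
}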