Use of georegression.struct.se.Se3_F64 in project BoofCV by lessthanoptimal.
Class ExampleStereoTwoViewsOneCamera, method estimateCameraMotion.
/**
 * Estimates the camera motion robustly using RANSAC and a set of associated points.
 *
 * @param intrinsic Intrinsic camera parameters
 * @param matchedNorm Set of matched point features in normalized image coordinates
 * @param inliers OUTPUT: Set of inlier features from RANSAC
 * @return Found camera motion. Note translation has an arbitrary scale
 */
public static Se3_F64 estimateCameraMotion(CameraPinholeRadial intrinsic,
                                           List<AssociatedPair> matchedNorm,
                                           List<AssociatedPair> inliers) {
    ModelMatcher<Se3_F64, AssociatedPair> epipolarMotion =
            FactoryMultiViewRobust.essentialRansac(new ConfigEssential(intrinsic), new ConfigRansac(200, 0.5));

    if (!epipolarMotion.process(matchedNorm))
        throw new RuntimeException("Motion estimation failed");

    // save inlier set for debugging purposes
    inliers.addAll(epipolarMotion.getMatchSet());

    return epipolarMotion.getModelParameters();
}
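The `essentialRansac` call above configures RANSAC with 200 iterations and a 0.5-pixel inlier tolerance. As a rough illustration of the robust-fitting idea behind it (not BoofCV's implementation, and for a much simpler model), here is a minimal, self-contained RANSAC loop fitting a 2D line to points contaminated by gross outliers; all names in this sketch are our own.

```java
import java.util.Random;

// Minimal RANSAC sketch: repeatedly fit a model to a random minimal sample,
// keep the model that agrees with the most points. Fits y = a*x + b.
public class RansacLineDemo {
    // pts: array of {x, y}; returns {a, b} of the best line found
    public static double[] fitRansac(double[][] pts, int iterations, double threshold) {
        Random rand = new Random(42);
        double[] best = null;
        int bestInliers = -1;
        for (int it = 0; it < iterations; it++) {
            // sample a minimal set: two distinct, non-vertical points define a line
            int i = rand.nextInt(pts.length), j = rand.nextInt(pts.length);
            if (i == j || pts[i][0] == pts[j][0])
                continue;
            double a = (pts[j][1] - pts[i][1]) / (pts[j][0] - pts[i][0]);
            double b = pts[i][1] - a * pts[i][0];
            // count points whose vertical residual is under the threshold
            int inliers = 0;
            for (double[] p : pts)
                if (Math.abs(p[1] - (a * p[0] + b)) < threshold)
                    inliers++;
            if (inliers > bestInliers) {
                bestInliers = inliers;
                best = new double[]{a, b};
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // 20 points exactly on y = 2x + 1, plus two gross outliers
        double[][] pts = new double[22][];
        for (int x = 0; x < 20; x++)
            pts[x] = new double[]{x, 2 * x + 1};
        pts[20] = new double[]{3, 50};
        pts[21] = new double[]{7, -40};
        double[] model = fitRansac(pts, 200, 0.5);
        System.out.println(model[0] + " " + model[1]); // close to 2.0 and 1.0
    }
}
```

The real essential-matrix estimator follows the same sample/score/keep-best loop, but its minimal sample is a set of point correspondences and its residual is an epipolar error rather than a vertical distance.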
Use of georegression.struct.se.Se3_F64 in project BoofCV by lessthanoptimal.
Class ExampleMultiviewSceneReconstruction, method initializeReconstruction.
/**
 * Initialize the reconstruction by finding the image which is most similar to the "best" image. Estimate
 * its pose up to a scale factor and create the initial set of 3D features
 */
private void initializeReconstruction(List<BufferedImage> colorImages, double[][] matrix, int bestImage) {
    // mark all images except the best one as not yet estimated
    estimatedImage = new boolean[colorImages.size()];
    processedImage = new boolean[colorImages.size()];
    estimatedImage[bestImage] = true;
    processedImage[bestImage] = true;

    // declare storage for the found motion of each image
    motionWorldToCamera = new Se3_F64[colorImages.size()];
    for (int i = 0; i < colorImages.size(); i++) {
        motionWorldToCamera[i] = new Se3_F64();
        imageFeature3D.add(new ArrayList<Feature3D>());
    }

    // pick the image most similar to the best image to initialize pose estimation
    int firstChild = findBestFit(matrix, bestImage);
    initialize(bestImage, firstChild);
}
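`findBestFit` itself is not shown in this snippet. A plausible sketch (hypothetical, our own names, not BoofCV's code) would scan the row of the pairwise similarity matrix for the target image and return the index with the highest score, skipping the image itself:

```java
// Hypothetical sketch of best-fit selection over a pairwise similarity matrix.
// matrix[i][j] is assumed to hold a similarity score between images i and j.
public class FindBestFitDemo {
    public static int findBestFit(double[][] matrix, int target) {
        int best = -1;
        double bestScore = -Double.MAX_VALUE;
        for (int i = 0; i < matrix.length; i++) {
            if (i == target)
                continue; // never match an image against itself
            if (matrix[target][i] > bestScore) {
                bestScore = matrix[target][i];
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] similarity = {
                {1.0, 0.2, 0.8},
                {0.2, 1.0, 0.4},
                {0.8, 0.4, 1.0}};
        System.out.println(findBestFit(similarity, 0)); // image 2 scores highest against 0
    }
}
```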
Use of georegression.struct.se.Se3_F64 in project BoofCV by lessthanoptimal.
Class ExampleMultiviewSceneReconstruction, method initialize.
/**
 * Initialize the 3D world given these two images. imageA is assumed to be the origin of the world.
 */
private void initialize(int imageA, int imageB) {
    System.out.println("Initializing 3D world using " + imageA + " and " + imageB);

    // Compute the 3D pose and find valid image features
    Se3_F64 motionAtoB = new Se3_F64();
    List<AssociatedIndex> inliers = new ArrayList<>();

    if (!estimateStereoPose(imageA, imageB, motionAtoB, inliers))
        throw new RuntimeException("The first image pair is a bad keyframe!");

    motionWorldToCamera[imageB].set(motionAtoB);
    estimatedImage[imageB] = true;
    processedImage[imageB] = true;

    // create tracks for only those features in the inlier list
    FastQueue<Point2D_F64> pixelsA = imagePixels.get(imageA);
    FastQueue<Point2D_F64> pixelsB = imagePixels.get(imageB);
    List<Feature3D> tracksA = imageFeature3D.get(imageA);
    List<Feature3D> tracksB = imageFeature3D.get(imageB);
    GrowQueue_I32 colorsA = imageColors.get(imageA);

    for (int i = 0; i < inliers.size(); i++) {
        AssociatedIndex a = inliers.get(i);

        Feature3D t = new Feature3D();
        t.color = colorsA.get(a.src);
        t.obs.grow().set(pixelsA.get(a.src));
        t.obs.grow().set(pixelsB.get(a.dst));
        t.frame.add(imageA);
        t.frame.add(imageB);

        // compute the 3D coordinate of the feature
        Point2D_F64 pa = pixelsA.get(a.src);
        Point2D_F64 pb = pixelsB.get(a.dst);

        if (!triangulate.triangulate(pa, pb, motionAtoB, t.worldPt))
            continue;
        // the feature has to be in front of the camera
        if (t.worldPt.z > 0) {
            featuresAll.add(t);
            tracksA.add(t);
            tracksB.add(t);
        }
    }

    // adjust the scale so that it's not excessively large or small
    normalizeScale(motionWorldToCamera[imageB], tracksA);
}
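The `triangulate.triangulate(pa, pb, motionAtoB, t.worldPt)` call recovers the 3D point from its two normalized-coordinate observations and the relative camera motion; BoofCV handles general motion. As a simplified, self-contained illustration of the geometry, here is triangulation for the rectified special case (pure horizontal baseline between the cameras), where depth follows from similar triangles; all names in this sketch are our own.

```java
// Simplified triangulation sketch for the rectified two-view case:
// both cameras share orientation and are separated by a pure baseline
// along x. Observations (xA, yA) and xB are in normalized coordinates.
public class TriangulateDemo {
    // returns {x, y, z} in the first camera's frame, or null for a point at infinity
    public static double[] triangulateRectified(double xA, double yA, double xB, double baseline) {
        double disparity = xA - xB;                  // normalized-coordinate disparity
        if (Math.abs(disparity) < 1e-12)
            return null;                             // zero disparity: point at infinity
        double z = baseline / disparity;             // depth from similar triangles
        return new double[]{xA * z, yA * z, z};      // back-project the first observation
    }

    public static void main(String[] args) {
        // A 3D point at (1, 0.5, 4) seen by two cameras 0.2 apart along x:
        // projections are (x/z, y/z) and ((x - baseline)/z, y/z)
        double[] p = triangulateRectified(1.0 / 4, 0.5 / 4, (1.0 - 0.2) / 4, 0.2);
        System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]); // recovers (1, 0.5, 4)
    }
}
```

The `t.worldPt.z > 0` test in the example above then discards any triangulated point that lands behind the camera, which happens for bad associations or degenerate geometry.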
Use of georegression.struct.se.Se3_F64 in project BoofCV by lessthanoptimal.
Class ExamplePnP, method main.
public static void main(String[] args) {
    // create an arbitrary transform from world to camera reference frames
    Se3_F64 worldToCamera = new Se3_F64();
    worldToCamera.getT().set(5, 10, -7);
    ConvertRotation3D_F64.eulerToMatrix(EulerType.XYZ, 0.1, -0.3, 0, worldToCamera.getR());

    ExamplePnP app = new ExamplePnP();

    // Let's generate observations with no outliers
    // NOTE: Image observations are in normalized image coordinates NOT pixels
    List<Point2D3D> observations = app.createObservations(worldToCamera, 100);

    System.out.println("Truth:");
    worldToCamera.print();
    System.out.println();
    System.out.println("Estimated, assumed no outliers:");
    app.estimateNoOutliers(observations).print();
    System.out.println("Estimated, assumed that there are outliers:");
    app.estimateOutliers(observations).print();

    System.out.println();
    System.out.println("Adding outliers");
    System.out.println();

    // add a bunch of outliers
    app.addOutliers(observations, 50);

    System.out.println("Estimated, assumed no outliers:");
    app.estimateNoOutliers(observations).print();
    System.out.println("Estimated, assumed that there are outliers:");
    app.estimateOutliers(observations).print();
}
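An `Se3_F64` is a rigid-body transform: a 3x3 rotation `R` plus a translation `T`, and `worldToCamera` maps a world point as p_camera = R * p_world + T. The toy sketch below reproduces that mapping with plain arrays, including an Euler XYZ rotation under one common convention (R = Rz * Ry * Rx); BoofCV's exact `eulerToMatrix` convention may differ, and every name here is our own.

```java
// Toy sketch of the rigid transform an Se3_F64 represents: p_cam = R * p_world + T.
public class Se3Demo {
    // 3x3 matrix product
    public static double[][] mult(double[][] A, double[][] B) {
        double[][] C = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // Euler XYZ rotation under one common convention (R = Rz * Ry * Rx)
    public static double[][] eulerXYZ(double rx, double ry, double rz) {
        double cx = Math.cos(rx), sx = Math.sin(rx);
        double cy = Math.cos(ry), sy = Math.sin(ry);
        double cz = Math.cos(rz), sz = Math.sin(rz);
        double[][] X = {{1, 0, 0}, {0, cx, -sx}, {0, sx, cx}};
        double[][] Y = {{cy, 0, sy}, {0, 1, 0}, {-sy, 0, cy}};
        double[][] Z = {{cz, -sz, 0}, {sz, cz, 0}, {0, 0, 1}};
        return mult(Z, mult(Y, X));
    }

    // apply the rigid transform: p_camera = R * p_world + T
    public static double[] transform(double[][] R, double[] T, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + T[i];
        return out;
    }

    public static void main(String[] args) {
        double[][] R = eulerXYZ(0, 0, 0);   // zero angles -> identity rotation
        double[] T = {5, 10, -7};           // same translation as the example above
        double[] p = transform(R, T, new double[]{1, 2, 3});
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 6.0 12.0 -4.0
    }
}
```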
Use of georegression.struct.se.Se3_F64 in project BoofCV by lessthanoptimal.
Class ExamplePnP, method estimateNoOutliers.
/**
 * Assumes all observations actually match the correct/real 3D point
 */
public Se3_F64 estimateNoOutliers(List<Point2D3D> observations) {
    // Compute a single solution using EPnP.
    // 10 iterations is what the JavaDoc recommends, but it might need to be tuned.
    // 0 test points: this parameter is ignored because EPnP only returns a single solution
    Estimate1ofPnP pnp = FactoryMultiView.computePnP_1(EnumPNP.EPNP, 10, 0);

    Se3_F64 worldToCamera = new Se3_F64();
    pnp.process(observations, worldToCamera);

    // For some applications the EPnP solution might be good enough, but let's refine it
    RefinePnP refine = FactoryMultiView.refinePnP(1e-8, 200);

    Se3_F64 refinedWorldToCamera = new Se3_F64();
    if (!refine.fitModel(observations, worldToCamera, refinedWorldToCamera))
        throw new RuntimeException("Refinement failed! Input probably bad...");

    return refinedWorldToCamera;
}
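The refinement step iteratively adjusts the pose to reduce reprojection error. A minimal, self-contained sketch of that error term (not BoofCV's code; names are our own) transforms each 3D point into the camera frame with the candidate pose, projects it to normalized coordinates, and compares against the observation:

```java
// Sketch of the reprojection error a PnP refinement step minimizes.
public class ReprojectionErrorDemo {
    // R, T: candidate world-to-camera pose; worldPts: 3D points in the world
    // frame; obs: matching observations in normalized image coordinates
    public static double meanError(double[][] R, double[] T, double[][] worldPts, double[][] obs) {
        double sum = 0;
        for (int k = 0; k < worldPts.length; k++) {
            double[] p = worldPts[k];
            // transform the point into the camera frame: p_cam = R * p + T
            double x = R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + T[0];
            double y = R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + T[1];
            double z = R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + T[2];
            // project to normalized coordinates and compare against the observation
            double dx = x / z - obs[k][0], dy = y / z - obs[k][1];
            sum += Math.sqrt(dx * dx + dy * dy);
        }
        return sum / worldPts.length;
    }

    public static void main(String[] args) {
        double[][] R = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}; // identity rotation
        double[] T = {0, 0, 0};
        double[][] pts = {{0.5, -0.25, 2.0}, {1.0, 1.0, 4.0}};
        double[][] obs = {{0.25, -0.125}, {0.25, 0.25}};  // exact projections
        System.out.println(meanError(R, T, pts, obs));    // 0 for the true pose
    }
}
```

A refiner such as `refinePnP(1e-8, 200)` drives this kind of error downward until the change falls below the convergence tolerance or the iteration limit is hit.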