Use of boofcv.abst.feature.detect.extract.NonMaxSuppression in project BoofCV by lessthanoptimal.
The class FactoryInterestPointAlgs, method fastHessian.
/**
* Creates a Fast Hessian blob detector used by SURF.
*
* @param config Configuration for detector. Pass in null for default options.
* @param <II> Integral Image
* @return The feature detector
*/
public static <II extends ImageGray<II>> FastHessianFeatureDetector<II> fastHessian(@Nullable ConfigFastHessian config) {
    if (config == null)
        config = new ConfigFastHessian();
    config.checkValidity();

    // ignore border is overwritten by Fast Hessian at detection time
    NonMaxSuppression extractor = FactoryFeatureExtractor.nonmax(
            new ConfigExtract(config.extractRadius, config.detectThreshold, 0, true));

    return new FastHessianFeatureDetector<>(extractor, config.maxFeaturesPerScale,
            config.initialSampleSize, config.initialSize, config.numberScalesPerOctave,
            config.numberOfOctaves, config.scaleStepSize);
}
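Below is a minimal usage sketch, not taken from the factory class above, showing how the returned detector could be driven. The GrayF32 input, the helper name detectWithFastHessian, and the default (null) configuration are assumptions for illustration. Fast Hessian operates on an integral image, so the input is transformed first, just as in the SURF example further down.

public static void detectWithFastHessian( GrayF32 input ) {
    // build the detector with default options
    FastHessianFeatureDetector<GrayF32> detector = FactoryInterestPointAlgs.fastHessian(null);

    // Fast Hessian operates on an integral image, not the raw input
    GrayF32 integral = new GrayF32(input.width, input.height);
    IntegralImageOps.transform(input, integral);

    detector.detect(integral);
    for (ScalePoint p : detector.getFoundPoints()) {
        System.out.printf("blob at (%.1f, %.1f) scale %.2f%n", p.x, p.y, p.scale);
    }
}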
Use of boofcv.abst.feature.detect.extract.NonMaxSuppression in project BoofCV by lessthanoptimal.
The class FactoryInterestPointAlgs, method harrisLaplace.
/**
 * Creates a {@link FeatureLaplacePyramid} which uses the Harris corner detector.
*
* @param extractRadius Size of the feature used to detect the corners.
* @param detectThreshold Minimum corner intensity required
* @param maxFeatures Max number of features that can be found.
* @param imageType Type of input image.
* @param derivType Image derivative type.
* @return CornerLaplaceScaleSpace
*/
public static <T extends ImageGray<T>, D extends ImageGray<D>> FeatureLaplacePyramid<T, D> harrisLaplace(int extractRadius, float detectThreshold, int maxFeatures, Class<T> imageType, Class<D> derivType) {
    GradientCornerIntensity<D> harris = FactoryIntensityPointAlg.harris(extractRadius, 0.04f, false, derivType);
    GeneralFeatureIntensity<T, D> intensity = new WrapperGradientCornerIntensity<>(harris);
    NonMaxSuppression extractor = FactoryFeatureExtractor.nonmax(new ConfigExtract(extractRadius, detectThreshold, extractRadius, true));
    GeneralFeatureDetector<T, D> detector = new GeneralFeatureDetector<>(intensity, extractor);
    detector.setMaxFeatures(maxFeatures);

    AnyImageDerivative<T, D> deriv = GImageDerivativeOps.derivativeForScaleSpace(imageType, derivType);
    ImageFunctionSparse<T> sparseLaplace = FactoryDerivativeSparse.createLaplacian(imageType, null);

    return new FeatureLaplacePyramid<>(detector, sparseLaplace, deriv, 2);
}
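The same Harris-intensity plus NonMaxSuppression pairing can also be used on its own, outside the Laplacian scale-space wrapper. The sketch below is an illustration rather than code from the project; the method name, the GrayF32 image type, the radius of 2, and the threshold of 10 are all assumptions, and the detector is fed Sobel derivatives in the usual way.

public static QueueCorner detectHarrisCorners( GrayF32 image ) {
    // identical building blocks to harrisLaplace(), but applied at a single scale
    GradientCornerIntensity<GrayF32> harris = FactoryIntensityPointAlg.harris(2, 0.04f, false, GrayF32.class);
    GeneralFeatureIntensity<GrayF32, GrayF32> intensity = new WrapperGradientCornerIntensity<>(harris);
    NonMaxSuppression extractor = FactoryFeatureExtractor.nonmax(new ConfigExtract(2, 10f, 2, true));
    GeneralFeatureDetector<GrayF32, GrayF32> detector = new GeneralFeatureDetector<>(intensity, extractor);

    // Harris needs the image gradient; the Hessian arguments are not required and are left null
    GrayF32 derivX = new GrayF32(image.width, image.height);
    GrayF32 derivY = new GrayF32(image.width, image.height);
    GImageDerivativeOps.gradient(DerivativeType.SOBEL, image, derivX, derivY, BorderType.EXTENDED);

    detector.process(image, derivX, derivY, null, null, null);
    return detector.getMaximums();
}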
Use of boofcv.abst.feature.detect.extract.NonMaxSuppression in project BoofCV by lessthanoptimal.
The class TestWrapVisOdomQuadPnP, method createAlgorithm.
@Override
public StereoVisualOdometry<GrayF32> createAlgorithm() {
    // Shi-Tomasi corner intensity combined with non-maximum suppression forms the point detector
    GeneralFeatureIntensity intensity = FactoryIntensityPoint.shiTomasi(1, false, GrayF32.class);
    NonMaxSuppression nonmax = FactoryFeatureExtractor.nonmax(new ConfigExtract(2, 1, 0, true, false, true));
    GeneralFeatureDetector<GrayF32, GrayF32> general = new GeneralFeatureDetector<>(intensity, nonmax);
    general.setMaxFeatures(600);

    // wrap the single detector in the multi-detector interface and pair it with fast SURF descriptors
    DetectorInterestPointMulti detector = new GeneralToInterestMulti(general, 2, GrayF32.class, GrayF32.class);
    DescribeRegionPoint describe = FactoryDescribeRegionPoint.surfFast(null, GrayF32.class);
    DetectDescribeMulti detDescMulti = new DetectDescribeMultiFusion(detector, null, describe);

    // stereo visual odometry that estimates motion from quadruples of associated features (Quad PnP)
    return FactoryVisualOdometry.stereoQuadPnP(1.5, 0.5, 200, Double.MAX_VALUE, 300, 50, detDescMulti, GrayF32.class);
}
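For context, a StereoVisualOdometry instance created this way is normally given stereo calibration and then fed rectified frame pairs. The driver below is only a sketch with assumed variable and method names; none of it comes from the test class above.

public static void trackCamera( StereoVisualOdometry<GrayF32> visualOdometry, StereoParameters calibration,
                                List<GrayF32> leftFrames, List<GrayF32> rightFrames ) {
    visualOdometry.setCalibration(calibration);

    for (int i = 0; i < leftFrames.size(); i++) {
        if (!visualOdometry.process(leftFrames.get(i), rightFrames.get(i))) {
            System.out.println("Motion estimate failed at frame " + i);
            continue;
        }
        // pose of the left camera in the world frame after processing this stereo pair
        Se3_F64 leftToWorld = visualOdometry.getCameraToWorld();
        System.out.println("frame " + i + "  T = " + leftToWorld.getT());
    }
}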
Use of boofcv.abst.feature.detect.extract.NonMaxSuppression in project BoofCV by lessthanoptimal.
The class ExampleNonMaximumSupression, method renderNonMax.
public static BufferedImage renderNonMax(GrayF32 intensity, int radius, float threshold) {
    // Create and configure the feature detector
    NonMaxSuppression nonmax = FactoryFeatureExtractor.nonmax(new ConfigExtract(radius, threshold));

    // We will only be searching for the maximums. Other variants will look for minimums or will exclude
    // previous candidate detections from being detected twice
    QueueCorner maximums = new QueueCorner();
    nonmax.process(intensity, null, null, null, maximums);

    // Visualize the intensity image
    BufferedImage output = new BufferedImage(intensity.width, intensity.height, BufferedImage.TYPE_INT_RGB);
    VisualizeImageData.colorizeSign(intensity, output, -1);

    // Render each maximum with a circle
    Graphics2D g2 = output.createGraphics();
    g2.setColor(Color.blue);
    for (int i = 0; i < maximums.size(); i++) {
        Point2D_I16 c = maximums.get(i);
        VisualizeFeatures.drawCircle(g2, c.x, c.y, radius);
    }
    return output;
}
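A small, self-contained driver such as the one below could exercise renderNonMax. The synthetic peak values and window title are made up for illustration; in practice the intensity image would come from a corner or blob detector.

public static void main( String[] args ) {
    // synthetic intensity image with a few isolated peaks, so the sketch needs no input files
    GrayF32 intensity = new GrayF32(320, 240);
    intensity.set(60, 80, 100);
    intensity.set(200, 150, 80);
    intensity.set(250, 40, 120);

    // every peak above the threshold of 50 should be circled in the rendered output
    BufferedImage visualized = renderNonMax(intensity, 10, 50);
    ShowImages.showWindow(visualized, "Non-Maximum Suppression");
}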
Use of boofcv.abst.feature.detect.extract.NonMaxSuppression in project BoofCV by lessthanoptimal.
The class ExampleFeatureSurf, method harder.
/**
 * Configured exactly the same as the easy example above, but requires a lot more code and a more in-depth
 * understanding of how SURF works and is configured. Instead of TupleDesc_F64, SurfFeature descriptors are
 * computed in this case. They are almost the same as TupleDesc_F64, but contain the Laplacian's sign, which
 * can be used to speed up association. That is an example of how using less generalized interfaces can
 * improve performance.
 *
 * @param image Input image. Does not need to be GrayF32; GrayU8 works too.
 */
public static <II extends ImageGray<II>> void harder(GrayF32 image) {
    // SURF works off of integral images
    Class<II> integralType = GIntegralImageOps.getIntegralType(GrayF32.class);

    // define the feature detection algorithm
    NonMaxSuppression extractor = FactoryFeatureExtractor.nonmax(new ConfigExtract(2, 0, 5, true));
    FastHessianFeatureDetector<II> detector = new FastHessianFeatureDetector<>(extractor, 200, 2, 9, 4, 4, 6);

    // estimate orientation
    OrientationIntegral<II> orientation = FactoryOrientationAlgs.sliding_ii(null, integralType);

    DescribePointSurf<II> descriptor = FactoryDescribePointAlgs.<II>surfStability(null, integralType);

    // compute the integral image of 'image'
    II integral = GeneralizedImageOps.createSingleBand(integralType, image.width, image.height);
    GIntegralImageOps.transform(image, integral);

    // detect fast hessian features
    detector.detect(integral);
    // tell algorithms which image to process
    orientation.setImage(integral);
    descriptor.setImage(integral);

    List<ScalePoint> points = detector.getFoundPoints();

    List<BrightFeature> descriptions = new ArrayList<>();
    for (ScalePoint p : points) {
        // estimate orientation
        orientation.setObjectRadius(p.scale * BoofDefaults.SURF_SCALE_TO_RADIUS);
        double angle = orientation.compute(p.x, p.y);

        // extract the SURF description for this region
        BrightFeature desc = descriptor.createDescription();
        descriptor.describe(p.x, p.y, angle, p.scale, desc);

        // save everything for processing later on
        descriptions.add(desc);
    }

    System.out.println("Found Features: " + points.size());
    System.out.println("First descriptor's first value: " + descriptions.get(0).value[0]);
}
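A hypothetical main() like the one below could feed an image into harder(). The file path is a placeholder, and the loading/conversion calls follow the usual BoofCV pattern rather than coming from this example class.

public static void main( String[] args ) {
    // load an image from disk and convert it into the single-band float format used above
    BufferedImage buffered = UtilImageIO.loadImage("path/to/some_image.jpg");
    GrayF32 gray = ConvertBufferedImage.convertFromSingle(buffered, null, GrayF32.class);
    harder(gray);
}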