
Example 21 with Point

use of net.imglib2.Point in project mastodon-tracking by mastodon-sc.

The class DetectionUtil, method findLocalMaxima.

public static final List<Point> findLocalMaxima(final RandomAccessibleInterval<FloatType> source, final double threshold, final ExecutorService service) {
    final FloatType val = new FloatType();
    val.setReal(threshold);
    final LocalNeighborhoodCheck<Point, FloatType> localNeighborhoodCheck = new LocalExtrema.MaximumCheck<>(val);
    final IntervalView<FloatType> extended = Views.interval(Views.extendMirrorSingle(source), Intervals.expand(source, 1));
    final RectangleShape shape = new RectangleShape(1, true);
    // At least one task, even on a single-core machine.
    final int numTasks = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);
    List<Point> peaks = new ArrayList<>();
    try {
        peaks = LocalExtrema.findLocalExtrema(extended, localNeighborhoodCheck, shape, service, numTasks);
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
    return peaks;
}
Also used : RectangleShape(net.imglib2.algorithm.neighborhood.RectangleShape) ArrayList(java.util.ArrayList) Point(net.imglib2.Point) ExecutionException(java.util.concurrent.ExecutionException) FloatType(net.imglib2.type.numeric.real.FloatType)
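findLocalMaxima delegates the actual search to LocalExtrema.findLocalExtrema with a span-1 RectangleShape that skips the centre pixel. The core check can be sketched in plain Java, as a standalone illustration on a 2D float array rather than the imglib2 API; borders are simply skipped here instead of mirror-extended, and the threshold semantics (peaks must reach the threshold and strictly exceed all 8 neighbours) are an assumption of this sketch:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of a thresholded 3x3 local-maximum search: a pixel is a
 * peak if it reaches the threshold and is strictly greater than its 8
 * neighbours. Illustrative only, not the imglib2 LocalExtrema API.
 */
public class LocalMaxima2D {

    public static List<int[]> findLocalMaxima(final float[][] img, final float threshold) {
        final List<int[]> peaks = new ArrayList<>();
        for (int y = 1; y < img.length - 1; y++) {
            for (int x = 1; x < img[y].length - 1; x++) {
                final float v = img[y][x];
                if (v < threshold)
                    continue;
                boolean isMax = true;
                // Compare against the 8 neighbours, skipping the centre.
                for (int dy = -1; dy <= 1 && isMax; dy++)
                    for (int dx = -1; dx <= 1 && isMax; dx++)
                        if (!(dy == 0 && dx == 0) && img[y + dy][x + dx] >= v)
                            isMax = false;
                if (isMax)
                    peaks.add(new int[] { x, y });
            }
        }
        return peaks;
    }
}
```

The real method additionally parallelizes the search over an ExecutorService and extends the image border by mirroring, so peaks on the boundary are not lost.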

Example 22 with Point

use of net.imglib2.Point in project mastodon-tracking by mastodon-sc.

The class DetectionUtil, method determineOptimalResolutionLevel.

/**
 * Determines the optimal resolution level for detection of an object of a
 * given size (in physical units).
 * <p>
 * The size here is specified in <b>physical units</b>. The calibration
 * information is retrieved from the spimData to estimate the object size in
 * pixel units.
 * <p>
 *
 * Typically, even with LSFMs, the Z sampling can be much lower than in X
 * and Y. The pixel size in Z is then much larger than in X and Y. For
 * instance on a 25x water objective imaging on a 2048x2048 sCMOS camera the
 * pixel size in X, Y and Z are respectively
 * <code>[ 0.35, 0.35, 1.5 ] µm</code>. This is going to be a common case
 * for microscopists using modern cameras.
 * <p>
 * There is a factor 4 between X and Z pixel sizes. The BDV conversion tool
 * picks this up correctly, and proposes the following mipmap scales:
 *
 * <pre>
 *0: [ 1 1 1
 *1:   2 2 1
 *2:   4 4 1
 *3:   8 8 2 ]
 * </pre>
 *
 * If we are to detect nuclei that are about 3µm in radius, we would like
 * them to be at most 2.5 pixels in all directions at the optimal
 * resolution level.
 * <p>
 * This algorithm deals with this by doing the following:
 * <ul>
 * <li>Iterate to level i.
 * <li>Compute the size of object in all dimensions.
 * <li>Iterate to dimension d.
 * <li>If the size of object at this dimension is smaller than the limit,
 * then we stop at this level, but only if:
 * <ul>
 * <li>we lower the size of the object in this dimension even more (compared
 * with previous level).
 * <li>all dimensions are below the limit for the first time.
 * </ul>
 * </ul>
 * With the previous example, the algorithm performs as follow:
 * <ul>
 * <li>Iterate to level 0.
 * <li>At this level, the size of my object is <code>[ 9.6, 9.6, 2.0 ]
 * pixels</code>.
 * <li>The Z dimension has a size 2.0 pixels, below 2.5 pixels. But:
 * <ul>
 * <li>this is the first time,
 * <li>and the other dimensions are above the limit.
 * </ul>
 * <li>Iterate to level 1.
 * <li>At this level, the size of my object is
 * <code>[ 4.8, 4.8, 2.0 ] pixels</code>.
 * <li>The Z dimension has a size 2.0 pixels, below 2.5 pixels. But:
 * <ul>
 * <li>this is NOT the first time, but we did not decrease its size further.
 * <li>and the other dimensions are still above the limit.
 * </ul>
 * <li>Iterate to level 2.
 * <li>At this level, the size of my object is
 * <code>[ 2.4, 2.4, 2.0 ] pixels.</code>
 * <li>All dimensions are below the limit -&gt; we stop there.
 * </ul>
 * <p>
 * If the data does not ship multiple resolution levels, this method
 * returns 0.
 *
 * @param sources
 *            the image data.
 * @param size
 *            the size of an object measured at resolution level 0, <b>in
 *            physical units</b>.
 * @param minSizePixel
 *            the desired minimal size in pixel units of the same object in
 *            higher resolution levels.
 * @param timepoint
 *            the time-point to query.
 * @param setup
 *            the setup id to query.
 * @return the largest resolution level at which the object size is still
 *         larger than the minimal desired size. Returns 0 if the data does
 *         not ship multiple resolution levels.
 */
public static final int determineOptimalResolutionLevel(final List<SourceAndConverter<?>> sources, final double size, final double minSizePixel, final int timepoint, final int setup) {
    final int numMipmapLevels = sources.get(setup).getSpimSource().getNumMipmapLevels();
    int level = 0;
    final int nDims = numDimensions(sources, setup, timepoint);
    final double[] previousSizeInPix = new double[nDims];
    Arrays.fill(previousSizeInPix, Double.POSITIVE_INFINITY);
    final boolean[] belowLimit = new boolean[nDims];
    Arrays.fill(belowLimit, false);
    while (level < numMipmapLevels - 1) {
        /*
         * There is probably a more compact way to implement this algorithm,
         * but this one expresses what we have in mind.
         */
        final AffineTransform3D transform = getTransform(sources, timepoint, setup, level);
        final double[] sizeThisLevel = new double[nDims];
        for (int d = 0; d < sizeThisLevel.length; d++) {
            sizeThisLevel[d] = size / Affine3DHelpers.extractScale(transform, d);
            // Are we below the limit?
            if (sizeThisLevel[d] < minSizePixel) {
                // Yes! Was it already the case at the previous level?
                if (belowLimit[d]) {
                    // Did we shrink the object even further at this level?
                    if (sizeThisLevel[d] < previousSizeInPix[d]) {
                        // Yes: this level is too coarse for this dimension.
                        break;
                    }
                    /*
                     * No: the size did not decrease further. If the other
                     * dimensions are fine we are fine, but we won't allow
                     * going smaller than this.
                     */
                }
                // Remember that we are below limit for this dimension.
                belowLimit[d] = true;
            } else {
                // We are not below the limit; we can go to a coarser level.
            }
            previousSizeInPix[d] = sizeThisLevel[d];
        }
        // Now that we have checked all dimensions, are they all below the limit?
        if (isAllTrue(belowLimit)) {
            // Yes! We stop there.
            break;
        }
        level++;
    }
    return level;
}
Also used : Point(net.imglib2.Point) AffineTransform3D(net.imglib2.realtransform.AffineTransform3D)
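The walkthrough in the javadoc can be reproduced as a standalone sketch, using plain arrays instead of imglib2 sources. All names and the signature below are illustrative, not the mastodon-tracking API; the mipmap scales and object size in the test are the values from the example above:

```java
import java.util.Arrays;

/**
 * Standalone sketch of the level-selection rule documented above.
 * mipmapScales[level][d] is the downsampling factor of dimension d at
 * that level; sizePixelLevel0[d] is the object size in pixels at level 0.
 */
public class OptimalLevel {

    public static int determineOptimalLevel(final double[][] mipmapScales,
            final double[] sizePixelLevel0, final double minSizePixel) {
        final int nDims = sizePixelLevel0.length;
        final double[] previousSizeInPix = new double[nDims];
        Arrays.fill(previousSizeInPix, Double.POSITIVE_INFINITY);
        final boolean[] belowLimit = new boolean[nDims];
        int level = 0;
        while (level < mipmapScales.length - 1) {
            boolean allBelow = true;
            for (int d = 0; d < nDims; d++) {
                final double sizeThisLevel = sizePixelLevel0[d] / mipmapScales[level][d];
                if (sizeThisLevel < minSizePixel) {
                    // Already below at the previous level AND shrunk even
                    // further: stop at this level, per the javadoc rule.
                    if (belowLimit[d] && sizeThisLevel < previousSizeInPix[d])
                        return level;
                    belowLimit[d] = true;
                }
                previousSizeInPix[d] = sizeThisLevel;
                allBelow &= belowLimit[d];
            }
            // Every dimension has dipped below the limit: stop here.
            if (allBelow)
                return level;
            level++;
        }
        return level;
    }
}
```

With the javadoc's example (scales `[1 1 1; 2 2 1; 4 4 1; 8 8 2]`, object size `[9.6, 9.6, 2.0]` pixels at level 0, limit 2.5 pixels) this sketch selects level 2, matching the walkthrough; with a single resolution level it returns 0.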

Example 23 with Point

use of fi.livi.digitraffic.tie.datex2.Point in project digitraffic-road by tmfg.

The class TmsStationMetadata2Datex2Converter, method getMeasurementSiteRecord.

private static MeasurementSiteRecord getMeasurementSiteRecord(final TmsStation station, final RoadStationSensor sensor) {
    final fi.livi.digitraffic.tie.metadata.geojson.Point point = AbstractMetadataToFeatureConverter.getETRS89CoordinatesPoint(station.getRoadStation());
    final MeasurementSiteRecord measurementSiteRecord = new MeasurementSiteRecord()
            .withId(getMeasurementSiteReference(station.getNaturalId(), sensor.getNaturalId()))
            .withMeasurementSiteIdentification(getMeasurementSiteReference(station.getNaturalId(), sensor.getNaturalId()))
            .withVersion(MEASUREMENT_SITE_RECORD_VERSION)
            .withMeasurementSiteName(getName(sensor))
            .withMeasurementSiteLocation(new Point()
                    .withPointByCoordinates(new PointByCoordinates()
                            .withPointCoordinates(new PointCoordinates()
                                    .withLongitude(point != null && point.getLongitude() != null ? point.getLongitude().floatValue() : 0)
                                    .withLatitude(point != null && point.getLatitude() != null ? point.getLatitude().floatValue() : 0))));
    if (sensor.getAccuracy() != null) {
        measurementSiteRecord.withMeasurementSpecificCharacteristics(new MeasurementSiteRecordIndexMeasurementSpecificCharacteristics()
                .withIndex(1)
                .withMeasurementSpecificCharacteristics(new MeasurementSpecificCharacteristics()
                        .withAccuracy(sensor.getAccuracy().floatValue())));
    }
    return measurementSiteRecord;
}
Also used : PointByCoordinates(fi.livi.digitraffic.tie.datex2.PointByCoordinates) PointCoordinates(fi.livi.digitraffic.tie.datex2.PointCoordinates) MeasurementSiteRecordIndexMeasurementSpecificCharacteristics(fi.livi.digitraffic.tie.datex2.MeasurementSiteRecordIndexMeasurementSpecificCharacteristics) MeasurementSpecificCharacteristics(fi.livi.digitraffic.tie.datex2.MeasurementSpecificCharacteristics) Point(fi.livi.digitraffic.tie.datex2.Point) MeasurementSiteRecord(fi.livi.digitraffic.tie.datex2.MeasurementSiteRecord)

Example 24 with Point

use of net.imglib2.Point in project imagej-utils by embl-cba.

The class BdvUtils, method getSourceIndicesAtSelectedPoint.

// Use bdv-playground instead.
@Deprecated
public static ArrayList<Integer> getSourceIndicesAtSelectedPoint(Bdv bdv, RealPoint selectedPoint, boolean evalSourcesAtPointIn2D) {
    final ArrayList<Integer> sourceIndicesAtSelectedPoint = new ArrayList<>();
    final int numSources = bdv.getBdvHandle().getViewerPanel().state().getSources().size();
    for (int sourceIndex = 0; sourceIndex < numSources; sourceIndex++) {
        final SourceAndConverter<?> sourceState = bdv.getBdvHandle().getViewerPanel().state().getSources().get(sourceIndex);
        final Source<?> source = sourceState.getSpimSource();
        final long[] positionInSource = getPositionInSource(source, selectedPoint, 0, 0);
        Interval interval = source.getSource(0, 0);
        final Point point = new Point(positionInSource);
        if (evalSourcesAtPointIn2D) {
            final long[] min = new long[2];
            final long[] max = new long[2];
            final long[] positionInSource2D = new long[2];
            for (int d = 0; d < 2; d++) {
                min[d] = interval.min(d);
                max[d] = interval.max(d);
                positionInSource2D[d] = positionInSource[d];
            }
            final FinalInterval interval2D = new FinalInterval(min, max);
            final Point point2D = new Point(positionInSource2D);
            if (Intervals.contains(interval2D, point2D))
                sourceIndicesAtSelectedPoint.add(sourceIndex);
        } else {
            if (Intervals.contains(interval, point))
                sourceIndicesAtSelectedPoint.add(sourceIndex);
        }
    }
    return sourceIndicesAtSelectedPoint;
}
Also used : Point(net.imglib2.Point)
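The 2D branch above amounts to dropping the last dimension before a bounds test. That containment check can be sketched standalone with plain arrays (hypothetical names, not the imglib2 Intervals API):

```java
/**
 * Minimal sketch of interval containment restricted to the first nDims
 * dimensions: a point is inside if min[d] <= pos[d] <= max[d] for each of
 * those dimensions. Passing nDims = 2 ignores the Z coordinate, which is
 * what the evalSourcesAtPointIn2D branch above achieves by copying only
 * the first two dimensions into a FinalInterval and a 2D Point.
 */
public class ContainmentCheck {

    public static boolean contains(final long[] min, final long[] max,
            final long[] pos, final int nDims) {
        for (int d = 0; d < nDims; d++)
            if (pos[d] < min[d] || pos[d] > max[d])
                return false;
        return true;
    }
}
```

A point that lies outside a source only in Z is thus rejected by the full 3D test but accepted by the 2D one, which is useful when the selected point sits on a different slice than the source's data.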

Example 25 with Point

use of net.imglib2.Point in project imagej-ops by imagej.

The class ColocalisationTest, method gaussianSmooth.

/**
 * Gaussian smooth of the input image using an intermediate float format.
 *
 * @param <T> the pixel type of the input and output image.
 * @param img the input image.
 * @param sigma the standard deviation of the Gaussian kernel, one value per dimension.
 * @return a new image containing the smoothed data.
 */
public static <T extends RealType<T> & NativeType<T>> Img<T> gaussianSmooth(RandomAccessibleInterval<T> img, double[] sigma) {
    Interval interval = Views.iterable(img);
    ImgFactory<T> outputFactory = new ArrayImgFactory<>(Util.getTypeFromInterval(img));
    final long[] dim = new long[img.numDimensions()];
    img.dimensions(dim);
    Img<T> output = outputFactory.create(dim);
    final long[] pos = new long[img.numDimensions()];
    Arrays.fill(pos, 0);
    Localizable origin = new Point(pos);
    ImgFactory<FloatType> tempFactory = new ArrayImgFactory<>(new FloatType());
    RandomAccessible<T> input = Views.extendMirrorSingle(img);
    Gauss.inFloat(sigma, input, interval, output, origin, tempFactory);
    return output;
}
Also used : Point(net.imglib2.Point) ArrayImgFactory(net.imglib2.img.array.ArrayImgFactory) Localizable(net.imglib2.Localizable) RandomAccessibleInterval(net.imglib2.RandomAccessibleInterval) Interval(net.imglib2.Interval) FloatType(net.imglib2.type.numeric.real.FloatType)
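The smoothing itself can be illustrated with a standalone 1D sketch: a normalized Gaussian kernel, higher-precision accumulation as the intermediate format, and mirror-single boundary extension analogous to Views.extendMirrorSingle above. This is illustrative only, not the Gauss.inFloat API:

```java
/**
 * Minimal 1D sketch of Gaussian smoothing: build a normalized kernel,
 * accumulate in double (the "intermediate format"), and handle the
 * boundary with mirror-single extension. Illustrative, not imglib2.
 */
public class GaussSmooth {

    public static float[] smooth(final float[] in, final double sigma) {
        final int radius = Math.max(1, (int) Math.ceil(3 * sigma));
        final double[] kernel = new double[2 * radius + 1];
        double sum = 0;
        for (int i = -radius; i <= radius; i++) {
            kernel[i + radius] = Math.exp(-i * i / (2 * sigma * sigma));
            sum += kernel[i + radius];
        }
        // Normalize so a constant image passes through unchanged.
        for (int i = 0; i < kernel.length; i++)
            kernel[i] /= sum;
        final float[] out = new float[in.length];
        for (int x = 0; x < in.length; x++) {
            double acc = 0;
            for (int i = -radius; i <= radius; i++)
                acc += kernel[i + radius] * in[mirror(x + i, in.length)];
            out[x] = (float) acc;
        }
        return out;
    }

    /** Mirror-single indexing: ..., 2, 1, 0, 1, 2, ..., n-2, n-1, n-2, ... */
    private static int mirror(int x, final int n) {
        if (n == 1)
            return 0;
        while (x < 0 || x >= n) {
            if (x < 0)
                x = -x;
            if (x >= n)
                x = 2 * (n - 1) - x;
        }
        return x;
    }
}
```

Because the kernel is normalized and the boundary mirrored, a flat signal is preserved exactly (up to float rounding), while an isolated spike is spread out and lowered.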

Aggregations

Point (net.imglib2.Point)33 ArrayList (java.util.ArrayList)16 FloatType (net.imglib2.type.numeric.real.FloatType)11 Test (org.junit.Test)9 List (java.util.List)8 Point (com.google.monitoring.v3.Point)7 Point (hr.fer.oop.recap2.task2.Point)7 FinalInterval (net.imglib2.FinalInterval)7 RealPoint (net.imglib2.RealPoint)7 TimeSeries (com.google.monitoring.v3.TimeSeries)6 Point (de.micromata.opengis.kml.v_2_2_0.Point)6 Interval (net.imglib2.Interval)6 RandomAccessibleInterval (net.imglib2.RandomAccessibleInterval)6 HyperSphere (net.imglib2.algorithm.region.hypersphere.HyperSphere)6 AffineTransform3D (net.imglib2.realtransform.AffineTransform3D)6 HashMap (java.util.HashMap)5 Metric (com.google.api.Metric)4 TimeInterval (com.google.monitoring.v3.TimeInterval)4 TypedValue (com.google.monitoring.v3.TypedValue)4 Map (java.util.Map)4