
Example 11 with Gamma

use of org.apache.commons.math3.special.Gamma in project GDSC-SMLM by aherbert.

the class PoissonGammaGaussianFisherInformation method findMaximum.

/**
 * Find the maximum of the integrated function A^2/P. P is the Poisson-Gamma convolution, A is the
 * partial gradient.
 *
 * <p>When both A and P are convolved with a Gaussian kernel, the integral of this function - 1 is
 * the Fisher information.
 *
 * <p>This method is used to determine the maximum of the function. The integral of A should equal
 * 1. The range can be determined using a fraction of the maximum, or integrating until the sum is
 * 1.
 *
 * @param theta the Poisson mean
 * @param rel Relative threshold
 * @return [max,max value,evaluations]
 */
public double[] findMaximum(final double theta, double rel) {
    if (theta < MIN_MEAN) {
        throw new IllegalArgumentException("Poisson mean must be positive");
    }
    final double dirac = PoissonGammaFunction.dirac(theta);
    final UnivariateFunction f = new UnivariateFunction() {

        double[] gradient = new double[1];

        @Override
        public double value(double x) {
            // Return F without the dirac. This is so that the maximum of the function
            // is known for relative height analysis.
            double value;
            if (x == 0) {
                gradient[0] = dirac / gain;
                value = gradient[0] * theta;
            } else {
                value = PoissonGammaFunction.unscaledPoissonGammaPartial(x, theta, gain, gradient);
            }
            return getF(value, gradient[0]);
        }
    };
    // Determine the extent of the function: A^2/P
    // Without convolution this will have a maximum that lies above, and converges
    // towards, [Poisson mean * amplification].
    final double mean = theta * gain;
    final int iterations = 10;
    final QuickConvergenceChecker checker = new QuickConvergenceChecker(iterations);
    final BrentOptimizer opt = new BrentOptimizer(rel, Double.MIN_VALUE, checker);
    UnivariatePointValuePair pair;
    try {
        final double start = (theta < 1) ? 0 : mean;
        pair = opt.optimize(new SearchInterval(0, mean * 1.5, start), GoalType.MAXIMIZE, new MaxEval(50000), new UnivariateObjectiveFunction(f));
    } catch (final TooManyEvaluationsException ex) {
        // This should not occur with the quick checker fixed iterations
        // - just get the current best
        pair = checker.getBest();
    }
    final double[] max = new double[3];
    max[0] = pair.getPoint();
    max[1] = pair.getValue();
    max[2] = opt.getEvaluations();
    return max;
}
Also used : SearchInterval(org.apache.commons.math3.optim.univariate.SearchInterval) MaxEval(org.apache.commons.math3.optim.MaxEval) TooManyEvaluationsException(org.apache.commons.math3.exception.TooManyEvaluationsException) UnivariateFunction(org.apache.commons.math3.analysis.UnivariateFunction) UnivariateObjectiveFunction(org.apache.commons.math3.optim.univariate.UnivariateObjectiveFunction) BrentOptimizer(org.apache.commons.math3.optim.univariate.BrentOptimizer) UnivariatePointValuePair(org.apache.commons.math3.optim.univariate.UnivariatePointValuePair)
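
The example above depends on GDSC-internal helpers (PoissonGammaFunction, QuickConvergenceChecker, getF), so it cannot be run on its own. Below is a minimal, self-contained sketch of the same Commons Math pattern (BrentOptimizer driven through SearchInterval, UnivariateObjectiveFunction and GoalType.MAXIMIZE), assuming only commons-math3 on the classpath; the class name and the toy objective standing in for A^2/P are illustrative, not part of the project.

import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.univariate.BrentOptimizer;
import org.apache.commons.math3.optim.univariate.SearchInterval;
import org.apache.commons.math3.optim.univariate.UnivariateObjectiveFunction;
import org.apache.commons.math3.optim.univariate.UnivariatePointValuePair;

public class BrentMaximumSketch {
    public static void main(String[] args) {
        // Toy objective (hypothetical stand-in for A^2/P): unimodal with a maximum near x = 40.
        final UnivariateFunction f = x -> x * Math.exp(-x / 40.0);
        // Relative and absolute convergence tolerances (the absolute tolerance must be > 0).
        final BrentOptimizer opt = new BrentOptimizer(1e-6, 1e-10);
        // Maximise over [0, 200], starting the search at x = 100.
        final UnivariatePointValuePair pair = opt.optimize(
                new SearchInterval(0, 200, 100),
                GoalType.MAXIMIZE,
                new MaxEval(1000),
                new UnivariateObjectiveFunction(f));
        // The same [max, max value, evaluations] triple that findMaximum above packs into an array.
        System.out.printf("max=%g value=%g evaluations=%d%n",
                pair.getPoint(), pair.getValue(), opt.getEvaluations());
    }
}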

Example 12 with Gamma

use of org.apache.commons.math3.special.Gamma in project GDSC-SMLM by aherbert.

the class CameraModelAnalysis method cumulativeDistribution.

private static double[][] cumulativeDistribution(CameraModelAnalysisSettings settings, double[][] cdf, final LikelihoodFunction fun) {
    // Q. How to match this to the discrete cumulative histogram using the continuous
    // likelihood function:
    // 1. Compute integral up to the value
    // 2. Compute integral up to but not including the next value using trapezoid integration
    // 3. Compute integral up to but not including the next value using flat-top integration
    // Since the function will be used on continuous float data when fitting PSFs, the best
    // match for how it will perform in practice is a continuous (trapezoid) integration.
    // The simplest is a flat-top integration.
    // Compute the probability at each value
    final double e = settings.getPhotons();
    double[] x = cdf[0];
    double[] y = new double[x.length];
    for (int i = 0; i < x.length; i++) {
        y[i] = fun.likelihood(x[i], e);
    }
    // Add more until the probability change is marginal
    double sum = MathUtils.sum(y);
    final TDoubleArrayList list = new TDoubleArrayList(y);
    for (int o = (int) x[x.length - 1] + 1; ; o++) {
        final double p = fun.likelihood(o, e);
        list.add(p);
        if (p == 0) {
            break;
        }
        sum += p;
        if (p / sum < PROBABILITY_DELTA) {
            break;
        }
    }
    final TDoubleArrayList list2 = new TDoubleArrayList(10);
    for (int o = (int) x[0] - 1; ; o--) {
        final double p = fun.likelihood(o, e);
        list2.add(p);
        if (p == 0) {
            break;
        }
        sum += p;
        if (p / sum < PROBABILITY_DELTA) {
            break;
        }
    }
    // Insert at start
    double start = x[0];
    if (!list2.isEmpty()) {
        start -= list2.size();
        list2.reverse();
        list.insert(0, list2.toArray());
    }
    y = list.toArray();
    x = SimpleArrayUtils.newArray(y.length, start, 1.0);
    if (settings.getSimpsonIntegration()) {
        int c0 = -1;
        double dirac = 0;
        int minc = 0;
        int maxc = 0;
        CachingUnivariateFunction uf = null;
        if (settings.getMode() == MODE_EM_CCD && isPoissonGammaLikelihoodFunction(settings)) {
            // A spike is expected at c=0 due to the Dirac delta contribution.
            // This breaks integration, especially when noise < 0.1.
            // Fix by integrating around c=0 fully then integrating the rest.
            c0 = Arrays.binarySearch(x, 0);
            final double noise = getReadNoise(settings);
            final double p = settings.getPhotons();
            if (noise == 0) {
                // Pure Poisson-Gamma. Just subtract the delta, do the simple integration
                // below and add the delta back. Only functions that support noise==0
                // will be allowed so this solution works.
                dirac = PoissonGammaFunction.dirac(p);
                if (c0 != -1) {
                    y[c0] -= dirac;
                }
            } else {
                // Fix integration around c=0 using the range of the Gaussian
                minc = (int) Math.max(x[0], Math.floor(-5 * noise));
                maxc = (int) Math.min(x[x.length - 1], Math.ceil(5 * noise));
                uf = new CachingUnivariateFunction(fun, p);
            }
        }
        // Use Simpson's integration with n=4 to get the integral of the probability
        // over the range of each count.
        // Note the Poisson-Gamma function cannot be integrated with the
        // Dirac delta function at c==0
        // Compute the extra function points
        final double[] f = new double[y.length * SIMPSON_N + 1];
        int index = f.length;
        final int mod;
        if (settings.getRoundDown()) {
            // Do this final value outside the loop as y[index/n] does not exist
            mod = 0;
            index--;
            f[index] = fun.likelihood(start + index * SIMPSON_H, e);
        } else {
            // Used when computing for rounding up/down
            mod = SIMPSON_N_2;
            start -= SIMPSON_N_2 * SIMPSON_H;
        }
        while (index-- > 0) {
            if (index % SIMPSON_N == mod) {
                f[index] = y[index / SIMPSON_N];
            } else {
                f[index] = fun.likelihood(start + index * SIMPSON_H, e);
            }
        }
        // Compute indices for the integral
        final TIntArrayList tmp = new TIntArrayList(SIMPSON_N);
        for (int j = 1; j <= SIMPSON_N_2 - 1; j++) {
            tmp.add(2 * j);
        }
        final int[] i2 = tmp.toArray();
        tmp.resetQuick();
        for (int j = 1; j <= SIMPSON_N_2; j++) {
            tmp.add(2 * j - 1);
        }
        final int[] i4 = tmp.toArray();
        // Compute integral
        for (int i = 0; i < y.length; i++) {
            final int a = i * SIMPSON_N;
            final int b = a + SIMPSON_N;
            sum = f[a] + f[b];
            for (int j = i2.length; j-- > 0; ) {
                sum += 2 * f[a + i2[j]];
            }
            for (int j = i4.length; j-- > 0; ) {
                sum += 4 * f[a + i4[j]];
            }
            sum *= SIMPSON_H / 3;
            // System.out.printf("y[%d] = %f => %f\n", i, y[i], s);
            y[i] = sum;
        }
        // Fix Poisson-Gamma ...
        if (c0 != -1) {
            if (uf != null) {
                // Convolved Poisson-Gamma. Fix in the range of the Gaussian around c=0
                final CustomSimpsonIntegrator in = new CustomSimpsonIntegrator(SIMPSON_RELATIVE_ACCURACY, SIMPSON_ABSOLUTE_ACCURACY, SIMPSON_MIN_ITERATION_COUNT, CustomSimpsonIntegrator.SIMPSON_MAX_ITERATIONS_COUNT);
                final double lower = (settings.getRoundDown()) ? 0 : -0.5;
                final double upper = lower + 1;
                // Switch from c<=maxc to c<maxc. Avoid double computation at minc==maxc
                if (maxc != minc) {
                    maxc++;
                }
                maxc++;
                for (int c = minc, i = Arrays.binarySearch(x, minc); c < maxc; c++, i++) {
                    uf.reset();
                    try {
                        y[i] = in.integrate(2000, uf, c + lower, c + upper);
                    } catch (final TooManyEvaluationsException ex) {
                        // Q. Is the last sum valid?
                        if (in.getLastSum() > 0) {
                            y[i] = in.getLastSum();
                        } else {
                            // Otherwise use all the cached values to compute a sum
                            // using the trapezoid rule. This will underestimate the sum.
                            // Note: The Simpson integrator will have computed the edge values
                            // as the first two values in the cache.
                            final double[] g = uf.list.toArray();
                            final double dx = (g[3] - g[1]) / in.getN();
                            final int total = 1 + 2 * ((int) in.getN());
                            sum = 0;
                            for (int j = 4; j < total; j += 2) {
                                sum += g[j];
                            }
                            y[i] = (g[0] + g[2] + 2 * sum) / dx;
                        }
                    }
                }
            } else {
                // Pure Poisson-Gamma. Just add back the delta.
                y[c0] += dirac;
            }
        }
    }
    // Simple flat-top integration
    sum = 0;
    for (int i = 0; i < y.length; i++) {
        sum += y[i];
        y[i] = sum;
    }
    return new double[][] { x, y };
}
Also used : TooManyEvaluationsException(org.apache.commons.math3.exception.TooManyEvaluationsException) TDoubleArrayList(gnu.trove.list.array.TDoubleArrayList) CustomSimpsonIntegrator(uk.ac.sussex.gdsc.smlm.math3.analysis.integration.CustomSimpsonIntegrator) TIntArrayList(gnu.trove.list.array.TIntArrayList)
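
For reference, the inner loop above applies the composite Simpson weights 1, 4, 2, 4, ..., 4, 1 scaled by h/3 over the SIMPSON_N sub-intervals of each unit-width count bin, and the fix around c = 0 falls back to an adaptive Simpson integrator. The following is a minimal, self-contained sketch of both ideas, assuming only commons-math3 and a stand-in Gaussian density in place of the camera-model likelihood; CustomSimpsonIntegrator is a GDSC-specific variant, so the stock SimpsonIntegrator is used here.

import java.util.function.DoubleUnaryOperator;
import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.analysis.integration.SimpsonIntegrator;

public class BinIntegrationSketch {
    /** Composite Simpson's rule with an even number n of sub-intervals on [a, b]. */
    static double simpson(DoubleUnaryOperator f, double a, double b, int n) {
        final double h = (b - a) / n;
        double sum = f.applyAsDouble(a) + f.applyAsDouble(b);
        for (int j = 1; j < n; j++) {
            // Interior points: weight 4 at odd offsets, weight 2 at even offsets.
            sum += (j % 2 == 1 ? 4 : 2) * f.applyAsDouble(a + j * h);
        }
        return sum * h / 3;
    }

    public static void main(String[] args) {
        // Stand-in likelihood: a standard Gaussian density.
        final DoubleUnaryOperator p = c -> Math.exp(-0.5 * c * c) / Math.sqrt(2 * Math.PI);
        // Probability mass of the count bin [2.5, 3.5) using n = 4, as in the example.
        System.out.println(simpson(p, 2.5, 3.5, 4));
        // The same bin via the adaptive Commons Math Simpson integrator.
        final SimpsonIntegrator in = new SimpsonIntegrator(
                1e-6, 1e-10, 3, SimpsonIntegrator.SIMPSON_MAX_ITERATIONS_COUNT);
        final UnivariateFunction uf = c -> Math.exp(-0.5 * c * c) / Math.sqrt(2 * Math.PI);
        System.out.println(in.integrate(10000, uf, 2.5, 3.5));
    }
}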

Example 13 with Gamma

use of org.apache.commons.math3.special.Gamma in project GDSC-SMLM by aherbert.

the class AstigmatismModelManager method doCurveFit.

private boolean doCurveFit() {
    // Estimate:
    // Focal plane = where width is at a minimum
    // s0x/s0y = the min width of x/y
    // gamma = Half the distance between the focal planes
    // z0 = half way between the two focal planes
    // d = depth of focus
    double[] smoothSx = fitSx;
    double[] smoothSy = fitSy;
    if (pluginSettings.getSmoothing() > 0) {
        final LoessInterpolator loess = new LoessInterpolator(pluginSettings.getSmoothing(), 0);
        smoothSx = loess.smooth(fitZ, fitSx);
        smoothSy = loess.smooth(fitZ, fitSy);
        final Plot plot = widthPlot.getPlot();
        plot.setColor(Color.RED);
        plot.addPoints(fitZ, smoothSx, Plot.LINE);
        plot.setColor(Color.BLUE);
        plot.addPoints(fitZ, smoothSy, Plot.LINE);
        plot.setColor(Color.BLACK);
        plot.updateImage();
    }
    final int focalPlaneXindex = SimpleArrayUtils.findMinIndex(smoothSx);
    final int focalPlaneYindex = SimpleArrayUtils.findMinIndex(smoothSy);
    final double s0x = smoothSx[focalPlaneXindex];
    final double s0y = smoothSy[focalPlaneYindex];
    final double focalPlaneX = fitZ[focalPlaneXindex];
    final double focalPlaneY = fitZ[focalPlaneYindex];
    double gamma = Math.abs(focalPlaneY - focalPlaneX) / 2;
    final double z0 = (focalPlaneX + focalPlaneY) / 2;
    final double d = (estimateD(focalPlaneXindex, fitZ, smoothSx) + estimateD(focalPlaneYindex, fitZ, smoothSy)) / 2;
    // Start with Ax, Bx, Ay, By as zero.
    final double Ax = 0;
    final double Bx = 0;
    final double Ay = 0;
    final double By = 0;
    // The positive-gamma convention assumes the X focal plane lies at a higher z than the Y focal
    // plane. If this is not the case we can invert the gamma parameter.
    if (focalPlaneXindex < focalPlaneYindex) {
        gamma = -gamma;
    }
    // Use an LVM fitter with numerical gradients.
    final double initialStepBoundFactor = 100;
    final double costRelativeTolerance = 1e-10;
    final double parRelativeTolerance = 1e-10;
    final double orthoTolerance = 1e-10;
    final double threshold = Precision.SAFE_MIN;
    // We optimise against both sx and sy as a combined y-value.
    final double[] y = new double[fitZ.length * 2];
    System.arraycopy(fitSx, 0, y, 0, fitSx.length);
    System.arraycopy(fitSy, 0, y, fitSx.length, fitSy.length);
    final LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer(initialStepBoundFactor, costRelativeTolerance, parRelativeTolerance, orthoTolerance, threshold);
    parameters = new double[9];
    parameters[P_GAMMA] = gamma;
    parameters[P_Z0] = z0;
    parameters[P_D] = d;
    parameters[P_S0X] = s0x;
    parameters[P_AX] = Ax;
    parameters[P_BX] = Bx;
    parameters[P_S0Y] = s0y;
    parameters[P_AY] = Ay;
    parameters[P_BY] = By;
    record("Initial", parameters);
    if (pluginSettings.getShowEstimatedCurve()) {
        plotFit(parameters);
        IJ.showMessage(TITLE, "Showing the estimated curve parameters.\nClick OK to continue.");
    }
    // @formatter:off
    final LeastSquaresBuilder builder = new LeastSquaresBuilder().maxEvaluations(Integer.MAX_VALUE).maxIterations(3000).start(parameters).target(y);
    if (pluginSettings.getWeightedFit()) {
        builder.weight(new DiagonalMatrix(getWeights(smoothSx, smoothSy)));
    }
    final AstigmatismVectorFunction vf = new AstigmatismVectorFunction();
    builder.model(vf, new AstigmatismMatrixFunction());
    final LeastSquaresProblem problem = builder.build();
    try {
        final Optimum optimum = optimizer.optimize(problem);
        parameters = optimum.getPoint().toArray();
        record("Final", parameters);
        plotFit(parameters);
        saveResult(optimum);
    } catch (final Exception ex) {
        IJ.error(TITLE, "Failed to fit curve: " + ex.getMessage());
        return false;
    }
    return true;
}
Also used : Plot(ij.gui.Plot) ConversionException(uk.ac.sussex.gdsc.core.data.utils.ConversionException) LeastSquaresBuilder(org.apache.commons.math3.fitting.leastsquares.LeastSquaresBuilder) LoessInterpolator(org.apache.commons.math3.analysis.interpolation.LoessInterpolator) Optimum(org.apache.commons.math3.fitting.leastsquares.LeastSquaresOptimizer.Optimum) LevenbergMarquardtOptimizer(org.apache.commons.math3.fitting.leastsquares.LevenbergMarquardtOptimizer) DiagonalMatrix(org.apache.commons.math3.linear.DiagonalMatrix) LeastSquaresProblem(org.apache.commons.math3.fitting.leastsquares.LeastSquaresProblem)
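
The fit above plugs project-specific model classes (AstigmatismVectorFunction and AstigmatismMatrixFunction) into the standard commons-math3 least-squares machinery. Below is a minimal, self-contained sketch of the same LeastSquaresBuilder / LevenbergMarquardtOptimizer pattern, assuming only commons-math3 and swapping in a toy two-parameter exponential model with made-up data points in place of the astigmatism width curves.

import org.apache.commons.math3.analysis.MultivariateMatrixFunction;
import org.apache.commons.math3.analysis.MultivariateVectorFunction;
import org.apache.commons.math3.fitting.leastsquares.LeastSquaresBuilder;
import org.apache.commons.math3.fitting.leastsquares.LeastSquaresOptimizer.Optimum;
import org.apache.commons.math3.fitting.leastsquares.LevenbergMarquardtOptimizer;

public class LvmFitSketch {
    public static void main(String[] args) {
        // Toy data roughly following y = 2 * exp(0.5 * x).
        final double[] x = {0, 1, 2, 3, 4};
        final double[] y = {2.0, 3.3, 5.4, 9.0, 14.8};
        // Model values f_i(a, b) = a * exp(b * x_i).
        final MultivariateVectorFunction value = p -> {
            final double[] v = new double[x.length];
            for (int i = 0; i < x.length; i++) {
                v[i] = p[0] * Math.exp(p[1] * x[i]);
            }
            return v;
        };
        // Analytical Jacobian: d f_i / d a and d f_i / d b.
        final MultivariateMatrixFunction jacobian = p -> {
            final double[][] j = new double[x.length][2];
            for (int i = 0; i < x.length; i++) {
                final double e = Math.exp(p[1] * x[i]);
                j[i][0] = e;
                j[i][1] = p[0] * x[i] * e;
            }
            return j;
        };
        final Optimum optimum = new LevenbergMarquardtOptimizer().optimize(
                new LeastSquaresBuilder()
                        .start(new double[] {1, 1})
                        .target(y)
                        .model(value, jacobian)
                        .maxEvaluations(1000)
                        .maxIterations(1000)
                        .build());
        System.out.println(java.util.Arrays.toString(optimum.getPoint().toArray()));
    }
}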

Example 14 with Gamma

use of org.apache.commons.math3.special.Gamma in project deeplearning4j by deeplearning4j.

the class TestReconstructionDistributions method testExponentialLogProb.

@Test
public void testExponentialLogProb() {
    Nd4j.getRandom().setSeed(12345);
    int inputSize = 4;
    int[] mbs = new int[] { 1, 2, 5 };
    Random r = new Random(12345);
    for (boolean average : new boolean[] { true, false }) {
        for (int minibatch : mbs) {
            INDArray x = Nd4j.zeros(minibatch, inputSize);
            for (int i = 0; i < minibatch; i++) {
                for (int j = 0; j < inputSize; j++) {
                    x.putScalar(i, j, r.nextInt(2));
                }
            }
            // i.e. the pre-activation-function ("pre-afn") gamma values
            INDArray distributionParams = Nd4j.rand(minibatch, inputSize).muli(2).subi(1);
            INDArray gammas = Transforms.tanh(distributionParams, true);
            ReconstructionDistribution dist = new ExponentialReconstructionDistribution("tanh");
            double negLogProb = dist.negLogProbability(x, distributionParams, average);
            INDArray exampleNegLogProb = dist.exampleNegLogProbability(x, distributionParams);
            assertArrayEquals(new int[] { minibatch, 1 }, exampleNegLogProb.shape());
            //Calculate the same thing, but using Apache Commons math
            double logProbSum = 0.0;
            for (int i = 0; i < minibatch; i++) {
                double exampleSum = 0.0;
                for (int j = 0; j < inputSize; j++) {
                    double gamma = gammas.getDouble(i, j);
                    double lambda = Math.exp(gamma);
                    double mean = 1.0 / lambda;
                    //Commons math uses mean = 1/lambda
                    ExponentialDistribution exp = new ExponentialDistribution(mean);
                    double xVal = x.getDouble(i, j);
                    double thisLogProb = exp.logDensity(xVal);
                    logProbSum += thisLogProb;
                    exampleSum += thisLogProb;
                }
                assertEquals(-exampleNegLogProb.getDouble(i), exampleSum, 1e-6);
            }
            double expNegLogProb;
            if (average) {
                expNegLogProb = -logProbSum / minibatch;
            } else {
                expNegLogProb = -logProbSum;
            }
            //                System.out.println(x);
            //                System.out.println(expNegLogProb + "\t" + logProb + "\t" + (logProb / expNegLogProb));
            assertEquals(expNegLogProb, negLogProb, 1e-6);
            //Also: check random sampling...
            int count = minibatch * inputSize;
            INDArray arr = Nd4j.linspace(-3, 3, count).reshape(minibatch, inputSize);
            INDArray sampleMean = dist.generateAtMean(arr);
            INDArray sampleRandom = dist.generateRandom(arr);
            for (int i = 0; i < minibatch; i++) {
                for (int j = 0; j < inputSize; j++) {
                    double d1 = sampleMean.getDouble(i, j);
                    double d2 = sampleRandom.getDouble(i, j);
                    assertTrue(d1 >= 0.0);
                    assertTrue(d2 >= 0.0);
                }
            }
        }
    }
}
Also used : Random(java.util.Random) INDArray(org.nd4j.linalg.api.ndarray.INDArray) ExponentialReconstructionDistribution(org.deeplearning4j.nn.conf.layers.variational.ExponentialReconstructionDistribution) ExponentialDistribution(org.apache.commons.math3.distribution.ExponentialDistribution) GaussianReconstructionDistribution(org.deeplearning4j.nn.conf.layers.variational.GaussianReconstructionDistribution) ReconstructionDistribution(org.deeplearning4j.nn.conf.layers.variational.ReconstructionDistribution) ExponentialReconstructionDistribution(org.deeplearning4j.nn.conf.layers.variational.ExponentialReconstructionDistribution) BernoulliReconstructionDistribution(org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution) Test(org.junit.Test)
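
The test relies on Commons Math parameterising the exponential distribution by its mean (1/lambda), so logDensity matches log p(x) = log(lambda) - lambda * x for x >= 0. A small, self-contained check of that equivalence is sketched below; the gamma value 0.3 and x = 1.5 are arbitrary illustrative numbers, not values from the test.

import org.apache.commons.math3.distribution.ExponentialDistribution;

public class ExponentialLogDensityCheck {
    public static void main(String[] args) {
        final double gamma = 0.3;               // a gamma value (the test uses tanh-activated params)
        final double lambda = Math.exp(gamma);  // rate parameter, as in the test loop
        final double mean = 1.0 / lambda;       // Commons Math is parameterised by the mean
        final ExponentialDistribution exp = new ExponentialDistribution(mean);
        final double x = 1.5;
        final double expected = Math.log(lambda) - lambda * x;
        // The two values should agree to double precision.
        System.out.println(exp.logDensity(x) + " vs " + expected);
    }
}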

Example 15 with Gamma

use of org.apache.commons.math3.special.Gamma in project GDSC-SMLM by aherbert.

the class BaseFunctionSolverTest method drawGaussian.

/**
 * Draw a Gaussian with Poisson shot noise, optional EM-gain (Gamma) amplification and Gaussian
 * read noise.
 *
 * @param params the Gaussian parameters
 * @param noise the read noise
 * @param noiseModel the noise model
 * @return the data
 */
double[] drawGaussian(double[] params, double[] noise, NoiseModel noiseModel) {
    double[] data = new double[size * size];
    int n = params.length / 6;
    Gaussian2DFunction f = GaussianFunctionFactory.create2D(n, size, size, flags, null);
    f.initialise(params);
    // Poisson shot noise. Create the distribution once and update the mean per pixel.
    final CustomPoissonDistribution poisson = new CustomPoissonDistribution(randomGenerator, 1);
    for (int i = 0; i < data.length; i++) {
        final double e = f.eval(i);
        if (e > 0) {
            poisson.setMeanUnsafe(e);
            data[i] = poisson.sample();
        }
    }
    // Simulate EM-gain
    if (noiseModel == NoiseModel.EMCCD) {
        // Use a gamma distribution.
        // The call random.nextGamma(...) creates a Gamma distribution that pre-calculates
        // factors using only the scale parameter, so we use a custom gamma distribution
        // where the shape can be set as a property.
        CustomGammaDistribution dist = new CustomGammaDistribution(randomGenerator, 1, emGain);
        for (int i = 0; i < data.length; i++) {
            if (data[i] > 0) {
                dist.setShapeUnsafe(data[i]);
                // The sample will amplify the signal so we remap to the original scale
                data[i] = dist.sample() / emGain;
            }
        }
    }
    // Read-noise
    if (noise != null) {
        for (int i = 0; i < data.length; i++) {
            data[i] += randomGenerator.nextGaussian() * noise[i];
        }
    }
    //gdsc.core.ij.Utils.display("Spot", data, size, size);
    return data;
}
Also used : CustomGammaDistribution(org.apache.commons.math3.distribution.CustomGammaDistribution) Gaussian2DFunction(gdsc.smlm.function.gaussian.Gaussian2DFunction) CustomPoissonDistribution(org.apache.commons.math3.distribution.CustomPoissonDistribution)
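
CustomPoissonDistribution and CustomGammaDistribution are GDSC variants whose mean/shape can be reset without re-allocating the distribution. Below is a rough, self-contained sketch of the same Poisson shot noise, Gamma EM-gain and Gaussian read-noise chain for a single pixel, using the stock commons-math3 distributions; the parameter values and class name are made up for illustration.

import org.apache.commons.math3.distribution.GammaDistribution;
import org.apache.commons.math3.distribution.PoissonDistribution;
import org.apache.commons.math3.random.RandomGenerator;
import org.apache.commons.math3.random.Well19937c;

public class EmccdPixelSketch {
    public static void main(String[] args) {
        final RandomGenerator rng = new Well19937c(42);
        final double expected = 5.0;   // expected photons for this pixel
        final double emGain = 40.0;    // EM-gain (the Gamma scale parameter)
        final double readNoise = 3.0;  // Gaussian read noise

        // Poisson shot noise on the expected photon count.
        final int photons = new PoissonDistribution(rng, expected,
                PoissonDistribution.DEFAULT_EPSILON, PoissonDistribution.DEFAULT_MAX_ITERATIONS).sample();

        // EM amplification: Gamma with shape = photon count and scale = gain,
        // then remap back to the original scale as in the example above.
        double value = (photons > 0)
                ? new GammaDistribution(rng, photons, emGain).sample() / emGain
                : 0;

        // Gaussian read noise, added after the remap as in the example.
        value += rng.nextGaussian() * readNoise;
        System.out.println(value);
    }
}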

Aggregations

TooManyEvaluationsException (org.apache.commons.math3.exception.TooManyEvaluationsException) 8
UnivariateFunction (org.apache.commons.math3.analysis.UnivariateFunction) 5
GammaDistribution (org.apache.commons.math3.distribution.GammaDistribution) 5
MaxEval (org.apache.commons.math3.optim.MaxEval) 4
Plot (ij.gui.Plot) 3
InitialGuess (org.apache.commons.math3.optim.InitialGuess) 3
PointValuePair (org.apache.commons.math3.optim.PointValuePair) 3
MultivariateFunctionMappingAdapter (org.apache.commons.math3.optim.nonlinear.scalar.MultivariateFunctionMappingAdapter) 3
ObjectiveFunction (org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction) 3
VisibleForTesting (com.google.common.annotations.VisibleForTesting) 2
Sets (com.google.common.collect.Sets) 2
Point (java.awt.Point) 2
File (java.io.File) 2
IOException (java.io.IOException) 2
java.util (java.util) 2
BiFunction (java.util.function.BiFunction) 2
Predicate (java.util.function.Predicate) 2
Collectors (java.util.stream.Collectors) 2
IntStream (java.util.stream.IntStream) 2
Stream (java.util.stream.Stream) 2