Example 6 with KMeans

Use of smile.clustering.KMeans in project smile by haifengl.

From the class SmileUtils, method learnGaussianRadialBasis:

/**
 * Learns a Gaussian RBF function and centers from data. The centers are
 * chosen as the centroids of k-means. The standard deviation (i.e. width)
 * of each Gaussian radial basis function is estimated by the p-nearest-neighbors
 * (among centers, not all samples) heuristic. A suggested value for p is 2.
 * @param x the training dataset.
 * @param centers an array to store the centers on output. Its length is used as the k of k-means.
 * @param p the number of nearest neighbors of each center used to estimate the width
 *          of the Gaussian RBF functions.
 * @return Gaussian RBF functions with parameters learned from data.
 */
public static GaussianRadialBasis[] learnGaussianRadialBasis(double[][] x, double[][] centers, int p) {
    if (p < 1) {
        throw new IllegalArgumentException("Invalid number of nearest neighbors: " + p);
    }
    int k = centers.length;
    KMeans kmeans = new KMeans(x, k, 10);
    System.arraycopy(kmeans.centroids(), 0, centers, 0, k);
    p = Math.min(p, k - 1);
    double[] r = new double[k];
    GaussianRadialBasis[] rbf = new GaussianRadialBasis[k];
    for (int i = 0; i < k; i++) {
        for (int j = 0; j < k; j++) {
            r[j] = Math.distance(centers[i], centers[j]);
        }
        Arrays.sort(r);
        double r0 = 0.0;
        for (int j = 1; j <= p; j++) {
            r0 += r[j];
        }
        r0 /= p;
        rbf[i] = new GaussianRadialBasis(r0);
    }
    return rbf;
}
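The width heuristic above can be reproduced without the smile dependency. The following is a minimal plain-Java sketch (the class and method names are ours, not smile's): for each center, the RBF width is the mean distance to its p nearest fellow centers, with p clamped to k - 1. Note that the sorted distance array starts with the center's zero distance to itself, which is why the sum starts at index 1.

```java
import java.util.Arrays;

public class RbfWidth {

    // Euclidean distance between two points of equal dimension.
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Width of the RBF centered at centers[i]: the mean distance to its
    // p nearest fellow centers (p clamped to k - 1).
    static double width(double[][] centers, int i, int p) {
        int k = centers.length;
        p = Math.min(p, k - 1);
        double[] r = new double[k];
        for (int j = 0; j < k; j++) {
            r[j] = distance(centers[i], centers[j]);
        }
        Arrays.sort(r); // r[0] == 0.0 is the center's distance to itself
        double r0 = 0.0;
        for (int j = 1; j <= p; j++) {
            r0 += r[j];
        }
        return r0 / p;
    }

    public static void main(String[] args) {
        double[][] centers = {{0, 0}, {3, 0}, {0, 4}};
        // The two neighbors of (0,0) lie at distances 3 and 4, so the width is 3.5.
        System.out.println(width(centers, 0, 2));
    }
}
```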
Also used: GaussianRadialBasis (smile.math.rbf.GaussianRadialBasis), KMeans (smile.clustering.KMeans)

Example 7 with KMeans

Use of smile.clustering.KMeans in project smile by haifengl.

From the class KMeansImputation, method impute:

@Override
public void impute(double[][] data) throws MissingValueImputationException {
    int[] count = new int[data[0].length];
    for (int i = 0; i < data.length; i++) {
        int n = 0;
        for (int j = 0; j < data[i].length; j++) {
            if (Double.isNaN(data[i][j])) {
                n++;
                count[j]++;
            }
        }
        if (n == data[i].length) {
            throw new MissingValueImputationException("The whole row " + i + " is missing");
        }
    }
    for (int i = 0; i < data[0].length; i++) {
        if (count[i] == data.length) {
            throw new MissingValueImputationException("The whole column " + i + " is missing");
        }
    }
    KMeans kmeans = KMeans.lloyd(data, k, Integer.MAX_VALUE, runs);
    for (int i = 0; i < k; i++) {
        if (kmeans.getClusterSize()[i] > 0) {
            double[][] d = new double[kmeans.getClusterSize()[i]][];
            for (int j = 0, m = 0; j < data.length; j++) {
                if (kmeans.getClusterLabel()[j] == i) {
                    d[m++] = data[j];
                }
            }
            columnAverageImpute(d);
        }
    }
    // In case some clusters are missing all values in some columns.
    columnAverageImpute(data);
}
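The per-cluster fill step in impute can be sketched in plain Java without smile. Here the cluster labels are assumed to be precomputed (in the real method they come from KMeans.lloyd); each NaN is replaced by the mean of the non-missing values in the same column and cluster. The class and method names below are ours, not smile's.

```java
public class ClusterImpute {

    // Replace each NaN in data with the mean of the non-missing values
    // in the same column, restricted to rows of the same cluster.
    static void impute(double[][] data, int[] label, int k) {
        int cols = data[0].length;
        for (int c = 0; c < k; c++) {
            for (int j = 0; j < cols; j++) {
                double sum = 0.0;
                int n = 0;
                for (int i = 0; i < data.length; i++) {
                    if (label[i] == c && !Double.isNaN(data[i][j])) {
                        sum += data[i][j];
                        n++;
                    }
                }
                if (n == 0) {
                    continue; // column entirely missing in this cluster
                }
                double mean = sum / n;
                for (int i = 0; i < data.length; i++) {
                    if (label[i] == c && Double.isNaN(data[i][j])) {
                        data[i][j] = mean;
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        double[][] data = {{1, Double.NaN}, {3, 4}, {10, 20}};
        int[] label = {0, 0, 1};
        impute(data, label, 2);
        System.out.println(data[0][1]); // filled with the cluster-0 column mean
    }
}
```

As in the original, a column that is entirely missing within one cluster is skipped, which is why KMeansImputation finishes with a global columnAverageImpute pass.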
Also used: KMeans (smile.clustering.KMeans)

Example 8 with KMeans

Use of smile.clustering.KMeans in project smile by haifengl.

From the class KMeansDemo, method learn:

@Override
public JComponent learn() {
    long clock = System.currentTimeMillis();
    KMeans kmeans = new KMeans(dataset[datasetIndex], clusterNumber, 100, 4);
    System.out.format("K-Means clustered %d samples in %dms\n", dataset[datasetIndex].length, System.currentTimeMillis() - clock);
    PlotCanvas plot = ScatterPlot.plot(dataset[datasetIndex], kmeans.getClusterLabel(), pointLegend, Palette.COLORS);
    plot.points(kmeans.centroids(), '@');
    return plot;
}
Also used: KMeans (smile.clustering.KMeans), PlotCanvas (smile.plot.PlotCanvas)

Example 9 with KMeans

Use of smile.clustering.KMeans in project smile by haifengl.

From the class GaussianProcessRegressionTest, method testAilerons:

/**
 * Test of the learn method, of class GaussianProcessRegression.
 */
@Test
public void testAilerons() {
    System.out.println("ailerons");
    ArffParser parser = new ArffParser();
    parser.setResponseIndex(40);
    try {
        AttributeDataset data = parser.parse(smile.data.parser.IOUtils.getTestDataFile("weka/regression/ailerons.arff"));
        double[][] x = data.toArray(new double[data.size()][]);
        Math.standardize(x);
        double[] y = data.toArray(new double[data.size()]);
        for (int i = 0; i < y.length; i++) {
            y[i] *= 10000;
        }
        int[] perm = Math.permutate(x.length);
        double[][] datax = new double[4000][];
        double[] datay = new double[datax.length];
        for (int i = 0; i < datax.length; i++) {
            datax[i] = x[perm[i]];
            datay[i] = y[perm[i]];
        }
        int n = datax.length;
        int k = 10;
        CrossValidation cv = new CrossValidation(n, k);
        double rss = 0.0;
        double sparseRSS30 = 0.0;
        for (int i = 0; i < k; i++) {
            double[][] trainx = Math.slice(datax, cv.train[i]);
            double[] trainy = Math.slice(datay, cv.train[i]);
            double[][] testx = Math.slice(datax, cv.test[i]);
            double[] testy = Math.slice(datay, cv.test[i]);
            GaussianProcessRegression<double[]> rkhs = new GaussianProcessRegression<>(trainx, trainy, new GaussianKernel(183.96), 0.1);
            KMeans kmeans = new KMeans(trainx, 30, 10);
            double[][] centers = kmeans.centroids();
            double r0 = 0.0;
            for (int l = 0; l < centers.length; l++) {
                for (int j = 0; j < l; j++) {
                    r0 += Math.distance(centers[l], centers[j]);
                }
            }
            r0 /= (2 * centers.length);
            System.out.println("Kernel width = " + r0);
            GaussianProcessRegression<double[]> sparse30 = new GaussianProcessRegression<>(trainx, trainy, centers, new GaussianKernel(r0), 0.1);
            for (int j = 0; j < testx.length; j++) {
                double r = testy[j] - rkhs.predict(testx[j]);
                rss += r * r;
                r = testy[j] - sparse30.predict(testx[j]);
                sparseRSS30 += r * r;
            }
        }
        System.out.println("Regular 10-CV MSE = " + rss / n);
        System.out.println("Sparse (30) 10-CV MSE = " + sparseRSS30 / n);
    } catch (Exception ex) {
        System.err.println(ex);
    }
}
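The kernel-width heuristic in this test sums the distance over each of the m(m-1)/2 distinct center pairs once, then divides by 2m, so r0 equals the mean pairwise distance times (m-1)/4. A standalone plain-Java sketch (the class and method names are ours, not smile's):

```java
public class KernelWidth {

    // Euclidean distance between two points of equal dimension.
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Sum of distances over all distinct center pairs, divided by 2m,
    // mirroring the r0 computation in the test above.
    static double width(double[][] centers) {
        double r0 = 0.0;
        for (int l = 0; l < centers.length; l++) {
            for (int j = 0; j < l; j++) {
                r0 += distance(centers[l], centers[j]);
            }
        }
        return r0 / (2 * centers.length);
    }

    public static void main(String[] args) {
        double[][] centers = {{0, 0}, {3, 0}, {0, 4}};
        // Pairwise distances are 3 + 4 + 5 = 12; width = 12 / (2 * 3) = 2.0
        System.out.println(width(centers));
    }
}
```

In the tests the centers come from a 30-center k-means run on the training fold, so the resulting width scales with how spread out the data is.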
Also used: AttributeDataset (smile.data.AttributeDataset), KMeans (smile.clustering.KMeans), ArffParser (smile.data.parser.ArffParser), CrossValidation (smile.validation.CrossValidation), GaussianKernel (smile.math.kernel.GaussianKernel), Test (org.junit.Test)

Example 10 with KMeans

Use of smile.clustering.KMeans in project smile by haifengl.

From the class GaussianProcessRegressionTest, method testBank32nh:

/**
 * Test of the learn method, of class GaussianProcessRegression.
 */
@Test
public void testBank32nh() {
    System.out.println("bank32nh");
    ArffParser parser = new ArffParser();
    parser.setResponseIndex(32);
    try {
        AttributeDataset data = parser.parse(smile.data.parser.IOUtils.getTestDataFile("weka/regression/bank32nh.arff"));
        double[] y = data.toArray(new double[data.size()]);
        double[][] x = data.toArray(new double[data.size()][]);
        Math.standardize(x);
        int[] perm = Math.permutate(x.length);
        double[][] datax = new double[4000][];
        double[] datay = new double[datax.length];
        for (int i = 0; i < datax.length; i++) {
            datax[i] = x[perm[i]];
            datay[i] = y[perm[i]];
        }
        int n = datax.length;
        int k = 10;
        CrossValidation cv = new CrossValidation(n, k);
        double rss = 0.0;
        double sparseRSS30 = 0.0;
        for (int i = 0; i < k; i++) {
            double[][] trainx = Math.slice(datax, cv.train[i]);
            double[] trainy = Math.slice(datay, cv.train[i]);
            double[][] testx = Math.slice(datax, cv.test[i]);
            double[] testy = Math.slice(datay, cv.test[i]);
            GaussianProcessRegression<double[]> rkhs = new GaussianProcessRegression<>(trainx, trainy, new GaussianKernel(55.3), 0.1);
            KMeans kmeans = new KMeans(trainx, 30, 10);
            double[][] centers = kmeans.centroids();
            double r0 = 0.0;
            for (int l = 0; l < centers.length; l++) {
                for (int j = 0; j < l; j++) {
                    r0 += Math.distance(centers[l], centers[j]);
                }
            }
            r0 /= (2 * centers.length);
            System.out.println("Kernel width = " + r0);
            GaussianProcessRegression<double[]> sparse30 = new GaussianProcessRegression<>(trainx, trainy, centers, new GaussianKernel(r0), 0.1);
            for (int j = 0; j < testx.length; j++) {
                double r = testy[j] - rkhs.predict(testx[j]);
                rss += r * r;
                r = testy[j] - sparse30.predict(testx[j]);
                sparseRSS30 += r * r;
            }
        }
        System.out.println("Regular 10-CV MSE = " + rss / n);
        System.out.println("Sparse (30) 10-CV MSE = " + sparseRSS30 / n);
    } catch (Exception ex) {
        System.err.println(ex);
    }
}
Also used: AttributeDataset (smile.data.AttributeDataset), KMeans (smile.clustering.KMeans), ArffParser (smile.data.parser.ArffParser), CrossValidation (smile.validation.CrossValidation), GaussianKernel (smile.math.kernel.GaussianKernel), Test (org.junit.Test)

Aggregations

KMeans (smile.clustering.KMeans): 11
Test (org.junit.Test): 6
AttributeDataset (smile.data.AttributeDataset): 6
ArffParser (smile.data.parser.ArffParser): 6
GaussianKernel (smile.math.kernel.GaussianKernel): 6
CrossValidation (smile.validation.CrossValidation): 6
GaussianRadialBasis (smile.math.rbf.GaussianRadialBasis): 3
PlotCanvas (smile.plot.PlotCanvas): 1