
Example 6 with FastTaxonomyFacetCounts

Use of org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts in the lucene-solr project by Apache.

From the class SimpleFacetsExample, the method facetsWithSearch:

/** User runs a query and counts facets. */
private List<FacetResult> facetsWithSearch() throws IOException {
    DirectoryReader indexReader = DirectoryReader.open(indexDir);
    IndexSearcher searcher = new IndexSearcher(indexReader);
    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
    FacetsCollector fc = new FacetsCollector();
    // MatchAllDocsQuery is for "browsing" (counts facets
    // for all non-deleted docs in the index); normally
    // you'd use a "normal" query:
    FacetsCollector.search(searcher, new MatchAllDocsQuery(), 10, fc);
    // Retrieve results
    List<FacetResult> results = new ArrayList<>();
    // Count both "Publish Date" and "Author" dimensions
    Facets facets = new FastTaxonomyFacetCounts(taxoReader, config, fc);
    results.add(facets.getTopChildren(10, "Author"));
    results.add(facets.getTopChildren(10, "Publish Date"));
    indexReader.close();
    taxoReader.close();
    return results;
}
Also used: IndexSearcher (org.apache.lucene.search.IndexSearcher), FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts), Facets (org.apache.lucene.facet.Facets), DirectoryReader (org.apache.lucene.index.DirectoryReader), TaxonomyReader (org.apache.lucene.facet.taxonomy.TaxonomyReader), DirectoryTaxonomyReader (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader), ArrayList (java.util.ArrayList), FacetResult (org.apache.lucene.facet.FacetResult), MatchAllDocsQuery (org.apache.lucene.search.MatchAllDocsQuery), FacetsCollector (org.apache.lucene.facet.FacetsCollector)
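The search method above assumes an index and taxonomy built with the matching FacetsConfig. The following is a minimal indexing sketch, not taken from the project; the WhitespaceAnalyzer choice, the sample field values, and the method name indexSample are assumptions.

private void indexSample() throws IOException {
    // Main index plus the side-car taxonomy index that DirectoryTaxonomyReader opens later.
    IndexWriter indexWriter = new IndexWriter(indexDir,
        new IndexWriterConfig(new WhitespaceAnalyzer()).setOpenMode(IndexWriterConfig.OpenMode.CREATE));
    DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir);
    // "Publish Date" uses a multi-level path, so the dimension must be hierarchical.
    config.setHierarchical("Publish Date", true);
    Document doc = new Document();
    doc.add(new FacetField("Author", "Lisa"));
    doc.add(new FacetField("Publish Date", "2010", "10", "15"));
    // config.build() rewrites the FacetFields into the stored form that
    // FastTaxonomyFacetCounts reads at search time.
    indexWriter.addDocument(config.build(taxoWriter, doc));
    indexWriter.close();
    taxoWriter.close();
}

The sketch additionally uses IndexWriter and IndexWriterConfig (org.apache.lucene.index), WhitespaceAnalyzer (org.apache.lucene.analysis.core), DirectoryTaxonomyWriter (org.apache.lucene.facet.taxonomy.directory), Document (org.apache.lucene.document) and FacetField (org.apache.lucene.facet).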

Example 7 with FastTaxonomyFacetCounts

Use of org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts in the lucene-solr project by Apache.

From the class MultiCategoryListsFacetsExample, the method search:

/** User runs a query and counts facets. */
private List<FacetResult> search() throws IOException {
    DirectoryReader indexReader = DirectoryReader.open(indexDir);
    IndexSearcher searcher = new IndexSearcher(indexReader);
    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
    FacetsCollector fc = new FacetsCollector();
    // MatchAllDocsQuery is for "browsing" (counts facets
    // for all non-deleted docs in the index); normally
    // you'd use a "normal" query:
    FacetsCollector.search(searcher, new MatchAllDocsQuery(), 10, fc);
    // Retrieve results
    List<FacetResult> results = new ArrayList<>();
    // Count both "Publish Date" and "Author" dimensions
    Facets author = new FastTaxonomyFacetCounts("author", taxoReader, config, fc);
    results.add(author.getTopChildren(10, "Author"));
    Facets pubDate = new FastTaxonomyFacetCounts("pubdate", taxoReader, config, fc);
    results.add(pubDate.getTopChildren(10, "Publish Date"));
    indexReader.close();
    taxoReader.close();
    return results;
}
Also used: IndexSearcher (org.apache.lucene.search.IndexSearcher), FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts), Facets (org.apache.lucene.facet.Facets), DirectoryReader (org.apache.lucene.index.DirectoryReader), TaxonomyReader (org.apache.lucene.facet.taxonomy.TaxonomyReader), DirectoryTaxonomyReader (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader), ArrayList (java.util.ArrayList), FacetResult (org.apache.lucene.facet.FacetResult), MatchAllDocsQuery (org.apache.lucene.search.MatchAllDocsQuery), FacetsCollector (org.apache.lucene.facet.FacetsCollector)
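Here each dimension is counted by its own FastTaxonomyFacetCounts instance because the ordinals are stored under separate index field names ("author" and "pubdate"). The configuration below is a sketch of what makes those field names line up at search time; it mirrors the names used above but is not copied from the project.

FacetsConfig config = new FacetsConfig();
// Store each dimension's ordinals in its own doc-values field, so each
// FastTaxonomyFacetCounts call only decodes the dimension it counts.
config.setIndexFieldName("Author", "author");
config.setIndexFieldName("Publish Date", "pubdate");
// Multi-level "Publish Date" paths require the dimension to be hierarchical.
config.setHierarchical("Publish Date", true);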

Example 8 with FastTaxonomyFacetCounts

Use of org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts in the lucene-solr project by Apache.

From the class SimpleFacetsExample, the method facetsOnly:

/** User runs a query and counts facets only, without collecting the matching documents. */
private List<FacetResult> facetsOnly() throws IOException {
    DirectoryReader indexReader = DirectoryReader.open(indexDir);
    IndexSearcher searcher = new IndexSearcher(indexReader);
    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
    FacetsCollector fc = new FacetsCollector();
    // MatchAllDocsQuery is for "browsing" (counts facets
    // for all non-deleted docs in the index); normally
    // you'd use a "normal" query:
    searcher.search(new MatchAllDocsQuery(), fc);
    // Retrieve results
    List<FacetResult> results = new ArrayList<>();
    // Count both "Publish Date" and "Author" dimensions
    Facets facets = new FastTaxonomyFacetCounts(taxoReader, config, fc);
    results.add(facets.getTopChildren(10, "Author"));
    results.add(facets.getTopChildren(10, "Publish Date"));
    indexReader.close();
    taxoReader.close();
    return results;
}
Also used: IndexSearcher (org.apache.lucene.search.IndexSearcher), FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts), Facets (org.apache.lucene.facet.Facets), DirectoryReader (org.apache.lucene.index.DirectoryReader), TaxonomyReader (org.apache.lucene.facet.taxonomy.TaxonomyReader), DirectoryTaxonomyReader (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader), ArrayList (java.util.ArrayList), FacetResult (org.apache.lucene.facet.FacetResult), MatchAllDocsQuery (org.apache.lucene.search.MatchAllDocsQuery), FacetsCollector (org.apache.lucene.facet.FacetsCollector)
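facetsOnly() passes the FacetsCollector straight to searcher.search(), so no top hits are gathered. If the hits are needed as well, one variation (a sketch, with an arbitrary hit count of 10) is to wrap a hit collector and the FacetsCollector together with MultiCollector so a single search pass fills both:

FacetsCollector fc = new FacetsCollector();
TopScoreDocCollector hitCollector = TopScoreDocCollector.create(10);
// One pass over the matches: top-10 scored hits plus the facet ordinals.
searcher.search(new MatchAllDocsQuery(), MultiCollector.wrap(hitCollector, fc));
Facets facets = new FastTaxonomyFacetCounts(taxoReader, config, fc);
FacetResult authorCounts = facets.getTopChildren(10, "Author");
TopDocs topDocs = hitCollector.topDocs();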

Example 9 with FastTaxonomyFacetCounts

Use of org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts in the lucene-solr project by Apache.

From the class FacetTestCase, the method getTaxonomyFacetCounts:

public Facets getTaxonomyFacetCounts(TaxonomyReader taxoReader, FacetsConfig config, FacetsCollector c, String indexFieldName) throws IOException {
    Facets facets;
    if (random().nextBoolean()) {
        facets = new FastTaxonomyFacetCounts(indexFieldName, taxoReader, config, c);
    } else {
        OrdinalsReader ordsReader = new DocValuesOrdinalsReader(indexFieldName);
        if (random().nextBoolean()) {
            ordsReader = new CachedOrdinalsReader(ordsReader);
        }
        facets = new TaxonomyFacetCounts(ordsReader, taxoReader, config, c);
    }
    return facets;
}
Also used: CachedOrdinalsReader (org.apache.lucene.facet.taxonomy.CachedOrdinalsReader), DocValuesOrdinalsReader (org.apache.lucene.facet.taxonomy.DocValuesOrdinalsReader), OrdinalsReader (org.apache.lucene.facet.taxonomy.OrdinalsReader), FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts), TaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.TaxonomyFacetCounts)
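Tests that do not use custom index field names would typically call a shorter form of this helper. The sketch below is an assumed convenience overload (not verified against the project source) that defaults to the standard facets field:

public Facets getTaxonomyFacetCounts(TaxonomyReader taxoReader, FacetsConfig config, FacetsCollector c) throws IOException {
    // Delegate to the randomized implementation above, using the default "$facets" field.
    return getTaxonomyFacetCounts(taxoReader, config, c, FacetsConfig.DEFAULT_INDEX_FIELD_NAME);
}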

Example 10 with FastTaxonomyFacetCounts

Use of org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts in the lucene-solr project by Apache.

From the class TestRandomSamplingFacetsCollector, the method testRandomSampling:

public void testRandomSampling() throws Exception {
    Directory dir = newDirectory();
    Directory taxoDir = newDirectory();
    Random random = random();
    DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir);
    RandomIndexWriter writer = new RandomIndexWriter(random, dir);
    FacetsConfig config = new FacetsConfig();
    final int numCategories = 10;
    int numDocs = atLeast(10000);
    for (int i = 0; i < numDocs; i++) {
        Document doc = new Document();
        doc.add(new StringField("EvenOdd", (i % 2 == 0) ? "even" : "odd", Store.NO));
        doc.add(new FacetField("iMod10", Integer.toString(i % numCategories)));
        writer.addDocument(config.build(taxoWriter, doc));
    }
    writer.forceMerge(CHI_SQUARE_VALUES.length - 1);
    // NRT open
    IndexSearcher searcher = newSearcher(writer.getReader());
    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoWriter);
    IOUtils.close(writer, taxoWriter);
    // Test empty results
    RandomSamplingFacetsCollector collectRandomZeroResults = new RandomSamplingFacetsCollector(numDocs / 10, random.nextLong());
    // There should be no divisions by zero
    searcher.search(new TermQuery(new Term("EvenOdd", "NeverMatches")), collectRandomZeroResults);
    // There should be no divisions by zero and no null result
    assertNotNull(collectRandomZeroResults.getMatchingDocs());
    // There should be no results at all
    for (MatchingDocs doc : collectRandomZeroResults.getMatchingDocs()) {
        assertEquals(0, doc.totalHits);
    }
    // Now start searching and retrieve results.
    // Use a query to select half of the documents.
    TermQuery query = new TermQuery(new Term("EvenOdd", "even"));
    // 10% of total docs, 20% of the hits
    RandomSamplingFacetsCollector random10Percent = new RandomSamplingFacetsCollector(numDocs / 10, random.nextLong());
    FacetsCollector fc = new FacetsCollector();
    searcher.search(query, MultiCollector.wrap(fc, random10Percent));
    final List<MatchingDocs> matchingDocs = random10Percent.getMatchingDocs();
    // count the total hits and sampled docs, also store the number of sampled
    // docs per segment
    int totalSampledDocs = 0, totalHits = 0;
    int[] numSampledDocs = new int[matchingDocs.size()];
    //    System.out.println("numSegments=" + numSampledDocs.length);
    for (int i = 0; i < numSampledDocs.length; i++) {
        MatchingDocs md = matchingDocs.get(i);
        final DocIdSetIterator iter = md.bits.iterator();
        while (iter.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) ++numSampledDocs[i];
        totalSampledDocs += numSampledDocs[i];
        totalHits += md.totalHits;
    }
    // compute the chi-square value for the sampled documents' distribution
    float chi_square = 0;
    for (int i = 0; i < numSampledDocs.length; i++) {
        MatchingDocs md = matchingDocs.get(i);
        float ei = (float) md.totalHits / totalHits;
        if (ei > 0.0f) {
            float oi = (float) numSampledDocs[i] / totalSampledDocs;
            chi_square += (Math.pow(ei - oi, 2) / ei);
        }
    }
    // Verify that the chi-square value isn't too big. According to
    // http://en.wikipedia.org/wiki/Chi-squared_distribution#Table_of_.CF.872_value_vs_p-value,
    // we basically verify that there is a really small chance of hitting a very
    // bad sample (p-value < 0.05), for n-degrees of freedom. The number 'n' depends
    // on the number of segments.
    assertTrue("chisquare not statistically significant enough: " + chi_square, chi_square < CHI_SQUARE_VALUES[numSampledDocs.length]);
    // Test amortized counts - should be 5X the sampled count, but maximum numDocs/10
    final FastTaxonomyFacetCounts random10FacetCounts = new FastTaxonomyFacetCounts(taxoReader, config, random10Percent);
    final FacetResult random10Result = random10FacetCounts.getTopChildren(10, "iMod10");
    final FacetResult amortized10Result = random10Percent.amortizeFacetCounts(random10Result, config, searcher);
    for (int i = 0; i < amortized10Result.labelValues.length; i++) {
        LabelAndValue amortized = amortized10Result.labelValues[i];
        LabelAndValue sampled = random10Result.labelValues[i];
        // since numDocs may not divide by 10 exactly, allow for some slack in the amortized count 
        assertEquals(amortized.value.floatValue(), Math.min(5 * sampled.value.floatValue(), numDocs / 10.f), 1.0);
    }
    IOUtils.close(searcher.getIndexReader(), taxoReader, dir, taxoDir);
}
Also used: IndexSearcher (org.apache.lucene.search.IndexSearcher), TermQuery (org.apache.lucene.search.TermQuery), FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts), TaxonomyReader (org.apache.lucene.facet.taxonomy.TaxonomyReader), DirectoryTaxonomyReader (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader), MatchingDocs (org.apache.lucene.facet.FacetsCollector.MatchingDocs), Term (org.apache.lucene.index.Term), Document (org.apache.lucene.document.Document), DirectoryTaxonomyWriter (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter), Random (java.util.Random), StringField (org.apache.lucene.document.StringField), DocIdSetIterator (org.apache.lucene.search.DocIdSetIterator), RandomIndexWriter (org.apache.lucene.index.RandomIndexWriter), Directory (org.apache.lucene.store.Directory)
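Outside a test, the sampling pattern exercised above boils down to a few lines. The sketch below reuses the test's field and dimension names for illustration; the sample size of 10000 is an arbitrary placeholder. Facets are counted over a random sample of the hits, then scaled back toward the full result set with amortizeFacetCounts:

RandomSamplingFacetsCollector sampler = new RandomSamplingFacetsCollector(10000);
searcher.search(new TermQuery(new Term("EvenOdd", "even")), sampler);
// Count facets over the sampled hits only.
Facets sampledCounts = new FastTaxonomyFacetCounts(taxoReader, config, sampler);
FacetResult sampled = sampledCounts.getTopChildren(10, "iMod10");
// Approximate the counts the full result set would have produced.
FacetResult estimated = sampler.amortizeFacetCounts(sampled, config, searcher);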

Aggregations

Classes appearing in the FastTaxonomyFacetCounts examples of this listing, with the number of examples that use each:

FastTaxonomyFacetCounts (org.apache.lucene.facet.taxonomy.FastTaxonomyFacetCounts): 10
TaxonomyReader (org.apache.lucene.facet.taxonomy.TaxonomyReader): 8
DirectoryTaxonomyReader (org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader): 8
IndexSearcher (org.apache.lucene.search.IndexSearcher): 8
FacetResult (org.apache.lucene.facet.FacetResult): 7
Facets (org.apache.lucene.facet.Facets): 7
FacetsCollector (org.apache.lucene.facet.FacetsCollector): 7
DirectoryReader (org.apache.lucene.index.DirectoryReader): 7
ArrayList (java.util.ArrayList): 5
MatchAllDocsQuery (org.apache.lucene.search.MatchAllDocsQuery): 5
DrillDownQuery (org.apache.lucene.facet.DrillDownQuery): 2
HashMap (java.util.HashMap): 1
Random (java.util.Random): 1
Document (org.apache.lucene.document.Document): 1
StringField (org.apache.lucene.document.StringField): 1
MatchingDocs (org.apache.lucene.facet.FacetsCollector.MatchingDocs): 1
SortedSetDocValuesFacetCounts (org.apache.lucene.facet.sortedset.SortedSetDocValuesFacetCounts): 1
CachedOrdinalsReader (org.apache.lucene.facet.taxonomy.CachedOrdinalsReader): 1
DocValuesOrdinalsReader (org.apache.lucene.facet.taxonomy.DocValuesOrdinalsReader): 1
OrdinalsReader (org.apache.lucene.facet.taxonomy.OrdinalsReader): 1