
Example 61 with SortedSetDocValuesField

Use of org.apache.lucene.document.SortedSetDocValuesField in the apache/lucene-solr project.

From class TestMemoryIndex, method testDocValues:

public void testDocValues() throws Exception {
    Document doc = new Document();
    doc.add(new NumericDocValuesField("numeric", 29L));
    doc.add(new SortedNumericDocValuesField("sorted_numeric", 33L));
    doc.add(new SortedNumericDocValuesField("sorted_numeric", 32L));
    doc.add(new SortedNumericDocValuesField("sorted_numeric", 32L));
    doc.add(new SortedNumericDocValuesField("sorted_numeric", 31L));
    doc.add(new SortedNumericDocValuesField("sorted_numeric", 30L));
    doc.add(new BinaryDocValuesField("binary", new BytesRef("a")));
    doc.add(new SortedDocValuesField("sorted", new BytesRef("b")));
    doc.add(new SortedSetDocValuesField("sorted_set", new BytesRef("f")));
    doc.add(new SortedSetDocValuesField("sorted_set", new BytesRef("d")));
    doc.add(new SortedSetDocValuesField("sorted_set", new BytesRef("d")));
    doc.add(new SortedSetDocValuesField("sorted_set", new BytesRef("c")));
    MemoryIndex mi = MemoryIndex.fromDocument(doc, analyzer);
    LeafReader leafReader = mi.createSearcher().getIndexReader().leaves().get(0).reader();
    NumericDocValues numericDocValues = leafReader.getNumericDocValues("numeric");
    assertEquals(0, numericDocValues.nextDoc());
    assertEquals(29L, numericDocValues.longValue());
    // multi-valued numeric doc values come back in sorted order with duplicates preserved
    SortedNumericDocValues sortedNumericDocValues = leafReader.getSortedNumericDocValues("sorted_numeric");
    assertEquals(0, sortedNumericDocValues.nextDoc());
    assertEquals(5, sortedNumericDocValues.docValueCount());
    assertEquals(30L, sortedNumericDocValues.nextValue());
    assertEquals(31L, sortedNumericDocValues.nextValue());
    assertEquals(32L, sortedNumericDocValues.nextValue());
    assertEquals(32L, sortedNumericDocValues.nextValue());
    assertEquals(33L, sortedNumericDocValues.nextValue());
    BinaryDocValues binaryDocValues = leafReader.getBinaryDocValues("binary");
    assertEquals(0, binaryDocValues.nextDoc());
    assertEquals("a", binaryDocValues.binaryValue().utf8ToString());
    SortedDocValues sortedDocValues = leafReader.getSortedDocValues("sorted");
    assertEquals(0, sortedDocValues.nextDoc());
    assertEquals("b", sortedDocValues.binaryValue().utf8ToString());
    assertEquals(0, sortedDocValues.ordValue());
    assertEquals("b", sortedDocValues.lookupOrd(0).utf8ToString());
    SortedSetDocValues sortedSetDocValues = leafReader.getSortedSetDocValues("sorted_set");
    // "d" was added twice, but sorted-set doc values deduplicate: only three distinct ords remain
    assertEquals(3, sortedSetDocValues.getValueCount());
    assertEquals(0, sortedSetDocValues.nextDoc());
    assertEquals(0L, sortedSetDocValues.nextOrd());
    assertEquals(1L, sortedSetDocValues.nextOrd());
    assertEquals(2L, sortedSetDocValues.nextOrd());
    assertEquals(SortedSetDocValues.NO_MORE_ORDS, sortedSetDocValues.nextOrd());
    assertEquals("c", sortedSetDocValues.lookupOrd(0L).utf8ToString());
    assertEquals("d", sortedSetDocValues.lookupOrd(1L).utf8ToString());
    assertEquals("f", sortedSetDocValues.lookupOrd(2L).utf8ToString());
}
Also used : Document, NumericDocValuesField, SortedNumericDocValuesField, SortedDocValuesField, SortedSetDocValuesField, BinaryDocValuesField (org.apache.lucene.document); NumericDocValues, SortedNumericDocValues, SortedDocValues, SortedSetDocValues, BinaryDocValues, LeafReader (org.apache.lucene.index); BytesRef (org.apache.lucene.util)
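
For comparison, the same advance/nextOrd/lookupOrd pattern works against any LeafReader, not only MemoryIndex. The following is a minimal sketch rather than part of the test: the SortedSetDump class and its dump method are invented for illustration, and it assumes the Lucene 7.x doc-values iterator API used in the snippet above.

import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;

// Hypothetical helper (not from the test): prints every distinct value of a
// sorted-set doc-values field for a single document.
final class SortedSetDump {

    static void dump(LeafReader reader, String field, int docId) throws IOException {
        // DocValues.getSortedSet returns an empty instance when the field has no doc values
        SortedSetDocValues dv = DocValues.getSortedSet(reader, field);
        if (dv.advanceExact(docId)) {
            // ordinals are returned in increasing (i.e. term-sorted) order
            for (long ord = dv.nextOrd(); ord != SortedSetDocValues.NO_MORE_ORDS; ord = dv.nextOrd()) {
                // lookupOrd resolves an ordinal back to its BytesRef term
                System.out.println(dv.lookupOrd(ord).utf8ToString());
            }
        }
    }
}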

Example 62 with SortedSetDocValuesField

Use of org.apache.lucene.document.SortedSetDocValuesField in the apache/lucene-solr project.

From class TestIndexWriterExceptions2, method testBasics:

// just one thread, serial merge policy, hopefully debuggable
public void testBasics() throws Exception {
    // disable slow things: we don't rely upon sleeps here.
    Directory dir = newDirectory();
    if (dir instanceof MockDirectoryWrapper) {
        ((MockDirectoryWrapper) dir).setThrottling(MockDirectoryWrapper.Throttling.NEVER);
        ((MockDirectoryWrapper) dir).setUseSlowOpenClosers(false);
    }
    // log all exceptions we hit, in case we fail (for debugging)
    ByteArrayOutputStream exceptionLog = new ByteArrayOutputStream();
    PrintStream exceptionStream = new PrintStream(exceptionLog, true, "UTF-8");
    //PrintStream exceptionStream = System.out;
    // create lots of non-aborting exceptions with a broken analyzer
    final long analyzerSeed = random().nextLong();
    Analyzer analyzer = new Analyzer() {

        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            MockTokenizer tokenizer = new MockTokenizer(MockTokenizer.SIMPLE, false);
            // TODO: can we turn this on? our filter is probably too evil
            tokenizer.setEnableChecks(false);
            TokenStream stream = tokenizer;
            // emit some payloads
            if (fieldName.contains("payloads")) {
                stream = new MockVariableLengthPayloadFilter(new Random(analyzerSeed), stream);
            }
            stream = new CrankyTokenFilter(stream, new Random(analyzerSeed));
            return new TokenStreamComponents(tokenizer, stream);
        }
    };
    // create lots of aborting exceptions with a broken codec
    // we don't need a random codec, as we aren't trying to find bugs in the codec here.
    Codec inner = RANDOM_MULTIPLIER > 1 ? Codec.getDefault() : new AssertingCodec();
    Codec codec = new CrankyCodec(inner, new Random(random().nextLong()));
    IndexWriterConfig conf = newIndexWriterConfig(analyzer);
    // just for now, try to keep this test reproducible
    conf.setMergeScheduler(new SerialMergeScheduler());
    conf.setCodec(codec);
    int numDocs = atLeast(500);
    IndexWriter iw = new IndexWriter(dir, conf);
    try {
        boolean allowAlreadyClosed = false;
        for (int i = 0; i < numDocs; i++) {
            // TODO: add crankyDocValuesFields, etc
            Document doc = new Document();
            doc.add(newStringField("id", Integer.toString(i), Field.Store.NO));
            doc.add(new NumericDocValuesField("dv", i));
            doc.add(new BinaryDocValuesField("dv2", new BytesRef(Integer.toString(i))));
            doc.add(new SortedDocValuesField("dv3", new BytesRef(Integer.toString(i))));
            doc.add(new SortedSetDocValuesField("dv4", new BytesRef(Integer.toString(i))));
            doc.add(new SortedSetDocValuesField("dv4", new BytesRef(Integer.toString(i - 1))));
            doc.add(new SortedNumericDocValuesField("dv5", i));
            doc.add(new SortedNumericDocValuesField("dv5", i - 1));
            doc.add(newTextField("text1", TestUtil.randomAnalysisString(random(), 20, true), Field.Store.NO));
            // ensure we store something
            doc.add(new StoredField("stored1", "foo"));
            doc.add(new StoredField("stored1", "bar"));
            // ensure we get some payloads
            doc.add(newTextField("text_payloads", TestUtil.randomAnalysisString(random(), 6, true), Field.Store.NO));
            // ensure we get some vectors
            FieldType ft = new FieldType(TextField.TYPE_NOT_STORED);
            ft.setStoreTermVectors(true);
            doc.add(newField("text_vectors", TestUtil.randomAnalysisString(random(), 6, true), ft));
            doc.add(new IntPoint("point", random().nextInt()));
            doc.add(new IntPoint("point2d", random().nextInt(), random().nextInt()));
            if (random().nextInt(10) > 0) {
                // single doc
                try {
                    iw.addDocument(doc);
                    // we made it, sometimes delete our doc, or update a dv
                    int thingToDo = random().nextInt(4);
                    if (thingToDo == 0) {
                        iw.deleteDocuments(new Term("id", Integer.toString(i)));
                    } else if (thingToDo == 1) {
                        iw.updateNumericDocValue(new Term("id", Integer.toString(i)), "dv", i + 1L);
                    } else if (thingToDo == 2) {
                        iw.updateBinaryDocValue(new Term("id", Integer.toString(i)), "dv2", new BytesRef(Integer.toString(i + 1)));
                    }
                } catch (AlreadyClosedException ace) {
                    // OK: writer was closed by abort; we just reopen now:
                    assertTrue(iw.deleter.isClosed());
                    assertTrue(allowAlreadyClosed);
                    allowAlreadyClosed = false;
                    conf = newIndexWriterConfig(analyzer);
                    // just for now, try to keep this test reproducible
                    conf.setMergeScheduler(new SerialMergeScheduler());
                    conf.setCodec(codec);
                    iw = new IndexWriter(dir, conf);
                } catch (Exception e) {
                    if (e.getMessage() != null && e.getMessage().startsWith("Fake IOException")) {
                        exceptionStream.println("\nTEST: got expected fake exc:" + e.getMessage());
                        e.printStackTrace(exceptionStream);
                        allowAlreadyClosed = true;
                    } else {
                        Rethrow.rethrow(e);
                    }
                }
            } else {
                // block docs
                Document doc2 = new Document();
                doc2.add(newStringField("id", Integer.toString(-i), Field.Store.NO));
                doc2.add(newTextField("text1", TestUtil.randomAnalysisString(random(), 20, true), Field.Store.NO));
                doc2.add(new StoredField("stored1", "foo"));
                doc2.add(new StoredField("stored1", "bar"));
                doc2.add(newField("text_vectors", TestUtil.randomAnalysisString(random(), 6, true), ft));
                try {
                    iw.addDocuments(Arrays.asList(doc, doc2));
                    // we made it, sometimes delete our docs
                    if (random().nextBoolean()) {
                        iw.deleteDocuments(new Term("id", Integer.toString(i)), new Term("id", Integer.toString(-i)));
                    }
                } catch (AlreadyClosedException ace) {
                    // OK: writer was closed by abort; we just reopen now:
                    assertTrue(iw.deleter.isClosed());
                    assertTrue(allowAlreadyClosed);
                    allowAlreadyClosed = false;
                    conf = newIndexWriterConfig(analyzer);
                    // just for now, try to keep this test reproducible
                    conf.setMergeScheduler(new SerialMergeScheduler());
                    conf.setCodec(codec);
                    iw = new IndexWriter(dir, conf);
                } catch (Exception e) {
                    if (e.getMessage() != null && e.getMessage().startsWith("Fake IOException")) {
                        exceptionStream.println("\nTEST: got expected fake exc:" + e.getMessage());
                        e.printStackTrace(exceptionStream);
                        allowAlreadyClosed = true;
                    } else {
                        Rethrow.rethrow(e);
                    }
                }
            }
            if (random().nextInt(10) == 0) {
                // trigger flush:
                try {
                    if (random().nextBoolean()) {
                        DirectoryReader ir = null;
                        try {
                            ir = DirectoryReader.open(iw, random().nextBoolean(), false);
                            TestUtil.checkReader(ir);
                        } finally {
                            IOUtils.closeWhileHandlingException(ir);
                        }
                    } else {
                        iw.commit();
                    }
                    if (DirectoryReader.indexExists(dir)) {
                        TestUtil.checkIndex(dir);
                    }
                } catch (AlreadyClosedException ace) {
                    // OK: writer was closed by abort; we just reopen now:
                    assertTrue(iw.deleter.isClosed());
                    assertTrue(allowAlreadyClosed);
                    allowAlreadyClosed = false;
                    conf = newIndexWriterConfig(analyzer);
                    // just for now, try to keep this test reproducible
                    conf.setMergeScheduler(new SerialMergeScheduler());
                    conf.setCodec(codec);
                    iw = new IndexWriter(dir, conf);
                } catch (Exception e) {
                    if (e.getMessage() != null && e.getMessage().startsWith("Fake IOException")) {
                        exceptionStream.println("\nTEST: got expected fake exc:" + e.getMessage());
                        e.printStackTrace(exceptionStream);
                        allowAlreadyClosed = true;
                    } else {
                        Rethrow.rethrow(e);
                    }
                }
            }
        }
        try {
            iw.close();
        } catch (Exception e) {
            if (e.getMessage() != null && e.getMessage().startsWith("Fake IOException")) {
                exceptionStream.println("\nTEST: got expected fake exc:" + e.getMessage());
                e.printStackTrace(exceptionStream);
                try {
                    iw.rollback();
                } catch (Throwable t) {
                }
            } else {
                Rethrow.rethrow(e);
            }
        }
        dir.close();
    } catch (Throwable t) {
        System.out.println("Unexpected exception: dumping fake-exception-log:...");
        exceptionStream.flush();
        System.out.println(exceptionLog.toString("UTF-8"));
        System.out.flush();
        Rethrow.rethrow(t);
    }
    if (VERBOSE) {
        System.out.println("TEST PASSED: dumping fake-exception-log:...");
        System.out.println(exceptionLog.toString("UTF-8"));
    }
}
Also used : ByteArrayOutputStream, PrintStream (java.io); Random (java.util); Analyzer, TokenStream, MockTokenizer, CrankyTokenFilter, MockVariableLengthPayloadFilter (org.apache.lucene.analysis); Codec (org.apache.lucene.codecs); AssertingCodec (org.apache.lucene.codecs.asserting); CrankyCodec (org.apache.lucene.codecs.cranky); Document, FieldType, StoredField, IntPoint, NumericDocValuesField, SortedNumericDocValuesField, SortedDocValuesField, SortedSetDocValuesField, BinaryDocValuesField (org.apache.lucene.document); Directory, MockDirectoryWrapper, AlreadyClosedException (org.apache.lucene.store); BytesRef (org.apache.lucene.util)
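
The three AlreadyClosedException handlers above repeat one recovery idea: if the writer died from an aborting ("tragic") exception, drop it and open a fresh one. Below is a hedged sketch of just that pattern, not the test's own code: the WriterRecovery class is hypothetical, it consults IndexWriter.getTragicException() instead of the test-internal iw.deleter flag, and it leaves out the CrankyCodec the test re-installs.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.SerialMergeScheduler;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.Directory;

// Hypothetical helper: reopen an IndexWriter that was closed by a tragic exception,
// or rethrow if the close was not caused by one.
final class WriterRecovery {

    static IndexWriter reopenIfAborted(IndexWriter iw, Directory dir, Analyzer analyzer,
                                       AlreadyClosedException ace) throws Exception {
        if (iw.getTragicException() == null) {
            // the writer was closed normally, so this AlreadyClosedException is a real bug
            throw ace;
        }
        IndexWriterConfig conf = new IndexWriterConfig(analyzer);
        // serial merges keep the behavior deterministic, mirroring the test above
        conf.setMergeScheduler(new SerialMergeScheduler());
        return new IndexWriter(dir, conf);
    }
}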

Example 63 with SortedSetDocValuesField

Use of org.apache.lucene.document.SortedSetDocValuesField in the apache/lucene-solr project.

From class SortableBinaryField, method createFields:

@Override
public List<IndexableField> createFields(SchemaField field, Object value) {
    if (field.hasDocValues()) {
        List<IndexableField> fields = new ArrayList<>();
        IndexableField storedField = createField(field, value);
        fields.add(storedField);
        ByteBuffer byteBuffer = toObject(storedField);
        // wrap only the live region of the buffer: start at arrayOffset + position, length = remaining()
        BytesRef bytes = new BytesRef(byteBuffer.array(), byteBuffer.arrayOffset() + byteBuffer.position(), byteBuffer.remaining());
        if (field.multiValued()) {
            fields.add(new SortedSetDocValuesField(field.getName(), bytes));
        } else {
            fields.add(new SortedDocValuesField(field.getName(), bytes));
        }
        return fields;
    } else {
        return Collections.singletonList(createField(field, value));
    }
}
Also used : ByteBuffer (java.nio); ArrayList (java.util); SortedDocValuesField, SortedSetDocValuesField (org.apache.lucene.document); IndexableField (org.apache.lucene.index); BytesRef (org.apache.lucene.util)
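
Outside Solr, the same single-valued versus multi-valued choice can be written with plain Lucene fields. The sketch below is an illustrative analogue, not SortableBinaryField itself: the SortableBytes class and its add method are made up, and storing the raw bytes alongside the doc values is an assumption about what a caller would want.

import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.util.BytesRef;

// Hypothetical helper mirroring the decision above: SortedDocValuesField allows at most
// one value per document, SortedSetDocValuesField allows many.
final class SortableBytes {

    static void add(Document doc, String name, BytesRef bytes, boolean multiValued) {
        // keep a retrievable copy of the raw bytes
        doc.add(new StoredField(name, bytes));
        if (multiValued) {
            // may be called repeatedly for the same document
            doc.add(new SortedSetDocValuesField(name, bytes));
        } else {
            doc.add(new SortedDocValuesField(name, bytes));
        }
    }
}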

Example 64 with SortedSetDocValuesField

Use of org.apache.lucene.document.SortedSetDocValuesField in the apache/lucene-solr project.

From class TestFieldCacheVsDocValues, method doTestSortedSetVsUninvertedField:

private void doTestSortedSetVsUninvertedField(int minLength, int maxLength) throws Exception {
    Directory dir = newDirectory();
    IndexWriterConfig conf = new IndexWriterConfig(new MockAnalyzer(random()));
    RandomIndexWriter writer = new RandomIndexWriter(random(), dir, conf);
    // index some docs
    int numDocs = atLeast(300);
    for (int i = 0; i < numDocs; i++) {
        Document doc = new Document();
        Field idField = new StringField("id", Integer.toString(i), Field.Store.NO);
        doc.add(idField);
        final int length = TestUtil.nextInt(random(), minLength, maxLength);
        int numValues = random().nextInt(17);
        // create a random list of strings
        List<String> values = new ArrayList<>();
        for (int v = 0; v < numValues; v++) {
            values.add(TestUtil.randomSimpleString(random(), minLength, length));
        }
        // add in any order to the indexed field
        ArrayList<String> unordered = new ArrayList<>(values);
        Collections.shuffle(unordered, random());
        for (String v : unordered) {
            doc.add(newStringField("indexed", v, Field.Store.NO));
        }
        // add in any order to the dv field
        ArrayList<String> unordered2 = new ArrayList<>(values);
        Collections.shuffle(unordered2, random());
        for (String v : unordered2) {
            doc.add(new SortedSetDocValuesField("dv", new BytesRef(v)));
        }
        writer.addDocument(doc);
        if (random().nextInt(31) == 0) {
            writer.commit();
        }
    }
    // delete some docs
    int numDeletions = random().nextInt(numDocs / 10);
    for (int i = 0; i < numDeletions; i++) {
        int id = random().nextInt(numDocs);
        writer.deleteDocuments(new Term("id", Integer.toString(id)));
    }
    // compare per-segment
    DirectoryReader ir = writer.getReader();
    for (LeafReaderContext context : ir.leaves()) {
        LeafReader r = context.reader();
        SortedSetDocValues expected = FieldCache.DEFAULT.getDocTermOrds(r, "indexed", null);
        SortedSetDocValues actual = r.getSortedSetDocValues("dv");
        assertEquals(r.maxDoc(), expected, actual);
    }
    ir.close();
    writer.forceMerge(1);
    // now compare again after the merge
    ir = writer.getReader();
    LeafReader ar = getOnlyLeafReader(ir);
    SortedSetDocValues expected = FieldCache.DEFAULT.getDocTermOrds(ar, "indexed", null);
    SortedSetDocValues actual = ar.getSortedSetDocValues("dv");
    assertEquals(ir.maxDoc(), expected, actual);
    ir.close();
    writer.close();
    dir.close();
}
Also used : ArrayList (java.util); MockAnalyzer (org.apache.lucene.analysis); Document, Field, StringField, NumericDocValuesField, SortedDocValuesField, SortedSetDocValuesField, BinaryDocValuesField (org.apache.lucene.document); LeafReader, LeafReaderContext, DirectoryReader, Term, SortedSetDocValues, RandomIndexWriter, IndexWriterConfig (org.apache.lucene.index); Directory (org.apache.lucene.store); BytesRef (org.apache.lucene.util)
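
A typical reason for indexing "dv" as SortedSetDocValuesField in the first place is to sort (or facet) on it without uninverting the indexed terms. The following is a small usage sketch under that assumption; the SortBySortedSet class, its topTen method, and the MatchAllDocsQuery are illustrative choices, not part of the test.

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortedSetSortField;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;

// Hypothetical usage: sort all documents by the multi-valued "dv" field.
final class SortBySortedSet {

    static TopDocs topTen(Directory dir) throws Exception {
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            // by default SortedSetSortField compares documents by their minimum value
            Sort sort = new Sort(new SortedSetSortField("dv", /* reverse= */ false));
            return searcher.search(new MatchAllDocsQuery(), 10, sort);
        }
    }
}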

Example 65 with SortedSetDocValuesField

Use of org.apache.lucene.document.SortedSetDocValuesField in the apache/lucene-solr project.

From class TestJoinUtil, method testEquals:

public void testEquals() throws Exception {
    final int numDocs = atLeast(random(), 50);
    try (final Directory dir = newDirectory()) {
        try (final RandomIndexWriter w = new RandomIndexWriter(random(), dir, newIndexWriterConfig(new MockAnalyzer(random())).setMergePolicy(newLogMergePolicy()))) {
            boolean multiValued = random().nextBoolean();
            String joinField = multiValued ? "mvField" : "svField";
            for (int id = 0; id < numDocs; id++) {
                Document doc = new Document();
                doc.add(new TextField("id", "" + id, Field.Store.NO));
                doc.add(new TextField("name", "name" + (id % 7), Field.Store.NO));
                if (multiValued) {
                    int numValues = 1 + random().nextInt(2);
                    for (int i = 0; i < numValues; i++) {
                        doc.add(new SortedSetDocValuesField(joinField, new BytesRef("" + random().nextInt(13))));
                    }
                } else {
                    doc.add(new SortedDocValuesField(joinField, new BytesRef("" + random().nextInt(13))));
                }
                w.addDocument(doc);
            }
            Set<ScoreMode> scoreModes = EnumSet.allOf(ScoreMode.class);
            ScoreMode scoreMode1 = RandomPicks.randomFrom(random(), scoreModes);
            scoreModes.remove(scoreMode1);
            ScoreMode scoreMode2 = RandomPicks.randomFrom(random(), scoreModes);
            final Query x;
            try (IndexReader r = w.getReader()) {
                IndexSearcher indexSearcher = new IndexSearcher(r);
                x = JoinUtil.createJoinQuery(joinField, multiValued, joinField, new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode1);
                assertEquals("identical calls to createJoinQuery", x, JoinUtil.createJoinQuery(joinField, multiValued, joinField, new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode1));
                assertFalse("score mode (" + scoreMode1 + " != " + scoreMode2 + "), but queries are equal", x.equals(JoinUtil.createJoinQuery(joinField, multiValued, joinField, new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode2)));
                assertFalse("from fields (joinField != \"other_field\") but queries equals", x.equals(JoinUtil.createJoinQuery(joinField, multiValued, "other_field", new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode1)));
                assertFalse("from fields (\"other_field\" != joinField) but queries equals", x.equals(JoinUtil.createJoinQuery("other_field", multiValued, joinField, new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode1)));
                assertFalse("fromQuery (name:name5 != name:name6) but queries equals", x.equals(JoinUtil.createJoinQuery("other_field", multiValued, joinField, new TermQuery(new Term("name", "name6")), indexSearcher, scoreMode1)));
            }
            for (int i = 0; i < 13; i++) {
                Document doc = new Document();
                doc.add(new TextField("id", "new_id", Field.Store.NO));
                doc.add(new TextField("name", "name5", Field.Store.NO));
                if (multiValued) {
                    int numValues = 1 + random().nextInt(2);
                    for (int j = 0; j < numValues; j++) {
                        doc.add(new SortedSetDocValuesField(joinField, new BytesRef("" + i)));
                    }
                } else {
                    doc.add(new SortedDocValuesField(joinField, new BytesRef("" + i)));
                }
                w.addDocument(doc);
            }
            try (IndexReader r = w.getReader()) {
                IndexSearcher indexSearcher = new IndexSearcher(r);
                assertFalse("Query shouldn't be equal, because different index readers ", x.equals(JoinUtil.createJoinQuery(joinField, multiValued, joinField, new TermQuery(new Term("name", "name5")), indexSearcher, scoreMode1)));
            }
        }
    }
}
Also used : MockAnalyzer (org.apache.lucene.analysis); Document, TextField, SortedDocValuesField, SortedSetDocValuesField, IntPoint, LongPoint, FloatPoint, DoublePoint (org.apache.lucene.document); Term, IndexReader, RandomIndexWriter (org.apache.lucene.index); Query, TermQuery, BooleanQuery, MatchAllDocsQuery, MatchNoDocsQuery, FieldValueQuery, IndexSearcher (org.apache.lucene.search); Directory (org.apache.lucene.store); BytesRef (org.apache.lucene.util)
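
To make the createJoinQuery calls above concrete in isolation, here is a hedged sketch of running one such join: the RunJoin class, its countJoined method, and the fixed "mvField"/"name5" values are assumptions for illustration; ScoreMode is org.apache.lucene.search.join.ScoreMode, as in the test.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;

// Hypothetical sketch: the "from" documents are selected by a term query, the
// multi-valued join field is backed by SortedSetDocValuesField, and the resulting
// query is executed like any other.
final class RunJoin {

    static int countJoined(IndexSearcher searcher) throws Exception {
        Query join = JoinUtil.createJoinQuery(
                // from field, indexed as SortedSetDocValuesField
                "mvField",
                // multipleValuesPerDocument
                true,
                // to field
                "mvField",
                new TermQuery(new Term("name", "name5")),
                searcher,
                ScoreMode.None);
        return searcher.count(join);
    }
}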

Aggregations

SortedSetDocValuesField (org.apache.lucene.document): 98
BytesRef (org.apache.lucene.util): 96
Document (org.apache.lucene.document): 82
Directory (org.apache.lucene.store): 74
RandomIndexWriter (org.apache.lucene.index): 38
MockAnalyzer (org.apache.lucene.analysis): 36
SortedDocValuesField (org.apache.lucene.document): 33
NumericDocValuesField (org.apache.lucene.document): 27
IndexReader (org.apache.lucene.index): 27
StringField (org.apache.lucene.document): 23
BinaryDocValuesField (org.apache.lucene.document): 22
SortedNumericDocValuesField (org.apache.lucene.document): 20
ArrayList (java.util): 18
Analyzer (org.apache.lucene.analysis): 14
IndexableField (org.apache.lucene.index): 13
Field (org.apache.lucene.document): 12
DirectoryReader (org.apache.lucene.index): 11
LeafReader (org.apache.lucene.index): 11
IntPoint (org.apache.lucene.document): 10
StoredField (org.apache.lucene.document): 10