Example 6 with FieldsConsumer

Use of org.apache.lucene.codecs.FieldsConsumer in project lucene-solr by apache.

The class FSTOrdPostingsFormat, method fieldsConsumer:

@Override
public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {
    PostingsWriterBase postingsWriter = new Lucene50PostingsWriter(state);
    boolean success = false;
    try {
        FieldsConsumer ret = new FSTOrdTermsWriter(state, postingsWriter);
        success = true;
        return ret;
    } finally {
        if (!success) {
            IOUtils.closeWhileHandlingException(postingsWriter);
        }
    }
}
Also used : PostingsWriterBase(org.apache.lucene.codecs.PostingsWriterBase) FieldsConsumer(org.apache.lucene.codecs.FieldsConsumer) Lucene50PostingsWriter(org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter)
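
Every fieldsConsumer in these examples repeats the same success-flag idiom: the terms writer only takes ownership of the postings writer once its constructor returns, so the caller must close the postings writer itself if construction throws. Below is a minimal generic sketch of that idiom; the helper name and the small local interface are hypothetical and not part of the Lucene API.

import java.io.Closeable;
import java.io.IOException;

import org.apache.lucene.util.IOUtils;

final class CloseOnFailure {

    // Hypothetical functional interface, declared here only to keep the sketch self-contained.
    interface IOConstructor<T> {
        T construct() throws IOException;
    }

    // Runs the constructor; if it throws, closes the already-open resource.
    // closeWhileHandlingException suppresses any secondary exception so the
    // original failure from the constructor is the one that propagates.
    static <T> T wrapOrClose(Closeable resource, IOConstructor<T> constructor) throws IOException {
        boolean success = false;
        try {
            T ret = constructor.construct();
            success = true;
            return ret;
        } finally {
            if (!success) {
                IOUtils.closeWhileHandlingException(resource);
            }
        }
    }
}

With such a helper, the body of the method above would reduce to wrapOrClose(postingsWriter, () -> new FSTOrdTermsWriter(state, postingsWriter)).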

Example 7 with FieldsConsumer

Use of org.apache.lucene.codecs.FieldsConsumer in project lucene-solr by apache.

The class TestCodecs, method write:

private void write(SegmentInfo si, final FieldInfos fieldInfos, final Directory dir, final FieldData[] fields) throws Throwable {
    final Codec codec = si.getCodec();
    final SegmentWriteState state = new SegmentWriteState(InfoStream.getDefault(), dir, si, fieldInfos, null, newIOContext(random()));
    Arrays.sort(fields);
    FieldsConsumer consumer = codec.postingsFormat().fieldsConsumer(state);
    boolean success = false;
    try {
        consumer.write(new DataFields(fields));
        success = true;
    } finally {
        if (success) {
            IOUtils.close(consumer);
        } else {
            IOUtils.closeWhileHandlingException(consumer);
        }
    }
}
Also used : Codec(org.apache.lucene.codecs.Codec) FieldsConsumer(org.apache.lucene.codecs.FieldsConsumer)
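
The method only writes the segment; a test that also checks the postings would reopen it through the matching fieldsProducer. A hedged read-side sketch follows, assuming the four-argument SegmentReadState constructor and reusing dir, si, and fieldInfos from write():

SegmentReadState readState = new SegmentReadState(dir, si, fieldInfos, newIOContext(random()));
FieldsProducer reader = codec.postingsFormat().fieldsProducer(readState);
boolean success = false;
try {
    // walk reader.terms(field) and its postings here, comparing against the FieldData[] input
    success = true;
} finally {
    if (success) {
        IOUtils.close(reader);
    } else {
        IOUtils.closeWhileHandlingException(reader);
    }
}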

Example 8 with FieldsConsumer

Use of org.apache.lucene.codecs.FieldsConsumer in project lucene-solr by apache.

The class LuceneVarGapDocFreqInterval, method fieldsConsumer:

@Override
public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {
    PostingsWriterBase docs = new Lucene50PostingsWriter(state);
    // TODO: should we make the terms index more easily
    // pluggable?  Ie so that this codec would record which
    // index impl was used, and switch on loading?
    // Or... you must make a new Codec for this?
    TermsIndexWriterBase indexWriter;
    boolean success = false;
    try {
        indexWriter = new VariableGapTermsIndexWriter(state, new VariableGapTermsIndexWriter.EveryNOrDocFreqTermSelector(docFreqThreshold, termIndexInterval));
        success = true;
    } finally {
        if (!success) {
            docs.close();
        }
    }
    success = false;
    try {
        // Must use BlockTermsWriter (not BlockTree) because
        // BlockTree doesn't support ords (yet)...
        FieldsConsumer ret = new BlockTermsWriter(indexWriter, state, docs);
        success = true;
        return ret;
    } finally {
        if (!success) {
            try {
                docs.close();
            } finally {
                indexWriter.close();
            }
        }
    }
}
Also used : PostingsWriterBase(org.apache.lucene.codecs.PostingsWriterBase) VariableGapTermsIndexWriter(org.apache.lucene.codecs.blockterms.VariableGapTermsIndexWriter) BlockTermsWriter(org.apache.lucene.codecs.blockterms.BlockTermsWriter) TermsIndexWriterBase(org.apache.lucene.codecs.blockterms.TermsIndexWriterBase) FieldsConsumer(org.apache.lucene.codecs.FieldsConsumer) Lucene50PostingsWriter(org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter)
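
Here the second finally block has two resources to release: the nested try/finally ensures indexWriter.close() still runs if docs.close() throws. A hedged alternative for that second block uses IOUtils (assumed imported) to get the same ordering and exception handling more compactly; this is a sketch, not the shipped code.

success = false;
try {
    // Must use BlockTermsWriter (not BlockTree) because BlockTree doesn't support ords (yet)...
    FieldsConsumer ret = new BlockTermsWriter(indexWriter, state, docs);
    success = true;
    return ret;
} finally {
    if (!success) {
        // Closes both writers in order; closeWhileHandlingException suppresses secondary
        // exceptions so the failure from the BlockTermsWriter constructor propagates.
        IOUtils.closeWhileHandlingException(docs, indexWriter);
    }
}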

Example 9 with FieldsConsumer

Use of org.apache.lucene.codecs.FieldsConsumer in project lucene-solr by apache.

The class MockRandomPostingsFormat, method fieldsConsumer:

@Override
public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {
    int minSkipInterval;
    if (state.segmentInfo.maxDoc() > 1000000) {
        // Test2BPostings can OOME otherwise:
        minSkipInterval = 3;
    } else {
        minSkipInterval = 2;
    }
    // we pull this before the seed intentionally: because it's not consumed at runtime
    // (the skipInterval is written into postings header).
    // NOTE: Currently not passed to postings writer.
    //       before, it was being passed in wrongly as acceptableOverhead!
    int skipInterval = TestUtil.nextInt(seedRandom, minSkipInterval, 10);
    if (LuceneTestCase.VERBOSE) {
        System.out.println("MockRandomCodec: skipInterval=" + skipInterval);
    }
    final long seed = seedRandom.nextLong();
    if (LuceneTestCase.VERBOSE) {
        System.out.println("MockRandomCodec: writing to seg=" + state.segmentInfo.name + " formatID=" + state.segmentSuffix + " seed=" + seed);
    }
    final String seedFileName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, SEED_EXT);
    try (IndexOutput out = state.directory.createOutput(seedFileName, state.context)) {
        CodecUtil.writeIndexHeader(out, "MockRandomSeed", 0, state.segmentInfo.getId(), state.segmentSuffix);
        out.writeLong(seed);
        CodecUtil.writeFooter(out);
    }
    final Random random = new Random(seed);
    // consume a random for buffersize
    random.nextInt();
    PostingsWriterBase postingsWriter = new Lucene50PostingsWriter(state);
    final FieldsConsumer fields;
    final int t1 = random.nextInt(5);
    if (t1 == 0) {
        boolean success = false;
        try {
            fields = new FSTTermsWriter(state, postingsWriter);
            success = true;
        } finally {
            if (!success) {
                postingsWriter.close();
            }
        }
    } else if (t1 == 1) {
        boolean success = false;
        try {
            fields = new FSTOrdTermsWriter(state, postingsWriter);
            success = true;
        } finally {
            if (!success) {
                postingsWriter.close();
            }
        }
    } else if (t1 == 2) {
        if (LuceneTestCase.VERBOSE) {
            System.out.println("MockRandomCodec: writing BlockTree terms dict");
        }
        // TODO: would be nice to allow 1 but this is very
        // slow to write
        final int minTermsInBlock = TestUtil.nextInt(random, 2, 100);
        final int maxTermsInBlock = Math.max(2, (minTermsInBlock - 1) * 2 + random.nextInt(100));
        boolean success = false;
        try {
            fields = new BlockTreeTermsWriter(state, postingsWriter, minTermsInBlock, maxTermsInBlock);
            success = true;
        } finally {
            if (!success) {
                postingsWriter.close();
            }
        }
    } else if (t1 == 3) {
        if (LuceneTestCase.VERBOSE) {
            System.out.println("MockRandomCodec: writing Block terms dict");
        }
        boolean success = false;
        final TermsIndexWriterBase indexWriter;
        try {
            if (random.nextBoolean()) {
                int termIndexInterval = TestUtil.nextInt(random, 1, 100);
                if (LuceneTestCase.VERBOSE) {
                    System.out.println("MockRandomCodec: fixed-gap terms index (tii=" + termIndexInterval + ")");
                }
                indexWriter = new FixedGapTermsIndexWriter(state, termIndexInterval);
            } else {
                final VariableGapTermsIndexWriter.IndexTermSelector selector;
                final int n2 = random.nextInt(3);
                if (n2 == 0) {
                    final int tii = TestUtil.nextInt(random, 1, 100);
                    selector = new VariableGapTermsIndexWriter.EveryNTermSelector(tii);
                    if (LuceneTestCase.VERBOSE) {
                        System.out.println("MockRandomCodec: variable-gap terms index (tii=" + tii + ")");
                    }
                } else if (n2 == 1) {
                    final int docFreqThresh = TestUtil.nextInt(random, 2, 100);
                    final int tii = TestUtil.nextInt(random, 1, 100);
                    selector = new VariableGapTermsIndexWriter.EveryNOrDocFreqTermSelector(docFreqThresh, tii);
                } else {
                    final long seed2 = random.nextLong();
                    final int gap = TestUtil.nextInt(random, 2, 40);
                    if (LuceneTestCase.VERBOSE) {
                        System.out.println("MockRandomCodec: random-gap terms index (max gap=" + gap + ")");
                    }
                    selector = new VariableGapTermsIndexWriter.IndexTermSelector() {

                        final Random rand = new Random(seed2);

                        @Override
                        public boolean isIndexTerm(BytesRef term, TermStats stats) {
                            return rand.nextInt(gap) == gap / 2;
                        }

                        @Override
                        public void newField(FieldInfo fieldInfo) {
                        }
                    };
                }
                indexWriter = new VariableGapTermsIndexWriter(state, selector);
            }
            success = true;
        } finally {
            if (!success) {
                postingsWriter.close();
            }
        }
        success = false;
        try {
            fields = new BlockTermsWriter(indexWriter, state, postingsWriter);
            success = true;
        } finally {
            if (!success) {
                try {
                    postingsWriter.close();
                } finally {
                    indexWriter.close();
                }
            }
        }
    } else if (t1 == 4) {
        // Use OrdsBlockTree terms dict
        if (LuceneTestCase.VERBOSE) {
            System.out.println("MockRandomCodec: writing OrdsBlockTree");
        }
        // TODO: would be nice to allow 1 but this is very
        // slow to write
        final int minTermsInBlock = TestUtil.nextInt(random, 2, 100);
        final int maxTermsInBlock = Math.max(2, (minTermsInBlock - 1) * 2 + random.nextInt(100));
        boolean success = false;
        try {
            fields = new OrdsBlockTreeTermsWriter(state, postingsWriter, minTermsInBlock, maxTermsInBlock);
            success = true;
        } finally {
            if (!success) {
                postingsWriter.close();
            }
        }
    } else {
        // BUG!
        throw new AssertionError();
    }
    return fields;
}
Also used :
FieldsConsumer (org.apache.lucene.codecs.FieldsConsumer)
OrdsBlockTreeTermsWriter (org.apache.lucene.codecs.blocktreeords.OrdsBlockTreeTermsWriter)
FSTTermsWriter (org.apache.lucene.codecs.memory.FSTTermsWriter)
PostingsWriterBase (org.apache.lucene.codecs.PostingsWriterBase)
Random (java.util.Random)
BlockTermsWriter (org.apache.lucene.codecs.blockterms.BlockTermsWriter)
BytesRef (org.apache.lucene.util.BytesRef)
FSTOrdTermsWriter (org.apache.lucene.codecs.memory.FSTOrdTermsWriter)
BlockTreeTermsWriter (org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter)
IndexOutput (org.apache.lucene.store.IndexOutput)
TermStats (org.apache.lucene.codecs.TermStats)
VariableGapTermsIndexWriter (org.apache.lucene.codecs.blockterms.VariableGapTermsIndexWriter)
FixedGapTermsIndexWriter (org.apache.lucene.codecs.blockterms.FixedGapTermsIndexWriter)
TermsIndexWriterBase (org.apache.lucene.codecs.blockterms.TermsIndexWriterBase)
Lucene50PostingsWriter (org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter)
FieldInfo (org.apache.lucene.index.FieldInfo)
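
The seed file written above has to be recovered when the segment is opened; the matching fieldsProducer (not shown in this example) would read it back with the corresponding CodecUtil calls. A hedged sketch of that read path, where state is assumed to be the SegmentReadState handed to fieldsProducer:

final String seedFileName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, SEED_EXT);
final long seed;
try (ChecksumIndexInput in = state.directory.openChecksumInput(seedFileName, state.context)) {
    // Mirrors the writeIndexHeader/writeFooter calls made at write time.
    CodecUtil.checkIndexHeader(in, "MockRandomSeed", 0, 0, state.segmentInfo.getId(), state.segmentSuffix);
    seed = in.readLong();
    CodecUtil.checkFooter(in);
}
final Random random = new Random(seed);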

Example 10 with FieldsConsumer

Use of org.apache.lucene.codecs.FieldsConsumer in project lucene-solr by apache.

The class Lucene50PostingsFormat, method fieldsConsumer:

@Override
public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {
    PostingsWriterBase postingsWriter = new Lucene50PostingsWriter(state);
    boolean success = false;
    try {
        FieldsConsumer ret = new BlockTreeTermsWriter(state, postingsWriter, minTermBlockSize, maxTermBlockSize);
        success = true;
        return ret;
    } finally {
        if (!success) {
            IOUtils.closeWhileHandlingException(postingsWriter);
        }
    }
}
Also used : PostingsWriterBase(org.apache.lucene.codecs.PostingsWriterBase) BlockTreeTermsWriter(org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter) FieldsConsumer(org.apache.lucene.codecs.FieldsConsumer)
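
minTermBlockSize and maxTermBlockSize come from the two-argument constructor of Lucene50PostingsFormat; the no-argument constructor simply forwards the BlockTree defaults. A hedged sketch of choosing explicit block sizes follows; the non-default values are illustrative only, and the constructor is expected to reject combinations where maxTermBlockSize is smaller than 2 * (minTermBlockSize - 1).

// The defaults are BlockTreeTermsWriter.DEFAULT_MIN_BLOCK_SIZE / DEFAULT_MAX_BLOCK_SIZE (25 / 48).
PostingsFormat defaultBlocks = new Lucene50PostingsFormat(
    BlockTreeTermsWriter.DEFAULT_MIN_BLOCK_SIZE,
    BlockTreeTermsWriter.DEFAULT_MAX_BLOCK_SIZE);

// Larger blocks shrink the in-memory terms index at the cost of more scanning inside each block.
PostingsFormat largerBlocks = new Lucene50PostingsFormat(32, 64);  // illustrative values only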

Aggregations

FieldsConsumer (org.apache.lucene.codecs.FieldsConsumer): 14
PostingsWriterBase (org.apache.lucene.codecs.PostingsWriterBase): 9
Lucene50PostingsWriter (org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter): 7
BlockTermsWriter (org.apache.lucene.codecs.blockterms.BlockTermsWriter): 4
TermsIndexWriterBase (org.apache.lucene.codecs.blockterms.TermsIndexWriterBase): 4
VariableGapTermsIndexWriter (org.apache.lucene.codecs.blockterms.VariableGapTermsIndexWriter): 3
BytesRef (org.apache.lucene.util.BytesRef): 3
IOException (java.io.IOException): 2
MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer): 2
Codec (org.apache.lucene.codecs.Codec): 2
FieldsProducer (org.apache.lucene.codecs.FieldsProducer): 2
FixedGapTermsIndexWriter (org.apache.lucene.codecs.blockterms.FixedGapTermsIndexWriter): 2
BlockTreeTermsWriter (org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter): 2
Document (org.apache.lucene.document.Document): 2
Directory (org.apache.lucene.store.Directory): 2
FlushInfo (org.apache.lucene.store.FlushInfo): 2
IOContext (org.apache.lucene.store.IOContext): 2
ArrayList (java.util.ArrayList): 1
HashMap (java.util.HashMap): 1
Random (java.util.Random): 1