
Example 91 with MockDirectoryWrapper

Use of org.apache.lucene.store.MockDirectoryWrapper in project lucene-solr by apache.

From the class TestIndexWriterExceptions, method testExceptionOnCtor.

public void testExceptionOnCtor() throws Exception {
    UOEDirectory uoe = new UOEDirectory();
    Directory d = new MockDirectoryWrapper(random(), uoe);
    IndexWriter iw = new IndexWriter(d, newIndexWriterConfig(null));
    iw.addDocument(new Document());
    iw.close();
    uoe.doFail = true;
    expectThrows(UnsupportedOperationException.class, () -> {
        new IndexWriter(d, newIndexWriterConfig(null));
    });
    uoe.doFail = false;
    d.close();
}
Also used: MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper), Document (org.apache.lucene.document.Document), RAMDirectory (org.apache.lucene.store.RAMDirectory), Directory (org.apache.lucene.store.Directory)
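
UOEDirectory is a helper declared elsewhere in TestIndexWriterExceptions and not shown on this page. A minimal sketch of what such a directory could look like, assuming it extends RAMDirectory (consistent with the imports listed above) and throws UnsupportedOperationException when the segments file is opened while doFail is set; the actual helper may differ:

import java.io.IOException;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.RAMDirectory;

// Hypothetical sketch of a UOEDirectory-style helper; the real class in
// TestIndexWriterExceptions may differ.
final class UOEDirectory extends RAMDirectory {
    boolean doFail = false;

    @Override
    public IndexInput openInput(String name, IOContext context) throws IOException {
        if (doFail && name.startsWith("segments_")) {
            // IndexWriter's constructor reads the latest segments_N file, so
            // arming doFail makes the constructor call in the test above fail:
            throw new UnsupportedOperationException("expected UOE");
        }
        return super.openInput(name, context);
    }
}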

Example 92 with MockDirectoryWrapper

Use of org.apache.lucene.store.MockDirectoryWrapper in project lucene-solr by apache.

From the class TestIndexWriterExceptions, method testTooManyFileException.

// See LUCENE-4870 TooManyOpenFiles errors are thrown as
// FNFExceptions which can trigger data loss.
public void testTooManyFileException() throws Exception {
    // Create failure that throws Too many open files exception randomly
    MockDirectoryWrapper.Failure failure = new MockDirectoryWrapper.Failure() {

        @Override
        public MockDirectoryWrapper.Failure reset() {
            doFail = false;
            return this;
        }

        @Override
        public void eval(MockDirectoryWrapper dir) throws IOException {
            if (doFail) {
                if (random().nextBoolean()) {
                    throw new FileNotFoundException("some/file/name.ext (Too many open files)");
                }
            }
        }
    };
    MockDirectoryWrapper dir = newMockDirectory();
    // The exception is only thrown on open input
    dir.setFailOnOpenInput(true);
    dir.failOn(failure);
    // Create an index with one document
    IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));
    IndexWriter iw = new IndexWriter(dir, iwc);
    Document doc = new Document();
    doc.add(new StringField("foo", "bar", Field.Store.NO));
    // add a document
    iw.addDocument(doc);
    iw.commit();
    DirectoryReader ir = DirectoryReader.open(dir);
    assertEquals(1, ir.numDocs());
    ir.close();
    iw.close();
    // Open and close the index a few times
    for (int i = 0; i < 10; i++) {
        failure.setDoFail();
        iwc = new IndexWriterConfig(new MockAnalyzer(random()));
        try {
            iw = new IndexWriter(dir, iwc);
        } catch (AssertionError ex) {
            // This is fine: we tripped IW's assert that all files it's about to fsync do exist:
            assertTrue(ex.getMessage().matches("file .* does not exist; files=\\[.*\\]"));
        } catch (CorruptIndexException ex) {
            // Exceptions are fine - we are running out of file handles here
            continue;
        } catch (FileNotFoundException | NoSuchFileException ex) {
            continue;
        }
        failure.clearDoFail();
        iw.close();
        ir = DirectoryReader.open(dir);
        assertEquals("lost document after iteration: " + i, 1, ir.numDocs());
        ir.close();
    }
    // Check if document is still there
    failure.clearDoFail();
    ir = DirectoryReader.open(dir);
    assertEquals(1, ir.numDocs());
    ir.close();
    dir.close();
}
Also used: MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper), FileNotFoundException (java.io.FileNotFoundException), NoSuchFileException (java.nio.file.NoSuchFileException), Document (org.apache.lucene.document.Document), MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer), StringField (org.apache.lucene.document.StringField)
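
This test follows the general MockDirectoryWrapper.Failure lifecycle used throughout these examples: register the failure with dir.failOn(...), arm it with setDoFail(), exercise the index, then disarm it with clearDoFail() before the final checks. A minimal sketch of that lifecycle, with a purely illustrative eval() trigger and a hypothetical test name:

// Minimal sketch (inside a LuceneTestCase-style test class) of the
// register/arm/exercise/disarm lifecycle shown above.
public void testInjectedFailureLifecycle() throws Exception {   // hypothetical test name
    MockDirectoryWrapper dir = newMockDirectory();
    MockDirectoryWrapper.Failure failure = new MockDirectoryWrapper.Failure() {
        @Override
        public void eval(MockDirectoryWrapper dir) throws IOException {
            // Illustrative trigger only: the real tests inspect the stack
            // trace or directory state to decide when to throw.
            if (doFail && random().nextBoolean()) {
                throw new IOException("injected failure");
            }
        }
    };
    dir.failOn(failure);   // register the failure hook with the directory
    failure.setDoFail();   // arm it: eval() may now throw
    // ... exercise an IndexWriter against dir, expecting occasional IOExceptions ...
    failure.clearDoFail(); // disarm it before the final verification
    dir.close();
}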

Example 93 with MockDirectoryWrapper

Use of org.apache.lucene.store.MockDirectoryWrapper in project lucene-solr by apache.

From the class TestIndexWriterExceptions, method testDocumentsWriterAbort.

// make sure an aborting exception closes the writer:
public void testDocumentsWriterAbort() throws IOException {
    MockDirectoryWrapper dir = newMockDirectory();
    FailOnlyOnFlush failure = new FailOnlyOnFlush();
    failure.setDoFail();
    dir.failOn(failure);
    IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())).setMaxBufferedDocs(2));
    Document doc = new Document();
    String contents = "aa bb cc dd ee ff gg hh ii jj kk";
    doc.add(newTextField("content", contents, Field.Store.NO));
    boolean hitError = false;
    writer.addDocument(doc);
    expectThrows(IOException.class, () -> {
        writer.addDocument(doc);
    });
    // only one flush should fail:
    assertFalse(hitError);
    hitError = true;
    assertTrue(writer.deleter.isClosed());
    assertTrue(writer.isClosed());
    assertFalse(DirectoryReader.indexExists(dir));
    dir.close();
}
Also used: MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper), MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer), Document (org.apache.lucene.document.Document)
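
FailOnlyOnFlush is another Failure subclass defined elsewhere in TestIndexWriterExceptions. A plausible sketch, assuming it walks the current stack trace and throws an IOException only while a flush is in progress; the method name checked and the exception message are assumptions:

// Hypothetical sketch of a FailOnlyOnFlush-style Failure; the real helper in
// TestIndexWriterExceptions may check different method names or conditions.
private static class FailOnlyOnFlush extends MockDirectoryWrapper.Failure {
    @Override
    public void eval(MockDirectoryWrapper dir) throws IOException {
        if (doFail == false) {
            return;
        }
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            if ("flush".equals(frame.getMethodName())) {
                // An exception thrown while flushing buffered documents is an
                // aborting exception, which is what the test above relies on:
                throw new IOException("now failing during flush");
            }
        }
    }
}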

Example 94 with MockDirectoryWrapper

Use of org.apache.lucene.store.MockDirectoryWrapper in project lucene-solr by apache.

From the class TestIndexWriterExceptions, method testNoLostDeletesOrUpdates.

// Make sure if we hit a transient IOException (e.g., disk
// full), and then the exception stops (e.g., disk frees
// up), so we successfully close IW or open an NRT
// reader, we don't lose any deletes or updates:
public void testNoLostDeletesOrUpdates() throws Throwable {
    int deleteCount = 0;
    int docBase = 0;
    int docCount = 0;
    MockDirectoryWrapper dir = newMockDirectory();
    final AtomicBoolean shouldFail = new AtomicBoolean();
    dir.failOn(new MockDirectoryWrapper.Failure() {

        @Override
        public void eval(MockDirectoryWrapper dir) throws IOException {
            if (shouldFail.get() == false) {
                return;
            }
            if (random().nextInt(3) != 2) {
                // Only sometimes throw the exception, so it can strike at
                // different points (creating the file, flushing the buffer,
                // closing the file):
                return;
            }
            StackTraceElement[] trace = Thread.currentThread().getStackTrace();
            boolean sawSeal = false;
            boolean sawWrite = false;
            for (int i = 0; i < trace.length; i++) {
                if ("sealFlushedSegment".equals(trace[i].getMethodName())) {
                    sawSeal = true;
                    break;
                }
                if ("writeLiveDocs".equals(trace[i].getMethodName()) || "writeFieldUpdates".equals(trace[i].getMethodName())) {
                    sawWrite = true;
                }
            }
            // Don't throw the exception while sealing a flushed segment, else
            // the segment is aborted and docs are lost:
            if (sawWrite && sawSeal == false) {
                if (VERBOSE) {
                    System.out.println("TEST: now fail; thread=" + Thread.currentThread().getName() + " exc:");
                    new Throwable().printStackTrace(System.out);
                }
                shouldFail.set(false);
                throw new FakeIOException();
            }
        }
    });
    RandomIndexWriter w = null;
    boolean tragic = false;
    for (int iter = 0; iter < 10 * RANDOM_MULTIPLIER; iter++) {
        int numDocs = atLeast(100);
        if (VERBOSE) {
            System.out.println("\nTEST: iter=" + iter + " numDocs=" + numDocs + " docBase=" + docBase + " delCount=" + deleteCount);
        }
        if (w == null) {
            IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random()));
            w = new RandomIndexWriter(random(), dir, iwc);
            // Since we hit exc during merging, a partial
            // forceMerge can easily return when there are still
            // too many segments in the index:
            w.setDoRandomForceMergeAssert(false);
        }
        for (int i = 0; i < numDocs; i++) {
            Document doc = new Document();
            doc.add(new StringField("id", "" + (docBase + i), Field.Store.NO));
            doc.add(new NumericDocValuesField("f", 1L));
            doc.add(new NumericDocValuesField("cf", 2L));
            doc.add(new BinaryDocValuesField("bf", TestBinaryDocValuesUpdates.toBytes(1L)));
            doc.add(new BinaryDocValuesField("bcf", TestBinaryDocValuesUpdates.toBytes(2L)));
            w.addDocument(doc);
        }
        docCount += numDocs;
        // TODO: we could make the test more evil, by letting
        // it throw more than one exc, randomly, before "recovering"
        // TODO: we could also install an infoStream and try
        // to fail in "more evil" places inside BDS
        shouldFail.set(true);
        boolean doClose = false;
        try {
            for (int i = 0; i < numDocs; i++) {
                if (random().nextInt(10) == 7) {
                    boolean fieldUpdate = random().nextBoolean();
                    int docid = docBase + i;
                    if (fieldUpdate) {
                        long value = iter;
                        if (VERBOSE) {
                            System.out.println("  update id=" + docid + " to value " + value);
                        }
                        Term idTerm = new Term("id", Integer.toString(docid));
                        if (random().nextBoolean()) {
                            // update only numeric field
                            w.updateDocValues(idTerm, new NumericDocValuesField("f", value), new NumericDocValuesField("cf", value * 2));
                        } else if (random().nextBoolean()) {
                            w.updateDocValues(idTerm, new BinaryDocValuesField("bf", TestBinaryDocValuesUpdates.toBytes(value)), new BinaryDocValuesField("bcf", TestBinaryDocValuesUpdates.toBytes(value * 2)));
                        } else {
                            w.updateDocValues(idTerm, new NumericDocValuesField("f", value), new NumericDocValuesField("cf", value * 2), new BinaryDocValuesField("bf", TestBinaryDocValuesUpdates.toBytes(value)), new BinaryDocValuesField("bcf", TestBinaryDocValuesUpdates.toBytes(value * 2)));
                        }
                    }
                    // sometimes do both deletes and updates
                    if (!fieldUpdate || random().nextBoolean()) {
                        if (VERBOSE) {
                            System.out.println("  delete id=" + docid);
                        }
                        deleteCount++;
                        w.deleteDocuments(new Term("id", "" + docid));
                    }
                }
            }
            // Trigger writeLiveDocs + writeFieldUpdates so we hit fake exc:
            IndexReader r = w.getReader();
            // Sometimes we will make it here (we only randomly
            // throw the exc):
            assertEquals(docCount - deleteCount, r.numDocs());
            r.close();
            // Sometimes close, so the disk full happens on close:
            if (random().nextBoolean()) {
                if (VERBOSE) {
                    System.out.println("  now close writer");
                }
                doClose = true;
                w.commit();
                w.close();
                w = null;
            }
        } catch (Throwable t) {
            // The FakeIOException can also strike in a merge thread, which re-throws
            // it as a wrapped IOException, so don't fail in that case either.
            if (t instanceof FakeIOException || (t.getCause() instanceof FakeIOException)) {
                // expected
                if (VERBOSE) {
                    System.out.println("TEST: hit expected IOE");
                }
                if (t instanceof AlreadyClosedException) {
                    // FakeIOExc struck during merge and writer is now closed:
                    w = null;
                    tragic = true;
                }
            } else {
                throw t;
            }
        }
        shouldFail.set(false);
        if (w != null) {
            MergeScheduler ms = w.w.getConfig().getMergeScheduler();
            if (ms instanceof ConcurrentMergeScheduler) {
                ((ConcurrentMergeScheduler) ms).sync();
            }
            if (w.w.getTragicException() != null) {
                // Tragic exc in CMS closed the writer
                w = null;
            }
        }
        IndexReader r;
        if (doClose && w != null) {
            if (VERBOSE) {
                System.out.println("  now 2nd close writer");
            }
            w.close();
            w = null;
        }
        if (w == null || random().nextBoolean()) {
            // Open a non-NRT reader, to make sure the "on disk" bits are good:
            if (VERBOSE) {
                System.out.println("TEST: verify against non-NRT reader");
            }
            if (w != null) {
                w.commit();
            }
            r = DirectoryReader.open(dir);
        } else {
            if (VERBOSE) {
                System.out.println("TEST: verify against NRT reader");
            }
            r = w.getReader();
        }
        if (tragic == false) {
            assertEquals(docCount - deleteCount, r.numDocs());
        }
        BytesRef scratch = new BytesRef();
        for (LeafReaderContext context : r.leaves()) {
            LeafReader reader = context.reader();
            Bits liveDocs = reader.getLiveDocs();
            NumericDocValues f = reader.getNumericDocValues("f");
            NumericDocValues cf = reader.getNumericDocValues("cf");
            BinaryDocValues bf = reader.getBinaryDocValues("bf");
            BinaryDocValues bcf = reader.getBinaryDocValues("bcf");
            for (int i = 0; i < reader.maxDoc(); i++) {
                if (liveDocs == null || liveDocs.get(i)) {
                    assertEquals(i, f.advance(i));
                    assertEquals(i, cf.advance(i));
                    assertEquals(i, bf.advance(i));
                    assertEquals(i, bcf.advance(i));
                    assertEquals("doc=" + (docBase + i), cf.longValue(), f.longValue() * 2);
                    assertEquals("doc=" + (docBase + i), TestBinaryDocValuesUpdates.getValue(bcf), TestBinaryDocValuesUpdates.getValue(bf) * 2);
                }
            }
        }
        r.close();
        // Sometimes re-use RIW, other times open new one:
        if (w != null && random().nextBoolean()) {
            if (VERBOSE) {
                System.out.println("TEST: close writer");
            }
            w.close();
            w = null;
        }
        docBase += numDocs;
    }
    if (w != null) {
        w.close();
    }
    // Final verify:
    if (tragic == false) {
        IndexReader r = DirectoryReader.open(dir);
        assertEquals(docCount - deleteCount, r.numDocs());
        r.close();
    }
    dir.close();
}
Also used: AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), Document (org.apache.lucene.document.Document), MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer), SortedNumericDocValuesField (org.apache.lucene.document.SortedNumericDocValuesField), NumericDocValuesField (org.apache.lucene.document.NumericDocValuesField), BytesRef (org.apache.lucene.util.BytesRef), MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper), FakeIOException (org.apache.lucene.store.MockDirectoryWrapper.FakeIOException), IOException (java.io.IOException), BinaryDocValuesField (org.apache.lucene.document.BinaryDocValuesField), AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), StringField (org.apache.lucene.document.StringField), Bits (org.apache.lucene.util.Bits)
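
The Failure above decides whether to throw by inspecting the current stack trace, so the FakeIOException only strikes while live docs or doc-values updates are being written, never while a flushed segment is being sealed. A stripped-down sketch of that stack-inspection pattern, reusing names from the test:

// Stripped-down sketch of the stack-inspection pattern used above: only throw
// when a specific method is on the current call stack. The method name checked
// here ("writeLiveDocs") is taken from the test; adjust it for other targets.
MockDirectoryWrapper.Failure failWhileWritingLiveDocs = new MockDirectoryWrapper.Failure() {
    @Override
    public void eval(MockDirectoryWrapper dir) throws IOException {
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            if ("writeLiveDocs".equals(frame.getMethodName())) {
                throw new MockDirectoryWrapper.FakeIOException();
            }
        }
    }
};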

Example 95 with MockDirectoryWrapper

Use of org.apache.lucene.store.MockDirectoryWrapper in project lucene-solr by apache.

From the class TestIndexWriterExceptions, method testTermVectorExceptions.

public void testTermVectorExceptions() throws IOException {
    FailOnTermVectors[] failures = new FailOnTermVectors[] { new FailOnTermVectors(FailOnTermVectors.AFTER_INIT_STAGE), new FailOnTermVectors(FailOnTermVectors.INIT_STAGE) };
    int num = atLeast(1);
    iters: for (int j = 0; j < num; j++) {
        for (FailOnTermVectors failure : failures) {
            MockDirectoryWrapper dir = newMockDirectory();
            IndexWriter w = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())));
            dir.failOn(failure);
            int numDocs = 10 + random().nextInt(30);
            for (int i = 0; i < numDocs; i++) {
                Document doc = new Document();
                // random TV
                Field field = newTextField(random(), "field", "a field", Field.Store.YES);
                doc.add(field);
                try {
                    w.addDocument(doc);
                    assertFalse(field.fieldType().storeTermVectors());
                } catch (RuntimeException e) {
                    assertTrue(e.getMessage().startsWith(FailOnTermVectors.EXC_MSG));
                    // This is an aborting exception, so writer is closed:
                    assertTrue(w.deleter.isClosed());
                    assertTrue(w.isClosed());
                    dir.close();
                    continue iters;
                }
                if (random().nextInt(20) == 0) {
                    w.commit();
                    TestUtil.checkIndex(dir);
                }
            }
            Document document = new Document();
            document.add(new TextField("field", "a field", Field.Store.YES));
            w.addDocument(document);
            for (int i = 0; i < numDocs; i++) {
                Document doc = new Document();
                Field field = newTextField(random(), "field", "a field", Field.Store.YES);
                doc.add(field);
                // random TV
                try {
                    w.addDocument(doc);
                    assertFalse(field.fieldType().storeTermVectors());
                } catch (RuntimeException e) {
                    assertTrue(e.getMessage().startsWith(FailOnTermVectors.EXC_MSG));
                }
                if (random().nextInt(20) == 0) {
                    w.commit();
                    TestUtil.checkIndex(dir);
                }
            }
            document = new Document();
            document.add(new TextField("field", "a field", Field.Store.YES));
            w.addDocument(document);
            w.close();
            IndexReader reader = DirectoryReader.open(dir);
            assertTrue(reader.numDocs() > 0);
            SegmentInfos sis = SegmentInfos.readLatestCommit(dir);
            for (LeafReaderContext context : reader.leaves()) {
                assertFalse(context.reader().getFieldInfos().hasVectors());
            }
            reader.close();
            dir.close();
        }
    }
}
Also used: MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper), Document (org.apache.lucene.document.Document), StringField (org.apache.lucene.document.StringField), SortedNumericDocValuesField (org.apache.lucene.document.SortedNumericDocValuesField), StoredField (org.apache.lucene.document.StoredField), NumericDocValuesField (org.apache.lucene.document.NumericDocValuesField), SortedSetDocValuesField (org.apache.lucene.document.SortedSetDocValuesField), BinaryDocValuesField (org.apache.lucene.document.BinaryDocValuesField), SortedDocValuesField (org.apache.lucene.document.SortedDocValuesField), Field (org.apache.lucene.document.Field), TextField (org.apache.lucene.document.TextField), MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer)
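
FailOnTermVectors, with its INIT_STAGE, AFTER_INIT_STAGE, and EXC_MSG constants, is also defined elsewhere in TestIndexWriterExceptions. A plausible sketch, assuming each stage constant names a method on the term-vectors write path and the thrown message starts with EXC_MSG (as the assertions above require); the stage names and message prefix are assumptions:

// Hypothetical sketch of a FailOnTermVectors-style Failure; the real helper in
// TestIndexWriterExceptions may use different stage names and message text.
private static class FailOnTermVectors extends MockDirectoryWrapper.Failure {
    static final String INIT_STAGE = "initTermVectorsWriter"; // assumed stage name
    static final String AFTER_INIT_STAGE = "finishDocument";  // assumed stage name
    static final String EXC_MSG = "FOTV";                     // assumed message prefix

    private final String stage;

    FailOnTermVectors(String stage) {
        this.stage = stage;
    }

    @Override
    public void eval(MockDirectoryWrapper dir) throws IOException {
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            if (stage.equals(frame.getMethodName())) {
                // The message must start with EXC_MSG so the test's
                // e.getMessage().startsWith(FailOnTermVectors.EXC_MSG) checks pass:
                throw new RuntimeException(EXC_MSG + ": fail on term vectors");
            }
        }
    }
}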

Aggregations

MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper): 121
Document (org.apache.lucene.document.Document): 61
MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer): 55
Directory (org.apache.lucene.store.Directory): 32
IOException (java.io.IOException): 30
TextField (org.apache.lucene.document.TextField): 17
RAMDirectory (org.apache.lucene.store.RAMDirectory): 17
AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 15
BaseDirectoryWrapper (org.apache.lucene.store.BaseDirectoryWrapper): 15
FakeIOException (org.apache.lucene.store.MockDirectoryWrapper.FakeIOException): 15
FieldType (org.apache.lucene.document.FieldType): 14
Field (org.apache.lucene.document.Field): 12
Random (java.util.Random): 11
NumericDocValuesField (org.apache.lucene.document.NumericDocValuesField): 11
Failure (org.apache.lucene.store.MockDirectoryWrapper.Failure): 11
BytesRef (org.apache.lucene.util.BytesRef): 11
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 10
Codec (org.apache.lucene.codecs.Codec): 10
StringField (org.apache.lucene.document.StringField): 9
IndexSearcher (org.apache.lucene.search.IndexSearcher): 9