Example 16 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in the OpenSearch project (opensearch-project).

From the class StoreTests, method testCanOpenIndex.

public void testCanOpenIndex() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    IndexWriterConfig iwc = newIndexWriterConfig();
    Path tempDir = createTempDir();
    final BaseDirectoryWrapper dir = newFSDirectory(tempDir);
    // no commit exists yet, so the directory cannot be opened as an index
    assertFalse(StoreUtils.canOpenIndex(logger, tempDir, shardId, (id, l, d) -> new DummyShardLock(id)));
    IndexWriter writer = new IndexWriter(dir, iwc);
    Document doc = new Document();
    doc.add(new StringField("id", "1", random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
    writer.addDocument(doc);
    writer.commit();
    writer.close();
    // after one document and a commit, the index can be opened
    assertTrue(StoreUtils.canOpenIndex(logger, tempDir, shardId, (id, l, d) -> new DummyShardLock(id)));
    Store store = new Store(shardId, INDEX_SETTINGS, dir, new DummyShardLock(shardId));
    // writing a corruption marker makes the store report the index as unopenable again
    store.markStoreCorrupted(new CorruptIndexException("foo", "bar"));
    assertFalse(StoreUtils.canOpenIndex(logger, tempDir, shardId, (id, l, d) -> new DummyShardLock(id)));
    store.close();
}
Also used : ShardId(org.opensearch.index.shard.ShardId) Path(java.nio.file.Path) IndexNotFoundException(org.apache.lucene.index.IndexNotFoundException) NoMergePolicy(org.apache.lucene.index.NoMergePolicy) NoSuchFileException(java.nio.file.NoSuchFileException) Arrays(java.util.Arrays) IndexFormatTooNewException(org.apache.lucene.index.IndexFormatTooNewException) Date(java.util.Date) Term(org.apache.lucene.index.Term) Matchers.not(org.hamcrest.Matchers.not) Random(java.util.Random) InputStreamStreamInput(org.opensearch.common.io.stream.InputStreamStreamInput) ChecksumIndexInput(org.apache.lucene.store.ChecksumIndexInput) Matchers.hasKey(org.hamcrest.Matchers.hasKey) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) Document(org.apache.lucene.document.Document) ByteArrayInputStream(java.io.ByteArrayInputStream) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) CodecUtil(org.apache.lucene.codecs.CodecUtil) Directory(org.apache.lucene.store.Directory) Map(java.util.Map) Lucene(org.opensearch.common.lucene.Lucene) DummyShardLock(org.opensearch.test.DummyShardLock) IOContext(org.apache.lucene.store.IOContext) Path(java.nio.file.Path) TimeValue(org.opensearch.common.unit.TimeValue) Matchers.notNullValue(org.hamcrest.Matchers.notNullValue) BytesRef(org.apache.lucene.util.BytesRef) Index(org.opensearch.index.Index) DirectoryReader(org.apache.lucene.index.DirectoryReader) OpenSearchTestCase(org.opensearch.test.OpenSearchTestCase) ExceptionsHelper(org.opensearch.ExceptionsHelper) Settings(org.opensearch.common.settings.Settings) ReplicationTracker(org.opensearch.index.seqno.ReplicationTracker) OutputStreamStreamOutput(org.opensearch.common.io.stream.OutputStreamStreamOutput) SegmentInfos(org.apache.lucene.index.SegmentInfos) FileNotFoundException(java.io.FileNotFoundException) MockAnalyzer(org.apache.lucene.analysis.MockAnalyzer) Engine(org.opensearch.index.engine.Engine) Matchers.instanceOf(org.hamcrest.Matchers.instanceOf) IndexWriter(org.apache.lucene.index.IndexWriter) List(java.util.List) SortedDocValuesField(org.apache.lucene.document.SortedDocValuesField) Matchers.equalTo(org.hamcrest.Matchers.equalTo) IndexSettings(org.opensearch.index.IndexSettings) Matchers.greaterThan(org.hamcrest.Matchers.greaterThan) Matchers.is(org.hamcrest.Matchers.is) Matchers.anyOf(org.hamcrest.Matchers.anyOf) IndexWriterConfig(org.apache.lucene.index.IndexWriterConfig) ShardLock(org.opensearch.env.ShardLock) Matchers.containsString(org.hamcrest.Matchers.containsString) Matchers.endsWith(org.hamcrest.Matchers.endsWith) IndexSettingsModule(org.opensearch.test.IndexSettingsModule) ByteArrayOutputStream(java.io.ByteArrayOutputStream) IndexMetadata(org.opensearch.cluster.metadata.IndexMetadata) StringField(org.apache.lucene.document.StringField) TestUtil(org.apache.lucene.util.TestUtil) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) BaseDirectoryWrapper(org.apache.lucene.store.BaseDirectoryWrapper) ByteBuffersDirectory(org.apache.lucene.store.ByteBuffersDirectory) NoDeletionPolicy(org.apache.lucene.index.NoDeletionPolicy) UUIDs(org.opensearch.common.UUIDs) IndexOutput(org.apache.lucene.store.IndexOutput) RetentionLease(org.opensearch.index.seqno.RetentionLease) Matchers.empty(org.hamcrest.Matchers.empty) IndexInput(org.apache.lucene.store.IndexInput) Iterator(java.util.Iterator) SnapshotDeletionPolicy(org.apache.lucene.index.SnapshotDeletionPolicy) VersionUtils.randomVersion(org.opensearch.test.VersionUtils.randomVersion) IndexFileNames(org.apache.lucene.index.IndexFileNames) 
Matchers(org.hamcrest.Matchers) IOException(java.io.IOException) IndexFormatTooOldException(org.apache.lucene.index.IndexFormatTooOldException) Version(org.apache.lucene.util.Version) IOUtils(org.opensearch.core.internal.io.IOUtils) TransportNodesListShardStoreMetadata(org.opensearch.indices.store.TransportNodesListShardStoreMetadata) KeepOnlyLastCommitDeletionPolicy(org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy) Field(org.apache.lucene.document.Field) FilterDirectory(org.apache.lucene.store.FilterDirectory) NIOFSDirectory(org.apache.lucene.store.NIOFSDirectory) Collections.unmodifiableMap(java.util.Collections.unmodifiableMap) TextField(org.apache.lucene.document.TextField)
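The (id, l, d) -> new DummyShardLock(id) lambda above supplies a shard lock without going through the node's real lock bookkeeping. DummyShardLock itself is not shown on this page; the sketch below is a plausible reconstruction based on how it is used here (assuming ShardLock exposes a protected closeInternal() hook), not the actual class from the OpenSearch test framework.

import org.opensearch.env.ShardLock;
import org.opensearch.index.shard.ShardId;

// A no-op shard lock for tests: it holds the ShardId but has nothing to release.
public class DummyShardLock extends ShardLock {

    public DummyShardLock(ShardId shardId) {
        super(shardId);
    }

    @Override
    protected void closeInternal() {
        // nothing to do: no real lock was acquired
    }
}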

Example 17 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in the OpenSearch project (opensearch-project).

From the class StoreTests, method testHistoryUUIDCanBeForced.

public void testHistoryUUIDCanBeForced() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    try (Store store = new Store(shardId, INDEX_SETTINGS, StoreTests.newDirectory(random()), new DummyShardLock(shardId))) {
        store.createEmpty(Version.LATEST);
        SegmentInfos segmentInfos = Lucene.readSegmentInfos(store.directory());
        assertThat(segmentInfos.getUserData(), hasKey(Engine.HISTORY_UUID_KEY));
        final String oldHistoryUUID = segmentInfos.getUserData().get(Engine.HISTORY_UUID_KEY);
        store.bootstrapNewHistory();
        segmentInfos = Lucene.readSegmentInfos(store.directory());
        assertThat(segmentInfos.getUserData(), hasKey(Engine.HISTORY_UUID_KEY));
        assertThat(segmentInfos.getUserData().get(Engine.HISTORY_UUID_KEY), not(equalTo(oldHistoryUUID)));
    }
}
Also used : ShardId(org.opensearch.index.shard.ShardId) SegmentInfos(org.apache.lucene.index.SegmentInfos) DummyShardLock(org.opensearch.test.DummyShardLock) Matchers.containsString(org.hamcrest.Matchers.containsString)
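As a small companion to the snippet above, the latest commit's user data can also be read with plain Lucene instead of the Lucene.readSegmentInfos helper. This sketch assumes the same Store/DummyShardLock setup as the test; SegmentInfos.readLatestCommit is standard Lucene API.

// read the history UUID straight from the latest commit's user data
SegmentInfos latest = SegmentInfos.readLatestCommit(store.directory());
String historyUUID = latest.getUserData().get(Engine.HISTORY_UUID_KEY);
assertNotNull(historyUUID);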

Example 18 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in the OpenSearch project (opensearch-project).

From the class StoreTests, method testCorruptionMarkerVersionCheck.

public void testCorruptionMarkerVersionCheck() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    // use an in-memory directory so an on-access virus scanner cannot interfere with the test
    final Directory dir = new ByteBuffersDirectory();
    try (Store store = new Store(shardId, INDEX_SETTINGS, dir, new DummyShardLock(shardId))) {
        final String corruptionMarkerName = Store.CORRUPTED_MARKER_NAME_PREFIX + UUIDs.randomBase64UUID();
        try (IndexOutput output = dir.createOutput(corruptionMarkerName, IOContext.DEFAULT)) {
            // we only need the header to trigger the exception
            CodecUtil.writeHeader(output, Store.CODEC, Store.CORRUPTED_MARKER_CODEC_VERSION + randomFrom(1, 2, -1, -2, -3));
        }
        final IOException ioException = expectThrows(IOException.class, store::failIfCorrupted);
        assertThat(ioException, anyOf(instanceOf(IndexFormatTooOldException.class), instanceOf(IndexFormatTooNewException.class)));
        assertThat(ioException.getMessage(), containsString(corruptionMarkerName));
    }
}
Also used : ShardId(org.opensearch.index.shard.ShardId) ByteBuffersDirectory(org.apache.lucene.store.ByteBuffersDirectory) IndexOutput(org.apache.lucene.store.IndexOutput) DummyShardLock(org.opensearch.test.DummyShardLock) Matchers.containsString(org.hamcrest.Matchers.containsString) IOException(java.io.IOException) Directory(org.apache.lucene.store.Directory) FilterDirectory(org.apache.lucene.store.FilterDirectory) NIOFSDirectory(org.apache.lucene.store.NIOFSDirectory)
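For contrast with the deliberately out-of-range versions above, the sketch below shows the normal header round trip for such a marker file: write it with the expected codec version, then verify it with CodecUtil.checkHeader, which performs exactly the range check that turns a bad version into IndexFormatTooOldException or IndexFormatTooNewException. It reuses the constants from the test; the file name is illustrative.

String markerName = Store.CORRUPTED_MARKER_NAME_PREFIX + UUIDs.randomBase64UUID();
try (IndexOutput output = dir.createOutput(markerName, IOContext.DEFAULT)) {
    // header written with the version the reader expects
    CodecUtil.writeHeader(output, Store.CODEC, Store.CORRUPTED_MARKER_CODEC_VERSION);
}
try (ChecksumIndexInput input = dir.openChecksumInput(markerName, IOContext.READONCE)) {
    // checkHeader(codec, minVersion, maxVersion) throws if the written version is outside the range
    CodecUtil.checkHeader(input, Store.CODEC, Store.CORRUPTED_MARKER_CODEC_VERSION, Store.CORRUPTED_MARKER_CODEC_VERSION);
}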

Example 19 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in the OpenSearch project (opensearch-project).

From the class StoreTests, method testEnsureIndexHasHistoryUUID.

public void testEnsureIndexHasHistoryUUID() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    try (Store store = new Store(shardId, INDEX_SETTINGS, StoreTests.newDirectory(random()), new DummyShardLock(shardId))) {
        store.createEmpty(Version.LATEST);
        // remove the history uuid
        IndexWriterConfig iwc = new IndexWriterConfig(null)
            .setCommitOnClose(false)
            .setMergePolicy(NoMergePolicy.INSTANCE)
            .setOpenMode(IndexWriterConfig.OpenMode.APPEND);
        try (IndexWriter writer = new IndexWriter(store.directory(), iwc)) {
            Map<String, String> newCommitData = new HashMap<>();
            for (Map.Entry<String, String> entry : writer.getLiveCommitData()) {
                if (entry.getKey().equals(Engine.HISTORY_UUID_KEY) == false) {
                    newCommitData.put(entry.getKey(), entry.getValue());
                }
            }
            writer.setLiveCommitData(newCommitData.entrySet());
            writer.commit();
        }
        store.ensureIndexHasHistoryUUID();
        SegmentInfos segmentInfos = Lucene.readSegmentInfos(store.directory());
        assertThat(segmentInfos.getUserData(), hasKey(Engine.HISTORY_UUID_KEY));
    }
}
Also used : ShardId(org.opensearch.index.shard.ShardId) SegmentInfos(org.apache.lucene.index.SegmentInfos) IndexWriter(org.apache.lucene.index.IndexWriter) HashMap(java.util.HashMap) DummyShardLock(org.opensearch.test.DummyShardLock) Matchers.containsString(org.hamcrest.Matchers.containsString) Map(java.util.Map) Collections.unmodifiableMap(java.util.Collections.unmodifiableMap) IndexWriterConfig(org.apache.lucene.index.IndexWriterConfig)
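The test above shows the general pattern for editing commit user data: open an IndexWriter in APPEND mode with commit-on-close disabled, copy the live commit data, and commit the modified map. The sketch below runs the same pattern in the other direction, adding a key instead of dropping one; it only illustrates the mechanism and is not the actual ensureIndexHasHistoryUUID implementation.

IndexWriterConfig iwc = new IndexWriterConfig(null)
    .setCommitOnClose(false)
    .setMergePolicy(NoMergePolicy.INSTANCE)
    .setOpenMode(IndexWriterConfig.OpenMode.APPEND);
try (IndexWriter writer = new IndexWriter(store.directory(), iwc)) {
    Map<String, String> userData = new HashMap<>();
    // copy the existing commit user data
    for (Map.Entry<String, String> entry : writer.getLiveCommitData()) {
        userData.put(entry.getKey(), entry.getValue());
    }
    // add (or overwrite) a key, e.g. a fresh history UUID
    userData.put(Engine.HISTORY_UUID_KEY, UUIDs.randomBase64UUID());
    writer.setLiveCommitData(userData.entrySet());
    writer.commit();
}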

Example 20 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in the OpenSearch project (opensearch-project).

From the class FsRepositoryTests, method testSnapshotAndRestore.

public void testSnapshotAndRestore() throws IOException, InterruptedException {
    ThreadPool threadPool = new TestThreadPool(getClass().getSimpleName());
    try (Directory directory = newDirectory()) {
        Path repo = createTempDir();
        Settings settings = Settings.builder()
            .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toAbsolutePath())
            .put(Environment.PATH_REPO_SETTING.getKey(), repo.toAbsolutePath())
            .putList(Environment.PATH_DATA_SETTING.getKey(), tmpPaths())
            .put("location", repo)
            .put("compress", randomBoolean())
            .put("chunk_size", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)
            .build();
        int numDocs = indexDocs(directory);
        RepositoryMetadata metadata = new RepositoryMetadata("test", "fs", settings);
        FsRepository repository = new FsRepository(
            metadata,
            new Environment(settings, null),
            NamedXContentRegistry.EMPTY,
            BlobStoreTestUtil.mockClusterService(),
            new RecoverySettings(settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS))
        );
        repository.start();
        final Settings indexSettings = Settings.builder().put(IndexMetadata.SETTING_INDEX_UUID, "myindexUUID").build();
        IndexSettings idxSettings = IndexSettingsModule.newIndexSettings("myindex", indexSettings);
        ShardId shardId = new ShardId(idxSettings.getIndex(), 1);
        Store store = new Store(shardId, idxSettings, directory, new DummyShardLock(shardId));
        SnapshotId snapshotId = new SnapshotId("test", "test");
        IndexId indexId = new IndexId(idxSettings.getIndex().getName(), idxSettings.getUUID());
        IndexCommit indexCommit = Lucene.getIndexCommit(Lucene.readSegmentInfos(store.directory()), store.directory());
        final PlainActionFuture<String> future1 = PlainActionFuture.newFuture();
        // first snapshot: every file is new, so the incremental file count equals the total
        runGeneric(threadPool, () -> {
            IndexShardSnapshotStatus snapshotStatus = IndexShardSnapshotStatus.newInitializing(null);
            repository.snapshotShard(store, null, snapshotId, indexId, indexCommit, null, snapshotStatus, Version.CURRENT, Collections.emptyMap(), future1);
            future1.actionGet();
            IndexShardSnapshotStatus.Copy copy = snapshotStatus.asCopy();
            assertEquals(copy.getTotalFileCount(), copy.getIncrementalFileCount());
        });
        final String shardGeneration = future1.actionGet();
        // wipe the index on disk so the restore has to recover every file
        Lucene.cleanLuceneIndex(directory);
        expectThrows(org.apache.lucene.index.IndexNotFoundException.class, () -> Lucene.readSegmentInfos(directory));
        DiscoveryNode localNode = new DiscoveryNode("foo", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT);
        ShardRouting routing = ShardRouting.newUnassigned(
            shardId,
            true,
            new RecoverySource.SnapshotRecoverySource("test", new Snapshot("foo", snapshotId), Version.CURRENT, indexId),
            new UnassignedInfo(UnassignedInfo.Reason.EXISTING_INDEX_RESTORED, "")
        );
        routing = ShardRoutingHelper.initialize(routing, localNode.getId(), 0);
        RecoveryState state = new RecoveryState(routing, localNode, null);
        final PlainActionFuture<Void> futureA = PlainActionFuture.newFuture();
        runGeneric(threadPool, () -> repository.restoreShard(store, snapshotId, indexId, shardId, state, futureA));
        futureA.actionGet();
        assertTrue(state.getIndex().recoveredBytes() > 0);
        assertEquals(0, state.getIndex().reusedFileCount());
        assertEquals(indexCommit.getFileNames().size(), state.getIndex().recoveredFileCount());
        assertEquals(numDocs, Lucene.readSegmentInfos(directory).totalMaxDoc());
        // delete a document so the next snapshot only adds a .liv file and a new segments_N
        deleteRandomDoc(store.directory());
        SnapshotId incSnapshotId = new SnapshotId("test1", "test1");
        IndexCommit incIndexCommit = Lucene.getIndexCommit(Lucene.readSegmentInfos(store.directory()), store.directory());
        Collection<String> commitFileNames = incIndexCommit.getFileNames();
        final PlainActionFuture<String> future2 = PlainActionFuture.newFuture();
        runGeneric(threadPool, () -> {
            IndexShardSnapshotStatus snapshotStatus = IndexShardSnapshotStatus.newInitializing(shardGeneration);
            repository.snapshotShard(store, null, incSnapshotId, indexId, incIndexCommit, null, snapshotStatus, Version.CURRENT, Collections.emptyMap(), future2);
            future2.actionGet();
            IndexShardSnapshotStatus.Copy copy = snapshotStatus.asCopy();
            assertEquals(2, copy.getIncrementalFileCount());
            assertEquals(commitFileNames.size(), copy.getTotalFileCount());
        });
        // roll back to the first snap and then incrementally restore
        RecoveryState firstState = new RecoveryState(routing, localNode, null);
        final PlainActionFuture<Void> futureB = PlainActionFuture.newFuture();
        runGeneric(threadPool, () -> repository.restoreShard(store, snapshotId, indexId, shardId, firstState, futureB));
        futureB.actionGet();
        assertEquals("should reuse everything except of .liv and .si", commitFileNames.size() - 2, firstState.getIndex().reusedFileCount());
        RecoveryState secondState = new RecoveryState(routing, localNode, null);
        final PlainActionFuture<Void> futureC = PlainActionFuture.newFuture();
        runGeneric(threadPool, () -> repository.restoreShard(store, incSnapshotId, indexId, shardId, secondState, futureC));
        futureC.actionGet();
        assertEquals(secondState.getIndex().reusedFileCount(), commitFileNames.size() - 2);
        assertEquals(secondState.getIndex().recoveredFileCount(), 2);
        List<RecoveryState.FileDetail> recoveredFiles = secondState.getIndex().fileDetails().stream().filter(f -> f.reused() == false).collect(Collectors.toList());
        Collections.sort(recoveredFiles, Comparator.comparing(RecoveryState.FileDetail::name));
        assertTrue(recoveredFiles.get(0).name(), recoveredFiles.get(0).name().endsWith(".liv"));
        assertTrue(recoveredFiles.get(1).name(), recoveredFiles.get(1).name().endsWith("segments_" + incIndexCommit.getGeneration()));
    } finally {
        terminate(threadPool);
    }
}
Also used : IndexShardSnapshotStatus(org.opensearch.index.snapshots.IndexShardSnapshotStatus) NoMergePolicy(org.apache.lucene.index.NoMergePolicy) Term(org.apache.lucene.index.Term) TestThreadPool(org.opensearch.threadpool.TestThreadPool) ByteSizeUnit(org.opensearch.common.unit.ByteSizeUnit) Version(org.opensearch.Version) CodecReader(org.apache.lucene.index.CodecReader) Document(org.apache.lucene.document.Document) DiscoveryNode(org.opensearch.cluster.node.DiscoveryNode) IndexId(org.opensearch.repositories.IndexId) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) RecoveryState(org.opensearch.indices.recovery.RecoveryState) Directory(org.apache.lucene.store.Directory) Lucene(org.opensearch.common.lucene.Lucene) DummyShardLock(org.opensearch.test.DummyShardLock) UnassignedInfo(org.opensearch.cluster.routing.UnassignedInfo) RecoverySettings(org.opensearch.indices.recovery.RecoverySettings) Path(java.nio.file.Path) BytesRef(org.apache.lucene.util.BytesRef) SnapshotId(org.opensearch.snapshots.SnapshotId) OpenSearchTestCase(org.opensearch.test.OpenSearchTestCase) Collection(java.util.Collection) Settings(org.opensearch.common.settings.Settings) Store(org.opensearch.index.store.Store) Collectors(java.util.stream.Collectors) MockAnalyzer(org.apache.lucene.analysis.MockAnalyzer) FilterMergePolicy(org.apache.lucene.index.FilterMergePolicy) CountDownLatch(java.util.concurrent.CountDownLatch) IndexWriter(org.apache.lucene.index.IndexWriter) List(java.util.List) SortedDocValuesField(org.apache.lucene.document.SortedDocValuesField) IndexSettings(org.opensearch.index.IndexSettings) ShardRoutingHelper(org.opensearch.cluster.routing.ShardRoutingHelper) IndexSettingsModule(org.opensearch.test.IndexSettingsModule) IndexCommit(org.apache.lucene.index.IndexCommit) IndexMetadata(org.opensearch.cluster.metadata.IndexMetadata) StringField(org.apache.lucene.document.StringField) ThreadPool(org.opensearch.threadpool.ThreadPool) TestUtil(org.apache.lucene.util.TestUtil) RecoverySource(org.opensearch.cluster.routing.RecoverySource) IndexShardSnapshotStatus(org.opensearch.index.snapshots.IndexShardSnapshotStatus) ClusterSettings(org.opensearch.common.settings.ClusterSettings) Environment(org.opensearch.env.Environment) Collections.emptyMap(java.util.Collections.emptyMap) BlobStoreTestUtil(org.opensearch.repositories.blobstore.BlobStoreTestUtil) Collections.emptySet(java.util.Collections.emptySet) RepositoryMetadata(org.opensearch.cluster.metadata.RepositoryMetadata) IOException(java.io.IOException) ShardRouting(org.opensearch.cluster.routing.ShardRouting) ShardId(org.opensearch.index.shard.ShardId) Field(org.apache.lucene.document.Field) Snapshot(org.opensearch.snapshots.Snapshot) NamedXContentRegistry(org.opensearch.common.xcontent.NamedXContentRegistry) TextField(org.apache.lucene.document.TextField) Comparator(java.util.Comparator) Collections(java.util.Collections) IOSupplier(org.apache.lucene.util.IOSupplier) DiscoveryNode(org.opensearch.cluster.node.DiscoveryNode) ClusterSettings(org.opensearch.common.settings.ClusterSettings) UnassignedInfo(org.opensearch.cluster.routing.UnassignedInfo) IndexSettings(org.opensearch.index.IndexSettings) TestThreadPool(org.opensearch.threadpool.TestThreadPool) ThreadPool(org.opensearch.threadpool.ThreadPool) Store(org.opensearch.index.store.Store) TestThreadPool(org.opensearch.threadpool.TestThreadPool) RecoverySource(org.opensearch.cluster.routing.RecoverySource) ShardId(org.opensearch.index.shard.ShardId) DummyShardLock(org.opensearch.test.DummyShardLock) 
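runGeneric(threadPool, ...) is a helper from the test class that is not included in the snippet above. A hypothetical minimal version is sketched below, assuming all it needs to do is run the task on the generic thread pool and block the calling test thread until the task completes; the real helper may differ.

static void runGeneric(ThreadPool threadPool, Runnable task) throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);
    threadPool.generic().execute(() -> {
        try {
            task.run();
        } finally {
            latch.countDown();
        }
    });
    latch.await();
}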

Aggregations

DummyShardLock (org.opensearch.test.DummyShardLock): 20
ShardId (org.opensearch.index.shard.ShardId): 16
Matchers.containsString (org.hamcrest.Matchers.containsString): 10
IndexWriter (org.apache.lucene.index.IndexWriter): 8
BytesRef (org.apache.lucene.util.BytesRef): 8
IndexSettings (org.opensearch.index.IndexSettings): 8
Document (org.apache.lucene.document.Document): 7
TextField (org.apache.lucene.document.TextField): 7
IndexWriterConfig (org.apache.lucene.index.IndexWriterConfig): 7
Directory (org.apache.lucene.store.Directory): 7
IOException (java.io.IOException): 6
MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer): 6
SortedDocValuesField (org.apache.lucene.document.SortedDocValuesField): 6
Term (org.apache.lucene.index.Term): 6
Path (java.nio.file.Path): 5
FilterDirectory (org.apache.lucene.store.FilterDirectory): 5
Settings (org.opensearch.common.settings.Settings): 5
Store (org.opensearch.index.store.Store): 5
ArrayList (java.util.ArrayList): 4
StringField (org.apache.lucene.document.StringField): 4