Examples of org.opensearch.action.support.PlainActionFuture in project OpenSearch (opensearch-project)

Example 1 with PlainActionFuture

Use of org.opensearch.action.support.PlainActionFuture in project OpenSearch by opensearch-project.

From class PercolatorFieldMapperTests, method testQueryWithRewrite.

public void testQueryWithRewrite() throws Exception {
    addQueryFieldMappings();
    client().prepareIndex("remote").setId("1").setSource("field", "value").get();
    QueryBuilder queryBuilder = termsLookupQuery("field", new TermsLookup("remote", "1", "field"));
    ParsedDocument doc = mapperService.documentMapper("doc").parse(new SourceToParse("test", "doc", "1", BytesReference.bytes(XContentFactory.jsonBuilder().startObject().field(fieldName, queryBuilder).endObject()), XContentType.JSON));
    BytesRef qbSource = doc.rootDoc().getFields(fieldType.queryBuilderField.name())[0].binaryValue();
    QueryShardContext shardContext = indexService.newQueryShardContext(randomInt(20), null, () -> {
        throw new UnsupportedOperationException();
    }, null);
    PlainActionFuture<QueryBuilder> future = new PlainActionFuture<>();
    Rewriteable.rewriteAndFetch(queryBuilder, shardContext, future);
    assertQueryBuilder(qbSource, future.get());
}
Also used: ParsedDocument(org.opensearch.index.mapper.ParsedDocument) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) SourceToParse(org.opensearch.index.mapper.SourceToParse) QueryShardContext(org.opensearch.index.query.QueryShardContext) TermsLookup(org.opensearch.indices.TermsLookup) BoostingQueryBuilder(org.opensearch.index.query.BoostingQueryBuilder) BoolQueryBuilder(org.opensearch.index.query.BoolQueryBuilder) HasChildQueryBuilder(org.opensearch.join.query.HasChildQueryBuilder) ConstantScoreQueryBuilder(org.opensearch.index.query.ConstantScoreQueryBuilder) FunctionScoreQueryBuilder(org.opensearch.index.query.functionscore.FunctionScoreQueryBuilder) QueryBuilder(org.opensearch.index.query.QueryBuilder) ScriptQueryBuilder(org.opensearch.index.query.ScriptQueryBuilder) DisMaxQueryBuilder(org.opensearch.index.query.DisMaxQueryBuilder) RangeQueryBuilder(org.opensearch.index.query.RangeQueryBuilder) HasParentQueryBuilder(org.opensearch.join.query.HasParentQueryBuilder) MatchAllQueryBuilder(org.opensearch.index.query.MatchAllQueryBuilder) BytesRef(org.apache.lucene.util.BytesRef)
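
The pattern worth noting here is that PlainActionFuture implements ActionListener, so the same object can be handed to an asynchronous call such as Rewriteable.rewriteAndFetch and then awaited synchronously with get() or actionGet(). The following is a minimal, self-contained sketch of that pattern, not taken from the OpenSearch sources; fetchAsync is a hypothetical helper, and the only assumption is that the OpenSearch core artifact is on the classpath.

import org.opensearch.action.ActionListener;
import org.opensearch.action.support.PlainActionFuture;

public class PlainActionFutureSketch {

    // Hypothetical async API: delivers its result through an ActionListener on another thread.
    static void fetchAsync(ActionListener<String> listener) {
        new Thread(() -> listener.onResponse("rewritten-query")).start();
    }

    public static void main(String[] args) {
        // The future doubles as the listener, so it can be passed straight into the async call ...
        PlainActionFuture<String> future = new PlainActionFuture<>();
        fetchAsync(future);
        // ... and then awaited synchronously; actionGet() rethrows any onFailure exception unchecked.
        String result = future.actionGet();
        System.out.println(result);
    }
}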

Example 2 with PlainActionFuture

Use of org.opensearch.action.support.PlainActionFuture in project OpenSearch by opensearch-project.

From class AzureStorageCleanupThirdPartyTests, method ensureSasTokenPermissions.

private void ensureSasTokenPermissions() {
    final BlobStoreRepository repository = getRepository();
    final PlainActionFuture<Void> future = PlainActionFuture.newFuture();
    repository.threadPool().generic().execute(ActionRunnable.wrap(future, l -> {
        final AzureBlobStore blobStore = (AzureBlobStore) repository.blobStore();
        final String account = "default";
        final Tuple<BlobServiceClient, Supplier<Context>> client = blobStore.getService().client(account);
        final BlobContainerClient blobContainer = client.v1().getBlobContainerClient(blobStore.toString());
        try {
            SocketAccess.doPrivilegedException(() -> blobContainer.existsWithResponse(null, client.v2().get()));
            future.onFailure(new RuntimeException("The SAS token used in this test allowed for checking container existence. This test only supports tokens " + "that grant only the documented permission requirements for the Azure repository plugin."));
        } catch (BlobStorageException e) {
            if (e.getStatusCode() == HttpURLConnection.HTTP_FORBIDDEN) {
                future.onResponse(null);
            } else {
                future.onFailure(e);
            }
        }
    }));
    future.actionGet();
}
Also used: HttpURLConnection(java.net.HttpURLConnection) Matchers.blankOrNullString(org.hamcrest.Matchers.blankOrNullString) AfterClass(org.junit.AfterClass) BlobStoreRepository(org.opensearch.repositories.blobstore.BlobStoreRepository) Context(com.azure.core.util.Context) BlobContainerClient(com.azure.storage.blob.BlobContainerClient) BlobStorageException(com.azure.storage.blob.models.BlobStorageException) MockSecureSettings(org.opensearch.common.settings.MockSecureSettings) ActionRunnable(org.opensearch.action.ActionRunnable) Collection(java.util.Collection) AbstractThirdPartyRepositoryTestCase(org.opensearch.repositories.AbstractThirdPartyRepositoryTestCase) Matchers.not(org.hamcrest.Matchers.not) Settings(org.opensearch.common.settings.Settings) Supplier(java.util.function.Supplier) Plugin(org.opensearch.plugins.Plugin) Strings(org.opensearch.common.Strings) Tuple(org.opensearch.common.collect.Tuple) AcknowledgedResponse(org.opensearch.action.support.master.AcknowledgedResponse) SecureSettings(org.opensearch.common.settings.SecureSettings) BlobServiceClient(com.azure.storage.blob.BlobServiceClient) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) Matchers.equalTo(org.hamcrest.Matchers.equalTo) Schedulers(reactor.core.scheduler.Schedulers)
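
Here the future is completed explicitly from a background task: onResponse(null) marks the expected outcome (a 403 from the SAS-restricted token) and onFailure propagates everything else to the thread blocked in actionGet(). A minimal sketch of that completion pattern follows, assuming only the OpenSearch core classes on the classpath; simulatePermissionCheck is a hypothetical stand-in for the Azure existence probe.

import org.opensearch.action.support.PlainActionFuture;

public class CompleteFromBackgroundSketch {

    // Hypothetical stand-in for the Azure container existence probe.
    static boolean simulatePermissionCheck() {
        return true; // pretend the call was rejected with HTTP 403
    }

    public static void main(String[] args) {
        PlainActionFuture<Void> future = PlainActionFuture.newFuture();
        new Thread(() -> {
            try {
                if (simulatePermissionCheck()) {
                    future.onResponse(null);   // expected outcome: complete the future normally
                } else {
                    future.onFailure(new RuntimeException("token grants more than the documented permissions"));
                }
            } catch (Exception e) {
                future.onFailure(e);           // unexpected errors reach the waiting thread as well
            }
        }).start();
        future.actionGet();                    // rethrows if onFailure was called
    }
}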

Example 3 with PlainActionFuture

Use of org.opensearch.action.support.PlainActionFuture in project OpenSearch by opensearch-project.

From class AzureBlobContainer, method deleteBlobsIgnoringIfNotExists.

@Override
public void deleteBlobsIgnoringIfNotExists(List<String> blobNames) throws IOException {
    final PlainActionFuture<Void> result = PlainActionFuture.newFuture();
    if (blobNames.isEmpty()) {
        result.onResponse(null);
    } else {
        final GroupedActionListener<Void> listener = new GroupedActionListener<>(ActionListener.map(result, v -> null), blobNames.size());
        final ExecutorService executor = threadPool.executor(AzureRepositoryPlugin.REPOSITORY_THREAD_POOL_NAME);
        // TODO: Upgrade to newer non-blocking Azure SDK 11 and execute delete requests in parallel that way.
        for (String blobName : blobNames) {
            executor.execute(ActionRunnable.run(listener, () -> {
                logger.trace("deleteBlob({})", blobName);
                try {
                    blobStore.deleteBlob(buildKey(blobName));
                } catch (BlobStorageException e) {
                    if (e.getStatusCode() != HttpURLConnection.HTTP_NOT_FOUND) {
                        throw new IOException(e);
                    }
                } catch (URISyntaxException e) {
                    throw new IOException(e);
                }
            }));
        }
    }
    try {
        result.actionGet();
    } catch (Exception e) {
        throw new IOException("Exception during bulk delete", e);
    }
}
Also used: HttpURLConnection(java.net.HttpURLConnection) NoSuchFileException(java.nio.file.NoSuchFileException) BlobStorageException(com.azure.storage.blob.models.BlobStorageException) ActionRunnable(org.opensearch.action.ActionRunnable) ThreadPool(org.opensearch.threadpool.ThreadPool) URISyntaxException(java.net.URISyntaxException) BlobContainer(org.opensearch.common.blobstore.BlobContainer) GroupedActionListener(org.opensearch.action.support.GroupedActionListener) Constants(com.azure.storage.common.implementation.Constants) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) Map(java.util.Map) ActionListener(org.opensearch.action.ActionListener) BlobMetadata(org.opensearch.common.blobstore.BlobMetadata) ExecutorService(java.util.concurrent.ExecutorService) IOException(java.io.IOException) FileInputStream(java.io.FileInputStream) Nullable(org.opensearch.common.Nullable) List(java.util.List) Logger(org.apache.logging.log4j.Logger) DeleteResult(org.opensearch.common.blobstore.DeleteResult) BlobPath(org.opensearch.common.blobstore.BlobPath) AbstractBlobContainer(org.opensearch.common.blobstore.support.AbstractBlobContainer) LogManager(org.apache.logging.log4j.LogManager) BlobInputStream(com.azure.storage.blob.specialized.BlobInputStream) InputStream(java.io.InputStream)
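
The interesting wrinkle in this example is fanning out N deletes and funnelling all of them back into a single future: GroupedActionListener counts completions and only then fires the wrapped listener, while ActionListener.map collapses the Collection<Void> result back to the Void the future expects. A minimal sketch of that fan-out/fan-in pattern is below; deleteQuietly is a hypothetical stand-in for blobStore.deleteBlob, and the GroupedActionListener and ActionListener.map signatures are assumed to match the ones used in the snippet above.

import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.opensearch.action.ActionListener;
import org.opensearch.action.support.GroupedActionListener;
import org.opensearch.action.support.PlainActionFuture;

public class GroupedDeleteSketch {

    // Hypothetical per-item work, standing in for blobStore.deleteBlob(...).
    static void deleteQuietly(String name) {
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        // One future for the whole batch; GroupedActionListener fires it after names.size() completions.
        PlainActionFuture<Void> all = PlainActionFuture.newFuture();
        GroupedActionListener<Void> grouped =
            new GroupedActionListener<>(ActionListener.map(all, (Collection<Void> v) -> null), names.size());
        ExecutorService executor = Executors.newFixedThreadPool(2);
        for (String name : names) {
            executor.execute(() -> {
                try {
                    deleteQuietly(name);
                    grouped.onResponse(null);
                } catch (Exception e) {
                    grouped.onFailure(e);      // a single failure fails the whole batch
                }
            });
        }
        all.actionGet();                       // block until every item has reported in
        executor.shutdown();
    }
}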

Example 4 with PlainActionFuture

Use of org.opensearch.action.support.PlainActionFuture in project OpenSearch by opensearch-project.

From class CompletionStatsCache, method get.

CompletionStats get(String... fieldNamePatterns) {
    final PlainActionFuture<CompletionStats> newFuture = new PlainActionFuture<>();
    // final PlainActionFuture<CompletionStats> oldFuture = completionStatsFutureRef.compareAndExchange(null, newFuture);
    // except JDK8 doesn't have compareAndExchange so we emulate it:
    final PlainActionFuture<CompletionStats> oldFuture;
    synchronized (completionStatsFutureMutex) {
        if (completionStatsFuture == null) {
            completionStatsFuture = newFuture;
            oldFuture = null;
        } else {
            oldFuture = completionStatsFuture;
        }
    }
    if (oldFuture != null) {
        // we lost the race, someone else is already computing stats, so we wait for that to finish
        return filterCompletionStatsByFieldName(fieldNamePatterns, oldFuture.actionGet());
    }
    // we won the race, nobody else is already computing stats, so it's up to us
    ActionListener.completeWith(newFuture, () -> {
        long sizeInBytes = 0;
        final ObjectLongHashMap<String> completionFields = new ObjectLongHashMap<>();
        try (Engine.Searcher currentSearcher = searcherSupplier.get()) {
            for (LeafReaderContext atomicReaderContext : currentSearcher.getIndexReader().leaves()) {
                LeafReader atomicReader = atomicReaderContext.reader();
                for (FieldInfo info : atomicReader.getFieldInfos()) {
                    Terms terms = atomicReader.terms(info.name);
                    if (terms instanceof CompletionTerms) {
                        // TODO: currently we load up the suggester for reporting its size
                        final long fstSize = ((CompletionTerms) terms).suggester().ramBytesUsed();
                        completionFields.addTo(info.name, fstSize);
                        sizeInBytes += fstSize;
                    }
                }
            }
        }
        return new CompletionStats(sizeInBytes, new FieldMemoryStats(completionFields));
    });
    boolean success = false;
    final CompletionStats completionStats;
    try {
        completionStats = newFuture.actionGet();
        success = true;
    } finally {
        if (success == false) {
            // completionStatsFutureRef.compareAndSet(newFuture, null); except we're not using AtomicReference in JDK8
            synchronized (completionStatsFutureMutex) {
                if (completionStatsFuture == newFuture) {
                    completionStatsFuture = null;
                }
            }
        }
    }
    return filterCompletionStatsByFieldName(fieldNamePatterns, completionStats);
}
Also used: ObjectLongHashMap(com.carrotsearch.hppc.ObjectLongHashMap) LeafReader(org.apache.lucene.index.LeafReader) Terms(org.apache.lucene.index.Terms) CompletionTerms(org.apache.lucene.search.suggest.document.CompletionTerms) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) LeafReaderContext(org.apache.lucene.index.LeafReaderContext) FieldMemoryStats(org.opensearch.common.FieldMemoryStats) FieldInfo(org.apache.lucene.index.FieldInfo) CompletionStats(org.opensearch.search.suggest.completion.CompletionStats)
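
The synchronized block plus the newFuture/oldFuture dance is a single-flight cache: whichever caller installs its future first does the work, everyone else waits on the installed future, and ActionListener.completeWith routes both the value and any exception into it. A stripped-down sketch of that idea follows (the failure-time cache reset from the original is omitted for brevity); expensiveComputation is a hypothetical placeholder, and ActionListener.completeWith is assumed to have the signature used above.

import org.opensearch.action.ActionListener;
import org.opensearch.action.support.PlainActionFuture;

public class SingleFlightSketch {

    private final Object mutex = new Object();
    private PlainActionFuture<Long> inFlight; // null while no computation is running

    long getStats() {
        PlainActionFuture<Long> newFuture = new PlainActionFuture<>();
        PlainActionFuture<Long> existing;
        synchronized (mutex) {
            if (inFlight == null) {
                inFlight = newFuture;    // we won the race and will do the work
                existing = null;
            } else {
                existing = inFlight;     // someone else is already computing
            }
        }
        if (existing != null) {
            return existing.actionGet(); // just wait for the winner's result
        }
        // Publish the value, or the exception, through the shared future.
        ActionListener.completeWith(newFuture, this::expensiveComputation);
        return newFuture.actionGet();
    }

    // Hypothetical placeholder for walking the index readers and summing suggester sizes.
    long expensiveComputation() {
        return 42L;
    }
}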

Example 5 with PlainActionFuture

Use of org.opensearch.action.support.PlainActionFuture in project OpenSearch by opensearch-project.

From class IndexRecoveryIT, method testRecoverLocallyUpToGlobalCheckpoint.

public void testRecoverLocallyUpToGlobalCheckpoint() throws Exception {
    internalCluster().ensureAtLeastNumDataNodes(2);
    List<String> nodes = randomSubsetOf(2, StreamSupport.stream(clusterService().state().nodes().getDataNodes().spliterator(), false).map(node -> node.value.getName()).collect(Collectors.toSet()));
    String indexName = "test-index";
    createIndex(indexName, Settings.builder().put("index.number_of_shards", 1).put("index.number_of_replicas", 1).put(IndexService.GLOBAL_CHECKPOINT_SYNC_INTERVAL_SETTING.getKey(), "12h").put("index.routing.allocation.include._name", String.join(",", nodes)).build());
    ensureGreen(indexName);
    int numDocs = randomIntBetween(0, 100);
    indexRandom(randomBoolean(), false, randomBoolean(), IntStream.range(0, numDocs).mapToObj(n -> client().prepareIndex(indexName).setSource("num", n)).collect(toList()));
    // refresh now so that no refresh is pending when we fail the shard below
    client().admin().indices().prepareRefresh(indexName).get();
    String failingNode = randomFrom(nodes);
    PlainActionFuture<StartRecoveryRequest> startRecoveryRequestFuture = new PlainActionFuture<>();
    // Peer recovery fails if the primary does not see the recovering replica in the replication group (when the cluster state
    // update on the primary is delayed). To verify the local recovery stats, we have to manually remember this value in the
    // first try because the local recovery happens once and its stats are reset when the recovery fails.
    SetOnce<Integer> localRecoveredOps = new SetOnce<>();
    for (String node : nodes) {
        MockTransportService transportService = (MockTransportService) internalCluster().getInstance(TransportService.class, node);
        transportService.addSendBehavior((connection, requestId, action, request, options) -> {
            if (action.equals(PeerRecoverySourceService.Actions.START_RECOVERY)) {
                final RecoveryState recoveryState = internalCluster().getInstance(IndicesService.class, failingNode).getShardOrNull(new ShardId(resolveIndex(indexName), 0)).recoveryState();
                assertThat(recoveryState.getTranslog().recoveredOperations(), equalTo(recoveryState.getTranslog().totalLocal()));
                if (startRecoveryRequestFuture.isDone()) {
                    assertThat(recoveryState.getTranslog().totalLocal(), equalTo(0));
                    recoveryState.getTranslog().totalLocal(localRecoveredOps.get());
                    recoveryState.getTranslog().incrementRecoveredOperations(localRecoveredOps.get());
                } else {
                    localRecoveredOps.set(recoveryState.getTranslog().totalLocal());
                    startRecoveryRequestFuture.onResponse((StartRecoveryRequest) request);
                }
            }
            if (action.equals(PeerRecoveryTargetService.Actions.FILE_CHUNK)) {
                RetentionLeases retentionLeases = internalCluster().getInstance(IndicesService.class, node).indexServiceSafe(resolveIndex(indexName)).getShard(0).getRetentionLeases();
                throw new AssertionError("expect an operation-based recovery; retention leases [" + Strings.toString(retentionLeases) + "]");
            }
            connection.sendRequest(requestId, action, request, options);
        });
    }
    IndexShard shard = internalCluster().getInstance(IndicesService.class, failingNode).getShardOrNull(new ShardId(resolveIndex(indexName), 0));
    final long lastSyncedGlobalCheckpoint = shard.getLastSyncedGlobalCheckpoint();
    final long localCheckpointOfSafeCommit;
    try (Engine.IndexCommitRef safeCommitRef = shard.acquireSafeIndexCommit()) {
        localCheckpointOfSafeCommit = SequenceNumbers.loadSeqNoInfoFromLuceneCommit(safeCommitRef.getIndexCommit().getUserData().entrySet()).localCheckpoint;
    }
    final long maxSeqNo = shard.seqNoStats().getMaxSeqNo();
    shard.failShard("test", new IOException("simulated"));
    StartRecoveryRequest startRecoveryRequest = startRecoveryRequestFuture.actionGet();
    logger.info("--> start recovery request: starting seq_no {}, commit {}", startRecoveryRequest.startingSeqNo(), startRecoveryRequest.metadataSnapshot().getCommitUserData());
    SequenceNumbers.CommitInfo commitInfoAfterLocalRecovery = SequenceNumbers.loadSeqNoInfoFromLuceneCommit(startRecoveryRequest.metadataSnapshot().getCommitUserData().entrySet());
    assertThat(commitInfoAfterLocalRecovery.localCheckpoint, equalTo(lastSyncedGlobalCheckpoint));
    assertThat(commitInfoAfterLocalRecovery.maxSeqNo, equalTo(lastSyncedGlobalCheckpoint));
    assertThat(startRecoveryRequest.startingSeqNo(), equalTo(lastSyncedGlobalCheckpoint + 1));
    ensureGreen(indexName);
    assertThat((long) localRecoveredOps.get(), equalTo(lastSyncedGlobalCheckpoint - localCheckpointOfSafeCommit));
    for (RecoveryState recoveryState : client().admin().indices().prepareRecoveries().get().shardRecoveryStates().get(indexName)) {
        if (startRecoveryRequest.targetNode().equals(recoveryState.getTargetNode())) {
            assertThat("expect an operation-based recovery", recoveryState.getIndex().fileDetails(), empty());
            assertThat("total recovered translog operations must include both local and remote recovery", recoveryState.getTranslog().recoveredOperations(), greaterThanOrEqualTo(Math.toIntExact(maxSeqNo - localCheckpointOfSafeCommit)));
        }
    }
    for (String node : nodes) {
        MockTransportService transportService = (MockTransportService) internalCluster().getInstance(TransportService.class, node);
        transportService.clearAllRules();
    }
}
Also used: MockTransportService(org.opensearch.test.transport.MockTransportService) SetOnce(org.apache.lucene.util.SetOnce) IndexShard(org.opensearch.index.shard.IndexShard) IndicesService(org.opensearch.indices.IndicesService) IOException(java.io.IOException) RetentionLeases(org.opensearch.index.seqno.RetentionLeases) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ShardId(org.opensearch.index.shard.ShardId) PlainActionFuture(org.opensearch.action.support.PlainActionFuture) TransportService(org.opensearch.transport.TransportService) SequenceNumbers(org.opensearch.index.seqno.SequenceNumbers) Engine(org.opensearch.index.engine.Engine)
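
Here the future is used purely as a one-shot handoff: the transport interceptor captures the first StartRecoveryRequest it sees with onResponse, isDone() guards against capturing it twice, and the test body blocks in actionGet() until the capture has happened. A tiny sketch of that handoff, with a plain thread standing in for the transport callback, follows under the same classpath assumption as above.

import org.opensearch.action.support.PlainActionFuture;

public class OneShotHandoffSketch {

    public static void main(String[] args) {
        // Will hold the first interesting event observed by a callback elsewhere in the system.
        PlainActionFuture<String> firstRequest = new PlainActionFuture<>();

        // Hypothetical callback thread: record the event only if nothing has been captured yet.
        new Thread(() -> {
            if (firstRequest.isDone() == false) {
                firstRequest.onResponse("start-recovery-request");
            }
        }).start();

        // The main flow blocks here until the callback has fired, then works with the capture.
        String captured = firstRequest.actionGet();
        System.out.println(captured);
    }
}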

Aggregations

PlainActionFuture (org.opensearch.action.support.PlainActionFuture): 138 usages
ShardId (org.opensearch.index.shard.ShardId): 54 usages
Settings (org.opensearch.common.settings.Settings): 41 usages
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 38 usages
ShardRouting (org.opensearch.cluster.routing.ShardRouting): 38 usages
ClusterState (org.opensearch.cluster.ClusterState): 37 usages
ExecutionException (java.util.concurrent.ExecutionException): 35 usages
ActionListener (org.opensearch.action.ActionListener): 34 usages
IOException (java.io.IOException): 33 usages
List (java.util.List): 32 usages
DiscoveryNode (org.opensearch.cluster.node.DiscoveryNode): 32 usages
ArrayList (java.util.ArrayList): 29 usages
IndexShard (org.opensearch.index.shard.IndexShard): 28 usages
HashSet (java.util.HashSet): 26 usages
CountDownLatch (java.util.concurrent.CountDownLatch): 25 usages
ThreadPool (org.opensearch.threadpool.ThreadPool): 25 usages
OpenSearchTestCase (org.opensearch.test.OpenSearchTestCase): 24 usages
Collections (java.util.Collections): 22 usages
Matchers.hasToString (org.hamcrest.Matchers.hasToString): 22 usages
TransportRequest (org.opensearch.transport.TransportRequest): 22 usages