
Example 6 with ManagedExecutor

use of org.eclipse.microprofile.context.ManagedExecutor in project quarkus-quickstarts by quarkusio.

the class TransactionalResource method async2.

@POST
@Produces(MediaType.TEXT_PLAIN)
@Transactional
@Path("async-with-completion-stage")
public CompletionStage<Integer> async2() throws SystemException {
    System.out.printf("submitting async job ...%n");
    ContextManagerProvider cmp = ContextManagerProvider.INSTANCE.get();
    ManagedExecutor me = cmp.getContextManager().newManagedExecutorBuilder().propagated(ThreadContext.TRANSACTION).build();
    Transaction txnToPropagate = TransactionManager.transactionManager().getTransaction();
    return me.supplyAsync(() -> {
        try {
            Transaction txn = TransactionManager.transactionManager().getTransaction();
            if (!txn.equals(txnToPropagate)) {
                // the original transaction, txnToPropagate, should have been propagated to the new thread
                return -1;
            }
            // should return Status.STATUS_ACTIVE
            return getTransactionStatus();
        } catch (SystemException e) {
            return -1;
        }
    });
}
Also used : Transaction(javax.transaction.Transaction) SystemException(javax.transaction.SystemException) ManagedExecutor(org.eclipse.microprofile.context.ManagedExecutor) ContextManagerProvider(org.eclipse.microprofile.context.spi.ContextManagerProvider) Path(javax.ws.rs.Path) POST(javax.ws.rs.POST) Produces(javax.ws.rs.Produces) Transactional(javax.transaction.Transactional)
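
The builder pattern in async2() above can also be applied with a plain ThreadContext when the caller wants to keep control of which executor runs the work. The following is a minimal sketch, not taken from the quickstart: the class and method names are illustrative, and only the MicroProfile Context Propagation API calls shown above are assumed.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.function.Supplier;

import org.eclipse.microprofile.context.ThreadContext;
import org.eclipse.microprofile.context.spi.ContextManagerProvider;

public class ThreadContextSketch {

    public CompletionStage<Integer> supplyWithTransactionContext(Supplier<Integer> work) {
        // Propagate only the transaction context; everything else is cleared
        // (the builder default, made explicit here).
        ThreadContext txContext = ContextManagerProvider.instance()
                .getContextManager()
                .newThreadContextBuilder()
                .propagated(ThreadContext.TRANSACTION)
                .cleared(ThreadContext.ALL_REMAINING)
                .build();

        // contextualSupplier captures the calling thread's transaction context and
        // re-applies it on whatever thread eventually runs the supplier.
        Supplier<Integer> contextual = txContext.contextualSupplier(work);
        return CompletableFuture.supplyAsync(contextual);
    }
}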

Example 7 with ManagedExecutor

use of org.eclipse.microprofile.context.ManagedExecutor in project quarkus-quickstarts by quarkusio.

the class TransactionalResource method async1.

@POST
@Produces(MediaType.TEXT_PLAIN)
@Transactional
@Path("async-with-suspended")
public void async1(@Suspended AsyncResponse ar) throws SystemException {
    // the framework will have started a transaction because of the @Transactional annotation
    Transaction txnToPropagate = TransactionManager.transactionManager().getTransaction();
    ContextManagerProvider cmp = ContextManagerProvider.INSTANCE.get();
    ManagedExecutor me = cmp.getContextManager().newManagedExecutorBuilder().propagated(ThreadContext.TRANSACTION).build();
    // the transaction should be active (because of the @Transactional annotation)
    if (getTransactionStatus() != Status.STATUS_ACTIVE) {
        ar.resume(Response.status(Response.Status.PRECONDITION_FAILED).entity(getTransactionStatus()).build());
    }
    me.submit(() -> {
        try {
            Transaction txn = TransactionManager.transactionManager().getTransaction();
            if (!txn.equals(txnToPropagate)) {
                // the original transaction, txnToPropagate, should have been propagated to the new thread
                ar.resume(Response.status(Response.Status.PRECONDITION_FAILED).entity(getTransactionStatus()).build());
            }
            // should return Status.STATUS_ACTIVE
            ar.resume(Response.ok().entity(getTransactionStatus()).build());
        } catch (SystemException e) {
            ar.resume(e);
        }
    });
}
Also used : Transaction(javax.transaction.Transaction) SystemException(javax.transaction.SystemException) ManagedExecutor(org.eclipse.microprofile.context.ManagedExecutor) ContextManagerProvider(org.eclipse.microprofile.context.spi.ContextManagerProvider) Path(javax.ws.rs.Path) POST(javax.ws.rs.POST) Produces(javax.ws.rs.Produces) Transactional(javax.transaction.Transactional)
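
A small test can drive async1() end to end; because javax.transaction.Status.STATUS_ACTIVE is 0, a successful propagation should come back as the plain-text body "0". This is an assumed sketch, not part of the quickstart: the "/transaction" path prefix, the test class name, and the use of REST Assured under @QuarkusTest are all assumptions.

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

import javax.transaction.Status;

import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class TransactionalResourceTest {

    @Test
    public void transactionIsPropagatedToManagedExecutorThread() {
        given()
                // the "/transaction" prefix is a guess at the resource's class-level @Path
                .when().post("/transaction/async-with-suspended")
                .then()
                .statusCode(200)
                .body(is(String.valueOf(Status.STATUS_ACTIVE)));
    }
}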

Example 8 with ManagedExecutor

use of org.eclipse.microprofile.context.ManagedExecutor in project camunda-bpm-platform by camunda.

the class CamundaEngineRecorder method configureJobExecutor.

protected void configureJobExecutor(ProcessEngineConfigurationImpl configuration, CamundaEngineConfig config) {
    int maxPoolSize = config.jobExecutor.threadPool.maxPoolSize;
    int queueSize = config.jobExecutor.threadPool.queueSize;
    // create a non-bean ManagedExecutor instance. This instance
    // uses its own Executor/thread pool.
    ManagedExecutor managedExecutor = SmallRyeManagedExecutor.builder().maxQueued(queueSize).maxAsync(maxPoolSize).withNewExecutorService().build();
    ManagedJobExecutor quarkusJobExecutor = new ManagedJobExecutor(managedExecutor);
    // apply job executor configuration properties
    PropertyHelper.applyProperties(quarkusJobExecutor, config.jobExecutor.genericConfig, PropertyHelper.KEBAB_CASE);
    configuration.setJobExecutor(quarkusJobExecutor);
}
Also used : SmallRyeManagedExecutor(io.smallrye.context.SmallRyeManagedExecutor) ManagedExecutor(org.eclipse.microprofile.context.ManagedExecutor)
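
The recorder builds its executor with withNewExecutorService() on purpose, so the job executor runs on its own pool instead of the application's shared one. When sharing the container-managed pool is acceptable, the executor can simply be injected as a CDI bean. A minimal sketch, not from camunda-bpm-platform; the class name is illustrative.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.context.ManagedExecutor;

@ApplicationScoped
public class InjectedExecutorSketch {

    // Container-managed executor: sizing and context propagation follow the
    // application's configuration rather than an explicit builder.
    @Inject
    ManagedExecutor managedExecutor;

    public void runAsync(Runnable work) {
        managedExecutor.submit(work);
    }
}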

Example 9 with ManagedExecutor

use of org.eclipse.microprofile.context.ManagedExecutor in project quarkus-quickstarts by quarkusio.

the class TransactionalResource method asyncIssue6471Reproducer.

@POST
@Produces(MediaType.TEXT_PLAIN)
@Transactional
@Path("async-6471-reproducer")
public void asyncIssue6471Reproducer(@Suspended AsyncResponse ar) throws SystemException {
    System.out.printf("submitting async job ...%n");
    Transaction txnToPropagate = TransactionManager.transactionManager().getTransaction();
    // the transaction should be active (because of the @Transactional annotation)
    if (getTransactionStatus() != Status.STATUS_ACTIVE) {
        ar.resume(Response.status(Response.Status.PRECONDITION_FAILED).entity(getTransactionStatus()).build());
    }
    ContextManagerProvider cmp = ContextManagerProvider.INSTANCE.get();
    ManagedExecutor me = cmp.getContextManager().newManagedExecutorBuilder().propagated(ThreadContext.TRANSACTION).build();
    me.submit(() -> {
        System.out.printf("running async job ...%n");
        try {
            Transaction txn = TransactionManager.transactionManager().getTransaction();
            if (!txn.equals(txnToPropagate)) {
                // the original transaction, txnToPropagate, should have been propagated to the new thread
                ar.resume(Response.status(Response.Status.PRECONDITION_FAILED).entity(getTransactionStatus()).build());
            }
            // execute a long running business activity and resume when done
            System.out.printf("resuming long running async job ...%n");
            // the transaction started via the @Transactional annotation should still be active
            // but due to Quarkus issue 6471 there is no interceptor for extending the transaction boundary
            // (see the issue for further details)
            // should return Status.STATUS_ACTIVE
            ar.resume(Response.ok().entity(getTransactionStatus()).build());
        } catch (SystemException e) {
            System.out.printf("resuming async job with exception: %s%n", e.getMessage());
            ar.resume(e);
        }
    });
}
Also used : Transaction(javax.transaction.Transaction) SystemException(javax.transaction.SystemException) ManagedExecutor(org.eclipse.microprofile.context.ManagedExecutor) ContextManagerProvider(org.eclipse.microprofile.context.spi.ContextManagerProvider) Path(javax.ws.rs.Path) POST(javax.ws.rs.POST) Produces(javax.ws.rs.Produces) Transactional(javax.transaction.Transactional)
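
Each of the three resource methods above builds a fresh ManagedExecutor per request and never shuts it down. A minimal sketch, assuming standard CDI lifecycle callbacks, of holding a single transaction-propagating executor for the application and disposing of it cleanly; every name here is illustrative and not part of the quickstart.

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.context.ManagedExecutor;
import org.eclipse.microprofile.context.ThreadContext;
import org.eclipse.microprofile.context.spi.ContextManagerProvider;

@ApplicationScoped
public class TransactionPropagatingExecutorHolder {

    private ManagedExecutor executor;

    @PostConstruct
    void init() {
        executor = ContextManagerProvider.instance()
                .getContextManager()
                .newManagedExecutorBuilder()
                .propagated(ThreadContext.TRANSACTION)
                .build();
    }

    public ManagedExecutor executor() {
        return executor;
    }

    @PreDestroy
    void shutdown() {
        // ManagedExecutor extends ExecutorService, so the usual shutdown applies.
        executor.shutdown();
    }
}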

Example 10 with ManagedExecutor

use of org.eclipse.microprofile.context.ManagedExecutor in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

the class ManagedKafkaSync method syncKafkaClusters.

/**
 * Update the local state based upon the remote ManagedKafkas
 * The strategy here is to take a pass over the list and find any deferred work
 * Then execute that deferred work using the {@link ManagedExecutor} but with
 * a refresh of the state to ensure we're still acting appropriately.
 */
@Timed(value = "sync.poll", extraTags = { "resource", "ManagedKafka" }, description = "The time spent processing polling calls")
@Counted(value = "sync.poll", extraTags = { "resource", "ManagedKafka" }, description = "The number of polling calls")
public void syncKafkaClusters() {
    Map<String, ManagedKafka> remotes = new HashMap<>();
    for (ManagedKafka remoteManagedKafka : controlPlane.getKafkaClusters()) {
        Objects.requireNonNull(remoteManagedKafka.getId());
        Objects.requireNonNull(remoteManagedKafka.getMetadata().getNamespace());
        remotes.put(ControlPlane.managedKafkaKey(remoteManagedKafka), remoteManagedKafka);
        ManagedKafkaSpec remoteSpec = remoteManagedKafka.getSpec();
        Objects.requireNonNull(remoteSpec);
        String localKey = Cache.namespaceKeyFunc(remoteManagedKafka.getMetadata().getNamespace(), remoteManagedKafka.getMetadata().getName());
        ManagedKafka existing = lookup.getLocalManagedKafka(localKey);
        if (existing == null) {
            if (!remoteSpec.isDeleted()) {
                reconcileAsync(ControlPlane.managedKafkaKey(remoteManagedKafka), localKey);
            } else {
                // we've successfully removed locally, but control plane is not aware
                // we need to send another status update to let them know
                ManagedKafkaStatusBuilder statusBuilder = new ManagedKafkaStatusBuilder();
                statusBuilder.withConditions(ConditionUtils.buildCondition(Type.Ready, Status.False).reason(Reason.Deleted));
                // fire and forget the async call - if it fails, we'll retry on the next poll
                controlPlane.updateKafkaClusterStatus(() -> {
                    return Map.of(remoteManagedKafka.getId(), statusBuilder.build());
                });
            }
        } else {
            final String localNamespace = existing.getMetadata().getNamespace();
            final String managedKafkaId = existing.getMetadata().getAnnotations() == null ? null : existing.getMetadata().getAnnotations().get(MANAGEDKAFKA_ID_LABEL);
            Namespace n = kubeClient.namespaces().withName(localNamespace).get();
            if (n != null) {
                String namespaceLabel = Optional.ofNullable(n.getMetadata().getLabels()).map(m -> m.get(MANAGEDKAFKA_ID_NAMESPACE_LABEL)).orElse("");
                if (managedKafkaId != null && !namespaceLabel.equals(managedKafkaId)) {
                    kubeClient.namespaces().withName(localNamespace).edit(namespace -> new NamespaceBuilder(namespace).editMetadata().addToLabels(MANAGEDKAFKA_ID_NAMESPACE_LABEL, managedKafkaId).endMetadata().build());
                }
            }
            if (specChanged(remoteSpec, existing) || !Objects.equals(existing.getPlacementId(), remoteManagedKafka.getPlacementId())) {
                reconcileAsync(ControlPlane.managedKafkaKey(remoteManagedKafka), localKey);
            }
        }
    }
    // process final removals
    for (ManagedKafka local : lookup.getLocalManagedKafkas()) {
        if (remotes.get(ControlPlane.managedKafkaKey(local)) != null || !deleteAllowed(local)) {
            continue;
        }
        reconcileAsync(null, Cache.metaNamespaceKeyFunc(local));
    }
}
Also used : ManagedKafkaResourceClient(org.bf2.common.ManagedKafkaResourceClient) HttpURLConnection(java.net.HttpURLConnection) Status(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition.Status) Timed(io.micrometer.core.annotation.Timed) Logger(org.jboss.logging.Logger) Cache(io.fabric8.kubernetes.client.informers.cache.Cache) HashMap(java.util.HashMap) Inject(javax.inject.Inject) ControlPlane(org.bf2.sync.controlplane.ControlPlane) Map(java.util.Map) ExecutorService(java.util.concurrent.ExecutorService) KubernetesClientException(io.fabric8.kubernetes.client.KubernetesClientException) LocalLookup(org.bf2.sync.informer.LocalLookup) Type(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition.Type) Scheduled(io.quarkus.scheduler.Scheduled) OperandUtils(org.bf2.common.OperandUtils) NDC(org.jboss.logging.NDC) ManagedKafkaStatusBuilder(org.bf2.operator.resources.v1alpha1.ManagedKafkaStatusBuilder) ConditionUtils(org.bf2.common.ConditionUtils) Reason(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition.Reason) Objects(java.util.Objects) Counted(io.micrometer.core.annotation.Counted) Namespace(io.fabric8.kubernetes.api.model.Namespace) NamespaceBuilder(io.fabric8.kubernetes.api.model.NamespaceBuilder) KubernetesClient(io.fabric8.kubernetes.client.KubernetesClient) ManagedExecutor(org.eclipse.microprofile.context.ManagedExecutor) Optional(java.util.Optional) ApplicationScoped(javax.enterprise.context.ApplicationScoped) ManagedKafkaSpec(org.bf2.operator.resources.v1alpha1.ManagedKafkaSpec) ManagedKafka(org.bf2.operator.resources.v1alpha1.ManagedKafka) ConcurrentExecution(io.quarkus.scheduler.Scheduled.ConcurrentExecution)
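
reconcileAsync is called throughout syncKafkaClusters but its body is not shown in this example. A minimal sketch, assuming an injected ManagedExecutor as the javadoc suggests, of how such deferred work could be dispatched with a state refresh on the worker thread; every name other than ManagedExecutor is illustrative.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.context.ManagedExecutor;

@ApplicationScoped
public class ReconcileDispatcherSketch {

    @Inject
    ManagedExecutor executor;

    public void reconcileAsync(String remoteKey, String localKey) {
        executor.execute(() -> {
            // Re-read the local state on the worker thread: the snapshot taken during
            // the polling pass may already be stale by the time this runs.
            reconcile(remoteKey, localKey);
        });
    }

    void reconcile(String remoteKey, String localKey) {
        // placeholder for the actual reconciliation logic
    }
}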

Aggregations

ManagedExecutor (org.eclipse.microprofile.context.ManagedExecutor) 13
Path (javax.ws.rs.Path) 4
SystemException (javax.transaction.SystemException) 3
Transaction (javax.transaction.Transaction) 3
Transactional (javax.transaction.Transactional) 3
POST (javax.ws.rs.POST) 3
Produces (javax.ws.rs.Produces) 3
Test (org.junit.jupiter.api.Test) 3
SmallRyeManagedExecutor (io.smallrye.context.SmallRyeManagedExecutor) 2
Objects (java.util.Objects) 2
Optional (java.util.Optional) 2
ApplicationScoped (javax.enterprise.context.ApplicationScoped) 2
Inject (javax.inject.Inject) 2
ContextManagerProvider (org.eclipse.microprofile.context.spi.ContextManagerProvider) 2
Employee (com.gepardec.mega.domain.model.Employee) 1
EmployeeService (com.gepardec.mega.service.api.EmployeeService) 1
ZepService (com.gepardec.mega.zep.ZepService) 1
ZepServiceException (com.gepardec.mega.zep.ZepServiceException) 1
ImmutableList (com.google.common.collect.ImmutableList) 1
ImmutableMap (com.google.common.collect.ImmutableMap) 1