
Example 96 with TypeToken

use of com.google.common.reflect.TypeToken in project CorfuDB by CorfuDB.

the class UndoTest method ckMultiStreamRollback.

/**
     * In this test, transactions are started on two threads, t1 and t2.
     * Then two things happen:
     *
     * 1. some updates are committed by another thread; when t1 resumes,
     *    its snapshot should roll back those commits.
     *
     * 2. t2 is then resumed and makes optimistic updates, which should be
     *    rolled back when it tries to commit.
     * @throws Exception
     */
@Test
public void ckMultiStreamRollback() throws Exception {
    ArrayList<Map> maps = new ArrayList<>();
    final int nmaps = 3;
    for (int i = 0; i < nmaps; i++) {
        maps.add((SMRMap<Integer, String>) instantiateCorfuObject(
                new TypeToken<SMRMap<Integer, String>>() {}, "test stream" + i));
    }
    // before t1 starts
    crossStream(maps, normalValue);
    // t1 starts transaction.
    // snapshot should include all the keys inserted above
    t(t1, () -> {
        WWTXBegin();
        // size() is called so that the TX obtains a snapshot at this point,
        // rather than lazily obtaining it later, on its first read
        maps.get(0).size();
    });
    // t2 starts transaction.
    t(t2, () -> {
        WWTXBegin();
        // size() is called so that the TX obtains a snapshot at this point,
        // rather than lazily obtaining it later, on its first read
        maps.get(0).size();
    });
    // t3 modifies everything
    t(t3, () -> crossStream(maps, specialValue));
    // t1 resumes: within its snapshot, the updates committed by t3 should be rolled back
    t(t1, () -> {
        for (Map m : maps) {
            assertThat(m.get(specialKey)).isEqualTo(normalValue);
            assertThat(m.get(specialKey + 1)).isEqualTo(normalValue);
        }
    });
    // now, t2 optimistically modifies everything, but
    // does not yet commit
    t(t2, () -> {
        for (Map m : maps) m.put(specialKey, specialValue2);
    });
    // on the main thread, t2's uncommitted optimistic updates should not be visible;
    // only the values committed by t3 are seen
    for (Map m : maps) {
        assertThat(m.get(specialKey)).isEqualTo(specialValue);
        assertThat(m.get(specialKey + 1)).isEqualTo(specialValue);
    }
    // now, try to commit t2
    t(t2, () -> {
        boolean aborted = false;
        try {
            TXEnd();
        } catch (TransactionAbortedException te) {
            aborted = true;
        }
        assertThat(aborted).isTrue();
    });
    // back on the main thread, t2's aborted updates should still not be visible
    for (Map m : maps) {
        assertThat(m.get(specialKey)).isEqualTo(specialValue);
        assertThat(m.get(specialKey + 1)).isEqualTo(specialValue);
    }
}
Also used : SMRMap(org.corfudb.runtime.collections.SMRMap) TypeToken(com.google.common.reflect.TypeToken) ArrayList(java.util.ArrayList) Map(java.util.Map) SMRMap(org.corfudb.runtime.collections.SMRMap) TransactionAbortedException(org.corfudb.runtime.exceptions.TransactionAbortedException) Test(org.junit.Test)
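
The idiom to note here is the anonymous subclass new TypeToken<SMRMap<Integer, String>>() {} passed to instantiateCorfuObject: the trailing {} is what lets Guava's TypeToken recover the full parameterized type at runtime despite erasure, so the runtime knows the map's key and value types. A minimal, Corfu-independent sketch of that capture (class and variable names are illustrative, not from the test):

import com.google.common.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.Map;

public class TypeTokenCaptureDemo {
    public static void main(String[] args) {
        // The trailing {} creates an anonymous subclass, which preserves the
        // generic type arguments in its superclass signature.
        TypeToken<Map<Integer, String>> token = new TypeToken<Map<Integer, String>>() {};
        Type type = token.getType();
        // Prints: java.util.Map<java.lang.Integer, java.lang.String>
        System.out.println(type);
    }
}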

Example 97 with TypeToken

use of com.google.common.reflect.TypeToken in project azure-sdk-for-java by Azure.

the class DatabaseAccountsInner method failoverPriorityChangeWithServiceResponseAsync.

/**
     * Changes the failover priority for the Azure DocumentDB database account. A failover priority of 0 indicates a write region. The maximum value for a failover priority = (total number of regions - 1). Failover priority values must be unique for each of the regions in which the database account exists.
     *
     * @param resourceGroupName Name of an Azure resource group.
     * @param accountName DocumentDB database account name.
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable for the request
     */
public Observable<ServiceResponse<Void>> failoverPriorityChangeWithServiceResponseAsync(String resourceGroupName, String accountName) {
    if (this.client.subscriptionId() == null) {
        throw new IllegalArgumentException("Parameter this.client.subscriptionId() is required and cannot be null.");
    }
    if (resourceGroupName == null) {
        throw new IllegalArgumentException("Parameter resourceGroupName is required and cannot be null.");
    }
    if (accountName == null) {
        throw new IllegalArgumentException("Parameter accountName is required and cannot be null.");
    }
    if (this.client.apiVersion() == null) {
        throw new IllegalArgumentException("Parameter this.client.apiVersion() is required and cannot be null.");
    }
    final String failoverPoliciesConverted = null;
    FailoverPolicies failoverParameters = new FailoverPolicies();
    failoverParameters.withFailoverPolicies(null);
    Observable<Response<ResponseBody>> observable = service.failoverPriorityChange(this.client.subscriptionId(), resourceGroupName, accountName, this.client.apiVersion(), this.client.acceptLanguage(), failoverParameters, this.client.userAgent());
    return client.getAzureClient().getPostOrDeleteResultAsync(observable, new TypeToken<Void>() {
    }.getType());
}
Also used : Response(retrofit2.Response) ServiceResponse(com.microsoft.rest.ServiceResponse) TypeToken(com.google.common.reflect.TypeToken) FailoverPolicies(com.microsoft.azure.management.documentdb.FailoverPolicies)
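
Example 98 below follows the same shape. The TypeToken<Void> literal exists only to hand the Azure long-running-operation helper a java.lang.reflect.Type describing the expected response body, which for these operations is empty. A minimal sketch of how such a helper might consume that Type; the BodyDecoder class is hypothetical, not the actual AzureClient API:

import com.google.common.reflect.TypeToken;
import com.google.gson.Gson;
import java.lang.reflect.Type;

// Hypothetical stand-in for the deserialization step inside
// getPostOrDeleteResultAsync: all it needs is a Type for the body.
class BodyDecoder {
    private static final Gson GSON = new Gson();

    static Object decode(String json, Type bodyType) {
        if (bodyType == Void.class || json == null || json.isEmpty()) {
            return null; // nothing to deserialize for an empty body
        }
        return GSON.fromJson(json, bodyType);
    }
}

class VoidTypeTokenDemo {
    public static void main(String[] args) {
        Type voidType = new TypeToken<Void>() {}.getType();
        // Prints: null (the operation result carries no body)
        System.out.println(BodyDecoder.decode("", voidType));
    }
}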

Example 98 with TypeToken

use of com.google.common.reflect.TypeToken in project azure-sdk-for-java by Azure.

the class DatabaseAccountsInner method regenerateKeyWithServiceResponseAsync.

/**
     * Regenerates an access key for the specified Azure DocumentDB database account.
     *
     * @param resourceGroupName Name of an Azure resource group.
     * @param accountName DocumentDB database account name.
     * @param keyKind The access key to regenerate. Possible values include: 'primary', 'secondary', 'primaryReadonly', 'secondaryReadonly'
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable for the request
     */
public Observable<ServiceResponse<Void>> regenerateKeyWithServiceResponseAsync(String resourceGroupName, String accountName, KeyKind keyKind) {
    if (this.client.subscriptionId() == null) {
        throw new IllegalArgumentException("Parameter this.client.subscriptionId() is required and cannot be null.");
    }
    if (resourceGroupName == null) {
        throw new IllegalArgumentException("Parameter resourceGroupName is required and cannot be null.");
    }
    if (accountName == null) {
        throw new IllegalArgumentException("Parameter accountName is required and cannot be null.");
    }
    if (this.client.apiVersion() == null) {
        throw new IllegalArgumentException("Parameter this.client.apiVersion() is required and cannot be null.");
    }
    if (keyKind == null) {
        throw new IllegalArgumentException("Parameter keyKind is required and cannot be null.");
    }
    DatabaseAccountRegenerateKeyParameters keyToRegenerate = new DatabaseAccountRegenerateKeyParameters();
    keyToRegenerate.withKeyKind(keyKind);
    Observable<Response<ResponseBody>> observable = service.regenerateKey(this.client.subscriptionId(), resourceGroupName, accountName, this.client.apiVersion(), this.client.acceptLanguage(), keyToRegenerate, this.client.userAgent());
    return client.getAzureClient().getPostOrDeleteResultAsync(observable, new TypeToken<Void>() {
    }.getType());
}
Also used : Response(retrofit2.Response) ServiceResponse(com.microsoft.rest.ServiceResponse) DatabaseAccountRegenerateKeyParameters(com.microsoft.azure.management.documentdb.DatabaseAccountRegenerateKeyParameters) TypeToken(com.google.common.reflect.TypeToken)

Example 99 with TypeToken

use of com.google.common.reflect.TypeToken in project azure-tools-for-java by Microsoft.

the class SparkSubmitModel method tryToCreateBatchSparkJob.

private void tryToCreateBatchSparkJob(@NotNull final IClusterDetail selectedClusterDetail) throws HDIException, IOException {
    SparkBatchSubmission.getInstance().setCredentialsProvider(selectedClusterDetail.getHttpUserName(), selectedClusterDetail.getHttpPassword());
    HttpResponse response = SparkBatchSubmission.getInstance().createBatchSparkJob(SparkSubmitHelper.getLivyConnectionURL(selectedClusterDetail), submissionParameter);
    if (response.getCode() == 201 || response.getCode() == 200) {
        HDInsightUtil.showInfoOnSubmissionMessageWindow("Info : Submit to spark cluster successfully.");
        postEventProperty.put("IsSubmitSucceed", "true");
        String jobLink = String.format("%s/sparkhistory", selectedClusterDetail.getConnectionUrl());
        HDInsightUtil.setHyperLinkWithText("See spark job view from ", jobLink, jobLink);
        @SuppressWarnings("serial") final SparkSubmitResponse sparkSubmitResponse = new Gson().fromJson(response.getMessage(), new TypeToken<SparkSubmitResponse>() {
        }.getType());
        // Set submitted spark application id and http request info for stopping running application
        Display.getDefault().syncExec(new Runnable() {

            @Override
            public void run() {
                SparkSubmissionToolWindowView view = HDInsightUtil.getSparkSubmissionToolWindowView();
                view.setSparkApplicationStopInfo(selectedClusterDetail.getConnectionUrl(), sparkSubmitResponse.getId());
                view.setStopButtonState(true);
                view.getJobStatusManager().resetJobStateManager();
            }
        });
        SparkSubmitHelper.getInstance().printRunningLogStreamingly(sparkSubmitResponse.getId(), selectedClusterDetail, postEventProperty);
    } else {
        HDInsightUtil.showErrorMessageOnSubmissionMessageWindow(String.format("Error : Failed to submit to spark cluster. error code : %d, reason :  %s.", response.getCode(), response.getContent()));
        postEventProperty.put("IsSubmitSucceed", "false");
        postEventProperty.put("SubmitFailedReason", response.getContent());
        AppInsightsClient.create(Messages.SparkSubmissionButtonClickEvent, null, postEventProperty);
    }
}
Also used : SparkSubmitResponse(com.microsoft.azure.hdinsight.spark.common.SparkSubmitResponse) TypeToken(com.google.common.reflect.TypeToken) HttpResponse(com.microsoft.azure.hdinsight.sdk.common.HttpResponse) Gson(com.google.gson.Gson) SparkSubmissionToolWindowView(com.microsoft.azuretools.hdinsight.SparkSubmissionToolWindowView)
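
Worth noting: this code deserializes with Gson but obtains the Type from Guava's com.google.common.reflect.TypeToken rather than Gson's own com.google.gson.reflect.TypeToken. That works because Gson#fromJson(String, Type) only needs a java.lang.reflect.Type. A minimal sketch of the same pattern, with a hypothetical JobInfo class standing in for SparkSubmitResponse:

import com.google.common.reflect.TypeToken;
import com.google.gson.Gson;
import java.lang.reflect.Type;
import java.util.List;

class JobInfo {          // hypothetical stand-in for SparkSubmitResponse
    int id;
    String state;
}

class GuavaTypeTokenWithGson {
    public static void main(String[] args) {
        Gson gson = new Gson();

        // For a plain class, the Type from Guava's TypeToken is interchangeable
        // with JobInfo.class.
        JobInfo job = gson.fromJson("{\"id\":7,\"state\":\"running\"}",
                new TypeToken<JobInfo>() {}.getType());
        System.out.println(job.id + " " + job.state);

        // The idiom matters most for parameterized types, where a bare
        // Class<?> cannot express the element type.
        Type listType = new TypeToken<List<JobInfo>>() {}.getType();
        List<JobInfo> jobs = gson.fromJson("[{\"id\":1,\"state\":\"dead\"}]", listType);
        System.out.println(jobs.get(0).state);
    }
}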

Example 100 with TypeToken

use of com.google.common.reflect.TypeToken in project azure-tools-for-java by Microsoft.

the class SparkSubmitHelper method printRunningLogStreamingly.

public void printRunningLogStreamingly(/*Project project,*/ int id, IClusterDetail clusterDetail,
        Map<String, String> postEventProperty) throws IOException {
    try {
        boolean isFailedJob = false;
        boolean isKilledJob = false;
        int from_index = 0;
        int pre_index;
        int times = 0;
        HDInsightUtil.getSparkSubmissionToolWindowView().setInfo("======================Begin printing out spark job log.=======================");
        while (true) {
            pre_index = from_index;
            if (HDInsightUtil.getSparkSubmissionToolWindowView().getJobStatusManager().isJobKilled()) {
                isKilledJob = true;
                break;
            }
            from_index = printoutJobLog(/*project, */ id, from_index, clusterDetail);
            HttpResponse statusHttpResponse = SparkBatchSubmission.getInstance().getBatchSparkJobStatus(clusterDetail.getConnectionUrl() + "/livy/batches", id);
            SparkSubmitResponse status = new Gson().fromJson(statusHttpResponse.getMessage(), new TypeToken<SparkSubmitResponse>() {
            }.getType());
            // only when the log lines are unchanged between two HTTP requests do we check the job status
            if (from_index == pre_index) {
                String finalStatus = status.getState().toLowerCase();
                if (finalStatus.equals("error") || finalStatus.equals("success") || finalStatus.equals("dead")) {
                    if (finalStatus.equals("error") || finalStatus.equals("dead")) {
                        isFailedJob = true;
                    }
                    if (!HDInsightUtil.getSparkSubmissionToolWindowView().getJobStatusManager().isJobKilled()) {
                        printoutJobLog(id, from_index, clusterDetail);
                        HDInsightUtil.getSparkSubmissionToolWindowView().setInfo("======================Finish printing out spark job log.=======================");
                    } else {
                        isKilledJob = true;
                    }
                    break;
                }
            }
            Thread.sleep(getIntervalTime(times));
            times++;
        }
        if (isKilledJob) {
            postEventProperty.put("IsKilled", "true");
            AppInsightsClient.create(Messages.SparkSubmissionButtonClickEvent, Activator.getDefault().getBundle().getVersion().toString(), postEventProperty);
            return;
        }
        if (isFailedJob) {
            postEventProperty.put("IsRunningSucceed", "false");
            HDInsightUtil.getSparkSubmissionToolWindowView().setError("Error : Your submitted job run failed");
        } else {
            postEventProperty.put("IsRunningSucceed", "true");
            HDInsightUtil.getSparkSubmissionToolWindowView().setInfo("The Spark application completed successfully");
        }
        AppInsightsClient.create(Messages.SparkSubmissionButtonClickEvent, Activator.getDefault().getBundle().getVersion().toString(), postEventProperty);
    } catch (Exception e) {
        if (!HDInsightUtil.getSparkSubmissionToolWindowView().getJobStatusManager().isJobKilled()) {
            HDInsightUtil.getSparkSubmissionToolWindowView().setError("Error : Failed to getting running log. Exception : " + e.toString());
        } else {
            postEventProperty.put("IsKilled", "true");
        }
        AppInsightsClient.create(Messages.SparkSubmissionButtonClickEvent, Activator.getDefault().getBundle().getVersion().toString(), postEventProperty);
    }
}
Also used : SparkSubmitResponse(com.microsoft.azure.hdinsight.spark.common.SparkSubmitResponse) TypeToken(com.google.common.reflect.TypeToken) HttpResponse(com.microsoft.azure.hdinsight.sdk.common.HttpResponse) Gson(com.google.gson.Gson) SftpException(com.jcraft.jsch.SftpException) HDIException(com.microsoft.azure.hdinsight.sdk.common.HDIException) IOException(java.io.IOException) AzureCmdException(com.microsoft.azuretools.azurecommons.helpers.AzureCmdException) JSchException(com.jcraft.jsch.JSchException)
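
The loop above keeps fetching the Livy batch status until it reaches a terminal state (error, success, dead), sleeping between polls with an interval from getIntervalTime(times). A condensed sketch of that poll-until-terminal pattern, with a hypothetical fetchState() in place of the real Livy call and an assumed backoff policy:

import com.google.common.collect.ImmutableSet;
import java.util.Set;

class StatusPoller {
    // Hypothetical stand-in for SparkBatchSubmission#getBatchSparkJobStatus.
    interface StateSource {
        String fetchState() throws Exception;
    }

    private static final Set<String> TERMINAL = ImmutableSet.of("error", "success", "dead");

    // Poll, stop on a terminal state, otherwise sleep with a growing interval.
    static String pollUntilDone(StateSource source) throws Exception {
        int times = 0;
        while (true) {
            String state = source.fetchState().toLowerCase();
            if (TERMINAL.contains(state)) {
                return state;
            }
            Thread.sleep(intervalMillis(times++));
        }
    }

    // Assumed backoff policy; the real getIntervalTime(times) may differ.
    static long intervalMillis(int times) {
        return Math.min(1000L * (times + 1), 10000L);
    }
}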

Aggregations

TypeToken (com.google.common.reflect.TypeToken): 135
Test (org.junit.Test): 60
HttpResponse (co.cask.common.http.HttpResponse): 26
URL (java.net.URL): 24
ServiceResponse (com.microsoft.rest.ServiceResponse): 22
Response (retrofit2.Response): 22
BinaryEncoder (co.cask.cdap.common.io.BinaryEncoder): 18
BinaryDecoder (co.cask.cdap.common.io.BinaryDecoder): 17
PipedInputStream (java.io.PipedInputStream): 17
PipedOutputStream (java.io.PipedOutputStream): 17
ReflectionDatumReader (co.cask.cdap.internal.io.ReflectionDatumReader): 16
List (java.util.List): 16
Map (java.util.Map): 11
ImmutableList (com.google.common.collect.ImmutableList): 9
Type (java.lang.reflect.Type): 9
AbstractViewTest (org.corfudb.runtime.view.AbstractViewTest): 9
NotFoundException (co.cask.cdap.common.NotFoundException): 8
VirtualMachineScaleSetVMInstanceIDs (com.microsoft.azure.management.compute.VirtualMachineScaleSetVMInstanceIDs): 8
Gson (com.google.gson.Gson): 7
JsonObject (com.google.gson.JsonObject): 7