
Example 76 with Builder

Use of okhttp3.OkHttpClient.Builder in project androidthings-deskclock by leinardi.

From the class NetworkModule, the method provideOkHttpClient:

@Provides
@Singleton
public OkHttpClient provideOkHttpClient(@Named(NETWORK_TIMEOUT) Long networkTimeout) {
    OkHttpClient.Builder builder = new OkHttpClient.Builder();
    if (BuildConfig.DEBUG) {
        HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
        logging.setLevel(HttpLoggingInterceptor.Level.BODY);
        builder.addInterceptor(logging);
    }
    return builder
            .connectTimeout(networkTimeout, TimeUnit.SECONDS)
            .readTimeout(networkTimeout, TimeUnit.SECONDS)
            .writeTimeout(networkTimeout, TimeUnit.SECONDS)
            .build();
}
Also used: OkHttpClient (okhttp3.OkHttpClient), HttpLoggingInterceptor (okhttp3.logging.HttpLoggingInterceptor), Singleton (javax.inject.Singleton), Provides (dagger.Provides)
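The same one-timeout-drives-everything wiring can be sketched against the JDK's built-in java.net.http client (Java 11+), with no okhttp dependency. This is an illustrative stdlib analogue, not the project's code; the class name TimeoutConfig is hypothetical.

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class TimeoutConfig {
    // Mirrors provideOkHttpClient above: a single timeout value,
    // injected once, configures the whole client.
    public static HttpClient buildClient(long networkTimeoutSeconds) {
        return HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(networkTimeoutSeconds))
                .build();
    }

    public static void main(String[] args) {
        // The builder stores the timeout on the resulting client.
        System.out.println(buildClient(30).connectTimeout().orElseThrow());
    }
}
```

Unlike OkHttpClient.Builder, the JDK builder exposes only connectTimeout; per-request read deadlines go on HttpRequest.Builder.timeout(...) instead.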

Example 77 with Builder

Use of okhttp3.HttpUrl.Builder in project java-sdk by watson-developer-cloud.

From the class SpeechToText, the method recognizeUsingWebSocket:

/**
 * Sends audio and returns transcription results for recognition requests over a WebSocket connection. Requests and
 * responses are enabled over a single TCP connection that abstracts much of the complexity of the request to offer
 * efficient implementation, low latency, high throughput, and an asynchronous response. By default, only final
 * results are returned for any request; to enable interim results, set the interimResults parameter to true.
 *
 * The service imposes a data size limit of 100 MB per utterance (per recognition request). You can send multiple
 * utterances over a single WebSocket connection. The service automatically detects the endianness of the incoming
 * audio and, for audio that includes multiple channels, downmixes the audio to one-channel mono during transcoding.
 * (For the audio/l16 format, you can specify the endianness.)
 *
 * @param recognizeOptions the recognize options
 * @param callback the {@link RecognizeCallback} instance where results will be sent
 * @return the {@link WebSocket}
 */
public WebSocket recognizeUsingWebSocket(RecognizeOptions recognizeOptions, RecognizeCallback callback) {
    Validator.notNull(recognizeOptions, "recognizeOptions cannot be null");
    Validator.notNull(recognizeOptions.audio(), "audio cannot be null");
    Validator.notNull(callback, "callback cannot be null");
    HttpUrl.Builder urlBuilder = HttpUrl.parse(getEndPoint() + "/v1/recognize").newBuilder();
    if (recognizeOptions.model() != null) {
        urlBuilder.addQueryParameter("model", recognizeOptions.model());
    }
    if (recognizeOptions.customizationId() != null) {
        urlBuilder.addQueryParameter("customization_id", recognizeOptions.customizationId());
    }
    if (recognizeOptions.acousticCustomizationId() != null) {
        urlBuilder.addQueryParameter("acoustic_customization_id", recognizeOptions.acousticCustomizationId());
    }
    if (recognizeOptions.version() != null) {
        urlBuilder.addQueryParameter("version", recognizeOptions.version());
    }
    if (recognizeOptions.customizationWeight() != null) {
        urlBuilder.addQueryParameter("customization_weight", String.valueOf(recognizeOptions.customizationWeight()));
    }
    String url = urlBuilder.toString().replace("https://", "wss://");
    Request.Builder builder = new Request.Builder().url(url);
    setAuthentication(builder);
    setDefaultHeaders(builder);
    OkHttpClient client = configureHttpClient();
    return client.newWebSocket(builder.build(), new SpeechToTextWebSocketListener(recognizeOptions, callback));
}
Also used: OkHttpClient (okhttp3.OkHttpClient), SpeechToTextWebSocketListener (com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.SpeechToTextWebSocketListener), Request (okhttp3.Request), HttpUrl (okhttp3.HttpUrl)
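One detail worth isolating from this example: HttpUrl models only http and https, so the code builds the URL (including query parameters) as https and then rewrites the scheme string for the WebSocket handshake. A minimal stand-alone sketch of that step; WsUrl is a hypothetical helper name, not SDK code.

```java
public class WsUrl {
    // HttpUrl.Builder cannot emit a wss:// URL directly, so the
    // scheme is substituted after the full URL string is built.
    public static String toWebSocketUrl(String httpsUrl) {
        return httpsUrl.replace("https://", "wss://");
    }

    public static void main(String[] args) {
        System.out.println(
                toWebSocketUrl("https://example.com/v1/recognize?model=en-US"));
    }
}
```

The query string survives untouched because only the scheme prefix matches the replacement target.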

Example 78 with Builder

Use of okhttp3.HttpUrl.Builder in project java-sdk by watson-developer-cloud.

From the class SpeechToText, the method addCorpus:

/**
 * Adds a corpus text file to a custom language model.
 *
 * Adds a single corpus text file of new training data to a custom language model. Use multiple requests to submit
 * multiple corpus text files. You must use credentials for the instance of the service that owns a model to add a
 * corpus to it. Note that adding a corpus does not affect the custom language model until you train the model for the
 * new data by using the `POST /v1/customizations/{customization_id}/train` method. Submit a plain text file that
 * contains sample sentences from the domain of interest to enable the service to extract words in context. The more
 * sentences you add that represent the context in which speakers use words from the domain, the better the service's
 * recognition accuracy. For guidelines about adding a corpus text file and for information about how the service
 * parses a corpus file, see [Preparing a corpus text
 * file](https://console.bluemix.net/docs/services/speech-to-text/language-resource.html#prepareCorpus). The call
 * returns an HTTP 201 response code if the corpus is valid. The service then asynchronously processes the contents of
 * the corpus and automatically extracts new words that it finds. This can take on the order of a minute or two to
 * complete depending on the total number of words and the number of new words in the corpus, as well as the current
 * load on the service. You cannot submit requests to add additional corpora or words to the custom model, or to train
 * the model, until the service's analysis of the corpus for the current request completes. Use the `GET
 * /v1/customizations/{customization_id}/corpora/{corpus_name}` method to check the status of the analysis. The
 * service auto-populates the model's words resource with any word that is not found in its base vocabulary; these are
 * referred to as out-of-vocabulary (OOV) words. You can use the `GET /v1/customizations/{customization_id}/words`
 * method to examine the words resource, and use the other words-related methods to eliminate typos and modify how words are
 * pronounced as needed. To add a corpus file that has the same name as an existing corpus, set the allow_overwrite
 * query parameter to true; otherwise, the request fails. Overwriting an existing corpus causes the service to process
 * the corpus text file and extract OOV words anew. Before doing so, it removes any OOV words associated with the
 * existing corpus from the model's words resource unless they were also added by another corpus or they have been
 * modified in some way with the `POST /v1/customizations/{customization_id}/words` or `PUT
 * /v1/customizations/{customization_id}/words/{word_name}` method. The service limits the overall amount of data that
 * you can add to a custom model to a maximum of 10 million total words from all corpora combined. Also, you can add
 * no more than 30 thousand new custom words to a model; this includes words that the service extracts from corpora
 * and words that you add directly.
 *
 * @param addCorpusOptions the {@link AddCorpusOptions} containing the options for the call
 * @return a {@link ServiceCall} with a response type of Void
 */
public ServiceCall<Void> addCorpus(AddCorpusOptions addCorpusOptions) {
    Validator.notNull(addCorpusOptions, "addCorpusOptions cannot be null");
    String[] pathSegments = { "v1/customizations", "corpora" };
    String[] pathParameters = { addCorpusOptions.customizationId(), addCorpusOptions.corpusName() };
    RequestBuilder builder = RequestBuilder.post(RequestBuilder.constructHttpUrl(getEndPoint(), pathSegments, pathParameters));
    if (addCorpusOptions.allowOverwrite() != null) {
        builder.query("allow_overwrite", String.valueOf(addCorpusOptions.allowOverwrite()));
    }
    MultipartBody.Builder multipartBuilder = new MultipartBody.Builder();
    multipartBuilder.setType(MultipartBody.FORM);
    RequestBody corpusFileBody = RequestUtils.inputStreamBody(addCorpusOptions.corpusFile(), addCorpusOptions.corpusFileContentType());
    multipartBuilder.addFormDataPart("corpus_file", addCorpusOptions.corpusFilename(), corpusFileBody);
    builder.body(multipartBuilder.build());
    return createServiceCall(builder.build(), ResponseConverterUtils.getVoid());
}
Also used: RequestBuilder (com.ibm.watson.developer_cloud.http.RequestBuilder), MultipartBody (okhttp3.MultipartBody), RequestBody (okhttp3.RequestBody), InputStreamRequestBody (com.ibm.watson.developer_cloud.http.InputStreamRequestBody)
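MultipartBody.Builder hides the wire format it assembles. The hypothetical helper below (not SDK or okhttp code, text parts only) sketches the multipart/form-data framing that addFormDataPart produces: each part carries a Content-Disposition header naming the form field and, for file parts, a filename, and the parts are delimited by a boundary string.

```java
import java.util.List;

public class Multipart {
    // One form-data part: boundary line, disposition header, blank
    // line, then the payload. filename may be null for plain fields.
    public static String formDataPart(String boundary, String fieldName,
                                      String filename, String content) {
        return "--" + boundary + "\r\n"
             + "Content-Disposition: form-data; name=\"" + fieldName + "\""
             + (filename != null ? "; filename=\"" + filename + "\"" : "")
             + "\r\n\r\n" + content + "\r\n";
    }

    // The full body is the concatenated parts plus a closing boundary.
    public static String body(String boundary, List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.append("--").append(boundary).append("--\r\n").toString();
    }

    public static void main(String[] args) {
        System.out.print(body("boundary42", List.of(
                formDataPart("boundary42", "corpus_file", "corpus.txt", "hello"))));
    }
}
```

In the real SDK call, the corpus file is streamed rather than held as a string; this sketch only shows the framing.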

Example 79 with Builder

Use of okhttp3.HttpUrl.Builder in project java-sdk by watson-developer-cloud.

From the class VisualRecognition, the method updateClassifier:

/**
 * Update a classifier.
 *
 * Update a custom classifier by adding new positive or negative classes (examples) or by adding new images to
 * existing classes. You must supply at least one set of positive or negative examples. For details, see [Updating
 * custom classifiers](https://console.bluemix.net/docs/services/visual-recognition/customizing.html#updating-custom-classifiers).
 * Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class
 * names). The service assumes UTF-8 encoding if it encounters non-ASCII characters. **Important:** You can't update a
 * custom classifier with an API key for a Lite plan. To update a custom classifier on a Lite plan, create another
 * service instance on a Standard plan and re-create your custom classifier. **Tip:** Don't make retraining calls on a
 * classifier until the status is ready. When you submit retraining requests in parallel, the last request overwrites
 * the previous requests. The retrained property shows the last time the classifier retraining finished.
 *
 * @param updateClassifierOptions the {@link UpdateClassifierOptions} containing the options for the call
 * @return a {@link ServiceCall} with a response type of {@link Classifier}
 */
public ServiceCall<Classifier> updateClassifier(UpdateClassifierOptions updateClassifierOptions) {
    Validator.notNull(updateClassifierOptions, "updateClassifierOptions cannot be null");
    Validator.isTrue((updateClassifierOptions.classNames().size() > 0) || (updateClassifierOptions.negativeExamples() != null), "At least one of classnamePositiveExamples or negativeExamples must be supplied.");
    String[] pathSegments = { "v3/classifiers" };
    String[] pathParameters = { updateClassifierOptions.classifierId() };
    RequestBuilder builder = RequestBuilder.post(RequestBuilder.constructHttpUrl(getEndPoint(), pathSegments, pathParameters));
    builder.query(VERSION, versionDate);
    MultipartBody.Builder multipartBuilder = new MultipartBody.Builder();
    multipartBuilder.setType(MultipartBody.FORM);
    // Classes
    for (String className : updateClassifierOptions.classNames()) {
        String dataName = className + "_positive_examples";
        File positiveExamples = updateClassifierOptions.positiveExamplesByClassName(className);
        RequestBody body = RequestUtils.fileBody(positiveExamples, "application/octet-stream");
        multipartBuilder.addFormDataPart(dataName, positiveExamples.getName(), body);
    }
    if (updateClassifierOptions.negativeExamples() != null) {
        RequestBody negativeExamplesBody = RequestUtils.inputStreamBody(updateClassifierOptions.negativeExamples(), "application/octet-stream");
        multipartBuilder.addFormDataPart("negative_examples", updateClassifierOptions.negativeExamplesFilename(), negativeExamplesBody);
    }
    builder.body(multipartBuilder.build());
    return createServiceCall(builder.build(), ResponseConverterUtils.getObject(Classifier.class));
}
Also used: RequestBuilder (com.ibm.watson.developer_cloud.http.RequestBuilder), MultipartBody (okhttp3.MultipartBody), Classifier (com.ibm.watson.developer_cloud.visual_recognition.v3.model.Classifier), File (java.io.File), RequestBody (okhttp3.RequestBody)
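RequestBuilder.constructHttpUrl is the Watson SDK's own helper and its implementation is not shown here, but the pathSegments/pathParameters pairs in these examples imply that it interleaves fixed segments with parameter values. A hypothetical sketch of that interleaving, assuming an endpoint without a trailing slash (UrlPath is an invented name):

```java
public class UrlPath {
    // Interleaves fixed segments with path parameters, e.g.
    // segments {"v1/customizations", "corpora"} + params {id, name}
    // -> endpoint/v1/customizations/{id}/corpora/{name}.
    // With no parameters, only the fixed segments are appended.
    public static String construct(String endpoint,
                                   String[] segments, String[] params) {
        StringBuilder sb = new StringBuilder(endpoint);
        for (int i = 0; i < segments.length; i++) {
            sb.append('/').append(segments[i]);
            if (i < params.length) {
                sb.append('/').append(params[i]);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(construct("https://api.example.com",
                new String[] { "v1/customizations", "corpora" },
                new String[] { "custId", "corpusName" }));
    }
}
```

The same shape covers the single-segment, no-parameter case in the detectFaces example below.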

Example 80 with Builder

Use of okhttp3.HttpUrl.Builder in project java-sdk by watson-developer-cloud.

From the class VisualRecognition, the method detectFaces:

/**
 * Detect faces in images.
 *
 * **Important:** On April 2, 2018, the identity information in the response to calls to the Face model was removed.
 * The identity information refers to the `name` of the person, `score`, and `type_hierarchy` knowledge graph. For
 * details about the enhanced Face model, see the [Release
 * notes](https://console.bluemix.net/docs/services/visual-recognition/release-notes.html#2april2018). Analyze and get
 * data about faces in images. Responses can include estimated age and gender. This feature uses a built-in model, so
 * no training is necessary. The Detect faces method does not support general biometric facial recognition. Supported
 * image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum recommended pixel
 * density is 32X32 pixels per inch.
 *
 * @param detectFacesOptions the {@link DetectFacesOptions} containing the options for the call
 * @return a {@link ServiceCall} with a response type of {@link DetectedFaces}
 */
public ServiceCall<DetectedFaces> detectFaces(DetectFacesOptions detectFacesOptions) {
    Validator.notNull(detectFacesOptions, "detectFacesOptions cannot be null");
    Validator.isTrue((detectFacesOptions.imagesFile() != null) || (detectFacesOptions.url() != null) || (detectFacesOptions.parameters() != null), "At least one of imagesFile, url, or parameters must be supplied.");
    String[] pathSegments = { "v3/detect_faces" };
    RequestBuilder builder = RequestBuilder.post(RequestBuilder.constructHttpUrl(getEndPoint(), pathSegments));
    builder.query(VERSION, versionDate);
    MultipartBody.Builder multipartBuilder = new MultipartBody.Builder();
    multipartBuilder.setType(MultipartBody.FORM);
    if (detectFacesOptions.imagesFile() != null) {
        RequestBody imagesFileBody = RequestUtils.inputStreamBody(detectFacesOptions.imagesFile(), detectFacesOptions.imagesFileContentType());
        multipartBuilder.addFormDataPart("images_file", detectFacesOptions.imagesFilename(), imagesFileBody);
    }
    if (detectFacesOptions.parameters() != null) {
        multipartBuilder.addFormDataPart("parameters", detectFacesOptions.parameters());
    }
    if (detectFacesOptions.url() != null) {
        multipartBuilder.addFormDataPart("url", detectFacesOptions.url());
    }
    builder.body(multipartBuilder.build());
    return createServiceCall(builder.build(), ResponseConverterUtils.getObject(DetectedFaces.class));
}
Also used: RequestBuilder (com.ibm.watson.developer_cloud.http.RequestBuilder), MultipartBody (okhttp3.MultipartBody), DetectedFaces (com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectedFaces), RequestBody (okhttp3.RequestBody)
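A pattern all five examples share is null-guarded query assembly: an option becomes a query parameter only when the caller actually set it. A minimal hypothetical sketch of that pattern in plain Java (no percent-encoding, purely illustrative; QueryParams is an invented name):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryParams {
    // Insertion-ordered so parameters appear in the order they were added.
    private final Map<String, String> params = new LinkedHashMap<>();

    // Mirrors the repeated `if (option != null) addQueryParameter(...)`
    // blocks above: null values are silently skipped.
    public QueryParams addIfPresent(String name, String value) {
        if (value != null) {
            params.put(name, value);
        }
        return this;
    }

    public String toQueryString() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sb.length() == 0 ? '?' : '&')
              .append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(new QueryParams()
                .addIfPresent("model", "en-US_BroadbandModel")
                .addIfPresent("customization_id", null)
                .toQueryString());
    }
}
```

HttpUrl.Builder additionally percent-encodes names and values; this sketch deliberately leaves that out to keep the null-guard pattern in focus.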
