
Example 1 with TrainAcousticModelOptions

Use of com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions in the project java-sdk by watson-developer-cloud.

From the class SpeechToTextTest, method testTrainAcousticModelWOptions.

// Test the trainAcousticModel operation with a valid options model parameter
@Test
public void testTrainAcousticModelWOptions() throws Throwable {
    // Register a mock response
    String mockResponseBody = "{\"warnings\": [{\"code\": \"invalid_audio_files\", \"message\": \"message\"}]}";
    String trainAcousticModelPath = "/v1/acoustic_customizations/testString/train";
    server.enqueue(new MockResponse().setHeader("Content-type", "application/json").setResponseCode(200).setBody(mockResponseBody));
    // Construct an instance of the TrainAcousticModelOptions model
    TrainAcousticModelOptions trainAcousticModelOptionsModel = new TrainAcousticModelOptions.Builder().customizationId("testString").customLanguageModelId("testString").build();
    // Invoke trainAcousticModel() with a valid options model and verify the result
    Response<TrainingResponse> response = speechToTextService.trainAcousticModel(trainAcousticModelOptionsModel).execute();
    assertNotNull(response);
    TrainingResponse responseObj = response.getResult();
    assertNotNull(responseObj);
    // Verify the contents of the request sent to the mock server
    RecordedRequest request = server.takeRequest();
    assertNotNull(request);
    assertEquals(request.getMethod(), "POST");
    // Verify request path
    String parsedPath = TestUtilities.parseReqPath(request);
    assertEquals(parsedPath, trainAcousticModelPath);
    // Verify query params
    Map<String, String> query = TestUtilities.parseQueryString(request);
    assertNotNull(query);
    assertEquals(query.get("custom_language_model_id"), "testString");
}
Also used : RecordedRequest(okhttp3.mockwebserver.RecordedRequest) MockResponse(okhttp3.mockwebserver.MockResponse) TrainAcousticModelOptions(com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions) TrainingResponse(com.ibm.watson.speech_to_text.v1.model.TrainingResponse) Test(org.testng.annotations.Test)
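
The test above runs against okhttp's MockWebServer. Against a live Speech to Text instance the options model is built the same way; the following is a minimal sketch, assuming the SDK's standard IamAuthenticator constructor and placeholder values for the API key, service URL, and both customization IDs (none of these values come from the example above).

import com.ibm.cloud.sdk.core.http.Response;
import com.ibm.cloud.sdk.core.security.IamAuthenticator;
import com.ibm.watson.speech_to_text.v1.SpeechToText;
import com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions;
import com.ibm.watson.speech_to_text.v1.model.TrainingResponse;

public class TrainAcousticModelExample {
    public static void main(String[] args) {
        // Placeholder credentials and URL -- replace with your own service instance values
        IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
        SpeechToText speechToText = new SpeechToText(authenticator);
        speechToText.setServiceUrl("https://api.us-south.speech-to-text.watson.cloud.ibm.com");
        // Build the options as in the test: the acoustic model to train,
        // plus an optional custom language model to train against
        TrainAcousticModelOptions options = new TrainAcousticModelOptions.Builder()
            .customizationId("{acoustic_customization_id}")
            .customLanguageModelId("{language_customization_id}")
            .build();
        // The call returns immediately; training itself runs asynchronously on the service
        Response<TrainingResponse> response = speechToText.trainAcousticModel(options).execute();
        TrainingResponse result = response.getResult();
        System.out.println(result);
    }
}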

Example 2 with TrainAcousticModelOptions

Use of com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions in the project java-sdk by watson-developer-cloud.

From the class SpeechToTextTest, method testTrainAcousticModel.

/**
 * Test train acoustic model.
 *
 * @throws InterruptedException the interrupted exception
 * @throws FileNotFoundException the file not found exception
 */
@Test
public void testTrainAcousticModel() throws InterruptedException, FileNotFoundException {
    // Register a mock response for the training call
    server.enqueue(new MockResponse().addHeader(CONTENT_TYPE, HttpMediaType.APPLICATION_JSON).setBody("{}"));
    String id = "foo";
    String languageModelId = "bar";
    // Construct the options model with the acoustic customization ID and an optional custom language model ID
    TrainAcousticModelOptions trainOptions = new TrainAcousticModelOptions.Builder().customizationId(id).customLanguageModelId(languageModelId).build();
    service.trainAcousticModel(trainOptions).execute();
    // Verify the method, path, and query string of the request sent to the mock server
    final RecordedRequest request = server.takeRequest();
    assertEquals("POST", request.getMethod());
    assertEquals(String.format(PATH_ACOUSTIC_TRAIN, id) + "?custom_language_model_id=bar", request.getPath());
}
Also used : RecordedRequest(okhttp3.mockwebserver.RecordedRequest) MockResponse(okhttp3.mockwebserver.MockResponse) TrainAcousticModelOptions(com.ibm.watson.developer_cloud.speech_to_text.v1.model.TrainAcousticModelOptions) ByteString(okio.ByteString) WatsonServiceUnitTest(com.ibm.watson.developer_cloud.WatsonServiceUnitTest) Test(org.junit.Test)

Example 3 with TrainAcousticModelOptions

Use of com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions in the project java-sdk by watson-developer-cloud.

From the class SpeechToText, method trainAcousticModel.

/**
 * Train a custom acoustic model.
 *
 * <p>Initiates the training of a custom acoustic model with new or changed audio resources. After
 * adding or deleting audio resources for a custom acoustic model, use this method to begin the
 * actual training of the model on the latest audio data. The custom acoustic model does not
 * reflect its changed data until you train it. You must use credentials for the instance of the
 * service that owns a model to train it.
 *
 * <p>The training method is asynchronous. Training time depends on the cumulative amount of audio
 * data that the custom acoustic model contains and the current load on the service. When you
 * train or retrain a model, the service uses all of the model's audio data in the training.
 * Training a custom acoustic model takes approximately as long as the length of its cumulative
 * audio data. For example, it takes approximately 2 hours to train a model that contains a total
 * of 2 hours of audio. The method returns an HTTP 200 response code to indicate that the training
 * process has begun.
 *
 * <p>You can monitor the status of the training by using the [Get a custom acoustic
 * model](#getacousticmodel) method to poll the model's status. Use a loop to check the status
 * once a minute. The method returns an `AcousticModel` object that includes `status` and
 * `progress` fields. A status of `available` indicates that the custom model is trained and ready
 * to use. The service cannot train a model while it is handling another request for the model.
 * The service cannot accept subsequent training requests, or requests to add new audio resources,
 * until the existing training request completes.
 *
 * <p>You can use the optional `custom_language_model_id` parameter to specify the GUID of a
 * separately created custom language model that is to be used during training. Train with a
 * custom language model if you have verbatim transcriptions of the audio files that you have
 * added to the custom model or you have either corpora (text files) or a list of words that are
 * relevant to the contents of the audio files. For training to succeed, both of the custom models
 * must be based on the same version of the same base model, and the custom language model must be
 * fully trained and available.
 *
 * <p>**Note:** Acoustic model customization is supported only for use with previous-generation
 * models. It is not supported for next-generation models.
 *
 * <p>**See also:** * [Train the custom acoustic
 * model](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-acoustic#trainModel-acoustic)
 * * [Using custom acoustic and custom language models
 * together](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-useBoth#useBoth)
 *
 * <p>### Training failures
 *
 * <p>Training can fail to start for the following reasons: * The service is currently handling
 * another request for the custom model, such as another training request or a request to add
 * audio resources to the model. * The custom model contains less than 10 minutes or more than 200
 * hours of audio data. * You passed a custom language model with the `custom_language_model_id`
 * query parameter that is not in the available state. A custom language model must be fully
 * trained and available to be used to train a custom acoustic model. * You passed an incompatible
 * custom language model with the `custom_language_model_id` query parameter. Both custom models
 * must be based on the same version of the same base model. * The custom model contains one or
 * more invalid audio resources. You can correct the invalid audio resources or set the `strict`
 * parameter to `false` to exclude the invalid resources from the training. The model must contain
 * at least one valid resource for training to succeed.
 *
 * @param trainAcousticModelOptions the {@link TrainAcousticModelOptions} containing the options
 *     for the call
 * @return a {@link ServiceCall} with a result of type {@link TrainingResponse}
 */
public ServiceCall<TrainingResponse> trainAcousticModel(TrainAcousticModelOptions trainAcousticModelOptions) {
    com.ibm.cloud.sdk.core.util.Validator.notNull(trainAcousticModelOptions, "trainAcousticModelOptions cannot be null");
    Map<String, String> pathParamsMap = new HashMap<String, String>();
    pathParamsMap.put("customization_id", trainAcousticModelOptions.customizationId());
    RequestBuilder builder = RequestBuilder.post(RequestBuilder.resolveRequestUrl(getServiceUrl(), "/v1/acoustic_customizations/{customization_id}/train", pathParamsMap));
    Map<String, String> sdkHeaders = SdkCommon.getSdkHeaders("speech_to_text", "v1", "trainAcousticModel");
    for (Entry<String, String> header : sdkHeaders.entrySet()) {
        builder.header(header.getKey(), header.getValue());
    }
    builder.header("Accept", "application/json");
    if (trainAcousticModelOptions.customLanguageModelId() != null) {
        builder.query("custom_language_model_id", String.valueOf(trainAcousticModelOptions.customLanguageModelId()));
    }
    ResponseConverter<TrainingResponse> responseConverter = ResponseConverterUtils.getValue(new com.google.gson.reflect.TypeToken<TrainingResponse>() {
    }.getType());
    return createServiceCall(builder.build(), responseConverter);
}
Also used : RequestBuilder(com.ibm.cloud.sdk.core.http.RequestBuilder) HashMap(java.util.HashMap) TrainingResponse(com.ibm.watson.speech_to_text.v1.model.TrainingResponse)
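
The Javadoc above explains that training is asynchronous and suggests polling the model's status about once a minute via the Get a custom acoustic model method until it reports `available`. Below is a minimal sketch of that polling loop, assuming a configured SpeechToText client, a placeholder customization ID, and the SDK's GetAcousticModelOptions/getAcousticModel call; treating `failed` as the terminal error status is an assumption, not something stated in the Javadoc.

import com.ibm.watson.speech_to_text.v1.SpeechToText;
import com.ibm.watson.speech_to_text.v1.model.AcousticModel;
import com.ibm.watson.speech_to_text.v1.model.GetAcousticModelOptions;
import com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions;

public class AcousticTrainingPoller {

    /** Starts training, then polls roughly once a minute until the model leaves the training state. */
    public static void trainAndWait(SpeechToText speechToText, String customizationId) throws InterruptedException {
        // Kick off asynchronous training (custom_language_model_id omitted here for brevity)
        speechToText.trainAcousticModel(
            new TrainAcousticModelOptions.Builder().customizationId(customizationId).build()).execute();
        GetAcousticModelOptions getOptions =
            new GetAcousticModelOptions.Builder().customizationId(customizationId).build();
        while (true) {
            AcousticModel model = speechToText.getAcousticModel(getOptions).execute().getResult();
            System.out.println("status=" + model.getStatus() + ", progress=" + model.getProgress());
            // `available` means training finished; `failed` is assumed here as the terminal error state
            if ("available".equals(model.getStatus()) || "failed".equals(model.getStatus())) {
                break;
            }
            // The Javadoc suggests checking the status once a minute
            Thread.sleep(60_000L);
        }
    }
}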

Aggregations

TrainingResponse (com.ibm.watson.speech_to_text.v1.model.TrainingResponse): 2 uses
MockResponse (okhttp3.mockwebserver.MockResponse): 2 uses
RecordedRequest (okhttp3.mockwebserver.RecordedRequest): 2 uses
RequestBuilder (com.ibm.cloud.sdk.core.http.RequestBuilder): 1 use
WatsonServiceUnitTest (com.ibm.watson.developer_cloud.WatsonServiceUnitTest): 1 use
TrainAcousticModelOptions (com.ibm.watson.developer_cloud.speech_to_text.v1.model.TrainAcousticModelOptions): 1 use
TrainAcousticModelOptions (com.ibm.watson.speech_to_text.v1.model.TrainAcousticModelOptions): 1 use
HashMap (java.util.HashMap): 1 use
ByteString (okio.ByteString): 1 use
Test (org.junit.Test): 1 use
Test (org.testng.annotations.Test): 1 use