
Example 6 with Pronunciation

Use of com.ibm.watson.text_to_speech.v1.model.Pronunciation in project java-sdk by watson-developer-cloud.

The class TextToSpeech, method addCustomPrompt:

/**
 * Add a custom prompt.
 *
 * <p>Adds a custom prompt to a custom model. A prompt is defined by the text that is to be
 * spoken, the audio for that text, a unique user-specified ID for the prompt, and an optional
 * speaker ID. The information is used to generate prosodic data that is not visible to the user.
 * This data is used by the service to produce the synthesized audio upon request. You must use
 * credentials for the instance of the service that owns a custom model to add a prompt to it. You
 * can add a maximum of 1000 custom prompts to a single custom model.
 *
 * <p>It is recommended that you assign meaningful values to prompt IDs. For example, use `goodbye`
 * to identify a prompt that speaks a farewell message. Prompt IDs must be unique within a given
 * custom model. You cannot define two prompts with the same name for the same custom model. If
 * you provide the ID of an existing prompt, the previously uploaded prompt is replaced by the new
 * information. The existing prompt is reprocessed by using the new text and audio and, if
 * provided, new speaker model, and the prosody data associated with the prompt is updated.
 *
 * <p>The quality of a prompt is undefined if the language of a prompt does not match the language
 * of its custom model. This is consistent with any text or SSML that is specified for a speech
 * synthesis request. The service makes a best-effort attempt to render the specified text for the
 * prompt; it does not validate that the language of the text matches the language of the model.
 *
 * <p>Adding a prompt is an asynchronous operation. Although it accepts less audio than speaker
 * enrollment, the service must align the audio with the provided text. The time that it takes to
 * process a prompt depends on the prompt itself. The processing time for a reasonably sized
 * prompt generally matches the length of the audio (for example, it takes 20 seconds to process a
 * 20-second prompt).
 *
 * <p>For shorter prompts, you can wait for a reasonable amount of time and then check the status
 * of the prompt with the [Get a custom prompt](#getcustomprompt) method. For longer prompts,
 * consider using that method to poll the service every few seconds to determine when the prompt
 * becomes available. No prompt can be used for speech synthesis if it is in the `processing` or
 * `failed` state. Only prompts that are in the `available` state can be used for speech
 * synthesis.
 *
 * <p>When it processes a request, the service attempts to align the text and the audio that are
 * provided for the prompt. The text that is passed with a prompt must match the spoken audio as
 * closely as possible. Optimally, the text and audio match exactly. The service does its best to
 * align the specified text with the audio, and it can often compensate for mismatches between the
 * two. But if the service cannot effectively align the text and the audio, possibly because the
 * magnitude of mismatches between the two is too great, processing of the prompt fails.
 *
 * <p>### Evaluating a prompt
 *
 * <p>Always listen to and evaluate a prompt to determine its quality before using it in
 * production. To evaluate a prompt, include only the single prompt in a speech synthesis request
 * by using the following SSML extension, in this case for a prompt whose ID is `goodbye`:
 *
 * <p>`&lt;ibm:prompt id="goodbye"/&gt;`
 *
 * <p>In some cases, you might need to rerecord and resubmit a prompt as many as five times to
 * address the following possible problems:
 *
 * <p>* The service might fail to detect a mismatch between the prompt’s text and audio. The
 * longer the prompt, the greater the chance for misalignment between its text and audio.
 * Therefore, multiple shorter prompts are preferable to a single long prompt.
 *
 * <p>* The text of a prompt might include a word that the service does not recognize. In this
 * case, you can create a custom word and pronunciation pair to tell the service how to pronounce
 * the word. You must then re-create the prompt.
 *
 * <p>* The quality of the input audio might be insufficient, or the service’s processing of the
 * audio might fail to detect the intended prosody. Submitting new audio for the prompt can
 * correct these issues.
 *
 * <p>If a prompt that is created without a speaker ID does not adequately reflect the intended
 * prosody, enrolling the speaker and providing a speaker ID for the prompt is one recommended
 * means of potentially improving the quality of the prompt. This is especially important for
 * shorter prompts such as "good-bye" or "thank you," where less audio data makes it more
 * difficult to match the prosody of the speaker. Custom prompts are supported only for use with
 * US English custom models and voices.
 *
 * <p>**See also:**
 *
 * <p>* [Add a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-add-prompt)
 * * [Evaluate a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-evaluate-prompt)
 * * [Rules for creating custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-rules#tbe-rules-prompts)
 *
 * @param addCustomPromptOptions the {@link AddCustomPromptOptions} containing the options for the
 *     call
 * @return a {@link ServiceCall} with a result of type {@link Prompt}
 */
public ServiceCall<Prompt> addCustomPrompt(AddCustomPromptOptions addCustomPromptOptions) {
    com.ibm.cloud.sdk.core.util.Validator.notNull(addCustomPromptOptions, "addCustomPromptOptions cannot be null");
    // Build the request path from the customization and prompt IDs.
    Map<String, String> pathParamsMap = new HashMap<String, String>();
    pathParamsMap.put("customization_id", addCustomPromptOptions.customizationId());
    pathParamsMap.put("prompt_id", addCustomPromptOptions.promptId());
    RequestBuilder builder = RequestBuilder.post(RequestBuilder.resolveRequestUrl(getServiceUrl(), "/v1/customizations/{customization_id}/prompts/{prompt_id}", pathParamsMap));
    // Attach the standard SDK headers and accept a JSON response.
    Map<String, String> sdkHeaders = SdkCommon.getSdkHeaders("text_to_speech", "v1", "addCustomPrompt");
    for (Entry<String, String> header : sdkHeaders.entrySet()) {
        builder.header(header.getKey(), header.getValue());
    }
    builder.header("Accept", "application/json");
    // Send the prompt metadata and the WAV audio as a multipart form body.
    MultipartBody.Builder multipartBuilder = new MultipartBody.Builder();
    multipartBuilder.setType(MultipartBody.FORM);
    multipartBuilder.addFormDataPart("metadata", addCustomPromptOptions.metadata().toString());
    okhttp3.RequestBody fileBody = RequestUtils.inputStreamBody(addCustomPromptOptions.file(), "audio/wav");
    multipartBuilder.addFormDataPart("file", "filename", fileBody);
    builder.body(multipartBuilder.build());
    // Deserialize the JSON response into a Prompt model object.
    ResponseConverter<Prompt> responseConverter = ResponseConverterUtils.getValue(new com.google.gson.reflect.TypeToken<Prompt>() {
    }.getType());
    return createServiceCall(builder.build(), responseConverter);
}
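
A minimal calling sketch for the method above, assuming an authenticated TextToSpeech instance; the API key, customization ID, prompt ID, and WAV path are placeholders, and the PromptMetadata and AddCustomPromptOptions builders are the model classes from this SDK:

import java.io.FileInputStream;
import java.io.InputStream;

import com.ibm.cloud.sdk.core.security.IamAuthenticator;
import com.ibm.watson.text_to_speech.v1.TextToSpeech;
import com.ibm.watson.text_to_speech.v1.model.AddCustomPromptOptions;
import com.ibm.watson.text_to_speech.v1.model.Prompt;
import com.ibm.watson.text_to_speech.v1.model.PromptMetadata;

public class AddPromptExample {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; replace with values for your own service instance.
        TextToSpeech textToSpeech = new TextToSpeech(new IamAuthenticator("{apikey}"));

        // The metadata carries the text to be spoken and, optionally, an enrolled speaker ID.
        PromptMetadata metadata = new PromptMetadata.Builder()
            .promptText("Thank you and goodbye.")
            .build();

        // The audio must match the prompt text as closely as possible.
        try (InputStream audio = new FileInputStream("goodbye.wav")) {
            AddCustomPromptOptions options = new AddCustomPromptOptions.Builder()
                .customizationId("{customization_id}")
                .promptId("goodbye")
                .metadata(metadata)
                .file(audio)
                .build();

            // Adding a prompt is asynchronous, so the returned Prompt is typically
            // still in the `processing` state; see the polling sketch below.
            Prompt prompt = textToSpeech.addCustomPrompt(options).execute().getResult();
            System.out.println(prompt);
        }
    }
}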
Also used : RequestBuilder(com.ibm.cloud.sdk.core.http.RequestBuilder) HashMap(java.util.HashMap) MultipartBody(okhttp3.MultipartBody) Prompt(com.ibm.watson.text_to_speech.v1.model.Prompt)
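
The comment above also recommends polling with the Get a custom prompt method and evaluating the prompt through the `<ibm:prompt>` SSML extension before production use. A rough sketch of both steps, reusing the placeholder IDs from the previous example and the SDK's GetCustomPromptOptions and SynthesizeOptions builders:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import com.ibm.watson.text_to_speech.v1.TextToSpeech;
import com.ibm.watson.text_to_speech.v1.model.GetCustomPromptOptions;
import com.ibm.watson.text_to_speech.v1.model.Prompt;
import com.ibm.watson.text_to_speech.v1.model.SynthesizeOptions;

public class EvaluatePromptExample {

    // Wait for the prompt to become `available`, then synthesize it once for review.
    static void evaluate(TextToSpeech textToSpeech, String customizationId, String promptId)
            throws Exception {
        GetCustomPromptOptions getOptions = new GetCustomPromptOptions.Builder()
            .customizationId(customizationId)
            .promptId(promptId)
            .build();

        // Poll every few seconds while the service aligns the prompt's text and audio.
        Prompt prompt = textToSpeech.getCustomPrompt(getOptions).execute().getResult();
        while ("processing".equals(prompt.getStatus())) {
            Thread.sleep(5000);
            prompt = textToSpeech.getCustomPrompt(getOptions).execute().getResult();
        }

        // Prompts in the `processing` or `failed` state cannot be used for synthesis.
        if (!"available".equals(prompt.getStatus())) {
            throw new IllegalStateException("Prompt did not become available: " + prompt.getStatus());
        }

        // Evaluate the single prompt with the SSML extension described in the comment above.
        SynthesizeOptions synthesizeOptions = new SynthesizeOptions.Builder()
            .text("<ibm:prompt id=\"" + promptId + "\"/>")
            .voice(SynthesizeOptions.Voice.EN_US_ALLISONV3VOICE)
            .customizationId(customizationId)
            .accept("audio/wav")
            .build();
        try (InputStream audio = textToSpeech.synthesize(synthesizeOptions).execute().getResult()) {
            Files.copy(audio, Paths.get("prompt-evaluation.wav"));
        }
    }
}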

Aggregations

Pronunciation (com.ibm.watson.text_to_speech.v1.model.Pronunciation): 3
RequestBuilder (com.ibm.cloud.sdk.core.http.RequestBuilder): 2
Pronunciation (com.ibm.watson.developer_cloud.text_to_speech.v1.model.Pronunciation): 2
GetPronunciationOptions (com.ibm.watson.text_to_speech.v1.model.GetPronunciationOptions): 2
Test (org.junit.Test): 2
WatsonServiceTest (com.ibm.watson.common.WatsonServiceTest): 1
WatsonServiceTest (com.ibm.watson.developer_cloud.WatsonServiceTest): 1
RequestBuilder (com.ibm.watson.developer_cloud.http.RequestBuilder): 1
GetPronunciationOptions (com.ibm.watson.developer_cloud.text_to_speech.v1.model.GetPronunciationOptions): 1
Prompt (com.ibm.watson.text_to_speech.v1.model.Prompt): 1
HashMap (java.util.HashMap): 1
MultipartBody (okhttp3.MultipartBody): 1
MockResponse (okhttp3.mockwebserver.MockResponse): 1
RecordedRequest (okhttp3.mockwebserver.RecordedRequest): 1
Test (org.testng.annotations.Test): 1
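
Since this page aggregates uses of the Pronunciation model, and the comment above notes that an unrecognized word can be handled by creating a custom word and pronunciation pair before re-creating the prompt, the following sketch shows one way to do that with getPronunciation and addWord; the word, translation, and customization ID are illustrative only:

import com.ibm.watson.text_to_speech.v1.TextToSpeech;
import com.ibm.watson.text_to_speech.v1.model.AddWordOptions;
import com.ibm.watson.text_to_speech.v1.model.GetPronunciationOptions;
import com.ibm.watson.text_to_speech.v1.model.Pronunciation;

public class CustomWordExample {

    // Look up the default pronunciation of a word, then register a corrected translation
    // in the custom model so that prompts containing the word can be re-created.
    static void fixPronunciation(TextToSpeech textToSpeech, String customizationId) {
        GetPronunciationOptions pronunciationOptions = new GetPronunciationOptions.Builder()
            .text("IEEE")
            .voice(GetPronunciationOptions.Voice.EN_US_ALLISONV3VOICE)
            .format(GetPronunciationOptions.Format.IBM)
            .build();
        Pronunciation pronunciation =
            textToSpeech.getPronunciation(pronunciationOptions).execute().getResult();
        System.out.println(pronunciation.getPronunciation());

        // Add a word/translation pair to the custom model (a sounds-like translation here).
        AddWordOptions addWordOptions = new AddWordOptions.Builder()
            .customizationId(customizationId)
            .word("IEEE")
            .translation("I triple E")
            .build();
        textToSpeech.addWord(addWordOptions).execute();
    }
}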