
Example 46 with SpeechRecognitionAlternative

use of com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative in project java-speech by googleapis.

the class SpeechAdaptation method speechAdaptation.

public static void speechAdaptation(String uriPath) throws IOException {
    // the "close" method on the client to safely clean up any remaining background resources.
    try (SpeechClient speechClient = SpeechClient.create()) {
        // Provides "hints" to the speech recognizer to favor specific words and phrases in the
        // results.
        // https://cloud.google.com/speech-to-text/docs/reference/rpc/google.cloud.speech.v1p1beta1#google.cloud.speech.v1p1beta1.SpeechContext
        SpeechContext speechContext = SpeechContext.newBuilder().addPhrases("Brooklyn Bridge").setBoost(20.0F).build();
        // Configure recognition config to match your audio file.
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.MP3)
                .setSampleRateHertz(44100)
                .setLanguageCode("en-US")
                .addSpeechContexts(speechContext)
                .build();
        // Set the path to your audio file
        RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(uriPath).build();
        // Make the request
        RecognizeRequest request = RecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
        // Display the results
        RecognizeResponse response = speechClient.recognize(request);
        for (SpeechRecognitionResult result : response.getResultsList()) {
            // First alternative is the most probable result
            SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
            System.out.printf("Transcript: %s\n", alternative.getTranscript());
        }
    }
}
Also used : SpeechRecognitionAlternative(com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative) RecognitionAudio(com.google.cloud.speech.v1p1beta1.RecognitionAudio) SpeechContext(com.google.cloud.speech.v1p1beta1.SpeechContext) RecognizeRequest(com.google.cloud.speech.v1p1beta1.RecognizeRequest) RecognitionConfig(com.google.cloud.speech.v1p1beta1.RecognitionConfig) SpeechClient(com.google.cloud.speech.v1p1beta1.SpeechClient) RecognizeResponse(com.google.cloud.speech.v1p1beta1.RecognizeResponse) SpeechRecognitionResult(com.google.cloud.speech.v1p1beta1.SpeechRecognitionResult)
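
A minimal sketch of how the sample above could be invoked; the Cloud Storage URI is a placeholder, not a path taken from the project:

// Hypothetical caller for speechAdaptation. The URI must point to a 44.1 kHz MP3
// object that the authenticated credentials can read.
public static void main(String[] args) throws IOException {
    speechAdaptation("gs://your-bucket/your-audio.mp3");
}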

Example 47 with SpeechRecognitionAlternative

use of com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative in project java-speech by googleapis.

the class SpeechModelAdaptationBeta method transcribeWithModelAdaptation.

/**
 * Transcribe with model adaptation.
 *
 * @param projectId your project id
 * @param location the region
 * @param gcsUri the path to the audio file
 * @param customClassId the id to assign to the custom class created by this sample
 * @param phraseSetId the id to assign to the phrase set created by this sample
 */
public static void transcribeWithModelAdaptation(String projectId, String location, String gcsUri, String customClassId, String phraseSetId) throws Exception {
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AdaptationClient adaptationClient = AdaptationClient.create()) {
        // Create `PhraseSet` and `CustomClasses` to create custom lists of similar
        // items that are likely to occur in your input data.
        // The parent resource where the custom class and phrase set will be created.
        LocationName parent = LocationName.of(projectId, location);
        // Create the custom class
        CreateCustomClassRequest classRequest =
            CreateCustomClassRequest.newBuilder()
                .setParent(parent.toString())
                .setCustomClassId(customClassId)
                .setCustomClass(
                    CustomClass.newBuilder()
                        .addItems(ClassItem.newBuilder().setValue("sushido"))
                        .addItems(ClassItem.newBuilder().setValue("altura"))
                        .addItems(ClassItem.newBuilder().setValue("taneda"))
                        .build())
                .build();
        CustomClass classResponse = adaptationClient.createCustomClass(classRequest);
        // Create the phrase set
        CreatePhraseSetRequest phraseRequest =
            CreatePhraseSetRequest.newBuilder()
                .setParent(parent.toString())
                .setPhraseSetId(phraseSetId)
                .setPhraseSet(
                    PhraseSet.newBuilder()
                        .setBoost(10)
                        .addPhrases(
                            Phrase.newBuilder()
                                .setValue(String.format("Visit restaurants like %s%n", customClassId)))
                        .build())
                .build();
        PhraseSet phraseResponse = adaptationClient.createPhraseSet(phraseRequest);
        // Next section shows how to use the newly created custom class and phrase set
        // to send a transcription request with speech adaptation
        // Speech adaptation configuration
        SpeechAdaptation speechAdaptation = SpeechAdaptation.newBuilder().addCustomClasses(classResponse).addPhraseSets(phraseResponse).build();
        // the "close" method on the client to safely clean up any remaining background resources.
        try (SpeechClient speechClient = SpeechClient.create()) {
            // The path to the audio file to transcribe
            // gcsUri URI for audio file in Cloud Storage, e.g. gs://[BUCKET]/[FILE]
            // Builds the sync recognize request
            RecognitionConfig config =
                RecognitionConfig.newBuilder()
                    .setEncoding(AudioEncoding.FLAC)
                    .setSampleRateHertz(16000)
                    .setLanguageCode("en-US")
                    // Set the adaptation object
                    .setAdaptation(speechAdaptation)
                    .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();
            // Performs speech recognition on the audio file.
            RecognizeResponse response = speechClient.recognize(config, audio);
            List<SpeechRecognitionResult> results = response.getResultsList();
            for (SpeechRecognitionResult result : results) {
                // There can be several alternative transcripts for a given chunk of speech. Just use the
                // first (most likely) one here.
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Adapted Transcription: %s%n", alternative.getTranscript());
            }
        }
    } catch (ApiException e) {
        System.out.println("Client Interaction Error: \n" + e.toString());
    }
}
Also used : PhraseSet(com.google.cloud.speech.v1p1beta1.PhraseSet) RecognizeResponse(com.google.cloud.speech.v1p1beta1.RecognizeResponse) SpeechRecognitionResult(com.google.cloud.speech.v1p1beta1.SpeechRecognitionResult) LocationName(com.google.cloud.speech.v1p1beta1.LocationName) CustomClass(com.google.cloud.speech.v1p1beta1.CustomClass) CreatePhraseSetRequest(com.google.cloud.speech.v1p1beta1.CreatePhraseSetRequest) CreateCustomClassRequest(com.google.cloud.speech.v1p1beta1.CreateCustomClassRequest) SpeechRecognitionAlternative(com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative) RecognitionAudio(com.google.cloud.speech.v1p1beta1.RecognitionAudio) SpeechAdaptation(com.google.cloud.speech.v1p1beta1.SpeechAdaptation) RecognitionConfig(com.google.cloud.speech.v1p1beta1.RecognitionConfig) SpeechClient(com.google.cloud.speech.v1p1beta1.SpeechClient) AdaptationClient(com.google.cloud.speech.v1p1beta1.AdaptationClient) ApiException(com.google.api.gax.rpc.ApiException)
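
The sample creates persistent adaptation resources, so re-running it with the same IDs will typically fail because the custom class and phrase set already exist. A hedged cleanup sketch, assuming the same projectId, location, customClassId, and phraseSetId, and using DeleteCustomClassRequest and DeletePhraseSetRequest from the same com.google.cloud.speech.v1p1beta1 package:

// Sketch (not part of the original sample): delete the adaptation resources by name.
// The resource name formats follow the public Speech-to-Text adaptation API.
static void deleteAdaptationResources(
    String projectId, String location, String customClassId, String phraseSetId) throws IOException {
    try (AdaptationClient adaptationClient = AdaptationClient.create()) {
        adaptationClient.deleteCustomClass(
            DeleteCustomClassRequest.newBuilder()
                .setName(String.format(
                    "projects/%s/locations/%s/customClasses/%s", projectId, location, customClassId))
                .build());
        adaptationClient.deletePhraseSet(
            DeletePhraseSetRequest.newBuilder()
                .setName(String.format(
                    "projects/%s/locations/%s/phraseSets/%s", projectId, location, phraseSetId))
                .build());
    }
}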

Example 48 with SpeechRecognitionAlternative

use of com.google.cloud.speech.v1.SpeechRecognitionAlternative in project java-speech by googleapis.

the class SpeechProfanityFilter method speechProfanityFilter.

/**
 * Transcribe a remote audio file with the profanity filter enabled
 *
 * @param gcsUri the path to the audio file
 */
public static void speechProfanityFilter(String gcsUri) throws Exception {
    // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
    try (SpeechClient speech = SpeechClient.create()) {
        // Configure remote file request
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.FLAC)
                .setLanguageCode("en-US")
                .setSampleRateHertz(16000)
                .setProfanityFilter(true)
                .build();
        // Set the remote path for the audio file
        RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();
        // Use blocking call to get audio transcript
        RecognizeResponse response = speech.recognize(config, audio);
        List<SpeechRecognitionResult> results = response.getResultsList();
        for (SpeechRecognitionResult result : results) {
            // There can be several alternative transcripts for a given chunk of speech. Just use the
            // first (most likely) one here.
            SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
            System.out.printf("Transcription: %s\n", alternative.getTranscript());
        }
    }
}
Also used : SpeechRecognitionAlternative(com.google.cloud.speech.v1.SpeechRecognitionAlternative) RecognitionAudio(com.google.cloud.speech.v1.RecognitionAudio) RecognitionConfig(com.google.cloud.speech.v1.RecognitionConfig) SpeechClient(com.google.cloud.speech.v1.SpeechClient) RecognizeResponse(com.google.cloud.speech.v1.RecognizeResponse) SpeechRecognitionResult(com.google.cloud.speech.v1.SpeechRecognitionResult)
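
The same profanity-filtered request can also be sent with inline audio instead of a Cloud Storage URI. A minimal sketch, assuming a local 16 kHz FLAC file; besides the imports above it needs java.nio.file.Files, java.nio.file.Paths, and com.google.protobuf.ByteString:

// Sketch: profanity-filtered recognition on a local file (the path is a placeholder).
public static void speechProfanityFilterLocal(String fileName) throws Exception {
    try (SpeechClient speech = SpeechClient.create()) {
        byte[] data = Files.readAllBytes(Paths.get(fileName));
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.FLAC)
                .setLanguageCode("en-US")
                .setSampleRateHertz(16000)
                .setProfanityFilter(true)
                .build();
        // Inline the audio bytes rather than referencing a gs:// object.
        RecognitionAudio audio =
            RecognitionAudio.newBuilder().setContent(ByteString.copyFrom(data)).build();
        RecognizeResponse response = speech.recognize(config, audio);
        for (SpeechRecognitionResult result : response.getResultsList()) {
            System.out.printf("Transcription: %s\n",
                result.getAlternativesList().get(0).getTranscript());
        }
    }
}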

Example 49 with SpeechRecognitionAlternative

use of com.google.cloud.speech.v1.SpeechRecognitionAlternative in project java-speech by googleapis.

the class TranscribeContextClasses method transcribeContextClasses.

// Provides "hints" to the speech recognizer to favor specific classes of words in the results.
static void transcribeContextClasses(String storageUri) throws IOException {
    // the "close" method on the client to safely clean up any remaining background resources.
    try (SpeechClient speechClient = SpeechClient.create()) {
        // SpeechContext: to configure your speech_context see:
        // https://cloud.google.com/speech-to-text/docs/reference/rpc/google.cloud.speech.v1#speechcontext
        // Full list of supported phrases (class tokens) here:
        // https://cloud.google.com/speech-to-text/docs/class-tokens
        SpeechContext speechContext = SpeechContext.newBuilder().addPhrases("$TIME").build();
        // RecognitionConfig: to configure your encoding and sample_rate_hertz, see:
        // https://cloud.google.com/speech-to-text/docs/reference/rpc/google.cloud.speech.v1#recognitionconfig
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                .setSampleRateHertz(8000)
                .setLanguageCode("en-US")
                .addSpeechContexts(speechContext)
                .build();
        // Set the path to your audio file
        RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(storageUri).build();
        // Build the request
        RecognizeRequest request = RecognizeRequest.newBuilder().setConfig(config).setAudio(audio).build();
        // Perform the request
        RecognizeResponse response = speechClient.recognize(request);
        for (SpeechRecognitionResult result : response.getResultsList()) {
            // First alternative is the most probable result
            SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
            System.out.printf("Transcript: %s\n", alternative.getTranscript());
        }
    }
}
Also used : SpeechRecognitionAlternative(com.google.cloud.speech.v1.SpeechRecognitionAlternative) RecognitionAudio(com.google.cloud.speech.v1.RecognitionAudio) SpeechContext(com.google.cloud.speech.v1.SpeechContext) RecognizeRequest(com.google.cloud.speech.v1.RecognizeRequest) RecognitionConfig(com.google.cloud.speech.v1.RecognitionConfig) SpeechClient(com.google.cloud.speech.v1.SpeechClient) RecognizeResponse(com.google.cloud.speech.v1.RecognizeResponse) SpeechRecognitionResult(com.google.cloud.speech.v1.SpeechRecognitionResult)
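
The loop above prints only the top hypothesis. If setMaxAlternatives is raised on the RecognitionConfig, each result can carry several alternatives; a small sketch of inspecting them (same v1 types as above, noting that the service may populate confidence only for the top alternative):

// Sketch: iterate every returned alternative instead of just the first one.
for (SpeechRecognitionResult result : response.getResultsList()) {
    for (SpeechRecognitionAlternative alternative : result.getAlternativesList()) {
        System.out.printf("Candidate (confidence %.2f): %s\n",
            alternative.getConfidence(), alternative.getTranscript());
    }
}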

Example 50 with SpeechRecognitionAlternative

use of com.google.cloud.videointelligence.v1.SpeechRecognitionAlternative in project java-video-intelligence by googleapis.

the class Detect method speechTranscription.

/**
 * Transcribe speech from a video stored on GCS.
 *
 * @param gcsUri the path to the video file to analyze.
 */
public static void speechTranscription(String gcsUri) throws Exception {
    // Instantiate a com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient
    try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
        // Set the language code
        SpeechTranscriptionConfig config = SpeechTranscriptionConfig.newBuilder().setLanguageCode("en-US").setEnableAutomaticPunctuation(true).build();
        // Set the video context with the above configuration
        VideoContext context = VideoContext.newBuilder().setSpeechTranscriptionConfig(config).build();
        // Create the request
        AnnotateVideoRequest request =
            AnnotateVideoRequest.newBuilder()
                .setInputUri(gcsUri)
                .addFeatures(Feature.SPEECH_TRANSCRIPTION)
                .setVideoContext(context)
                .build();
        // Asynchronously perform speech transcription on the video
        OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> response = client.annotateVideoAsync(request);
        System.out.println("Waiting for operation to complete...");
        // Display the results
        for (VideoAnnotationResults results : response.get(600, TimeUnit.SECONDS).getAnnotationResultsList()) {
            for (SpeechTranscription speechTranscription : results.getSpeechTranscriptionsList()) {
                try {
                    // Print the transcription
                    if (speechTranscription.getAlternativesCount() > 0) {
                        SpeechRecognitionAlternative alternative = speechTranscription.getAlternatives(0);
                        System.out.printf("Transcript: %s\n", alternative.getTranscript());
                        System.out.printf("Confidence: %.2f\n", alternative.getConfidence());
                        System.out.println("Word level information:");
                        for (WordInfo wordInfo : alternative.getWordsList()) {
                            double startTime = wordInfo.getStartTime().getSeconds() + wordInfo.getStartTime().getNanos() / 1e9;
                            double endTime = wordInfo.getEndTime().getSeconds() + wordInfo.getEndTime().getNanos() / 1e9;
                            System.out.printf("\t%4.2fs - %4.2fs: %s\n", startTime, endTime, wordInfo.getWord());
                        }
                    } else {
                        System.out.println("No transcription found");
                    }
                } catch (IndexOutOfBoundsException ioe) {
                    System.out.println("Could not retrieve frame: " + ioe.getMessage());
                }
            }
        }
    }
}
Also used : AnnotateVideoRequest(com.google.cloud.videointelligence.v1.AnnotateVideoRequest) VideoContext(com.google.cloud.videointelligence.v1.VideoContext) VideoIntelligenceServiceClient(com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient) AnnotateVideoProgress(com.google.cloud.videointelligence.v1.AnnotateVideoProgress) SpeechRecognitionAlternative(com.google.cloud.videointelligence.v1.SpeechRecognitionAlternative) SpeechTranscriptionConfig(com.google.cloud.videointelligence.v1.SpeechTranscriptionConfig) VideoAnnotationResults(com.google.cloud.videointelligence.v1.VideoAnnotationResults) SpeechTranscription(com.google.cloud.videointelligence.v1.SpeechTranscription) WordInfo(com.google.cloud.videointelligence.v1.WordInfo) AnnotateVideoResponse(com.google.cloud.videointelligence.v1.AnnotateVideoResponse)
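
The SpeechTranscriptionConfig above sets only the language code and automatic punctuation. The same builder exposes further recognition options; a hedged sketch (field names per the videointelligence.v1 proto, the phrase hint is a made-up example, and SpeechContext here is com.google.cloud.videointelligence.v1.SpeechContext):

// Sketch: a fuller transcription config built with the same video intelligence types.
SpeechTranscriptionConfig richConfig =
    SpeechTranscriptionConfig.newBuilder()
        .setLanguageCode("en-US")
        .setEnableAutomaticPunctuation(true)
        // Request up to three hypotheses per utterance.
        .setMaxAlternatives(3)
        // Mask profanity in the returned transcripts.
        .setFilterProfanity(true)
        // Bias recognition toward a phrase expected in the video (placeholder value).
        .addSpeechContexts(SpeechContext.newBuilder().addPhrases("Brooklyn Bridge").build())
        .build();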

Aggregations

RecognitionConfig (com.google.cloud.speech.v1.RecognitionConfig): 24 usages
SpeechClient (com.google.cloud.speech.v1.SpeechClient): 24 usages
SpeechRecognitionAlternative (com.google.cloud.speech.v1.SpeechRecognitionAlternative): 24 usages
Path (java.nio.file.Path): 23 usages
RecognitionConfig (com.google.cloud.speech.v1p1beta1.RecognitionConfig): 22 usages
SpeechClient (com.google.cloud.speech.v1p1beta1.SpeechClient): 22 usages
SpeechRecognitionAlternative (com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative): 22 usages
RecognitionAudio (com.google.cloud.speech.v1.RecognitionAudio): 21 usages
RecognitionAudio (com.google.cloud.speech.v1p1beta1.RecognitionAudio): 20 usages
SpeechRecognitionResult (com.google.cloud.speech.v1.SpeechRecognitionResult): 19 usages
SpeechRecognitionResult (com.google.cloud.speech.v1p1beta1.SpeechRecognitionResult): 18 usages
LongRunningRecognizeResponse (com.google.cloud.speech.v1p1beta1.LongRunningRecognizeResponse): 17 usages
StreamingRecognitionConfig (com.google.cloud.speech.v1.StreamingRecognitionConfig): 16 usages
LongRunningRecognizeResponse (com.google.cloud.speech.v1.LongRunningRecognizeResponse): 14 usages
RecognizeResponse (com.google.cloud.speech.v1.RecognizeResponse): 14 usages
RecognizeResponse (com.google.cloud.speech.v1p1beta1.RecognizeResponse): 12 usages
ByteString (com.google.protobuf.ByteString): 12 usages
StreamingRecognizeResponse (com.google.cloud.speech.v1.StreamingRecognizeResponse): 10 usages
StreamingRecognitionConfig (com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig): 10 usages
LongRunningRecognizeMetadata (com.google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata): 8 usages