
Example 1 with TarsosDSPAudioInputStream

use of be.tarsos.dsp.io.TarsosDSPAudioInputStream in project cythara by gstraube.

From the class AudioDispatcherFactory, method fromPipe:

/**
 * Create a stream from a piped sub process and use that to create a new
 * {@link AudioDispatcher} The sub-process writes a WAV-header and
 * PCM samples to standard out. The header is ignored and the PCM samples
 * are captured and interpreted. Examples of executables that can convert
 * audio of any format and write it to stdout are ffmpeg and avconv.
 *
 * @param source
 *            The file or stream to capture.
 * @param targetSampleRate
 *            The target sample rate.
 * @param audioBufferSize
 *            The number of samples used in the buffer.
 * @param bufferOverlap
 * 			  The number of samples to overlap the current and previous buffer.
 * @return A new AudioDispatcher.
 */
public static AudioDispatcher fromPipe(final String source, final int targetSampleRate, final int audioBufferSize, final int bufferOverlap) {
    PipedAudioStream f = new PipedAudioStream(source);
    TarsosDSPAudioInputStream audioStream = f.getMonoStream(targetSampleRate, 0);
    return new AudioDispatcher(audioStream, audioBufferSize, bufferOverlap);
}
Also used: TarsosDSPAudioInputStream (be.tarsos.dsp.io.TarsosDSPAudioInputStream), PipedAudioStream (be.tarsos.dsp.io.PipedAudioStream), AudioDispatcher (be.tarsos.dsp.AudioDispatcher)
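PipedAudioStream hides the step of discarding the WAV header and decoding the raw PCM that the sub-process writes to stdout. As a rough, stdlib-only sketch of that decoding step (a hypothetical helper, not the TarsosDSP API), assuming the canonical 44-byte RIFF header and signed 16-bit little-endian mono samples:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PcmDecodeSketch {

    static final int WAV_HEADER_BYTES = 44; // canonical RIFF/WAVE header size

    // Skip the WAV header, then decode signed 16-bit little-endian PCM
    // into floats in the range [-1, 1].
    static float[] decode(InputStream in, int sampleCount) throws IOException {
        DataInputStream data = new DataInputStream(in);
        data.skipBytes(WAV_HEADER_BYTES);
        float[] samples = new float[sampleCount];
        for (int i = 0; i < sampleCount; i++) {
            int lo = data.read();             // low byte first (little-endian)
            int hi = data.read();
            short s = (short) ((hi << 8) | lo);
            samples[i] = s / 32768f;          // normalize 16-bit range to [-1, 1]
        }
        return samples;
    }

    public static void main(String[] args) throws IOException {
        // Fake stream: 44 zero header bytes followed by two samples,
        // 0x0000 (-> 0.0) and 0x4000 = 16384 (-> 0.5).
        byte[] buf = new byte[48];
        buf[46] = 0x00;
        buf[47] = 0x40;
        float[] out = decode(new ByteArrayInputStream(buf), 2);
        System.out.println(out[0] + " " + out[1]);
    }
}
```

In the real library this work is done inside the stream returned by getMonoStream; the sketch only illustrates why the header can be ignored while the samples are kept.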

Example 2 with TarsosDSPAudioInputStream

use of be.tarsos.dsp.io.TarsosDSPAudioInputStream in project cythara by gstraube.

From the class AudioDispatcherFactory, method fromDefaultMicrophone:

/**
 * Create a new AudioDispatcher connected to the default microphone.
 *
 * @param sampleRate
 *            The requested sample rate.
 * @param audioBufferSize
 *            The size of the audio buffer (in samples).
 *
 * @param bufferOverlap
 *            The size of the overlap (in samples).
 * @return A new AudioDispatcher
 */
public static AudioDispatcher fromDefaultMicrophone(final int sampleRate, final int audioBufferSize, final int bufferOverlap) {
    // getMinBufferSize returns a size in bytes; with 16-bit PCM that is 2 bytes per sample.
    int minAudioBufferSize = AudioRecord.getMinBufferSize(sampleRate, android.media.AudioFormat.CHANNEL_IN_MONO, android.media.AudioFormat.ENCODING_PCM_16BIT);
    int minAudioBufferSizeInSamples = minAudioBufferSize / 2;
    if (minAudioBufferSizeInSamples <= audioBufferSize) {
        AudioRecord audioInputStream = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, android.media.AudioFormat.CHANNEL_IN_MONO, android.media.AudioFormat.ENCODING_PCM_16BIT, audioBufferSize * 2);
        TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(sampleRate, 16, 1, true, false);
        TarsosDSPAudioInputStream audioStream = new AndroidAudioInputStream(audioInputStream, format);
        // Start recording: this opens the stream.
        audioInputStream.startRecording();
        return new AudioDispatcher(audioStream, audioBufferSize, bufferOverlap);
    } else {
        // Report the minimum in samples, matching the units of audioBufferSize above
        // (the original message printed minAudioBufferSize * 2, a byte count, which
        // is not comparable to the sample count being checked).
        throw new IllegalArgumentException("Buffer size too small: should be at least " + minAudioBufferSizeInSamples + " samples");
    }
}
Also used: AudioRecord (android.media.AudioRecord), TarsosDSPAudioInputStream (be.tarsos.dsp.io.TarsosDSPAudioInputStream), TarsosDSPAudioFormat (be.tarsos.dsp.io.TarsosDSPAudioFormat), AudioDispatcher (be.tarsos.dsp.AudioDispatcher)
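The guard at the top of fromDefaultMicrophone exists because AudioRecord.getMinBufferSize reports its minimum in bytes, while the caller passes a buffer size in samples; with 16-bit mono PCM the byte minimum is halved before comparing. A stdlib-only sketch of that unit conversion (the class, method names, and numbers are illustrative, not part of TarsosDSP or Android):

```java
public class BufferSizeCheck {

    // 16-bit PCM stores each sample in 2 bytes, so a minimum expressed
    // in bytes halves to a minimum expressed in samples.
    static int minSamples(int minBufferSizeInBytes) {
        return minBufferSizeInBytes / 2;
    }

    // True when a requested buffer (in samples) meets the device minimum (in bytes).
    static boolean bufferLargeEnough(int requestedSamples, int minBufferSizeInBytes) {
        return requestedSamples >= minSamples(minBufferSizeInBytes);
    }

    public static void main(String[] args) {
        // Suppose getMinBufferSize(...) returned 3584 bytes, i.e. 1792 samples.
        System.out.println(bufferLargeEnough(2048, 3584)); // 2048 >= 1792
        System.out.println(bufferLargeEnough(1024, 3584)); // 1024 <  1792
    }
}
```

This is also why the snippet passes audioBufferSize * 2 to the AudioRecord constructor: the constructor wants bytes, and the factory method's parameter is in samples.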

Aggregations

AudioDispatcher (be.tarsos.dsp.AudioDispatcher): 2 uses
TarsosDSPAudioInputStream (be.tarsos.dsp.io.TarsosDSPAudioInputStream): 2 uses
AudioRecord (android.media.AudioRecord): 1 use
PipedAudioStream (be.tarsos.dsp.io.PipedAudioStream): 1 use
TarsosDSPAudioFormat (be.tarsos.dsp.io.TarsosDSPAudioFormat): 1 use