From 344c3c4513b426ad8bf48a6719b1cf850936e48d Mon Sep 17 00:00:00 2001
From: Igor Zinken
Date: Sat, 28 Dec 2019 22:59:20 +0100
Subject: [PATCH] Support AAudio based input recording to enable full duplex AAudio support

* Initial implementation of AAudio based input recording
* Verified duplex operation on Pixel 3
* Made unit test mode work with the OpenSL mock, as before
* Updated example activity
* Fix handlers
* Buffer size and temporary buffers now update when the buffer size changes at runtime.
  Use the preferred floating point format for AAudio, use memory cloning for faster writes.
* README and inline doc copy updates
* Fall back to OpenSL default
---
 .gitignore                                    |   1 +
 README.md                                     |  35 +-
 build.gradle                                  |   2 +-
 src/main/cpp/Android.mk                       |  13 +-
 src/main/cpp/Application.mk                   |   1 +
 src/main/cpp/audioengine.cpp                  |  14 +-
 src/main/cpp/drivers/aaudio_io.cpp            | 707 +++++++++++-------
 src/main/cpp/drivers/aaudio_io.h              | 146 ++--
 src/main/cpp/drivers/adapter.cpp              |  22 +-
 src/main/cpp/drivers/adapter.h                |  20 +-
 src/main/cpp/events/basesynthevent.cpp        |   2 +-
 src/main/cpp/global.cpp                       |   7 -
 src/main/cpp/global.h                         |  10 +-
 .../nl/igorski/example/MWEngineActivity.java  |  77 +-
 src/main/res/layout/main.xml                  |  82 +-
 src/main/res/values/strings.xml               |  18 +-
 16 files changed, 699 insertions(+), 458 deletions(-)

diff --git a/.gitignore b/.gitignore
index 95993ebe..463d1a89 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,4 +10,5 @@ src/main/cpp/jni/java_interface_wrap.cpp
 libs/*
 obj
 build
+debug
 local.properties

diff --git a/README.md b/README.md
index 5bb80ce1..728c4927 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,7 @@ MWEngine is..
 =============
 
 ...an audio engine for Android, using either OpenSL (compatible with Android 4.1 and up) or AAudio
-(Android 8.0 and up) as the drivers for low latency audio performance.
The engine has been written for both
-[MikroWave](https://play.google.com/store/apps/details?id=nl.igorski.mikrowave.free&hl=en) and
-[Kosm](https://play.google.com/store/apps/details?id=nl.igorski.kosm&hl=en) to provide fast live audio synthesis. MWEngine is also used by [TIZE - Beat Maker, Music Maker](https://play.google.com/store/apps/details?id=com.tizemusic.tize).
+(Android 8.0 and up) as the drivers for low latency audio performance.
 
 MWEngine provides an architecture that allows you to work with audio within a _musical context_. It is easy to
 build upon the base classes and create your own noise generating mayhem. A few keywords describing the
@@ -16,21 +14,29 @@ out-of-the-box possibilities are:
 * effect chains operating on individual input/output channels
 * sample playback with real time pitch shifting
 * bouncing output to WAV files, either live (during a performance) or "offline"
-
+
 Also note that MWEngine's underlying audio drivers are _the same as Google Oboe uses_; MWEngine and
 Oboe are merely abstraction layers that solve the same problem in different ways. Additionally, MWEngine
 provides a complete audio processing environment.
 
+#### Who uses this ?
+
+The engine has been written for both [MikroWave](https://play.google.com/store/apps/details?id=nl.igorski.mikrowave.free&hl=en) and
+[Kosm](https://play.google.com/store/apps/details?id=nl.igorski.kosm&hl=en) to provide fast live audio synthesis.
+
+While development of those apps has slowed, the engine itself has been continuously improved and is now also
+used by third party app developers, such as [TIZE - Beat Maker, Music Maker](https://play.google.com/store/apps/details?id=com.tizemusic.tize).
+
 ### The [Issue Tracker](https://github.com/igorski/MWEngine/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc) is your point of contact
 
 Bug reports, feature requests, questions and discussions are welcome on the GitHub Issue Tracker; please do
 not send e-mails through the development website.
However, please search before posting to avoid duplicates, and limit to one issue per post.
 
 Please vote on feature requests by using the Thumbs Up/Down reaction on the first post.
 
-### C++ ??? What about Java ?
+### C++ ??? What about Java / Kotlin ?
 
 Though the library is written in C++ (and can be used solely within this context), the library can be built using JNI
 (Java Native Interface), which exposes its API to Java while still executing in a native layer outside of
-the Dalvik/ART VM. In other words : high performance of the engine is ensured by the native layer operations, while
+the JVM. In other words : high performance of the engine is ensured by the native layer operations, while
 ease of development is ensured by delegating application logic / UI to the realm of the Android Java SDK.
 
 Whether you intend to use MWEngine for its sample based playback or to leverage its built-in synthesizer and
@@ -139,19 +145,16 @@ sequence going using the library.
 
 To install the demo: first build the library as described above, and then run the build script to deploy the
 .APK onto an attached device/emulator (note that older emulated devices can only operate at a sample rate of 8 kHz!).
 
-### Note on AAudio
+### Note on OpenSL / AAudio drivers
+
+Currently it is not possible to switch between audio drivers on the fly; instead, you must compile
+the library for use with a specific driver. By default, the library compiles for OpenSL, which
+supports a wider range of devices. If you want to use AAudio instead (and thus are targeting solely
+devices running Android 8 and up) :
 
-The AAudio implementation has been built using (in Google's words): _"a Preview release of the AAudio library. The API
-might change in backward-incompatible ways in future releases. It is not recommended for production use."_ so use it
-at your own peril.
To use AAudio instead of OpenSL: - * change the desired driver in _global.h_ from type 0 (OpenSL) to 1 (AAudio) - * update the _Android.mk_ file to include all required adapters and libraries (simply set _BUILD_AAUDIO_ to 'true') - * update target in _project.properties_ to _android-26_ - -Once AAudio is a stable library, MWEngine will allow on-the-fly switching between OpenSL and AAudio drivers. -(!) MWEngine does not support recording from the device inputs using AAudio just yet, (https://github.com/igorski/MWEngine/issues/70) references this feature. +Should you require support for both driver variants, please file a feature request in the repository's issue tracker. ### Contributors diff --git a/build.gradle b/build.gradle index 88a0c423..2791594c 100644 --- a/build.gradle +++ b/build.gradle @@ -15,7 +15,7 @@ android { defaultConfig { applicationId "nl.igorski.example" - minSdkVersion 16 + minSdkVersion 26 // can go down to 16 when using OpenSL as the audio driver targetSdkVersion 27 versionCode 1 versionName "1.0.0" diff --git a/src/main/cpp/Android.mk b/src/main/cpp/Android.mk index 462ddaac..ed38b86c 100755 --- a/src/main/cpp/Android.mk +++ b/src/main/cpp/Android.mk @@ -1,6 +1,3 @@ -# Experimental AAudio support, set to true when building for AAudio (requires NDK target 26) -BUILD_AAUDIO = false - LOCAL_PATH := $(call my-dir) LOCAL_SRC_FILES := \ @@ -32,6 +29,7 @@ global.cpp \ jni/javabridge.cpp \ drivers/adapter.cpp \ drivers/opensl_io.c \ +drivers/aaudio_io.cpp \ utilities/utils.cpp \ audioengine.cpp \ audiobuffer.cpp \ @@ -94,14 +92,7 @@ modules/envelopefollower.cpp \ modules/lfo.cpp \ modules/routeableoscillator.cpp \ -ifeq ($(BUILD_AAUDIO),true) - LOCAL_SRC_FILES += \ - drivers/aaudio_io.cpp \ - - LOCAL_LDLIBS := -laaudio -endif - -LOCAL_LDLIBS += -lOpenSLES -landroid -latomic -llog +LOCAL_LDLIBS += -lOpenSLES -laaudio -landroid -latomic -llog include $(BUILD_SHARED_LIBRARY) diff --git a/src/main/cpp/Application.mk b/src/main/cpp/Application.mk index 
1d85f7dd..782a2666 100755 --- a/src/main/cpp/Application.mk +++ b/src/main/cpp/Application.mk @@ -3,6 +3,7 @@ APP_STL := c++_static APP_CPPFLAGS += -std=c++11 -Werror -fexceptions -frtti #APP_CPPFLAGS += -Wall APP_ABI := x86 x86_64 armeabi-v7a arm64-v8a +APP_PLATFORM = android-26 ifeq ($(TARGET_ARCH_ABI), x86) LOCAL_CFLAGS += -m32 diff --git a/src/main/cpp/audioengine.cpp b/src/main/cpp/audioengine.cpp index d1362654..a95125b5 100755 --- a/src/main/cpp/audioengine.cpp +++ b/src/main/cpp/audioengine.cpp @@ -151,8 +151,9 @@ namespace MWEngine { // generate the input buffer used for recording from the device's input // as well as the temporary buffer used to merge the input into - recbufferIn = new float[ AudioEngineProps::BUFFER_SIZE ](); + recbufferIn = new float[ AudioEngineProps::BUFFER_SIZE * AudioEngineProps::INPUT_CHANNELS ](); inputChannel->createOutputBuffer(); + #endif // accumulates all channels ("master strip") @@ -260,17 +261,19 @@ namespace MWEngine { // record audio from Android device ? 
if (( recordDeviceInput || recordInputToDisk ) && AudioEngineProps::INPUT_CHANNELS > 0 ) {
 
-            int recSamps = DriverAdapter::getInput( recbufferIn );
+            int recordedSamples = DriverAdapter::getInput( recbufferIn, amountOfSamples );
             SAMPLE_TYPE* recBufferChannel = inputChannel->getOutputBuffer()->getBufferForChannel( 0 );
 
-            for ( int j = 0; j < recSamps; ++j )
+            for ( int j = 0; j < recordedSamples; ++j ) {
                 recBufferChannel[ j ] = recbufferIn[ j ]; // static_cast<SAMPLE_TYPE>( recbufferIn[ j ] );
+            }
 
             // apply processing chain onto the input
 
             std::vector<BaseProcessor*> processors = inputChannel->processingChain->getActiveProcessors();
-            for ( int k = 0; k < processors.size(); ++k )
+            for ( int k = 0; k < processors.size(); ++k ) {
                 processors[ k ]->process( inputChannel->getOutputBuffer(), AudioEngineProps::INPUT_CHANNELS == 1 );
+            }
 
             // merge recording into current input buffer for instant monitoring
@@ -373,8 +376,9 @@ namespace MWEngine {
             }
 
             // write cache if it didn't happen yet ;) (bus processors are (currently) non-cacheable)
-            if ( mustCache )
+            if ( mustCache ) {
                 mustCache = !writeChannelCache( channel, channelBuffer, cacheReadPos );
+            }
 
             // write the channel buffer into the combined output buffer, apply channel volume
             // note live events are always audible as their volume is relative to the instrument
diff --git a/src/main/cpp/drivers/aaudio_io.cpp b/src/main/cpp/drivers/aaudio_io.cpp
index 712922c3..1bc86a8f 100644
--- a/src/main/cpp/drivers/aaudio_io.cpp
+++ b/src/main/cpp/drivers/aaudio_io.cpp
@@ -1,19 +1,33 @@
 /**
- * Copyright 2017 The Android Open Source Project
+ * The MIT License (MIT)
  *
- * Licensed under the Apache License, Version 2.0 (the "License");
+ * Copyright (c) 2017-2019 Igor Zinken - https://www.igorski.nl
+ *
+ * AAudio driver implementation adapted from the Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License" );
  * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. + * Permission is hereby granted, free of charge, to any person obtaining a copy of + * this software and associated documentation files (the "Software"), to deal in + * the Software without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of + * the Software, and to permit persons to whom the Software is furnished to do so, + * subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS + * FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR + * COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER + * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ - #include "aaudio_io.h" #include "../global.h" #include "../audioengine.h" @@ -32,64 +46,31 @@ static const int32_t audioFormatEnum[] = { AAUDIO_FORMAT_PCM_I16, AAUDIO_FORMAT_PCM_FLOAT, }; -static const int32_t audioFormatCount = sizeof(audioFormatEnum)/ - sizeof(audioFormatEnum[0]); - -static const uint32_t sampleFormatBPP[] = { - 0xffff, - 0xffff, - 16, //I16 - 32, //FLOAT -}; -uint16_t SampleFormatToBpp(aaudio_format_t format) { - for (int32_t i = 0; i < audioFormatCount; ++i) { - if (audioFormatEnum[i] == format) - return sampleFormatBPP[i]; - } - return 0xffff; -} -static const char * audioFormatStr[] = { - "AAUDIO_FORMAT_INVALID", // = -1, - "AAUDIO_FORMAT_UNSPECIFIED", // = 0, - "AAUDIO_FORMAT_PCM_I16", - "AAUDIO_FORMAT_PCM_FLOAT", -}; -const char* FormatToString(aaudio_format_t format) { - for (int32_t i = 0; i < audioFormatCount; ++i) { - if (audioFormatEnum[i] == format) - return audioFormatStr[i]; - } - return "UNKNOW_AUDIO_FORMAT"; -} int64_t timestamp_to_nanoseconds(timespec ts){ - return (ts.tv_sec * (int64_t) NANOS_PER_SECOND) + ts.tv_nsec; + return (ts.tv_sec * (int64_t) NANOS_PER_SECOND) + ts.tv_nsec; } int64_t get_time_nanoseconds(clockid_t clockid){ - timespec ts; - clock_gettime(clockid, &ts); - return timestamp_to_nanoseconds(ts); + timespec ts; + clock_gettime(clockid, &ts); + return timestamp_to_nanoseconds(ts); } /** * Every time the playback stream requires data this method will be called. 
*
- * @param stream the audio stream which is requesting data, this is the playStream_ object
- * @param userData the context in which the function is being called, in this case it will be the
- *                 AAudio instance
+ * @param stream the audio stream which is requesting data, this is the _outputStream object
+ * @param userData the context in which the function is being called (AAudio_IO instance)
 * @param audioData an empty buffer into which we can write our audio data
 * @param numFrames the number of audio frames which are required
 * @return Either AAUDIO_CALLBACK_RESULT_CONTINUE if the stream should continue requesting data
- * or AAUDIO_CALLBACK_RESULT_STOP if the stream should stop.
- *
- * @see AAudio#dataCallback
+ *         or AAUDIO_CALLBACK_RESULT_STOP if the stream should stop.
 */
-aaudio_data_callback_result_t dataCallback(AAudioStream *stream, void *userData,
-                                           void *audioData, int32_t numFrames) {
-    assert(userData && audioData);
-    AAudio_IO *audioEngine = reinterpret_cast<AAudio_IO*>(userData);
-    return audioEngine->dataCallback(stream, audioData, numFrames);
+aaudio_data_callback_result_t dataCallback( AAudioStream* stream, void* userData, void* audioData, int32_t numFrames ) {
+    assert(userData && audioData);
+    AAudio_IO* instance = reinterpret_cast<AAudio_IO*>( userData );
+    return instance->dataCallback( stream, audioData, numFrames );
 }
 
 /**
@@ -99,40 +80,51 @@ aaudio_data_callback_result_t dataCallback(AAudioStream *stream, void *userData,
 * recreation and restart.
*
 * @param stream the stream with the error
- * @param userData the context in which the function is being called, in this case it will be the
- *                 AAudio instance
+ * @param userData the context in which the function is being called (AAudio_IO instance)
 * @param error the error which occurred; a human readable string can be obtained using
 * AAudio_convertResultToText(error);
- *
- * @see AAudio#errorCallback
 */
-void errorCallback(AAudioStream *stream,
-                   void *userData,
-                   aaudio_result_t error) {
-    assert(userData);
-    AAudio_IO *audioEngine = reinterpret_cast<AAudio_IO*>(userData);
-    audioEngine->errorCallback(stream, error);
+void errorCallback( AAudioStream* stream, void *userData, aaudio_result_t error ) {
+    assert(userData);
+    AAudio_IO* instance = reinterpret_cast<AAudio_IO*>( userData );
+    instance->errorCallback( stream, error );
 }
 
-AAudio_IO::AAudio_IO( int amountOfChannels ) {
+AAudio_IO::AAudio_IO( int amountOfInputChannels, int amountOfOutputChannels ) {
+
+    _inputChannelCount  = ( int16_t ) amountOfInputChannels;
+    _outputChannelCount = ( int16_t ) amountOfOutputChannels;
 
-    sampleChannels_ = amountOfChannels;
-    sampleFormat_ = AAUDIO_FORMAT_PCM_I16;
+    // MWEngine operates internally using floating point resolution
+    // if floating point is supported by the hardware, we'd like to use it so we
+    // can omit converting samples when reading and writing from the streams
+    // sampleFormat can be updated during stream creation, if so, we will convert sample
+    // formats as "AAudio might perform sample conversion on its own" <- nicely vague Google !
 
-    // Create the output stream. By not specifying an audio device id we are telling AAudio that
-    // we want the stream to be created using the default playback audio device.
-    createPlaybackStream();
+    _sampleFormat = AAUDIO_FORMAT_PCM_FLOAT;
 
-    // created the buffer the output will be written into
-    _enqueuedBuffer = new int16_t[ AudioEngineProps::BUFFER_SIZE * sampleChannels_ ]{ 0 };
+    createAllStreams();
 
-    render = false;
+    render = false;
 }
 
-AAudio_IO::~AAudio_IO(){
+AAudio_IO::~AAudio_IO() {
+    closeAllStreams();
+
+    if ( _enqueuedOutputBuffer != nullptr ) {
+        delete[] _enqueuedOutputBuffer;
+        _enqueuedOutputBuffer = nullptr;
+    }
+
+    if ( _recordBuffer != nullptr ) {
+        delete[] _recordBuffer;
+        _recordBuffer = nullptr;
+    }
 
-    closeOutputStream();
-    delete _enqueuedBuffer;
+    if ( _recordBufferI != nullptr ) {
+        delete[] _recordBufferI;
+        _recordBufferI = nullptr;
+    }
 }
 
 /**
@@ -143,193 +135,368 @@ AAudio_IO::~AAudio_IO(){
 * @param deviceId the audio device id, can be obtained through an {@link AudioDeviceInfo} object
 * using Java/JNI.
 */
-void AAudio_IO::setDeviceId(int32_t deviceId){
+void AAudio_IO::setDeviceId( int32_t deviceId ) {
 
-    playbackDeviceId_ = deviceId;
+    _outputDeviceId = deviceId;
 
-    // If this is a different device from the one currently in use then restart the stream
-    int32_t currentDeviceId = AAudioStream_getDeviceId(playStream_);
-    if (deviceId != currentDeviceId) restartStream();
+    // If this is a different device from the one currently in use then restart the stream
+    int32_t currentDeviceId = AAudioStream_getDeviceId( _outputStream );
+    if ( _outputDeviceId != currentDeviceId ) {
+        restartStreams();
+    }
 }
 
-/**
- * Creates a stream builder which can be used to construct streams
- * @return a new stream builder object
- */
-AAudioStreamBuilder* AAudio_IO::createStreamBuilder() {
+void AAudio_IO::setRecordingDeviceId( int32_t deviceId ) {
 
-    AAudioStreamBuilder *builder = nullptr;
-    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
-    if (result != AAUDIO_OK && !builder) {
-        Debug::log( "AAudio_IO::Error creating stream builder: %s", AAudio_convertResultToText(result));
-    }
-    return builder;
+    _inputDeviceId =
deviceId; + + // If this is a different device from the one currently in use then restart the stream + int32_t currentDeviceId = AAudioStream_getDeviceId( _inputStream ); + if ( _inputDeviceId != currentDeviceId ) { + restartStreams(); + } } /** - * Creates an audio stream for playback. The audio device used will depend on playbackDeviceId_. + * Creates a stream builder which can be used to construct AAudioStreams */ -void AAudio_IO::createPlaybackStream(){ +AAudioStreamBuilder* AAudio_IO::createStreamBuilder() { + AAudioStreamBuilder* builder = nullptr; + aaudio_result_t result = AAudio_createStreamBuilder( &builder ); + if ( result != AAUDIO_OK && !builder ) { + Debug::log( "AAudio_IO::Error creating stream builder: %s", AAudio_convertResultToText( result )); + } + return builder; +} - AAudioStreamBuilder* builder = createStreamBuilder(); +void AAudio_IO::createAllStreams() { - if (builder != nullptr){ + // Create the output stream + // This will also create the appropriate read and write buffers - setupPlaybackStreamParameters(builder); + createOutputStream(); - aaudio_result_t result = AAudioStreamBuilder_openStream(builder, &playStream_); + // Create the recording stream + // Note: The order of stream creation is important. We create the playback stream first, + // then use properties from the playback stream (e.g. sample rate) to create the + // recording stream. 
By matching the properties we should get the lowest latency path - if (result == AAUDIO_OK && playStream_ != nullptr){ + if ( _inputChannelCount > 0 ) { + createInputStream(); + } - // check that we got PCM_I16 format - if (sampleFormat_ != AAudioStream_getFormat(playStream_)) { - Debug::log( "AAudio_IO::Sample format is not PCM_I16"); - } + // Now start the recording stream first so that we can read from it during the playback + // stream's dataCallback - which is delegated to the driver adapter using getInput() - - sampleRate_ = AAudioStream_getSampleRate(playStream_); - framesPerBurst_ = AAudioStream_getFramesPerBurst(playStream_); + if ( _inputStream != nullptr ) { + startStream( _inputStream ); + } + if ( _outputStream != nullptr ) { + startStream( _outputStream ); + } +} + +void AAudio_IO::createInputStream() { - // Set the buffer size to the burst size - this will give us the minimum possible latency - AAudioStream_setBufferSizeInFrames(playStream_, framesPerBurst_); - bufSizeInFrames_ = framesPerBurst_; + AAudioStreamBuilder* builder = createStreamBuilder(); -// PrintAudioStreamInfo(playStream_); + if ( builder == nullptr ) { + Debug::log( "AAudio_IO::Unable to obtain an AAudioStreamBuilder object" ); + return; + } + setupInputStream( builder ); + + // Now that the parameters are set up we can open the stream + aaudio_result_t result = AAudioStreamBuilder_openStream( builder, &_inputStream ); + if ( result == AAUDIO_OK && _inputStream != nullptr ) { + if ( AAudioStream_getPerformanceMode( _inputStream ) != AAUDIO_PERFORMANCE_MODE_LOW_LATENCY ){ + Debug::log( "AAudio_IO::Input stream is NOT low latency. Check your requested format, sample rate and channel count" ); + } +// PrintAudioStreamInfo( _inputStream ); + } else { + Debug::log( "Failed to create recording stream. 
Error: %s", AAudio_convertResultToText( result )); + } + AAudioStreamBuilder_delete( builder ); +} - // Start the stream - the dataCallback function will start being called - result = AAudioStream_requestStart(playStream_); - if (result != AAUDIO_OK) { - Debug::log( "AAudio_IO::Error starting stream. %s", AAudio_convertResultToText(result)); - } +void AAudio_IO::createOutputStream() { - // Store the underrun count so we can tune the latency in the dataCallback - playStreamUnderrunCount_ = AAudioStream_getXRunCount(playStream_); + AAudioStreamBuilder* builder = createStreamBuilder(); - } else { - Debug::log( "AAudio_IO::Failed to create stream. Error: %s", AAudio_convertResultToText(result)); + if ( builder == nullptr ) { + Debug::log( "AAudio_IO::Unable to obtain an AAudioStreamBuilder object" ); + return; } - AAudioStreamBuilder_delete(builder); + setupOutputStream( builder ); + + aaudio_result_t result = AAudioStreamBuilder_openStream( builder, &_outputStream ); + + if ( result == AAUDIO_OK && _outputStream != nullptr ) { + + if ( AAudioStream_getPerformanceMode( _outputStream ) != AAUDIO_PERFORMANCE_MODE_LOW_LATENCY ){ + Debug::log( "AAudio_IO::Output stream is NOT low latency. 
Check your requested format, sample rate and channel count" ); + } + + // verify requested format and update in case hardware does not support it + // ideally we work in floating point across the engine to omit the need to convert samples + + if ( _sampleFormat != AAudioStream_getFormat( _outputStream )) { + Debug::log( "AAudio_IO::Sample format does not match requested format %d", _sampleFormat ); + _sampleFormat = AAudioStream_getFormat( _outputStream ); + } + + _sampleRate = AAudioStream_getSampleRate( _outputStream ); + _framesPerBurst = AAudioStream_getFramesPerBurst( _outputStream ); + + AudioEngineProps::SAMPLE_RATE = _sampleRate; + + // Set the buffer size to the burst size - this will give us the minimum possible latency + // This will also create the temporary read and write buffers - } else { - Debug::log( "AAudio_IO::Unable to obtain an AAudioStreamBuilder object"); - } + updateBufferSizeInFrames( AAudioStream_setBufferSizeInFrames( _outputStream, _framesPerBurst )); + +// PrintAudioStreamInfo(_outputStream); + + // Store the underrun count so we can tune the latency in the dataCallback + _underrunCountOutputStream = AAudioStream_getXRunCount( _outputStream ); + + } else { + Debug::log( "AAudio_IO::Failed to create stream. Error: %s", AAudio_convertResultToText( result )); + } + AAudioStreamBuilder_delete( builder ); } /** * Sets the stream parameters which are specific to playback, including device id and the * dataCallback function, which must be set for low latency playback. - * @param builder The playback stream builder */ -void AAudio_IO::setupPlaybackStreamParameters(AAudioStreamBuilder *builder) { - AAudioStreamBuilder_setDeviceId(builder, playbackDeviceId_); - AAudioStreamBuilder_setFormat(builder, sampleFormat_); - AAudioStreamBuilder_setChannelCount(builder, sampleChannels_); - - // We request EXCLUSIVE mode since this will give us the lowest possible latency. - // If EXCLUSIVE mode isn't available the builder will fall back to SHARED mode. 
- AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE); - AAudioStreamBuilder_setPerformanceMode(builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY); - AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_OUTPUT); - AAudioStreamBuilder_setDataCallback(builder, ::dataCallback, this); - AAudioStreamBuilder_setErrorCallback(builder, ::errorCallback, this); +void AAudio_IO::setupOutputStream ( AAudioStreamBuilder* builder ) { + AAudioStreamBuilder_setDeviceId ( builder, _outputDeviceId ); + AAudioStreamBuilder_setFormat ( builder, _sampleFormat ); + AAudioStreamBuilder_setChannelCount( builder, _outputChannelCount ); + + // We request EXCLUSIVE mode since this will give us the lowest possible latency. + // If EXCLUSIVE mode isn't available the builder will fall back to SHARED mode. + + AAudioStreamBuilder_setSharingMode ( builder, AAUDIO_SHARING_MODE_EXCLUSIVE ); + AAudioStreamBuilder_setPerformanceMode( builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY ); + AAudioStreamBuilder_setDirection ( builder, AAUDIO_DIRECTION_OUTPUT ); + AAudioStreamBuilder_setDataCallback ( builder, ::dataCallback, this ); + AAudioStreamBuilder_setErrorCallback ( builder, ::errorCallback, this ); +} + +/** + * Sets the stream parameters which are specific to recording, including the sample rate which + * is determined from the playback stream. + */ +void AAudio_IO::setupInputStream( AAudioStreamBuilder* builder ) { + AAudioStreamBuilder_setDeviceId ( builder, _inputDeviceId ); + AAudioStreamBuilder_setSampleRate ( builder, _sampleRate ); + AAudioStreamBuilder_setChannelCount( builder, _inputChannelCount ); + AAudioStreamBuilder_setFormat ( builder, _sampleFormat ); + + // We request EXCLUSIVE mode since this will give us the lowest possible latency. + // If EXCLUSIVE mode isn't available the builder will fall back to SHARED mode. 
+ + AAudioStreamBuilder_setSharingMode ( builder, AAUDIO_SHARING_MODE_EXCLUSIVE ); + AAudioStreamBuilder_setPerformanceMode( builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY ); + AAudioStreamBuilder_setDirection ( builder, AAUDIO_DIRECTION_INPUT ); + AAudioStreamBuilder_setErrorCallback ( builder, ::errorCallback, this ); } -void AAudio_IO::closeOutputStream(){ +void AAudio_IO::startStream( AAudioStream* stream ) { + aaudio_result_t result = AAudioStream_requestStart( stream ); + if ( result != AAUDIO_OK ) { + Debug::log( "AAudio_IO::Error starting stream. %s", AAudio_convertResultToText( result )); + } +} - if (playStream_ != nullptr){ - aaudio_result_t result = AAudioStream_requestStop(playStream_); - if (result != AAUDIO_OK){ - Debug::log( "AAudio_IO::Error stopping output stream. %s", AAudio_convertResultToText(result)); +void AAudio_IO::stopStream( AAudioStream* stream ) { + if ( stream == nullptr ) { + return; } + aaudio_result_t result = AAudioStream_requestStop( stream ); + if ( result != AAUDIO_OK ) { + Debug::log( "AAudio_IO::Error stopping stream. %s", AAudio_convertResultToText( result )); + } +} - result = AAudioStream_close(playStream_); - if (result != AAUDIO_OK){ - Debug::log( "AAudio_IO::Error closing output stream. %s", AAudio_convertResultToText(result)); +void AAudio_IO::closeStream( AAudioStream* stream ) { + if ( stream == nullptr ) { + return; + } + stopStream( stream ); + aaudio_result_t result = AAudioStream_close( stream ); + if ( result != AAUDIO_OK ) { + Debug::log( "AAudio_IO::Error closing stream. 
%s", AAudio_convertResultToText( result )); } - } } /** - * @see dataCallback function at top of this file + * Invoked whenever the AAudio drivers frame buffer size has updated + * through AAudioStream_setBufferSizeInFrames (see dataCallback()) + * + * This allows us to synchronize the changes across the engine and ensures we have the + * appropriate size for our temporary read/write buffers */ -aaudio_data_callback_result_t AAudio_IO::dataCallback(AAudioStream *stream, - void *audioData, - int32_t numFrames) { - assert(stream == playStream_); - - int32_t underrunCount = AAudioStream_getXRunCount(playStream_); - aaudio_result_t bufferSize = AAudioStream_getBufferSizeInFrames(playStream_); - bool hasUnderrunCountIncreased = false; - bool shouldChangeBufferSize = false; - - if (underrunCount > playStreamUnderrunCount_){ - playStreamUnderrunCount_ = underrunCount; - hasUnderrunCountIncreased = true; - } - - if (hasUnderrunCountIncreased && bufferSizeSelection_ == BUFFER_SIZE_AUTOMATIC){ - - /** - * This is a buffer size tuning algorithm. If the number of underruns (i.e. instances where - * we were unable to supply sufficient data to the stream) has increased since the last callback - * we will try to increase the buffer size by the burst size, which will give us more protection - * against underruns in future, at the cost of additional latency. 
-     */
-    bufferSize += framesPerBurst_; // Increase buffer size by one burst
-    shouldChangeBufferSize = true;
-  } else if (bufferSizeSelection_ > 0 && (bufferSizeSelection_ * framesPerBurst_) != bufferSize){
-
-    // If the buffer size selection has changed then update it here
-    bufferSize = bufferSizeSelection_ * framesPerBurst_;
-    shouldChangeBufferSize = true;
-  }
-
-  if (shouldChangeBufferSize){
-    Debug::log( "AAudio_IO::Setting buffer size to %d", bufferSize);
-    bufferSize = AAudioStream_setBufferSizeInFrames(stream, bufferSize);
-    if (bufferSize > 0) {
-      bufSizeInFrames_ = bufferSize;
-    } else {
-      Debug::log( "AAudio_IO::Error setting buffer size: %s", AAudio_convertResultToText(bufferSize));
+void AAudio_IO::updateBufferSizeInFrames( int bufferSize ) {
+    bool update = _bufferSizeInFrames != bufferSize || _enqueuedOutputBuffer == nullptr;
+
+    if ( !update ) {
+        return;
+    }
+
+    Debug::log( "AAudio_IO::Setting buffer size to %d", bufferSize );
+
+    _bufferSizeInFrames = bufferSize;
+
+    // sync across engine
+    AudioEngineProps::BUFFER_SIZE = _bufferSizeInFrames;
+
+    // update temporary buffers as their size is now known (this operation should always happen
+    // before or after a read and write, ensuring no data loss / null pointer)
+
+    // create the temporary buffers used to write data from and to the AudioEngine during playback and recording
+    delete[] _enqueuedOutputBuffer;
+    _enqueuedOutputBuffer = new float[ _bufferSizeInFrames * _outputChannelCount ]{ 0 };
+
+    if ( _inputChannelCount > 0 ) {
+        if ( _sampleFormat == AAUDIO_FORMAT_PCM_I16 ) {
+            delete[] _recordBufferI;
+            _recordBufferI = new int16_t[ _bufferSizeInFrames * _inputChannelCount ]{ 0 };
+        } else {
+            delete[] _recordBuffer;
+            _recordBuffer = new float[ _bufferSizeInFrames * _inputChannelCount ]{ 0 };
+        }
+    }
+}
+
+aaudio_data_callback_result_t AAudio_IO::dataCallback( AAudioStream* stream, void *audioData, int32_t numFrames ) {
+    assert( stream == _outputStream );
+
+    int32_t underrunCount =
AAudioStream_getXRunCount( stream ); + aaudio_result_t bufferSize = AAudioStream_getBufferSizeInFrames( stream ); + bool hasUnderrunCountIncreased = false; + bool shouldChangeBufferSize = false; + + if ( underrunCount > _underrunCountOutputStream ) { + _underrunCountOutputStream = underrunCount; + hasUnderrunCountIncreased = true; } - } - //Debug::log( "AAudio_IO::numFrames %d, Underruns %d, buffer size %d", numFrames, underrunCount, bufferSize); + if ( hasUnderrunCountIncreased && _bufferSizeSelection == BUFFER_SIZE_AUTOMATIC ) { + + // This is a buffer size tuning algorithm. If the number of underruns (i.e. instances where + // we were unable to supply sufficient data to the stream) has increased since the last callback + // we will try to increase the buffer size by the burst size, which will give us more protection + // against underruns in the future, at the cost of additional latency. + + bufferSize += _framesPerBurst; // Increase buffer size by one burst + shouldChangeBufferSize = true; + } + else if ( _bufferSizeSelection > 0 && ( _bufferSizeSelection * _framesPerBurst ) != bufferSize ) + { + // If the buffer size selection has changed then update it here + bufferSize = _bufferSizeSelection * _framesPerBurst; + shouldChangeBufferSize = true; + } + + // Debug::log( "AAudio_IO::numFrames %d, Underruns %d, buffer size %d", numFrames, underrunCount, bufferSize); + + // rendering requested by AudioEngine ? (through the driver adapter) + + if ( render ) { + + // if there is an input stream and recording is active, read the stream contents + + if ( _inputStream != nullptr && ( AudioEngine::recordDeviceInput || AudioEngine::recordInputToDisk )) { + + // drain existing buffer contents on first write to make sure no lingering data is present + + if ( _flushInputOnCallback ) { + flushInputStream( audioData, numFrames ); + _flushInputOnCallback = false; + } + + aaudio_result_t readFrames = AAudioStream_read( + _inputStream, + _sampleFormat == AAUDIO_FORMAT_PCM_I16 ? 
( void* ) _recordBufferI : ( void* ) _recordBuffer,
+            std::min( _bufferSizeInFrames, numFrames ), static_cast<int64_t>( 0 )
+        );
+
+        if ( readFrames < 0 ) {
+            Debug::log( "AAudio_IO::AAudioStream_read() returns read %s frames", AAudio_convertResultToText( readFrames ));
+        }
+    }
+
+    // invoke the render() method of the engine to collect audio into the enqueued buffer
+    // if it returns false, we can stop this stream (render thread has stopped)
-    // rendering requested ?
+    if ( !AudioEngine::render( numFrames )) {
+        return AAUDIO_CALLBACK_RESULT_STOP;
+    }
-    if ( render ) {
+    // write enqueued buffer into the output buffer (both contain interleaved samples)
-        // invoke the render() method of the engine to collect audio
-        // if it returns false, we can stop this stream (render thread has stopped)
+    int samplesToWrite = numFrames * _outputChannelCount;
-        if ( !AudioEngine::render( numFrames ))
-            return AAUDIO_CALLBACK_RESULT_STOP;
-    }
+    if ( _sampleFormat == AAUDIO_FORMAT_PCM_I16 ) {
-    // write enqueued buffer into the output buffer (both interleaved int16_t)
+        // ideally the hardware supports floating point samples, in case it is running
+        // as 16-bit PCM, convert the samples provided by the engine
-    int16_t* outputBuffer = static_cast<int16_t*>( audioData );
-    for ( int i = 0; i < numFrames; ++i ) {
-        outputBuffer[ i ] = _enqueuedBuffer[ i ];
-    }
+        auto outputBuffer = static_cast<int16_t*>( audioData );
+        for ( int i = 0; i < samplesToWrite; ++i ) {
+            outputBuffer[ i ] = ( int16_t ) ( _enqueuedOutputBuffer[ i ] * CONV16BIT );
+        }
+    } else {
-    calculateCurrentOutputLatencyMillis(stream, &currentOutputLatencyMillis_);
+        // hardware supports floating point operation, copy the buffer contents directly
-    return AAUDIO_CALLBACK_RESULT_CONTINUE;
+        memcpy( static_cast<void*>( audioData ), _enqueuedOutputBuffer, samplesToWrite * sizeof( float ));
+        }
+    }
+
+    calculateCurrentOutputLatencyMillis( stream, &currentOutputLatencyMillis_ );
+
+    if ( shouldChangeBufferSize ) {
+        bufferSize = AAudioStream_setBufferSizeInFrames(
stream, bufferSize ); + if ( bufferSize > 0 ) { + updateBufferSizeInFrames( bufferSize ); + } else { + Debug::log( "AAudio_IO::Error setting buffer size: %s", AAudio_convertResultToText( bufferSize )); + } + } + + return AAUDIO_CALLBACK_RESULT_CONTINUE; } /** - * enqueue a buffer for rendering in the next callback + * enqueue a buffer (of interleaved samples) for rendering + * this is invoked by AudioEngine::render() upon request of the dataCallback method + */ +void AAudio_IO::enqueueOutputBuffer( float* sourceBuffer, int amountOfSamples ) { + memcpy( _enqueuedOutputBuffer, sourceBuffer, amountOfSamples * sizeof( float )); +} + +/** + * retrieve the recorded input buffer populated by the dataCallback method * this is invoked by AudioEngine::render() - * - * buffer already contains interleaved samples, merely need to be converted - * from floating point values into 16-bit shorts */ -void AAudio_IO::enqueueBuffer( float* outputBuffer, int amountOfSamples ) { - for ( int i = 0; i < amountOfSamples; ++i ) { - _enqueuedBuffer[ i ] = ( int16_t )( outputBuffer[ i ] * CONV16BIT ); +int AAudio_IO::getEnqueuedInputBuffer( float* destinationBuffer, int amountOfSamples ) { + if ( _sampleFormat == AAUDIO_FORMAT_PCM_I16 ) { + + // ideally the hardware supports floating point samples, in case it is running + // as 16-bit PCM, convert the samples into floating point for use in the engine + + for ( int i = 0; i < amountOfSamples; ++i ) { + destinationBuffer[ i ] = ( float ) _recordBufferI[ i ] * ( float ) CONVMYFLT; + } + } else { + memcpy( destinationBuffer, _recordBuffer, amountOfSamples * sizeof( float )); } + return amountOfSamples; // TODO (?) : assumption here that the amount read equals the given recordBuffer size } /** @@ -351,82 +518,102 @@ void AAudio_IO::enqueueBuffer( float* outputBuffer, int amountOfSamples ) { * @return AAUDIO_OK or a negative error. It is normal to receive an error soon after a stream * has started because the timestamps are not yet available. 
*/ -aaudio_result_t -AAudio_IO::calculateCurrentOutputLatencyMillis(AAudioStream *stream, double *latencyMillis) { +aaudio_result_t AAudio_IO::calculateCurrentOutputLatencyMillis( AAudioStream* stream, double *latencyMillis ) { - // Get the time that a known audio frame was presented for playing - int64_t existingFrameIndex; - int64_t existingFramePresentationTime; - aaudio_result_t result = AAudioStream_getTimestamp(stream, - CLOCK_MONOTONIC, - &existingFrameIndex, - &existingFramePresentationTime); + // Get the time that a known audio frame was presented for playing + int64_t existingFrameIndex; + int64_t existingFramePresentationTime; + aaudio_result_t result = AAudioStream_getTimestamp( stream, + CLOCK_MONOTONIC, + &existingFrameIndex, + &existingFramePresentationTime ); - if (result == AAUDIO_OK){ + if ( result == AAUDIO_OK ) { + // Get the write index for the next audio frame + int64_t writeIndex = AAudioStream_getFramesWritten(stream); - // Get the write index for the next audio frame - int64_t writeIndex = AAudioStream_getFramesWritten(stream); + // Calculate the number of frames between our known frame and the write index + int64_t frameIndexDelta = writeIndex - existingFrameIndex; - // Calculate the number of frames between our known frame and the write index - int64_t frameIndexDelta = writeIndex - existingFrameIndex; + // Calculate the time which the next frame will be presented + int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / _sampleRate; + int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta; - // Calculate the time which the next frame will be presented - int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / sampleRate_; - int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta; + // Assume that the next frame will be written at the current time + int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC); - // Assume that the next frame will be written at the 
current time
-        int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC);
+        // Calculate the latency
+        *latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime)
+                         / NANOS_PER_MILLISECOND;
+    } else {
+        Debug::log( "AAudio_IO::Error calculating latency: %s", AAudio_convertResultToText( result ));
+    }
+    return result;
+}
+
+void AAudio_IO::errorCallback( AAudioStream* stream, aaudio_result_t error ) {
-        // Calculate the latency
-        *latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime)
-                         / NANOS_PER_MILLISECOND;
-    } else {
-        Debug::log( "AAudio_IO::Error calculating latency: %s", AAudio_convertResultToText(result));
-    }
+    assert(stream == _outputStream || stream == _inputStream);
+    Debug::log( "AAudio_IO::errorCallback result: %s", AAudio_convertResultToText( error ));
-    return result;
+    aaudio_stream_state_t streamState = AAudioStream_getState( _outputStream );
+    if ( streamState == AAUDIO_STREAM_STATE_DISCONNECTED ) {
+        // Handle stream restart on a separate thread
+        std::function<void()> restartStreams = std::bind( &AAudio_IO::restartStreams, this );
+        _streamRestartThread = new std::thread( restartStreams );
+    }
+}
+
+void AAudio_IO::closeAllStreams() {
+    if ( _outputStream != nullptr ) {
+        closeStream( _outputStream );
+        _outputStream = nullptr;
+    }
+    if ( _inputStream != nullptr ) {
+        closeStream( _inputStream );
+        _inputStream = nullptr;
+    }
 }
 
 /**
- * @see errorCallback function at top of this file
+ * Drain the recording stream of any existing data by reading from it until it's empty. This is
+ * usually run to clear out any stale data before performing an actual read operation, thereby
+ * obtaining the most recently recorded data and the best possible recording latency.
+ *
+ * @param audioData A buffer which the existing data can be read into
+ * @param numFrames The number of frames to read in a single read operation, this is typically the
+ *                  size of `audioData`.
 */
-void AAudio_IO::errorCallback(AAudioStream *stream,
-                              aaudio_result_t error){
-
-    assert(stream == playStream_);
-    Debug::log( "AAudio_IO::errorCallback result: %s", AAudio_convertResultToText(error));
-
-    aaudio_stream_state_t streamState = AAudioStream_getState(playStream_);
-    if (streamState == AAUDIO_STREAM_STATE_DISCONNECTED){
-
-        // Handle stream restart on a separate thread
-        std::function<void(void)> restartStream = std::bind(&AAudio_IO::restartStream, this);
-        streamRestartThread_ = new std::thread(restartStream);
-    }
+void AAudio_IO::flushInputStream( void *audioData, int32_t numFrames ) {
+    aaudio_result_t clearedFrames = 0;
+    do {
+        clearedFrames = AAudioStream_read( _inputStream, audioData, numFrames, 0 );
+    } while ( clearedFrames > 0 );
 }
 
-void AAudio_IO::restartStream(){
-    Debug::log( "AAudio_IO::Restarting stream");
+void AAudio_IO::restartStreams() {
 
-    if (restartingLock_.try_lock()){
-        closeOutputStream();
-        createPlaybackStream();
-        restartingLock_.unlock();
-    } else {
-        Debug::log( "AAudio_IO::Restart stream operation already in progress - ignoring this request");
-        // We were unable to obtain the restarting lock which means the restart operation is currently
-        // active. This is probably because we received successive "stream disconnected" events.
-        // Internal issue b/63087953
-    }
+    Debug::log( "AAudio_IO::Restarting streams" );
+
+    if ( _restartingLock.try_lock() ) {
+        closeAllStreams();
+        createAllStreams();
+        _restartingLock.unlock();
+    } else {
+        Debug::log( "AAudio_IO::Restart stream operation already in progress - ignoring this request" );
+        // We were unable to obtain the restarting lock which means the restart operation is currently
+        // active. This is probably because we received successive "stream disconnected" events.
+ // Internal issue b/63087953 + } } double AAudio_IO::getCurrentOutputLatencyMillis() { - return currentOutputLatencyMillis_; + return currentOutputLatencyMillis_; } -void AAudio_IO::setBufferSizeInBursts(int32_t numBursts) { - AAudio_IO::bufferSizeSelection_ = numBursts; +void AAudio_IO::setBufferSizeInBursts( int32_t numBursts ) { + AAudio_IO::_bufferSizeSelection = numBursts; } } // E.O namespace MWEngine diff --git a/src/main/cpp/drivers/aaudio_io.h b/src/main/cpp/drivers/aaudio_io.h index f4734c53..966ce2ce 100644 --- a/src/main/cpp/drivers/aaudio_io.h +++ b/src/main/cpp/drivers/aaudio_io.h @@ -1,5 +1,9 @@ -/* - * Copyright 2017 The Android Open Source Project +/** + * The MIT License (MIT) + * + * Copyright (c) 2017-2019 Igor Zinken - https://www.igorski.nl + * + * AAudio driver implementation adapted from the Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -7,11 +11,22 @@ * * http://www.apache.org/licenses/LICENSE-2.0 * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. + * Permission is hereby granted, free of charge, to any person obtaining a copy of + * this software and associated documentation files (the "Software"), to deal in + * the Software without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of + * the Software, and to permit persons to whom the Software is furnished to do so, + * subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. 
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+ * COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 */
 #ifndef AAUDIO_PLAYAUDIOENGINE_H
 #define AAUDIO_PLAYAUDIOENGINE_H
@@ -27,66 +42,73 @@ namespace MWEngine {
 #define NANOS_PER_SECOND 1000000000L
 #define NANOS_PER_MILLISECOND 1000000L
-uint16_t SampleFormatToBpp(aaudio_format_t format);
-/*
- * GetSystemTicks(void): return the time in micro sec
- */
-__inline__ uint64_t GetSystemTicks(void) {
-    struct timeval Time;
-    gettimeofday( &Time, NULL );
-
-    return (static_cast<uint64_t>(1000000) * Time.tv_sec + Time.tv_usec);
-}
-
 void PrintAudioStreamInfo(const AAudioStream * stream);
 int64_t timestamp_to_nanoseconds(timespec ts);
 int64_t get_time_nanoseconds(clockid_t clockid);
-class AAudio_IO {
-
-public:
-    AAudio_IO( int amountOfChannels );
-    ~AAudio_IO();
-    void setDeviceId(int32_t deviceId);
-    void setBufferSizeInBursts(int32_t numBursts);
-    aaudio_data_callback_result_t dataCallback(AAudioStream *stream,
-                                               void *audioData,
-                                               int32_t numFrames);
-    void errorCallback(AAudioStream *stream,
-                       aaudio_result_t __unused error);
-    double getCurrentOutputLatencyMillis();
-    void enqueueBuffer( float* outputBuffer, int amountOfSamples );
-    bool render;
-
-private:
-
-    int32_t playbackDeviceId_ = AAUDIO_UNSPECIFIED;
-    int32_t sampleRate_;
-    int16_t sampleChannels_;
-    int16_t* _enqueuedBuffer;
-    aaudio_format_t sampleFormat_;
-
-    AAudioStream *playStream_;
-
-    int32_t playStreamUnderrunCount_;
-    int32_t bufSizeInFrames_;
-    int32_t framesPerBurst_;
-    double currentOutputLatencyMillis_ = 0;
-    int32_t bufferSizeSelection_ = BUFFER_SIZE_AUTOMATIC;
-
-
std::thread* streamRestartThread_; - std::mutex restartingLock_; - - void createPlaybackStream(); - void closeOutputStream(); - void restartStream(); - - AAudioStreamBuilder* createStreamBuilder(); - void setupPlaybackStreamParameters(AAudioStreamBuilder *builder); - - aaudio_result_t calculateCurrentOutputLatencyMillis(AAudioStream *stream, double *latencyMillis); -}; +class AAudio_IO +{ + public: + AAudio_IO( int amountOfInputChannels, int amountOfOutputChannels ); + ~AAudio_IO(); + void setDeviceId ( int32_t deviceId ); + void setRecordingDeviceId ( int32_t recordingDeviceId ); + void setBufferSizeInBursts( int32_t numBursts ); + aaudio_data_callback_result_t dataCallback( AAudioStream *stream, + void *audioData, + int32_t numFrames ); + void errorCallback( AAudioStream *stream, aaudio_result_t __unused error ); + double getCurrentOutputLatencyMillis(); + int getEnqueuedInputBuffer( float* destinationBuffer, int amountOfSamples ); + void enqueueOutputBuffer ( float* sourceBuffer, int amountOfSamples ); + bool render; + + private: + + // By not specifying an audio device id we are telling AAudio that + // we want the stream to be created using the default playback audio device. 
+ int32_t _outputDeviceId = AAUDIO_UNSPECIFIED; + int32_t _inputDeviceId = AAUDIO_UNSPECIFIED; + + int32_t _sampleRate; + int16_t _inputChannelCount; + int16_t _outputChannelCount; + aaudio_format_t _sampleFormat; + float* _enqueuedOutputBuffer = nullptr; + float* _recordBuffer = nullptr; + int16_t* _recordBufferI = nullptr; + bool _flushInputOnCallback = true; + + AAudioStream* _inputStream = nullptr; + AAudioStream* _outputStream = nullptr; + + int32_t _underrunCountOutputStream; + int32_t _bufferSizeInFrames; + int32_t _framesPerBurst; + double currentOutputLatencyMillis_ = 0; + int32_t _bufferSizeSelection = BUFFER_SIZE_AUTOMATIC; + + std::thread* _streamRestartThread; + std::mutex _restartingLock; + + void createInputStream(); + void createOutputStream(); + void createAllStreams(); + void startStream( AAudioStream* stream ); + void stopStream ( AAudioStream* stream ); + void closeStream( AAudioStream* stream ); + void closeAllStreams(); + void flushInputStream( void *audioData, int32_t numFrames ); + void restartStreams(); + + AAudioStreamBuilder* createStreamBuilder(); + void setupOutputStream( AAudioStreamBuilder* builder ); + void setupInputStream ( AAudioStreamBuilder* builder ); + void updateBufferSizeInFrames( int bufferSize ); + + aaudio_result_t calculateCurrentOutputLatencyMillis(AAudioStream *stream, double *latencyMillis); + }; } // E.O namespace MWEngine diff --git a/src/main/cpp/drivers/adapter.cpp b/src/main/cpp/drivers/adapter.cpp index 8a852d56..80c60a70 100644 --- a/src/main/cpp/drivers/adapter.cpp +++ b/src/main/cpp/drivers/adapter.cpp @@ -1,7 +1,7 @@ /** * The MIT License (MIT) * - * Copyright (c) 2017-2018 Igor Zinken - http://www.igorski.nl + * Copyright (c) 2017-2019 Igor Zinken - http://www.igorski.nl * * Permission is hereby granted, free of charge, to any person obtaining a copy of * this software and associated documentation files (the "Software"), to deal in @@ -52,13 +52,18 @@ namespace DriverAdapter { // AAudio driver_aAudio = new 
AAudio_IO( - AudioEngineProps::OUTPUT_CHANNELS + AudioEngineProps::INPUT_CHANNELS, AudioEngineProps::OUTPUT_CHANNELS ); - // TODO: specify these from outside? + + if ( driver_aAudio == nullptr ) { + return false; + } + // TODO: allow specifying these from the outside? // driver_aAudio->setDeviceId(); + // driver_aAudio->setRecordingDeviceId(); driver_aAudio->setBufferSizeInBursts( 1 ); // Google provides {0, 1, 2, 4, 8} as values - return ( driver_aAudio != nullptr ); + return true; #endif } @@ -99,18 +104,19 @@ namespace DriverAdapter { android_AudioOut( driver_openSL, outputBuffer, amountOfSamples ); #elif DRIVER == 1 // AAudio - driver_aAudio->enqueueBuffer( outputBuffer, amountOfSamples ); + driver_aAudio->enqueueOutputBuffer( outputBuffer, amountOfSamples ); #endif } - int getInput( float* recordBuffer ) { + int getInput( float* recordBuffer, int amountOfSamples ) { #if DRIVER == 0 // OpenSL return android_AudioIn( driver_openSL, recordBuffer, AudioEngineProps::BUFFER_SIZE ); +#elif DRIVER == 1 + // AAudio + return driver_aAudio->getEnqueuedInputBuffer( recordBuffer, amountOfSamples ); #endif - // TODO: no AAudio recording yet - return 0; } } diff --git a/src/main/cpp/drivers/adapter.h b/src/main/cpp/drivers/adapter.h index f7ae7567..1eee8005 100644 --- a/src/main/cpp/drivers/adapter.h +++ b/src/main/cpp/drivers/adapter.h @@ -1,7 +1,7 @@ /** * The MIT License (MIT) * - * Copyright (c) 2017-2018 Igor Zinken - http://www.igorski.nl + * Copyright (c) 2017-2019 Igor Zinken - http://www.igorski.nl * * Permission is hereby granted, free of charge, to any person obtaining a copy of * this software and associated documentation files (the "Software"), to deal in @@ -25,22 +25,24 @@ #include "../global.h" -// whether to include the OpenSL, AAudio or mocked (unit test mode) driver for audio output - -#if DRIVER == 0 - -// OpenSL +// whether to include the OpenSL, AAudio or OpenSL mock (used during unit tests) driver for audio output #ifdef MOCK_ENGINE // mocking 
requested, e.g. unit test mode #include "../tests/helpers/mock_opensl_io.h" +// run as mocked OpenSL driver +#undef DRIVER +#define DRIVER 0 +#endif -#else +#if DRIVER == 0 // production build for OpenSL +#ifndef MOCK_ENGINE #include "opensl_io.h" #endif +#endif -#elif DRIVER == 1 +#if DRIVER == 1 // production build for AAudio #include "aaudio_io.h" #endif @@ -72,7 +74,7 @@ namespace DriverAdapter { // get the input buffer from the driver (when recording) // and write it into given recordBuffer // returns integer value of amount of recorded samples - int getInput( float* recordBuffer ); + int getInput( float* recordBuffer, int amountOfSamples ); #if DRIVER == 0 diff --git a/src/main/cpp/events/basesynthevent.cpp b/src/main/cpp/events/basesynthevent.cpp index 710c1bcb..23c3c24c 100755 --- a/src/main/cpp/events/basesynthevent.cpp +++ b/src/main/cpp/events/basesynthevent.cpp @@ -189,7 +189,7 @@ void BaseSynthEvent::calculateBuffers() if ( isSequenced ) { - setEventStart( position * ( int ) AudioEngine::samples_per_step ); + setEventStart( position * AudioEngine::samples_per_step ); setEventLength(( int )( length * AudioEngine::samples_per_step )); setEventEnd( _eventStart + _eventLength ); } diff --git a/src/main/cpp/global.cpp b/src/main/cpp/global.cpp index 3bc3344d..32a406c4 100755 --- a/src/main/cpp/global.cpp +++ b/src/main/cpp/global.cpp @@ -33,11 +33,4 @@ namespace AudioEngineProps { int OUTPUT_CHANNELS = 1; } -/* used for threading */ - -void *print_message( void* ) -{ - return 0; -} - } // E.O namespace MWEngine diff --git a/src/main/cpp/global.h b/src/main/cpp/global.h index d251544f..2e563a06 100755 --- a/src/main/cpp/global.h +++ b/src/main/cpp/global.h @@ -1,7 +1,7 @@ /** * The MIT License (MIT) * - * Copyright (c) 2013-2018 Igor Zinken - http://www.igorski.nl + * Copyright (c) 2013-2019 Igor Zinken - http://www.igorski.nl * * Permission is hereby granted, free of charge, to any person obtaining a copy of * this software and associated documentation 
files (the "Software"), to deal in @@ -40,12 +40,11 @@ namespace MWEngine { #define PRECISION 2 // if you wish to record audio from the Android device input, uncomment the RECORD_DEVICE_INPUT definition -// (note this requires both android.permission.RECORD_AUDIO and android.permission.MODIFY_AUDIO_SETTINGS with a -// positive value for AudioEngineProps::INPUT_CHANNELS) +// (note this requires both android.permission.RECORD_AUDIO and android.permission.MODIFY_AUDIO_SETTINGS) -//#define RECORD_DEVICE_INPUT +#define RECORD_DEVICE_INPUT -// if you wish to write the engine output to the devices file system, uncomment the ALLOW_WRITING definition +// if you wish to write the engine output to the devices file system, uncomment the RECORD_TO_DISK definition // (note this requires android.permission.WRITE_EXTERNAL_STORAGE), like RECORD_AUDIO this requires a Runtime Permission // grant when compiling for target SDK level 23 (Android M) @@ -99,7 +98,6 @@ const SAMPLE_TYPE TWO_PI = PI * 2.0; // other const int WAVE_TABLE_PRECISION = 128; // the amount of samples contained within a wave table -extern void *print_message( void* ); } // E.O namespace MWEngine diff --git a/src/main/java/nl/igorski/example/MWEngineActivity.java b/src/main/java/nl/igorski/example/MWEngineActivity.java index ba265528..18dae0de 100644 --- a/src/main/java/nl/igorski/example/MWEngineActivity.java +++ b/src/main/java/nl/igorski/example/MWEngineActivity.java @@ -52,7 +52,7 @@ public final class MWEngineActivity extends Activity { private int BUFFER_SIZE; private int OUTPUT_CHANNELS = 2; // 1 = mono, 2 = stereo - private static int STEPS_PER_MEASURE = 16; // amount of subdivisions within a single measure + private static int STEPS_PER_MEASURE = 16; // amount of subdivisions within a single measure private static String LOG_TAG = "MWENGINE"; // logcat identifier /* public methods */ @@ -75,9 +75,7 @@ public void onCreate( Bundle savedInstanceState ) { Manifest.permission.READ_EXTERNAL_STORAGE, 
Manifest.permission.WRITE_EXTERNAL_STORAGE }; - - // Check if we have all the necessary permissions, if not prompt user - + // Check if we have all the necessary permissions, if not: prompt user int permission = checkSelfPermission( Manifest.permission.RECORD_AUDIO ); if ( permission != PackageManager.PERMISSION_GRANTED ) requestPermissions( PERMISSIONS, 8081981 ); @@ -136,32 +134,17 @@ private void init() { // STEP 4 : attach event handlers to the UI elements (see main.xml layout) - final Button playPauseButton = ( Button ) findViewById( R.id.PlayPauseButton ); - playPauseButton.setOnClickListener( new PlayClickHandler() ); - - final Button liveNoteButton = ( Button ) findViewById( R.id.LiveNoteButton ); - liveNoteButton.setOnTouchListener( new LiveNoteHandler() ); - - final Button recordButton = ( Button ) findViewById( R.id.RecordButton ); - recordButton.setOnClickListener( new RecordHandler() ); - - final SeekBar filterSlider = ( SeekBar ) findViewById( R.id.FilterCutoffSlider ); - filterSlider.setOnSeekBarChangeListener( new FilterCutOffChangeHandler() ); + findViewById( R.id.PlayPauseButton ).setOnClickListener( new PlayClickHandler() ); + findViewById( R.id.RecordButton ).setOnClickListener( new RecordOutputHandler() ); + findViewById( R.id.LiveNoteButton ).setOnTouchListener( new LiveNoteHandler() ); + findViewById( R.id.RecordInputButton ).setOnTouchListener( new RecordInputHandler() ); - final SeekBar decaySlider = ( SeekBar ) findViewById( R.id.SynthDecaySlider ); - decaySlider.setOnSeekBarChangeListener( new SynthDecayChangeHandler() ); - - final SeekBar feedbackSlider = ( SeekBar ) findViewById( R.id.MixSlider ); - feedbackSlider.setOnSeekBarChangeListener( new DelayMixChangeHandler() ); - - final SeekBar pitchSlider = ( SeekBar ) findViewById( R.id.PitchSlider ); - pitchSlider.setOnSeekBarChangeListener( new PitchChangeHandler() ); - - final SeekBar tempoSlider = ( SeekBar ) findViewById( R.id.TempoSlider ); - tempoSlider.setOnSeekBarChangeListener( 
new TempoChangeHandler() ); - - final SeekBar volumeSlider = ( SeekBar ) findViewById( R.id.VolumeSlider ); - volumeSlider.setOnSeekBarChangeListener( new VolumeChangeHandler() ); + (( SeekBar ) findViewById( R.id.FilterCutoffSlider )).setOnSeekBarChangeListener( new FilterCutOffChangeHandler() ); + (( SeekBar ) findViewById( R.id.SynthDecaySlider )).setOnSeekBarChangeListener( new SynthDecayChangeHandler() ); + (( SeekBar ) findViewById( R.id.MixSlider )).setOnSeekBarChangeListener( new DelayMixChangeHandler() ); + (( SeekBar ) findViewById( R.id.PitchSlider )).setOnSeekBarChangeListener( new PitchChangeHandler() ); + (( SeekBar ) findViewById( R.id.TempoSlider )).setOnSeekBarChangeListener( new TempoChangeHandler() ); + (( SeekBar ) findViewById( R.id.VolumeSlider )).setOnSeekBarChangeListener( new VolumeChangeHandler() ); _inited = true; } @@ -350,6 +333,17 @@ public void onClick( View v ) { } } + private class RecordOutputHandler implements View.OnClickListener { + @Override + public void onClick( View v ) { + _isRecording = !_isRecording; + _engine.setRecordingState( + _isRecording, Environment.getExternalStorageDirectory().getAbsolutePath() + "/Download/mwengine_output.wav" + ); + (( Button ) v ).setText( _isRecording ? 
R.string.rec_btn_off : R.string.rec_btn_on ); + } + } + private class LiveNoteHandler implements View.OnTouchListener { @Override public boolean onTouch( View v, MotionEvent event ) { @@ -357,7 +351,6 @@ public boolean onTouch( View v, MotionEvent event ) { case MotionEvent.ACTION_DOWN: _liveEvent.play(); return true; - case MotionEvent.ACTION_UP: _liveEvent.stop(); return true; @@ -366,14 +359,18 @@ public boolean onTouch( View v, MotionEvent event ) { } } - private class RecordHandler implements View.OnClickListener { + private class RecordInputHandler implements View.OnTouchListener { @Override - public void onClick( View v ) { - _isRecording = !_isRecording; - _engine.setRecordingState( - _isRecording, Environment.getExternalStorageDirectory().getAbsolutePath() + "/Download/mwengine_output.wav" - ); - (( Button ) v ).setText( _isRecording ? R.string.rec_btn_off : R.string.rec_btn_on ); + public boolean onTouch( View v, MotionEvent event ) { + switch( event.getAction()) { + case MotionEvent.ACTION_DOWN: + _engine.recordInput( true ); + return true; + case MotionEvent.ACTION_UP: + _engine.recordInput( false ); + return true; + } + return false; } } @@ -436,7 +433,6 @@ private class StateObserver implements MWEngine.IObserver { private final Notifications.ids[] _notificationEnums = Notifications.ids.values(); // cache the enumerations (from native layer) as int Array public void handleNotification( final int aNotificationId ) { switch ( _notificationEnums[ aNotificationId ]) { - case ERROR_HARDWARE_UNAVAILABLE: Log.d( LOG_TAG, "ERROR : received Open SL error callback from native layer" ); // re-initialize thread @@ -449,11 +445,9 @@ public void handleNotification( final int aNotificationId ) { Log.d( LOG_TAG, "exceeded maximum amount of retries. 
Cannot continue using audio engine" );
                         }
                         break;
-
                     case MARKER_POSITION_REACHED:
                         Log.d( LOG_TAG, "Marker position has been reached" );
                         break;
-
                     case RECORDING_COMPLETED:
                         Log.d( LOG_TAG, "Recording has completed" );
                         break;
@@ -475,8 +469,6 @@
                         Log.d( LOG_TAG, "seq. position: " + sequencerPosition + ", buffer offset: " + aNotificationValue +
                                 ", elapsed samples: " + elapsedSamples );
                         break;
-
-
                     case RECORDED_SNIPPET_READY:
                         runOnUiThread( new Runnable() {
                             public void run() {
@@ -485,7 +477,6 @@
                             }
                         });
                         break;
-
                     case RECORDED_SNIPPET_SAVED:
                         Log.d( LOG_TAG, "Recorded snippet " + aNotificationValue + " saved to storage" );
                         break;
diff --git a/src/main/res/layout/main.xml b/src/main/res/layout/main.xml
index 2748dad0..54805988 100644
--- a/src/main/res/layout/main.xml
+++ b/src/main/res/layout/main.xml
@@ -7,40 +7,45 @@
     <TextView
+        android:layout_width="fill_parent"
+        android:layout_height="wrap_content"
+        android:padding="15dip"
+        android:text="@string/app_help"
+        android:layout_marginBottom="5dip"
+        />
-
    <LinearLayout
+        android:layout_marginBottom="1dip"
+        android:layout_gravity="center"
+        android:orientation="horizontal"
+        >