Support AAudio based input recording to enable full duplex AAudio support

* Initial implementation of AAudio based input recording
* Verified duplex operation on Pixel 3
* Made unit test mode work with the OpenSL mock, as before
* Updated example activity
* Fixed handlers
* Buffer sizes and temporary buffers now update when the buffer size changes at runtime. Use the preferred floating point format for AAudio and memory cloning for faster writes (see the sketch after this list).
* README and inline doc copy updates
* and again
* Fall back to OpenSL default
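
The floating point format and memory cloning points can be illustrated with a small, self-contained sketch of an AAudio input stream. This is not the commit's actual `drivers/aaudio_io.cpp` implementation; the sample rate, channel count, buffer size and buffer names are placeholder assumptions.

```cpp
// Minimal sketch: open an AAudio input stream in float format and clone the
// recorded frames into a temporary engine buffer with memcpy.
// The values used here (48 kHz, mono, 512-frame buffer) are assumptions, not
// the values MWEngine actually negotiates at runtime.
#include <aaudio/AAudio.h>
#include <cstring>
#include <vector>

int main() {
    AAudioStreamBuilder* builder = nullptr;
    if ( AAudio_createStreamBuilder( &builder ) != AAUDIO_OK ) return -1;

    AAudioStreamBuilder_setDirection      ( builder, AAUDIO_DIRECTION_INPUT );
    AAudioStreamBuilder_setFormat         ( builder, AAUDIO_FORMAT_PCM_FLOAT );
    AAudioStreamBuilder_setChannelCount   ( builder, 1 );
    AAudioStreamBuilder_setSampleRate     ( builder, 48000 );
    AAudioStreamBuilder_setPerformanceMode( builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY );

    AAudioStream* stream = nullptr;
    if ( AAudioStreamBuilder_openStream( builder, &stream ) != AAUDIO_OK ) {
        AAudioStreamBuilder_delete( builder );
        return -1;
    }
    AAudioStreamBuilder_delete( builder );
    AAudioStream_requestStart( stream );

    const int32_t bufferSize = 512;
    std::vector<float> recordBuffer( bufferSize ); // filled by the driver
    std::vector<float> engineBuffer( bufferSize ); // consumed by the engine

    // blocking read of one buffer's worth of input frames (1 second timeout)
    int32_t framesRead = AAudioStream_read( stream, recordBuffer.data(), bufferSize, 1000000000L );

    if ( framesRead > 0 ) {
        // clone the recorded samples in one go instead of a per-sample loop
        // (mono stream, so frames == samples)
        std::memcpy( engineBuffer.data(), recordBuffer.data(), framesRead * sizeof( float ));
    }

    AAudioStream_requestStop( stream );
    AAudioStream_close( stream );
    return 0;
}
```

Blocking reads are used here purely for brevity; a real-time driver would normally pull input from inside the AAudio data callback instead.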
igorski authored Dec 28, 2019
1 parent 81faa2f commit 344c3c4
Showing 16 changed files with 699 additions and 458 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -10,4 +10,5 @@ src/main/cpp/jni/java_interface_wrap.cpp
libs/*
obj
build
debug
local.properties
35 changes: 19 additions & 16 deletions README.md
@@ -2,9 +2,7 @@ MWEngine is..
=============

...an audio engine for Android, using either OpenSL (compatible with Android 4.1 and up) or AAudio
(Android 8.0 and up) as the drivers for low latency audio performance. The engine has been written for both
[MikroWave](https://play.google.com/store/apps/details?id=nl.igorski.mikrowave.free&hl=en) and
[Kosm](https://play.google.com/store/apps/details?id=nl.igorski.kosm&hl=en) to provide fast live audio synthesis. MWEngine is also used by [TIZE - Beat Maker, Music Maker](https://play.google.com/store/apps/details?id=com.tizemusic.tize).
(Android 8.0 and up) as the drivers for low latency audio performance.

MWEngine provides an architecture that allows you to work with audio within a _musical context_. It is easy to
build upon the base classes and create your own noise generating mayhem. A few keywords describing the
@@ -16,21 +14,29 @@ out-of-the-box possibilities are:
* effect chains operating on individual input/output channels
* sample playback with real time pitch shifting
* bouncing output to WAV files, either live (during a performance) or "offline"

Also note that MWEngine's underlying audio drivers are _the same as Google Oboe uses_; MWEngine and
Oboe are merely abstraction layers that solve the same problem in different ways. Additionally, MWEngine provides a complete audio processing environment.

#### Who uses this ?

The engine has been written for both [MikroWave](https://play.google.com/store/apps/details?id=nl.igorski.mikrowave.free&hl=en) and
[Kosm](https://play.google.com/store/apps/details?id=nl.igorski.kosm&hl=en) to provide fast live audio synthesis.

While development on those apps is scarce, the engine itself has been continuously improved and is now also
used by third party app developers, such as [TIZE - Beat Maker, Music Maker](https://play.google.com/store/apps/details?id=com.tizemusic.tize).

### The [Issue Tracker](https://github.com/igorski/MWEngine/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc) is your point of contact

Bug reports, feature requests, questions and discussions are welcome on the GitHub Issue Tracker; please do not send e-mails through the development website. Please search before posting to avoid duplicates, and limit each post to a single issue.

Please vote on feature requests by using the Thumbs Up/Down reaction on the first post.

### C++ ??? What about Java ?
### C++ ??? What about Java / Kotlin ?

Though the library is written in C++ (and can be used solely within that context), it can be built using JNI
(Java Native Interface), which exposes its API to Java while still executing in a native layer outside of
the Dalvik/ART VM. In other words : high performance of the engine is ensured by the native layer operations, while
the JVM. In other words : high performance of the engine is ensured by the native layer operations, while
ease of development is ensured by delegating application logic / UI to the realm of the Android Java SDK.
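
As a rough illustration of that mechanism, below is a hand-written JNI export in C++ that makes a native routine callable from Java/Kotlin. MWEngine's own glue code is generated (note the ignored _java_interface_wrap.cpp_ in the .gitignore change above), so the package, class and function names here are hypothetical and do not match the actual bindings.

```cpp
// Illustrative only: a hand-written JNI export exposing a native routine to
// Java/Kotlin. The package, class and function names are hypothetical.
#include <jni.h>

namespace {
    // stand-in for a call into the native audio engine
    void startNativeEngine() { /* ... */ }
}

extern "C" JNIEXPORT void JNICALL
Java_nl_igorski_example_NativeAudioEngine_start( JNIEnv* /* env */, jobject /* thiz */ )
{
    // runs in the native layer, outside the JVM's managed heap
    startNativeEngine();
}
```

On the managed side this would surface as a `public native void start();` declaration inside a (hypothetical) `nl.igorski.example.NativeAudioEngine` class that loads the shared library via `System.loadLibrary()`.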

Whether you intend to use MWEngine for its sample based playback or to leverage its built-in synthesizer and
@@ -139,19 +145,16 @@ sequence going using the library.
To install the demo: first build the library as described above, and then run the build script to deploy the .APK onto an
attached device/emulator (note that older emulated devices can only operate at a sample rate of 8 kHz!).

### Note on AAudio
### Note on OpenSL / AAudio drivers

Currently it is not possible to switch between audio drivers on the fly; instead you must precompile
the library for use with a specific driver. By default, the library compiles for OpenSL, which supports a
wider range of devices. If you want to use AAudio instead (and thus are targeting solely
devices running Android 8 and up), take the following steps (a minimal sketch of the driver flag follows the list):

The AAudio implementation has been built using (in Google's words): _"a Preview release of the AAudio library. The API
might change in backward-incompatible ways in future releases. It is not recommended for production use."_ so use it
at your own peril. To use AAudio instead of OpenSL:

* change the desired driver in _global.h_ from type 0 (OpenSL) to 1 (AAudio)
* update the _Android.mk_ file to include all required adapters and libraries (simply set _BUILD_AAUDIO_ to 'true')
* update target in _project.properties_ to _android-26_
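
A minimal sketch of what the compile-time driver switch boils down to is shown below. The flag name `AUDIO_DRIVER` and the 0/1 value mapping are assumptions made for this illustration; the actual symbol lives in _global.h_ and is what the driver adapter switches on.

```cpp
// Self-contained illustration of compile-time driver selection.
// AUDIO_DRIVER and its value mapping are assumptions for this sketch;
// consult global.h for the engine's real definition.
#include <cstdio>

#ifndef AUDIO_DRIVER
#define AUDIO_DRIVER 1   // 0 = OpenSL (Android 4.1+), 1 = AAudio (Android 8.0+)
#endif

int main() {
#if AUDIO_DRIVER == 1
    std::printf( "engine compiled against the AAudio driver\n" );
#else
    std::printf( "engine compiled against the OpenSL driver\n" );
#endif
    return 0;
}
```

Because the selection is resolved at compile time, a single build of the library supports only one driver.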

Once AAudio is a stable library, MWEngine will allow on-the-fly switching between OpenSL and AAudio drivers.

(!) MWEngine does not support recording from the device inputs using AAudio just yet, (https://github.com/igorski/MWEngine/issues/70) references this feature.
Should you require support for both driver variants, please file a feature request in the repository's issue tracker.

### Contributors

2 changes: 1 addition & 1 deletion build.gradle
@@ -15,7 +15,7 @@ android {

defaultConfig {
applicationId "nl.igorski.example"
minSdkVersion 16
minSdkVersion 26 // can go down to 16 when using OpenSL as the audio driver
targetSdkVersion 27
versionCode 1
versionName "1.0.0"
13 changes: 2 additions & 11 deletions src/main/cpp/Android.mk
@@ -1,6 +1,3 @@
# Experimental AAudio support, set to true when building for AAudio (requires NDK target 26)
BUILD_AAUDIO = false

LOCAL_PATH := $(call my-dir)
LOCAL_SRC_FILES := \

@@ -32,6 +29,7 @@ global.cpp \
jni/javabridge.cpp \
drivers/adapter.cpp \
drivers/opensl_io.c \
drivers/aaudio_io.cpp \
utilities/utils.cpp \
audioengine.cpp \
audiobuffer.cpp \
@@ -94,14 +92,7 @@ modules/envelopefollower.cpp \
modules/lfo.cpp \
modules/routeableoscillator.cpp \

ifeq ($(BUILD_AAUDIO),true)
LOCAL_SRC_FILES += \
drivers/aaudio_io.cpp \

LOCAL_LDLIBS := -laaudio
endif

LOCAL_LDLIBS += -lOpenSLES -landroid -latomic -llog
LOCAL_LDLIBS += -lOpenSLES -laaudio -landroid -latomic -llog

include $(BUILD_SHARED_LIBRARY)

1 change: 1 addition & 0 deletions src/main/cpp/Application.mk
@@ -3,6 +3,7 @@ APP_STL := c++_static
APP_CPPFLAGS += -std=c++11 -Werror -fexceptions -frtti
#APP_CPPFLAGS += -Wall
APP_ABI := x86 x86_64 armeabi-v7a arm64-v8a
APP_PLATFORM = android-26

ifeq ($(TARGET_ARCH_ABI), x86)
LOCAL_CFLAGS += -m32
14 changes: 9 additions & 5 deletions src/main/cpp/audioengine.cpp
@@ -151,8 +151,9 @@ namespace MWEngine {
// generate the input buffer used for recording from the device's input
// as well as the temporary buffer used to merge the input into

recbufferIn = new float[ AudioEngineProps::BUFFER_SIZE ]();
recbufferIn = new float[ AudioEngineProps::BUFFER_SIZE * AudioEngineProps::INPUT_CHANNELS ]();
inputChannel->createOutputBuffer();

#endif
// accumulates all channels ("master strip")

@@ -260,17 +261,19 @@ namespace MWEngine {
// record audio from Android device ?
if (( recordDeviceInput || recordInputToDisk ) && AudioEngineProps::INPUT_CHANNELS > 0 )
{
int recSamps = DriverAdapter::getInput( recbufferIn );
int recordedSamples = DriverAdapter::getInput( recbufferIn, amountOfSamples );
SAMPLE_TYPE* recBufferChannel = inputChannel->getOutputBuffer()->getBufferForChannel( 0 );

for ( int j = 0; j < recSamps; ++j )
for ( int j = 0; j < recordedSamples; ++j ) {
recBufferChannel[ j ] = recbufferIn[ j ];//static_cast<float>( recbufferIn[ j ] );
}

// apply processing chain onto the input

std::vector<BaseProcessor*> processors = inputChannel->processingChain->getActiveProcessors();
for ( int k = 0; k < processors.size(); ++k )
for ( int k = 0; k < processors.size(); ++k ) {
processors[ k ]->process( inputChannel->getOutputBuffer(), AudioEngineProps::INPUT_CHANNELS == 1 );
}

// merge recording into current input buffer for instant monitoring

@@ -373,8 +376,9 @@ namespace MWEngine {
}

// write cache if it didn't happen yet ;) (bus processors are (currently) non-cacheable)
if ( mustCache )
if ( mustCache ) {
mustCache = !writeChannelCache( channel, channelBuffer, cacheReadPos );
}

// write the channel buffer into the combined output buffer, apply channel volume
// note live events are always audible as their volume is relative to the instrument