React Native Audio

React Native (RN) Audio library for Android, iOS, and macOS (Catalyst) platforms, with support for both the new and the old RN architectures. It covers:

  • Input audio stream (microphone) listening / recording.
  • Audio samples playback.
  • Utility functions for audio system management.
  • More stuff to come...

Installation

  • Install the package and its peer dependencies

    npx install-peerdeps @dr.pogodin/react-native-audio
    
  • Follow the react-native-permissions documentation to set up your app for asking the user for the RECORD_AUDIO (Android) and/or Microphone (iOS) permissions. The react-native-audio library will automatically ask for these permissions, if needed, when a stream's .start() method is called, provided the app has been correctly configured to ask for them (see the sketch after this list).
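
Although the library asks for these permissions automatically when a stream starts, you may also check and request them explicitly beforehand. A minimal sketch, assuming react-native-permissions is installed and configured as per its documentation (ensureMicPermission is just an illustrative helper name):

import { Platform } from "react-native";
import { PERMISSIONS, RESULTS, check, request } from "react-native-permissions";

// The platform-specific microphone permission.
const MIC_PERMISSION = Platform.OS === "android"
  ? PERMISSIONS.ANDROID.RECORD_AUDIO
  : PERMISSIONS.IOS.MICROPHONE;

// Checks the current status, and requests the permission if it has not been
// granted yet. Resolves to `true` if audio recording is allowed.
export async function ensureMicPermission(): Promise<boolean> {
  const status = await check(MIC_PERMISSION);
  if (status === RESULTS.GRANTED) return true;
  return (await request(MIC_PERMISSION)) === RESULTS.GRANTED;
}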

Getting Started

A proper Getting Started tutorial is yet to be written; for now, the main idea is this:

import {
  AUDIO_FORMATS,
  AUDIO_SOURCES,
  CHANNEL_CONFIGS,
  InputAudioStream,
} from "@dr.pogodin/react-native-audio";

function createAndStartAudioStream() {
  const stream = new InputAudioStream(
    AUDIO_SOURCES.RAW,
    44100, // Sample rate in Hz.
    CHANNEL_CONFIGS.MONO,
    AUDIO_FORMATS.PCM_16BIT,
    4096, // Sampling size.
  );

  stream.addErrorListener((error) => {
    // Do something with a stream error.
  });

  stream.addChunkListener((chunk, chunkId) => {
    // Pause the stream while processing the chunk. The point is: if the chunk
    // processing in this callback is too slow, and chunks arrive faster than
    // the callback can handle them, the app will rapidly crash with an
    // out-of-memory error. Muting the stream discards any new chunks until
    // stream.unmute() is called, thus protecting from the crash. If the chunk
    // processing is fast enough, no chunks will be skipped anyway. The
    // "chunkId" argument is a sequential chunk number, by which you may judge
    // whether any chunks have been skipped between calls of this callback.
    stream.mute();

    // Do something with the chunk.

    // Resume the stream.
    stream.unmute();
  });

  stream.start();

  // Call stream.destroy() to stop the stream and release any associated
  // resources. If you need to temporarily stop and then resume the stream,
  // use .mute() and .unmute() methods instead.
}

On top of this, the library includes other auxiliary methods related to audio input and output.

API Reference

Classes

InputAudioStream

class InputAudioStream;

The InputAudioStream class, as its name suggests, represents individual input audio streams, capturing audio data in the configured format from the specified audio source.

constructor()

const stream = new InputAudioStream(
  audioSource: AUDIO_SOURCES,
  sampleRate: number,
  channelConfig: CHANNEL_CONFIGS,
  audioFormat: AUDIO_FORMATS,
  samplingSize: number,
  stopInBackground: boolean = true,
);

Creates a new InputAudioStream instance. The newly created stream does not record audio, nor consumes any resources on the native side, until its .start() method is called.

  • audioSource (AUDIO_SOURCES) — The audio source this stream will listen to. Currently, it is supported on Android only; on iOS this value is ignored, and the stream captures audio data from the default input source of the device.
  • sampleRate (number) — Sample rate [Hz]. 44100 Hz is the recommended value, as it is the only rate that is guaranteed to work on all Android (and many other) devices.
  • channelConfig (CHANNEL_CONFIGS) — Mono or Stereo stream mode.
  • audioFormat (AUDIO_FORMATS) — Audio format.
  • samplingSize (number) — Sampling (data chunk) size, expressed as the number of samples per channel in the chunk.
  • stopInBackground (boolean) — Optional. If true (default) the stream will automatically pause itself when the app leaves the foreground, and resume itself when the app returns to the foreground.
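
For example, a stream capturing stereo 32-bit float samples, and configured to keep capturing while the app is in the background (a sketch; whether background capture actually works also depends on the platform and app configuration):

import {
  AUDIO_FORMATS,
  AUDIO_SOURCES,
  CHANNEL_CONFIGS,
  InputAudioStream,
} from "@dr.pogodin/react-native-audio";

// No audio is captured, and no native resources are consumed,
// until .start() is called on this stream.
const stream = new InputAudioStream(
  AUDIO_SOURCES.MIC,       // Ignored on iOS.
  44100,                   // Sample rate [Hz].
  CHANNEL_CONFIGS.STEREO,  // Two-channel capture.
  AUDIO_FORMATS.PCM_FLOAT, // 32-bit float samples.
  4096,                    // Samples per channel in each chunk.
  false,                   // Do not auto-pause when the app leaves the foreground.
);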

.addChunkListener()

stream.addChunkListener(listener: ChunkListener): void;

Adds a new audio data chunk listener to the stream. See .removeChunkListener() to subsequently remove the listener from the stream.

Note: It is safe to call it repeatedly for the same listener & stream pair — the listener still won't be added to the stream more than once.

  • listener (ChunkListener) — The callback to call with audio data chunks when they arrive.
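
A sketch of attaching a listener by a stable reference, so that the same reference can later be passed to .removeChunkListener() (attach and detach are just illustrative helper names):

import { InputAudioStream } from "@dr.pogodin/react-native-audio";
import type { Buffer } from "buffer";

// Keep a reference to the listener, so it can be removed later.
const onChunk = (chunk: Buffer, chunkId: number) => {
  console.log(`Chunk #${chunkId}: ${chunk.length} bytes`);
};

function attach(stream: InputAudioStream): void {
  stream.addChunkListener(onChunk); // Calling this twice is a no-op.
}

function detach(stream: InputAudioStream): void {
  stream.removeChunkListener(onChunk);
}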

.addErrorListener()

stream.addErrorListener(listener: ErrorListener): void;

Adds a new error listener to the stream. See .removeErrorListener() to subsequently remove the listener from the stream.

Note: It is safe to call it repeatedly for the same listener & stream pair — the listener still won't be added to the stream more than once.

  • listener (ErrorListener) — The callback to call with error details, if any error happens in the stream.

.destroy()

stream.destroy(): void;

Destroys the stream — stops the recording, and releases all related resources, both at the native and JS sides. Once a stream is destroyed, it cannot be re-used.

.mute()

stream.mute(): void;

Mutes the stream. A muted stream still continues to capture audio data chunks from the audio source, and thus keeps incrementing chunk IDs (see ChunkListener), but it discards all data chunks immediately after the capture, without sending them to the JavaScript layer, thus causing the minimal performance and memory overhead possible without interrupting the recording.

Calling .mute() on a muted or non-active (not recording) audio stream has no effect. See also .active and .muted.
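
Since chunk IDs keep incrementing while a stream is muted, they can be used to count the chunks dropped during processing. A sketch (processChunk is a hypothetical placeholder for your own processing):

import { InputAudioStream } from "@dr.pogodin/react-native-audio";
import type { Buffer } from "buffer";

// Hypothetical placeholder for the actual (potentially slow) chunk processing.
function processChunk(chunk: Buffer): void {}

export function listenWithGapDetection(stream: InputAudioStream): void {
  let lastChunkId: number | undefined;

  stream.addChunkListener((chunk, chunkId) => {
    // A gap in the consecutive chunk IDs means some chunks were discarded
    // while the stream was muted below.
    if (lastChunkId !== undefined && chunkId > lastChunkId + 1) {
      console.warn(`Skipped ${chunkId - lastChunkId - 1} chunk(s)`);
    }
    lastChunkId = chunkId;

    stream.mute();       // Discard incoming chunks while this one is processed.
    processChunk(chunk);
    stream.unmute();     // Resume chunk delivery.
  });
}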

.removeChunkListener()

stream.removeChunkListener(listener: ChunkListener): void;

Removes the listener from the stream. No operation if the given listener is not connected to the stream. See .addChunkListener() to add the listener.

.removeErrorListener()

stream.removeErrorListener(listener: ErrorListener): void;

Removes the listener from the stream. No operation if the given listener is not connected to the stream. See .addErrorListener() to connect the listener.

.start()

stream.start(): Promise<boolean>;

Starts the audio stream recording. This method actually initializes the stream on the native side, and starts the recording.

Note: If necessary, this method will ask the app user for the audio recording permission, using the react-native-permissions library.

  • Resolves to a boolean value — true if the stream has started successfully and is .active, false otherwise.
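
A sketch of handling the result, e.g. when the user denies the recording permission (startOrWarn is just an illustrative helper name):

import { InputAudioStream } from "@dr.pogodin/react-native-audio";

async function startOrWarn(stream: InputAudioStream): Promise<boolean> {
  const started = await stream.start();
  if (!started) {
    // E.g. the user denied the recording permission, or the native
    // recorder failed to initialize.
    console.warn("The input audio stream failed to start");
  }
  return started;
}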

.stop()

stream.stop(): Promise<void>;

Stops the stream. Unlike the .mute() method, .stop() actually stops the audio stream and releases its resources on the native side; however, unlike the .destroy() method, it does not release its resources in the JS layer (i.e. it does not drop references to the connected listeners), thus allowing you to .start() this stream instance again (technically, that will initialize a new stream on the native side, but this is opaque to the consumer on the JS side).

  • Resolves once the stream is stopped.

.unmute()

stream.unmute(): void;

Unmutes a previously .muted stream. It has no effect if called on an inactive (not started) or non-muted stream.

.active

stream.active: boolean;

Read-only. true when the stream is started and recording, false otherwise.

Note: .active will be true for a started and .muted stream.

.audioFormat

stream.audioFormat: AUDIO_FORMATS;

Read-only. Holds the audio format value provided to InputAudioStream's constructor(). AUDIO_FORMATS enum provides valid format values.

.audioSource

stream.audioSource: AUDIO_SOURCES;

Read-only. Holds the audio source value provided to InputAudioStream's constructor(). As of now it only has an effect on Android devices, and it is ignored on iOS. The AUDIO_SOURCES enum provides valid audio source values.

.channelConfig

stream.channelConfig: CHANNEL_CONFIGS;

Read-only. Holds the channel mode (Mono or Stereo) value provided to InputAudioStream's constructor(). CHANNEL_CONFIGS enum provides valid channel mode values.

.muted

stream.muted: boolean;

Read-only. true when the stream is muted by .mute(), false otherwise.

.sampleRate

stream.sampleRate: number;

Read-only. Holds the stream's sample rate provided to the stream constructor(), in [Hz].

.samplingSize

stream.samplingSize: number;

Read-only. Holds the stream's sampling (audio data chunk) size, provided to the stream constructor(). The value is the number of samples per channel; thus for multi-channel streams the actual chunk size is a multiple of this number, and the sample size in bytes also varies with .audioFormat.
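
A rough sketch of estimating the byte size of a delivered chunk, assuming the standard PCM sample sizes (1, 2, and 4 bytes for PCM_8BIT, PCM_16BIT, and PCM_FLOAT respectively) and interleaved channel data:

import { AUDIO_FORMATS, CHANNEL_CONFIGS } from "@dr.pogodin/react-native-audio";

// Standard PCM sample sizes, in bytes.
const BYTES_PER_SAMPLE: Record<number, number> = {
  [AUDIO_FORMATS.PCM_8BIT]: 1,
  [AUDIO_FORMATS.PCM_16BIT]: 2,
  [AUDIO_FORMATS.PCM_FLOAT]: 4,
};

// E.g. for PCM_16BIT stereo with samplingSize = 4096 this gives
// 4096 x 2 x 2 = 16384 bytes per chunk.
function estimateChunkBytes(
  samplingSize: number,
  channelConfig: number,
  audioFormat: number,
): number {
  const channels = channelConfig === CHANNEL_CONFIGS.STEREO ? 2 : 1;
  return samplingSize * channels * BYTES_PER_SAMPLE[audioFormat];
}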

.stopInBackground

stream.stopInBackground: boolean;

Read-only. true if the stream is set to automatically .stop() when the app leaves foreground, and .start() again when it returns to the foreground.

SamplePlayer

class SamplePlayer;

Represents an audio sample player. It is intended for loading into memory a set of short audio fragments, which can then be played on demand with low latency.

On Android, SoundPool is used for the underlying implementation; you may check its documentation for further details. In particular, note that each decoded sound is internally limited to one megabyte of storage, which represents approximately 5.6 seconds at 44.1 kHz stereo (the duration is proportionally longer at lower sample rates or with a mono channel configuration).

constructor()

const player = new SamplePlayer();

Creates a new SamplePlayer instance. Note that creating a SamplePlayer instance already allocates some resources on the native side; to release those resources you MUST call its .destroy() method once the instance is no longer needed.
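
A minimal lifecycle sketch (the methods used are documented below; /path/to/click.mp3 stands for a hypothetical audio file already present on the device):

import { SamplePlayer } from "@dr.pogodin/react-native-audio";

async function demoSamplePlayer(): Promise<void> {
  const player = new SamplePlayer();
  try {
    // Loads and decodes the sample, keyed by the name "click".
    await player.load("click", "/path/to/click.mp3");

    await player.play("click", false); // Play it once, without looping.
    // ... later, if the sample is still playing:
    await player.stop("click");

    await player.unload("click");
  } finally {
    // Always release the native resources.
    await player.destroy();
  }
}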

.addErrorListener()

player.addErrorListener(listener: ErrorListener): void;

Adds an error listener to the player. Does nothing if the given listener is already added to this player.

.destroy()

player.destroy(): Promise<void>;

Destroys the player instance, releasing all related resources. Once destroyed, the player instance can't be reused.

  • Resolves once completed.

.load()

player.load(sampleName: string, samplePath: string): Promise<void>;

Loads an (additional) audio sample into the player.

  • sampleName (string) — Sample name, by which you'll refer to the loaded sample in other methods, like .play(), .stop(), and .unload(). If it matches the name of a previously loaded sample, that sample is replaced.
  • samplePath (string) — Path to the sample file on the device. For now, only loading samples from regular files is supported (e.g. it is not possible to load from an Android asset without first copying the asset into a regular file — see the sketch after this list).
  • Resolves once the sample is loaded and decoded, thus ready to be played.
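
For example, to load a sample bundled as an Android asset, one option is to first copy it into a regular file with the third-party react-native-fs library. A sketch under that assumption (click.mp3 is a hypothetical asset name):

import RNFS from "react-native-fs"; // A third-party dependency, not part of this library.
import { SamplePlayer } from "@dr.pogodin/react-native-audio";

async function loadAndroidAsset(player: SamplePlayer): Promise<void> {
  // Copy the bundled asset into a regular file, unless it was done before.
  const dest = `${RNFS.DocumentDirectoryPath}/click.mp3`;
  if (!(await RNFS.exists(dest))) {
    await RNFS.copyFileAssets("click.mp3", dest);
  }
  await player.load("click", dest);
}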

.play()

player.play(sampleName: string, loop: boolean): Promise<void>;

Plays an audio sample, previously loaded with .load() method.

NOTE: In the current implementation, starting a sample playback always stops the ongoing playback of a sample previously started by the same player, if any. There is no technical barrier to supporting playback of multiple samples at the same time; it just needs some more development effort.

NOTE: Use the .addErrorListener() method to receive details of any errors that happen during the playback. Although .play() itself rejects if the playback fails to start, that rejection message does not provide any details beyond the fact of the failure, and it also does not capture any further errors (as the playback itself is asynchronous).

  • sampleName (string) — Sample name, assigned when loading it with the .load() method.
  • loop (boolean) — Set true to loop the sample indefinitely, or false to play it once.
  • Resolves once the playback is launched; rejects if the playback fails to start due to some error.

.removeErrorListener()

player.removeErrorListener(listener: ErrorListener): void;

Removes the listener from this player, or does nothing if the listener is not connected to the player.

.stop()

player.stop(sampleName: string): Promise<void>;

Stops the sample playback; does nothing if the sample is not currently being played by this player.

  • sampleName (string) — Sample name.
  • Resolves once completed.

.unload()

player.unload(sampleName: string): Promise<void>;

Unloads an audio sample previously loaded into this player.

  • sampleName (string) — Sample name.
  • Resolves once completed.

Constants

AUDIO_FORMATS

enum AUDIO_FORMATS {
  PCM_8BIT: number;
  PCM_16BIT: number;
  PCM_FLOAT: number;
};

Provides valid .audioFormat values. See Android documentation for exact definitions of these three formats; they should be the same on iOS devices.

Note: At least Android allows for other audio formats, which we may include here in future.

AUDIO_SOURCES

enum AUDIO_SOURCES {
  CAMCODER: number;
  DEFAULT: number;
  MIC: number;
  REMOTE_SUBMIX: number;
  RAW: number;
  VOICE_CALL: number;
  VOICE_COMMUNICATION: number;
  VOICE_DOWNLINK: number;
  VOICE_PERFORMANCE: number;
  VOICE_RECOGNITION: number;
  VOICE_UPLINK: number;
};

Provides valid .audioSource values. As of now, they only have an effect on Android devices, where they map to the corresponding values of MediaRecorder.AudioSource.

CHANNEL_CONFIGS

enum CHANNEL_CONFIGS {
  MONO: number;
  STEREO: number;
};

Provides valid .channelConfig values.

Note: As of now, it provides only two values, MONO and STEREO; however, at least Android seems to support additional channel configurations, which might be added in the future (see Android's AudioFormat documentation).

IS_MAC_CATALYST

const IS_MAC_CATALYST: boolean;

Equals true if the app is running on the macOS (Catalyst) platform; false otherwise.

Functions

configAudioSystem()

function configAudioSystem(): Promise<void>;

Configures the audio system (input & output devices).

Currently it does nothing on Android; on iOS it (re-)configures the audio session, setting the Play & Record category and activating the session.

Note: On iOS, if Play & Record category is not available on the device, it sets the Playback category instead; and if neither category is available, the function rejects its result promise. The function also sets the following options for the iOS audio session: AllowBluetooth, AllowBluetoothA2DP, and DefaultToSpeaker.

See iOS documentation for further details about iOS audio sessions and categories.

  • Resolves once completed.

getInputAvailable()

function getInputAvailable(): Promise<boolean>;

  • Resolves to true if the device has an available audio input source, false otherwise.
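
A sketch of using these functions together before starting a recording:

import {
  configAudioSystem,
  getInputAvailable,
} from "@dr.pogodin/react-native-audio";

async function prepareAudio(): Promise<boolean> {
  // No-op on Android; (re-)configures the audio session on iOS.
  await configAudioSystem();

  // Report whether the device has a usable audio input at all.
  return getInputAvailable();
}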

Types

ChunkListener

type ChunkListener = (chunk: Buffer, chunkId: number) => void;

The type of audio data chunk listeners that can be connected to an InputAudioStream with .addChunkListener() method.

  • chunk (Buffer) — Audio data chunk in the format specified upon the audio stream construction. The Buffer implementation for RN is provided by the buffer library.
  • chunkId (number) — Consecutive chunk number. When a stream is .muted the chunk numbers are still incremented for the discarded audio chunks, thus chunkId may be used to judge whether any chunks were missed while a stream was muted.
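
For instance, for a stream created with AUDIO_FORMATS.PCM_16BIT, the chunk's bytes can be read as signed 16-bit samples. A sketch (the little-endian byte order is an assumption to verify for your target devices):

import type { Buffer } from "buffer";

// Converts a PCM_16BIT chunk into an array of samples in the [-1, 1) range.
function chunkToFloats(chunk: Buffer): number[] {
  const samples: number[] = [];
  for (let i = 0; i + 1 < chunk.length; i += 2) {
    samples.push(chunk.readInt16LE(i) / 32768);
  }
  return samples;
}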

ErrorListener

type ErrorListener = (error: Error) => void;

The type of error listeners that can be connected to an InputAudioStream with .addErrorListener() method.

  • error (Error) — Stream error.