React Native Audio


React Native (RN) Audio library for Android and iOS, with support for both the new and the old RN architecture. It covers:

  • Input audio stream (microphone) listening / recording.
  • Audio samples playback.
  • Utility functions for audio system management.
  • More stuff to come...

Installation




  • Install the package and its peer dependencies

    npx install-peerdeps @dr.pogodin/react-native-audio
  • Follow the react-native-permissions documentation to set up your app for asking the user for the RECORD_AUDIO (Android) and/or Microphone (iOS) permissions. The react-native-audio library will automatically ask for these permissions, if needed, when a stream's .start() method is called, provided the app has been correctly configured to ask for them.

Getting Started

A better Getting Started tutorial is yet to be written; however, the main idea is this:

import {
  AUDIO_FORMATS,
  AUDIO_SOURCES,
  CHANNEL_CONFIGS,
  InputAudioStream,
} from "@dr.pogodin/react-native-audio";

function createAndStartAudioStream() {
  const stream = new InputAudioStream(
    AUDIO_SOURCES.DEFAULT, // Audio source (Android only; ignored on iOS).
    44100, // Sample rate in Hz.
    CHANNEL_CONFIGS.MONO, // Mono or Stereo mode.
    AUDIO_FORMATS.PCM_16BIT, // Audio format.
    4096, // Sampling size.
  );

  stream.addErrorListener((error) => {
    // Do something with a stream error.
  });

  stream.addChunkListener((chunk, chunkId) => {
    // Pause the stream for the chunk processing. The point is: if the chunk
    // processing in this function is too slow, and chunks arrive faster than
    // this callback is able to handle them, the app will rapidly crash with
    // an out-of-memory error. Muting the stream discards any new chunks
    // until stream.unmute() is called, thus protecting from the crash.
    // And if your chunk processing is rapid enough, no chunks will be
    // skipped. The "chunkId" argument is just a sequential chunk number,
    // by which you may judge whether any chunks have been skipped between
    // calls of this callback.
    stream.mute();

    // Do something with the chunk.

    stream.unmute(); // Resumes the stream.
  });

  stream.start();

  // Call stream.destroy() to stop the stream and release any associated
  // resources. If you need to temporarily stop and then resume the stream,
  // use .mute() and .unmute() methods instead.
}

On top of this, the library provides other auxiliary functions related to audio input and output.

API Reference



class InputAudioStream;

The InputAudioStream class, as its name suggests, represents individual input audio streams, capturing audio data in the configured format from the specified audio source.


const stream = new InputAudioStream(
  audioSource: AUDIO_SOURCES,
  sampleRate: number,
  channelConfig: CHANNEL_CONFIGS,
  audioFormat: AUDIO_FORMATS,
  samplingSize: number,
  stopInBackground: boolean = true,
);
Creates a new InputAudioStream instance. The newly created stream does not record audio, nor consume resources on the native side, until its .start() method is called.

  • audioSource: AUDIO_SOURCES — The audio source this stream will listen to. Currently it is supported on Android only; on iOS this value is ignored, and the stream captures audio data from the default input source of the device.
  • sampleRate: number — Sample rate [Hz]. 44100 Hz is the recommended value, as it is the only rate that is guaranteed to work on all Android (and many other) devices.
  • channelConfig: CHANNEL_CONFIGS — Mono or Stereo stream mode.
  • audioFormat: AUDIO_FORMATS — Audio format.
  • samplingSize: number — Sampling (data chunk) size, expressed as the number of samples per channel in the chunk.
  • stopInBackground: boolean — Optional. If true (default), the stream will automatically pause itself when the app leaves the foreground, and resume when the app returns to the foreground.


stream.addChunkListener(listener: ChunkListener): void;

Adds a new audio data chunk listener to the stream. See .removeChunkListener() to subsequently remove the listener from the stream.

Note: It is safe to call it repeatedly for the same listener & stream pair — the listener still won't be added to the stream more than once.

  • listener: ChunkListener — The callback to call with audio data chunks when they arrive.


stream.addErrorListener(listener: ErrorListener): void;

Adds a new error listener to the stream. See .removeErrorListener() to subsequently remove the listener from the stream.

Note: It is safe to call it repeatedly for the same listener & stream pair — the listener still won't be added to the stream more than once.

  • listener: ErrorListener — The callback to call with error details, if any error happens in the stream.


stream.destroy(): void;

Destroys the stream — stops the recording, and releases all related resources, both at the native and JS sides. Once a stream is destroyed, it cannot be re-used.


stream.mute(): void;

Mutes the stream. A muted stream still continues to capture audio data chunks from the audio source, and thus keeps incrementing chunk IDs (see ChunkListener), but it discards all data chunks immediately after the capture, without sending them to the JavaScript layer, thus causing the minimal performance and memory overhead possible without interrupting the recording.

Calling .mute() on a muted, or non-active (not recording) audio stream has no effect. See also .active, .muted.


stream.removeChunkListener(listener: ChunkListener): void;

Removes the listener from the stream. No operation if given listener is not connected to the stream. See .addChunkListener() to add the listener.


stream.removeErrorListener(listener: ErrorListener): void;

Removes the listener from the stream. No operation if given listener is not connected to the stream. See .addErrorListener() to connect the listener.


stream.start(): Promise<boolean>;

Starts the audio stream recording. This method actually initializes the stream on the native side, and starts the recording.

Note: If necessary, this method will ask the app user for the audio recording permission, using the react-native-permissions library.

  • Resolves to a boolean value — true if the stream has started successfully and is .active, false otherwise.
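Because .start() resolves to false on failure rather than rejecting, callers that prefer exceptions can wrap it. A minimal sketch (the startOrThrow helper below is ours, not part of the library; it is typed structurally, so it accepts any object with a compatible .start() method):

```typescript
// Hypothetical helper (not part of the library): turns the boolean result
// of .start() into an exception, so callers can rely on try/catch.
type Startable = { start(): Promise<boolean> };

async function startOrThrow(stream: Startable): Promise<void> {
  const ok = await stream.start();
  if (!ok) throw new Error("Failed to start the input audio stream");
}

// Usage with an InputAudioStream instance:
// await startOrThrow(stream);
```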


stream.stop(): Promise<void>;

Stops the stream. Unlike the .mute() method, .stop() actually stops the audio stream and releases its resources on the native side; however, unlike the .destroy() method, it does not release its resources in the JS layer (i.e. it does not drop references to the connected listeners), thus allowing to .start() this stream instance again (technically that will initialize a new stream on the native side, but this is opaque to the end user on the JS side).

  • Resolves once the stream is stopped.


stream.unmute(): void;

Unmutes a previously .muted stream. It has no effect if called on an inactive (not started) or non-muted stream.

stream.active: boolean;

Read-only. true when the stream is started and recording, false otherwise.

Note: .active will be true for a started and .muted stream.


stream.audioFormat: AUDIO_FORMATS;

Read-only. Holds the audio format value provided to InputAudioStream's constructor(). AUDIO_FORMATS enum provides valid format values.


stream.audioSource: AUDIO_SOURCES;

Read-only. Holds the audio source value provided to InputAudioStream's constructor(). As of now it only has an effect on Android devices, and is ignored on iOS. AUDIO_SOURCES enum provides valid audio source values.


stream.channelConfig: CHANNEL_CONFIGS;

Read-only. Holds the channel mode (Mono or Stereo) value provided to InputAudioStream's constructor(). CHANNEL_CONFIGS enum provides valid channel mode values.


stream.muted: boolean;

Read-only. true when the stream is muted by .mute(), false otherwise.


stream.sampleRate: number;

Read-only. Holds the stream's sample rate provided to the stream constructor(), in [Hz].


stream.samplingSize: number;

Read-only. Holds the stream's sampling (audio data chunk) size, provided to the stream constructor(). The value is the number of samples per channel, thus for multi-channel streams the actual chunk size will be a multiple of this number, and also the sample size in bytes may vary for different .audioFormat.
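For instance, the byte length of each chunk delivered to a ChunkListener can be derived from these parameters. A hypothetical helper (not part of the library), assuming 1, 2, and 4 bytes per sample for PCM_8BIT, PCM_16BIT, and PCM_FLOAT respectively:

```typescript
// Hypothetical helper (not part of the library): expected chunk size in
// bytes, derived from the sampling size, channel count, and sample width.
function chunkByteSize(
  samplingSize: number, // Samples per channel in a chunk.
  numChannels: 1 | 2, // 1 for MONO, 2 for STEREO.
  bytesPerSample: 1 | 2 | 4, // 1: PCM_8BIT, 2: PCM_16BIT, 4: PCM_FLOAT.
): number {
  return samplingSize * numChannels * bytesPerSample;
}

// E.g. a stereo PCM_16BIT stream with samplingSize 4096 yields
// 4096 × 2 × 2 = 16384-byte chunks.
```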


stream.stopInBackground: boolean;

Read-only. true if the stream is set to automatically .stop() when the app leaves the foreground, and .start() again when it returns to the foreground.


class SamplePlayer;

Represents an audio sample player. It is intended for loading into memory a set of short audio fragments, which then can be played on demand with a low latency.

On Android we use SoundPool for the underlying implementation; you may check its documentation for further details. In particular, note: each decoded sound is internally limited to one megabyte of storage, which represents approximately 5.6 seconds at 44.1 kHz stereo (the duration is proportionally longer at lower sample rates or with a mono channel mask).


const player = new SamplePlayer();

Creates a new SamplePlayer instance. Note that creating a SamplePlayer instance already allocates some resources on the native side; to release them you MUST USE its .destroy() method once the instance is no longer needed.


player.addErrorListener(listener: ErrorListener): void;

Adds an error listener to the player. Does nothing if given listener is already added to this player.


player.destroy(): Promise<void>;

Destroys player instance, releasing all related resources. Once destroyed the player instance can't be reused.

  • Resolves once completed.


player.load(sampleName: string, samplePath: string): Promise<void>;

Loads an (additional) audio sample into the player.

  • sampleName: string — Sample name, by which you'll refer to the loaded sample in other methods, like .play(), .stop(), and .unload(). If it matches the name of a previously loaded sample, that sample will be replaced.
  • samplePath: string — Path to the sample file on the device. For now, only loading samples from regular files is supported (e.g. it is not possible to load from an Android asset without first copying the asset into a regular file).
  • Resolves once the sample is loaded and decoded, thus ready to be played.

player.play(sampleName: string, loop: boolean): Promise<void>;

Plays an audio sample, previously loaded with .load() method.

NOTE: In the current implementation, starting a sample playback always stops the ongoing playback of a sample previously played by the same player, if any. There is no technical barrier to support playback of multiple samples at the same time, it just needs some more development effort.

NOTE: Use the .addErrorListener() method to receive details of any errors that happen during the playback. Although .play() itself rejects if the playback fails to start, that rejection message does not provide any details beyond the fact of the failure, and it also does not capture any further errors (as the playback itself is asynchronous).

  • sampleName: string — Sample name, assigned when loading it with the .load() method.
  • loop: boolean — Set true to loop the sample indefinitely, or false to play it once.
  • Resolves once the playback is launched; rejects if the playback fails to start due to some error.
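A common pattern is to load a sample and play it right away. A sketch (the loadAndPlay helper below is ours, not part of the library, and is typed structurally against the two SamplePlayer methods it uses):

```typescript
// Hypothetical helper (not part of the library): loads a sample file and
// immediately plays it once.
type Loader = {
  load(name: string, path: string): Promise<void>;
  play(name: string, loop: boolean): Promise<void>;
};

async function loadAndPlay(
  player: Loader,
  name: string,
  path: string,
): Promise<void> {
  await player.load(name, path); // Resolves once decoded and ready.
  await player.play(name, false); // Play once, no looping.
}
```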


player.removeErrorListener(listener: ErrorListener): void;

Removes listener from this player, or does nothing if the listener is not connected to the player.


player.stop(sampleName: string): Promise<void>;

Stops the sample playback; does nothing if the sample is not being played by this player.

  • sampleName: string — Sample name.
  • Resolves once completed.


player.unload(sampleName: string): Promise<void>;

Unloads an audio sample previously loaded into this player.

  • sampleName: string — Sample name.
  • Resolves once completed.
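When a sample is no longer needed, stopping it before unloading keeps the teardown order explicit. A hypothetical helper (not part of the library; typed structurally, so any object with compatible .stop() and .unload() methods works):

```typescript
// Hypothetical helper (not part of the library): stops a sample (a no-op
// if it is not playing) and then unloads it, freeing its decoded data.
type Unloadable = {
  stop(name: string): Promise<void>;
  unload(name: string): Promise<void>;
};

async function stopAndUnload(
  player: Unloadable,
  name: string,
): Promise<void> {
  await player.stop(name);
  await player.unload(name);
}
```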



enum AUDIO_FORMATS {
  PCM_8BIT: number;
  PCM_16BIT: number;
  PCM_FLOAT: number;
}

Provides valid .audioFormat values. See Android documentation for exact definitions of these three formats; they should be the same on iOS devices.

Note: At least Android allows for other audio formats, which we may include here in future.


enum AUDIO_SOURCES {
  CAMCODER: number;
  DEFAULT: number;
  MIC: number;
  REMOTE_SUBMIX: number;
  RAW: number;
  VOICE_CALL: number;
  VOICE_UPLINK: number;
}

Provides valid .audioSource values. As of now, they have effect for Android devices only, and for them they represent corresponding values of MediaRecorder.AudioSource.


enum CHANNEL_CONFIGS {
  MONO: number;
  STEREO: number;
}

Provides valid .channelConfig values.

Note: As of now, it provides only two values, MONO and STEREO, however, at least Android seems to support additional channels, which might be added in future, see Android's AudioFormat documentation.


const IS_MAC_CATALYST: boolean;

Equals true if the app is running on the macOS (Catalyst) platform; false otherwise.



function configAudioSystem(): Promise<void>;

Configures audio system (input & output devices).

Currently it does nothing on Android; on iOS it (re-)configures the audio session, setting the Play & Record category and activating the session.

Note: On iOS, if Play & Record category is not available on the device, it sets the Playback category instead; and if neither category is available, the function rejects its result promise. The function also sets the following options for the iOS audio session: AllowBluetooth, AllowBluetoothA2DP, and DefaultToSpeaker.

See iOS documentation for further details about iOS audio sessions and categories.

  • Resolves once completed.
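A typical startup order is to configure the audio system first, and only then start capturing. A sketch with the two steps injected as functions (the initAudio name is ours), so the sequencing is explicit and testable without the native modules:

```typescript
// Sketch (not part of the library): run configAudioSystem() before
// starting an input stream, so the iOS audio session is set up first.
async function initAudio(
  configAudioSystem: () => Promise<void>,
  startStream: () => Promise<boolean>,
): Promise<boolean> {
  await configAudioSystem(); // iOS: set & activate the audio session.
  return startStream(); // Then start capturing.
}

// Usage:
// import { configAudioSystem } from "@dr.pogodin/react-native-audio";
// const started = await initAudio(configAudioSystem, () => stream.start());
```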


function getInputAvailable(): Promise<boolean>;
  • Resolves to true if the device has an available audio input source, false otherwise.



type ChunkListener = (chunk: Buffer, chunkId: number) => void;

The type of audio data chunk listeners that can be connected to an InputAudioStream with .addChunkListener() method.

  • chunk: Buffer — Audio data chunk in the format specified upon the audio stream construction. The Buffer implementation for RN is provided by the buffer library.
  • chunkId: number — Consecutive chunk number. When a stream is .muted the chunk numbers are still incremented for discarded audio chunks, thus chunkId may be used to judge whether any chunks were missed while a stream was muted.
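Since chunk IDs are consecutive, a gap between the IDs seen by a listener tells exactly how many chunks were discarded. A hypothetical counter (not part of the library):

```typescript
// Hypothetical helper (not part of the library): tracks chunk IDs arriving
// at a ChunkListener and counts chunks skipped between consecutive calls.
class SkippedChunkCounter {
  private lastId: number | null = null;
  skipped = 0;

  // Call this from the ChunkListener with each received chunkId.
  onChunk(chunkId: number): void {
    if (this.lastId !== null && chunkId > this.lastId + 1) {
      this.skipped += chunkId - this.lastId - 1;
    }
    this.lastId = chunkId;
  }
}
```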


type ErrorListener = (error: Error) => void;

The type of error listeners that can be connected to an InputAudioStream with .addErrorListener() method.

  • error: Error — Stream error.