
I'm using the react-native-live-audio-stream package to record.

I followed the instructions in the npm README:

// yarn add buffer
import { Buffer } from 'buffer';
  ...
LiveAudioStream.on('data', data => {
  // each chunk arrives as a base64-encoded string of raw PCM
  const chunk = Buffer.from(data, 'base64');
});

For now, all I want to do is collect everything until recording stops, save it to a file, and convert it back into playable audio.

Can anyone help me with this?

My init is as follows:

        LiveAudioStream.init({
          wavFile: 'audio.wav',
          sampleRate: 32000, // default is 44100 but 32000 is adequate for accurate voice recognition
          channels: 1, // 1 or 2, default 1
          bitsPerSample: 16, // 8 or 16, default 16
          // For audioSource values see:
          // https://developer.android.com/reference/android/media/MediaRecorder.AudioSource
          audioSource: 1, // 1: MIC audio source for Android
          bufferSize: 4096, // default is 2048
        });

I have looked through every Stack Overflow and Reddit thread on this, but none gives a conclusive answer; just fragments like the snippet above.

Thanks in advance!

  • Why does everyone keep insisting on base64 for anything related to audio/video? It's the wrong tool for most jobs. All you end up doing is adding a ton of overhead with no benefit. Commented Nov 26, 2024 at 7:13
  • Teach me oh wise one, and I will be taught... Commented Nov 27, 2024 at 10:20
