I want to publish an audio stream to an RTMP server, for real-time audio live streaming, from a mobile device (Android, for example).
Suppose the mobile device has a way to give me that data in real time (e.g. using the Oboe library), packet by packet (a packet contains a certain number of audio frames).
When live streaming, some very custom computations are applied to that data, which requires that I send it little by little (packet by packet?) to the RTMP server.
I’m trying to use FFmpeg for that purpose, and I have a problem similar to this thread’s question: How to publish self made stream with ffmpeg and c++ to rtmp server?. But the answer there is not detailed enough for me.
I tried reading the FFmpeg source code with the help of the documentation, but there are still some challenges I must face since I’m new to the streaming domain. What I need to know is:
- Before I go any further, is it possible to compile and use FFmpeg (or only the parts I need from it) for that purpose on Android (and iOS)? (Yes or no is enough.)
- How do I properly configure FFmpeg for that purpose?
- What is the proper way to write the stream (AVStream)? (I read somewhere that the packets need to be of a specific size, among other things.)
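To make the question concrete, here is a minimal sketch of the flow I think is required, based on my reading of the libavformat API. The URL, the stereo layout, and the overall structure are my assumptions, not tested code, and error handling is omitted:

```cpp
// Sketch only: assumes libavformat is available; every return code should
// really be checked. kRtmpUrl is a placeholder (node-media-server's default
// application is "live").
extern "C" {
#include <libavformat/avformat.h>
}

static const char* kRtmpUrl = "rtmp://localhost/live/STREAM_NAME";

int main() {
    AVFormatContext* fmt = nullptr;
    // RTMP carries FLV, so ask for the "flv" muxer explicitly.
    avformat_alloc_output_context2(&fmt, nullptr, "flv", kRtmpUrl);

    // One audio stream; my packets are already MP3-encoded, so no encoder
    // is opened here -- only the stream parameters are declared.
    AVStream* st = avformat_new_stream(fmt, nullptr);
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_MP3;
    st->codecpar->sample_rate = 44100;
    st->codecpar->bit_rate    = 320000;
    // FFmpeg >= 5.1; older versions set codecpar->channels/channel_layout.
    av_channel_layout_default(&st->codecpar->ch_layout, 2);

    avio_open(&fmt->pb, kRtmpUrl, AVIO_FLAG_WRITE);
    avformat_write_header(fmt, nullptr);

    AVPacket* pkt = av_packet_alloc();
    // For each encoded packet handed to me by the recorder:
    //   pkt->data / pkt->size = one MP3 frame
    //   pkt->pts = pkt->dts   = running sample count, in a 1/44100 time base
    //   av_packet_rescale_ts(pkt, AVRational{1, 44100}, st->time_base);
    //   av_interleaved_write_frame(fmt, pkt);

    av_write_trailer(fmt);
    av_packet_free(&pkt);
    avio_closep(&fmt->pb);
    avformat_free_context(fmt);
}
```

Is this roughly the right shape, or am I missing a required step?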
Some additional details:
- I can handle the audio packet by packet, already encoded.
- The audio is encoded as MP3.
- The audio has a default sample rate of 44100 Hz, a 320 kb/s bitrate, and some other details that are already known, so FFmpeg doesn’t need to guess them.
I’m using React Native. For Android: native modules to communicate with Java, JNI to communicate between Java and C++, and Oboe to record and play audio. For iOS: not a problem for the moment.
I use node-media-server as the RTMP server.