
RTSP to RTMP Relay on Android: Capturing and Forwarding Microphone Audio

Technical Background

For RTSP-to-RTMP relay, the first thing many developers reach for is the ffmpeg command line. If you know ffmpeg well and the product needs no extra customization or advanced features, that is a workable option. If you have stricter requirements for stability, latency, or automatic reconnection after network loss, the better approach is to build on our implementation.

Technical Implementation

Take the multi-channel RTSP-to-RTMP relay module of the daniulive (大牛直播) SDK as an example. First pull the RTSP stream and have the SDK call back the un-decoded H.264/H.265 and AAC/PCMA/PCMU data, then feed that data into the push module's encoded-data input interface and forward it synchronously; end to end this adds almost no latency. The data can also be delivered to the lightweight RTSP service if needed. The overall design is as follows:

1. Pulling: obtain the audio/video data through the data callback interface of the RTSP player SDK;

2. Relaying: feed the callback data into the encoded-data input interface of the RTMP push SDK, which forwards the RTSP stream to the RTMP server;

3. Recording: if recording is needed, simply store the pulled audio/video data as an MP4 file via the RTSP player SDK;

4. Snapshots: for real-time snapshots, decode after pulling and call the player-side snapshot interface. Because snapshots require decoding the video, leave this off unless you need it, otherwise it costs extra performance;

5. Pull preview: to preview the pulled stream, just call the player-side playback interface;

6. Transcoding to AAC before forwarding: many surveillance devices output PCMA/PCMU audio; if a more widely supported format is needed, transcode to AAC first and then push over RTMP;

7. Real-time mute while relaying to RTMP: just add a check where the audio data is passed in (see the sketch after this list);

8. Pull-speed feedback: use the real-time bitrate event reported by the RTSP player to obtain the current bandwidth usage;

9. Overall network status feedback: a camera, or the RTMP server, may shut down temporarily or unexpectedly; the event callbacks on both the pull and push sides report their status, so you can tell whether the stream cannot be pulled or cannot be pushed to the RTMP server.
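
For item 7, here is a minimal sketch of the mute check, assuming a simple boolean flag toggled from the UI (is_mute_ and deliverPCMFrame below are names introduced for illustration) and the same OnPCMData() delivery path used later in this post for microphone data:

// Illustrative only: is_mute_ is a hypothetical flag flipped by a UI control.
private volatile boolean is_mute_ = false;

private void deliverPCMFrame(ByteBuffer data, int size, int sampleRate, int channel, int per_channel_sample_number) {
	// When muted, simply skip delivery: no audio reaches the publisher, so the RTMP stream goes silent.
	if (is_mute_)
		return;

	stream_publisher_.OnPCMData(data, size, sampleRate, channel, per_channel_sample_number);
}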

The multi-channel RTMP/RTSP-to-RTMP relay module supports:

  1. Pulling RTMP streams;
  2. Pulling RTSP streams;
  3. Relaying local FLV files on Windows (supports specifying the file location to relay and seeking during relay);
  4. Local preview;
  5. Real-time mute during relay;
  6. Switching the RTMP/RTSP URL during relay; on Windows, switching the local FLV file is also supported;
  7. An optional recording module: record while relaying, with a status callback at the start and end of each recorded file;
  8. An optional intranet RTSP gateway module: pulled stream data can be fed into it so it exposes a lightweight RTSP media service (RTSP URL) for intranet access;
  9. Audio: AAC, with support for transcoding pulled audio (PCMU/PCMA, Speex, etc.) to AAC before forwarding;
  10. Video: H.264 and H.265, including H.265 relay (RTSP/RTMP H.265 to RTMP H.265).

All of the above has been mature on our side since 2016. What this post covers is a requirement from a real developer scenario: take the video from the RTSP source, but take the audio from the microphone.

Without further ado, the code:

Start/stop pulling is designed as follows. If the RTSP stream's audio is used, we enable the audio data callback; if the microphone is used, only the video callback needs to be enabled. In this demo, an option value of 2 (audio_opt_ / video_opt_) enables the corresponding pulled-stream data callback, while audio_opt_ == 1 switches the audio to microphone capture.

/*
 * SmartRelayDemo.java
 * Created by daniusdk.com
 * weChat: xinsheng120
 */
private boolean StartPull()
{
	if ( isPulling )
		return false;

	if(!isPlaying)
	{
		if (!OpenPullHandle())
			return false;
	}

	if(audio_opt_ == 2)
	{
		libPlayer.SmartPlayerSetAudioDataCallback(player_handle_, new PlayerAudioDataCallback(stream_publisher_));
	}
	if(video_opt_ == 2)
	{
		libPlayer.SmartPlayerSetVideoDataCallback(player_handle_, new PlayerVideoDataCallback(stream_publisher_));
	}

	// transcode the pulled audio (e.g. PCMA/PCMU) to AAC before forwarding
	int is_pull_trans_code = 1;
	libPlayer.SmartPlayerSetPullStreamAudioTranscodeAAC(player_handle_, is_pull_trans_code);

	int startRet = libPlayer.SmartPlayerStartPullStream(player_handle_);

	if (startRet != 0) {
		Log.e(TAG, "Failed to start pull stream!");

		if(!isPlaying)
		{
			releasePlayerHandle();
		}

		return false;
	}

	isPulling = true;
	return true;
}

private void StopPull()
{
	if ( !isPulling )
		return;

	isPulling = false;

	if (null == libPlayer || 0 == player_handle_)
		return;

	libPlayer.SmartPlayerStopPullStream(player_handle_);

	if ( !isPlaying)
	{
		releasePlayerHandle();
	}
}

OpenPullHandle() is implemented as follows: the usual parameter settings plus the event callback setup.

private boolean OpenPullHandle()
{
	// playbackUrl can be customized
	playbackUrl = "rtsp://admin:daniulive12345@192.168.0.120:554/h264/ch1/main/av_stream";

	if (playbackUrl == null) {
		Log.e(TAG, "playback URL is null...");
		return false;
	}

	player_handle_ = libPlayer.SmartPlayerOpen(context_);

	if (player_handle_ == 0) {
		Log.e(TAG, "playerHandle is null..");
		return false;
	}

	libPlayer.SetSmartPlayerEventCallbackV2(player_handle_,
			new EventHandlePlayerV2());

	libPlayer.SmartPlayerSetBuffer(player_handle_, playBuffer);

	// enable download speed reporting
	libPlayer.SmartPlayerSetReportDownloadSpeed(player_handle_, 1, 2);

	// set the RTSP timeout
	int rtsp_timeout = 10;
	libPlayer.SmartPlayerSetRTSPTimeout(player_handle_, rtsp_timeout);

	// enable automatic switching between RTSP TCP and UDP modes
	int is_auto_switch_tcp_udp = 1;
	libPlayer.SmartPlayerSetRTSPAutoSwitchTcpUdp(player_handle_, is_auto_switch_tcp_udp);

	libPlayer.SmartPlayerSaveImageFlag(player_handle_, 1);

	// Only used when playing an RTSP stream.
	//libPlayer.SmartPlayerSetRTSPTcpMode(playerHandle, 1);

	libPlayer.SmartPlayerSetUrl(player_handle_, playbackUrl);

	return true;
}
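
The EventHandlePlayerV2 class registered above is not listed in this post. Below is a minimal sketch of what it might look like, assuming the V2 event-callback shape used in the vendor's public demos (an NTSmartEventCallbackV2 interface with a single onNTSmartEventCallbackV2 method); the parameters are simply logged here, and the concrete event IDs and parameter meanings should be checked against the SDK's EVENTID definitions:

// Sketch only: the interface and method signature below are assumed from the
// vendor's public demo code; in a real app, switch on id to react to
// connect/disconnect events and to the download-speed report enabled above.
class EventHandlePlayerV2 implements NTSmartEventCallbackV2 {
	@Override
	public void onNTSmartEventCallbackV2(long handle, int id, long param1, long param2, String param3, String param4, Object param5) {
		Log.i(TAG, "player event, id: " + id + ", param1: " + param1 + ", param2: " + param2 + ", param3: " + param3 + ", param4: " + param4);
	}
}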

After pulling the stream, relaying to RTMP is designed as follows:

btnRTMPPusher.setOnClickListener(new Button.OnClickListener() {

	// @Override
	public void onClick(View v) {

		if (stream_publisher_.is_rtmp_publishing()) {
			stopPush();

			btnRTMPPusher.setText("推送RTMP");
			return;
		}

		Log.i(TAG, "onClick start push rtmp..");
		InitAndSetConfig();

		String rtmp_pusher_url = "rtmp://192.168.0.104:1935/hls/stream1";

		//String rtmp_pusher_url = relayStreamUrl;

		if (!stream_publisher_.SetURL(rtmp_pusher_url))
			Log.e(TAG, "Failed to set publish stream URL..");

		boolean start_ret = stream_publisher_.StartPublisher();
		if (!start_ret) {
			stream_publisher_.try_release();
			Log.e(TAG, "Failed to start push stream..");
			return;
		}

		startAudioRecorder();

		btnRTMPPusher.setText("停止推送");
	}
});
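
stopPush() itself is not listed in the post. Here is a sketch of a plausible teardown, assuming LibPublisherWrapper exposes a StopPublisher() counterpart to StartPublisher() (that method name is an assumption; try_release() does appear in the demo code above):

// Hypothetical counterpart to the start path above.
private void stopPush() {
	// Stop microphone capture first so no more PCM frames reach the publisher.
	stopAudioRecorder();

	if (!stream_publisher_.is_rtmp_publishing())
		return;

	stream_publisher_.StopPublisher();	// assumed API, the inverse of StartPublisher()
	stream_publisher_.try_release();	// release the underlying publisher handle
}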

InitAndSetConfig() is designed as follows:

private void InitAndSetConfig() {
	if (null == libPublisher)
		return;

	if (!stream_publisher_.empty())
		return;

	Log.i(TAG, "InitAndSetConfig video width: " + video_width_ + ", height" + video_height_);

	long handle = libPublisher.SmartPublisherOpen(context_, audio_opt_, video_opt_,  video_width_, video_height_);
	if (0==handle) {
		Log.e(TAG, "sdk open failed!");
		return;
	}

	Log.i(TAG, "publisherHandle=" + handle);

	int fps = 25;
	int gop = fps * 3;

	// initialize_publisher() (not listed here) applies the encoder-related parameters
	// (resolution, fps, GOP) to the new handle before it is handed to the wrapper
	initialize_publisher(libPublisher, handle, video_width_, video_height_, fps, gop);

	stream_publisher_.set(libPublisher, handle);
}

As shown above, when we start relaying to RTMP we also call startAudioRecorder() to capture the microphone:

void startAudioRecorder() {

	if(audio_opt_ != 1)
		return;

	if (audio_recorder_ != null)
		return;

	audio_recorder_ = new NTAudioRecordV2(this);

	Log.i(TAG, "startAudioRecorder call audio_recorder_.start()+++...");

	audio_recorder_callback_ = new NTAudioRecordV2CallbackImpl(stream_publisher_, null);

	audio_recorder_.AddCallback(audio_recorder_callback_);

	// 8 kHz single-channel capture for PCMA, otherwise 44.1 kHz single-channel
	if (!audio_recorder_.Start(is_pcma_ ? 8000 : 44100, 1)) {
		audio_recorder_.RemoveCallback(audio_recorder_callback_);
		audio_recorder_callback_ = null;

		audio_recorder_ = null;

		Log.e(TAG, "startAudioRecorder start failed.");
	}
	else {
		Log.i(TAG, "startAudioRecorder call audio_recorder_.start() OK---...");
	}
}

void stopAudioRecorder() {
	if (null == audio_recorder_)
		return;

	Log.i(TAG, "stopAudioRecorder+++");

	audio_recorder_.Stop();

	if (audio_recorder_callback_ != null) {
		audio_recorder_.RemoveCallback(audio_recorder_callback_);
		audio_recorder_callback_ = null;
	}

	audio_recorder_ = null;

	Log.i(TAG, "stopAudioRecorder---");
}

When the captured audio is called back, we hand the data to the RTMP push interface:

private static class NTAudioRecordV2CallbackImpl implements NTAudioRecordV2Callback {
	private WeakReference<LibPublisherWrapper> publisher_0_;
	private WeakReference<LibPublisherWrapper> publisher_1_;

	public NTAudioRecordV2CallbackImpl(LibPublisherWrapper publisher_0, LibPublisherWrapper publisher_1) {
		if (publisher_0 != null)
			publisher_0_ = new WeakReference<>(publisher_0);

		if (publisher_1 != null)
			publisher_1_ = new WeakReference<>(publisher_1);
	}

	private final LibPublisherWrapper get_publisher_0() {
		if (publisher_0_ !=null)
			return publisher_0_.get();

		return null;
	}

	private final LibPublisherWrapper get_publisher_1() {
		if (publisher_1_ != null)
			return publisher_1_.get();

		return null;
	}

	@Override
	public void onNTAudioRecordV2Frame(ByteBuffer data, int size, int sampleRate, int channel, int per_channel_sample_number) {

		 //Log.i(TAG, "onNTAudioRecordV2Frame size=" + size + " sampleRate=" + sampleRate + " channel=" + channel
		 //			 + " per_channel_sample_number=" + per_channel_sample_number);

		LibPublisherWrapper publisher_0 = get_publisher_0();
		if (publisher_0 != null)
			publisher_0.OnPCMData(data, size, sampleRate, channel, per_channel_sample_number);

		LibPublisherWrapper publisher_1 = get_publisher_1();
		if (publisher_1 != null)
			publisher_1.OnPCMData(data, size, sampleRate, channel, per_channel_sample_number);

	}
}

The encoded video is delivered as follows:

class PlayerVideoDataCallback implements NTVideoDataCallback
{
	private WeakReference<LibPublisherWrapper> publisher_;
	private int video_buffer_size = 0;
	private ByteBuffer video_buffer_ = null;

	public PlayerVideoDataCallback(LibPublisherWrapper publisher) {
		if (publisher != null)
			publisher_ = new WeakReference<>(publisher);
	}

	@Override
	public ByteBuffer getVideoByteBuffer(int size)
	{
		if( size < 1 )
		{
			return null;
		}

		if ( size <= video_buffer_size &&  video_buffer_ != null )
		{
			return  video_buffer_;
		}

		video_buffer_size = size + 1024;
		video_buffer_size = (video_buffer_size+0xf) & (~0xf);

		video_buffer_ = ByteBuffer.allocateDirect(video_buffer_size);

		return video_buffer_;
	}

	@Override
	public void onVideoDataCallback(int ret, int video_codec_id, int sample_size, int is_key_frame, long timestamp, int width, int height, long presentation_timestamp)
	{
		if (video_buffer_ == null)
			return;

		// guard against a null WeakReference if the callback was constructed without a publisher
		if (publisher_ == null)
			return;

		LibPublisherWrapper publisher = publisher_.get();
		if (null == publisher)
			return;

		if (!publisher.is_publishing())
			return;

		video_buffer_.rewind();

		publisher.PostVideoEncodedData(video_codec_id, video_buffer_, sample_size, is_key_frame, timestamp, presentation_timestamp);

	}
}
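
A note on the buffer handling above: getVideoByteBuffer() hands the native layer a single reusable direct ByteBuffer and only reallocates when a larger frame arrives, growing the capacity with some headroom in 16-byte-aligned steps, so encoded frames are forwarded without a per-frame allocation.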

Summary

As the demo UI of this Android RTSP-to-RTMP relay sample shows, the demo is not limited to plain RTSP-to-RTMP relay. It can also pull the RTSP stream, call back the decoded data, apply a dynamic watermark or other processing, re-encode the video and push it out, and post-process the audio in the same way.

In addition, it can preview the pulled stream, inject the data into the lightweight RTSP service module, and record to local files or take snapshots, including of the re-encoded data. A good RTSP-to-RTMP relay module has to be flexible and extensible enough to meet customers' technical requirements quickly. The above is intended as a starting point; interested developers are welcome to discuss it with me directly.


