Android Multimedia Framework: Notes on the Implementation of StagefrightPlayer and OMXCodec
1.1 How StageFright, OpenCore, and NuPlayer relate

Stagefright is plugged in at the MediaPlayerService layer, parallel to OpenCore, and switching between the two in code was straightforward. Android's underlying MediaPlayer playback framework has changed several times: from the original OpenCore, to StageFright, to today's NuPlayerDriver. OpenCore had already been removed by the time I started working with Android, so my knowledge of it is limited to hearsay. As these frameworks evolve, two of them usually coexist for a while before one is removed in some release. Early on, Android shipped Stagefright and NuPlayer side by side: Stagefright played local media files while NuPlayer handled network streams. Starting with Android L, NuPlayer gradually replaced Stagefright; local playback has now been switched over to NuPlayer as well, and the Android N AOSP sources removed Stagefright's player entirely.

1.2 OpenMAX

The codec part of the Android system uses OpenMAX (to be studied in more depth later). OpenMAX is a set of standard interfaces: each hardware vendor can implement them in a way that plays to its own silicon's strengths, then hand the result to the Android system. Hardware encoding and decoding is the strong point of most set-top-box chips, and this is how that strength gets folded into the Android platform; hardware decoding of HD video on phones is clearly the trend. OpenMAX has three layers:

- Layer 1: OpenMAX DL (Development Layer)
- Layer 2: OpenMAX IL (Integration Layer)
- Layer 3: OpenMAX AL (Application Layer)

OpenMAX IL sits in the middle. It is the low-level interface to the audio, video, and image codecs used in embedded and/or mobile devices: it gives applications and media frameworks a uniform way to interact with multimedia codecs, which may be any mix of hardware and software, transparently to the user. Before this kind of standardized interface existed, codec vendors had to write private or closed interfaces to integrate into mobile devices. The IL's main purpose is to provide a system abstraction for codecs by means of a feature set, solving the portability problem across many different media systems.

1.3 StageFright

The structure of the Stagefright-based MediaPlayer framework (the original diagram is omitted here): Stagefright is a player. Playback involves three processes: the app process, the media framework service (Stagefright), and the OMX service; in practice a dedicated facility for managing memory shared across processes (MemoryDealer) is also used. Note that "client" appears often below and does not always mean the app side; keep the two senses apart.

- Application and framework layers: many applications use MediaPlayer, most obviously Music and Video. Their implementations live under packages/apps in the AOSP sources, and they use the MediaPlayer API provided by frameworks/base/media/. The API is simple; knowing what each call does is enough to build a reasonably complete music player.
- Native Media Player layer: the native implementation behind the application layer; it talks to the service via the Binder mechanism.
- Media Player Service: IPC requests issued from the native layer are handled by MediaPlayerService, which is initialized in the main() of frameworks/av/media/mediaserver/main_mediaserver.cpp; that main() also starts several other Android system services such as AudioFlinger and CameraService. Bringing up the Media Player Service subsystem means creating the MediaPlayerService object and registering the factories for the built-in low-level media player frameworks. Once the service is running, MediaPlayerService accepts IPC requests from the native MediaPlayer layer and instantiates a MediaPlayerService::Client for every request that operates on media content. Client has a createPlayer method that uses the appropriate factory class to create a native media player for the given type; all subsequent requests toward the native layer are handled by that player, which here means StagefrightPlayer or NuPlayerDriver (NuPlayerDriver is not discussed in this article).

Files analyzed: Stagefright is mostly a wrapper around AwesomePlayer, an event-driven player. This article analyzes its handling of the video stream, centered on AwesomePlayer::onVideoEvent. Reading packets from the stream, parsing, and decoding are all done inside mVideoSource->read(&mVideoBuffer, &options), implemented in OMXCodec.cpp. The read (extract) and parse steps are carried out by other components AwesomePlayer drives (such as MPEG4Extractor); the mVideoBuffer argument receives the decoded frame; decoding itself goes through the OMXCodec service interface, meaning decoding involves one more cross-process Binder call.

Files related to the OMXCodec service:
- Interface definition: IOMX.h
- Client classes: OMXCodec.cpp, OMXClient.cpp, IOMX.cpp (the BpOMX and BnOMX classes)
- Server classes: OMX.cpp, OMXNodeInstance.cpp (see, for example, fillOutputBuffer; cf. the source bookmarks and annotations)

All of these classes are built on the Binder mechanism. Since this was the first piece of the Android framework I studied, a brief introduction to Binder comes first.

2. A brief introduction to Binder

Binder is an IPC mechanism on the Android system, one way for processes to interact. When developing Android applications, keep the client/server structure firmly in mind: an Android app gets its work done through the series of services the system provides, and the player APK we just discussed likewise relies on the player service Android provides. The APK is one independent process, and Android's system services are many other independent processes; Binder's job is to connect client and service.

2.1 A first example

See: Binder简单实例 - lansehai的专栏 - CSDN.NET

2.2 Summary

a. A subclass of IInterface declares the functions to be called across processes as pure virtual functions (e.g. the IOMX class in IOMX.h).
b. If the server and the client are created in the same process, interface_cast directly returns the Bn instance, which is equivalent to declaring an object of that type directly (see OMXClient#getOMX()).
c. BnXxx is the service-side class and BpBinder is the proxy: the service implements the business logic, and the client implements the sending side.
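To make points a-c concrete, here is a minimal sketch of the pattern (the ISample interface, its transaction code, and its method are all hypothetical, invented for illustration; the real instance of the pattern in this article is IOMX.h / IOMX.cpp):

// Minimal Binder interface sketch (hypothetical ISample; illustration only).
// BpSample sends a Parcel across; BnSample::onTransact receives it and calls
// the pure virtual that the concrete service implements.
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

class ISample : public IInterface {
public:
    DECLARE_META_INTERFACE(Sample);
    virtual status_t doSomething(int32_t value) = 0;  // called across processes
};

enum { DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION };

class BpSample : public BpInterface<ISample> {        // client-side proxy
public:
    explicit BpSample(const sp<IBinder> &impl) : BpInterface<ISample>(impl) {}

    virtual status_t doSomething(int32_t value) {
        Parcel data, reply;
        data.writeInterfaceToken(ISample::getInterfaceDescriptor());
        data.writeInt32(value);
        remote()->transact(DO_SOMETHING, data, &reply);  // the actual IPC
        return reply.readInt32();
    }
};

IMPLEMENT_META_INTERFACE(Sample, "example.ISample");

class BnSample : public BnInterface<ISample> {        // service-side stub
public:
    virtual status_t onTransact(
            uint32_t code, const Parcel &data, Parcel *reply, uint32_t flags) {
        switch (code) {
            case DO_SOMETHING: {
                CHECK_INTERFACE(ISample, data, reply);
                reply->writeInt32(doSomething(data.readInt32()));
                return NO_ERROR;
            }
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

A concrete service then subclasses BnSample and implements doSomething(); interface_cast<ISample>(binder) returns that BnSample directly when caller and service share a process, and a BpSample proxy otherwise.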
2.3 IMemory

Calling an OMXCodec component requires memory shared across processes. Android builds its cross-process shared memory out of Binder plus anonymous shared memory, in the classes MemoryHeapBase and MemoryBase. The former is the anonymous shared-memory class, allocated in page units; the latter is anonymous shared memory built on top of a MemoryHeapBase and addressed by an offset. Together they solve the problem of sharing one service's memory among multiple clients.

The important functions:

- getMemory() fetches the IMemoryHeap interface of the internal MemoryHeapBase object. If the member mHeap is NULL, the BpMemory object has not yet attached to its anonymous shared memory, so a Binder call is made to request the shared-memory information from the server side. The most important piece of that information is the reference heap to the server's MemoryHeapBase object; from it a BpMemoryHeap remote interface is created in the client process and cached in mHeap. The server also returns this block's offset and size within the whole anonymous shared-memory region. At that point the BpMemory object's shared memory is ready to use.
- pointer() returns the base address of the anonymous shared memory it maintains; size() returns its size; offset() returns this block's offset within the whole anonymous shared-memory region.

A usage example: in BpSharedBuffer::transact, the client issues a Binder call with request code GET_BUFFER, asking the server for the remote interface IMemory of an anonymous shared-memory object; what comes back actually points to a BpMemory object, which is returned to the caller. In BnSharedBuffer::onTransact, upon receiving the GET_BUFFER request from the client, the service calls its subclass's getBuffer to obtain an IMemory interface, actually backed by a MemoryBase object, and returns it to the client. (See the last section of the linked article for the full example.)

2.4 IOMX

IOMX defines the OMXCodec interface, e.g. fillBuffer and emptyBuffer.

3. Stagefright internals

Since Stagefright is a player, it has at least four major parts: source, demux, decoder, output.

1. source: where the data comes from. It is not necessarily a local file; it may arrive over protocols such as HTTP, RTSP, or HLS.
2. demux: a media file normally interleaves the audio and video elementary streams (ES) according to some container rule, and there are many container formats: ts, mp4, flv, mkv, avi, rmvb, and so on. The demuxer's job is to peel the audio and video ES out of the container and feed each to its own decoder.
3. decoder: the core module of any player, split into audio and video decoders.
4. output: also split into audio and video. The decoded raw audio (PCM) and video (YUV) need the output modules before our eyes and ears can perceive them.

So a player roughly decomposes into these four parts; players differ only in how they abstract the parts and in how they organize and drive them. AwesomePlayer is the low-level actor that performs playback. It is created when StagefrightPlayer is initialized, and it is responsible for pairing each audio/video stream with the right decoder. MediaExtractor comes in here: it extracts the valid header information from the media file and returns the corresponding references. When preparing to play, AwesomePlayer uses OMXCodec to create a decoder based on the media type. The decoders reside in the OMX subsystem (OMX is the Android implementation of OpenMAX); they mainly manage memory buffers and convert them into raw data formats, with the implementation under frameworks/av/media/libstagefright/omx and frameworks/av/media/libstagefright/codecs. The Stagefright media player and the OMX components interact via IPC. AwesomePlayer ultimately services the play, pause, and stop requests issued by the application layer, and the handling usually depends on the media type. For an audio file, AwesomePlayer creates an AudioPlayer to handle it; for example, when the current file has only an audio track, AwesomePlayer calls AudioPlayer::start() to play it, and once the user submits further requests AudioPlayer uses its MediaSource object to interact with the underlying OMX subsystem.

3.1 Communication between Stagefright and OMXCodec

The heart of Stagefright's data path is the read function: reading packets from the stream, parsing, and decoding all happen inside mVideoSource->read(&mVideoBuffer, &options).

3.2 Buffer management in the low-level IPC

After the read function has read data into a buffer and processed it, how is the buffer passed to the OMXCodec service? The read loop ultimately calls OMXCodec::drainInputBuffer(BufferInfo *info), which hands the buffer over through emptyBuffer:

status_t emptyBuffer(
        node_id node,
        buffer_id buffer,
        OMX_U32 range_offset, OMX_U32 range_length,
        OMX_U32 flags, OMX_TICKS timestamp);

Analyzing the client-side and server-side emptyBuffer implementations separately yields the following. Server side: by convention, buffer_id is an OMX_BUFFERHEADERTYPE * pointer. Client side: how does the client tell the server which buffer it has just filled?
err = mOMX->emptyBuffer(
        mNode, info->mBuffer, 0, offset,
        flags, timestampUs);

mBuffer is the buffer_id, which is really a pointer. The server side casts it back to an OMX_BUFFERHEADERTYPE * and calls CopyToOMX(header), which invokes the OpenMAX interface to copy the input buffer into the codec hardware's memory. After decoding, fillOutputBuffer copies the buffer back out of the codec; this is likewise a cross-process call, implemented by fillBuffer, whose server side copies the data into the buffer identified by buffer_id.

How are the buffers initialized in the first place? The client-side function allocateBuffersOnPort initializes the buffer pool. This function matters, so here are its call chain and IPC traffic (in the original this was a sequence diagram across AwesomePlayer, OMXCodec, MemoryDealer, OMXClient.cpp, IOMX.cpp, OMX.cpp, and OMXNodeInstance.cpp):

1. In the media framework process, allocateBuffersOnPort news a MemoryDealer and then calls MemoryDealer::allocate, obtaining an IMemory pointer (an sp<IMemory>).
2. mOMX->allocateBufferWithBackup: this is wrapped once more, and the check in this step makes the cross-process call only when a remote codec is in use, via getOMX(node)->allocateBufferWithBackup.
3. allocateBufferWithBackup is where the cross-process call actually happens. BpOMX::allocateBufferWithBackup in IOMX.cpp serializes the arguments with writeInt32 and friends, and the service side receives them in onTransact, implemented in the same file. The special part is writeStrongBinder, which ships the IMemory object itself; the server ends up with a client proxy (a BpMemory) for it, so it can access the shared memory.
4. OMXNodeInstance.cpp is where the codec interface, i.e. the chip vendor's implementation, is invoked: for example OMX_AllocateBuffer, which yields an OMX_BUFFERHEADERTYPE pointer. That pointer is written back through the reply, so the client also ends up holding an OMX_BUFFERHEADERTYPE pointer. An important member of OMX_BUFFERHEADERTYPE is BufferMeta, which wraps the IMemory shared memory.

In the end the function returns a buffer_id to the client; in practice it is the OMX_BUFFERHEADERTYPE pointer kept on the server side. The pointer is then bound to its portIndex, and, when MediaPlayer uses it, to the corresponding MediaInfo. The next time OMXCodec calls a function like drainInputBuffer, it can find the bufferId for that MediaInfo in the OMXCodec component and notify the server (the component) with a cross-process call such as emptyBuffer; once the server has the bufferId, it casts it to a BufferHeader pointer and knows exactly which block of shared memory to use.

A few additional details:

- Anonymous shared memory is allocated through MemoryDealer, and mOMX->allocateBufferWithBackup(mNode, portIndex, mem, &buffer) allocates a shared buffer of the given size and binds it to a buffer_id (everything prefixed with mOMX is IPC; the corresponding server-side functions are in OMXNodeInstance.cpp). In other words, each client-side BufferInfo stores a pointer to the service's corresponding OMX_BUFFERHEADERTYPE object.
- The client's shared-buffer address mem->pointer() is assigned to info.mData. Note how the MediaBuffer is initialized: info.mMediaBuffer = new MediaBuffer(info.mData, info.mSize); MediaBuffer::mData is info.mData, so the MediaSource's read/parse operations (e.g. in MPEG4Extractor) also act directly on the shared buffer.
- OMXCodec::allocateBuffersOnPort allocates a buffer and an index for each slot, stores the buffer metadata on the server side with mOMX->storeMetaDataInBuffers, and hands the buffer queue to the MediaSource (here MPEG4Extractor): mSource->setBuffers(buffers).
- Server-side allocation is in OMXNodeInstance::allocateBufferWithBackup; the server keeps its shared-memory pointer in BufferMeta buffer_meta. For example, emptyBuffer calls CopyToOMX to copy the shared-memory buffer into the chip's buffer.

Incidentally, this is what a client-side send function in the Binder mechanism, i.e. a BpBinder method, looks like:

virtual status_t allocateBufferWithBackup(
        node_id node, OMX_U32 port_index, const sp<IMemory> &params,
        buffer_id *buffer) {
    Parcel data, reply;
    data.writeInterfaceToken(IOMX::getInterfaceDescriptor());
    data.writeIntPtr((intptr_t)node);
    data.writeInt32(port_index);
    data.writeStrongBinder(params->asBinder());
    remote()->transact(ALLOC_BUFFER_WITH_BACKUP, data, &reply);
    status_t err = reply.readInt32();
    // ...
    *buffer = (void*)reply.readIntPtr();
    return err;
}

From the IMemory study we know that the IOMX client uses MemoryDealer to build a MemoryBase object, which is the remote-interface implementation of IMemory; in the OMX component's receiving onTransact, what is read back is the remote interface IMemory of an anonymous shared-memory object, which actually points at a BpMemory object, and after receipt it is converted with interface_cast<IMemory>(data.readStrongBinder()) before use.

TODO: what exactly is node_id, and how are remote OMX and local OMX distinguished? OMXCodec.cpp does not call the OMXCodec client interface directly; one more class, OMXClient.cpp, is wrapped in between, and only when it decides a remote OMX is in use does it call the remote interface, i.e. BpOMX, implemented in IOMX.cpp. node_id is the key to the local/remote decision; my guess is that a node_id tags a client program of the media framework service (MediaPlayer). "Given a node_id and the calling process' pid, returns true iff the implementation of the OMX interface lives in the same process." Presumably a same-process call avoids re-mapping the shared memory? And what is OMXObserver for?

How is a packet read (the source input)? err = mSource->read(&srcBuffer, &options); the data lands in MediaBuffer::mData.

The relationship between MediaCodec and OMXCodec: in the OpenMAX interface design, OMX is not only for encoding and decoding; its components can form a complete player, including source, demux, decode, and output.
1. The Android system only uses OpenMAX for codecs, so it abstracts an OMXCodec layer on top for the upper-level player to use; the player's audio and video decoders mVideoSource and mAudioSource are OMXCodec instances.
2. OMXCodec obtains the OMX service through IOMX, relying on the Binder mechanism; the OMX service is the actual implementation of OpenMAX in Android.
3. OMX manages software and hardware codecs uniformly, as plugins.

From the above: MediaPlayer (Stagefright) and MediaCodec both call into the OMX codec layer; Stagefright and MediaCodec are parallel, with no calls between them. OMXCodec and ACodec are both lower-level pieces: Stagefright uses OMXCodec, while NuPlayer calls ACodec (which supports network streams).
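For completeness, the receiving side described above, where BnOMX::onTransact reads back the IMemory and replies with the header pointer, has roughly this shape (an illustrative sketch of the case body only, not the verbatim AOSP code):

// Sketch of the service-side counterpart inside BnOMX::onTransact (illustrative).
case ALLOC_BUFFER_WITH_BACKUP:
{
    CHECK_INTERFACE(IOMX, data, reply);
    node_id node = (node_id)data.readIntPtr();
    OMX_U32 port_index = data.readInt32();
    // readStrongBinder() yields the client's IMemory; on this side it is a BpMemory proxy
    sp<IMemory> params =
        interface_cast<IMemory>(data.readStrongBinder());

    buffer_id buffer;
    status_t err = allocateBufferWithBackup(node, port_index, params, &buffer);

    reply->writeInt32(err);
    if (err == OK) {
        // the server-side OMX_BUFFERHEADERTYPE* travels back as the buffer_id
        reply->writeIntPtr((intptr_t)buffer);
    }
    return NO_ERROR;
}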
Notification between ACodec and MediaCodec: after an OMX component finishes decoding, ACodec's onOMXFillBufferDone is called back to pick up the decoded data.
In onOMXFillBufferDone, ACodec then notifies MediaCodec (notify->setInt32("what", CodecBase::kWhatDrainThisBuffer), the message sent to MediaCodec).
On receiving ACodec's message, MediaCodec calls updateBuffers(kPortIndexOutput, msg) and then notifies the Decoder from onOutputBufferAvailable().
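Every hop in this chain uses the stagefright AMessage/AHandler machinery. A minimal sketch of the notify/reply round trip, assuming hypothetical Producer and kWhatDone names (the real pair in this article is kWhatDrainThisBuffer going down and kWhatOutputBufferDrained coming back):

// Minimal sketch of the notify/reply pattern (hypothetical names).
#include <media/stagefright/foundation/ADebug.h>
#include <media/stagefright/foundation/AHandler.h>
#include <media/stagefright/foundation/ALooper.h>
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

struct Producer : public AHandler {       // plays the role ACodec plays below
    enum { kWhatDone = 'done' };

    void setNotify(const sp<AMessage> &notify) { mNotify = notify; }

    void produceBuffer(int32_t bufferId) {
        sp<AMessage> notify = mNotify->dup();                // message to the consumer
        notify->setInt32("buffer-id", bufferId);
        sp<AMessage> reply = new AMessage(kWhatDone, this);  // posted back when consumed
        reply->setInt32("buffer-id", bufferId);
        notify->setMessage("reply", reply);
        notify->post();
    }

protected:
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        if (msg->what() == kWhatDone) {                      // consumer is done with it
            int32_t id;
            CHECK(msg->findInt32("buffer-id", &id));
            // refill / resubmit buffer `id` here, exactly as ACodec re-calls fillBuffer()
        }
    }

private:
    sp<AMessage> mNotify;
};

// Usage: create an ALooper, looper->start(), looper->registerHandler(producer),
// hand the producer a notify message targeted at the consumer, then produceBuffer(42).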
1. ACodec::onOMXFillBufferDone
bool ACodec::BaseState::onOMXFillBufferDone(
        IOMX::buffer_id bufferID,
        size_t rangeOffset, size_t rangeLength,
        OMX_U32 flags,
        int64_t timeUs,
        int fenceFd) {
    ALOGV("[%s] onOMXFillBufferDone %u time %" PRId64 " us, flags = 0x%08x",
         mCodec->mComponentName.c_str(), bufferID, timeUs, flags);

    ssize_t index;
    status_t err = OK;

#if TRACK_BUFFER_TIMING
    index = mCodec->mBufferStats.indexOfKey(timeUs);
    if (index >= 0) {
        ACodec::BufferStats *stats = &mCodec->mBufferStats.editValueAt(index);
        stats->mFillBufferDoneTimeUs = ALooper::GetNowUs();

        ALOGI("frame PTS %lld: %lld",
                timeUs,
                stats->mFillBufferDoneTimeUs - stats->mEmptyBufferTimeUs);

        mCodec->mBufferStats.removeItemsAt(index);
        stats = NULL;
    }
#endif

    BufferInfo *info =
        mCodec->findBufferByID(kPortIndexOutput, bufferID, &index);  // look up ACodec's BufferInfo by bufferID
    BufferInfo::Status status = BufferInfo::getSafeStatus(info);
    if (status != BufferInfo::OWNED_BY_COMPONENT) {
        ALOGE("Wrong ownership in FBD: %s(%d) buffer #%u", _asString(status), status, bufferID);
        mCodec->dumpBuffers(kPortIndexOutput);
        mCodec->signalError(OMX_ErrorUndefined, FAILED_TRANSACTION);
        if (fenceFd >= 0) {
            ::close(fenceFd);
        }
        return true;
    }

    info->mDequeuedAt = ++mCodec->mDequeueCounter;
    info->mStatus = BufferInfo::OWNED_BY_US;

    if (info->mRenderInfo != NULL) {
        // The fence for an emptied buffer must have signaled, but there still could be queued
        // or out-of-order dequeued buffers in the render queue prior to this buffer. Drop these,
        // as we will soon requeue this buffer to the surface. While in theory we could still keep
        // track of buffers that are requeued to the surface, it is better to add support to the
        // buffer-queue to notify us of released buffers and their fences (in the future).
        mCodec->notifyOfRenderedFrames(true /* dropIncomplete */);
    }

    // byte buffers cannot take fences, so wait for any fence now
    if (mCodec->mNativeWindow == NULL) {
        (void)mCodec->waitForFence(fenceFd, "onOMXFillBufferDone");
        fenceFd = -1;
    }
    info->setReadFence(fenceFd, "onOMXFillBufferDone");

    PortMode mode = getPortMode(kPortIndexOutput);

    switch (mode) {
        case KEEP_BUFFERS:
            break;

        case RESUBMIT_BUFFERS:
        {
            if (rangeLength == 0 && (!(flags & OMX_BUFFERFLAG_EOS)
                    || mCodec->mPortEOS[kPortIndexOutput])) {
                ALOGV("[%s] calling fillBuffer %u",
                     mCodec->mComponentName.c_str(), info->mBufferID);

                err = mCodec->mOMX->fillBuffer(mCodec->mNode, info->mBufferID, info->mFenceFd);  // resubmit the output buffer for refilling
                info->mFenceFd = -1;
                if (err != OK) {
                    mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
                    return true;
                }

                info->mStatus = BufferInfo::OWNED_BY_COMPONENT;
                break;
            }

            sp<AMessage> reply =
                new AMessage(kWhatOutputBufferDrained, mCodec);  // ACodec creates the reply message that will eventually reach MediaCodec

            if (!mCodec->mSentFormat && rangeLength > 0) {
                mCodec->sendFormatChange(reply);
            }
            if (mCodec->usingMetadataOnEncoderOutput()) {
                native_handle_t *handle = NULL;
                VideoGrallocMetadata &grallocMeta = *(VideoGrallocMetadata *)info->mData->data();
                VideoNativeMetadata &nativeMeta = *(VideoNativeMetadata *)info->mData->data();
                if (info->mData->size() >= sizeof(grallocMeta)
                        && grallocMeta.eType == kMetadataBufferTypeGrallocSource) {
                    handle = (native_handle_t *)(uintptr_t)grallocMeta.pHandle;
                } else if (info->mData->size() >= sizeof(nativeMeta)
                        && nativeMeta.eType == kMetadataBufferTypeANWBuffer) {
#ifdef OMX_ANDROID_COMPILE_AS_32BIT_ON_64BIT_PLATFORMS
                    // ANativeWindowBuffer is only valid on 32-bit/mediaserver process
                    handle = NULL;
#else
                    handle = (native_handle_t *)nativeMeta.pBuffer->handle;
#endif
                }
                info->mData->meta()->setPointer("handle", handle);
                info->mData->meta()->setInt32("rangeOffset", rangeOffset);
                info->mData->meta()->setInt32("rangeLength", rangeLength);
            } else {
                info->mData->setRange(rangeOffset, rangeLength);
            }

            if (mCodec->mNativeWindow == NULL) {
                if (IsIDR(info->mData)) {
                    ALOGI("IDR frame");
                }
            }

            if (mCodec->mSkipCutBuffer != NULL) {
                mCodec->mSkipCutBuffer->submit(info->mData);
            }
            info->mData->meta()->setInt64("timeUs", timeUs);
            info->mData->meta()->setObject("graphic-buffer", info->mGraphicBuffer);  // stash graphic-buffer in the mMeta (sp<AMessage>) of info's mData (sp<ABuffer>)

            sp<AMessage> notify = mCodec->mNotify->dup();
            notify->setInt32("what", CodecBase::kWhatDrainThisBuffer);  // the message sent to MediaCodec
            notify->setInt32("buffer-id", info->mBufferID);
            notify->setBuffer("buffer", info->mData);
            notify->setInt32("flags", flags);

            reply->setInt32("buffer-id", info->mBufferID);  // the reply already carries buffer-id when it is stashed in MediaCodec's BufferInfo::mNotify

            (void)mCodec->setDSModeHint(reply, flags, timeUs);

            notify->setMessage("reply", reply);  // attach the reply, which MediaCodec will use to message ACodec back

            notify->post();  // once posted, MediaCodec takes over

            info->mStatus = BufferInfo::OWNED_BY_DOWNSTREAM;

            if (flags & OMX_BUFFERFLAG_EOS) {
                ALOGV("[%s] saw output EOS", mCodec->mComponentName.c_str());

                sp<AMessage> notify = mCodec->mNotify->dup();
                notify->setInt32("what", CodecBase::kWhatEOS);
                notify->setInt32("err", mCodec->mInputEOSResult);
                notify->post();

                mCodec->mPortEOS[kPortIndexOutput] = true;
            }
            break;
        }

        case FREE_BUFFERS:
            err = mCodec->freeBuffer(kPortIndexOutput, index);
            if (err != OK) {
                mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
                return true;
            }
            break;

        default:
            ALOGE("Invalid port mode: %d", mode);
            return false;
    }

    return true;
}
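Everything above keys off BufferInfo::mStatus. As the following steps will show, the ownership of one output buffer cycles roughly like this (a summary of the states used in this article, not the complete AOSP enum):

// Ownership cycle of one output buffer (summary; not the full AOSP enum).
enum Status {
    OWNED_BY_US,             // held by ACodec
    OWNED_BY_COMPONENT,      // submitted to the OMX component via fillBuffer()
    OWNED_BY_DOWNSTREAM,     // handed to MediaCodec via kWhatDrainThisBuffer
    OWNED_BY_NATIVE_WINDOW,  // queued to the surface for hardware rendering
};
// fillBuffer()            : OWNED_BY_US         -> OWNED_BY_COMPONENT
// onOMXFillBufferDone()   : OWNED_BY_COMPONENT  -> OWNED_BY_US -> OWNED_BY_DOWNSTREAM
// onOutputBufferDrained() : OWNED_BY_DOWNSTREAM -> OWNED_BY_NATIVE_WINDOW (rendered)
//                           or back to OWNED_BY_US (dropped), then fillBuffer() again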
2. On receiving ACodec's message, MediaCodec calls updateBuffers(kPortIndexOutput, msg) and notifies the Decoder from onOutputBufferAvailable().
case CodecBase::kWhatDrainThisBuffer:
{
    /* size_t index = */updateBuffers(kPortIndexOutput, msg);

    if (mState == FLUSHING
            || mState == STOPPING
            || mState == RELEASING) {
        returnBuffersToCodecOnPort(kPortIndexOutput);
        break;
    }

    sp<ABuffer> buffer;
    CHECK(msg->findBuffer("buffer", &buffer));

    int32_t omxFlags;
    CHECK(msg->findInt32("flags", &omxFlags));

    buffer->meta()->setInt32("omxFlags", omxFlags);

    if (mFlags & kFlagGatherCodecSpecificData) {
        // This is the very first output buffer after a
        // format change was signalled, it'll either contain
        // the one piece of codec specific data we can expect
        // or there won't be codec specific data.
        if (omxFlags & OMX_BUFFERFLAG_CODECCONFIG) {
            status_t err =
                amendOutputFormatWithCodecSpecificData(buffer);

            if (err != OK) {
                ALOGE("Codec spit out malformed codec "
                      "specific data!");
            }
        }

        mFlags &= ~kFlagGatherCodecSpecificData;
        if (mFlags & kFlagIsAsync) {
            onOutputFormatChanged();
        } else {
            mFlags |= kFlagOutputFormatChanged;
        }
    }

    if (mFlags & kFlagIsAsync) {
        onOutputBufferAvailable();  // notify the Decoder
    } else if (mFlags & kFlagDequeueOutputPending) {
        CHECK(handleDequeueOutputBuffer(mDequeueOutputReplyID));

        ++mDequeueOutputTimeoutGeneration;
        mFlags &= ~kFlagDequeueOutputPending;
        mDequeueOutputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }
    break;
}
2.1 MediaCodec::updateBuffers finds the reply that came along with the message and stashes it in the MediaCodec BufferInfo's mNotify.
size_t MediaCodec::updateBuffers(
        int32_t portIndex, const sp<AMessage> &msg) {
    CHECK(portIndex == kPortIndexInput || portIndex == kPortIndexOutput);

    uint32_t bufferID;
    CHECK(msg->findInt32("buffer-id", (int32_t*)&bufferID));

    Mutex::Autolock al(mBufferLock);

    Vector<BufferInfo> *buffers = &mPortBuffers[portIndex];

    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);

        if (info->mBufferID == bufferID) {
            CHECK(info->mNotify == NULL);
            // stash the reply that came with the message in this BufferInfo's mNotify
            CHECK(msg->findMessage("reply", &info->mNotify));

            info->mFormat =
                (portIndex == kPortIndexInput) ? mInputFormat : mOutputFormat;
            mAvailPortBuffers[portIndex].push_back(i);

            return i;
        }
    }

    TRESPASS();

    return 0;
}
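updateBuffers pushes the index onto mAvailPortBuffers; the pop side, used by onOutputBufferAvailable below, looks roughly like this (a simplified sketch modeled on MediaCodec::dequeuePortBuffer, not the verbatim AOSP code):

// Sketch of the matching pop side (modeled on MediaCodec::dequeuePortBuffer).
ssize_t dequeuePortBufferSketch(List<size_t> &availBuffers,
                                Vector<MediaCodec::BufferInfo> &portBuffers) {
    if (availBuffers.empty()) {
        return -EAGAIN;                   // nothing decoded and queued yet
    }
    size_t index = *availBuffers.begin();
    availBuffers.erase(availBuffers.begin());

    MediaCodec::BufferInfo *info = &portBuffers.editItemAt(index);
    info->mOwnedByClient = true;          // checked later in onReleaseOutputBuffer()
    return index;
}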
2.2 MediaCodec::onOutputBufferAvailable notifies the Decoder that an output buffer is available.
void MediaCodec::onOutputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexOutput)) >= 0) {
        const sp<ABuffer> &buffer =
            mPortBuffers[kPortIndexOutput].itemAt(index).mData;
        sp<AMessage> msg = mCallback->dup();  // the kWhatCodecNotify message sent to the Decoder
        msg->setInt32("callbackID", CB_OUTPUT_AVAILABLE);  // signal an available output buffer
        msg->setInt32("index", index);
        msg->setSize("offset", buffer->offset());
        msg->setSize("size", buffer->size());

        int64_t timeUs;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        msg->setInt64("timeUs", timeUs);

        int32_t omxFlags;
        CHECK(buffer->meta()->findInt32("omxFlags", &omxFlags));

        uint32_t flags = 0;
        if (omxFlags & OMX_BUFFERFLAG_SYNCFRAME) {
            flags |= BUFFER_FLAG_SYNCFRAME;
        }
        if (omxFlags & OMX_BUFFERFLAG_CODECCONFIG) {
            flags |= BUFFER_FLAG_CODECCONFIG;
        }
        if (omxFlags & OMX_BUFFERFLAG_EOS) {
            flags |= BUFFER_FLAG_EOS;
        }

        msg->setInt32("flags", flags);

        msg->post();
    }
}
3. When the Decoder receives the kWhatCodecNotify callback message posted from MediaCodec::onOutputBufferAvailable, it knows a buffer is available and calls handleAnOutputBuffer.
void NuPlayer::Decoder::onMessageReceived(const sp<AMessage> &msg) {
    ALOGV("[%s] onMessage: %s", mComponentName.c_str(), msg->debugString().c_str());

    switch (msg->what()) {
        case kWhatCodecNotify:  // callback message posted back by MediaCodec
        {
            int32_t cbID;
            CHECK(msg->findInt32("callbackID", &cbID));

            ALOGV("[%s] kWhatCodecNotify: cbID = %d, paused = %d",
                    mIsAudio ? "audio" : "video", cbID, mPaused);

            if (mPaused) {
                break;
            }

            switch (cbID) {
                case MediaCodec::CB_INPUT_AVAILABLE:
                {
                    int32_t index;
                    CHECK(msg->findInt32("index", &index));

                    handleAnInputBuffer(index);
                    break;
                }

                case MediaCodec::CB_OUTPUT_AVAILABLE:  // an output buffer is available
                {
                    int32_t index;
                    size_t offset;
                    size_t size;
                    int64_t timeUs;
                    int32_t flags;

                    CHECK(msg->findInt32("index", &index));
                    CHECK(msg->findSize("offset", &offset));
                    CHECK(msg->findSize("size", &size));
                    CHECK(msg->findInt64("timeUs", &timeUs));
                    CHECK(msg->findInt32("flags", &flags));

                    handleAnOutputBuffer(index, offset, size, timeUs, flags);  // start handling the output buffer
                    break;
                }

                case MediaCodec::CB_OUTPUT_FORMAT_CHANGED:
                {
                    sp<AMessage> format;
                    CHECK(msg->findMessage("format", &format));

                    handleOutputFormatChange(format);
                    break;
                }

                case MediaCodec::CB_ERROR:
                {
                    int32_t err;
                    CHECK(msg->findInt32("err", &err));
                    ALOGE("Decoder (%s) reported error : 0x%x",
                            mIsAudio ? "audio" : "video", err);

                    handleError(err);
                    break;
                }

                default:
                    TRESPASS();
                    break;
            }
            break;
        }

        case kWhatRenderBuffer:
        {
            if (!isStaleReply(msg)) {
                onRenderBuffer(msg);
            }
            break;
        }

        case kWhatSetVideoSurface:
        {
            sp<AReplyToken> replyID;
            CHECK(msg->senderAwaitsResponse(&replyID));

            sp<RefBase> obj;
            CHECK(msg->findObject("surface", &obj));
            sp<Surface> surface = static_cast<Surface *>(obj.get()); // non-null
            int32_t err = INVALID_OPERATION;
            // NOTE: in practice mSurface is always non-null, but checking here for completeness
            if (mCodec != NULL && mSurface != NULL) {
                // TODO: once AwesomePlayer is removed, remove this automatic connecting
                // to the surface by MediaPlayerService.
                // at this point MediaPlayerService::client has already connected to the
                // surface, which MediaCodec does not expect
                err = native_window_api_disconnect(surface.get(), NATIVE_WINDOW_API_MEDIA);
                if (err == OK) {
                    err = mCodec->setSurface(surface);
                    ALOGI_IF(err, "codec setSurface returned: %d", err);
                    if (err == OK) {
                        // reconnect to the old surface as MPS::Client will expect to
                        // be able to disconnect from it.
                        (void)native_window_api_connect(mSurface.get(), NATIVE_WINDOW_API_MEDIA);
                        mSurface = surface;
                    }
                }
                if (err != OK) {
                    // reconnect to the new surface on error as MPS::Client will expect to
                    // be able to disconnect from it.
                    (void)native_window_api_connect(surface.get(), NATIVE_WINDOW_API_MEDIA);
                }
            }

            sp<AMessage> response = new AMessage;
            response->setInt32("err", err);
            response->postReply(replyID);
            break;
        }

        default:
            DecoderBase::onMessageReceived(msg);
            break;
    }
}
4. Decoder::handleAnOutputBuffer. Afterwards the Decoder coordinates with the Renderer to decide whether to render; see 《NuPlayerDecoder与NuPlayerRender分析》.
bool NuPlayer::Decoder::handleAnOutputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t timeUs,
        int32_t flags) {
    sp<ABuffer> buffer;
    mCodec->getOutputBuffer(index, &buffer);  // use index to find the corresponding ABuffer

    if (index >= mOutputBuffers.size()) {
        for (size_t i = mOutputBuffers.size(); i <= index; ++i) {
            mOutputBuffers.add();
        }
    }

    mOutputBuffers.editItemAt(index) = buffer;

    buffer->setRange(offset, size);
    buffer->meta()->clear();
    buffer->meta()->setInt64("timeUs", timeUs);
    setPcmFormat(buffer->meta());

    bool eos = flags & MediaCodec::BUFFER_FLAG_EOS;
    // we do not expect CODECCONFIG or SYNCFRAME for decoder

    sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);  // the callback message the Decoder hands to the Renderer
    reply->setSize("buffer-ix", index);
    reply->setInt32("generation", mBufferGeneration);

    if (eos) {
        ALOGI("[%s] saw output EOS", mIsAudio ? "audio" : "video");

        buffer->meta()->setInt32("eos", true);
        reply->setInt32("eos", true);
    } else if (mSkipRenderingUntilMediaTimeUs >= 0) {
        if (timeUs < mSkipRenderingUntilMediaTimeUs) {
            ALOGV("[%s] dropping buffer at time %lld as requested.",
                     mComponentName.c_str(), (long long)timeUs);

            reply->post();
            return true;
        }

        mSkipRenderingUntilMediaTimeUs = -1;
    }

    mNumFramesTotal += !mIsAudio;

    // wait until 1st frame comes out to signal resume complete
    notifyResumeCompleteIfNecessary();

    if (mRenderer != NULL) {
        // send the buffer to renderer; note the reply that is set in
        mRenderer->queueBuffer(mIsAudio, buffer, reply);
        if (eos && !isDiscontinuityPending()) {
            mRenderer->queueEOS(mIsAudio, ERROR_END_OF_STREAM);
        }
    }

    return true;
}
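Between steps 4 and 5, the Renderer decides whether the frame is on time and posts the reply back. What it does with the reply looks roughly like this (a simplified sketch condensed from NuPlayer::Renderer::onDrainVideoQueue; the struct and function below are illustrative, not the verbatim AOSP code):

// Condensed sketch of the Renderer consuming a queued video entry.
struct QueueEntry {
    sp<ABuffer> mBuffer;
    sp<AMessage> mNotifyConsumed;   // the `reply` attached by the Decoder in step 4
};

static void consumeVideoEntry(QueueEntry *entry, int64_t realTimeUs, bool tooLate) {
    entry->mNotifyConsumed->setInt64("timestampNs", realTimeUs * 1000);
    entry->mNotifyConsumed->setInt32("render", !tooLate);  // late frames are dropped
    entry->mNotifyConsumed->post();  // arrives back in the Decoder as kWhatRenderBuffer
}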
5. The Decoder receives the consumption message the Renderer posts back for a buffer (a QueueEntry on mVideoQueue) and, in onRenderBuffer, calls MediaCodec's mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs) or mCodec->releaseOutputBuffer(bufferIx).
5.1 The Decoder receives and consumes the message the Renderer posted back for the buffer:
case kWhatRenderBuffer:
{
    if (!isStaleReply(msg)) {
        onRenderBuffer(msg);
    }
    break;
}
5.2 Decoder::onRenderBuffer
void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
    status_t err;
    int32_t render;
    size_t bufferIx;
    int32_t eos;
    CHECK(msg->findSize("buffer-ix", &bufferIx));  // recover buffer-ix

    if (!mIsAudio) {
        int64_t timeUs;
        sp<ABuffer> buffer = mOutputBuffers[bufferIx];
        buffer->meta()->findInt64("timeUs", &timeUs);

        if (mCCDecoder != NULL && mCCDecoder->isSelected()) {
            mCCDecoder->display(timeUs);
        }
    }

    if (msg->findInt32("render", &render) && render) {
        int64_t timestampNs;
        CHECK(msg->findInt64("timestampNs", &timestampNs));
        err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);  // ask MediaCodec to render, then release
    } else {
        mNumOutputFramesDropped += !mIsAudio;
        err = mCodec->releaseOutputBuffer(bufferIx);  // release without rendering
    }
    if (err != OK) {
        ALOGE("failed to release output buffer for %s (err=%d)",
                mComponentName.c_str(), err);
        handleError(err);
    }
    if (msg->findInt32("eos", &eos) && eos
            && isDiscontinuityPending()) {
        finishHandleDiscontinuity(true /* flushOnTimeChange */);
    }
}
6. Decoder to MediaCodec
mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs) really renders before releasing; mCodec->releaseOutputBuffer(bufferIx) releases without rendering.
Both end up being handled by MediaCodec::onReleaseOutputBuffer(msg).
status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);  // send the message
    msg->setSize("index", index);
    msg->setInt32("render", true);             // request rendering
    msg->setInt64("timestampNs", timestampNs); // timestampNs

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

status_t MediaCodec::releaseOutputBuffer(size_t index) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);  // send the message
    msg->setSize("index", index);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
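Both calls block on PostAndAwaitResponse, which turns the asynchronous message into a synchronous call. It is essentially this (a sketch of the static helper's shape, slightly simplified):

// Sketch of how PostAndAwaitResponse makes the call synchronous (simplified).
static status_t PostAndAwaitResponse(
        const sp<AMessage> &msg, sp<AMessage> *response) {
    status_t err = msg->postAndAwaitResponse(response);  // blocks until postReply()
    if (err != OK) {
        return err;
    }
    if (!(*response)->findInt32("err", &err)) {
        err = OK;
    }
    return err;
}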
The message then reaches MediaCodec's kWhatReleaseOutputBuffer handler, which dispatches to onReleaseOutputBuffer(msg):
case kWhatReleaseOutputBuffer:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = onReleaseOutputBuffer(msg);  // hand off to onReleaseOutputBuffer(msg)

    PostReplyWithError(replyID, err);
    break;
}
// onReleaseOutputBuffer
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
    size_t index;
    CHECK(msg->findSize("index", &index));

    int32_t render;
    if (!msg->findInt32("render", &render)) {  // if "render" was set to true we render; otherwise just release the buffer
        render = 0;
    }

    if (!isExecuting()) {
        return -EINVAL;
    }

    if (index >= mPortBuffers[kPortIndexOutput].size()) {
        return -ERANGE;
    }

    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);

    if (info->mNotify == NULL || !info->mOwnedByClient) {
        return -EACCES;
    }

    // synchronization boundary for getBufferAndFormat
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = false;
    }

    if (render && info->mData != NULL && info->mData->size() != 0) {  // is render true?
        info->mNotify->setInt32("render", true);  // mark the reply so ACodec will render

        int64_t mediaTimeUs = -1;
        info->mData->meta()->findInt64("timeUs", &mediaTimeUs);

        int64_t renderTimeNs = 0;
        if (!msg->findInt64("timestampNs", &renderTimeNs)) {  // the timestampNs supplied by the Renderer
            // use media timestamp if client did not request a specific render timestamp
            ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
            renderTimeNs = mediaTimeUs * 1000;
        }
        info->mNotify->setInt64("timestampNs", renderTimeNs);

        if (mSoftRenderer != NULL) {  // if a SoftwareRenderer was set, render in software; otherwise let ACodec render in hardware
            std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
                    info->mData->data(), info->mData->size(),
                    mediaTimeUs, renderTimeNs, NULL, info->mFormat);

            // if we are running, notify rendered frames
            if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
                sp<AMessage> notify = mOnFrameRenderedNotification->dup();
                sp<AMessage> data = new AMessage;
                if (CreateFramesRenderedMessage(doneFrames, data)) {
                    notify->setMessage("data", data);
                    notify->post();
                }
            }
        }
    }

    // info->mNotify holds the reply message that came from ACodec; posting it lets
    // ACodec render in hardware; whether rendering was done in hardware or software,
    // the buffer is refilled afterwards
    info->mNotify->post();
    info->mNotify = NULL;

    return OK;
}
7. MediaCodec to ACodec: MediaCodec::onReleaseOutputBuffer decides between hardware and software rendering and posts the reply message that ACodec passed over (stored in info->mNotify), which ACodec finally receives and processes.
On receiving the message, ACodec calls onOutputBufferDrained(msg); that is where the real hardware rendering happens.
bool ACodec::BaseState::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatInputBufferFilled:
        {
            onInputBufferFilled(msg);
            break;
        }

        case kWhatOutputBufferDrained:  // MediaCodec has posted back the reply message
        {
            onOutputBufferDrained(msg);
            break;
        }

        case ACodec::kWhatOMXMessageList:
        {
            return checkOMXMessage(msg) ? onOMXMessageList(msg) : true;
        }

        case ACodec::kWhatOMXMessageItem:
        {
            // no need to check as we already did it for kWhatOMXMessageList
            return onOMXMessage(msg);
        }

        case ACodec::kWhatOMXMessage:
        {
            return checkOMXMessage(msg) ? onOMXMessage(msg) : true;
        }

        case ACodec::kWhatSetSurface:
        {
            sp<AReplyToken> replyID;
            CHECK(msg->senderAwaitsResponse(&replyID));

            sp<RefBase> obj;
            CHECK(msg->findObject("surface", &obj));

            status_t err = mCodec->handleSetSurface(static_cast<Surface *>(obj.get()));

            sp<AMessage> response = new AMessage;
            response->setInt32("err", err);
            response->postReply(replyID);
            break;
        }

        case ACodec::kWhatCreateInputSurface:
        case ACodec::kWhatSetInputSurface:
        case ACodec::kWhatSignalEndOfInputStream:
        {
            // This may result in an app illegal state exception.
            ALOGE("Message 0x%x was not handled", msg->what());
            mCodec->signalError(OMX_ErrorUndefined, INVALID_OPERATION);
            break;
        }

        case ACodec::kWhatOMXDied:
        {
            // This will result in kFlagSawMediaServerDie handling in MediaCodec.
            ALOGE("OMX/mediaserver died, signalling error!");
            mCodec->signalError(OMX_ErrorResourcesLost, DEAD_OBJECT);
            break;
        }

        case ACodec::kWhatReleaseCodecInstance:
        {
            ALOGI("[%s] forcing the release of codec",
                    mCodec->mComponentName.c_str());
            status_t err = mCodec->mOMX->freeNode(mCodec->mNode);
            ALOGE_IF(err != OK, "[%s] failed to release codec instance: err=%d",
                    mCodec->mComponentName.c_str(), err);
            sp<AMessage> notify = mCodec->mNotify->dup();
            notify->setInt32("what", CodecBase::kWhatShutdownCompleted);
            notify->post();
            break;
        }

        default:
            return false;
    }

    return true;
}
8. ACodec::onOutputBufferDrained(msg): the actual hardware rendering.
void ACodec::BaseState::onOutputBufferDrained(const sp<AMessage> &msg) {
    IOMX::buffer_id bufferID;
    CHECK(msg->findInt32("buffer-id", (int32_t*)&bufferID));  // recover the bufferID
    ssize_t index;
    BufferInfo *info = mCodec->findBufferByID(kPortIndexOutput, bufferID, &index);  // look up ACodec's BufferInfo by bufferID
    BufferInfo::Status status = BufferInfo::getSafeStatus(info);
    if (status != BufferInfo::OWNED_BY_DOWNSTREAM) {
        ALOGE("Wrong ownership in OBD: %s(%d) buffer #%u", _asString(status), status, bufferID);
        mCodec->dumpBuffers(kPortIndexOutput);
        mCodec->signalError(OMX_ErrorUndefined, FAILED_TRANSACTION);
        return;
    }

    android_native_rect_t crop;
    if (msg->findRect("crop", &crop.left, &crop.top, &crop.right, &crop.bottom)) {
        status_t err = native_window_set_crop(mCodec->mNativeWindow.get(), &crop);
        ALOGW_IF(err != NO_ERROR, "failed to set crop: %d", err);
    }

    bool skip = mCodec->getDSModeHint(msg);
    int32_t render;
    if (!skip && mCodec->mNativeWindow != NULL  // hardware rendering is used only when mNativeWindow != NULL; software rendering was already handled by the SoftwareRenderer in MediaCodec::onReleaseOutputBuffer
            && msg->findInt32("render", &render) && render != 0
            && info->mData != NULL && info->mData->size() != 0) {
        ATRACE_NAME("render");
        // The client wants this buffer to be rendered.

        // save buffers sent to the surface so we can get render time when they return
        int64_t mediaTimeUs = -1;
        info->mData->meta()->findInt64("timeUs", &mediaTimeUs);
        if (mediaTimeUs >= 0) {
            mCodec->mRenderTracker.onFrameQueued(
                    mediaTimeUs, info->mGraphicBuffer, new Fence(::dup(info->mFenceFd)));
        }

        int64_t timestampNs = 0;
        if (!msg->findInt64("timestampNs", &timestampNs)) {
            // use media timestamp if client did not request a specific render timestamp
            if (info->mData->meta()->findInt64("timeUs", &timestampNs)) {
                ALOGV("using buffer PTS of %lld", (long long)timestampNs);
                timestampNs *= 1000;
            }
        }

        status_t err;
        err = native_window_set_buffers_timestamp(mCodec->mNativeWindow.get(), timestampNs);  // use timestampNs
        ALOGW_IF(err != NO_ERROR, "failed to set buffer timestamp: %d", err);

        info->checkReadFence("onOutputBufferDrained before queueBuffer");
        err = mCodec->mNativeWindow->queueBuffer(
                    mCodec->mNativeWindow.get(), info->mGraphicBuffer.get(), info->mFenceFd);  // queue the buffer into mNativeWindow (the surface) for rendering
        info->mFenceFd = -1;
        if (err == OK) {
            info->mStatus = BufferInfo::OWNED_BY_NATIVE_WINDOW;
        } else {
            ALOGE("queueBuffer failed in onOutputBufferDrained: %d", err);
            mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
            info->mStatus = BufferInfo::OWNED_BY_US;
            // keeping read fence as write fence to avoid clobbering
            info->mIsReadFence = false;
        }
    } else {
        if (mCodec->mNativeWindow != NULL &&
            (info->mData == NULL || info->mData->size() != 0)) {
            // move read fence into write fence to avoid clobbering
            info->mIsReadFence = false;
            ATRACE_NAME("frame-drop");
        }
        info->mStatus = BufferInfo::OWNED_BY_US;
    }

    PortMode mode = getPortMode(kPortIndexOutput);

    switch (mode) {
        case KEEP_BUFFERS:
        {
            // XXX fishy, revisit!!! What about the FREE_BUFFERS case below?
            if (info->mStatus == BufferInfo::OWNED_BY_NATIVE_WINDOW) {
                // We cannot resubmit the buffer we just rendered, dequeue
                // the spare instead.
                info = mCodec->dequeueBufferFromNativeWindow();
            }
            break;
        }

        case RESUBMIT_BUFFERS:
        {
            if (!mCodec->mPortEOS[kPortIndexOutput]) {
                if (info->mStatus == BufferInfo::OWNED_BY_NATIVE_WINDOW) {
                    // We cannot resubmit the buffer we just rendered, dequeue
                    // the spare instead.
                    info = mCodec->dequeueBufferFromNativeWindow();
                }

                if (info != NULL) {
                    ALOGV("[%s] calling fillBuffer %u",
                            mCodec->mComponentName.c_str(), info->mBufferID);
                    info->checkWriteFence("onOutputBufferDrained::RESUBMIT_BUFFERS");
                    status_t err = mCodec->mOMX->fillBuffer(
                            mCodec->mNode, info->mBufferID, info->mFenceFd);  // refill the buffer after rendering; onOMXFillBufferDone will be called back again
                    info->mFenceFd = -1;
                    if (err == OK) {
                        info->mStatus = BufferInfo::OWNED_BY_COMPONENT;
                    } else {
                        mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
                    }
                }
            }
            break;
        }

        case FREE_BUFFERS:
        {
            status_t err = mCodec->freeBuffer(kPortIndexOutput, index);
            if (err != OK) {
                mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
            }
            break;
        }

        default:
            ALOGE("Invalid port mode: %d", mode);
            break;
    }
}