PushNative.class
package com.dongnaoedu.live.jni;

import com.dongnaoedu.live.listener.LiveStateChangeListener;

/**
 * Calls into the C layer for encoding and RTMP pushing
 */
public class PushNative {

    public static final int CONNECT_FAILED = 101;
    public static final int INIT_FAILED = 102;

    LiveStateChangeListener liveStateChangeListener;

    /**
     * Receives error codes thrown from the native layer
     */
    public void throwNativeError(int code) {
        if (liveStateChangeListener != null) {
            liveStateChangeListener.onError(code);
        }
    }

    public native void startPush(String url);

    public native void stopPush();

    public native void release();

    // Set video parameters
    public native void setVideoOptions(int width, int height, int bitrate, int fps);

    // Set audio parameters
    public native void setAudioOptions(int sampleRateInHz, int channel);

    // Send one video frame
    public native void fireVideo(byte[] data);

    // Send audio samples
    public native void fireAudio(byte[] data, int len);

    public void setLiveStateChangeListener(LiveStateChangeListener liveStateChangeListener) {
        this.liveStateChangeListener = liveStateChangeListener;
    }

    public void removeLiveStateChangeListener() {
        this.liveStateChangeListener = null;
    }

    static {
        System.loadLibrary("dn_live");
    }
}
LiveStateChangeListener.class
package com.dongnaoedu.live.listener;

public interface LiveStateChangeListener {

    /**
     * An error occurred while pushing
     */
    void onError(int code);
}
AudioParam.class
package com.dongnaoedu.live.params;

public class AudioParam {

    private int sampleRateInHz = 44100;
    private int channel = 1;

    public AudioParam() {
    }

    public AudioParam(int sampleRateInHz, int channel) {
        this.sampleRateInHz = sampleRateInHz;
        this.channel = channel;
    }

    public int getSampleRateInHz() {
        return sampleRateInHz;
    }

    public void setSampleRateInHz(int sampleRateInHz) {
        this.sampleRateInHz = sampleRateInHz;
    }

    public int getChannel() {
        return channel;
    }

    public void setChannel(int channel) {
        this.channel = channel;
    }
}
VideoParam.class
package com.dongnaoedu.live.params;

/**
 * Video capture and encoding parameters
 */
public class VideoParam {

    private int width;
    private int height;
    private int bitrate = 480000;
    private int fps = 25;
    private int cameraId;

    public VideoParam(int width, int height, int cameraId) {
        this.width = width;
        this.height = height;
        this.cameraId = cameraId;
    }

    public int getWidth() {
        return width;
    }

    public void setWidth(int width) {
        this.width = width;
    }

    public int getHeight() {
        return height;
    }

    public void setHeight(int height) {
        this.height = height;
    }

    public int getCameraId() {
        return cameraId;
    }

    public void setCameraId(int cameraId) {
        this.cameraId = cameraId;
    }

    public int getBitrate() {
        return bitrate;
    }

    public void setBitrate(int bitrate) {
        this.bitrate = bitrate;
    }

    public int getFps() {
        return fps;
    }

    public void setFps(int fps) {
        this.fps = fps;
    }
}
AudioPusher.class
package com.dongnaoedu.live.pusher;

import com.dongnaoedu.live.jni.PushNative;
import com.dongnaoedu.live.params.AudioParam;

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder.AudioSource;

public class AudioPusher extends Pusher {

    private AudioParam audioParam;
    private AudioRecord audioRecord;
    private boolean isPushing = false;
    private int minBufferSize;
    private PushNative pushNative;

    public AudioPusher(AudioParam audioParam, PushNative pushNative) {
        this.audioParam = audioParam;
        this.pushNative = pushNative;
        int channelConfig = audioParam.getChannel() == 1 ?
                AudioFormat.CHANNEL_IN_MONO : AudioFormat.CHANNEL_IN_STEREO;
        minBufferSize = AudioRecord.getMinBufferSize(audioParam.getSampleRateInHz(), channelConfig, AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(AudioSource.MIC,
                audioParam.getSampleRateInHz(),
                channelConfig,
                AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
    }

    @Override
    public void startPush() {
        isPushing = true;
        pushNative.setAudioOptions(audioParam.getSampleRateInHz(), audioParam.getChannel());
        new Thread(new AudioRecordTask()).start();
    }

    @Override
    public void stopPush() {
        isPushing = false;
        audioRecord.stop();
    }

    @Override
    public void release() {
        if (audioRecord != null) {
            audioRecord.release();
            audioRecord = null;
        }
    }

    class AudioRecordTask implements Runnable {
        @Override
        public void run() {
            audioRecord.startRecording();
            while (isPushing) {
                byte[] buffer = new byte[minBufferSize];
                int len = audioRecord.read(buffer, 0, buffer.length);
                if (len > 0) {
                    pushNative.fireAudio(buffer, len);
                }
            }
        }
    }
}
LivePusher.class
package com.dongnaoedu.live.pusher;

import com.dongnaoedu.live.jni.PushNative;
import com.dongnaoedu.live.listener.LiveStateChangeListener;
import com.dongnaoedu.live.params.AudioParam;
import com.dongnaoedu.live.params.VideoParam;

import android.hardware.Camera.CameraInfo;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;

public class LivePusher implements Callback {

    private SurfaceHolder surfaceHolder;
    private VideoPusher videoPusher;
    private AudioPusher audioPusher;
    private PushNative pushNative;

    public LivePusher(SurfaceHolder surfaceHolder) {
        this.surfaceHolder = surfaceHolder;
        surfaceHolder.addCallback(this);
        prepare();
    }

    /**
     * Prepare the preview: create the native bridge and the video/audio pushers
     */
    private void prepare() {
        pushNative = new PushNative();
        VideoParam videoParam = new VideoParam(480, 320, CameraInfo.CAMERA_FACING_BACK);
        videoPusher = new VideoPusher(surfaceHolder, videoParam, pushNative);
        AudioParam audioParam = new AudioParam();
        audioPusher = new AudioPusher(audioParam, pushNative);
    }

    /**
     * Switch between front and back camera
     */
    public void switchCamera() {
        videoPusher.switchCamera();
    }

    /**
     * Start pushing to the given RTMP URL
     */
    public void startPush(String url, LiveStateChangeListener liveStateChangeListener) {
        videoPusher.startPush();
        audioPusher.startPush();
        pushNative.startPush(url);
        pushNative.setLiveStateChangeListener(liveStateChangeListener);
    }

    /**
     * Stop pushing
     */
    public void stopPush() {
        videoPusher.stopPush();
        audioPusher.stopPush();
        pushNative.stopPush();
        pushNative.removeLiveStateChangeListener();
    }

    /**
     * Release all resources
     */
    private void release() {
        videoPusher.release();
        audioPusher.release();
        pushNative.release();
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        stopPush();
        release();
    }
}
Pusher.class
package com.dongnaoedu.live.pusher;

public abstract class Pusher {

    public abstract void startPush();

    public abstract void stopPush();

    public abstract void release();
}
VideoPusher.class
package com.dongnaoedu.live.pusher;

import java.io.IOException;

import com.dongnaoedu.live.jni.PushNative;
import com.dongnaoedu.live.params.VideoParam;

import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.PreviewCallback;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;

public class VideoPusher extends Pusher implements Callback, PreviewCallback {

    private SurfaceHolder surfaceHolder;
    private Camera mCamera;
    private VideoParam videoParams;
    private byte[] buffers;
    private boolean isPushing = false;
    private PushNative pushNative;

    public VideoPusher(SurfaceHolder surfaceHolder, VideoParam videoParams, PushNative pushNative) {
        this.surfaceHolder = surfaceHolder;
        this.videoParams = videoParams;
        this.pushNative = pushNative;
        surfaceHolder.addCallback(this);
    }

    @Override
    public void startPush() {
        pushNative.setVideoOptions(videoParams.getWidth(),
                videoParams.getHeight(), videoParams.getBitrate(), videoParams.getFps());
        isPushing = true;
    }

    @Override
    public void stopPush() {
        isPushing = false;
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        startPreview();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
    }

    @Override
    public void release() {
        stopPreview();
    }

    /**
     * Switch between front and back camera, restarting the preview
     */
    public void switchCamera() {
        if (videoParams.getCameraId() == CameraInfo.CAMERA_FACING_BACK) {
            videoParams.setCameraId(CameraInfo.CAMERA_FACING_FRONT);
        } else {
            videoParams.setCameraId(CameraInfo.CAMERA_FACING_BACK);
        }
        stopPreview();
        startPreview();
    }

    /**
     * Start the camera preview, feeding NV21 frames into onPreviewFrame()
     */
    private void startPreview() {
        try {
            mCamera = Camera.open(videoParams.getCameraId());
            Camera.Parameters parameters = mCamera.getParameters();
            parameters.setPreviewFormat(ImageFormat.NV21);
            parameters.setPreviewSize(videoParams.getWidth(), videoParams.getHeight());
            mCamera.setParameters(parameters);
            mCamera.setPreviewDisplay(surfaceHolder);
            buffers = new byte[videoParams.getWidth() * videoParams.getHeight() * 4];
            mCamera.addCallbackBuffer(buffers);
            mCamera.setPreviewCallbackWithBuffer(this);
            mCamera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Stop the preview and release the camera
     */
    private void stopPreview() {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
        }
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (mCamera != null) {
            // Hand the buffer back so the camera can reuse it
            mCamera.addCallbackBuffer(buffers);
        }
        if (isPushing) {
            pushNative.fireVideo(data);
        }
    }
}
MainActivity.class
package com.dongnaoedu.live;

import com.dongnaoedu.live.jni.PushNative;
import com.dongnaoedu.live.listener.LiveStateChangeListener;
import com.dongnaoedu.live.pusher.LivePusher;

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.view.SurfaceView;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends Activity implements LiveStateChangeListener {

    static final String URL = "";

    private LivePusher live;

    private Handler handler = new Handler() {
        public void handleMessage(android.os.Message msg) {
            switch (msg.what) {
            case PushNative.CONNECT_FAILED:
                Toast.makeText(MainActivity.this, "连接失败", Toast.LENGTH_SHORT).show();
                break;
            case PushNative.INIT_FAILED:
                Toast.makeText(MainActivity.this, "初始化失败", Toast.LENGTH_SHORT).show();
                break;
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface);
        live = new LivePusher(surfaceView.getHolder());
    }

    /**
     * Start or stop the live stream (button callback)
     */
    public void mStartLive(View view) {
        Button btn = (Button) view;
        if (btn.getText().equals("开始直播")) {
            live.startPush(URL, this);
            btn.setText("停止直播");
        } else {
            live.stopPush();
            btn.setText("开始直播");
        }
    }

    /**
     * Switch camera (button callback)
     */
    public void mSwitchCamera(View btn) {
        live.switchCamera();
    }

    @Override
    public void onError(int code) {
        handler.sendEmptyMessage(code);
    }
}
activity_main.xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <SurfaceView
        android:id="@+id/surface"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center" />

    <LinearLayout
        android:id="@+id/adcontainer"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true"
        android:orientation="horizontal" >

        <Button
            android:id="@+id/btn_push"
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:onClick="mStartLive"
            android:text="开始直播"/>

        <Button
            android:id="@+id/btn_camera_switch"
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:text="切换摄像头"
            android:onClick="mSwitchCamera"/>
    </LinearLayout>

</RelativeLayout>
Before working through this source code, first read Chapter 4 of the Android audio/video series (implementing live streaming: pushing video), then Chapter 5, and only then the complete source below.
#include "com_dongnaoedu_live_jni_PushNative.h"

#include <stdlib.h>
#include <string.h>
#include <android/log.h>
#include <android/native_window_jni.h>
#include <android/native_window.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"jason",FORMAT,##__VA_ARGS__)
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"jason",FORMAT,##__VA_ARGS__)

#include <pthread.h>
#include "queue.h"
#include "x264.h"
#include "rtmp.h"
#include "faac.h"

#ifndef TRUE
#define TRUE 1
#define FALSE 0
#endif
#define CONNECT_FAILED 101
#define INIT_FAILED 102

// x264 input/output pictures
x264_picture_t pic_in;
x264_picture_t pic_out;
// Plane lengths of one I420 frame
int y_len, u_len, v_len;
// Video encoder handle
x264_t *video_encode_handle;
// Time at which the RTMP connection was established
unsigned int start_time;

pthread_mutex_t mutex;
pthread_cond_t cond;
// RTMP stream URL
char *rtmp_path;
int is_pushing = FALSE;

// Audio encoder handle
faacEncHandle audio_encode_handle;
unsigned long nInputSamples;
unsigned long nMaxOutputBytes;

jobject jobj_push_native;
jclass jcls_push_native;
jmethodID jmid_throw_native_error;
JavaVM *javaVM;
/**
 * Queue the AAC sequence header (AudioSpecificConfig)
 */
void add_aac_sequence_header(){
    unsigned char *buf;
    unsigned long len;
    faacEncGetDecoderSpecificInfo(audio_encode_handle, &buf, &len);
    int body_size = 2 + len;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    unsigned char *body = packet->m_body;
    // AF 00: AAC audio, sequence header
    body[0] = 0xAF;
    body[1] = 0x00;
    memcpy(&body[2], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_hasAbsTimestamp = 0;
    packet->m_nTimeStamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    add_rtmp_packet(packet);
    free(buf);
}
/**
 * Queue one encoded AAC frame as an RTMP audio packet
 */
void add_aac_body(unsigned char *buf, int len){
    int body_size = 2 + len;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    unsigned char *body = packet->m_body;
    // AF 01: AAC audio, raw frame data
    body[0] = 0xAF;
    body[1] = 0x01;
    memcpy(&body[2], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time;
    add_rtmp_packet(packet);
}
jint JNI_OnLoad(JavaVM* vm, void* reserved){
    // Cache the JavaVM so the push thread can attach itself later
    javaVM = vm;
    return JNI_VERSION_1_4;
}

/**
 * Report an error code back to the Java layer
 */
void throwNativeError(JNIEnv *env, int code){
    (*env)->CallVoidMethod(env, jobj_push_native, jmid_throw_native_error, code);
}
/**
 * Worker thread: keeps pulling RTMPPackets from the queue and sends them
 * to the streaming server
 */
void *push_thread(void *arg){
    JNIEnv *env;
    (*javaVM)->AttachCurrentThread(javaVM, &env, NULL);
    RTMP *rtmp = RTMP_Alloc();
    if(!rtmp){
        LOGE("rtmp allocation failed");
        (*javaVM)->DetachCurrentThread(javaVM);
        return NULL;
    }
    RTMP_Init(rtmp);
    rtmp->Link.timeout = 5;     // connection timeout, seconds
    RTMP_SetupURL(rtmp, rtmp_path);
    RTMP_EnableWrite(rtmp);
    if(!RTMP_Connect(rtmp, NULL)){
        LOGE("%s", "RTMP connect failed");
        throwNativeError(env, CONNECT_FAILED);
        goto end;
    }
    start_time = RTMP_GetTime();
    if(!RTMP_ConnectStream(rtmp, 0)){
        LOGE("%s", "RTMP ConnectStream failed");
        throwNativeError(env, CONNECT_FAILED);
        goto end;
    }
    is_pushing = TRUE;
    // Send the AAC sequence header before any audio frames
    add_aac_sequence_header();
    while(is_pushing){
        pthread_mutex_lock(&mutex);
        pthread_cond_wait(&cond, &mutex);
        RTMPPacket *packet = queue_get_first();
        if(packet){
            queue_delete_first();
            packet->m_nInfoField2 = rtmp->m_stream_id;
            int i = RTMP_SendPacket(rtmp, packet, TRUE);
            if(!i){
                LOGE("RTMP disconnected");
                RTMPPacket_Free(packet);
                pthread_mutex_unlock(&mutex);
                goto end;
            }
            LOGI("%s", "rtmp send packet");
            RTMPPacket_Free(packet);
        }
        pthread_mutex_unlock(&mutex);
    }
end:
    LOGI("%s", "releasing resources");
    free(rtmp_path);
    RTMP_Close(rtmp);
    RTMP_Free(rtmp);
    (*javaVM)->DetachCurrentThread(javaVM);
    return NULL;
}
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_startPush
    (JNIEnv *env, jobject jobj, jstring url_jstr){
    // Keep global references for use on the push thread
    jobj_push_native = (*env)->NewGlobalRef(env, jobj);
    jclass jcls_push_native_tmp = (*env)->GetObjectClass(env, jobj);
    jcls_push_native = (*env)->NewGlobalRef(env, jcls_push_native_tmp);
    if(jcls_push_native_tmp == NULL){
        LOGI("%s", "NULL");
    }else{
        LOGI("%s", "not NULL");
    }
    jmid_throw_native_error = (*env)->GetMethodID(env, jcls_push_native_tmp, "throwNativeError", "(I)V");
    // Copy the RTMP URL; the push thread frees it when it exits
    const char* url_cstr = (*env)->GetStringUTFChars(env, url_jstr, NULL);
    rtmp_path = malloc(strlen(url_cstr) + 1);
    memset(rtmp_path, 0, strlen(url_cstr) + 1);
    memcpy(rtmp_path, url_cstr, strlen(url_cstr));
    // Initialize the mutex, condition variable and packet queue
    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond, NULL);
    create_queue();
    // Start the push thread
    pthread_t push_thread_id;
    pthread_create(&push_thread_id, NULL, push_thread, NULL);
    (*env)->ReleaseStringUTFChars(env, url_jstr, url_cstr);
}
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_stopPush
    (JNIEnv *env, jobject jobj){
    is_pushing = FALSE;
}

JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_release
    (JNIEnv *env, jobject jobj){
    (*env)->DeleteGlobalRef(env, jcls_push_native);
    (*env)->DeleteGlobalRef(env, jobj_push_native);
    // Note: jmethodID values are not JNI references and must not be deleted
}
/**
 * Set video parameters and open the x264 encoder
 */
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_setVideoOptions
    (JNIEnv *env, jobject jobj, jint width, jint height, jint bitrate, jint fps){
    x264_param_t param;
    // Fastest, lowest-latency preset for live streaming
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    param.i_csp = X264_CSP_I420;
    param.i_width = width;
    param.i_height = height;
    // I420: Y plane is width*height, U and V planes a quarter of that each
    y_len = width * height;
    u_len = y_len / 4;
    v_len = u_len;
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.i_bitrate = bitrate / 1000;                // kbps
    param.rc.i_vbv_max_bitrate = bitrate / 1000 * 1.2;
    param.b_vfr_input = 0;                              // constant frame rate
    param.i_fps_num = fps;
    param.i_fps_den = 1;
    param.i_timebase_den = param.i_fps_num;
    param.i_timebase_num = param.i_fps_den;
    param.i_threads = 1;
    param.b_repeat_headers = 1;        // resend SPS/PPS before each keyframe
    param.i_level_idc = 51;
    x264_param_apply_profile(&param, "baseline");
    x264_picture_alloc(&pic_in, param.i_csp, param.i_width, param.i_height);
    pic_in.i_pts = 0;
    video_encode_handle = x264_encoder_open(&param);
    if(video_encode_handle){
        LOGI("video encoder opened");
    }else{
        throwNativeError(env, INIT_FAILED);
    }
}
/**
 * Configure the FAAC audio encoder
 */
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_setAudioOptions
    (JNIEnv *env, jobject jobj, jint sampleRateInHz, jint numChannels){
    audio_encode_handle = faacEncOpen(sampleRateInHz, numChannels, &nInputSamples, &nMaxOutputBytes);
    if(!audio_encode_handle){
        LOGE("failed to open audio encoder");
        return;
    }
    faacEncConfigurationPtr p_config = faacEncGetCurrentConfiguration(audio_encode_handle);
    p_config->mpegVersion = MPEG4;
    p_config->allowMidside = 1;
    p_config->aacObjectType = LOW;
    p_config->outputFormat = 0;     // raw AAC (no ADTS header); RTMP carries raw frames
    p_config->useTns = 1;
    p_config->useLfe = 0;
    p_config->quantqual = 100;
    p_config->bandWidth = 0;
    p_config->shortctl = SHORTCTL_NORMAL;
    if(!faacEncSetConfiguration(audio_encode_handle, p_config)){
        LOGE("%s", "audio encoder configuration failed");
        throwNativeError(env, INIT_FAILED);
        return;
    }
    LOGI("%s", "audio encoder configured");
}
/**
 * Append a packet to the queue and wake the send thread
 */
void add_rtmp_packet(RTMPPacket *packet){
    pthread_mutex_lock(&mutex);
    if(is_pushing){
        queue_append_last(packet);
    }
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
/**
 * Queue the H.264 SPS and PPS (AVC sequence header)
 */
void add_264_sequence_header(unsigned char* pps, unsigned char* sps, int pps_len, int sps_len){
    int body_size = 16 + sps_len + pps_len;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    unsigned char *body = packet->m_body;
    int i = 0;
    body[i++] = 0x17;       // keyframe + AVC codec id
    body[i++] = 0x00;       // 0 = AVC sequence header
    body[i++] = 0x00;       // composition time, 3 bytes
    body[i++] = 0x00;
    body[i++] = 0x00;
    // AVCDecoderConfigurationRecord
    body[i++] = 0x01;       // configurationVersion
    body[i++] = sps[1];     // AVCProfileIndication
    body[i++] = sps[2];     // profile_compatibility
    body[i++] = sps[3];     // AVCLevelIndication
    body[i++] = 0xFF;       // 4-byte NALU lengths
    body[i++] = 0xE1;       // one SPS follows
    body[i++] = (sps_len >> 8) & 0xff;
    body[i++] = sps_len & 0xff;
    memcpy(&body[i], sps, sps_len);
    i += sps_len;
    body[i++] = 0x01;       // one PPS follows
    body[i++] = (pps_len >> 8) & 0xff;
    body[i++] = pps_len & 0xff;
    memcpy(&body[i], pps, pps_len);
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    add_rtmp_packet(packet);
}
/**
 * Queue one H.264 NAL unit as an RTMP video packet
 */
void add_264_body(unsigned char *buf, int len){
    // Strip the Annex-B start code (00 00 00 01 or 00 00 01)
    if(buf[2] == 0x00){         // 4-byte start code
        buf += 4;
        len -= 4;
    }else if(buf[2] == 0x01){   // 3-byte start code
        buf += 3;
        len -= 3;
    }
    int body_size = len + 9;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    unsigned char *body = packet->m_body;
    int type = buf[0] & 0x1f;
    body[0] = 0x27;             // inter frame
    if (type == NAL_SLICE_IDR) {
        body[0] = 0x17;         // keyframe
    }
    body[1] = 0x01;             // AVC NALU
    body[2] = 0x00;             // composition time
    body[3] = 0x00;
    body[4] = 0x00;
    // NALU length, 4 bytes big-endian
    body[5] = (len >> 24) & 0xff;
    body[6] = (len >> 16) & 0xff;
    body[7] = (len >> 8) & 0xff;
    body[8] = len & 0xff;
    memcpy(&body[9], buf, len);
    packet->m_hasAbsTimestamp = 0;
    packet->m_nBodySize = body_size;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time;
    add_rtmp_packet(packet);
}
/**
 * Encode one captured NV21 frame as H.264 and queue the NAL units
 */
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_fireVideo
    (JNIEnv *env, jobject jobj, jbyteArray buffer){
    // NV21 -> I420: copy the Y plane, de-interleave VU into separate U and V
    jbyte* nv21_buffer = (*env)->GetByteArrayElements(env, buffer, NULL);
    jbyte* u = (jbyte*) pic_in.img.plane[1];
    jbyte* v = (jbyte*) pic_in.img.plane[2];
    memcpy(pic_in.img.plane[0], nv21_buffer, y_len);
    int i;
    for (i = 0; i < u_len; i++) {
        *(u + i) = *(nv21_buffer + y_len + i * 2 + 1);
        *(v + i) = *(nv21_buffer + y_len + i * 2);
    }
    // Encode the frame
    x264_nal_t *nal = NULL;
    int n_nal = -1;
    if(x264_encoder_encode(video_encode_handle, &nal, &n_nal, &pic_in, &pic_out) < 0){
        LOGE("%s", "encoding failed");
        (*env)->ReleaseByteArrayElements(env, buffer, nv21_buffer, 0);
        return;
    }
    int sps_len = 0, pps_len = 0;
    unsigned char sps[100];
    unsigned char pps[100];
    memset(sps, 0, 100);
    memset(pps, 0, 100);
    pic_in.i_pts += 1;
    // Push each NAL unit; SPS/PPS go out together as the sequence header
    for(i = 0; i < n_nal; i++){
        if(nal[i].i_type == NAL_SPS){
            // Skip the 4-byte start code
            sps_len = nal[i].i_payload - 4;
            memcpy(sps, nal[i].p_payload + 4, sps_len);
        }else if(nal[i].i_type == NAL_PPS){
            pps_len = nal[i].i_payload - 4;
            memcpy(pps, nal[i].p_payload + 4, pps_len);
            add_264_sequence_header(pps, sps, pps_len, sps_len);
        }else{
            add_264_body(nal[i].p_payload, nal[i].i_payload);
        }
    }
    (*env)->ReleaseByteArrayElements(env, buffer, nv21_buffer, 0);
}
/**
 * Encode captured PCM samples as AAC and queue the frames
 */
JNIEXPORT void JNICALL Java_com_dongnaoedu_live_jni_PushNative_fireAudio
    (JNIEnv *env, jobject jobj, jbyteArray buffer, jint len){
    short *pcmbuf;
    unsigned char *bitbuf;
    jbyte* b_buffer = (*env)->GetByteArrayElements(env, buffer, 0);
    pcmbuf = (short*) malloc(nInputSamples * sizeof(int));
    bitbuf = (unsigned char*) malloc(nMaxOutputBytes * sizeof(unsigned char));
    int nByteCount = 0;
    unsigned int nBufferSize = (unsigned int) len / 2;  // number of 16-bit samples
    unsigned short* buf = (unsigned short*) b_buffer;
    while (nByteCount < nBufferSize) {
        int audioLength = nInputSamples;
        if ((nByteCount + nInputSamples) >= nBufferSize) {
            audioLength = nBufferSize - nByteCount;
        }
        int i;
        for (i = 0; i < audioLength; i++) {
            int s = ((int16_t *) buf + nByteCount)[i];
            pcmbuf[i] = s << 8;
        }
        nByteCount += nInputSamples;
        // Encode one chunk with FAAC
        int byteslen = faacEncEncode(audio_encode_handle, pcmbuf, audioLength,
                bitbuf, nMaxOutputBytes);
        if (byteslen > 1) {
            add_aac_body(bitbuf, byteslen);
        }
    }
    (*env)->ReleaseByteArrayElements(env, buffer, b_buffer, 0);
    if (bitbuf)
        free(bitbuf);
    if (pcmbuf)
        free(pcmbuf);
}
Doubly linked queue: queue.c
#include <stdio.h>
#include <malloc.h>

// Node of a circular doubly linked list with a sentinel head
typedef struct queue_node {
    struct queue_node* prev;
    struct queue_node* next;
    void* p;   // payload
} node;

static node *phead = NULL;
static int count = 0;

int queue_insert_first(void *pval);

static node* create_node(void *pval) {
    node *pnode = NULL;
    pnode = (node *) malloc(sizeof(node));
    if (pnode) {
        // A single node points to itself in both directions
        pnode->prev = pnode->next = pnode;
        pnode->p = pval;
    }
    return pnode;
}

int create_queue() {
    phead = create_node(NULL);
    if (!phead) {
        return -1;
    }
    count = 0;
    return 0;
}

int queue_is_empty() {
    return count == 0;
}

int queue_size() {
    return count;
}

static node* get_node(int index) {
    if (index < 0 || index >= count) {
        return NULL;
    }
    if (index <= (count / 2)) {
        // Walk forward from the head
        int i = 0;
        node *pnode = phead->next;
        while ((i++) < index)
            pnode = pnode->next;
        return pnode;
    }
    // Walk backward from the tail
    int j = 0;
    int rindex = count - index - 1;
    node *rnode = phead->prev;
    while ((j++) < rindex)
        rnode = rnode->prev;
    return rnode;
}

static node* get_first_node() {
    return get_node(0);
}

static node* get_last_node() {
    return get_node(count - 1);
}

void* queue_get(int index) {
    node *pindex = get_node(index);
    if (!pindex) {
        return NULL;
    }
    return pindex->p;
}

void* queue_get_first() {
    return queue_get(0);
}

void* queue_get_last() {
    return queue_get(count - 1);
}

int queue_insert(int index, void* pval) {
    if (index == 0)
        return queue_insert_first(pval);
    node *pindex = get_node(index);
    if (!pindex)
        return -1;
    node *pnode = create_node(pval);
    if (!pnode)
        return -1;
    pnode->prev = pindex->prev;
    pnode->next = pindex;
    pindex->prev->next = pnode;
    pindex->prev = pnode;
    count++;
    return 0;
}

int queue_insert_first(void *pval) {
    node *pnode = create_node(pval);
    if (!pnode)
        return -1;
    pnode->prev = phead;
    pnode->next = phead->next;
    phead->next->prev = pnode;
    phead->next = pnode;
    count++;
    return 0;
}

int queue_append_last(void *pval) {
    node *pnode = create_node(pval);
    if (!pnode)
        return -1;
    pnode->next = phead;
    pnode->prev = phead->prev;
    phead->prev->next = pnode;
    phead->prev = pnode;
    count++;
    return 0;
}

int queue_delete(int index) {
    node *pindex = get_node(index);
    if (!pindex) {
        return -1;
    }
    pindex->next->prev = pindex->prev;
    pindex->prev->next = pindex->next;
    free(pindex);
    count--;
    return 0;
}

int queue_delete_first() {
    return queue_delete(0);
}

int queue_delete_last() {
    return queue_delete(count - 1);
}

int destroy_queue() {
    if (!phead) {
        return -1;
    }
    node *pnode = phead->next;
    node *ptmp = NULL;
    while (pnode != phead) {
        ptmp = pnode;
        pnode = pnode->next;
        free(ptmp);
    }
    free(phead);
    phead = NULL;
    count = 0;
    return 0;
}
Android.mk
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := faac
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/faac/include
LOCAL_SRC_FILES := faac/libfaac.a
$(info $(LOCAL_SRC_FILES))
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := x264
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/x264/include
LOCAL_SRC_FILES := x264/libx264.a
$(info $(LOCAL_SRC_FILES))
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := rtmpdump
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/rtmpdump/include
LOCAL_SRC_FILES := rtmpdump/librtmp.a
$(info $(LOCAL_SRC_FILES))
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := dn_live
LOCAL_SRC_FILES := dn_live.c queue.c
LOCAL_STATIC_LIBRARIES := x264 faac rtmpdump
LOCAL_LDLIBS := -llog
include $(BUILD_SHARED_LIBRARY)
Application.mk
APP_ABI :=
APP_PLATFORM := android-9
APP_STL := gnustl_static