OpenGL C++: displaying camera YUV video data and applying FBO (framebuffer object) post-processing to implement filter effects.
I. Introduction:
GitHub repository: GitHub - wangyongyao1989/WyFFmpeg: basic audio/video implementations
Articles in this series:
1. OpenGL Texture C++: previewing Camera video;
2. OpenGL Texture C++: Camera filter effects;
3. OpenGL custom SurfaceView Texture C++: previewing Camera video;
4. OpenGL Texture C++: recording Camera filter video;
5. OpenGL C++: adding image and text watermarks to video playback and recording;
6. OpenGL C++: displaying camera YUV video data with an FBO;
Rendered result:
OpenGL FBO post-processing implementing the filter effects.
II. Prerequisite knowledge:
- Article 1 of this series, *OpenGL Texture C++: previewing Camera video*, builds an OpenGL environment on top of GLSurfaceView and uploads the camera's YUV data into the three textures created for the fragment shader, displaying the video image.
- Article 2, *OpenGL Texture C++: Camera filter effects*, optimizes article 1's render pass by swapping in fragment-shader programs for different filter types, switching filters on the live camera preview.
- Article 3, *OpenGL custom SurfaceView Texture C++: previewing Camera video*, replaces GLSurfaceView with a custom GLSurfaceView-like class running its own GLThread OpenGL environment, again uploading the camera's YUV data into three textures for display.
- Article 4, *OpenGL Texture C++: recording Camera filter video*, builds on articles 1/2/3 and, using WindowSurface.java/Coregl.java/TextureMovieEncoder2.java/VideoEncoderCore.java from Google's open-source grafika project, creates a recording surface, switches the GL context and swaps buffers (swapBuffer) between the display and recording surfaces, and finally, in VideoEncoderCore, pulls encoded data from MediaCodec and writes it through MediaMuxer into an MP4 file.
- Article 5, *OpenGL C++: adding image and text watermarks to video playback and recording*, builds on articles 1/2/3/4: the image watermark is drawn by an additional shader program with an image texture (for texture basics see the author's post "LearnOpenGL之入门基础" on CSDN), and the text watermark adds yet another shader program for text rendering (see "LearnOpenGL之文字渲染" on CSDN).
- This article (6) uses OpenGL's framebuffer (FBO) technique to composite multiple render passes into the final image; a detailed FBO walkthrough is in the author's advanced-OpenGL exercise set, "LearnOpenGL之高级OpenGL(1)".
III. A brief introduction to framebuffer objects:
A framebuffer object (FBO) is OpenGL's core tool for managing render targets. It lets you direct rendering output to a texture, a renderbuffer object, or another custom buffer instead of the default on-screen framebuffer, which makes it central to many rendering techniques.
The main uses and application scenarios of framebuffers are:
Off-screen rendering:
A framebuffer lets you render the scene into an off-screen buffer (such as a texture) rather than directly to the screen. This is useful for:
- Reflection and refraction: render the scene to a texture, then apply that texture to reflective or refractive surfaces (mirrors, water).
- Environment mapping: generate dynamic environment maps (e.g. cube maps) for dynamic reflections or skyboxes.
- GUI rendering: render a complex GUI to a texture, then draw that texture on screen.
Post-processing:
Framebuffers are the core tool for post-processing: render the scene to a texture, then process that texture to produce visual effects such as:
- Blur: apply a Gaussian blur to the rendered result for depth-of-field or motion-blur effects.
- Tone mapping: map an HDR render into the LDR display range.
- Color correction: adjust brightness, contrast, saturation, and so on.
- Edge detection: detect edges with a Sobel (or Canny) operator for toon or outline effects.
In the author's AndroidLearnOpenGL exercise set on GitHub, GLFBOPostProcessing.cpp implements these framebuffer-based post-processing filter effects.
Shadow mapping:
Framebuffers are used to generate depth maps for dynamic shadows:
- Shadow mapping: render the scene from the light's point of view and store the depth values in the framebuffer's depth attachment.
- Soft shadows: blur the depth map to soften shadow edges.
Multiple render targets (MRT):
A framebuffer can carry several color attachments, enabling multiple render targets. This is useful for:
- Deferred rendering: store geometry data (position, normal, color, etc.) in multiple textures, then use those textures in the lighting pass.
- G-buffer: build a geometry buffer holding the scene's geometry and material data.
Anti-aliasing:
Framebuffers can implement anti-aliasing, in particular multisample anti-aliasing (MSAA):
- Multisample textures: render the scene into a multisample texture, then resolve it to a regular texture.
- Multisample renderbuffers: render into a multisample renderbuffer object, then resolve to a regular texture.
Dynamic texture generation:
Framebuffers can generate textures at runtime, such as procedural or continuously updated textures:
- Procedural textures: generate textures (noise, gradients) by rendering.
- Dynamic maps: update texture contents in real time (animated water, dynamic skyboxes).
VR rendering:
In virtual reality, framebuffers render the left- and right-eye views separately:
- Stereo rendering: generate a distinct view per eye.
- VR post-processing: post-process each eye's render independently.
Particle systems and effects:
Framebuffers support complex particle systems and screen-space effects:
- Particle systems: render particles to a texture, then blend or blur that texture.
- Screen-space effects: e.g. screen-space reflections (SSR) and screen-space ambient occlusion (SSAO).
Image compositing:
Framebuffers can composite multiple render results into a final image:
- Layered rendering: render different scene elements (background, characters, effects) to separate textures, then composite them.
- UI overlay: render UI elements to a texture and overlay it on the scene.
Scientific visualization:
Framebuffers are used to visualize scientific data, such as volume rendering and flow fields:
- Volume rendering: render volume data to textures, then blend and light them.
- Flow-field visualization: render flow data to a texture, then generate arrows or streamlines.
Core components of a framebuffer:
A framebuffer is made up of:
- Color attachment: stores color data (usually a texture).
- Depth attachment: stores depth data (a texture or a renderbuffer object).
- Stencil attachment: stores stencil data (a texture or a renderbuffer object).
IV. Implementation: displaying camera YUV video data with FBO post-processing filters
The code below adapts GLDrawTextVideoRender.cpp from article 5 and combines it with GLFBOPostProcessing.cpp from the AndroidLearnOpenGL exercise set to produce the FBO post-processing implementation.
YUV video drawing/display and the image watermark were already implemented in article 5, so the focus here is the FBO post-processing pass itself. The implementation follows:
Java layer:
package com.wangyongyao.glplay.view;
import android.content.Context;
import android.graphics.Point;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.util.Log;
import android.util.Size;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import androidx.annotation.NonNull;
import com.wangyongyao.glplay.OpenGLPlayCallJni;
import com.wangyongyao.glplay.camera.Camera2Helper2;
import com.wangyongyao.glplay.camera.GLCamera2Listener;
import com.wangyongyao.glplay.utils.OpenGLPlayFileUtils;
/**
* author : wangyongyao https://github.com/wangyongyao1989
* Create Time : 2025/1/16
* Describe : AndroidLearnOpenGL com.wangyongyao.views
*/
public class GLFBOPostProcessingView extends SurfaceView implements SurfaceHolder.Callback, GLCamera2Listener {
private static String TAG = GLFBOPostProcessingView.class.getSimpleName();
private OpenGLPlayCallJni mJniCall;
private Context mContext;
private SurfaceHolder mHolder;
private Surface mSurface;
private Camera2Helper2 camera2Helper;
public GLFBOPostProcessingView(Context context, OpenGLPlayCallJni jniCall) {
super(context);
mContext = context;
mJniCall = jniCall;
init();
}
public GLFBOPostProcessingView(Context context, AttributeSet attrs) {
super(context, attrs);
mContext = context;
init();
}
private void init() {
//get the SurfaceHolder
mHolder = getHolder();
//register the SurfaceHolder callbacks
mHolder.addCallback(this);
String vertexScreenPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_screen_vertex.glsl");
String fragScreenPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_screen_fragment.glsl");
String fragOppositionPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_post_opposition_fragment.glsl");
String fragGrayScalePath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_post_gray_scale_fragment.glsl");
String fragWeightedGrayPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_post_weighted_gray_fragment.glsl");
String fragNuclearEffectPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_post_nuclear_effect_fragment.glsl");
String fYUVPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_video_play_frament.glsl");
String vYUVPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "fbo_video_play_vert.glsl");
String picPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "yao.jpg");
String picVertexPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "draw_text_video_play_vert.glsl");
String picFragPath = OpenGLPlayFileUtils.getModelFilePath(mContext
, "draw_pic_frament.glsl");
if (mJniCall != null) {
mJniCall.setFBOPostProcessingGLSLPath(
fragScreenPath, vertexScreenPath
, fragOppositionPath
, fragGrayScalePath
, fragWeightedGrayPath
, fragNuclearEffectPath
, vYUVPath
, fYUVPath
, picPath
, picVertexPath
, picFragPath
);
}
}
public void setFBOPostProcessingType(int type) {
int typeValue = type % 5;
if (mJniCall != null) {
mJniCall.glFBOPostProcessingSetParameters(typeValue);
}
}
public int getFBOPostProcessingType() {
int type = 0;
if (mJniCall != null) {
type = mJniCall.glFBOPostProcessingGetParameters();
}
return type;
}
@Override
public void surfaceCreated(@NonNull SurfaceHolder holder) {
Log.e(TAG, "surfaceCreated");
mSurface = holder.getSurface();
if (mJniCall != null) {
mJniCall.glPSSurfaceCreated(mSurface, null);
}
}
@Override
public void surfaceChanged(@NonNull SurfaceHolder holder,
int format, int width, int height) {
Log.e(TAG, "onSurfaceChanged width:" + width + ",height:" + height
+ "===surface:" + mSurface.toString());
if (mJniCall != null) {
mJniCall.initFBOPostProcessing(width, height);
}
startCameraPreview(width, height);
}
@Override
public void surfaceDestroyed(@NonNull SurfaceHolder holder) {
if (mJniCall != null) {
// mJniCall.glBOPostProcessingDestroy();
}
}
@Override
public void onCameraOpened(Size previewSize, int displayOrientation) {
}
@Override
public void onPreviewFrame(byte[] yuvData, int width, int height) {
if (mJniCall != null) {
mJniCall.glFboPostProcessingSurfaceDraw(yuvData, width, height, 90);
mJniCall.glFBOPostProcessingRenderFrame();
}
}
@Override
public void onCameraClosed() {
}
@Override
public void onCameraError(Exception e) {
}
private void startCameraPreview(int width, int height) {
if (camera2Helper == null) {
camera2Helper = new Camera2Helper2.Builder()
.cameraListener(this)
.specificCameraId(Camera2Helper2.CAMERA_ID_BACK)
.context(mContext)
.previewViewSize(new Point(width, height))
.rotation(90)
.build();
}
camera2Helper.start();
}
}
Native (JNI) layer:
Pass the GLSL shader files, the image resource, and each incoming YUV frame down to the native layer, mapping the Java calls onto the C++ implementation:
#include <jni.h>
#include <string>
#include <android/log.h>
#include "includeopengl/OpenglesFlashLight.h"
#include "includeopengl/OpenglesCameraPre.h"
#include "includeopengl/OpenGLShader.h"
#include "includeopengl/OpenGLCamera3D.h"
#include <android/native_window_jni.h>
#include <android/asset_manager_jni.h>
#include "OpenglesTextureFilterRender.h"
#include "OpenglesSurfaceViewVideoRender.h"
#include "EGLSurfaceViewVideoRender.h"
#include "GLDrawTextVideoRender.h"
#include "GLFBOPostProcessing.h"
#define LOG_TAG "wy"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGW(...) __android_log_print(ANDROID_LOG_WARN, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)
using namespace std;
//package + class name string:
const char *java_call_jni_class = "com/wangyongyao/glplay/OpenGLPlayCallJni";
//global renderer instance used by the FBO post-processing entry points below:
GLFBOPostProcessing *postProcessing = nullptr;
/*********************** GL framebuffer FBO: post-processing ********************/
extern "C"
JNIEXPORT void JNICALL
cpp_fbo_post_processing_frag_vertex_path(JNIEnv *env, jobject thiz,
jstring fragScreen, jstring vertexScreen,
jstring fragOpposition,
jstring fragGrayScale,
jstring fragWeightedGray,
jstring fragNuclearEffect,
jstring vYUV, jstring fYUV,
jstring picsrc3,
jstring picVertex, jstring picFrag
) {
const char *fragScreenPath = env->GetStringUTFChars(fragScreen, nullptr);
const char *vertexScreenPath = env->GetStringUTFChars(vertexScreen, nullptr);
const char *fragOppositionPath = env->GetStringUTFChars(fragOpposition, nullptr);
const char *fragGrayScalePath = env->GetStringUTFChars(fragGrayScale, nullptr);
const char *fragWeightedGrayPath = env->GetStringUTFChars(fragWeightedGray, nullptr);
const char *fragNuclearEffectPath = env->GetStringUTFChars(fragNuclearEffect, nullptr);
const char *vYUVPath = env->GetStringUTFChars(vYUV, nullptr);
const char *fYUVPath = env->GetStringUTFChars(fYUV, nullptr);
const char *pic3Path = env->GetStringUTFChars(picsrc3, nullptr);
const char *picVertexPath = env->GetStringUTFChars(picVertex, nullptr);
const char *picFragPath = env->GetStringUTFChars(picFrag, nullptr);
if (postProcessing == nullptr) {
postProcessing = new GLFBOPostProcessing();
}
string sVertexScreenPath(vertexScreenPath);
string sFragScreenPath(fragScreenPath);
string sFragOppositionPath(fragOppositionPath);
string sFragGrayScalePath(fragGrayScalePath);
string sFragWeightedGrayPath(fragWeightedGrayPath);
string sFragNuclearEffectPath(fragNuclearEffectPath);
vector<string> sFragPathes;
sFragPathes.push_back(sFragScreenPath);
sFragPathes.push_back(sFragOppositionPath);
sFragPathes.push_back(sFragGrayScalePath);
sFragPathes.push_back(sFragWeightedGrayPath);
sFragPathes.push_back(sFragNuclearEffectPath);
postProcessing->setSharderScreenPathes(sVertexScreenPath, sFragPathes);
postProcessing->setPicPath(pic3Path);
postProcessing->setPicSharderPath(picVertexPath, picFragPath);
postProcessing->setYUVSharderPath(vYUVPath, fYUVPath);
env->ReleaseStringUTFChars(fragScreen, fragScreenPath);
env->ReleaseStringUTFChars(vertexScreen, vertexScreenPath);
env->ReleaseStringUTFChars(fragOpposition, fragOppositionPath);
env->ReleaseStringUTFChars(fragGrayScale, fragGrayScalePath);
env->ReleaseStringUTFChars(fragWeightedGray, fragWeightedGrayPath);
env->ReleaseStringUTFChars(fragNuclearEffect, fragNuclearEffectPath);
env->ReleaseStringUTFChars(vYUV, vYUVPath);
env->ReleaseStringUTFChars(fYUV, fYUVPath);
env->ReleaseStringUTFChars(picsrc3, pic3Path);
env->ReleaseStringUTFChars(picVertex, picVertexPath);
env->ReleaseStringUTFChars(picFrag, picFragPath);
}
extern "C"
JNIEXPORT jboolean JNICALL
cpp_fbo_post_processing_init_opengl(JNIEnv *env, jobject thiz, jint width, jint height) {
if (postProcessing == nullptr)
postProcessing = new GLFBOPostProcessing();
postProcessing->surfaceChanged(width, height);
return JNI_TRUE;
}
extern "C"
JNIEXPORT void JNICALL
cpp_fbo_ps_surface_created(JNIEnv *env, jobject thiz,
jobject surface,
jobject assetManager) {
if (postProcessing != nullptr) {
ANativeWindow *window = surface ? ANativeWindow_fromSurface(env, surface) : nullptr;
auto *aAssetManager = assetManager ? AAssetManager_fromJava(env, assetManager) : nullptr;
postProcessing->surfaceCreated(window, aAssetManager);
}
}
extern "C"
JNIEXPORT void JNICALL
cpp_fbo_post_processing_render_frame(JNIEnv *env, jobject thiz) {
if (postProcessing == nullptr) return;
postProcessing->render();
}
extern "C"
JNIEXPORT void JNICALL
cpp_fbo_ps_surface_draw(JNIEnv *env, jobject obj, jbyteArray data, jint width, jint height,
jint rotation) {
jbyte *bufferPtr = env->GetByteArrayElements(data, nullptr);
jsize arrayLength = env->GetArrayLength(data);
if (postProcessing != nullptr) {
postProcessing->draw((uint8_t *) bufferPtr, (size_t) arrayLength, (size_t) width,
(size_t) height,
rotation);
}
env->ReleaseByteArrayElements(data, bufferPtr, 0);
}
extern "C"
JNIEXPORT void JNICALL
cpp_fbo_post_processing_setParameters(JNIEnv *env, jobject thiz, jint p) {
if (postProcessing != nullptr) {
postProcessing->setParameters((uint32_t) p);
}
}
extern "C"
JNIEXPORT jint JNICALL
cpp_fbo_post_processing_getParameters(JNIEnv *env, jobject thiz) {
if (postProcessing != nullptr) {
return postProcessing->getParameters();
}
return 0;
}
static const JNINativeMethod methods[] = {
/*********************** GL framebuffer FBO: post-processing ********************/
{"native_fbo_post_processing_set_glsl_path", "(Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String"
";Ljava/lang/String;)V", (void *) cpp_fbo_post_processing_frag_vertex_path},
{"native_fbo_post_processing_init_opengl", "(II)Z", (void *) cpp_fbo_post_processing_init_opengl},
{"native_fbo_ps_surface_created", "(Landroid/view/Surface;"
"Landroid/content/res"
"/AssetManager;)V", (void *) cpp_fbo_ps_surface_created},
{"native_fbo_post_processing_render_frame", "()V", (void *) cpp_fbo_post_processing_render_frame},
{"native_fbo_ps_surface_draw", "([BIII)V", (void *) cpp_fbo_ps_surface_draw},
{"native_fbo_post_processing_set_parameters", "(I)V", (void *) cpp_fbo_post_processing_setParameters},
{"native_fbo_post_processing_get_parameters", "()I", (void *) cpp_fbo_post_processing_getParameters},
};
// register the native methods dynamically
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
LOGD("dynamic registration");
JNIEnv *env;
if ((vm)->GetEnv((void **) &env, JNI_VERSION_1_6) != JNI_OK) {
LOGD("dynamic registration: GetEnv failed");
return JNI_ERR;
}
// get the class reference
jclass clazz = env->FindClass(java_call_jni_class);
// register the native methods
jint regist_result = env->RegisterNatives(clazz, methods,
sizeof(methods) / sizeof(methods[0]));
if (regist_result) { // non-zero means failure
LOGE("dynamic registration failed, regist_result = %d", regist_result);
} else {
LOGI("dynamic registration succeeded, result = %d", regist_result);
}
return JNI_VERSION_1_6;
}
extern "C" void JNI_OnUnload(JavaVM *jvm, void *p) {
JNIEnv *env = NULL;
if (jvm->GetEnv((void **) (&env), JNI_VERSION_1_6) != JNI_OK) {
return;
}
jclass clazz = env->FindClass(java_call_jni_class);
if (env != NULL) {
env->UnregisterNatives(clazz);
}
}
C++ layer:
This layer links, compiles, and creates the shader programs; creates, updates, and binds the textures; binds the vertex and texture-coordinate data; and finally draws.
Creating the shader programs and textures:
bool GLFBOPostProcessing::setupGraphics(int w, int h) {
screenW = w;
screenH = h;
LOGI("setupGraphics(%d, %d)", w, h);
glViewport(0, 0, w, h);
checkGlError("glViewport");
LOGI("glViewport succeeded!");
//clear the screen
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
checkGlError("glClear");
//enable depth testing
glEnable(GL_DEPTH_TEST);
//create the YUV program and YUV textures
useYUVProgram();
createYUVTextures();
//create the picture program and picture texture
createPicProgram();
creatPicTexture();
//create the FBO program and FBO texture
createFBOProgram();
creatFBOTexture();
return true;
}
Creating the YUV video textures and the picture-watermark texture was covered in the previous articles; the focus here is creating the FBO texture, which differs little from creating an ordinary texture:
- Create the framebuffer shader program and look up its shader parameters:
void GLFBOPostProcessing::createFBOProgram() {
screenProgram = screenShader->createProgram();
if (!screenProgram) {
LOGE("Could not create screenProgram shaderId.");
return;
}
//look up the vertex shader's in attributes and the fragment shader's uniform
m_screen_vertexPos = (GLuint) glGetAttribLocation(screenProgram,
"aPos");
m_screen_textureCoordLoc = (GLuint) glGetAttribLocation(screenProgram,
"aTexCoords");
m_textureScreenLoc = (GLuint) glGetUniformLocation(screenProgram,
"screenTexture");
screenShader->use();
screenShader->setInt("screenTexture", 0);
}
- Create the FBO texture:
void GLFBOPostProcessing::creatFBOTexture() {
//1. First create a framebuffer object and bind it
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//2. Next create a texture image and attach it to the framebuffer as its color attachment.
// Give the texture the window's width and height and leave its data uninitialized
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, screenW, screenH,
0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, fboTexture,
0);
//3. Create a renderbuffer object
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, screenW,
screenH);
//4. Attach the renderbuffer to the framebuffer's depth and stencil attachment
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER,
rbo);
//5. Finally, check that the framebuffer is complete; if not, print an error
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
LOGE("ERROR::FRAMEBUFFER:: Framebuffer is not complete!");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Binding the vertex and texture data and drawing:
void GLFBOPostProcessing::renderFrame() {
if (m_filter != m_prevFilter) {
m_prevFilter = m_filter;
//switch the post-processing shader to change filters.
if (m_filter >= 0 && m_filter < m_fragmentStringPathes.size()) {
delete_program(screenProgram);
LOGI("render---m_filter:%d", m_filter);
screenShader->getSharderStringPath(m_vertexStringPath,
m_fragmentStringPathes.at(m_prevFilter));
createFBOProgram();
}
}
//bind the framebuffer and draw the scene into the color texture as usual
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//enable depth testing (it is disabled later for the screen-space quad)
glEnable(GL_DEPTH_TEST);
// make sure the framebuffer's contents are cleared
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//draw the YUV video texture
yuvGLShader->use();
glm::mat4 model1 = glm::mat4(1.0f);
glm::mat4 view1 = mCamera.GetViewMatrix();
glm::mat4 projection1 = glm::perspective(glm::radians(mCamera.Zoom),
(float) screenW / (float) screenH,
0.1f, 100.0f);
yuvGLShader->setMat4("view", view1);
yuvGLShader->setMat4("projection", projection1);
if (!updateYUVTextures() || !useYUVProgram()) return;
model1 = glm::scale(model1, glm::vec3(6.0f, 4.0f, 1.0f));
yuvGLShader->setMat4("model", model1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//draw the picture-watermark texture
bindPicTexture();
usePicProgram();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//now bind back to the default framebuffer and draw a quad textured with the framebuffer's color attachment
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//disable depth testing so the screen-space quad isn't discarded by the depth test.
glDisable(GL_DEPTH_TEST);
// clear all relevant buffers
// (setting the clear color to white isn't strictly necessary, since we can't see behind the quad anyway)
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
screenShader->use();
useFBOProgram();
//use the color attachment texture as the quad's texture
glBindTexture(GL_TEXTURE_2D, fboTexture);
glDrawArrays(GL_TRIANGLES, 0, 6);
checkGlError("glDrawArrays");
//switch to m_WindowSurface
m_WindowSurface->makeCurrent();
m_WindowSurface->swapBuffers();
}
The FBO pass binds its vertex and texture-coordinate data the same way as the YUV and watermark passes:
void GLFBOPostProcessing::useFBOProgram() {
glUseProgram(screenProgram);
//bind the vertex and texture-coordinate data
glVertexAttribPointer(m_screen_vertexPos, 3,
GL_FLOAT, GL_FALSE, 0,
PostProcessingQuadVertices1);
glEnableVertexAttribArray(m_screen_vertexPos);
glUniform1i(m_textureScreenLoc, 3);
glVertexAttribPointer(m_screen_textureCoordLoc, 3,
GL_FLOAT, GL_FALSE, 0,
PostProcessingQuadTextCoord);
glEnableVertexAttribArray(m_screen_textureCoordLoc);
}
Full C++ class:
All of the code above is committed at: WyFFmpeg/glplay at main · wangyongyao1989/WyFFmpeg · GitHub
GLFBOPostProcessing.cpp:
// Author : wangyongyao https://github.com/wangyongyao1989
// Created by MMM on 2025/1/23.
//
#include <android/native_window.h>
#include <android/asset_manager.h>
#include <GLES3/gl3.h>
#include "GLFBOPostProcessing.h"
bool GLFBOPostProcessing::setupGraphics(int w, int h) {
screenW = w;
screenH = h;
LOGI("setupGraphics(%d, %d)", w, h);
glViewport(0, 0, w, h);
checkGlError("glViewport");
LOGI("glViewport succeeded!");
//clear the screen
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
checkGlError("glClear");
//enable depth testing
glEnable(GL_DEPTH_TEST);
//create the YUV program and YUV textures
useYUVProgram();
createYUVTextures();
//create the picture program and picture texture
createPicProgram();
creatPicTexture();
//create the FBO program and FBO texture
createFBOProgram();
creatFBOTexture();
return true;
}
void GLFBOPostProcessing::renderFrame() {
if (m_filter != m_prevFilter) {
m_prevFilter = m_filter;
//switch the post-processing shader to change filters.
if (m_filter >= 0 && m_filter < m_fragmentStringPathes.size()) {
delete_program(screenProgram);
LOGI("render---m_filter:%d", m_filter);
screenShader->getSharderStringPath(m_vertexStringPath,
m_fragmentStringPathes.at(m_prevFilter));
createFBOProgram();
}
}
//bind the framebuffer and draw the scene into the color texture as usual
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//enable depth testing (it is disabled later for the screen-space quad)
glEnable(GL_DEPTH_TEST);
// make sure the framebuffer's contents are cleared
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//draw the YUV video texture
yuvGLShader->use();
glm::mat4 model1 = glm::mat4(1.0f);
glm::mat4 view1 = mCamera.GetViewMatrix();
glm::mat4 projection1 = glm::perspective(glm::radians(mCamera.Zoom),
(float) screenW / (float) screenH,
0.1f, 100.0f);
yuvGLShader->setMat4("view", view1);
yuvGLShader->setMat4("projection", projection1);
if (!updateYUVTextures() || !useYUVProgram()) return;
model1 = glm::scale(model1, glm::vec3(6.0f, 4.0f, 1.0f));
yuvGLShader->setMat4("model", model1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//draw the picture-watermark texture
bindPicTexture();
usePicProgram();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//now bind back to the default framebuffer and draw a quad textured with the framebuffer's color attachment
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//disable depth testing so the screen-space quad isn't discarded by the depth test.
glDisable(GL_DEPTH_TEST);
// clear all relevant buffers
// (setting the clear color to white isn't strictly necessary, since we can't see behind the quad anyway)
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
screenShader->use();
useFBOProgram();
//use the color attachment texture as the quad's texture
glBindTexture(GL_TEXTURE_2D, fboTexture);
glDrawArrays(GL_TRIANGLES, 0, 6);
checkGlError("glDrawArrays");
//switch to m_WindowSurface
m_WindowSurface->makeCurrent();
m_WindowSurface->swapBuffers();
}
GLFBOPostProcessing::GLFBOPostProcessing() {
screenShader = new OpenGLShader;
yuvGLShader = new OpenGLShader();
picGLShader = new OpenGLShader();
}
GLFBOPostProcessing::~GLFBOPostProcessing() {
deletePicTextures();
delete_program(m_pic_program);
delete_program(screenProgram);
fboTexture = 0;
//release resources in the destructor
glDeleteRenderbuffers(1, &rbo);
glDeleteFramebuffers(1, &framebuffer);
if (winsurface) {
winsurface = nullptr;
}
if (m_EglCore) {
delete m_EglCore;
m_EglCore = nullptr;
}
if (m_WindowSurface) {
delete m_WindowSurface;
m_WindowSurface = nullptr;
}
if (m_pDataY) {
m_pDataY = nullptr;
}
//m_pDataU and m_pDataV point into the buffer owned by m_pDataY,
//so they must not be deleted separately
m_pDataU = nullptr;
m_pDataV = nullptr;
if (screenShader) {
delete screenShader;
screenShader = nullptr;
}
screenProgram = 0;
m_filter = 0;
if (picData) {
stbi_image_free(picData);
picData = nullptr;
}
if (picGLShader) {
delete picGLShader;
picGLShader = nullptr;
}
deleteYUVTextures();
if (yuvGLShader) {
delete yuvGLShader;
yuvGLShader = nullptr;
}
}
bool
GLFBOPostProcessing::setSharderScreenPathes(string vertexScreenPath,
vector<string> fragmentScreenPathes) {
screenShader->getSharderStringPath(vertexScreenPath, fragmentScreenPathes.front());
m_vertexStringPath = vertexScreenPath;
m_fragmentStringPathes = fragmentScreenPathes;
return true;
}
void GLFBOPostProcessing::setPicPath(const char *pic) {
LOGI("setPicPath pic==%s", pic);
picData = stbi_load(pic, &picWidth, &picHeight, &picChannels, 0);
}
bool GLFBOPostProcessing::setPicSharderPath(const char *vertexPath,
const char *fragmentPath) {
picGLShader->getSharderStringPath(vertexPath, fragmentPath);
return true;
}
bool GLFBOPostProcessing::setYUVSharderPath(const char *vertexPath,
const char *fragmentPath) {
yuvGLShader->getSharderPath(vertexPath, fragmentPath);
return true;
}
void GLFBOPostProcessing::printGLString(const char *name, GLenum s) {
const char *v = (const char *) glGetString(s);
LOGI("OpenGL %s = %s\n", name, v);
}
void GLFBOPostProcessing::checkGlError(const char *op) {
for (GLint error = glGetError(); error; error = glGetError()) {
LOGI("after %s() glError (0x%x)\n", op, error);
}
}
void GLFBOPostProcessing::setParameters(uint32_t i) {
m_filter = i;
LOGI("setParameters---m_filter:%d", m_filter);
}
jint GLFBOPostProcessing::getParameters() {
return m_filter;
}
void GLFBOPostProcessing::createFBOProgram() {
screenProgram = screenShader->createProgram();
if (!screenProgram) {
LOGE("Could not create screenProgram shaderId.");
return;
}
//look up the vertex shader's in attributes and the fragment shader's uniform
m_screen_vertexPos = (GLuint) glGetAttribLocation(screenProgram,
"aPos");
m_screen_textureCoordLoc = (GLuint) glGetAttribLocation(screenProgram,
"aTexCoords");
m_textureScreenLoc = (GLuint) glGetUniformLocation(screenProgram,
"screenTexture");
screenShader->use();
screenShader->setInt("screenTexture", 0);
}
void GLFBOPostProcessing::delete_program(GLuint &program) {
if (program) {
glUseProgram(0);
glDeleteProgram(program);
program = 0;
}
}
void GLFBOPostProcessing::OnSurfaceCreated() {
m_EglCore = new EglCore(eglGetCurrentContext(), FLAG_RECORDABLE);
if (!m_EglCore) {
LOGE("new EglCore failed!");
return;
}
LOGE("OnSurfaceCreated m_ANWindow:%p", m_ANWindow);
m_WindowSurface = new WindowSurface(m_EglCore, m_ANWindow);
if (!m_WindowSurface) {
LOGE("new WindowSurface failed!");
return;
}
m_WindowSurface->makeCurrent();
}
void GLFBOPostProcessing::surfaceCreated(ANativeWindow *window,
AAssetManager *assetManager) {
m_ANWindow = window;
postMessage(MSG_PS_SurfaceCreated, false);
}
void GLFBOPostProcessing::surfaceChanged(size_t width, size_t height) {
postMessage(MSG_PS_SurfaceChanged, width, height);
}
void GLFBOPostProcessing::render() {
postMessage(MSG_PS_DrawFrame, false);
}
void GLFBOPostProcessing::release() {
postMessage(MSG_PS_SurfaceDestroyed, false);
}
void GLFBOPostProcessing::handleMessage(LooperMessage *msg) {
Looper::handleMessage(msg);
switch (msg->what) {
case MSG_PS_SurfaceCreated: {
OnSurfaceCreated();
}
break;
case MSG_PS_SurfaceChanged:
setupGraphics(msg->arg1, msg->arg2);
break;
case MSG_PS_DrawFrame:
renderFrame();
break;
case MSG_PS_SurfaceDestroyed:
// OnSurfaceDestroyed();
break;
default:
break;
}
}
void
GLFBOPostProcessing::draw(uint8_t *buffer, size_t length,
size_t width, size_t height,
float rotation) {
ps_video_frame frame{};
frame.width = width;
frame.height = height;
frame.stride_y = width;
frame.stride_uv = width / 2;
frame.y = buffer;
frame.u = buffer + width * height;
frame.v = buffer + width * height * 5 / 4;
updateFrame(frame);
}
void GLFBOPostProcessing::updateFrame(const ps_video_frame &frame) {
m_sizeY = frame.width * frame.height;
m_sizeU = frame.width * frame.height / 4;
m_sizeV = frame.width * frame.height / 4;
if (m_pDataY == nullptr || m_width != frame.width
|| m_height != frame.height) {
m_pDataY = std::make_unique<uint8_t[]>(m_sizeY
+ m_sizeU + m_sizeV);
m_pDataU = m_pDataY.get() + m_sizeY;
m_pDataV = m_pDataU + m_sizeU;
}
m_width = frame.width;
m_height = frame.height;
if (m_width == frame.stride_y) {
memcpy(m_pDataY.get(), frame.y, m_sizeY);
} else {
uint8_t *pSrcY = frame.y;
uint8_t *pDstY = m_pDataY.get();
for (int h = 0; h < m_height; h++) {
memcpy(pDstY, pSrcY, m_width);
pSrcY += frame.stride_y;
pDstY += m_width;
}
}
if (m_width / 2 == frame.stride_uv) {
memcpy(m_pDataU, frame.u, m_sizeU);
memcpy(m_pDataV, frame.v, m_sizeV);
} else {
uint8_t *pSrcU = frame.u;
uint8_t *pSrcV = frame.v;
uint8_t *pDstU = m_pDataU;
uint8_t *pDstV = m_pDataV;
for (int h = 0; h < m_height / 2; h++) {
memcpy(pDstU, pSrcU, m_width / 2);
memcpy(pDstV, pSrcV, m_width / 2);
pDstU += m_width / 2;
pDstV += m_width / 2;
pSrcU += frame.stride_uv;
pSrcV += frame.stride_uv;
}
}
isDirty = true;
}
bool GLFBOPostProcessing::createYUVTextures() {
auto widthY = (GLsizei) m_width;
auto heightY = (GLsizei) m_height;
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &m_textureIdY);
glBindTexture(GL_TEXTURE_2D, m_textureIdY);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthY, heightY,
0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
nullptr);
if (!m_textureIdY) {
LOGE("OpenGL Error Create Y texture");
return false;
}
GLsizei widthU = (GLsizei) m_width / 2;
GLsizei heightU = (GLsizei) m_height / 2;
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &m_textureIdU);
glBindTexture(GL_TEXTURE_2D, m_textureIdU);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthU, heightU,
0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
nullptr);
if (!m_textureIdU) {
LOGE("OpenGL Error Create U texture");
return false;
}
GLsizei widthV = (GLsizei) m_width / 2;
GLsizei heightV = (GLsizei) m_height / 2;
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &m_textureIdV);
glBindTexture(GL_TEXTURE_2D, m_textureIdV);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthV, heightV, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
nullptr);
if (!m_textureIdV) {
LOGE("OpenGL Error Create V texture");
return false;
}
return true;
}
bool GLFBOPostProcessing::updateYUVTextures() {
if (!m_textureIdY && !m_textureIdU && !m_textureIdV) return false;
// LOGE("updateTextures m_textureIdY:%d,m_textureIdU:%d,m_textureIdV:%d,===isDirty:%d",
// m_textureIdY,
// m_textureIdU, m_textureIdV, isDirty);
if (isDirty) {
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureIdY);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
(GLsizei) m_width, (GLsizei) m_height, 0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataY.get());
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_textureIdU);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
(GLsizei) m_width / 2, (GLsizei) m_height / 2,
0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataU);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, m_textureIdV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
(GLsizei) m_width / 2, (GLsizei) m_height / 2,
0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataV);
isDirty = false;
return true;
}
return false;
}
void GLFBOPostProcessing::deleteYUVTextures() {
if (m_textureIdY) {
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &m_textureIdY);
m_textureIdY = 0;
}
if (m_textureIdU) {
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &m_textureIdU);
m_textureIdU = 0;
}
if (m_textureIdV) {
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &m_textureIdV);
m_textureIdV = 0;
}
}
int
GLFBOPostProcessing::createYUVProgram() {
//create the YUV video shader program
m_yuv_program = yuvGLShader->createProgram();
LOGI("GLFboDrawTextVideoRender createProgram m_yuv_program:%d", m_yuv_program);
if (!m_yuv_program) {
LOGE("Could not create program.");
return 0;
}
//look up the vertex shader's in attributes and the fragment shader's uniforms
//Get Uniform Variables Location
m_yuv_vertexPos = (GLuint) glGetAttribLocation(m_yuv_program,
"position");
m_textureYLoc = glGetUniformLocation(m_yuv_program,
"s_textureY");
m_textureULoc = glGetUniformLocation(m_yuv_program,
"s_textureU");
m_textureVLoc = glGetUniformLocation(m_yuv_program,
"s_textureV");
m_yuv_textureCoordLoc = (GLuint) glGetAttribLocation(m_yuv_program,
"texcoord");
return m_yuv_program;
}
GLuint GLFBOPostProcessing::useYUVProgram() {
if (!m_yuv_program && !createYUVProgram()) {
LOGE("Could not use program.");
return 0;
}
    //Bind the vertex data and texture data
glUseProgram(m_yuv_program);
glVertexAttribPointer(m_yuv_vertexPos, 3, GL_FLOAT, GL_FALSE,
0, FboPsVerticek);
glEnableVertexAttribArray(m_yuv_vertexPos);
glUniform1i(m_textureYLoc, 0);
glUniform1i(m_textureULoc, 1);
glUniform1i(m_textureVLoc, 2);
glVertexAttribPointer(m_yuv_textureCoordLoc, 3, GL_FLOAT, GL_FALSE,
0, FboPsTextureCoord);
glEnableVertexAttribArray(m_yuv_textureCoordLoc);
return m_yuv_program;
}
int
GLFBOPostProcessing::createPicProgram() {
    //Create the shader program for the picture watermark
m_pic_program = picGLShader->createProgram();
if (!m_pic_program) {
LOGE("Could not create m_pic_program.");
return 0;
}
    //Look up the "in" attributes of the vertex shader and the uniforms of the fragment shader
m_pic_vertexPos = (GLuint) glGetAttribLocation(
m_pic_program, "position");
m_pic_textureCoordLoc = (GLuint) glGetAttribLocation(
m_pic_program, "texcoord");
m_texturePicLoc = (GLuint) glGetUniformLocation(
m_pic_program, "s_texturePic");
return m_pic_program;
}
void GLFBOPostProcessing::usePicProgram() {
glUseProgram(m_pic_program);
    //Bind the vertex data and texture data
glVertexAttribPointer(m_pic_vertexPos, 3, GL_FLOAT,
GL_FALSE, 0, EGLFboPSTextVerticek1);
glEnableVertexAttribArray(m_pic_vertexPos);
glUniform1i(m_texturePicLoc, 3);
glVertexAttribPointer(m_pic_textureCoordLoc, 3,
GL_FLOAT, GL_FALSE, 0,
EGLFboPSTextTextureCoord);
glEnableVertexAttribArray(m_pic_textureCoordLoc);
}
void GLFBOPostProcessing::useFBOProgram() {
glUseProgram(screenProgram);
    //Bind the vertex data and texture data
glVertexAttribPointer(m_screen_vertexPos, 3,
GL_FLOAT, GL_FALSE, 0,
PostProcessingQuadVertices1);
glEnableVertexAttribArray(m_screen_vertexPos);
glUniform1i(m_textureScreenLoc, 3);
glVertexAttribPointer(m_screen_textureCoordLoc, 3,
GL_FLOAT, GL_FALSE, 0,
PostProcessingQuadTextCoord);
glEnableVertexAttribArray(m_screen_textureCoordLoc);
}
void GLFBOPostProcessing::deletePicTextures() {
    //Note: the original code reused the uniform location m_texturePicLoc as the texture
    //object ID, which corrupts the sampler location after glGenTextures. A separate
    //GLuint member m_picTextureId (assumed declared in the header) holds the texture
    //name instead, so the uniform location stays valid.
    if (m_picTextureId) {
        glActiveTexture(GL_TEXTURE3);
        glBindTexture(GL_TEXTURE_2D, 0);
        glDeleteTextures(1, &m_picTextureId);
        m_picTextureId = 0;
    }
}
void GLFBOPostProcessing::bindPicTexture() {
    if (m_picTextureId) {
        //Bind the picture texture to unit 3, matching glUniform1i(m_texturePicLoc, 3)
        glActiveTexture(GL_TEXTURE3);
        glBindTexture(GL_TEXTURE_2D, m_picTextureId);
    }
}
void GLFBOPostProcessing::creatPicTexture() {
    if (picData) {
        //Pick the GL pixel format that matches the image's channel count
        GLenum format = 0;
        if (picChannels == 1) {
            format = GL_RED;
        } else if (picChannels == 3) {
            format = GL_RGB;
        } else if (picChannels == 4) {
            format = GL_RGBA;
        }
        glGenTextures(1, &m_picTextureId);
        glBindTexture(GL_TEXTURE_2D, m_picTextureId);
        glTexImage2D(GL_TEXTURE_2D, 0, format, picWidth,
                     picHeight, 0, format, GL_UNSIGNED_BYTE,
                     picData);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
                        GL_LINEAR);
        stbi_image_free(picData);
    } else {
        //No need to free a null pointer; just report the failure
        LOGE("creatPicTexture picData =(null)");
    }
    if (!m_picTextureId) {
        LOGE("creatPicTexture Error Create PIC texture");
    }
}
void GLFBOPostProcessing::creatFBOTexture() {
    //1. First create a framebuffer object and bind it
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    //2. Next create a texture image and attach it to the framebuffer as a color attachment.
    //   Size the texture to the window's width and height, leaving its data uninitialized
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, screenW, screenH,
0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, fboTexture,
0);
    //3. Create a renderbuffer object for the depth and stencil buffers
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, screenW,
screenH);
    //4. Attach the renderbuffer object to the framebuffer's depth and stencil attachment
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER,
rbo);
    //5. Finally, check that the framebuffer is complete; if not, log an error
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
LOGE("ERROR::FRAMEBUFFER:: Framebuffer is not complete!");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
五、Conclusion:
As a follow-up, the text watermark and video recording features from article 5 of this series can be layered on top of this FBO implementation.
GitHub addresses:
https://github.com/wangyongyao1989/AndroidLearnOpenGL
WyFFmpeg/glplay at main · wangyongyao1989/WyFFmpeg · GitHub
References:
LearnOpenGL (Chinese translation)