[OpenGL] Implementing a Physically Based Rendering (PBR) Model with OpenGL (Part 2)
一、Introduction
The previous post introduced the basic concepts of Physically Based Rendering (PBR) and implemented PBR with point lights only. This post continues with the diffuse part of image-based lighting (IBL) in PBR; the specular part of IBL will be covered in the next post.
Following the code in this article, you can achieve the following effect:
二、PBR Based on IBL
1. What is IBL
IBL (Image-Based Lighting) is a rendering technique that uses an environment map to provide indirect lighting. It is widely used in physically based rendering (PBR), film visual effects, and real-time rendering (games, VR, etc.). By relying on precomputed lighting information, it lets objects exhibit much more convincing global illumination (GI) in complex environments.
Simply put, IBL is a way of storing environment lighting. In IBL, the model being rendered is assumed to receive light from every direction in the environment. The environment light is stored in an environment map: a texture (similar to a skybox) that records the radiance arriving from each direction. At render time, the contribution of the environment light from any direction to the target shading point can then be read from this environment map.
According to the rendering equation:
$$L_o(p,\omega_o)=\int_{\Omega} f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$
At render time, for a target point $p$ and the current view direction $\omega_o$, we can sample the environment map in direction $\omega_i$; the value sampled in direction $\omega_i$ is exactly $L_i(p,\omega_i)$. We can therefore solve the rendering equation above by numerical integration to obtain the target value $L_o$.
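For intuition, here is a minimal sketch of such a numerical (Riemann-sum) approximation, assuming $N$ uniformly distributed sample directions over the hemisphere (the sample count $N$ is an arbitrary choice, not something fixed by the method):
$$L_o(p,\omega_o)\approx\frac{2\pi}{N}\sum_{k=1}^{N} f_r(p,\omega_k,\omega_o)\, L_i(p,\omega_k)\,(n\cdot\omega_k)$$
where each $L_i(p,\omega_k)$ is simply a texture fetch from the environment map in direction $\omega_k$.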
2. The diffuse part of IBL
As discussed in the previous post, in PBR the BRDF $f_r(p,\omega_i,\omega_o)$ can be split into a diffuse part and a specular part, as in the following formula:
$$f_r(p,\omega_i,\omega_o)=k_d\, f_{lambert}+k_s\, f_{Cook-Torrance}$$
Therefore the rendering equation can be written as:
$$L_o(p,\omega_o)=\int_{\Omega}\left(k_d\, f_{lambert}+k_s\, f_{Cook-Torrance}\right) L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$
where
$$f_{lambert} = \frac{c}{\pi}$$
If we consider only the diffuse part, we get:
$$
\begin{aligned}
L_{o,lambert}(p,\omega_o) &= \int_{\Omega} k_d\, f_{lambert}\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i \\
&= \int_{\Omega} k_d\, \frac{c}{\pi}\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i \\
&= k_d\, \frac{c}{\pi}\int_{\Omega} L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i
\end{aligned}
$$
Therefore, if only the diffuse part is considered, the integral for any target shading point $p$ depends only on the hemisphere $\Omega_{n_p}$ around the normal $n_p$ of $p$; it does not depend on the current view direction $\omega_o$ or on the material properties of $p$.
We can therefore precompute, from the environment map, the integral over the hemisphere $\Omega_{n_p}$ for each normal $n_p$, namely $\frac{1}{\pi}\int_{\Omega_{n_p}} L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$. This gives a lookup table whose only input dimension is the normal $n_p$ of the target shading point, as follows:
$$L_{o,lambert}(n_p) = \mathrm{lookup}(n_p) = \frac{1}{\pi}\int_{\Omega_{n_p}} L_i(\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$
Then, given the shading point's $k_d$ and $c$, we obtain $L_{o,lambert}=k_d \cdot c \cdot L_{o,lambert}(n_p)$.
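For later reference, the lookup-table integral can also be written in spherical coordinates over the hemisphere around $n_p$; this is the form the irradianceConvolution.frag shader later in this post evaluates, and the extra $\sin\theta$ factor is simply the Jacobian of the change of variables $d\omega_i=\sin\theta\, d\theta\, d\phi$:
$$L_{o,lambert}(n_p)=\frac{1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} L_i(\phi,\theta)\,\cos\theta\,\sin\theta\; d\theta\, d\phi$$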
3. The specular part of IBL
For the specular part of the rendering equation:
$$
\begin{aligned}
L_{o,Cook-Torrance}(p,\omega_o) &= \int_{\Omega} k_s\, f_{Cook-Torrance}\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i \\
&= k_s \int_{\Omega}\frac{D\, F\, G}{4\,(\omega_o\cdot n)(\omega_i\cdot n)}\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i
\end{aligned}
$$
As the formula shows, for the specular part the environment lighting received by a point $p$ depends not only on the hemisphere $\Omega_{n_p}$ of $p$, but also on the view direction $\omega_o$ and on the material properties of $p$. If we built a lookup table the way we did for the diffuse part, its inputs would have to include not only the normal $n_p$ but also $\omega_o$ and material properties such as the roughness $\alpha$; the dimensionality would be far too high. So we cannot simply build one lookup table that represents the environment specular reflection of any point $p$ under any view direction $\omega_o$. How IBL handles the specular part will be introduced in the next post.
三、Code Implementation
0. Implementation workflow
In practice, the input to the program is not a pre-built lookup table, nor even a cube-map environment map, but an equirectangular map, as shown below:
We therefore first convert the equirectangular map hdr_texture into the cube map environment_texture.
Then we precompute over environment_texture, which amounts to convolving environment_texture, to obtain irradiance_texture. This irradiance_texture is exactly the lookup table mentioned above.
Finally, during rendering, we use the normal $n_p$ of the shading point $p$ to look up the corresponding value in irradiance_texture. This gives the diffuse contribution of the environment light to point $p$, and thus the color of the shading point $p$ (considering only the diffuse part).
The basic flow of the implementation is shown in the figure below:
The concrete code for each step is described below:
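Before the individual steps, here is a minimal sketch of how they chain together. buildIrradianceMap is a hypothetical wrapper written only for illustration (it is not part of the project); loadHDR, equirectangleMap2CubeMap and irradianceConvolution are the functions shown in the sections below, and the shaders, Sphere, capture FBO/RBO and capture matrices are assumed to be created elsewhere:

// Hypothetical wrapper showing the precompute pipeline of this post.
GLuint buildIrradianceMap(Shader &equiShader, Shader &convShader, Sphere &sphere,
                          GLuint captureFBO, GLuint captureRBO,
                          glm::mat4 captureModel, std::vector<glm::mat4> captureViews,
                          glm::mat4 captureProjection, const std::string &hdrPath)
{
    // step 2: load the equirectangular HDR image into a 2D float texture
    GLuint hdr_texture = loadHDR(hdrPath);
    // step 3: re-project the equirectangular map onto the 6 faces of a cube map
    GLuint environment_texture = equirectangleMap2CubeMap(equiShader, sphere, captureFBO, captureRBO,
                                                          hdr_texture, captureModel, captureViews, captureProjection);
    // step 4: convolve the environment map into the diffuse irradiance map (the lookup table)
    GLuint irradiance_texture = irradianceConvolution(convShader, sphere, captureFBO, captureRBO,
                                                      environment_texture, captureModel, captureViews, captureProjection);
    // steps 5/6 then sample irradiance_texture in pbr.frag every frame and
    // draw the skybox with environment_texture.
    return irradiance_texture;
}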
1. Render function
Partial C++ code:
/**
 * @brief Render the model
 *
 * @param shader        the shader to use
 * @param framebuffer   the target framebuffer to render into
 * @param inputTextures input texture2D objects
 * @param inputCubeMaps input textureCubeMap objects
 * @param rendermodel   render mode: GL_TRIANGLES / GL_TRIANGLE_STRIP
 */
void Draw(Shader &shader, GLuint framebuffer,
          const std::vector<std::pair<std::string, GLuint>> &&inputTextures,
          const std::vector<std::pair<std::string, GLuint>> &&inputCubeMaps,
          GLuint rendermodel)
{
    // draw mesh
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    int texture_id = 0;
    for (int i = 0; i < inputTextures.size(); i++)
    {
        glActiveTexture(GL_TEXTURE0 + texture_id);              // activate texture unit texture_id
        glBindTexture(GL_TEXTURE_2D, inputTextures[i].second);  // bind the texture object to the active texture unit
        glUniform1i(glGetUniformLocation(shader.ID, inputTextures[i].first.c_str()),
                    texture_id);                                 // point the sampler uniform in the shader at that unit
        texture_id++;
    }
    for (int i = 0; i < inputCubeMaps.size(); i++)
    {
        glActiveTexture(GL_TEXTURE0 + texture_id);                    // activate texture unit texture_id
        glBindTexture(GL_TEXTURE_CUBE_MAP, inputCubeMaps[i].second);  // bind the cube map to the active texture unit
        glUniform1i(glGetUniformLocation(shader.ID, inputCubeMaps[i].first.c_str()),
                    texture_id);                                      // point the sampler uniform in the shader at that unit
        texture_id++;
    }
    glBindVertexArray(VAO);
    glDrawElements(rendermodel, static_cast<unsigned int>(indices.size()), GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}
2. Loading hdr_texture
Partial C++ code:
/**
 * @brief Load an HDR file
 *
 * @param hdr_path
 * @return GLuint hdr_texture (texture2D)
 */
GLuint loadHDR(std::string hdr_path)
{
    GLuint hdr_texture;
    stbi_set_flip_vertically_on_load(true);
    int width, height, nrComponents;
    float *data = stbi_loadf(hdr_path.c_str(), &width, &height, &nrComponents, 0);
    if (data)
    {
        glGenTextures(1, &hdr_texture);
        glBindTexture(GL_TEXTURE_2D, hdr_texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, data);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        stbi_image_free(data);
        return hdr_texture;
    }
    else
    {
        std::cout << "Failed to load HDR image. Error info: " << stbi_failure_reason() << std::endl;
        return 0;
    }
};
3. Converting hdr_texture to environment_texture
Partial C++ code:
glm::mat4 captureProjection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 10.0f);
std::vector<glm::mat4> captureViews = {
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f))};
glm::mat4 captureModel = glm::mat4(1.0f);

...

/**
 * @brief Convert the equirectangular map hdr_texture into the cube map environment_texture
 *
 * @param equiRectangularMap2CubeMapShader
 * @param sphere
 * @param captureFBO
 * @param captureRBO
 * @param hdr_texture
 * @param captureModel
 * @param captureViews
 * @param captureProjection
 * @return GLuint environment_texture (cubeMap)
 */
GLuint equirectangleMap2CubeMap(Shader &equiRectangularMap2CubeMapShader, Sphere &sphere, GLuint captureFBO,
                                GLuint captureRBO, GLuint hdr_texture, glm::mat4 captureModel,
                                std::vector<glm::mat4> captureViews, glm::mat4 captureProjection)
{
    GLuint environment_texture;
    glGenTextures(1, &environment_texture);
    glBindTexture(GL_TEXTURE_CUBE_MAP, environment_texture);
    for (unsigned int i = 0; i < 6; ++i)
    {
        // note that we store each face with 16 bit floating point values
        CHECK_GL(glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 512, 512, 0, GL_RGB, GL_FLOAT, nullptr));
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    equiRectangularMap2CubeMapShader.use();
    equiRectangularMap2CubeMapShader.setMat4("model", captureModel);
    equiRectangularMap2CubeMapShader.setMat4("projection", captureProjection);
    glViewport(0, 0, 512, 512); // don't forget to configure the viewport to the capture dimensions.
    glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
    for (unsigned int i = 0; i < 6; ++i)
    {
        equiRectangularMap2CubeMapShader.setMat4("view", captureViews[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                               environment_texture, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        sphere.Draw(equiRectangularMap2CubeMapShader, captureFBO, {{"hdr_texture", hdr_texture}}, {},
                    GL_TRIANGLE_STRIP);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return environment_texture;
};
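The captureFBO and captureRBO passed in above are assumed to be created beforehand. As a minimal sketch (not shown in the original listing; sizes chosen here to match the 512x512 capture), their setup might look like this:

// Hypothetical setup of the capture framebuffer and its depth renderbuffer.
GLuint captureFBO, captureRBO;
glGenFramebuffers(1, &captureFBO);
glGenRenderbuffers(1, &captureRBO);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);          // depth buffer for the capture pass
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);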
equirectangularMap2CubeMap.vert
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNor;
layout(location = 2) in vec2 aTexCoord;

out vec3 WorldPos;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main() {
    WorldPos = aPos;
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
}
equirectangularMap2CubeMap.frag
#version 330 core
out vec4 FragColor;
in vec3 WorldPos;

uniform sampler2D hdr_texture;

const vec2 invAtan = vec2(0.1591, 0.3183); // 1.0/(2*PI), 1.0/PI

vec2 SampleSphericalMap(vec3 v)
{
    // u' = arctan(z/x), v' = arcsin(y)
    // Unlike the single-argument arctan in mathematics,
    // the two-argument atan in GLSL has a range of [-pi, pi]:
    // atan(-epsilon, -1) = -pi, atan(-1, 0) = -pi/2,
    // atan(0, 0) = 0, atan(1, 0) = pi/2, atan(+epsilon, -1) = pi
    vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
    // u = arctan(z/x)/(2*PI), v = arcsin(y)/PI
    uv *= invAtan;
    // [-0.5, 0.5] -> [0.0, 1.0]
    uv += 0.5;
    return uv;
}

void main()
{
    vec2 uv = SampleSphericalMap(normalize(WorldPos));
    vec3 color = texture(hdr_texture, uv).rgb;
    FragColor = vec4(color, 1.0);
}
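Written as a formula, SampleSphericalMap implements the following mapping from a direction $(x,y,z)$ on the unit sphere to equirectangular texture coordinates (a direct restatement of the shader comments above):
$$u=\frac{\operatorname{atan2}(z,x)}{2\pi}+0.5,\qquad v=\frac{\arcsin(y)}{\pi}+0.5$$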
4. Convolving environment_texture to obtain irradiance_texture
Partial C++ code:
glm::mat4 captureProjection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 10.0f);
std::vector<glm::mat4> captureViews = {
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f))};
glm::mat4 captureModel = glm::mat4(1.0f);

...

/**
 * @brief Precompute (convolve) environment_texture
 *
 * @param irradianceConvolutionShader
 * @param sphere
 * @param captureFBO
 * @param captureRBO
 * @param environment_texture the environment_texture (cubeMap) before convolution
 * @param captureModel
 * @param captureViews
 * @param captureProjection
 * @return GLuint the convolved irradiance_texture (cubeMap)
 */
GLuint irradianceConvolution(Shader &irradianceConvolutionShader, Sphere &sphere, GLuint captureFBO, GLuint captureRBO,
                             GLuint environment_texture, glm::mat4 captureModel, std::vector<glm::mat4> captureViews,
                             glm::mat4 captureProjection)
{
    GLuint irradiance_texture;
    glGenTextures(1, &irradiance_texture);
    glBindTexture(GL_TEXTURE_CUBE_MAP, irradiance_texture);
    for (unsigned int i = 0; i < 6; ++i)
    {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 32, 32, 0, GL_RGB, GL_FLOAT, nullptr);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
    glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32);

    irradianceConvolutionShader.use();
    irradianceConvolutionShader.setMat4("model", captureModel);
    irradianceConvolutionShader.setMat4("projection", captureProjection);
    glViewport(0, 0, 32, 32); // don't forget to configure the viewport to the capture dimensions.
    glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
    for (unsigned int i = 0; i < 6; ++i)
    {
        irradianceConvolutionShader.setMat4("view", captureViews[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                               irradiance_texture, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        sphere.Draw(irradianceConvolutionShader, captureFBO, {}, {{"environment_texture", environment_texture}},
                    GL_TRIANGLE_STRIP);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return irradiance_texture;
};
irradianceConvolution.vert
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNor;
layout(location = 2) in vec2 aTexCoord;

out vec3 WorldPos;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main() {
    WorldPos = aPos;
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
}
irradianceConvolution.frag
#version 330 core
out vec4 FragColor;
in vec3 WorldPos;

uniform samplerCube environment_texture;

const float PI = 3.14159265359;

void main()
{
    // The world vector acts as the normal of a tangent surface
    // from the origin, aligned to WorldPos. Given this normal, calculate all
    // incoming radiance of the environment. The result of this radiance
    // is the radiance of light coming from -Normal direction, which is what
    // we use in the PBR shader to sample irradiance.
    vec3 N = normalize(WorldPos);

    vec3 irradiance = vec3(0.0);

    // tangent space calculation from origin point
    vec3 up    = vec3(0.0, 1.0, 0.0);       // up vector
    vec3 right = normalize(cross(up, N));   // right vector
    up         = normalize(cross(N, right));

    float sampleDelta = 0.025;
    float nrSamples = 0.0;
    for(float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
    {
        for(float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
        {
            // spherical to cartesian (in tangent space)
            vec3 tangentSample = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
            // tangent space to world
            vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N;

            irradiance += texture(environment_texture, sampleVec).rgb * cos(theta) * sin(theta);
            nrSamples++;
        }
    }
    irradiance = PI * irradiance * (1.0 / float(nrSamples));

    FragColor = vec4(irradiance, 1.0);
}
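The double loop above is simply a Riemann sum of the spherical-coordinate integral from Section 二. Writing it out explains both the cos(theta) * sin(theta) weight and the final multiplication by PI:
$$\frac{1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} L_i\cos\theta\sin\theta\, d\theta\, d\phi \;\approx\; \frac{1}{\pi}\cdot\frac{2\pi}{n_1}\cdot\frac{\pi/2}{n_2}\sum_{\phi}\sum_{\theta} L_i\cos\theta\sin\theta \;=\; \pi\,\frac{1}{n_1 n_2}\sum_{\phi}\sum_{\theta} L_i\cos\theta\sin\theta$$
with $n_1$ samples in $\phi$ and $n_2$ samples in $\theta$, so $n_1 n_2$ corresponds to nrSamples and the leading $\pi$ to the PI factor in the last line of the shader.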
5. Rendering the scene
pbr.vert
#version 330 core
layout (location = 0) in vec3 aPos;       // vertex position
layout (location = 1) in vec3 aNormal;    // vertex normal
layout (location = 2) in vec2 aTexCoords; // vertex texture coordinates

out vec2 TexCoords;
out vec3 WorldPos;
out vec3 Normal;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat3 normalMatrix; // declared but the normal matrix is recomputed inline below

void main()
{
    TexCoords = aTexCoords;
    WorldPos = vec3(model * vec4(aPos, 1.0));
    Normal = normalize(transpose(inverse(mat3(model))) * aNormal);

    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
pbr.frag:
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
in vec3 WorldPos;
in vec3 Normal;

// material parameters
uniform vec3 albedoValue;
uniform vec3 normalDeviation;
uniform vec3 metallicValue;
uniform vec3 roughnessValue;
uniform vec3 aoValue;

uniform samplerCube irradiance_texture;

// lights
// positions and colors of the two point lights
uniform vec3 lightPositions[2];
uniform vec3 lightColors[2];

uniform vec3 camPos;

const float PI = 3.14159265359;

// get the fragment normal from the normal map
vec3 getNormalFromMap()
{
    // no normal map is used here, so return the interpolated vertex normal directly;
    // the TBN reconstruction below is kept for reference but is not reached
    return normalize(Normal);

    vec3 tangentNormal = normalDeviation * 2.0 - 1.0;

    vec3 Q1  = dFdx(WorldPos);   // change of world position along screen X
    vec3 Q2  = dFdy(WorldPos);   // change of world position along screen Y
    vec2 st1 = dFdx(TexCoords);  // change of texture coordinates along screen X
    vec2 st2 = dFdy(TexCoords);  // change of texture coordinates along screen Y

    // Let T be the tangent vector and B the bitangent vector. Then:
    /*
        Q1 = T * st1.s + B * st1.t
        Q2 = T * st2.s + B * st2.t
        which gives:
        [Q1 Q2] = [T B] [st1.x st2.x] = [T B] M
                        [st1.y st2.y]
        hence:
        [T B] = [Q1 Q2] M^-1
        Using the inverse of a 2x2 matrix and simplifying:
        T = (Q1*st2.t - Q2*st1.t) / det(M)
        so:
        T = normalize(Q1*st2.t - Q2*st1.t)
        With T known, B = cross(N, T)
    */
    vec3 N   = normalize(Normal);
    vec3 T   = normalize(Q1*st2.t - Q2*st1.t);
    vec3 B   = -normalize(cross(N, T));
    mat3 TBN = mat3(T, B, N);

    return normalize(TBN * tangentNormal);
}
// ----------------------------------------------------------------------------
// the D term of DFG
float DistributionGGX(vec3 N, vec3 H, float roughness)
{
    float a = roughness*roughness;
    float a2 = a*a;
    float NdotH = max(dot(N, H), 0.0);
    float NdotH2 = NdotH*NdotH;

    float nom   = a2;
    float denom = (NdotH2 * (a2 - 1.0) + 1.0);
    denom = PI * denom * denom;

    return nom / denom;
}
// ----------------------------------------------------------------------------
// the Gsub component of the G term of DFG
float GeometrySchlickGGX(float NdotV, float roughness)
{
    float r = (roughness + 1.0);
    float k = (r*r) / 8.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
// the G term of DFG
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2 = GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
// ----------------------------------------------------------------------------
// the F term of DFG
vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
// ----------------------------------------------------------------------------
void main()
{
    // convert albedo from sRGB to linear space (gamma correction)
    vec3 albedo = pow(albedoValue, vec3(2.2));
    // metallic
    float metallic = metallicValue.r;
    // roughness
    float roughness = roughnessValue.r;
    // ao
    float ao = aoValue.r;

    // compute the normal from the normal map
    vec3 N = getNormalFromMap();
    // view vector V from the camera position
    vec3 V = normalize(camPos - WorldPos);

    // calculate reflectance at normal incidence; if dia-electric (like plastic) use F0
    // of 0.04 and if it's a metal, use the albedo color as F0 (metallic workflow)
    // compute the base reflectance F0
    vec3 F0 = vec3(0.04);
    F0 = mix(F0, albedo, metallic);

    // reflectance equation
    vec3 Lo = vec3(0.0);
    // assume there are 2 point lights
    for(int i = 0; i < 2; ++i)
    {
        // calculate per-light radiance
        // light vector L
        vec3 L = normalize(lightPositions[i] - WorldPos);
        // halfway vector H
        vec3 H = normalize(V + L);
        float distance = length(lightPositions[i] - WorldPos);
        // radiance attenuation (depends on distance)
        float attenuation = 1.0 / (distance * distance);
        vec3 radiance = lightColors[i] * attenuation;

        // Cook-Torrance BRDF = fr
        // fr = fd + fs
        //    = kD*c/pi + kS*(D*F*G)/(4*(wo*n)*(wi*n))
        //    = kD*c/pi + kS*(D*G)/(4*(wo*n)*(wi*n))
        //    = (1-F)*c/pi + F*(D*G)/(4*(wo*n)*(wi*n))
        //    = (1-F)*c/pi + (D*F*G)/(4*(wo*n)*(wi*n))

        // compute fs
        // D term:
        // given main normal N, ideal halfway vector H and the roughness,
        // the probability that the actual microfacet normal equals H
        float NDF = DistributionGGX(N, H, roughness);
        // G term:
        // given main normal N, view vector V, light vector L and the roughness,
        // the probability that the reflected light is not shadowed or masked
        float G = GeometrySmith(N, V, L, roughness);
        // F term:
        // the ratio of reflection (1-F is the ratio of refraction)
        vec3 F = fresnelSchlick(max(dot(H, V), 0.0), F0);

        vec3 numerator = NDF * G * F;
        float denominator = 4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0) + 0.0001; // + 0.0001 to prevent divide by zero
        // ks*(D*F*G)/(4*(wo*n)*(wi*n)) =
        // F *(D*G)/(4*(wo*n)*(wi*n))
        vec3 specular = numerator / denominator;

        // kS is equal to Fresnel
        // reflection ratio kS
        vec3 kS = F;
        // for energy conservation, the diffuse and specular light can't
        // be above 1.0 (unless the surface emits light); to preserve this
        // relationship the diffuse component (kD) should equal 1.0 - kS.
        // refraction (diffuse) ratio kD
        vec3 kD = vec3(1.0) - kS;
        // multiply kD by the inverse metalness such that only non-metals
        // have diffuse lighting, or a linear blend if partly metal (pure metals
        // have no diffuse light).
        // only non-metals have a diffuse term; metals have none
        kD *= 1.0 - metallic;

        // scale light by NdotL
        // cos(N,L)
        float NdotL = max(dot(N, L), 0.0);

        // add to outgoing radiance Lo
        // Lo = diffuse term + specular term
        //    = (diffuse + specular) * radiance * cos(N,L)
        Lo += (kD * albedo / PI + specular) * radiance * NdotL;
        // note that we already multiplied the BRDF by the Fresnel (kS) so we won't multiply by kS again
    }

    // ambient lighting (we now use IBL as the ambient term)
    vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
    vec3 kD = 1.0 - kS;
    kD *= 1.0 - metallic;
    vec3 irradiance = texture(irradiance_texture, N).rgb;
    vec3 diffuse = irradiance * albedo;
    vec3 ambient = (kD * diffuse) * ao;

    vec3 color = ambient + Lo;

    // HDR tonemapping
    color = color / (color + vec3(1.0));
    // gamma correct
    color = pow(color, vec3(1.0/2.2));

    FragColor = vec4(color, 1.0);
}
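On the C++ side, the uniforms this shader expects would be fed roughly as follows each frame. This is only a sketch: the Shader::setVec3 helper, the light setup and the material values are assumptions, while irradiance_texture is the cube map produced in step 4:

// Hypothetical per-frame uniform setup for pbr.vert / pbr.frag.
glm::vec3 lightPositions[2] = { glm::vec3(-10.0f, 10.0f, 10.0f), glm::vec3(10.0f, 10.0f, 10.0f) };
glm::vec3 lightColors[2]    = { glm::vec3(300.0f), glm::vec3(300.0f) };

pbrShader.use();
pbrShader.setMat4("projection", projection);
pbrShader.setMat4("view", view);
pbrShader.setMat4("model", model);
pbrShader.setVec3("camPos", camPos);
pbrShader.setVec3("albedoValue", glm::vec3(0.5f, 0.0f, 0.0f));
pbrShader.setVec3("metallicValue", glm::vec3(0.5f));
pbrShader.setVec3("roughnessValue", glm::vec3(0.3f));
pbrShader.setVec3("aoValue", glm::vec3(1.0f));
for (int i = 0; i < 2; ++i)
{
    pbrShader.setVec3("lightPositions[" + std::to_string(i) + "]", lightPositions[i]);
    pbrShader.setVec3("lightColors[" + std::to_string(i) + "]", lightColors[i]);
}
// the irradiance map is passed through Draw()'s cube-map list; the key must match the sampler name in pbr.frag
sphere.Draw(pbrShader, 0, {}, {{"irradiance_texture", irradiance_texture}}, GL_TRIANGLE_STRIP);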
6. Rendering the skybox
skybox.vert:
#version 330 core
layout (location = 0) in vec3 aPos;       // vertex position
layout (location = 1) in vec3 aNormal;    // vertex normal
layout (location = 2) in vec2 aTexCoords; // vertex texture coordinates

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

out vec3 WorldPos;

void main()
{
    WorldPos = aPos;

    mat3 viewRot = mat3(view); // extract the rotation part of the view matrix (declared but not used here)
    // position of the skybox vertex
    vec4 pos = projection * view * model * vec4(aPos, 1.0);
    gl_Position = pos.xyww; // set gl_Position.z = gl_Position.w,
                            // so the skybox depth is always z/w = 1.0,
                            // keeping the skybox behind everything else in the scene
}
skybox.frag:
#version 330 core
out vec4 FragColor;
in vec3 WorldPos;

uniform samplerCube environmentCubeMap;

void main()
{
    vec3 envColor = texture(environmentCubeMap, WorldPos).rgb;

    // HDR tonemap and gamma correct
    envColor = envColor / (envColor + vec3(1.0));
    envColor = pow(envColor, vec3(1.0/2.2));

    FragColor = vec4(envColor, 1.0);
}
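One caveat: because skybox.vert forces the skybox depth to exactly 1.0, the skybox pass only works if the depth test lets fragments at the far plane through. A minimal sketch of the draw call on the C++ side (skyboxShader and the cube-map binding via Draw are assumptions consistent with the code above):

glDepthFunc(GL_LEQUAL); // allow depth values equal to 1.0 to pass so the skybox is visible
skyboxShader.use();
skyboxShader.setMat4("projection", projection);
skyboxShader.setMat4("view", view);
skyboxShader.setMat4("model", glm::mat4(1.0f));
sphere.Draw(skyboxShader, 0, {}, {{"environmentCubeMap", environment_texture}}, GL_TRIANGLE_STRIP);
glDepthFunc(GL_LESS);   // restore the default depth function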
四、Full Code and Model Files
The complete code and model files for implementing IBL diffuse PBR with OpenGL can be downloaded from the resource "OpenGL使用OpenGL实现基于物理的渲染模型PBR(中)".
After downloading the source code, build and run it with the following commands:
mkdir build
cd build
cmake ..
make
./OpenGL_PBR
The rendering result is as follows:
五、References
[1]. LearnOpenGL - PBR - IBL - Diffuse irradiance