Loading a Collada (dae) model with Assimp shows incorrect normals

Mishal zaman

Solution:

Thanks to Rabbid76: I needed to transform the Normal vector by the model matrix in the vertex shader. The updated vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;
out vec3 Normal;

uniform mat4 projection; 
uniform mat4 view; 
uniform mat4 model;

uniform float scale;

void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = vec3(model * vec4(aNormal, 0.0));
    gl_Position = projection * view * vec4(FragPos, 1.0);
}

(screenshot)

I am trying to load a Collada (dae) file correctly with Assimp, but the normals seem to come out wrong. I would appreciate help with this. I have a feeling it is related to how I handle the transformation matrix. For comparison, here is a screenshot of the same OpenGL application loading an obj file:

(screenshot)

In the screenshot above, the light is positioned directly above the model at x = 0 and z = 0, and the normals are displayed correctly. When I load the dae file instead, I get the following:

(screenshot)

The light seems to be coming from the -z side.

Here is the code I currently use to load the models:

  1. loadModel() loads the model file and calls processNode(), passing an identity aiMatrix4x4():
void Model::loadModel(std::string filename)
{
    Assimp::Importer importer;
    const aiScene *scene = importer.ReadFile(filename, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_CalcTangentSpace | aiProcess_GenBoundingBoxes);

    if (!scene || !scene->mRootNode) {
        std::cout << "ERROR::ASSIMP Could not load model: " << importer.GetErrorString() << std::endl;
    }
    else {
        this->directory = filename.substr(0, filename.find_last_of('/'));

        this->processNode(scene->mRootNode, scene, aiMatrix4x4());
    }
}

  2. processNode() is a recursive method which primarily iterates over node->mMeshes; here I multiply in the node's transformation:
void Model::processNode(aiNode* node, const aiScene* scene, aiMatrix4x4 transformation)
{ 
    for (unsigned int i = 0; i < node->mNumMeshes; i++) {
        aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];

        // only apply the transformation to meshes, not to entities such as lights or cameras
        transformation *= node->mTransformation;

        this->meshes.push_back(processMesh(mesh, scene, transformation));
    }

    for (unsigned int i = 0; i < node->mNumChildren; i++)
    {
        processNode(node->mChildren[i], scene, transformation);
    }
}
  3. processMesh() handles collecting all mesh data (vertices, indices, etc.):
Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene, aiMatrix4x4 transformation)
{
    glm::vec3 extents;
    glm::vec3 origin;

    std::vector<Vertex> vertices = this->vertices(mesh, extents, origin, transformation);
    std::vector<unsigned int> indices = this->indices(mesh);
    std::vector<Texture> textures = this->textures(mesh, scene);

    return Mesh(
        vertices,
        indices,
        textures,
        extents,
        origin,
        mesh->mName
    );
}
  4. Next, the vertices() method is called to collect all the vertices. It receives the transformation matrix, and I multiply each vertex by it (transformation * mesh->mVertices[i];). I have a strong feeling that I am not doing something right here and that I am missing something:
std::vector<Vertex> Model::vertices(aiMesh* mesh, glm::vec3& extents, glm::vec3 &origin, aiMatrix4x4 transformation)
{
    std::vector<Vertex> vertices;

    for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
        Vertex vertex;

        glm::vec3 vector3;

        aiVector3D v = transformation * mesh->mVertices[i];

        // Vertices
        vector3.x = v.x;
        vector3.y = v.y;
        vector3.z = v.z;

        vertex.position = vector3;

        // Normals
        if (mesh->mNormals) {
            vector3.x = mesh->mNormals[i].x;
            vector3.y = mesh->mNormals[i].y;
            vector3.z = mesh->mNormals[i].z;
            vertex.normal = vector3;
        }


        // Texture coordinates
        if (mesh->mTextureCoords[0]) {
            glm::vec2 vector2;

            vector2.x = mesh->mTextureCoords[0][i].x;
            vector2.y = mesh->mTextureCoords[0][i].y;
            vertex.texCoord = vector2;
        }
        else {
            vertex.texCoord = glm::vec2(0, 0);
        }

        if (mesh->mTangents) {
            vector3.x = mesh->mTangents[i].x;
            vector3.y = mesh->mTangents[i].y;
            vector3.z = mesh->mTangents[i].z;
            vertex.tangent = vector3;
        }

        // Bitangent
        if (mesh->mBitangents) {
            vector3.x = mesh->mBitangents[i].x;
            vector3.y = mesh->mBitangents[i].y;
            vector3.z = mesh->mBitangents[i].z;
            vertex.bitangent = vector3;
        }


        vertices.push_back(vertex);
    }

    glm::vec3 min = glm::vec3(mesh->mAABB.mMin.x, mesh->mAABB.mMin.y, mesh->mAABB.mMin.z);
    glm::vec3 max = glm::vec3(mesh->mAABB.mMax.x, mesh->mAABB.mMax.y, mesh->mAABB.mMax.z);

    extents = (max - min) * 0.5f;
    origin = glm::vec3((min.x + max.x) / 2.0f, (min.y + max.y) / 2.0f, (min.z + max.z) / 2.0f);

    printf("%f,%f,%f\n", origin.x, origin.y, origin.z);

    return vertices;
}

As an added note, in case it is helpful, here is the fragment shader I am using on the model:

#version 330 core
out vec4 FragColor;

in vec3 Normal;  
in vec3 FragPos;

uniform vec3 lightPos;
uniform vec3 viewPos;

vec3 lightColor = vec3(1,1,1);
vec3 objectColor = vec3(0.6, 0.6, 0.6);
uniform float shininess = 32.0f;
uniform vec3 material_specular = vec3(0.1f, 0.1f, 0.1f);
uniform vec3 light_specular = vec3(0.5f, 0.5f, 0.5f);

void main()
{
    // ambient
    float ambientStrength = 0.2;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse 
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);  
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
    vec3 specular = light_specular * (spec * material_specular);  

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}

Here is the vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;
out vec3 Normal;

uniform mat4 projection; 
uniform mat4 view; 
uniform mat4 model;

uniform float scale;

void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = aNormal;  
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
Rabbid76

FragPos is a position in world space, because it is the vertex position transformed by the model matrix. lightPos and viewPos seem to be positions in world space, too.
So you have to transform the normal vector aNormal from model space to world space as well.

You have to transform the normal vector by the inverse transpose of the upper-left 3×3 of the 4×4 model matrix:

Normal = transpose(inverse(mat3(model))) * aNormal;

Possibly it is sufficient to transform by just the upper-left 3×3 of the 4×4 model matrix (see: In which cases is the inverse matrix equal to the transpose?):

Normal = mat3(model) * aNormal;

See also:
Why is the transposed inverse of the model view matrix used to transform the normal vectors?
Why transforming normals with the transpose of the inverse of the modelview matrix?
