Here you will find words and concepts that are commonly used when explaining the computer graphics area.
Vertex
A vertex is a point in space. By creating vertices and connecting them you will get a surface that can represent your product. The vertices may have additional properties associated with them, such as a texture coordinate and a normal.
Normal
A normal defines how a piece of geometry is oriented. We can have a single normal for each triangle, but this will result in a very faceted look. Therefore the vertices that define a triangle usually have normals associated with them. By interpolating the vertex normals across the triangle we get a much smoother look.
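The smooth-look idea can be sketched in a few lines: blend the three vertex normals with barycentric weights and re-normalize the result. The function and vector names below are illustrative, not from any particular engine.

```python
# A minimal sketch of interpolating vertex normals across a triangle
# using barycentric weights (all names here are illustrative).

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def interpolate_normal(n0, n1, n2, w0, w1, w2):
    """Blend three vertex normals with barycentric weights w0 + w1 + w2 = 1."""
    blended = tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(n0, n1, n2))
    return normalize(blended)  # re-normalize, since blending shortens the vector

# Two vertex normals tilted away from each other, one pointing straight up:
n0, n1, n2 = normalize((1, 0, 1)), normalize((-1, 0, 1)), (0.0, 0.0, 1.0)

# At the triangle's centroid each vertex contributes equally, giving a
# smoothly varying normal instead of one flat facet normal.
center = interpolate_normal(n0, n1, n2, 1 / 3, 1 / 3, 1 / 3)
```

Evaluating the same function at many points across the triangle is what produces the smooth appearance described above.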
Texture Coordinate
Texture coordinates, also known as UV coordinates, define how an image, also known as a texture, is mapped to a surface. Usually, they are defined in two dimensions.
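The mapping can be sketched as a lookup: a UV pair in [0, 1] selects a texel in the image. The tiny 2x2 "texture" and the nearest-neighbour lookup below are assumptions made for the demo; real samplers also filter and wrap.

```python
# Illustrative sketch: mapping UV coordinates to a texel in a tiny texture.

texture = [
    ["red",  "green"],   # row 0 (v near 0)
    ["blue", "white"],   # row 1 (v near 1)
]

def sample(texture, u, v):
    """Nearest-neighbour lookup: u, v in [0, 1] select one texel."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

color = sample(texture, 0.9, 0.1)  # right side, top row
```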
Triangle
By connecting three vertices you get a triangle, also known as a face or sometimes by the more general term polygon. A triangle is the simplest way to represent a bounded surface, which is why triangles are commonly used when building things with computer graphics.
Texture
Simply an image that is to be mapped to a surface. A material may consist of several textures.
Material
A material is a set of properties that affect how the geometry onto which it is applied interacts with light. For instance, you can specify the color, reflection, and opacity of a material. It is important to understand that it is not the material alone that determines how it will look when applied to a piece of geometry. Properties that belong to the geometry, such as the normal and the texture coordinate associated with a vertex, also have a big influence on the quality of the rendering.
Mesh
A mesh, or surface, is simply a set of triangles. It can be a complete 3D model, but it can also be a part of a 3D model, such as an armrest for a chair.
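A common way to store such a set of triangles is as shared vertex positions plus index triples, sketched below. This is a minimal layout for illustration; real meshes also carry normals, texture coordinates, and more per vertex.

```python
# A minimal sketch of a mesh: shared vertices plus triangle indices.

# Four corner positions of a unit square in the XY plane.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]

# Two triangles share vertices 0 and 2 along the diagonal, so four
# vertices are enough to describe six triangle corners.
triangles = [
    (0, 1, 2),
    (0, 2, 3),
]

def triangle_positions(mesh_vertices, tri):
    """Resolve one index triple into three actual positions."""
    return [mesh_vertices[i] for i in tri]
```

Sharing vertices between triangles saves memory and lets neighbouring triangles share interpolated properties such as normals.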
Mesh Reduction (Mesh Decimation)
The process of reducing the number of triangles of a mesh until a given state is reached. The goal is usually to remove the triangles that have the smallest impact on visual quality.
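One common building block is the edge collapse: merge the two endpoints of an edge and drop the triangles that become degenerate. The sketch below picks the shortest edge as the one "with the smallest impact"; that ranking is a deliberate simplification, as production decimators rank collapses by a visual error measure (for example quadric error metrics).

```python
# A deliberately tiny sketch of one mesh-reduction step: collapse the
# shortest edge by merging its endpoints, then drop degenerate triangles.

def edge_length(vertices, a, b):
    return sum((p - q) ** 2 for p, q in zip(vertices[a], vertices[b])) ** 0.5

def collapse_shortest_edge(vertices, triangles):
    # Collect every unique edge of every triangle.
    edges = {tuple(sorted((t[i], t[(i + 1) % 3]))) for t in triangles for i in range(3)}
    a, b = min(edges, key=lambda e: edge_length(vertices, *e))
    # Merge vertex b into vertex a, then keep only triangles that
    # still have three distinct corners.
    remapped = [tuple(a if i == b else i for i in t) for t in triangles]
    return [t for t in remapped if len(set(t)) == 3]

# Two triangles where vertices 2 and 3 almost coincide: collapsing the
# tiny edge between them removes the sliver triangle.
positions = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.5, 1.0, 0.0),
    (0.5, 1.05, 0.0),  # almost on top of vertex 2
]
tris = [(0, 1, 2), (1, 3, 2)]
reduced = collapse_shortest_edge(positions, tris)
```

Repeating such steps until a target triangle count is reached gives the "given state" mentioned above.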
Direct Lighting
When we talk about direct lighting we refer to light that hits a surface without first having bounced off another surface.
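The simplest direct-lighting model is Lambertian diffuse shading: the surface is brightest when it faces the light and receives nothing when the light is behind it. A minimal sketch, with illustrative names:

```python
# Direct lighting with the Lambert (N dot L) model: light arriving
# straight from the source, with no bounces taken into account.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light, light_intensity=1.0):
    """Diffuse response: full intensity when the surface faces the
    light, zero when the light is behind the surface."""
    return light_intensity * max(0.0, dot(normal, to_light))

up = (0.0, 0.0, 1.0)
head_on = lambert(up, (0.0, 0.0, 1.0))      # light straight above
from_below = lambert(up, (0.0, 0.0, -1.0))  # light behind the surface
```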
Indirect Lighting
We say that a surface is lit by indirect lighting if the light that hits the surface is the result of light bouncing off another surface. Without indirect lighting, everything that does not get hit directly by light will be completely black/dark.
Frame
When we say frame we refer to the image produced by the rendering process.
Frame Rate
By frame rate we mean the number of frames the application can produce within a second. If you have a high frame rate you will feel like the application is responding very fast to your operations. Conversely, having a low frame rate means that you will experience a delay when interacting with the products in the drawing.
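Frame rate is simply the number of frames divided by the time they took to produce, so it can be derived from measured per-frame durations. The timings below are made-up numbers, not real measurements.

```python
# Illustrative sketch: deriving frame rate from per-frame durations.

def average_fps(frame_times_seconds):
    """Frames per second from a list of per-frame durations."""
    total = sum(frame_times_seconds)
    return len(frame_times_seconds) / total

smooth = average_fps([1 / 60] * 120)  # 120 frames at ~16.7 ms each
choppy = average_fps([0.1] * 10)      # 10 frames at 100 ms each
```

The first sequence corresponds to a fluid 60 frames per second, the second to a sluggish 10, where the delay between your action and the image update becomes clearly noticeable.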
Rendering
The process of creating an image from a set of data. The data is usually a set of geometry, materials, and light sources. Note that rendering says nothing about how fast the image is produced. See Real-time Rendering and Offline Rendering (Photo Rendering) for further information.
Real-time Rendering
Real-time rendering is the process of producing an interactive experience. It is all about producing a sequence of images, also referred to as frames, fast enough to make the application respond to your actions without a noticeable delay.
Offline Rendering (Photo Rendering)
Photo rendering, more commonly known as offline rendering, is the process of producing a great-looking image. As opposed to real-time rendering, where we produce a series of images, photo rendering is about producing a single image.
Shading
Describes how the color varies across a face/triangle/polygon. Usually, the shading is described by a set of shaders, or shader programs, executed on the GPU.
Shader Program
A shader program, or just shader, is used during the rendering process to perform some specific task. There are different types of shader programs that perform different tasks. The most common shader programs are the vertex shader and the fragment shader.
Vertex Shader
The prime purpose of the vertex shader is to transform the vertices needed to render a mesh to their correct locations.
Fragment/Pixel Shader
The prime purpose of the fragment/pixel shader is to calculate the shading, or more simply put the color, for all the fragments output by the rasterizer. For instance, texturing and lighting calculations are performed in the fragment shader.
Rasterizer
Decides which pixels should be filled to properly render a triangle. It outputs empty fragments that are sent to the fragment shader, which performs the shading calculations for each fragment.
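The core question the rasterizer answers, which pixel centers lie inside a triangle, can be sketched with 2D edge functions (signed areas). This is a toy version for illustration; a real rasterizer adds fill rules, clipping, and sub-pixel precision.

```python
# A tiny sketch of triangle coverage using edge functions.

def edge(a, b, p):
    """Positive when p is to the left of the directed edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_pixels(v0, v1, v2, width, height):
    pixels = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            # Inside when the point is on the same side of all three edges
            # (assumes counter-clockwise vertex order).
            if edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0:
                pixels.append((x, y))
    return pixels

hits = covered_pixels((0, 0), (4, 0), (0, 4), 4, 4)
```

Each covered pixel becomes a fragment, which is then handed to the fragment shader for coloring.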
Transformation
A transformation is a function or operation that takes a point/vector as input and outputs a new point/vector. Typical operations are scaling, rotation, and translation/positioning.
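The three typical operations can be sketched for a 2D point as below. Plain tuples and trigonometry are used here for clarity; real graphics code expresses the same operations as matrices so they can be combined and applied efficiently.

```python
# A minimal sketch of scaling, rotation, and translation of a 2D point.

import math

def scale(p, s):
    return (p[0] * s, p[1] * s)

def rotate(p, angle_radians):
    c, s = math.cos(angle_radians), math.sin(angle_radians)
    return (p[0] * c - p[1] * s, p[0] * s + p[1] * c)

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

# Scale, then rotate 90 degrees, then move: a typical transform order.
p = translate(rotate(scale((1.0, 0.0), 2.0), math.pi / 2), 10.0, 0.0)
```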