3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
Rendering methods
A photorealistic 3D render of 6 computer fans using radiosity rendering, DOF and procedural materials
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life.[1] Several different, and often specialized, rendering methods have been developed, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, and radiosity. Rendering may take from fractions of a second to days for a single image or frame. In general, different methods are better suited to either photorealistic rendering or real-time rendering.[2]
Real-time
A screenshot from Second Life, a 2003 online virtual world that renders frames in real time
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e., in one frame: in the case of a 30 frame-per-second animation, a frame encompasses one 30th of a second.
The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, the minimum the human eye needs to create the illusion of movement). In fact, exploitations can be applied in the way the eye 'perceives' the world; as a result, the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate.
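The frame rates above translate directly into a per-frame time budget. A quick sketch (plain arithmetic, no graphics API involved; the function name is illustrative):

```python
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at the given frame rate."""
    return 1000.0 / fps

# Typical real-time targets and their per-frame budgets.
for fps in (24, 30, 60, 120):
    print(f"{fps:3d} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 120 fps, for example, the renderer has barely 8 milliseconds to produce each image, which is why real-time methods trade physical accuracy for speed.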
Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds and VRML.
The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.[3]
Non real-time
An example of a ray-traced image that typically takes seconds or minutes to render
Computer-generated image (CGI) created by Gilles Tran
Animations for non-interactive media, such as feature films and video, can take much more time to render.[4] Non real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk, then transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second (fps), to achieve the illusion of movement.
When the goal is photo-realism, techniques such as ray tracing, path tracing, photon mapping or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects, such as human skin).
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
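As a hypothetical back-of-envelope illustration of why render farms are needed (all figures below are invented for the example, not taken from any real production):

```python
frames = 90 * 60 * 24                  # a 90-minute film at 24 fps -> 129,600 frames
machine_hours = frames * 2             # assume 2 hours of compute per frame
farm_days = machine_hours / 500 / 24   # spread across a hypothetical 500-node farm
print(frames, machine_hours, round(farm_days, 1))
```

Even with 500 machines working around the clock, these assumed numbers give roughly three weeks of wall-clock rendering time, which is why frames are rendered ahead of time and stored rather than computed on playback.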
Reflection and shading models
Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong shading). In the refraction of light, an important concept is the refractive index; in most 3D programming implementations, the term for this value is 'index of refraction' (usually shortened to IOR).
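A minimal sketch of the Phong reflection model follows: the surface intensity is the sum of ambient, diffuse, and specular terms. The function names and the coefficient values (ka, kd, ks, shininess) are illustrative choices, not standard API:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_viewer,
                    ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Scalar Phong reflection: ambient + diffuse + specular terms."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

With the light and viewer directly above a surface facing up, all three terms reach their maxima and the intensity is ka + kd + ks = 1.0; with the light grazing the surface, only the ambient term remains.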
Shading can be broken down into two different techniques, which are often studied independently:
- Surface shading - how light spreads across a surface (mostly used in scanline rendering for real-time 3D rendering in video games)
- Reflection/scattering - how light interacts with a surface at a given point (mostly used in ray-traced renders for non real-time photorealistic and artistic 3D rendering in both CGI still 3D images and CGI non-interactive 3D animations)
Surface shading algorithms
Popular surface shading algorithms in 3D computer graphics include:
- Flat shading: a technique that shades each polygon of an object based on the polygon's 'normal' and the position and intensity of a light source
- Gouraud shading: invented by H. Gouraud in 1971; a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces
- Phong shading: invented by Bui Tuong Phong; used to simulate specular highlights and smooth shaded surfaces
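To make the distinction between flat and Gouraud shading concrete, here is a minimal sketch (function names are mine; real implementations work on full vertex and pixel streams):

```python
def flat_shade(face_normal, light_dir):
    """Flat shading: one Lambert term for the whole polygon,
    computed from the polygon's single face normal."""
    d = sum(n * l for n, l in zip(face_normal, light_dir))
    return max(d, 0.0)

def gouraud_interpolate(i0, i1, i2, w0, w1, w2):
    """Gouraud shading: intensities are computed per vertex, then
    blended across the triangle using barycentric weights."""
    return i0 * w0 + i1 * w1 + i2 * w2
```

Flat shading gives every pixel of a polygon the same intensity, which makes facet edges visible; Gouraud interpolation varies the intensity smoothly across the polygon, hiding those edges.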
Reflection
The Utah teapot with green lighting
Reflection or scattering is the relationship between the incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF.[5]
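The simplest example of such a scattering description is the ideal diffuse (Lambertian) BRDF, which returns the constant value albedo/pi regardless of the incoming and outgoing directions. A minimal sketch (function names are illustrative):

```python
import math

def lambertian_brdf(albedo):
    """Ideal diffuse BRDF: a constant albedo / pi for every direction pair.
    The division by pi keeps the surface energy-conserving."""
    def brdf(incoming, outgoing):
        return albedo / math.pi
    return brdf
```

More realistic materials replace this constant with a function that genuinely depends on both directions, e.g. to produce glossy highlights.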
Shading
Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word 'shader' is sometimes used for programs that describe local geometric variation.) A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail.
Some shading techniques include:
- Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.[6]
- Cel shading: A technique used to imitate the look of hand-drawn animation.
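The core of cel shading can be sketched as quantizing a continuous shading intensity into a few discrete bands, which produces the flat color regions characteristic of hand-drawn animation (this toy function is illustrative, not a standard API):

```python
def cel_shade(intensity, bands=3):
    """Quantize a shading intensity in [0, 1] into a few discrete levels."""
    if intensity >= 1.0:
        return 1.0
    step = int(intensity * bands)                 # which band the intensity falls into
    return step / (bands - 1) if bands > 1 else 1.0
```

With three bands, any smoothly varying Lambert term collapses to the levels 0.0, 0.5, and 1.0, giving the stepped "toon" look.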
Transport
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Projection
Perspective projection
The shaded three-dimensional objects must be flattened so that the display device (typically a monitor) can show them in only two dimensions; this process is called 3D projection. Most applications use perspective projection, whose basic idea is that objects farther away are made smaller in relation to those closer to the eye. Programs can produce perspective by multiplying screen coordinates by a dilation constant raised to the power of the negative of the distance from the observer. A dilation constant of one means that there is no perspective; high dilation constants can cause a 'fish-eye' effect in which image distortion begins to occur. Orthographic projection is used mainly in CAD and CAM applications, where scientific modeling requires precise measurements and preservation of the third dimension.
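The dilation-constant formulation described above can be sketched as follows. This is a toy illustration of that specific scheme, not the matrix-based perspective divide used by real graphics pipelines, and the function names are mine:

```python
def perspective_scale(dilation, distance):
    """Apparent-size factor: the dilation constant raised to
    the power of the negative distance from the observer."""
    return dilation ** (-distance)

def project_point(x, y, z, dilation=1.1):
    """Flatten a 3D point by scaling its screen-plane coordinates
    by the distance-dependent factor."""
    s = perspective_scale(dilation, z)
    return (x * s, y * s)
```

With dilation = 1 every point keeps its size regardless of depth (no perspective); with dilation = 2, a point at distance 1 shrinks to half its screen size.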
See also
- Graphics processing unit (GPU)
References
- Badler, Norman I. '3D Object Modeling Lecture Series' (PDF). University of North Carolina at Chapel Hill.
- 'Non-Photorealistic Rendering'. Duke University. Retrieved 2018-07-23.
- 'The Science of 3D Rendering'. The Institute for Digital Archaeology. Retrieved 2019-01-19.
- Christensen, Per H.; Jarosz, Wojciech. 'The Path to Path-Traced Movies' (PDF).
- 'Fundamentals of Rendering - Reflectance Functions' (PDF). Ohio State University.
- 'Bump Mapping'. web.cs.wpi.edu. Retrieved 2018-07-23.
This book is a compilation of the column series 'Real-Time Rendering 3rd: Distilled Summaries', totaling some 97,000 Chinese characters. You can treat it as an accessible Chinese-language edition of Real-Time Rendering, 3rd Edition, or as an annotated reading guide and study companion to the original.
- Ebook PDF download link: 《Real-Time Rendering 3rd》提炼总结.PDF
- Companion mind map ('Real-Time Rendering Knowledge Network') download link: 《Real-Time Rendering 3rd》核心知识网络图解.jpg
In the field of real-time rendering and computer graphics, the Real-Time Rendering series has long been highly regarded. Some call it the bible of real-time rendering; others call it a table of contents for peerless martial arts.
In fact, Real-Time Rendering reads much like a survey of the mainstream body of knowledge in graphics: it covers computer graphics and real-time rendering from every angle, and can serve both as a collection of surveys for a view of the whole field and as a desk reference for later lookup.
Features of this PDF:
- Plain text throughout, supporting full-text search and quick retrieval
- Carefully typeset to the standard of a print publication
- High-resolution illustrations
- A detailed, clickable table of contents that jumps straight to the corresponding chapter
- Bookmarks accurate to every chapter and section
- Well suited as a quick-reference handbook
In terms of content, the book collects the column series in order, in twelve chapters:
- Chapter 1: Overview of the Book's Key Topics
- Chapter 2: The Graphics Rendering Pipeline
- Chapter 3: The GPU Rendering Pipeline and Programmable Shaders
- Chapter 4: Graphics Rendering and Visual Appearance
- Chapter 5: Texture Mapping and Related Techniques
- Chapter 6: Advanced Shading: BRDFs and Related Techniques
- Chapter 7: The Past and Present of Deferred Rendering
- Chapter 8: Global Illumination: Ray Tracing, Path Tracing, and a Chronicle of GI Techniques
- Chapter 9: A Summary of Image-Based Rendering Techniques in Game Development
- Chapter 10: A Summary of Non-Photorealistic Rendering (NPR) Techniques
- Chapter 11: A Summary of Rendering Acceleration Algorithms in Game Development
- Chapter 12: A Methodology for Rendering Pipeline Optimization: From Bottleneck Identification to Optimization Strategy
- Appendix: A Mind Map of the Core Topics of Real-Time Rendering, 3rd Edition
If you find the original English edition of Real-Time Rendering, 3rd Edition too hard to work through on its own, reading it alongside this book may make the effort far more productive. For readers who want a quick introduction to real-time rendering, browsing this book should also prove rewarding.
Figure: PDF bookmarks accurate to every chapter and section
Figure: Clickable table of contents jumping straight to each chapter
Figure: Body text typeset to the standard of a print publication