Category: Graphics: Algorithms and Data Structures Date Posted: 11/05/2004

Advanced Graphics Algorithms

1. Introduction

Graphics representations of reality - or at least virtual reality - in games, simulations, movies, and commercial and military applications have become increasingly convincing, immersing a growing audience in suspension of disbelief - and at times even utter belief. This process has been facilitated, in part at least, by exponentially growing processing speeds and, in more recent years, by the advent of hardware-accelerated graphics rendering.

However, even with processors capable of several gigaflops, a brute-force approach to rendering cannot produce real-time environments and worlds nearly as realistic as those portrayed in games and interactive simulations. The reason is that numerous algorithms are used to approximate or compromise reality in order to achieve interactive rendering rates. These algorithms include methods to simplify scenes, to efficiently cull invisible parts, or to simply forgo realistic computations in favour of faster approaches that, though inaccurate, convincingly portray reality.

Following the introduction we present a section on several graphics rendering concepts that feature in this article. In the remainder of this article we will discuss six popular algorithms for indoor and outdoor rendering of environments, namely:

  • quad-based static rendering of environments
  • a continuous level-of-detail (CLOD) rendering of height fields as described by Roettger et al [1]
  • real-time optimally adapting meshes (ROAM) for terrain rendering
  • portal-based rendering of indoor environments
  • binary space partitions (BSP) of indoor environments
  • potential visibility sets (PVS)

We will discuss each approach, offering a high-level description as well as implementation considerations where appropriate. Finally each algorithm will be discussed in terms of its application in modern graphics systems before we conclude the article.

2. Concepts in Graphics Rendering

This section offers a broad overview of several key concepts in graphics rendering. These include the graphics pipeline, vertex representations, scene reduction techniques and graphics models - for a more extensive description we refer the interested reader to Alan Watt's 3D Computer Graphics [2].

2.1 The Graphics Pipeline

Graphics rendering is concerned with reducing a scene, a collection of three-dimensional data, to a smaller, visible subset and rendering this subset. To render a scene subset we note that a scene consists of polygons that are usually reduced to sets of triangles for hardware rendering purposes. The rendering process goes through a graphics pipeline during which the vertices of a triangle are transformed according to the current point-of-view and then projected from world space onto screen space according to the viewing frustum. The point-of-view determines the position and direction from which the world is rendered, while the viewing frustum determines the scope of the field-of-view (FOV).

After transformation and projection the triangle is lit (meaning lighting calculations are performed on it) and clipped (meaning only visible parts are drawn) and then finally drawn to the graphics buffer. A number of approaches can be adopted when drawing the triangle, such as wire-frame only, solid, textured and bump-mapped.

Wire-framing renders only the lines connecting polygon vertices; solid rendering uses colour information only; texturing projects bitmap or procedural data onto the polygon; bump-mapping textures the polygon and additionally perturbs the lighting across its surface to create a sense of depth in the image.

2.2 Vertex Representation

The triangle vertices used during the graphics pipeline can be represented in a number of ways, the simplest being a triangle-list. A triangle-list simply stores the vertices in sets of three, corresponding to the triangles they represent.

A triangle-list may contain redundant information, since most triangles share vertices. Triangle-strips and triangle-fans take this into account and offer a vertex representation that reduces memory requirements and processing times.

Indexed vertex representations offer the greatest storage and performance benefits by storing only unique vertices and using a separate indexing buffer to associate triangles with vertices. The index buffer itself offers a number of representations corresponding to index-lists, index-strips and index-fans.
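As a concrete illustration of the savings, the following sketch compares the memory footprint of a plain triangle-list against an indexed representation for a regular terrain grid. The 12-byte vertex (three 32-bit floats) and 4-byte index sizes are assumptions for the example, and all names are illustrative:

```python
def triangle_list_bytes(w, h, vertex_size=12):
    # A plain triangle-list stores 3 full vertices per triangle;
    # an (w x h)-vertex grid yields 2 triangles per cell.
    triangles = (w - 1) * (h - 1) * 2
    return triangles * 3 * vertex_size

def indexed_bytes(w, h, vertex_size=12, index_size=4):
    # Indexed rendering stores each unique vertex once, plus
    # 3 indices per triangle referencing the shared vertex pool.
    triangles = (w - 1) * (h - 1) * 2
    return w * h * vertex_size + triangles * 3 * index_size
```

For a 257x257 grid the indexed form needs roughly half the memory of the plain list; the gap widens further with larger per-vertex data (normals, texture coordinates).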

2.3 Scene Reduction

A scene is usually not rendered as a whole but reduced to a subset. This reduction is usually performed by culling triangle sets. Culling implies exclusion from the rendered subset, and culling processes can take many forms; for example, backface culling eliminates all triangles that face away from the point-of-view (meaning the triangle normal does not point towards the point-of-view). Other forms of culling usually attempt to eliminate entire sets of triangles: if, for example, the bounding box around a set of triangles is found to be outside the field-of-view, then none of the triangles in that set need to be rendered.
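The backface test described above can be sketched as follows - a minimal example assuming counter-clockwise vertex winding; all names are illustrative:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_backfacing(v0, v1, v2, eye):
    # Triangle normal from two edges (counter-clockwise winding assumed).
    normal = cross(sub(v1, v0), sub(v2, v0))
    # The triangle faces away if its normal does not point towards the eye.
    return dot(normal, sub(eye, v0)) <= 0
```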

A different form of triangle reduction is implemented in level-of-detail (LOD) and continuous level-of-detail schemes. These reduce triangles by finding an adequate lower triangle count approximation of a model. A simple LOD may store more or less detailed versions of the same object and use them as appropriate, while more complex schemes may compute LOD on the fly. A continuous level-of-detail algorithm can produce a vast set of approximations that may vary by as little as a single triangle.

2.4 Graphics Rendering Models

Graphics rendering attempts to create a visual representation of the model data used in the rendering process. We classify a model as representing an object, an indoor environment, an outdoor environment, or any combination of these.

An object can be any entity - chairs, swords, lamps, pigs, and humans are all examples of objects. Dynamic objects that interact with a scene and possess embedded artificial intelligence are also called actors. The raw data associated with objects is generally a collection of vertices.

Indoor environments represent buildings and structures, usually viewed from within. A sewer system and a cathedral are both examples of indoor environments. Similar to objects, the raw data associated with indoor environments is generally a collection of vertices.

Outdoor environments involve rendering of terrain and landscapes; mountain ranges, forests and oceans may be part of outdoor environments. Terrain data is often stored in the form of height fields. A height field is a two-dimensional bitmap where the value at a point is interpreted as the height at that point. Any bitmap can function as a height field.

The different types of models possess different properties which make certain approaches more advantageous with certain models than with others. In this article we will focus on indoor and outdoor environments and describe different approaches that efficiently manage and render such scenes.
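To make the height-field interpretation concrete, here is a minimal sketch that converts a 2D array of height values into 3D vertices over a regular x/z grid (function and parameter names are illustrative):

```python
def heightfield_to_vertices(grid, scale=1.0):
    # Each (x, z) grid position becomes a vertex whose y-coordinate is
    # the stored height value; `scale` sets the horizontal spacing.
    verts = []
    for z, row in enumerate(grid):
        for x, h in enumerate(row):
            verts.append((x * scale, h, z * scale))
    return verts
```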

3. Algorithms

In this section we will cover a variety of algorithms for graphics rendering. The first three subsections present different approaches to rendering terrain; the following two sections offer algorithms for indoor environment rendering; the last section presents a technique that can successfully be applied to both forms of rendering.

3.1 Quad-based Static Rendering of Terrain

3.1.1 Introduction

Traditionally terrain rendering attempts to make real-time rendering of terrain possible by performing extensive level-of-detail (LOD) computations on the landscape data. However, the advent of modern graphics hardware capable of transforming and texturing vertex data brings a new philosophy to the less academic and more practically orientated programmer.

This new dogma states that in the face of overwhelming graphics processing power the key concern of a graphics programmer should be to minimize the workload on the central processing unit and maximize the workload on the graphics processing unit. The static quad-based approach to terrain rendering is a direct consequence of this new philosophy.

3.1.2 Description and Implementation

The set of terrain data is subdivided using an arbitrary spatial subdivision scheme - the use of quad trees in this regard is a popular choice. Each element should contain as many vertices as are optimally processed by the graphics processor - this varies from processor to processor, but a rule of thumb is that current graphics processors cope best with batches of approximately 500 to 4000 vertices.

Graphics processing is further facilitated with the use of alternative vertex representations such as triangle-strips and indexed vertex representations; these both reduce the memory and processing requirements for the set of vertices. This generally holds true for all graphics algorithms.

The rendering process consists of culling elements that are outside the field-of-view and rendering the remainder. View-culling is performed very efficiently with hierarchical subdivision schemes such as quad trees.
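The hierarchical culling step can be sketched as follows - a minimal quad tree over a square terrain, culled against a simplified rectangular field-of-view rather than a true frustum (all names are illustrative):

```python
class QuadNode:
    def __init__(self, x, y, size, depth, max_depth=2):
        self.x, self.y, self.size = x, y, size
        self.children = []
        if depth < max_depth:
            half = size / 2
            for cx, cy in ((x, y), (x + half, y),
                           (x, y + half), (x + half, y + half)):
                self.children.append(QuadNode(cx, cy, half, depth + 1, max_depth))

def intersects(node, view):
    # view = (x0, y0, x1, y1): a rectangular stand-in for the frustum.
    vx0, vy0, vx1, vy1 = view
    return not (node.x + node.size < vx0 or node.x > vx1 or
                node.y + node.size < vy0 or node.y > vy1)

def visible_leaves(node, view, out):
    if not intersects(node, view):
        return  # an entire subtree is culled in one test
    if not node.children:
        out.append((node.x, node.y))
    else:
        for child in node.children:
            visible_leaves(child, view, out)
```

The key property is that one failed intersection test rejects a whole subtree of vertex batches, which is what makes hierarchical schemes so efficient for view-culling.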

Static quad-based rendering is most efficient when using static vertex buffers that can be stored onboard the graphics card. This eliminates the costs of memory transfers between main and graphics memory and allows the graphics processor to apply internal optimization to the rendering process.

Storing static vertex buffers onboard the graphics card does, however, impose severe limitations on a graphics system. The primary drawbacks are that dynamically changing terrain is impossible in such a system, and the size of the landscape is limited by graphics memory. A relatively small terrain set of 256 by 256 vertices can occupy between 1.5 MB and 6 MB of memory (depending on vertex representation and metadata). A standard terrain set of 1024 by 1024 vertices demands 16 times more space; and in addition to these costs the graphics memory must also cater for textures.
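The figures above can be checked with simple arithmetic. The 24 bytes per vertex below (position plus normal as 32-bit floats) is one plausible assumption that reproduces the lower figure; richer vertex formats of around 96 bytes reach the upper one:

```python
def terrain_bytes(n, bytes_per_vertex):
    # Raw vertex storage for an n x n height-field grid.
    return n * n * bytes_per_vertex

# 256x256 at 24 bytes/vertex is exactly 1.5 MiB; 1024x1024 is 16x larger.
small = terrain_bytes(256, 24)
large = terrain_bytes(1024, 24)
```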

It is of course possible to keep vertex buffers in main memory and send them to the graphics card for every frame. The use of an Accelerated Graphics Port (AGP) enables rapid memory transfers between system memory and graphics controller. Such an approach eases both prior limitations to the implementation. However in spite of dedicated AGP transfers the amount of data involved is huge and memory transfers represent a bottleneck to potential performance.

An alternative approach to efficient quad-based rendering involves a buffering system in which static vertex buffers are stored on the graphics card and replaced as necessary with new blocks as the camera moves over the terrain. Such a system cannot efficiently handle dynamic environments, but arbitrarily large terrain sets can be traversed in such a manner. The viewing distance will however be limited by the vertex data that is present on the graphics card.

3.1.3 Application

Applications requiring far viewing distances, dynamically changing terrain, or rendering at vastly different levels-of-detail (such as a descent from orbital height around a planet down to its surface) are not suited to static quad-based terrain renderers. Ground-based simulations that utilize a steep (top-down or isometric) viewing angle or artificial distance delimiters (such as fog) can greatly benefit from the use of a static quad-based approach to rendering.

3.2 Continuous LOD Height Fields by Roettger et al

3.2.1 Introduction

Stefan Roettger's approach to CLOD in terrain rendering [1] builds on work published earlier by Peter Lindstrom [3]. The basic premise put forward by Lindstrom is the use of a quad tree imposed upon a height field. Using a bottom-up approach this quad tree is used to generate a continuous level-of-detail tessellation of the terrain data in real-time. Roettger proposed a different mechanism that utilizes a top-down refinement of a hierarchical quad tree to produce a real-time continuous LOD tessellation of the landscape data.

3.2.2 Description and Implementation

Roettger's algorithm overlays a hierarchical quad tree over a set of terrain data. This quad tree stores a hierarchy of variance values - a measure of terrain complexity - which are used in terrain mesh refinement.

The refinement process is top-down, meaning that first the quad tree root and, as necessary, subsequent children are recursively processed. These computations eventually yield data within the quad tree that form an implicit continuous level-of-detail tessellation of the terrain data.

Refinement of quad tree nodes depends on the evaluation of an error metric, which represents an indication of the on-screen pixel error in the landscape. A host of different error metrics can be defined and implemented, each offering different rationales for refinement - this article will not discuss these to any greater length, except to describe Roettger's choice of error metric.

The metric adopted in Roettger's paper on CLOD terrain rendering is a popular choice of error metric that is used in a number of CLOD algorithms including the paper by Mark Duchaineau on real-time optimally adapting meshes [4]. The metric is of the form:

    priority = variance / distance

This metric caters for a generally decreasing degree of tessellation with increasing distance whilst offering increased tessellation for sections of high variance. This allows flat surfaces to be represented with low triangle counts, whilst sudden changes in terrain formation are allocated higher triangle counts. Depending on the target priority chosen, higher or lower triangle counts can be achieved (with correspondingly better or poorer approximations of the landscape data).
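A refinement decision based on this metric can be sketched in a few lines (the threshold value and the epsilon guard against zero distance are assumptions for the example):

```python
def priority(variance, distance, eps=1e-6):
    # Roettger-style metric: tessellation priority rises with terrain
    # variance and falls with distance from the viewer.
    return variance / max(distance, eps)

def should_refine(variance, distance, threshold):
    # A quad tree node is subdivided only while its projected
    # error (priority) still exceeds the chosen target.
    return priority(variance, distance) > threshold
```

Raising the threshold yields fewer triangles (a coarser approximation); lowering it yields more.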

Roettger's algorithm has achieved considerable popularity, second only to ROAM (see Section 3.3 below). A number of features make Roettger's approach desirable for certain implementations. Foremost among these is the exceptionally low memory requirement of the algorithm.

Once set up, the algorithm requires a mere additional byte per terrain data element. This byte stores the variance and node refinement state of an implicit quad tree. No additional permanent memory space is required, with the exception of a small pool of working memory used to generate triangle fans for the rendering process. Other approaches require 10 to 50 times more memory.

The primary Achilles' heel of Roettger's approach to terrain rendering is the lack of a frame-coherent rendering scheme. Frame-coherency utilizes the tessellation data of prior frames to generate the resultant mesh of the current frame. Frame-coherent mechanisms perform work of the order O(change in triangle count), typically much less than 5% of the mesh per frame. Roettger's algorithm generally performs in the order O(log n) for a height field of n*n data points.

With increasing terrain data the asymptotically different nature of frame-coherent algorithms and Roettger's algorithm becomes more and more prominent. However our tests indicate that for relatively small terrain sizes of up to 512x512 the two approaches perform on par.

3.2.3 Application

Roettger's approach to terrain rendering may be efficiently utilized in terrain applications that do not benefit from frame-coherency mechanisms (such as rapidly changing points of view, used in some strategy games); furthermore, if terrain size is not overly large then the algorithm may be appropriate for implementation.

In addition to the above mentioned possibilities, the algorithm described by Roettger is particularly appealing for applications designed to run on systems with resource limitations (such as a PlayStation), where preserving memory space receives a higher priority than rendering a few more frames per second.

3.3 Real-time Optimally Adapting Meshes (ROAM)

3.3.1 Introduction

Mark Duchaineau wrote the definitive paper on the ROAM algorithm [4], which utilizes split/merge routines to efficiently generate terrain meshes. The idea of utilizing split and merge procedures to produce frame-coherent meshes had been put forward earlier by Lindstrom - but Duchaineau recognized the implications for his binary triangle tree (bintritree) structure and formulated the successful ROAM algorithm.

3.3.2 Description and Implementation

The basis of the ROAM algorithm is formed by the notion of splitting and merging triangles to refine or coarsen a terrain triangulation. A split procedure takes a right-angled triangle and splits it into two resultant right-angled triangles. A merge procedure reverses such a process. Priority queues are used to sort triangles according to priority. Based on these queues highest priority triangles are split and lowest priority triangles are merged.
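The split operation can be sketched as follows - a minimal 2D example in which a right-angled triangle is bisected at the midpoint of its hypotenuse, producing two smaller right-angled triangles of equal area (the (apex, left, right) tuple layout is an assumption for the example):

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def split(tri):
    # tri = (apex, left, right): a right-angled triangle whose
    # hypotenuse runs from `left` to `right`. Splitting at the
    # hypotenuse midpoint yields two children, each again
    # right-angled with its apex at the split point.
    apex, left, right = tri
    m = midpoint(left, right)
    return (m, apex, left), (m, right, apex)
```

A merge is simply the reverse: two such siblings are collapsed back into their parent when their priority drops.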

The method proposed by Duchaineau relies on imposing a bintritree over the terrain data. A bintritree is a form of geometric binary tree that utilizes (in Duchaineau's implementation) right-angled triangles as the elementary sub-structure.

Using splits or a combination of splits and merges such a bintritree forms the underlying structure of a rapid terrain tessellation process. If the implementation solely utilizes split routines, then the implementation is said to be a split-only implementation of ROAM. If both splits and merges are used, then it is referred to as split-merge implementation of ROAM. The two versions of the ROAM algorithm vary significantly in their implementation, requirements and performance.

Split-only implementations do not require priority queues; utilize simpler, implicit data structures; and are processed using simpler and fewer programming routines. These benefits result in faster per-vertex processing and significantly reduced memory requirements.

Split-merge versions of the ROAM algorithm benefit from frame-coherency principles, but are burdened with the task of building and maintaining priority queues. In other words, though the number of triangles to consider is significantly reduced due to frame-coherency, the overhead of handling a single triangle is significantly increased.

Often the requirements of strict priority queues are relaxed somewhat in favour of faster but inaccurate queuing approaches - this does not significantly compromise the algorithm and offers a considerably reduced queuing overhead.

However, regardless of the queuing accuracy, ultimately the asymptotically faster behaviour of a full (split-merge) ROAM implementation beats a split-only implementation for a sufficiently large terrain set. A host of additional improvements that speed up the underlying ROAM algorithm are suggested by Duchaineau. These include deferred priority computations, frustum culling and geo-morphing.

3.3.3 Application

It is important to note that both split-only and split-merge implementations of ROAM are extremely successful terrain tessellating algorithms and both represent solid choices for all forms of terrain rendering. Applications requiring very large sets of terrain data to be processed and that require a smooth frame-to-frame transition will benefit from the use of a split-merge ROAM implementation.

In recent years the application of the ROAM algorithm to arbitrary 3D data has received exposure in academic work [5]. The ROAM algorithm requires only minor modifications to produce continuous level-of-detail representations of arbitrary models. The modifications largely focus on the computation of variance and the error metric of the ROAM algorithm.

3.4 Portal-Based Indoor Environments

3.4.1 Introduction

The problem of rendering indoor scenes efficiently differs from the problem of rendering sets of terrain data. Indoor renderers focus on what should and should not be rendered, whereas outdoor renderers seek to find suitable approximations of given terrain sets.

An early and successful approach to rendering indoor environments is based on portal rendering [6,7]. This approach lacks the rigour and elegance that other algorithms, such as binary space partitioning (BSP, see Section 3.5), offer, but in turn provides brute efficiency and a simplicity that lends itself to straightforward implementation.

3.4.2 Description and Implementation

The portal-based algorithm for indoor environment rendering divides a scene into convex sections that are linked with portals. A portal defines a visual region from which a viewer can look from one section into another; doors and windows are obvious examples of portals, however a host of other less obvious portals exist - a bend in a hallway would be an example of such a portal.

The rendering process first determines which sector the camera is in, the polygons of that sector are then in turn tested for visibility, and are clipped and rendered as appropriate. If a visible polygon is in fact a portal, then the viewing frustum is redefined to exactly encompass the visible portal and the sector pointed to by the portal is rendered using this refined viewing frustum. This process is recursive and will render all visible sectors.
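The recursion can be sketched as follows. Sectors and portals here are hypothetical toy data, and the "frustum" is simplified to a 1D visible interval so that the narrowing step stays obvious; a real implementation would clip a 3D frustum against the portal polygon:

```python
def intersect(f1, f2):
    # Narrow the current frustum interval to the part covered by a portal.
    lo, hi = max(f1[0], f2[0]), min(f1[1], f2[1])
    return (lo, hi) if lo < hi else None

def render_sector(sectors, name, frustum, drawn):
    drawn.append(name)  # "draw" this sector's polygons
    for portal_span, target in sectors[name]["portals"]:
        narrowed = intersect(frustum, portal_span)
        # Recurse with the tighter frustum; the visited-guard is a
        # simplification (real engines may revisit sectors, e.g. mirrors).
        if narrowed and target not in drawn:
            render_sector(sectors, target, narrowed, drawn)
```

A sector whose portal falls entirely outside the current frustum is never visited, which is exactly how portal rendering avoids drawing invisible parts of the scene.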

Portals are not geometrically dependent on the sectors in which they are defined - this means that portals can be defined to point into geometrically independent sectors of the scene, or to sectors in completely different scenes. It is also possible to define a portal to point into its own sector - if the viewing frustum is suitably adapted the portal effectively functions as a mirror.

A number of traditional problems of rendering can be solved elegantly using portals as described above. A classic example is the process of light and shadow mapping:

A scene is rendered twice, once from the point of view of a light source and once from the camera position. When the scene is rendered using the light source as origin, then no on-screen rendering occurs, instead all polygons (clipped if necessary) that are determined visible by the light source are illuminated by the light source, similarly all other polygons can be darkened into shadows. This technique is relatively inexpensive, even for multiple light sources, since most of the workload consists of transforming and texturing the scene data. However it does not cater for dynamic actors in the scene that may occlude the light and cast shadows.

In modern graphics systems a portal algorithm is seldom implemented as described above. Usually a less rigorous approach is adopted that takes advantage of modern graphics hardware: the requirement to divide the scene into convex sections is dropped, as hardware z-buffering efficiently performs the necessary visibility tests. Similarly the visibility test and clipping performed on individual triangles are delegated to hardware, which rejects or renders individual triangles quicker than can be determined otherwise.

The net result of all these simplifications is that an implementation of a portal-based rendering system is remarkably simple - and the renderer makes extensive use of hardware acceleration, resulting in very good performance.

The primary drawback of portal-based techniques is that binary space partitions are more efficient and offer a host of additional features that cannot be implemented as effectively in portal-based algorithms. Portal-based renderers do however benefit from one aspect that BSP trees fail to offer, namely dynamic scenes. Dynamic scenes offer a host of possibilities that static scenes cannot; static scene renderers go to some length to create the illusion of some degree of dynamic scene change (doors, elevators), but portal-based renderers can dynamically transform an entire scene - something that is impossible on static renderers.

3.4.3 Application

Portal-based rendering is a good choice for indoor environments of limited complexity - with rising complexity the benefits of BSP-based rendering increase; however portal-based renderers are a very good choice for applications that require considerable dynamic scene changes.

3.5 Binary Space Partitioning Trees

3.5.1 Introduction

The theory of binary space partitioning trees was put forward in 1980 by Fuchs [10], though the rise to popularity of BSP trees began with the release of Quake by id Software. Today most algorithms used to render indoor scenes in real-time make use of this approach [8,9]. Although BSP trees fail to offer dynamic scenes, the data structure offers an effective foundation for efficiently solving a host of problems that exceed the basic requirement of rendering an indoor environment correctly.

3.5.2 Description and Implementation

Binary space partitioning trees were originally formulated as a means of quickly and correctly sorting a set of polygons into depth order - a visible surface determination algorithm. BSP trees recursively divide a scene by bisecting it with a plane and sorting scene polygons into those in front of and those behind the dividing plane; polygons that intersect the dividing plane are split in two. The resultant tree can be used to efficiently sort the scene polygons into ascending or descending depth order for any arbitrary position inside (or outside) the scene.
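The depth-sorting property can be sketched with a toy tree that uses axis-aligned planes of the form x = c (a simplification; real BSP trees use arbitrary planes) and a painter's-algorithm traversal that always visits the far side of each plane first:

```python
class BSPNode:
    def __init__(self, x_split=None, front=None, back=None, polys=()):
        # x_split is None for a leaf; `front` holds the +x side.
        self.x_split, self.front, self.back = x_split, front, back
        self.polys = list(polys)

def back_to_front(node, eye_x, out):
    # Painter's-algorithm order: far subtree, node polygons, near subtree.
    if node is None:
        return
    if node.x_split is None:  # leaf
        out.extend(node.polys)
        return
    if eye_x >= node.x_split:  # eye on the front (+x) side
        back_to_front(node.back, eye_x, out)
        out.extend(node.polys)
        back_to_front(node.front, eye_x, out)
    else:
        back_to_front(node.front, eye_x, out)
        out.extend(node.polys)
        back_to_front(node.back, eye_x, out)
```

Moving the eye to the other side of a splitting plane reverses the order in which its subtrees are emitted, which is the essence of visible surface determination with a BSP tree.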

Different forms of BSP trees exist, corresponding to different paradigms of BSP representation. The representations differ in whether arbitrary planes or in-scene polygons are used as splitting planes, whether only leaves or both leaves and nodes store polygons, as well as other considerations. Depending on which method is adopted a number of different BSP variants are formed, such as solid BSPs and leaf- and node-based BSPs. The trees possess slightly differing properties that affect rendering speed and make them more or less suitable for additional scene processing.

Interestingly BSP trees are rarely used in their original capacity to perform visible surface determination - this is done with potential visibility sets and hardware z-buffering. Instead BSP trees offer an exceptional platform to perform and accelerate many different scene processing techniques.

These techniques include:

  • Set operations (e.g. unions, intersections)
  • Collision detection
  • View frustum culling
  • Light and Shadow mapping
  • Ray-tracing
  • Radiosity rendering
  • Image segmentation

The use of a BSP tree greatly reduces the set of polygons that need to be processed during these techniques, resulting in algorithms that perform considerably faster: O(n²) algorithms are reduced to O(n log n), and O(n) algorithms are reduced to O(log n).

The concept of an optimal BSP tree is crucial to the implementation of the above-mentioned techniques. The creation of such an optimal BSP tree is a non-trivial problem, and has in fact been shown to be NP-hard. The primary problem is that mutually exclusive requirements vie for supremacy during BSP creation: a splitting plane should divide the set of polygons in question as evenly as possible, whilst minimizing the number of polygons it splits.

Greedy algorithms do not generally generate the optimal BSP tree for a given data set, though the results are often a sufficiently good approximation to make their use feasible. However, even the approximation of a good BSP tree is too processing intensive to offer real-time manipulation of a complex scene - BSP-based applications therefore always render static scenes. A number of tricks or compromises are implemented to create the illusion of dynamic scenes - elevators and doors, for example, can be excluded from the BSP creation process and are simply rendered when within the view frustum.

3.5.3 Application

BSP trees are supremely efficient in rendering indoor environments and offer a host of advantages for scene processing - an application that requires (static) indoor scene management benefits from the use of BSP trees. It should be noted that, as binary space partitions are a particularly inefficient way to describe terrain data, BSP renderers are inappropriate for outdoor rendering. If rendering of both indoor and outdoor environments is necessary, then a hybrid approach using BSP trees for indoor environments and another algorithm to handle outdoor scenes must be adopted.

3.6 Potential Visibility Sets

3.6.1 Introduction

As noted in Section 3.4.1, indoor and outdoor renderers focus on different aspects of scene data processing. These differences are, however, not mutually exclusive - both kinds of renderer benefit greatly from good occlusion culling mechanisms, and a potential visibility set offers exactly such benefits. Potential visibility sets are a prime example of the maxim that computational speed can be bought in exchange for higher memory consumption. In the context of 3D rendering the complete set of triangles that make up a world or scene can be divided into many subsets; a PVS is a form of database that stores which of these subsets are visible from a particular position within the world or scene [9].

3.6.2 Description and Implementation

In essence a PVS pre-computes scene-level occlusions and visibility and the results of potential visibility computations are used to rapidly perform occlusion culling in the world, be the occluder a wall or a mountain. Potential visibility sets are independent of the type of scene rendered and the data structures that are used. Thus for indoor rendering purposes an octree could just as efficiently be used in conjunction with PVSs as BSP trees.

Currently BSP trees are rarely used for their original purpose of visible surface determination; PVSs and hardware z-buffering usually cater for that need. Nonetheless, BSP trees remain a popular data structure in spite of other spatial subdivision mechanisms that are more accessible to the human mind (such as octrees). The reason is that BSP trees act as a foundation on which various scene computations can be performed efficiently (see Section 3.5.2).

In the case of binary space partitions the usual approach is a leaf-based BSP in which each leaf is additionally associated with a PVS that stores which other leaves are potentially visible from it.

The generation of PVS data for a BSP tree is a cumbersome, computationally intensive process. After BSP generation the spatial subdivision is evaluated and portals are inserted according to certain rules - these portals define visibility between leaves. The PVS is then computed by overlaying a fine grid on the scene and determining visibility from each grid point to every triangle in the world.

Not surprisingly such an approach yields considerable amounts of data - a sizeable scene easily contains in excess of 20 MB of PVS data. Usually the data is not handled in its raw form, but compacted by storing single-bit boolean values and applying ZRLE (zero run-length encoding); these mechanisms typically reduce the data set to the range of 50 to 500 KB.
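Zero run-length encoding exploits the fact that a PVS bit-array is mostly zeros (most leaves cannot see most other leaves). A minimal sketch - runs of zero bytes collapse into a (0, run-length) pair, everything else copies through unchanged:

```python
def zrle_compress(data):
    # Runs of zero bytes (up to 255) become the pair (0, run_length);
    # non-zero bytes are copied through verbatim.
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] != 0:
            out.append(data[i])
            i += 1
        else:
            run = 0
            while i < len(data) and data[i] == 0 and run < 255:
                run += 1
                i += 1
            out.append(0)
            out.append(run)
    return bytes(out)

def zrle_decompress(data):
    # Inverse transform: expand each (0, run_length) pair back into zeros.
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] != 0:
            out.append(data[i])
            i += 1
        else:
            out.extend(b"\x00" * data[i + 1])
            i += 2
    return bytes(out)
```

On a sparse visibility array the compressed form is a small fraction of the raw data, which is where the 20 MB-to-hundreds-of-KB reduction quoted above comes from.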

Similar to the description above, potential visibility determination can also be computed for sets of terrain data. For example it is possible to calculate the PVS of the quad tree leaves used in terrain rendering. Although the process of PVS determination is straightforward, the computational expense is tremendous. Renderers relying on PVS data can therefore only render inherently static scenes. To create a sense of a dynamic world a compromise in PVS accuracy is often introduced, for example: in the case of an elevator the entire elevator shaft is treated as a single block. The elevator is thus rendered irrespective of whether it is itself visible, as long as the elevator shaft is potentially visible (some approaches additionally require the bounding volume of the elevator to be visible).

3.6.3 Application

Since memory considerations have waned in an age where 256 MB of memory is considered common, pre-computed potential visibility data is an effective and inexpensive means of increasing rendering performance - any graphics application that does not require extensively dynamic scenes benefits greatly from the use of PVSs.

4. Conclusion

We have presented a variety of popular algorithms and approaches to effective graphics computation, considering various aspects of implementation and application. Additionally, through the analysis of application it should be apparent to the reader that every algorithm presented offers different benefits that make an algorithm more or less suitable to given circumstances. We conclude that when implementing a graphics application, careful attention must be paid to requirements and application environment in the choice of graphics algorithms.

References

[1] Stefan Roettger, Wolfgang Heidrich, Philipp Slusallek, and Hans-Peter Seidel. Real-Time Generation of Continuous Levels of Detail for Height Fields. In V. Skala, editor, Proceedings of WSCG '98, pages 315-322, Plzen, Czech Republic, 1998.

[2] Alan Watt. 3D Computer Graphics. Third Edition. Addison-Wesley. Pearson Education. 2000.

[3] Peter Lindstrom, David Koller, William Ribarsky, Larry F. Hodges, Nick Faust, and Gregory A. Turner. Real-Time, Continuous Level of Detail Rendering of Height Fields. In Proceedings of ACM SIGGRAPH 96, pages 109-118, August 1996.

[4] Mark Duchaineau, Murray Wolinsky, David E. Sigeti, Mark C. Miller, Charles Aldrich, and Mark B. Mineev-Weinstein. ROAMing Terrain: Real-time Optimally Adapting Meshes. In Proceedings of IEEE Visualization '97, 1997.

[5] Mark Duchaineau. ROAM Algorithm: Realtime Optimally-Adapting Meshes. http://www.cognigraph.com/ROAM_homepage/index.html

[6] Mark Feldman. Portal Engines. The Win95 Game Programmer's Encyclopedia. 1997. http://www.geocities.com/siliconvalley/2151/portals.html

[7] Jacco Bikker. Building a 3D Portal Engine, tutorial series. http://www.flipcode.com/portal/

[8] Bruce F. Naylor. A Tutorial on Binary Space Partitioning Trees. Spatial Labs Inc. http://www.cs.sun.ac.za/~henri/bsp_tutorial.pdf

[9] Steven Cento. BSP Tutorial. http://www.cs.sun.ac.za/~henri/bsppvs.doc

[10] Henry Fuchs, Zvi M. Kedem, and Bruce F. Naylor. On Visible Surface Generation by A Priori Tree Structures. In Proceedings of ACM SIGGRAPH 80, 1980.


© 2003-2004 DevMaster.net. All Rights Reserved.