The Real Deal About Voxels!

Wednesday, December 26, 2012

People need to learn the difference between polygon-based and voxel-based architectures.

When reading about volume rendering on the internet, I discovered an irritating fact: there is no consistent definition of what a ‘voxel’ is. This is especially true of rendering engines: anything that renders cubes or blocky elements hastily gets the ‘voxel renderer’ tag slapped on.
A good example is the immensely popular Minecraft. It uses a voxel-based representation of the world, which makes it easy to generate and manipulate terrain – and that is what Minecraft is all about. However, the end result is still rendered as polygon-based cubes, using hardware acceleration. I’m not saying this is cheating, or that Notch’s renderer is not a great technical achievement – I’m just pointing out how easy it is to get confused.

So let’s get this straight. This is my view on the terminology:
In direct volume rendering (DVR), you’re rendering a data grid: a rectangular 3D volume containing data points with equidistant spacing. Notice how different this is from traditional mesh-based rendering, where a mesh consists of a geometric description of triangles and their positions.
  • A data point is a single unit of information. It can contain temperature data, a simple boolean, … anything you want.
  • A voxel is a volume unit defined by 8 data points. Each voxel gets labeled with a representative data point: the one closest to the grid origin. Voxels are often also called cells.
  • Data points will always be data points; voxels are just a certain way of grouping them. What do you call a 2×2 section of a regular 2D image? I’d say a pixel block. Voxels are the 3D analogue of that.
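To make the terminology concrete, here’s a minimal sketch in Python with NumPy. The helper names `voxel_corners` and `representative` are mine, purely for illustration – they don’t come from any particular engine.

```python
import numpy as np

# A 4x4x4 data grid: one data point per grid position, with equidistant
# (unit) spacing. Each data point stores a made-up scalar (e.g. temperature).
grid = np.random.rand(4, 4, 4)

def voxel_corners(grid, i, j, k):
    """The 8 corner data points of the voxel (cell) at index (i, j, k)."""
    return grid[i:i+2, j:j+2, k:k+2]

def representative(grid, i, j, k):
    """The voxel's representative data point: the corner closest to the
    grid origin, i.e. simply the data point at (i, j, k)."""
    return grid[i, j, k]

# An N x N x N grid of data points contains (N-1)^3 voxels.
n = grid.shape[0]
print("data points:", grid.size)                                # 64
print("voxels:", (n - 1) ** 3)                                  # 27
print("corners per voxel:", voxel_corners(grid, 1, 2, 0).size)  # 8
```

Note how a grid of N³ data points only yields (N−1)³ voxels – the data points are the primary thing, the voxels are just a grouping laid over them.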
Getting the terminology right is half the work, and you’ve got to be aware of the subtle differences when reading research papers.
  • If I render all or a portion of the data grid’s points as cubes, using each voxel’s representative data point for the rendering info (color, …), I’m rendering voxels: I’m rendering the volume spanned by 8 data points, which comes out as a cube because the grid has equidistant spacing.
  • If I shoot a ray from my eye and trace it through the grid, I am not rendering the volume spanned by the voxel’s data points. I’m rendering interpolated data points, integrated along a ray starting at my eye. This is a whole different ball game.
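The second approach can be sketched in a few lines. This is a deliberately crude emission-only integral, assuming a NumPy scalar grid with unit spacing; there is no transfer function, opacity, or early termination, and the names `trilinear` and `raycast` are mine, not from any real renderer. The point is only that each sample is an interpolated data point, not a voxel.

```python
import numpy as np

def trilinear(grid, p):
    """Sample the grid at continuous position p by trilinearly
    interpolating the 8 data points surrounding p."""
    # Clamp so points on the far boundary still have 8 neighbours.
    idx = np.minimum(np.floor(p).astype(int), np.array(grid.shape) - 2)
    fx, fy, fz = p - idx
    c = grid[idx[0]:idx[0]+2, idx[1]:idx[1]+2, idx[2]:idx[2]+2].astype(float)
    c = c[0] * (1 - fx) + c[1] * fx     # collapse along x
    c = c[0] * (1 - fy) + c[1] * fy     # collapse along y
    return c[0] * (1 - fz) + c[1] * fz  # collapse along z

def raycast(grid, origin, direction, step=0.25, n_steps=64):
    """Accumulate interpolated samples along a ray: a crude
    emission-only integral over the data grid."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, float)
    total = 0.0
    for s in range(n_steps):
        p = origin + s * step * direction
        # Only samples inside the grid contribute.
        if np.all(p >= 0) and np.all(p <= np.array(grid.shape) - 1):
            total += trilinear(grid, p) * step
    return total
```

Notice that the voxel only appears implicitly here, as the 8 data points that happen to surround a sample position; nothing is ever drawn as a cube.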

Thanks for reading, please comment :)




©Copyright 2011 PHIFLOW PLATFORM | TNB