Essay

 

Summary:

The paper introduces new techniques for modelling in computer graphics using interpolating implicit surfaces as an alternative to NURBS, subdivision surfaces, pure thin-plate interpolation, Gaussian functions (blobby spheres/metaballs) or any other favoured surface representation and modelling technique. Implicit surfaces in general have been around for some time; the authors focus on interpolating implicit surfaces for modelling and contribute the following features:

  • Direct specification of points on the implicit surface
  • Specification of surface normals
  • Conversion of polygon models to smooth implicit forms
  • Intuitive controls for interactive sculpting
  • Addition of new control points that leave the surface unchanged (like knot insertion)
  • A new approach to blending objects

 

Interpolating implicit surfaces have been used before for surface reconstruction and shape transformation. Such a surface is defined by a set of constraint locations in space: points with value zero through which the surface should pass, and points declared as either inside or outside, carrying positive or negative values respectively. From this data an interpolating implicit function is computed using variational calculus; the zero iso-surface of the resulting function is the desired surface.
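To make this concrete, here is a minimal numerical sketch of that construction, assuming the biharmonic basis |x|^3 that the paper mentions and an appended linear polynomial part; the point set, function names and NumPy-based setup are my own illustration, not code from the paper.

```python
import numpy as np

def phi(r):
    return r ** 3  # biharmonic radial basis in 3D, as used by the paper

def solve_implicit(centers, values):
    """Solve for RBF weights w and polynomial coefficients p so that
    f(x) = sum_j w_j * phi(|x - c_j|) + p0 + p1*x + p2*y + p3*z
    interpolates f(c_j) = values[j]."""
    n = len(centers)
    A = np.zeros((n + 4, n + 4))
    for i in range(n):
        for j in range(n):
            A[i, j] = phi(np.linalg.norm(centers[i] - centers[j]))
    P = np.hstack([np.ones((n, 1)), centers])  # columns: 1, x, y, z
    A[:n, n:] = P
    A[n:, :n] = P.T  # side conditions: weights orthogonal to linear polynomials
    b = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def f(x, centers, w, p):
    """Evaluate the interpolating implicit function at point x."""
    r = np.linalg.norm(centers - np.asarray(x, float), axis=1)
    return w @ phi(r) + p[0] + p[1:] @ np.asarray(x, float)

# toy example: six zero-valued surface points on a unit sphere
# plus one positive-valued interior point at the origin
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1], [0, 0, 0]], float)
vals = np.array([0, 0, 0, 0, 0, 0, 1.0])
w, p = solve_implicit(pts, vals)
```

Solving this one (n + 4) × (n + 4) linear system yields a function whose zero set passes exactly through the zero-valued constraints, with no approximation step.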

Properties shared with other implicit surfaces are their use for constructive solid geometry (CSG) and interference detection, interactive manipulation, approximation by polygonal tilings, and the ease of ray tracing. One of the strongest features is the ability to place points and specify normals directly on the surface, i.e. no approximating calculations are needed. A particle sampling technique introduced by Witkin and Heckbert for interactive sculpting is presented in a slightly modified version: the underlying data structure is an interpolating implicit surface rather than a common implicit surface built from blobby spheres and cylinders.

Towards the end a different and intuitive approach to blending is proposed, and two possible rendering techniques are presented: conversion to a polygonal mesh using a Marching Cubes variant called the continuation approach, and sphere tracing, a special version of ray tracing.
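Sphere tracing in particular is compact enough to sketch. The following is a generic version of the idea from [14]: step along the ray by f(x)/L, where L is a Lipschitz bound on f, so that no step can overshoot the surface. The function names and the unit-sphere stand-in for the implicit function are my own illustration.

```python
import numpy as np

def sphere_trace(f, origin, direction, lipschitz, t_max=100.0, eps=1e-6):
    """March along the ray origin + t*direction until f drops below eps.
    Each step f(x)/lipschitz is guaranteed safe because f cannot change
    faster than the Lipschitz constant allows."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    while t < t_max:
        dist = f(origin + t * d)
        if dist < eps:
            return t              # hit the surface
        t += dist / lipschitz     # largest provably safe step
    return None                   # ray missed the surface

# toy example: unit sphere at the origin; its signed distance function
# has Lipschitz constant 1
sdf = lambda x: np.linalg.norm(x) - 1.0
t_hit = sphere_trace(sdf, np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]), lipschitz=1.0)
```

This also answers what the Lipschitz constant is for: it converts a function value into a distance the ray may safely advance.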

 

Contribution to CG:

The fact that points on the surface may be placed freely is a very handy and intuitive capability for geometric and artistic modellers. Furthermore, interpolating implicit surfaces can represent smooth, closed surfaces of arbitrary topology, which as far as I know is otherwise only possible with subdivision surfaces; none of the other surface representation techniques provide this ability. As an immediate consequence, designers don’t have to worry about seams between the different patches that make up a closed surface, as they do with the other approaches to surface modelling.

What’s also nice are the three provided variations of constraints, namely interior, exterior and normal constraints, and the fact that these are not mutually exclusive and may well be used in combination, which gives the modeller even greater control and freedom during modelling. The approach to blending is also quite handy, considering that the reasoning behind it carries over to other areas where surface patches, or data in general, have to be merged.
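As a small illustration, all three constraint types can be expressed as (location, value) pairs feeding one and the same interpolation system; the concrete values ±1 and the offset trick for normal constraints are my reading of the paper, and the helper names are hypothetical.

```python
import numpy as np

def interior_constraint(p):
    # assumed convention: positive value inside the surface
    return [(np.asarray(p, float), 1.0)]

def exterior_constraint(p):
    # assumed convention: negative value outside the surface
    return [(np.asarray(p, float), -1.0)]

def normal_constraint(p, n, offset=0.01):
    # a zero-valued point on the surface plus a small positive-valued
    # point offset along the desired normal; this pins down the
    # gradient direction indirectly, as the paper describes
    p, n = np.asarray(p, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return [(p, 0.0), (p + offset * n, 1.0)]

# constraint types may be combined freely in one system:
constraints = (interior_constraint([0, 0, 0])
               + normal_constraint([1, 0, 0], [1, 0, 0]))
```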

 

On to the details:

The underlying mathematics is stated very clearly through the necessary equations such as (3), (5) and (8). With some background in matrix computations and variational calculus one can easily follow the authors’ arguments and explanations. What I missed was a more detailed description of the system of equations in (8): it seems that the weights should sum to zero, and it would have been nice to have an explanation for this. The same holds for the last three rows, where the weighted coordinates should sum to zero.
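A plausible explanation for those rows, reconstructed here from my own reading rather than quoted from the paper: writing the interpolant as

$$f(\mathbf{x}) = \sum_{j=1}^{n} w_j\,\phi(\lVert \mathbf{x}-\mathbf{c}_j\rVert) + p_0 + p_1 x + p_2 y + p_3 z,$$

the last four rows of (8) are exactly the side conditions

$$\sum_{j} w_j = 0,\qquad \sum_{j} w_j c_j^x = 0,\qquad \sum_{j} w_j c_j^y = 0,\qquad \sum_{j} w_j c_j^z = 0,$$

i.e. the weight vector is orthogonal to all linear polynomials. These conditions make the system square (n + 4 equations for the n weights plus the 4 polynomial coefficients) and, for a conditionally positive definite basis such as the thin-plate one, they are what keeps the variational (bending) energy of f finite and the solution unique.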

There are plenty of algorithmic and mathematical details given, so an implementation would be quite easy; assuming some sort of modelling framework already exists, this would take a small team about two months. According to section 3 the matrix computations amount to only about 100 lines of code, plus some more for the polygonalization routine from section 7; but this sounds somewhat too good to be true, and I suspect the real problems lie in details not mentioned in the paper. The explanation of sphere tracing could also be a little more detailed; two or three sentences on what exactly the Lipschitz constant is used for would have helped. To be honest, I had to peek into [14]. I would also have appreciated more comparisons to existing techniques, for instance to subdivision surfaces.

 

Favourites:

The detailed description in part 4 of the different constraints is what I liked most, since one can literally picture how the shapes of the provided examples change based on the type of constraint one applies. This is harder to do with the other surface representations like NURBS or even subdivision surfaces, although the latter are pretty neat and provide local control. Also impressive is the insight provided in section 5 into a concrete modelling environment, most especially its operations based on floaters and control points. No detailed information about the birth and death of floaters or the underlying mathematics and algorithms is presented, since this was already done by Witkin and Heckbert in an earlier publication, but the authors mention the necessary changes, such as switching the radial basis function from |x| to |x|^3. Above all, adding control points is really easy, since it amounts to intersecting the surface with a ray emitted from the eye position and passing through the cursor. This doesn’t affect the surface at all; only when the new point is moved does the shape change accordingly. This is a major advantage over the earlier implementation of Witkin and Heckbert: they use blobbies as the underlying model, so one must think about where to place the centre of a new sphere and how large its radius must be, which I find rather counterintuitive. A further advantage is that no approximation issues arise when interpolating implicit surfaces rather than common implicit surfaces like blobbies are used as the underlying data structure.
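That control-point insertion can be sketched as a simple root search along the view ray; the coarse-sampling-plus-bisection scheme and the unit-sphere stand-in for the implicit function below are my own illustration, not the paper's implementation.

```python
import numpy as np

def ray_surface_point(f, origin, direction, t_max=10.0, steps=200, iters=60):
    """Find a point where the ray origin + t*direction crosses f = 0:
    sample f coarsely to bracket a sign change, then bisect.
    The hit point can be added as a new zero-valued constraint,
    which leaves the interpolated surface unchanged."""
    d = direction / np.linalg.norm(direction)
    ts = np.linspace(0.0, t_max, steps)
    vals = [f(origin + t * d) for t in ts]
    for a, b, fa, fb in zip(ts, ts[1:], vals, vals[1:]):
        if fa * fb <= 0:                      # sign change brackets the surface
            for _ in range(iters):            # bisection refinement
                m = 0.5 * (a + b)
                fm = f(origin + m * d)
                if fa * fm <= 0:
                    b, fb = m, fm
                else:
                    a, fa = m, fm
            return origin + 0.5 * (a + b) * d
    return None                               # ray missed the surface

# toy example: eye ray towards a unit sphere
sdf = lambda x: np.linalg.norm(x) - 1.0
hit = ray_surface_point(sdf, np.array([0.0, 0.0, -3.0]),
                        np.array([0.0, 0.0, 1.0]))
```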

What I didn’t like was the quality of the interpolating implicit surfaces in images (3) and (5) resulting from converted polygonal data; in particular the human fist appears rather awkward and unnatural to me. Another drawback is that creases are not studied in the paper, which is a strength of subdivision surfaces, for example.

And what I especially did not like was the citation of Witkin and Heckbert’s paper in section 5 to show off how much worse their implicit approach performs compared to the authors’ own. If a method is good, which I think this one is, then such citations are unnecessary; the educated audience is very well able to grasp the advantage on its own.

Conclusion & Extensions:

There are several extensions the authors mention. First, already existing techniques for common implicit surfaces, such as critical point analysis, interval techniques, interactive texture placement and animation procedures, may most probably be ported in a straightforward manner. Next, sharp features such as edges and corners could be incorporated into an interpolating implicit model. Furthermore, the way the gradient is specified is not optimal: it is currently only fixed indirectly, via a few value constraints near the zero constraints, whereas one could try to specify it exactly. Turk and O’Brien also mention at the end that high-level interaction, like selecting and moving several points at once or placing several new constraints simultaneously, would provide greater flexibility and, as I see it, would definitely speed up production processes.

Another problem arises with surfaces having boundaries; maybe a second implicit function could be used here to specify the presence or absence of the surface. Finally, one could also think about a representation other than floaters for interactive modelling, such as a polygonal surface. With enough processing power this should run at interactive rates, which is definitely the case nowadays; back when the paper was written they probably meant just a couple of frames per second, which from my point of view would not have been enough.

I can also imagine using interpolating implicit surfaces for a kind of data compression: the idea is to have a working environment where one can save a rather coarse version of the final model together with the (system-specific) transformations leading to the final result. One could then load the coarse version later and replay the accompanying transformations, maybe on a different workstation. This of course immediately suggests using interpolating implicit surfaces for level-of-detail (LoD) representations, which could potentially speed up network communication in collaborative environments.

Considering the area of animation, imagine creating morphing or melting effects by using exterior constraints that change over time, similar to existing metaball animations.

What’s also possible is the simulation of (fractal) plants or comparable structures, as in picture two, left side.

My guess is also that Pixar incorporated interpolating implicit surfaces into their production environment as an extension; most probably into RenderMan.