Compatibility

This software has been built and run successfully on an SGI Indigo2 running IRIX 5.3 with version 4.0 of SGI's C++ compiler. It is known not to build on an SGI Indigo2 running IRIX 6.2 with C++ compiler version 7.0 or 7.1. There has been no testing on other platforms.

Contents

This SD distribution contains all of the source for the program, a Makefile, two Open Inventor format .iv files, the libcfitsio.a library and its header files, and a FITS format data file. The distribution version is tailored for analyzing FITS format data on the hemisphere as an example. View the complete file list for more information.

Using SD

The distribution version of SD is capable of mapping flat and spherical data from FITS type input files onto a hemisphere and performing spherical wavelet transforms on them. The FITS format is used largely in the astronomical community, and the original version of SD was geared toward studying magnetic anomalies on the sun. It is easy to modify SD for other applications. The FITS capability was included as an example of how to use SD.

SD Walkthrough

Once you have SD built, follow these directions for a demonstration of the program's capabilities.

Start SD with the command:

sd -f hemi.iv -F m911101.fits -s 6 -g y -b Y

This will open the SD display window containing a black and white speckled hemisphere. This color mapping represents line of sight magnetic field strength on the sun.

Select the arrow from the toolbar. Press the PAGE UP and PAGE DOWN buttons on the keyboard to view different scale spaces of the spherical wavelet transform of this data. Also experiment with HOME and END to see the wave spaces. Pressing PRINT SCRN will dump the current image into a file called dump.rgb in the working directory. The keyboard controls may only be used while the arrow button is selected.

SD has other controls that affect the view of the scene. Select the hand button on the toolbar. Click on the hemisphere and "give it a spin", that is, let go of the button while moving the mouse. The scene should start rotating. Click the question mark button on the toolbar for a complete description of the capabilities of the viewer.

SD is a general package for subdivision and has many functions not related to wavelets. The menus contain a variety of options that will not be documented here.

Command Line Options

In order to map data onto the hemisphere, you must specify some options from the command line. Here is a summary of all command line options and valid arguments.

Notes:
    The provided geometry files - hemi.iv and tetra.iv - subdivide into a hemisphere and a sphere respectively. Using the Butterfly basis function and the Butterfly Sphere geometry has produced the best results thus far. Note that two input formats are supported. The -F and -B flags map 2D data sets onto the front and back of the sphere. The -S flag indicates a spherical input data set; it maps onto the entire sphere. The -F and -B flags may be used simultaneously, but flat and spherical data sets should not be mixed.
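For example, assuming two flat FITS data sets front.fits and back.fits (placeholder file names, not files shipped with this distribution), both sides of the sphere could be loaded at once with a command along the lines of

sd -f tetra.iv -F front.fits -B back.fits -s 6 -g y -b Y

where the remaining options are simply carried over from the walkthrough command above.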

Interactive Controls

Once SD is loaded, there are several interactive controls available.

PAGE UP, PAGE DOWN
Display the next higher or next lower scale space.
HOME, END
Display the next higher or next lower wave space.
PRINT SCREEN
Dump the current image into a file called dump.rgb in the current directory.

To use any of these, the arrow icon must first be selected on the toolbar. Note that the .rgb format is native to SGI computers and can easily be converted into another format by an image manipulation program like xv.

Modifying SD

Typical applications of the spherical wavelet code will require some specific functionality which is not included in the standard distribution. Here we give a few examples of the kinds of modifications users typically want to make. They should serve both as recipes for the particular changes described here and as templates for other modifications that fall into the same categories.

Changing the color mapping

The code itself only knows about coefficients, without regard to how these are mapped to colors for final display. A single function performs this mapping, and it is the one to change for a custom color scheme. The function is called ColorMapper and lives in seed.C:


CVec ColorMapper( float c ) {
  // rescale so that the original data range [_depth, _height] maps to [0,1]
  float a = ( c - _depth ) / ( _height - _depth );

  // limit to range [0,1]
  a = min( 1.f, max( 0.f, a ) );

  // linear ramp of gray
  return CVec( a, a, a );
}

The function takes a float argument and rescales it so that the range of the original input data maps onto [0,1]. The largest input sample is in the variable _height, while the smallest is in the variable _depth.

Note that the rescaling alone does not guarantee that a lies in [0,1], since operations on the data can push individual values outside the range [_depth, _height]. That's why we clip to [0,1] (values outside that range are undefined as far as the color mapping is concerned). In the above case we simply map the values onto a linear ramp of gray. Here is another version:


CVec ColorMapper( float c ) {
  // gray level corresponding to a coefficient value of 0
  static float mid = ( 0 - _depth ) / ( _height - _depth );
  float a = ( c - _depth ) / ( _height - _depth );

  // limit to range [0,1]
  a = min( 1.f, max( 0.f, a ) );

  // offset the color channels around the gray level mid
  return CVec( mid + max( a - mid, 0.f ), mid, mid + min( 0.f, a - mid ) );
}

This version will map 0 to gray, negative values to (increasingly) blue, and positive values to (increasingly) red. Other things to try are logarithmic mappings, for example; that should be easy to do following the above examples.
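As a minimal sketch (assuming the same _depth and _height globals and the surrounding context of seed.C), a logarithmic gray mapping might look like this:

CVec ColorMapper( float c ) {
  // rescale to [0,1] as before
  float a = ( c - _depth ) / ( _height - _depth );
  a = min( 1.f, max( 0.f, a ) );

  // compress the dynamic range with a logarithm; the constant 9 is an
  // arbitrary choice which keeps the result in [0,1]
  a = logf( 1.f + 9.f * a ) / logf( 10.f );

  return CVec( a, a, a );
}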

Computing derived quantities

In some applications it is interesting to compute derived quantities of the coefficients. For example, one might want to compute the (little) l2 norm of all coefficients at a certain level. Here we give an example of how to compute such a quantity using the high level helper functions provided in this package. Another example we give is non-linear processing, such as attenuation or enhancement of different scales.

In order to make coefficient manipulations possible without messing things up hopelessly, the code always maintains an original copy of all coefficients computed at the beginning of time. These are stored in the so-called backing store. Here is an example from main.C. First we need a little helper function which will do the work. The built-in class structures then support having this helper function evaluated for a given set of coefficients. Here is the helper function to compute l2 norms:


static float _l2;
static int _ncoeff;

void L2Norm( TTree *t, const Edge e ){
  // accumulate the squared coefficient stored at the midpoint of edge e
  _l2 += sqr( EdgeMidpoint( t, e )->c() );
  _ncoeff++;
}

First we set up an accumulator _l2 and a counter _ncoeff. The function L2Norm is passed a given coefficient in the form of a triangle and edge name. The coefficient can be accessed as a member. Here we simply square the coefficient and add it to our accumulator while counting the total number of coefficients. This function is used by the following higher level function:

void ComputeAllNorms( Mesh *mesh ){
  // initialize helper variables
  _ncoeff = 0; _l2 = 0;
  // now call the helper function on all vertices at level 0
  mesh->EvalVertex( 0, L2Norm );
  cout << "L2 norm of scaling coefficients at level 0 "
       << sqrtf( _l2 / _ncoeff ) << endl;

  // all other levels have only detail coefficients
  for( int i = 1; i <= mesh->Depth(); i++ ){
    // remember to initialize
    _ncoeff = 0; _l2 = 0;
    // details sit at odd vertices of a given level
    mesh->EvalOddVertex( i, L2Norm );
    cout << "L2 norm of wavelet coefficients at level " << i << " "
         << sqrtf( _l2 / _ncoeff ) / ( 1 << i ) << endl;
  }
}

The critical pieces here are the two functions EvalVertex and EvalOddVertex. They each take a level number and a function pointer to be applied to some chosen set of vertices. Vertices are the places where we keep coefficients (and lots of other info). EvalVertex calls its helper function argument on all vertices at a given level. This is semantically equivalent to saying that the function will be called for all vertices that hold scaling and detail coefficients between level 0 (coarsest) and the level given as the first argument. EvalOddVertex is very similar, except it calls the helper function only on those vertices which were added when going from level i-1 to level i. Semantically these are the vertices that hold the wavelet coefficients of level i.
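As another small sketch of the same pattern (the helper MaxCoeff, the global _maxc, and the wrapper PrintMaxCoeff are ours, not part of the distribution), one could report the largest wavelet coefficient magnitude at a given level:

static float _maxc;

void MaxCoeff( TTree *t, const Edge e ){
  // track the largest coefficient magnitude seen so far
  float c = fabsf( EdgeMidpoint( t, e )->c() );
  if( c > _maxc ) _maxc = c;
}

void PrintMaxCoeff( Mesh *mesh, int level ){
  _maxc = 0;
  // visit only the vertices added at this level, i.e., the wavelet
  // coefficients of the given level
  mesh->EvalOddVertex( level, MaxCoeff );
  cout << "largest wavelet coefficient magnitude at level " << level
       << " " << _maxc << endl;
}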

Other examples of norm computations can be found in main.C in the functions LoadVSpace and LoadWSpace.

An example of a non-linear function is the application of power-law scaling to the coefficients. This corresponds to smoothing the data or enhancing ("sharpening") it. Both have examples in main.C. We'll look here only at the Smoothe function:


void Smoothe( void *mesh ){
  Mesh *m = (Mesh *) mesh;
  // first get all coefficients from backing store
  m->EvalVertex( -1, CopyCb2C );
  // now traverse them level by level applying a power law scaling to them
  // note that we leave the scaling coefficients at level 0 alone
  for( int i = 1; i <= m->Depth(); i++ ){
    // simple 2^{-i} scaling as an example
    _scalefactor = 1.f / ( 1 << i );
    m->EvalOddVertex( i, CoeffScale );
  }
  // now do an inverse transform to see what we got
  m->Synthesis( 1, m->AllEvenLevel() );
  cout << "smoothed with 2^{-i}\n";
}

Note that we call EvalVertex at the beginning with a level argument of -1. This corresponds to "the finest level, whatever it is." Doing so with the helper function CopyCb2C

void CopyCb2C( VertInd vi ) {
  Vertex *v = vi.v();
  // copy the backing store coefficient (cb) into the working slot (c)
  v->c() = v->cb();
}

brings a fresh copy of coefficients from backing store (cb) into the slots used by the subsequent calls (and by the drawing routines!). After that we go over all the odd levels, that is to say, over all the wavelet coefficients, and apply a scaling to them using yet another small helper function

static float _scalefactor = 1;

void CoeffScale( TTree *t, const Edge e ) {
  // scale the wavelet coefficient stored at the midpoint of edge e
  EdgeMidpoint( t, e )->c() *= _scalefactor;
}

No surprise here. Finally we do a Synthesis (a built-in function which uses the "c" coefficient slots) to display the results.
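Enhancement works the same way, just with the scaling going in the other direction. The following is only a sketch (main.C contains its own version, which may differ), reusing the helper functions shown above:

void Enhance( void *mesh ){
  Mesh *m = (Mesh *) mesh;
  // start from a fresh copy of the original coefficients
  m->EvalVertex( -1, CopyCb2C );
  // amplify the wavelet coefficients, more strongly at finer levels
  for( int i = 1; i <= m->Depth(); i++ ){
    // simple 2^{i} scaling as an example of "sharpening"
    _scalefactor = (float)( 1 << i );
    m->EvalOddVertex( i, CoeffScale );
  }
  // inverse transform to see the result
  m->Synthesis( 1, m->AllEvenLevel() );
  cout << "enhanced with 2^{i}\n";
}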

These functions are invoked through the user interface. If you want to add more functions of this nature and bind them quickly to some keyboard key (not the best UI, but feel free to be fancier), take a look at the file ui.C. There you will find a function called KeyPressCB which handles all the mapping of keys to functions. Adding a new hook is a simple exercise in cut and paste.
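Purely as a hypothetical illustration (the actual structure of KeyPressCB and the key symbol names are assumptions; copy an existing case from ui.C rather than typing this verbatim), a new binding might look like:

  // inside KeyPressCB in ui.C: dispatch an assumed key 'e' to our function
  case XK_e:
    Enhance( (void *) mesh );  // "mesh" stands for whatever mesh pointer ui.C has in scope
    break;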


Copyright © 1998 Peter Schröder. Last modified: Sat Feb 28 15:22:14 PST 1998