|
We regularly invite both local and visiting speakers to give talks as part of our computer graphics seminar series. Past visitors include Pat Hanrahan, Marshall Bern, Jason Mitchell, Chas Boyd, Bill Mark, Jonathan Shewchuk, Matthew Papakipos, Steven Gortler, Eric Veach, Wook, Ken Musgrave, John Hughes, Leif Kobbelt, and Bernd Frölich, among others. The most recent visitors are listed below. |
Pat Hanrahan, Department of Computer Science, Stanford University,
Monday, November 25th 2002, 12:30pm - 2pm, 123 Lauritsen; Streaming
Scientific Computing on GPUs; Graphics processors are very fast and
powerful. A modern GPU has 8 parallel programmable pipelines running 4-way
vector floating-point instructions and achieves over 50 GFLOPS. Graphics
processors use VLSI resources efficiently because they use data parallelism
and carefully [...]. A natural question to ask is whether other important
algorithms can [...] |
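A rough feel for the streaming model behind this question can be given on the CPU: a computation is written as a kernel applied independently to every element of its input streams, which is exactly the data parallelism a GPU's programmable pipelines exploit. The NumPy sketch below is only illustrative; the kernel and names are ours, not from the talk.

```python
# A minimal CPU sketch of the "stream programming" style: a kernel is mapped
# independently over every element of its input streams, with no data
# dependences between elements. This is the kind of computation that maps
# naturally onto programmable GPU pipelines. Names here are illustrative.
import numpy as np

def saxpy_kernel(a, x, y):
    """Kernel applied elementwise: out[i] = a * x[i] + y[i]."""
    return a * x + y

# Two input "streams" of one million elements each.
x = np.random.rand(1_000_000).astype(np.float32)
y = np.random.rand(1_000_000).astype(np.float32)

out = saxpy_kernel(2.0, x, y)   # one pass over the streams
print(out[:4])
```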
Marshall Bern, Palo Alto Research Center (PARC), Wednesday,
November 13th 2002, 4 - 5pm, 070 Moore; Two Computer Vision Problems
in Structural Biology; I will talk about two computer vision / image
processing problems, [...] |
Jason L. Mitchell, ATI Research, Monday, October 28th 2002,
12:30pm - 2pm, 123 Lauritsen; Hacking Next-Generation Programmable Graphics
Hardware; This lecture will describe the latest generation of
programmable [...]. We will also demonstrate
the ability to perform sophisticated image processing, such as real-time
transformation to and from the frequency domain with a Fourier transform
executed on the GPU. Finally, the importance of high-level shading [...] |
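The frequency-domain demo mentioned above is easy to mirror on the CPU. The sketch below is only a NumPy reference for the mathematics being performed (transform, filter, inverse transform); it is not the GPU implementation from the lecture.

```python
# CPU reference for the image-processing demo described above: transform an
# image to the frequency domain, attenuate high frequencies, and transform
# back. The GPU version performs the same mathematics in fragment programs.
import numpy as np

img = np.random.rand(256, 256)              # stand-in for a 256x256 grayscale image

F = np.fft.fftshift(np.fft.fft2(img))       # frequency domain, DC term centered

# Simple ideal low-pass filter: keep frequencies within a radius of 30 texels.
yy, xx = np.mgrid[-128:128, -128:128]
mask = (xx**2 + yy**2) <= 30**2

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
print(filtered.shape)
```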
Chas Boyd, Program Manager for Direct3D, Microsoft Corporation, Wednesday,
October 23rd 2002, 12:30pm - 2pm, 123 Lauritsen, with a hands-on lab with lead
developers of Direct3D, 4-6pm, Jorgensen Intel Lab 154; Plug into
the Power of X!; Fully programmable graphics cards with full floating-point
support enable a [...] |
Bill Mark, nVidia Corp. and UT Austin, Monday, October 21st
2002, 12:30pm - 2pm, 123 Lauritsen; Programmable Graphics Hardware:
Beyond Real-Time Movie Rendering; The latest generation of 3D PC graphics
hardware (GPUs) includes highly programmable floating-point vertex and
pixel-fragment processors. These processors are flexible enough to support
high-level [...] |
Jonathan Shewchuk, Department of Electrical Engineering &
Computer Sciences, University of California at Berkeley, Wednesday, October
9th, 4 - 5pm, Moore 070; Constrained Delaunay Tetrahedralizations and
Provably Good Mesh Generation; Unstructured meshes of Delaunay or
Delaunay-like tetrahedra have many advantages in such applications as
rendering and visualization, interpolation, and numerical methods for
simulating physical phenomena such as mechanical deformation, heat transfer,
and fluid flow. One of the most difficult parts of tetrahedral mesh
generation is [...]. My solution to this problem combines a theory about the
existence of [...]. The boundary recovery step is the first step for a mesh
generation [...] |
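The Delaunay property at the heart of these meshes is simple to check in a small experiment. The sketch below uses SciPy's ordinary (unconstrained) 2D Delaunay triangulation and verifies the empty-circumcircle criterion; constrained Delaunay tetrahedralization and boundary recovery, the actual subject of the talk, are far beyond this toy.

```python
# Verify the Delaunay criterion in 2D: no input point lies strictly inside the
# circumcircle of any triangle of the Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def circumcircle(a, b, c):
    """Circumcenter and circumradius of triangle (a, b, c) in the plane."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(a - center)

pts = np.random.rand(50, 2)
tri = Delaunay(pts)

worst = -np.inf
for simplex in tri.simplices:
    center, r = circumcircle(*pts[simplex])
    others = np.delete(np.arange(len(pts)), simplex)
    worst = max(worst, np.max(r - np.linalg.norm(pts[others] - center, axis=1)))

# Non-positive up to round-off: no other point falls inside any circumcircle.
print("largest circumcircle violation:", worst)
```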
Matthew Papakipos, Director of Architecture, nVidia Corp., Monday, October 7th 2002, 1pm, 123 Lauritsen; How Long Can Graphics Chips Exceed Moore's Law?; A few short years ago, single-chip PC 3D graphics solutions arrived on the market at performance levels that rivaled professional workstations with multi-chip graphics pipelines. Since then, graphics performance has grown at a rate approaching a doubling every 6 months, far exceeding Moore's Law. How
is this possible? Will it be sustainable? There is evidence that [...] |
Alex Keller, Numerical Algorithms
Group, Dept. of Computer Science, University of Kaiserslautern,
July 30th through August 3rd, 9:30 a.m. - 12:00 noon,
Powell-Booth 100;
Beyond Monte Carlo;
Monte Carlo methods are based on probability theory and are realized by simulations of random
numbers. Quasi-Monte Carlo algorithms are based on number theory and are realized by deterministic
low discrepancy points. Using low discrepancy sampling in the right way yields much faster
rendering algorithms.
This course presents new and strikingly simple algorithms for the efficient generation of deterministic and randomized low discrepancy point sets, introduces the principles of quasi-Monte Carlo integration and Monte Carlo extensions of quasi-Monte Carlo algorithms, and finally provides practical insight through example hardware and software rendering algorithms that benefit from low discrepancy sampling. The course is accessible to an audience with a basic understanding of Monte Carlo software and hardware rendering algorithms.
The tutorial will teach the following topics: the concept of low discrepancy sampling; quasi-Monte Carlo integration techniques; Monte Carlo extensions of quasi-Monte Carlo integration; and beneficial applications of deterministic and randomized low discrepancy sampling to hardware and software rendering algorithms. Participants will learn how to construct simpler and much faster rendering algorithms by using number-theoretic sampling methods. The course provides the simple algorithms, insight into the underlying theory, and application examples for hardware and software rendering. The course schedule is as follows: [...]
|
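A concrete taste of the material: the sketch below generates a deterministic low discrepancy point set (the Halton sequence via the radical inverse) and applies one standard randomization, a Cranley-Patterson rotation, as an example of a Monte Carlo extension of a quasi-Monte Carlo rule. It is illustrative only and not taken from the course notes.

```python
# Deterministic low discrepancy points (Halton sequence) plus a simple
# randomization (Cranley-Patterson rotation), used here for a quasi-Monte
# Carlo estimate of an integral over the unit square.
import numpy as np

def radical_inverse(i, base):
    """Mirror the digits of i in the given base about the radix point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence in [0,1)^len(bases)."""
    return np.array([[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)])

pts = halton(1024)                           # deterministic low discrepancy points
shift = np.random.rand(pts.shape[1])
randomized = (pts + shift) % 1.0             # Cranley-Patterson rotation

# Quasi-Monte Carlo estimate of the integral of f over the unit square.
f = lambda p: np.cos(2 * np.pi * p[:, 0]) * p[:, 1]
print("QMC estimate:", f(randomized).mean())  # exact value is 0
```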
Igor Guskov, Program in Applied Mathematics, Princeton University,
Thursday, February 18, 4:30pm, Beckman Institute Auditorium;
Irregular Subdivision and Signal Processing for Arbitrary Surface
Triangulations;
Recent progress in 3D acquisition techniques and mesh simplification
methods has made triangulated mesh hierarchies of arbitrary topology a
basic geometric modeling primitive. These meshes typically have no
regular structure so that classical processing methods such as Fourier
and Wavelet transforms do not immediately apply.
In this talk I will report on some very recent work aimed at building signal-processing-type algorithms for unstructured surface triangulations. In particular, I will introduce a new non-uniform relaxation technique which lets us build a Burt-Adelson type detail pyramid on top of a mesh simplification hierarchy (the Progressive Meshes of Hoppe). The resulting multiresolution hierarchy makes it easy to perform a full range of standard signal processing tasks such as smoothing, enhancement, filtering, and editing of arbitrary surface triangulations. I will explain the basic components of our approach and the motivation behind it, and show some examples demonstrating the power of our method. This is joint work with Wim Sweldens and Peter Schröder. |
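For orientation only: the sketch below implements the much simpler uniform "umbrella" relaxation step on an irregular triangle mesh, not the non-uniform weights of the talk, to show what one relaxation pass over an unstructured triangulation looks like.

```python
# One uniform "umbrella" Laplacian smoothing step on an irregular triangle
# mesh: each vertex moves part of the way toward the average of its edge
# neighbors. (The talk's relaxation uses carefully chosen non-uniform weights.)
import numpy as np

def umbrella_smooth(vertices, triangles, lam=0.5):
    """One explicit relaxation step on a triangle mesh (n x 3 vertex array)."""
    edges = set()
    for tri in triangles:
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edges.add((min(tri[a], tri[b]), max(tri[a], tri[b])))
    neighbor_sum = np.zeros_like(vertices)
    degree = np.zeros(len(vertices))
    for i, j in edges:
        neighbor_sum[i] += vertices[j]
        neighbor_sum[j] += vertices[i]
        degree[i] += 1
        degree[j] += 1
    laplacian = neighbor_sum / degree[:, None] - vertices
    return vertices + lam * laplacian

# Tiny example: a square split into two triangles, one corner lifted.
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0.5]], dtype=float)
T = np.array([[0, 1, 2], [0, 2, 3]])
print(umbrella_smooth(V, T))
```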
Vibeke Sorensen, USC School of Cinema and Television, Wednesday, May 20, 4pm, Jorgensen 74; Recent Explorations in Computer Art and Animation; Vibeke Sorensen will be showing and discussing her work in computer art and animation, focusing on her recent interactive and collaborative work. This includes stereoscopic animation (work-in-progress and Maya, 1993) and software (DrawStereo, 1993/98), and interactive web-based work, including "MindShipMind." The latter is a collaboration with Austrian composer Karlheinz Essl, based on the writings of 30 artists and scientists at a three-week seminar called "Order, Complexity, and Beauty" held at the MindShip in Copenhagen, Denmark, in 1996. She will also discuss her "Global Visual Music Jam Session," a collaboration with UC San Diego Music Department professors Miller Puckette (mathematician and computer scientist) and Rand Steiger (composer). They are developing a new multimedia programming language which allows users to combine 2D and 3D computer graphics and animation, digital video, and computer sound and music for real-time, improvised multimedia performance. Finally, she will review her "Display Technology for Computer Art" project, in which she is working with USC Chemistry professor and Caltech Chemistry Department alumnus Dr. Mark Thompson on the development of new light-emitting displays for still and moving images. |
Kari Pulli, Stanford University, Tuesday, May 19, 10:30am - 12 noon, Moore 80; Scanning and Displaying Colored 3D Objects; In this talk I will describe two projects related to scanning and displaying 3D objects. The first project covers my thesis work completed at the University of Washington. In this project, we used stereo with structured light to capture the geometry and color of 3D objects. Several views of the object were then registered into a single coordinate system, and an initial surface estimate was created using space carving. This initial estimate was then refined using mesh optimization techniques. Finally, the color and geometry information was combined using view-dependent texturing. The second project I will discuss is the Digital Michelangelo Project at Stanford University. I will discuss how we plan to scan several Michelangelo sculptures, where we are now, and what kinds of problems we foresee. |
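One step of the pipeline described above, registering scans into a single coordinate system, can be sketched with the classic SVD-based rigid alignment (Kabsch/Horn), assuming point correspondences are already known; a real system would estimate those correspondences too (e.g., with an ICP loop). This is a generic illustration, not the project's code.

```python
# Best rigid transform (rotation R, translation t) aligning point set P to Q,
# given known correspondences, via the SVD (Kabsch/Horn) solution.
import numpy as np

def rigid_align(P, Q):
    """Return R, t minimizing sum ||R @ P[i] + t - Q[i]||^2 over rotations R."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Synthetic check: rotate and translate a "scan", then recover the transform.
P = np.random.rand(100, 3)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(P @ R.T + t, Q))
```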
Steven Seitz, Microsoft Research, Tuesday, May 12, 10:30am, Moore 80; Viewing and Manipulating 3D Scenes Through Photographs; The problem of acquiring and manipulating photorealistic visual models of real scenes is a fast-growing new research area that has spawned successful commercial products like Apple's QuickTime VR. An ideal solution is one that enables (1) photorealism, (2) real-time user control of viewpoint, and (3) changes in illumination and scene structure. In this talk I will describe recent work that seeks to achieve these goals by processing a set of input images (i.e., photographs) of a scene to effect changes in camera viewpoint and 3D editing operations. Camera viewpoint changes are achieved by manipulating the input images to synthesize new scene views of photographic quality and detail. 3D scene modifications are performed interactively, via user pixel edits to individual images. These edits are automatically propagated to other images in order to preserve physical coherence between different views of the scene. Because all of these operations require accurate correspondence, I will discuss the image correspondence problem in detail and present new results and algorithms that are particularly suited to image-based rendering and editing applications. |
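The view-synthesis idea rests on correspondences; as a minimal illustration (not the talk's algorithm), for rectified, parallel views, linearly interpolating corresponding image points gives their positions in a physically valid in-between view, as in view morphing. The sketch below shows only that interpolation step; the correspondence search and the image warp itself are omitted.

```python
# Linear interpolation of corresponding image points between two rectified
# (parallel) views; s = 0 gives the left view, s = 1 the right view.
import numpy as np

def interpolate_view(pts_left, pts_right, s):
    """Positions of the corresponding points in the view at parameter s in [0, 1]."""
    return (1.0 - s) * pts_left + s * pts_right

# Corresponding feature locations (x, y) in the two input photographs.
left  = np.array([[120.0, 80.0], [200.0, 95.0], [160.0, 150.0]])
right = np.array([[100.0, 80.0], [185.0, 95.0], [140.0, 150.0]])
print(interpolate_view(left, right, 0.5))   # halfway view
```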
Leo Guibas, Stanford, Friday, March 27, 3pm, Jorgensen 74; Kinetic Data Structures; Suppose we are simulating a collection of continuously moving bodies, rigid or deformable, whose instantaneous motion follows known laws. As the simulation proceeds, we are interested in maintaining certain quantities of interest (for example, the separation of the closest pair of objects), or detecting certain discrete events (for example, collisions -- which may alter the motion laws of the objects). In this talk we will present a general framework for addressing such problems and tools for designing and analyzing relevant algorithms, which we call kinetic data structures. The resulting techniques satisfy three desirable properties: (1) they exploit the continuity of the motion of the objects to gain efficiency, (2) the number of events processed by the algorithms is close to the minimum necessary in the worst case, and (3) any object may change its `flight plan' at any moment with a low cost update to the simulation data structures. |
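A toy kinetic data structure conveys the flavor: maintain the sorted order of points moving with constant velocity on a line, where each adjacent pair carries a certificate with a computable failure time, and the structure changes only when a certificate fails. The sketch below is illustrative and not from the talk.

```python
# Kinetic sorted list: points move with constant velocity on a line; each
# adjacent pair has a certificate "left stays left of right" whose failure
# time is scheduled in an event queue. At each event the pair swaps and only
# the few affected certificates are rescheduled.
import heapq

def cross_time(p, q):
    """Time at which p (currently left of q) catches up with q, or None."""
    (xp, vp), (xq, vq) = p, q
    if vp <= vq:
        return None                              # certificate never fails
    return (xq - xp) / (vp - vq)

def kinetic_sort(points, t_end):
    """points: list of (position at t=0, velocity). Returns the order at t_end."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])  # order at t = 0
    events = []                                  # (failure time, slot, left id, right id)

    def schedule(slot):
        if 0 <= slot < len(order) - 1:
            t = cross_time(points[order[slot]], points[order[slot + 1]])
            if t is not None and t <= t_end:
                heapq.heappush(events, (t, slot, order[slot], order[slot + 1]))

    for slot in range(len(order) - 1):
        schedule(slot)

    while events:
        t, slot, left, right = heapq.heappop(events)
        if order[slot] != left or order[slot + 1] != right:
            continue                             # stale certificate
        order[slot], order[slot + 1] = right, left   # the pair swaps places at time t
        for s in (slot - 1, slot, slot + 1):     # only local certificates are affected
            schedule(s)
    return order

pts = [(0.0, 3.0), (1.0, 1.0), (2.0, 0.5), (5.0, -1.0)]
print(kinetic_sort(pts, t_end=10.0))             # -> [3, 2, 1, 0]
```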
Herbert Edelsbrunner, UIUC, Friday, March 20, 3pm,
Baxter Lecture Hall; Complex Geometry for Modeling Biomolecules;
The use of geometric models for molecular conformations dates back at least to Lee and Richards, who in the 1970s defined the solvent-accessible (SA) model as the union of spherical balls representing atoms. Soon after, Richards and Greer introduced the molecular surface (MS) model as a smooth and possibly more realistic variant of the SA model. We will introduce the new molecular skin (SK) model, which is similar to the MS model but has an additional symmetry relevant to studying questions of complementarity.
This talk introduces the alpha complex as the dual of the Voronoi decomposition of an SA model. The complex is a combinatorial object that leads to fast and robust algorithms for visualizing and analyzing geometric models of molecules. As an example, we will see that the alpha complex can be used to compute the precise volume and surface area of an SA model without constructing it. The alpha complex also offers a direct method for defining and computing cavities of molecules. Recent biological studies provide evidence for the physical relevance of this cavity definition. |
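The unweighted 2D analogue of the alpha complex is easy to compute and conveys the idea; the talk's construction handles weighted points (atoms with individual radii), which this sketch does not. Here a Delaunay triangle is kept exactly when its circumradius is at most alpha.

```python
# Unweighted 2D "alpha complex" of a point set: keep the Delaunay triangles
# whose circumradius does not exceed alpha. (The weighted version used for
# molecules, with per-atom radii, is not handled here.)
import numpy as np
from scipy.spatial import Delaunay

def circumradius(a, b, c):
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
    area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
    return la * lb * lc / (4.0 * area) if area > 0 else np.inf

def alpha_complex_2d(points, alpha):
    """Triangles of the Delaunay triangulation with circumradius <= alpha."""
    tri = Delaunay(points)
    keep = [s for s in tri.simplices if circumradius(*points[s]) <= alpha]
    return np.array(keep)

pts = np.random.rand(200, 2)
print(len(alpha_complex_2d(pts, alpha=0.1)), "triangles survive at alpha = 0.1")
```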
Michael Gleicher, Autodesk, Wednesday, February 25, 4pm, Jorgensen 74; Editing and Retargetting Animated Motion with Spacetime Constraints; Most motion for computer animation is single purpose: it applies to a particular character performing a particular action. In this talk, I will describe work on making motion more reusable by providing tools that adapt previously created motions to new situations. The approach views the task of finding an adapted motion as a constrained optimization problem: compute the motion that best preserves the desirable properties of the original, subject to meeting the demands of the new situation. The approach is a variation of Spacetime Constraints as it requires a solver to consider the entire motion simultaneously. By careful choice in how we pose the problem, by judicious use of simplifications and approximations, and by careful implementation, the approach can be made practical. I will show how the approach can be used to provide direct-manipulation editing of animated motion and to retarget motions to new characters. |
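A toy version of the optimization setup: treat one joint value over time as the unknown curve, preserve the original curve's second differences (its accelerations), and constrain a few frames to edited values. The real system solves for entire character motions; the SciPy sketch below, our own simplification, only illustrates the problem formulation.

```python
# Toy spacetime-constraints-style edit of a 1D motion curve: keep the edited
# curve's accelerations close to the original's while forcing a few frames
# (the edit) to prescribed values.
import numpy as np
from scipy.optimize import minimize

frames = 30
t = np.linspace(0.0, 1.0, frames)
original = np.sin(2 * np.pi * t)                  # stand-in for one captured joint curve

edits = {0: original[0], frames - 1: original[-1], 15: 1.5}   # keep ends, drag frame 15

def objective(x):
    # Deviation of the edited curve's second differences from the original's.
    return np.sum((np.diff(x, 2) - np.diff(original, 2)) ** 2)

constraints = [{"type": "eq", "fun": (lambda x, i=i, v=v: x[i] - v)}
               for i, v in edits.items()]
result = minimize(objective, original, constraints=constraints, method="SLSQP")

edited = result.x                                  # absorbs the edit smoothly over the whole motion
print(edited[15], edited[0], edited[-1])           # ~1.5 at frame 15, endpoints preserved
```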
Copyright © 2003 Peter Schröder Last modified: Thurs Dec 19th 10:22:14 PST 2002 |