Current cameras capture instants in time and space. But there is something just beyond the reach of the camera that our minds seem to capture quite well: moments. I will offer at least an informal definition of what I am calling a moment. I'll also discuss recent technology that lets us use current cameras plus processing to capture moments. Finally, I'll argue for how I imagine future cameras, editing systems, and display paradigms should support this new class of visual artifact.
Michael F. Cohen <http://www.research.microsoft.com/users/cohen/> , Senior Researcher, joined Microsoft Research in 1994 from Princeton University, where he served on the faculty of Computer Science <http://www.cs.princeton.edu> . Michael received the 1998 SIGGRAPH Computer Graphics Achievement Award <http://www.siggraph.org/s98/conference/keynote/index.html> for his contributions to the radiosity method for image synthesis. Dr. Cohen also served as papers chair for SIGGRAPH '98 <http://www.siggraph.org/s98/> .
Michael received his Ph.D. in 1992 from the University of Utah <http://www.cs.utah.edu> . He also holds undergraduate degrees in Art and Civil Engineering from Beloit College <http://www.beloit.edu> and Rutgers University <http://www.rutgers.edu> , respectively, and an M.S. in Computer Graphics <http://www.graphics.cornell.edu> from Cornell. Dr. Cohen also served on the Architecture faculty at Cornell University and was an adjunct faculty member at the University of Utah. His work at the University of Utah focused on spacetime control for linked-figure animation. He is perhaps better known for his work on the radiosity method for realistic image synthesis, as discussed in his recent book "Radiosity and Realistic Image Synthesis" <http://www.apcatalog.com/cgi-bin/AP?ISBN=0121782700&LOCATION=US&FORM=FORM2> (co-authored by John R. Wallace). Michael has published and presented his work internationally in these areas.
At Microsoft, Dr. Cohen has worked on a number of projects ranging from image-based rendering, to animation, to camera control, to more artistic non-photorealistic rendering. One project focuses on the problem of image-based rendering: capturing the complete flow of light from an object for later rendering from arbitrary vantage points. This work, dubbed "The Lumigraph," is analogous to creating a digital hologram. He has since extended this work through the construction of "Layered Depth Images" that allow manipulation on a PC. Michael is also continuing his work on linked-figure animation. He and colleagues have been focusing on means to allow simulated creatures to portray their emotional state (e.g., a happy walk vs. a sad walk) and to automatically transition between verbs. Recent work also includes creating new methods for low-bandwidth teleconferencing, technologies for combining a set of "image stacks," and new approaches to low-level stereo vision. His current work focuses on computational photography and video.