
FutureLight was a motion picture visual effects research and development company, founded in the 1990s as the development arm of VisionArt.[1] It was headed by Rob Bredow, who would go on to become the head of Industrial Light & Magic.[2] FutureLight was responsible for a number of groundbreaking VFX technologies that helped transform the film industry, including “Sparky,” the first large-scale agent-based crowd/dynamics simulation system,[3] “Tracky,” a revolutionary 3D/2D automated camera tracking/match move system,[4] and the first real-time tetherless optical motion capture system in the industry.[1]

Agentic crowd simulation

Sparky, which was originally developed in 1993 as a smoke and fire simulation program, quickly evolved to handle much more complex agentic simulations. Rob Bredow and Pete Shinners grew it into an advanced system designed to simulate large crowds of autonomous agents, such as insects, F-18 fighter jets, missiles, Alien Attackers,[3] or even “baby ‘zillas.”[1] Sparky revolutionized the production of visual effects shots involving the animation of large numbers of “agents” or elements in a scene. For Independence Day, dogfight shots with F-18 jet fighters and Alien Attackers had previously taken about one month each; Sparky was able to animate even more complex and dense shots, previewing them in real time and rendering the frames in hardware, anti-aliased at film resolution, at just one minute per frame.[5] Independence Day digital effects supervisor Tricia Ashford noted that Sparky delivered where traditional animation techniques would have simply been “too laborious for the frenetic dogfight that concludes Independence Day.”[5] “The final air battle takes place around a fifteen-mile-wide destroyer,” remarked Ashford, “and Roland wanted to see hundreds of F-18s and attackers duking it out with missiles, light balls, and tracer fire.
It was an enormous challenge; the only way it could be accomplished was through an advanced procedural system that could automatically calculate and render the interaction between all these different elements.”[5] In a 1996 interview for SIGGRAPH, writer Dean Devlin stated, “When we were shown the software that was developed at VisionArt, where we could just slide a little bar and add more F-18’s and more alien attackers… the ease with which we were able to create these shots, and the flexibility – I don’t think that we could have done these sequences without it.”[6] Because none of Sparky’s agents, such as F-18’s, Alien Attackers, or “baby ‘zillas,” were hand animated, and were instead simply given attributes and rules of engagement and allowed to dogfight on their own, Post Magazine described Sparky as early “artificial intelligence”; however, VisionArt’s technical staff largely rejected this label in favor of “autonomous simulation.”[1] Sparky was able to render these simulations as film-resolution images in hardware using the Silicon Graphics Onyx framebuffer, over a decade before this capability was made available commercially by Massive.
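The rule-driven behavior described above can be sketched in miniature. The following is an illustrative toy, not Sparky’s actual (proprietary) code: each agent is given simple rules of engagement, pursuing its nearest opponent while keeping separation from teammates, and the simulation advances one step at a time with no hand animation.

```python
import math
import random

# Illustrative sketch only: the teams, rules, and constants here are
# invented for demonstration; Sparky's real rule set is not public.

class Agent:
    def __init__(self, team, x, y):
        self.team = team
        self.pos = [x, y]
        self.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]

def step(agents, pursuit=0.05, separation=0.1, min_dist=2.0, max_speed=1.5):
    for a in agents:
        # Rule 1: pursue the nearest agent on the opposing team.
        enemies = [b for b in agents if b.team != a.team]
        target = min(enemies, key=lambda b: math.dist(a.pos, b.pos))
        for i in range(2):
            a.vel[i] += pursuit * (target.pos[i] - a.pos[i])
        # Rule 2: steer away from teammates that crowd too close.
        for b in agents:
            if b is not a and b.team == a.team and math.dist(a.pos, b.pos) < min_dist:
                for i in range(2):
                    a.vel[i] += separation * (a.pos[i] - b.pos[i])
        # Clamp speed so the dogfight stays controllable.
        speed = math.hypot(*a.vel)
        if speed > max_speed:
            a.vel = [v * max_speed / speed for v in a.vel]
    for a in agents:
        a.pos = [p + v for p, v in zip(a.pos, a.vel)]

random.seed(1)
agents = [Agent("f18", random.uniform(0, 50), random.uniform(0, 50)) for _ in range(20)]
agents += [Agent("attacker", random.uniform(50, 100), random.uniform(0, 50)) for _ in range(20)]
for _ in range(100):
    step(agents)
```

Devlin’s “slide a little bar” remark corresponds here to simply changing the two population counts: because behavior is procedural, adding agents requires no additional animation work.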

For Godzilla, Sparky was used to animate “baby ‘zilla” creatures which hatched from nearly 1,300 eggs laid by Godzilla in the lobby of Madison Square Garden, including shots with 885 babies.[1] According to Post Magazine, “With those kinds of numbers, key frame animation was impossible, so VisionArt’s Rob Bredow, Brian Hall and Pete Shinners developed sophisticated flocking software that gave the babies a kind of artificial intelligence. With their flocking software, the babies had a set of animations to choose from, knew their environment, and knew parameters for moving.”[1] Sparky allowed agents to move by blending disparate motion capture animation sets, supporting autonomous agent interaction both with objects in their environment and with each other: “The babies then choose an animation that allowed them to move without colliding through another baby or the environment,” explained VisionArt’s Josh Rose. “It’s complicated, because they have a lot of body parts for their animation and the only way some animations can happen is if they blend two animations, like stepping forward and to the side.”[1]
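The clip-selection-and-blending scheme Rose describes can be sketched as follows. This is a hypothetical reconstruction: the clip names, displacement table, and circle-based collision test are all invented for illustration; only the overall idea (try each clip, and blend two clips when no single clip avoids a collision) comes from the passage above.

```python
import math

# Each animation clip moves the agent by a known displacement (dx, dy).
# These names and numbers are illustrative assumptions.
CLIPS = {
    "step_forward": (0.0, 1.0),
    "step_side":    (1.0, 0.0),
    "idle":         (0.0, 0.0),
}

def collides(pos, others, radius=0.4):
    """Crude collision test: two agents collide if their circles overlap."""
    return any(math.dist(pos, o) < 2 * radius for o in others)

def choose_motion(pos, others):
    """Pick a single clip that avoids collisions; otherwise blend two clips
    (e.g. stepping forward and to the side) at half weight each."""
    for name, (dx, dy) in CLIPS.items():
        nxt = (pos[0] + dx, pos[1] + dy)
        if not collides(nxt, others):
            return name, nxt
    for a, (ax, ay) in CLIPS.items():
        for b, (bx, by) in CLIPS.items():
            if a == b:
                continue
            nxt = (pos[0] + 0.5 * (ax + bx), pos[1] + 0.5 * (ay + by))
            if not collides(nxt, others):
                return a + "+" + b, nxt
    return "idle", pos  # nothing is safe: stay put

# A baby blocked directly ahead sidesteps instead of walking into its sibling:
name, nxt = choose_motion((0.0, 0.0), [(0.0, 1.0)])
```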

Automated camera tracking/match moving

Tracky was developed to enable vast numbers of hand-held camera shots to be integrated with complex digital visual effects.[1] Prior to the advent of Tracky, VFX shots generally had to either be filmed with expensive and bulky motion control rigs or tediously tracked by hand.[7] Tracky allowed fast and accurate camera tracking of 2D film footage from only a handful of reference keyframes, transforming that data into 3D space with position, rotation and scale. Image tracking data could be further augmented with precise set measurements via VisionArt’s Zeiss Rec Elta RL-S reflectorless laser survey head.[1]
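The 2D-to-3D step described above can be illustrated with the standard pinhole-camera model: once a tracked screen point is paired with a measured distance (such as a laser-surveyed set measurement), it can be lifted into 3D camera space. This is a generic textbook sketch, not Tracky’s proprietary solver, and the focal length and principal-point values below are invented.

```python
# Generic pinhole back-projection; all numeric values are illustrative.

def backproject(u, v, depth, focal, cx, cy):
    """Lift a 2D screen point (pixels) to a 3D camera-space point, given
    its distance along the optical axis (e.g. from a set survey)."""
    x = (u - cx) * depth / focal
    y = (v - cy) * depth / focal
    return (x, y, depth)

# A tracked marker at the image centre sits on the optical axis:
pt = backproject(960, 540, 10.0, focal=1500.0, cx=960.0, cy=540.0)
```

Repeating this for several surveyed reference points across a handful of keyframes gives the 3D constraints from which a camera’s position, rotation and scale can then be solved.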

In terms of notable “firsts,” Tracky was developed in parallel with similar proprietary systems at Industrial Light & Magic and TRACK at Digital Domain, but its use on Godzilla marked a notable milestone due to the sheer number of VFX shots that were integrated with handheld camera work.[1] VisionArt completed 135 of the film’s visual effects shots in-house, nearly one-third of the total, and provided the camera tracking to Centropolis Effects, Sony Pictures Imageworks, Digiscope and Pixel Liberation Front, which created the remainder of the shots.[1] In addition, Tracky allowed 2D/3D tracking of complex shots on a scale never before attempted, such as a 600-frame helicopter shot that follows Godzilla’s footprints and then pans up to reveal a massive computer-generated cargo ship filling two-thirds of the frame.[1]

Tracky predated early commercial solutions such as Boujou,[8] 3DEqualizer and Matchmover.

Motion capture

FutureLight also created the first real-time tetherless optical motion capture (mocap) system in the industry. FutureLight’s mocap was used by VisionArt to help director Roland Emmerich conceptualize the hero CG character for 1998’s Godzilla, for the first time allowing human motion capture to be transposed in real time onto a character with non-human proportions.[1] The system allowed Emmerich to direct the human motion actor hired to play Godzilla while viewing a real-time preview of the Godzilla creature rendered on a large monitor. “Motion capture helped conceptualize the character, how he would move and what Roland would be able to do with the character,” explained Josh Rose.[1]
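One common way to drive a character with non-human proportions from human capture is to retarget joint *angles* rather than joint positions: the captured rotations are replayed through forward kinematics over a target skeleton with different bone lengths. The sketch below illustrates that general idea on a planar two-bone chain; it is an assumption for illustration, not FutureLight’s actual method, and the bone lengths and angles are invented.

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Accumulate joint rotations down a planar chain and return the joint
    positions. Only the bone lengths differ between source and target."""
    x = y = 0.0
    heading = 0.0
    points = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# The same captured pose drives a human-scale limb and a much larger,
# differently proportioned creature limb:
pose = [math.radians(30), math.radians(-45)]   # captured joint angles
human = forward_kinematics([0.3, 0.25], pose)
creature = forward_kinematics([4.0, 7.0], pose)
```

Because only angles are transferred, the creature strikes the same pose as the performer despite having entirely different proportions, which is what makes a live preview on a monitor meaningful for directing.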

While FutureLight’s mocap system scaled up to the hero Godzilla model’s enormous proportions, it also powered the key “baby ‘zilla” sequences at the climax of the film, including shots with up to 885 babies.[1]

References

  1. ^ a b c d e f g h i j k l m n o Kaufman, Deborah (June 1998). “Building a Perfect Beast”. Post Magazine.
  2. ^ “Rob Bredow | SVP, Executive Creative Director & Head of ILM”. Lucasfilm. Retrieved 2018-10-21.
  3. ^ a b Prokop, Tim (September 1996). “Fireworks”. Cinefex. p. 78.
  4. ^ Shay, Estelle (October 1998). “Animals with Attitude”. Cinefex. p. 47.
  5. ^ a b c Prokop, Tim (September 1996). “Fireworks”. Cinefex.
  6. ^ “Independence Day – SIGGRAPH Interview with Roland Emmerich and Dean Devlin”. SIGGRAPH Conference (1996) New Orleans – VisionArt/Chalice News Conference. August 1996.
  7. ^ Seymour, Mike (August 24, 2004). “Art of Tracking Part 1: History of Tracking”. fxguide.
  8. ^ Sudd, David (November 12, 2009). “boujou 5 Review: Matchmoving Enters its Maturity”. Animation World Network.