GM Cruise Automation

Timeframe:
Summer 2018 (3 months)
During the Summer of 2018, I interned as a 3D Interaction Designer at GM Cruise Automation, a self-driving car company in San Francisco. During my time there, I designed and wrote code for a 3D interactive experience that aims to demystify the complexities of self-driving technology.



Problem Space


A big problem in the autonomous vehicle industry is the lack of public trust in the safety of the technology. Based on extensive research, we found that one of the best ways to gain trust is to explain how the technology itself works. I was tasked with designing and creating an interactive experience that communicates these complex concepts, and how they fit together, in a way that makes sense to non-engineers.


Notice how the simulation I’ve created lines up perfectly with the video! This is because all the elements are spawned from the real drive data.
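To illustrate what "spawned from the real drive data" can mean in practice, here is a minimal pure-JavaScript sketch, with a hypothetical data shape (`track`, `positionAt`) that I'm assuming for illustration: each tracked object carries timestamped positions from the recorded drive, and the simulation interpolates between samples at playback time so it stays in sync with the video.

```javascript
// Hypothetical shape for recorded drive data: one tracked object,
// with positions sampled at timestamps (seconds) during the real drive.
const track = [
  { t: 0.0, x: 0, y: 0 },
  { t: 1.0, x: 4, y: 0 },
  { t: 2.0, x: 4, y: 3 },
];

// Linearly interpolate the object's position at playback time `t`,
// clamping to the first/last sample outside the recorded range.
function positionAt(track, t) {
  if (t <= track[0].t) return { x: track[0].x, y: track[0].y };
  const last = track[track.length - 1];
  if (t >= last.t) return { x: last.x, y: last.y };
  for (let i = 1; i < track.length; i++) {
    const a = track[i - 1], b = track[i];
    if (t <= b.t) {
      const u = (t - a.t) / (b.t - a.t);
      return { x: a.x + u * (b.x - a.x), y: a.y + u * (b.y - a.y) };
    }
  }
}

console.log(positionAt(track, 0.5)); // halfway through the first segment
```

Because every simulated element is driven by the same timeline as the footage, the two stay aligned by construction rather than by hand-tuning.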

Process


The greater the complexity, the more rigorous and reflective one’s process must be. My process for this project spanned from the development of principles I ought to follow all the way to writing production code.



Dedication to Accuracy
When explaining a scary new technology, you ethically can’t cut corners or sacrifice accuracy. I put in a lot of effort throughout the summer to ensure that I wasn’t faking data or compromising for the sake of a cleaner design. I regularly consulted with experts in each sector of our company to figure out the best way to visually manifest the data they offered me.


Teaching myself Three.js!
Three.js as Material
Throughout my internship, I didn’t have a prototyping tool like Sketch or Adobe Illustrator to easily mock up and test my ideas. Because my experience was going to live on the web, I needed to quickly learn three.js well enough to not only implement my designs but also test and iterate on them. I modelled the environment and its assets in Cinema4D, and manipulated them in three.js.


C4D Mockup of the environment
Guiding Model
There are so many aspects (perception, prediction, planning, hardware, etc.) to self-driving technology that I had a hard time explaining them all individually. I realized that I needed a guiding model, and I decided to explain each element of AV technology in the context of a single maneuver. This would allow me to portray not only how each aspect works, but also how they relate to one another!
The maneuver I chose, for its accurate representation of difficult driving conditions in SF.
Iterations
Below you will find some of the visual and performance progress I made over those few months!

First implementation of raycasting triggers to affect camera angles.
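In three.js the raycasting itself would go through `THREE.Raycaster`; the sketch below is a self-contained pure-JavaScript stand-in showing the underlying idea, with hypothetical names (`raySphereHit`, `trigger`, `cameraAngle`) I'm assuming for illustration: test whether the view ray passes through an invisible trigger volume, and if so, switch camera angles.

```javascript
// Ray–sphere intersection test: does a ray from `origin` along the
// unit direction `dir` pass within `radius` of `center`?
function raySphereHit(origin, dir, center, radius) {
  // Vector from the ray origin to the sphere centre.
  const ox = center.x - origin.x, oy = center.y - origin.y, oz = center.z - origin.z;
  // Project it onto the ray direction to find the closest approach.
  const t = ox * dir.x + oy * dir.y + oz * dir.z;
  if (t < 0) return false; // trigger volume is behind the camera
  // Closest point on the ray, compared against the radius.
  const cx = origin.x + t * dir.x, cy = origin.y + t * dir.y, cz = origin.z + t * dir.z;
  const d2 = (cx - center.x) ** 2 + (cy - center.y) ** 2 + (cz - center.z) ** 2;
  return d2 <= radius * radius;
}

// Hypothetical trigger volume placed around the maneuver: when the
// view ray enters it, switch to a closer camera angle.
const trigger = { center: { x: 0, y: 0, z: -10 }, radius: 2 };
const viewRay = { origin: { x: 0, y: 0, z: 0 }, dir: { x: 0, y: 0, z: -1 } };

let cameraAngle = "overview";
if (raySphereHit(viewRay.origin, viewRay.dir, trigger.center, trigger.radius)) {
  cameraAngle = "close-up";
}
console.log(cameraAngle); // "close-up"
```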
I modelled the city environment where the maneuver actually happened. I used geospatial data for building dimensions!
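Turning geospatial footprints into a modellable scene means projecting lat/lon coordinates into local metres around a reference point. A rough sketch, using an equirectangular approximation that is fine at city scale; the coordinates and the helper name `toLocalMeters` are assumptions for illustration, not the actual pipeline:

```javascript
const EARTH_RADIUS_M = 6371000; // mean Earth radius

// Project a lat/lon point into metres east (x) and north (y)
// of a reference origin, via an equirectangular approximation.
function toLocalMeters(origin, point) {
  const degToRad = Math.PI / 180;
  const dLat = (point.lat - origin.lat) * degToRad;
  const dLon = (point.lon - origin.lon) * degToRad;
  return {
    // East–west distances shrink with cos(latitude).
    x: EARTH_RADIUS_M * dLon * Math.cos(origin.lat * degToRad),
    y: EARTH_RADIUS_M * dLat,
  };
}

// Example: a building corner ~0.001° north of a reference point in SF.
const origin = { lat: 37.7749, lon: -122.4194 };
const corner = { lat: 37.7759, lon: -122.4194 };
console.log(toLocalMeters(origin, corner).y.toFixed(1)); // ≈ 111.2 metres north
```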

Different stages of my visual and interaction iterations.
Added more visual polish with colors, and changed camera angles when clicked.
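A click-driven camera change of this kind typically eases between two stored positions rather than cutting instantly. The sketch below shows one way to do it in plain JavaScript, assuming hypothetical shot positions (`overview`, `closeUp`) and a smoothstep ease; in the real project the result would drive a three.js camera each frame.

```javascript
// Linear interpolation between two scalars.
function lerp(a, b, u) { return a + (b - a) * u; }

// Hypothetical stored camera positions for two "shots".
const overview = { x: 0, y: 50, z: 50 };
const closeUp  = { x: 5, y: 2,  z: 8 };

// Camera position `u` of the way through the transition (0..1),
// with a smoothstep ease so the move doesn't feel mechanical.
function cameraAt(u) {
  const e = u * u * (3 - 2 * u); // smoothstep easing
  return {
    x: lerp(overview.x, closeUp.x, e),
    y: lerp(overview.y, closeUp.y, e),
    z: lerp(overview.z, closeUp.z, e),
  };
}

console.log(cameraAt(0)); // starts at the overview shot
console.log(cameraAt(1)); // ends at the close-up
```

On click, `u` would be advanced from 0 to 1 over the transition duration inside the render loop.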
Added copy about the different technologies at Cruise Automation, as well as more visual polish with ambient occlusion.
There is so much that I have done since this point, and so much I have yet to do to consolidate all the different elements of self-driving technology into a single medium. 

I am thankful to my team and my company for giving me this rare opportunity.


Where I ended up.
