Sneaker Visualizer
Summer 2017
(7 Hours)

Role: Research and Prototyping

Collaboration: Grace Cha

With a partner, I designed and prototyped an app that reimagines the online shopping experience for sneakers. We used various 3D and AR methods to build user experiences focused on better informing consumers about what they were purchasing.

In this project, I did all of the app/interaction prototyping and helped design the service.

Problem Space—

Reflecting on recent news about online sneaker shopping, my partner and I began questioning whether the purchasing interaction could evolve into a more informed experience. With the recent arrival of AR SDKs, I also wanted to prototype and experiment with this extremely nascent tool.


Daniel Kahneman's book Thinking, Fast and Slow introduces the concepts of System 1 and System 2 thinking. System 1 ("fast") thinking refers to our impulsive, emotional, visceral notions; System 2 ("slow") thinking refers to logical, composed thought. The claim is that many consumer experiences capitalize on System 1, when it would be beneficial (from the consumer's perspective) to rely more on System 2.

Amazon seems to have many systems set up to make impulsive purchases easier.
From our personal experiences and those of the people around us, we care not only how certain shoes fit, but also how they look as part of our personal expression of fashion. However, in online environments, we were more hesitant to purchase shoes because we were not able to make that judgement without seeing how they look on our feet and with the rest of our clothes.

There are millions of these kinds of photos on Instagram, where people enjoy showing off and getting feedback on their style.

This is what led us to the decision to contextualize these items for consumers, giving them a more accurate understanding of what the items will look like on their feet.

Service Design—

Our app is structured similarly to the way a user selects a shoe in-store. First, the user is presented with a gallery of shoes shown in profile view. After selecting a shoe, they are directed to an instruction screen that explains how the try-on will work. The user then selects a shoe size and scans their foot. Once the app reads the foot, the user can view the shoe on their foot in augmented reality through the app's screen.
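The flow above can be sketched as a simple linear state machine, where each screen advances to exactly one next screen (the screen names here are illustrative, not taken from the actual app):

```python
from enum import Enum, auto

class Screen(Enum):
    GALLERY = auto()       # profile-view gallery of shoes
    INSTRUCTIONS = auto()  # explains how the try-on will work
    SIZE_SELECT = auto()   # user picks a shoe size
    FOOT_SCAN = auto()     # app scans the user's foot
    AR_VIEW = auto()       # shoe rendered on the foot in AR

# Each screen advances to exactly one next screen, mirroring the in-store flow.
NEXT = {
    Screen.GALLERY: Screen.INSTRUCTIONS,
    Screen.INSTRUCTIONS: Screen.SIZE_SELECT,
    Screen.SIZE_SELECT: Screen.FOOT_SCAN,
    Screen.FOOT_SCAN: Screen.AR_VIEW,
}

def flow(start=Screen.GALLERY):
    """Yield the screens a user passes through, in order."""
    screen = start
    while screen is not None:
        yield screen
        screen = NEXT.get(screen)
```

Keeping the flow this linear was deliberate: it mirrors the familiar in-store sequence rather than exposing every feature at once.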

User Flow for our service.
Though we did incorporate image targets for our prototype, we think a markerless system that tracks foot movement would be much more informative to the user. Movement with a markerless system would be more organic, since the foot would not have to be anchored to an image target/marker. Because foot scanning was not supported by our AR SDK, we settled for image targets. I suppose this is the dichotomy between prototyping and designing for the future.

Vuforia’s documentation of object scanning. Since it’s only Android compatible, we weren’t able to use it.

This would be used to scan the User’s foot.
As our main goal was to create a platform through which the customer can make a more informed decision, we had two main views: an interactive 3D view of the shoe that also has relevant product details and the Augmented Reality view. The 3D view gives a better idea of how the product itself looks, while the AR view contextualizes that product on the user, making it personal.

Prototyping Process—

Being semi-interested in sneakers myself, I had recently stumbled on a website with an interesting visualization tool—a pseudo-turntable tool that shows all angles of a desired shoe. 

This is a photo-based visualization that gives a very good approximation of a 3D form in space.
From this point, I screenshotted every single angle and used photogrammetry to extrapolate a 3D model with Agisoft's PhotoScan Pro software.

The software takes the angle differences between photos and reverse-engineers the camera positions. From there, it constructs an accurate point cloud, which I can turn into a textured mesh.
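PhotoScan infers these camera poses purely from image overlap, but for a turntable-style capture like this one the expected geometry can be sketched analytically: the "cameras" sit evenly spaced on a circle, all aimed at the shoe at the origin. This is a simplified illustration of the capture geometry, not PhotoScan's actual solver:

```python
import math

def turntable_cameras(n_views, radius=1.0, height=0.3):
    """Approximate camera poses for a turntable capture: n_views cameras
    evenly spaced on a circle of the given radius, all aimed at the
    origin (where the shoe sits)."""
    cams = []
    for i in range(n_views):
        theta = 2 * math.pi * i / n_views
        pos = (radius * math.cos(theta), radius * math.sin(theta), height)
        # Forward vector points from the camera toward the object at the origin.
        norm = math.sqrt(sum(c * c for c in pos))
        forward = tuple(-c / norm for c in pos)
        cams.append({"position": pos, "forward": forward})
    return cams
```

The denser and more evenly spaced the views (e.g. one screenshot per 10 degrees gives 36 views), the easier it is for the photogrammetry solver to match features between adjacent photos.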

The next step was the Unity and Vuforia work. I registered the image target that Grace designed in Vuforia and generated the proper packages to import into my Unity project.

The Unity3D setup. Vuforia works with a target system, but our end vision would be a markerless system, because users might not be willing/able to print out the proper target.
After building the app to my iOS device, I ended up with this result:

Demonstration of tracking capability and responsiveness

This is a demo of the tracking capabilities on the user's foot. Being able to see the shoe in the context of the whole outfit is paramount; this was the overarching concept.

With some more scenes and scripts, we arrived at this kind of application. This is where we decided to stop: the amount of debugging had started to surpass the amount of designing we were doing, which was a good indication that this short exercise had come to a natural finish.

What I learned—

Mind you, we are designers, not developers. All this Unity, Vuforia, and photogrammetry work is simply a way to create a rough prototype that gives the effect of a functional app, so we can understand what the experience and service feel like. The moment we spend more time debugging than thinking is the moment we need to step back and reevaluate our plan.

With my prior knowledge of barebones AR development, the whole project took a total of 7 hours over two work sessions, excluding the time taken to model miscellaneous shoes. Each prototype was rapidly assembled and tested immediately, so many iterations were possible, both on the app and on the components of the whole system.

I also think I made a mistake in choosing the tool (augmented reality) before truly arriving at my goal/problem space. I believe tools should be selected wisely, after reaching a full understanding of the effect that needs to be produced, and not a minute before.