Devlog #1: Nescafe experience
December 6th, 2017 | Heri
In this short post, I'll present the prototype we developed for Nestle during the Pilot Accelerator program by Chinaccelerator.
Nescafe came to us with a simple problem that most traditional brands face: how to engage Millennials and convert them into consumers. We decided to create an experience that mixes AR and VR: draw a virtual light painting in VR, then share it with all your friends in AR by scanning a Nescafe product. Our goal was a social experience where people can chat and have a fun moment around Nescafe products.
As you may notice, we initially planned to use the coffee cup itself as an AR marker, but that proved tricky, so we went with a Nescafe box instead. Since we always want to target a wide market, we focused development on mobile. We used Leap Motion as the drawing interface: users just pinch their thumb and index finger together to start drawing in 3D.
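The pinch-to-draw logic can be sketched roughly as follows. This is a minimal Python illustration, not the actual Leap Motion API: the threshold value and the point format are our assumptions.

```python
import math

# Hypothetical trigger distance between thumb and index tips, in metres.
PINCH_THRESHOLD = 0.03

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update_stroke(stroke, thumb_tip, index_tip):
    """Append the pinch midpoint to the stroke while the fingers are pinched."""
    if distance(thumb_tip, index_tip) < PINCH_THRESHOLD:
        midpoint = tuple((x + y) / 2 for x, y in zip(thumb_tip, index_tip))
        stroke.append(midpoint)
    return stroke
```

Calling this once per tracking frame grows the stroke only while the user holds the pinch, which is the "pinch to start drawing" behavior described above.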
For Coolhobo, it was the first time we had to set up a branded environment. Our designers chose an enclosed space with dark colors reminiscent of the famous Nescafe cup. To fill the space, they added coffee machines and cups, and they played with lighting and contrast to highlight the important information in the scene.
After that came the critical moment: defining how to interact with the environment. Of course, we could have gone with a simple button interface, but buttons make little sense without haptic feedback, and we wanted a more interesting way to interact. One of our developers had a neat idea: a ball that you move around to trigger different actions! How does it work? Just grab the ball, then move it in a direction to perform an action.
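The control ball boils down to mapping the ball's displacement from its rest position to a discrete action. Here is a minimal Python sketch; the action names, deadzone radius, and axis convention are our assumptions, not the prototype's actual code.

```python
# Hypothetical mapping from displacement direction to an action.
ACTIONS = {"up": "undo", "down": "clear", "left": "save", "right": "change_color"}
DEADZONE = 0.05  # ignore tiny movements around the rest position

def ball_action(dx, dy):
    """Return the action for a ball displacement (dx, dy), or None inside the deadzone."""
    if max(abs(dx), abs(dy)) < DEADZONE:
        return None
    if abs(dx) >= abs(dy):  # dominant axis wins
        return ACTIONS["right" if dx > 0 else "left"]
    return ACTIONS["up" if dy > 0 else "down"]
```

A deadzone like this matters for hand tracking, where the grabbed ball always jitters slightly around its rest position.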
We also designed capsules in the Nescafe style for selecting the color of the lines you draw. When you "click" a capsule, it explodes. We created 4 colors for the prototype, each matching a Nescafe capsule color.
That is the whole UI setup. To avoid confusing gestures, you can only draw with your right hand and operate the UI with your left. Once users are set up, all they have to do is draw and upload their creation to the cloud!
Since this is only a prototype, we only display the most recently created model in AR, so the interaction is pretty simple: just grab a pack of instant coffee and display your model! And this is the final result:
The tool leaves you a lot of freedom to draw whatever you want in any shape you want, and we had a lot of fun designing it. Every team member contributed ideas; from design to user tests, everybody had a say in every aspect of this project. Public reaction was very positive, and people enjoyed drawing all kinds of things!
Beyond the fun, this project helped us understand many things that will be useful to us in the future.
This was obviously the aspect we feared the most; we were not sure how to achieve it. We had to set up a web service to bridge the AR and VR applications: when a model is sent to the cloud, the service writes its geometry to the database, and the same geometry can later be retrieved through the same service. With this experience, we can now build more complex web services for other needs.
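In spirit, the bridge boils down to serializing the drawn geometry, storing it, and handing it back to the AR app. Here is a minimal in-memory Python sketch; the real service, its storage format, and its endpoints are unknown to us, so treat every name here as hypothetical.

```python
import json

# In-memory stand-in for the cloud database used by the real service.
_db = []

def save_model(strokes):
    """Serialise a drawing (a list of strokes, each a list of [x, y, z] points) and store it."""
    record = json.dumps({"strokes": strokes})
    _db.append(record)
    return len(_db) - 1  # model id

def latest_model():
    """The AR app only shows the most recently uploaded model."""
    return json.loads(_db[-1])["strokes"] if _db else None
```

In the prototype the "write" path is called by the VR app on upload and the "read" path by the AR app on scan; a real deployment would put HTTP endpoints and a persistent database behind these two functions.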
As you can see, the scene has a lot of complex objects (this is what happens when you let designers unleash their imagination without a technical guy to stop them...). To manage this, the design team and our graphics programmer worked together on baking the lighting and creating normal maps to fake the fine details. These techniques improved our performance, and since we are on mobile, that improvement is very welcome! We can't wait to use them in our future scenes!
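For flavor, the core idea behind a normal map is simple: each texel stores a surface normal remapped from the [-1, 1] range into RGB, which the shader decodes at run time to fake geometric detail without extra polygons. A toy Python sketch of that encoding (the actual maps were of course produced with art tools, not hand-written code):

```python
def encode_normal(nx, ny, nz):
    """Pack a unit normal with components in [-1, 1] into an 8-bit RGB texel."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    """Shader-style decode: recover the approximate normal from a texel."""
    return tuple(c / 255 * 2 - 1 for c in (r, g, b))
```

This is why flat areas of a tangent-space normal map look light blue: the "straight up" normal (0, 0, 1) encodes to roughly (128, 128, 255).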
Imagine you could control the world with your hands... How would you manipulate it in a simple and intuitive way? It's a tough question! Many people on the team play games, so we are used to complex controls, but what about everyday users? We need to consider them as well... At first, we imagined a lot of cool ideas, like casting superpowers as a click or moving cups around like a Jedi by pointing at them. The problem? Initial user tests were crystal clear: it was just too complicated.
So we had to imagine ways for people to interact in a cool (or unusual) but still simple way, hence the idea of the control ball.
This pitch was an opportunity for the team to experiment with new things (in both tech and UX) that we can apply to our main platform. Some of the concepts we rejected can still be reused in the platform, but that will be for another post!