
Canon VR Experience: Giving Stories a New Form

Virtual reality has been drifting in and out of the spotlight for at least 20 years (looking at you, Nintendo’s Virtual Boy and Lawnmower Man), and it still remains to be seen whether this time around it will prove to be more than a novelty. Having an opportunity to prove VR’s viability is an exciting prospect, and when you get one, you really sink your teeth into it.

In the late summer of 2016, YLD teamed up with Canon — known to many as the leading manufacturer of cameras and printers — to begin working on a concept for a VR-focused product. The initial intention was to explore the technology, looking for opportunities to innovate on existing products and services and, ultimately, to spark interest within the Canon community.

Having set the goal of outlining the characteristics of a potential VR-based product, we faced a challenge in the form of a very tangible deadline: Photokina 2016, the world’s largest trade fair for the photographic industry, left us with 5 weeks to conceptualise, design and build a working prototype. Putting collaboration and testing-based validation at the heart of the design process, the team set out to explore different alternatives while staying aware of the time constraints.

At the heart of a photo is a story

Photography is a great storytelling medium that can relay both facts and emotions. One of the challenges of telling a story is preserving the joy of exploration and discovery within a linear narrative. To find a less conventional way of conveying a story through photos, we had to step back a little and look at how we, as people, actually take photos.

We take photos all the time. Interesting and unique events — such as holiday trips or birthday parties — might yield a bigger and more diverse body of photos taken over a shorter period of time, whereas random photos taken on a daily basis — for example of a snoozing dog — might result in fewer, more homogenous photos. Analysing a user’s photo collection, identifying periods of increased activity with Canon’s object analysis engine and comparing time/GPS data with the user’s social media content can help isolate photos related to those events and weave stories out of them.

Analysing a collection of photos taken over a certain period of time, identifying points of high and low activity and comparing time/GPS data with the user’s social media content can help isolate photos related to significant events and weave stories out of them.
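To make the grouping idea a little more concrete, the sketch below shows time-based event detection of the kind described above: photos taken close together in time fall into the same candidate story, and a long gap starts a new one. The `Photo` shape, the field names and the six-hour threshold are illustrative assumptions for this sketch, not Canon’s actual analysis engine.

```typescript
// Illustrative sketch: group photos into candidate "stories" by gaps in capture time.

interface Photo {
  id: string;
  takenAt: Date;                                // capture time, e.g. from EXIF metadata
  location?: { lat: number; lon: number };      // GPS position, when available
}

const MAX_GAP_MS = 6 * 60 * 60 * 1000;          // assumed threshold: a 6-hour gap starts a new story

function groupIntoStories(photos: Photo[]): Photo[][] {
  const sorted = [...photos].sort((a, b) => a.takenAt.getTime() - b.takenAt.getTime());
  const stories: Photo[][] = [];
  let current: Photo[] = [];

  for (const photo of sorted) {
    const previous = current[current.length - 1];
    if (previous && photo.takenAt.getTime() - previous.takenAt.getTime() > MAX_GAP_MS) {
      stories.push(current);                    // long gap: close the current story
      current = [];
    }
    current.push(photo);
  }
  if (current.length > 0) stories.push(current);
  return stories;
}
```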

Traditionally, a story is told in a linear fashion (unless you are William S. Burroughs). After identifying a group of photos as a story, it is possible to arrange them in a sequence, locking the user into a single path — but that would partially extinguish the joy of discovery and make the user’s role completely passive. This is why we chose to break a story down into smaller clusters of photos, each illustrating one of its most memorable parts, leaving space for individual clusters to be explored without imposing a specific order.

Photos are grouped into smaller clusters, allowing the overall story to be told in a linear fashion and for individual parts to be explored more freely.


Beyond photography

As we kept clarifying the foundation of the concept during a series of collaborative workshops, it became clear that we needed a relatable metaphor. Without over-thinking it, we dubbed a story a ‘memory’ and the smaller parts of that story ‘moments’, making it all feel a bit more personal.

Reliving a memory through photos is an experience in itself, yet the connections between individual moments and memories felt a bit forced and “jumpy”. It felt necessary to bring a few more elements into a story — elements that would help ‘fill in the gaps’, elicit a stronger response and simply make it more fun, while weaving the moments and memories together more smoothly.

This is where additional information about a photo — its metadata — and Canon’s object analysis engine came in handy. Knowing the time of an event and its geographical location, combined with social media content, can help enrich a memory with things like travel routes, landmarks (museums, cafés, shops), weather reports, social profiles and quotes of the people involved, musical playlists and more.

Based on all the available data, the team identified 4 different types of content (a rough sketch of how they might be modelled follows the list):

  • First of all, the photos themselves.
  • Contextual cards — any piece of content that complements the photos, for example information about a location, a piece of music or a friend’s Twitter quote, to name a few.
A variety of contextual cards used in the prototype
  • Transitional cards — pieces of content shown between different moments to represent the passage of time.
A variety of transitional cards used in the prototype
  • Ambience — a memory- or moment-specific change of the VR environment, through the use of colour and/or background music, can create an even stronger sense of reliving an experience. Given the time constraint of only 5 weeks, this feature was treated as a lower priority at this stage, as we would not have had time to develop and test it properly.
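As an illustration, the four content types could be modelled roughly as below. All type and field names are assumptions made for this sketch rather than the prototype’s actual data model.

```typescript
// Illustrative data model for the four content types described above.

interface PhotoCard {
  kind: 'photo';
  url: string;                                   // the photo itself
}

interface ContextualCard {
  kind: 'context';
  contentType: 'location' | 'music' | 'social' | 'weather';
  title: string;
  body: string;                                  // e.g. a landmark description or a friend's quote
}

interface TransitionalCard {
  kind: 'transition';
  label: string;                                 // e.g. "two days later", shown between moments
}

interface Ambience {
  backgroundColor: string;                       // colour of the surrounding VR environment
  musicUrl?: string;                             // optional background music
}

type MomentContent = PhotoCard | ContextualCard | TransitionalCard;

interface Moment {
  cards: MomentContent[];
  ambience?: Ambience;                           // deprioritised in the 5-week prototype
}

interface Memory {
  title: string;
  moments: Moment[];                             // explored back and forth, in time order
}
```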

Exploring a memory

Learning how a ‘memory’ can be recognised and isolated from an entire collection of photos, and how its most interesting parts (each represented by a cluster of photos and additional contextual content) can break that memory into smaller ‘moments’, gave us a good set of building blocks. Now we were faced with one of the most challenging questions: what is the actual mechanism of exploration? How do we use all 3 dimensions to the fullest, especially when the content itself is mostly 2-dimensional?

First of all, we split the whole experience into 3 levels:

  • Gallery of memories — a jump-off point for exploring memories
  • Memory — a sequence of moments (made up of photos and contextual content) and transitional cards (emphasising the lapse of time)
  • Close up — pulling a photo towards the user for a closer look

Each moment is represented by a ‘cloud’ of photos and contextual cards placed around the user in 3-dimensional space, allowing them to look around and explore the content of that moment freely. Navigation between moments is linear — back and forth — and follows the flow of time. This navigational metaphor proved successful: initial internal testing results and subsequent feedback from Photokina attendees were overwhelmingly positive.
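A simplified sketch of that layout with Three.js might look like the following. The radius, arc and vertical scatter are made-up placeholder values rather than the prototype’s tuned numbers.

```typescript
import * as THREE from 'three';

// Spread one moment's cards on an arc around a seated viewer at the origin,
// so the content can be discovered by looking around.
function layoutMoment(cardCount: number, scene: THREE.Scene): THREE.Mesh[] {
  const radius = 3;                              // distance from the viewer, in metres
  const arc = Math.PI * 1.2;                     // spread the cards over roughly 216 degrees
  const eyeLevel = 1.5;                          // approximate seated eye height
  const cards: THREE.Mesh[] = [];

  for (let i = 0; i < cardCount; i++) {
    const angle = -arc / 2 + (arc * i) / Math.max(cardCount - 1, 1);
    const card = new THREE.Mesh(
      new THREE.PlaneGeometry(1, 0.75),
      new THREE.MeshBasicMaterial({ color: 0xffffff })
    );

    card.position.set(
      Math.sin(angle) * radius,
      eyeLevel + (Math.random() - 0.5) * 0.8,    // a little vertical scatter for the 'cloud' feel
      -Math.cos(angle) * radius                  // negative z is 'in front' of the default camera
    );
    card.lookAt(new THREE.Vector3(0, eyeLevel, 0)); // face the viewer
    scene.add(card);
    cards.push(card);
  }
  return cards;
}
```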

The UI of the product was designed to be as unobtrusive as possible: even though navigation required a direct interaction with the next or previous cluster, additional navigational buttons were mapped onto the ‘floor’, easily accessible yet not obscuring the content.

Considering physiology

Every single design decision for this project had to be tested with human physiology in mind. First of all, the amount of content a single ‘moment’ can hold had to be based on the user’s freedom of movement. Even though the product was being designed to be presented at a trade fair, our aim was to create a “household item”. This meant designing a product that could be used for considerably long periods of time from the comfort of one’s living room sofa. After all, viewing photos and revisiting memories is meant to be a relaxing experience. Thus, this initial prototype was built around a sitting position.

A second important property of the VR experience is the illumination of the virtual environment. The design team’s initial intention was to explore the use of Canon’s brand colour palette, which consists mostly of light colour tones. However, one of the early testing sessions showed that a dark environment is preferable, as it lessens the strain on the eyes.

Techno magic

The project was prototyped using WebGL, the WebVR API, Three.js and GreenSock, running on a Samsung Gear VR headset containing a Samsung Galaxy S7. This hardware is quite common, and we felt it helped represent a realistic near-future customer experience.
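For context, a stripped-down bootstrap of that stack, against the 2016-era WebVR 1.1 API, might have looked roughly like this. It is a sketch only: the real prototype rendered a separate view per eye and did a great deal more, and the WebVR API has since been superseded by WebXR.

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

async function startVR(): Promise<void> {
  // WebVR 1.1 (now obsolete): ask the browser for connected headsets.
  const displays = await (navigator as any).getVRDisplays();
  const display = displays[0];

  // Presenting requires a user gesture, hence the click handler below.
  await display.requestPresent([{ source: renderer.domElement }]);

  const frameData = new (window as any).VRFrameData();
  const loop = () => {
    display.requestAnimationFrame(loop);
    display.getFrameData(frameData);             // head pose and per-eye matrices
    // A real implementation renders the scene once per eye using the
    // frameData left/right matrices; simplified here to a single pass.
    renderer.render(scene, camera);
    display.submitFrame();
  };
  display.requestAnimationFrame(loop);
}

document.body.addEventListener('click', () => { void startVR(); });
```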

Three.js let us try out our ideas very quickly, and we had the first concept version of the application running within a couple of days. We experimented rapidly with code, trying different animations, interactions and concepts, and this really helped us blend the design and engineering of the product together successfully.

We had planned to use Samsung’s own “Samsung Internet” app, which featured very early support for the WebVR API, but unfortunately this proved problematic. It gave us no access to the high-resolution accelerometer present in the Gear VR headset, and imposed an odd limit on the resolution we could display. This made for a decidedly choppy, blocky experience! Not good! We had to find a solution, and quickly (we had just 5 weeks).

We decided to try using Crosswalk to compile our own environment from a recent Chromium build. This let us polyfill the WebVR API by passing accelerometer data through from the native Oculus SDK. Wonderfully, this resulted in super-smooth accelerometer interaction, native-resolution output and a high frame rate! YES!
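The gist of that polyfill, in heavily simplified form, is sketched below. `NativeOculusBridge` is a hypothetical stand-in for the Crosswalk/native layer that exposed the Oculus SDK’s sensor data; the real shim implemented far more of the VRDisplay interface than this.

```typescript
// Hypothetical native bridge, assumed to be injected by the Crosswalk/native layer.
declare const NativeOculusBridge: {
  getOrientation(): [number, number, number, number]; // head orientation quaternion (x, y, z, w)
};

// A minimal object that looks enough like a WebVR VRDisplay for the application code.
class PolyfilledVRDisplay {
  isPresenting = false;

  requestAnimationFrame(callback: FrameRequestCallback): number {
    return window.requestAnimationFrame(callback);
  }

  getFrameData(frameData: { pose: { orientation: Float32Array | null } }): boolean {
    // Feed the native sensor data into the pose the rest of the app expects.
    frameData.pose.orientation = new Float32Array(NativeOculusBridge.getOrientation());
    return true;
  }

  async requestPresent(_layers: unknown[]): Promise<void> {
    this.isPresenting = true;                    // full-screen output is handled by the native side
  }

  submitFrame(): void {
    // In the real implementation the finished frame is handed to the native compositor.
  }
}

// Expose the shim where WebVR-aware code goes looking for it.
(navigator as any).getVRDisplays = async () => [new PolyfilledVRDisplay()];
```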

However, we lost the barrel distortion (the trick that warps the image slightly to fit the lenses in the Gear VR), so we had to re-create the distortion needed to ‘warp’ the image back into the correct shape for the Gear VR lenses. This proved complex, but we got it! Phew.
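For the curious, a barrel-distortion pass boils down to radially re-sampling the rendered frame before it reaches the screen. The fragment shader below is an illustrative version of that idea; the distortion coefficients are placeholders that would need to be tuned to the Gear VR’s optics, and in a Three.js setup a shader like this would typically run as a full-screen post-processing pass.

```typescript
// Illustrative barrel-distortion fragment shader (GLSL), applied as a post-processing pass.
const barrelDistortionFragmentShader = `
  uniform sampler2D tDiffuse;                    // the undistorted rendered frame
  uniform vec2 distortion;                       // k1, k2 radial distortion coefficients (tuned per lens)
  varying vec2 vUv;

  void main() {
    vec2 centered = vUv - 0.5;                   // move the origin to the image centre
    float r2 = dot(centered, centered);          // squared distance from the centre
    float scale = 1.0 + distortion.x * r2 + distortion.y * r2 * r2;
    vec2 warpedUv = centered * scale + 0.5;      // sample further out towards the edges

    if (warpedUv.x < 0.0 || warpedUv.x > 1.0 || warpedUv.y < 0.0 || warpedUv.y > 1.0) {
      gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // outside the source image: render black
    } else {
      gl_FragColor = texture2D(tDiffuse, warpedUv);
    }
  }
`;
```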

With such a tight time frame for such a potentially very complicated product, we had to be very sensible during the planning phase and, from the outset, allow time for some inevitable technical issues. Doing this meant that we were left with just about enough time to polish the product’s performance, and make it work as smoothly as possible for the many hundreds of people who would experience it during the event.

Out in the wild

Launching the prototype at Photokina 2016 was both exciting and daunting: even though VR technology is here, there are currently very few established design principles for it. Collaboration, constant testing and a pinch of intuition can definitely get you on the right track, but there is a great deal of trial and error involved. On top of that, photography buffs can be notoriously demanding.

With that said, the feedback gathered thus far has been very positive and inspiring. Both the concept and its realisation were met with curiosity and excitement among the Photokina community. Navigating through a story proved to be intuitive. Our initial concern about overstimulation turned out to be totally unjustified; in fact, in many cases the opposite was true: people wanted even more diverse content presented to them at every step of a story!

The biggest obstacles for users were the limitations of the third-party gear and its software: even though the Galaxy S7 has a relatively high-resolution screen, running a WebVR application on top of Samsung’s software limits the final image quality. The image then has to make its way through the Gear VR’s lenses, diminishing the quality even further. Another notable obstacle was interacting with the headset: the UI components were triggered through buttons on the side of the headset, and unfortunately their position and even their size proved insufficient for a smooth experience.

What’s next?

In a nutshell, the prototype got people talking. Canon’s efforts to branch out into a new domain were met with excitement and raised the question ‘Where will it go from here?’. The 5-week-long experiment outlined one of the possible directions for the further development of Canon’s own VR products and services.

As for the YLD team, designing a real product for VR was a great experience that definitely left us with a taste for more. We will keep our fingers crossed and look forward to pushing this prototype, or another exciting project, even further!

YLD team: Anthony Mann, Alex Prokop, Antonas Deduchovas, Luis Klefsjo.

Canon/Irista team: Nick Babaian, Anton Odena, Alex Davies-Moore.
