Tag Archives: AR

A Flurry of VR & AR Activity

Where can you both be present and absent at the exact same time? No, this isn’t a deep philosophical question on the meaning of existence but rather a description of virtual reality (VR), something that I have had a rich helping of over the past month. In my ongoing effort to learn all I can about this hugely exciting and developing technology, and the industry that is blossoming around both it and its related cousin, Augmented Reality (AR), I have been doing the conference circuit recently, traveling from Dubai to the US and back again.

iOTX

The first of the events I attended was iOTX right here in Dubai, where I was fortunate enough to be a VIP guest of VR/AR Association Dubai Chapter Chairman, Shujat Mirza, at the VR and AR Start-up zone. Tucked away in a corner of the huge Dubai World Trade Centre was an impressive array of local companies working in the fields of VR, AR and related technologies. This included Hyperloop, who were at the time of the conference about to present the results of their feasibility study into building a hyperloop between Abu Dhabi and Al Ain, with a projected travel time of a mere 12 minutes! They had a Vive system with them to give people an idea of what it would be like to sit in one of their capsules, and showcased the ‘window screens’ that will show passengers a view rather than the dark inside of the tube in which the capsules obviously have to run. The technology behind the hyperloop concept is fascinating, using passive magnets and actuators on the capsule to generate the initial thrust that propels it forward. I really see the value in the technology and look forward to its eventual implementation. It makes far more sense for a desert environment such as the Gulf than high-speed rail: being encased within a tube protects the capsules and mechanisms from the harsh effects of the climate and conditions, including sand, which would play havoc with a standard railway were it to drift and build up on the tracks.
Another company present was Candy Lab AR, a US company founded and run by Andrew Couch. Their location-based augmented reality platform uses beacons positioned in sites as diverse as airports and shopping malls, enabling vendors to deliver real-time AR content to users and thus enhance their experience in those locations. Great technology and a great team behind it! In addition to being present with a company stand, Andrew was a speaker during the event.
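Purely to illustrate the general shape of such a beacon-driven platform, here is a minimal Python sketch of a location-triggered content lookup. The beacon IDs, content records and helper functions are my own inventions for illustration and are not Candy Lab's actual API.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Beacon:
    beacon_id: str
    venue: str      # e.g. "airport", "shopping mall"
    x: float        # indoor position in metres
    y: float

@dataclass
class ARContent:
    title: str
    asset_url: str  # 3D model or animation the vendor wants shown

# Hypothetical registry mapping beacons to vendor-supplied AR content.
CONTENT_BY_BEACON = {
    "gate-a12": ARContent("Duty-free offer", "https://example.com/offer.glb"),
    "food-court": ARContent("Lunch menu", "https://example.com/menu.glb"),
}

def nearest_beacon(beacons: list[Beacon], x: float, y: float) -> Optional[Beacon]:
    """Return the beacon closest to the user's estimated indoor position."""
    if not beacons:
        return None
    return min(beacons, key=lambda b: math.hypot(b.x - x, b.y - y))

def content_for_user(beacons: list[Beacon], x: float, y: float) -> Optional[ARContent]:
    """Look up the AR content a nearby vendor wants to push to this user."""
    beacon = nearest_beacon(beacons, x, y)
    if beacon is None:
        return None
    return CONTENT_BY_BEACON.get(beacon.beacon_id)

# Example: a user standing a few metres from the "gate-a12" beacon.
beacons = [Beacon("gate-a12", "airport", 0.0, 0.0),
           Beacon("food-court", "shopping mall", 120.0, 45.0)]
print(content_for_user(beacons, 5.0, 2.0).title)   # Duty-free offer
```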

What a great day checking out the VR/AR Association startup zone at #IOTX in Dubai. Great ideas, great products, great people, such as Shujat Mirza (VR/AR Association Dubai Chapter President) & Clyde DeSouza (VR Filmmaker).

Whilst small in overall size compared to the VR and AR industry in other parts of the world, especially the US and Europe, there is real potential for VR and AR to take off in the Middle East, especially somewhere with futuristic ambitions like Dubai and Abu Dhabi. I am already looking forward to seeing how the industry develops over the next few months and years.

AWE (Augmented World Expo)

AWE is undoubtedly the largest industry show dedicated to both Virtual and Augmented Reality, and I was excited to be heading back over to the US and Silicon Valley for the third year in a row, this time as a speaker. I always enjoy visiting the Bay Area and spent a day in San Francisco before heading down to Santa Clara, via both Facebook’s incredible HQ and Stanford University. Due to another speaking commitment the following day I was ultimately only able to attend AWE for the first day and so did not get to experience first-hand the fun and intrigue of the main Expo. There are reports aplenty online about the various companies showcasing their VR and AR wares so I didn’t feel as though I’d missed out on too much. The highlights for me during the first day were:
  1. Seeing how much bigger the event has become, even over the last three years. One could really get a sense of VR and AR starting to be embraced by the mainstream, and the energy during the event certainly felt like it had been ratcheted up a notch from the previous year.
  2. Getting to speak. I was one of several speakers who took to the stage on the Life track and thoroughly enjoyed being able to deliver my vision of where VR in veterinary currently stands and where I see it going in the future. I believe I may have been the first veterinary surgeon to speak at the event, so representing the veterinary profession in such an exciting and rapidly advancing industry was truly an honour.
My talk from the event can be viewed below, with a link to the rest of the AWE presentations being found here.
  3. Checking out LlamaZoo’s HoloLens dissection experience. Charles and Kevin from the company had made the journey down from Canada, and both my friend, Deborah, and I were privileged to be given a live demo of their augmented reality canine dissection tool, using the Microsoft HoloLens. With each of us wearing a headset, we were able to see a high-resolution holographic image of a set of lungs and heart floating in mid-air, move around it to view it from different angles, remove layers and learn about the specific anatomy of this part of the body. The image quality was superb and I was not aware of any flicker or issues with the hologram staying fixed in position. A very compelling demonstration and a real glimpse at the future of anatomy teaching in vet and medical schools.
AWE 2017
A full day of VR & AR demonstrations and fascinating talks.
The day came to what felt like a rapid close. After lugging my suitcase up to the afterparty, which in previous years had always been in the adjacent hotel and was a lot of fun but this year had moved and, well, wasn’t quite the same, I hailed an Uber and hot-footed it up to the airport for my red-eye over to Washington DC and the second of my US VR events, and the third overall.

VR in Healthcare Symposium (VR Voice)

Touching down at Baltimore International airport having not really had any sleep whatsoever, I duly made my way down to Washington DC on the main commuter train, then transferred to the metro in order to arrive at the Milken Institute School of Public Health, part of George Washington University, by 8.30am. This, combined with the welcome discovery that the school had a shower, mercifully gave me time to refresh after the previous day and the flight over, donning my suit before grabbing some breakfast and getting my head into the right space for another day of talks and discussion about virtual, and augmented, reality.
Organised by Robert Fine of VR Voice, the one-day VR in Healthcare Symposium brought together several speakers and delegates both working in and interested in the use of spatial computing in healthcare, a much more specifically focused event than AWE and one that my talk was perfectly pitched for. In addition to being a great opportunity for me to introduce both myself and the work already being done in veterinary with VR, the day was a wonderful chance to meet a plethora of people, some already very active in the space, such as Dr Brady Evans, whose company Osso VR trains orthopaedic surgeons using virtual reality, and many who were there to learn about this exciting and rapidly changing technology and its application to healthcare.
Whilst my talk suffered from an annoying technical hitch, I was still very pleased to be able to present, and whilst not as high-brow as the talk given by the neurosurgeon before me, it went down well – after all, what’s ultimately not to love about a dog wearing a VR headset?!
The full version of the talk can be viewed here:
In addition to enjoying a day of truly fascinating talks, including seeing how neurosurgeons are using VR to better plan and rehearse complex brain surgery, I finished the day with a win, having my ticket be one of those drawn to receive a Merge VR headset – a really great way to round out the day and kickstart my short break exploring the city itself.
VR in Healthcare, Washington DC
A great city & an equally great event.

Blurring the Lines – A New Digital Approach to Immersive Veterinary Education

“The great aim of education is not knowledge, but action.” These words, spoken by the philosopher Herbert Spencer, ring true and can, in my opinion, also be applied inversely. That is to say action delivers great education. For far too long the accepted model for delivering knowledge and training professionals such as vets has been to sit them all down in a lecture hall, drone on at them for hours on end, demand that they go off, read, write the odd essay and complete the occasional project, and then ask them to cram all of that supposed knowledge into their brains ready to regurgitate at will during the course of an exam or nine.
Granted, there are also practical elements to most of these programmes, whether it be dissection, physiology labs or animal handling, but the bulk of the training has always been delivered in much the same manner: didactic instruction. For some this approach works and they go away retaining everything that they have heard. For most, however, myself included, it represents a dated and unbelievably inefficient method. Hence the need to condemn weeks to tedious, stress-inducing revision before the big assessments. I always found it much easier, way less stressful and frankly more fun to learn by actually doing, seeing, touching or otherwise interacting with the subject matter at hand. Most of what I recall from anatomy training, for example, are the random little moments in the dissection lab when I was physically holding a specimen and examining it. I can’t for the life of me easily recall a specific moment when I turned to a textbook page and had a piece of knowledge stick in perpetuity.
Whilst it is acknowledged by many educators that practical instruction has better outcomes in terms of understanding and long-term retention of knowledge and skills, the fact of the matter is that preparing and delivering a lecture is significantly cheaper, quicker and easier, and the results of that labour can be shared far more widely than a practical session. In terms of resources, acquiring digital photos, videos and other screen-based media is far less costly and labour-intensive than putting together and delivering a tangible, practical learning tool, such as an anatomy specimen. Some of these barriers, I believe, are now finally being lifted, and the gap in cost between the old methods and the digital new, in terms of time, effort and direct financial outlay, is narrowing. The implications for education and training at every level of schooling, from kids’ first school experience right through to professional CPD (continuing professional development), are profound, and I wish to explore why I believe that to be so.

Mixed Reality & Virtual Reality

I first experienced both mixed reality and high-end virtual reality in 2015 and again in 2016 when I volunteered at the Augmented World Expo in Silicon Valley. The power of both technologies to fundamentally change how education outcomes are achieved and training delivered was clearly evident and left me convinced that the future of medical, including veterinary, education was in the application of these new immersive tools.
HoloLens, AWE 2016
Microsoft’s HoloLens offers users the ability to experience mixed reality

In 2016 I was fortunate enough to be at one of the conference parties where someone happened to have two Microsoft HoloLens headsets and was demonstrating them to the small crowd of curious nerds that had gathered around him. Well, I was one of those nerds and before long had the pleasure of donning one of the sets and so was introduced to the wonders of true mixed reality.

AWE 2016, HoloLens
Interacting with objects in mixed reality is as simple as reaching out and ‘touching’ them.

Much like a small welding mask, in both look and feel, the HoloLens is essentially a set of transparent screens that sit in one’s field of view by means of the headstraps that keep the device in place. Whilst not especially comfortable, and certainly not something anyone is going to be in a rush to wear out in public on account of looking, frankly, ridiculous, the experience that it delivered was compelling. With the use of a simple gesture, specifically an upward ‘throwing’ movement, a menu popped into view, suspended perfectly in mid-air and crystal clear, as if it were right there in the real world in plain sight of everyone around me. Of course it wasn’t, and the only person able to see this hologram was me. Selecting from the menu was as simple as reaching out and ‘touching’ the desired option, and within seconds a holographic representation of the Earth was spinning languidly before me. I could ‘pick up’, ‘move’ and otherwise manipulate the item in front of me as though it were a physical object, and if I did move it, for example off to the right, out of my field of view, that is precisely where it remained and where I found it again when I turned back round. The human body application was similarly cool, as I was able to explore the various layers of anatomy through interaction with a highly rendered hologram. Whilst comical for onlookers not wearing a HoloLens, as I appeared to be pawing away at thin air like someone suffering a particularly lucid acid hallucination, the thrill of what I was actually seeing and engaging with allowed me to ignore my daft appearance.

 

What are the medical education applications for such mixed reality technology? Whilst holographic visual representations of anatomy are, at first, a magical phenomenon to experience and a pretty cool party piece, it is the fact that mixed reality merges realistic holograms, or so it appears to the user, onto the real world, in contrast to virtual reality, which replaces the real world with an entirely digital one, that lends itself to unique educational applications. Anatomy instruction that accurately overlays and tracks deeper layers in real time onto a real-world physical specimen, enabling students to understand the wider context in which various anatomical structures sit, is a far more compelling and useful application of MR than a simple floating graphic. Similarly, surgical training involving holographic overlays onto a real-world physical object, or combined with haptic technology to provide tactile feedback, offers the potential to deliver programmable, repeatable, easily accessible practical training with minimal expense and zero waste, on account of there being no need for physical biological specimens.
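Purely as a toy illustration of the ‘peeling back layers’ interaction described above, and not based on LlamaZoo’s or anyone else’s actual software, a minimal sketch of how an MR anatomy viewer might represent layers that can be hidden to expose deeper structures could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AnatomyLayer:
    name: str        # e.g. "skin", "musculature", "thoracic viscera"
    depth: int       # 0 = most superficial
    visible: bool = True

@dataclass
class AnatomyModel:
    species: str
    layers: list[AnatomyLayer] = field(default_factory=list)

    def peel_to(self, depth: int) -> None:
        """Hide every layer more superficial than `depth`, exposing deeper structures."""
        for layer in self.layers:
            layer.visible = layer.depth >= depth

    def visible_layers(self) -> list[str]:
        return [layer.name for layer in self.layers if layer.visible]

# Example: a canine thorax model with three layers, peeled down to the viscera.
thorax = AnatomyModel("canine", [
    AnatomyLayer("skin", 0),
    AnatomyLayer("musculature", 1),
    AnatomyLayer("heart and lungs", 2),
])
thorax.peel_to(2)
print(thorax.visible_layers())   # ['heart and lungs']
```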

 

Imagine: a fully functional and resourced dissection and surgical training lab right there in your clinic or home, all at the press of a digital button. Imagine how confident you would become at that new, nerve-wracking surgical procedure if you had the ability to practice again and again and again, physically making the required cuts and placing the necessary implant, able to make the inevitable mistakes that come with learning anything new but at zero risk to your patient. Being able to step up to the surgical plate for real and carry out that same procedure that you have rehearsed and developed refined muscle memory for, feeling the confidence that a board-certified specialist with years of experience has, and all without having had to put a single animal at risk – that’s powerful. That’s true action-based education at its most compelling, and it is a future that both VR and MR promise.

 

I predict that the wide adoption of graphically rich, immersive and realistic digital CPD programmes, through both VR and MR, will result in a renewed engagement of professionals with CPD training and ultimately lead to more confident, skilled, professionally satisfied and happier clinicians. I, for one, know that were I able to complete practical CPD by simply donning a headset and loading up a Vive or HoloLens experience from the comfort and convenience of my clinic or home, all whilst still being able to interact in real-time with colleagues both physically present and remote, my CPD record would be bursting at the seams. That has to be a great thing for the profession, our clients and society in general.

Smart Glasses – Are We Ready?

Spatial computing, once the preserve of science fiction, is slowly but surely creeping into real life, and whilst a number of companies are working on industrial applications of Augmented Reality (AR) delivered through true headsets or glasses, there is not yet a convincing consumer solution to usher in the age of smart glasses.

Smart glasses, Lake Tahoe
AR adds a layer of digital information to the real world that we see, adding to the experience.

The promise of AR and of smart-glasses is to seamlessly overlay digital information onto the real world such that this information adds to the experience. There are myriad potential applications where such a capability might prove either useful or just entertaining. For example:

  • Video calls – speak with a person on Skype or FaceTime (other video chat applications available) as though they were literally standing/ sitting there in front of you, in realistic hologram form.

 

  • Educational experiences – visits to galleries, museums or even city tours would be so much more entertaining and interesting with the ability to see projections of artists, historical figures or scenes played out in front of our eyes as though they were happening live. A visit to a famous battlefield or, for example, Stonehenge would be a richer learning experience if the subjects of our learning were walking about around us. How much better would we relate to our history if we could see, with our own eyes, such histories played out on the current world? Would it lead to a greater sense of the important lessons of history and reduce the risk that we repeat the same errors, a concern that holds particular resonance at this time of political uncertainty in the world?

 

  • Navigation – whether it be in a car or walking about an unfamiliar city, staring at a screen has its obvious disadvantages. Contrast that with seeing clear directions mapped out onto the real world in front of our eyes, negating the need to take our focus off the real world. This will be further enhanced by the use of real-time translation, such that foreign road signs are automatically presented in their translated form.

 

  • Many others…..

 

What AR experiences are already available?

Most of us will have first heard of or experienced AR through social apps such as Snapchat, whose filters allow silly but otherwise fun effects to be added to live video, such as rabbit ears and a nose that respond and change in real time with our faces. Others might have used an AR app to scan a physical marker in, say, a magazine and seen a digital object, such as a movie character, materialise on screen as though it were there in the real world. Companies such as Blippar do the latter and have a thriving business in using AR for brand marketing.

 

Who is doing interesting things in AR?

There are a number of companies working on AR, whether it be via smart glasses or the screens with which we interact with daily, such as tablets and phones. As already mentioned, social media is likely to be one of the first experiences of true AR that many of us have and it isn’t just Snapchat playing a role. Facebook are also players in this market with their purchase, this year, of the AR start-up MSQRD, whose technology does much the same thing as Snapchat’s. The technology behind such whimsical entertainment is actually pretty exciting and you can learn more about it here.

Aside from marketing and social media/ entertainment, other major applications for AR are in both industry and education, with a few vet schools even dabbling with the technology.

 

Form Factor…. The Big Issue

As much as I am truly excited by the promise of AR to revolutionise how we interact with digital information, form factor is still, for me, THE biggest issue. Until we move beyond bulky, cyborg-esque headsets that feel akin to wearing a welder’s mask, to lightweight, stylish eyewear or, preferably, a completely off-body solution, wider adoption of this tech will be slow. At present, the most accessible and reliable method by which to engage with AR for the vast majority of us is via our phones and tablets. In other words, handheld screens with cameras attached.

Phones work in as much as they do some incredible things for us, work the same regardless of personal factors, and are situationally flexible (i.e. they work much the same way regardless of whether you are at home, at work or, perhaps, out and about in a sporting or outdoors setting). They also have the advantage of being discreetly held on one’s person if necessary, a feature that an expensive pair of smart glasses clearly lacks. For example, in areas where openly advertising the fact you have a powerful – and valuable – computer on your person would be ill-advised, it is perfectly possible to keep a phone hidden and, perhaps, access necessary information via other, more discreet methods, such as a smartwatch. Obviously wearing a pair of smart glasses, especially in their current form, would create not only some degree of social stigma, as was seen with Google Glass, but also a personal risk of theft, as one would effectively be advertising the fact that one was in possession of a very valuable piece of personal computing equipment.

What of the issues pertaining to eyesight? I personally need corrective lenses, whether in the form of contacts, which I can’t stand wearing for very long and which do little to really improve my eyesight anyway, or spectacles. What solutions do smart glasses have in store for users such as me in the future? Will I be forced to wear contacts whenever I want to wear, and thus use, my smart glasses? Or will I need to make an additional investment to install corrective/prescription lenses, instantly increasing the overall cost of adoption and the complexity of the product, and making the device a trickier proposition to resell when it comes to upgrading? I wouldn’t be able to easily share or lend my device to others unless they too shared my prescription, or unless the glasses automatically contained technology that corrected for the current user’s eyesight – maybe that’s the key?!

Then there are situational factors governing ease of use. I can currently use, or otherwise carry, my phone in virtually all circumstances. The design and form of the technology gives it this feature. At work it can remain in my pocket and be accessed should I need to quickly use the camera, or search an ebook or perform an information search online, whilst during exercise, such as on my bike, I can easily carry it using a sports-pouch and enjoy music and other services, such as GPS tracking and metrics apps like Strava. Paired with a smart-watch I can also interact directly with the device, accessing key performance data, all in a comfortable manner that the device is designed to be able to cope easily with. Smart-glasses, on the other hand, do not seem to be as flexible. For example, I doubt that I would want to wear the same style of smart glasses at work, interacting with clients and colleagues, and with the constant risk of getting blood etc on them, as I would whilst training, when the need is for eyewear that is sporty, aerodynamic, lightweight, sweat-resistant and aesthetically totally different to other situations. Personally, I even keep different styles of sunglasses depending on the situation in which they are worn. My everyday, casual pair are totally different to my sports/ training/ racing pair. Would I need to have several different pairs of smart glasses to achieve the same result? I only have a single smartphone and can use that in all of the settings mentioned.

Then there is the issue of social stigma and resistance to smart ‘facial-wear.’ Nerds get why people would want to wear a computer on their face – I am one of them. But as Google Glass demonstrated when it was first released, the wider public are generally suspicious of, and occasionally outright hostile to, the idea. Is it simply that wearers of such devices look alien and so instantly stand out as different? Is it the fact that people know such devices include cameras and so fear the perceived invasion of personal privacy that comes with being surveilled, even though we all carry smartphones with incredibly powerful, high-resolution cameras that capture content constantly, and may well be recorded multiple times per day by other users without even being aware of it? In fact, unless you live in a rural area it is highly probable that you are already being constantly recorded, such is the pervasive nature of CCTV. And yet we’re collectively fine with this whilst being instantly suspicious of a person openly wearing a recording device in the form of smart eyewear.

This will need to change before smart glasses become universally accepted as ‘normal.’ A really interesting historical point was made at this year’s AWE (Augmented World Expo) by one of the speakers, who talked about how, prior to the First World War, wristwatches were generally considered to be pieces of women’s jewellery and men typically carried a pocket watch. Any gentleman wearing a wristwatch would thus have been stigmatised. That was until the war when, due to the practical constraints of the battlefield, having a timepiece that was easily accessible, lightweight and hands-free was a big advantage. As a result officers sported wristwatches and continued to do so upon returning from active duty. The comical comment was the suggestion that no-one in their right mind would have ridiculed a tough soldier for wearing a piece of jewellery, and so before long tastes changed and the idea of wearing a wristwatch became the accepted norm that we know today. Will the adoption of smart eyewear follow the same path? Who will it be that leads the way in changing public opinion? Will it once again be soldiers, after perhaps first experiencing smart glasses in the military, or sports stars perhaps? Regardless of who ultimately leads the change in opinion, there first needs to be a compelling reason why smart glasses are preferable to sticking with the good old smartphone, and it is this that I cannot quite yet see.

If No Smart Glasses, Then What?

If smart-glasses, in the typical spectacle form, are not the answer then what could the future of AR look like? To answer this it is worth considering our experience of AR in two different contexts.

Fixed Position Interface

As we have already discussed, AR is already experienced by many of us via traditional screens, with the augmented content overlaid onto the real world as long as we view it through the screen itself. As such, any context in which a transparent surface is involved lends itself to AR. Obvious examples include driving, with our view of the world outside the car/transport medium being through just such a transparent ‘screen.’ Companies such as BMW have already explored this idea, for example with the Head-Up Display that shows important journey and vehicle information ‘on the windscreen’ such that the driver need not take their eyes off the road in front of them to still benefit from such data. Navigation is another very obvious application for this concept, with drivers ‘seeing’ the route mapped out on the road and surrounding world without having to divert their gaze towards a separate screen. Imagine how much less likely it would be to miss that rapidly approaching highway slip-road if you could ‘see it’ in advance by a change in colour of the road in front of your very eyes. Once fully autonomous driving truly arrives, the very same vehicle ‘screens’ that previously kept us informed of important driving information will give themselves over to becoming entertainment or productivity screens.

Other settings in which screens (as in what we currently think of as windows or transparent barriers) are currently employed and which promise to provide AR interfaces in the future include places such as zoos, museums, shop windows, or even our very own home windows. Basically anywhere that a transparent ‘screen’ could be found.

Mobile Interface

Until we somehow come up with a reliable, safe method by which to beam AR wirelessly, directly into our brains, the most obvious current alternative to smart glasses is the smart contact lens. There are groups working on this very piece of science fiction, with Samsung having patented a design for one, although the power and processing would come from a tethered smartphone, making it more of a smart screen than anything else. I have already voiced my own personal objections to contact lenses and cannot see how adding hardware, however small, to them is going to overcome their obvious shortcomings. Assuming for a moment that the visual effect is staggeringly compelling, with beautifully rendered digital content seamlessly added to the world as if it was always there, designers are going to need to solve the following problems before we all don contact lenses:

  • comfort – many people either find them out and out uncomfortable or can only really stand wearing them for short periods of time.

 

  • ocular health – in some professions, especially medical, ophthalmologists recommend daily disposable lenses as, on balance, they are a more hygienic option when compared to longer term-use products. Will smart contact-lenses be cheap enough, and will it be socially and environmentally acceptable or sustainable even, to dispose of our high-tech lenses each day? What of the potential health issues associated with having a heat-generating, signal transmitting/ receiving device actually in contact with our eyes? Do we know what, if any, health risks that might present?

 

  • cost – whilst not especially cheap, I do not get too upset when I have to sacrifice a pair or two of contact lenses in any single day, either because some debris makes its way onto the lenses and renders them uncomfortable or because my eyes just need a break. I would be far less quick or willing to whip them out, however, if they had cost me a significant sum, and if I were forced to then I’d resent having to do so.

 

  • tethering – whilst not a major issue, having to keep a smart-phone in close proximity for such lenses to work as desired does somewhat dilute some of the real magic and potential of a truly untethered AR experience.

 

Smart glasses

Whilst the future is one in which Augmented Reality is definitely going to be HUGE, with companies such as Meta, Magic Leap and Microsoft (with the HoloLens) creating some truly incredible technology and experiences that defy conventional belief and draw childish grins from anyone who tries them, there are still some significant and fundamental obstacles to overcome. Form factor is, I believe, one of the key issues that the pioneers of this technology are yet to crack, but when a compelling solution is found then, well, get strapped in and prepare for a technological shift the likes of which come around but once in a generation!

For more information on Smart Glasses, take a look at the AR Glasses Buyers Guide (www.ARglassesbuyersguide.com)

City of Tech

I have recently returned from my latest trip to what rapidly feels like my second home: California, and specifically the Bay Area. Ever since my first visit to see some friends several years ago I have felt drawn to the area, in no small part due to the fact that it is ‘tech Disneyland’ to the small, nerdy kid nestled at my core. It was almost a no-brainer, then, that I chose Lake Tahoe as my first Ironman race, oblivious at the time to the fact that it was THE hardest race in North America and that it would end up being a two-year odyssey! (read about the race here) With the tech theme in mind it was to Silicon Valley that I headed last year when I wanted to learn more about the exciting and rapidly developing fields of Augmented Reality and Virtual Reality, collectively termed spatial computing. I even visited and subsequently applied to the MBA program at the Haas School of Business at UC Berkeley. All in all, I am a big fan of the state of California, San Francisco and the Bay.
Make School, San Francisco
Make School in action

This most recent trip was principally in order to attend the same conference on spatial computing that I both volunteered at and attended in 2015: AWE (Augmented World Expo), albeit with some additional time tacked on for some R&R and additional nerdy activity in San Francisco itself. This included checking out Make School, one of many ‘coding schools’ (although they do some hardware stuff as well) present in the city, and spending time with Adam Braus chatting about the school, coding, start-ups and virtual reality (VR).

Upload VR, Taylor Freeman
UploadVR co-founder, Taylor Freeman, and the office dog

Talking of VR, I was fortunate enough to also visit the Upload Collective and speak with co-founder Taylor Freeman about the excitement surrounding a technology that does finally feel as though it is meeting previously unmet expectations. One of the real highlights of my visit was getting to experience VR myself – not my first taste, mind, but certainly the most extensive and impressive experience of the technology that I had had to date – jumping into several incredible HTC Vive experiences, including Google’s Tilt Brush and Wevr’s theBlu, an absolute must for anyone wondering what all the fuss is about “this VR thing.” I look forward to elaborating on a number of these experiences in separate posts, including sharing what I actually created in Tilt Brush!

One of the great things about a visit to San Francisco, and the Bay Area in a wider context, is that you are struck immediately by the wealth of tech talent and innovation there. It is no accident that some of the true behemoths of tech have all originated there, from Google to Twitter, Uber to AirBNB and beyond. The sharing economy, it could be argued, also sprang to life here, with the most famous examples of companies that have built their fortunes on serving this part of our lives being Uber and AirBNB. These two companies made much of my trip possible, simple and cost-effective. I used AirBNB for both places I stayed, initially in San Francisco, where I had the pleasure of staying with two awesome guys, Michael and Jimmy, and their dog, Emit, in the Mission District, for a fraction of the cost of a hotel, and then in Silicon Valley with Kirupa, an in-house attorney at another legendary San Francisco tech firm, Square. I have consistently been bowled over by the quality of the lodging that I have been fortunate enough to book through the service and the wonderful hosts who I have had the pleasure of meeting and becoming friends with. There is something about staying in someone’s actual home that really makes you feel a greater connection to the area being visited compared to the relative sterility and formality of hotel stays. Then there is simply the cost difference. Hotels are quite simply multiple times more expensive, money that I personally prefer to spend on unique experiences in the locales that I visit. Many times the experience I have had staying with an AirBNB host has actually been on a par with or even better than a hotel. Kirupa’s place, for example, was one of the most beautiful homes I have ever had the good fortune to stay in, and being within a neighbourhood, versus the faceless industrial areas in which the main hotels are to be found, I had a fantastically rejuvenating stay, including the flexibility to leave at a time that suited me versus the rigid ‘checkout time’ that many hotels (admittedly have to) enforce.
Uber was the other service that contributed massively to the success of my visit, especially their ‘Uber Pool’ feature, which enabled me to request a ride shared with another person, thus significantly lowering the cost of the journey to each of us. Thanks to Uber’s incredible logistics technology, routes are automatically planned in the most efficient manner, and I made use of the service multiple times during my stay. Why would I not, when they make it that easy to order a ride, track its progress, receive timely notification of its arrival, have pleasant conversations with drivers who have interesting things to say and keep their cars immaculate, and spend significantly less for the same journey than I would in a regular cab? Oh, and not be expected to cough up a tip regardless of the quality of the service! Uber just make it all so darned easy, including the payment part.
A successful return to my second home and a trip that has provided a lot of material for future posts. Viva San Francisco!

Virtual Reality – THIS is why I am so excited

The big issue that virtual reality (VR) faces in achieving mass adoption and truly being the transformative technology that I believe it represents is how to really extol its virtues to those who have not had the opportunity to physically try it out. How do you really sell something that requires users to try it to truly get it?

Being a self-confessed tech nerd I have always felt truly excited by the idea of VR, and also Augmented Reality (AR), and read with enthusiasm all of the reports and promises coming from companies like Oculus. I also knew that pretty much anyone who got to physically try out the technology came away an instant convert. You just have to do a quick search for VR on YouTube to see the countless ‘reaction videos’ from people donning a VR headset for the first time, from traditional gamers to the elderly and beyond.

I had my first experience of VR when I traveled to California and Silicon Valley in June 2015 for the annual Augmented World Expo (AWE) and was instantly amazed at how incredibly immersive VR was, with insanely rich graphics and the feeling of being suddenly, physically transported to the worlds in which I found myself. There is something magical about being able to turn around, a full 360 degrees, including looking up and down, and seeing a new world all around you. Your brain knows it’s not real and that you’re still standing at a trade fair stand, but then your brain starts to forget that and, well, you find yourself reacting as if you’re actually in your new environment. It’s surreal. Awesome but truly surreal. I am not a gamer but I could easily see myself becoming one through VR, such is the richness of the experience. One of the highlights of the trip for me, and my favorite VR experience, was being strapped into a horizontal harness, with fans blowing air at me, and then having an Oculus headset and headphones placed on my head. Suddenly I was no longer hanging uncomfortably and self-consciously in a rig on full display to amused onlookers but was flying as a wingsuit skydiver through a mountain range, able to turn by physically adjusting my body and head position. Everywhere I looked I saw the mountains, the forests, the new world in which I was present. Except I wasn’t. But I had to remind myself of that. Repeatedly. The experience was simply that awesome and that immersive. Unsurprisingly that demonstration won “Best in Show” and anyone who was fortunate enough to experience it agreed that it was totally deserved.

Since returning from AWE I have kept exploring the world of VR, purchasing myself a set of Google Cardboard goggles for use with my iPhone and even introducing my dad to the experience by ordering him a set for Father’s Day. Various apps have been downloaded, from the official Google Cardboard application to rollercoaster and dinosaur experiences, and amazing immersive video experiences courtesy of Vrse, and I have loved every one of them, insisting that others try them out too. In fact, everyone at work has had to hear me babble on about how awesome VR is and has experienced one if not several of the VR apps that I have on my phone. The reaction is always the same: initial quizzical skepticism rapidly followed by complete and utter conversion once the technology is actually experienced.

And so it was that I introduced my six-year-old nephew and two-year-old niece to VR during a recent trip home. My nephew is as excited about technology as I am – smart kid – and so was eager to try out the Cardboard. My niece, however, wasn’t quite so sure to start with, protesting as my sister moved the goggles towards her unenthusiastic eyes. What happened next, however, was worthy of a YouTube video all of its own.

VR_reaction
These grins are one of the key reasons I am so excited by Virtual Reality

As soon as her eyes locked onto the new, 3D immersive world that had been presented to her, all protests evaporated. Gone! What instantly replaced them was the biggest, cutest, most genuine grin that I have ever seen, one that still gets me a little emotional even now as I recall the scene. She was experiencing the pure, visceral joy that full immersion into a magical new world provides. Never have I seen such an instant and powerful reaction to a technology before. I challenge anyone to deny that VR is a game changer after witnessing what I did. Such was the power of the conversion, and the fun of the experience, that I then found myself sitting for the next two hours policing the sharing of my phone and goggles as the two of them explored worlds in which dinosaurs roamed and rollercoasters careered up and down mountains. They absolutely loved the Explorer program on the Google Cardboard app, which saw us digitally visit Tokyo, Paris, Jerusalem, the Red Sea, Venice, Rome and many other global locations, all whilst sat in the comfort of their UK living room.

I am yet to join the ranks of those who own their own ‘high end’ VR device, such as the recently launched Oculus Rift, but that is going to change very soon. I cannot wait to delve even deeper into what is possible with this technology, both from a consumer stand-point and also with a view to creating content myself. The possibilities are indeed limitless and whatever we can imagine we can create and experience through the sheer and utter magic that is virtual reality. Reality will never truly be the same again.

Want to experience VR for yourself? The best, lowest cost way to try out the technology for the first time is to follow these instructions:

1. Get yourself a pair of ‘Google Cardboard’ goggles, many different takes on which can be found online at sites such as Amazon.

Google cardboard
Phone slots in to create a basic pair of VR goggles

2. Download the Google Cardboard app, or any one of the many VR apps that are on the various app stores.

Google cardboard app
The Google Cardboard app is a good one to start with

3. Follow the on-screen instructions and check out of reality as you know it!

A Fourth View on Three Sports

Following on from my recent post regarding Augmented Reality, Virtual Reality and their potential impact on our sporting lives, specifically skydiving, I thought I would take a look at how AR & VR might add to the other big sport in my life: triathlon.

Triathlon involves training and racing in three separate disciplines, with races ranging in total distance from super-sprint to Ironman and beyond. Data does play a role in both training and competing, whether it be keeping track of 100m splits in the pool, or sticking to a pre-defined power zone whilst on the bike. I think it would be safe to say that pretty much all of us rely, to some degree, on a sports watch, or athletic tracker of some description, with the required data available for monitoring live or analysing after the event.

AR offers the chance to have the most important and relevant data visible without breaking the rhythm of a workout, adding to the quality of the experience and value of the training or outcome of the effort.

 

SWIM – AR may not be the most obvious technology for use in an aquatic environment but I see AR offering some real advantages to those training both in the pool and open water. As far as I am aware there are no currently available AR systems for use with goggles, but with the advances being seen in the field, especially by companies specialising in athletic applications of AR, such as Recon Instruments (www.reconinstruments.com), I do not imagine it will be long before AR reaches the water.

  • Training data – the usual information that one might glance at a watch for, such as lap count, 100m lap times, heart rate and other such swim metrics could be easily projected into view, thus making such data available without having to break the flow of a swim workout.
  • Sighting & ‘staying on course’ – any open water swimmer will admit that sighting and staying truly on course can prove troublesome, during both training and especially races. Swimming further than necessary is both a waste of energy and costs race time, and having to sight frequently disrupts a smooth swimming action, again impacting energy efficiency and swim time. Imagine having a virtual course line to follow, much like a pool lane line, projected into view both when you look down (as if looking at the pool floor) and when you do look up to sight, such that staying on course is as simple as following the line. Less ‘open swim wobble’ and a faster, more efficient swim. A rough sketch of how such a sight line could be computed follows below.
Goggles, AR
Important swim data & virtual sight line projected into view using Augmented Reality-equipped goggles.
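To make the ‘virtual course line’ idea concrete, here is a back-of-the-envelope Python sketch, entirely my own and not taken from any existing product, of how AR goggles might estimate how far a swimmer has drifted off the straight line between two buoys. It uses a simple flat-earth approximation, which is perfectly adequate over open-water swim distances; the buoy coordinates in the example are made up.

```python
import math

def to_local_metres(lat: float, lon: float, lat0: float, lon0: float) -> tuple[float, float]:
    """Convert lat/lon to metres east/north of a reference point (flat-earth approximation)."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))
    return (lon - lon0) * m_per_deg_lon, (lat - lat0) * m_per_deg_lat

def cross_track_error(swimmer, start_buoy, target_buoy) -> float:
    """Signed distance (metres) of the swimmer from the straight course line.
    Positive = right of the line when facing the target buoy, negative = left."""
    sx, sy = to_local_metres(*swimmer, *start_buoy)
    tx, ty = to_local_metres(*target_buoy, *start_buoy)
    course_len = math.hypot(tx, ty)
    # 2D cross product gives the perpendicular offset from the course line.
    return (sx * ty - sy * tx) / course_len

# Example: swimmer has drifted slightly east of a course heading due north.
start = (25.0800, 55.1400)       # (lat, lon) of start buoy
target = (25.0810, 55.1400)      # next turn buoy, roughly 110 m due north
swimmer = (25.0805, 55.14005)    # current GPS fix
print(round(cross_track_error(swimmer, start, target), 1), "m off the line")   # ~5.0 m right
```

In a real system the same signed offset could drive exactly where the virtual line is drawn in the swimmer’s view.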

 

BIKE & RUN – systems do already exist that provide AR for both cyclists and runners, with the Jet, from Recon Instruments, being one such system. A range of metrics, including the usual – speed, average speed, heart rate, power, distance – could all easily be projected in AR. With GPS technology and mapping one could have a new cycle or run route virtually projected in order to follow a new course or how about having a virtual running partner/ pacemaker running alongside or just in front of you, pushing you that little bit harder than you may otherwise train? The limits to the uses of AR in both bike and run settings are really only limited by imagination, with the technology rapidly catching up with the former.

Cycling, cycle training
Augmented Reality data during cycle training
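As a flavour of how simple the logic behind a virtual pacemaker could be, here is a small illustrative sketch (the function names and numbers are mine, not taken from any real product): it works out where a runner holding the target pace would be by now and compares that with the athlete’s actual distance, which is all an AR display would need in order to draw the pacer just ahead of or behind you.

```python
def pacer_distance(elapsed_s: float, target_pace_s_per_km: float) -> float:
    """Distance in metres a virtual pacemaker running exactly at target pace has covered."""
    return elapsed_s / target_pace_s_per_km * 1000.0

def gap_to_pacer(athlete_distance_m: float, elapsed_s: float,
                 target_pace_s_per_km: float) -> float:
    """Positive = athlete is ahead of the virtual pacer, negative = behind."""
    return athlete_distance_m - pacer_distance(elapsed_s, target_pace_s_per_km)

# Example: 25 minutes into a run targeting 5:00 min/km, having covered 4.85 km.
gap = gap_to_pacer(athlete_distance_m=4850, elapsed_s=25 * 60, target_pace_s_per_km=300)
print(f"{gap:+.0f} m vs pacer")   # -150 m: the AR pacer would be drawn ahead of you
```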

 

Cycling, AR, photo
Capture those awesome training and race moments without even having to look away. That’s the power of AR.
VR in bike & run – living in the UAE training outside in the summer months gets very testing, with any attempt at venturing outside in an athletic capacity after about 9am simply leading to guaranteed heat stroke. As such, the turbo trainer does get significantly more use at this time of year. It is, however, really dull! There are ways to engage the mind during such indoor sessions, from video-based systems such as Sufferfest and those available from Tacx.com, and of course the option of simply watching movies, but imagine how much more immersive and enjoyable an experience indoor training could be if it were possible to digitally export yourself fully to suitable setting. VR offers what even multiple screens can’t – full immersion! Training for a specific race? Fancy taking on a famous route but can’t spend the time and money travelling to the location? VR promises to solve these issues by taking you there. Again, there are companies working on this technology, with startups such as Widerun (www.widerun.com) pushing the envelope in this area.

Jumping into Augmented Reality

Augmented and Virtual Reality (AR & VR) both lend themselves to some very exciting applications in sports, especially those where data inputs in real time can be vital. Skydiving – one of my passions in life – is one such sport and here I shall explore where AR & VR might add to our enjoyment and progress in the sport.

In the interests of clarity, I shall just define what is meant by Augmented and Virtual Reality, terms that are becoming ever more part of normal lexicon and technologies that are set to redefine how we experience the world:

Augmented Reality: superimposition of digital data over the real world, thus adding a layer of additional information or detail over that which is seen in reality.

Virtual Reality: immersion in a fully digital world, such that users experience a computer-generated environment as if it were real. VR goggles allow users to see the simulated world, with or without other inputs, such as headphones or haptic devices to simulate touch; the principle of VR is to leave the real world rather than simply augment it.

 

Skydiving – there are many data inputs that are vital to a safe skydiving experience; the most important of them, and the areas where AR offers options to add to the experience, are:

  1. ALTITUDE – the most important bit of information for any skydiver. We currently rely on a combination of wrist-worn altimeters and audible altimeters. Personally, I am more of a visual person so having my altitude displayed in front of me in an AR fashion, with pre-set altitude alerts popping up where I simply can’t ignore them would be great.
  2. OTHER SKYDIVERS – one of the biggest dangers, other than running out of sky, in skydiving comes from others sharing the same airspace, especially when inexperienced jumpers are involved. Mid-air collisions can be catastrophic, especially if they occur at low altitude. Knowing exactly where other skydivers are, especially if they are within a certain proximity to you, is very important. We cannot be expected to have full 360-degree awareness at all times – we literally do not have eyes in the back of our heads – and so an alert system that automatically identifies other jumpers in the skies would be a great use of AR (a minimal sketch of such an alert appears at the end of this post).

    Skydiving AR
    Knowing who is sharing the skies with you, in addition to useful data such as remaining altitude, are examples of uses for AR in skydiving.
  3. JUMP RUN & WIND INFO – this would be of obvious use in training new skydivers in the basics of jump runs, winds aloft and the effect on their jump of winds, including adjusting landing patterns in response to changing wind characteristics. Experienced skydivers would benefit from such a system at new and unfamiliar dropzones or to revise core skills and competencies, perhaps after a period of absence from the sport.
  4. TRAINING/ COACHING – AR (and VR, especially for modelling of emergency situations) lends itself perfectly to the training of new skydivers and for coaching experienced jumpers in a range of disciplines. At present, new skydivers receive theory and ground schooling prior to their jumps, freefalling with a coach but then ultimately responsible for their own canopy piloting. Students who do need some assistance currently have to rely on audio instruction from a coach on the ground, who can only assess what he or she can see. What if the student could have the ideal flight path, including important prompts for how best to prepare for their landing, projected in front of them via AR? Important learning objectives would, I propose, be much faster to achieve and good practices established rapidly. The system could be taken a step further by enabling the ground-based coach to see exactly what the student is seeing via built-in cameras in the AR headset, thus significantly improving the accuracy and value of instructions to the student. Coaching uses could include real-time prompts on perfect body position for certain disciplines, such as tracking, and projected flight paths to aid flight accuracy. For example, following an AR line indicating a straight-line course in tracking would enable a skydiver to work on fine-tuning small body-position corrections, thus significantly enhancing progression in the sport.
Skydiving AR, landing
Canopy piloting and especially landing are vital parts of being a successful and safe skydiver. AR could really add to the effectiveness of training and safety for the sport.
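To round off the idea, here is a minimal, hypothetical sketch of the kind of check an AR altimeter and proximity system could run every second or so: compare the jumper’s altitude against pre-set alert levels and flag any other jumpers inside a chosen separation bubble. All names, positions and thresholds are illustrative only and not taken from any real device.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Jumper:
    name: str
    east_m: float    # position in a local ground-referenced frame, metres
    north_m: float
    alt_m: float     # altitude above ground level, metres

ALTITUDE_ALERTS_M = (1700, 1000, 600)   # e.g. break-off, deployment, hard deck (illustrative)
SEPARATION_M = 50.0                      # flag anyone closer than this

def altitude_alert(alt_m: float) -> Optional[int]:
    """Return the highest pre-set alert level the jumper has descended below, if any."""
    for level in ALTITUDE_ALERTS_M:
        if alt_m <= level:
            return level
    return None

def nearby_jumpers(me: Jumper, others: list[Jumper]) -> list[tuple[str, float]]:
    """List of (name, 3D separation in metres) for anyone inside the separation bubble."""
    hits = []
    for other in others:
        d = math.dist((me.east_m, me.north_m, me.alt_m),
                      (other.east_m, other.north_m, other.alt_m))
        if d < SEPARATION_M:
            hits.append((other.name, round(d, 1)))
    return hits

# Example: one jumper uncomfortably close, one safely clear, just below break-off altitude.
me = Jumper("me", 0, 0, 1650)
others = [Jumper("A", 30, 10, 1660), Jumper("B", 400, -200, 1500)]
print(altitude_alert(me.alt_m))      # 1700 -- the break-off alert level has been passed
print(nearby_jumpers(me, others))    # [('A', 33.2)] -- jumper A is inside the bubble
```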