With the hardware set-up and the software installed it was finally time to don the HTC Vive headset and enter my own VR for the first time. I am pleased to report that it was as awesome as I had imagined.
VR, as it currently stands, is a little like scuba diving: both require rather bulky, somewhat cumbersome equipment, yet the experience is instantly transformed once you are in the medium for which that equipment was designed. The headset, whilst attractively and elegantly designed, is undeniably bulky and has some noticeable weight to it. It also looks pretty dorky, if truth be told. No-one, I would posit, looks anything other than intensely nerdy wearing a VR headset. Still, as soon as the headset is placed on the head and the eyes drink in the rich graphics being streamed through the displays then, well, all that other stuff becomes instantly irrelevant. Like a scuba diver entering the water, entering VR brings the same perceived change of physical state. The weight of the headset was forgotten. The fact I looked like a dork was forgotten. I was in a new world. Sure, I knew I was physically still in my room, but then again I wasn’t. I was somewhere else entirely. That is the magic of VR.
The first experience of VR that any new owner of an HTC Vive will have is almost certain to be the tutorial. This takes place in a large white dome-like space, much like the O2 arena in London back when it was the Millennium Dome, complete with the echoey acoustic that one would imagine such a cavernous, empty space to possess. Whilst I was having an initial look around this new space I heard a voice from behind and turned to see a robotic sphere, complete with a central ‘eye,’ somewhat reminiscent of HAL in Kubrick’s 2001: A Space Odyssey but friendlier. It was this levitating robotic head that was talking and it quickly became apparent that he was to be my tutor.
What followed was a systematic yet thoroughly entertaining introduction to the fundamentals of VR, from the principles of the play-area boundaries to the various functions of the controller buttons, neatly demonstrated by means of balloons, fireworks and laser beams emanating from the controller in response to each button being pushed. The balloons were especially fun, as it quickly dawns on anyone going through the tutorial that one can interact with them, for example by batting them away as one would at, say, a music festival. One of my friends, whilst undergoing the tutorial for the first time, released a swathe of balloons that floated gently towards the roof of the dome before quickly switching his controller over to serve as a laser beam and proceeding to shoot the balloons in a digital, VR version of clay pigeon shooting! Classic!
With the basics of the Vive explained in what I can honestly say is the most engaging, memorable, fun and effective computer setup tutorial I have ever undertaken, one of the key values of VR, and indeed spatial computing, as a medium was apparent: being immersive and interactive, including physically, massively reinforces learning in a way that standard, screen-based tutorials simply cannot. Can you imagine how much more engaged and willing to listen to the helpful Microsoft Office paperclip you would have been had you been able to feel as though he were there in the room with you? If a simple start-up tutorial can have people grinning and feeling engaged then one can only begin to imagine the potential value of spatial computing for wider education. I have always believed that this technology will revolutionise learning and, having now been able to experience it first-hand, I am as convinced as ever. Going to school in the next five years is going to be awesome if the classroom experience embraces the power of this tech – it almost makes me want to go back myself!
The kit has arrived and you are one step – physical that is – closer to taking that first virtual foray into an exciting, immersive new world. It’s just a case of opening the box and getting going, right? Not quite.
It took me about a week to finally get into VR for real after taking delivery of my Vive, partly because I moved house, but also because setting up the system requires a reasonable chunk of dedicated time. I’ll run through the steps I took in a moment, but first it would be useful to recap what I actually needed to have in place before being able to enter VR:
VR Headset, Trackers & Controllers – I opted for the HTC Vive and ordered it online from the US via Amazon. In the box was everything I needed to get going, other than the powerful PC to run it all.
PC – VR is processing hungry and requires a top-of-the-line graphics card in order to render everything properly. More and more ‘VR-compatible’ packages are coming onto the market with each passing day but, in essence, I knew that I needed a gaming PC, as this was certain to have the grunt necessary to fulfil my VR aspirations. In the end I opted for an Alienware 13″ laptop – it was a brand I was aware of, even as a non-gamer, and a laptop offered the portability I wanted in order to take my VR set-up to other locations for demonstrations; not something that would be as easy with a chunky desktop.
The actual process for getting set-up and into VR involved:
Setting up the Lighthouses – in order to do room-scale VR at present, it is necessary to have a minimum of two scanning sensors, positioned roughly opposite one another, so that the computer can track the headset and controllers and define a virtual “play space.” It is this process that ultimately took the longest to achieve, principally for practical/ DIY reasons rather than technical ones. The sensors that come with the Vive are known as Lighthouses and were significantly bigger, and heavier, than I had first imagined. I’d figured I would be able to hang them from the wall using a picture hook, a pretty quick and simple job. When I examined them, however, I discovered that they weighed a fairly decent amount and had no hole or the like from which to hang via a hook. Besides, realising how important they were to the entire virtual experience, the idea of hanging them loosely on the wall lost its appeal.
Each had two threaded sockets to allow it to be screwed onto a camera mount, like the one you would use to attach an SLR camera to a tripod. With one on the back and one on the base, there were two options for how I might mount mine. Included in the box were two pivoted brackets intended to be screwed to the wall, a step that would necessitate drilling holes in the wall of my room. In spite of my landlord initially saying it would be fine to do so, he seemed a little less keen when I broached the subject again at a later date, and the fact was that I didn’t readily have access to the necessary tools. That, and the concern about drilling into walls where I had no idea about the location of power lines – receiving an electric shock would not be a great introductory step to VR! I also wasn’t certain about the optimal location for the two Lighthouses and felt that figuring that out might be a smart move before committing to making holes in the wall. Finally, I wanted to retain the option of moving the system easily, for example by taking it into work to demonstrate VR to my colleagues, so a temporary yet similarly stable solution was preferable. This set me off on a research effort.
Various options were considered and promptly scored off the list. These included mounting the boxes via heavy-duty Velcro attachments (not reliable enough); setting up tall camera tripods (too wide a footprint to be practical in a limited space); and using GoPro handlebar mounts to attach the Lighthouses to curtain rails (I did, ultimately, do this for one of them). One option that seemed to be getting a lot of attention online was that of using adjustable support beams (see an example here), which have the advantage of being easy to position, set up and move again if necessary, as well as being secure. Coupled with pole grips like the aforementioned handlebar mounts for cameras, this idea certainly appealed. The issue, however, was in trying to source said poles. Nowhere I looked in Dubai seemed to have what I was after, and once again I looked to the internet. Amazon had what I thought I wanted but, being fairly expensive (about $50 each) and pretty bulky, I wasn’t sure if I could even get them delivered.
Desperate to actually get going, I even looked into whether it was essential to mount them in the first place. According to one video blog on the topic, the Lighthouses could scan and track adequately even whilst placed on the floor. This, I realised, was not a practical medium- to long-term option, and getting them at the suggested ‘above head height’ was still preferable. In the end I actioned what I intend to be a temporary solution: one I attached via a GoPro handlebar mount to the end of one of the curtain poles in my room – thankfully the power cable just extended enough to permit this – and the other I positioned on top of a tabletop mirror that happened to be as wide as the Lighthouse itself. In a bid to reassure myself that it was moderately secure, I used some sticky tack to plant the base onto the surface a little more firmly than it might otherwise have been. I wasn’t entirely certain this positioning would work, as the second station was sitting horizontally rather than angled down towards the floor, and I wondered whether this would adversely impact its ability to scan and therefore track me correctly in VR. I needn’t have worried.
With the Lighthouses positioned, powered and synched with one another (wirelessly and automatically) it was now time to fire up the headset and controllers.
Connecting the Headset to the PC – whilst there are now systems available that allow for un-tethered headset connection, the vast majority of VR newbies will, like me, have their first experience via a fully tethered system, meaning that the headset is directly attached to their computer via a long cable. With the Vive this cable has three components, all of which connect to the PC via an intermediary little box (included with the Vive): one plugs into an HDMI port, the second into a USB port, and the third into a power outlet. A port at the back of the headset allows a set of headphones, or ear buds, to be plugged in – sound is a pivotal component of the immersive experience of VR – and the Vive comes with a simple set of ear buds included. I, like most, however, have ultimately opted to spend a little more and get a decent pair of headphones.
Switch on the PC and Set-Up – setting up the Alienware laptop itself was simple enough. These days computers pretty much come out of the box ready to rock and roll so I have skipped a description of that stage. With the headset plugged in it was now time to download the Vive software, via the Vive.com website. This very intuitively guided me through the set-up process, including checking that the Lighthouses were scanning correctly – they were 🙂 – and that both the headset and controllers were being tracked – they too were working well.
Once the system had established that it could see both the headset and controllers, I was walked – literally – through the process of setting up the ‘play area,’ the term for the space in which I could safely immerse myself in VR without tumbling into and over furniture and the like. Unbeknownst to me until then, my previous room was simply not big enough to meet the minimum floor space requirement for a play area, a fact that would have severely pissed me off had I discovered it the hard way. As I say, I had thankfully just moved house and it turned out that my bigger room had just the right amount of ‘spare’ floor to permit a VR play area. Phew! To define this area, I was prompted to take one of the Vive controllers and sketch it out in mid-air. That is, I actually walked around the area in question whilst pressing the trigger of the controller to, in effect, draw an invisible chalk-line around the perimeter of my VR area. I had to repeat this process a couple of times, as I was initially just shy of the minimum area requirement, but once it was done I could see on the computer screen a digital rendering of the outline of my safe VR play space.
Download SteamVR – Steam is the platform through which VR experiences, games and the like are made available, so I needed to download and install it in order to use my system. Again, much like installing the Vive software, the process was painless. Once Steam was installed and I had created an account and logged in, it was time to finally don the mask and enter my very own VR for the first time…
“The great aim of education is not knowledge, but action.” These words, spoken by the philosopher Herbert Spencer, ring true and can, in my opinion, also be applied inversely. That is to say, action delivers great education. For far too long the accepted model for delivering knowledge and training professionals such as vets has been to sit them all down in a lecture hall, drone on at them for hours on end, demand that they go off, read, write the odd essay and complete the occasional project, and then ask them to cram all of that supposed knowledge into their brains ready to regurgitate at will during the course of an exam or nine.
Granted, there are also practical elements to most of these programmes, whether it be dissection, physiology labs or animal handling, but the bulk of the training has always been delivered in much the same manner: didactic instruction. For some this approach works and they go away retaining everything they have heard. For most, however, myself included, it represents a dated and unbelievably inefficient method. Hence the need to condemn weeks to tedious, stress-induced revision before the big assessments. I always found it much easier, way less stressful and frankly more fun to learn by actually doing, seeing, touching or otherwise interacting with the subject matter at hand. Most of what I recall from anatomy training, for example, are the random little moments in the dissection lab when I was physically holding a specimen and examining it. I can’t for the life of me recall a specific moment when I turned to a textbook page and had a piece of knowledge stick in perpetuity.
Whilst it is acknowledged by many educators that practical instruction has better outcomes in terms of understanding and long-term knowledge and skills retention, the fact of the matter is that preparing and delivering a lecture is significantly cheaper, quicker and easier, and the results of that labour can be shared far more widely than a practical session. In terms of resources, acquiring digital photos, videos and other screen-based media is far less costly and labour intensive than drawing together and delivering a tangible, practical learning tool, such as an anatomy specimen. Some of these barriers, I believe, are now finally being lifted, and the gap in costs – in time, effort and direct financial outlay – between the old and the digital new is narrowing. The implications for education and training at every level of schooling, from kids’ first school experience right through to professional CPD (continuing professional development), are profound, and I wish to explore why I believe that to be so.
Mixed Reality & Virtual Reality
I first experienced both mixed reality and high-end virtual reality in 2015 and again in 2016 when I volunteered at the Augmented World Expo in Silicon Valley. The power of both technologies to fundamentally change how education outcomes are achieved and training delivered was clearly evident and left me convinced that the future of medical, including veterinary, education was in the application of these new immersive tools.
In 2016 I was fortunate enough to be at one of the conference parties where someone happened to have two Microsoft HoloLens headsets and was demonstrating them to the small crowd of curious nerds that had gathered around him. Well, I was one of those nerds and before long had the pleasure of donning one of the sets and so was introduced to the wonders of true mixed reality.
Much like a small welding mask, in both look and feel, the HoloLens is essentially a set of transparent screens that sit in one’s field of view by means of the head straps that keep the device in place. Whilst not especially comfortable, and certainly not something anyone is going to be in a rush to wear out in public on account of looking, frankly, ridiculous, the experience it delivered was compelling. With the use of a simple gesture, specifically an upward ‘throwing’ movement, a menu popped into view, suspended perfectly in mid-air and crystal clear, as if it were right there in the real world in plain sight of everyone around me. Of course it wasn’t, and the only person able to see this hologram was me. Selecting from the menu was as simple as reaching out and ‘touching’ the desired option, and within seconds a holographic representation of the Earth was spinning languidly before me. I could ‘pick up’, ‘move’ and otherwise manipulate the item in front of me as though it were a physical object, and if I did move it, for example off to the right, out of my field of view, that is precisely where it remained and where I found it again when I turned back round. The human body application was similarly cool, as I was able to explore the various layers of anatomy through interaction with a highly rendered hologram. Whilst comical for onlookers not wearing a HoloLens, as I appeared to be pawing away at thin air like someone suffering a particularly lucid acid hallucination, the thrill of what I was actually seeing and engaging with allowed me to ignore my daft appearance.
What are the medical education applications for such mixed reality technology? Whilst holographic representations of anatomy are, at first, a magical phenomenon to experience and a pretty cool party piece, it is the fact that mixed reality merges realistic holograms onto the real world – or so it appears to the user – in contrast to virtual reality, which replaces the real world with an entirely digital one, that lends itself to unique educational applications. Anatomy instruction that accurately overlays and tracks deeper layers in real time onto a real-world physical specimen, enabling students to understand the wider context in which various anatomical structures sit, is a far more compelling and useful application of MR than a simple floating graphic. Similarly, surgical training involving holographic overlays onto a real-world, physical object, or combined with haptic technology to deliver tactile feedback, offers the potential to provide programmable, repeatable, easily accessible practical training with minimal expense and zero waste, there being no need for physical biological specimens.
Imagine: a fully functional and resourced dissection and surgical training lab right there in your clinic or home, all at the press of a digital button. Imagine how confident you would become at that new, nerve-wracking surgical procedure if you had the ability to practise again and again and again, physically making the required cuts and placing the necessary implant, able to make the inevitable mistakes that come with learning anything new but at zero risk to your patient. Being able to step up to the surgical plate for real and carry out that same procedure you have rehearsed and developed refined muscle memory for, feeling the confidence of a board-certified specialist with years of experience, all without having put a single animal at risk – that’s powerful. That’s true action-based education at its most compelling, and it is a future that both VR and MR promise.
I predict that the wide adoption of graphically rich, immersive and realistic digital CPD programmes, through both VR and MR, will result in a renewed engagement of professionals with CPD training and ultimately lead to more confident, skilled, professionally satisfied and happier clinicians. I, for one, know that were I able to complete practical CPD by simply donning a headset and loading up a Vive or HoloLens experience from the comfort and convenience of my clinic or home, all whilst still being able to interact in real-time with colleagues both physically present and remote, my CPD record would be bursting at the seams. That has to be a great thing for the profession, our clients and society in general.
After learning as much as possible about all things VR over the past couple of years, I finally took the plunge and invested the significant sum required to become ‘VR Enabled.’ This involved purchasing a high-end VR system, which meant deciding between two competing devices: the HTC Vive, which sits on the SteamVR platform in terms of content provision, and Facebook’s Oculus Rift, the original poster child for VR, with its own online VR content store. I had visited Upload VR in San Francisco back in June 2016 fully expecting that the Oculus Rift was the device I wanted, but was totally won over by the Vive, especially once it became clear that the Vive could do room-scale VR whilst the Oculus was only offering a seated experience and had not yet launched its Touch controllers.
The three experiences I enjoyed with the Vive (Google Tilt Brush, Universe Sandbox, and WeVR’s The Blu) had me grinning from ear to ear from the moment I donned the headset, and I spent hours crouching, circumnavigating, exploring and generally loving the immersive world into which VR had placed me. It was easy to forget that I was still stood in a room next to a PC and not in each of the magical playgrounds that I sequentially jumped into. You can read about my experience at UploadVR here.
I would’ve rushed out immediately and got myself set up with a VR set were it not for the considerable outlay of moolah that doing so requires, and so I did what any good procrastinator does when it comes to making big decisions: I procrastinated and analysed the shit out of it!
So what made me finally decide to make the investment and jump into VR? A number of contacts I have made in the VR community over the past two years have all said the same thing: to truly understand VR and its potential, it is important to actually experience it and become familiar with it. One VR expert pretty much told me to get a headset and spend a minimum of 100 hours in it! Like anything in life, whether it be language learning or training for a big sporting goal, full immersion is usually the best way to learn, grow and generally get good at whatever the activity is. Coupled with reading an advance copy of Robert Scoble and Shel Israel’s book, The Fourth Transformation, which focuses on the rapidly growing sectors of virtual, augmented and mixed reality, this collective advice made me realise that if I truly wanted to understand VR there was only one thing I could do.
Ordering… then Waiting
Decision made. Credit card in hand. Where and how to get hold of my own VR set-up? As much as the Oculus Rift has come on leaps and bounds, especially since the launch of the Touch controllers, which are, according to the reports, much better than the Vive’s, the fact that the HTC Vive comes as a complete package and does room-scale VR straight out of the box, combined with my knowledge of some of the applications available for it, meant I headed to the Vive site and attempted to place an order. Unfortunately there was no way to order easily from Dubai, and the only way to order directly from Vive was with a US credit card. So no go there unless I could persuade a US friend to do me a huge favour. Amazon, however, saved the day: I was easily able to order via their international site using my UAE credit card and arranged to have the order delivered via the Aramex Shop & Ship courier service. One concern I had after placing the order was whether I had inadvertently been duped into purchasing a convincingly presented fake. I had visions of taking delivery of a cheap knock-off and the headaches of then having to get my money back. Thankfully I need not have worried, as the genuine article duly arrived about a week later. Stage 1 complete, at about $800 plus $160 delivery. Now all I needed was a computer capable of powering the device and I would be on my way.
Sadly, neither of my Apple computers is even close to being up to the task of powering a high-end VR system, so I knew I was going to need to fork out for a PC with a powerful graphics card. I had already looked around the stores in Dubai and found some pretty impressive machines on offer, all with equally impressive price tags. Convinced that I would be able to find a similar offering online, albeit significantly cheaper, I began doing some research. This is where I found it difficult to make a decision: what do I go for that would enable me to experience VR and be relatively future-proof, without going insane and spending many thousands? I am not a gamer and so have no real experience or knowledge of what makes for a decent gaming PC. What I did know, however, was that Alienware was a brand I had heard of and knew to be big in the gaming world. I soon discovered via their website that they are part of Dell, and so set about exploring the models on offer.
One of the first questions to answer was whether I wanted a laptop or a desktop. I knew that I wanted the portability of a laptop so that I would be able to take the system into work, or to friends’ houses, but also knew that desktops offer a much easier time of it in terms of future expansion, in addition to generally being cheaper to purchase. After much thought I eventually opted for a laptop, going for the compact yet graphically very powerful Alienware 13” R3, with an Intel Core i7-6700 processor (a very powerful one, I was informed) and NVIDIA GeForce GTX 1060 graphics card. With added anti-virus software and UK VAT (I ordered via the UK site after clarifying with online customer support that I could a) use a UAE credit card and b) have the order sent to me via a UK routing address, Shop & Ship again), the total cost was about $2180. Unfortunately I was also liable for UAE Customs when the laptop arrived (eventually) from Dell. I did question whether, with all the extra costs, it might have been better to buy a similar machine here in Dubai, but came to the conclusion that it was still more cost effective to tread the path I did.
One note of caution for anyone looking to order overseas and have their computer delivered to them: bear in mind that there might be unexpected issues. In my case, in spite of clarifying at the order stage that I was able to order using a non-UK card and have it sent to me via a courier routing service, shortly after confirming – and paying for – the order I received an email from someone in accounts urging me to contact them within the next 24 hours, as I had to confirm this, that or the other. This is where Dell fell down badly, as trying to get in touch with the person in question, in spite of the urgency of their message, was impossible. I did eventually manage a halting email exchange, the message being that my order could not be finalised because I could not verify the delivery address. I politely but pointedly highlighted the fact that I had asked right at the outset whether any of the specifics of ordering from out of country were going to be problematic and was assured that they would not. Now I was being told they were. How to frustrate and piss off a customer! Long story short, I sought further clarification, advising Dell that if it was an issue then they simply had to refund my payment and cancel the order entirely, and I would either find another way of ordering from them or buy said computer locally. Thankfully I was able to get some kind of clarity via a very nice customer service rep who chased up my case and eventually confirmed that my order could, after all, proceed. Fast forward a couple of weeks and I had confirmation that my order had left China, where it was being built – ?! Why then was I a) paying UK VAT, and b) having the item delivered to me via the UK when it more than likely touched down in Dubai en route to the UK anyway?!
Still, I was now, finally, in possession of the necessary tools to enable me to enter VR.
Spatial computing, once a preserve of science fiction, is slowly but surely creeping into real life, and whilst a number of companies are working on industrial applications of Augmented Reality (AR) using true headsets or glasses, there is not yet a convincing consumer solution to herald the age of smart glasses.
The promise of AR and of smart-glasses is to seamlessly overlay digital information onto the real world such that this information adds to the experience. There are myriad potential applications where such a capability might prove either useful or just entertaining. For example:
Video calls – speak with a person on Skype or FaceTime (other video chat applications available) as though they were literally standing/ sitting there in front of you, in realistic hologram form.
Educational experiences – visits to galleries, museums or even city tours would be so much more entertaining and interesting with the ability to see projections of artists, historical figures or scenes played out in front of our eyes as though they were happening live. A visit to a famous battle scene or, for example, Stonehenge would be a richer learning experience if the subjects of our learning were walking about around us. How much better would we relate to our history if we could see, with our own eyes, such histories played out on the current world? Would it lead to a greater sense of the important lessons of history and reduce the risk that we repeat the same errors – a concern that holds particular resonance at this time of political uncertainty in the world?
Navigation – whether it be in a car or walking about an unfamiliar city, staring at a screen has its obvious disadvantages. Contrast that with seeing clear directions mapped out onto the real world in front of our eyes, negating the need to take our focus off the real world. This will be further enhanced by the use of real-time translation, such that foreign road signs are automatically presented in their translated form.
What AR experiences are already available?
Most of us will have first heard of or experienced AR through social apps such as Snapchat, whose filters allow for some silly but otherwise fun effects to be added to live video, such as the addition of rabbit ears and a nose that respond and change in real-time with our faces. Others might have used an AR app to scan a physical marker in, say, a magazine and seen a digital object, such as a movie character, materialise on our screen but viewed as though they were there in the real world. Companies such as Blippar do the latter and have a thriving business in using AR for brand marketing.
Who is doing interesting things in AR?
There are a number of companies working on AR, whether it be via smart glasses or the screens with which we interact with daily, such as tablets and phones. As already mentioned, social media is likely to be one of the first experiences of true AR that many of us have and it isn’t just Snapchat playing a role. Facebook are also players in this market with their purchase, this year, of the AR start-up MSQRD, whose technology does much the same thing as Snapchat’s. The technology behind such whimsical entertainment is actually pretty exciting and you can learn more about it here.
Aside from marketing, social media and entertainment, other major applications for AR are in industry and education, with a few vet schools even dabbling with the technology.
Form Factor… The Big Issue
As much as I am truly excited by the promise of AR to revolutionise how we interact with digital information, form factor is still, for me, THE biggest issue. Until we move beyond the bulky, cyborg-esque headsets, which feel akin to wearing a welder's mask, to lightweight, stylish eyewear or, preferably, a completely off-body solution, wider adoption of this tech will be slow. At present, the most accessible and reliable way for the vast majority of us to engage with AR is via our phones and tablets. In other words, handheld screens with cameras attached.
Phones work because they do some incredible things for us, work the same way regardless of personal factors, and are situationally flexible (i.e. they function much the same whether you are at home, at work or out and about in a sporting or outdoor setting). They also have the advantage of being discreetly kept on one's person if necessary, a feature that an expensive pair of smart glasses clearly lacks. In areas where openly advertising the fact that you have a powerful – and valuable – computer on your person would be ill-advised, it is perfectly possible to keep a phone hidden and access necessary information via other, more discreet means, such as a smart-watch. Wearing a pair of smart glasses, especially in their current form, would create not only some degree of social stigma, as was seen with Google Glass, but also a personal risk of theft, as one would effectively be advertising possession of a very valuable piece of personal computing equipment.
What of the issues pertaining to eyesight? I personally need corrective lenses, whether in the form of contacts, which I can't stand wearing for very long and which do little to really improve my eyesight anyway, or spectacles. What solutions do smart glasses have in store for users such as me? Will I be forced to wear contacts whenever I want to use my smart glasses? Or will I need to make an additional investment to install corrective prescription lenses, instantly increasing the overall cost of adoption and the complexity of the product, and making the device trickier to resell when it comes time to upgrade? I wouldn't be able to easily share or lend my device to others unless they too shared my prescription – unless, that is, the lenses automatically corrected for the current user's eyesight. Maybe that's the key?!
Then there are situational factors governing ease of use. I can currently use, or otherwise carry, my phone in virtually all circumstances; the design and form of the technology gives it this flexibility. At work it can remain in my pocket, ready should I need to quickly use the camera, search an ebook or look up information online, whilst during exercise, such as on my bike, I can easily carry it in a sports pouch and enjoy music and other services, such as GPS tracking and metrics apps like Strava. Paired with a smart-watch I can also interact directly with the device, accessing key performance data, all in a comfortable manner that the device is designed to cope with easily. Smart glasses, on the other hand, do not seem to be as flexible. For example, I doubt that I would want to wear the same style of smart glasses at work, interacting with clients and colleagues, and with the constant risk of getting blood etc. on them, as I would whilst training, when the need is for eyewear that is sporty, aerodynamic, lightweight, sweat-resistant and aesthetically quite different. Personally, I even keep different styles of sunglasses depending on the situation in which they are worn: my everyday, casual pair is totally different to my sports/training/racing pair. Would I need several different pairs of smart glasses to achieve the same result? I only have a single smartphone and can use that in all of the settings mentioned.
Then there is the issue of social stigma and resistance to smart ‘facial-wear.’ Nerds get why people would want to wear a computer on their face – I am one of them. But as Google Glass demonstrated when it was first released, the wider public are generally suspicious of, and occasionally outright hostile to, the idea. Is it simply that wearers of such devices look alien and so instantly stand out as different? Is it that people know such devices include cameras and so fear the perceived invasion of privacy that comes with being surveilled, even though we all carry smartphones with incredibly powerful, high-resolution cameras that capture content constantly, and may well be recorded multiple times per day by other users without even being aware of it? In fact, unless you live in a rural area, it is highly probable that you are already being constantly recorded, such is the pervasive nature of CCTV. And yet we are collectively fine with this whilst being instantly suspicious of a person openly wearing a recording device in the form of smart eyewear.
This will need to change before smart glasses become universally accepted as ‘normal.’ A really interesting historical point was made at this year’s AWE (Augmented World Expo) by one of the speakers, who described how, prior to the First World War, wristwatches were generally considered pieces of women’s jewellery and men typically carried a pocket watch. Any gentleman wearing a wristwatch would thus have been stigmatised. That changed during the war when, owing to the practical constraints of the battlefield, having a timepiece that was easily accessible, lightweight and hands-free was a big advantage. As a result, officers sported wristwatches and continued to do so upon returning from active duty. The comic suggestion was that no-one in their right mind would have ridiculed a battle-hardened soldier for wearing a piece of ‘jewellery,’ and so before long tastes changed and wearing a wristwatch became the accepted norm we know today. Will the adoption of smart eyewear follow the same path? Who will lead the way in changing public opinion? Will it once again be soldiers, after perhaps first experiencing smart glasses in the military, or sports stars perhaps? Regardless of who ultimately leads the change in opinion, there first needs to be a compelling reason why smart glasses are preferable to sticking with the good old smartphone, and it is this that I cannot quite yet see.
If No Smart Glasses, Then What?
If smart-glasses, in the typical spectacle form, are not the answer then what could the future of AR look like? To answer this it is worth considering our experience of AR in two different contexts.
Fixed Position Interface
As we have already discussed, AR is already experienced by many of us via traditional screens, with the augmented content overlaid onto the real world as long as we view it through the screen itself. As such, any context in which a transparent surface is involved lends itself to AR. Obvious examples include driving, where our view of the world outside the vehicle is through just such a transparent ‘screen.’ Companies such as BMW have already explored this idea, for example with the Head-Up Display that shows important journey and vehicle information ‘on the windscreen’ so that the driver need not take their eyes off the road to benefit from such data. Navigation is another very obvious application of this concept, with drivers ‘seeing’ the route mapped out on the road and surrounding world without having to divert their gaze to a separate screen. Imagine how much less likely you would be to miss that rapidly approaching highway slip-road if you could ‘see it’ in advance as a change in colour of the road in front of your very eyes. Once fully-autonomous driving truly arrives, the very same vehicle ‘screens’ that previously kept us informed of important driving information will give themselves over to becoming entertainment or productivity screens.
Other settings in which screens (as in what we currently think of as windows or transparent barriers) are currently employed and which promise to provide AR interfaces in the future include places such as zoos, museums, shop windows, or even our very own home windows. Basically anywhere that a transparent ‘screen’ could be found.
Until we somehow come up with a reliable, safe method of beaming AR directly into our brains, the most obvious current alternative to smart glasses is the smart contact lens. There are groups working on this very piece of science fiction, with Samsung having patented a design for one, although the power and processing would come from a tethered smartphone, making it more of a smart screen than anything else. I have already voiced my own personal objections to contact lenses and cannot see how adding hardware, however small, to them is going to overcome their obvious shortcomings. Assuming for a moment that the visual effect is staggeringly compelling, with beautifully rendered digital content seamlessly added to the world as if it had always been there, designers are going to need to solve the following problems before we all don contact lenses:
comfort – many people either find them out-and-out uncomfortable or can only stand wearing them for short periods of time.
ocular health – in some professions, especially medical, ophthalmologists recommend daily disposable lenses as, on balance, they are a more hygienic option compared with longer-term-use products. Will smart contact lenses be cheap enough, and will it be socially and environmentally acceptable, or even sustainable, to dispose of our high-tech lenses each day? What of the potential health issues associated with having a heat-generating, signal-transmitting/receiving device in direct contact with our eyes? Do we know what health risks, if any, that might present?
cost – whilst not especially cheap, I do not get too upset when I have to sacrifice a pair or two of contact lenses in a single day, either because some debris makes its way onto the lenses and renders them uncomfortable or because my eyes just need a break. I would be far less quick to whip them out, however, if they had cost me a significant sum, and if forced to then I would resent having done so.
tethering – whilst not a major issue, having to keep a smart-phone in close proximity for such lenses to work as desired does somewhat dilute some of the real magic and potential of a truly untethered AR experience.
Whilst the future is one in which Augmented Reality is definitely going to be HUGE, with companies such as Meta, Magic Leap and Microsoft (with the Hololens) creating some truly incredible technology and experiences that defy conventional belief and result in childish grins from anyone who tries them, there are still some significant and fundamental obstacles to overcome. Form factor is, I believe, one of the key issues that pioneers of this technology are yet to crack but when a compelling solution is found then, well, get strapped in and prepare for a technological shift the likes of which come around but once in a generation!
The Upload Collective is a co-working space in San Francisco for those working in the rapidly growing, exciting, immersive field of Virtual Reality (VR). It offers access to like-minded people and mentorship from some of the industry’s leading thinkers, successful entrepreneurs and financiers, in addition to shared resources, such as VR headsets, that help minimise the costs of launching a start-up in the space. It is also just good fun: a cool place to hang out, with interesting, exciting people who all share a common passion.
Why Did I Visit?
I am deeply fascinated by VR, and indeed spatial computing in all of its forms, seeing it as the next logical step in our move towards ever more immersive digital interactions and intuitive computing, one that promises to change every facet of how we create and interact with content. From healthcare to learning to entertainment, spatial computing is changing, and will continue to change at an ever greater rate, how we work, learn and play. I was aware of UploadVR from my time at AWE (Augmented World Expo) in 2015, where I volunteered in a bid to connect with and learn more about both augmented and virtual reality. Hooked in an instant, I have continued to follow UploadVR as a source of industry news and decided that during my next trip to the Bay Area I wanted to visit and see first-hand what they were doing in the city. A LinkedIn message to Taylor Freeman, co-founder of UploadVR, later and a date was set for me to head on over and talk all things VR. In addition to meeting the people involved and seeing for myself what was going on at the Collective, I also really, really wanted to physically experience high-resolution VR. I had been able to try out a few VR experiences at AWE last year, but since then both the Oculus Rift and the HTC Vive had been commercially released, along with a plethora of incredible experiences to accompany them. I was still trying to decide which system to invest in, and the only way to really know for sure is to try them and garner the opinions of industry leaders, right?!
What Did I See & Who Did I Meet?
After having to rearrange the meeting on account of the Memorial Day holiday in the US, I headed round the corner from where I was staying in San Francisco to the Upload Collective’s space on Mission for my early meeting with Taylor. Walking into their first-floor space, the first thing that struck me was how light and airy the place felt, with all of the casual cool that one naturally associates with a technology start-up. A large central co-working space, with a well-equipped kitchen at one end plus comfortable sofas and the obligatory bean bag, was fringed with a number of separate rooms containing various computers, whiteboards and all the other stuff one might need to create the future of immersive technology. One room, much bigger than the rest, contained a whole load of studio equipment and green screens, used for creating VR showcases in which people not wearing a headset can still feel immersed in what the user is experiencing. This is still one of the biggest hurdles for VR to overcome: how can you get people truly excited about the technology and experience without, well, actually physically donning a headset? It is the biggest marketing issue that VR has, and despite efforts by Google, and by third parties such as the New York Times, which gave away millions of Google Cardboard headsets to readers, to introduce people to the wonder of VR, it remains the case that in order to really “get VR” it is vital to “try VR,” especially the high-end devices and experiences. Work being conducted at the Upload Collective is aiming to tackle this very challenge.
Other rooms, and the ones I instantly had my attention drawn towards, were the VR rooms themselves. Devoid of furniture, blacked out and foam-lined, with a powerful gaming PC and various pieces of VR equipment sitting on hooks at one end, these are where the magic happens, or rather where it is experienced.
Given the fact that it was a) early and b) the day after the holiday weekend, there were not very many people in when I visited, and so I daresay I didn’t quite get the full impression of the energy that would normally course through the space on a typical day.
I met Taylor, who promptly offered me my first caffeine hit of the day courtesy of the shared espresso machine, and we sat down to talk about how UploadVR came about, Taylor’s own background and path into the space and plans for Upload Collective, including their collaboration with Make School, situated just next door, on a course for budding VR developers. You can read a little more about UploadVR here.
The second person I met was Avi Horowitz, Intern at Large at Upload, who was kind enough to get me set up on one of the Collective’s HTC Vive headsets and launched me into the first of several incredible VR experiences, Google’s amazing 3D art program, Tiltbrush.
What Did I Do?
Needless to say, the time I spent in VR whilst visiting the Upload Collective was the most fun I have had in a very long time and was, without doubt, one of the highlights of my visit to San Francisco. Right off the bat I was hooked, with Google’s Tiltbrush proving the perfect introduction to the magic of high-resolution VR. I will do my best to describe what I experienced but, as with any attempt to do VR justice in a medium other than VR itself, it may not hit the mark.
As soon as I donned the headset I found myself standing in a blank, flat landscape, fringed with stars on the horizon and a beautiful night sky. Avi, with a simple selection from the menu, changed this setting such that I now found myself standing in the middle of space, surrounded on all sides by stars. Magical! However, this was nothing compared to what was to come next. Using the two controllers supplied with the Vive, I had all the tools of a master artist, with my left serving as a rotating smorgasbord of art options and my right as the main tool. With a simple ‘laser light’ tool selected I started drawing in the void in front of me. Yes! Drawing right there in space! This simple action might not have been that impressive on a 2D surface, such as a graphics tablet, but the fact that I was laying down graphics in 3D, such that I was able to walk towards, through and around them, made the entire experience a revelation. Much as I imagine Michelangelo would have felt on discovering the power and potential of clay as a medium for artistic expression, I felt the same thrill and joy at just what was now possible in this medium. A childish grin the size of the Cheshire Cat’s instantly spread across my face as I quickly learnt how to select different tools, colours and effects, and with all the enthusiastic urgency of a toddler at play set to creating my ‘masterpiece.’ The fact that what I was drawing/building/creating was nothing more than formless nonsense was immaterial. What was important was just how addictive, immersive and unique the experience was. I cannot even imagine a child not becoming deeply fascinated by art and the process of design and creation using such a powerful yet intuitive tool as VR.
As a medium for limitless artistic expression it is unrivalled, and for anyone professionally involved in design, from architects to product designers, being able to walk around, through and view your creations from any and all angles surely renders the lowly drawing board redundant. It is testament to how incredibly fun this one VR experience is that I spent about an hour playfully immersed in it, and the fact that I was then able to record what I had created, and thus take it away with me, provided the cherry on the big VR cake.
Other experiences were just as powerful. Universe Sandbox enables users to literally ‘play God’ by creating their own galaxies and the like, with celestial bodies even adhering to the laws of physics. WeVR’s incredible theBlu saw me standing on the bow of a sunken ship, surrounded by incredible reef life, as a whale slowly swam out of the depths and passed within touching distance, allowing me to look the beautifully rendered animal in the eye, and it into mine. The scope for becoming utterly and entirely lost in VR was limitless. This latter experience really helped solidify my view of VR as an incredibly powerful empathy generator, with evidence backing up the idea that immersion drives empathy, and empathy drives understanding and action. Can you think of a more powerful framework for effecting real educational outcomes? I can’t. VR enables users to experience first-hand, albeit in a digitally-rendered simulation, the experiences of others, and to put people in situations that they would otherwise not be able to experience easily or at all. Want to understand what it is like to live in a Syrian refugee camp? Within’s ‘Clouds Over Sidra’ achieved this very thing. What about experiencing life on the streets? Upload created such a VR experience, ‘A Day in the Streets,’ to help educate through empathy on the plight of San Francisco’s homeless population. I can imagine the same approach being applied to simulate the life of a stray dog or cat, or perhaps to show what the journey from being owned to being abandoned might ‘feel like,’ in order to drive empathy and make people think twice about taking on a pet when they are not truly committed to providing a home for life. The potential is limitless and the effect of VR truly impactful. Just ask anyone who has donned a headset themselves.
Even though I spent just a few hours at the Upload Collective, they were fascinating, fun and insightful. I could not help but feel as though I was at the epicentre of an exciting new movement in technology, all whilst standing in the undisputed centre of the tech universe that is San Francisco. I look forward to getting more and more involved myself and to seeing where we are all headed with spatial computing. As virtual as much of the content is, the effects are very real indeed.
I have recently returned from my latest trip to what rapidly feels like my second home: California, and specifically the Bay Area. Ever since my first visit to see some friends several years ago I have felt drawn to the area, in no small part because it is ‘tech Disneyland’ to the small, nerdy kid nestled at my core. It was almost a no-brainer, then, that I chose Lake Tahoe as my first Ironman race, oblivious at the time to the fact that it was THE hardest race in North America and that it would end up being a two-year odyssey! (read about the race here) With the tech theme in mind, it was to Silicon Valley that I headed last year when I wanted to learn more about the exciting and rapidly developing fields of Augmented Reality and Virtual Reality, collectively termed spatial computing. I even visited, and subsequently applied to, the MBA program at the Haas School of Business at UC Berkeley. All in all, I am a big fan of the state of California, San Francisco and the Bay.
This most recent trip was principally to attend the same conference on spatial computing that I both volunteered at and attended in 2015, AWE (Augmented World Expo), albeit with some additional time tacked on for R&R and further nerdy activity in San Francisco itself. This included checking out Make School, one of the many ‘coding schools’ present in the city (although they do some hardware stuff as well), and spending time with Adam Braus chatting about the school, coding, start-ups and virtual reality (VR).
Talking of VR, I was fortunate enough to also visit the Upload Collective and speak with co-founder Taylor Freeman about the excitement surrounding a technology that finally feels as though it is meeting previously unmet expectations. One of the real highlights of my visit was getting to experience VR myself – not for the first time, mind, but certainly the most extensive and impressive experience of the technology I had had to date – jumping into several incredible HTC Vive experiences, including Google’s Tiltbrush and WeVR’s theBlu, an absolute must for anyone wondering what all the fuss is about “this VR thing.” I look forward to elaborating on a number of these experiences in separate posts, including sharing what I actually created in Tiltbrush!
One of the great things about a visit to San Francisco, and the Bay Area more widely, is that you are immediately struck by the wealth of tech talent and innovation there. It is no accident that some of the true behemoths of tech originated there, from Google to Twitter, Uber to AirBNB and beyond. The sharing economy, it could be argued, also sprang to life here, with Uber and AirBNB the most famous examples of companies that have built their fortunes serving this part of our lives. These two companies made much of my trip possible, simple and cost-effective. I used AirBNB for both places I stayed: initially in San Francisco, where I had the pleasure of staying with two awesome guys, Michael and Jimmy, and their dog, Emit, in the Mission District, and for a fraction of the cost of a hotel; and then in Silicon Valley with Kirupa, an in-house attorney at another legendary San Francisco tech firm, Square. I have consistently been bowled over by the quality of the lodging I have been fortunate enough to book through the service and by the wonderful hosts I have had the pleasure of meeting and becoming friends with. There is something about staying in someone’s actual home that gives you a greater connection to the area being visited, compared with the relative sterility and formality of hotel stays. Then there is simply the cost difference: hotels are quite simply several times more expensive, money that I personally prefer to spend on unique experiences in the locales I visit. Many times the experience I have had staying with an AirBNB host has been on a par with, or even better than, a hotel.
Kirupa’s place, for example, was one of the most beautiful homes I have ever had the good fortune to stay in, and being within a neighbourhood, versus the faceless industrial areas in which the main hotels were to be found, I had a fantastically rejuvenating stay, including the flexibility to leave at a time that suited me versus the rigid ‘checkout time’ that many hotels (admittedly have to) enforce.
Uber was the other service that contributed massively to the success of my visit, especially their ‘Uber Pool’ feature, which enabled me to request a ride shared with another person, significantly lowering the cost of the journey for each of us. Thanks to Uber’s incredible logistics technology, routes are automatically planned in the most efficient manner, and I made use of the service multiple times during my stay. Why would I not, when they make it that easy to order a ride, track its progress, receive timely notification of its arrival, have pleasant conversations with drivers who have interesting things to say and keep their cars immaculate, and spend significantly less for the same journey than I would in a regular cab? Oh, and not be expected to cough up a tip regardless of the quality of the service! Uber just make it all so darned easy, including the payment part.
A successful return to my second home and a trip that has provided a lot of material for future posts. Viva San Francisco!
Knowledge has never been so readily and easily available. It is instant, mobile and has the power to revolutionise how we operate as vets and work together with clients on their pets’ healthcare.
The problem is NOT that people look on the web; they will continue to do so more and more. The issue will not cease to be and nor should it. The internet is the ultimate learning resource.
What is at fault is that we are generally POOR at knowing HOW to LEARN and critically appraise the quality and reliability of information, especially that found on the internet.
A classic example is that of dog breeding and puppies. I saw a Pug owner the other night whose female had been ‘accidentally’ mated (there are no accidents in these situations, as there is a widely advocated option known as neutering), and so we are now looking at said bitch being due to whelp in the next two weeks. The owner in question admitted that they had never had any experience of breeding dogs but had “looked online” and been alarmed to “learn” various things, all of which were quite frankly sensationalist at best and downright incorrect at worst. The advice I gave, in addition to dealing with the immediate issue for which the dog had been presented to me in the middle of the night, was to advise – nay, urge – the owner to visit their nearest well-stocked book store and buy and READ a comprehensive guide to dog breeding, especially the sections on whelping and puppy care. Books are great in as much as they generally still undergo some degree of review before publication, so it is less likely that the information they contain is plainly wrong, in contrast to much of what can be found online, where anyone with a connection, a voice and an opinion can fire their musings out into the world. Hell, I am one of those same people, as demonstrated by this very blog! How can you be sure that what I write on here isn’t just a load of inaccurate bollocks? The truth is, you can’t. The same goes for much of what is published online, especially on forums, discussion boards and blogs. Therein lies the challenge and risk associated with relying blindly on “the internet.”
I was fortunate enough to benefit from rigorous training in the importance of critically appraising information for reliability, and so I do feel able to mix my information sources (online versus print, etc.) relatively safely. Many, unfortunately, cannot, and in the veterinary profession we still hear “but the breeder said…” or “a website I looked at said this (completely fanciful/sensational/wrong) thing…” again and again. Our battle is increasingly against the swathe of half-truths and inaccuracies that swirl around in the electronic ether, set against a client base that is becoming generally less trusting and more questioning of what we do, which is not necessarily a bad thing in and of itself.
I love the internet and the educational power that it contains. From TED talks to online courses, blogs from recognised experts and amateur enthusiasts alike, to social networks and their power to engage in real-time conversations and information dissemination, the web is and will continue to be utterly transformative. It is vital, therefore, that in order to get the most value out of this precious resource people know HOW to LEARN, what information to accept and what to question and potentially reject. Part of our role as modern day veterinary professionals is more and more going to be as information curators and sign-posters, directing our clients and the wider animal-owning and caring population toward sources of information that will lead to sound healthcare decisions and outcomes. As old-fashioned as it may sound, books do still serve as a good place to start and this is why I often still direct my clients to their local bookstore.
“Virtual Reality was made for education.” I have no idea who first said that – can I claim it? – but I am sure it has been uttered countless times already and I assure you it will be said countless times more. We have gone from feeling as though virtual reality (VR) was nothing more than a sci-fi promise of things to come, never quite delivered, to the current situation in which VR feels as if it is undergoing a true renaissance.
With the arrival of devices, such as the Vive, Oculus Rift and Samsung GearVR, that are finally capable of delivering truly-immersive, high resolution and, most importantly, non-nausea-inducing experiences that captivate both young and old alike, VR has arrived and the exciting truth is that we are simply getting started!
There are already creative, innovative and fast-moving teams working to sate the appetite for immersive content, with gaming naturally leading the charge and 360-degree video experiences also offering many their introduction to the world of VR. This, however, is not where VR ends, and it continues to excite me to see the educational promise that this technology offers and that pioneers in the field are indeed delivering on. Unimersiv, one such team, refer to the idea that whilst only 10% of knowledge that is read, and 20% of that which is heard, is retained two weeks later, a staggering 90% of what is experienced, or physically acted out, is recalled. If that is indeed the case then VR, with its power to immerse users in any environment that can be digitally rendered, offers a hugely powerful educational tool. The fact that the big players in the tech arena, such as Google, are now taking VR seriously speaks volumes for how impactful it is predicted to be, and how impactful I believe it will be.
Potential medical applications, especially educational ones, abound, and veterinary medicine is no exception. Whilst my interests in the technology are NOT limited to veterinary, it is an area in which I have direct working experience and so the one where I am perhaps best placed to postulate on the future applications of a technology that IS, I strongly believe, going to shake things up for all of us. In terms of medical and science education, consider, for example, work such as Labster’s simulated world-class laboratories, where students can learn cutting-edge science in a realistic environment with access to digital versions of professional equipment. It may be digital and simulated, but that does not diminish the educational power that such experiences deliver. I can see Labster’s technology inspiring a new generation of scientists to develop a fascination for the subject and ultimately helping to solve many of the world’s most pressing problems, such as antimicrobial resistance and the drive to develop new drugs.
So what about the potential uses for VR within veterinary? Well, perhaps some of the following…
Dissection – Anatomical training without the need for donor animals or biological specimens. Digital specimens can be ‘reused’ multiple times, making training more efficient, aiding revision of key concepts and producing better learning outcomes, which translate into better-trained, more confident practitioners.
Physiology – take immersive ‘journeys’ through biological systems, such as the circulatory system, learning how these systems work in both health and disease. The effects of drugs, parasites and disease processes can be simulated, with the potential for better learning outcomes than traditional teaching modalities.
Pharmacology – model the effects of drugs on various biological systems and see those effects up close in an immersive, truly memorable manner, deeply enhancing the educational experience.
Surgical training – simulate surgical procedures, enabling ‘walk-throughs’ in advance of physically starting. With advances in haptic technology, tactile feedback can further augment the experience, providing rich, immersive, powerful learning environments. Surgeons, both qualified and in training, could learn solo or alongside team members in the digital environment – great for refreshing essential skills and for scenario role-playing with the whole team, for example modelling emergency situations so that team members learn to carry out their individual roles automatically, efficiently and effectively.
Client education – at-home and in-clinic demonstrations of important healthcare messages, helping to drive those messages home, boosting clinic sales, revenue and profitability, and leading to more favorable healthcare outcomes and greater client satisfaction.
Communications training – many of the issues faced in medical practice stem from breakdowns or difficulties in communication, whether with clients or between colleagues. Communications training is now an integral part of both medical and veterinary education and should be extended to all members of a clinic’s team, from receptionists to nurses and veterinary surgeons. With its immersive power and its ability to create truly empathetic experiences, VR offers the perfect tool for the job.
Pre-vet-school education/careers counseling – think you know what it means to go into veterinary practice? Can’t arrange a farm placement but still believe you have what it takes to pursue a veterinary career? Imagine being able to experience a range of VR simulations guiding you through a host of realistic scenarios faced by veterinary professionals, enabling you to make informed career decisions based on ‘real’ experience. It has been demonstrated that those who experience high-quality VR feel genuine empathy for the situations into which they have digitally stepped. The power of this for making informed choices about future plans, and for challenging preconceived notions about what it means to be or do something, is compelling.
Commercial demonstrations/ trade show experiences – custom-made VR experiences for showcasing new products and services to prospective customers, creating truly memorable and impactful campaigns. I for one look forward to VR becoming a mainstream component of company presentations at trade shows.
These are just a snapshot of the potential applications for VR, most of which could just as easily be applied in non-veterinary contexts. I look forward to continuing to grow my knowledge and expertise in this exciting area, and I welcome anyone who shares the same sense of wonder and optimism at the possibilities to get in touch.
The big issue VR faces in achieving mass adoption, and in truly becoming the transformative technology I believe it to be, is how to extol its virtues to those who have not had the opportunity to try it physically. How do you sell something that users have to try before they truly get it?
Being a self-confessed tech nerd, I have always been truly excited by the idea of VR, and of Augmented Reality (AR), and I read with enthusiasm all of the reports and promises coming from companies like Oculus. I also knew that pretty much anyone who got to try the technology physically came away an instant convert. You only have to do a quick search for VR on YouTube to see the countless ‘reaction videos’ from people donning a VR headset for the first time, from traditional gamers to the elderly and beyond.
I had my first experience of VR when I traveled to California and Silicon Valley in June 2015 for the annual Augmented World Expo (AWE) and was instantly amazed at how incredibly immersive it was, with insanely rich graphics and the feeling of being suddenly, physically transported to the worlds in which I found myself. There is something magical about being able to turn around, a full 360 degrees, including looking up and down, and seeing a new world all around you. Your brain knows it’s not real and that you’re still standing at a trade-fair stand, but then your brain starts to forget that and, well, you find yourself reacting as if you’re actually in your new environment. It’s surreal. Awesome, but truly surreal. I am not a gamer, but I could easily see myself becoming one through VR, such is the richness of the experience. One of the highlights of the trip for me, and my favorite VR experience, was being strapped into a horizontal harness, with fans blowing air at me, and then having an Oculus headset and headphones placed on my head. Suddenly I was no longer hanging uncomfortably and self-consciously in a rig on full display to amused onlookers but was flying as a wing-suit skydiver through a mountain range, able to turn by physically adjusting my body and head position. Everywhere I looked I saw the mountains, the forests, the new world in which I was present. Except I wasn’t. But I had to remind myself of that. Repeatedly. The experience was simply that awesome and that immersive. Unsurprisingly, that demonstration won “Best in Show,” and anyone fortunate enough to experience it agreed that the award was totally deserved.
Since returning from AWE I have kept exploring the world of VR, purchasing a set of Google Cardboard goggles for use with my iPhone and even introducing my dad to the experience by ordering him a set for Father’s Day. Various apps have been downloaded, from the official Google Cardboard application to rollercoaster and dinosaur experiences, and amazing immersive video experiences courtesy of Vrse, and I have loved every one of them, insisting that others try them out too. In fact, everyone at work has had to hear me babble on about how awesome VR is and has experienced one, if not several, of the VR apps on my phone. The reaction is always the same: initial quizzical skepticism rapidly followed by complete and utter conversion once the technology is actually experienced.
And so it was that I introduced my six-year-old nephew and two-year-old niece to VR during a recent trip home. My nephew is as excited about technology as I am – smart kid – and so was eager to try out the Cardboard. My niece, however, wasn’t quite so sure to start with, protesting as my sister moved the goggles towards her unenthusiastic eyes. What happened next, however, was worthy of a YouTube video all of its own.
As soon as her eyes locked onto the new, immersive 3D world that had been presented to her, all protests evaporated. Gone! What instantly replaced them was the biggest, cutest, most genuine grin I have ever seen, one that still gets me a little emotional even now as I recall the scene. She was experiencing the pure, visceral joy that full immersion in a magical new world provides. Never have I seen such an instant and powerful reaction to a technology. I challenge anyone to deny that VR is a game changer after witnessing what I did. Such was the power of the conversion, and the fun of the experience, that I then found myself sitting for the next two hours policing the sharing of my phone and goggles as the two of them explored worlds in which dinosaurs roamed and rollercoasters careered up and down mountains. They absolutely loved the Explorer program on the Google Cardboard app, which saw us digitally visit Tokyo, Paris, Jerusalem, the Red Sea, Venice, Rome and many other global locations, all whilst sat in the comfort of their UK living room.
I have yet to join the ranks of those who own a ‘high-end’ VR device, such as the recently launched Oculus Rift, but that is going to change very soon. I cannot wait to delve even deeper into what is possible with this technology, both from a consumer standpoint and with a view to creating content myself. The possibilities are indeed limitless: whatever we can imagine, we can create and experience through the sheer and utter magic that is virtual reality. Reality will never truly be the same again.
Want to experience VR for yourself? The best, lowest-cost way to try the technology for the first time is to follow these steps:
1. Get yourself a pair of ‘Google Cardboard’ goggles, many different takes on which can be found online at sites such as Amazon.
2. Download the Google Cardboard app, or any one of the many VR apps that are on the various app stores.
3. Follow the on-screen instructions and check out of reality as you know it!