Tag Archives: augmented reality

A Flurry of VR & AR Activity

Where can you both be present and absent at the exact same time? No, this isn’t a deep philosophical question on the meaning of existence but rather a description of virtual reality (VR), something that I have had a rich helping of over the past month. In my ongoing effort to learn all I can about this hugely exciting and developing technology, and the industry that is blossoming around both it and its related cousin, Augmented Reality (AR), I have been doing the conference circuit recently, travelling from Dubai to the US and back again.

IoTX

The first of the events I attended was IoTX right here in Dubai, where I was fortunate enough to be a VIP guest of VR/AR Association Dubai Chapter Chairman, Shujat Mirza, at the VR and AR start-up zone. Tucked away in a corner of the huge Dubai World Trade Centre was an impressive array of local companies working in the fields of VR, AR and related technologies. This included Hyperloop, who were at the time of the conference about to present the results of their feasibility study into building a hyperloop between Abu Dhabi and Al Ain, with a projected travel time of a mere 12 minutes! They had a Vive system with them to give people an idea of what it would be like to sit in one of their capsules, and showcased the ‘window screens’ that will show passengers a view rather than the dark inside of the tube in which the capsules obviously have to run. The technology behind the hyperloop concept is fascinating, using passive magnets and actuators on the capsule to generate the thrust that propels it forward. I really see the value in the technology and look forward to its eventual implementation. It makes far more sense for a desert environment such as the Gulf than high-speed railway: being encased within a tube protects the capsules and mechanisms from the harsh effects of the climate and conditions, including sand, which would play havoc with a standard railway were it to drift and build up on tracks.
Another company present was Candy Lab AR, a US company founded and run by Andrew Couch. Their location-based augmented reality platform uses beacons positioned in sites as diverse as airports and shopping malls, enabling vendors to deliver real-time AR content to users and thus enhance their experience in those locations. Great technology and a great team behind it! In addition to being present with a company stand, Andrew was a speaker during the event.
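To make the beacon concept a little more concrete, here is a minimal sketch of the kind of logic involved: a known beacon identifier is sighted, signal strength is used as a rough proxy for proximity, and matching AR content is served. The beacon IDs, content and threshold below are all hypothetical – this is simply my illustration of the general idea, not Candy Lab’s actual platform.

```python
# A minimal, hypothetical sketch of beacon-triggered AR content delivery.
# Beacon IDs, content and the RSSI threshold are made up for illustration.

# Hypothetical mapping of beacon identifiers to AR content for a venue
BEACON_CONTENT = {
    "airport-gate-a12": "Overlay: live flight status floating above the gate",
    "mall-food-court": "Overlay: today's lunch offers shown by each outlet",
}

def on_beacon_sighting(beacon_id: str, rssi_dbm: float, threshold_dbm: float = -70.0):
    """Return AR content when a known beacon is detected close enough.

    A stronger (less negative) RSSI reading implies the user is nearer
    the beacon, so content is only served once the signal passes the
    threshold.
    """
    if beacon_id in BEACON_CONTENT and rssi_dbm >= threshold_dbm:
        return BEACON_CONTENT[beacon_id]
    return None

# Example: a nearby sighting triggers content, a distant one does not
print(on_beacon_sighting("mall-food-court", rssi_dbm=-58.0))  # content served
print(on_beacon_sighting("mall-food-court", rssi_dbm=-90.0))  # None: too far away
```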

What a great day checking out the VR/AR Association startup zone at #IOTX in Dubai. Great ideas, great products, great people, such as Shujat Mirza (VR/AR Association Dubai Chapter President) & Clyde DeSouza (VR Filmmaker).

Whilst small compared to the VR and AR industry in other parts of the world, especially the US and Europe, there is real potential for VR and AR to take off in the Middle East, especially somewhere with futuristic ambitions like Dubai and Abu Dhabi. I am already looking forward to seeing how the industry develops over the coming months and years.

AWE (Augmented World Expo)

AWE is undoubtedly the largest industry show dedicated to both Virtual and Augmented Reality, and I was excited to be heading back over to the US and Silicon Valley for the third year in a row, this time as a speaker. I always enjoy visiting the Bay Area and spent a day in San Francisco before heading down to Santa Clara, via both Facebook’s incredible HQ and Stanford University. Due to another speaking commitment the following day I was ultimately only able to attend AWE for its first day, and so did not get to experience first-hand the fun and intrigue of the main Expo. There are reports aplenty online about the various companies showcasing their VR and AR wares, so I didn’t feel as though I’d missed out on too much. The highlights for me during the first day were:
  1. Seeing how much bigger the event has become, even over the last three years. One could really get a sense of VR and AR starting to be embraced by the mainstream, and the energy during the event certainly felt like it had been cranked up a notch from the previous year.
  2. Getting to speak. I was one of several speakers who took to the stage on the Life track and thoroughly enjoyed being able to deliver my vision of where VR in veterinary currently stands and where I see it going in the future. I believe I may have been the first veterinary surgeon to speak at the event, so representing the veterinary profession in such an exciting and rapidly advancing industry was truly an honour.
My talk from the event can be viewed below, with a link to the rest of the AWE presentations found here.
  3. Checking out LlamaZoo’s HoloLens dissection experience. Charles and Kevin from the company had made the journey down from Canada and both my friend, Deborah, and I were privileged to be given a live demo of their augmented reality canine dissection tool, using the Microsoft HoloLens. With each of us wearing a headset, we were able to see a high-resolution holographic image of a set of lungs and heart floating in midair, move around it to view it from different angles, remove layers, and learn about the specific anatomy of this part of the body. The image quality was superb and I was not aware of any flicker or issues with the hologram staying fixed in position. A very compelling demonstration and a real glimpse at the future of anatomy teaching in vet and medical schools.
AWE 2017
A full day of VR & AR demonstrations and fascinating talks.
The day came to what felt like a rapid close. After lugging my suitcase up to the afterparty (which in previous years had always been in the adjacent hotel and was a lot of fun, but this year had moved and, well, wasn’t quite the same), I hailed an Uber and hot-footed it up to the airport for my red-eye over to Washington DC and the second of my US VR events, the third overall.

VR in Healthcare Symposium (VR Voice)

Touching down at Baltimore International Airport having not really had any sleep whatsoever, I duly made my way down to Washington DC on the main commuter train, transferring to the metro in order to arrive at the Milken Institute School of Public Health, part of George Washington University, by 8.30am. This, combined with the welcome discovery of a shower at the school, mercifully gave me time to refresh from the previous day and the flight over, donning my suit before grabbing some breakfast and getting my head into the right space for another day of talks and discussion about virtual, and augmented, reality.
Organised by Robert Fine, of VR Voice, the one-day VR in Healthcare Symposium brought together speakers and delegates both working in and interested in the use of spatial computing in healthcare – a much more specifically focused event than AWE, and one that my talk was perfectly pitched for. In addition to being a great opportunity to introduce both myself and the work already being done in veterinary with VR, the day was a wonderful chance to meet a plethora of people, some already very active in the space, such as Dr Brady Evans, whose company OssoVR trains orthopaedic surgeons using virtual reality, and many who were there to learn about this exciting and rapidly changing technology and its application to healthcare.
Whilst my talk itself suffered from an annoying technical hitch, I was still very pleased to be able to present, and whilst not as high-brow as the talk given by the neurosurgeon before me, it went down well – after all, what’s ultimately not to love about a dog wearing a VR headset?!
The full version of the talk can be viewed here:
In addition to enjoying a day of truly fascinating talks, including seeing how neurosurgeons are using VR to better plan and rehearse complex brain surgery, I finished the day with a win: my ticket was one of those drawn to receive a Merge VR headset – a really great way to round out the day and kickstart my short break exploring the city itself.
VR in Healthcare, Washington DC
A great city & an equally great event.

Smart Glasses – Are We Ready?

Spatial computing, once the preserve of science fiction, is slowly but surely creeping into real life, and whilst there are a number of companies working on industrial applications of Augmented Reality (AR) delivered via true headsets/ glasses, there is not yet a convincing consumer solution to herald in the age of smart glasses.

Smart glasses, Lake Tahoe
AR adds a layer of digital information to the real world that we see, enhancing the experience.

The promise of AR and of smart-glasses is to seamlessly overlay digital information onto the real world such that this information adds to the experience. There are myriad potential applications where such a capability might prove either useful or just entertaining. For example:

  • Video calls – speak with a person on Skype or FaceTime (other video chat applications available) as though they were literally standing/ sitting there in front of you, in realistic hologram form.


  • Educational experiences – visits to galleries, museums or even city tours would be so much more entertaining and interesting with the ability to see projections of artists, historical figures or scenes played out in front of our eyes as though they were happening live. A visit to a famous battlefield or, for example, Stonehenge would be a richer learning experience if the subjects of our learning were walking about around us. How much better would we relate to our history if we could see, with our own eyes, such histories played out on the current world? Would it lead to a greater sense of the important lessons of history and reduce the risk that we repeat the same errors? It is a question that holds particular resonance at this time of political uncertainty in the world.


  • Navigation – whether it be in a car or walking about an unfamiliar city, staring at a screen has its obvious disadvantages. Contrast that with seeing clear directions mapped out onto the real world in front of our eyes, negating the need to take our focus off the real world. This will be further enhanced by the use of real-time translation, such that foreign road signs are automatically presented in their translated form.


  • Many others…


What AR experiences are already available?

Most of us will have first heard of or experienced AR through social apps such as Snapchat, whose filters allow some silly but fun effects to be added to live video, such as rabbit ears and a nose that respond and change in real-time with our faces. Others might have used an AR app to scan a physical marker in, say, a magazine and seen a digital object, such as a movie character, materialise on screen as though it were there in the real world. Companies such as Blippar do the latter and have a thriving business in using AR for brand marketing.


Who is doing interesting things in AR?

There are a number of companies working on AR, whether it be via smart glasses or the screens with which we interact daily, such as tablets and phones. As already mentioned, social media is likely to be one of the first experiences of true AR that many of us have, and it isn’t just Snapchat playing a role. Facebook are also players in this market with their purchase, this year, of the AR start-up MSQRD, whose technology does much the same thing as Snapchat’s. The technology behind such whimsical entertainment is actually pretty exciting and you can learn more about it here.

Aside from marketing and social media/ entertainment, other major applications for AR are in both industry and education, with a few vet schools even dabbling with the technology.


Form Factor… The Big Issue

As much as I am truly excited by the promise of AR to revolutionise how we interact with digital information, form factor is still, for me, THE biggest issue. Until we move beyond the bulky, cyborg-esque headsets that feel akin to wearing a welder’s mask, to lightweight, stylish eyewear or, preferably, a completely off-body solution, wider adoption of this tech will be slow. At present, the most accessible and reliable method by which to engage with AR for the vast majority of us is via our phones and tablets. In other words, handheld screens with cameras attached.

Phones work because they do some incredible things for us, work the same regardless of personal factors, and are situationally flexible (i.e. they work much the same way regardless of whether you are at home, at work or, perhaps, out and about in a sporting or outdoor setting). They also have the advantage of being discreetly kept on one’s person if necessary, a feature that an expensive pair of smart glasses clearly lacks. For example, in areas where openly advertising the fact you have a powerful – and valuable – computer on your person would be ill-advised, it is perfectly possible to keep a phone hidden and, perhaps, access necessary information via other, more discreet methods, such as a smartwatch. Obviously wearing a pair of smart glasses, especially in their current form, would create not only some degree of social stigma, as was seen with Google Glass, but also a personal risk from theft, as one would effectively be advertising the fact that one was in possession of a very valuable piece of personal computing equipment.

What of the issues pertaining to eyesight? I personally need corrective lenses, whether in the form of contacts, which I can’t stand wearing for very long and which do little to really improve my eyesight anyway, or spectacles. What solutions do smart glasses have in store for users such as me in the future? Will I be forced to wear contacts whenever I want to use my smart glasses? Or will I need to make an additional investment to install corrective/ prescription lenses, instantly increasing the overall cost of adoption and the complexity of the product, and making it a trickier proposition to resell the device when it comes to upgrading? I wouldn’t be able to easily share or lend my device to others unless they too shared my prescription – unless, that is, the glasses automatically contained technology that corrected for the current user’s eyesight. Maybe that’s the key?!

Then there are situational factors governing ease of use. I can currently use, or otherwise carry, my phone in virtually all circumstances; the design and form of the technology gives it this flexibility. At work it can remain in my pocket and be accessed should I need to quickly use the camera, search an ebook or perform an information search online, whilst during exercise, such as on my bike, I can easily carry it in a sports pouch and enjoy music and other services, such as GPS tracking and metrics apps like Strava. Paired with a smartwatch I can also interact directly with the device, accessing key performance data, all in a comfortable manner that the device is designed to cope with easily. Smart glasses, on the other hand, do not seem as flexible. For example, I doubt that I would want to wear the same style of smart glasses at work, interacting with clients and colleagues, and with the constant risk of getting blood etc on them, as I would whilst training, when the need is for eyewear that is sporty, aerodynamic, lightweight, sweat-resistant and aesthetically totally different. Personally, I even keep different styles of sunglasses depending on the situation in which they are worn: my everyday, casual pair are totally different to my sports/ training/ racing pair. Would I need several different pairs of smart glasses to achieve the same result? I only have a single smartphone and can use that in all of the settings mentioned.

Then there is the issue of social stigma and resistance to smart ‘facial-wear.’ Nerds get why people would want to wear a computer on their face – I am one of them. But as Google Glass, when it was first released, demonstrated, the wider public are generally suspicious of and occasionally outright hostile to the idea. Is it simply that wearers of such devices look alien and so instantly stand out as different? Is it the fact that people know such devices include cameras and so fear the perceived invasion of personal privacy that comes with being surveilled, even though we all carry smartphones with incredibly powerful, high-resolution cameras that capture content constantly, and we may well be recorded multiple times per day by other users without even being aware? In fact, unless you live in a rural area it is highly probable that you are already being constantly recorded, such is the pervasive nature of CCTV. And yet we’re collectively fine with this whilst being instantly suspicious of a person openly wearing a recording device in the form of smart eyewear.

This will need to change before smart glasses become universally accepted as ‘normal.’ A really interesting historical point was made at this year’s AWE (Augmented World Expo) by one of the speakers, who talked about how, prior to the First World War, wristwatches were generally considered to be pieces of women’s jewellery and men typically carried a pocket watch. Any gentleman wearing a wristwatch would have been stigmatised. That was until the war when, due to the practical constraints of the battlefield, having a timepiece that was easily accessible, lightweight and handsfree was a big advantage. As a result officers sported wristwatches and continued to do so upon returning from active duty. The comical comment was the suggestion that no-one in their right mind would have ridiculed a tough soldier for wearing a piece of jewellery, and so before long tastes changed and wearing a wristwatch became the accepted norm that we know today. Will the adoption of smart eyewear follow the same path? Who will it be that leads the way in changing public opinion? Will it once again be soldiers, after perhaps first experiencing smart glasses in the military, or sports stars perhaps? Regardless of who ultimately leads the change in opinion, there first needs to be a compelling reason why smart glasses are preferable to sticking with the good old smartphone, and it is this that I cannot quite yet see.

If No Smart Glasses, Then What?

If smart-glasses, in the typical spectacle form, are not the answer then what could the future of AR look like? To answer this it is worth considering our experience of AR in two different contexts.

Fixed Position Interface

As we have already discussed, AR is already experienced by many of us via traditional screens, with the augmented content overlaid onto the real world as long as we view it through the screen itself. As such, any context in which a transparent surface is involved lends itself to AR. Obvious examples include driving, with our view of the world outside of the car/ transport medium being through such a transparent ‘screen.’ Companies such as BMW have already explored this idea, for example with the Head-Up Display that shows important journey and vehicle information ‘on the windscreen’ such that the driver need not take their eyes off the road in front of them to benefit from such data. Navigation is another very obvious application of this concept, with drivers ‘seeing’ the route mapped out on the road and surrounding world without having to divert their gaze towards a separate screen. Imagine how much less likely you would be to miss that rapidly approaching highway slip-road if you could ‘see it’ in advance by a change in colour of the road in front of your very eyes. Once fully-autonomous driving truly arrives, the very same vehicle ‘screens’ that previously kept us informed of important driving information will give themselves over to becoming entertainment or productivity screens.

Other settings in which screens (as in what we currently think of as windows or transparent barriers) are currently employed and which promise to provide AR interfaces in the future include places such as zoos, museums, shop windows, or even our very own home windows. Basically anywhere that a transparent ‘screen’ could be found.

Mobile Interface

Until we somehow come up with a reliable, safe method by which to wirelessly beam AR directly into our brains, the most obvious alternative to smart glasses is the smart contact lens. There are groups working on this very piece of science fiction, with Samsung having patented a design for one, although the power and processing would come from a tethered smartphone, making it more of a smart screen than anything else. I have already voiced my own personal objections to contact lenses and cannot see how adding hardware, however small, to them is going to overcome their obvious shortcomings. Assuming for a moment that the visual effect is staggeringly compelling, with beautifully rendered digital content seamlessly added to the world as if it had always been there, designers are going to need to solve the following problems before we all don contact lenses:

  • comfort – many people either find them outright uncomfortable or can only stand wearing them for short periods of time.


  • ocular health – in some professions, especially medical, ophthalmologists recommend daily disposable lenses as, on balance, they are a more hygienic option when compared to longer-term-use products. Will smart contact lenses be cheap enough, and will it be socially and environmentally acceptable, or even sustainable, to dispose of our high-tech lenses each day? And what of the potential health issues associated with having a heat-generating, signal-transmitting/ receiving device actually in contact with our eyes? Do we know what, if any, health risks that might present?


  • cost – whilst not especially cheap, I do not get too upset when I have to sacrifice a pair or two of contact lenses in any single day, either because some debris makes its way onto the lenses and renders them uncomfortable or my eyes just need a break. I would be less quick or willing to whip them out, however, if they had cost me a significant sum to purchase, and if I were forced to then I’d be resentful of having to have done so.


  • tethering – whilst not a major issue, having to keep a smartphone in close proximity for such lenses to work as desired somewhat dilutes the real magic and potential of a truly untethered AR experience.


Smart glasses

Whilst the future is one in which Augmented Reality is definitely going to be HUGE, with companies such as Meta, Magic Leap and Microsoft (with the HoloLens) creating some truly incredible technology and experiences that defy conventional belief and raise childish grins from anyone who tries them, there are still some significant and fundamental obstacles to overcome. Form factor is, I believe, one of the key issues that pioneers of this technology are yet to crack, but when a compelling solution is found then, well, get strapped in and prepare for a technological shift the likes of which come around but once in a generation!

For more information on Smart Glasses, take a look at the AR Glasses Buyers Guide (www.ARglassesbuyersguide.com)

A Fourth View on Three Sports

Following on from my recent post regarding Augmented Reality, Virtual Reality and their potential impact on our sporting lives, specifically skydiving, I thought I would take a look at how AR & VR might add to the other big sport in my life: triathlon.

Triathlon involves training and racing in three separate disciplines, with races ranging in total distance from super-sprint to Ironman and beyond. Data does play a role in both training and competing, whether it be keeping track of 100m splits in the pool, or sticking to a pre-defined power zone whilst on the bike. I think it would be safe to say that pretty much all of us rely, to some degree, on a sports watch, or athletic tracker of some description, with the required data available for monitoring live or analysing after the event.

AR offers the chance to have the most important and relevant data visible without breaking the rhythm of a workout, adding to the quality of the experience and value of the training or outcome of the effort.


SWIM – AR may not be the most obvious technology for an aquatic environment, but I see it offering some real advantages to those training both in the pool and in open water. As far as I am aware there are no AR systems for swim goggles currently available, but with the advances being seen in the field, especially by companies specialising in athletic applications of AR, such as Recon Instruments (www.reconinstruments.com), I do not imagine it will be long before AR reaches the water.

  • Training data – the usual information that one might glance at a watch for, such as lap count, 100m lap times, heart rate and other such swim metrics, could easily be projected into view, making such data available without having to break the flow of a swim workout.
  • Sighting & ‘staying on course’ – any open water swimmer will admit that sighting and staying truly on course can prove troublesome, during both training and especially races. Swimming further than necessary both wastes energy and impacts race time, and having to frequently sight disrupts a smooth swimming action, again impacting energy efficiency and swim time. Imagine having a virtual course line to follow, much like a pool lane line, projected into view both when you look down (as if looking at the pool floor) and when you look up to sight, such that staying on course is as simple as following the line. Less ‘open swim wobble’ and a faster, more efficient swim.
Goggles, AR
Important swim data & virtual sight line projected into view using Augmented Reality-equipped goggles.


BIKE & RUN – systems already exist that provide AR for both cyclists and runners, with the Jet, from Recon Instruments, being one such system. A range of metrics, including the usual – speed, average speed, heart rate, power, distance – could all easily be projected in AR. With GPS technology and mapping, one could have a new cycle or run route virtually projected in order to follow a new course, or how about a virtual running partner/ pacemaker running alongside or just in front of you, pushing you that little bit harder than you might otherwise train? The uses of AR in both bike and run settings are really only limited by imagination, with the technology rapidly catching up.
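The virtual pacemaker idea is simpler than it sounds: the pacer just runs at a constant target pace, and an AR display renders it however far ahead of (or behind) you that works out to be. Here is a tiny sketch of the arithmetic, with the function name and example pace being purely my own illustration:

```python
def pacer_gap_m(elapsed_s: float, my_distance_m: float,
                target_pace_s_per_km: float = 300.0) -> float:
    """Metres the virtual pacer is ahead (+) of or behind (-) the athlete.

    The pacer covers ground at a constant target pace; an AR display
    could render it exactly that far in front of the runner on the route.
    """
    pacer_distance_m = elapsed_s / target_pace_s_per_km * 1000.0
    return pacer_distance_m - my_distance_m

# Example: 10 minutes in, 1.9 km covered against a 5:00/km pacer
print(pacer_gap_m(600.0, 1900.0))  # 100.0 -> the pacer is 100 m ahead
```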

Cycling, cycle training
Augmented Reality data during cycle training


Cycling, AR, photo
Capture those awesome training and race moments without even having to look away. That’s the power of AR.
VR in bike & run – living in the UAE, training outside in the summer months gets very testing, with any attempt at venturing outside in an athletic capacity after about 9am simply leading to guaranteed heat stroke. As such, the turbo trainer gets significantly more use at this time of year. It is, however, really dull! There are ways to engage the mind during such indoor sessions, from video-based systems such as Sufferfest and those available from Tacx.com, to the option of simply watching movies, but imagine how much more immersive and enjoyable indoor training could be if it were possible to digitally transport yourself fully to a suitable setting. VR offers what even multiple screens can’t – full immersion! Training for a specific race? Fancy taking on a famous route but can’t spend the time and money travelling to the location? VR promises to solve these issues by taking you there. Again, there are companies working on this technology, with startups such as Widerun (www.widerun.com) pushing the envelope in this area.

Jumping into Augmented Reality

Augmented and Virtual Reality (AR & VR) both lend themselves to some very exciting applications in sports, especially those where data inputs in real time can be vital. Skydiving – one of my passions in life – is one such sport and here I shall explore where AR & VR might add to our enjoyment and progress in the sport.

In the interests of clarity, I shall just define what is meant by Augmented and Virtual Reality, terms that are becoming an ever greater part of the normal lexicon and technologies that are set to redefine how we experience the world:

Augmented Reality: superimposition of digital data over the real world, thus adding a layer of additional information or detail over that which is seen in reality.

Virtual Reality: immersion in a fully digital world, such that users experience a computer-generated world as if it were real. Using VR goggles to let users see the simulated world, with or without other inputs such as headphones or haptic devices to simulate touch, the principle of VR is to leave the real world rather than simply augment it.


Skydiving – there are so many data inputs that are vital to a safe skydiving experience; the most important of these, and where AR offers options to add to the experience, are:

  1. ALTITUDE – the most important bit of information for any skydiver. We currently rely on a combination of wrist-worn altimeters and audible altimeters. Personally, I am more of a visual person, so having my altitude displayed in front of me in an AR fashion, with pre-set altitude alerts popping up where I simply can’t ignore them, would be great.
  2. OTHER SKYDIVERS – one of the biggest dangers in skydiving, other than running out of sky, comes from others sharing the same airspace, especially when inexperienced jumpers are involved. Mid-air collisions can be catastrophic, especially if they occur at low altitude. Knowing exactly where other skydivers are, especially if they are within a certain proximity to you, is very important. We cannot be expected to have full 360-degree awareness at all times – we literally do not have eyes in the back of our heads – and so an alert system that automatically identifies other jumpers in the skies would be a great use of AR (a simple sketch of such alert logic follows after this list).

    Skydiving AR
    Knowing who is sharing the skies with you, in addition to useful data such as remaining altitude, are examples of uses for AR in skydiving.
  3. JUMP RUN & WIND INFO – this would be of obvious use in training new skydivers in the basics of jump runs, winds aloft and the effect on their jump of winds, including adjusting landing patterns in response to changing wind characteristics. Experienced skydivers would benefit from such a system at new and unfamiliar dropzones or to revise core skills and competencies, perhaps after a period of absence from the sport.
  4. TRAINING/ COACHING – AR (and VR, especially for modelling emergency situations) lends itself perfectly to the training of new skydivers and to coaching experienced jumpers in a range of disciplines. At present, new skydivers receive theory and ground schooling prior to their jumps, freefalling with a coach but ultimately being responsible for their own canopy piloting. Students who do need assistance currently have to rely on audio instruction from a coach on the ground, who can only assess what he or she can see. What if the student could have the ideal flight path, including important prompts for how best to prepare for their landing, projected in front of them via AR? Important learning objectives would, I propose, be much faster to achieve and good practices established rapidly. The system could be taken a step further by enabling the ground-based coach to see exactly what the student is seeing via in-built cameras in the AR headset, thus significantly improving the accuracy and value of instructions to the student. Coaching uses could include real-time prompts on perfect body position for certain disciplines, such as tracking, and projected flight paths to aid in flight accuracy. For example, following an AR line indicating a straight-line course in tracking would enable a skydiver to work on fine-tuning small body position corrections, thus significantly enhancing progression in the sport.
Skydiving AR, landing
Canopy piloting and especially landing are vital parts of being a successful and safe skydiver. AR could really add to the effectiveness of training and safety for the sport.
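As a thought experiment on how the altitude and traffic alerts described above might hang together, here is a minimal sketch: ground distance between jumpers comes from the haversine formula, the altitude difference is folded in, and anything within a set radius (or below a set altitude) triggers a warning. Every value in it – the alert altitudes, the proximity radius, the data format – is a hypothetical illustration of the concept rather than any real skydiving system, and the numbers are certainly not training advice.

```python
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius, used by the haversine formula

def separation_m(lat1, lon1, alt1, lat2, lon2, alt2):
    """Approximate 3D separation between two jumpers in metres.

    Ground distance comes from the haversine formula; the altitude
    difference is then combined with it via Pythagoras.
    """
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    ground = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return math.hypot(ground, alt2 - alt1)

def hud_alerts(me, others, alert_altitudes_m=(1700, 1200), proximity_m=100):
    """Return the warnings an AR display might flash for one skydiver.

    `me` and each entry in `others` are (lat, lon, altitude_m) tuples.
    The alert altitudes and radius are purely illustrative values.
    """
    lat, lon, alt = me
    alerts = [f"ALTITUDE: below {a} m" for a in alert_altitudes_m if alt <= a]
    for i, (olat, olon, oalt) in enumerate(others):
        if separation_m(lat, lon, alt, olat, olon, oalt) <= proximity_m:
            alerts.append(f"TRAFFIC: jumper {i} within {proximity_m} m")
    return alerts

# Example: one jumper roughly 75 m away triggers a traffic warning
me = (25.1000, 55.160, 1500.0)
others = [(25.1005, 55.160, 1450.0), (25.1100, 55.170, 1200.0)]
print(hud_alerts(me, others))
```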

Go go Gadget, go!

Inspector Gadget, Go Gadget, Go
"Go Gadget, Go!"

Are you a gadget gourmet? A purveyor of all things gadgety, techy and, well, just awesome? Yeah? Me too. Friday thus saw me in my idea of Heaven on Earth as I attended the Gadget Show Live, held at the NEC in Birmingham. Attending the day after having gone to BSAVA meant that my week was feeling more and more like a ‘professional holiday’ (if such a thing exists) – not a bad way to spend a week in April. The show certainly seemed to be popular, with thousands of other eager gadget enthusiasts all piling into the large halls that served as home for five days to a plethora of tech talk, demonstrations, a fair amount of ‘retail’ and, of course, the main event itself, the live show.

One thing I would say at this stage is that I did perhaps expect to discover a few more real innovations and “WOW” factor technologies than I did, with a lot of the exhibitors tending to fall more into the category of standard electronics retailers, whether trying to flog us a new TV, games console, or accessory for our iPhones, iPads and other such existing gadgetry. Having said that, the standard of displays, stands, demonstrations, activities and talks was superb and it was pretty easy to fill an entire day. As mentioned, the live show was certainly the main highlight: a fantastically well choreographed and stage-managed show that served up a good, balanced meal of everything from fun features, such as Laser Man, the robotic bird from the incredible talents at Festo, and the larger-than-life 3D faces of our hosts, to numerous chances to win big and bag some tech to take home in the legendary Gadget Show competitions. The winners would all have gone home with a significantly bigger smile than everyone else, and that’s really saying something!

My mission, as it were, was really just to head along for the day with an open mind and see what was new, fresh and exciting, especially with a view to what might have interesting applications in veterinary and animal healthcare. After all, I am The Nerdy Vet, so wearing both my nerd and vet hats felt normal 🙂 There were certainly a few stand-out exhibitors for me, the main ones of note being the following:

1. Aurasma – a ‘virtual browser’ that enables you to hold your smartphone or tablet up to a particular piece of media or real-life scene (e.g. a magazine, CD case or poster) and for additional ‘content’ – whether it be video, a link to a website, or something even more interesting and unexpected – to appear on the screen, overlaid in real-time. This is an example of AR (Augmented Reality) and clearly has some interesting potential for those of us in the veterinary field.

2. Damson – a really funky, compact little portable speaker with a difference. Using resonance technology, these little noise-makers work wirelessly to play music and other audio from any Bluetooth-enabled device, such as an iPod, and basically make use of the surface on which they are placed as a speaker. The effect is to take a small sound and instantly transform, for example, a table, fridge, or indeed any surface or structure into a speaker, boosting the sound. Definitely elicited some “wows” from friends and colleagues at the practice.

3. FitBit – this stand seemed to be buzzing with activity and it was clear to see why. Their product, a small wireless smart sensor that tracks your activity over the course of the day and then uses some clever algorithms to record and analyse various health and fitness parameters, seems set to really help in the battle of the bulge. The actual devices themselves are tiny – about the size of a USB dongle – and can even track how well you sleep. Very, very cool. And judging by how well they were selling, very, very popular.

There were other innovations and I plan to serve a few more of them up in greater detail over the course of the next week or so. If you’re thinking of going next year to The Gadget Show then my advice would certainly be, in the immortal words of Ben Stiller, to just “do it.”

Did you hear the one about the haptic cow?

Haptic Cow, bovine simulator
The classic veterinary image

We’re all aware of the classic premise of virtual reality and the principle of experiencing a visual virtual world. But what about haptic technology? What does that mean to you? I had a unique opportunity to see this technology in action last week when I was fortunate enough to be invited to speak at Bristol Veterinary School and met with Professor Sarah Baillie, Chair in Veterinary Education at the University and inventor of the famed ‘Haptic Cow.’

I first became aware of the Haptic Cow when I was an undergraduate at Bristol myself, and found the idea simply incredible: using a computer-programmed device to realistically simulate the tactile experience of diagnosing pregnancy in cows, something that some vet students get immediately whilst others struggle with perpetually. I place myself in the latter category. No matter how many cows had the (dis)pleasure of me rummaging away fruitlessly in their general pelvic region, I simply could not make the link between the random ‘mush’ that I was feeling – or rather, gradually not feeling, as the blood in my arm was systematically squeezed out – and the textbook picture of ovaries, follicles and the various forms of the bovine uterus. The problem was that there was no way for the lecturer to help other than to tell you what you should be feeling and where. Most of us simply ended up nodding knowingly and feigning a sudden reversal in our ignorance. The truth was that it was easier to pretend that we could feel what we were supposed to, thus hastening our exit from said cow’s rectal area, than to battle on. After all, the cows don’t thank a trier!

Enter the Haptic Cow. The idea is that you, the user, reach into a fake cow (a black and white fibreglass shell with a specially designed robotic arm inside) and attach the end of the aforementioned arm to your middle finger – the one you would use as a ‘friendly’ greeting to someone you didn’t much care for – via a small thimble-like attachment that fits snugly on the end of your digit. The magic then happens when the computer program is launched and the ‘model’ of the cow is run. On the screen you are able to see simplified representations of various structures, such as ovaries, and this is matched by what you are able to ‘feel’ in the simulator. It’s a very bizarre sensation, but using this technology, which relies on the computer program outputting to three motors controlling the robotic arm in three planes, it is possible to haptically simulate all manner of structures, textures and body systems. I was given the chance to ‘PD’ a cow, diagnose an ovarian follicular cyst, and even experience the sensation of rectally examining a horse, an important part of a colic investigation yet one that is notoriously risky to the horse, and subsequently to the vet’s professional indemnity cover! Using the haptic simulator removes all of the risk associated with learning these techniques, and after just one short session I would feel confident going out tomorrow and diagnosing colic or telling a farmer whether their cow was in calf. That’s incredible considering I didn’t manage to achieve that in an entire year at vet school.
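For the technically curious, the core trick behind this kind of force feedback can be illustrated with the standard spring, or ‘penalty’, model of haptic rendering: when the tracked fingertip penetrates a virtual surface, the motors push back along the surface normal in proportion to the penetration depth. The sketch below is purely my own illustration of that general principle, with made-up values, and not the actual software behind the Haptic Cow.

```python
import numpy as np

def contact_force(finger_pos, centre, radius, stiffness=800.0):
    """Spring ('penalty') model haptic force against a virtual sphere.

    Returns the force vector (in newtons) to command from the motors:
    zero when the fingertip is outside the structure, otherwise a push
    back along the surface normal proportional to penetration depth.
    All values here are illustrative, not from the real simulator.
    """
    offset = np.asarray(finger_pos, dtype=float) - np.asarray(centre, dtype=float)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)  # fingertip outside the structure: no force
    normal = offset / dist  # direction from the sphere centre out through the finger
    return stiffness * penetration * normal

# Example: a fingertip 5 mm inside a 2 cm virtual 'ovary' feels a 4 N push outwards
print(contact_force(finger_pos=[0.015, 0.0, 0.0], centre=[0.0, 0.0, 0.0], radius=0.02))
```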

The potential for such sophisticated technology to dramatically improve the standard and effectiveness of medical training is huge, with the technology already having been applied to everything from modelling a cat’s abdomen for training in abdominal palpation to teaching human doctors the fine intricacies of prostate examination – the model human a@*e was hilarious! I can easily see haptics being combined with augmented reality, or other such technological advancements, to form sophisticated surgical training programmes, dramatically advancing career development and patient care in all species.

Professor Baillie’s career is every bit as incredible as her invention. Having graduated from Bristol vet school with an additional intercalated degree, she spent a number of years in clinical mixed practice. A forced break from the physical rigours of being a vet in practice led Professor Baillie to complete a Master’s degree in computing, in spite of no prior experience of the field, and this led to the start of her work with haptic technology, a subsequent PhD and the Haptic Cow. After time teaching at London Vet School, Professor Baillie is now back at her alma mater, Bristol, providing students with the incredible opportunity to train with her amazing inventions.

I wish I was there!

I am officially jealous! One place I would absolutely love to be right now is at CES (the Consumer Electronics Show) in Las Vegas, Nevada, USA. Widely regarded as the key technology show in the world, where the likes of Microsoft, Sony and Apple tend to showcase their latest cool gadgets, it represents every tech enthusiast’s fantasy setting.

Just because I can’t be there (it isn’t actually open to the general public) doesn’t mean I can’t get excited about some of the futuristic technology being showcased. One idea that I find especially interesting is a technology being demonstrated by a UK firm, Blippar, who produce Augmented Reality (AR) apps for smartphones and tablets. Augmented Reality is the process by which digital content is overlaid onto a view of the ‘real world’: for example, when viewing a bottle of juice on your iPad using the built-in camera, AR would recognise the product and overlay the ‘real’ image with additional content that moves and changes with the view of the product. This offers incredible opportunities for providing value-addition to all sorts of products, for example by showing video demonstrations, or providing e-vouchers linked to the specific product being viewed. The potential is one that has been recognised, and Blippar are being sponsored by the UK Government to showcase their technology at CES – very exciting!

Being a vet I am naturally interested in the vetty and animal applications for Augmented Reality, of which there are clearly loads. Imagine, if you will, such applications as…

  • Waiting Room – reveal interesting and informative content about your vet practice, such as a ‘view behind the scenes’ or educational advice about preventative healthcare (lungworm control, for instance), simply by holding your phone up to the poster on the wall. Waiting for your appointment will suddenly feel like a pleasure as you have so much interactive content just there “in front of you.”
  • Clinical – not quite sure what your vet means by your dog’s cruciate injury? Well how about if the vet were able to hold up a tablet over your dog’s leg and show you a cool, biological view of where the ligament is, how it works and what happens when it goes wrong? I reckon that would be pretty interesting!
  • Surgery – So many applications… so very many!
  • At home – Not quite sure exactly when your pets’ vaccinations are next due? Looking after your parents’ cat and can’t quite remember what medication he is on? Imagine just holding up your phone to your pet and seeing all of their relevant information displayed there in front of you. Would work well in an app, don’t you think 😉