Since getting on stage several people have asked me to publish my talk, mainly because it rhymed I think… so here it is, enjoy!
I’m writing this from one of my favourite cities in the world.
For me Hong Kong is the epitome of where urban dwellings meet the jungle. Tropics growing into brick, buildings built around ancient roots. It feels like a chapter from I Am Legend, except we’re all still very much alive.
The subject of technology outgrowing humans is rife here. I’m collaborating on a project with a friend and it’s come up time and again, as this city and the surrounding majors struggle to adopt innovations in a way that resonates with the humble origins the culture is built upon.
This week has left me reflecting on whether sustainable thinking will lend itself to mega metropolises, or whether global urbanisation will mean that rural areas get left behind.
As transport becomes more sustainable, more affordable, ever faster and more connected, will it conflict with the debate around borders being reimposed and trade restricted back to regions again? Take Brexit as an example closer to home: if we leave the EU and customs barriers restrict exchange from the start, will we even notice the stagnation that quickly follows, so that by the time international exchange grinds to a standstill the daily conflict has become invisible to us?
On a related note but from a different angle, technology has allowed us all to become producers. Open SDKs and APIs, 3D printing and crowdfunding all mean that rather than brands lending themselves to personalisation, self-made and personally tailored will be the next movement. Will brands play a role in enabling this, or will they move to a protective stance on their IP?
I like to think that all my favourite cities are characterised by diversity, and as such we will continue to encourage flexibility across the globe, joining forces to build resilience against what could otherwise be a tough future.
But as technology surges with intelligence at its core, the biggest question I am hearing repeatedly from all over is whether global cooperation and unlimited interaction will evolve into one diverse world where we see and share everything, or whether the future will take us back to the past.
Will Mega Metropolises mean just Mega blocks and Mega highways? Will we live in a Mega City One? Will it be more Skynet than that? So many film references and a tad Dredd I know, but I believe the next few years will impact this more than we care to acknowledge.
I judge you not.
The next era of connectivity is on the horizon, or rather more accurately hovering above it, as tech giants launch their test projects to provide internet coverage for the harder to reach parts of the world.
Google have test-piloted Project Loon a few times since June 2013, in places such as New Zealand’s South Island and Sri Lanka: a series of high-altitude balloons equipped with LTE (more commonly known as 4G) that ride the wind currents in the stratosphere.
Facebook have also developed a fleet of solar-powered drones called Aquila, now ready to hover at altitudes of 60,000 to 90,000 feet. These can be steered and controlled more directly, constantly circling within a two-mile radius to stay aloft.
Both the balloons and the drones can stay airborne for around three months.
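For a sense of scale, the line-of-sight horizon from those altitudes can be estimated with simple geometry. This is a back-of-the-envelope sketch, ignoring atmospheric refraction and antenna design, not a claim about either project’s actual coverage footprint:

```python
import math

EARTH_RADIUS_KM = 6371  # mean Earth radius

def horizon_distance_km(altitude_m: float) -> float:
    """Straight-line distance to the geometric horizon from a given
    altitude, ignoring refraction: sqrt(2*R*h + h^2)."""
    h_km = altitude_m / 1000
    return math.sqrt(2 * EARTH_RADIUS_KM * h_km + h_km ** 2)

# Aquila's quoted ceiling: 60,000 to 90,000 feet
for feet in (60_000, 90_000):
    altitude_m = feet * 0.3048  # feet to metres
    print(f"{feet:,} ft -> horizon roughly "
          f"{horizon_distance_km(altitude_m):.0f} km away")
```

Even at the lower end of that range, the geometric horizon sits hundreds of kilometres out, which is why a single platform can, in principle, serve such a wide area.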
Combined with lower-priced smartphones coming to market, the next evolution of connectivity looks set to be pretty rapid.
There’s still a way to go to stabilise the launch and flight of both, plus the clean-up exercise once they come back down, but the effort to connect the whole world to the internet is accelerating.
Next we’ll be in orbit, talking to the moon, connecting galaxies… well, maybe.
For years scientists summarised ‘memory’ as an intricate part of the brain, much like an ever-expanding filing cabinet, a neural super-computer if you like, but in fact our memory is a brain-wide process. For example, the memory of driving a car is reconstructed from several areas: operating the car lies in one section, getting from A to B in another, and the ‘oh crap, that guy just carved me up’ in yet another.
In this intricate system we have several areas of memory: sensory, which lasts a second; short-term working memory, which lasts up to a minute; and long-term memory, which lasts a lifetime (unless you’re a goldfish). Long-term memory then splits into explicit (conscious) and implicit (unconscious), and those break down further into episodic, procedural and semantic memory systems.
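Just for fun, the breakdown above can be sketched as a nested structure. This is a toy illustration of the taxonomy, not a neuroscience model (the placement of episodic and semantic under explicit memory, and procedural under implicit, follows the standard textbook split):

```python
# Toy sketch of the memory taxonomy described above.
MEMORY = {
    "sensory": {"duration": "~1 second"},
    "short-term (working)": {"duration": "up to a minute"},
    "long-term": {
        "duration": "a lifetime",
        "explicit (conscious)": ["episodic", "semantic"],
        "implicit (unconscious)": ["procedural"],
    },
}

def systems(tree, path=()):
    """Walk the tree and yield each named memory system as a path."""
    for key, value in tree.items():
        if key == "duration":  # metadata, not a subsystem
            continue
        yield " / ".join(path + (key,))
        if isinstance(value, dict):
            yield from systems(value, path + (key,))
        elif isinstance(value, list):
            for leaf in value:
                yield " / ".join(path + (key, leaf))

for name in systems(MEMORY):
    print(name)
```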
So with all that in mind, who’s to say we couldn’t slip an updated folder into the filing cabinet and start to amend our memories, enhancing them, through neural prostheses…
Neural prosthetic devices are designed to provide artificial reconstruction of neurone-to-neurone connections where deterioration has occurred. So what if there was a safe way to insert these ‘memories’ into our neural architecture? Could we ‘remember’ that we’re fluent in a language, play guitar or know how to save a life?
If we could work out how to create the file to save it, then could we implant it as a memory? If we could create a common code that works with the electronic impulses in our brain and is understood through computer algorithms, could the future integration of memory be possible?
Gets the cogs pulsing, doesn’t it?
I live in a rapidly evolving world where cars can drive themselves, robots can run warehouses and drones can deliver our parcels. A world where virtual reality blends with real reality (the re-substantiation of mere ‘reality’) and I can get anything I want, whenever and wherever I want it, in the palm of my hand. So why do my new shoes still give me blisters?
A friend of mine lost a limb. On his road to recovery I’ve got to know him in a very different light this last year; where many would have turned in on themselves and got lost in remorse and pity, he embraced modern technology and now has a pretty cool prosthetic limb. He’s as strong as before, as balanced as he ever was and, he argues, able to endure more than his human limb previously allowed him to.
This got me thinking about the tech available that could be used for solving everyday irritations (not that I’m comparing blisters from new shoes to wearing a prosthetic limb!), but we could adapt the thinking…
Why not develop shoes with electronically charged materials, materials that transform from soft to hard through electrostatic charges? What about exo-skeletal hiking boots that enhance our ability to scale mountains like gazelles?!
Imagine shoes that transform to fit your feet. Imagine dancing the night away, footloose and blister free.
Give a girl the right shoes and she can conquer the world!
We simply had our imaginations.
Then Thomas T. Goldsmith Jr and Estle Ray Mann came along with the first interactive game, the ‘Cathode Ray Tube Amusement Device’, developed in 1947. The pair were renowned for their simulation skills and not their wordsmithing, you’ll be glad to know.
Soon after, we saw a burst of simple interactive programs such as ‘Mouse in the Maze’, ‘Bertie the Brain’ and Alan Turing’s chess program, capable of computing two-way problems but not complex algorithms. This was followed in 1962 by ‘Spacewar!’, a two-player game in which you try to destroy each other’s starship. Arguably the first true video game, it took around 200 hours to code and was written by a group of students at MIT.
Where am I going with this?
Well, fast forward through the Odysseys and the various intergalactic games to Atari’s release of ‘Adventure’ in 1980, where we saw text adventure visualised, albeit crudely, in a plethora of dragons, monsters and sword-slaying. Then, with the help of more RAM and better joysticks, on into the world of the NES and Sega consoles, where imaginary friends like Sonic and the Super Mario Bros helped us through the 90s and into today, and you’ll see my ramblings are leading to a pertinent question…
In a world where we now have technology that scans brain activity to read our minds, technology that creates worlds that don’t exist and technology that maps us to our surrounding climate, how long before we live in our imaginations, in a virtually real world?
Or what if I’m already living in it, and the world we think we live in is actually virtually imagined?
I was just reading about Google’s newest project, and it got me thinking about how the gap between science fiction and simply just ‘science’ has pretty much closed since I was a kid watching films like Innerspace and Terminator… bear with me…
Google’s new patent describes a range of smart contact lenses with tiny cameras embedded within, to allow for ‘in-lens’ photography or assistance for the visually impaired. Google also have lenses able to provide measurements of blood glucose levels for diabetics, through a sensor that measures the glucose in tears and signals the levels through teeny, tiny LEDs.
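As a purely hypothetical sketch of that signalling idea, a reading-to-LED mapping might look something like this. The thresholds and signal names here are my own invention for illustration, not Google’s design and certainly not medical guidance:

```python
# Hypothetical glucose-to-LED mapping; values are illustrative only.
LOW_MMOL_L = 4.0    # below this, flag low glucose
HIGH_MMOL_L = 10.0  # above this, flag high glucose

def led_signal(glucose_mmol_l: float) -> str:
    """Map a tear-glucose reading to an imagined in-lens LED signal."""
    if glucose_mmol_l < LOW_MMOL_L:
        return "blink-red"    # warn the wearer: too low
    if glucose_mmol_l > HIGH_MMOL_L:
        return "blink-amber"  # warn the wearer: too high
    return "off"              # in range: stay dark
```

The interesting engineering problem, of course, is everything this sketch hides: powering the sensor, calibrating tear glucose against blood glucose, and doing it all on a surface the size of a fingertip.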
Soooo… given we know that human bodies don’t accept foreign objects particularly well, and that this new lens means we can access the small surface of the eye (which is covered by live cells that can represent our whole body) in a non-intrusive way, how long before a contact lens can analyse what is happening inside our body without ever actually entering it? (That’s the Innerspace reference.)
… and how long before we have night vision, or augmented data layered in? How long before we don’t need all the other screens around us, our phones, our in-car dashboards? And how long before a simple lens scans everyone around us (Terminator ticked) and, along with Arnie and Quaid, Cruise needs to redraft his Minority Report as something rather less fictional…
The only real reason that robots haven’t replaced us yet is that robotic engineers can’t program all the knowledge currently required into a robot quickly enough. As humans we adapt and learn every second of every day, so the sheer magnitude of possibilities the human brain provides exceeds any manufactured equivalent to date.
What we have seen, though, is that once configured, the robots currently in use are quicker, more efficient and less prone to accidents. Amazon already have warehouses with considerably fewer errors where robots are deployed, some governments are looking to introduce drone delivery systems, and Google have tested their automated car in Nevada for over a year; the only accidents recorded happened when a human overrode the system.
And to top that, at the end of last year there was a breakthrough in how robots acquire their knowledge: they can adapt and learn through validated paths that process human language by identifying speech patterns, replicating how the brain connects the frontal cortex to the striatum.
With these developments, engineers predict artificial intelligence and robots will replace humans in the next 10 years, and that by 2050 robots will be a part of our everyday life. Gets you thinking, doesn’t it?
To end on a lighter note though, I’d like to share a poem I love by one of my favourite non-robots, Tim Burton:
Mr. Smith yelled at the doctor,
“What have you done to my boy?
He’s not flesh and blood,
he’s aluminum alloy!”
The doctor said gently,
“What I’m going to say
will sound pretty wild.
But you’re not the father
of this strange looking child.
You see, there still is some question
about the child’s gender,
but we think that its father
is a microwave blender.”
This morning I was in Stockholm; a taxi ride, flight and three trains later I’m almost home. It’s 23:12 GMT. Yawn, I’m past tired, out comes my laptop.
Over the course of today I reckon I’ve passed thousands of people, made contact with hundreds and spoken to at least a dozen, of which two engaged me successfully in conversation.
It starts with people looking me up and down: business dress, heels, bit of lippy, you get the idea, a well-groomed woman on business travel. I settle down and whip out… my magazine, and immediately feel eyes on me trying to sneak a peek at the headlines. Is it Vogue? No! Is it Heat? Hell no! It’s Wired. Now why, may I ask, does that trigger the eyebrow flicker followed by the frown facial dance? Should I have slicked hair and triple-glazed glasses, fall over my laces and be unable to interact with other humans (unless distinctly leaning towards cyborg territory) to be allowed to read this most wondrous publication?
One Swedish guy I was trying not to talk to even ventured so far as to say it was ‘perhaps a little technical’. Really? People don’t buy Wired to read about technology? I felt compelled to explain to him why I found it more stimulating to read about how the US Air Force built their fastest supercomputer for the Department of Defense using 1,760 PS3 consoles, rather than how big Jordan’s boobs are or who she was screwing this week. Long pause, not much said, he moved on.
I’m tired, I admit, but still, give me a break: girls can be geeks too, you know.
Not long after inventing the wheel, man developed an increasing fascination with passing our menial, laborious tasks on to anyone or anything other than ourselves. Once slavery was abolished across society, attention turned to developing mechanical solutions instead. Man tried to play God.
To some degree, man succeeded, creating complex mechanisms and systems capable of performing repetitive, arduous tasks: from the first water pumps and the first locomotives to, in today’s vast, technologically glorious world, just about everything.
The last year has seen robots engineered to help children who have suffered brain damage learn to walk again; the KASPAR robot with RoboSkin teaches autistic kids interaction; we’ve seen prototypes teach children in schools (in fact this blog caught my eye: Will teachers be replaced by computers?); and at Cambridge University they have developed a machine that can analyse millions of papers in an infinitely shorter period of time than any human can ever expect to achieve.
But where will it stop? Developing technology to aid us in our daily work seems a fantastic idea, but how long before we are entirely replaced by a robotic army?
Somewhere in the US, robots have been created to be self-sufficient. Powered by microbial fuel cells, they are programmed with a survival instinct that drives them to prey on all sorts of creepy-crawlies and small rodents, which they then digest to provide themselves with power.
A group of robotics researchers across Europe are working on a project to ascertain whether humanoid bots are able to interact with groups of people in a realistic, anthropomorphic way. They’ve built algorithms that will enable the bots to mimic human actions and emotions. Think about that for a second: if that works, we are talking about human-looking robots with adept social skills. Can you imagine them in a room, deciphering a conversation from all the background noise and music?
And if that wasn’t enough, science historians themselves have marked the beginning of the 21st century as the era when robots will take their place beside human scientists.
It’s all quite mind-boggling when you think about it. I remember watching ‘I, Robot’ when it came out (admittedly mainly for the Will Smith eye candy), but realistically, a human and humanoid mixed world could be just around the corner…