Digitally Speaking

Fanni Vig
January 16, 2017

A friend of mine recently introduced me to the idea of the ‘runaway brain’ – a theory, first published in 1993, about what makes human evolution unique. In this post we look at how artificial intelligence is developing into something comparable to the human brain, and at the concerns this raises for us as human beings.

The theory holds that humans have created a complex culture by continually challenging their brains, driving the development of ever more sophisticated intellect throughout human evolution. That process is still underway today and will continue for years to come, and it is what theorists claim is pushing human intelligence towards its peak.

There are many ways to define what makes ‘human intelligence’ unique. In essence, it is characterised by perception, consciousness, self-awareness, and desire.

That conversation left me wondering: with the emergence of artificial intelligence (AI) alongside human intelligence, could the ‘runaway brain’ reach a new milestone? After further research, I found some who say it already has.

They label it ‘runaway super intelligence’.

Storage capacity of the human brain

Most neuroscientists estimate the human brain’s storage capacity at between 10 and 100 terabytes, with some estimates closer to 2.5 petabytes. In fact, new research suggests the human brain could hold as much information as the entire internet.

As surprising as that sounds, it’s not necessarily impossible. It has long been said that the human brain is like a sponge, absorbing as much information as we throw at it. Of course we forget a large amount of that information, but consider those with photographic memories, those who combine innate skill with learned tactics and mnemonic strategies, and those with an extraordinary knowledge base.

Why can machines still perform better?

Ponder this: if human brains have the capacity to store such significant amounts of data, why do machines continue to outperform us at decision-making?

The human brain has a huge range – data analysis and pattern recognition alongside the ability to learn and retain information. A human needs only a glance to recognise a car they’ve seen before, whereas an AI may need to process hundreds or even thousands of samples before it can reach the same conclusion. Call it human premeditative assumption, if you will: we save time by not analysing the finer details for an exact match. Conversely, while AI functions may be more complex and varied, the human brain simply cannot process the same volume of data as a computer.

It is this efficiency of data processing that leads researchers to believe AI will indeed dominate our lives in the coming decades and eventually bring about what we call the ‘technological singularity’.

Technological singularity

The technological singularity is the hypothesis that the invention of artificial super intelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

According to this hypothesis, an upgradable intelligent agent, such as software-based artificial general intelligence, could enter a ‘runaway reaction’ cycle of self-learning and self-improvement, with each new and increasingly intelligent generation appearing more rapidly, causing an intelligence explosion resulting in a powerful super intelligence that would, qualitatively, far surpass human intelligence.
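
To make the ‘runaway reaction’ concrete, here is a deliberately simple toy model in Python; the numbers are invented purely for illustration, not a prediction. If each generation doubles in capability and halves the time needed to build its successor, capability grows without bound while the total elapsed time stays finite.

```python
# Toy illustration of a 'runaway reaction' (purely hypothetical numbers).
# Assumption: each generation is twice as capable and is built twice as fast.
capability = 1.0
build_time = 1.0
elapsed = 0.0

for generation in range(1, 11):
    elapsed += build_time    # time spent building this generation
    capability *= 2.0        # each generation doubles in capability
    build_time /= 2.0        # ...and builds its successor in half the time
    print(f"generation {generation:2d}: capability x{capability:7.0f} at t = {elapsed:.3f}")

# Elapsed time converges towards 2.0 while capability keeps doubling --
# an 'intelligence explosion' in miniature.
```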

Ubiquitous AI

When it comes to our day-to-day lives, algorithms often save time and effort. Take online search tools, Internet shopping and smartphone apps using beacon technology to provide recommendations based upon our whereabouts.

Today, AI uses machine learning. Provide it with an outcome-based scenario and, to put it simply, it will remember and learn. The computer is taught what to learn, how to learn, and how to make its own decisions.
What’s more fascinating is how new AIs are modeling the human mind, using techniques similar to our own learning processes.
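
As a minimal sketch of what ‘provide an outcome-based scenario and it will learn’ means in practice, the example below trains a small classifier on labelled examples using scikit-learn. The dataset is synthetic and the numbers are arbitrary; the point is simply that the model needs many labelled samples – the hundreds or thousands mentioned earlier – before it can generalise.

```python
# Minimal supervised-learning sketch (illustrative only).
# Synthetic 'samples' stand in for the labelled examples a real system would need.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 2,000 labelled samples with 20 features each: the 'outcome-based scenario'.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)          # the machine 'remembers and learns'

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```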

Do we need to be worried about runaway artificial general intelligence?

Consider the cautionary words of Stephen Hawking, who said that “success in creating AI would be the biggest event in human history”, before adding, “unfortunately, it might also be the last, unless we learn how to avoid the risks”.

Whether we should be worried depends on too many variables for a definitive answer. It is hard to deny, however, that AI will play a growing part in our lives and businesses.

Rest assured: 4 things that will always remain human

Inevitably, one might ask: is there anything that humans will always be better at?

  1. Unstructured problem solving. Solving problems for which the rules do not yet exist, such as creating a new web application.
  2. Acquiring and processing new information. Deciding what is relevant, like a reporter writing a story.
  3. Non-routine physical work. Performing complex tasks in three-dimensional space that require a combination of skills #1 and #2, something computers are finding very difficult to master. This is why researchers such as Frank Levy and Richard J. Murnane argue that we need to prepare children with an “increased emphasis on conceptual understanding and problem-solving”.
  4. And last but not least – being human. Expressing empathy, making people feel good, taking care of others, being artistic and creative for the sake of creativity, expressing emotions and vulnerability in a relatable way, and making people laugh.

Are you safe?

We all know that computers/machines/robots will have an impact (positive and/or negative) on our lives in one way or another. The rather ominous elephant in the room is whether or not your job could be done by a robot.

I am sure you will be glad to know there is an algorithm for it…
In a recent article, the BBC predicts that 35% of current jobs in the UK are at ‘high risk’ of computerization within the next 20 years, according to a study by Oxford University and Deloitte.

It remains the case that jobs relying on empathy, creativity and social intelligence are considerably less at risk of being computerized. By comparison, roles including retail assistants (37th), chartered accountants (21st) and legal secretaries (3rd) all rank among the top 50 jobs at risk.
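
The model behind that ranking is more sophisticated, but the intuition can be sketched in a few lines: the more a role depends on the ‘human’ traits above, the lower its estimated risk. The jobs, weights and scores below are invented for illustration and are not taken from the Oxford University and Deloitte study.

```python
# Hypothetical automation-risk score (illustrative only -- not the actual study model).
# Idea: jobs that depend on empathy, creativity and social intelligence score lower risk.

def automation_risk(empathy: float, creativity: float, social: float) -> float:
    """Return a crude risk score in [0, 1]; inputs range from 0 (low) to 1 (high)."""
    human_factor = (empathy + creativity + social) / 3.0
    return round(1.0 - human_factor, 2)

# Invented example profiles (empathy, creativity, social intelligence):
jobs = {
    "legal secretary":      (0.2, 0.2, 0.3),
    "chartered accountant": (0.3, 0.3, 0.4),
    "retail assistant":     (0.5, 0.3, 0.7),
    "nurse":                (0.9, 0.5, 0.9),
}

for job, traits in jobs.items():
    print(f"{job:22s} estimated risk: {automation_risk(*traits)}")
```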

Maybe it’s not too late to pick up that night course in computer science…

Jorge Aguilera Poyato
December 13, 2016

Morpheus, in one of the most iconic scenes of the Matrix trilogy, said, “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit-hole goes.”

Let me ask you something: what if a decision like the one Morpheus offers were made with additional information that could be used to better evaluate the two options? Would that have influenced Neo to change his mind?

According to the Harvard Business Review (www.hbr.org), many business managers still rely on instinct to make important decisions, often with poor results. When managers incorporate logic into their decision-making processes, however, the result is better choices and better outcomes for the business.

In today’s digital world, it’s difficult to ensure the integrity of mission critical networks without a detailed analysis of user engagement and an understanding of the user experience.

HBR outlines three ways to introduce evidence-based management principles into an organization. They are:

  • Demand evidence: Data should support any potential claim.
  • Examine logic: Ensure there is consistency in the logic, and be on the lookout for faulty cause-and-effect reasoning.
  • Encourage experimentation: Invite managers to conduct small experiments to test the viability of proposed strategies and use the resulting data to guide decisions (a minimal sketch of such an experiment follows this list).
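
On that last point, a ‘small experiment’ can be as simple as comparing a key metric between a control group and a group exposed to the proposed change, and then letting the data speak. Below is a minimal sketch in Python with invented figures, using a two-sample t-test; it illustrates the principle rather than prescribing a method.

```python
# Minimal 'small experiment' sketch: compare a metric between control and trial groups.
# The figures below are invented purely for illustration.
from scipy import stats

control = [4.1, 3.8, 4.4, 3.9, 4.0, 4.2, 3.7, 4.1]   # e.g. weekly sales, current strategy
trial   = [4.6, 4.4, 4.9, 4.3, 4.7, 4.5, 4.8, 4.4]   # e.g. weekly sales, proposed strategy

t_stat, p_value = stats.ttest_ind(trial, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("The difference is unlikely to be chance -- evidence in favour of the change.")
else:
    print("No convincing evidence yet -- gather more data before committing.")
```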

So, the big question is: would it be possible to introduce these three principles into the tasks assigned to the network manager?

The answer is ‘yes’, provided the manager can integrate network data that carries the context of users, devices, locations and applications in use, and can then mine that captured data for insights into how and why systems and users perform the way they do.
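
What might ‘mining that captured data’ look like in practice? As a simple sketch – the column names and figures below are hypothetical, and a real platform would export far richer telemetry – a few lines of pandas can already surface which applications are degrading the experience in which locations:

```python
# Hypothetical sketch: summarising network telemetry that carries user, device,
# location and application context. The records below are invented for illustration.
import pandas as pd

records = pd.DataFrame([
    {"user": "asmith", "device": "laptop", "location": "London", "app": "CRM",   "latency_ms": 320},
    {"user": "asmith", "device": "phone",  "location": "London", "app": "Email", "latency_ms": 90},
    {"user": "bjones", "device": "laptop", "location": "Madrid", "app": "CRM",   "latency_ms": 540},
    {"user": "bjones", "device": "laptop", "location": "Madrid", "app": "Video", "latency_ms": 610},
    {"user": "cwhite", "device": "tablet", "location": "London", "app": "Video", "latency_ms": 150},
])

# Which application/location combinations are hurting the user experience?
summary = (records
           .groupby(["app", "location"])["latency_ms"]
           .agg(["mean", "count"])
           .sort_values("mean", ascending=False))
print(summary)
```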

Fortunately, the limitations of traditional networks can be overcome with new network platforms that provide in-depth visibility into application use across the corporate network, helping organisations deliver significant, cost-effective improvements to their critical technology assets. They achieve this by:

  • improving the experience of connected users
  • enhancing their understanding of user engagement
  • optimizing application performance
  • improving security by protecting against malicious or unapproved system use.

According to IDC, “With the explosion of enterprise mobility, social and collaborative applications, network infrastructure is now viewed as an enterprise IT asset and a key contributor to optimizing business applications and strategic intelligence.”

For companies facing the challenge of obtaining deep network insights in order to improve application performance and leverage business analytics, Logicalis is the answer.

Logicalis helps its clients deliver digital-ready infrastructure as the main pillar for enhancing user experience and business operations, and for taking secure analytics to the next level of protection for business information.
