Digitally Speaking

Fanni Vig
January 11, 2019

In years to come, it’s possible that we’ll look back on 2018 as the year that Artificial Intelligence (AI) really arrived – when potential became possible, and hype gave way to enterprise adoption. But how much influence are CIOs having in its adoption, and how well understood is the term?

AI and innovation

We recently launched the sixth edition of the annual Logicalis Global CIO Survey, which gathered the views of more than 840 CIOs in a number of areas. Innovation was a dominant theme, with this year’s survey demonstrating real progress in the role of the CIO in this area. CIOs are finding that they are very much at the heart of the action – with 83% either leading (32%) or enabling (51%) innovation. It’s an area they’re being increasingly judged on too, with half (50%) now being measured according to their ability to deliver service innovation.

Given that technology now pervades innovation – either delivering it or enabling it – we were keen to understand how CIOs view the rate of adoption, utility and impact of emerging technologies – particularly the Internet of Things (IoT) and Artificial Intelligence (AI). Whilst IoT shows signs of becoming more mature in its use, AI and Machine Learning appear to be in an earlier phase of the hype cycle – possibly where IoT was 12 months ago. Nearly a fifth (19%) of CIOs claim that their organisations are already using AI. That adoption seems likely to continue at pace, with 66% saying AI will be in use in their organisations within three years.

Understanding who is responsible for AI

Before we get too excited about AI changing the face of business, it’s important that we stop and question just how the term is understood. Like IoT, AI is really complex, and many of the use cases we’ve seen up until now are only scratching the surface. The technology industry is still collectively scratching its head to work out what it really means, as the hype continues and threatens to dilute the concept. Do CIOs perceive AI simply as autonomous, automated services and interfaces essentially guided by complex, manually created rules? Or do they see it in its purest form, as technology that is not just autonomous and automated, but also able to learn and adapt independently based on context? In truth, it’s probably a bit of both.

The big issue facing CIOs when it comes to AI is less about the technology itself, and more about the people, process and culture which support its adoption. Whilst there’s clearly a value in products being ‘AI-ready’, the hangover from the rise of Shadow IT – which saw departments and individuals investing in their own hardware, software, apps and services – is organisations that have huge swathes of data residing in a wide range of places, some out of sight of the CIO.

When asked about where AI and machine learning were in use within their organisations, it was no surprise to see the CIOs we surveyed point to the IT department as the leading area. That’s likely because this is the department they’ve got the most sight of. The siloed nature of organisations, and the distributed data residing within, causes a headache in terms of ownership. Most businesses have a variety of people who own the data in each department, but few who want to be responsible. That’s where the CIO comes in, as issues such as data security and compliance sit squarely within their remit.

The opportunity for AI

So where can the CIO start, in order to truly realise the potential of AI? The first step is to get people together to understand and agree their processes around data. People and processes can change really quickly, so taking the time to agree a clear approach is important. Those silos aren’t going to be broken down overnight, so the CIO also needs to be realistic. Only then can you think about how to leverage what you’ve got in place, but even then this needs to be done in the right way, with issues such as security, performance and cost-effectiveness at front of mind.

From the survey findings, it was encouraging to see CIOs so bullish about their ability to engage with AI. This, more than anything, will be vital if organisations are to derive true value from AI. At present, rates of use across various business departments are low – with the exception of IT and customer service – which suggests operational, fringe and test cases. However, this also seems to be an opportunity for CIOs to build a culture of experimentation and small-scale deployments driven by clear customer or market needs.

AI should not be seen as a silver bullet. It isn’t the answer to everything. It needs more data and more resources, and you need the right foundations and infrastructure in place and ready to go – otherwise there is going to be a huge amount of inefficiency, especially as the speed of change in technology terms is unbelievable, with new vendors and solutions springing up on a regular basis. To understand whether these new options can deliver against your objectives, it’s imperative that you look at the entire ecosystem, with people, culture and process playing a pivotal role.

Fanni Vig
April 20, 2017

Finally, it’s out!

With acquisitions like Composite, ParStream, Jasper and AppDynamics, we knew something was bubbling away in the background for Cisco with regards to edge analytics and IoT.

Edge Fog Fabric – EFF

The critical success factor for IoT and analytics solution deployments is to provide the right data, at the right time, to the right people (or machines).

With the exponential growth in the number of connected devices, the marketplace requires solutions that provide data-generating devices, communication, data processing and data-leveraging capabilities simultaneously.

To meet this need, Cisco recently launched a software solution (predicated on hardware devices) that encompasses all the above capabilities and named it Edge Fog Fabric aka EFF.

What is exciting about EFF?

To implement high-performing IoT solutions that are cost effective and secure, a combination of capabilities needs to be in place.

  • Multi-layered data processing, storage and analytics – given the rate of growth in the number of connected devices and the volume of data, bringing data back from devices to a central data centre environment can be expensive. Processing information on the EFF makes this a lot more cost effective.
  • Micro services – a standardised framework for data processing and communication services that can be programmed in standard programming languages like Python, Java, etc.
  • Message routers – effective communication between the various components and layers. Without state-of-the-art message brokering, no IoT system can be secure and scalable in providing real-time information.
  • Data leveraging capabilities – Ad hoc, embedded or advanced analytics capabilities will support BI and reporting needs. With the acquisition of Composite and AppDynamics, EFF will enable an IoT platform to connect to IT systems and applications.
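To make the micro-services idea concrete, here is a minimal sketch – not the EFF SDK itself, whose API is not shown here – of why processing at the edge is more cost effective: a small Python service consumes raw sensor readings delivered by a message router and republishes only a compact summary upstream, so 100 raw readings collapse into 10 messages crossing the network.

```python
import statistics
from collections import deque

class EdgeAggregator:
    """Toy edge micro-service: consumes raw sensor readings and
    forwards only a windowed summary upstream, illustrating how
    edge processing cuts backhaul data volume."""

    def __init__(self, window=10):
        self.window = window
        self.buffer = deque(maxlen=window)

    def on_message(self, reading):
        """Called by the message router for each raw reading.
        Returns a summary dict once per full window, else None."""
        self.buffer.append(reading)
        if len(self.buffer) == self.window:
            summary = {
                "count": len(self.buffer),
                "mean": statistics.mean(self.buffer),
                "max": max(self.buffer),
            }
            self.buffer.clear()
            return summary  # only this goes upstream
        return None  # raw reading stays at the edge

# 100 raw readings collapse to 10 upstream summaries
agg = EdgeAggregator(window=10)
upstream = [s for r in range(100) if (s := agg.on_message(float(r)))]
print(len(upstream))  # 10
```

The names and message shapes here are illustrative assumptions; a real EFF deployment would wire the service into Cisco’s message broker rather than a local call.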

What’s next?

Deploying the above is no mean feat. According to Gartner’s perception of the IoT landscape, no organization has yet achieved the panacea of connecting devices to IT systems and vice versa, combined with the appropriate data management and governance capabilities embedded. So there is still a long road ahead.

However, with technology advancements such as the above, I have no doubt that companies and service providers will be able to accelerate progress and deliver further use cases sooner than we might think.

Based on this innovation, two obvious next steps stand out:

  • Further automation – automating communication, data management and analytics services including connection with IT/ERP systems
  • Machine made decisions – once all connections are established and the right information reaches the right destination, machines could react to information that is shared with ‘them’ and make automated decisions.

Fanni Vig
January 16, 2017

A friend of mine recently introduced me to the idea of the ‘runaway brain’ – a theory first published in 1993 outlining the uniqueness of human evolution. We take a look at how artificial intelligence is developing into something comparable to the human brain, and the potential caveats that concern us as human beings.

The theory considers how humans have created a complex culture by continually challenging their brains, leading to the development of more complex intellect throughout human evolution – a process which continues today and will no doubt do so for years to come. This is what theorists claim is driving human intelligence towards its ultimate best.

There are many ways in which we can define why ‘human intelligence’ is considered unique. In essence, it’s characterised by perception, consciousness, self-awareness, and desire.

Speaking to that friend made me wonder: with human intelligence now developing alongside the emergence of artificial intelligence (AI), is it possible for the ‘runaway brain’ to reach a new milestone? After further research, I found some who say it already has.

They label it ‘runaway super intelligence’.

Storage capacity of the human brain

Most neuroscientists estimate the human brain’s storage capacity to range between 10 and 100 terabytes, with some evaluations closer to 2.5 petabytes. In fact, new research suggests the human brain could hold as much information as the entire internet.

As surprising as that sounds, it’s not necessarily impossible. It has long been said that the human brain can be like a sponge, absorbing as much information as we throw at it. Of course we forget a large amount of that information, but consider those with photographic memories, those who practise a combination of innate skills, learned tactics and mnemonic strategies, or those with an extraordinary knowledge base.

Why can machines still perform better?

Ponder this – if human brains have the capacity to store significant amounts of data, why do machines continue to outperform human decision making?

The human brain has a huge range – data analysis and pattern recognition alongside the ability to learn and retain information. A human needs only to glance before they recognise a car they’ve seen before, but AI may need to process hundreds or even thousands of samples before it’s able to come to a conclusion. Call it human pre-emptive assumption, if you will: we save time by not analysing the finer details for an exact match. Conversely, while AI functions may be more complex and varied, the human brain is unable to process the same volume of data as a computer.

It’s this efficiency of data processing that leads researchers to believe that AI will indeed dominate our lives in the coming decades and eventually lead to what we call the ‘technology singularity’.

Technology singularity

Technological singularity is the hypothesis that the invention of artificial super intelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

According to this hypothesis, an upgradable intelligent agent, such as software-based artificial general intelligence, could enter a ‘runaway reaction’ cycle of self-learning and self-improvement, with each new and increasingly intelligent generation appearing more rapidly, causing an intelligence explosion resulting in a powerful super intelligence that would, qualitatively, far surpass human intelligence.

Ubiquitous AI

When it comes to our day-to-day lives, algorithms often save time and effort. Take online search tools, Internet shopping and smartphone apps using beacon technology to provide recommendations based upon our whereabouts.

Today, AI uses machine learning. Provide AI with an outcome-based scenario and, to put it simply, it will remember and learn. The computer is taught what to learn, how to learn, and how to make its own decisions.
What’s more fascinating is how new AIs are modelling the human mind using techniques similar to our own learning processes.
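The “provide an outcome-based scenario and it will remember and learn” idea can be sketched with one of the simplest learning techniques there is – a nearest-neighbour classifier (my choice for illustration, not a claim about any particular product). Shown labelled outcomes, it classifies a new case by finding the most similar example it has seen; the feature values below are invented for the sketch.

```python
import math

def nearest_neighbour(examples, query):
    """Minimal 'learning from outcomes': classify a new point by
    the label of its closest labelled example (1-nearest-neighbour).
    `examples` is a list of ((feature, ...), label) pairs."""
    _, label = min(examples, key=lambda ex: math.dist(ex[0], query))
    return label

# Outcome-based scenarios the machine has 'remembered':
# (weight in kg, top speed in km/h) -> label
examples = [
    ((1500.0, 200.0), "car"),
    ((1300.0, 180.0), "car"),
    ((15.0, 40.0), "bicycle"),
    ((12.0, 35.0), "bicycle"),
]
print(nearest_neighbour(examples, (1400.0, 190.0)))  # car
print(nearest_neighbour(examples, (14.0, 38.0)))     # bicycle
```

Unlike the human glance described above, the machine needs explicit examples to generalise from – scale this list up to thousands of samples and you have, in miniature, the training data problem the article describes.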

Do we need to be worried about the runaway artificial general intelligence?

Consider the cautiously wise words of Stephen Hawking, who said “success in creating AI would be the biggest event in human history”, before adding “unfortunately, it might also be the last, unless we learn how to avoid the risks”.

Whether we should be worried depends on too many variables for a definitive answer. However, it is difficult to argue that AI will not play a growing part in our lives and businesses.

Rest assured: 4 things that will always remain human

It’s inevitable that one might raise the question: is there anything that humans will always be better at?

  1. Unstructured problem solving. Solving problems in which the rules do not currently exist; such as creating a new web application.
  2. Acquiring and processing new information. Deciding what is relevant; like a reporter writing a story.
  3. Non-routine physical work. Performing complex tasks in a three-dimensional space requires a combination of skill #1 and skill #2, which is proving very difficult for computers to master. As a consequence, scientists like Frank Levy and Richard J. Murnane say we need to prepare children with an “increased emphasis on conceptual understanding and problem-solving”.
  4. And last but not least – being human. Expressing empathy, making people feel good, taking care of others, being artistic and creative for the sake of creativity, expressing emotions and vulnerability in a relatable way, and making people laugh.

Are you safe?

We all know that computers/machines/robots will have an impact (positive and/or negative) on our lives in one way or another. The rather ominous elephant in the room here is whether or not your job can be done by a robot.

I am sure you will be glad to know there is an algorithm for it…
In a recent article, the BBC predicted that 35% of current jobs in the UK are at ‘high risk’ of computerisation in the coming 20 years (according to a study by Oxford University and Deloitte).

It remains that jobs relying on empathy, creativity and social intelligence are considerably less at risk of being computerised. In comparison, roles including retail assistant (37th), chartered accountant (21st) and legal secretary (3rd) all rank among the top 50 jobs at risk.

Maybe it’s not too late to pick up that night course in ‘Computer Science’…
