Digitally Speaking

Neil Thurston
April 25, 2017

Hybrid IT is often referred to as bimodal, a term coined by Gartner some four years ago to reflect the (then) new need for the simultaneous management of two distinct strands of work in a Hybrid IT environment – the traditional server-based elements on the one hand, and the Cloud elements on the other.

Since then, the two strands of the bimodal world have blended in various ways. As they have engaged and experimented with new technologies, organisations have found that certain workload types are particularly suited to certain environments.

For example, DevOps work, with its strong focus on user experience elements such as web front ends, is typically well suited to cloud-native environments. Meanwhile, back end applications processing data tend to reside most comfortably in the traditional data centre environment.

The result is a multi-modal situation even within any given application, with its various tiers sitting in different technologies, or even different clouds or data centres.

The obvious question for IT management is this: how on earth do you manage an application which is split across multiple distinct technologies? Relying on technology to provide the management visibility you need drives you to traditional tools for the elements of the application built on traditional server technology, and to DevOps tools for the cloud-native side. Both sets of tools need to be continuously monitored, for every application and every environment.

A new breed of tools is emerging, allowing you to play in both worlds at once. VMware vRealize Automation cloud automation software is a good example. Over the last three years, VMware has developed its long-standing traditional platform, adding Docker container capabilities, so that today vRealize is a wholly integrated platform allowing for the creation of fully hybrid applications, in the form of ‘cut-out’ blueprints containing both traditional VM images and Docker images.

This multi-modal Hybrid IT world is where every enterprise will end up. IT management needs clear visibility, for every application, of multiple tiers across multiple technologies – for security, scaling, cost management and risk management, to name just a few issues. Platforms with the capability to manage this hybrid application state will be essential.

This area of enterprise IT is moving rapidly: Logicalis is well versed, and experienced, in these emerging technologies both in terms of solution and service delivery, and in terms of support for these technologies in our own cloud. Contact us to find out more about multi-modal Hybrid IT and how we can help you leverage it.

Category: Hybrid IT

Fanni Vig
April 20, 2017

Finally, it’s out!

With acquisitions like Composite, ParStream, Jasper and AppDynamics, we knew something was bubbling away in the background for Cisco with regards to edge analytics and IoT.

Edge Fog Fabric – EFF

The critical success factor for IoT and analytics solution deployments is to provide the right data, at the right time to the right people (or machines).

With the exponential growth in the number of connected devices, the marketplace requires solutions that simultaneously provide data-generating devices, communication, data processing and data-leveraging capabilities.

To meet this need, Cisco recently launched a software solution (predicated on hardware devices) that encompasses all of the above capabilities, named Edge Fog Fabric (EFF).

What is exciting about EFF?

To implement high-performing IoT solutions that are cost effective and secure, a combination of capabilities needs to be in place.

  • Multi-layered data processing, storage and analytics – given the rate of growth in the number of connected devices and the volume of data, bringing data back from devices to a data centre environment can be expensive. Processing information on the EFF makes this a lot more cost effective.
  • Microservices – a standardised framework for data processing and communication services that can be programmed in standard programming languages such as Python or Java (a minimal sketch follows this list).
  • Message routers – effective communication between the various components and layers. Without state-of-the-art message brokering, no IoT system could be secure and scalable in providing real-time information.
  • Data leveraging capabilities – ad hoc, embedded or advanced analytics capabilities to support BI and reporting needs. With the acquisitions of Composite and AppDynamics, EFF will enable an IoT platform to connect to IT systems and applications.
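
To make the microservices point more concrete, here is a minimal sketch, in Python (one of the languages mentioned above), of the kind of edge-side service the list describes: it aggregates raw sensor readings locally and forwards only summaries and alerts upstream, so the bulk of the data never has to leave the edge. The function names, topics and thresholds are hypothetical illustrations, not the actual EFF APIs.

```python
# Illustrative edge microservice sketch (hypothetical, not the EFF SDK):
# aggregate raw sensor readings locally and forward only compact summaries
# and alerts upstream.

import json
import random
import statistics
import time

WINDOW_SIZE = 10          # readings aggregated into each summary
ALERT_THRESHOLD = 75.0    # raise an immediate alert above this value


def read_sensor() -> float:
    """Stand-in for a real device read (e.g. a temperature probe)."""
    return random.uniform(40.0, 80.0)


def publish(topic: str, payload: dict) -> None:
    """Stand-in for a message-router publish (e.g. MQTT/AMQP in a real system)."""
    print(f"{topic}: {json.dumps(payload)}")


def run_once() -> None:
    window = [read_sensor() for _ in range(WINDOW_SIZE)]

    # Forward only a compact summary, not every raw reading.
    publish("edge/summary", {
        "count": len(window),
        "mean": round(statistics.mean(window), 2),
        "max": round(max(window), 2),
        "ts": int(time.time()),
    })

    # React locally, without waiting for a round trip to the data centre.
    for value in window:
        if value > ALERT_THRESHOLD:
            publish("edge/alert", {"value": round(value, 2), "ts": int(time.time())})
            break


if __name__ == "__main__":
    run_once()
```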

What’s next?

Deploying the above is no mean feat. According to Gartner’s view of the IoT landscape, no organisation has yet achieved the panacea of connecting devices to IT systems and vice versa, combined with the appropriate data management and governance capabilities embedded. So there is still a long road ahead.

However, with technology advancements such as the above, I have no doubt that companies and service providers will be able to accelerate progress and deliver further use cases sooner than we might think.

Based on this innovation, the two obvious next steps are:

  • Further automation – automating communication, data management and analytics services including connection with IT/ERP systems
  • Machine made decisions – once all connections are established and the right information reaches the right destination, machines could react to information that is shared with ‘them’ and make automated decisions.

Scott Hodges
April 18, 2017

Attending a recent IBM Watson event, somebody in the crowd asked the speaker, “So, what is Watson?” It’s a good question – and one there isn’t really a straightforward answer to. Is it a brand? A supercomputer? A technology? Something else?

Essentially, it is an IBM technology that combines artificial intelligence and sophisticated analytics in a supercomputer named after IBM’s founder, Thomas J. Watson. While interesting enough, the real question, to my mind, is this: “What sort of cool stuff can businesses do with the very smart services and APIs provided by IBM Watson?”

IBM provides a variety of services, available through Application Programming Interfaces (APIs), that developers can use to take advantage of the cognitive elements and power of Watson. The biggest challenge in taking advantage of these capabilities is to “think cognitively” and imagine how they could benefit your business or industry to give you a competitive edge – or, for not-for-profit organisations, how they can help you make the world a better place.

I’ve taken a look at some of the APIs and services available to see some of the possibilities with Watson. It’s important to think of them collectively rather than individually, as while some use-cases may use one, many will use a variety of them, working together. We’ll jump into some use-cases later on to spark some thoughts on the possibilities.

Natural Language Understanding

Extract meta-data from content, including concepts, entities, keywords, categories, sentiment, emotion, relations and semantic roles.

Discovery

Identify useful patterns and insights in structured or unstructured data.

Conversation

Add natural language interfaces such as chat bots and virtual agents to your application to automate interactions with end users.

Language Translator

Automate the translation of documents from one language to another.

Natural Language Classifier

Classify text according to its intent.

Personality Insights

Extract personality characteristics from text, based on the writer’s style.

Text to Speech and Speech to Text

Process natural language text to generate synthesised audio, or render spoken words as written text.

Tone Analyser

Use linguistic analysis to detect the emotional (joy, sadness etc), linguistic (analytical, confident etc) and social (openness, extraversion etc) tone of a piece of text.

Trade-off Analytics

Make better choices when analysing multiple, even conflicting goals.

Visual Recognition

Analyse images for scenes, objects, faces, colours and other content.

All this is pretty cool stuff, but how can it be applied to work in your world? You could use the APIs to “train” your model to be more specific to your industry and business, and to help automate and add intelligence to various tasks.
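
To give a flavour of how these services are consumed, here is a minimal sketch of posting an image to a Watson-style visual recognition REST endpoint from Python. The endpoint URL, API key and version date are placeholders invented for illustration; the exact values and request parameters are in IBM’s API documentation.

```python
# Sketch of calling a Watson-style visual recognition REST API.
# The URL, key and version below are placeholders, not IBM's actual values;
# consult the official API reference before using this pattern.

import requests

API_URL = "https://example-watson-endpoint.ibm.com/visual-recognition/api/v3/classify"
API_KEY = "YOUR_API_KEY"   # placeholder credential
VERSION = "2017-04-01"     # placeholder API version date


def classify_image(path: str) -> dict:
    """Send a local image for classification and return the parsed JSON response."""
    with open(path, "rb") as image_file:
        response = requests.post(
            API_URL,
            params={"api_key": API_KEY, "version": VERSION},
            files={"images_file": image_file},
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # The returned JSON lists the classes the service recognised and its confidence.
    print(classify_image("turbine_blade.jpg"))
```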

Aerialtronics offers a nice example use-case of visual recognition in particular: the company develops, produces and services commercial unmanned aircraft systems. Essentially, it teams drones, an IoT platform and Watson’s Visual Recognition service to help identify corrosion, serial numbers, loose cables and misaligned antennas on wind turbines, oil rigs and mobile phone towers. This helps automate the process of identifying faults and defects.

Further examples showing how Watson APIs can be combined to drive powerful, innovative services can be found on the IBM Watson website’s starter-kit page.

At this IBM event, a sample service was created, live in the workshop. This application would stream a video, convert the speech in the video to text, and then categorise that text, producing an overview of the content being discussed. The application used the speech-to-text and natural language classifier services.

Taking this example further with a spot of blue sky thinking, for a multi-lingual organisation, we could integrate the translation API, adding the resulting service to video conferencing. This could deliver near real-time multiple dialect video conferencing, complete with automatic transcription in the correct language for each delegate.
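
As a rough illustration of how such a pipeline might hang together, the sketch below chains speech-to-text, translation and intent-classification steps. Each helper is a stub standing in for a call to the corresponding Watson service; the real endpoints and request formats live in IBM’s documentation, so nothing here should be read as the actual API.

```python
# Sketch of chaining the workshop example: speech-to-text, then translation,
# then intent classification. Each helper is a stub standing in for a call to
# the corresponding Watson REST service.

def speech_to_text(audio_path: str) -> str:
    """Would call the Speech to Text service; stubbed for illustration."""
    return "the flight to madrid departs at nine"


def translate(text: str, source: str, target: str) -> str:
    """Would call the Language Translator service; stubbed for illustration."""
    return f"({source}->{target}) {text}"   # placeholder for the translated text


def classify_intent(text: str) -> str:
    """Would call the Natural Language Classifier; stubbed for illustration."""
    return "flight-information"


def process_meeting_audio(audio_path: str, delegate_language: str) -> dict:
    transcript = speech_to_text(audio_path)
    translated = translate(transcript, source="en", target=delegate_language)
    intent = classify_intent(transcript)
    return {"transcript": transcript, "translated": translated, "intent": intent}


if __name__ == "__main__":
    print(process_meeting_audio("meeting_clip.wav", delegate_language="es"))
```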

Customer and support service chat bots could be built on the Conversation service, with the Tone Analyser gauging how customers are feeling. Processes such as flight booking could be fulfilled by a virtual agent using the Natural Language Classifier to derive the intent of the conversation. Visual Recognition could be used to identify production line issues, spoiled products in inventory or product types in retail environments.

Identification of faded colours or specific patterns within scenes or on objects could trigger remedial services. Detection of human faces, their gender and approximate age could help enhance customer analysis. Language translation could support better communication with customers and others in their preferred languages. Trade-off Analytics could help optimise the balancing of multiple objectives in decision making.

This isn’t pipe-dreaming: the toolkit is available today. What extra dimensions and capabilities could you add to your organisation, and the way you operate? How might you refine your approach to difficult tasks, and the ways you interact with customers? Get in contact today to discuss the possibilities.

Richard Alexander
March 10, 2017

As Logicalis’ Chief Security Technology Officer I’m often asked to comment on cyber security issues. Usually the request relates to specific areas such as ransomware or socially engineered attacks. In this article I’m taking a more holistic look at IT security.

Such a holistic approach to security is, generally, sorely lacking. This is a serious matter, with cyber criminals constantly looking for the weak links in organisations’ security, constantly testing the fence to find the easiest place to get through. So, let’s take a look at the state of enterprise IT security in early 2017, using the technology, processes and people model.

Technology

A brief, high-level look at the security market is all it takes to show that there are vast numbers of point products out there – ‘silver bullet’ solutions designed to take out specific threats. There is, however, little in terms of an ecosystem supporting a defence-in-depth architecture. Integration of, and co-operation between, the various disparate components is, although growing, typically weak or non-existent.

We’ve seen customers with more than 60 products deployed, from over 40 vendors, each intended to address a specific security issue. Having such a large number of products itself presents significant security challenges, though. Combined, all these products have their own vulnerability: support and maintenance. Managing them and keeping them updated generates significant workload, and any mistakes or unresolved issues can easily become new weak points in the organisation’s security.

The situation has been exacerbated by the rapidly increasing popularity of Cloud and Open Source software. Both trends make market entry significantly simpler, allowing new players to quickly and easily offer new solutions, targeting whichever threat happens to be making a big noise at the moment.

Just as poor integration between security products is an issue, so is lack of integration between the components on which they are built. Through weak coding or failure to make use of hardware security features – Intel’s hardware-level Software Guard Extensions (SGX) encryption technology is a good example – security holes are left open, waiting to be exploited.

The good news on the technology front is that we are seeing the early stages of the development of protocols, such as STIX, TAXII and CybOX, allowing different vendors’ products to interact and share standardised threat information. The big security vendors have been promoting the idea of threat information sharing and subsequent action for a while, but only within their own product ecosystems. It’s time for a broader playing field!
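
To give a sense of what that standardised sharing looks like, here is a minimal sketch of a STIX 2.0-style indicator object assembled in Python. The domain, identifiers and pattern are invented for illustration, and a production system would typically use a dedicated STIX/TAXII library rather than hand-built JSON.

```python
# Minimal illustration of a STIX 2.0-style indicator: a standardised threat
# description that different vendors' tools can exchange (e.g. over TAXII).
# The identifiers and pattern below are invented for illustration only.

import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "labels": ["malicious-activity"],
    "name": "Known ransomware distribution domain",
    "pattern": "[domain-name:value = 'malware.example.com']",
    "valid_from": now,
}

# Serialised like this, the indicator can be published to a TAXII server and
# consumed by any product that understands STIX, regardless of vendor.
print(json.dumps(indicator, indent=2))
```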

Processes

IT security is one of the most important issues facing today’s enterprise, yet, while any self-respecting board will feature directors with responsibility for sales, marketing, operations and finance, few enterprises have a board level CISO.

Similarly, few organisations have a comprehensive and thoroughly considered security strategy in place, or proper security processes and policies suitable for today’s threat landscape and ICT usage patterns. A number of industry frameworks exist – ISO 27001, Cyber Essentials and NIST, to name but a few – and yet very few organisations adopt these beyond the bare minimum to meet regulatory requirements.

Most organisations spend considerable sums on security technology, but without the right security strategy in place, and user behaviour in line with the right processes and policies, they remain at risk of serious breaches.

People

The hard truth is that some 60% of breaches are down to user error. Recent research obtained through Freedom of Information requests found that 62% of breaches reported to the ICO are down to humans basically getting it wrong. People make poor password choices, use insecure public (and private!) WiFi, and use public Cloud storage and similar services without taking the necessary security precautions. They do not follow, or indeed even know, corporate data classification and usage policies. The list, of course, goes on.

Training has a part to play here, to increase users’ awareness of the importance of security, as well as the behaviours they need to adopt (and discard) to stay secure. However, there will come a point at which the law of diminishing returns kicks in: we all make mistakes – even the most careful, well trained of us.

We need to explore, discover and devise new ways in which technology can help, by removing the human element, where possible and desirable, and by limiting and swiftly rectifying the damage done when human error occurs. Furthermore, we need to leverage ever improving machine learning and artificial intelligence software to help augment human capability.

Enterprises need to work with specialists that can help them understand the nature of the threats they face, and the weak links in their defences that offer miscreants easy ways in. That means closely examining all aspects of their security from each of the technology, processes and people perspectives, to identify actual and potential weaknesses. Then robust, practical, fit-for-purpose security architectures and policies can be built.

For an outline of how this can work, take a look at Logicalis’ three-step methodology here, or email us at security@uk.logicalis.com to discuss your cyber security needs.

Category: Security

Neil Thurston
February 13, 2017

The explosive growth of Cloud computing in recent years has opened up diverse opportunities for both new and established businesses. However, it has also driven the rise of a multitude of ‘islands of innovation’. With each island needing its own service management, data protection and other specialists, IT departments find themselves wrestling with increased – and increasing – management complexity and cost.

Necessity is the mother of invention, and with cost and complexity becoming increasingly problematic, attitudes to Cloud are changing. Organisations are moving selected tools, resources and services back to on-premises deployment models: we’re seeing the rise of the Hybrid Cloud environment.

The trend towards Hybrid Cloud is driven by an absolute need for operational and service consistency, regardless of the on-premises/Cloud deployment mix – a single set of automation platforms, a single set of operational tools and a single set of policies. We’re looking at a change in ethos, away from multiple islands of innovation, each with its own policies, processes and tools, to a single tool kit – a single way of working – that we can apply to all our workloads and data, regardless of where they actually reside.

Disparate islands in the Cloud have also increasingly put CIOs in the unenviable position of carrying the responsibility for managing and controlling IT but without the capability and authority to do so. Many organisations have experimented (some might say dabbled) with cherry-picked service management frameworks such as ITIL.

With focus shifting to Hybrid Cloud, we’re now seeing greater interest in more pragmatic ITSM frameworks, such as IT4IT, pushing responsibility up the stack and facilitating the move to something more akin to supply chain management than pure hardware, software and IT services management.

There are two key pieces to the Hybrid IT puzzle. On the one hand, there’s the workload: the actual applications and services. On the other, there’s the data. The data is where the value is – the critical component, to be exploited and protected. Workloads, however, can be approached in a more brokered manner.

Properly planned and executed, Hybrid Cloud allows the enterprise to benefit from the best of both the on-premises world and the Cloud world. The ability to select the best environment for each tool, service and resource – a mix which will be different in different industries, and even in different businesses within the same industry – delivers significant benefits in terms of cost, agility, flexibility and scalability.

Key to this is a comprehensive understanding of where you are and where you want to be, before you start putting policies or technology in place. The Logicalis Hybrid IT Workshop can help enormously with this, constructing a clear view of both your current state and your target state.

In the workshop we assess your top applications and services, where they reside and how they’re used in your business. We then look at where you want to get to. Do you want to own your assets, or not? Do you want to take a CAPEX route or an OPEX route? Do you have an inherent Cloud First strategy? What are your licensing issues?

We then use our own analysis tools, developed from our real world experience with customers, to create visualisations showing where you are today, where you want to eventually be and our recommended plan to bridge the gap, in terms of people, processes, technology and phases.

Hybrid Cloud offers significant benefits, but needs to be carefully planned and executed. To find out more about how Logicalis can help, see our website or call us on +44 (0)1753 77720.

Category: Hybrid IT

Fanni Vig
January 16, 2017

A friend of mine recently introduced me to the idea of the ‘runaway brain’ – a theory first published in 1993 outlining the uniqueness of human evolution. Here we take a look at how artificial intelligence is developing into something comparable to the human brain, and at the potential concerns this raises for us as human beings.

The theory considers how humans have created a complex culture by continually challenging their brains, leading to the development of more complex intellect throughout human evolution. It is a process that continues today and will no doubt continue for years to come. This is what theorists claim is driving human intelligence towards its ultimate best.

There are many ways in which we can define why ‘human intelligence’ is considered unique. In essence, it’s characterised by perception, consciousness, self-awareness, and desire.

It was while speaking to my friend that I began to wonder: with human intelligence now developing alongside artificial intelligence (AI), is it possible for the ‘runaway brain’ to reach a new milestone? After further research, I found some who say it already has.

They label it ‘runaway super intelligence’.

Storage capacity of the human brain

Most neuroscientists estimate the human brain’s storage capacity to range between 10 and 100 terabytes, with some evaluations estimating closer to 2.5 petabytes. In fact, new research suggests the human brain could hold as much information as the entire internet.

As surprising as that sounds, it’s not necessarily impossible. It has long been said that the human brain can be like a sponge, absorbing as much information as we throw at it. Of course we forget a large amount of that information, but consider those with photographic memories, those who practise a combination of innate skills, learned tactics and mnemonic strategies, or those who have an extraordinary knowledge base.

Why can machines still perform better?

Ponder this – if human brains have the capacity to store significant amounts of data, why do machines continue to outperform human decision making?

The human brain has a huge range – data analysis and pattern recognition alongside the ability to learn and retain information. A human needs only to glance before they recognise a car they’ve seen before, but AI may need to process hundreds or even thousands of samples before it’s able to come to a conclusion. Call it human premeditative assumption, if you will: we save time by not analysing the finer details needed for an exact match. Conversely, while AI functions may be more complex and varied, the human brain is unable to process the same volume of data as a computer.

It’s this efficiency of data processing that prompts leading researchers to believe that AI will indeed dominate our lives in the coming decades and will eventually lead to what we call the ‘technology singularity’.

Technology singularity

The technological singularity is the hypothesis that the invention of artificial super intelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

According to this hypothesis, an upgradable intelligent agent, such as software-based artificial general intelligence, could enter a ‘runaway reaction’ cycle of self-learning and self-improvement, with each new and increasingly intelligent generation appearing more rapidly, causing an intelligence explosion resulting in a powerful super intelligence that would, qualitatively, far surpass human intelligence.

Ubiquitous AI

When it comes to our day-to-day lives, algorithms often save time and effort. Take online search tools, Internet shopping and smartphone apps using beacon technology to provide recommendations based upon our whereabouts.

Today, AI uses machine learning. Provide AI with an outcome-based scenario and, to put it simply, it will remember and learn. The computer is taught what to learn, how to learn, and how to make its own decisions. What’s more fascinating is how new AIs are modelling the human mind using techniques similar to our own learning processes.
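
As a minimal sketch of that ‘show it outcomes and it will learn’ idea, the example below trains a simple classifier on a handful of labelled cases and then lets it decide on one it has not seen. The toy data and the choice of scikit-learn are mine, purely for illustration.

```python
# Minimal supervised-learning sketch: show the model labelled outcomes,
# let it generalise, then ask it to decide on a case it hasn't seen.

from sklearn.tree import DecisionTreeClassifier

# Toy training data: [hours_of_study, hours_of_sleep] -> passed the exam (1) or not (0)
X_train = [[8, 7], [7, 8], [2, 4], [1, 6], [9, 6], [3, 3]]
y_train = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)       # the "remember and learn" step

# The trained model now makes its own decision about an unseen case.
print(model.predict([[6, 7]]))    # e.g. [1], i.e. predicted to pass
```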

Do we need to be worried about runaway artificial general intelligence?

We would do well to listen to the cautiously wise words of Stephen Hawking, who said that “success in creating AI would be the biggest event in human history”, before adding, “unfortunately, it might also be the last, unless we learn how to avoid the risks”.

Whether we should be worried depends on too many variables for a definitive answer. However, it is difficult to argue against the idea that AI will play a growing part in our lives and businesses.

Rest assured: 4 things that will always remain human

Inevitably, one might ask: is there anything that humans will always be better at?

  1. Unstructured problem solving. Solving problems for which the rules do not currently exist, such as creating a new web application.
  2. Acquiring and processing new information. Deciding what is relevant, like a reporter writing a story.
  3. Non-routine physical work. Performing complex tasks in three-dimensional space that require a combination of skills #1 and #2, which is proving very difficult for computers to master. As a consequence, scientists like Frank Levy and Richard J. Murnane say that we need to focus on preparing children with an “increased emphasis on conceptual understanding and problem-solving”.
  4. And last but not least – being human. Expressing empathy, making people feel good, taking care of others, being artistic and creative for the sake of creativity, expressing emotions and vulnerability in a relatable way, and making people laugh.

Are you safe?

We all know that computers/machines/robots will have an impact (positive and/or negative) on our lives in one way or another. The rather ominous elephant in the room here is whether or not your job could be done by a robot.

I am sure you will be glad to know there is an algorithm for it…
In a recent article, the BBC predicted that 35% of current jobs in the UK are at ‘high risk’ of computerisation in the coming 20 years, according to a study by Oxford University and Deloitte.

It remains the case that jobs which rely on empathy, creativity and social intelligence are considerably less at risk of being computerised. In comparison, roles including retail assistants (37th), chartered accountants (21st) and legal secretaries (3rd) all rank among the top 50 jobs at risk.

Maybe it’s not too late to pick up that night course in Computer Science…
