Digitally Speaking

Scott Reynolds
July 25, 2017

The amount of data that businesses generate and manage continues to explode. IBM estimates that, worldwide, 2.3 trillion gigabytes of data are created each day, and that the world’s total store of data will reach 43 trillion gigabytes by 2020.

From transactions and customer records to email, social media and internal record keeping – today’s businesses create data faster than ever before. And there’s no question that storing and accessing this data presents real challenges. How to keep up with fast-growing storage needs without a fast-growing budget? How to increase storage capacity without increasing complexity? How to access critical data without slowing the business down?

It’s increasingly obvious that traditional storage can’t overcome these challenges. Simply adding more capacity drives up both storage and management costs, and manually working with data across different systems can become an administrative nightmare – adding complexity and consuming valuable IT resources.

So, what can you do? You’ve likely already got an existing infrastructure and, for many, scrapping it and starting again just isn’t an option. This is where flash and software-defined storage (SDS) could be your saviour. By separating the software that provides the intelligence from the traditional hardware platform, you gain a host of advantages, including flexibility, scalability and improved agility.

So I could add to what I already have?

Yes. Flash and tape aren’t mutually exclusive. Lots of businesses use a mix of the old and the new – what’s important is how you structure it. Think of it like a well-organised wardrobe. You keep your everyday staples close at hand, and you store the less frequently worn items – also known in the UK as the summer wardrobe (!) – where you can reach them if you need them, but not in prime position.

Your data could, and should, work like this. Use flash for critical workloads that require real-time access, and keep your older tape storage for lower-priority data and lower-performance applications.
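The placement rule is simple enough to sketch in code. This is a toy policy, not a product feature: the tier names, the access-frequency threshold and the workload fields are all illustrative assumptions.

```python
# Toy data-placement policy: hot, frequently accessed workloads go to
# flash; colder, lower-priority data stays on tape. Thresholds and
# field names are invented for illustration, not product settings.

def choose_tier(workload: dict) -> str:
    """Return 'flash' or 'tape' for a workload description."""
    critical = workload.get("critical", False)
    accesses_per_day = workload.get("accesses_per_day", 0)
    # Critical or frequently accessed data earns the fast (pricier) tier.
    if critical or accesses_per_day > 100:
        return "flash"
    return "tape"

workloads = [
    {"name": "order-db", "critical": True, "accesses_per_day": 50_000},
    {"name": "email-archive", "critical": False, "accesses_per_day": 3},
]

for w in workloads:
    print(f"{w['name']} -> {choose_tier(w)}")
```

In a real SDS platform this decision is made automatically and continuously, re-tiering data as its access pattern changes, rather than once at creation time.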

But won’t it blow my budget?

No. The cost of flash systems has come down over the last few years, and lower running costs deliver savings over the long term. Virtualisation of mixed environments has been shown to store up to five times more data, and analytics-driven hybrid cloud data management can reduce costs by up to 73%. In fact, we estimate that with automatic data placement and management across storage systems, media and cloud, it’s possible to reduce costs by up to 90%!

So how do I know what system will work for me?

Well, that’s where we come in. At Logicalis we’ve got over 20 years of experience working with IBM systems. Our experts work with clients to help them scope out a storage solution that meets their needs today, and the needs they’ll have tomorrow.

We start with a Storage Workshop that looks at the existing infrastructure and what you’re hoping to achieve. We’ll look at how your data is currently structured and what changes you could make to improve what you already have – reducing duplication and using the right solution for the right workload. We’ll then work with you to add software and capacity that will protect your business and won’t blow your budget.

If you want to hear more about the solutions on offer, feel free to contact us.

Category: Hybrid IT

Scott Reynolds
July 12, 2017

£170m lost on the London Stock Market in just over a week, and untold damage to the “World’s Favourite Airline”. That’s the cost in the UK to the International Airlines Group, the owner of British Airways, after BA’s recent ‘power outage’ incident.

“It wasn’t an IT failure. It’s not to do with our IT or outsourcing our IT. What happened was in effect a power system failure or loss of electrical power at the data centre. And then that was compounded by an uncontrolled return of power that took out the IT system.” Willie Walsh (IAG Supremo) during a telephone interview with The Times.

Walsh has since implied that the outage was caused by the actions of an engineer who disconnected and then reconnected a power supply to the data centre in “an uncontrolled and un-commanded fashion”. Could this then have something to do with the IT outsource after all? Did a staff member go rogue, or was it down to poor training and change control…?

For me, what this highlights is the need to place greater emphasis on the availability and uptime of those systems that support the critical parts of a business’s or organisation’s services and offering, along with robust processes and automation wherever possible to minimise the impact of an unplanned outage.

All businesses should expect their systems to fail. Sometimes it’s a physical failure of the infrastructure supporting the data centre (power, UPSs, generators, cooling and so on). It can be the power supply itself. Compute, storage or network equipment can fail. Software and systems can suffer an outage. And it can also come down to human error, or poor maintenance of core systems and infrastructure.

Coping with a Power Failure

Even if you have two power feeds to your building, and even if they’re from two different power sub-stations and run through two different street routes, those sub-stations are still part of the same regional and national power grid. If the grid fails, so does your power. No way around it, except to make your own. Power surges are handled by monitoring the power across cabinet PDUs, critical PDUs, UPSs, generators and transformers, while assigning a maximum load to every cabinet to make sure we don’t overload our customers’ systems.

Recovering from a Disaster

Recovering from a disaster is something that all organisations plan for; however, not all have a Disaster Recovery (DR) plan, as some consider High Availability (HA) to be more than sufficient. HA, though, only provides localised failover, whereas DR is designed to cope with the failure of an entire site.

The challenge with DR for many of our customers is the cost:

  • First, you need to prioritise which application workloads you want to fail over in the event of a disaster.
  • Second, you need to purchase and manage infrastructure and licensing for these workloads, with continuous replication.
  • Third, you need a second location.
  • Fourth, you need a robust DR plan that allows you to recover your workloads at that second location.
  • Lastly – and this is often the hardest part – you need to fail back these services once the primary site has been recovered.

This can be an expensive option, but this is also where things like Cloud DR-as-a-Service can help minimise any expenditure, and the pain associated with owning and managing a DR environment.
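The prioritisation step above is, at heart, an ordering problem: decide which workloads are recovered first. A minimal sketch, with entirely made-up workload names, tiers and recovery time objectives (RTOs):

```python
# Sketch of DR prioritisation: rank workloads so the plan recovers the
# most critical services first. The names, tiers and RTOs below are
# invented for illustration only.

WORKLOADS = [
    {"name": "marketing-wiki", "tier": 3, "rto_minutes": 1440},
    {"name": "payments", "tier": 1, "rto_minutes": 15},
    {"name": "crm", "tier": 2, "rto_minutes": 240},
]

def recovery_order(workloads):
    """Fail over lowest tier (most critical) and tightest RTO first."""
    return sorted(workloads, key=lambda w: (w["tier"], w["rto_minutes"]))

for w in recovery_order(WORKLOADS):
    print(f"{w['name']}: tier {w['tier']}, RTO {w['rto_minutes']} min")
```

The same ordering, reversed, is a reasonable starting point for the fail-back sequence once the primary site is restored.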

Reducing the impact of an outage

Minimising the impact of any form of physical failure should take priority over recovering from an outage. Workflow automation can help a business maintain the uptime of applications and services: a policy can be defined whereby services are moved to other systems locally, or re-provisioned to a DR location or DR platform, in the event of an outage caused by a power issue or human error – helping a business minimise both the risk and the impact of an outage.

I’ll let you come to your own conclusions as to whether British Airways should adopt robust change control, automation or DR policies. Logicalis can assist, providing you with a number of options tailored to your particular needs, so that you are not the next press headline.

Richard Simmons
June 20, 2017

I have a confession to make: I love to read. Not just the occasional book on holiday, or a few minutes on the brief – often not so brief – train journey into and out of London, but all the time. There has never been a better time for those with a love of reading! The rise of digital media means that not only can you consume it pretty much anywhere, at any time, but, more importantly, it is easier than ever for people to share their ideas and experience.

Recently I came across a book called “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” by Pulitzer Prize winner Thomas L. Friedman, which I not only found fascinating but which has also helped shape the way I view many of the challenges we face, both in business and in our personal lives. The premise of the book is that Friedman would often arrange to meet people for breakfast early in the morning, to conduct interviews or research stories, and occasionally those people would be delayed. These moments, rather than being a source of frustration, became time he actually looked forward to: they allowed him simply to sit and think. Looking at the world, he concluded that we are living through an age of acceleration, driven by constant technological evolution, globalisation and climate change, and he argues that these combined are the cause of many of the challenges we currently face.

The key point about this acceleration is that it has now reached a level at which society and people are struggling to adapt. In the technology world we talk a lot about disruption: a new business or technology arrives that disrupts a sector or market, the competition struggles to adapt, and eventually a new status quo is reached. Uber, for example, has undoubtedly caused huge disruption in the world of transport, and governments are currently working through how they can better legislate for this new way of operating. The challenge is that new legislation can take five to ten years to agree and implement, by which time Uber may well have been replaced by autonomous cars.

So what we are experiencing now is not just disruption but a sense of dislocation – the feeling that no matter how fast we try to change, it is never enough. In this environment, the people, businesses and societies that learn and adapt fastest will be the most successful. In business we are constantly shown how being more agile in this digital world can drive efficiency, generate new business models and allow us to succeed, but what is often lacking is guidance on how to get there. We have a wealth of technologies that can support a business, but which is right for me? What should I invest in first? And how do I make sure I maximise the value of that investment?

My experience with many of our customers is that they understand the challenges, and the opportunity, but simply do not have the time to think and plan. When they do have time, the amount of choice can be overwhelming, even daunting. In a small way, this is the same challenge I face when looking for new books to read: I can go online, but with so much to choose from, how will I know what I will enjoy? The opportunity that digital media provides – more authors, more content – can actually make finding and choosing something valuable much harder.

At Logicalis, we understand the business challenges that you face and will discuss with you the different technology options that could support you, recommending those that can deliver the biggest value in the shortest time frame. Contact us to find out how we can help you keep up to speed with emerging technology and use it to your benefit.

Alastair Broom
May 16, 2017

What if I told you that ransomware is on its way to becoming a $1 billion annual market?

Eyebrows raised (or not), it is a matter of fact that in 2017 ransomware is an extremely lucrative business, evolving at an alarming rate and becoming more sophisticated by the day.

But, the question remains, what is Ransomware?

Ransomware is a malicious software – a form of malware – that either disables a target system or encrypts a user’s files and holds them ‘hostage’ until a ransom is paid. This malware generally operates indiscriminately with the ability to target any operating system, within any organisation. Once the malware has gained a foothold in an organisation, it can spread quickly infecting other systems, even backup systems and therefore can effectively disable an entire organisation. Data is the lifeblood of many organisations and without access to this data, businesses can literally grind to a halt. Attackers demand that the user pay a fee (often in Bitcoins) to decrypt their files and get them back.

On a global scale, more than 40% of ransomware victims pay the ransom, although there is no guarantee that you will actually get your data back – and copies of your data will now be in the attacker’s hands. In the UK, 45% of organisations reported that a severe data breach caused systems to be down for more than eight hours on average. The cost, then, is not only the ransom itself, but also the significant resources required to restore systems and data. What is even more alarming is that the number of threats and alerts in the UK is significantly higher than in other countries (Cisco 2017 Annual Cybersecurity Report). Outdated systems and equipment are partly to blame, coupled with the belief that line managers are not sufficiently engaged with security. Modern, sophisticated attacks like ransomware require user awareness, effective processes and cutting-edge security systems to prevent them from taking your organisation hostage!

How can you protect your company?

As one of the latest threats in cybersecurity, much has been written and said about ransomware and potential ways of preventing it. A successful mitigation strategy involving people, process and technology is the best way to minimise both the risk of an attack and its impact. Your security programme should consider the before, during and after of an attack: protecting the organisation, detecting ransomware and other malware, and responding once an attack has taken place. Given that ransomware can penetrate organisations in multiple ways, reducing the risk of infection requires a holistic approach rather than a single point solution. It takes seconds to encrypt an entire hard disk, so IT security systems must provide the highest levels of protection, rapid detection, and strong containment and quarantine capabilities to limit the damage. Paying the ransom should be viewed as an undesirable, unpredictable last resort, and every organisation should therefore take effective measures to avoid this scenario.

Could your organisation be a target?

One would imagine that only large corporations would be at risk of a ransomware attack, but this is far from the truth. Organisations of all industries and sizes report ransomware attacks, which lead to substantial financial loss, data exposure and potential brand damage. The reason is that all businesses rely on the availability of data – employee profiles, patents, customer lists, financial statements and so on – to operate. Imagine the impact of ransomware attacks on police departments, city councils, schools or hospitals. Whether an organisation operates in the public or private sector, banking or healthcare, it must have an agile security system in place to reduce the risk of a ransomware attack.

Where to start?

The first step in shielding your company against ransomware is to audit your current security posture and identify areas of exposure. Do you have the systems and skills to identify an attack? Do you have the processes and resources to respond effectively? Ransomware disguises itself and uses sophisticated hacking tactics to infiltrate your organisation’s network, so it is important to constantly seek innovative ways to protect your data before irreparable damage is done.

With our Security Consultancy, Managed Security Service offerings and threat-centric Security product portfolio, we are able to help our customers build the holistic security architecture needed in today’s threat landscape.

Contact us to discuss your cyber security needs and ensure you aren’t the next topic of a BBC news article.

Category: Security

Neil Thurston
April 25, 2017

Hybrid IT is often referred to as bimodal, a term coined by Gartner some four years ago to reflect the (then) new need for the simultaneous management of two distinct strands of work in a Hybrid IT environment – the traditional server-based elements on the one hand, and the Cloud elements on the other.

Since then, the two strands of the bimodal world have blended in various different ways. As they have engaged and experimented with new technologies, organisations have found that certain workload types are particularly suited to certain environments.

For example, DevOps work, with its strong focus on user experience elements such as web front ends, is typically well suited to cloud-native environments. Meanwhile, back end applications processing data tend to reside most comfortably in the traditional data centre environment.

The result is a multi-modal situation even within any given application, with its various tiers sitting in different technologies, or even different clouds or data centres.

The obvious question for IT management is this: how on earth do you manage an application that is split across multiple distinct technologies? Relying on technology to provide the management visibility you need drives you to traditional tools for the elements of the application based on traditional server technology, and to DevOps tools for the cloud-native side. Both sets of tools need continuous monitoring – for every application, and every environment.

A new breed of tools is emerging that lets you play in both worlds at once. VMware vRealize Automation cloud automation software is a good example. Over the last three years, VMware has developed its long-standing traditional platform, adding Docker container capabilities, so that today vRealize is a wholly integrated platform allowing the creation of fully hybrid applications, in the form of ‘cut-out’ blueprints containing both traditional VM images and Docker images.

This multi-modal Hybrid IT world is where every enterprise will end up. IT management needs clear visibility, for every application, of multiple tiers across multiple technologies – for security, scaling, cost management and risk management, to name just a few issues. Platforms with the capability to manage this hybrid application state will be essential.

This area of enterprise IT is moving rapidly: Logicalis is well versed, and experienced, in these emerging technologies both in terms of solution and service delivery, and in terms of support for these technologies in our own cloud. Contact us to find out more about multi-modal Hybrid IT and how we can help you leverage it.

Category: Hybrid IT

Fanni Vig
April 20, 2017

Finally, it’s out!

With acquisitions like Composite, ParStream, Jasper and AppDynamics, we knew something was bubbling away in the background for Cisco with regards to edge analytics and IoT.

Edge Fog Fabric – EFF

The critical success factor for IoT and analytics deployments is providing the right data, at the right time, to the right people (or machines).

With the exponential growth in the number of connected devices, the marketplace requires solutions that provide data-generating devices, communication, data processing and data-leveraging capabilities simultaneously.

To meet this need, Cisco recently launched a software solution, delivered on its hardware devices, that encompasses all of the above capabilities, and named it Edge Fog Fabric, aka EFF.

What is exciting about EFF?

To implement high-performing IoT solutions that are cost effective and secure, a combination of capabilities needs to be in place.

  • Multi-layered data processing, storage and analytics – given the rate of growth in connected devices and data volumes, bringing data all the way back from devices to a data centre environment can be expensive. Processing the information on the EFF makes this far more cost effective.
  • Micro services – a standardised framework for data processing and communication services that can be programmed in standard languages such as Python and Java.
  • Message routers – effective communication between the various components and layers. Without state-of-the-art message brokering, no IoT system could be secure and scalable in providing real-time information.
  • Data-leveraging capabilities – ad hoc, embedded or advanced analytics to support BI and reporting needs. With the acquisitions of Composite and AppDynamics, EFF will enable an IoT platform to connect to IT systems and applications.
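The economics behind the first point can be shown with a toy example: aggregate readings at the edge and ship only the summaries, rather than backhauling every raw reading. The window size and readings below are invented for illustration.

```python
# Toy edge-vs-backhaul comparison: instead of shipping every sensor
# reading to the data centre, an edge node forwards one summary message
# per window. All figures are invented for illustration.

import statistics

def edge_aggregate(readings, window=60):
    """Collapse each window of raw readings into one summary message."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),
            "mean": statistics.mean(chunk),
            "max": max(chunk),
        })
    return summaries

raw = [20.0 + (i % 7) * 0.1 for i in range(600)]  # 600 raw sensor readings
summaries = edge_aggregate(raw)
print(f"{len(raw)} readings reduced to {len(summaries)} messages")
```

A 60:1 reduction in message volume translates directly into lower bandwidth and storage costs, which is precisely the case for processing on the fabric rather than in the data centre.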

What’s next?

Deploying all of the above is no mean feat. According to Gartner’s view of the IoT landscape, no organisation has yet achieved the panacea of connecting devices to IT systems and vice versa, combined with the appropriate data management and governance capabilities. So there is still a long road ahead.

However, with technology advancements such as the above, I have no doubt that companies and service providers will be able to accelerate progress and deliver further use cases sooner than we might think.

Based on this innovation, two obvious next steps suggest themselves:

  • Further automation – automating communication, data management and analytics services, including connections with IT/ERP systems
  • Machine-made decisions – once all the connections are established and the right information reaches the right destination, machines could react to the information shared with ‘them’ and make automated decisions.

Scott Hodges
April 18, 2017

Attending a recent IBM Watson event, somebody in the crowd asked the speaker, “So, what is Watson?” It’s a good question – and one there isn’t really a straightforward answer to. Is it a brand? A supercomputer? A technology? Something else?

Essentially, it is an IBM technology that combines artificial intelligence with sophisticated analytics in a supercomputer named after IBM’s founder, Thomas J. Watson. Interesting enough, but the real question, to my mind, is this: “What sort of cool stuff can businesses do with the very smart services and APIs provided by IBM Watson?”

IBM provides a variety of services, available through Application Programming Interfaces (APIs), that developers can use to take advantage of the cognitive power of Watson. The biggest challenge in exploiting these capabilities is to “think cognitively”: to imagine how they could benefit your business or industry and give you a competitive edge – or, for not-for-profit organisations, how they could help you make the world a better place.

I’ve taken a look at some of the APIs and services available, to get a sense of what’s possible with Watson. It’s important to think of them collectively rather than individually: while some use-cases may use just one, many will use several working together. We’ll jump into some use-cases later on to spark some thoughts about the possibilities.

Natural Language Understanding

Extract meta-data from content, including concepts, entities, keywords, categories, sentiment, emotion, relations and semantic roles.

Discovery

Identify useful patterns and insights in structured or unstructured data.

Conversation

Add natural language interfaces such as chat bots and virtual agents to your application to automate interactions with end users.

Language Translator

Automate the translation of documents from one language to another.

Natural Language Classifier

Classify text according to its intent.

Personality Insights

Extract personality characteristics from text, based on the writer’s style.

Text to Speech and Speech to Text

Process natural language text to generate synthesised audio, or render spoken words as written text.

Tone Analyser

Use linguistic analysis to detect the emotional (joy, sadness, etc.), linguistic (analytical, confident, etc.) and social (openness, extraversion, etc.) tone of a piece of text.

Trade-off Analytics

Make better choices when analysing multiple, even conflicting goals.

Visual Recognition

Analyse images for scenes, objects, faces, colours and other content.

All this is pretty cool stuff, but how can it be applied in your world? You could use the APIs to “train” a model to be more specific to your industry and business, and to help automate and add intelligence to various tasks.

Aerialtronics, which develops, produces and services commercial unmanned aircraft systems, offers a nice example use-case of visual recognition in particular. The company teams drones, an IoT platform and Watson’s Visual Recognition service to help identify corrosion, serial numbers, loose cables and misaligned antennas on wind turbines, oil rigs and mobile phone towers, automating the process of identifying faults and defects.

Further examples showing how Watson APIs can be combined to drive powerful, innovative services can be found on the IBM Watson website’s starter-kit page.

At this IBM event, a sample service was created live in the workshop. The application streamed a video, converted the speech in the video to text, and then categorised that text, producing an overview of the content being discussed. It used the Speech to Text and Natural Language Classifier services.
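The shape of that demo is a simple pipeline: audio in, transcript out, category out. The sketch below shows only the wiring; both service calls are stand-ins with canned responses, not the real Watson SDK, and the keyword-matching “classifier” is a deliberate simplification of what a trained model would do.

```python
# Sketch of the workshop demo's pipeline: audio -> transcript -> topic.
# Both service functions are stand-ins returning canned data; in the
# real application they would call Watson's Speech to Text and Natural
# Language Classifier services.

def speech_to_text(audio_stream: bytes) -> str:
    # Stand-in for the Speech to Text service.
    return "our quarterly revenue grew and margins improved"

def classify_text(transcript: str) -> str:
    # Stand-in for the Natural Language Classifier: a naive keyword
    # match instead of a trained model.
    finance_terms = {"revenue", "margins", "profit", "forecast"}
    words = set(transcript.lower().split())
    return "finance" if words & finance_terms else "general"

def summarise_video(audio_stream: bytes) -> dict:
    transcript = speech_to_text(audio_stream)
    return {"transcript": transcript, "category": classify_text(transcript)}

print(summarise_video(b"...")["category"])  # -> finance
```

Swapping either stand-in for a real API call leaves the pipeline unchanged, which is the point: the services are designed to compose.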

Taking this example further with a spot of blue sky thinking, for a multi-lingual organisation, we could integrate the translation API, adding the resulting service to video conferencing. This could deliver near real-time multiple dialect video conferencing, complete with automatic transcription in the correct language for each delegate.

Customer and support service chat bots could be built with the Conversation service, with the Tone Analyser gauging how the customer is feeling. Processes such as flight booking could be fulfilled by a virtual agent using the Natural Language Classifier to derive the intent of the conversation. Visual Recognition could be used to identify production line issues, spoiled products in inventory, or product types in retail environments.

Identification of faded colours or specific patterns within scenes or on objects could trigger remedial services. Detection of human faces, their gender and approximate age could help enhance customer analysis. Language translation could support better communication with customers and others in their preferred languages. Trade-off Analytics could help optimise the balancing of multiple objectives in decision making.

This isn’t pipe-dreaming: the toolkit is available today. What extra dimensions and capabilities could you add to your organisation, and the way you operate? How might you refine your approach to difficult tasks, and the ways you interact with customers? Get in contact today to discuss the possibilities.

Alastair Broom
March 10, 2017

As Logicalis’ Chief Security Technology Officer I’m often asked to comment on cyber security issues. Usually the request relates to specific areas such as ransomware or socially engineered attacks. In this article I’m taking a more holistic look at IT security.

Such a holistic approach to security is, generally, sorely lacking. This is a serious matter, with cyber criminals constantly looking for the weak links in organisations’ security, constantly testing the fence to find the easiest place to get through. So, let’s take a look at the state of enterprise IT security in early 2017, using the technology, processes and people model.

Technology

A brief, high-level look at the security market is all it takes to show that there are vast numbers of point products out there – ‘silver bullet’ solutions designed to take out specific threats. There is, however, little in terms of an ecosystem supporting a defence-in-depth architecture. Integration of, and co-operation between, the various disparate components is, although growing, typically weak or non-existent.

We’ve seen customers with more than 60 products deployed, from over 40 vendors, each intended to address a specific security issue. Having such a large number of products itself presents significant security challenges, though. All of these products share a common vulnerability: support and maintenance. Managing them and keeping them updated generates significant workload, and any mistakes or unresolved issues can easily become new weak points in the organisation’s security.

The situation has been exacerbated by the rapidly increasing popularity of Cloud and Open Source software. Both trends make market entry significantly simpler, allowing new players to quickly and easily offer new solutions, targeting whichever threat happens to be making a big noise at the moment.

Just as poor integration between security products is an issue, so is lack of integration between the components on which they are built. Through weak coding or failure to make use of hardware security features – Intel’s hardware-level Software Guards Extensions (SGX) encryption technology is a good example – security holes are left open, waiting to be exploited.

The good news on the technology front is that we are seeing the early stages of the development of protocols, such as STIX, TAXII and CybOX, allowing different vendors’ products to interact and share standardised threat information. The big security vendors have been promoting the idea of threat information sharing and subsequent action for a while, but only within their own product ecosystems. It’s time for a broader playing field!
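To make the idea concrete, here is roughly what a shared threat indicator looks like under STIX 2: a small, structured object that any conforming product can consume. This is a hand-built sketch, not output from real STIX tooling, and the id and file hash are dummy placeholder values.

```python
# A minimal STIX 2-style indicator, built as a plain Python dict.
# The id and SHA-256 hash are dummy placeholders; real objects are
# generated and validated by STIX libraries, and exchanged over TAXII.

import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": "2017-03-10T00:00:00.000Z",
    "modified": "2017-03-10T00:00:00.000Z",
    "name": "Known ransomware dropper",
    "pattern": "[file:hashes.'SHA-256' = '0000000000000000000000000000000000000000000000000000000000000000']",
    "pattern_type": "stix",
    "valid_from": "2017-03-10T00:00:00.000Z",
}

# Serialise for exchange between different vendors' products.
print(json.dumps(indicator, indent=2))
```

Because the format is vendor-neutral JSON, a firewall, an endpoint agent and a SIEM from three different vendors can all act on the same indicator – exactly the cross-ecosystem sharing the protocols are meant to enable.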

Processes

IT security is one of the most important issues facing today’s enterprise, yet, while any self-respecting board will feature directors with responsibility for sales, marketing, operations and finance, few enterprises have a board level CISO.

Similarly few organisations have a comprehensive and thoroughly considered security strategy in place, or proper security processes and policies suitable for today’s threat landscape and ICT usage patterns. A number of industry frameworks exist: ISO 27001, Cyber Essentials, NIST to name but a few; and yet very few organisations adopt these beyond the bare minimum to meet regulatory requirements.

Most organisations spend considerable sums on security technology, but without the right security strategy in place, and user behaviour in line with the right processes and policies, they remain at risk of serious breaches.

People

The hard truth is that some 60% of breaches are down to user error. Recent research, obtained through Freedom of Information requests, found that 62% of breaches reported to the ICO are down to humans basically getting it wrong. People make poor password choices, use insecure public (and private!) WiFi, and use public cloud storage and similar services without taking the necessary security precautions. They do not follow – or indeed even know – corporate data classification and usage policies. The list, of course, goes on.

Training has a part to play here, to increase users’ awareness of the importance of security, as well as the behaviours they need to adopt (and discard) to stay secure. However, there will come a point at which the law of diminishing returns kicks in: we all make mistakes – even the most careful, well trained of us.

We need to explore, discover and devise new ways in which technology can help, by removing the human element, where possible and desirable, and by limiting and swiftly rectifying the damage done when human error occurs. Furthermore, we need to leverage ever improving machine learning and artificial intelligence software to help augment human capability.

Enterprises need to work with specialists that can help them understand the nature of the threats they face, and the weak links in their defences that offer miscreants easy ways in. That means closely examining all aspects of their security from each of the technology, processes and people perspectives, to identify actual and potential weaknesses. Then robust, practical, fit-for-purpose security architectures and policies can be built.

For an outline of how this can work, take a look at Logicalis’ three-step methodology here, or email us at security@uk.logicalis.com to discuss your cyber security needs.

Category: Security

Neil Thurston
February 13, 2017

The explosive growth of Cloud computing in recent years has opened up diverse opportunities for both new and established businesses. However, it has also driven the rise of a multitude of ‘islands of innovation’. With each island needing its own service management, data protection and other specialists, IT departments find themselves wrestling with increased – and increasing – management complexity and cost.

Necessity is the mother of invention, and with cost and complexity becoming increasingly problematic, attitudes to Cloud are changing. Organisations are moving selected tools, resources and services back to on-premises deployment models: we’re seeing the rise of the Hybrid Cloud environment.

The trend towards Hybrid Cloud is driven by an absolute need for operational and service consistency, regardless of the on-premises/Cloud deployment mix – a single set of automation platforms, a single set of operational tools and a single set of policies. We’re looking at a change in ethos, away from multiple islands of innovation, each with its own policies, processes and tools, to a single tool kit – a single way of working – that we can apply to all our workloads and data, regardless of where they actually reside.

Disparate islands in the Cloud have also increasingly put CIOs in the unenviable position of carrying the responsibility for managing and controlling IT but without the capability and authority to do so. Many organisations have experimented (some might say dabbled) with cherry-picked service management frameworks such as ITIL.

With focus shifting to Hybrid Cloud, we’re now seeing greater interest in more pragmatic ITSM frameworks, such as IT4IT, pushing responsibility up the stack and facilitating the move to something more akin to supply chain management than pure hardware, software and IT services management.

There are two key pieces to the Hybrid IT puzzle. On the one hand, there’s the workload: the actual applications and services. On the other, there’s the data. The data is where the value is – the critical component, to be exploited and protected. Workloads, however, can be approached in a more brokered manner.

Properly planned and executed, Hybrid Cloud allows the enterprise to benefit from the best of both the on-premises world and the Cloud world. The ability to select the best environment for each tool, service and resource – a mix which will be different in different industries, and even in different businesses within the same industry – delivers significant benefits in terms of cost, agility, flexibility and scalability.

Key to this is a comprehensive understanding of where you are and where you want to be, before you start putting policies or technology in place. The Logicalis Hybrid IT Workshop can help enormously here, constructing a clear view of both.

In the workshop we assess your top applications and services, where they reside and how they’re used in your business. We then look at where you want to get to. Do you want to own your assets, or not? Do you want to take a CAPEX route or an OPEX route? Do you have an inherent Cloud First strategy? What are your licensing issues?

We then use our own analysis tools, developed from our real world experience with customers, to create visualisations showing where you are today, where you want to eventually be and our recommended plan to bridge the gap, in terms of people, processes, technology and phases.

Hybrid Cloud offers significant benefits, but needs to be carefully planned and executed. To find out more about how Logicalis can help, see our website or call us on +44 (0)1753 77720.

Category: Hybrid IT

Fanni Vig
January 16, 2017

A friend of mine recently introduced me to the idea of the ‘runaway brain’ – a theory first published in 1993 outlining the uniqueness of human evolution. Here we take a look at how artificial intelligence is developing into something comparable to the human brain, and at the potential caveats that concern us as human beings.

The theory considers how humans have created a complex culture by continually challenging their brains, driving the development of ever more complex intellect throughout human evolution. That process continues today and will no doubt continue for years to come. This, theorists claim, is what is driving human intelligence towards its ultimate best.

There are many ways in which we can define why ‘human intelligence’ is considered unique. In essence, it’s characterised by perception, consciousness, self-awareness, and desire.

That conversation left me wondering: with the emergence of artificial intelligence (AI) alongside human intelligence, is it possible for the ‘runaway brain’ to reach a new milestone? After further research, I found some who say it already has.

They label it ‘runaway super intelligence’.

Storage capacity of the human brain

Most neuroscientists estimate the human brain’s storage capacity to range between 10 and 100 terabytes, with some estimates closer to 2.5 petabytes. In fact, new research suggests the human brain could hold as much information as the entire internet.

As surprising as that sounds, it’s not necessarily impossible. It has long been said that the human brain can be like a sponge, absorbing as much information as we throw at it. Of course we forget a large amount of that information, but consider those with photographic memories, those who practise a combination of innate skill, learned tactics and mnemonic strategies, or those with an extraordinary knowledge base.

Why can machines still perform better?

Ponder this – if human brains have the capacity to store significant amounts of data, why do machines continue to outperform human decision making?

The human brain has a huge range – data analysis and pattern recognition alongside the ability to learn and retain information. A human needs only a glance to recognise a car they’ve seen before, whereas an AI may need to process hundreds or even thousands of samples before it can reach the same conclusion. Call it human premeditative assumption, if you will: we skip the finer details of an exact match to save time. Conversely, while AI functions may be more complex and varied, the human brain is unable to process the same volume of data as a computer.

It’s this efficiency of data processing that leads prominent researchers to believe that AI will indeed dominate our lives in the coming decades, eventually bringing about what we call the ‘technological singularity’.

Technology singularity

Technological singularity is the hypothesis that the invention of artificial super intelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilisation.

According to this hypothesis, an upgradable intelligent agent, such as software-based artificial general intelligence, could enter a ‘runaway reaction’ cycle of self-learning and self-improvement, with each new and increasingly intelligent generation appearing more rapidly, causing an intelligence explosion resulting in a powerful super intelligence that would, qualitatively, far surpass human intelligence.
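
The ‘runaway reaction’ described above is, at heart, a compounding process. As a purely illustrative sketch (the 50% improvement per generation is an invented number, not a prediction), it can be modelled in a few lines of Python:

```python
# Toy model of the 'runaway reaction': each generation of an agent improves
# its own ability to improve, so capability growth accelerates rather than
# rising linearly. The growth rate is arbitrary, for illustration only.

def runaway_growth(generations, capability=1.0):
    """Return capability per generation when improvement compounds on itself."""
    history = [capability]
    for _ in range(generations):
        # The more capable the agent, the bigger its next self-improvement step.
        capability += 0.5 * capability
        history.append(capability)
    return history

trajectory = runaway_growth(10)
# After 10 generations, capability has grown ~57-fold (1.5^10), not 10-fold.
print(trajectory[-1] / trajectory[0])
```

The point of the toy model is simply that when improvement feeds back into the rate of improvement, growth is exponential, which is what the ‘intelligence explosion’ argument rests on.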

Ubiquitous AI

When it comes to our day-to-day lives, algorithms often save time and effort. Take online search tools, Internet shopping and smartphone apps using beacon technology to provide recommendations based upon our whereabouts.

Today, AI uses machine learning. Provide AI with an outcome-based scenario and, to put it simply, it will remember and learn. The computer is taught what to learn, how to learn, and how to make its own decisions.
What’s more fascinating is how new AIs are modelling the human mind, using techniques similar to our own learning processes.
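
As a toy illustration of that learn-from-outcomes loop (a minimal perceptron, not any specific AI product), the program below is shown labelled examples of the logical AND function and adjusts its own weights until its decisions match the desired outcomes:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labelled ((inputs), target) pairs."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            prediction = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - prediction  # learn from the outcome
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    # Return a decision function built from the learned weights.
    return lambda x0, x1: 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0

# Labelled outcomes for logical AND: only (1, 1) should yield 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(and_samples)
print([model(a, b) for (a, b), _ in and_samples])  # → [0, 0, 0, 1]
```

Nothing in the code tells the machine the rule for AND; it is taught how to learn, and derives its own decision boundary from the examples.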

Do we need to be worried about runaway artificial general intelligence?

Consider the cautiously wise words of Stephen Hawking, who said “success in creating AI would be the biggest event in human history”, before adding, “unfortunately, it might also be the last, unless we learn how to avoid the risks”.

Whether we should be worried depends on too many variables for a definitive answer. However, it is difficult to deny that AI will play a growing part in our lives and businesses.

Rest assured: 4 things that will always remain human

It’s inevitable that one might raise the question: is there anything that humans will always be better at?

  1. Unstructured problem solving. Solving problems in which the rules do not currently exist; such as creating a new web application.
  2. Acquiring and processing new information. Deciding what is relevant; like a reporter writing a story.
  3. Non-routine physical work. Performing complex tasks in a 3-dimensional space requires a combination of skill #1 and skill #2, and is proving very difficult for computers to master. As a consequence, scientists like Frank Levy and Richard J. Murnane say that we need to focus on preparing children with an “increased emphasis on conceptual understanding and problem-solving“.
  4. And last but not least – being human. Expressing empathy, making people feel good, taking care of others, being artistic and creative for the sake of creativity, expressing emotions and vulnerability in a relatable way, and making people laugh.

Are you safe?

We all know that computers, machines and robots will have an impact (positive and/or negative) on our lives in one way or another. The rather ominous elephant in the room is whether or not your job can be done by a robot.

I am sure you will be glad to know there is an algorithm for that…

In a recent article, the BBC reported a prediction that 35% of current jobs in the UK are at ‘high risk’ of computerisation in the coming 20 years (according to a study by Oxford University and Deloitte).

It remains the case that jobs relying on empathy, creativity and social intelligence are considerably less at risk of being computerised. By comparison, roles including retail assistants (37th), chartered accountants (21st) and legal secretaries (3rd) all rank among the top 50 jobs at risk.

Maybe it’s not too late to pick up that night course on ‘Computer Science’…

Jorge Aguilera Poyato
December 15, 2016

Last week I read that you can now hijack nearly any drone mid-flight just by using a tiny gadget.

The gadget goes by the name of Icarus, and it can hijack a variety of popular drones mid-flight, allowing attackers to lock the owner out and take complete control of the device.

Besides drones, the gadget is capable of fully hijacking a wide variety of radio-controlled devices, including helicopters, cars, boats and other remote-controlled gear that runs over the most popular wireless transmission control protocol, DSMx.

Although this is not the first device we have seen that can hijack drones, it is the first to hand over full control. Icarus works by exploiting the DSMx protocol, granting attackers complete control over target drones: they can steer, accelerate, brake and even crash them.

The attack relies on the fact that DSMx protocol does not encrypt the ‘secret’ key that pairs a controller and the controlled device. So, it is possible for an attacker to steal this secret key by launching several brute-force attacks.
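
To illustrate the principle (the 16-bit key size and the pairing check below are simplifying assumptions for the demo, not the actual DSMx key length or handshake), a secret that is short, unencrypted and testable without limit falls to a simple exhaustive search:

```python
# Why an unprotected pairing 'secret' is weak: if candidate keys can be
# tested freely and nothing rate-limits attempts, an attacker can simply
# try every value in the key space until the pairing check succeeds.

def brute_force(check_key, key_bits=16):
    """Try every possible key; return the first that passes the check."""
    for candidate in range(2 ** key_bits):
        if check_key(candidate):
            return candidate
    return None  # key space exhausted without a match

secret = 0x2F3A  # the device's pairing key (unknown to the attacker)
found = brute_force(lambda k: k == secret)
print(hex(found))  # a full 16-bit space is only ~65,000 attempts
```

Encrypting the pairing exchange, or limiting pairing attempts, is what removes this trivial search from the attacker’s toolkit, which is why the fix has to come from the manufacturers.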

You can also watch the demonstration video to learn more about Icarus box.

There is no mitigation for this issue at the moment, other than to wait for the affected manufacturers to release patches and update their hardware to embrace encryption mechanisms that secure the communication between controller and device.

Having seen this video and the potential impact of this hijacking technique, my first thought was about the threat to Amazon’s forthcoming service, which will allow drones to safely deliver packages to people’s homes in under 30 minutes.

This is just another example of how important it is to define the right strategy around encryption as part of security in the digital era. Business data, and the way we want to access that data from any device, anywhere and at any time, only highlights the need for enhanced and clever security solutions.

There are different ways in which Logicalis can help our customers protect data located in data centres and on end points, with the help of an ecosystem of partners like Cisco and Intel Security.

An interesting offering to mention is the Logicalis Endpoint Encryption Managed Service. This service gives our customers’ devices, and the data within them, a level of protection that offers peace of mind should a device be lost or stolen – and we at Logicalis manage the service for them. This market-leading data protection service provides the highest levels of confidentiality, integrity and availability, and is part of the global strategy adopted by Logicalis Group across EMEA.

Category: Automation, Security

Jorge Aguilera Poyato
December 13, 2016

Morpheus, in one of the most iconic scenes of the Matrix trilogy, said: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit-hole goes.”

Let me ask you something: what if decisions like the one Morpheus offers were taken with additional information that could be used to better evaluate the two options? Would that have influenced Neo to change his mind?

According to the Harvard Business Review (www.hbr.org), many business managers still rely on instinct to make important decisions, often leading to poor results. However, when managers incorporate logic into their decision-making processes, the result is better choices and better results for the business.

In today’s digital world, it’s difficult to ensure the integrity of mission critical networks without a detailed analysis of user engagement and an understanding of the user experience.

HBR outlines three ways to introduce evidence-based management principles into an organization. They are:

  • Demand evidence: Data should support any potential claim.
  • Examine logic: Ensure there is consistency in the logic, be on the lookout for faulty cause-and-effect reasoning.
  • Encourage experimentation: Invite managers to conduct small experiments to test the viability of proposed strategies and use the resulting data to guide decisions.
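
As a minimal, hypothetical sketch of the ‘demand evidence’ and ‘encourage experimentation’ habits above (the session counts and success figures are invented for illustration), a two-proportion z-test is one simple way to check whether a small experiment’s result is more than chance:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two observed success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 1,000 sessions on each of two configurations,
# with 120 vs 150 successful outcomes.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))  # |z| > 1.96 → unlikely to be chance at the 95% level
```

The exact statistical test matters less than the discipline: the proposed change is only adopted when the data, not instinct, supports the claim.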

So, the big question is, would it be possible to introduce these three elements into the tasks assigned to the network manager?

The answer is ‘yes’, provided the manager is given the opportunity to integrate network data that carries the context of users, devices, locations and applications in use, and then to mine this captured data to gain insights into how and why systems and users perform the way they do.

Fortunately, the limitations of traditional networks can be overcome with new network platforms that provide in-depth visibility into application use across the corporate network, helping organisations to deliver significant, cost-effective improvements to their critical technology assets. These platforms achieve this by:

  • improving the experience of connected users
  • enhancing their understanding of user engagement
  • optimizing application performance
  • improving security by protecting against malicious or unapproved system use.
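
As a hedged illustration of the kind of insight such visibility enables (the flow-record format here is an assumption for the sketch, not any specific product’s API), per-application usage can be derived by aggregating context-tagged flow records:

```python
from collections import defaultdict

# Hypothetical flow records, each tagged with user, application and bytes
# transferred - the 'context' the network platform captures.
flows = [
    {"user": "alice", "app": "crm",   "bytes": 5_200},
    {"user": "bob",   "app": "video", "bytes": 48_000},
    {"user": "alice", "app": "video", "bytes": 12_500},
    {"user": "carol", "app": "crm",   "bytes": 3_100},
]

# Aggregate bytes per application to see what actually runs on the network.
usage = defaultdict(int)
for flow in flows:
    usage[flow["app"]] += flow["bytes"]

# Rank applications by bandwidth consumed.
for app, total in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(app, total)
```

The same aggregation keyed on user or location supports the other bullets above: understanding engagement, spotting unapproved applications, and deciding where optimisation effort will pay off.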

According to IDC, “With the explosion of enterprise mobility, social and collaborative applications, network infrastructure is now viewed as an enterprise IT asset and a key contributor to optimizing business applications and strategic intelligence.”

For companies facing the challenge of obtaining deep network insights in order to improve application performances and leverage business analytics, Logicalis is the answer.

Logicalis is helping its clients with the delivery of digital-ready infrastructure as the main pillar for enhancing the user experience and business operations, and for taking secure analytics to the next level of protection for business information.
