Digitally Speaking

Justin Price
November 8, 2017

Year by year we are generating ever-larger volumes of data, which require more complex and powerful tools to analyse if we are to produce meaningful insights.

What is machine learning?

Anticipating the need for more efficient ways of spotting patterns in large datasets en masse, machine learning was developed to give computers the ability to learn without being explicitly programmed.

Today, it largely remains a human-supervised process, at least during development. This consists of monitoring a computer's progress as it works through a number of "observations" in a data set arranged to help train it to spot patterns between attributes as quickly and efficiently as possible. Once the computer has started to build a model to represent the patterns it has identified, it goes through a looping process, seeking to develop a better model with each iteration.
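To make that looping process concrete, here is a minimal, purely illustrative sketch of iterative model training: a toy linear model whose parameters are nudged a little closer to the data on every pass. The observations and learning rate are invented for the example; real projects would reach for a library such as scikit-learn.

```python
# Minimal sketch of iterative model training: each pass over the
# observations nudges the model's parameters to reduce its error.
observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (attribute, target)

weight, bias = 0.0, 0.0          # the "model" starts knowing nothing
learning_rate = 0.01

for iteration in range(1000):    # the looping process described above
    grad_w = grad_b = 0.0
    for x, y in observations:
        error = (weight * x + bias) - y
        grad_w += error * x      # how the error changes with the weight
        grad_b += error
    weight -= learning_rate * grad_w / len(observations)
    bias -= learning_rate * grad_b / len(observations)

print(f"learned model: y is roughly {weight:.2f} * x + {bias:.2f}")
```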

How is it useful?

The aim is to allow computers to learn for themselves, anticipating fluctuations between variables, which then helps us forecast what may happen in future. With a computer model trained on a specific data problem or relationship, data professionals can produce reliable decisions and results, leading to the discovery of new insights which would have remained hidden without this analytical technique.

Real-world Examples

Think this sounds like rocket science? Every time you’ve bought something from an online shop and had recommendations based on your purchase – that’s machine learning. Over thousands of purchases the website has been able to aggregate the data, spot correlations in real users’ buying patterns, and then present the most relevant suggestions back to you based on what you viewed or bought. You may see these as “recommended for you” or “frequently bought together”. Amazon and eBay have been doing this for years, and more recently, Netflix.
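A toy sketch of the "frequently bought together" idea follows: count how often pairs of items appear in the same basket, then recommend the most common partners of whatever the shopper just bought. The baskets are invented; real recommenders use far richer signals.

```python
# Illustrative "frequently bought together" via pair co-occurrence counts.
from collections import Counter
from itertools import combinations

baskets = [
    {"kettle", "mugs", "tea"},
    {"kettle", "tea"},
    {"mugs", "biscuits", "tea"},
    {"kettle", "mugs"},
]

pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=2):
    partners = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            partners[b] += n
        elif item == b:
            partners[a] += n
    return [name for name, _ in partners.most_common(top_n)]

print(recommend("kettle"))  # e.g. ['mugs', 'tea']
```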

This sounds fantastic – but where can this help us going forward?

Deep learning

Deep learning is distinguished from other data science practices by its use of deep neural networks: data passes through networks of nodes, in a structure which loosely mimics the human brain. Structures like this are able to adapt to the data they are processing in order to execute in the most efficient manner.
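As a minimal sketch of what "layers of nodes" means in practice, the snippet below pushes one observation through two layers of an untrained network. The weights are random and the sizes arbitrary; training would adjust the weights, and real deep learning uses many more layers.

```python
# A minimal feed-forward pass: data flows through layers of "nodes"
# (here just matrix multiplications plus a non-linearity).
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    weights = rng.normal(size=(inputs.shape[-1], n_out))  # random, untrained
    return np.maximum(0.0, inputs @ weights)              # ReLU activation

x = rng.normal(size=(1, 4))        # one observation with 4 attributes
hidden = layer(x, 8)               # first layer of nodes
output = layer(hidden, 2)          # second layer produces 2 scores
print(output)
```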

Using these leading techniques, some applications now look ready to have profound impacts on how we live and interact with each other. We are currently looking at the imminent launch of commercially available real-time language translation, which requires a speed of analysis and processing never available before. Similar innovations have evolved in handwriting-to-text conversion with “smartpads” such as the Bamboo Spark, which bridge the gap between technology and traditional note taking.

Other applications mimic the human components of understanding: classify, recognise, detect and describe (according to SAS.com). This has now entered mainstream use in anti-spam measures on website contact forms, where the software checks which squares contain images of cars or street signs.

Huge leaps are being made in the healthcare industry in particular: at Szechwan People’s Hospital in China, systems have been “taught” to spot the early signs of lung cancer in CT scans. This meets a great need, as there is a shortage of trained radiologists to examine patients.

In summary, there have been huge leaps in data analysis and data science in the last couple of years. The future looks bright as we apply ever more sophisticated techniques to a wider range of real-world issues and tackle previously impossible challenges. Get in touch and let’s see what we can do for you.

Category: Analytics, Automation

Dean Mitchell
October 24, 2017

Originally posted on Information Age, 18 October 2017.

Overspending on resources?

We can all agree, it’s nothing new. In fact, it’s an issue faced by business leaders almost every day. In our increasingly digital world, overspending on technical resources, alongside the human resources (or skills) to back them up, is common.

If you view over-provisioning as a necessary evil, you’re not alone. A recent independent study discovered that 90% of CIOs feel the same way, with the majority only using about half of the cloud capacity that they’ve paid for.

But, why pay for resources that you’re not going to use?

Well, it’s no secret that over-provisioning on IT resources is better than the alternative. Understandably, you’d rather pay over the odds for ‘too many’ functional digital systems than risk the outages associated with ‘too few’. A 2015 study by Populus found that almost a third of all outages on critical systems are still capacity related, proving that over-provisioning is not the only problem here.

It can seem as if organisations are stuck between a rock and a hard place: do you spend thousands and thousands of pounds from your (already) tight budget and over provision, or do you make an upfront saving and risk becoming one of the 29% of companies experiencing business disruption, downtime or worse when the demand on your services exceeds the resources you have in place? How do you optimise costs without risking future, potentially devastating, strain on your resources?

Enter IT Capacity Management…

In a nutshell, IT Capacity Management gives you a snapshot view of all your business resources against the demands placed upon them. This enables you to ‘right-size’ your resources and ensure that you can meet current requirements without over provisioning and over spending.

The level of demand placed upon business resources is constantly fluctuating. That’s why Capacity Management models should run alongside your current operations as part of your ongoing business strategy. It’s one way to be proactive when it comes to resourcing.

However, it doesn’t stop there… Capacity Management also enables you to prepare your business for the future. It continually measures the performance and levels of use of your resources in order to make predictions, which will enable you to prepare for any future changes in terms of demand.
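A minimal sketch of both ideas, the utilisation snapshot and the forward-looking prediction, is shown below. The capacity and demand figures are invented; a real Capacity Management tool would draw on live monitoring data and far more sophisticated forecasting.

```python
# Snapshot of utilisation against capacity, plus a naive linear trend
# to estimate how long the current headroom will last.
resources = {
    # name: (capacity, recent monthly peak demand readings)
    "vm-cluster-a": (100, [38, 40, 41, 43]),
    "storage-pool": (500, [430, 450, 470, 490]),
}

for name, (capacity, demand) in resources.items():
    utilisation = demand[-1] / capacity
    monthly_growth = (demand[-1] - demand[0]) / (len(demand) - 1)
    months_left = (capacity - demand[-1]) / monthly_growth if monthly_growth > 0 else float("inf")
    print(f"{name}: {utilisation:.0%} used, "
          f"~{months_left:.0f} months of headroom at current growth")
```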

What can Capacity Management do for your business?

There are a number of benefits to having IT Capacity Management included in your company strategy. It gives you visibility of your entire IT infrastructure, including all physical, virtual and cloud environments. The importance of this should not be underestimated; it can enable you to:

● Optimise costs. It’s simple: if you have a clear view of all your resources, you can see where they’re not required, which means you won’t feel the need to purchase them “just in case”. Capacity Management can be seen as a long-term investment, especially given its ability to predict future trends based on current performance.
● Easily adjust IT resources to meet service demands. With the ability to see exactly which of your services are under the highest demand, you’ll be able to adjust your business plan to relieve some of that pressure, evening out the playing field by ensuring that one service area isn’t being drained whilst others sit idle. You’ll be able to add, remove or adjust compute, storage, network and other IT resources as and when they are needed.
● Deploy applications on time. You’ll be able to reserve IT resources for new applications when needed, resulting in a faster time to deployment.
● Reduce the time and human resources spent on planning. Imagine the hours your employees spend planning and calculating capacity usage and availability. By implementing a real, ongoing plan which runs in the background, you free up more time for them to pursue higher-value tasks.

Capacity Management solves the age-old problem of optimising costs for today’s CIOs. While this has always been a priority for organisations, our new digital landscape has redefined its meaning and its importance. Working habits and IT business structures have evolved to include mobile working, shadow IT, unimaginable amounts of data and complex technological advancements that need a certain skillset to deploy. Therefore, it is impossible to view everything simultaneously and manage all resources accordingly, unless you deploy the correct tools and have the right strategy in place.

Capacity Management should be a key element of any business strategy. It’s a model built for your business’ resourcing needs, both today and in the future.

If you’d like to find out more about the Capacity Management and Cost Optimisation services that Logicalis provides, then contact us today.

Sara Sherwani
September 27, 2017

Throughout history, I don’t believe we’ve ever seen as much change as we see today in the world of technology! Just think: in the last 10 years we’ve had more iPhone releases than Henry VIII had wives.

Taking a page out of some of the tech giants’ books, from Apple to Salesforce, it’s clear that innovation is at the centre of what enables the industry to move at the pace it does. It would be fair to say that three major trends currently dominate the industry:

1. Service, service, service – Many big players in the hardware product space recognise that hardware is fast becoming a vanilla commodity. Vendors such as Cisco, Oracle, Ericsson, Nokia and HP have been scrambling over a number of years to enable value-added services on top of the hardware to increase margins.

 “Services are enabled by the specific knowledge, skills and experience you bring to the table which often drives business value through improved margins.”

Sometimes when I think about how you can build your brand of service that you deliver to customers, I like to compare it to food (one of my favourite subjects).

What keeps you going back to your favourite restaurant? Let’s take McDonald’s, for instance. It could be the quality of the food, but ultimately you KNOW you will get a fast, efficient service and a smile when they ask ‘would you like fries with that?’. The point being, it’s the trusted customer experience that underpins successful services – remember this bit, I’m going to come back to it later on.

2. Business process design driven by cost reduction, optimisation and automation – Ultimately, we use technology to make our lives simpler. Traditional IT has become deeply entrenched in complexity, and with that has come high cost. Businesses of all sizes are looking at their balance sheets with scrutiny and seeking to use the benefits of IT innovation to gain a competitive advantage. The principles of globalisation, business process optimisation and automation are all relevant now as we transform traditional IT to achieve the ultimate goal of simplicity.

3. Data-driven customer experience as an investment for the future – Products in the world of data analytics are booming as businesses recognise the power of data in enabling intelligent business decisions. One proven example of boosting business value is how telcos use customer location data to pinpoint relevant marketing text messages.

Imagine you’re at the airport, where intelligent systems pick up your location and send you a text asking whether you want to purchase an international data plan while you’re away. Instead of sending you random marketing messages, geo-location marketing becomes targeted and relevant. Through this intelligent marketing, telcos have been able to generate 40% more revenue than expected in that portfolio.
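Purely as an illustration of the location-triggered rule described above, here is a tiny sketch. The zone codes, field names and thresholds are all invented; real telco systems sit on network events, consent management and much richer customer profiles.

```python
# Invented example of a location-triggered marketing rule.
AIRPORT_ZONES = {"LHR", "LGW", "MAN"}

def offer_for(subscriber):
    if subscriber["zone"] in AIRPORT_ZONES and not subscriber["has_roaming_plan"]:
        return f"Hi {subscriber['name']}, add an international data plan before you fly?"
    return None  # no relevant, targeted offer to send

print(offer_for({"name": "Sam", "zone": "LHR", "has_roaming_plan": False}))
```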

Keeping up with the pace of change within the industry can be overwhelming, unless you harness the key themes mentioned above and relate them to business value. Contact Logicalis today to learn how you can implement an agile business model and use its benefits to increase your business value.

Andrew Newton
September 8, 2017

Shadow IT is not a new concept, but certainly is a big issue for many organisations today. Companies of all sizes see a significant increase in the use of devices and/or services not within the organisation’s approved IT infrastructure.

A Global CIO Survey found that IT leaders are under growing pressure from Shadow IT and are gradually losing the battle to retain the balance of power in IT decision-making. The threat from Shadow IT is forcing CIOs to re-align their IT strategy to better serve the needs of their line-of-business colleagues, and to transform IT into the first choice for all IT service provision. However, Shadow IT continues to apply pressure to the many CIOs and IT leaders who do not have clear visibility of its use within their organisations and therefore cannot quantify the risks or opportunities.

So is Shadow IT a threat to your organisation or does it improve productivity and drive innovation?

According to Gartner, Shadow IT will account for a third of the cyber-attacks experienced by enterprises by 2020. However, some customers have told us:

  • “Shadow IT is an opportunity for us to test devices or services before we go to IT for approval.”
  • “Shadow IT allows us to be agile and use services that IT don’t provide, so we can work more effectively.”

One of the most important aspects of Shadow IT is, of course, cost. What are the hidden costs to the business of a security breach, potential loss of data and, for those with regulatory compliance requirements, the possibility of large fines and loss of reputation in their respective markets?

With an ever-changing and expanding IT landscape, and new regulations such as the General Data Protection Regulation (GDPR) coming into effect in May 2018, managing and controlling data whilst ensuring complete data security should be top of the priority list. Understanding the key challenges of Shadow IT is therefore fundamental to managing it effectively.

Shadow IT – The Key Challenges:

    • Identifying the use of Shadow IT
      Arguably the biggest challenge with Shadow IT is visibility within the organisation. How can IT leaders see who is consuming what, and for what purpose? If you can’t see it, or aren’t aware of it, how can you manage it?
    • Costs of Shadow IT
      Controlling Shadow IT spend is impossible if there is no visibility of what is being used. It’s not just direct Shadow IT purchases that present a challenge, but also the consequences of a security breach caused by Shadow IT: fines, reputational damage and future loss of business.
    • Securing the threat to your business
      One of the biggest areas of concern, quite rightly, is the security threat to the business from the use of non-approved IT sources. Not only does this have the potential to add to the organisation’s costs, it could also result in the loss of data, again with the risk of considerable fines.
    • Managing Shadow IT without stifling innovation
      The wrong approach to managing Shadow IT, such as “total lock-down” messaging, can signal to the organisation that IT is controlling, inflexible and unwilling to listen, with the possible result of driving Shadow IT underground and, in some cases, actually increasing its use, thus increasing risks and costs.

Shadow IT is a complicated issue, but your response to it doesn’t have to be. Contact us to find out how we can help you manage Shadow IT, be forward thinking and fill the gaps within the current IT infrastructure.

Anis Makeriya
August 21, 2017

It’s always the same scenario: someone gives me some data files that I just want to dive straight into and start exploring ways to depict visually, but I can’t.

I’d fire up a reporting tool only to step right back, realising that before data can get into visual shape, it needs to be in shape first! One pattern has appeared consistently over the years: the less time spent on ETL/ELT (Extract, Transform and Load, in varying sequences), the sooner you find yourself bounced out of the reporting layer and back into data prep.

Data preparation for the win

‘80% of time goes into data prep’ and ‘garbage in, garbage out (GIGO)’ are sayings that have been around for some time, but they don’t really hit you until you face them in practice and they suddenly translate into backward progress. Data quality issues range from inconsistent date formats and multiple spellings of the same value to values not existing at all, in the form of nulls. So, how can they all be dealt with? A data prep layer is the answer.
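A minimal data-prep sketch for exactly those three issues (mixed date formats, multiple spellings of one value, and nulls) is shown below, assuming pandas is available; the column names, mappings and fill rule are invented for illustration.

```python
# Toy data-prep layer: standardise dates, unify spellings, handle nulls.
import pandas as pd

raw = pd.DataFrame({
    "order_date": ["2017-08-01", "01/08/2017", None],
    "country":    ["UK", "United Kingdom", "U.K."],
    "amount":     [120.0, None, 80.0],
})

clean = raw.copy()
# parse each date individually so mixed formats and nulls are tolerated
clean["order_date"] = clean["order_date"].apply(
    lambda s: pd.to_datetime(s, dayfirst=True, errors="coerce"))
# collapse multiple spellings of the same value into one
clean["country"] = clean["country"].replace({"United Kingdom": "UK", "U.K.": "UK"})
# deal with nulls (here: fill with the median; the right rule is case-specific)
clean["amount"] = clean["amount"].fillna(clean["amount"].median())
print(clean)
```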

Often, with complex transformations or large datasets, analysts find themselves turning to IT to perform the ETL process. Thankfully, over the years, vendors have recognised the need to include commonly used transformations in the reporting tools themselves. To name a few, tools such as Tableau and Power BI have successfully passed this power on to analysts, making time-to-analysis a flash. Features such as pivoting, editing aliases, and joining and unioning tables are available within a few clicks.

There may also be times when multiple data sources need joining, such as matching company names. Whilst Excel and SQL fuzzy look-ups have existed for some time, dedicated ETL tools such as Paxata have embedded further intelligence, enabling them to go a step further and recognise that the solution lies beyond simply having similar spellings between the names.
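To show the basic idea of fuzzy matching, here is a sketch using only the Python standard library. The company names are invented, and this only measures spelling similarity; dedicated tools go further (legal suffixes, abbreviations, context).

```python
# Fuzzy-matching company names across two invented source systems.
import difflib

crm_names = ["Acme Ltd", "Globex Corporation", "Initech"]
erp_names = ["ACME Limited", "Globex Corp.", "Initech PLC"]

for name in crm_names:
    match = difflib.get_close_matches(name.lower(),
                                      [n.lower() for n in erp_names],
                                      n=1, cutoff=0.6)
    print(name, "->", match[0] if match else "no confident match")
```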

All the tasks mentioned above sit under the ‘T’ (Transform) of ETL, which is only the second or third step in the ETL/ELT process! If data can’t be extracted, as per the ‘E’, in the first place, there is nothing to transform. When information lies in disparate silos, it often cannot be ‘merged’ unless the data is migrated or replicated across stores. Following the data explosion of the past decade, Cisco Data Virtualisation has gained traction for its core capability of creating a ‘merged virtual’ layer over multiple data sources, enabling quick time to access as well as the added benefits of data quality monitoring and a single version of the truth.

These capabilities are now even more useful with the rise of data services such as Bloomberg and forex feeds, and APIs that can return weather information. And if we want to know how people feel about the weather, the Twitter API works too.

Is that it..?

Finally, after the extraction and transformation of the data, the load process is all that remains… but even that comes with its own challenges: load frequencies, load types (incremental vs. full loads, depending on data volumes), change data capture (changing dimensions) to give an accurate picture of events, and storage and query speeds from the source, to name a few.
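A small sketch of the incremental-versus-full distinction is below: only rows changed since the last high-water mark are pushed to the target. The rows, column names and watermark are invented for illustration.

```python
# Incremental ("delta") load using a high-water mark on a timestamp column.
from datetime import datetime

source_rows = [
    {"id": 1, "updated_at": datetime(2017, 8, 1), "value": 10},
    {"id": 2, "updated_at": datetime(2017, 8, 5), "value": 20},
    {"id": 3, "updated_at": datetime(2017, 8, 9), "value": 30},
]

last_loaded = datetime(2017, 8, 3)   # high-water mark from the previous run

delta = [r for r in source_rows if r["updated_at"] > last_loaded]
print(f"incremental load: {len(delta)} of {len(source_rows)} rows")

# advance the watermark for the next run (a full load would take everything)
new_watermark = max((r["updated_at"] for r in delta), default=last_loaded)
```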

Whilst a capable analyst with best-practice knowledge will suffice for quick analysis, scalable, complex solutions need the right team from both the IT and non-IT sides, in addition to the tools and hardware to support them going forward. Contact us today to help you build a solid Data Virtualisation process customised to your particular needs.

Jorge Aguilera Poyato
August 9, 2017

It’s common knowledge that there is a global shortage of experienced IT security professionals, right across the spectrum of skills and specialities, and that this shortage is exacerbated by an ongoing lack of cyber security specialists emerging from education systems.

Governments are taking action to address this skills shortage, but it is nevertheless holding back advancement and exposing IT systems and Internet businesses to potential attacks.

Because of this, and despite the fear that other industries may have of Artificial Intelligence (AI), the information security industry should be embracing it and making the most of it. As the connectivity requirements of different environments become ever more sophisticated, the number of security information data sources is increasing rapidly, even as potential threats increase in number and complexity. Automation and AI offer powerful new ways of managing security in this brave new world.

At the moment, the focus in AI is on searching and correlating large amounts of information to identify potential threats based on data patterns or user behaviour analytics. These first-generation AI-driven security solutions only go so far, though: security engineers are still needed to validate the identification of threats and to activate remediation processes.
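As a deliberately simple stand-in for that kind of behavioural analytics, the sketch below flags users whose activity today deviates sharply from their own historical baseline. The users, counts and z-score threshold are invented; real tools use far richer models and many more signals.

```python
# Toy user-behaviour analytics: flag large deviations from a per-user baseline.
from statistics import mean, stdev

history = {"alice": [5, 7, 6, 8, 5], "bob": [2, 3, 2, 2, 3]}  # daily file downloads
today = {"alice": 9, "bob": 48}                               # bob suddenly spikes

for user, baseline in history.items():
    mu, sigma = mean(baseline), stdev(baseline)
    z = (today[user] - mu) / sigma if sigma else float("inf")
    if z > 3:
        print(f"ALERT: {user} is {z:.1f} standard deviations above baseline")
```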

As these first generation solutions become more efficient and effective in detecting threats, they will become the first step towards moving security architectures into genuine auto-remediation.

To explore this, consider a firewall – it allows you to define access lists based on applications, ports or IP addresses. Working as part of a comprehensive security architecture, new AI-driven platforms will use similar access lists, based on a variety of complex and dynamic information sources. Such lists will underpin your auto-remediation policy, which will integrate with other platforms to maintain consistency in the security posture you have defined.
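Purely as an illustration of a dynamically built access list feeding auto-remediation, here is a sketch. The feed contents, allow list and the push function are all invented; a real deployment would integrate with a specific firewall or controller API.

```python
# Build a dynamic block list from several threat sources and hand it
# to whatever enforces policy. Entirely illustrative.
threat_feeds = {
    "reputation_service": {"203.0.113.7", "198.51.100.23"},
    "internal_analytics": {"203.0.113.7", "192.0.2.99"},
}

ALLOW_LIST = {"192.0.2.99"}        # known-good addresses never blocked

block_list = set().union(*threat_feeds.values()) - ALLOW_LIST

def push_to_enforcement(addresses):
    # stand-in for the real integration with firewalls or SDN controllers
    for ip in sorted(addresses):
        print(f"deny ip {ip} any")   # access-list style output

push_to_enforcement(block_list)
```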

As we move into this new era in security systems, in which everything comes down to gathering information that can be processed, with security in mind, by AI systems, we will see changes as services adapt to the new capabilities. Such changes will be seen first in Security Operations Centres (SOCs).

Today’s SOCs still rely heavily on security analysts reviewing reports to provide the level of service expected by customers. They will be one of the first environments to adopt AI systems, as they seek to add value to their services and operate as a seamless extension to digital businesses of all kinds.

SOCs are just one example. The security industry as a whole will get the most out of AI, but it needs to start recognising what machines do best and what people do best. Using this technology will enable the creation of new tools and processes in the cybersecurity space that can protect new devices and networks from threats even before a human can classify them.

Artificial intelligence techniques such as unsupervised learning and continuous retraining can keep us ahead of the cyber criminals. However, we need to be aware that hackers will also be using these techniques, so this is where the creativity of the good guys should focus: thinking about what is coming next, while letting the machines do their job of learning and continuous protection.

Don’t miss out: to find out more, contact us – we’ll be delighted to help you with emerging technology and use it to your benefit.

Category: Security

Scott Reynolds
July 25, 2017

The amount of data that businesses generate and manage continues to explode. IBM estimates that across the world, 2.3 trillion gigabytes of data are created each day and this will rise to 43 trillion gigabytes by 2020.

From transactions and customer records to email, social media and internal record keeping – today’s businesses create data at rates faster than ever before. And there’s no question that storing and accessing this data presents lots of challenges for business. How to keep up with fast growing storage needs, without fast growing budgets? How to increase storage capacity without increasing complexity? How to access critical data without impacting on the speed of business?

It’s increasingly obvious that traditional storage can’t overcome these challenges. By simply adding more capacity, costs go up for both storage and management. And manually working with data across different systems can become an administrative nightmare – adding complexity, and taking up valuable IT resource.

So, what can you do? It’s likely that you’ve already got an existing infrastructure, and for many, scrapping it and starting again just isn’t an option. This is where flash and software-defined storage (SDS) could be your saviour. By separating the software that provides the intelligence from the traditional hardware platform, you gain lots of advantages, including flexibility, scalability and improved agility.

So I could add to what I already have?

Yes. Flash and tape aren’t mutually exclusive. Lots of businesses use a mix of the old and the new – what’s important is how you structure it. Think of it like a well-organised wardrobe. You need your everyday staples close at hand, and you store the less frequently worn items, also known in the UK as the summer wardrobe (!), where you can access them if you need them but not in prime position.

Your data could, and should, work like this. Use flash for critical workloads that require real-time access, and use your older tape storage for lower-priority data or lower-performance applications.
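A minimal sketch of that tiering decision follows, assuming invented datasets and thresholds; real SDS platforms automate this placement using much richer telemetry.

```python
# Toy tiering policy: flash for hot data, slower tiers for cold data.
from datetime import date, timedelta

datasets = [
    {"name": "orders-live",  "last_access": date(2017, 7, 24), "daily_reads": 5000},
    {"name": "archive-2014", "last_access": date(2016, 1, 10), "daily_reads": 2},
]

def tier_for(ds, today=date(2017, 7, 25)):
    hot = (today - ds["last_access"]) < timedelta(days=30) and ds["daily_reads"] > 100
    return "flash" if hot else "tape/object archive"

for ds in datasets:
    print(ds["name"], "->", tier_for(ds))
```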

But won’t it blow my budget?

No. The cost of flash systems has come down over the last few years, and the lower operating costs deliver savings over the long term. It’s been proven that virtualisation of mixed environments can store up to five times more data, and that analytics-driven hybrid cloud data management reduces costs by up to 73%. In fact, we estimate that with automatic data placement and management across storage systems, media and cloud, it’s possible to reduce costs by up to 90%!

So how do I know what system will work for me?

Well, that’s where we come in. At Logicalis we’ve got over 20 years of experience working with IBM systems. Our experts work with clients to help them scope out a storage solution that meets their needs today, and the needs they’ll have tomorrow.

We start with a Storage Workshop that looks at the existing infrastructure and what you’re hoping to achieve. We’ll look at how your data is currently structured and what changes you could make to improve what you already have – reducing duplication and using the right solution for the right workload. We’ll then work with you to add software and capacity that will protect your business and won’t blow your budget.

If you want to hear more about the solutions on offer, feel free to contact us.

Category: Hybrid IT

Scott Reynolds
July 12, 2017

£170m lost on the London Stock Market in just over a week, and untold damage to the “World’s Favourite Airline”. That’s the cost within the UK to the International Airlines Group, the owner of British Airways, after BA’s recent ‘power outage’ incident.

“It wasn’t an IT failure. It’s not to do with our IT or outsourcing our IT. What happened was in effect a power system failure or loss of electrical power at the data centre. And then that was compounded by an uncontrolled return of power that took out the IT system.” Willie Walsh (IAG Supremo) during a telephone interview with The Times.

Willie has since implied that the outage was caused by the actions of an engineer who disconnected and then reconnected a power supply to the data centre in “an uncontrolled and un-commanded fashion”. Could this actually have something to do with the IT outsourcing after all? Did a staff member go rogue, or was it down to poor training and change control…?

For me, what this highlights is the need to place greater emphasis on the availability and uptime of those systems that support critical parts of a business’s or organisation’s services and offerings, along with robust processes and automation, where possible, to minimise the impact of an unplanned outage.

All businesses should expect their systems to fail. Sometimes it’s a physical failure of the infrastructure supporting the data centre (power, UPSs, generators, cooling etc.). It can be the power supply itself. Compute, storage or network equipment can fail. Software and systems can suffer an outage. And it can also come down to ‘human error’ or poor maintenance of core systems or infrastructure.

Coping with a Power Failure

Even if you have two power feeds to your building, and even if they’re from two different power sub-stations and run through two different street routes, those sub-stations are still part of the same regional and national power grid. If the grid fails, so does your power. There’s no way around it, except to make your own. Power surges are handled by monitoring the power across cabinet PDUs, critical PDUs, UPSs, generators and transformers, while assigning a maximum load to every cabinet to make sure we do not overload our customers’ systems.
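As a simple illustration of the maximum-load check described above, the sketch below compares each cabinet's measured draw against its assigned limit. The readings and thresholds are invented; real facilities monitoring works from live PDU telemetry.

```python
# Toy cabinet load check against an assigned maximum.
cabinets = {
    "rack-a1": {"max_kw": 8.0, "measured_kw": 5.2},
    "rack-b3": {"max_kw": 8.0, "measured_kw": 7.7},
}

for name, c in cabinets.items():
    load = c["measured_kw"] / c["max_kw"]
    status = "OVER 90% - rebalance" if load > 0.9 else "ok"
    print(f"{name}: {load:.0%} of assigned maximum ({status})")
```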

Recovering from a Disaster

Recovering from a disaster is something that all organisations plan for; however, not all have a Disaster Recovery (DR) plan, as some consider High Availability (HA) to be more than sufficient. HA only provides a localised system for failover, whereas DR is designed to cope with a site failure.

The challenge with DR for many of our customers is the cost:

  • First you need to prioritise which application workloads you want to fail over in the event of a disaster.
  • Second you need to purchase and manage infrastructure and licensing for these workloads, with continuous replication.
  • Third you need a 2nd location.
  • Fourth you need a robust DR plan that allows you to recover your workloads at the 2nd location.
  • Then lastly (often considered the hardest part) you’ll need to fail back these services once the primary site has been recovered.

This can be an expensive option, but this is also where things like Cloud DR-as-a-Service can help minimise any expenditure, and the pain associated with owning and managing a DR environment.

Reducing the impact of an outage

Minimising the impact of any form of physical failure should take priority over recovering from an outage. Workflow automation can help a business maintain the uptime of applications and services. It can be defined as a policy whereby services are moved to other systems locally, or all services are re-provisioned to a DR location or DR platform, in the event of an outage caused by either a power issue or human error, helping a business minimise both the risk and the impact of an outage.
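A minimal sketch of such a policy is shown below: on an outage event, the most critical services are re-provisioned to a DR target first, while lower-priority ones are queued. The service names, tiers and targets are invented; a real automation platform would drive actual orchestration APIs.

```python
# Illustrative outage-handling policy driven by workload priority.
services = [
    {"name": "booking-api", "priority": 1, "dr_target": "dr-site"},
    {"name": "reporting",   "priority": 3, "dr_target": "dr-site"},
]

def handle_outage(failed_site, services):
    print(f"outage detected at {failed_site}")
    for svc in sorted(services, key=lambda s: s["priority"]):
        if svc["priority"] == 1:
            print(f"re-provisioning {svc['name']} to {svc['dr_target']}")
        else:
            print(f"queueing {svc['name']} for later recovery")

handle_outage("primary-dc", services)
```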

I’ll let you come to your own conclusions as to whether British Airways should adopt robust change control, automation or a DR policy. Logicalis can assist and provide you with a number of options customised to your particular needs, so that you are not the next press headline.

Richard Simmons
June 20, 2017

I have a confession to make: I love to read. Not just an occasional book on holiday or a few minutes on the brief, or often not so brief, train journey into and out of London, but all the time. There has never been a better time for those with a love of reading! The rise of digital media means that not only can you consume it pretty much anywhere at any time, but more importantly it is making it easier for more people to share their ideas and experience.

Recently I came across a book called “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” by Pulitzer Prize winner Thomas L. Friedman, which I not only found fascinating to read but which has also helped to shape and change the way I view many of the challenges we face, both in business and in our personal lives. The premise of the book is that Friedman would often arrange to meet people for breakfast early in the morning, to do interviews or research stories, and occasionally these people would be delayed. These moments, rather than being a source of frustration, became time he actually looked forward to, as they allowed him simply to sit and think. Looking at the world, he believes we are living through an age of acceleration, driven by constant technology evolution, globalisation and climate change, and he argues that these, combined, are the cause of many of the challenges we currently face.

The key point about this acceleration is that it is now reaching a level at which society and people are struggling to adapt. Within the technology world we talk about disruption a lot: a new business or technology arrives that disrupts a sector or market, the competition struggles to adapt and eventually a status quo is resumed. For example, Uber has undoubtedly caused a huge disruption in the world of transport, and governments are currently working through how they can better legislate for this new way of operating. The challenge is that new legislation can take 5-10 years to agree and implement, in which time Uber may well have been replaced by autonomous cars.

So what we are experiencing now is not just disruption but a sense of dislocation: the feeling that no matter how fast we try to change, it is never enough. In this environment it will be the people, businesses and societies that are able to learn and adapt the fastest that will be most successful. For business, we are constantly shown how being more agile in this digital world can drive efficiency, generate new business models and allow us to succeed, but I feel what is often lacking is guidance on how to get there. We have a wealth of different technologies which can support a business, but what is right for me? What should I invest in first? And how do I make sure that I maximise the value of that investment?

My experience with many of our customers is that they understand the challenges and also the opportunity, but simply do not have the time to think and plan. When they do have time, the amount of choice can be overwhelming and frankly daunting. In a small way this is the same challenge I face when looking for new books to read: I can go online, but with so much to choose from, how will I know what I will enjoy? The opportunity that digital media provides, with more authors and more content, can actually make finding and choosing something valuable much harder.

At Logicalis, we understand the business challenges that you face and will discuss with you the different technology options that could support you, recommending those that can deliver the biggest value in the shortest time frame. Contact us to find out how we can help you keep up to speed with emerging technology and use it to your benefit.

Alastair Broom
May 16, 2017

What if I told you that ransomware is on its way to becoming a $1 billion annual market?

Eyebrows raised (or not), it is a matter of fact in 2017 that ransomware is an extremely lucrative business, evolving at an alarming rate and becoming more sophisticated day by day.

But the question remains: what is ransomware?

Ransomware is malicious software – a form of malware – that either disables a target system or encrypts a user’s files and holds them ‘hostage’ until a ransom is paid. This malware generally operates indiscriminately, with the ability to target any operating system within any organisation. Once it has gained a foothold, it can spread quickly, infecting other systems, even backup systems, and can therefore effectively disable an entire organisation. Data is the lifeblood of many organisations, and without access to it, businesses can literally grind to a halt. Attackers demand that the user pay a fee (often in Bitcoin) to decrypt their files and get them back.

On a global scale, more than 40% of ransomware victims pay the ransom, although there is no guarantee that you will actually get your data back, and copies of your data will now be in the attacker’s hands. In the UK, 45% of organisations reported that a severe data breach caused systems to be down for more than eight hours on average. This makes it apparent that the cost is not only the ransom itself, but also the significant resources required to restore systems and data. What is even more alarming is that the number of threats and alerts in the UK is significantly higher than in other countries (Cisco 2017 Annual Cybersecurity Report). Outdated systems and equipment are partially to blame, coupled with the belief that line managers are not sufficiently engaged with security. Modern, sophisticated attacks like ransomware require user awareness, effective processes and cutting-edge security systems to prevent them from taking your organisation hostage!

How can you protect your company?

As one of the latest threats in cybersecurity, a lot has been written and said about ransomware and potential ways of preventing it. A successful mitigation strategy involving people, process and technology is the best way to minimise the risk of an attack and its impact. Your security programme should consider the approach before, during and after an attack takes place, giving due consideration to protecting the organisation from attack, detecting ransomware and other malware, and how the organisation should respond following an attack. Given that ransomware can penetrate organisations in multiple ways, reducing the risk of infection requires a holistic approach rather than a single point solution. It takes seconds to encrypt an entire hard disk, so IT security systems must provide the highest levels of protection, rapid detection, and strong containment and quarantine capability to limit damage. Paying the ransom should be viewed as an undesirable, unpredictable last resort, and every organisation should therefore take effective measures to avoid this scenario.

Could your organisation be a target?

One would imagine that only large corporations are at risk of a ransomware attack, but this is far from the truth. Organisations of all industries and sizes report ransomware attacks which lead to substantial financial loss, data exposure and potential brand damage. The reason is that all businesses rely on the availability of data, such as employee profiles, patents, customer lists and financial statements, to operate. Imagine the impact of ransomware attacks on police departments, city councils, schools or hospitals. Whether an organisation operates in the public or private sector, banking or healthcare, it must have an agile security system in place to reduce the risk of a ransomware attack.

Where to start?

The first step to shielding your company against ransomware is to perform an audit of your current security posture and identify areas of exposure. Do you have the systems and skills to identify an attack? Do you have the processes and resources to respond effectively? As ransomware disguises itself and uses sophisticated hacking tactics to infiltrate your organisation’s network, it is important to constantly seek innovative ways to protect your data before any irreparable damage is done.

With our Security Consultancy, Managed Security Service offerings and threat-centric Security product portfolio, we are able to help our customers build the holistic security architecture needed in today’s threat landscape.

Contact us to discuss your cyber security needs and ensure you aren’t the next topic of a BBC news article.

 

Category: Security

Neil Thurston
April 25, 2017

Hybrid IT is often referred to as bimodal, a term coined by Gartner some four years ago to reflect the (then) new need for the simultaneous management of two distinct strands of work in a Hybrid IT environment – the traditional server-based elements on the one hand, and the Cloud elements on the other.

Since then, the two strands of the bimodal world have blended in various different ways. As they have engaged and experimented with new technologies, organisations have found that certain workload types are particularly suited to certain environments.

For example, DevOps work, with its strong focus on user experience elements such as web front ends, is typically well suited to cloud-native environments. Meanwhile, back end applications processing data tend to reside most comfortably in the traditional data centre environment.

The result is a multi-modal situation even within any given application, with its various tiers sitting in different technologies, or even different clouds or data centres.

The obvious question for IT management is this: how on earth do you manage an application which is split across multiple distinct technologies? Relying on technology to provide the management visibility you need drives you to traditional tools for the elements of the application based on traditional server technology, and DevOps tools for the cloud native side. Both sets of tools need to be continuously monitored. For every application, and every environment.

A new breed of tools is emerging, allowing you to play in both worlds at once. VMware vRealize Automation cloud automation software is a good example. Over the last three years, VMware has developed its long-standing traditional platform, adding Docker container capabilities, so that today vRealize is a wholly integrated platform allowing for the creation of fully hybrid applications, in the form of ‘cut-out’ blueprints containing both traditional VM images and Docker images.

This multi-modal Hybrid IT world is where every enterprise will end up. IT management needs clear visibility, for every application, of multiple tiers across multiple technologies – for security, scaling, cost management and risk management, to name just a few issues. Platforms with the capability to manage this hybrid application state will be essential.

This area of enterprise IT is moving rapidly: Logicalis is well versed, and experienced, in these emerging technologies both in terms of solution and service delivery, and in terms of support for these technologies in our own cloud. Contact us to find out more about multi-modal Hybrid IT and how we can help you leverage it.

Category: Hybrid IT

Fanni Vig
April 20, 2017

Finally, it’s out!

With acquisitions like Composite, ParStream, Jasper and AppDynamics, we knew something was bubbling away in the background for Cisco with regards to edge analytics and IoT.

Edge Fog Fabric – EFF

The critical success factor for IoT and analytics solution deployments is to provide the right data, at the right time, to the right people (or machines).

With the exponential growth in the number of connected devices, the marketplace requires solutions that provide data generating devices, communication, data processing, and data leveraging capabilities, simultaneously.

To meet this need, Cisco recently launched a software solution (predicated on hardware devices) that encompasses all the above capabilities and named it Edge Fog Fabric aka EFF.

What is exciting about EFF?

To implement high-performing IoT solutions that are cost effective and secure, a combination of capabilities needs to be in place.

  • Multi-layered data processing, storage and analytics – given the rate of growth in the number of connected devices and the volume of data, bringing data back from devices to a DV environment can be expensive; processing information on the EFF makes this a lot more cost effective (a minimal sketch of edge-side aggregation follows this list).
  • Micro services – a standardised framework for data processing and communication services that can be programmed in standard languages such as Python and Java.
  • Message routers – effective communication between the various components and layers. Without state-of-the-art message brokering, no IoT system could be secure and scalable in providing real-time information.
  • Data leveraging capabilities – ad hoc, embedded or advanced analytics capabilities to support BI and reporting needs. With the acquisition of Composite and AppDynamics, EFF will enable an IoT platform to connect to IT systems and applications.
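The sketch below illustrates the edge-processing point from the first bullet: aggregate raw sensor readings locally and forward only a compact summary (plus anomalies) upstream, rather than shipping every reading back centrally. This is not EFF's actual micro-service API; the readings and thresholds are invented.

```python
# Edge-side aggregation: send a summary upstream instead of raw data.
from statistics import mean

raw_readings = [21.1, 21.3, 21.2, 35.9, 21.2, 21.4]   # e.g. one minute of sensor data

summary = {
    "count": len(raw_readings),
    "mean": round(mean(raw_readings), 2),
    "max": max(raw_readings),
    "alerts": [r for r in raw_readings if r > 30],     # only anomalies sent in full
}

print("payload sent upstream:", summary)   # a fraction of the raw volume
```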

What’s next?

Deploying the above is no mean feat. According to Gartner’s view of the IoT landscape, no organisation has yet achieved the panacea of connecting devices to IT systems and vice versa, combined with the appropriate data management and governance capabilities embedded. So there is still a long road ahead.

However, with technology advancements such as the above, I have no doubt that companies and service providers will be able to accelerate progress and deliver further use cases sooner than we might think.

Based on this innovation, the two obvious next steps are:

  • Further automation – automating communication, data management and analytics services including connection with IT/ERP systems
  • Machine made decisions – once all connections are established and the right information reaches the right destination, machines could react to information that is shared with ‘them’ and make automated decisions.
