Digitally Speaking

Andrew Newton
September 8, 2017

Shadow IT is not a new concept, but it is certainly a big issue for many organisations today. Companies of all sizes are seeing a significant increase in the use of devices and services that sit outside the organisation’s approved IT infrastructure.

A Global CIO Survey found that IT leaders are under growing pressure from Shadow IT and are gradually losing the battle to retain the balance of power in IT decision-making. The threat from Shadow IT is forcing CIOs to re-align their IT strategy to better serve the needs of their line-of-business colleagues, and to transform IT into the first choice for all IT service provision. However, Shadow IT continues to apply pressure to the many CIOs and IT leaders who do not have clear visibility of its use within their organisations and therefore cannot quantify the risks or opportunities.

So is Shadow IT a threat to your organisation or does it improve productivity and drive innovation?

Based on Gartner’s report, Shadow IT will account for a third of the cyber-attacks experienced by enterprises by 2020. However, some customers have told us:

  • “Shadow IT is an opportunity for us to test devices or services before we go to IT for approval.”
  • “Shadow IT allows us to be agile and use services that IT don’t provide, so we can work more effectively.”

One of the most important aspects of Shadow IT is, of course, cost. What are the hidden costs to the business of a security breach, potential loss of data and, for those with regulatory compliance requirements, the possibility of large fines and loss of reputation in their respective markets?

With an ever-changing and expanding IT landscape and new regulations such as the General Data Protection Regulation (GDPR) coming into effect in May 2018, managing and controlling data whilst ensuring complete data security should be top of the priority list. Understanding the key challenges of Shadow IT is therefore fundamental to managing it effectively.

Shadow IT – The Key Challenges:

    • Identifying the use of Shadow IT
      Arguably the biggest challenge with Shadow IT is visibility within the organisation. How can IT leaders see who is using or consuming what, and for what purpose? If you can’t see it, or aren’t aware of it, how can you manage it?
    • Costs of Shadow IT
      Controlling Shadow IT spend is impossible if there is no visibility of what is being used. It is not just direct Shadow IT purchases that present a challenge, but also the consequences of a security breach resulting from Shadow IT: fines, reputational damage and future loss of business.
    • Securing the threat to your business
      One of the biggest areas of concern, quite rightly, is the security threat to the business from the use of non-approved IT sources. Not only does this have the potential to add to the organisation’s costs, it could also result in the loss of data, again with the risk of considerable fines.
    • Managing Shadow IT without stifling innovation
      The wrong approach to managing Shadow IT, such as a “total lock-down” message, can signal to the organisation that IT is controlling, inflexible and unwilling to listen, with the possible result of driving Shadow IT underground and in some cases actually increasing its use, thus increasing risks and costs.

Shadow IT is a complicated issue, but your response to it doesn’t have to be. Contact us to find out how we can help you manage Shadow IT, be forward thinking and fill the gaps in your current IT infrastructure.

Anis Makeriya
August 21, 2017

It’s always the same scenario: someone gives me some data files that I just want to dive straight into and start exploring ways to depict visually, but I can’t.

I’d fire up a reporting tool only to step right back, realising that for data to take visual shape, it needs to be in shape first! One pattern has appeared consistently over the years: the more time spent on ETL/ELT (Extract, Transform and Load, in varying sequences), the less often you find yourself bounced out of the reporting layer and back into data prep.

Data preparation for the win

The sayings ‘80% of time goes into data prep’ and ‘garbage in, garbage out (GIGO)’ have been around for a long time, but they don’t really hit home until you face them in practice and they suddenly translate into backward progress. Data quality issues range from inconsistent date formats and multiple spellings of the same value to values that are missing altogether in the form of nulls. So how can they all be dealt with? A data prep layer is the answer.
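To make that concrete, here is a minimal data prep sketch in Python with pandas; the column names and sample values are invented purely to illustrate the three classes of issue mentioned above (mixed date formats, multiple spellings of one value, and nulls).

```python
# A minimal data-prep sketch using pandas. Column names and values are
# hypothetical, purely to illustrate the three issues described above.
import pandas as pd

raw = pd.DataFrame({
    "order_date": ["2017-03-01", "01/03/2017", "March 1, 2017"],  # mixed date formats
    "country":    ["UK", "United Kingdom", None],                 # multiple spellings and a null
    "amount":     [100.0, None, 250.0],                           # missing measure value
})

# 1. Standardise dates into a single datetime type; anything unparseable becomes NaT
#    rather than silently passing bad strings through to the reporting layer.
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")

# 2. Map multiple spellings of the same value onto one canonical form.
country_map = {"UK": "United Kingdom", "United Kingdom": "United Kingdom"}
raw["country"] = raw["country"].map(country_map)

# 3. Deal with nulls explicitly instead of letting them leak into visuals.
raw["country"] = raw["country"].fillna("Unknown")
raw["amount"] = raw["amount"].fillna(0.0)

print(raw)
```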

Often, with complex transformations or large datasets, analysts find themselves turning to IT to perform the ETL process. Thankfully, over the years vendors have recognised the need to include commonly used transformations in the reporting tools themselves. Tools such as Tableau and Power BI have successfully passed this power on to analysts, making time-to-analysis a flash: features such as pivoting, editing aliases, and joining and unioning tables are available within a few clicks.

There may also be times when multiple data sources need joining, such as matching company names. Whilst Excel and SQL fuzzy look-ups have existed for some time, dedicated ETL tools such as Paxata have embedded further intelligence, enabling them to go a step further and recognise that the solution lies beyond just having similar spellings between the names.
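As a simple illustration of the idea (and nowhere near what a dedicated tool like Paxata actually does), here is a minimal fuzzy-matching sketch using only Python’s standard library; the company names are made up.

```python
# A minimal fuzzy-matching sketch using only the Python standard library.
# Dedicated tools apply far richer logic than raw string similarity; this
# simply illustrates matching company names across two sources.
from difflib import SequenceMatcher

crm_names = ["Logicalis UK Ltd", "Acme Holdings plc", "Widgets & Co"]
finance_names = ["Logicalis U.K. Limited", "ACME Holdings", "Widgets and Company"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score, ignoring case and punctuation noise."""
    normalise = lambda s: "".join(ch for ch in s.lower() if ch.isalnum() or ch.isspace())
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

for name in crm_names:
    best = max(finance_names, key=lambda candidate: similarity(name, candidate))
    print(f"{name!r:28} -> {best!r:28} (score {similarity(name, best):.2f})")
```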

All the tasks mentioned above relate to the ‘T’ (Transform) of ETL, and that is only the second or third step in the ETL/ELT process! If data can’t be extracted as part of the ‘E’ in the first place, there is nothing to transform. When information lies in disparate silos, it often cannot be merged unless the data is migrated or replicated across stores. Following the data explosion of the past decade, Cisco Data Virtualisation has gained traction for its core capability of creating a merged virtual layer over multiple data sources, enabling quick time to access along with the added benefits of data quality monitoring and a single version of the truth.

These capabilities are now even more useful with the rise of data services such as Bloomberg and forex feeds, and APIs that return weather information; if we also want to know how people feel about the weather, the Twitter API works too.

Is that it..?

Finally, after the extraction and transformation of the data, the load process is all that remains… but even that comes with its own challenges: load frequencies; load types (incremental vs. full loads) depending on data volumes; data capture (changing dimensions) to give an accurate picture of events; and storage and query speeds from the source, to name a few.
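To illustrate just one of those choices, here is a minimal sketch, using an in-memory SQLite database and invented table names, of the difference between a full load and an incremental load driven by a watermark.

```python
# A minimal sketch of an incremental ("delta") load using a watermark,
# contrasted with a full load. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_orders (id INTEGER, updated_at TEXT)")
conn.execute("CREATE TABLE target_orders (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO source_orders VALUES (?, ?)",
                 [(1, "2017-07-01"), (2, "2017-07-10"), (3, "2017-07-20")])

def full_load(conn):
    """Reload everything: simple, but increasingly expensive as volumes grow."""
    conn.execute("DELETE FROM target_orders")
    conn.execute("INSERT INTO target_orders SELECT * FROM source_orders")

def incremental_load(conn, watermark: str) -> str:
    """Load only rows changed since the last run, then advance the watermark."""
    rows = conn.execute(
        "SELECT id, updated_at FROM source_orders WHERE updated_at > ?", (watermark,)
    ).fetchall()
    conn.executemany("INSERT INTO target_orders VALUES (?, ?)", rows)
    return max([watermark] + [r[1] for r in rows])

watermark = incremental_load(conn, "2017-07-05")   # picks up orders 2 and 3 only
print(conn.execute("SELECT * FROM target_orders").fetchall(), watermark)
```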

Whilst for quick analysis a capable analyst with best-practice knowledge will suffice, scalable, complex solutions need the right team from both the IT and business sides, in addition to the tools and hardware to support them smoothly going forward. Contact us today to help you build a solid Data Virtualisation process customised to your particular needs.

Jorge Aguilera Poyato
August 9, 2017

It’s common knowledge that there is a global shortage of experienced IT security professionals, right across the spectrum of skills and specialities, and that this shortage is exacerbated by an ongoing lack of cyber security specialists emerging from education systems.

Governments are taking action to address this skills shortage, but it is nevertheless holding back advancement and exposing IT systems and Internet businesses to potential attacks.

Because of this, and despite the fear that other industries may have of Artificial Intelligence (AI), the information security industry should be embracing it and making the most of it. As the connectivity requirements of different environments become ever more sophisticated, the number of security information data sources is increasing rapidly, even as potential threats increase in number and complexity. Automation and AI offer powerful new ways of managing security in this brave new world.

At the moment, the focus in AI is on searching and correlating large amounts of information to identify potential threats based on data patterns or user behaviour analytics. These first generation AI-driven security solutions only go so far, though: security engineers are still needed, to validate the identification of threats and to activate remediation processes.
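To give a feel for the kind of pattern-based detection involved, here is a toy user behaviour analytics sketch; the login data and threshold are invented, and in practice anything flagged would still go to a security engineer for validation.

```python
# A toy user-behaviour-analytics sketch: flag logins that fall far outside a
# user's historical pattern, for a human analyst to validate. The data and
# threshold are invented for illustration only.
from statistics import mean, stdev

# Historical login hours (0-23) per user, e.g. exported from a SIEM.
history = {
    "alice": [8, 9, 9, 8, 10, 9, 8],
    "bob":   [22, 23, 22, 21, 23, 22, 23],
}

def is_anomalous(user: str, login_hour: int, threshold: float = 3.0) -> bool:
    """Return True if the login hour is more than `threshold` standard deviations
    from the user's historical mean; such events would be queued for review."""
    hours = history[user]
    spread = stdev(hours) or 1.0          # avoid divide-by-zero for very regular users
    return abs(login_hour - mean(hours)) / spread > threshold

print(is_anomalous("alice", 3))   # 3am login for a 9-to-5 user -> True (flag for review)
print(is_anomalous("bob", 23))    # normal hour for a night-shift user -> False
```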

As these first generation solutions become more efficient and effective in detecting threats, they will become the first step towards moving security architectures into genuine auto-remediation.

To explore this, consider a firewall – it allows you to define access lists based on applications, ports or IP addresses. Working as part of a comprehensive security architecture, new AI-driven platforms will use similar access lists, based on a variety of complex and dynamic information sources. The use of such lists will underpin your auto-remediation policy, which will integrate with other platforms to maintain consistency in the security posture you have defined.
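As a purely conceptual sketch (not any vendor’s actual platform), the Python below shows the shape of the idea: a continuously updated threat feed becomes a dynamic access list, which is then pushed to enforcement points.

```python
# A conceptual sketch of a dynamic access list driving auto-remediation.
# The threat feed, score threshold and "enforcement point" are invented
# placeholders; a real platform would integrate with firewalls, NAC, etc.
from typing import Dict, Set

def build_block_list(threat_scores: Dict[str, float], threshold: float = 0.8) -> Set[str]:
    """Turn continuously updated threat scores into an access list."""
    return {ip for ip, score in threat_scores.items() if score >= threshold}

def apply_to_enforcement_points(block_list: Set[str]) -> None:
    """Placeholder for pushing the list to firewalls or other controls,
    keeping the security posture consistent across platforms."""
    for ip in sorted(block_list):
        print(f"deny ip host {ip} any   # auto-remediated, pending analyst review")

# Scores as they might arrive from AI correlation of many information sources.
scores = {"203.0.113.7": 0.93, "198.51.100.4": 0.41, "192.0.2.99": 0.87}
apply_to_enforcement_points(build_block_list(scores))
```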

As we move into this new era in security systems, in which everything comes down to gathering information that can be processed, with security in mind, by AI systems, we will see changes as services adapt to the new capabilities. Such changes will be seen first in Security Operations Centres (SOCs).

Today’s SOCs still rely heavily on security analysts reviewing reports to provide the level of service expected by customers. They will be one of the first environments to adopt AI systems, as they seek to add value to their services and operate as a seamless extension to digital businesses of all kinds.

SOCs are just one example. The security industry as a whole will get the most out of AI, but it needs to start recognising what machines do best and what people do best. The use of this technology will enable the creation of new tools and processes in the cybersecurity space that can protect new devices and networks from threats even before a human can classify that threat.

Artificial intelligence techniques such as unsupervised learning and continuous retraining can keep us ahead of the cyber criminals. However, we need to be aware that hackers will also be using these techniques, so this is where the creativity of the good guys should focus: thinking about what is coming next, while letting the machines do their job of learning and continuous protection.

Don’t miss out: to find out more, contact us – we’ll be delighted to help you with emerging technology and use it to your benefit.

Category: Security

Scott Reynolds
July 25, 2017

The amount of data that businesses generate and manage continues to explode. IBM estimates that across the world, 2.3 trillion gigabytes of data are created each day and this will rise to 43 trillion gigabytes by 2020.

From transactions and customer records to email, social media and internal record keeping – today’s businesses create data at rates faster than ever before. And there’s no question that storing and accessing this data presents lots of challenges for business. How to keep up with fast growing storage needs, without fast growing budgets? How to increase storage capacity without increasing complexity? How to access critical data without impacting on the speed of business?

It’s increasingly obvious that traditional storage can’t overcome these challenges. Simply adding more capacity pushes up costs for both storage and management. And manually working with data across different systems can become an administrative nightmare – adding complexity, and taking up valuable IT resource.

So, what can you do? It’s likely that you’ve already got an existing infrastructure and, for many, scrapping it and starting again just isn’t an option. This is where flash and software-defined storage (SDS) could be your saviour. By separating the software that provides the intelligence from the traditional hardware platform, you gain lots of advantages, including flexibility, scalability and improved agility.

So I could add to what I already have?

Yes. Flash and tape aren’t mutually exclusive. Lots of businesses use a mix of the old and the new – what’s important is how you structure it. Think of it like a well-organised wardrobe. You need your everyday staples close at hand, and you store the less frequently worn items, also known in the UK as the summer wardrobe (!), where you can access them if you need them, but not in prime position.

Your data could, and should, work like this. Use flash for critical workloads that require real-time access, and use your older tape storage for lower-priority data or lower-performance applications.
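As a toy illustration of that placement decision, the sketch below classifies datasets by access frequency and criticality; the datasets, thresholds and tier names are all invented.

```python
# A toy data-placement sketch: choose a storage tier for each dataset from
# how often it is accessed and how critical it is. Purely illustrative.
datasets = [
    {"name": "order_transactions", "reads_per_day": 50000, "critical": True},
    {"name": "email_archive_2014", "reads_per_day": 2,     "critical": False},
    {"name": "hr_records",         "reads_per_day": 300,   "critical": True},
]

def choose_tier(ds: dict) -> str:
    """Hot, critical data goes to flash; cold, low-priority data goes to tape;
    everything else sits on mid-tier disk."""
    if ds["critical"] and ds["reads_per_day"] > 1000:
        return "flash"
    if not ds["critical"] and ds["reads_per_day"] < 10:
        return "tape"
    return "disk"

for ds in datasets:
    print(f"{ds['name']:20} -> {choose_tier(ds)}")
```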

But won’t it blow my budget?

No. The cost of flash systems has come down over the last few years, and lower running costs deliver savings over the long term. It’s been proven that the virtualisation of mixed environments can store up to five times more data, and that analytics-driven hybrid cloud data management reduces costs by up to 73%. In fact, we estimate that with automatic data placement and management across storage systems, media and cloud, it’s possible to reduce costs by up to 90%!

So how do I know what system will work for me?

Well, that’s where we come in. At Logicalis we’ve got over 20 years of experience working with IBM systems. Our experts work with clients to help them scope out a storage solution that meets their needs today, and the needs they’ll have tomorrow.

We start with a Storage Workshop that looks at the existing infrastructure and what you’re hoping to achieve. We’ll look at how your data is currently structured and what changes you could make to improve what you already have – reducing duplication and using the right solution for the right workload. We’ll then work with you to add software and capacity that will protect your business and won’t blow your budget.

If you want to hear more about the solutions on offer, feel free to contact us.

Category: Hybrid IT

Scott Reynolds
July 12, 2017

£170m lost on the London Stock Market in just over a week, and untold damage to the “World’s Favourite Airline”. That’s the cost within the UK to International Airlines Group, the owner of British Airways, of BA’s recent ‘power outage’ incident.

“It wasn’t an IT failure. It’s not to do with our IT or outsourcing our IT. What happened was in effect a power system failure or loss of electrical power at the data centre. And then that was compounded by an uncontrolled return of power that took out the IT system.” Willie Walsh (IAG Supremo) during a telephone interview with The Times.

Willie has since suggested that the outage was caused by the actions of an engineer who disconnected and then reconnected a power supply to the data centre in “an uncontrolled and un-commanded fashion”. Could this actually have something to do with the IT outsourcing after all? Did a staff member go rogue, or was it down to poor training and change control…?

For me, what this highlights is the need to place greater emphasis on the availability and uptime of the systems that support critical parts of a business or organisation’s services, along with robust processes and automation, where possible, to minimise the impact of an unplanned outage.

All businesses should expect their systems to fail. Sometimes it is a physical failure of the infrastructure supporting the data centre (power, UPSs, generators, cooling etc.). It can be the power supply itself. Compute, storage or network equipment can fail. Software and systems can suffer an outage. And it can also come down to ‘human error’ or poor maintenance of core systems or infrastructure.

Coping with a Power Failure

Even if you have two power feeds to your building, and even if they’re from two different power sub-stations and run through two different street routes, those sub-stations are still part of the same regional and national power grid. If the grid fails, so does your power. There is no way around it, except to make your own. Power surges are handled by monitoring the power across cabinet PDUs, critical PDUs, UPSs, generators and transformers, while assigning a maximum load to every cabinet to make sure we do not overload our customers’ systems.

Recovering from a Disaster

Recovering from a disaster is something that all organisations plan for; however, not all have a Disaster Recovery (DR) plan, as some consider High Availability (HA) to be more than sufficient. Yet HA only provides localised failover, whereas DR is designed to cope with a site failure.

The challenge with DR for many of our customers is the cost:

  • First, you need to prioritise which application workloads you want to fail over in the event of a disaster.
  • Second, you need to purchase and manage infrastructure and licensing for these workloads, with continuous replication.
  • Third, you need a second location.
  • Fourth, you need a robust DR plan that allows you to recover your workloads at the second location.
  • Lastly (often considered the hardest part), you’ll need to fail these services back once the primary site has been recovered.

This can be an expensive option, but this is also where things like Cloud DR-as-a-Service can help minimise any expenditure, and the pain associated with owning and managing a DR environment.

Reducing the impact of an outage

Minimising the impact of any form of physical failure should take priority over recovering from an outage. Workflow automation can help a business maintain the uptime of applications and services. It can be defined as a policy whereby services are moved to other systems locally, or all services are re-provisioned to a DR location or DR platform, in the event of an outage caused either by a power issue or by human error, helping a business minimise both the risk and the impact of an outage.
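The sketch below is a simplified, hypothetical illustration of such a policy in Python: given a service’s priority and a health signal, decide whether it stays put, restarts locally or is re-provisioned at the DR location.

```python
# A conceptual sketch of a workflow-automation policy: if a service's primary
# site is reported unhealthy, re-provision it locally or at the DR location.
# Site names, services and the health feed are invented for illustration.
policy = {
    "booking-engine":  {"priority": "critical", "dr_target": "dr-site"},
    "staff-intranet":  {"priority": "low",      "dr_target": None},
}

def remediate(service: str, primary_healthy: bool, local_capacity: bool) -> str:
    """Decide where a workload should run after an outage is detected."""
    if primary_healthy:
        return "stay on primary"
    if local_capacity:
        return "restart on another local system"
    target = policy[service]["dr_target"]
    return f"re-provision at {target}" if target else "wait for primary recovery"

print(remediate("booking-engine", primary_healthy=False, local_capacity=False))
print(remediate("staff-intranet", primary_healthy=False, local_capacity=False))
```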

I’ll let you come to your own conclusions as to whether British Airways should adopt robust change control, automation or DR policies. Logicalis can assist, providing you with a number of options customised to your particular needs so that you are not the next press headline.

Richard Simmons
June 20, 2017

I have a confession to make: I love to read. Not just an occasional book on holiday or a few minutes on the brief, or often not so brief, train journey into and out of London, but all the time. There has never been a better time for those with a love of reading! The rise of digital media means that not only can you consume it pretty much anywhere at any time, but, more importantly, it is making it easier for more people to share their ideas and experience.

Recently I came across a book called “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” by Pulitzer Prize winner Thomas L. Friedman, which I not only found fascinating to read but which has also helped to shape and change the way I view many of the challenges we face, both in business and in our personal lives. The premise of the book is that Friedman would often arrange to meet people for breakfast early in the morning, to do interviews or research stories, and occasionally these people would be delayed. These moments, rather than being a source of frustration, became time he actually looked forward to, as it allowed him simply to sit and think. Looking at the world, he believes we are living through an age of acceleration driven by constant technology evolution, globalisation and climate change, and he argues that, combined, these are the cause of many of the challenges we currently face.

The key point about this acceleration is that it is now reaching a level at which society and people are struggling to adapt. Within the technology world we talk about disruption a lot: a new business or technology arrives that can disrupt a sector or market, the competition struggles to adapt, and eventually a status quo is resumed. For example, Uber has undoubtedly caused huge disruption in the world of transport, and governments are currently working through how they can better legislate for this new way of operating. The challenge is that new legislation can take 5-10 years to agree and implement, by which time Uber may well have been replaced by autonomous cars.

So what we are experiencing now is not just disruption but a sense of dislocation: the feeling that no matter how fast we try to change, it is never enough. In this environment it will be the people, businesses and societies that are able to learn and adapt the fastest that will be most successful. For business, we are constantly shown how being more agile in this digital world can drive efficiency, generate new business models and allow us to succeed, but I feel that what is often lacking is guidance on how to get there. We have a wealth of different technology which can support a business, but what is right for me? What should I invest in first? And how do I make sure that I maximise the value of that investment?

My experience with many of our customers is that they understand the challenges and the opportunity, but simply do not have the time to think and plan. When they do have time, the amount of choice can be overwhelming and, frankly, daunting. In a small way this is the same challenge I face when looking for new books to read: I can go online, but with so much to choose from, how will I know what I will enjoy? The opportunity that digital media provides, with more authors and more content, can actually make finding and choosing something you think is valuable much harder.

At Logicalis, we understand the business challenges that you face and will discuss with you the different technology options that could support you, recommending those that can deliver the biggest value in the shortest time frame. Contact us to find out how we can help you keep up to speed with emerging technology and use it to your benefit.

Alastair Broom
May 16, 2017

What if I told you that ransomware is on its way to becoming a $1 billion annual market?

Eyebrows raised (or not), it is a matter of fact in 2017 that ransomware is an extremely lucrative business, evolving at an alarming rate and becoming more sophisticated by the day.

But, the question remains, what is Ransomware?

Ransomware is malicious software – a form of malware – that either disables a target system or encrypts a user’s files and holds them ‘hostage’ until a ransom is paid. This malware generally operates indiscriminately, with the ability to target any operating system within any organisation. Once it has gained a foothold, it can spread quickly, infecting other systems, even backup systems, and can therefore effectively disable an entire organisation. Data is the lifeblood of many organisations, and without access to it, businesses can literally grind to a halt. Attackers demand that the user pay a fee (often in Bitcoin) to decrypt their files and get them back.

On a global scale, more than 40% of ransomware victims pay the ransom, although there is no guarantee that you will actually get your data back, and copies of your data will now be in the attacker’s hands. In the UK, 45% of organisations reported that a severe data breach caused systems to be down for more than eight hours on average. It is apparent that the cost is not only the ransom itself, but also the significant resources required to restore systems and data. What is even more alarming is that in the UK the number of threats and alerts is significantly higher than in other countries (Cisco 2017 Annual Cybersecurity Report). Outdated systems and equipment are partially to blame, coupled with the perception that line managers are not sufficiently engaged with security. Modern, sophisticated attacks like ransomware require user awareness, effective processes and cutting-edge security systems to prevent them from taking your organisation hostage!

How can you protect your company?

Ransomware is one of the latest threats in cybersecurity, and a lot has been written and said about it and about potential ways of preventing it. A successful mitigation strategy involving people, process and technology is the best way to minimise the risk of an attack and its impact. Your security programme should consider the approach before, during and after an attack takes place, giving due consideration to protecting the organisation from attack, detecting ransomware and other malware, and how the organisation should respond following an attack. Given that ransomware can penetrate organisations in multiple ways, reducing the risk of infection requires a holistic approach rather than a single point solution. It takes seconds to encrypt an entire hard disk, so IT security systems must provide the highest levels of protection, rapid detection and strong containment and quarantine capability to limit damage. Paying the ransom should be viewed as an undesirable, unpredictable last resort, and every organisation should take effective measures to avoid this scenario.

Could your organisation be a target?

One might imagine that only large corporations would be at risk of a ransomware attack, but this is far from the truth. Organisations of all industries and sizes report ransomware attacks that lead to substantial financial loss, data exposure and potential brand damage. The reason is that all businesses rely on the availability of data, such as employee profiles, patents, customer lists and financial statements, to operate. Imagine the impact of a ransomware attack on a police department, city council, school or hospital. Whether an organisation operates in the public or private sector, banking or healthcare, it must have an agile security system in place to reduce the risk of a ransomware attack.

Where to start?

The first step in shielding your company against ransomware is to perform an audit of your current security posture and identify areas of exposure. Do you have the systems and skills to identify an attack? Do you have the processes and resources to respond effectively? As ransomware disguises itself and uses sophisticated hacking tactics to infiltrate your organisation’s network, it is important to constantly seek innovative ways to protect your data before any irreparable damage is done.

With our Security Consultancy, Managed Security Service offerings and threat-centric Security product portfolio, we are able to help our customers build the holistic security architecture needed in today’s threat landscape.

Contact us to discuss your cyber security needs and ensure you aren’t the next topic of a BBC news article.

 

Category: Security

Neil Thurston
April 25, 2017

Hybrid IT is often referred to as bimodal, a term coined by Gartner some four years ago to reflect the (then) new need for the simultaneous management of two distinct strands of work in a Hybrid IT environment – the traditional server-based elements on the one hand, and the Cloud elements on the other.

Since then, the two strands of the bimodal world have blended in various different ways. As they have engaged and experimented with new technologies, organisations have found that certain workload types are particularly suited to certain environments.

For example, DevOps work, with its strong focus on user experience elements such as web front ends, is typically well suited to cloud-native environments. Meanwhile, back end applications processing data tend to reside most comfortably in the traditional data centre environment.

The result is a multi-modal situation even within any given application, with its various tiers sitting in different technologies, or even different clouds or data centres.

The obvious question for IT management is this: how on earth do you manage an application which is split across multiple distinct technologies? Relying on technology to provide the management visibility you need drives you to traditional tools for the elements of the application based on traditional server technology, and DevOps tools for the cloud native side. Both sets of tools need to be continuously monitored. For every application, and every environment.

A new breed of tools is emerging, allowing you to play in both worlds at once. VMware vRealize Automation cloud automation software is a good example. Over the last three years, VMware has developed its long-standing traditional platform, adding Docker container capabilities, so that today vRealize is a wholly integrated platform allowing for the creation of fully hybrid applications, in the form of ‘cut-out’ blueprints containing both traditional VM images and Docker images.

This multi-modal Hybrid IT world is where every enterprise will end up. IT management needs clear visibility, for every application, of multiple tiers across multiple technologies – for security, scaling, cost management and risk management, to name just a few issues. Platforms with the capability to manage this hybrid application state will be essential.

This area of enterprise IT is moving rapidly: Logicalis is well versed, and experienced, in these emerging technologies both in terms of solution and service delivery, and in terms of support for these technologies in our own cloud. Contact us to find out more about multi-modal Hybrid IT and how we can help you leverage it.

Category: Hybrid IT

Fanni Vig
April 20, 2017

Finally, it’s out!

With acquisitions like Composite, ParStream, Jasper and AppDynamics, we knew something was bubbling away in the background for Cisco with regards to edge analytics and IoT.

Edge Fog Fabric – EFF

The critical success factor for IoT and analytics solution deployments is to provide the right data, at the right time, to the right people (or machines).

With the exponential growth in the number of connected devices, the marketplace requires solutions that provide data generating devices, communication, data processing, and data leveraging capabilities, simultaneously.

To meet this need, Cisco recently launched a software solution (predicated on hardware devices) that encompasses all the above capabilities and named it Edge Fog Fabric aka EFF.

What is exciting about EFF?

To implement high-performing IoT solutions that are cost effective and secure, a combination of capabilities needs to be in place:

  • Multi-layered data processing, storage and analytics – given the rate of growth in the number of connected devices and the volume of data, bringing data back from devices to a DV environment can be expensive. Processing information on the EFF makes this far more cost effective.
  • Microservices – a standardised framework for data processing and communication services that can be programmed in standard languages such as Python and Java (a minimal sketch follows this list).
  • Message routers – effective communication between the various components and layers. Without state-of-the-art message brokering, no IoT system could be secure and scalable in providing real-time information.
  • Data leveraging capabilities – ad hoc, embedded or advanced analytics capabilities that will support BI and reporting needs. With the acquisition of Composite and AppDynamics, EFF will enable an IoT platform to connect to IT systems and applications.
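Here is that sketch: a conceptual illustration of an edge microservice (this is not the EFF API itself, and the topic names and payloads are invented). Raw readings are filtered and aggregated locally, and only the summary is published upstream via the message-routing layer.

```python
# A conceptual sketch (not the EFF API) of an edge microservice: filter and
# aggregate raw sensor readings locally, and only publish the summarised
# message upstream. Topic names and payloads are invented.
import json
import statistics

def edge_summarise(readings: list, threshold: float = 75.0) -> dict:
    """Process data at the edge so only what matters crosses the network."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "alerts": sum(1 for r in readings if r > threshold),
    }

def publish(topic: str, payload: dict) -> None:
    """Placeholder for the message-router layer that would carry this upstream."""
    print(f"PUBLISH {topic}: {json.dumps(payload)}")

raw_readings = [71.2, 73.9, 80.4, 76.8, 70.1]      # e.g. temperatures from one device
publish("factory/line1/temperature/summary", edge_summarise(raw_readings))
```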

What’s next?

Deploying the above is no mean feat. According to Gartner’s view of the IoT landscape, no organisation has yet achieved the panacea of connecting devices to IT systems and vice versa, combined with the appropriate data management and governance capabilities embedded. So there is still a long road ahead.

However, with technology advancements such as the above, I have no doubt that companies and service providers will be able to accelerate progress and deliver further use cases sooner than we might think.

Based on this innovation, the two obvious next steps are:

  • Further automation – automating communication, data management and analytics services including connection with IT/ERP systems
  • Machine made decisions – once all connections are established and the right information reaches the right destination, machines could react to information that is shared with ‘them’ and make automated decisions.

Scott Hodges
April 18, 2017

Attending a recent IBM Watson event, somebody in the crowd asked the speaker, “So, what is Watson?” It’s a good question – and one there isn’t really a straightforward answer to. Is it a brand? A supercomputer? A technology? Something else?

Essentially, it is an IBM technology that combines artificial intelligence and sophisticated analytics to provide a supercomputer named after IBM’s founder, Thomas J. Watson. While interesting enough, the real question, to my mind, is this: “What sort of cool stuff can businesses do with the very smart services and APIs provided by IBM Watson?”

IBM provides a variety of services, available through Application Programming Interfaces (APIs), that developers can use to take advantage of the cognitive elements and power of Watson. The biggest challenge in taking advantage of these capabilities is to “think cognitively” and imagine how they could benefit your business or industry to give you a competitive edge – or, for not-for-profit organisations, how they could help you make the world a better place.

I’ve taken a look at some of the APIs and services available to see some of the possibilities with Watson. It’s important to think of them collectively rather than individually, as while some use-cases may use one, many will use a variety of them, working together. We’ll jump into some use-cases later on to spark some thoughts on the possibilities.

Natural Language Understanding

Extract meta-data from content, including concepts, entities, keywords, categories, sentiment, emotion, relations and semantic roles.

Discovery

Identify useful patterns and insights in structured or unstructured data.

Conversation

Add natural language interfaces such as chat bots and virtual agents to your application to automate interactions with end users.

Language Translator

Automate the translation of documents from one language to another.

Natural Language Classifier

Classify text according to its intent.

Personality Insights

Extract personality characteristics from text, based on the writer’s style.

Text to Speech and Speech to Text

Process natural language text to generate synthesised audio, or render spoken words as written text.

Tone Analyser

Use linguistic analysis to detect the emotional (joy, sadness etc.), linguistic (analytical, confident etc.) and social (openness, extraversion etc.) tones of a piece of text.

Trade-off Analytics

Make better choices when analysing multiple, even conflicting goals.

Visual Recognition

Analyse images for scenes, objects, faces, colours and other content.

All this is pretty cool stuff, but how can it be applied to work in your world? You could use the APIs to “train” your model to be more specific to your industry and business, and to help automate and add intelligence to various tasks.

Aerialtronics, which develops, produces and services commercial unmanned aircraft systems, offers a nice example use case of visual recognition in particular. The company teams drones, an IoT platform and Watson’s Visual Recognition service to help identify corrosion, serial numbers, loose cables and misaligned antennas on wind turbines, oil rigs and mobile phone towers, automating the process of identifying faults and defects.

Further examples showing how Watson APIs can be combined to drive powerful, innovative services can be found on the IBM Watson website’s starter-kit page.

At this IBM event, a sample service was created live in the workshop. The application streamed a video, converted the speech in the video to text, and then categorised that text, producing an overview of the content being discussed. It used the Speech to Text and Natural Language Classifier services.
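A hedged sketch of that pipeline is below. The endpoint URLs, credentials and response fields are placeholders rather than the exact Watson API contract, so treat it as the shape of the solution and consult the IBM documentation for the current SDK or REST details.

```python
# A hedged sketch of the workshop pipeline: speech-to-text, then text
# classification. Endpoint URLs, credentials and response fields below are
# placeholders, not the exact Watson API contract.
import requests

STT_URL = "https://example.com/watson/speech-to-text/v1/recognize"              # placeholder
NLC_URL = "https://example.com/watson/natural-language-classifier/v1/classify"  # placeholder
AUTH = ("apikey", "YOUR_API_KEY")                                                # placeholder

def transcribe(audio_path: str) -> str:
    """Send an audio chunk of the video stream and return the transcript text."""
    with open(audio_path, "rb") as audio:
        response = requests.post(STT_URL, data=audio, auth=AUTH,
                                 headers={"Content-Type": "audio/wav"})
    response.raise_for_status()
    return response.json().get("transcript", "")

def classify(text: str) -> str:
    """Classify the transcript to produce a category for the content overview."""
    response = requests.post(NLC_URL, json={"text": text}, auth=AUTH)
    response.raise_for_status()
    return response.json().get("top_class", "unknown")

transcript = transcribe("meeting_audio.wav")
print("Discussed topic:", classify(transcript))
```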

Taking this example further with a spot of blue sky thinking, for a multi-lingual organisation, we could integrate the translation API, adding the resulting service to video conferencing. This could deliver near real-time multiple dialect video conferencing, complete with automatic transcription in the correct language for each delegate.

Customer and support service chat bots could use the Conversation service to analyse tone. Processes such as flight booking could be fulfilled by a virtual agent using the ‘Natural Language Classifier’ to derive the intent in the conversation. Visual recognition could be used to identify production line issues, spoiled products in inventory or product types in retail environments.

Identification of faded colours or specific patterns within scenes or on objects could trigger remedial services. Detection of human faces, their gender and approximate age could help enhance customer analysis. Language translation could support better communication with customers and others in their preferred languages. Trade-off Analytics could help optimise the balancing of multiple objectives in decision making.

This isn’t pipe-dreaming: the toolkit is available today. What extra dimensions and capabilities could you add to your organisation, and the way you operate? How might you refine your approach to difficult tasks, and the ways you interact with customers? Get in contact today to discuss the possibilities.

Alastair Broom
March 10, 2017

As Logicalis’ Chief Security Technology Officer I’m often asked to comment on cyber security issues. Usually the request relates to specific areas such as ransomware or socially engineered attacks. In this article I’m taking a more holistic look at IT security.

Such a holistic approach to security is, generally, sorely lacking. This is a serious matter, with cyber criminals constantly looking for the weak links in organisations’ security, constantly testing the fence to find the easiest place to get through. So, let’s take a look at the state of enterprise IT security in early 2017, using the technology, processes and people model.

Technology

A brief, high-level look at the security market is all it takes to show that there are vast numbers of point products out there – ‘silver bullet’ solutions designed to take out specific threats. There is, however, little in terms of an ecosystem supporting a defence-in-depth architecture. Integration of, and co-operation between, the various disparate components is, although growing, typically weak or non-existent.

We’ve seen customers with more than 60 products deployed, from over 40 vendors, each intended to address a specific security issue. Having such a large number of products itself presents significant security challenges, though. Combined, all these products have their own vulnerability: support and maintenance. Managing them and keeping them updated generates significant workload, and any mistakes or unresolved issues can easily become new weak points in the organisation’s security.

The situation has been exacerbated by the rapidly increasing popularity of Cloud and Open Source software. Both trends make market entry significantly simpler, allowing new players to quickly and easily offer new solutions, targeting whichever threat happens to be making a big noise at the moment.

Just as poor integration between security products is an issue, so is lack of integration between the components on which they are built. Through weak coding or failure to make use of hardware security features – Intel’s hardware-level Software Guard Extensions (SGX) encryption technology is a good example – security holes are left open, waiting to be exploited.

The good news on the technology front is that we are seeing the early stages of the development of protocols, such as STIX, TAXII and CybOX, allowing different vendors’ products to interact and share standardised threat information. The big security vendors have been promoting the idea of threat information sharing and subsequent action for a while, but only within their own product ecosystems. It’s time for a broader playing field!

Processes

IT security is one of the most important issues facing today’s enterprise, yet, while any self-respecting board will feature directors with responsibility for sales, marketing, operations and finance, few enterprises have a board level CISO.

Similarly few organisations have a comprehensive and thoroughly considered security strategy in place, or proper security processes and policies suitable for today’s threat landscape and ICT usage patterns. A number of industry frameworks exist: ISO 27001, Cyber Essentials, NIST to name but a few; and yet very few organisations adopt these beyond the bare minimum to meet regulatory requirements.

Most organisations spend considerable sums on security technology, but without the right security strategy in place, and user behaviour in line with the right processes and policies, they remain at risk of serious breaches.

People

The hard truth is that some 60% of breaches are down to user error. Recent research obtained through Freedom of Information requests found that 62% of breaches reported to the ICO are down to humans basically getting it wrong. People make poor password choices, use insecure public (and private!) WiFi, and use public Cloud storage and similar services without taking the necessary security precautions. They do not follow, or indeed even know, corporate data classification and usage policies. The list, of course, goes on.

Training has a part to play here, to increase users’ awareness of the importance of security, as well as the behaviours they need to adopt (and discard) to stay secure. However, there will come a point at which the law of diminishing returns kicks in: we all make mistakes – even the most careful, well trained of us.

We need to explore, discover and devise new ways in which technology can help, by removing the human element, where possible and desirable, and by limiting and swiftly rectifying the damage done when human error occurs. Furthermore, we need to leverage ever improving machine learning and artificial intelligence software to help augment human capability.

Enterprises need to work with specialists that can help them understand the nature of the threats they face, and the weak links in their defences that offer miscreants easy ways in. That means closely examining all aspects of their security from each of the technology, processes and people perspectives, to identify actual and potential weaknesses. Then robust, practical, fit-for-purpose security architectures and policies can be built.

For an outline of how this can work, take a look at Logicalis’ three-step methodology here or email us security@uk.logicalis.com to discuss your cyber security needs.

Category: Security

Neil Thurston
February 13, 2017

The explosive growth of Cloud computing in recent years has opened up diverse opportunities for both new and established businesses. However, it has also driven the rise of a multitude of ‘islands of innovation’. With each island needing its own service management, data protection and other specialists, IT departments find themselves wrestling with increased – and increasing – management complexity and cost.

Necessity is the mother of invention, and with cost and complexity becoming increasingly problematic, attitudes to Cloud are changing. Organisations are moving selected tools, resources and services back to on-premises deployment models: we’re seeing the rise of the Hybrid Cloud environment.

The trend towards Hybrid Cloud is driven by an absolute need for operational and service consistency, regardless of the on-premises/Cloud deployment mix – a single set of automation platforms, a single set of operational tools and a single set of policies. We’re looking at a change in ethos, away from multiple islands of innovation, each with its own policies, processes and tools, to a single tool kit – a single way of working – that we can apply to all our workloads and data, regardless of where they actually reside.

Disparate islands in the Cloud have also increasingly put CIOs in the unenviable position of carrying the responsibility for managing and controlling IT but without the capability and authority to do so. Many organisations have experimented (some might say dabbled) with cherry-picked service management frameworks such as ITIL.

With focus shifting to Hybrid Cloud, we’re now seeing greater interest in more pragmatic ITSM frameworks, such as IT4IT, pushing responsibility up the stack and facilitating the move to something more akin to supply chain management than pure hardware, software and IT services management.

There are two key pieces to the Hybrid IT puzzle. On the one hand, there’s the workload: the actual applications and services. On the other, there’s the data. The data is where the value is – the critical component, to be exploited and protected. Workloads, however, can be approached in a more brokered manner.

Properly planned and executed, Hybrid Cloud allows the enterprise to benefit from the best of both the on-premises world and the Cloud world. The ability to select the best environment for each tool, service and resource – a mix which will be different in different industries, and even in different businesses within the same industry – delivers significant benefits in terms of cost, agility, flexibility and scalability.

Key to this is a comprehensive understanding of where you are and where you want to be, before you start putting policies or technology in place. The Logicalis Hybrid IT Workshop can help enormously with this, constructing a clear view of where you are now, and where you want to be.

In the workshop we assess your top applications and services, where they reside and how they’re used in your business. We then look at where you want to get to. Do you want to own your assets, or not? Do you want to take a CAPEX route or an OPEX route? Do you have an inherent Cloud First strategy? What are your licensing issues?

We then use our own analysis tools, developed from our real world experience with customers, to create visualisations showing where you are today, where you want to eventually be and our recommended plan to bridge the gap, in terms of people, processes, technology and phases.

Hybrid Cloud offers significant benefits, but needs to be carefully planned and executed. To find out more about how Logicalis can help, see our website or call us on +44 (0)1753 77720.

Category: Hybrid IT
