A cloud-first approach to data protection

The year 2020 saw a spike in cybercrime across the world. Rising unemployment forced many to turn to criminal activity, and cyberattacks increased exponentially, especially business email compromise (BEC) attacks such as phishing, spear phishing, and whaling, as well as ransomware attacks. These attacks have caused both data and financial losses. With most employees working from home, the threat of data theft and data exfiltration looms large.

Today, the risk of storing data on-premises or on endpoints is higher than ever. That’s why organisations are taking a cloud-first approach to data protection. This article discusses the inadequacies of legacy, on-premises infrastructure for data protection and explains why more organisations are adopting modern cloud architectures.

Threat vectors looming large

According to a report by Group-IB, there were more than 500 successful ransomware attacks in over 45 countries between late 2019 and H1 2020, which works out to at least one ransomware attack every day, somewhere in the world. By Group-IB’s conservative estimates, the total financial damage from ransomware operations exceeded $1 billion ($1,005,186,000), but the actual damage is likely much higher.

Similarly, in the final week of the US elections, healthcare institutions and hospitals in the US were hit by the Ryuk ransomware. The affected institutions could not access their systems and had to fall back on pen-and-paper operations. Lives were at risk as necessary surgeries and medical treatments were postponed and patient medical records were inaccessible. Healthcare is a regulated sector, and hackers know the value of healthcare data: X-ray and other medical scans, diagnostic reports, prescriptions, ECG reports, and lab test reports.

Today, employees across industries work remotely and log in to enterprise servers to access data. In this scenario, data exfiltration is becoming a massive challenge for organisations. A study by IBM Security found that the cost of a data breach has risen 12% over the past five years and now averages $3.92 million.

The crux of the issue is that data exfiltration and data theft can severely tarnish an organisation’s reputation, erode its share price, breach customer and shareholder trust, and even result in customer churn. Stringent regulatory standards and acts such as HIPAA, GDPR, CCPA, and Brazil’s LGPD impose stiff fines and penalties that have historically bankrupted companies or pushed them into the red.

Indian companies doing business with organisations in the US, Europe, or elsewhere will need to comply with the regulations those nations define at an industry level. And if customer data is breached, they will be liable to pay the penalties imposed by those regulatory bodies.

India’s forthcoming Personal Data Protection Bill 2019 (which is close to being passed into law) is expected to impose similar fines as GDPR. The bill aims to protect the privacy of individuals relating to the flow and usage of their personal data.

Legacy infrastructure may not be able to comply with new regulations being introduced in an increasingly digital world. In fact, legacy systems can increase the risk of data loss, so organisations must move away from legacy infrastructure and take a cloud-first approach to data protection.

Legacy infrastructure is expensive, insecure

An organisation needs scale to succeed in today’s highly competitive business environment. Adding new customers, introducing new products and services, and responding to market demand in time all require agility, and to support them the infrastructure must be able to scale up on demand.

Scaling infrastructure on-premise requires colossal investments and the TCO may not be viable in the long term. The shortage of in-house skills is another challenge. CIOs are under tremendous pressure to deliver value. The only way to scale is to embrace disruptive technologies like Cloud, Big Data Analytics, Artificial Intelligence, Machine Learning, and Blockchain.

Traditional data protection tools offered by legacy infrastructure are inadequate to protect data in distributed environments, where employees work outside the perimeter, and to secure it from sophisticated attacks like ransomware.

At the same time, the introduction of new services and innovation by enterprises results in an exponential increase in data that gets generated from multiple sources like customers, partners, employees, supply chains, and other places. And much of this data is unstructured, which poses additional data governance and management challenges. Industry regulations mandate that this data be stored for a certain period, and copies of it need to be maintained.

Some governments insist that data must be stored on servers in their country (data residency). For instance, the Indian Personal Data Protection Bill will regulate how entities process personal data and create a framework of organisational and technical measures for data processing, laying down norms for social media intermediaries, cross-border transfers, the accountability of entities processing personal data, and remedies for unauthorised and harmful processing.

In such a scenario, it would be expensive for an organisation to store its growing data on-premises, as legacy infrastructure is inadequate to protect this data and comply with new data protection laws. Cloud environments are more suitable, as cloud service providers help ensure compliance.

For all these reasons, businesses want to break free from the shackles of captive data centres and are moving away from the investment-heavy legacy approach to a cloud-first approach for data storage and protection.

A cloud-first approach

Forrester predicts that 80 percent of organisations are extremely likely to adopt a cloud data protection solution as more and more businesses pursue cloud-first strategies. This is driven by critical data loss on on-premises infrastructure, a lack of security and scalability, and rising spending on legacy hardware and software.

As enterprises face increasingly stringent compliance regulation, cloud data protection solutions deliver enhanced privacy capabilities that help them keep pace with today’s dynamic business demands.

For instance, as enterprises scale up their operations globally, their infrastructure can extend to multiple clouds. This results in server sprawl and siloed data, posing additional data management challenges. This is where they need cloud data protection and management solutions that can manage and protect these sprawling environments. Given the unprecedented pandemic situation, such cloud solutions can also secure an increasingly remote workforce and bypass stalled supply chains and the limitations of traditional data centres.

The cloud also offers robust resiliency and business continuity – with backup and recovery tools. Storage-as-a-Service provides a flexible, scalable, and reliable storage environment based on various storage technologies like file, block, and object — with guaranteed SLAs. Furthermore, it allows end-users to subscribe to an appropriate combination of storage policies for availability, durability and security of data that can meet various expectations on data resiliency and retention.

Backup and Recovery as a Service offers an end-to-end flexible, scalable, and reliable backup and recovery environment for all kinds of physical, virtual, file system, database, and application data. It extends backup capability by using agents that interface with the data source for transfer, or an image-based method, combined with full and incremental backups. This combination provides an extremely high level of protection against data loss as well as simplified recovery.
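The full-plus-incremental cycle described above can be sketched in a few lines. This is a minimal illustration only; the weekly-full schedule and function names are assumptions for demonstration, not any provider's actual implementation.

```python
# Illustrative sketch of a full-plus-incremental backup cycle.
# The weekly-full / daily-incremental schedule is an assumption
# for demonstration, not a specific provider's policy.

def backup_type(day: int, full_every: int = 7) -> str:
    """Return 'full' on cycle boundaries, 'incremental' otherwise."""
    return "full" if day % full_every == 0 else "incremental"

def restore_chain(target_day: int, full_every: int = 7) -> list[int]:
    """Days whose backups must be replayed, in order, to restore target_day:
    the most recent full backup plus every incremental after it."""
    last_full = (target_day // full_every) * full_every
    return list(range(last_full, target_day + 1))

if __name__ == "__main__":
    # A two-week schedule: fulls on days 0 and 7, incrementals in between.
    print([backup_type(d) for d in range(14)])
    # Restoring day 10 needs the day-7 full plus incrementals 8, 9, 10.
    print(restore_chain(10))
```

The point the article makes follows directly: incrementals keep daily transfers small, while the periodic full bounds the length of the restore chain, which is what simplifies recovery.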

Today, organisations understand the value of cloud data protection solutions, which are far more secure than traditional hardware-based architectures. They are adopting platforms that protect data where it is created, in the cloud, from anywhere, with on-demand scalability (object storage), robust compliance capabilities, and industry-leading security standards.

While cloud migration efforts have been underway for several years, they accelerated dramatically this year. A remote workforce, growing ransomware threats, and questions about data governance have all increased the demand for a cloud-first approach to data protection.

How can CIOs drive digital transformation by maximizing the value of Cloud?

The year 2020 will go down in the history books for many reasons. One of them is that the business world is more distributed than ever: customers, partners, and employees work from their own locations (and rarely their offices) today. What does that mean for businesses? Consumer touchpoints are different, and supply chains and delivery networks have changed, so organisations have to find new ways to deliver value and new experiences to customers.

In response to the pandemic, business organisations had to fundamentally change the way they operate. They had to transform processes, models, and supply chains for service delivery. To sustain business and remain competitive in a post-COVID world, they had to challenge the status quo and make a lot of changes.

Digital is no longer an option 

When the global pandemic gripped the world in March this year, organisations with three to five-year digital transformation plans were forced to execute plans in a few months or days. Either that or they would go out of business.

A new IBM study of global C-suite executives revealed that nearly six in ten organisations have accelerated their digital transformation journeys due to the COVID-19 pandemic. In fact, 66% of executives said they have completed initiatives that previously encountered resistance. In India, 55% of executives plan to prioritise digital transformation efforts over the next two years.

This calls for new skills, strategies, and priorities. And the cloud and associated digital technologies will strongly influence business decisions in the post-COVID era. Organisations need to have a full-fledged cloud strategy and draw up a roadmap for cloud migration.

To achieve this, leading-edge companies are aligning their business transformation efforts with the adoption of public and hybrid cloud platforms. For many sectors, remaining productive during lockdown depended on their cloud-readiness. Operating without relying too heavily on on-premises technology was key and will remain vital in the more digitally minded organisation of the future. With the right approach, strategy, vision, and platform, a modern cloud can ignite end-to-end digital transformation in ways that could only be imagined in the pre-COVID era.

To deliver new and innovative services and customer experiences, businesses, be they large corporates, MSMEs, or start-ups, are embracing disruptive technologies like cloud, IoT, artificial intelligence, machine learning, blockchain, and big data analytics to drive innovative and profitable business models.

For instance, introducing voice interfaces and chatbots for a customer helpdesk is a compute-intensive task that requires big data analytics and artificial intelligence in the cloud. It lets customers simply speak to a search bot when they need help ordering products on an e-commerce website, or place an order by talking to a voice assistant such as Siri or Alexa. The same applies to banking: voice-based interfaces are enabling conversational banking, which also requires processing in the cloud. These services simplify and improve the customer experience and delight customers. But introducing such innovative services requires an overhaul and transformation of traditional business processes: that is digital transformation.

Solving infrastructure & cost challenges

Cloud computing has been around for ages, but CIOs still grapple with cloud challenges such as lack of central control, rising and unpredictable costs, infrastructure complexity, security and compliance, and scaling. Over the years, however, public cloud technology has evolved to address these challenges.

Central Control: Public cloud offers dashboards through which one can monitor and control cloud compute resources centrally, irrespective of where they are hosted (multi-cloud).

Managing Complexity: IT infrastructure is getting increasingly complex, and CIOs have to deal with multiple vendors for cloud resources. Infrastructure is spread over multiple clouds, usually from different vendors, and various best-of-breed solutions are selected and integrated into it. As a result, managing all these clouds and technologies poses a huge challenge. CIOs want to simplify infrastructure management through a single window, or single pane of glass. Cloud orchestration, APIs, dashboards, and other tools are available to do this.

Reducing Costs: Demands on IT resources are increasing while budgets remain flat, and a lack of billing transparency compounds the problem. Public cloud addresses both issues: it offers tremendous cost savings because you make no upfront capital investments in infrastructure, and there is a TCO benefit because you do not invest further to upgrade on-premises infrastructure. You rent the infrastructure and pay only for what you consume, while the cloud service provider invests to grow it. There are savings on energy, cooling, and real estate as well.

And since usage of resources is metered, one can view the exact consumption and billing on a monthly, quarterly, or annual basis. Usage information is provided through dashboards and real time reports, to ensure billing transparency.
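Metered billing reduces to a simple calculation: usage multiplied by a published unit rate, summed across resources. The sketch below illustrates this; the resource names and hourly rates are hypothetical placeholders, not any provider's actual price list.

```python
# Hypothetical pay-per-use billing: sum of metered usage x unit rate.
# Resource names and rates are illustrative, not real price lists.

RATES = {                 # unit price per metered hour
    "vm.small": 0.05,
    "storage.gb": 0.002,  # per GB-hour stored
}

def monthly_bill(usage_hours: dict[str, float]) -> float:
    """Total charge = sum over resources of hours consumed * hourly rate."""
    return round(sum(RATES[r] * h for r, h in usage_hours.items()), 2)

if __name__ == "__main__":
    # One small VM running a full 720-hour month,
    # plus 100 GB stored for the whole month (100 * 720 GB-hours).
    print(monthly_bill({"vm.small": 720, "storage.gb": 100 * 720}))
```

Because every line item is derived from metered consumption, the same records that drive the bill can also populate the dashboards and real-time reports mentioned above, which is where the billing transparency comes from.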

Compliance & Regulation: Regulatory and compliance demands for data retention and protection may be taxing for your business. Public cloud providers maintain certified, compliance-ready infrastructure, easing this burden.

Automated Scaling: Public cloud offers the ability to scale up or down to provision the exact capacity that your business needs, to avoid overprovisioning or under utilisation of deployed resources. Cloud service providers ensure that the resources are available on-demand, throughout the year, even when business peaks during festive seasons. And this scaling can happen automatically.
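The automatic scaling described above typically reduces to a threshold rule evaluated against a utilisation metric. The sketch below shows such a rule in its simplest form; the thresholds and instance limits are illustrative assumptions, not any provider's defaults.

```python
# A minimal threshold-based autoscaling rule, similar in spirit to the
# policies public clouds offer. Thresholds and instance bounds here are
# illustrative assumptions, not a specific provider's defaults.

def scale_decision(current_instances: int, cpu_utilisation: float,
                   high: float = 0.75, low: float = 0.25,
                   min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the new instance count: add one above `high` utilisation,
    remove one below `low`, otherwise hold steady, within fixed bounds."""
    if cpu_utilisation > high and current_instances < max_instances:
        return current_instances + 1
    if cpu_utilisation < low and current_instances > min_instances:
        return current_instances - 1
    return current_instances

if __name__ == "__main__":
    print(scale_decision(3, 0.90))  # busy: scale up
    print(scale_decision(3, 0.10))  # idle: scale down
    print(scale_decision(3, 0.50))  # steady: no change
```

Run periodically against live metrics, a rule like this is what lets capacity track festive-season peaks automatically while releasing resources when demand falls.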

Global Reach: Apart from scale and cost savings, the cloud offers global reach, so your customers can access your services from anywhere in the world. Furthermore, the cloud’s ability to extract value from vast unstructured data sets is second to none, which is essential for IoT and AI. Big data can be processed using specialised analytics technologies in the cloud.

Agility: The cloud also makes your business agile, because it allows you to enhance services and applications quickly and shortens time-to-market for new products and services.

Then there’s the benefit of control and management. A ‘self-service cloud portal’ offers complete management of your compute instances and cloud resources such as network, storage, and security. This self-service nature offers agility, enabling organisations to quickly provision additional resources and introduce enhancements or new services.

With all these advantages, businesses clearly recognise the need for transformation and are gradually leaving legacy technologies behind in favour of next-generation technologies as they pursue competitive advantage. Public cloud is critical to this shift, thanks not only to the flexibility of the delivery model but also to the ease with which servers can be provisioned, reducing financial as well as business risks.

It will not be possible for most companies to transform their businesses digitally unless they move some of their IT applications and infrastructure into public or hybrid clouds.

Key considerations for cloud migration

Regulation and compliance are vital considerations. What compliance standards has your service provider adopted? There are industry-specific standards like HIPAA for data security and privacy, standards like PCI DSS that apply across industries, and region-specific regulations like GDPR. Ask about compliance with all of them.

Keep in mind that the onus of protecting data on the public cloud lies with both the tenant and the cloud service provider. Hence, it is a good idea to engage an external consultant to ensure compliance and adherence to all applicable standards. This should be backed by annual audits and penetration testing to verify the robustness and security of the infrastructure.

You also want to ensure resilience and business continuity. What kind of services and redundancy are available to ensure that?

Ask your cloud service provider for guarantees on uptime, availability, and response time. The other aspects to check are level of redundancy, restoration from failure, and frequency of backup. All this should be backed by service level agreements (SLAs) with penalty clauses for lapses in service delivery.

WAN optimisation, load balancing, and robust network design, with N+N redundancy for resources and hyperscale data centres, ensure high availability. But this should be backed by industry-standard certifications such as ISO 20000, ISO 9001, ISO 27001, PCI DSS, the Uptime Institute Tier Standard, ANSI/BICSI, TIA, OIX-2, and others. These certifications assure credibility, availability, and uptime.

Do you remember what happened when the city of Mumbai lost power on October 12 this year? Most data centres continued operations as they had backup power resources. And that’s why their customers’ businesses were not impacted by the power failure.

A key concern is transparency in accounting and billing. Ask about on-demand consumption billing with no hidden charges. How are charges for bandwidth consumption accounted for? Some service providers do not charge extra for inbound or outbound data transfer and this can result in tremendous cost savings. Do they offer hourly or monthly billing plans?

Public cloud for business leadership

Enterprises that still haven’t adopted cloud technologies will be impeded in their digital transformation journeys by issues with legacy systems, slower adaptability to change, slower speed to market, and an inability to adapt to fast-changing customer expectations.

Companies are recognising the public cloud’s capabilities to generate new business models and promote sustainable competitive advantage. They also acknowledge the need for implementing agile systems and believe that cloud technology is critical to digital transformation.

However, the cloud does present specific challenges, and one needs to do due diligence and ask the right questions. Businesses need to decide which processes and applications should be digitalised. Accordingly, the IT team needs to select the right cloud service provider and model.

The careful selection of a cloud service provider is also crucial. Look at the service provider’s financial strength. Where is your business data being hosted? What kind of guarantees can they give in terms of uptime? What about compliance and security? These are vital questions to ask.

Switching from one cloud service provider to another is possible but rarely wise, owing to technical and business complexity, so look for long-term relationships. An experienced and knowledgeable service provider can ensure a smooth journey to the cloud, and successful digital transformation.

Source: https://www.cnbctv18.com/technology/view-how-can-cios-drive-digital-transformation-by-maximizing-the-value-of-cloud-8011661.htm

HPCaaS – know why it is better than setting up an on-premises environment

High Performance Computing (HPC) is transforming organisations across industries, from healthcare, manufacturing, and finance to energy and telecom. As businesses in these sectors must solve complex problems and calculations, High Performance Computing solutions can work with huge quantities of data and enable high-performance data analysis.

The immense computing prowess of High Performance Computing infrastructure aggregates the power of multiple high-end processors, often boosted with GPUs, to provide quick and accurate results. Moreover, High Performance Computing supercharges digital technologies like Artificial Intelligence (AI) and Data Analytics to deliver data insights faster, giving any business a competitive edge in the market.

Despite the growing demand, High Performance Computing has its own set of challenges. Enterprises need to make huge investments to set up High Performance Computing infrastructure and endure long procurement timelines while operationalising AI infrastructure. Further, High Performance Computing infrastructure requires very high maintenance and specific skill sets to manage, and scaling it is difficult when workloads increase. A cost-benefit analysis also shows that setting up and maintaining an on-site High Performance Computing cluster is increasingly hard to justify: the costs are disproportionate to unexpected demand, and the hardware procurement cycle is never-ending.

Why HPC-as-a-Service is a viable option

Historically, on-premises solutions have been perceived as the proven investment; however, there are significant hidden costs to running and maintaining on-premises High Performance Computing infrastructure. According to Hyperion Research, the demand for on-premises High Performance Computing resources often exceeds capacity by as much as 300%.

Given these roadblocks, High Performance Computing-as-a-Service (HPCaaS) has gained ground lately, as it provides enterprises with simple, intuitive access to supercomputing infrastructure without their having to buy and manage servers or set up data centres. The workloads that run on High Performance Computing systems, for research, engineering, scientific computing, or big data analysis, can also run on High Performance Computing-as-a-Service.

As per the forecasts from Allied Market Research, the global High Performance Computing-as-a-Service market size was valued at $6.28 billion in 2018, and is projected to reach $17.00 billion by 2026, registering a CAGR of 13.3% from 2019 to 2026.
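The quoted forecast can be sanity-checked with the standard compound-growth formula, end value = start value × (1 + CAGR)^years. A quick check with the figures above:

```python
# Sanity-check the quoted market forecast with the compound annual
# growth rate (CAGR) formula: end_value = start_value * (1 + cagr) ** years
# Figures are the ones quoted in the text: $6.28bn (2018), 13.3% CAGR,
# over the 8 years from 2018 to 2026.

start, cagr, years = 6.28, 0.133, 8
projected = start * (1 + cagr) ** years
print(round(projected, 2))  # lands close to the quoted $17.00 billion
```

The small residual difference from the headline $17.00 billion is just rounding in the published CAGR.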

In today’s dynamic environment, organisations that opt for High Performance Computing-as-a-Service are poised to gain competitive advantage and drive greater RoI. Enterprises must look at High Performance Computing-as-a-Service to avoid unexpected cost and performance issues, as compute-intensive processing can be done without making capital investment in hardware, skilled staff, or for developing a High Performance Computing platform. With the support of High Performance Computing-as-a-Service, organisations can also receive efficient database management services with reduced cost.

On-Prem vis-à-vis As-A-Service 

The biggest advantage of High Performance Computing-as-a-Service is cost: it serves users who want to take advantage of High Performance Computing but cannot make the upfront capital investment, and it avoids the prolonged procurement cycles of an on-premises implementation. With flexible pricing models, enterprises pay only for the capacity they use.

For instance, on-premises High Performance Computing requires a large capital investment in GPU servers, storage, network, security, and other supporting infrastructure, which could run into tens of millions of rupees (approximately INR 1-1.5 crore, depending on the scale). High Performance Computing-as-a-Service, by contrast, requires zero capex and offers flexible pricing along with ready-to-use, pre-provisioned High Performance Computing infrastructure, including switching and routing infrastructure, internet bandwidth, firewall, load balancer, and intrusion protection system.
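The capex-versus-opex trade-off above can be framed as a simple break-even comparison. Every figure below is a hypothetical placeholder chosen only to sit within the rupee range the text mentions; none is a quote from any provider.

```python
# Hypothetical capex-vs-opex comparison for HPC over time.
# All figures are illustrative placeholders (in INR), not real prices.

def on_prem_cost(years: float, capex: float = 12_500_000,
                 annual_opex: float = 1_500_000) -> float:
    """Upfront capital outlay (~INR 1.25 crore) plus yearly running costs."""
    return capex + annual_opex * years

def hpcaas_cost(years: float, monthly_fee: float = 400_000) -> float:
    """Pure pay-as-you-consume: no upfront investment, only the usage fee."""
    return monthly_fee * 12 * years

if __name__ == "__main__":
    # Cumulative cost of each model after 1, 3, and 5 years.
    for y in (1, 3, 5):
        print(y, on_prem_cost(y), hpcaas_cost(y))
```

With placeholder numbers like these, the as-a-service model stays cheaper for several years purely because the capital outlay is avoided; where the real break-even falls depends entirely on actual utilisation and negotiated rates.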

High Performance Computing-as-a-Service can also enable organisations to easily scale up their compute power as well as infrastructure. With this kind of scalability, the enterprise can flex their infrastructure to match the workloads instead of throttling workloads based on infrastructure.

The pay-as-you-consume model is also a great enabler in democratising High Performance Computing, bringing powerful computational capabilities to scientific researchers, engineers, and organisations that lack access to on-premises infrastructure or would need to hire expensive resources to manage it. Providers of High Performance Computing-as-a-Service manage infrastructure maintenance so that enterprises can focus on their projects.

Additionally, businesses with a deep focus on innovation can do away with periodic tech or infra refresh cycles, as on-premises High Performance Computing runs the risk of becoming obsolete as technology changes or under-utilised as workloads change. Organisations also incur additional expense when upgrading the infrastructure; service providers, by contrast, can easily handle upgrades and updates for optimum performance. And while enterprises with on-premises High Performance Computing may have to deal with unreliable power, High Performance Computing-as-a-Service runs on fail-safe power infrastructure, thus ensuring 100% uptime.

Making the right choice 

By now it is evident that High Performance Computing-as-a-Service provides speedier data processing with high accuracy and, thanks to low investment costs, has emerged as an alternative to on-premises High Performance Computing clusters. However, despite all its advantages, certain perceived barriers prevent enterprises from realising its true potential.

For organisations to lean on High Performance Computing-as-a-Service to grow their business and accelerate product and service development, they need continued education on its benefits so that the common roadblocks can be broken down. Those benefits clearly suggest there is substantial headroom for growth.

Advantages of High Performance Computing-as-a-Service at a glance

* The cost factor - no need for upfront capital investment

* Access to supercomputing infrastructure without buying or managing servers

* Pay only for capacity utilised

* Organisations can opt for flexible pricing models

* Avoid unexpected cost and performance issues

* Upgrades and updates managed by the service provider

* Fail-safe power infrastructure, ensuring 100% uptime