Why is AI a game changer for the banking industry?

The banking industry has always been at the forefront of adopting emerging technologies and has a strong track record of technology-led leadership. This is true of AI too, which has been adopted by many banks across a variety of important functions. Today, this assumes greater significance, as the usage of online and mobile banking channels has risen significantly and customers have cut down on branch visits in the wake of the pandemic. This has pushed banks to raise the bar for digital experiences, as customers expect the same experience they have become accustomed to from digital upstarts.

With AI, banks can achieve these objectives: the technology can be used to automate processes (leading to greater efficiency), engage customers with personalised experiences (leading to better customer satisfaction) and strengthen risk management. Let us now look at some critical areas in a bank where AI can make a pivotal difference.

Speeding up the process of customer onboarding

AI can make a significant difference in the way banks onboard customers. For example, when a customer wants to open a new bank account or applies for a loan, he or she has to provide a number of documents and identification proofs to the bank. The bank then has to manually scan each document to authenticate it. This is even more applicable when a customer applies for a loan, and the bank checks bank statements, identification proofs and other financial details to determine the creditworthiness of the customer. As these are manual activities, they are error-prone and time-consuming. Additionally, as there is no real-time verification of the information submitted by the customer, there is a possibility of missing or inadequate information.

In an AI-enabled eKYC platform, the entire process can be automated using AI-driven face-match and document-verification algorithms. Once the eKYC process starts, the data from the government-issued ID card and photo can be matched with a live selfie video to authenticate the customer. Data from ID cards is extracted using smart OCR and validated against government-supported databases. This helps complete the KYC in less than one minute, compared with the minimum of 2-3 days that manual customer onboarding requires.
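
The decision step of such a platform can be sketched in a few lines: compare the fields extracted from the ID card (for example, via OCR) against the applicant's form data, and apply a face-match score threshold. This is only an illustrative sketch; the field names, the `verify_ekyc` helper and the 0.90 threshold are assumptions, not any real eKYC vendor's API.

```python
# Illustrative eKYC decision step (hypothetical field names and threshold).
def verify_ekyc(id_fields: dict, form_fields: dict, face_match_score: float,
                threshold: float = 0.90) -> dict:
    """Return an approval decision plus any mismatched fields."""
    # Compare the key identity fields extracted from the ID card with the form.
    mismatches = [k for k in ("name", "dob", "id_number")
                  if id_fields.get(k) != form_fields.get(k)]
    # Approve only if every field matches and the face-match score clears the bar.
    approved = not mismatches and face_match_score >= threshold
    return {"approved": approved, "mismatches": mismatches}

result = verify_ekyc(
    {"name": "A. Kumar", "dob": "1990-01-01", "id_number": "XX123"},
    {"name": "A. Kumar", "dob": "1990-01-01", "id_number": "XX123"},
    face_match_score=0.95,
)
print(result["approved"])  # True: fields match and the score clears the threshold
```

In a production system the face-match score would come from a trained face-recognition model and the field comparison would be fuzzier (transliteration, date formats), but the overall gate works the same way.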

AI can play a big role in reconciliation too. Today, a significant percentage of reconciliation effort is spent on analysing transactions that already match, instead of focusing on the entries that require more analysis and investigation. AI can help automate the reconciliation process end to end and reduce the time and effort it takes to reconcile transactions. With the process fully automated, the scope for manual errors is greatly reduced.
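
The core idea can be illustrated with a minimal sketch: auto-match ledger entries against statement lines on a key such as (reference, amount), so analysts only see the exceptions. The record shapes below are illustrative assumptions; real reconciliation engines add tolerance rules and learned matching on top of this.

```python
# Minimal reconciliation sketch: auto-match on (reference, amount),
# leaving only exceptions for human review.
def reconcile(ledger: list, statement: list) -> tuple:
    statement_keys = {(t["ref"], t["amount"]) for t in statement}
    matched = [t for t in ledger if (t["ref"], t["amount"]) in statement_keys]
    exceptions = [t for t in ledger if (t["ref"], t["amount"]) not in statement_keys]
    return matched, exceptions

ledger = [{"ref": "T1", "amount": 100}, {"ref": "T2", "amount": 250}]
statement = [{"ref": "T1", "amount": 100}, {"ref": "T2", "amount": 255}]
matched, exceptions = reconcile(ledger, statement)
# T1 matches automatically; only T2 (amount differs) needs investigation.
```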

Raising the bar for customer experience

AI’s potential for raising the bar on customer experience is highest in the banking industry. The most basic use of AI can be seen in the way banks have deployed chatbots for answering customer queries, as well as for customer acquisition and engagement. A study by Juniper Research in 2019 estimated that operational cost savings from using chatbots in banking will reach $7.3 billion globally by 2023. While chatbots are now used by almost every bank, a bigger opportunity for AI lies in personalising experiences for customers. Considering the huge amount of data that banks have at their disposal (demographic data, transaction data, credit card spends, e-commerce transactions), banks are extremely well placed to hyper-personalise the experience for each customer.

The Boston Consulting Group estimates that a bank can garner as much as $300 million in revenue growth for every $100 billion it has in assets by personalising its customer interactions. For example, if a customer is paying rent of Rs 20,000 every month, the AI model could recommend a home loan to that customer and show how the EMI on a house could be considerably lower than the rent they are paying. Similar recommendations can be made when a person crosses a certain age threshold (health insurance, education loans, etc). AI can also identify the channel a specific customer prefers for interacting with the bank – some are comfortable with email, some with telephone calls, and some may prefer to interact via chat only.
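
The rent-versus-EMI comparison in that example rests on the standard EMI formula, EMI = P·r·(1+r)^n / ((1+r)^n − 1), where P is the loan amount, r the monthly interest rate and n the number of monthly instalments. A quick sketch (the loan amount, rate and tenure below are illustrative assumptions, not figures from the article):

```python
# Standard EMI (equated monthly instalment) formula.
def emi(principal: float, annual_rate_pct: float, years: int) -> float:
    r = annual_rate_pct / 12 / 100          # monthly interest rate
    n = years * 12                          # number of monthly instalments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical example: Rs 20 lakh loan at 8.5% over 20 years.
monthly_emi = emi(principal=2_000_000, annual_rate_pct=8.5, years=20)
print(round(monthly_emi))  # roughly Rs 17,000-17,500 per month
```

An AI engine with access to transaction data could run exactly this comparison against the observed rent payment to decide whether a home-loan recommendation is worth surfacing.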

Preventing fraud

Despite huge technological advances, fraud remains a common occurrence and has only grown in scale. AI’s ability to learn from and analyse every banking transaction can be used to prevent fraud. For example, if your credit card has never been used abroad and a foreign transaction takes place on it, the AI system can flag it and automatically place a call to you to verify the transaction. Similarly, if an account is logged into at an hour never before recorded in an individual’s transaction history, the AI system can flag this to the bank.
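
The two examples above amount to checking a transaction against the customer's own history. A simplified, rule-based sketch of that check is below; production systems use learned models and risk scores rather than hard rules, so treat the field names and rules here as illustrative assumptions.

```python
# Illustrative rule-based flagging: new country or never-seen login hour.
def flag_transaction(txn: dict, history: list) -> list:
    reasons = []
    seen_countries = {h["country"] for h in history}
    seen_hours = {h["hour"] for h in history}
    if txn["country"] not in seen_countries:
        reasons.append("new country")
    if txn["hour"] not in seen_hours:
        reasons.append("unusual hour")
    return reasons  # an empty list means nothing to flag

history = [{"country": "IN", "hour": 10}, {"country": "IN", "hour": 18}]
print(flag_transaction({"country": "FR", "hour": 3}, history))
# ['new country', 'unusual hour'] -> both rules fire, triggering verification
```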

Banks are also leveraging AI and ML for monitoring purposes. For instance, AI/ML engines are being used to analyse data and break it down to detect any malicious activity or the presence of compromising malware. Apart from security prioritisation, AI also helps in assessing phishing websites, malicious attempts and other vulnerabilities.

Similarly, AI can be a powerful asset in the fight against money laundering. An AI-powered anti-money laundering solution can monitor, for example, small innocuous irregular deposits that are then transferred to a global merchant. By sifting through mountains of data and connected entities, the AI system can identify patterns that signify money laundering and provide a bank with insights into previously unknown relationships.
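
The deposit pattern described above (often called "structuring") can be sketched as a simple detector: several small deposits into an account followed by a large outbound transfer. The Rs 50,000 cap and the count threshold below are illustrative assumptions; real AML systems combine many such signals with network analysis across entities.

```python
# Illustrative structuring detector (thresholds are hypothetical).
def looks_like_structuring(events: list, small_cap: int = 50_000,
                           min_small_deposits: int = 3) -> bool:
    small_deposits = sum(1 for e in events
                         if e["type"] == "deposit" and e["amount"] < small_cap)
    big_transfer_out = any(e["type"] == "transfer_out" and e["amount"] >= small_cap
                           for e in events)
    return small_deposits >= min_small_deposits and big_transfer_out

events = [
    {"type": "deposit", "amount": 40_000},
    {"type": "deposit", "amount": 45_000},
    {"type": "deposit", "amount": 30_000},
    {"type": "transfer_out", "amount": 110_000},
]
print(looks_like_structuring(events))  # True: three small deposits, then a large transfer out
```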

How can AI enable Banking-as-a-Service?

For AI to run efficiently, it needs a huge amount of computational power. While HPC systems are well equipped to power AI-driven workloads, owning them outright is prohibitively expensive. This is where HPC-as-a-Service can be used to access AI capabilities on demand.

This model delivers cost efficiencies with guaranteed performance, requires zero CAPEX investment and can be consumed on a pay-as-you-use basis. Scalability is also not an issue, as capacity can be scaled up or down as per the workloads.

Democratisation of AI is not possible today because AI is not accessible to all and is considered too costly to implement. Adopting AI-enabled Banking-as-a-Service can change this, with all banking-related services consumed as a service. Small banks or financial institutions that do not have access to AI-powered solutions can use Banking-as-a-Service, as these solutions are available on a pay-per-use model. Currently, the usage of AI in banking is low, primarily because of infrastructure-related costs. This is where High Performance Computing-as-a-Service can be a big catalyst for democratising the usage of AI.

Today, a bank’s competition may not strictly be another bank; it can be a pure-play technology company. Technology companies, with their deeper understanding of technology and experience working on huge data sets, are better placed to provide a superior customer experience. Samsung, Google and Apple are some of the best examples of this capability. The popularity of Google Pay shows how relatively new entrants with deep technology expertise can disrupt the payments space. Banks have also understood the significance of these disruptive technologies and are actively partnering with or investing in startups that specialise in emerging technologies. A case in point is ICICI Bank, which has taken a stake in Tapits Technologies, a startup that enables contactless merchant onboarding using eKYC.

In the future, as banks embrace a more digital future, it will be imperative for all of them to have an AI-first approach. Even the government has time and again emphasised that for banks to transform and fulfil India’s growing needs, they must harness technologies like AI and Big Data. As more digital-only banks enter the fray, this approach will be critical in defining their future competitiveness!

The future of enterprise IT: What the next decade holds for us

At the beginning of 2020, the Synergy Group came out with a detailed review of enterprise IT spending over the previous ten years. The analysis revealed that annual spending on cloud infrastructure services had gone from virtually zero to almost $100 billion. In a recent December report, the Synergy Group revealed that enterprise spending on cloud infrastructure services (IaaS, PaaS, and hosted private cloud services) and SaaS reached $65 billion in the third quarter, up 28% from the third quarter of 2019. Undoubtedly, COVID-19 drove changes in enterprise behaviour and sped up the transition from on-premise operations to cloud-based services. These statistics reveal the unstoppable march of cloud computing and point to the fact that the COVID-19 pandemic has only accelerated adoption.

Today, even as the world waits for the vaccine to become widely available, almost everything has been reset. IT infrastructure and procurement will never be the same again, and this will have a big impact on the next decade.

From my experience, I would like to point out the key trends that I believe will be extremely important for enterprise IT over the next ten years:

1 Every company will use the cloud

The cloud’s growth has been unstoppable, as can be seen from the predictions of independent analysts. Industry estimates further suggest that cloud will be an irreplaceable component of enterprise IT in the future. Even today, the growth of almost every emerging technology depends heavily on the cloud, which is the foundational platform for AI, IoT and analytics. While each of these technologies can function on-premise too, it is the cloud that gives them their firepower, and this is set to become more prominent in the future. Enterprise IT will be synonymous with cloud, and every company will use the cloud in one form or another.

2 The edge will move to the center

It is a connected world, and the future will see an explosion of devices connected to the Internet. A McKinsey study, for example, claims that 127 new IoT devices connect to the Internet every second. Data centers will have to be built keeping this trend in mind, as organisations will look to keep data close to the location where it is generated. Called edge computing, this requires placing data center nodes as close as possible to the sources of data and content. As more devices such as autonomous cars require real-time access and decision-making capability, it will not be feasible to transmit data all the way to a traditional cloud. With 5G on the horizon, edge computing will remain in high demand, as it ensures low latency and high speed. This is corroborated by the IDC FutureScape report, which states that by 2022, 40% of enterprises will have doubled their IT asset spending in edge locations.

3 An era of joint cloud offerings

The next decade will be defined by multi-cloud offerings, and every customer will look at having different cloud vendors for specific workloads. In 2019, the industry witnessed a landmark alliance between Oracle and Microsoft. This enables customers to migrate or run their enterprise workloads across Microsoft Azure and Oracle Cloud. Customers can have the best of both worlds, by running one part of a workload within Azure and another part of the same workload within the Oracle Cloud. This agreement heralds the arrival of an era where customers will have the ability to run applications that share data across clouds. In the future, we will see more partnerships between fierce rivals.

4 Domain specific clouds will become the norm

Like other enterprise software such as ERP, the cloud will also become highly domain-specific. An example of this trend is the recent launch of the Microsoft Cloud for Healthcare, which is designed to enhance patient engagement, empower team collaboration, and improve clinical and operational insights by connecting data from across systems to predict risk and improve patient care and operational efficiency. This industry-specific solution provides integrated capabilities for automated, high-value workflows and advanced data analysis functionality for structured and unstructured data, so that healthcare organisations can transform information into insight and insight into action. Going forward, we will see the creation of highly specialised clouds, as specific industries require specialised functionality. This is also needed in regulated industries such as financial services and telecom, where companies must comply with specific regulations laid down by the authorities.

5 Rise in As-a-Service models

With cloud adoption increasing, there will be a rise in affordable ‘As-a-Service’ models for specific industries. Today, thanks to the cloud, almost every service can be offered virtually. Enterprises will combine intelligent analytics with products, leading to an era of productised services. In the future, almost every machine will have the option of being serviced remotely. The availability of cheap bandwidth, coupled with a rise in intelligent devices, will lead to enterprises providing data insights for their devices.

We will also see a rise in industry-focused ‘As-a-Service’ models. With no limits on computational power, new industry-focused models will emerge. For example, small banks or financial firms that do not have the financial ability to invest in emerging technologies such as AI can make use of ‘Banking-as-a-Service’ and consume services on a pay-per-use model. Similarly, pharmaceutical firms can use a service such as ‘Drug discovery-as-a-Service’ to tap the technological capabilities of specialist firms and significantly reduce the time needed to discover a drug. The manufacturing sector, likewise, can use ‘Manufacturing-as-a-Service’ to reduce its manufacturing costs. A company called 3D Hubs, for example, has built a common hub where manufacturers that do not want to invest in 3D printers of their own can share them. In the next few years, the adoption of ‘As-a-Service’ models will be witnessed in every sector.

6 Democratisation of AI

While AI holds huge promise for transforming every possible industry, it is limited by the huge computational power required to run AI systems. In 2018, OpenAI, an AI research and development firm, highlighted that the amount of computational power required to train the largest AI models has doubled every 3.4 months since 2012. Looking at this increased demand for computational power, researchers at the Massachusetts Institute of Technology recently warned that deep learning is hitting computational limits. However, with more cloud power becoming available, AI will become truly mainstream. As AI needs more data to learn, a cloud model can help in ingesting more data, leading to more learning. A cloud model is also more economical, as it allows enterprises to purchase only the specific computational power they need, even if it is for a short duration.
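
To appreciate how steep the OpenAI figure is: doubling every 3.4 months means compute grows by a factor of 2^(months/3.4). Over the six years from 2012 to 2018 (72 months), that is roughly 2^21, a growth factor in the millions. A one-line check:

```python
# Growth factor implied by a fixed doubling period (2012-2018 = 72 months).
def compute_growth_factor(months: float, doubling_period: float = 3.4) -> float:
    return 2 ** (months / doubling_period)

print(f"{compute_growth_factor(72):.3g}")  # on the order of a few million
```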

7 Energy efficiency will be the new benchmark

In the future, energy efficiency will be a competitive benchmark for data center providers. Research firm Gartner estimates that power costs will increase at least 10% per year due to rising cost per kilowatt-hour (kWh) and underlying demand. Close to 70% of a hyperscale colocation data center’s operational expenditure is power, and as demand increases, this number is only set to rise. From innovative cooling mechanisms to extensive use of natural gas, solar or wind energy, the future will see a rise in energy-related innovations as energy efficiency becomes the new benchmark.

8 India will become the global hub for data centers

India already has a number of factors that position it as a global hub for data centers. This can be seen from the huge investments the sector has received. A recent report by property consulting firm Anarock states that India’s data centers have received $977 million in private equity and strategic investments since 2008, of which nearly 40%, or approximately $396 million, was infused during January-September 2020. The report further states that India will add at least 28 large hyperscale data centers over the next three years. The reasons are clear: India is already well known across the world as a software services powerhouse and is home to a large developer population. The Progressive Policy Institute (PPI) expects India to overtake the US as the world’s largest developer population center by 2024. Data consumption is increasing rapidly, and fast-rising e-commerce and growing consumption of OTT services are also fueling demand for data. Looking at current demand and where it is headed, one can visualise the huge growth coming for data centers.

How will businesses benefit?

Each of the above trends points to an era in which organisations will increasingly use data insights to improve their efficiency and productivity. For example, the democratisation of AI will level the playing field between large and small players. Companies can use the ‘As-a-Service’ model to lower entry barriers and start experimenting with emerging technologies on the foundation of cloud. Financial or technological capability will no longer be a roadblock, and domain experts in sectors such as healthcare or manufacturing can use the potential of AI to solve some of the biggest problems in their industries. Simply put, there are no restrictions, and any company or individual can start experimenting at negligible cost.

Data centers have to think beyond ‘infrastructure’

Data center players will also have to be nimble and think innovatively to start offering services and solutions that are beyond the usual IaaS or PaaS offerings. For example, can data center players offer ‘High Performance Computing-as-a-Service’ to say, a small pharmaceutical company? Can data center service providers create unique co-created solutions by taking in active inputs from the community and solving known problems at a price point that they can afford? Can data center players create their own IP that helps their clients improve their energy efficiencies by a significant margin?

The future will belong to data center companies that can answer such questions and solve the challenges faced by the industry – service providers that can scale quickly without limits and offer intelligent, outcome-based models that help clients achieve their business objectives through a portfolio of ‘As-a-Service’ models.


Source: https://www.techcircle.in/2021/01/25/the-future-of-enterprise-it-what-the-next-decade-holds-for-us

How is the convergence of HPC and AI transforming the healthcare industry?

At the start of 2020, a promising development involving the use of Artificial Intelligence (AI) in detecting breast cancer was announced. Researchers from Google Health, DeepMind, Imperial College London, the NHS, and Northwestern University in the US created an AI model that was able to correctly identify cancer from X-ray images with accuracy comparable to that of expert radiologists. The AI model, which was trained by analysing images from close to 29,000 women, has the potential to revolutionise healthcare, as it can not only reduce the probability of errors but also alleviate the pressure on healthcare systems.

As we can see, while the potential is huge, the biggest challenge in healthcare today is that solving complex problems requires vast amounts of data and extreme computational power to analyse that data. For example, in 2018, OpenAI, an AI research and development firm, estimated that the amount of computational power required to train the largest AI models had doubled every 3.4 months since 2012. Looking at the increased demand for computational power for AI, researchers at the Massachusetts Institute of Technology recently warned that deep learning is hitting computational limits. The researchers concluded that progress in deep learning is dependent on an increase in computational ability.

In this context, the convergence of AI and High-Performance Computing (HPC) is extremely beneficial, as it can lead to a win-win situation for every stakeholder in the healthcare ecosystem. HPC and AI have a symbiotic relationship and complement each other. HPC utilises a cluster of systems working together as a cohesive unit to achieve high-performance goals, while AI needs specialised hardware that can process trillions of calculations per second. This is why HPC is perfectly suited to AI.

The costs of most traditional HPC systems are disproportionately high when scaled to meet unexpected demand. To avoid this, healthcare firms can consider High-Performance Computing-as-a-Service (HPCaaS), which offers cost efficiencies with guaranteed performance, requires zero CAPEX investment and can be consumed on a pay-as-you-use model. Scalability is also not an issue, as capacity can be scaled up or down as per the workloads.

GPUs – the heart of HPC

Today, HPC systems leverage modern GPUs that contain hundreds of processing units capable of handling huge numbers of operations per second. With the ability to run a large number of processes in parallel, GPUs in HPC systems can process large data sets in less time. From an AI perspective, this also allows organisations to process significantly higher volumes of data, which helps improve the AI model.

Leveraged intelligently, AI can significantly impact every healthcare segment – from predictive diagnostics to personalised treatments – which can have a big impact on drug development and clinical research. More and more data is being generated and stored today: electronic health records are now available for many people, allowing researchers and doctors to look at genetic information, medical history and allergies, and understand how technology can help in making better treatment decisions.

Today, in clinical trials, the same drug is given to multiple people. However, as every human being has a different genetic makeup, the ideal approach would be to personalise drugs for each person. This has not been possible to date, owing to the huge challenge of collecting and analysing data from an enormous number of records. With AI and machine learning, it is now possible to analyse this data far faster than manual processes allow.

One of the best examples of using AI to find effective drugs can be seen in scientists’ recent efforts to identify possible drugs against COVID-19. Scientists at the University of California, Riverside, used machine learning to identify hundreds of potential new medicines that could help treat COVID-19. Given the race against time to identify probable drug candidates, the scientists used machine learning techniques to screen more than 10 million commercially available small molecules from a database of 200 million chemicals and identified the most probable candidates for the 65 human proteins that interact with SARS-CoV-2 proteins. The machine learning model also helped the researchers screen out toxic drugs.

This was used to create a drug discovery pipeline that could interfere with the entry and replication of the SARS-CoV-2 virus in the body. These kinds of experiments need an HPC infrastructure that can run AI algorithms. Given the time constraints in finding effective drugs against COVID-19, AI has been a game-changer.

The combined force

The combination of AI and HPC can also be used for real-time prediction of clinical interventions in intensive care units. With real-time monitoring of patient vital signs such as blood pressure, heart rate and glucose levels, more precise future treatment can be planned. There are several other inspiring examples around the globe.

How HPC benefits Artificial Intelligence:
  • Powered by GPUs – GPU instances process AI-based algorithms in parallel, taking load off the CPUs to deliver analysis more efficiently and faster.
  • Data volume – With super computational power, HPC can churn through volumes of data with accuracy, thus aiding AI.
  • Cost efficient – HPCaaS provides more cost-effective access to supercomputing without the need to invest in or maintain the hardware. One can access HPC with pay-as-you-go pricing and avoid upfront capital costs.

Boston-based startup FDNA uses facial recognition techniques to identify close to 50 known genetic syndromes from photographs of patients. The company used cloud-based GPUs to analyse the huge amounts of data received from clinics and geneticists around the world, and used this data to build its algorithm. Today, its algorithm is used by 70 percent of geneticists worldwide and has become extremely useful in advancing diagnostics for rare diseases.

Similarly, New York University’s Langone School of Medicine has demonstrated how its team used deep learning to predict 200 ailments three months earlier than traditional methods by analysing electronic health records such as X-rays, lab tests and doctors’ notes. The more data the AI ingests, the better its accuracy becomes.

In the future, as more remote healthcare models come into play, the availability of data in electronic forms will be huge. This will pave the way for more precise AI-enabled healthcare models, as healthcare officials use the huge data at their disposal to create intelligent algorithms that can dramatically improve the way healthcare services are delivered and consumed.

India Enterprise Cloud Survey 2020

The first edition of the India Enterprise Cloud Survey provides the latest trends and insights on Indian enterprises’ cloud adoption and usage journey. While the cloud has taken center stage today, it has been around for quite some time. The COVID crisis has fast-tracked adoption and has been a serious reminder to design IT systems in a way that allows for agility, resilience and efficiency amid disruption and the fast reformation of organisational boundaries.

The external VUCA environment has been demanding an IT platform that supports businesses in innovating amid disruption, though the shift had been relatively gradual. COVID has undoubtedly changed the pace. This research report provides insights from leading CIOs and IT heads of large and mid-sized enterprises across multiple industries.

Key Highlights of the research –

  • Cloud is mainstream today, catalysed in a big way by COVID, with 77% of enterprises planning to implement their enterprise cloud strategy over the next 12 to 18 months
  • 63% of Indian enterprises see cloud as the key mode of infrastructure hosting in 2022, as compared to 37% in 2020
  • India cloud spending is set to grow at 15 to 17% in 2021, twice the 7 to 9% rate at which overall enterprise IT spending is estimated to grow


A cloud-first approach to data protection

The year 2020 saw a spike in cybercrime across the world. Rising unemployment forced many to turn to criminal activity, and cyberattacks increased exponentially – especially business email compromise (BEC) attacks like phishing, spear phishing and whaling, as well as ransomware attacks. These attacks have resulted in data and financial losses. With most employees working from home, the threat of data theft and data exfiltration looms large.

Today, the risk of storing data on-premise or on endpoints is higher than ever. That’s why organisations are taking a cloud-first approach to data protection. This article discusses the inadequacies of on-premise, legacy infrastructure for data protection and explains why more organisations are adopting modern cloud architectures.

Threat vectors looming large

According to a report by Group-IB, there were more than 500 successful ransomware attacks in over 45 countries between late 2019 and H1 2020, which means at least one ransomware attack occurred every day, somewhere in the world. By Group-IB’s conservative estimates, the total financial damage from ransomware operations amounted to over $1 billion ($1,005,186,000), but the actual damage is likely to be much higher.

Similarly, in the final week of the US elections, healthcare institutions and hospitals in the US were hit by Ryuk ransomware. The affected institutions could not access their systems and had to resort to pen-and-paper operations. Lives were at risk as necessary surgeries and medical treatments were postponed and patient medical records were inaccessible. Healthcare is a regulated sector, and hackers know the value of healthcare data: X-ray scans, medical scans, diagnostic reports, medical prescriptions, ECG reports and lab test reports.

Today, employees across industries work remotely and log in to enterprise servers to access data. In this scenario, data exfiltration is becoming a massive challenge for organisations. A study by IBM Security says the cost of a data breach has risen 12% over the past five years and now stands at $3.92 million on average.

The crux of the issue is that data exfiltration and data theft can severely tarnish an organisation’s reputation, erode its share price, breach customer and shareholder trust, and even result in customer churn. Stringent regulatory standards and acts like HIPAA, GDPR, CCPA and Brazil’s LGPD impose stiff fines and penalties that have historically bankrupted companies or put them in the red.

Indian companies doing business with organisations in the US, Europe or elsewhere will need to comply with the regulations defined by those nations at an industry level. And if customer data is breached, they will be liable to pay the penalties imposed by those regulatory bodies.

India’s forthcoming Personal Data Protection Bill 2019 (which is close to being passed into law) is expected to impose similar fines as GDPR. The bill aims to protect the privacy of individuals relating to the flow and usage of their personal data.

Legacy infrastructure may not be able to comply with the new regulations being introduced in an increasingly digital world. In fact, legacy infrastructure can increase the risk of data loss; hence, organisations must move away from it and take a cloud-first approach to data protection.

Legacy infrastructure is expensive, insecure

An organisation needs scale to succeed in today’s highly competitive business environment. Adding new customers, introducing new products and services, and responding to market demand in a timely manner all require agility; to support these, the infrastructure should be able to scale up on demand.

Scaling infrastructure on-premise requires colossal investments and the TCO may not be viable in the long term. The shortage of in-house skills is another challenge. CIOs are under tremendous pressure to deliver value. The only way to scale is to embrace disruptive technologies like Cloud, Big Data Analytics, Artificial Intelligence, Machine Learning, and Blockchain.

Traditional data protection tools offered by legacy infrastructure are inadequate to protect data in distributed environments, where employees work outside the perimeter, and to secure it from sophisticated attacks like ransomware.

At the same time, the introduction of new services and innovation by enterprises results in an exponential increase in data that gets generated from multiple sources like customers, partners, employees, supply chains, and other places. And much of this data is unstructured, which poses additional data governance and management challenges. Industry regulations mandate that this data be stored for a certain period, and copies of it need to be maintained.

Some governments insist that data must be stored on servers in their country (data residency). For instance, the Indian Personal Data Protection Bill will regulate how entities process personal data and create a framework for organisational and technical measures in the processing of data, laying down norms for social media intermediaries, cross-border transfers, accountability of entities processing personal data, and remedies for unauthorised and harmful processing.

In such a scenario, it would be expensive for an organisation to store its growing data on-premise, as legacy infrastructure is inadequate to protect this data and comply with new data protection laws. Cloud environments are more suitable as cloud service providers ensure compliance.

For all these reasons, businesses want to break free from the shackles of captive data centers and embrace a cloud-first approach for rising data protection needs. To do that, they are moving away from the investment-heavy and legacy approach to a cloud-first approach for data storage and protection.

A cloud-first approach

Forrester predicts that 80 percent of organisations are extremely likely to adopt a cloud data protection solution, as more and more businesses pursue cloud-first strategies. This is driven by critical data loss with on-premises infrastructure, a lack of security and scalability, and rising spend on legacy hardware and software.

As enterprises face increasingly stringent compliance regulations, cloud data protection solutions deliver enhanced privacy capabilities that help them keep pace with today’s dynamic business demands.

For instance, as enterprises scale up their operations globally, their infrastructure can extend to multiple clouds. This results in server sprawl and siloed data, posing additional data management challenges. This is where they need to adopt Cloud Data Protection and Management solutions that can manage and protect these sprawling environments. These cloud solutions can also secure an increasingly remote workforce and bypass stalled supply chains and the limitations of traditional data centres amid the unprecedented pandemic situation.

The cloud also offers robust resiliency and business continuity – with backup and recovery tools. Storage-as-a-Service provides a flexible, scalable, and reliable storage environment based on various storage technologies like file, block, and object — with guaranteed SLAs. Furthermore, it allows end-users to subscribe to an appropriate combination of storage policies for availability, durability and security of data that can meet various expectations on data resiliency and retention.

Backup & Recovery as a Service offers an end-to-end flexible, scalable, and reliable backup and recovery environment for all kinds of physical, virtual, file system, database, and application data. It further extends backup capability by using agents that interface with source systems for data transfer, or an image-based method, combined with full and incremental backups. This combination provides an extremely high level of protection against data loss as well as simplified recovery.
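
To make the recovery mechanism concrete, here is a minimal illustrative sketch (not any vendor’s API; names and data are invented) of how a restore replays a chain of incremental backups on top of a full backup:

```python
# Illustrative model: a backup is a dict of {path: content}. A full
# backup captures everything; each incremental records only the files
# that changed since the previous backup in the chain.

def restore(full_backup, incrementals):
    """Rebuild the latest state: start from the full backup, then
    replay each incremental in order (oldest first)."""
    state = dict(full_backup)
    for inc in incrementals:
        state.update(inc)  # changed/new files overwrite older copies
    return state

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"a.txt": "v2"}, {"c.txt": "v1"}]
print(restore(full, incs))  # {'a.txt': 'v2', 'b.txt': 'v1', 'c.txt': 'v1'}
```

Real backup software tracks changed blocks rather than whole files, but the recovery logic is the same: start from the last full backup and apply each incremental in order.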

Today, organisations understand the value of cloud data protection solutions, which are much more secure than traditional hardware-based architectures. They are adopting platforms to protect data where it is being created — in the cloud — from anywhere, with on-demand scalability (object storage), robust compliance capabilities, and industry-leading security standards.

While cloud migration efforts have been underway for several years, they have accelerated dramatically this year. A remote workforce, growing ransomware threats, and questions about data governance have significantly increased the demand for a cloud-first approach to data protection.

How can CIOs drive digital transformation by maximizing the value of Cloud?

The year 2020 will go down in history books for many reasons. One of those is that the business world is more distributed than ever — customers, partners, and employees work from their own locations (and rarely their offices) today. What does that mean for businesses? The consumer touchpoints are different today, wherein supply chains and delivery networks have changed. This is where organisations have to find new ways to deliver value and new experiences to customers.

In response to the pandemic, business organisations had to fundamentally change the way they operate. They had to transform processes, models, and supply chains for service delivery. To sustain business and remain competitive in a post-COVID world, they had to challenge the status quo and make a lot of changes.

Digital is no longer an option 

When the global pandemic gripped the world in March this year, organisations with three to five-year digital transformation plans were forced to execute plans in a few months or days. Either that or they would go out of business.

A new IBM study of global C-Suite executives revealed that nearly six in 10 organisations have accelerated their digital transformation journey due to the COVID-19 pandemic. In fact, 66% of executives said they have completed initiatives that previously encountered resistance. In India, 55% of executives plan to prioritise digital transformation efforts over the next two years.

This calls for new skills, strategies, and priorities. And the cloud and associated digital technologies will strongly influence business decisions in the post-COVID era. Organisations need to have a full-fledged cloud strategy and draw up a roadmap for cloud migration.

To achieve this, the leading-edge companies are aligning their business transformation efforts with the adoption of public and hybrid cloud platforms. For many sectors, remaining productive during lockdown depended on their cloud-readiness. Operating without relying too heavily on on-premise technology was key and will remain vital in the more digitally minded organisation of the future. In a way, we can say that with the right approach, strategy, vision, and platform, a modern cloud can ignite end-to-end digital transformation in ways that could only be imagined in the pre-Covid era.

To deliver new and innovative services and customer experiences, businesses – be it large corporates, MSMEs, or  start-ups – all are embracing disruptive technologies like cloud, IoT, artificial intelligence, machine learning, blockchain, big data analytics, etc., to drive innovative and profitable business models.

For instance, introducing voice interfaces and chatbots for a customer helpdesk is a compute-intensive task that requires big data analytics and artificial intelligence in the cloud. This enables customers to simply speak to a search bot if they need help ordering products on an e-commerce website. They can also order the product just by speaking to a voice bot like Siri or Alexa. The same is applicable to banking services: voice-based interfaces are enabling conversational banking, which also requires processing in the cloud. These services simplify and improve the customer experience and provide customer delight. But introducing such innovative services requires an overhaul and transformation of traditional business processes – that’s digital transformation.

Solving infrastructure & cost challenges

Cloud computing has been around for ages, but CIOs still grapple with cloud challenges such as lack of central control, rising / unpredicted cost, complexity of infrastructure, security & compliance, and scaling. However, over the years, public cloud technology has evolved to address these challenges.

Central Control: Public cloud offers dashboards through which one can monitor and control cloud compute resources centrally, irrespective of where they are hosted (multicloud).

Managing Complexity: IT infrastructure is getting increasingly complex, and CIOs have to deal with multiple vendors for cloud resources. Infrastructure is spread out over multiple clouds, usually from different vendors, and various best-of-breed solutions are selected and integrated into the infrastructure. As a result, managing all these clouds and technologies poses a huge challenge. CIOs want to simplify the management of infrastructure through a single window or single pane of glass. Cloud orchestration, APIs, dashboards, and other tools are available to do this.

Reducing Costs: Demands on IT resources are increasing, but budgets remain the same, and a lack of billing transparency adds to the problem. Public cloud addresses both issues, as it offers tremendous cost savings: you do not make upfront capital investments in infrastructure. There’s also a TCO benefit, since you do not make additional investments to upgrade on-premise infrastructure; you rent the infrastructure and pay only for what you consume, while the cloud service provider makes the additional investments to grow it. There are cost savings on energy, cooling, and real estate as well.

And since usage of resources is metered, one can view the exact consumption and billing on a monthly, quarterly, or annual basis. Usage information is provided through dashboards and real time reports, to ensure billing transparency.
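
As a hedged illustration of such metered billing (the rates and resource names below are invented, not any provider’s price list), a bill is simply the metered quantity of each resource multiplied by its per-unit rate:

```python
# Hypothetical per-unit rates (illustrative only, not a real price list).
RATES = {"vcpu_hours": 0.05, "storage_gb_hours": 0.0002, "egress_gb": 0.08}

def monthly_bill(usage):
    """Turn metered consumption ({resource: quantity}) into an
    itemised bill and a total, mirroring a billing dashboard."""
    items = {res: round(qty * RATES[res], 2) for res, qty in usage.items()}
    return items, round(sum(items.values()), 2)

# Two always-on 1-vCPU instances (~1440 vCPU-hours), 500 GB stored all
# month (~360000 GB-hours), and 50 GB of outbound data transfer:
items, total = monthly_bill(
    {"vcpu_hours": 1440, "storage_gb_hours": 360000, "egress_gb": 50}
)
print(items, total)  # total: 148.0
```

Because every line item is derived from metered usage, the same computation backs both the monthly invoice and the real-time dashboard view.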

Compliance & Regulation: Regulatory and compliance demands for data retention and protection may be taxing for your business. Cloud service providers shoulder much of this burden, as they maintain certified, compliance-ready infrastructure.

Automated Scaling: Public cloud offers the ability to scale up or down to provision the exact capacity that your business needs, to avoid overprovisioning or under utilisation of deployed resources. Cloud service providers ensure that the resources are available on-demand, throughout the year, even when business peaks during festive seasons. And this scaling can happen automatically.
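
A minimal sketch of how such an automated scaling rule can work, assuming a target-tracking policy of the kind public clouds commonly offer (the target, floor, and ceiling here are illustrative):

```python
import math

def desired_instances(current, cpu_utilisation, target=0.6,
                      min_n=2, max_n=20):
    """Target-tracking rule: choose the instance count that brings
    average per-instance CPU utilisation back towards `target`,
    clamped to a configured floor and ceiling."""
    desired = math.ceil(current * cpu_utilisation / target)
    return max(min_n, min(max_n, desired))

print(desired_instances(4, 0.9))  # 6 -> scale out under peak load
print(desired_instances(4, 0.2))  # 2 -> scale in, bounded by the floor
```

The floor keeps a baseline of capacity always available, while the ceiling caps spend; between the two, capacity tracks demand automatically, including festive-season peaks.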

Global Reach: Apart from scale and cost savings, the cloud offers global reach, so that your customers can access your services from anywhere in the world. Furthermore, the cloud’s ability to unlock the value of vast unstructured data sets is second to none, which in turn is essential for IoT and AI. Big Data can be processed using specialised analytics technologies in the cloud.

Agility: The cloud also makes your business agile because it allows you to quickly enhance services and applications, offering a shorter time-to-market for launching new products and services.

Then there’s the benefit of control and management. A ‘self-service cloud portal’ offers complete management of your compute instances and cloud resources such as network, storage, and security.  The self-service nature offers agility, enabling organisations to quickly provision additional resources and introduce enhancements or new services.

With all these advantages, businesses clearly recognise the need for transformation and are gradually leaving legacy technologies behind in favour of next-generation technologies as they pursue competitive advantage. Public cloud is critical to this shift, thanks not only to the flexibility of the delivery model but also to the ease with which servers can be provisioned, reducing financial as well as business risks.

It will not be possible for most companies to transform their businesses digitally unless they move some of their IT applications and infrastructure into public or hybrid clouds.

Key considerations for cloud migration

Regulation and compliance are vital considerations. What kind of compliance standards has your service provider adopted? There are industry-specific standards like HIPAA for data security and privacy. Besides, there are standards like PCI-DSS applicable across industries — and regionally specific standards like GDPR. Ask about compliance with all those standards.

Keep in mind that the onus of protecting data on the public cloud lies with both – the tenant and the cloud service provider. Hence, it would be a good idea to hire an external consultant’s services to ensure compliance and adherence to all the standards. This should be backed by annual audits and penetration testing to test the robustness and security of the infrastructure.

You also want to ensure resilience and business continuity. What kind of services and redundancy are available to ensure that?

Ask your cloud service provider for guarantees on uptime, availability, and response time. The other aspects to check are level of redundancy, restoration from failure, and frequency of backup. All this should be backed by service level agreements (SLAs) with penalty clauses for lapses in service delivery.

WAN optimisation, load balancing, and robust network design, with N+N redundancy for resources and hyperscale data centres, ensure high availability. But this should be backed by industry-standard certifications such as ISO 20000, ISO 9001, ISO 27001, PCI-DSS, Uptime Institute Tier Standard, ANSI/BICSI, TIA, OIX-2, and others. These certifications assure credibility, availability, and uptime.

Do you remember what happened when the city of Mumbai lost power on October 12 this year? Most data centres continued operations as they had backup power resources. And that’s why their customers’ businesses were not impacted by the power failure.

A key concern is transparency in accounting and billing. Ask about on-demand consumption billing with no hidden charges. How are charges for bandwidth consumption accounted for? Some service providers do not charge extra for inbound or outbound data transfer and this can result in tremendous cost savings. Do they offer hourly or monthly billing plans?

Public cloud for business leadership

Enterprises that still haven’t implemented cloud technologies will be impeded in their digital transformation journeys because of issues with legacy systems, slower adaptability to change, longer time to market, and an inability to adapt to fast-changing customer expectations.

Companies are recognising the public cloud’s capabilities to generate new business models and promote sustainable competitive advantage. They also acknowledge the need for implementing agile systems and believe that cloud technology is critical to digital transformation.

However, the cloud does present specific challenges, and one needs to do due diligence and ask the right questions. Businesses need to decide which processes and applications should be digitalised. Accordingly, the IT team needs to select the right cloud service provider and model.

The careful selection of a cloud service provider is also crucial. Look at the service provider’s financial strength. Where is your business data being hosted? What kind of guarantees can they give in terms of uptime? What about compliance and security? These are vital questions to ask.

Switching from one cloud service provider to another is possible but rarely wise, due to the many technical and business complexities involved, so look for long-term relationships. An experienced and knowledgeable service provider can ensure a smooth journey to the cloud – and successful digital transformation.

Source: https://www.cnbctv18.com/technology/view-how-can-cios-drive-digital-transformation-by-maximizing-the-value-of-cloud-8011661.htm

HPCaaS – know why it is better than setting up an On-Premise environment

High Performance Computing (HPC) is transforming organisations across industries, from healthcare, manufacturing, finance to energy and telecom. As businesses in these sectors require dealing with complex problems and calculations, High Performance Computing solutions can work with huge quantities of data and enable high performance data analysis.

The immense computing prowess of High Performance Computing infrastructure aggregates the power of multiple high-end processors, boosted with GPUs, to provide quick and accurate results. Moreover, High Performance Computing supercharges digital technologies like Artificial Intelligence (AI) and Data Analytics to deliver data insights faster and gives any business a competitive edge in the market.

Despite the growing demand, High Performance Computing has its own set of challenges. For instance, enterprises need to make huge investments to set up a High Performance Computing infrastructure and undergo long procurement timelines while operationalising AI infrastructure. Further, High Performance Computing infrastructure requires extremely high maintenance and specific skill sets to manage; at the same time, scaling it is difficult when workloads increase. A cost-benefit analysis also indicates that setting up and maintaining an on-site High Performance Computing cluster is increasingly hard to justify: costs rise disproportionately to meet unexpected demand, and the hardware procurement cycle is never-ending.

Why HPC-as-a-Service is a viable option

Historically, on-premises solutions have been perceived as the proven investment; however, there are significant hidden costs to running and maintaining on-premises High Performance Computing infrastructure. According to Hyperion Research, the demand for on-premises High Performance Computing resources often exceeds capacity by as much as 300%.

Looking at these roadblocks, the whole concept of High Performance Computing-as-a-Service (HPCaaS) has picked up lately, as it provides enterprises with simple and intuitive access to supercomputing infrastructure wherein they don’t have to buy and manage their own servers or set up data centers. For example, the workloads required for research, engineering, scientific computing or Big Data Analysis, which run on High Performance Computing systems, can also run on High Performance Computing-as-a-Service.

As per the forecasts from Allied Market Research, the global High Performance Computing-as-a-Service market size was valued at $6.28 billion in 2018, and is projected to reach $17.00 billion by 2026, registering a CAGR of 13.3% from 2019 to 2026.

In today’s dynamic environment, organisations that opt for High Performance Computing-as-a-Service are poised to gain competitive advantage and drive greater RoI. Enterprises must look at High Performance Computing-as-a-Service to avoid unexpected cost and performance issues, as compute-intensive processing can be done without making capital investment in hardware, skilled staff, or for developing a High Performance Computing platform. With the support of High Performance Computing-as-a-Service, organisations can also receive efficient database management services with reduced cost.

On-Prem vis-à-vis As-A-Service 

The biggest advantage of leveraging High Performance Computing-as-a-Service is the ‘cost’ factor: it suits users who want to take advantage of High Performance Computing but cannot make the upfront capital investment, or who want to avoid the prolonged procurement cycles of on-premises infrastructure implementation. With flexible pricing models, enterprises just need to pay for the capacity they use.

For instance, on-premises High Performance Computing requires large capital investment in GPU servers, storage, network, security, and other supporting infrastructure which could run into tens of millions of Rupees, approximately INR 1-1.5 crore, depending on the scale of the infrastructure; whereas, High Performance Computing-as-a-Service offers zero Capex investment with flexible pricing along with ready-to-use pre-provisioned High Performance Computing infrastructure including switching routing infrastructure, internet bandwidth, firewall, load balancer, and intrusion protection system.
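
A back-of-envelope comparison illustrates the point. All figures below are hypothetical placeholders for the sake of the arithmetic, not quotes from any provider:

```python
def onprem_cost(months, capex=12_000_000, monthly_opex=150_000):
    """On-premises: upfront capital (servers, storage, network) plus
    recurring running costs (power, cooling, staff). INR, hypothetical."""
    return capex + monthly_opex * months

def hpcaas_cost(months, gpu_hours_per_month=2_000, rate_per_gpu_hour=150):
    """HPCaaS: zero capex; pay only for metered GPU hours consumed."""
    return gpu_hours_per_month * rate_per_gpu_hour * months

for months in (12, 36):
    print(months, onprem_cost(months), hpcaas_cost(months))
# At this assumed usage level, the as-a-service bill stays well below
# the on-premises outlay even after three years.
```

The crossover point depends entirely on utilisation: a cluster running flat out around the clock shifts the maths back towards ownership, which is why a workload-level cost analysis should precede the decision.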

High Performance Computing-as-a-Service can also enable organisations to easily scale up their compute power as well as infrastructure. With this kind of scalability, the enterprise can flex their infrastructure to match the workloads instead of throttling workloads based on infrastructure.

The pay-as-you-consume model is also acting as a great enabler in democratising High Performance Computing, as it brings powerful computational capabilities to scientific researchers, engineers, and organisations that lack access to on-premises infrastructure or would otherwise need to hire expensive resources to manage it. Service providers offering High Performance Computing-as-a-Service manage the infrastructure maintenance so that enterprises can focus on their projects.

Additionally, businesses with a deep focus on innovation can do away with periodic tech or infra refresh cycles, as on-premises High Performance Computing runs the risk of becoming obsolete with changing technology or under-utilised with changing workloads. Organisations even have to incur additional expense when upgrading the infrastructure; service providers, on the contrary, can easily handle upgrades and updates for optimum performance. With on-premises High Performance Computing, enterprises may also have to deal with unreliable power, whereas High Performance Computing-as-a-Service provides fail-safe power infrastructure, ensuring 100% uptime.

Making the right choice 

By now, it is evident that High Performance Computing-as-a-Service can provide for speedier data processing with high accuracy and due to the low investment costs, it has emerged as an alternative to on-premises clusters for High Performance Computing. However, despite all the advantages associated with adopting High Performance Computing-as-a-Service, there are certain perceived barriers preventing enterprises from realising its true potential.

For organisations to lean on High Performance Computing-as-a-Service to grow their business and accelerate product and service development, they need to be continually educated on its benefits so that the common roadblocks can be broken down. All the benefits of High Performance Computing-as-a-Service clearly suggest that there’s substantial headroom for growth.

Advantages of High Performance Computing-as-a-Service at a glance

* The cost factor - no need for upfront capital investment

* Access to supercomputing infrastructure without buying or managing servers

* Pay only for capacity utilised

* Organisations can opt for flexible pricing models

* Avoid unexpected cost and performance issues

* Upgrades and updates managed by the service provider

* Fail-safe power infrastructure, ensuring 100% uptime

7 key factors to be considered for SAP upgrade

Over the last few years, we have witnessed democratisation of Enterprise Resource Planning (ERP) systems and the emergence of SAP. Businesses looking to scale-up their operations today are likely to have experienced an ERP or a similar system that connects disparate functions within the organisation. However, as customer preferences and market dynamics evolve over time, legacy ERP systems begin to lag, and so it comes as no surprise that a recent survey by Deloitte revealed that 64% of the CIOs are either rolling-out next-generation ERP solutions such as SAP or are modernising legacy systems.

Having said that, it is a known fact that deploying a new or rewiring an existing SAP system can be a mammoth task, both in terms of effort and financial resources. Hence, before undertaking an upgrade, CIOs need to have an absolute clarity of thought and purpose in light of the emerging technologies and business realities. Here’s a checklist to get you started:

Need-Gap Analysis: Elementary as it may sound, the performance evaluation of an SAP system often tends to focus on technology and hardware. To get a clear picture, it is equally important to perform an assessment aimed at identifying the functional and business gaps that the system is unable to fill effectively. For example, a legacy system that does not support smart manufacturing or digital sales channels places the business at a distinct disadvantage in the digital world we operate in today.

IT Infrastructure: A large number of SAP users still rely on IT infrastructure located on-premise. However, there are risk factors associated with on-premise infrastructure, including physical damage due to fire, flooding, or other natural calamities, or a situation like the one resulting from the ongoing pandemic. In any case, if users are unable to log in to or access their data, the SAP system and all the investment in it are rendered useless. When considering an upgrade, it is advisable to consider SAP on the cloud, or at least co-location of your IT infrastructure, to ensure business continuity and reduced IT infrastructure costs.

Technology Upgrade: The fast-paced technology landscape often renders legacy systems inoperable or incompatible with newer hardware or software before OEMs eventually discontinue those products. Additionally, application upgrades also offer definite business and technology benefits. While considering an SAP upgrade, it is therefore crucial to check for technology obsolescence, availability of upgrades and continued support across all systems and modules.

Scalability: As businesses grow, existing systems need to process and store higher volumes of data. Growth also brings a number of other changes, including new methods of production and new business models, all of which require a robust and flexible infrastructure. It is, therefore, recommended to select a system that offers scalability and can keep pace with changing business needs while remaining financially viable.

Functionality: There are a number of functions and attributes in current businesses that were not as prominent or critical earlier – big data and analytics for example. Such functions are mission-critical to modern businesses and if your existing ERP system does not allow you to support such functions, it is time for a change.

Total Cost of Ownership (TCO): Primary factors in the TCO include the capital expenditure required for the new infrastructure as well as operational expenses such as license fees, ERP customisations, the training expenses to bring employees up to speed, the cost of maintenance, and ongoing support. While the objective should be to minimise the TCO, it should be done keeping in mind the potential benefits and the ROI.

Return on Investment (ROI): As with most business decisions, the choice of whether or not to upgrade an SAP system is also driven financially. While we have covered the TCO, a decision whether to modernise or not boils down to the kind and quantum of returns the upgrade would yield. And while calculating the ROI, efforts should also be made to quantify intangible benefits such as increased productivity and enhanced customer experience that add business value and contribute to the topline.
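
The TCO and ROI arithmetic described above can be sketched as follows; all figures are hypothetical placeholders used only to show the calculation:

```python
def tco(capex, annual_opex, years):
    """Total cost of ownership: upfront capital expenditure plus
    recurring operational expenses (licenses, training, maintenance,
    support) over the evaluation period."""
    return capex + annual_opex * years

def roi(annual_benefit, capex, annual_opex, years):
    """Net return over the period as a fraction of total cost.
    `annual_benefit` should include quantified intangibles such as
    productivity gains where possible."""
    cost = tco(capex, annual_opex, years)
    return (annual_benefit * years - cost) / cost

# A 5-year horizon with hypothetical figures (INR):
print(tco(capex=5_000_000, annual_opex=800_000, years=5))   # 9000000
print(round(roi(2_500_000, 5_000_000, 800_000, 5), 3))      # 0.389
```

Running the same formula with and without the intangible benefits gives a useful range: if the upgrade only pays off when every soft benefit materialises, the case is weaker than the headline number suggests.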

There is little doubt that the business landscape and macroeconomic factors are changing faster than ever before. This is not only reshaping markets but also influencing customer behaviour and decision-making in many ways. And this is reflected in the increased jostling for customers’ attention and the hyper-competitive environment that businesses need to survive in today.

In such a scenario, a state-of-the-art SAP solution could be a key differentiator and help organisations unlock latent business value that exists within the organisation and its ecosystem. The more integrated an organisation is – from sourcing inputs all the way to post-sale customer experience, the more agile and competitive it becomes. And that’s why it is critical to conduct periodic checks to evaluate if your existing SAP system is keeping pace with your business needs.


Is edge computing better for the future or the cloud? Answers EVP & CIO, Yotta Infrastructure

Though at times considered conflicting components of an IT infrastructure, edge computing and cloud computing effectively complement each other. Even though they function in different ways, utilising one does not prevent the use of the other.

Cloud computing is a more common term than edge and has been used by businesses for a long time. Businesses have favoured it for the flexibility it provides to manage a workload on a dedicated platform in a virtual environment. However, the time it takes to communicate a task from the primary server to the client is noticeably longer when compared to edge computing. Hence, the former requires more bandwidth when connected to IoT devices.

Benefits of Cloud computing
The primary role of the cloud is evolving from that of an infrastructure utility to a platform for the next generation of organisational innovation and evolution. Cloud computing not only allows companies to scale their operations but also provides them with the service model best suited to specific requirements, such as PaaS, IaaS or SaaS.

While organisations have deployment models to choose from, such as Public, Private, and Hybrid clouds, they can keep a check on capital and operating expenses by using cloud computing. By adopting cloud strategies, enterprises have seen significant improvements in efficiency, reductions in cost, and decreased downtime. With the recent disruption and large-scale lockdown measures due to COVID-19, the mobility, security, and scalability of cloud data platforms have further highlighted their value to businesses. The pandemic has pushed companies to migrate to cloud environments to deal with the lockdown crisis and provide their geographically scattered teams with regular data access, sharing, and collaboration.

The relevance of Edge Computing
While cloud computing has its benefits, businesses are inclining towards edge technologies for improved performance and more efficient computation. Edge computing provides a distributed communication path that works on a decentralised IT infrastructure. When transferring large quantities of data, it is essential to optimise the data and complete the process in milliseconds.

Edge computing allows organisations to process, analyse, and perform necessary tasks locally on the data collected. This brings analysis closer to the data generation site, eliminates intermediaries, and makes it an affordable option for better asset performance. Edge computing makes it possible to utilise the full potential of the latest IoT devices, which have their own data storage and processing power. A few areas where edge computing has demonstrated incredible success are autonomous vehicles, streaming services, and smart homes. As new technologies like 5G networks, smart cities, and autonomous cars become common, they will integrate with, operate on, and be more dependent on edge computing resources.
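
A toy sketch of this pattern, with invented names and numbers: raw sensor readings are aggregated locally at the edge, and only a compact summary (plus any anomalous values) is forwarded to the cloud:

```python
def edge_summarise(readings, threshold=90.0):
    """Aggregate raw sensor readings locally; only the summary and any
    out-of-range values need to cross the network to the cloud."""
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "alerts": alerts,  # anomalies are the only raw values forwarded
    }

payload = edge_summarise([70.0, 72.5, 95.0, 68.5])
print(payload["mean"], payload["alerts"])  # 76.5 [95.0]
```

Instead of shipping every reading upstream, the edge node sends one small payload per interval, which is where the bandwidth and latency savings described above come from.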

Edge vs Cloud Computing
While edge computing and cloud computing are very different from each other, replacing cloud computing with edge is not advised; the two have different uses and purposes. Edge computing suits extremely latency-sensitive operations and programmes with varying run times, whereas cloud computing is suitable for programmes that require massive storage and provides a targeted platform. The former needs a robust and sophisticated security plan with advanced authentication, while the latter is easier to secure, control, and access remotely.

With the rise in adoption of digital technologies, the data generated continues to increase. And while processing this data, many organisations have started realising that cloud computing has shortfalls such as latency, cost, and bandwidth. To help eliminate these drawbacks, enterprises are now gradually moving towards edge computing, an alternative approach to the cloud environment. Edge computing not only lowers dependency on the cloud but also improves the speed of data processing.

As IoT devices become more widespread, businesses need to put edge computing architectures in place to leverage the potential of this technology. Nowadays, companies are integrating edge capabilities with centralised cloud computing, and this integrated network infrastructure is called fog computing. Fog computing helps enhance efficiency as well as data computing competencies for cloud computing.

Relying solely on either the edge or the cloud for IT infrastructure is rarely viable; what works best is an amalgamation of the two, suited to the company’s operations. As these models become more mainstream, companies can strategise around various hybrid structures to reduce costs and realise their full potential.

Source: https://content.techgig.com/is-edge-computing-better-for-the-future-or-the-cloud-answers-evp-cio-yotta-infrastructure/articleshow/78874732.cms

HPC-powered AI to take manufacturing efficiencies to a new level

Today, enterprises are leveraging the self-learning power of Artificial Intelligence (AI) and the parallel processing capabilities of a High-Performance Computing (HPC) architecture to customise business processes and get more done in less time. In the current unprecedented scenario, industries across verticals have had to fast-track digitisation and are testing HPC-enabled AI to synchronise data and build new products and services.

MarketWatch predicts that HPC-based AI revenues will grow 29.5% annually as enterprises continue to integrate AI into their operations. Moreover, with the growth of AI and Big Data, as well as the need for larger-scale traditional modelling and simulation jobs, the HPC user base is expanding to include high-growth sectors such as automotive, manufacturing, healthcare, and BFSI. These verticals are adopting HPC technology to manage large data sets and scale out their current applications.

Manufacturing companies in particular can reap the benefits of HPC as they strive to enhance their operations, from the design process and supply chain through to the delivery of products. A study by Hyperion Research indicates that for every $1 invested in HPC in manufacturing, $83 in revenue is generated, with $20 of profit.

Similarly, they are leveraging Artificial Intelligence (AI) and Machine Learning (ML) to accelerate innovation, gain market insights, and develop new products and services. Manufacturing organisations have been able to introduce AI into three aspects of their business: operational procedures, the production stage, and post-production. According to a report by the McKinsey Global Institute, manufacturers investing in AI are expected to achieve an estimated 18% higher annual revenue growth than all the other industries analysed.

Optimising processes together with HPC & AI

As manufacturers aim to achieve optimal performance and quality output, their focus is to implement HPC-fuelled AI applications to proactively identify issues and enhance the entire product development process, thereby improving end-to-end supply chain management.

At the same time, M2M communication and telematics solutions in the manufacturing sector have increased the number of data points in the value chain. HPC drives sophisticated, fast data analyses to ensure accurate insights are derived from large data sets. Combining HPC with AI applications allows network systems to automate real-time adjustments in the value chain and reduce breakdown time. This results in enhanced product quality, accelerated time-to-market, and a more agile production process.

The substantial use of computer vision cameras in the inspection of machinery, the adoption of the Industrial Internet of Things (IIoT), and the use of big data in manufacturing are some of the factors driving the growth of AI in the manufacturing market for predictive maintenance and machinery inspection applications.

Enterprises in the manufacturing industry can use the power of AI with HPC capabilities to deploy predictive analytics. This will not only help them optimise their supply chain performance but also help design demand forecast models and use deep learning techniques to enhance product development. There will, thus, be a need for high-speed networking architecture and systems storage to roll out and power the AI-based programs.
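The kind of predictive analytics described above can be illustrated with a simple sketch: compare each machine-sensor reading against a trailing-window baseline and flag deviations for maintenance follow-up. In practice such checks would run across thousands of machines in parallel on HPC infrastructure; the vibration values and tolerance here are purely illustrative.

```python
# Minimal sketch of predictive maintenance: flag sensor readings that
# drift beyond a trailing-window baseline. All values are illustrative.
def flag_anomalies(readings, window=5, tolerance=3.0):
    """Return indices whose reading deviates from the trailing-window mean."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > tolerance:
            anomalies.append(i)
    return anomalies

# A vibration spike at index 6 stands out against the baseline.
vibration = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 9.5, 1.0, 1.1, 1.0]
print(flag_anomalies(vibration))  # → [6]
```

Production systems would replace this rolling-mean heuristic with trained ML models, but the pattern is the same: a baseline, a deviation measure, and an alert that triggers maintenance before a breakdown.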

On the other hand, manufacturing companies are increasingly leveraging HPC systems with Computer-Aided Engineering (CAE) software to perform high-level modelling and simulation. There is significant interdependence between HPC-powered CAE and AI: simulations generate huge data sets, and AI models apply data analytics iteratively to produce even higher-quality simulations. It is evident that the integration of CAE and AI will accelerate product development and improve quality; however, the scalability required to address the Big Data and compute challenges can only be managed by an HPC infrastructure.

Cloud-enabled approach to HPC

More data means more modelling and, therefore, a more intensive machine learning solution. It is also important to invest in HPC on the cloud for faster delivery of results by AI/ML models. A cloud-enabled HPC will help companies scale up their computing capabilities, as many AI workloads run in the cloud today. HPC applications built on the cloud allow companies to innovate by incorporating AI and enhancing operations. AI workflows require continuous access to data for training, which can be difficult to provide on-premises.

Today, manufacturing companies can choose from hybrid and multi-cloud options to provide a continuous, seamless HPC environment spanning on-premises hardware and cloud resources.

The power of one 

The manufacturing industry stands to benefit most from the convergence of HPC and AI technologies. Instead of using AI and HPC as separate technologies, organisations in this sector are unifying the two clusters to reduce OPEX and optimise resources. To reiterate, the powerful combination of HPC and AI tools is helping manufacturing companies with high-quality product development, improved supply chain management capabilities, analysis of growing datasets, reduced forecasting errors, and optimal IT performance.

By combining AI and HPC capabilities, the manufacturing sector has found multiple ways to deliver the right products and services, accelerate time to market, and drive efficiencies at each stage of development.

Source : https://www.dqindia.com/hpc-powered-artificial-intelligence-take-manufacturing-efficiencies-new-level/