The promise of artificial intelligence (AI) in their everyday lives is a source of near-continuous hype for South Africans. However, for enterprises implementing AI solutions, there are some important considerations regarding their intellectual property (IP) and secret data.
With more than 70% of southern Africa’s CEOs making generative AI a top investment priority, most companies are asking many of the right questions, such as “How do we not miss taking advantage of this emerging technology?”, “How does AI fit into our digital transformation strategy?”, “How can we operationalise and securely put all our data to work?”, and “How do we execute a successful AI strategy with limited resources?”. However, too little is asked about the corporate secrets and data used to train the AI models behind these initiatives.
Risky business?
Corporate secrets and IP are critical issues. The majority of public AI services – like ChatGPT – rely on inputs from a variety of sources, and their knowledge grows through input from users. That means every piece of data passed into such a system helps to train, mature, and evolve those models, inferences, and other generative elements.
But what happens when those pieces of data are your corporate secrets?
When protected or secret data is fed into and used to inform these models, it is no longer secret – it is now part of that platform. For most enterprises, that is a daunting thought. Without careful planning and policies, losing control over corporate secrets could be the new reality.
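One practical safeguard, where public AI services must be used at all, is to scrub obvious identifiers from prompts before they ever leave the organisation. The sketch below is a minimal, hypothetical Python example: the patterns, the redact helper, and the assumption that pattern-matching alone is enough are all simplifications, not a complete data loss prevention strategy.

```python
import re

# Hypothetical, minimal redaction pass: mask obvious identifiers before a
# prompt is sent to any external AI service. Real deployments would need far
# broader coverage (names, account numbers, free-text context) and human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SA_ID": re.compile(r"\b\d{13}\b"),           # 13-digit South African ID number
    "PHONE": re.compile(r"(?:\+27|0)\d{9}\b"),    # simplified SA phone formats
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise the claim by patient 8001015009087, "
              "contact j.doe@example.com or 0821234567.")
    print(redact(prompt))
    # -> Summarise the claim by patient [SA_ID], contact [EMAIL] or [PHONE].
```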
Healthcare as an example
Healthcare provides a good reference point because healthcare companies in South Africa handle very private and sensitive data. These companies often cannot outsource or move data outside of very specific regions based on compliance requirements, security, privacy needs, and more.
However, there are a number of tasks within healthcare where AI could help. These are automatable, mundane, and non-value-added tasks. Building AI solutions for some of these areas would free up teams to focus on better, more timely, and more accurate patient care. Some examples might include speech-to-text transcription for doctor’s notes or pre-screening patients through a portal that lets them self-present their symptoms.
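To illustrate the first example, a transcription workload can be kept entirely inside the organisation's own boundary by running an open-source speech-to-text model locally rather than calling a public API. The sketch below assumes the open-source openai-whisper package and a locally stored audio file; the file name and function are illustrative.

```python
# Minimal sketch of on-premises transcription, assuming the open-source
# openai-whisper package (pip install openai-whisper) and local CPU/GPU capacity.
# The audio and the transcript never leave the organisation's own infrastructure.
import whisper

def transcribe_consult_notes(audio_path: str) -> str:
    # Model weights are downloaded once and cached locally; inference runs in-house.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]

if __name__ == "__main__":
    # "consult_2024-05-12.wav" is an illustrative file name.
    print(transcribe_consult_notes("consult_2024-05-12.wav"))
```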
In fact, the widespread adoption of AI-powered tools designed to drive efficiencies in healthcare could unlock billions in potential savings across countries in Africa. But protecting that data is vital.
Minimising risk
The question then is how to leverage AI’s capabilities and promise without risking exposing sensitive data, corporate secrets, and precious internal resources.
A recent survey by IDC discusses how many companies are considering moving their priority AI-related workloads from a private cloud or clouds to public infrastructure (“Workload Repatriation Trends Update” by Natalya Yezhkova, June 2023). While that can work for some companies, there are important considerations, such as data locality, security, privacy, and risk.
For AI placement, the reality is that not all workloads should be treated equally. A good way to minimise risk for sensitive workloads and data is to use a hybrid approach. According to IDC, around 60% of South African organisations say they are using a combination of public and private clouds.
Hybrid by design
Many enterprises today already employ a hybrid approach because they are running a collection of private and public infrastructure. This is a ‘hybrid by accident’ approach for all intents and purposes.
Hybrid by design, however, is purposeful and structured. It enables intentional workload placement, allowing the enterprise to decide where each workload is best deployed based on performance, security, compliance, cost, and other considerations.
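To make "hybrid by design" concrete, the sketch below shows one way such placement decisions could be encoded as a simple, auditable rule set. It is purely illustrative: the criteria, scores, and thresholds are assumptions that an enterprise would replace with its own policy.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: int       # 1 (public data) .. 5 (regulated or secret)
    latency_to_core_data: int   # 1 (loose coupling) .. 5 (needs direct, low-latency access)
    burstiness: int             # 1 (steady demand) .. 5 (highly elastic demand)

def suggest_placement(w: Workload) -> str:
    """Rough heuristic: sensitive, data-adjacent work stays private;
    elastic, low-sensitivity work may suit public cloud; the rest is hybrid."""
    if w.data_sensitivity >= 4 or w.latency_to_core_data >= 4:
        return "private cloud"
    if w.burstiness >= 4 and w.data_sensitivity <= 2:
        return "public cloud"
    return "hybrid"

if __name__ == "__main__":
    for wl in [
        Workload("fine-tuning on patient records", 5, 5, 2),
        Workload("marketing-copy generation", 1, 1, 5),
        Workload("internal chatbot over mixed data", 3, 3, 3),
    ]:
        print(f"{wl.name} -> {suggest_placement(wl)}")
```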
AI in IT
Beyond its mystique, AI is simply another workload. As with any workload, IT must decide whether it should reside on public, private, or hybrid clouds. Making the best decision at the outset means first determining how the organisation will provide the core primitives it needs to deliver AI. Considerations should include inferencing, PFT, fine-tuning, and other components along the AI spectrum. Not surprisingly, many will find a private cloud is a great landing spot, especially when security and privacy are primary considerations.
With some private cloud solutions, vendors might provide out-of-the-box, GPU-enabled, AI-optimised instance types based on the workloads the customer plans to run in their environment. These are typically not one-size-fits-all solutions, but are instead designed and optimised to run AI workloads. It is also possible to deliver cloud primitives in a self-service experience, including common tooling, APIs, CLIs, Terraform workflows, and the like.
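As a hedged illustration of that self-service experience, the sketch below requests a GPU-enabled instance from a private-cloud control plane over a REST API. The endpoint, payload fields, and instance-type name are hypothetical; a real private cloud product would expose its own API, CLI, or Terraform provider.

```python
# Hypothetical self-service request for a GPU-enabled, AI-optimised instance.
# The URL, token handling, and payload schema are illustrative assumptions,
# not any specific vendor's API.
import os
import requests

CONTROL_PLANE = "https://private-cloud.example.internal/api/v1"

def request_gpu_instance(name: str, instance_type: str = "ai-gpu.large") -> dict:
    response = requests.post(
        f"{CONTROL_PLANE}/instances",
        headers={"Authorization": f"Bearer {os.environ['PRIVATE_CLOUD_TOKEN']}"},
        json={
            "name": name,
            "type": instance_type,        # assumed GPU-enabled, AI-optimised profile
            "placement": "on-premises",   # keep the workload inside the data centre
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(request_gpu_instance("genai-inference-01"))
```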
This approach allows the company deploying the private cloud to focus on the automation and orchestration of deploying the workloads across various modes – from containers to virtual machines to bare metal. It also makes the cloud experience feel as flexible as public clouds, but with the optimal security and design of a private cloud.
Benefits of private clouds
In a private cloud, the user organisation controls the geographic locality of its data. Running clouds privately also gives it inherent low-latency, direct connectivity to core data. Since the majority of corporate AI workloads run against precious corporate assets and data, users can do their work without leaving the security of the virtual or physical boundary of their own data centres, co-location environments, or other dedicated environments.
Workload-enabled solutions, or those purpose-built as part of a private cloud experience, are the ones that will truly make a difference in protecting an enterprise's assets and optimising the value of AI.
Open standards
How do companies get there? The focus should be, “How do I bring the power of these computing resources to enable and deliver these AI workloads where I need them to be?” That might include running them in a core data centre against the company’s data or running in edge locations providing real-time inferencing results at the edge.
They can often speed up this process by embracing open standards and allowing cloud users essentially to self-serve with common or well-known tooling (think Terraform or other de facto standards) when deciding how to template, orchestrate, deploy, and manage these types of workloads and solutions. Conducting activities like this in a private cloud can help ensure full agility and speed.
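As one hedged example of that kind of tooling, the sketch below renders a small Terraform configuration from a template and drives the standard terraform init, plan, and apply commands from Python. The module path, variables, and directory layout are illustrative assumptions; only the Terraform CLI commands themselves are standard.

```python
# Minimal sketch: template a Terraform configuration for an AI workload and
# drive the standard Terraform CLI from Python. The module source and variables
# are illustrative; terraform init/plan/apply are the standard commands.
import pathlib
import subprocess

WORKDIR = pathlib.Path("deploy/genai-inference")

TF_TEMPLATE = """
module "genai_inference" {{
  source        = "./modules/private-cloud-ai"  # assumed internal module
  instance_type = "{instance_type}"
  replicas      = {replicas}
}}
"""

def deploy(instance_type: str = "ai-gpu.large", replicas: int = 2) -> None:
    WORKDIR.mkdir(parents=True, exist_ok=True)
    (WORKDIR / "main.tf").write_text(
        TF_TEMPLATE.format(instance_type=instance_type, replicas=replicas)
    )
    for cmd in (
        ["terraform", "init"],
        ["terraform", "plan", "-out=tfplan"],
        ["terraform", "apply", "tfplan"],
    ):
        subprocess.run(cmd, cwd=WORKDIR, check=True)

if __name__ == "__main__":
    deploy()
```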
Another important consideration is how they will consume their AI services, with options ranging from a fully managed, as-a-service cloud delivered in the data centre to self-service tooling with which companies manage and operate the cloud themselves.
For many South African companies, a hybrid cloud experience will work best for AI, providing visibility and the ability to deploy and manage workloads not just across a private cloud or clouds, but also extending into hyperscale public cloud environments.