Since the turn of the millennium, technology has accelerated at an unprecedented pace. In the 24 years since, we have seen the emergence of smartphones, cloud computing, and ultra-high-speed internet, among other innovations. Each of these technologies has brought significant changes to business, society, and our personal lives.
The changes felt by businesses and other organisations have been particularly significant. Disruptive, cutting-edge tools have transformed how organisations carry out their commercial operations. By harnessing these technologies and tools, organisations can, amongst other things:
• Automate certain back-office functions such as accounting, payroll and record keeping, which significantly improves efficiency.
• Create secure environments for maintaining sensitive business and/or consumer information.
• Aggregate, analyse and process data to exploit novel communication channels (such as social media) and increase sales and profitability.
The pace of technological change shows no signs of slowing down either. To see that, one need only look at the rapid advances in artificial intelligence (AI) over the past couple of years. While there have been significant gains in AI over the past decade or so, the explosion of generative AI into the public consciousness, coupled with significantly improved computing power and the increasing availability of data, has driven many businesses to consider how AI could impact their operations.
While there are undoubtedly benefits for organisations willing to embrace AI, there are also risks. The rapid development and integration of AI systems into business operations pose numerous challenges, particularly in managing and using data and personal information belonging to employees, customers, and suppliers. It is, therefore, critical that organisations ensure their data privacy systems can cope with the realities of today’s technological environment and are as future-proof as possible.
Understanding AI
Before looking at how organisations can do so, it is important to build a more in-depth understanding of AI. What are we talking about when we refer to AI, and how will it affect data privacy within organisations?
Broadly speaking, AI refers to systems that exhibit intelligent behaviour: they can rapidly analyse activities and environments, make independent decisions, and pursue a specific objective. Conventional AI systems are characterised by their ability to perform activities typically associated with the human mind, such as perceiving, learning, interacting with an environment, problem-solving and, in certain instances, exhibiting creativity.
All of those characteristics have significant benefits for organisations. Complex business processes can, for example, be converted into streamlined solutions that require little to no human intervention.
Those benefits must, however, be balanced against the compliance obligations placed on organisations regarding the processing and management of data and personal information. These obligations present numerous legal risks and challenges, which must be considered in detail before deploying AI systems and solutions.
AI and South African data privacy legislation
When it comes to achieving regulatory compliance, organisations must understand the two distinct categories of AI system relevant to their business operations. Generative AI models use algorithms to generate content based on the analysis of patterns in data and can learn how to improve their own output. Applied AI models, by contrast, use machine learning algorithms to analyse data and make predictions and/or decisions based on the data processed.
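To make that distinction more concrete, the short Python sketch below is a hypothetical illustration only: a toy bigram model stands in for a generative system that learns patterns in data and produces new content, while a simple logistic regression stands in for an applied system that analyses historical data to make a prediction. The training data, feature names and figures are invented for demonstration and do not reflect any real model or dataset.

```python
# A minimal, illustrative sketch (all data invented) contrasting the two
# categories of AI system described above.

import random
from collections import defaultdict

from sklearn.linear_model import LogisticRegression  # requires scikit-learn

# --- Generative example: a toy bigram model that learns patterns in text and
#     produces new content based on those patterns. ---
corpus = "the customer paid the invoice and the customer renewed the contract".split()
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

word = "the"
generated = [word]
for _ in range(6):
    word = random.choice(transitions.get(word, corpus))
    generated.append(word)
print("Generated text:", " ".join(generated))

# --- Applied example: a model that analyses historical data and makes a
#     prediction about a new case (a fictitious credit outcome). ---
# Features per applicant: [monthly income (thousands), existing debt (thousands)]
X_train = [[25, 5], [40, 2], [12, 9], [30, 1], [18, 8], [50, 3]]
y_train = [1, 1, 0, 1, 0, 1]  # 1 = repaid previous credit, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)
new_applicant = [[22, 6]]
print("Predicted outcome for new applicant:", model.predict(new_applicant)[0])
```

In both cases the output is only as good as the data the model has processed, which is precisely why the data protection obligations discussed below matter.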
What becomes apparent from the AI models set out above is that data plays an integral role in defining the operational parameters of AI and realising the benefits of AI systems. It is through the processing and analysis of such data that organisations can perform sophisticated analysis of large volumes of information to benefit their commercial operations.
In this regard, it is important to note that South Africa has no comprehensive legislative framework to regulate the integration and use of AI and machine learning technologies. There are, however, provisions in the Protection of Personal Information Act No. 4 of 2013 (POPI) that impose certain obligations on how organisations can utilise and deploy AI systems.
While POPI does not explicitly address the full operational parameters and capabilities of modern AI systems, it does regulate the processing of data and personal information by automated means. Section 71(1) of POPI provides that a data subject may not be subjected to a decision that results in legal consequences for, or substantially affects, the data subject, where that decision is based solely on the automated processing of personal information intended to provide a profile of the data subject.
For example, the use of automated decision-making tools to perform a credit assessment establishing the creditworthiness of a given data subject is generally not permitted in terms of section 71 of POPI, unless the decision meets one of the following conditions (a brief illustrative sketch follows the list):
• Has been taken in connection with the conclusion or execution of a contract, and either the request of the data subject in terms of the contract has been met or appropriate measures have been taken to protect the data subject’s legitimate interests; or
• Is governed by a law or code of conduct in which appropriate measures are specified for protecting the legitimate interests of data subjects.
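As a purely hypothetical sketch of how these conditions might be operationalised, the Python example below computes an automated credit score but only finalises the decision automatically where one of the exceptions above is recorded; any other case is routed to a human reviewer. The field names, scoring rule and threshold are invented for illustration and are not drawn from POPI or any real system.

```python
# Hypothetical sketch: routing automated credit decisions so that no data
# subject is subjected to a *solely* automated decision unless an exception
# applies. Field names and scoring logic are invented for illustration.

from dataclasses import dataclass


@dataclass
class CreditApplication:
    applicant_id: str
    monthly_income: float
    existing_debt: float
    contract_execution: bool   # decision taken in concluding/executing a contract
    request_met: bool          # data subject's request under the contract met
    safeguards_in_place: bool  # measures protecting the data subject's legitimate interests
    governed_by_code: bool     # law/code of conduct with appropriate measures


def automated_score(app: CreditApplication) -> float:
    """Toy scoring rule: higher is better. Not a real credit model."""
    return app.monthly_income - 2 * app.existing_debt


def exception_applies(app: CreditApplication) -> bool:
    """Mirror the (simplified) exceptions listed above."""
    contract_route = app.contract_execution and (app.request_met or app.safeguards_in_place)
    return contract_route or app.governed_by_code


def decide(app: CreditApplication) -> str:
    score = automated_score(app)
    if exception_applies(app):
        # A solely automated decision is taken only under a recorded exception.
        return "approved" if score > 10 else "declined"
    # Otherwise escalate: the final decision must involve human review.
    return "refer_to_human_review"


if __name__ == "__main__":
    app = CreditApplication("A-001", monthly_income=25.0, existing_debt=9.0,
                            contract_execution=False, request_met=False,
                            safeguards_in_place=False, governed_by_code=False)
    print(decide(app))  # -> "refer_to_human_review"
```

The design point is simply that the solely automated path is the exception rather than the default; absent a recorded exception, the decision falls back to human involvement.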
The South African Information Regulator has also noted its concerns with the use of AI platforms, acknowledging that it still needs a better understanding of the technical issues associated with such platforms if it is to ensure that the personal information of data subjects is not compromised.
As such, if organisations do not implement appropriate data protection compliance measures and programmes in conjunction with the integration and deployment of AI systems, they risk enforcement notices, sanctions and fines. Organisations should, therefore, consider taking the following measures:
• Conducting a POPI impact assessment of the AI-related systems they utilise, to ensure that such processing activities are carried out within the prescripts and parameters of POPI.
• Preparing and submitting to the Information Regulator a prior authorisation application in terms of section 57 of POPI, which generally requires responsible parties to obtain the Information Regulator’s prior authorisation if they intend to (i) process any unique identifiers of data subjects for a purpose other than the one intended at collection; and (ii) link such information with information processed by other responsible parties.
While the immense potential of AI systems may present meaningful opportunities for local organisations to optimise and/or diversify their commercial operations, such potential cannot be realised without implementing appropriate controls to ensure compliance with POPI and other related governing legislation that may apply to AI systems.
Therefore, organisations must adopt a pragmatic and meticulous approach to assessing the risks associated with integrating AI systems and AI-related solutions, ensuring that such systems and solutions yield the intended results without exposing the organisation to undue regulatory risk.