“Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to
oppress the many.”
– Stephen Hawking
Generative AI is no longer an experiment; it is already embedded in day-to-day corporate life. From marketing and customer engagement to recruitment screening and risk analytics, businesses are leaning on AI to move faster and do more with less. The benefits are real, but so are the risks.
In South Africa, the use of personal information in these AI systems is not a grey area. It is squarely governed by the Protection of Personal Information Act (POPIA). Regulators, counterparties and courts will expect companies to show that they not only understand the law but have built practical safeguards around their use of AI.
POPIA APPLIED TO AI PROCESSING
POPIA’s principles map neatly onto AI, but organisations often overlook how directly they bite. Every time personal information flows through an AI tool, whether in prompts, training data, or outputs, there must be a lawful basis for the processing and a clear justification for why that data is required. Data collected for customer support cannot simply be repurposed for model training. Transparency requires businesses to tell people, in plain language, how their information is being used. And every log, prompt or output that contains personal information must be secured against leaks, manipulation or loss.
If AI models or their data are hosted abroad, POPIA’s cross-border transfer rules come into play. That means businesses need enforceable assurances from vendors about where data lives, how it is handled, and what happens if something goes wrong.
KEY LEGAL RISKS AND FAILURE MODES
The most obvious legal risk is inaccurate or misleading content that names or profiles a real person. A hallucinated claim or false attribution can trigger unlawful processing, defamation or regulatory breaches. The deeper risk is opacity. If a business cannot explain what data was used, on what legal basis, and who checked the output, it will struggle to defend itself to a regulator, a court, or even the public. Weak contracts with AI vendors add to the problem: without clear operator agreements, data may move across borders or into sub-processors without proper control.
GOVERNANCE EXPECTATIONS FOR ACCOUNTABLE DEPLOYMENT
Good governance does not need to be heavy-handed. What it does require is clarity and documentation. At a minimum, companies should adopt a board-approved AI policy that sets boundaries on use, flags prohibited practices, and allocates responsibility. Riskier use cases, like public communications, customer profiling or safety-critical outputs, deserve a formal assessment that sets out the lawful basis, data sources, human review points and security safeguards.
Human oversight is non-negotiable for anything that could damage reputation or create legal exposure. Contracts with AI vendors should reflect POPIA obligations, including breach notifications, security measures, and deletion commitments. And every decision or approval should leave an audit trail that shows the company acted responsibly.
DISINFORMATION AS A CORPORATE RISK
Generative AI has also made disinformation cheaper, faster and harder to spot. Fake executive statements, fabricated reviews, and manipulated images can spread across digital channels in minutes. This is not just a communications headache; it is a governance risk.
The right posture combines monitoring with readiness. Organisations should keep a cross-functional playbook, linking Legal, IT Security and Corporate Affairs, that sets out who acts, how to verify false content, and how to communicate with regulators, investors and customers. Speed and accuracy are everything; without a plan, even well-resourced companies will be caught flat-footed.
PRACTICAL TEST OF READINESS
A simple way to measure maturity is to ask three questions:
1. For each AI use case, what is our lawful basis for processing personal data?
2. For each public AI output, who reviewed and approved it, and where is that record kept?
3. If a false AI-generated story about our company spreads tonight, what exactly happens in the next few hours?
If the answer to any of these is unclear, the problem is not the technology; it is governance.
CLOSING THOUGHTS
Generative AI will reshape commerce, communication, and compliance as profoundly as the internet did a generation ago. Yet it does not suspend existing duties; it heightens them. POPIA remains the controlling statute for personal information, and the courts will not treat innovation as an excuse for carelessness. Organisations that adopt AI without embedding lawful basis, purpose limitation, security and accountability into their design will expose themselves to regulatory non-compliance, contractual disputes and reputational harm.
The boardroom question is no longer whether to use AI, but whether to use it responsibly. Clients, regulators, and the market will measure companies not by their enthusiasm for innovation, but by their ability to demonstrate control, foresight and integrity when deploying it. Disinformation is not a theoretical threat; it is already a weapon in the marketplace, and businesses that cannot respond quickly and lawfully to false narratives risk losing consumer trust and investor confidence overnight.
The companies that succeed will be those that fuse innovation with governance: documenting their decisions, placing humans in the loop, contracting intelligently with vendors, and practising their response to synthetic crises before they occur. In this space, discipline is not a drag on progress; it is the very foundation that makes bold adoption possible. AI is here to stay. The only real question is whether your organisation will treat compliance and credibility as strategic assets, or gamble them away in pursuit of speed.