SIMON BROWN: I’m chatting now with PJ Veldhuizen from Gillan and Veldhuizen Inc. PJ, I appreciate the early morning time [to look at] AI policies, particularly around legal, ethical and security issues.
Before we delve into some details, I think when we think of AI at the moment, everyone’s kind of thinking ChatGPT. But there’s a whole lot more to it. There’s video, there’s audio. I have a piece of software that I spent, what, 10 minutes training – and it does an okay job of mimicking my voice. These are early days in what is going to be huge and challenging for corporates.
PJ VELDHUIZEN: Morning Simon, and morning to your listeners. Thank you very much for having me on your show. Yes. Just this morning I received an email from The Economist telling me that they’re updating the terms of use of their app, and my subscription with regard to the use of artificial intelligence. So it’s not just us, it’s everybody that’s facing this barrage of new ways of dealing with each other.
The article I wrote on this recently was directed more towards the knowledge worker – the accountants, the lawyers, the doctors, that kind of thing – because you can go onto ChatGPT, you can ask it a question; it gives you an answer, and it looks plausible. But the question is, I suppose, where did the answer come from, and do you know enough about asking the right question to obtain the right answer?
And then when you get the answer, are you able to rely on it? Is it, in fact, an answer you can use? Because, of course, there may not be regulation surrounding artificial intelligence at the moment, but there is tangential regulation. Think of copyright laws and IP laws and things like that. You might be using somebody else’s opinion on something, which you are then going to sell in your legal practice as your opinion; but in fact, it’s the opinion of another firm.
Of course, ChatGPT is just a large language model – an aggregator of all the information that’s available on the internet – and that’s what one is going to have to develop policies for.
SIMON BROWN: We are going to [have to] because, as you point out, this is the Wild West. Copyright is just one part of it. I know Steam, which is the online gaming store, doesn’t want games that have used AI because they say they just don’t know what the copyright implications are. And that’s in the absence of any legislation locally or globally.
That legislation is years away, I would imagine, and it really does leave businesses vulnerable. You mentioned those knowledge workers who might use something in good faith, and it could come back to bite them in the short term – but even in the longer term.
PJ VELDHUIZEN: Well, exactly. It’s sort of a case of the Dunning-Kruger effect: you don’t know what you don’t know. We just don’t know what AI is going to turn out to be. Is it going to be the panacea for all our woes, or is it going to be something that comes back to bite us if we don’t have the policies in place?
Take the less experienced attorneys in a firm, or less experienced knowledge workers in many firms. There are a lot of children at school [asking], ‘Write my essay for me on How the West was Won’. It’s fraught with problems because what comes out at the end of the day looks like a great piece of work, but whose work is it, and is it reliable? That’s my big scare about it.
SIMON BROWN: Yes. I’ve done some tests on ChatGPT and, with respect, ‘it lies a lot’ is the short answer. So companies are going to need fairly rigorous and thought-out AI policies governing things around confidentiality. You might be putting in confidential client or business information and really [need to] give thought to data protection, security and how this all fits together. It’s going to be a challenge, but it is critically important.
The one answer is to just ban all AI in the workplace. That never works. You’re really going to need to think this through.
PJ VELDHUIZEN: I agree. I think – and I haven’t read it yet – that’s probably the nub of the email I got from The Economist this morning. How are they going to use my information? Think about how they track my usage of the app. What am I reading? What articles am I looking at? What alerts am I ticking through? And what are they going to do with that information? Am I agreeing to let them use that tracking for sales to other products, to other companies? It’s like managing smoke at the end of the day; how do you really do this? Perhaps it creates an entirely new department in these firms, one that exists specifically and solely to manage – can I say – the emergence/fallout of AI?
SIMON BROWN: Yes, that’s an important point, because we know there’s going to be fallout. In fact, there’s already a lawyer in the US – to your earlier point – who basically put together a brief using ChatGPT, and it quoted cases that simply didn’t exist. They are now at fairly significant risk there.
Just a sense – and I don’t know if you have one – are corporates in South Africa looking at this seriously? Or is it still a case of it being too new and too fresh, and they haven’t given it thought yet?
PJ VELDHUIZEN: Look, I’m on one or two boards, and we certainly are taking a proactive stance. When I say ‘proactive stance’ it’s more: ‘You can’t use it here. Please do not use ChatGPT in our offices. We can’t rely on it at this stage. We don’t know what the effect of this is’.
So it’s more a proactive stance of standing back and watching – if that’s proactive at all.
SIMON BROWN: Yes. I suppose it’s something, rather than head-in-the-sand. We’ll leave it there.
PJ Veldhuizen from Gillan and Veldhuizen Inc, I appreciate the early morning insights.