AI in NZ's Public Sector: A Smart Move or a Risky Bet?

NZ's public sector has new AI guidance – but what does it mean in practice? In this interview, we discuss the new Public Service AI Framework and invite your thoughts.

Screenshot from live interview / Frog Recruitment

In this interview with Shannon Barlow, Managing Director of Frog Recruitment, we discuss the challenges and opportunities of AI in NZ's public sector – particularly the need for transparent, practical guidance to help agencies navigate emerging technologies. The transcript below captures the conversation in full. If you have thoughts or questions, I’d love to hear them.

A recording of the interview is available on LinkedIn here or on YouTube here.

Transcript


[Shannon 0:00]

Kia ora koutou, nau mai, haere mai, and a very warm welcome to Mahi Matters, New Zealand's weekly update on what's happening in the employment market and our first for the year.

Very excited to be here again, and I thought we might just ease into the year with a nice, easy topic – maybe shorts in the office, emojis, something like that. But no, we're going in heavy-hitting, and we want to find out: is New Zealand setting the standard for ethical AI usage in government?

So this week, we're joined by Stuart MacKinnon, Strategic Advisor at Analysis One to discuss the government's new safe AI Framework and its potential impact on the public sector and beyond.

So how will this initiative enhance efficiency and transparency? What are the benefits of greater AI integration, and why might some government agencies be hesitant to adopt these changes? So as I said, big topic to start off the year. Also on today's show: will the recent unemployment rate have an impact on employer confidence? We'll explore our latest job opportunities across New Zealand and tell you how you can take the next step in your career. And lastly, we'll reveal our Social Media Insights on pay transparency, where we ask: would your employees be comfortable disclosing their salary?

Okay, so jumping in. As I said, the New Zealand government has recently introduced new guidelines to ensure the safe and responsible use of artificial intelligence in the public sector. Developed by the Government Chief Digital Officer, the Responsible AI Guidance for the Public Service aims to help agencies adopt generative AI systems in ways that are safe, transparent and responsible.

So it's about balancing potential benefits with associated risks. Although it's been well received by some, there's still some hesitation from others. AI is shaping the future of New Zealand's public sector – but is it a smart move or a risky gamble? To discuss this further, we're joined today by Stuart MacKinnon, Strategic Advisor at Analysis One. Welcome Stuart.

[Stuart 3:17]

Thank you for having me. Yes, it's an exciting topic.

[Shannon 3:21]

Yeah, great one to kick off the year.

So now the framework takes, and I quote, a "light-touch, proportionate and risk-based approach". I'm glad I've got you here to translate for us. What does it actually mean? What do you think that means in practice, and is it the right approach for New Zealand?

[Stuart 3:44]

Well, the phrase seems to have come out of some advice from MBIE to Cabinet around the middle of last year. And I guess, well, firstly, MBIE were looking at AI from an economic perspective, and I guess they were holding that in contrast to a more heavy-handed regulatory approach, which comes with its own issues.

Of course, what does it mean in practice? I think we really need to have a handle on what AI risk is and what proportionate mitigations are. So "light-touch, proportionate, risk-based" sounds good, but what do we mean by risk?

And AI risk is multi-dimensional, so it's not easy to rank AI initiatives on a scale of 'one to ten' in terms of risk. They impact different people, they vary in terms of how autonomous they are, etc. So, without that, I fear that we may only do the first part of it. So, is it the right approach? It sounds very pragmatic, but there's always that risk that we do the first part right, but don't get to the "proportionate and risk-based" part.
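To make that multi-dimensional point concrete, here's a minimal sketch in Python. The dimensions and scoring are invented for illustration – they are not taken from the framework:

```python
from dataclasses import dataclass, astuple

@dataclass
class AIRiskProfile:
    """Illustrative dimensions only - not an official taxonomy."""
    affects_individuals: int  # 0-3: does it shape outcomes for specific people?
    autonomy: int             # 0-3: how much does it act without a human?
    reversibility: int        # 0-3: how hard are errors to detect and undo?
    data_sensitivity: int     # 0-3: how sensitive is the input data?

    def total(self) -> int:
        return sum(astuple(self))

# Two very different systems collapse to the same single score,
# which is exactly the problem with a 'one to ten' ranking:
drafting_assistant = AIRiskProfile(affects_individuals=0, autonomy=3,
                                   reversibility=1, data_sensitivity=0)
decision_support   = AIRiskProfile(affects_individuals=3, autonomy=0,
                                   reversibility=1, data_sensitivity=0)
assert drafting_assistant.total() == decision_support.total() == 4
```

A scalar total hides which dimension is driving the risk, and that is usually exactly the information a decision-maker needs.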

[Shannon 5:04]

Yeah, good point about the potential risks there, and I can see why we would take this approach. Last year, we talked a lot about productivity and how New Zealand has a pretty dismal record in that regard, and it's definitely an area that we can improve in, and this is one of the many tools that can help get us there. So I understand why we're taking that approach, but as you say, you do need to be able to balance that out.

So of course, AI can cover a vast range of things, from simple office automation to more complex uses, each with its own set of risks. What kind of worked examples would be most useful in strengthening guidance for public sector agencies?

[Stuart 5:52]

Well, I can use two examples, maybe, to illustrate the point. Imagine, say, the Department of Conservation investigating the use of AI in bird monitoring: putting microphones out in the bush and using AI to identify the birds that are making calls, and potentially how many of them there are. It sounds very clever, and it also sounds very low risk, even if it doesn't work at all – though I'm sure it would.

But take another example, where an agency might want to use AI, albeit skilfully, in some part of a process that helps them to determine a citizen's eligibility for some kind of government assistance or other service. You can immediately get that sense of a shift in risk. So I think that we need a range of examples, right from those very simple 'organising my inbox' kinds of things, right up to those areas that, even if they are skilfully implemented and well managed, still present a lot of 'perceived risk' to the public.
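One way to make such a range of examples actionable is to tie each one to a proportionate level of review. The sketch below is a thought experiment only – the dimension scores and review tiers are invented, not drawn from the guidance:

```python
# Scores run 0 (none) to 3 (high) on each illustrative dimension.
USE_CASES = {
    "acoustic bird monitoring": {
        "affects_individuals": 0, "autonomy": 1,
        "reversibility": 0, "data_sensitivity": 0,
    },
    "eligibility decision support": {
        "affects_individuals": 3, "autonomy": 1,
        "reversibility": 2, "data_sensitivity": 3,
    },
}

def review_tier(profile: dict) -> str:
    """Key the level of scrutiny off the worst single dimension,
    not the average - one high-stakes dimension is enough."""
    worst = max(profile.values())
    if worst >= 3:
        return "full risk assessment and senior sign-off"
    if worst == 2:
        return "documented risk assessment"
    return "standard project governance"

for name, profile in USE_CASES.items():
    print(f"{name}: {review_tier(profile)}")
```

Under this toy rule the bird monitoring case sails through standard governance, while the eligibility case triggers the heaviest scrutiny – matching the intuition in Stuart's two examples.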

[Shannon 7:05]

Yeah, I think that's a really good point there. And you know, even for quite simple things, it's how you use it and how much you rely on it. So, you know, in the recruitment field, we'll say, yes, definitely use it to take some of the legwork out of things, but don't just take the answer whole and, you know, put it in your resume or your application, because it's going to backfire. So, yeah, lots of things to take into account there.

Do you think there are areas in the framework that could lead to potential challenges for agencies?

[Stuart 7:47]

Yes, I think the first challenge would just be to navigate the framework. So in the absence of those examples, and in the absence of a kind of analysis of the types of multi-dimensional risk we're dealing with, an agency chief executive or technology leader would need to really absorb a lot of information to intuit where they're at contextually, and what they might need to do next. So there are challenges around that.

And I think also, putting myself in their position, humbly, they might ask the question: what's genuinely new here? Agencies already have existing obligations around privacy, security, and reliability. So they might be asking themselves: where can I leverage all of my existing expertise, and where might I need to seek specialist advice?

And I think it's great to remind agencies of their broader, standard, bread-and-butter obligations, but it's a matter of lifting out those areas where AI makes a qualitative difference.

[Shannon 9:03]

Yeah, absolutely. We've got a comment from Facebook, from Nivla, around, you know, the ability to filter out incorrect data, and I guess that's a point around you still need that human aspect – you can't leave it all up to AI. So that skilful input from your good humans is really important as well.

What about AI and its potential to enhance transparency and accountability – or, perhaps the reverse of that, its potential to introduce new risks in that area?

[Stuart 9:44]

Absolutely. I could see AI supporting integrity and transparency and accountability, but I could also see it obfuscate those things.

But putting my optimist hat on for a moment, an area I've been involved in is investment monitoring for large, high-risk programmes throughout the life cycle of the programme. And, you know, thinking of central agencies like Treasury, those things are monitored through a Gateway process. It's human experts, and sometimes mountains of complex documents, and sometimes also limited time to work through those. So I think skilful application of AI – now your commenter made a really important point.

I'm not talking about taking ChatGPT, or something similar, uploading documents to it like some magic black box and saying, "What do you think, magic machine?" Quite different. There are very skilled techniques, very advanced techniques in the area of things like Retrieval Augmented Generation, which are complex to implement, but can really aid humans who are faced with large volumes of complex data.
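For readers unfamiliar with the term: Retrieval Augmented Generation (RAG) grounds a model's answers in documents you supply – retrieve the most relevant passages first, then instruct the model to answer only from them, with citations. Here is a stripped-down sketch of the pattern, where the keyword-overlap retriever and the `call_llm` stub are deliberately naive stand-ins for a real embedding index and model API:

```python
def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank chunks by word overlap with the query.
    A real system would use vector embeddings and a proper index."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError("wire up your model provider here")

def answer(query: str, chunks: list[str]) -> str:
    sources = retrieve(query, chunks)
    # Instructing the model to answer only from the retrieved text is
    # what lets a reviewer trace every claim back to a source document.
    prompt = ("Answer using ONLY the numbered sources below, "
              "citing source numbers for every claim.\n\n"
              + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
              + f"\n\nQuestion: {query}")
    return call_llm(prompt)
```

The point for a monitoring context is that every answer arrives with the passages that produced it, so a human reviewer can check the model's claims against the source documents.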

The Auditor-General is looking at new legislation around transparency and accountability in terms of performance reporting. I could see ways in which AI could support that.

On the flip side, if you have people using AI in that magic black box way – you know, if you upload your job description and your CV and say, "AI, can you please help me pull this together?" – it can be brilliant, but it's quite different when you're working with government policy. In that setting, you have no idea what's going on inside the machine. You're really at its mercy, in fact. So we need techniques that expose the inner workings and allow logging, auditing, and so on.
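One concrete shape those techniques can take is an audit wrapper: every model call is recorded with its inputs, outputs, and enough metadata to reconstruct later what the system did. A minimal sketch – `model.generate` is a hypothetical interface, not any particular vendor's API:

```python
import hashlib
import json
from datetime import datetime, timezone

def audited_generate(model, prompt: str, log_path: str = "ai_audit.jsonl") -> str:
    """Call a model and append a full record of the exchange to an audit log.
    `model.generate` is a hypothetical interface, not any vendor's real API."""
    output = model.generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": getattr(model, "name", "unknown"),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line
    return output
```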

In addition, you know, there are certain types of AI – proprietary AI – where there's 'secret sauce' that's not examinable. If we are reliant on an external vendor who's not exposing their code or their model weights, then that's a problem.

Also, if something goes wrong, who's liable there? Is it the person who designed the AI, or is it the user? So complexity all around.

But a final point is, transparency is also transparency about how we use AI. So I'll give you an example. Yesterday, I went to a US website where, within three clicks, I could download a machine-readable, detailed database of all of the federal agencies' AI use cases. That's been in place since 2020. So I think, in terms of transparency, we also need to be transparent about how we're using AI.

[Shannon 13:02]

Yeah, excellent point. And from LinkedIn, we've had Tim pick up a point there, saying that the real challenge for the public sector, and in general, is the exponential speed of AI development. So you know, we're talking about it specifically in government and the frameworks being built around that, but of course it's everywhere, and it's happening fast, and some people adopt it well, others not so much. So I think that's a good point there – this is probably the worst AI that you'll ever see; it's only going to get better. But the way that we use it, and the lessons that we learn along the way – sometimes the hard way – will be really important too.

So Stuart, what would you like to see evolve in the next, say, year or two, in regards to AI adoption and the framework around that?

[Stuart 14:02]

Well, I'd really like us to address the issue of transparency. I think we need to prioritise this. If our goal is to expand the use of AI carefully and safely, to improve the efficiency and the quality of services we can offer the public, then we really need to be transparent about that. And we need social licence to be able to increase our use of AI – and the path to social licence, I like to say, passes through the valley of transparency. There are no shortcuts. So that's something I would say at the moment.

The framework uses words like "should" – it says "Agencies should be transparent". I'd like to see that evolve very rapidly to "Agencies must be transparent – and this is the file format for you to upload your detailed AI use cases into the central public register". So that's my view there.
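Mechanically, a "must be transparent" regime could be as simple as an agreed machine-readable record format that agencies upload to the register. The fields below are invented for illustration – no such schema has been published:

```python
import json

# Hypothetical record for a central public AI use-case register.
# Field names are invented for illustration; no official schema exists yet.
use_case = {
    "agency": "Department of Conservation",
    "title": "Acoustic bird monitoring",
    "purpose": "Identify bird species and counts from field recordings",
    "ai_technique": "audio classification",
    "affects_individuals": False,
    "human_oversight": "rangers review all flagged detections",
    "status": "pilot",
}

print(json.dumps(use_case, indent=2))
```

A format like this is what makes the US-style "three clicks to a full database" experience possible.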

I know that there are good relationships in New Zealand with the Australian AI Assurance Pilot that they're testing over there. Firstly, I love that they're testing it, rather than just going big-bang. And with New Zealand's relationship with those Australian federal agencies, we will have opportunities to take lessons learned from that.

And I'm also aware that, you know, this is ongoing work, and there's additional advice on its way, and I look forward to seeing what that advice is, and helping people navigate that.

[Shannon 15:36]

Yeah, fantastic. Good things to come. Now, really interesting topic, and I'm sure people are still keen to engage. So if you're watching this later, feel free to still join the conversation. And Stuart's details, of course, along with our own, will be in the links. Thanks so much, Stuart, that was really insightful.

[Stuart 16:01]

Thank you very much.

• • •

This conversation inspired a simple model for early discussion. If you would like to read more about that, it's available here.