Part one of a two-part feature. In an insightful conversation with Professor James Maclaurin from the University of Otago we look at New Zealand’s AI landscape, discussing the findings of two reports addressing the government use of AI and the impact of AI on the future of work.
Plus details of our next exclusive event taking place on August 26th.
Part One of a Two-Part Feature Series.
When it comes to technology, it seems hard to avoid the topic of ‘AI’.
AI and smart technologies promise to enhance commerce, government and everyday life. And the looming threat often touted is the chance of being left behind by the ‘AI revolution’.
This article draws on a podcast discussion with James Maclaurin.
Maclaurin is a principal investigator on the Artificial Intelligence and Law in New Zealand Project and co-director of the University of Otago’s Centre for Artificial Intelligence and Public Policy. He is the keynote speaker at our next DX Seminar event, New Zealand’s AI Future, taking place virtually on August 26th, where James will speak to more of the detail in the reports and engage in an interactive audience discussion on the wider topic of how AI will impact people, organisations and society.
The two reports our conversation was based on were produced as part of the Artificial Intelligence and Law in New Zealand Project, which explores AI specifically in a New Zealand context. They are: Government Use of Artificial Intelligence in New Zealand and The Impact of Artificial Intelligence on Jobs and Work in New Zealand.
Both reports are published by The University of Otago and were funded by the New Zealand Law Foundation as part of the Internet Law and Policy Project.
The reports address a wide range of topics and questions in detail and necessarily add greater context to the very high-level points summarised in this article. It’s well worth looking at both reports.
Government use of AI in New Zealand.
In 2018 the New Zealand government undertook a review of how government agencies use algorithms resulting in the Algorithm Assessment Report. Fourteen government agencies participated in the review including the Accident and Compensation Corporation (ACC), Ministry of Health (MoH), Ministry of Justice (MoJ), New Zealand Police and Oranga Tamariki.
The report is not exhaustive: it does not audit the use of all ‘AI’ in its varying forms, its use at all levels from strategic applications to operational processing, or the processes, controls and practices in place to manage ‘AI’ initiatives. The report does, however, highlight some of the algorithms in use and gives a perspective on the breadth of their application in New Zealand.
For example, the Ministry of Education uses algorithms to improve the design of more than 2,200 school bus routes nationwide based on factors like eligibility and efficiency. The use of this algorithm alone is estimated to save the taxpayer $20 million a year.
Work and Income’s youth service uses an algorithm to help identify school leavers who may be at greater risk of long-term unemployment and proactively offers them support in the form of training opportunities. At the time the report was published, more than 60,000 young people had accepted assistance, and more than one third of those people had been offered the service through the algorithm that automated the referral system.
Other examples include ‘AI’ that assesses the risk of reconviction by a convicted individual (MoJ), or the clinical prioritisation of elective health services to patients (MoH).
Why is public sector use of 'AI' relevant to the private sector?
If you were to hold the view (in general terms) that public sector organisations are slow to move and slow to adopt, I would agree with you. And so it came as a surprise to me when Maclaurin shared his view that the New Zealand public sector is proactively using ‘AI’ in the provisioning of services and, in a global context, is doing so in an effective way.
Maclaurin was a principal investigator for both research projects, and in his words the choice was made to focus on government use because it was easier to obtain information compared to the private sector.
While the private and public sectors have many unique differences and idiosyncrasies, both face the same broad challenges in relation to AI, and everyone will become subject to the same ‘AI’ policies as legislation progressively moves forward.
The view that our government is 'relatively effective' in their use of AI.
The New Zealand Government uses AI in many contexts and Maclaurin makes the very general comment that New Zealand is relatively effective in the ways we use AI, how we are mitigating risks and how we ensure that we get full value.
Maclaurin outlines that the current use of AI was largely established by a policy direction created by the previous government called Social Investment. “The basis of the policy direction was if we can build models of the effects of various types of support for people, we can support people more efficiently and identify and prioritise the people who really need that support. From the outset, AI in government was doing something in a policy sense.”
This comment very strongly aligns with the number one piece of advice we often hear from technology experts in the AI sector: start with the problem, not with the technology or the data. And ensure the problem you are solving aligns with strategic drivers.
We have also stumbled in our application of AI. Maclaurin points out that arms of government have appeared in the media for the wrong reasons over failures or mistakes in their use of AI. The point of that comment is not to highlight that there were errors or failings, but rather that the mistakes were not wasted.
According to the Government Use of Artificial Intelligence in New Zealand report, the Ministry of Social Development (MSD) developed the Privacy, Human Rights and Ethics Framework (“PHRaE”) in conjunction with Professor Tim Dare from Auckland University, and The University of Otago has helped Statistics New Zealand to develop the principles for safe and effective use of data analytics.
By and large these could be classified as risk management frameworks, but they are also tools which ensure AI is being used for the right reasons and support the identification and mitigation of unintended consequences from AI.
A point worth drawing attention to is that most AI used by our government has been custom built, according to Maclaurin. Each ‘AI’ was built from the outset as a round peg for a round hole, and to some degree there is a case to say that a custom solution provides a stronger platform for success than attempting to create a fit from an ‘out of the box’ solution. I think this is something to be conscious of. The alternative approach of seeking ‘out of the box’ solutions comes with benefits, but also with a new range of considerations.
Maclaurin is also the Co-Director of The University of Otago’s Centre for Artificial Intelligence and Public Policy (CAIPP). The CAIPP draws on the voices and perspectives of individuals interested in all aspects of AI, from its creation and deployment to its impact on society and the ethics of AI. The government has the benefit of drawing on the University of Otago’s CAIPP, and any other centres that may exist, but I also think the private sector could learn from this example.
Perhaps illustrating the potential complexity and impact, at the time the Government Use of Artificial Intelligence in New Zealand report was published, ‘The Australian Human Rights Commission (“AHRC”) is currently investigating human rights and technology. In July 2018, the AHRC released a discussion paper asking, among other things, whether “Australia needs a better system of governance to harness the benefits of innovation using AI and other new technologies while effectively addressing threats to our human rights”.’ (page 65)
Regulation and AI.
According to Maclaurin, we have guidelines but at a legislative level we have no specific regulation concerning the use or application of AI.
The Government Use of Artificial Intelligence in New Zealand report rightfully points out that “legislation is only one of the tools available to address the concerns around algorithms. Regulatory bodies or agencies might also have an important role to play.” (page 62)
The key message is that the application of AI in the provisioning of services, both public and private, is largely unregulated, and given the issues we will have to address in the large-scale adoption and application of AI technologies, it is very likely that we will see a mix of initiatives to provide governance at a societal level.
The same report also extends to say that “A review of regulatory approaches to AI in other jurisdictions reveals a very diverse approach. By December 2018, 26 countries (including the European Commission) had developed some form of national AI strategy or undertaken a national assessment of AI implications including France, Germany, the United Kingdom, Japan, China, Russia, Kenya, India, South Korea, Sweden, the United States of America and Singapore. The number of such strategies is rapidly increasing; no country has yet regulated AI in general although the European Union's General Data Protection Rules (GDPR) are often quoted as a partial solution to the problem.”
At a governance level, attention to AI is rapidly growing. “In 2017, only seven countries had national AI strategies, whereas by the end of 2018, this number had jumped to 26, including some multi-lateral strategies.” (page 63)
But trying to regulate ‘AI’ is also not an easy task. “One of the problems is that every sector of the economy is so different and unique, meaning that it is unlikely one big law will be able to govern the use of AI by New Zealand businesses”, according to Maclaurin.
What is clear is that we can most likely expect a changing landscape as governance of large-scale AI increases. That governance is likely to take the form of a mix of approaches and initiatives, and it will be an evolving journey.
Transparency in AI.
Maclaurin raised an interesting example – food labelling.
When we pick up any processed food items off a shelf, they have a label transparently providing all the ingredients in that item and the nutritional value of the item. More recent steps have also been taken to develop a visual system that makes it easier for consumers to make informed choices.
Food is fundamental to society and there is mature legislation, policies and systems in place to monitor and govern the food production industry.
And Maclaurin rightfully applies this context to AI products in the scenario of a (hypothetical) AI-enabled or internet-connected heater. “There is no label telling you what is actually going on. What data is being collected? Where is it being stored? What will it be used for? Who will have access to it? How will your data be secured?”
This example of transparency directly highlights many of the issues we have alluded to, particularly in the context of the consumer of ‘AI’.
The future of work, large-scale adoption scenarios and the choices that will be faced by New Zealand society: these are massive topics. We were never going to get to them in one article, or even in one podcast conversation.
I will write a second article which will touch on the competitive impacts of AI and ‘AI monopolies’, large-scale adoption scenarios and the trade-off between benefit and caution that can be created when AI ‘goes wrong’.
Professor James Maclaurin has made The Impact of Artificial Intelligence on Jobs and Work in New Zealand and Government Use of Artificial Intelligence in New Zealand freely available. I would encourage you to take the time to read them. They contain great information, perspectives and views that can help shape all our thinking.
If you want to hear more, Maclaurin will be speaking on these topics and more, alongside an engaging audience-led conversation, at our August 26th virtual event, New Zealand's AI Future. See you there.