
Trust in the age of AI

"During my 14 years in government, I spent most of my time building new or radically improving existing intelligence and security capabilities and running them.”

As a child who loved science growing up in the 1970s, I saw using artificial intelligence to do specific, often complex jobs as a logical next step for society.

 

When I look back, I smile at the enthusiasm of my younger self for a technology that was still many years away.

 

Today when I look at the intense focus on AI globally – and how it can be used and abused – it makes me reflect on my own journey with the technology.

 

A passion of mine is applying AI to security, to ensure the large information systems that connect our society remain robust and protected.

 

But behind this is an understanding we need to deal holistically with questions of trust and trustworthiness and to take the issues of risk and our responsibilities to others seriously.

 

From undergraduate student to innovator

 

My passion for AI was sparked at university in the late 1980s studying computer science and mathematics. While I was doing the normal undergraduate things of learning to drive, working part-time jobs and hanging out with friends, I also had my special project playing around with AI.

 

In 1989 I created a so-called “digital twin” of a zinc processing plant that was the first of its kind in the world.

 

Zinc has always had great value, used for everything from galvanised iron to paints and even medicines. The process to extract zinc requires a lot of energy and, at that time, was also labour-intensive, with many potential points of failure.

 

Solving this problem was a good place to start my journey. Working from hundreds of pages of complex formulas, I created a virtual thermodynamic simulation of the plant.

 

It was basically a digital working model to simulate the flow of energy in extracting minerals and precious metals from raw earth ore.

In my fourth year of university I bolted on AI to do some predictive modelling. This meant we could simulate a variety of conditions and improvements to the plant, such as what would happen if we made specific changes in the environment.

We were able to predict what the energy consumption would be if the conditions were altered, as well as what might go wrong (such as the temperature or pressure being too high).
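The idea behind that kind of "what-if" modelling can be sketched in a few lines. This is a purely illustrative toy, not the original system: the `energy_use` formula, the variable names and the safety thresholds are all invented for the example, standing in for what were hundreds of pages of real thermodynamic formulas.

```python
def energy_use(temp_c: float, pressure_kpa: float) -> float:
    """Toy stand-in for a thermodynamic model: energy (MJ) per tonne of ore.

    The coefficients are hypothetical; a real plant model would be far
    more complex and derived from the underlying chemistry and physics.
    """
    return 900 + 2.5 * temp_c + 0.8 * pressure_kpa


def simulate(conditions):
    """Run "what-if" scenarios and flag conditions outside safe limits."""
    results = []
    for temp_c, pressure_kpa in conditions:
        # Assumed safety thresholds, chosen only for illustration.
        warning = temp_c > 95 or pressure_kpa > 250
        results.append((energy_use(temp_c, pressure_kpa), warning))
    return results


# Two hypothetical operating scenarios: one normal, one pushed too hard.
scenarios = [(80, 200), (100, 260)]
for (temp, kpa), (energy, warn) in zip(scenarios, simulate(scenarios)):
    status = "CHECK LIMITS" if warn else "ok"
    print(f"{temp} C, {kpa} kPa -> {energy:.0f} MJ/t ({status})")
```

The value of even a crude model like this is that operators can test a change on the digital twin first and see both the predicted energy cost and any limit breaches before touching the real plant.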

 

In a processing plant, once things go wrong it can take a lot of effort to correct. But by simulating, the operators could take preventative action.

 

That was in 1991. Little did I realise that the “digital twin” and the AI model were both world firsts.

 

After university I worked at the CSIRO for a few years, then received a scholarship from Microsoft to do a PhD, during which I built what was then the first AI-generated website in the world.

After about 15 years in research and development, using AI and working with or creating startup companies, I decided to move into law enforcement and national security. I wanted to use my experience to help make the world safer.

 

National security

 

In the mid-2010s, cyber security became a real concern. Until then it was largely seen as this ‘backroom IT thing’ – not considered that interesting or important. But all of a sudden that changed.

 

I spent most of my 14 years in government building new or improving existing intelligence and security capabilities. This included new intelligence and analytics platforms and partnerships at financial crime agency AUSTRAC and the Australian Crime Commission; establishing Cyber Security NSW as the New South Wales Government’s inaugural Government Chief Information Security Officer; a new Data and Analytics Division at Services Australia as the inaugural Chief Data Officer; and a new Data Division at Defence as the inaugural Chief Data Integration Officer.

 

I also built up multiple human capabilities including international security and law enforcement partnerships.

 

Moving away from national security and defence was hard, as it was an amazing opportunity to contribute to national and global security.

 

I joined ANZ in August last year. What intrigued me was the bank’s purpose “to shape a world where communities and people thrive”. At first, I struggled to believe a bank could truly live this.

 

When I was being interviewed by Chief Executive Officer Shayne Elliott, I asked him about this and he spent a long time answering my questions about the purpose and how it applies in practice.

 

It showed me the purpose statement is real, not just something on paper or on the side of the building – it is lived and breathed by people across the organisation.

 

That clear, strong leadership from the top has generated a culture of wanting to do the right thing by our customers and by our communities.

 

My focus is making sure the security operating model is understood and owned by the entire organisation – to ensure everyone plays their role in keeping the bank and our customers secure.

One thing I immediately noticed at ANZ was the strong focus on compliance. It’s important when risks are stable – but even more critical in an unprecedented, fast-paced cyber security environment.

The challenge for us is making sure the security operating model is in place and is adaptive and flexible enough to deal with dramatic changes as they happen.

 

A trustworthy core

 

For a “technology expert”, why do I place ANZ’s purpose and values so centrally?

 

Because without these, we won’t know how to navigate current and emerging technology – we need to understand these as more than just concepts.

 

A lot of people talk about ethics and trust. Fundamentally, ethics is just one of the many pillars of trust – along with security, privacy, human rights, quality and fairness. How do we act in a trustworthy manner and make sure we earn and maintain this trust?

 

You may be ethical, but if you do the wrong thing through poor decision-making, a lack of security or a failure to engage appropriately with risk, you will lose trust.

 

In this modern world, you can also lose trust by not innovating, by being too slow to innovate, or by moving too fast and cutting corners. This may mean not providing your customers with what they need. Or you may not communicate effectively, which can also erode trust.

At the end of the day, it is far easier to lose trust than to gain it – it takes years to earn and just seconds to lose. What we do is ultimately about earning and maintaining trust – proving we’re worthy of it by using the best technology and enabling secure business transformation.

In Defence they talk about the “airworthiness” and “seaworthiness” of their capabilities. In the same way we need to be thinking about the trustworthiness of the systems we create.

 

Dr Maria Milosavljevic is Chief Information Security Officer, ANZ
