James Gauci On Strategies For Aligning Artificial Intelligence And Digital Technology With Humanity

James is CEO and founder of Ethē, an Australian AI governance platform. Ethē helps organisations establish responsible AI governance in days, not months, and is built to support international standards in AI risk, safety, and ethics management.

Ethē is an all-in-one solution to achieving and maintaining compliance with ISO/IEC 42001 and Australian AI Safety Standards.

James is also Managing Director and founder of Cadent, an ethical technology studio and certified social enterprise working to align technology to humanity. Cadent has delivered tech consulting and software work for the Australian Government, Queensland Government, not-for-profits, and private companies. In 2023 Cadent donated more than 50% of its profits to AI safety and standards research and development through The Gradient Institute and IEEE SSIT.

James has enjoyed more than a decade in technology roles, holds a Bachelor of Psychological Science and an MBA, and is Chair of the Australian national committee of the IEEE SSIT (Society on Social Implications of Technology).

 

James discusses how the growth of Artificial Intelligence presents unique possibilities for the future of values-aligned business, and how failing to effectively manage this technology will negatively impact individuals, societies and the environment.

 

Highlights from the interview (listen to the podcast for full details)

[Indio Myles] - To start off, can you please share a bit about your background and what led to your career in ethical digital technologies?

[James Gauci] - It’s somewhat of a meandering journey as you can imagine, but I'll try to condense it as best as I can! I'm a psychology graduate, so I spent some time in the arts and culture sector where I got a good grounding in human behaviour and statistics.

As a graduate, I found myself working in an organisation that needed a significant capability uplift in using digital technology. I filled that gap, and that's where my digital technology career started.

Fast forward a few years, I was leading software teams in a delivery management and product management capacity, mostly in professional services. I was working with a lot of different organisations around the world like MTV and Spotify, but also a lot of large organisations locally here in Brisbane, Australia.

That all culminated with spending time here in Queensland at PPQ, where I was the head of digital and technology. That role gave me a great grounding in infrastructure and cybersecurity alongside the software work I was doing. Throughout that entire journey, I never found full affinity between myself and the work I was doing.

When my time at PPQ ended, there was an opportunity to work non-exclusively as a contractor on a project with the Federal Government. It presented the chance to work in the business I always wanted to work in by creating that business, and that business became Cadent.

That was the start of something special for me, because it allowed me to live my authentic self through my business, which is values-aligned and focused on making sure technology does the best for humanity rather than the other way around.

As the founder of Cadent, can you share more about this enterprise, its core values, and how it’s aligning technology to humanity?

The way we live out our values is both through the way we do our work and through the output of our work. On the one hand we call ourselves an ethical technology studio. What the heck does that mean?

Everybody's ethics are different, so for us what that means is that we're accessible, inclusive, and secure, all by design. Because we take a by-design approach, it doesn't cost more than it otherwise would, and we prioritise using those skills and elements of the craft.

What we find is this has the effect of reducing the total cost of ownership for technology solutions. It also increases their effectiveness for broader groups of people. This includes classically underserved communities like people with disabilities and minority communities who tend to get sidelined in society by digital experiences in developed countries like ours.

By making sure they're considered at every step of the journey, we're able to achieve a lot of efficiencies and realise a lot of effectiveness. The other side of our impact is through donations; it's not just the how, it's the what.

For our first year of operations, we did some calculations of the in-kind and low-bono work and the cash donations we've given to organisations. We calculated that we ended up donating 273% of our profits, and I don't know if that makes me a good or a bad CEO! That's something we were proud of, the fact we were able to make that impact.

The keystone of that giving aspect was a donation to the Gradient Institute, and anybody who's been paying attention to the national conversation in AI may have seen the Gradient Institute’s name. They're the leading independent research institute on responsible AI in this country.

They're a not-for-profit and charity entity which consults with the government to help them formulate their regulations, strategy and approach to responsible AI. Our cash donation supported a couple of Honours students to do some original research into responsible AI. We were very proud of this.

As CEO and founder of AI governance platform Ethē, can you discuss a bit more about the platform and how it is benefiting users?

Ethē is an interesting product, and it was somewhat opportunistic in the way we started to pursue it.

We've always been interested in making sure exponential technologies like AI are serving humanity more than they're not. We want harms to be eliminated wherever possible and any risks reduced.

Something that happened late last year was the advent of the Australian Standard for AI Management Systems. ISO standards are common in the workplace: ISO 9001, for example, covers quality, while ISO 27001 covers information security.

ISO 42001 was brought out in December of last year, and Australia adopted it immediately to be the Australian standard for AI management systems. In a nutshell, it's an approach to help you manage the inventory of AI systems in your organisation, related policies, risks, and the impact of those systems on people, groups, society and the environment.

It's an out-of-the-box set of controls you can implement in your organisation to manage all those aspects. As somebody who's worked with ISO standards extensively in the past, I knew how they could go wrong, become expensive and stop you from doing good work. I was also lucky enough to have been in environments where ISO standards were used as an enabler of innovation and good practices.

Rather than being bureaucratic and compliance based, these ISO standards were forward thinking. I wanted to create a platform that helped organisations do the latter, to use governance as a lever for greater effectiveness and efficiency.
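To make the idea of an AI system inventory a little more concrete, here is a purely hypothetical sketch, not a structure prescribed by ISO/IEC 42001 or used by Ethē, of how a small organisation might record one system alongside its related policies, risks, and impacted groups, written in Python for illustration.

```python
# A hypothetical AI system inventory entry; assumes a simple in-house
# register, not any format mandated by ISO/IEC 42001 or by Ethē.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str                 # e.g. "Customer support chatbot"
    purpose: str              # what the system is used for
    owner: str                # accountable person or team
    policies: list[str] = field(default_factory=list)         # related internal policies
    risks: list[str] = field(default_factory=list)            # identified risks
    impacted_groups: list[str] = field(default_factory=list)  # people, groups, society, environment


# Example entry in a small organisation's register.
register = [
    AISystemRecord(
        name="Customer support chatbot",
        purpose="Answer routine customer queries",
        owner="Head of Operations",
        policies=["Acceptable AI use policy"],
        risks=["Incorrect advice", "Exposure of customer data"],
        impacted_groups=["Customers", "Support staff"],
    )
]
```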

What problems does AI present to business leaders, organisations, individuals and the planet?

Those are great questions, ones we're all grappling with in one way or another. At the end of the day, we've spent the better part of the last 20 years coming to grips with the way technologies (specifically digital technologies) are pervading our lives.

You've asked me a little bit about the future, but to answer that question I'd say we don't need to look forward, we just need to look back. Even as recently as five years ago, people were realising that cybersecurity, individual privacy and rights were important to our lives.

Then there was 10 years ago, when social media was starting to change the way we had conversations as friendship groups, but also at the societal level. Then you've got The Great Hack and the Cambridge Analytica scandal, where the technologies at play and contractor organisations were essentially using the tools of big tech companies to sway public opinion and overthrow governments.

This isn't the stuff of fantasy; this is literally happening in our lifetimes. When you consider how social media and technology are being used to influence political aspects of a country's existence or to infringe on our privacy and individual rights, these ethical conundrums can also be overlaid onto AI.

Imagine AI-powered social media; in fact, social media has been AI-powered for a long time. Social media has been using deep neural nets like the ones we're using today for ChatGPT and so on to optimise for profit and engagement with content.

That's part of the reason why social media seems like such a dramatic and polarising place: as a product it wants you to react in an outsized way so that you'll click on the next thing because you're incensed, sad, or interested in it. You'll then follow that rabbit hole where they'll serve you more advertisements. When you consider AI through that lens, we're essentially building these systems at scale now to generate content.

The way those systems generate content bears questioning; what content has it been trained on to give us these outputs? What optimisations are companies making to determine whether it creates a useful output? What are the parameters they're optimising for? If we look at the way big tech has behaved over the last 15 years, it stands to reason they're going to optimise for profit.

They've got a fiduciary responsibility to do so, they've got a responsibility to their shareholders to optimise for profit. If they're not doing it, they're not doing their job, and they might find themselves in hot water legally.

There are a few challenging incentive structures in place, and so it relies on people like us (the socially conscious aspect of capitalism) to say, "look we can go ahead and do these things, but how might we avoid all of the worst aspects of what we're talking about doing here?”

How can we think beyond that quarterly or annual financial cycle, and think about what organisation we want to be over the next 5-10 or even 50 years? What values do we have as an organisation, so that when we're implementing decision-making systems those decisions will align more closely to the human values we've imbued them with?

That's a real challenge, and it seems nebulous and esoteric (which it is)! But that's where the standard comes in: it tries to put a bit of structure around all these hard questions and point you towards managing the biggest risks. This is what gets me excited about the prospect of a product like Ethē.

Where are there opportunities to ensure we're implementing and growing AI effectively while minimising its negative impacts to the people and planet?

All the good-practice aspects of procurement and systems development, like human-centred design and ethical supply chain risk management, apply in this scenario. Existing laws also apply.

Sometimes with AI it feels like we're building the plane as we're taking off, and that makes sense because it's new, exciting, and feels like a different class of technology or a different way of thinking about work. We're having to rethink the way we interface with our computers to take advantage of Stable Diffusion, Google Gemini or whatever AI you have.

What I would suggest is we should first consider how the current aspects of the environment apply to our scenario and the tools we're using. We should also consider what we're asking the system to do and whether what we're asking that system to do is consistent with our own values.

A pretty good example is OpenAI: they have a third-party contractor based in Kenya paying people $2 an hour to label content. Now, that may be a legal amount of money to pay someone in Kenya, and there would probably be a lot of supply chain professionals, even ethical supply chain professionals, who might come out and say that this is understandable.

They might say because it's a different economy it's an acceptable business situation, but what I would probably suggest is it infringes on some modern slavery policy aspects, for organisations in Australia at the very least.

A lot of people come out strong on modern slavery, but if you're unable to adequately interrogate your suppliers, then you're probably going to miss a trick and find yourself in some ethical or even legal hot water. The thing is you’re not alone either; there are smart people who've been thinking about this stuff for a long time. In this podcast for example, you're putting me in the position of being an expert and asking me these questions, but I don't even consider myself an expert!

These are such challenging areas, and I'm just lucky I've been able to participate in the conversation for the last five years and that I got the jump on it a little bit prior to the advent of ChatGPT, where it's now become a mainstream product.

There are a lot of smart people who've been working for a long time across academia, industry and government to make sure AI works in the interests of humanity. There are lots of standards, ways of thinking, and hardworking people out there who are advocating for these things, so you should never feel alone.

We're all out there working hard, and as another vote of confidence, I've never seen governments move as quickly on an emerging technology issue as they have with AI. This should be a source of hope, and hope is important to have in situations where it all feels complicated and uncertain. There are good grounds for having hope in this situation.

What advice would you give to an entrepreneur who wants to harness AI responsibly and ethically?

I think the best way to answer that question is to tell you how we do it, because we are a socially conscious certified social enterprise operating in technology and using AI every day. We have robust internal conversations about how and when we should use AI, and it must start with your policy.

Like you would do with your own individual policy as a small business owner or a social entrepreneur, understand your values and be explicit about those. Test any of your adoption or usage of AI against those values to see whether you're infringing on your own values and whether you're happy with that.

In the tech industry you often hear the term ‘wetware’, which is how people refer to humans. What I’m talking about is a very "wetware" consideration, and this gathering or scoping process for the use of AI is so much about who we are and what we value. It becomes very human very quickly, and so we're learning about human centricity repeatedly in tech.

We use Gemini internally, but that's because Google has all our data anyway. We're already using Google Workspace and their tools for our regular workflows, so we're not giving anything extra to them. We're also using the paid versions of those tools, and what you'll find is that when you pay for a generative AI like Gemini or ChatGPT, it gives you the option to prevent the information you put into the system from being used to train those models.

For example, if I was using the free version of ChatGPT and I put all my payroll information and email addresses associated with that into the prompt for analysis, what you'll find is that data will be used to train the next version of the model.

You can't guarantee some person won’t come along to that model and pull that information out of it. It’s a low likelihood, but it is a likelihood. See if there are versions of the tools you're considering that allow you to opt out of having the information you submit used to train those models.

Finally, in a better world we might have locally deployed AI systems that only act within our interests, ones that are fully controllable by us. This is already happening right now; larger organisations are deploying AI in their own environments so they can fully control all aspects. That gives you a lot of advantages, but it also comes with a lot of costs now.

It's not feasible for a lot of people. It's something an organisation like ours, which works with technology all the time, wants to do, but it's cost-prohibitive because of our small size.

If you have the resources, absolutely do that. If you don't, just be careful of the features of the tools you're using, and be mindful of uploading any confidential or sensitive information or IP, because it could find its way into models around the place.
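For readers wondering what "locally deployed" can look like at a small scale, the sketch below is one minimal, hypothetical example (not the setup James describes): it uses the open-source Hugging Face transformers library with a small open-weight model so that prompts and outputs stay on your own machine rather than being sent to a third-party service.

```python
# A minimal sketch of locally run text generation, assuming the Hugging Face
# transformers library (with a backend such as PyTorch) is installed and
# using "distilgpt2" purely as a small illustrative open-weight model.
from transformers import pipeline

# The model weights are downloaded once, then inference runs entirely locally.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Our internal AI usage policy says that"
result = generator(prompt, max_new_tokens=50, num_return_sequences=1)

# Nothing typed into the prompt leaves your machine or is used to train a vendor's model.
print(result[0]["generated_text"])
```

Larger organisations take the same idea much further with bigger models and dedicated infrastructure, which is where the cost James mentions comes in.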

What inspiring projects or initiatives have you come across creating a positive change?

I want to give a special shout out to all the people working in the public sector on the ethical, responsible and safe adoption of AI. There's been a strong level of cross-party alignment and healthy conversation, debate and consultation with the community over the last couple of years. Hats should go off to state and federal governments around the country.

For example, there was a National AI Assurance Framework document that came out recently, and that's been the product of a significant amount of work across state governments around the country in consultation with the Federal Government on what Australia's approach to ethical and responsible AI adoption looks like in the public sector.

It's a consistent message, and while it doesn't make a lot of the decisions for you, it is a guiding document. If you're interested in what that looks like in the public sector and how that might manifest as policy or regulation over the next 12 months in Australia, it's an important landmark document to get an idea of what the public sector is thinking about.

There are a lot of smart people working hard, so shout out to them. There is the Gradient Institute as well, I mean there are many reasons why we picked them to receive our cash donations as a social enterprise. The main one is they are the only independent research institute (or at least a year ago they were) that had a charity arm within their organisation.

They’re doing fantastic work, and they've got a great team. A lot of them come from CSIRO's Data61, the AI industry and academia as well, so it's this great melting pot of the AI sector coming together and implementing responsible AI in the public and private sectors. The fact they have the strength and personnel to take their mission and carry it out was a strong determining factor in where our money ended up going. Big shout out to Bill, Tiberio and the team.

They're doing some great work and I’m looking forward to seeing the results of that work over the coming couple of years.

To finish off, what books or resources would you recommend to our audience?

These are probably going to sound a little bit left field, but I’m a huge advocate for the book Mindset by Carol Dweck. This isn't about AI, it's more about humanity and getting into this frame of mind that it's okay and important not to know.

It's important to acknowledge that because it allows you to move forward, learn things, and make mistakes along the way. The book captures a few crucial aspects of what it means to be human, and it has helped a lot of people, myself included.

It was an important read for me in my mid-twenties to get into that growth mindset and become comfortable with the fact you're going to be wrong a lot, but it's what happens next that's important, not the fact that you're wrong.

The Lean Startup is an important book, and it tells you a lot about the way the contemporary tech sector, led by people in Silicon Valley, is thinking about solving problems and making their businesses successful. If you're ever wondering what they think about when they start ventures and make decisions, it's a short and accessible read.

Where it gets interesting is the author has gone on to publicly edit his own work in podcast appearances because he began to see the impact and unintended consequence of some of the ideas he had.

These days he's advocating for a social impact-orientated economy; he's trying to start up a social impact stock exchange which facilitates the growth of organisations with societal and environmental benefits embedded into their operations. It's interesting to see his personal journey play out in some of his writings and podcast appearances, because he is quite a socially leaning guy.

One final book I'd recommend is by Mustafa Suleyman, and it's called The Coming Wave. He was one of the co-founders of DeepMind, and DeepMind got bought by Google. They created AlphaGo and all those frontier (to use a jargon term) convolutional neural nets that started playing board games like chess and Go at a high level.

Interestingly, he started out as a social entrepreneur and the head of some socially orientated not-for-profits before co-founding DeepMind, so he's got a unique perspective on policy, people, the environment and the impacts of AI on each of those areas.

He frames these impacts as this coming wave, and his philosophy is that we shouldn't be pursuing Artificial General Intelligence (AGI), one AI that is better than all humans at everything; he's advocating for an approach where we develop narrow AIs that are good at specific things.

That is an aspect of safety we can consider. He's now parted ways with Google, started his own AI company called Inflection, and recently moved over to Microsoft. He's just a mover and shaker in this game, and someone with a socially conscious leaning as well.

 
 

You can contact James on LinkedIn. Please feel free to leave comments below.


Find other articles on social innovation.