The Progress Report

Finding the AI balance: Opportunities, threats, ethics, and responsibility

Episode Summary

As generative AI integrates into how business gets done, this discussion looks at how to ensure AI is being leveraged in responsible and ethical ways. Listen as our experts explore the challenges and opportunities of responsible AI, including the role of data, the challenges of ensuring diversity, trust, and quality, and how to create frameworks for a cohesive generative AI strategy across stakeholder groups. The conversation dives into how organizations should think about implementing responsible AI practices and ethical principles and policies, including broader stakeholder representation.



Episode Transcription

Tom Rourke 00:02

Hi, I'm Tom Rourke, Global Leader for Kyndryl Vital, and welcome to The Progress Report. Today we're going to have a fascinating discussion on a very specific dimension: ethics and responsibility in AI. I'm delighted to be joined today by Rachel Higham, who is the CIO for WPP, and Elizabeth Adams, who is CEO of EMA Advisory Services and specializes in the whole domain of responsible and ethical AI. So Rachel, just to help our listeners understand what we're going to talk about today, what's your definition of ethical AI?

 

Rachel Higham 00:43

Well, I have to think about a broader definition of responsible AI, which includes ethics. Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way throughout their lifecycle. A responsible AI approach should emphasize transparency, explainability, fairness, and inclusivity, and responsible AI practices must aim to mitigate bias, ensure privacy, and prioritize the well-being of all users and stakeholders.

 

Tom Rourke 01:15

Well, that's fairly comprehensive. Elizabeth, you're coming at this from a different perspective in the work that you're doing in academia. What would your response to Rachel's definition be? Or does it differ from your own?

 

Elizabeth Adams 01:25

I think Rachel hit it on the head; I think that's a great definition. What I'll add to it, based on the leaders I've spoken to over the last couple of years, the thousands of employees I've talked to, and my academic research, is that responsible AI to me is really a commitment to the ethical practices that lead to all the things Rachel mentioned. So your responsible AI, or your AI output, should include inclusivity, accountability, auditability, and explainability. And some new attributes or principles I'm finding as a result of speaking with these leaders are things such as attribution, consent, and protection of intellectual property. So when I talk to leaders, I ask them, 'Can you explain your ethical approaches to responsible AI?' - and Rachel did a really good job of talking about how ethics is woven into responsible AI - and then I ask them to share, in terms of demonstrated outputs: how can you demonstrate transparency? How can you demonstrate accountability? What are the inner workings of your organization that will give us a sense of how you are managing your responsible AI practices?

 

Tom Rourke 02:39

How do you get communities of leaders who are now suddenly excited about all the potential, and who want to jump straight in and start harvesting the benefits, to take the time to pause and ensure that their strategies actually have that sense of responsibility infused into them?

 

Rachel Higham 02:59

Yeah, I think it starts with you, as the organization, defining what responsible or ethical AI means for your organization and agreeing what promises you'll make to your stakeholders, and then converting that into a policy and a set of internal and public statements that all your colleagues can anchor on as they start to experiment, evaluate, and test whether any use case is appropriate for your organization, your industry, your customer base, and the domain. We started by taking our existing data privacy, ethics, and security governance committee and expanding its role over AI and responsible AI. Within that framework, we mapped out the relevant stakeholders, how we would govern initiatives, how we would measure the assurance of our activities and their effectiveness, and what standards we expected the whole organization to adhere to. From that, we outlined and approved the various guardrails we wanted people to live within, and set them alongside a really comprehensive education program for all of our colleagues to understand the opportunities and pitfalls around the topic and what guardrails we had put in place. We've now trained over 8,000 creative technologists and 10,000 data practitioners, and even 60 of our very senior leaders, in AI, sending some of them off to do diplomas and various formal external education, just to build that AI literacy and understanding of both the opportunities and the challenges.

 

Tom Rourke 04:31

I think it's fair to say that your organization was very early to recognize both the potential of AI and the possible implications for your business model. But Elizabeth, you work with a broad set of people in industry. Is what Rachel is describing typical of how people approach this, or where would you see differences in approach among the people you work with?

 

Elizabeth Adams 04:54

What I'm seeing is that organizations where employees can clearly speak about the responsible AI definitions, the vision for the organization, and where they fit into it do very well. But there's the other set as well, where there are a number of employees who want to participate in responsible AI, but their organizations are a little slow. So they're relying on external expertise, and they're playing around with generative AI tools, whether it's Midjourney or something else, to understand how they should responsibly contribute to an organization where maybe the vision just isn't clear. So to answer your question, yes, in those organizations where employees feel that responsible AI is an integral part of their experience, absolutely - there are training courses, there's an AI Center of Excellence, a hub where they can contribute, share and learn from what they're participating in, and exchange information. But what I'm happy about is to at least see the conversations happening at the executive level, where they understand that a vision is absolutely essential to drive your organization forward in shaping its future in responsible AI.

 

Tom Rourke 06:10

What would either of you suggest as the first and essential steps to getting executives to pay attention to this as a topic - not to take a head-in-the-sand approach, but to actually recognize that there needs to be a conversation around the ethics and responsibilities of using AI?

 

Rachel Higham 06:28

Well, it happened very early for us. We realized very, very quickly that generative AI would have an outsized impact on the creative and media industry because of what we do. So we first looked, with the board and with the executive committee, at a macro level at which of our activities would be untouched by generative and traditional AI, which would be optimized, and which would be eliminated. We then tested that perspective really deeply with our tech partners, with clients, with colleagues, and with the board. The second step was deciding to build a strategic framework with three layers. First is a general productivity layer, where we embed core AI-infused productivity tools such as Microsoft Copilot and Adobe Firefly. We provide access to those in a safe way, make sure we're comfortable with the models that underpin them, and then let our colleagues experiment and find their own use cases. The second layer is what we call our acceleration layer, where we formally embed AI into existing products and services. Here we've built an ethical large language model, or 'brain' as we call it, that automates the process of checking the compliance of the content we produce. Our final layer is reinvention. This is where we see AI disruption really turning our commercial and operating models on their head, and where a big cross-functional team is looking at the legal implications, the product, the commercialization, the service, the clients, and the data requirements. So we've got this three-layered framework, and each layer requires a very different response and a very different set of individuals from across the organization to ensure we're doing it safely.

 

Tom Rourke 08:12

And Elizabeth, do you see other contexts where people just haven't embraced it as profoundly as Rachel's organization? What advice would you give to leaders stepping into this?

 

Elizabeth Adams 08:25

From the lens I operate from - obviously, I bring lived experience and tons of professional experience around technology, inclusion, technology development, and representation - with many of the organizations I talk to that want to start with responsible AI, we first have to understand: why do you think you need responsible AI? Then we talk about representation and team composition: who on your executive team actually has the lived experience or professional experience related to the products or services you want to develop? And then it's questioning who should be at the table helping to shape your future. So in my particular research, it's the artifacts, the policies, the procedures - the very beginning, before you even write a line of code: what does that look like, and what is that composition? Because in my personal belief and in the research I've done, you cannot have responsible or ethical AI without representation - it doesn't matter what type of service or product you're delivering. And there are opportunities to engage, let's say, civic organizations, government, or academia in forming your governance teams or your advisory board. One particular organization I worked with, having incorporated their employees as part of their shared learning and responsibility in developing responsible AI, found out that they weren't really ready to talk about responsible AI; they first needed to understand the values of the organization and what their North Star was. So we spent about six months having the employees lead that conversation. Once that happened, trust was built, and the employees themselves, who are also answering client and customer calls, are able to trust the outputs of the system because they've been engaged from the beginning in what responsible or ethical AI looks like in their organization. And it really depends on the views of the leaders - that's why I keep saying representation is important. Because I have found that one person who might identify as a Black woman in her organization - she's the only one - is saying, 'My priorities are not the major priority, because there aren't enough people with that same priority.' And so part of my research is discovering and putting together a framework that will help organizations with that kind of stakeholder claim.

 

Tom Rourke 10:48

I'm interested in the extent to which people are beginning to understand what implications AI might have for their own roles and their own futures. To give an example, I spoke over the weekend to a young lawyer I know who is midway through the formation process as a solicitor, and he was very clear that he feels he's going to be part of the last generation of lawyers to go through that particular formation process, because it's fundamentally changing how they're formed. The model is that they perform menial tasks in the presence of more expert leaders and learn from being around them, but now that the menial tasks are being done by the machine, why are they there? So Rachel, in your industry, is there a sense that people are beginning to have concerns, or at least see a path, about what it might mean for their own futures?

 

Rachel Higham 11:34

There's definitely a perception that the real concern is probably five or six years away. You know, our creatives are experimenting with generative AI and seeing the lack of fidelity - the inability to hold brand logos and product designs in generated images, for example, or the mistakes that come through in summarizations of documents. So we're seeing very much the fragility and the current issues that have yet to be solved in generative and non-generative AI. And we're starting to have a very open conversation, led by our Chief People Officer, around how the shape of our workforce will change over time and how we will have to redesign the career and learning pathways for all of our roles, particularly our entry-level roles, to make sure we still get the fantastic creatives, art directors, photographers, media buyers and planners, and PR specialists of the future. Those roles will still need to exist, but, as you say, the training route will be very, very different.

 

Tom Rourke 12:34

There's this whole concept of democratizing AI, and I know a lot of organizations are looking to mobilize their employees. There's one particularly good example in the market where they've made a really bold statement about wanting their staff to take something like 140 hours of education, and they very publicly admonished their own CEO as being among the 10% or so that didn't. Is that a thing to be embraced as an opportunity? Or does it create very specific concerns, because you've suddenly got a very large number of people experimenting, perhaps within the guardrails in the office and outside the guardrails outside of the office? What sort of challenges does that present?

 

Elizabeth Adams 13:09

The reason I started my doctoral work in leadership of responsible AI is that there were people who were not benefiting from these wonderful things about AI that we were learning about. I became part of a social community of about 100,000 people, and I was floored by how willing this community is to share information about their prompts and the issues they're finding with certain apps, and to help each other upskill and reskill. The beauty of this is that these people now have skills they're learning from a community that they didn't even know existed, or that couldn't have existed, a month ago, and they're able to take that learning back into their organizations. So some of the experimenting I'm seeing is actually happening outside of the organization, but it is contributing to employees who are now aware of it and able to bring that learning into their organization, which I think is incredible.

 

Tom Rourke 14:10

Coming back to data, what are the extra considerations we need to think about for the data on which we build our AI structures? Essentially, we can now exploit these things far faster, but we really need to look at the quality of what actually underlies those structures.

 

Rachel Higham 14:25

Fundamentally, you can't achieve responsible AI without a responsible approach to data, and I think several concerns arise. Data, because it's historical, often reflects historical biases and social inequalities. Collecting and analyzing personal data can infringe on privacy rights if it's not handled well: we must handle data transparently, we must obtain informed consent, and we must protect sensitive information. Then think about transparency and explainability: AI models can be incredibly complex, making it challenging to understand and be transparent about their decision-making processes. A couple of other areas, I would say: data ownership is going to be very complex, because a lot of organizations are going to bring their own data, customer data, and externally bought data together to train algorithms and AI systems. Finally, the environmental impact is front of mind for me at the moment, because building larger and larger models and larger and larger datasets requires significant computational resources and contributes to ever higher energy consumption and carbon emissions. So a laser focus on efficiency, as we think about what data we use and bring into our AI algorithms and models, is going to be key as well.

 

Tom Rourke 15:42

I think it's interesting that you mention the sustainability dimension, because actually I don't hear it as often as perhaps we should. If you think about people's reaction to things like Bitcoin and blockchain, there's perhaps a slightly growing awareness of the environmental impact there. Elizabeth, have you come across this sustainability emphasis in your work?

 

Elizabeth Adams 16:05

Not so much, because again, my work is mostly focused on responsible AI and, as Rachel mentioned, data and data bias, but it is something I think about. I do want to highlight one particular thing: when we talk about data, the beginning of the life of an algorithm, in my experience we have typically treated bias in data as a technological fix that can happen inside an organization, without understanding the second- and third-order consequences of AI harm, which is a negative human experience as a result of a biased AI output. When I work with organizations, I want to make sure they understand that negative human experience, because it helps them prioritize a particular model and the impacts it might have on society or a community group. On the governance boards I've sat on, we just don't talk enough about that negative human experience.

 

Tom Rourke 17:01

Yesterday, I was part of a group hosting a call with our design community in Kyndryl, and they seem to be a community that is really, really sensitive to all the bad things that could emerge out of generative AI. But someone brought up the point that bias is actually an inherent part of who we are as humans, and asked whether there is a more positive opportunity to use generative AI to be alert to and correct for the bias that already exists in our societies. Yes, there's a huge potential for bias in AI, but is there also potential for AI to catch human bias and help us correct for it?

 

Rachel Higham 17:41

Yeah, I think there absolutely is, and I love the idea of reframing it as a positive outcome rather than seeing it as a potential negative. How many times have we designed systems and ways of processing data to make decisions and never even discussed bias? Now we're starting to think about how we might design automated, structured, and transparent ways of spotting bias, which will be far better than what we perceive, or pretend, humans could have achieved in the past. So I think we can reframe it as a positive outcome, because we'll be proactively looking for bias; we'll be building tools, processes, practices, and roles to do that specifically, whereas before we never have. We never thought it was important enough.

 

Elizabeth Adams 18:24

I hope we get there. I really do. What I'm seeing is - I had an opportunity to speak with some folks last weekend about responsible AI, and they had no idea that responsible AI even needed to exist, because they did not understand anything about AI bias. They didn't understand that there are huge groups of people who are denied mortgages because of the color of their skin or their zip code, or that there are exam proctoring systems that don't recognize people who use wheelchairs, so they're accused of cheating, or issues with facial recognition systems. So I want us to get there. But I also know there is a field of study called captology - computers as persuasive technology, I think - and it basically suggests that the news and information we feed ourselves, whatever it is, is what we bring into our work environment. I would love to see AI help correct that, but I don't know if we're there yet. I would love for my work to be out of business, and on that note, I would love to be on a beach somewhere hanging out at a tiki bar. But right now I can't, because there's still so much awareness work to do.

 

Tom Rourke 19:34

This is obviously The Progress Report, so what we like to conclude on is what your measure of progress might be. Elizabeth, since you've been speaking, maybe we'll ask you first: in the next two to three years, what would feel like progress from your point of view?

 

Elizabeth Adams 19:52

Yeah, I would love to live in a world where there are no AI systems designed to keep me out of things - either to deny me access to something because of something I have no control over, or because of something I have no idea was built into the system. I don't know if I'll be able to live in that world in the next two to three years, but that is what I'd like. I'd like to wake up and trust that the technology I love so much - I'm a technologist - works for me and works for everyone, and that we can all thrive in the age of AI. If I can do a quick plug for my new LinkedIn Learning course, "Leading Responsible AI in Organizations": we talk about how to close that gap - maybe it's a town hall, maybe it's a survey, but something that allows those who have the influence in the organization to hear from those who have very, very real concerns about what AI is going to mean for their jobs.

 

Tom Rourke 20:48

Thank you, Elizabeth. And Rachel, maybe if I can ask you the same question from your perspective, what would feel like progress in the next two to three years?

 

Rachel Higham 20:55

I think the emergence of a balanced set of global regulations that gives the public, society, communities, and organizations confidence that any entity is building AI responsibly, and the emergence of the suite of tools, practices, roles, and accountabilities that will be needed to build truly inclusive and responsible solutions that, as Elizabeth very rightly says, don't disadvantage anyone.

 

Tom Rourke 21:27

Rachel, Elizabeth, thank you for what was an absolutely fascinating conversation. Obviously, Rachel, your organization has a very mature model for the governance of AI in a responsible and ethical way, and Elizabeth, you're working with organizations to build those structures. But you both highlighted that the lack of a consistent regulatory framework is something that needs attention in order to support your organizations. Another important dimension was the impact on the individual - on both individuals and society - and how they may be affected by biases, but also the potential for AI to help us address those biases. And we all need to recognize that, quite aside from the broad business impacts of AI and generative AI, we need to be mindful of the concerns of our colleagues and employees and what it will mean for their roles in the future. This is very definitely a rapidly evolving context, and one where we really do need to be mindful of the societal and individual impacts we can have in the work we do. As always, on The Progress Report, we seek to have interesting conversations on important topics, and if you want to stay in tune with that conversation, please do like, subscribe, and above all, share. Thank you for listening to The Progress Report.