Generative AI presents businesses with endless opportunities to innovate and boost efficiency. On a larger scale, it promises to help grow the economy and even solve global crises like climate change.
But as with most emerging technologies, AI comes with its own set of risks. And with stakeholders demanding ethical practices from those they work with, getting it wrong is costly.
As IT professionals, it’s our responsibility to consider the impact of our decisions so that we can reap the rewards of AI while keeping everyone safe.
What are the risks of working with AI?
AI poses a whole host of threats that could harm individuals as well as organisations. These include:
- Loss of human control over decisions
- Environmental damage
- Reinforcing biases and inequalities
- Job displacement
- Security risks
- Deception such as fraud and deepfakes
- Negative effects on human rights
- Infringement of intellectual property rights
- Lack of accessibility
Let’s explore a couple of these in more detail.
If AI systems are trained on biased datasets, they can perpetuate and even amplify these biases. Imagine an AI application that uses historical data to inform hiring decisions. If the data contains biases that have influenced previous hires, the AI will use these to guide future decisions, potentially leading to discrimination. The technology may even exaggerate the biases it finds, worsening inequality within the organisation.
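To make the mechanism concrete, here is a minimal, hypothetical Python sketch (the dataset and scoring rule are invented purely for illustration): a naive model that scores candidates by historical hire rates carries the bias in past decisions straight through to new ones.

```python
# Hypothetical illustration of bias perpetuation: past hiring decisions
# favoured group "A", and a naive model trained on that history simply
# reproduces the disparity.
historical_hires = [
    # (group, hired) -- invented data in which group "A" was favoured
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def hire_rate(group):
    """Fraction of past candidates from `group` who were hired."""
    outcomes = [hired for g, hired in historical_hires if g == group]
    return sum(outcomes) / len(outcomes)

# A "model" that scores new candidates by their group's past hire rate
# inherits the historical bias: group A scores 0.75, group B only 0.25.
score_a = hire_rate("A")
score_b = hire_rate("B")
```

If such scores are used to rank candidates, group A applicants are systematically preferred, and each new biased hire feeds back into the training data, which is how the inequality can compound over time.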
Developing and running AI systems requires significant computational power and energy-intensive data centres. The larger and more complex these systems become, the more energy they demand and the more carbon they emit.
Managing the risks of AI
Some have called for a pause in AI development to avoid facing these risks. But at BCS, we believe there are better ways of mitigating them so that everyone can benefit from AI safely. We’re confident that, with the right professional standards in place, the technology can transform our lives for the better.
In a recent survey, 88% of our members told us the UK should take an international lead on ethical standards in AI, and 8 out of 10 think organisations should have to publish ethical policies on their use of AI. While we still have a long way to go, there are plenty of steps we can take right now.
Is AI the right solution?
While it seems like everyone is using AI to get ahead, it’s not right for every project or organisation. Follow these tips when deciding whether to use it to develop new solutions:
- Clearly define the problem you’re trying to solve
- Ensure the project fits with your broader strategy and values
- Analyse external factors that impact the decision
- Check the project complies with relevant regulations
- Gather accurate and thorough project requirements
- Establish robust data sources
- Ensure your computing infrastructure is sufficient
- Check you have the technical expertise to deliver the project
Using AI ethically
Once you’ve decided to use AI to power your next project, build ethics into every stage of the design and development process.
Use tools like risk assessment, security standards, audits, and data governance to manage your data responsibly.
Ensure you trust the data you’re using to train your AI model and choose the right training approach and algorithm depending on what you’re trying to achieve.
Testers should thoroughly understand what they’re assessing, the challenges to look out for, and how to align testing with ethical principles.
Once the system is live, proactively monitor and maintain it to ensure it continues to run safely and in the way you intended.
Consider the factors that influence AI’s environmental impact, including:
- Data centre efficiency
- Your supply chain
- Choice of algorithm
- Low-code/no-code programming
- Effective environmental impact reporting
Could apprenticeships be the answer?
To get all this right, AI must be managed by competent, ethical, and accountable professionals. Thankfully, apprenticeships can help you achieve this, and they’re a fantastic way to develop both new and existing team members.
While digital apprenticeships follow nationally recognised standards, they also let you develop skills that are specific to your organisation. They're extremely cost-effective for everyone involved, too: you'll receive government funding to train employees, and your apprentices will get paid to learn instead of forking out for expensive alternatives.
BCS is a leading end-point assessment organisation for 25 digital apprenticeship standards including the Artificial Intelligence Data Specialist (level 7). This programme supports learners to develop AI solutions that comply with ethical best practices, regulatory requirements, and your own internal standards.
Other ways to upskill your team
Apprenticeships aren’t the only option, and there are plenty of other ways to equip your team to develop AI ethically. Our new Foundation Certificate in the Ethical Build of AI will launch in early 2024 as part of the Lord Mayor of London’s Ethical AI Initiative. The programme will teach professionals how to manage the risks that come with building AI-powered systems by applying a clear set of ethical principles.
Sustainability goes hand in hand with ethical AI development, and we’ve also produced a series of green IT learning modules that help professionals cut carbon emissions by using technology more wisely.
Want to give team members a broader understanding of AI? We also offer an AI Foundation Pathway made up of 12 bite-size awards, including two devoted to ethics.
Ready to equip your team to develop AI ethically and sustainably? Get in touch now to find an approach that works for you.
Written by Annette Allmark, L&D Director, BCS, The Chartered Institute for IT