Yemi Olagbaiye, Head of Client Services, Softwire, discusses ethical AI and the fears surrounding who should be held responsible for the consequences of these rapidly emerging technologies.
Ethical thinking is increasingly embedded in every aspect of corporate life – from employee welfare to environmental impact. And while there are areas of contention within any ethical strategy, organisations can turn to an array of science and research to build consensus. The speed at which Artificial Intelligence (AI) is evolving, in contrast, is raising ethical concerns that appear far harder to address. And with organisations increasingly pushing responsibility onto technologists alone, software developers fear becoming scapegoats should a problem arise.
Ethical AI thinking may be immature and unformed, but the responsibility for innovation must extend beyond technology teams: a wide group of stakeholders, from business leaders to legislators, needs to share it.
Force for Good
Fuelled in no small part by a media eager to focus on the negatives rather than on AI’s potential as a force for good, ethical concerns regarding the implications for society at large are beginning to affect the technology community. Indeed, growing numbers of software engineers and developers are becoming increasingly edgy, fearful of the assumption that the creators of the technology alone are responsible for its outcomes and issues, especially negative ones.
And, of course, those at the front line of AI development should take some responsibility – they understand the technology and should have good insight into the potential ramifications of a specific solution. But they cannot and should not carry that responsibility alone. Technologists are just one component of a far broader set of stakeholders, from project initiators to legislators and governments, who need to take ownership of any development that may have far-reaching social consequences.
Collective Responsibility
AI has the potential to radically change business practices, and those changes need to be thought through in detail. While developers can explain what is possible, a wide cohort of organisations and individuals must carefully consider the potential implications before any development is undertaken.
Right now, however, while some in the technology industry – including organisations such as Microsoft and IBM – are stepping up to share their ethical thinking and creating courses that give organisations the latest insight into ethical development, there is a growing sense of ‘us and them’: a fear that the blame for any unintended consequences of an AI development will be placed firmly at the feet of developers. And that fear is, quite frankly, creating a sense of developer paralysis.
At a time when developers should have the world at their feet, when the possibilities for innovation have never been more exciting, too many are opting to step away from AI – constraining not only innovation but also the essential evolution of AI understanding, including ethical thinking.
Limited Understanding
There is, of course, no doubt that the vast majority of legislators lack the expertise and technological confidence required to provide adequate guidance – a situation that will surely be addressed in the coming years as the next generation of tech-aware individuals enters the workforce and rises to senior positions. But, to be fair, strict rules and limitations are not the solution – they would do little more than constrain innovation. What is required is a culture of responsibility across all stakeholders and a recognition of the need for a big-picture approach to AI – indeed, to any development.
Just as many organisations have created codes of conduct for the ethical sourcing of goods – ensuring that suppliers treat their workforces fairly and that environmental standards are upheld – the same thinking must apply to AI. Even if an ethics policy will, by default, be somewhat immature and require continual expansion, without some form of code of conduct organisations will fail to move forward with any degree of confidence.
By carrying out a comprehensive assessment before any project is kicked off, organisations can ensure the potential ramifications have been fully considered, not only from a development perspective but also in line with the business’s ethical standpoint. And this requires commitment from a broad range of individuals – not just developers. This is not a basic box-ticking exercise but an in-depth review that ensures senior management are briefed and, critically, willing to fully endorse the project as ethical.
Conclusion
Right now, too many organisations are pushing technologists to explore and exploit AI without undertaking robust ethical assessments or, indeed, showing any willingness to take ownership of a project’s outcomes. Blithely assuming that some young software engineers have the knowledge, insight and confidence to take ownership is both naïve and unfair – it is hardly surprising that such individuals are becoming increasingly concerned about their long-term roles.
AI is without doubt unbelievably exciting and compelling, but any organisation looking to engage in the development of AI technology has to take responsibility. No doubt when we look back at this time a decade or so hence, armed with new insights and a far greater ethical awareness at both organisational and legislative level, the process will be far better understood. But in the meantime, any company embarking on AI needs to consider its ethical standpoint and must ensure the processes and ethical thinking are in place so that the entire business is confident in standing behind any development. Of course, there will be a trade-off between innovation and ethics – organisations will have to tread their own path based on specific corporate values. But the key is to define those values, and to ensure that AI and technology ethics are as embedded in the business as environmental and well-being ethics.
AI genuinely has more potential for good than for harm – but unless the culture of AI-induced fear is addressed quickly and effectively, innovation will be stifled and inspired developers disenfranchised. It is those companies willing and able to have the ethical AI conversation, to undertake project-by-project assessments and to reassure developers that they are part of a complete stakeholder community, not standing alone, that will have the confidence to push forward with innovation and create tangible, meaningful change.
By Yemi Olagbaiye
Head of Client Services
Softwire