Over the last decade, there has been a growing recognition that technology, and the trend towards ever more sophisticated uses of data, can have serious consequences for society, individuals and the environment. Questions about the privacy implications of smart home cameras, concerns over the effects of social media on young people, and so-called “mutant algorithms” used in the public education system have increasingly made headlines.
These headlines, and many more like them, have begun to shake what was once a relatively firm foundation of trust in technology. And with trust in public institutions already in decline, according to reports such as the Edelman Trust Barometer¹, this is worrying for organisations that need to use data and technology to provide effective public services.
Sopra Steria has been working with organisations to take these complex and daunting issues out of the abstract, making them approachable and manageable through a structured approach to digital ethics.
To that end, we use this definition of digital ethics:
Digital ethics is active, not a passive set of principles or codes of conduct. It requires policy and governance, but it also requires tools, skills and cultural adaptation.
To make digital ethics accessible and manageable, and to begin that continual process of identification, prioritisation and management, we use our Digital Ethics Categories as lenses through which organisations can identify ethical risks and opportunities within their own unique strategic and cultural context. These categories draw on the myriad standards and guidelines on technology ethics published around the world over the last decade.
Digital services are typically fed and improved by access to data that may be personal to an individual. The costs of mishandling personal information, however, can be considerable: Google was fined €50m for “lack of transparency, inadequate information and lack of valid consent regarding ads personalisation”². Society values privacy, so a balance must be struck between the utility of data and individual privacy.
Digital technology brings new, and sometimes heightened, threats to people, businesses and national security. Attention to safety matters all the more because technology typically reduces the human touch points at which risks could otherwise be spotted and mitigated quickly.
Technology has the potential to create new and interesting careers, and to enable people to live more fulfilling lives. For decades, however, digital technology has been changing how we work, the types of jobs available, and how work is valued and remunerated. That transition is accelerating as companies undergo digital transformation, and it is raising fears. This category asks what impact digital technology will have on an organisation’s own workforce and on the wider world of work.
Digital solutions offer the potential to provide services more quickly, more effectively and to more people than ever before. However, reducing or removing human-to-human interaction can make it harder for users to understand what they are agreeing to and how decisions are made. Organisations will have to address this as users demand greater transparency and lawmakers slowly catch up.
Moreover, digital services often obscure the ethical responsibility for a given act, creating networks of “distributed responsibility”³. To ensure transparency over decision-making, and the reversibility of outcomes that affect humans, organisations will have to address how responsibility for their digital technology is assigned.
Digital technologies can be used to create a more diverse and inclusive world. By connecting more people than ever before, they can expand access to services across the globe and build empathy through shared experiences.
To deliver that greater inclusion and accessibility, however, we must neither reinforce and amplify human bias on a digital platform nor introduce new types of bias unique to the technology (for example, datasets built on unreliable, biased data, or facial recognition systems that fail to recognise certain groups of people). Special care and attention must be paid to vulnerable people and to those who may be left behind by technology, and we must work to break down barriers rather than erect new ones. Mitigating technology’s ability to exclude is not enough, though: organisations must act to empower marginalised groups.
Public sentiment has shifted strongly towards ethical business practices, and technology businesses face increasing scrutiny from regulators, the public, consumers and employees to act on social issues. With the power that technology brings, it is imperative that an organisation works not only to ensure its own profitability but also to build a better society, in service of the common good. We are already seeing organisations hold back technology that could be put to dangerous uses, which highlights the complexity of ensuring a positive societal impact⁴.
Digital technology has the potential to help solve some of the world’s biggest challenges, such as climate change, air and water pollution, and resource shortages. But it carries environmental costs too, in the form of resource consumption and depletion, pollution of land and water, and its own energy and carbon footprint.