
4 steps to developing responsible AI



Artificial intelligence (AI) is arguably the most disruptive technology of the information age. It stands to fundamentally transform society and the economy, changing the way people work and live. The rise of AI could have a more profound impact on humans than electricity.

But what will the new relationship between humans and intelligent machines look like? How can we mitigate the potential negative consequences of AI? And how should companies forge a new corporate social contract as their relationships with customers, employees, government and the public change?

In May 2019, China announced the Beijing AI Principles, outlining considerations for AI research and development, use and governance.

In China, enthusiasm for AI has been more intense than for other emerging technologies, as the country is positioned to harness AI’s tremendous potential to enhance its competitiveness in technology and business.

According to Accenture research, AI has the potential to add as much as 1.6 percentage points to China’s economic growth rate by 2035, boosting productivity by as much as 27%.

In 2017, the central government launched a national policy on AI backed by significant funding. The country already tops the AI patent table and has attracted 60% of the world’s AI-related venture capital, according to a report from Tsinghua University.

We’re already seeing the impact of AI across many industries. For example, Ping An, a Chinese insurance company, evaluates borrowers’ risk through an AI app. At the same time, AI has generated a plethora of fears about a dystopian future that have captured the popular imagination.

Indeed, the unintended consequences of disruptive technologies – whether from biased or misused data, the manipulation of news feeds and information, job displacement, a lack of transparency and accountability, or other issues – are a very real consideration and have eroded public trust in how these technologies are built and deployed.

However, we believe, and history has repeatedly shown, that new technologies provide incredible opportunities to solve the world’s most pressing challenges. As business leaders, it is our obligation to navigate responsibly and to mitigate risks for customers, employees, partners and society.

Although AI can be deployed to automate certain functions, the technology’s greatest power is in complementing and augmenting human capabilities. This creates a new approach to work and a new partnership between human and machine, as my colleague Paul Daugherty, Accenture’s Chief Technology and Innovation Officer, argues in his book, Human + Machine: Reimagining Work in the Age of AI.

Are business leaders around the world prepared to apply ethical and responsible governance to AI? In a 2018 global executive survey on Responsible AI by Accenture, in association with SAS, Intel and Forbes, 45% of executives agreed that not enough is understood about the unintended consequences of AI.

Of the surveyed organizations, 72% already use AI in one or more business domains. Most of these organizations (70%) offer ethics training to their technology specialists; the remaining 30% either do not offer this kind of training, are unsure whether they do, or are only just considering it.

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of a strong ethical framework surrounding its use. But few have a clear answer for how to develop ethical and responsible AI.

Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.

It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization.

A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas:

  1. Governance

Establishing strong governance with clear ethical standards and accountability frameworks will allow your AI to flourish. Good governance on AI is based on fairness, accountability, transparency and explainability.

  2. Design

Create and implement solutions that comply with ethical AI design standards, and make the design process transparent. Apply a framework for explainable AI, design user interfaces that support human-machine collaboration, and enable trust in your AI from the outset by accounting for privacy, transparency and security from the earliest stage.

  3. Monitoring

Audit the performance of your AI against a set of key metrics, making sure algorithmic accountability, bias and security metrics are included (a minimal sketch of one such bias check appears after this list).

  4. Reskilling

Democratize the understanding of AI across your organization to break down barriers for individuals impacted by the technology; revisit organizational structures with an AI mindset; recruit and retain the talent for long-term AI impact.
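
To make the monitoring step concrete, here is a minimal sketch of one bias metric an audit might track: the demographic parity difference, i.e. the gap in positive-decision rates between groups. The metric choice, group labels and loan-approval figures below are illustrative assumptions, not part of the framework above.

```python
# Minimal sketch of one bias metric an AI audit might track:
# demographic parity difference, the gap in positive-outcome
# rates between groups. Illustrative only; real audits combine
# many metrics and dedicated fairness tooling.

def selection_rate(outcomes):
    """Share of positive (e.g. approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rate across groups.

    outcomes_by_group maps a group label to a list of 0/1 decisions.
    """
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions, grouped for the audit.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.38
# A gap this large would flag the model for human review.
```

In practice, a monitoring regime would track several such metrics over time and trigger a review whenever one drifts past an agreed threshold.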

The benefits and consequences of AI are still unfolding. China has a great opportunity to capitalize on AI in its development, and it shares with other countries a huge responsibility to help the technology deliver positive societal benefits on a global scale. We must work to ensure a sound global public policy environment that enables and encourages investment in the development and deployment of responsible AI.
