How ethical subjectivity complicates AI regulation

Humans have biases, both conscious and unconscious.

How any single individual or organization’s biases take root – and how they manifest themselves – can be extremely complex to pin down.

Knowing all that, would ceding more decision-making power to computers and Artificial Intelligence (AI) models not result in more objective, ethical outcomes?

Not so fast.

The problem is that someone must still decide which factors come into play in any model’s decision-making process.

At the end of the day, AI models are (for now, anyway) results of human creativity, programming and, yes, biases.

So, how do we ensure ethics are prioritized when developing and using AI models?

Unpacking ethical considerations in AI

This was just one of the themes that a group of AI experts from the financial services industry, government bodies, and academia mulled over at the Financial Industry Forum on Artificial Intelligence (FIFAI) workshops.

Those conversations, summarized and organized in the FIFAI report, touched on four main principles guiding the use and regulation of AI in the financial industry:

  • E – Explainability
  • D – Data
  • G – Governance
  • E – Ethics

In this series of articles – which has already touched on Explainability, Data, and Governance – we are examining each of the themes in detail to see what we can learn and how we can apply this knowledge to regulatory research and activities.

Please note that the content of this article and the AI report reflects views and insights from FIFAI speakers and participants. It should not be taken to represent the views of the organizations to which participants and speakers belong, including FIFAI organizers, the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI).

In addition, the content of the article and report should not be interpreted as guidance from OSFI or any other regulatory authorities, currently or in the future.

The subjectivity wild card

One major factor that makes setting ethical standards an exceedingly complex endeavor: subjectivity.

“The subjectivity of ethics, the recognition that ethical standards change over time and their codification into laws and regulations show the challenges and complexity of addressing AI Ethics,” the FIFAI report notes. “For financial institutions operating in multiple jurisdictions the challenges and complexity are compounded.”

One person’s view of fairness may differ from another’s (or even their own when put in a different context).

And what exactly is fairness, anyway?

This was one of the key questions that FIFAI participants discussed at the forum.

Their debates centred around:

  • How does the relationship between ethics and law apply to AI?
  • What are the different views on regulatory guidance for AI ethics?
  • What are the challenges in addressing AI ethics, and how can these challenges be overcome?
  • Is there a universal definition of fairness that can be codified?
  • Is a “biased model” necessarily a bad thing?

Because there is no universal definition of fairness, one must be crafted for each of many different scenarios. Again, the potential options here are almost limitless.

“Much rigour would need to be undertaken to understand fairness in decisions made or actions taken based on AI generated results,” the FIFAI report explains. “In addition, financial institutions would need to define what is considered ‘fair’ depending on the particular use of an AI application.”
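To see why this rigour is needed, it helps to make the ambiguity concrete. The sketch below is purely illustrative (the applicant data, the loan-approval framing, and the choice of metrics are invented, not drawn from the FIFAI report): it applies two common statistical definitions of fairness to the same set of decisions and shows they can disagree.

```python
# Illustrative only: two common statistical fairness criteria applied to the
# same hypothetical loan decisions. Data and metric choices are examples,
# not drawn from the FIFAI report.
from dataclasses import dataclass

@dataclass
class Applicant:
    group: str       # protected attribute, e.g. "A" or "B"
    qualified: bool  # ground-truth label
    approved: bool   # model's decision

applicants = [
    # Group A: both qualified applicants approved, no unqualified ones
    Applicant("A", True, True), Applicant("A", True, True),
    Applicant("A", False, False), Applicant("A", False, False),
    # Group B: one qualified applicant rejected, one unqualified approved
    Applicant("B", True, True), Applicant("B", True, False),
    Applicant("B", False, True), Applicant("B", False, False),
]

def approval_rate(group: str) -> float:
    """Fraction of the group that was approved (demographic parity view)."""
    members = [a for a in applicants if a.group == group]
    return sum(a.approved for a in members) / len(members)

def true_positive_rate(group: str) -> float:
    """Fraction of *qualified* group members approved (equal opportunity view)."""
    qualified = [a for a in applicants if a.group == group and a.qualified]
    return sum(a.approved for a in qualified) / len(qualified)

for g in ("A", "B"):
    print(g, approval_rate(g), true_positive_rate(g))
# Demographic parity (equal approval rates) is satisfied: 0.5 vs 0.5.
# Equal opportunity (equal true-positive rates) is violated: 1.0 vs 0.5.
```

Both criteria are reasonable readings of "fair," yet here the same decisions pass one and fail the other, which is exactly why an institution must pick and justify a definition per use case.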

Isn’t fairness self-explanatory though?

Well, not exactly. Consider car insurance.

In a vacuum, it would seem “fair” that everyone pays the same rate. However, in this case, bias may be a necessary and desired contributor to the outcome of a model.

That is, if you’re crashing your car all the time, you should probably be paying a higher insurance rate.
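A toy pricing formula makes the point. Everything here is invented for illustration (the base rate, the surcharge, and the function itself are assumptions, not an actual insurer's model): the formula is deliberately "biased" against drivers with at-fault claims, and most people would call that bias fair.

```python
# Illustrative only: a toy premium formula in which "bias" toward past claims
# is deliberate and arguably fair. The base rate and surcharge are invented.

def annual_premium(base_rate: float, at_fault_claims: int,
                   surcharge_per_claim: float = 0.25) -> float:
    """Charge more per at-fault claim: identical treatment of identical
    risk profiles, rather than identical treatment of everyone."""
    return round(base_rate * (1 + surcharge_per_claim * at_fault_claims), 2)

print(annual_premium(1000.0, 0))  # 1000.0 -- clean driving record
print(annual_premium(1000.0, 3))  # 1750.0 -- frequent at-fault claims
```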

Not every situation is that straightforward, however.

“It was discussed at the forum that the concept of ethics encompasses a range of nuances,” the FIFAI report says. “First, ethical principles and values can be relative, thus, impacting the way organizations in various jurisdictions address ethics.

“Moreover, the implementation of ethical standards may vary based on the specific application within an organization,” it continues. “For instance, while the use of travel history for fraud detection system could be considered appropriate due to its potential in identifying suspicious or fraudulent activity, its use to assess creditworthiness may be viewed as unethical as it may not be directly relevant to an individual's creditworthiness and may potentially discriminate against certain individuals based on their travel history.”
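One simple way an organization might operationalize this application-specific standard is a per-use-case feature allowlist, checked before a model is trained or deployed. The sketch below is a hypothetical illustration of that idea; the use-case names and feature lists are invented, and a real governance control would be far more elaborate.

```python
# Illustrative only: a per-use-case feature allowlist encoding the idea that
# a feature (e.g. travel history) may be acceptable in one application but
# not another. Use-case names and feature sets are invented.

ALLOWED_FEATURES = {
    "fraud_detection": {"transaction_amount", "merchant_category", "travel_history"},
    "credit_scoring": {"income", "payment_history", "debt_ratio"},
}

def disallowed_features(use_case: str, requested: set[str]) -> set[str]:
    """Return the requested features NOT permitted for this use case."""
    return requested - ALLOWED_FEATURES[use_case]

print(disallowed_features("fraud_detection", {"travel_history"}))  # set() -> permitted
print(disallowed_features("credit_scoring", {"travel_history"}))   # flagged for review
```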

The search for solutions

There are no easy answers on how best to ensure ethical standards are set and respected, but FIFAI forum participants made the following suggestions as a starting point:

  • Multidisciplinary views

    Broad perspectives through the engagement of multidisciplinary and diverse teams (e.g., computer scientists, lawyers, financial data scientists, ethicists) are needed at all stages of development and use of AI applications.

  • New roles

    Organizations could consider greater corporate investment in AI ethics by adding new roles such as a Chief Ethics Officer or Chief Trust Officer.

  • Standards

    Standards are distinct from regulations or laws; however, they may be voluntary or mandatory. Standards-setting bodies could establish agreed-upon ethical guidelines for the financial services industry to help manage the related risks of AI technologies. Professional associations could also develop ethical standards. The Canadian Institute of Actuaries (CIA), for example, has developed minimum ethical standards, and its members are expected to comply with that ethical framework.

  • AI designation

    To ensure that appropriate ethical questions are considered during the process of designing, developing, and implementing AI systems, it was suggested that a designation be created requiring training in ethical issues. For example, Singapore has created a Chartered Engineer for AI designation that addresses this.

Whatever the approach, the FIFAI report is unequivocal: “Operationalizing AI ethics is critical.”

Ethics cannot be an afterthought in the development and use of AI models.

“Organizations should maintain transparency, both internally and externally through disclosure, on how they ensure high ethical standards for their AI models,” the report says. “Furthermore, since ethical standards change, it is necessary to document the rationale for decisions made.”

For more discussion on fairness and ethical considerations when using AI in the financial industry, be sure to read the full FIFAI report (PDF, 5.42 MB).