Fairness Principles for Artificial Intelligence and Data Analytics
Following the conclusion of the first phase of the Veritas initiative on January 6, 2021, the Veritas consortium (the "Consortium") published two white papers detailing the fairness, ethics, accountability and transparency ("FEAT") Fairness Assessment Methodology (the "Methodology") and its application in two use cases. This article provides an update on Singapore's fairness framework for the adoption of artificial intelligence in finance.
Artificial intelligence and data analytics ("AIDA") technology is increasingly used for its ability to optimize decision-making processes. AIDA removes human judgment as a variable and replaces it with a data-driven approach. The adoption of AIDA by financial services institutions ("FSIs") has been observed in areas involving the automation of internal processes and risk management, in the form of credit scoring and fraud detection.
In response to the plethora of risks associated with adopting AIDA in finance, regulators around the world have developed their own guidelines to address what they identify as the main categories of risk. In a research study of 36 sets of guidelines on artificial intelligence ethics and principles, the team at the Berkman Klein Center found that the topic of "fairness and non-discrimination" featured in all guidelines studied, the Monetary Authority of Singapore ("MAS") FEAT principles being one of them.
The effectiveness of artificial intelligence is fundamentally based on the data it analyzes. It follows that AIDA technology is limited both by latent biases in the data and by the algorithmic perpetuation of those biases. To counter these risks, it is essential to identify the context of the data used and to understand how this data is relevant to the end product.
Context is of particular importance because the latent biases mentioned above can hamper the ability of the system to process data. Such latent biases can be observed from the following example:
"If one feeds data on the professional white-collar workforce from the 1940s to the 1970s into an artificial intelligence system to predict which demographics of individuals would be the most successful candidates for white-collar occupations, the suggestion would probably be white males of a certain age."
As we accept that data may always contain some form of bias, extra care should be taken when handling the final product and appropriate adjustments should be made to mitigate such bias. Such adjustments are necessary not only to improve the accuracy of the final product, but also to integrate a human assessment of the ethics, morals and social acceptability of the final product into the decision-making process.
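The perpetuation of latent bias described above can be sketched in a few lines. Everything below is hypothetical and constructed purely for illustration: a synthetic "historical" dataset in which one group faced a higher bar, and a naive system that learns the historical outcome rates and so reproduces the disparity.

```python
import random

random.seed(0)

# Hypothetical historical records: group "B" historically needed a
# higher skill level than group "A" to receive a positive decision.
def make_history(n=1000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()
        hired = skill > (0.4 if group == "A" else 0.7)  # the latent bias
        records.append((group, skill, hired))
    return records

# A naive "model" that simply learns the historical positive rate
# per group and would use it to score future candidates.
def learn_rates(records):
    rates = {}
    for g in ("A", "B"):
        outcomes = [hired for grp, _, hired in records if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

history = make_history()
rates = learn_rates(history)
# Group "A" shows a markedly higher learned rate: the bias in the
# historical data carries straight through into the model's output.
print(rates)
```

The point of the sketch is that no step in the pipeline is malicious; the disparity enters solely through the training data, which is why the Methodology asks assessors to examine data context before looking at the model.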
Having highlighted concerns about fairness in AIDA decision-making, we assess the findings presented by the Consortium in the following sections.
Principles of fairness
In Singapore, MAS has published a set of principles governing the use of AIDA by FSIs. The fairness principles underpin the Consortium's Methodology, and their application keeps AIDA's decision-making process aligned with overall business and fairness objectives.
The four principles of fairness are as follows:
F1 – Individuals or groups of individuals are not systematically disadvantaged by decisions made by AIDA, unless such decisions can be justified
F2 – The use of personal attributes as input factors for decisions made by AIDA is justified
F3 – Data and models used for decisions made by AIDA are regularly reviewed and validated for accuracy and relevance, and to minimize unintentional bias
F4 – Decisions made by AIDA are regularly reviewed so that models behave as designed and intended
The methodology consists of five steps:
(A) describe the objectives and context of the system;
(B) examine data and models to detect unintended biases;
(C) measure the disadvantage;
(D) justify the use of a personal attribute; and
(E) examine the monitoring and review of the system.
Steps A, B, and C direct the assessor to establish both the business and fairness goals of the system, which sets the benchmark against which the fairness and potential trade-offs of the system are measured. In the HSBC simulated case study on marketing unsecured loans, the potential harms and benefits of a marketing intervention for individuals selected by AIDA were considered. Historically, foreign nationals have had a lower approval rate for loan applications, and the study noted a potential risk of further disadvantaging foreign nationals when this historical data is used. By identifying latent bias at an early stage, FSIs are able to introduce mitigation mechanisms, such as adjusting the decision threshold for foreign nationals, to mitigate the bias present in the data.
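The threshold mitigation mentioned in the study can be illustrated with a short sketch. The scores and cut-offs below are entirely hypothetical, and equalizing selection rates across groups is one possible reading of the mitigation, not the study's actual mechanism:

```python
# Share of a group selected at a given score cut-off.
def selection_rate(scores, threshold):
    return sum(s >= threshold for s in scores) / len(scores)

# Choose a group-specific cut-off so that roughly the target share
# of that group is selected (a simple demographic-parity adjustment).
def equalising_threshold(scores, target_rate):
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

# Hypothetical model scores; the "foreign" group scores lower because
# the model was trained on historically biased approval data.
local = [0.82, 0.75, 0.71, 0.66, 0.60, 0.55]
foreign = [0.70, 0.64, 0.58, 0.52, 0.48, 0.41]

base_threshold = 0.65
rate_local = selection_rate(local, base_threshold)
adjusted = equalising_threshold(foreign, rate_local)
rate_foreign = selection_rate(foreign, adjusted)
```

Under these toy numbers, the adjusted cut-off brings the foreign group's selection rate into line with the local group's, which is the sense in which step C's measured disadvantage feeds directly into a concrete mitigation.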
The concept of fairness should not be viewed as blindness to personal attributes. An algorithm that is blind to gender or race can widen pre-existing disparities, and intervention may be needed to promote fairness. It was observed in the HSBC study that a higher loan rejection rate for foreign nationals would materialize if the applicant's nationality was not taken into account by the system. Such inclusion of a personal attribute was justifiable to ensure that the system meets the intended objectives set out in step A and complies with fairness principles F1 and F2.
Finally, the Methodology calls for continuous monitoring of the system, in accordance with fairness principles F3 and F4. HSBC suggested that such monitoring could be implemented by performing an analysis before launching a campaign, to guard against significant changes in the system's parameters; monitoring the system's output during the campaign; and having the senior management team review the end result of the campaign to ensure that the system meets established goals. In order to keep humans in the loop of AIDA technology operations, it has been suggested that such an accountability framework be built on top of existing infrastructure. In Singapore, this may take the form of an extension of the scope and responsibilities of senior FSI managers under the MAS proposed guidelines on individual accountability and conduct (the "IAC proposed guidelines"), to incorporate responsibility for the day-to-day operations of AIDA technology.
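The in-campaign monitoring step can be sketched as a simple disparity check. The outcome data and the 10% tolerance below are hypothetical, chosen only to show the shape of such a check, not figures from the study:

```python
# Gap between the highest and lowest positive-outcome rate across groups.
def disparity(outcomes_by_group):
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Flag the system for senior-management review when the gap exceeds
# the tolerance accepted at launch.
def needs_review(outcomes_by_group, tolerance=0.10):
    return disparity(outcomes_by_group) > tolerance

# Hypothetical approval outcomes (1 = approved) per applicant group.
launch_snapshot = {"local": [1, 1, 0, 1], "foreign": [1, 0, 1, 1]}
mid_campaign = {"local": [1, 1, 1, 1], "foreign": [1, 0, 0, 0]}

needs_review(launch_snapshot)  # rates are equal at launch
needs_review(mid_campaign)     # gap has widened; escalate for review
```

A check of this kind operationalizes F3 and F4: the model is not trusted once at launch but is re-measured against its fairness benchmark for as long as it runs.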
We note that the Methodology is principles-based and does not prescribe any mandatory responsibilities or regulatory obligations with which FSIs must comply. It remains to be seen how the fairness principles will fare beyond simulated studies. Going forward, phase two of the Veritas initiative will focus on developing the methodology for assessing ethics, accountability and transparency.