Page 8 Soft


Fairness Principles for Artificial Intelligence and Data Analytics

By Margaret J. Beltran
April 7, 2021


Following the conclusion of the first phase of the Veritas initiative on January 6, 2021, the Veritas consortium (the “Consortium”) has published two white papers detailing the fairness, ethics, accountability and transparency (“FEAT”) Fairness Assessment Methodology (the “Methodology”) and its application in two use cases.[1] This article provides an update on Singapore’s fairness framework for the adoption of artificial intelligence in finance.

Background

Artificial intelligence and data analytics (“AIDA”) technology is increasingly used for its ability to optimize decision-making processes. AIDA removes human decision-making as a variable and replaces it with a data-driven approach. The adoption of AIDA by financial services institutions (“FSIs”) has been observed in areas involving the automation of internal processes and risk management, such as credit scoring and fraud detection.[2]

In response to the plethora of risks associated with adopting AIDA in finance, regulators around the world have developed their own guidelines to address what they identify as the main categories of risk. In a research study of 36 guidelines on the ethics and principles of artificial intelligence, the team at the Berkman Klein Center found that the topic of “fairness and non-discrimination” featured in every guideline studied, the Monetary Authority of Singapore’s (“MAS”) FEAT principles being one of them.[3]

Fairness in AIDA

The effectiveness of artificial intelligence depends fundamentally on the data it analyzes. It follows that AIDA technology is limited both by latent biases in the data and by the algorithmic perpetuation of those biases.[4] To counter these risks, it is essential to identify the context of the data used and to understand how that data is relevant to the end product.

Context is of particular importance because the latent biases mentioned above can hamper the system’s ability to process data fairly. Such latent biases can be observed in the following example:

“If one feeds data on the professional white-collar workforce from the 1940s to the 1970s into an artificial intelligence system to predict which demographics of individuals would be the most successful candidates for white-collar occupations, the suggestion would probably be white males of a certain age.”[5]

As we accept that data may always contain some form of bias, extra care should be taken when handling the final product, and appropriate adjustments should be made to mitigate such bias. Such adjustments are necessary not only to improve the accuracy of the final product, but also to integrate a human assessment of the ethics, morality and social acceptability of the final product into the decision-making process.[6]

Having highlighted concerns about fairness in AIDA-driven decision-making, we assess the findings presented by the Consortium in the following sections.

Fairness Principles

In Singapore, MAS has published a set of principles governing the use of AIDA by FSIs. These fairness principles form the foundation of the Consortium’s Methodology, and their application keeps AIDA’s decision-making process aligned with overall business and fairness objectives.

The four fairness principles are as follows[7]:

F1 – Individuals or groups of individuals are not systematically disadvantaged by decisions made by AIDA, unless such decisions can be justified

F2 – The use of personal attributes as input factors for decisions made by AIDA is justified

F3 – Data and models used for decisions made by AIDA are regularly reviewed and validated for accuracy and relevance, and to minimize unintentional bias

F4 – Decisions made by AIDA are regularly reviewed so that models behave as designed and intended

Methodology

The methodology consists of five steps:

(A) describe the objectives and context of the system;

(B) examine data and models to detect unintended biases;

(C) measure the disadvantage;

(D) justify the use of a personal attribute; and

(E) examine the monitoring and review of the system.[8]

Steps A, B, and C direct the assessor to establish both the business and fairness objectives of the system, which sets the benchmark against which the system’s fairness and potential trade-offs are measured. In HSBC’s simulated case study on marketing unsecured loans, the potential harms and benefits of a marketing intervention targeting those selected by AIDA were considered.[9] Historically, foreign nationals have had a lower approval rate for loan applications, and the study noted a potential risk of further disadvantaging foreign nationals when this historical data is used.[10] By identifying latent bias at an early stage, FSIs are able to introduce mitigation mechanisms, such as adjusting the threshold for foreign nationals, to mitigate the bias present in the data.[11]
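As a rough illustration of steps B and C, the sketch below measures the approval-rate gap between two applicant groups and adjusts the score threshold for the disadvantaged group until the gap falls within a tolerance. All scores, group labels, thresholds and the tolerance are hypothetical, not drawn from the HSBC study:

```python
# Hypothetical sketch of measuring group disadvantage (step C) and
# mitigating it via a per-group threshold adjustment. All figures
# are invented for illustration.

def approval_rate(scores, threshold):
    """Fraction of applicants whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Toy credit scores for two applicant groups (step B: examine data).
local_scores = [0.72, 0.65, 0.80, 0.58, 0.91, 0.69]
foreign_scores = [0.55, 0.61, 0.48, 0.70, 0.52, 0.66]

threshold = 0.60
rate_local = approval_rate(local_scores, threshold)
rate_foreign = approval_rate(foreign_scores, threshold)

# Step C: quantify the disadvantage as the approval-rate gap.
gap = rate_local - rate_foreign
print(f"local={rate_local:.2f} foreign={rate_foreign:.2f} gap={gap:.2f}")

# One possible mitigation: relax the threshold for the disadvantaged
# group until the gap falls below a business-chosen tolerance (0.05).
adjusted = threshold
while rate_local - approval_rate(foreign_scores, adjusted) > 0.05:
    adjusted -= 0.01
print(f"adjusted threshold for foreign applicants: {adjusted:.2f}")
```

In practice an FSI would justify both the tolerance and the direction of the adjustment against the business and fairness objectives fixed in step A.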

The concept of fairness should not be viewed as blindness to personal attributes. A gender- or race-blind algorithm can widen pre-existing disparities, and intervention may be needed to promote fairness. The HSBC study observed that a higher loan rejection rate for foreign nationals would materialize if the applicant’s nationality were not taken into account by the system.[12] Such inclusion of a personal attribute was justifiable to ensure that the system meets the intended objectives set out in step A and satisfies fairness principles F1 and F2.
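A toy sketch of why attribute-blindness can fail: even with nationality removed, a correlated proxy feature (here, length of local credit history) can reproduce the disparity. The features, weights and applicants are entirely invented and are not taken from the HSBC study:

```python
# Hypothetical illustration: a "nationality-blind" score can still
# disadvantage foreign nationals through a correlated proxy feature.
# All features, weights and applicants are invented.

applicants = [
    # (is_foreign, local_credit_history_years, income_score)
    (False, 10, 0.7), (False, 8, 0.6), (False, 12, 0.8),
    (True, 1, 0.7), (True, 2, 0.6), (True, 0, 0.8),
]

def blind_score(history_years, income):
    """Ignores nationality, but local credit-history length acts as
    a proxy for it: recent arrivals have short local histories."""
    return 0.05 * history_years + income

threshold = 1.0
approved = [blind_score(h, i) >= threshold for _, h, i in applicants]

locals_ok = sum(a for (f, _, _), a in zip(applicants, approved) if not f)
foreign_ok = sum(a for (f, _, _), a in zip(applicants, approved) if f)
print(f"locals approved: {locals_ok}/3, foreign approved: {foreign_ok}/3")
```

Despite identical income scores across the two groups, every foreign applicant in this toy sample is rejected, which is the kind of disparity that justifies deliberately considering the attribute under principles F1 and F2.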

Finally, the Methodology calls for continuous monitoring of the system, in accordance with fairness principles F3 and F4. HSBC suggested that such monitoring can be implemented by performing an analysis before launching a campaign, to guard against significant changes in system parameters; monitoring the system’s output during the campaign; and having the senior management team review the end result of the campaign to ensure that the system meets established goals.[13] To keep humans in the loop of AIDA technology operations, it has been suggested that such an accountability framework be built on top of existing infrastructure.[14] In Singapore, this may take the form of an extension of the scope and responsibilities of senior FSI managers under MAS’s proposed Guidelines on Individual Accountability and Conduct (the “Proposed IAC Guidelines”)[15] to integrate responsibility for the day-to-day operations of AIDA technology.[16]
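The monitoring loop described above can be sketched as a simple drift check: compare in-campaign approval rates per group against the pre-launch baseline and flag deviations for senior-management review. The baseline figures, group names and tolerance below are all hypothetical:

```python
# Hypothetical monitoring sketch (principles F3/F4): flag groups whose
# live approval rate drifts beyond a tolerance from the pre-launch
# baseline. All figures and group labels are illustrative.

baseline = {"local": 0.80, "foreign": 0.72}  # pre-launch analysis
tolerance = 0.05                             # business-chosen limit

def check_drift(observed):
    """Return the groups whose live rate drifted beyond tolerance."""
    return [group for group, rate in observed.items()
            if abs(rate - baseline[group]) > tolerance]

# Simulated mid-campaign figures (monitor the system's output).
live = {"local": 0.79, "foreign": 0.60}
flagged = check_drift(live)
print("flag for senior-management review:", flagged)
```

A production version would run on a schedule against live decision logs and feed its flags into the escalation channels that the accountability framework already provides.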

Remarks

We note that the Methodology is principles-based and does not prescribe any mandatory responsibilities or regulatory obligations with which FSIs must comply. It remains to be seen how the fairness principles will perform beyond simulated studies. Going forward, phase two of the Veritas initiative will focus on developing the Methodology’s assessment of ethics, accountability and transparency.

