
Building a More Responsible AI System (Part 4 of a four-part blog series)

Journey towards Effective Targeting & Delivery of Welfare Schemes

 

The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine learning algorithms to automate simple and complex decision-making processes. AI is having an impact on democracy and governance as computerized systems are being deployed to improve accuracy and drive objectivity in government functions.

In the previous blog of the series, we observed how an artificial intelligence system can be developed with limited data but strong industry knowledge. In this part, we will show you how EasyGov strives to create a more responsible AI tool.

 

In the pre-algorithm world, humans and organizations made decisions in terms of fairness, transparency, and equity. Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies. In machine learning, algorithms rely on multiple data sets, or training data, that specify the correct outputs for certain people or objects. From that training data (in our case, domain-knowledge-engineered data), the algorithm learns a model that can be applied to other people or objects and predicts the correct outputs for them.
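To make this train-then-predict loop concrete, here is a minimal, purely illustrative sketch; the column names, labels, and model choice are hypothetical stand-ins and do not reflect EasyGov's actual pipeline:

```python
# Hypothetical sketch: learn an eligibility model from domain-knowledge-
# engineered training rows, then apply it to a family it has never seen.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Engineered training data (illustrative columns and labels only).
train = pd.DataFrame({
    "house_type":  [0, 1, 2, 0, 2],   # 0=dilapidated, 1=livable, 2=pucca
    "income_band": [0, 1, 2, 0, 1],
    "eligible":    [1, 1, 0, 1, 0],   # correct outputs supplied by experts
})

model = RandomForestClassifier(random_state=0)
model.fit(train[["house_type", "income_band"]], train["eligible"])

# Predict the correct output for a new family.
new_family = pd.DataFrame({"house_type": [0], "income_band": [0]})
print(model.predict_proba(new_family))   # probability of eligibility
```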

Image 1: Non-Explainable AI being a Black Box for User

But is it enough to design a responsible AI system for welfare service recommendations? Will this alone make a Responsible AI? The questions we now face are:

  1. Is the system considering the right data?

  2. What are the AI recommendations based upon?

  3. Do the probability distributions and parameters being considered provide the right results?

Since machines can treat similarly-situated people and objects differently, research is starting to reveal some troubling situations in which the reality of algorithmic decision-making falls short of our expectations. Algorithmic bias can manifest in several ways, with varying degrees of consequences for the subject group. Let us consider some real-world examples of algorithmic bias.

Bias in Online Recruitment Tool


Case: Amazon’s global workforce is 60 percent male, and men hold 74 percent of the company’s managerial positions. Amazon discontinued the use of a recruiting algorithm after discovering gender bias.

Data Used: The data that engineers used to create the algorithm was derived from the resumes submitted to Amazon over a 10-year period, which were predominantly from white males.

Biased Result: As a result, the AI software penalized any resume that contained the word “women’s” in the text and downgraded the resumes of women who attended women’s colleges, resulting in gender bias.

Bias in Criminal Justice Algorithms

Case: The COMPAS algorithm, which is used by judges to predict whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans, according to a report from ProPublica.

Data Used: The algorithm assigns a risk score to a defendant’s likelihood of committing a future offense, relying on the voluminous data available on arrest records, defendant demographics, and other variables.

Biased Result: Compared to whites who were equally likely to reoffend, African-Americans were more likely to be assigned a higher risk score, resulting in longer periods of detention while awaiting trial.


There are many reasons for such biases. Some causes of AI biases are listed below:

  1. Historical Human Biases: Historical realities often find their way into the algorithm’s development and execution, and they are exacerbated by the lack of diversity that exists within the computer and data science fields. Further, human biases can be reinforced and perpetuated without the user’s knowledge.

  2. Incomplete or Unrepresentative Training Data: If the data used to train the algorithm are more representative of some groups of people than others, the predictions from the model may also be systematically worse for unrepresented or underrepresented groups. A minimal representativeness check along these lines is sketched after the figure below.

Image: Stages of AI Bias
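One minimal check for the second cause is to compare the composition of the training data against the population it is meant to represent; the counts, groups, and tolerance below are illustrative assumptions, not figures from our system:

```python
# Illustrative sketch: flag groups that are under-represented in the
# training data relative to the population they should cover.
import pandas as pd

# Hypothetical counts; real population shares would come from census data.
train = pd.DataFrame({"gender": ["M"] * 80 + ["F"] * 20})
population_share = {"M": 0.5, "F": 0.5}

train_share = train["gender"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    if observed < 0.8 * expected:   # arbitrary 20% tolerance for the sketch
        print(f"{group}: {observed:.0%} of training data vs "
              f"{expected:.0%} of population -> under-represented")
```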

While these examples of bias are not exhaustive, they suggest that these problems are empirical realities and not just theoretical concerns.

 

TRUST IN AI IS CRITICAL


Since we are building a system derived from human intelligence, ‘trust’ is critical. Understanding the system’s decision-making process, and the attributes that influence those decisions, is essential. For use cases with major human impact, such as governance and policy making, understanding how the AI system reaches its decisions becomes a critical mission.

According to Google AI, these questions are far from solved, and in fact are active areas of research and development. The recommended practices for making progress in the responsible development of AI are:

  1. Use a human-centered design approach

  2. Identify multiple metrics to assess training and monitoring (see the sketch after this list)

  3. When possible, directly examine your raw data

  4. Understand the limitations of your dataset and model

  5. Test, Test, Test

  6. Continue to monitor and update the system after deployment
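As a rough sketch of practice 2, metrics can be computed separately for each group so that a single aggregate number cannot hide a disparity; the groups, labels, and predictions below are made-up placeholders:

```python
# Sketch: evaluate more than one metric, broken down by group, so that a
# single aggregate accuracy figure cannot hide a disparity.
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical evaluation records: (group, true label, model prediction).
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 1),
    ("rural", 1, 0), ("rural", 1, 1), ("rural", 0, 0), ("rural", 1, 0),
]

for group in ("urban", "rural"):
    y_true = [t for g, t, p in records if g == group]
    y_pred = [p for g, t, p in records if g == group]
    print(group,
          "accuracy:", round(accuracy_score(y_true, y_pred), 2),
          "recall:",   round(recall_score(y_true, y_pred), 2))
```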

All of these development practices can be implemented with the help of Explainable AI. In the following sections we describe how EasyGov’s “Family-Centric Welfare Recommendation Tool” builds a Responsible AI by applying the above-mentioned practices through Explainable AI.

 

EXPLAINABLE AI


Explainable AI (XAI) refers to methods and techniques for applying artificial intelligence technology such that the results of the solution can be understood by humans. This area inspects and tries to understand the steps and models involved in making decisions.

The following are the actionables of Explainable AI:

  1. Explain To Justify

  2. Explain To Improve

  3. Explain To Discover

  4. Explain To Control

Image 2: Actionables of Explainable AI
 

STITCHING THE CONCEPTS: EASYGOV’S EXPLAINABLE AI


Explainable AI removes the ‘black-box’ decision-making process of the AI system, ensuring trust, adoption, robustness, and accuracy in the system. EasyGov’s system provides various functionalities to implement Explainable AI in the following two ways:

  1. Scoring and Simulation: Explainable AI allows us to go through the simulated tool behavior and the ultimate scoring given by the system for decision-making.

  2. Analysis: Explainable AI helps us in analyzing derived solutions and considered parameters.

Simulation of AI Recommendations

The Simulation Tool allows us to simulate different tool behaviors and recommendations based on input attributes. Example: shown in the visual below is the simulation of system recommendations of various welfare benefits and corresponding weights for a set of families.

Image: Simulation of Explainable AI Recommendations by our System
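A toy version of such a simulation loop might look like the sketch below; the scoring function, attributes, and weights are stand-ins for illustration only, not EasyGov's model:

```python
# Illustrative simulation loop: vary one input attribute and observe how the
# recommendation scores change (the scoring function is a hypothetical stand-in).
def score_benefits(family):
    """Toy stand-in for the recommendation model."""
    scores = {"House Repair": 0.2, "HealthCare Insurance": 0.5}
    if family["house_type"] == "dilapidated":
        scores["House Repair"] += 0.6
    if family["income"] == "low":
        scores["HealthCare Insurance"] += 0.3
    return scores

base_family = {"house_type": "livable", "income": "low"}
for house_type in ["dilapidated", "livable", "pucca"]:
    simulated = {**base_family, "house_type": house_type}
    print(house_type, score_benefits(simulated))
```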

Scoring of AI Recommendations

Based on the structured parameter linkages and corresponding Conditional Probability Distributions, the system recommends benefits in percentages. Example: the major recommendations (by percentage) made by our AI system for the family profile shown in the figure below are:

  1. Monetary Assistance For Occupation: 70.31%

  2. HealthCare Insurance: 68.89%

  3. House Electrification: 61.82%

Image: Scoring of Explainable AI Recommendations by our System
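One way such CPD-based scoring could work is sketched below; the probability values, attributes, and the simple averaging rule are illustrative assumptions rather than the system's actual computation:

```python
# Hedged sketch of CPD-based scoring: combine conditional probabilities of a
# benefit given each observed family attribute into a single percentage.
# All probability values here are made up for illustration.
cpd = {
    ("HealthCare Insurance", "income", "low"):           0.80,
    ("HealthCare Insurance", "house_type", "livable"):   0.58,
    ("House Electrification", "income", "low"):          0.70,
    ("House Electrification", "house_type", "livable"):  0.54,
}

family = {"income": "low", "house_type": "livable"}

def score(benefit, family):
    probs = [cpd[(benefit, param, value)] for param, value in family.items()]
    return 100 * sum(probs) / len(probs)     # simple average, as a stand-in

for benefit in ["HealthCare Insurance", "House Electrification"]:
    print(f"{benefit}: {score(benefit, family):.2f}%")
```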

Analysis of AI Recommendations

The analysis shows the major input parameters responsible for the recommendations given by the AI system. Example: the major parameters and their corresponding attributes resulting in the recommended House Repair benefit are:

  1. House Value: Low

  2. House Type: Dilapidated

  3. House Type: Livable

Image: Analysis of Major Contributing Factors in House Repair Intervention
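A simple way to approximate such an analysis is to measure how much a recommendation's score drops when each input attribute is left out; the scoring stand-in, attributes, and weights below are hypothetical:

```python
# Sketch: rank which input attributes contribute most to one recommendation by
# measuring how much the score drops when each attribute is removed.
def score_house_repair(family):
    """Toy scoring stand-in: higher when the house is in worse condition."""
    weights = {("house_type", "dilapidated"): 0.45,
               ("house_value", "low"):        0.30,
               ("income", "low"):             0.10}
    return sum(weights.get((k, v), 0.0) for k, v in family.items())

family = {"house_type": "dilapidated", "house_value": "low", "income": "low"}
base = score_house_repair(family)

contributions = {
    param: base - score_house_repair({k: v for k, v in family.items() if k != param})
    for param in family
}
for param, impact in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{param}: contributes {impact:.2f} to the House Repair score")
```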

Reverse Analysis of AI Recommendations

For greater accuracy, a reverse analysis approach allows us to see all the benefits recommended for a particular input parameter attribute. Example: the benefits recommended when the input is House Type: Dilapidated are:

  1. House Repair

  2. House Benefit

Image: Reverse Analysis of Benefits Having House Type Parameter’s Contribution
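Conceptually, reverse analysis amounts to inverting the benefit-to-attribute linkages so that the system can be queried by attribute; the linkage table below is illustrative only, not the tool's real parameter structure:

```python
# Sketch of reverse analysis: invert a benefit -> contributing-attribute map so
# we can ask which benefits a given attribute value feeds into.
benefit_inputs = {                      # illustrative linkages only
    "House Repair":  [("house_type", "dilapidated"), ("house_value", "low")],
    "House Benefit": [("house_type", "dilapidated")],
    "HealthCare Insurance": [("income", "low")],
}

reverse_index = {}
for benefit, attributes in benefit_inputs.items():
    for attribute in attributes:
        reverse_index.setdefault(attribute, []).append(benefit)

print(reverse_index[("house_type", "dilapidated")])
# -> ['House Repair', 'House Benefit']
```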
 

CONCLUSION

In order to create fair, accurate, interpretable, and secure AI, Explainable AI offers several advantages, which become more concrete for high-risk algorithms. The goal here is to monitor for disparate impacts resulting from the model that border on unethical, unfair, and unjust decision-making. A robust feedback loop aids in the detection of bias, which in turn supports the recommendation of regular audits. To sum up, the following outcomes can be expected from Explainable AI:

  1. It is easier to trust a system that explains its decisions compared to a black box.

  2. It can be verified that sensitive information in the data remains protected.

  3. It ensures that predictions are unbiased and do not implicitly or explicitly discriminate between protected groups.

  4. It ensures the robustness of the model by helping to prevent minor changes in the input from leading to large changes in the predictions.

  5. It can help validate the models beyond metrics such as accuracy and precision.

 

Also Read:

  1. https://ai.google/responsibilities/responsible-ai-practices/

  2. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/#footnote-10

 
