A think tank for inclusive Artificial Intelligence

Preventing racial, age, gender, disability and other discrimination by humans and A.I. using the latest advances in Artificial Intelligence

The new issue of Science (Vol 357, Issue 6346) is devoted to AI.

We strongly suggest reading the editorial "AI, people, and society".
(Science Editorial)
Problem definition
The "AI Spring," fueled by major leaps in the performance and applications of machine-learning technologies, especially deep learning, has triggered a wave of concerns about AI safety. Most of these concerns focus on the possibility of intelligent machines physically harming humans or exterminating humanity entirely. These concerns are important, but they are not the most pressing ones.
Sentient machines are very unlikely to get out of control any time soon. However, racial, gender, ethnic, and age discrimination by artificially intelligent systems is probably already happening in many aspects of our lives that involve individual and group profiling.
For example, as we learned through several experimental projects, including Aging.AI and Beauty.AI, when certain population groups are under-represented in the training sets, those populations are either left out of the results or subject to higher error rates.
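One simple audit for this kind of disparity is to compare a model's error rate across population groups. The sketch below is a minimal, hypothetical illustration (the record format and group labels are invented for the example, not taken from Aging.AI or Beauty.AI):

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute a classifier's error rate separately for each group.

    `records` is a list of (group, predicted, actual) tuples.
    This structure is illustrative, not from any specific dataset.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy example: the under-represented group shows a higher error rate.
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1),
    ("majority", 0, 0), ("minority", 1, 0), ("minority", 0, 0),
]
rates = per_group_error_rates(records)
```

A large gap between the per-group rates is exactly the kind of signal an audit of a deployed system would flag for further investigation.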
This collective will try to address these types of problems through meetings, seminars, hackathons, open database sharing, and audits of artificially intelligent systems, in order to promote and encourage inclusion, equality, balance, and diversity.
Research directions
AI for Diversity.
Using AI to detect, analyze, prevent and combat human biases and discrimination.
Using AI to prevent discrimination by AI.
The title of our first research project is "Evaluating race and sex diversity in the world's largest companies using Deep Neural Networks".

Here is the abstract of our first paper:
Diversity is one of the fundamental properties for the survival of species, populations, and organizations. Recent advances in deep learning allow for the rapid and automatic assessment of organizational diversity and possible discrimination by race, sex, age, and other parameters. Automating the assessment of organizational diversity using deep neural networks, and thereby eliminating the human factor, may provide a set of real-time, unbiased reports to all stakeholders. In this pilot study, we applied deep-learned predictors of race and sex to the executive management and board member profiles of the 500 largest companies from the 2016 Forbes Global 2000 list, compared the predicted ratios to the corresponding ratios within each company's country of origin, and ranked the companies by a sex-, age-, and race-diversity index (DI). While the study has many limitations and no claims are made concerning individual companies, it demonstrates a method for the rapid and impartial assessment of organizational diversity using deep neural networks.
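The abstract compares a company's predicted group ratios to the baseline ratios in its country of origin. The paper's actual DI formula is not reproduced here, so the function below is only a plausible stand-in for illustration (the cap at 1.0 and the zero-baseline handling are assumptions, not the published definition):

```python
def diversity_index(observed_share, baseline_share):
    """Hypothetical diversity index: a group's observed share among a
    company's executives relative to that group's share in the company's
    country of origin, capped at 1.0.

    This formula is an illustrative stand-in; the paper's actual DI
    definition is not given in this document.
    """
    if baseline_share == 0:
        return 1.0  # no meaningful baseline to compare against
    return min(observed_share / baseline_share, 1.0)

# Example: 20% women among predicted executive faces vs. a 50% baseline.
di = diversity_index(0.20, 0.50)
```

Under this sketch, a value near 1.0 means the company mirrors its country's demographics for that group, while lower values indicate under-representation.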

Click the button below to get access to the dataset that we used in this study.
Immediate tactical goals:
  • Establish a discussion forum for thought leaders in AI and in racial, gender, and age discrimination issues, together with biologists, policy makers, and ethicists;
  • Schedule meetups to discuss possible discrimination issues in AI and strategies to minimize exclusion and bias;
  • Develop a range of guidelines and validation mechanisms to test the deep-learned systems and other cognitive computing solutions for racial, gender, age and ethnic bias;
  • Develop open-access data sets that allow developers to train algorithms on minority data.
Achievement has no color.
— Abraham Lincoln
Case studies
In media
To be announced.
Join our effort
Nominate an advisor or become a volunteer
Youth Laboratories
Email: team@beauty.ai