University of Maryland Division of Research

Researchers Work to Make Artificial Intelligence Genuinely Fair

$1.6M NSF, Amazon Awards Bolster Studies of AI in College Admissions, Language Translation

April 19, 2022

[Image: Collage of cartoon portraits]

Artificial intelligence (AI) algorithms help make online shopping seamless, calculate credit scores, navigate vehicles and even offer judges criminal sentencing guidelines.

But as the use of AI increases exponentially, so does the concern that biased data can result in flawed decisions or prejudiced outcomes.

Now, backed by a combined $1.6 million in funding from the National Science Foundation (NSF) and Amazon, two teams of University of Maryland researchers are working to eliminate those biases by developing new algorithms and protocols that can improve the efficiency, reliability and trustworthiness of AI systems.

Out of 11 proposals that were accepted this year by the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon, two are led by UMD faculty.

The program’s goals are to increase accountability and transparency in AI algorithms and make them more accessible so that the benefits of AI are available to everyone. This includes machine learning algorithms—a subset of AI in which computerized systems are “trained” on large datasets so that they can make appropriate decisions on new inputs. Machine learning is used by some colleges around the country to rank applications for admittance to graduate school or allocate resources for faculty mentoring, teaching assistantships or coveted graduate fellowships.

“As these AI-based systems are increasingly used in higher education, we want to make sure they render representations that are accurate and fair, which will require developing models that are free of both human and machine biases,” said Furong Huang, an assistant professor of computer science who is leading one of the UMD teams.

That project, “Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion,” received $625,000 from NSF with an additional $375,000 from Amazon.

A key part of the research, Huang said, is to develop “dynamic fairness classifiers” that allow the system to train on constantly evolving data and then make multiple decisions over an extended period. This requires feeding the AI system historical admissions data, as is normally done now, and consistently adding student-performance data, something that is not currently done on a regular basis.
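The idea of a classifier that retrains as new cohort data arrives, while tracking fairness over time, can be illustrated with a toy sketch. This is not the team's actual method; the threshold model, the cohort data and the demographic-parity metric below are all illustrative assumptions.

```python
# Toy sketch of a "dynamic fairness" loop: refit a model as each new cohort
# of data arrives and log a fairness metric every round. All names, data and
# the parity metric are illustrative assumptions, not the project's method.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

class ThresholdAdmitter:
    """Toy model: admit the top fraction of scores seen so far."""
    def __init__(self, admit_fraction=0.4):
        self.admit_fraction = admit_fraction
        self.threshold = 0.0

    def fit(self, scores):
        ranked = sorted(scores, reverse=True)
        k = max(1, int(len(ranked) * self.admit_fraction))
        self.threshold = ranked[k - 1]

    def decide(self, scores):
        return [1 if s >= self.threshold else 0 for s in scores]

# One cohort per round: the model is refit on all data seen so far
# (the "constantly evolving" dataset), and the parity gap is logged
# so drift toward biased decisions can be flagged early.
model = ThresholdAdmitter()
history = []
for cohort_scores, cohort_groups in [
    ([0.9, 0.8, 0.4, 0.3], ["A", "B", "A", "B"]),
    ([0.7, 0.6, 0.5, 0.2], ["A", "A", "B", "B"]),
]:
    history.extend(cohort_scores)
    model.fit(history)
    decisions = model.decide(cohort_scores)
    gap = demographic_parity_gap(decisions, cohort_groups)
    print(f"threshold={model.threshold:.2f} gap={gap:.2f}")
```

A real system would replace the threshold rule with a learned model and fold student-performance data back into each retraining round, but the loop structure—retrain, decide, audit—is the same.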

The researchers are also developing algorithms that can differentiate notions of fairness as they relate to resource allocation. This is important for quickly identifying resources—additional mentoring, interventions or increased financial aid—for at-risk students who may already be underrepresented in the STEM disciplines.

Collaborating with Huang are Min Wu and Dana Dachman-Soled, a professor and an associate professor, respectively, in the Department of Electrical and Computer Engineering.

A second UMD team led by Marine Carpuat, an associate professor of computer science, is focused on improving machine learning models used in language translation systems—with particular focus on platforms that can accurately function in high-stakes situations like an emergency hospital visit or legal proceeding.

That project, “A Human-Centered Approach to Developing Accessible and Reliable Machine Translation,” is funded with $393,000 from NSF and $235,000 from Amazon.

Immigrants and others who don’t speak the dominant language can be hurt by poor translation, said Carpuat. “This is a fairness issue, because these are people who may not have any other choice but to use machine translation to make important decisions in their daily lives,” she said. “Yet they don’t have any way to assess whether the translations are correct or the risks that errors might pose.”

To address this, Carpuat’s team will design systems that are more intuitive and interactive to help the user recognize and recover from translation errors that are common in many systems today.

Central to this approach is a machine translation bot that will quickly recognize when a user is having difficulty. The bot will flag imperfect translations, and then help the user to craft alternate inputs—phrasing their query in a different way, for example—resulting in better outcomes.
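The flag-and-rephrase workflow described above can be sketched in a few lines. This is a hypothetical illustration, not the project's bot: the mock confidence score, the threshold and every function name below are assumptions standing in for a real machine translation system's quality estimate.

```python
# Illustrative sketch (not the project's implementation): a translation
# helper that flags low-confidence outputs and prompts the user to rephrase.

def translate_with_confidence(text):
    """Stand-in for a real MT system returning (translation, confidence).
    As a mock heuristic, longer inputs get lower confidence."""
    n_words = len(text.split())
    confidence = max(0.1, 1.0 - 0.05 * n_words)
    return f"<translation of: {text}>", confidence

def assist(text, threshold=0.7):
    """Return the translation, whether it passed the confidence check,
    and, if not, a hint nudging the user toward an alternate input."""
    translation, confidence = translate_with_confidence(text)
    if confidence >= threshold:
        return {"translation": translation, "ok": True, "hint": None}
    return {
        "translation": translation,
        "ok": False,
        "hint": "Low confidence -- try a shorter or simpler phrasing.",
    }

result = assist("I need to refill my blood pressure medication before Friday")
print(result["ok"], result["hint"])
```

In a deployed system the confidence score would come from the translation model itself (or a separate quality-estimation model), and the hints would be tailored to the specific error detected rather than generic.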

Carpuat’s team includes Ge Gao, an assistant professor in UMD’s iSchool, and Niloufar Salehi, an assistant professor in the School of Information at UC Berkeley.

Of the six researchers involved in the Fairness in AI projects, five have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

“We’re tremendously encouraged that our faculty are active in advocating for fairness in AI and are developing new technologies to reduce biases on many levels,” said UMIACS Director Mihai Pop. “I’m particularly proud that the teams represent four different schools and colleges at two universities. This is interdisciplinary research at its best.”

Original news story written by Tom Ventsias