The 38th Annual AAAI Conference on Artificial Intelligence
February 20-27, 2024 | Vancouver, Canada
AAAI-24 Diversity & Inclusion Activities
Sponsored by the Association for the Advancement of Artificial Intelligence
February 22-25, 2024 | Vancouver Convention Centre – West Building | Vancouver, BC, Canada
Please note that these sessions are either a half day or a full day.
9:00AM – 3:00PM
12:30PM – 2:00PM
Women’s Mentoring Lunch
9:00AM – 1:00PM
2:00PM – 5:00PM
1:00PM – 5:00PM
9:00AM – 5:00PM
2:00PM – 5:00PM
10:00AM – 4:00PM
Cancelled – Trustworthy AI: Addressing Socioethical Effects of AI
Description of Activities
Thursday, February 22
7th DiverseInAI.org Workshop on Artificial Intelligence – Diversity, Belonging, Equity, and Inclusion (AIDBEI)
This workshop is the latest in a series organized by Diverse In AI, an affinity group that aims to foster links between participants from populations underrepresented in artificial intelligence, including but not limited to women, BIPOC persons, LGBTQ+ persons, and persons with disabilities (e.g., Black in AI, WiML, LatinX in AI, Queer in AI, Indigenous in AI, Disability in AI). Meanwhile, service and outreach events such as the Grace Hopper Celebration (GHC) give technologists opportunities to understand the needs of underserved populations and, in turn, to give back to these communities. The organizers of this workshop wish to bring these communities together to pursue their intersecting goals through interdisciplinary collaboration. This will help disseminate the benefits of AI to all underserved communities and support the mentoring of students and future technologists from isolated, underprivileged, and underrepresented communities.
Organizing Committee: Yihong Theis, William Hsu, and Enock Ayiku
Women’s Networking Lunch
Moderator: Maria Chang, IBM
Maria Chang is a Senior Research Scientist at IBM Research specializing in neuro-symbolic methods for natural language understanding, including integrations of structured knowledge bases and large language models to improve reliability, interpretability, and reasoning skills. She is particularly interested in how these capabilities can contribute to AI alignment and safety. She was a co-PI of the CHRONOS event understanding and prediction project, funded by the DARPA KAIROS program. She was also a key technical contributor to the Watson Dialogue-Based Tutoring project. Her work has appeared in a variety of AI and cognitive science conferences and journals, including AI Magazine, AAAI, IAAI, Topics in Cognitive Science, Spatial Cognition, and NeuroImage. She is currently an elected member of the AAAI Executive Council (elected in 2022). She received a PhD in computer science from Northwestern University and a BA in cognitive science from UC Berkeley.
Panelists: Brent Venable, Laura Hiatt, and Elizabeth Ondula
Brent Venable is the inaugural director of the University of West Florida's (UWF) Intelligent Systems and Robotics doctoral program. Previously, Venable served as a professor of computer science at Tulane University in New Orleans while also serving as a research scientist at the Florida Institute for Human and Machine Cognition in Ocala, Florida.
Venable's primary research interests lie within artificial intelligence, including constraint-based reasoning, preferences, temporal reasoning, and computational social choice. Her research focuses on providing a solid framework for the design and deployment of intelligent systems able to reason about preferences.
She began her career in higher education as a faculty member in the Department of Pure and Applied Mathematics at the University of Padova in Italy, where she also earned a doctorate in computer science and a Laurea degree with honors in mathematics.
Laura Hiatt leads the Adaptive Systems Section at the Navy Center for Applied Research in Artificial Intelligence, part of the U.S. Naval Research Laboratory in Washington, DC. She received her B.S. from Stanford University in Symbolic Systems, and her M.S. and Ph.D. from Carnegie Mellon University in Computer Science.
Her work has primarily focused on ways in which humans and robots can work together effectively as teammates. This research involves issues of planning and execution, cognitive science, and team-based task coordination strategies.
Elizabeth Ondula is an Electrical Engineer from the Technical University of Kenya and is currently a Ph.D. student in Computer Science at USC, where she is a member of the Autonomous Networks Research Group (https://anrg.usc.edu/www/). She co-organizes a bi-weekly reinforcement learning group, SUITERS-RL (https://suitersrl.github.io/). Before entering academia, she held roles as a Software Engineer at IBM Research in Kenya, Head of Product Development at Brave Venture Labs, and Co-lead of Hardware Research at iHub Nairobi. Outside academia and engineering, she enjoys music, poetry, fine arts, photography, journaling, and outdoor activities.
Friday, February 23
Promoting Meaningful, Beneficial and Informed Participation of African Communities in the Development and Utilization of AI Solutions
The development and utilization of AI to address key societal, environmental, and developmental issues in Africa is on a growth trajectory, and there are many successes, challenges, and lessons learned in the process. This workshop will provide a platform for stakeholders and interested parties to share case studies, empirical work, literature reviews, and thought papers focusing on how to engage African communities in the AI development process in a manner that identifies and addresses pertinent ethical, data protection, and intellectual property issues. The workshop will feature brief presentations followed by panel discussions among presenters and invited speakers.
Organizing Committee: Moses Thiga
Fostering Dynamic Inclusivity in AI Team Science
Diversity is essential, but it is made sustainable and transformative by the actions and environments of inclusivity. Inclusion is dynamic: it recognizes dimensions of diversity that emerge from identities, including the intersection of socioeconomic status, ethnicity, cultural background, gender, ability, and sexual orientation, but goes beyond them to ensure belonging, respect, and success. We will organize a strategic inclusive team science workshop to provide session participants with an opportunity to explore practical tools for equitable and inclusive research collaboration. Our objectives are to:
- Identify the challenges and opportunities for AI in strategic team science.
- Discuss diversity and inclusion within the framework of assets for organizations and teams that participants are a part of.
- Reflect on a specific case study regarding the inclusion of Indigenous peoples and epistemologies in AI research.
- Leave with a better understanding of the importance of diversity within diversity in team science, along with specific resources and tools for digital health.
Organizing Committee: Ashley Cordes, Parisa Rashidi, and Yulia Strekalova
Unfinished Comics for Inclusive AI Education
This session will include a hands-on activity around the use of comics as a medium for collaborative engagement and public education, using unfinished comics as a participatory design tool. Unfinished comics are a technique that uses incomplete comic panels and elements as participatory scaffolding materials to engage participants in collaborative design activities. We aim to collaboratively create comics with participants using modular unfinished comic materials, comprising an assortment of high-fidelity rendered comic panels and supplemental comic elements with which participants can add captions, dialogue, annotations, etc. Participants will be given these unfinished comic materials and will use them to co-create the final comic assemblage, drawing from their lived experiences to interpret the visuals and to construct the narrative text and the order of the panels. The unfinished comics will depict common, relatable contexts in which people interact with algorithms (e.g., booking a taxi or applying for a job), so participants will draw from their personal experiences in such contexts to “finish” the comic. The aim of this work is to get the audience to reflect on how they engage with AI-based systems while exploring a visual format that acts both as a pedagogical tool and as a channel for communicating collaboratively with others.
Organizing Committee: Falaah Arif Khan, Awais Hameed Khan, and Julia Stoyanovich
Saturday, February 24
Third International Workshop on Social Impact of AI for Africa (SIAIA-24)
Artificial Intelligence (AI) is rapidly transforming multiple sectors worldwide, from healthcare and agriculture to education and governance. However, this transformation has not been uniform, often neglecting the unique challenges and opportunities present in developing regions such as Africa. Recognizing this gap, the Third Social Impact of AI for Africa workshop (SIAIA-24) is a crucial platform for multi-disciplinary dialogue and collaborative action to harness AI’s potential for sustainable development in Africa. Building on the significant successes of SIAIA-22 and SIAIA-23, which initiated key conversations, catalyzed projects, and drew participation from over 30 countries, SIAIA-24 aspires to delve deeper into actionable strategies and to foster robust partnerships. This year’s focus is “AI for Sustainable Development Goals in Africa,” emphasizing sectors such as healthcare, agriculture, education, and governance. The workshop will feature keynote addresses, panel discussions, and paper presentations. Additionally, it will introduce an “African Young Innovators Showcase,” a platform for African students to present their ongoing research and receive constructive feedback and awards for their presentations. Through these activities, SIAIA-24 seeks not only to discuss the social implications of AI in Africa but also to contribute meaningfully to global efforts to ensure that the benefits of AI are universally accessible and socially impactful.
Organizing Committee: Yetunde Folajimi, Memo Ergezer, and Salem Othman
Cancelled – How to Know Your Market Value as an AI Researcher
Diversity, Equity, and Inclusion in Algorithmic Hiring: Perils and Promises of Human-Machine Systems
Organizations are increasingly relying on AI-enabled technologies in their hiring practices, as the efficacy of traditional hiring methods has been questioned and their costs have risen. Increased data availability and enhanced computing capacity have further enabled the use of algorithmic hiring. As the use of AI in hiring has increased, so have the Diversity, Equity, and Inclusion (DEI) concerns surrounding this practice. Indeed, much normative and empirical research has highlighted the DEI issues associated with algorithmic hiring, particularly with respect to women, minorities, and under-represented groups. To counter these issues, some have recently offered human-machine systems as a potential remedy, on the assumption that such systems outperform both humans and machines and mitigate the DEI concerns around algorithmic hiring. In light of these developments, we offer a dedicated forum to (a) chronicle the increasing use of AI in hiring; (b) identify and discuss DEI concerns surrounding algorithmic hiring; and (c) examine the challenges and opportunities that human-machine systems present in either exacerbating or mitigating the DEI concerns associated with algorithmic hiring.
Organizing Committee: Pooria Assadi and Nima Safaei
Sunday, February 25
Cancelled – Trustworthy AI: Addressing Socioethical Effects of AI
While the potential of AI systems is bountiful, much of that potential is still unknown, as are their risks. In this session, we intend to offer a brief, high-level overview of the adverse societal impacts of AI systems. To do so, we will highlight the need for multi-disciplinary governance and convergence throughout the AI lifecycle via critical systemic examination, and then discuss induced effects on and in society. In particular, we will consider these impacts from multiple disciplinary perspectives, including computer science, sociology, and environmental science, to discuss AI's interconnected societal risks and its inability to simultaneously satisfy all aspects of “Well-Being”. In our proposed activity, we plan to present infrastructural disparities and to discuss the ecological and sociological concerns of Trustworthy AI research in full, for example by unveiling harmful socially constructed and algorithmic biases against underrepresented protected classes (e.g., gender). In doing so, we aim to accentuate the necessity of holistically addressing the pressing concerns of AI systems from a socioethical impact-assessment perspective, explicating their harmful societal effects to truly enable humanity-centered Trustworthy AI.
Organizing Committee: Jamell Dacon
D&I Activities Co-Chairs
René Mellema (Umeå University, Sweden)
Martin Mundt (TU Darmstadt, Germany)