We invite you to participate in a semester of engaging and thought-provoking programming as we explore diversity, equity, inclusion, and access within the fields of generative artificial intelligence and big data. This series will examine how these powerful technologies impact and shape society. Do they help reduce biases and prejudice, or do they open new pathways for reinforcing — and even expanding — them? Who is being excoded through algorithmic bias and digital discrimination, effectively rendering some groups invisible in data-driven processes, while other groups may be specifically targeted? How can we harness these technologies to forge the future we aspire to achieve? Together with experts and peers from a variety of disciplines, both at Lafayette and beyond, we’ll delve into these questions and the ethical considerations, challenges, and opportunities that AI and big data present. Join us for this critical conversation to help shape and support technologies that drive justice and build an equitable future for all.

Inclusive STEM Reading Group

As part of this semester’s programming, the Inclusive STEM Reading Group will be reading Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Dr. Joy Buolamwini, who will be visiting us in February. Please let us know you are interested in participating by completing this form by 5:00 p.m. on Friday, January 31.


JANUARY

Wednesday, January 29, 4:15: Kirby 104

Algorithms and Social Justice: A Collaborative Approach to Teaching About Equity in Tech

Dr. Larry Snyder (Lehigh University)

Dr. Suzanne Edwards (Lehigh University)

How can we ensure that technology advances equity rather than perpetuating systemic inequities? Lehigh University professors Dr. Suzanne Edwards (English/Women’s, Gender, and Sexuality Studies) and Dr. Larry Snyder (Industrial and Systems Engineering) explore this pressing question and more in Algorithms and Social Justice, an innovative interdisciplinary course they team-teach. In this presentation, Dr. Edwards, an expert in feminist/queer theory, and Dr. Snyder, a specialist in data and systems engineering, will share segments from their course, which bridges coding with critical discussions of race, gender, and equity in technology. Participants will gain valuable insights into the impact of algorithms and generative artificial intelligence on diversity, equity, and inclusion, especially their capacity to perpetuate systemic inequities or foster transformative progress. Professors Edwards and Snyder will also discuss the challenges and rewards of teaching across disciplines, the collaborative process of developing the course, and its impact on students as they tackle the ethical complexities of technology. Faculty, staff, and students will leave with new ideas from this compelling example of interdisciplinary teaching, as well as a strong foundation for engaging with the Hanson Center’s programming this semester around the theme Can AI Generate IDEAs? Navigating the Promises and Perils of Artificial Intelligence for Inclusion, Diversity, Equity, and Access.

FEBRUARY

Wednesday, February 5, 4:15: Gendebien Room (206), Skillman Library

Optimizing Surveillance: A Story about Race & Technology

Dr. Jenn Rossmann (Mechanical Engineering, Lafayette College)

Facial recognition and AI, like many technologies, express ethno-racial and gender politics. Whose needs are served, and who is made visible, and in what ways, by these tools? How can we understand how technology, race, and gender shape one another, make responsible decisions about how we use and regulate tech, and ethically co-design a more just future? Dr. Jenn Rossmann, of Lafayette’s Mechanical Engineering department, will explore these questions in her lecture.


Wednesday, February 19, 4:15: TBD

Unmasking AI: My Mission to Protect What Is Human in a World of Machines

Dr. Joy Buolamwini (MIT and the Algorithmic Justice League)

Dr. Joy Buolamwini is the founder of the Algorithmic Justice League, a groundbreaking MIT researcher, a model, and an artist. She is the author of the national bestseller Unmasking AI: My Mission to Protect What Is Human in a World of Machines and advises world leaders on preventing AI harm. Her research on facial recognition technologies transformed the field of AI auditing, and her TED talk on algorithmic bias has been viewed over 1.7 million times. Her TED AI talk on protecting human rights in an age of AI pushes the boundaries of the format. Dr. Buolamwini will join us for a moderated conversation and Q&A. Her visit is co-sponsored by the Engineering Division as part of National Engineers Week.

MARCH

Monday, March 10, 4:15: Simon 109

Promise and Pitfalls of Scale: Working with Machine Learning Models of Text

Dr. Sofia Serrano (Computer Science, Lafayette College)

Scale is crucial to contemporary large language models. What implications does that scale have for viable uses of these language technologies? Knowing that these models have biases, how can we leverage their scale more safely to accomplish nontrivial tasks? In this talk, Dr. Serrano will draw on a past project with qualitative researchers studying power and historical political violence to illustrate how to work through these questions together. She’ll walk through how the team’s initial approach evolved to address the limitations of the language technologies available at the time, how those models’ technical biases informed their use on the project, and what the team was ultimately able to accomplish with them.

Thursday, March 27, 4:15: Kirby 104

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want

Dr. Alex Hanna (Director of Research, Distributed AI Research Institute)

Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything? To these questions, we respond: “no,” “they wish,” “LOL,” and “definitely not.” This kind of thinking is a symptom of a phenomenon known as “AI hype.” Hype looks and smells fishy: It twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines.

In this talk, Dr. Alex Hanna discusses her book The AI Con (coauthored with Dr. Emily M. Bender), which offers a sharp, witty, and wide-ranging takedown of AI hype across its many forms. She’ll show you how to spot AI hype, how to deconstruct it, and how to expose the power grabs it aims to hide. Armed with these tools, you will be prepared to push back against AI hype at work, as a consumer in the marketplace, as a skeptical newsreader, and as a citizen holding policymakers to account. Together, we expose AI hype for what it is: a mask for Big Tech’s drive for profit, with little concern for whom it affects.

APRIL

Thursday, April 3, 4:15: Simon 109

Are Everyday Writers Aware of AI’s Potential Harms?

Dr. Timothy Laquintano (Dean of Arts, Humanities and Interdisciplinary Programs, Lafayette College)

As academic researchers sort through Silicon Valley’s AI hype and continue their inquiry into the evolution of large language models, millions of “everyday writers” have quietly adopted chatbots to facilitate their writing. The term everyday writers refers to people who write for substantial portions of their day but do not tend to self-identify as “writers.” This talk draws on an interview study of how these writers have been integrating LLMs into their work. It specifically attends to whether ordinary users of LLMs are aware of, and shift their usage in response to, the categories of AI harm emerging in the academic literature, such as bias, environmental concerns, linguistic homogenization, and data privacy.