publications
publications in reverse chronological order. generated by jekyll-scholar.
2025
- A Closer Look at the Existing Risks of Generative AI: Mapping the Who, What, and How of Real-World Incidents. Megan Li, Wendy Bickersteth, Ningjing Tang, and 4 more authors. Forthcoming in the 2025 Conference on AI, Ethics, and Society, Madrid, Spain, 2025.
Due to its general-purpose nature, Generative AI is applied in an ever-growing set of domains and tasks, leading to an expanding set of risks of harm impacting people, communities, society, and the environment. These risks may arise due to failures during the design and development of the technology, as well as during its release, deployment, or downstream usages and appropriations of its outputs. In this paper, building on prior taxonomies of AI risks, harms, and failures, we construct a taxonomy specifically for Generative AI failures and map them to the harms they precipitate. Through a systematic analysis of 499 publicly reported incidents, we describe what harms are reported, how they arose, and who they impact. We report the prevalence of each type of harm, underlying failure mode, and harmed stakeholder, as well as their common co-occurrences. We find that most reported incidents are caused by use-related issues but bring harm to parties beyond the end user(s) of the Generative AI system at fault, and that the landscape of Generative AI harms is distinct from that of traditional AI. Our work offers actionable insights to policymakers, developers, and Generative AI users. In particular, we call for the prioritization of non-technical risk and harm mitigation strategies, including public disclosures, education, and careful regulatory stances.
- Navigating Uncertainties: Understanding How GenAI Developers Document Their Models on Open-Source Platforms. Ningjing Tang, Megan Li, Amy Winecoff, and 3 more authors. 2025.
Model documentation plays a crucial role in promoting transparency and responsible development of AI systems. With the rise of Generative AI (GenAI), open-source platforms have increasingly become hubs for hosting and distributing these models, prompting platforms like Hugging Face to develop dedicated model documentation guidelines that align with responsible AI principles. Despite these growing efforts, there remains a lack of understanding of how developers document their GenAI models on open-source platforms. Through interviews with 13 GenAI developers active on open-source platforms, we provide empirical insights into their documentation practices and challenges. Our analysis reveals that despite existing resources, developers of GenAI models still face multiple layers of uncertainty in their model documentation: (1) uncertainty about what specific content should be included; (2) uncertainty about how to effectively report key components of their models; and (3) uncertainty in deciding who should take responsibility for various aspects of model documentation. Based on our findings, we discuss the implications for policymakers, open-source platforms, and the research community to support meaningful, effective, and actionable model documentation in the GenAI era, including cultivating better community norms, building robust evaluation infrastructures, and clarifying roles and responsibilities.
2023
- A US-UK Usability Evaluation of Consent Management Platform Cookie Consent Interface Design on Desktop and Mobile. Elijah Robert Bouma-Sims, Megan Li, Yanzi Lin, and 6 more authors. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 2023.
Websites implement cookie consent interfaces to obtain users’ permission to use non-essential cookies, as required by privacy regulations. We extend prior research evaluating the impact of interface design on cookie consent through an online behavioral experiment (n = 1359) in which we prompted mobile and desktop users from the UK and US to make cookie consent decisions using one of 14 interfaces implemented with the OneTrust consent management platform (CMP). We found significant effects on user behavior and sentiment for multiple explanatory variables, including more negative sentiment towards the consent process among UK participants and lower comprehension of interface information among mobile users. The design factor that had the largest effect on user behavior was the initial set of options displayed in the cookie banner. In addition to providing more evidence of the inadequacy of current cookie consent processes, our results have implications for website operators and CMPs.
2022
- “Okay, Whatever”: An Evaluation of Cookie Consent Interfaces. Hana Habib, Megan Li, Ellie Young, and 1 more author. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 2022.
Many websites have added cookie consent interfaces to meet regulatory consent requirements. While prior work has demonstrated that they often use dark patterns — design techniques that lead users to less privacy-protective options — other usability aspects of these interfaces have been less explored. This study contributes a comprehensive, two-stage usability assessment of cookie consent interfaces. We first inspected 191 consent interfaces against five dark pattern heuristics and identified design choices that may impact usability. We then conducted a 1,109-participant online between-subjects experiment exploring the usability impact of seven design parameters. Participants were exposed to one of 12 consent interface variants during a shopping task on a prototype e-commerce website and answered a survey about their experience. Our findings suggest that a fully-blocking consent interface with in-line cookie options accompanied by a persistent button enabling users to later change their consent decision best meets several design objectives.