Balancing Competing Metrics in Applied AI
Data Science Research Bazaar: Focused Discussion
Event Details
Presenter: Chris Endemann, DoIT Research Cyberinfrastructure
As AI systems become more embedded in research workflows, they are evaluated against an expanding set of metrics: accuracy, speed, cost, energy use, explainability, adoption, and more. These metrics often compete with one another, and priorities are shaped by practical constraints around resources, scale, and time-to-results. This session focuses on how researchers choose and balance metrics when using AI in applied settings, examining how these choices influence trust, deployment decisions, and the feasibility of sustaining AI workflows over time. Rather than proposing a single evaluation framework, the discussion aims to surface the real constraints, tradeoffs, and open questions researchers face when deciding what “good enough” looks like in practice.
We value inclusion and access for all participants and are pleased to provide reasonable accommodations for this event. Please email facilitator@datascience.wisc.edu to make a disability-related accommodation request. Requests should be made by Thursday, April 2, 2026, though reasonable effort will be made to support late accommodation requests.