ML+Coffee: Optical Character Recognition & Conversational Surveys
ML+X Networking & Coworking
Event Details
Date
Wednesday, May 6, 2026
Time
9-11 a.m.
Location
1145 Discovery Building
Description
ML+Coffee offers a supportive and casual environment to discuss ongoing ML/AI projects and share knowledge & tools across campus. Whether you're looking for advice on applying ML/AI to your data, hoping to demo a favorite tool, or interested in discussing a paper, ML+Coffee offers the perfect space. The majority of our attendees are applied practitioners from diverse fields, not AI/ML purists looking to critique. Coffee provided ☕ to keep the ideas flowing, courtesy of our sponsors.
Where: Room 1145, Discovery Building (330 N. Orchard St.).
When: Monthly on Wednesdays, 9-11 a.m. CT. Spring dates: 2/18, 3/11, 4/8, and 5/6.
Register: https://forms.gle/TL4Gnp8oHBBEwBxg9
May 6 Schedule
- 9-9:30 (Intros & Resource Sharing): The first 30 minutes typically focus on introductions and casual share-outs about new ML/AI tools or resources folks are using.
- 9:30-10:00 (PrismScope: Conversational Surveys — Dr. Christopher Harrison): A demo of PrismScope, an AI-powered conversational survey platform designed to close the "feedback gap" between traditional surveys and human interviews. Instead of static questionnaires, PrismScope conducts conversations—asking follow-up questions, probing deeper when something important emerges, and synthesizing themes and insights automatically. The goal: deliver qualitative depth ("why") at the speed and scale of surveys ("what"). Learn more: https://prismscope.ai
- 10:00-10:30 (Optical Character Recognition with RunAI — Chris Endemann): A demo of recent optical character recognition (OCR) applications running on a pilot RunAI-scheduled local GPU cluster, highlighting how shared model endpoints can support multiple labs and research groups. I'll cover practical use cases (e.g., digitizing documents, extracting structured text, working with messy scans), briefly discuss traditional approaches (e.g., Tesseract), and demo a hosted Qwen-VL endpoint for multimodal text extraction. I'll also discuss how researchers can plug into shared local GPU infrastructure rather than building from scratch or relying solely on external APIs. Looking for researchers with OCR needs who want to try this on their own data or help shape shared infrastructure!
- 10:30-11 (TBD): Have a demo, paper, or ongoing project to discuss? Join the discussion queue! No formal presentation is required—this event prioritizes open dialogue and casual discussion over polished talks. If helpful, you're welcome to bring a couple of slides (e.g., to share data, methods, or results). Many participants just bring a few key points or a rough overview of their work, and the conversation flows naturally from there.
Cost
Free
Contact