Pl(ai) Lab Home Health Questionnaire

A multimodal, GPT-4-powered system enabling independent health form completion for older adults

Goal:
The goal of the Home Health Questionnaire project was to design and build an accessible, voice-interactive health survey that helps older adults complete digital medical forms independently. The system uses speech and touch input powered by GPT-4, allowing participants to answer standardized health assessment questions naturally by voice or by touch. The project builds directly on Digital Forms for All, a collaboration between Stanford University and Memorial Sloan Kettering Cancer Center focused on multimodal, LLM-driven interfaces for healthcare access.

Challenges:
One major challenge was transforming a traditional written clinical assessment into a conversational format without losing structure, reliability, or medical meaning. I worked closely with Dr. Andrea Cuadra, Kristen Fessele, and David Ihim to translate survey items into clear spoken language, design consistent state feedback for listening and response modes, and refine a voice-first interaction model that stayed simple for older adults and visually impaired users.
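The state feedback described above can be thought of as a small interaction state machine: the interface is always in exactly one state, and each state maps to one consistent cue the user can see or hear. The sketch below is illustrative only; the state names, colors, and messages are hypothetical and not the project's actual implementation.

```python
from enum import Enum, auto

class SurveyState(Enum):
    """Interaction states surfaced to the user as consistent feedback cues."""
    IDLE = auto()       # waiting for the next question to be read aloud
    LISTENING = auto()  # microphone open, awaiting a spoken answer
    RESPONDING = auto() # answer captured, confirmation being spoken back

# Hypothetical mapping from state to an on-screen color and spoken/visible message.
STATE_FEEDBACK = {
    SurveyState.IDLE: ("gray", "Tap the screen or say 'ready' to begin"),
    SurveyState.LISTENING: ("green", "Listening..."),
    SurveyState.RESPONDING: ("blue", "Got it, confirming your answer"),
}

def feedback_for(state: SurveyState) -> str:
    """Render the cue for the current state, e.g. for a status banner."""
    color, message = STATE_FEEDBACK[state]
    return f"[{color}] {message}"
```

Keeping every cue behind one state-to-feedback table is what makes the feedback *consistent*: a low-vision user always hears or sees the same signal for "the system is listening," regardless of which question is active.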

A second challenge was making the system practical for clinical deployment. We had to debug multimodal transitions between voice and touch, resolve speech-to-text inconsistencies, design a layout that preserved clarity for low-vision users, and ensure everything could integrate with HIPAA-compliant infrastructure. Preparing the system for use inside Memorial Sloan Kettering's internal testing environment required careful engineering and coordination.

We also needed to prepare the tool for broader adoption. This meant running user sessions to identify accessibility issues, building multilingual support, and simplifying the configuration so hospitals and clinics worldwide could localize the software for their own needs.
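A configuration simple enough for clinics to localize on their own typically means keeping translated question text and prompts in plain data rather than code. The structure below is a hypothetical sketch of that approach; the keys, locale codes, and sample question are illustrative, not the project's actual schema.

```python
# Hypothetical per-site configuration: a clinic edits this data file to
# localize questions and prompts without touching application code.
SITE_CONFIG = {
    "locale": "es-US",
    "questions": {
        "ga_falls": {
            "en": "Have you fallen in the last six months?",
            "es": "¿Se ha caído en los últimos seis meses?",
        },
    },
    "prompts": {
        "listening": {"en": "Listening...", "es": "Escuchando..."},
    },
}

def question_text(qid: str, locale: str, config: dict = SITE_CONFIG) -> str:
    """Look up a question in the requested language, falling back to English."""
    lang = locale.split("-")[0]
    entry = config["questions"][qid]
    return entry.get(lang, entry["en"])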

Outcome:
The final system is now being piloted at Memorial Sloan Kettering Cancer Center, where patients can complete the Geriatric Assessment through natural conversation rather than paper or digital forms. This reduces reliance on caregivers and significantly improves accessibility for older adults and people with visual, cognitive, or physical limitations.

The project produced a fully interactive, voice-enabled health assessment platform with refined UI and UX design, a robust multimodal interface, multilingual support, and an open-source-ready configuration. It demonstrates how conversational AI can make essential medical processes more independent, more inclusive, and more humane for the people who need them most.