A machine learning model for pediatric speech sound disorder identification, built by a clinician, grounded in 54 SLP interviews, and designed for the real world.
Speech sound disorders affect approximately 8% of children, yet diagnosis remains subjective, variable, and inequitably distributed. The tools are outdated. The burden falls on clinicians. The children bear the gap.
SSD diagnosis varies across evaluators, settings, and experience levels. Acoustically grounded analysis makes the outcome consistent, regardless of who performs the evaluation.
Children in under-resourced schools and underserved communities face the longest waitlists and the most inconsistent care. Objective tools change that calculus at scale.
Research findings will be shared openly with the field. TALK is committed to contributing to, not just consuming from, the science of pediatric speech-language pathology.
"If this tool could squeeze time from 1.5 hours into 20 to 30 minutes, that would be helpful. I could spend that time somewhere else."
School-based SLP, Jordan School District (primary research interview)
Speech sound disorder identification is inherently subjective. Two trained clinicians can evaluate the same child and reach different conclusions. The GFTA-3, the dominant tool for decades, was normed on a small, homogeneous sample and requires elicitation, transcription, and analysis to be done entirely by hand.
Across 54 SLP interviews, the findings were consistent: too much time spent on scoring, no objective verification, no automated pattern detection, and no clear path from evaluation data to diagnosis that does not run entirely through the clinician's judgment and memory.
TALK is building the missing infrastructure: an acoustic ML model that classifies speech sound errors objectively, acts as a second set of ears, and integrates into the workflows SLPs already use.
TALK is developing a supervised machine learning model to identify acoustic markers associated with speech sound disorders in children. The model will be trained and validated on clinically annotated pediatric speech data, with the goal of producing a tool that integrates into existing SLP evaluation workflows.
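To make the pipeline concrete, here is a minimal sketch of what a supervised acoustic classifier looks like in general form: featurize labeled waveforms, fit a model, predict on a new sample. This is not TALK's actual model; the features (frame energy, zero-crossing rate), the nearest-centroid classifier, and the synthetic data are all illustrative stand-ins for clinically validated acoustic markers and annotated pediatric speech.

```python
# Hypothetical sketch, NOT TALK's production model: shows the general shape of
# a supervised acoustic classifier trained on labeled speech samples.
import numpy as np

def featurize(waveform: np.ndarray) -> np.ndarray:
    """Summarize a mono waveform as mean energy and zero-crossing rate.
    Real systems would use richer features (e.g., spectral measures)."""
    energy = np.mean(waveform ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(waveform))) > 0)
    return np.array([energy, zcr])

class NearestCentroid:
    """Minimal supervised classifier: one feature centroid per label."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {l: X[np.array(y) == l].mean(axis=0)
                           for l in self.labels_}
        return self

    def predict(self, X):
        return [min(self.labels_,
                    key=lambda l: np.linalg.norm(x - self.centroids_[l]))
                for x in X]

# Synthetic stand-in data: "typical" = clean low-frequency tone,
# "atypical" = noise-dominated signal. Purely illustrative labels.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
typical = [np.sin(2 * np.pi * 200 * t) + 0.05 * rng.standard_normal(8000)
           for _ in range(5)]
atypical = [rng.standard_normal(8000) for _ in range(5)]

X = np.array([featurize(w) for w in typical + atypical])
y = ["typical"] * 5 + ["atypical"] * 5
model = NearestCentroid().fit(X, y)

print(model.predict([featurize(rng.standard_normal(8000))]))  # → ['atypical']
```

The design point the sketch illustrates is the one in the paragraph above: once features and labels come from clinically annotated data, classification becomes a reproducible computation rather than a judgment that varies by evaluator.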
Access to high-quality, annotated pediatric speech corpora is critical to this work. TALK is actively seeking research data partnerships with institutions whose existing datasets can help ground the model in clinically valid, diverse speech samples.
I am a licensed, ASHA-certified speech-language pathologist with a master's degree from a Harvard-affiliated SLP program, and I have spent the last several years building the case that clinical rigor and technical ambition are not merely compatible; together, they are the only combination that actually solves this problem.
The idea for TALK came while I was practicing in Boston, sitting across from children and their families in evaluation rooms, running pen-and-paper assessments that had not meaningfully changed in decades. I knew the tools were inadequate. I also knew that the data, the computing infrastructure, and the machine learning frameworks to do something about it all existed. They had simply never been brought together with genuine clinical depth.
Since forming TALK LLC in 2023, I have conducted 54 in-depth interviews with SLPs across every practice setting to validate the problem and inform the product. I have assembled an advisory team across machine learning, software development, business strategy, and clinical practice. I also serve as an engineering program manager at a national technology company, giving me direct experience leading complex software programs from concept to production.
What I am building is not a clinical decision-support chatbot. It is a validated acoustic ML model, built with the same rigor that SLPs apply to their evaluations, because anything less would not be worth building.
TALK welcomes conversations with university researchers, lab directors, and institutions interested in supporting the development of objective SSD identification tools.
hello@talkpathways.com