
Healthcare

AI Dental Screening from Intraoral Images

We built the perception and triage foundation for a dental screening product. The system turns intraoral images and questionnaire data into structured screening signals that can improve as more data arrives.

Computer Vision · Dental Triage · Multimodal AI · ML Pipeline

Industry

Healthcare (DeepTech / Dental Screening)

Client

A DeepTech dental screening startup under NDA, in partnership with a strategic AI partner

Engagement

Two-MVP build of perception and triage models with ongoing accuracy improvement

Outcome

MVP 1.0 delivered, with the perception model reaching 60% accuracy on the initial training set and a clear path to improve as data scales

The Challenge

The startup needed to turn smartphone photos of patients' teeth into clinically useful triage information. That meant solving both perception and decision-making: identifying tooth numbers and conditions from images, then combining those findings with structured questionnaire data to classify treatment urgency.

Both layers had to be reliable enough for clinicians to trust and structured enough to become part of a product, not just a demo.
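The two-layer split described above can be sketched in a few lines. This is a minimal illustration, not the client's model: the names (`ToothFinding`, `classify_urgency`), the condition labels, and the thresholds are all hypothetical, and the real triage logic is a trained decision tree rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class ToothFinding:
    """Hypothetical perception output: one detected condition on one tooth."""
    tooth: int          # tooth number in FDI two-digit notation (e.g. 36)
    condition: str      # e.g. "caries", "fracture", "calculus"
    confidence: float   # detector confidence in [0, 1]

# Conditions treated as potentially urgent in this illustrative rule set.
SEVERE = {"fracture", "abscess", "deep_caries"}

def classify_urgency(findings, questionnaire):
    """Toy decision-tree triage: combine image findings with
    structured questionnaire answers into an urgency label."""
    confident = [f for f in findings if f.confidence >= 0.5]
    has_severe = any(f.condition in SEVERE for f in confident)
    in_pain = questionnaire.get("pain_level", 0) >= 7
    if has_severe and in_pain:
        return "urgent"
    if has_severe or in_pain:
        return "soon"
    if confident:
        return "routine"
    return "monitor"
```

The point of the structure, rather than the specific rules, is that perception emits typed findings and triage consumes them alongside questionnaire data, so either side can be retrained or revised without touching the other.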

What We Built

  • Led the technical architecture and implementation of the perception and triage stack in close collaboration with the client's product team and strategic AI partner.
  • Built a computer vision pipeline using YOLOv7 and RF-DETR on annotated intraoral imagery to identify tooth numbers and detect dental conditions.
  • Created a decision-tree-based triage classifier that combines image findings with structured questionnaire responses.
  • Set up a Roboflow-driven annotation workflow and reproducible training pipeline so improvements can move cleanly from data to deployed API.
  • Exposed perception and triage as separate FastAPI services for the client's mobile and web products.
  • Started active work on a dental vision-language model extension to go beyond bounded condition detection.
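One piece of the reproducible-pipeline idea above can be sketched with standard-library Python: tagging every training run with a deterministic fingerprint of its inputs, so a deployed model traces back to the exact dataset snapshot and hyperparameters that produced it. The function name, field names, and values here are hypothetical; the actual pipeline's Roboflow export and training steps are not shown.

```python
import hashlib
import json

def run_fingerprint(dataset_version, hyperparams):
    """Derive a short deterministic tag for a training run from its
    inputs, so a deployed model artifact can always be traced back
    to the dataset snapshot and hyperparameters that produced it."""
    payload = json.dumps(
        {"dataset": dataset_version, "hyperparams": hyperparams},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Same inputs always yield the same tag, regardless of key order;
# any change to the dataset or hyperparameters yields a new tag.
tag_a = run_fingerprint("intraoral-v3", {"model": "rf-detr", "epochs": 50})
tag_b = run_fingerprint("intraoral-v3", {"epochs": 50, "model": "rf-detr"})
assert tag_a == tag_b
```

Tags like this can be stamped onto model files and API responses, which is what lets improvements "move cleanly from data to deployed API" without ambiguity about which model is serving.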

What Changed

  • MVP 1.0 shipped on schedule and integrated into the client's product flow.
  • The perception model reached 60% accuracy on the initial 1,500-image training set with a clear data-driven improvement path.
  • Perception and triage were separated cleanly, so each layer can improve independently.
  • MVP 2.0 is in active development with the first milestone already delivered.

Stack

YOLOv7 · RF-DETR · Roboflow · Python FastAPI · Paperspace · Custom VLM training pipeline

Next Step

Building a vision-first product that needs production-grade ML?

We can map the workflow, tell you whether this pattern fits your operation, and outline what a first delivery slice would look like.

Discuss a similar build