Failures & Pivots
What didn't work and why
Not every concept survived contact with users. Here are the ideas we killed, the pivots we made, and what each one taught us.
❌ Failed: Tutorial-first onboarding
What we tried: A 5-screen tutorial explaining how AI guidance worked before letting users access the camera.
Why it failed: Completion rate was 31% in pilot testing. Assistants were under appointment pressure and skipped the tutorial entirely. Patients at home lost patience and abandoned the flow. And the guidance we were explaining made little sense until users actually saw it in the camera.
What we learned: Teach through use, not upfront. We moved all guidance explanations into contextual tooltips that appeared the first time a user encountered each feature. Completion rate jumped to 89%.
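The mechanics behind this are small. As a rough sketch (the type name, feature identifiers, and UserDefaults storage are illustrative, not our production code), each tooltip is gated by a per-feature first-seen flag:

```swift
import Foundation

/// Shows a feature's guidance tooltip only the first time that feature is encountered.
/// Illustrative sketch: feature identifiers and UserDefaults storage are assumptions.
struct FirstEncounterTips {
    private let defaults = UserDefaults.standard

    /// Returns true exactly once per feature, then remembers that the tip was shown.
    func shouldShowTip(for feature: String) -> Bool {
        let key = "tip.shown.\(feature)"
        guard !defaults.bool(forKey: key) else { return false }
        defaults.set(true, forKey: key)
        return true
    }
}
```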
❌ Failed: Auto-capture on quality threshold
What we tried: Automatic shutter trigger when every quality heuristic passed its threshold, making capture hands-free.
Why it failed: Users felt a complete loss of control. The camera would fire at "wrong" moments from their perspective, even though quality scores were good. One assistant said it felt like "the app is taking photos OF me, not FOR me." Trust eroded fast.
What we learned: Users needed to feel in control of the capture moment, even when the AI was doing the heavy lifting behind the scenes. We kept manual shutter control and moved the AI into a supportive role: guiding toward good positioning, then letting users decide when to capture. Satisfaction scores improved by 28 points.
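In code terms, the change was to make the quality check produce a readiness signal that drives the guidance UI, while the shutter stays bound to the user's tap. A simplified sketch, with illustrative metric names and thresholds rather than our actual heuristics:

```swift
/// Per-frame quality metrics from the on-device analysis (illustrative names).
struct FrameQuality {
    let distanceScore: Double   // 0...1, where 1 is ideal distance
    let angleScore: Double      // 0...1, where 1 is ideal angle
    let lightingScore: Double   // 0...1, where 1 is well lit
}

enum CaptureReadiness {
    case ready              // highlight the shutter button, but never fire it automatically
    case adjust(String)     // show a guidance prompt instead
}

/// The shipped behavior: the AI only advises; the user still presses the shutter.
func evaluate(_ quality: FrameQuality, threshold: Double = 0.8) -> CaptureReadiness {
    if quality.distanceScore < threshold { return .adjust("Move a little closer") }
    if quality.angleScore < threshold { return .adjust("Tilt the phone toward level") }
    if quality.lightingScore < threshold { return .adjust("Find brighter lighting") }
    return .ready   // the failed version auto-triggered capture at this point
}
```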
❌ Failed: Single comprehensive overlay
What we tried: One overlay showing distance, angle, symmetry, and lighting feedback simultaneously.
Why it failed: Information overload. Users didn't know which feedback to prioritize. In testing, they'd fix the angle, which broke the distance, which broke symmetry: they were chasing multiple moving targets with no clear hierarchy.
What we learned: Progressive disclosure wins. We broke guidance into layers: primary (distance + framing) activates first, then secondary (symmetry) once primary is met, then tertiary (lighting hints) only if needed. Users now fix issues in sequence, not all at once. Time-to-good-shot dropped 41%.
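A sketch of that layering, with illustrative names: each layer's feedback surfaces only once the previous layer is satisfied, so at most one class of prompt is on screen at a time.

```swift
/// Guidance layers in the order they unlock (illustrative sketch).
enum GuidanceLayer {
    case primary     // distance + framing
    case secondary   // symmetry
    case tertiary    // lighting hints
    case satisfied   // everything met: no prompt shown
}

struct GuidanceChecks {
    var primaryMet = false      // distance + framing within tolerance
    var secondaryMet = false    // symmetry within tolerance
    var tertiaryMet = false     // lighting acceptable
}

/// Progressive disclosure: only the first unmet layer is shown,
/// so users fix issues in sequence instead of chasing several at once.
func activeLayer(for checks: GuidanceChecks) -> GuidanceLayer {
    if !checks.primaryMet { return .primary }
    if !checks.secondaryMet { return .secondary }
    if !checks.tertiaryMet { return .tertiary }
    return .satisfied
}
```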
⚠️ Pivot: From cloud-based to on-device processing
Original plan: Send frames to the cloud for ML analysis and return guidance prompts, keeping device requirements low and model updates centralized.
Why we pivoted: Field reality hit hard. In 45% of operatories the Wi-Fi was spotty, and cloud processing meant 300–800 ms of latency, which felt laggy and broke the real-time illusion. Worse, the flow failed completely in offline scenarios.
The pivot: Migrated to Apple Core ML for on-device inference. Latency dropped below 50 ms, and offline-first became the default. The trade-offs: a larger app (+47 MB) and a higher device floor (iPhone 8 or newer), but both were worth it for reliability. This pivot shaped the entire architecture going forward.
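For a sense of what the on-device path looks like, here is a minimal Vision + Core ML sketch. It assumes a compiled model bundled as something like GuidanceNet.mlmodelc; the class name, model name, and output handling are placeholders, not our production pipeline.

```swift
import CoreML
import CoreVideo
import Vision

/// Minimal on-device inference wrapper (sketch; the model and outputs are placeholders).
final class OnDeviceGuidance {
    private let request: VNCoreMLRequest

    init(modelURL: URL) throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all   // let Core ML choose CPU, GPU, or the Neural Engine
        let mlModel = try MLModel(contentsOf: modelURL, configuration: config)
        let visionModel = try VNCoreMLModel(for: mlModel)
        request = VNCoreMLRequest(model: visionModel)
    }

    /// Runs inference on a single camera frame; nothing leaves the device.
    func analyze(_ pixelBuffer: CVPixelBuffer) throws -> [VNObservation] {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try handler.perform([request])
        return request.results ?? []
    }
}
```

Running something like this per frame from the camera pipeline, with no network round trip, is what makes a sub-50 ms guidance loop feasible.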