Overview
Led development of a computer vision AI assistant deployed in hospital systems across the United States. The system enables hospital staff to act as "virtual sitters" — remotely monitoring multiple patients simultaneously using AI-powered video analysis to detect fall risk and trigger alerts before incidents occur. By augmenting human oversight with real-time machine perception, a single sitter can safely monitor far more patients than traditional 1:1 bedside sitting allows.
The Challenge
Patient falls are one of the most costly and preventable adverse events in hospitals — responsible for significant injury, extended stays, and liability exposure. Traditional 1:1 patient sitting is resource-intensive and difficult to scale, particularly during staffing shortages. The system needed to reliably detect fall risk across diverse patient populations, room configurations, and lighting conditions — and critically, it needed a low false-alarm rate that maintains clinician trust. Frequent false positives cause alert fatigue, which defeats the purpose entirely.
Technical Approach
- Real-time object detection to identify patient, bed, and environment objects including wheelchairs, IV poles, and bed rails
- Human pose estimation to detect high-risk positions: sitting at bed edge, standing, leaning forward, or attempting egress
- Motion sensing and temporal analysis to distinguish routine in-bed repositioning from purposeful movement patterns that precede a bed exit
- Edge AI processing for low-latency inference and compliance with healthcare data regulations — video never leaves the hospital network
- Peer-to-peer WebRTC connectivity between edge device and virtual observer console, enabling secure low-latency video with alert overlay
- Azure IoT Edge integration for fleet-wide device management, OTA updates, and telemetry aggregation across hospital systems
- Azure Event Grid for scalable, event-driven alert routing to nursing stations and mobile devices
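The pose-estimation step above can be illustrated with a minimal sketch. Assuming a pose model has already produced normalized 2D keypoints (COCO-style names) and a bed region has been localized by the object detector, a coarse heuristic can bucket the patient's posture; the keypoint names, thresholds, and `Keypoint` type here are illustrative, not the production model's interface.

```python
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float      # normalized [0, 1] image coordinates
    y: float
    conf: float   # detector confidence

def classify_posture(keypoints: dict, bed_box: tuple) -> str:
    """Coarse fall-risk posture from 2D pose keypoints.

    keypoints: name -> Keypoint for a COCO-style skeleton
    bed_box:   (x1, y1, x2, y2) bed region, same normalized frame
    """
    x1, y1, x2, y2 = bed_box
    hip = keypoints["left_hip"]
    shoulder = keypoints["left_shoulder"]

    # Hip outside the bed region: patient is standing or has egressed.
    if not (x1 <= hip.x <= x2 and y1 <= hip.y <= y2):
        return "out_of_bed"

    # Torso more vertical than horizontal implies an upright posture.
    upright = abs(shoulder.y - hip.y) > abs(shoulder.x - hip.x)

    # Hip within 10% of the bed width from either side rail.
    near_edge = min(hip.x - x1, x2 - hip.x) < 0.1 * (x2 - x1)

    if near_edge and upright:
        return "edge_sitting"   # high risk: sitting at bed edge
    if upright:
        return "sitting_up"
    return "lying"
```

In the deployed system this kind of classification feeds the temporal-analysis stage rather than alerting directly, since a single-frame posture estimate is too noisy to act on.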
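The false-alarm concern from the temporal-analysis bullet can be sketched as a per-frame debouncer: an alert fires only when a high-risk posture persists across a sliding window, and repeat alerts are suppressed during a cooldown. The window sizes and the set of high-risk labels below are illustrative placeholders, not the tuned production values.

```python
from collections import deque

class AlertDebouncer:
    """Fire an alert only when a high-risk posture persists.

    Requires `min_hits` high-risk frames within a sliding window of
    `window` frames, then suppresses repeats for `cooldown` frames.
    Thresholds here are illustrative, not production-tuned.
    """

    HIGH_RISK = {"edge_sitting", "out_of_bed"}

    def __init__(self, window=10, min_hits=7, cooldown=150):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits
        self.cooldown = cooldown
        self.frames_since_alert = cooldown  # allow the first alert immediately

    def update(self, posture: str) -> bool:
        """Ingest one frame's posture label; return True to raise an alert."""
        self.history.append(posture in self.HIGH_RISK)
        self.frames_since_alert += 1
        if (sum(self.history) >= self.min_hits
                and self.frames_since_alert >= self.cooldown):
            self.frames_since_alert = 0
            return True
        return False
```

Requiring sustained evidence before alerting trades a small amount of latency for a large reduction in spurious alerts, which is the trade-off that keeps clinician trust intact.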
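For the Event Grid routing step, an edge device shapes each alert as a small event in the Event Grid event schema before publishing it. The sketch below builds only the payload; the subject path and event-type naming are hypothetical examples, and the actual publish would go through the Azure SDK's `EventGridPublisherClient`. Note that the payload carries only a room identifier and risk label, consistent with video never leaving the hospital network.

```python
import uuid
from datetime import datetime, timezone

def build_alert_event(unit: str, room: str, risk: str) -> dict:
    """Shape a fall-risk alert in the Event Grid event schema.

    Subject path and event type names are illustrative. No video or
    PHI is included -- only the room identifier and the risk label.
    """
    return {
        "id": str(uuid.uuid4()),
        "subject": f"/units/{unit}/rooms/{room}",
        "eventType": f"PatientMonitor.FallRisk.{risk}",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "dataVersion": "1.0",
        "data": {"unit": unit, "room": room, "risk": risk},
    }
```

Event Grid subscriptions can then filter on `subject` prefixes (per unit) or `eventType` suffixes (per risk level) to fan alerts out to the right nursing stations and mobile devices without the edge device knowing the recipients.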