# Driver DSMS/ADAS Real-Time Validator

A Streamlit-based application for real-time Driver State Monitoring System (DSMS) and Advanced Driver Assistance System (ADAS) validation using computer vision and deep learning.

## 📋 Project Status

**Current Status**: ⚠️ **Requires Critical Fixes Before Use**

- **Dependencies**: 2/11 installed (18%)
- **Code Quality**: Multiple critical bugs identified
- **Performance**: Not optimized for low-spec CPUs
- **Functionality**: Non-functional (will crash on execution)

## 🚀 Quick Start

### 1. Check Current Status

```bash
python3 check_dependencies.py
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

**Note**: This will download ~2GB and requires ~5GB of disk space.

### 3. Configure

Edit `track_drive.py` and set your Roboflow API key:

```python
'roboflow_api_key': 'YOUR_ACTUAL_KEY_HERE',
```

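To keep the key out of version control, you can read it from an environment variable instead. A minimal sketch, assuming the key sits in a plain config dict inside `track_drive.py` (the `config` name here is hypothetical):

```python
import os

# Hypothetical config dict; adapt to however track_drive.py stores settings.
config = {
    'roboflow_api_key': os.environ.get('ROBOFLOW_API_KEY', 'YOUR_ACTUAL_KEY_HERE'),
}
```

Then export the variable before launching: `export ROBOFLOW_API_KEY=...`.
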
### 4. ⚠️ **DO NOT RUN YET**

The code has critical bugs that must be fixed first. See [ASSESSMENT_REPORT.md](ASSESSMENT_REPORT.md) for details.

## 📚 Documentation

- **[ASSESSMENT_REPORT.md](ASSESSMENT_REPORT.md)** - Comprehensive evaluation, issues, and improvement plan
- **[QUICK_START.md](QUICK_START.md)** - Installation and setup guide
- **[requirements.txt](requirements.txt)** - Python dependencies

## 🔍 What This Project Does

### DSMS (Driver State Monitoring)

- Drowsiness detection (PERCLOS; see the sketch after this list)
- Distraction detection (phone use, looking away)
- Smoking detection
- Seatbelt detection
- Driver absence detection

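PERCLOS (PERcentage of eyelid CLOSure) is conventionally the fraction of frames in a rolling window where the eye is closed. A minimal sketch of that computation, assuming an eye-aspect-ratio value is already extracted per frame from the MediaPipe face mesh (the threshold and window size are illustrative, not the project's tuned values):

```python
from collections import deque

class PerclosTracker:
    """Rolling-window PERCLOS: fraction of recent frames with eyes closed."""

    def __init__(self, window_frames=150, closed_threshold=0.2):
        self.closed_threshold = closed_threshold    # EAR cutoff (assumed)
        self.history = deque(maxlen=window_frames)  # ~10 s of frames at 15 FPS

    def update(self, eye_aspect_ratio):
        """Record one frame and return the current PERCLOS in [0, 1]."""
        self.history.append(eye_aspect_ratio < self.closed_threshold)
        return sum(self.history) / len(self.history)
```

A drowsiness alert would then fire when the returned value stays above some threshold (often around 0.3-0.4) rather than on any single frame.
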
### ADAS (Advanced Driver Assistance)

- Forward Collision Warning (FCW)
- Lane Departure Warning (LDW)
- Pedestrian detection
- Tailgating detection (see the sketch after this list)
- Hard braking/acceleration detection
- Overspeed detection

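As an illustration of how monocular tailgating detection can work, a common heuristic estimates the lead vehicle's range from its bounding-box width via the pinhole-camera model, then converts range to time headway. A minimal sketch under assumed constants (the focal length, vehicle width, and 2-second rule are illustrative, not the project's calibration):

```python
def estimate_distance_m(box_width_px, focal_px=1000.0, vehicle_width_m=1.8):
    """Pinhole-camera range estimate from a detected vehicle's box width."""
    if box_width_px <= 0:
        return float("inf")
    return focal_px * vehicle_width_m / box_width_px

def is_tailgating(distance_m, ego_speed_mps, min_headway_s=2.0):
    """Flag when time headway to the lead vehicle drops below ~2 seconds."""
    if ego_speed_mps <= 0:
        return False
    return distance_m / ego_speed_mps < min_headway_s
```
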
## 🛠️ Technology Stack

- **Streamlit**: Web UI framework
- **YOLOv8n**: Object detection (vehicles, pedestrians, phones)
- **MediaPipe**: Face mesh analysis for PERCLOS
- **OpenCV**: Image processing and optical flow
- **Roboflow**: Seatbelt detection API
- **VideoMAE**: Action recognition (⚠️ too heavy, needs replacement)
- **scikit-learn**: Anomaly detection

## ⚠️ Known Issues

### Critical Bugs (Must Fix)

1. **Optical Flow API Error**: `calcOpticalFlowPyrLK` is used incorrectly and will crash (see the sketch after this list)
2. **VideoMAE JIT Scripting**: Will fail; Hugging Face transformers models can't be JIT scripted
3. **YOLO ONNX Parsing**: Assumes an incorrect output format
4. **ONNX Export**: Runs on every load instead of conditionally

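For reference, OpenCV's sparse Lucas-Kanade tracker takes two grayscale frames plus an array of previous points, and returns the tracked points with per-point status flags. A minimal sketch of correct usage (parameter values are illustrative, not the project's tuned settings):

```python
import cv2

def track_points(prev_gray, curr_gray, prev_pts):
    """Correct calcOpticalFlowPyrLK call: returns surviving point pairs."""
    # prev_pts must be float32 with shape (N, 1, 2), e.g. from goodFeaturesToTrack
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
    )
    good = status.reshape(-1) == 1  # keep only points tracked successfully
    return prev_pts[good], next_pts[good]

# Seed points once per scene, then track frame to frame:
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
#                                    qualityLevel=0.01, minDistance=7)
```
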
### Performance Issues

1. **VideoMAE Too Heavy**: Not suitable for low-spec CPUs
2. **All Models Load at Startup**: Slow initialization (see the sketch after this list)
3. **No Model Quantization**: VideoMAE runs in FP32
4. **Untrained Isolation Forest**: Produces effectively random predictions

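One way to address item 2 is Streamlit's resource cache, which defers each model load to first use and reuses the object across reruns. A minimal sketch, assuming a recent Streamlit version (`st.cache_resource`) and the Ultralytics API:

```python
import streamlit as st

@st.cache_resource  # built once on first call, then shared across reruns
def get_yolo():
    from ultralytics import YOLO  # import deferred so startup stays cheap
    return YOLO("yolov8n.pt")

# Usage: nothing loads at startup; the model materializes on first use.
# results = get_yolo()(frame)
```
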
See [ASSESSMENT_REPORT.md](ASSESSMENT_REPORT.md) for complete analysis.

## 📊 Performance Targets

**Target Hardware**: Low-spec CPU (4 cores, 2 GHz, 8 GB RAM)

**Current (Estimated After Fixes)**:

- FPS: 5-8
- Memory: 4-6 GB
- CPU: 70-90%

**Target (After Optimizations)**:

- FPS: 12-15
- Memory: 2-3 GB
- CPU: <80%
- Accuracy: >90% precision, >85% recall

## 🗺️ Implementation Roadmap

### Phase 1: Critical Fixes (Week 1)

- Fix optical flow implementation
- Remove VideoMAE JIT scripting
- Fix YOLO ONNX parsing
- Add error handling
- Install and test dependencies

### Phase 2: Performance Optimization (Week 2)

- Replace VideoMAE with a lightweight alternative
- Implement lazy model loading
- Optimize the frame processing pipeline
- Add smart frame skipping (see the sketch after this list)

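"Smart" frame skipping here could mean running the heavy detectors only every Nth frame and reusing the last result in between, while cheap per-frame logic still runs on every frame. A minimal sketch of that pattern (the interval is illustrative and would be tuned per model):

```python
class EveryNFrames:
    """Run an expensive callable every n-th frame; reuse the last result otherwise."""

    def __init__(self, fn, n=3):
        self.fn, self.n = fn, n
        self.count, self.last = 0, None

    def __call__(self, frame):
        if self.count % self.n == 0 or self.last is None:
            self.last = self.fn(frame)  # heavy inference on every n-th frame
        self.count += 1
        return self.last                # cached result on skipped frames

# Usage (run_yolo is hypothetical): detect = EveryNFrames(run_yolo, n=3)
```
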
### Phase 3: Accuracy Improvements (Week 3)

- Train the Isolation Forest (see the sketch after this list)
- Improve the PERCLOS calculation
- Add temporal smoothing
- Fix distance estimation

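Training the Isolation Forest means fitting it offline on feature vectors collected from normal driving and loading the fitted model at runtime; an unfitted forest has nothing meaningful to score against. A minimal sketch with scikit-learn (the feature file, feature choice, and contamination rate are assumptions):

```python
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per frame/window of normal driving, e.g. [speed, accel, yaw_rate, ...]
X = np.load("normal_driving_features.npy")  # hypothetical training data

model = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
model.fit(X)
joblib.dump(model, "isolation_forest.joblib")

# At runtime: model.predict(features) returns -1 for anomaly, 1 for normal.
```
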
### Phase 4: Testing & Validation (Week 4)

- Unit tests
- Integration tests
- Performance benchmarking
- Documentation

## 🧪 Testing

After fixes are implemented:

```bash
# Run dependency check
python3 check_dependencies.py

# Run application
streamlit run track_drive.py
```

## 📝 Requirements

- Python 3.8+
- ~5GB disk space
- Webcam or video file
- Roboflow API key (free tier available)

## 🤝 Contributing

Before making changes:

1. Read [ASSESSMENT_REPORT.md](ASSESSMENT_REPORT.md)
2. Follow the implementation plan
3. Test on low-spec hardware
4. Document changes

## 📄 License

[Add your license here]

## 🙏 Acknowledgments

- Ultralytics for YOLOv8
- Google for MediaPipe
- Hugging Face for transformers
- Roboflow for model hosting

---

**Last Updated**: November 2024

**Status**: Assessment Complete - Awaiting Implementation