# Student Assessment Tests
Comprehensive test suite for the complete assessment flow in the Cognitive Prism platform.
## Overview
This test suite covers the entire assessment journey:
1. **Assessments Page** - Assessment hub and selection
2. **Domains Page** - Domain listing and navigation
3. **Domain Assessment** - Question answering and submission
4. **Domain Feedback** - Per-domain feedback collection
5. **Final Feedback** - Overall assessment feedback
6. **Complete Flow** - End-to-end assessment completion
## Test Structure
```
tests/student_assessment/
├── conftest.py # Fixtures (assessment-ready student)
├── test_01_assessments_page.py # Assessments hub tests
├── test_02_domains_page.py # Domains listing tests
├── test_03_domain_assessment.py # Question answering tests
├── test_04_domain_feedback.py # Domain feedback tests
├── test_05_final_feedback.py # Final feedback tests
└── test_06_complete_assessment_flow.py # End-to-end flow tests
```
## Fixtures
### `assessment_ready_student`
Ensures the student is ready for assessments:
- ✅ Login (with smart password handling)
- ✅ Password reset (if needed)
- ✅ Profile completion to 100% (if needed)
- ✅ Navigate to Assessments page
**Returns**: Dictionary with driver, CPID, and page objects
### `assessment_with_domains`
Extends `assessment_ready_student` and starts an assessment:
- ✅ Selects first available assessment
- ✅ Navigates to domains page
- ✅ Extracts domain IDs
**Returns**: Dictionary with domains page, assessment ID, and domain IDs
### `domain_assessment_started`
Starts a domain assessment:
- ✅ Finds first unlocked domain
- ✅ Starts domain assessment
- ✅ Dismisses guidance modal
**Returns**: Dictionary with domain assessment page and domain ID
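The `assessment_with_domains` fixture above extracts domain IDs from the page. A minimal sketch of how that extraction might look, assuming a `domain-card-<id>` `data-testid` pattern (the pattern and helper name are illustrative, not the real locators):

```python
# Hypothetical sketch: pull numeric domain IDs out of data-testid values.
# The "domain-card-<id>" pattern is an assumption for illustration.
import re

def extract_domain_ids(test_ids):
    """Return the numeric IDs found in values like 'domain-card-42'."""
    ids = []
    for tid in test_ids:
        match = re.fullmatch(r"domain-card-(\d+)", tid)
        if match:
            ids.append(int(match.group(1)))
    return ids
```

In the real fixture the input list would come from the domains page object rather than being passed in directly.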
## Test Categories
### Assessments Page Tests (`test_01_assessments_page.py`)
- Page load verification
- Assessment cards visibility
- Assessment ID extraction
- Card structure validation
- Begin assessment navigation
- Multiple assessments handling
### Domains Page Tests (`test_02_domains_page.py`)
- Page load verification
- Domain cards visibility
- Domain ID extraction
- Lock/unlock status checking
- Action button validation
- Overall progress tracking
- Start domain navigation
- Back button navigation
### Domain Assessment Tests (`test_03_domain_assessment.py`)
- Page load verification
- Progress tracking
- Timer display
- Question detection
- Question type detection
- Navigation buttons
- Answer submission (all types):
  - Multiple choice
  - True/False
  - Rating
  - Open-ended
  - Matrix
- Submit modal flow
### Domain Feedback Tests (`test_04_domain_feedback.py`)
- Modal detection
- Modal structure validation
- Question 1 (Yes/No + Reason)
- Question 2 (Textarea)
- Submit feedback
- Skip feedback
- Navigation after feedback
### Final Feedback Tests (`test_05_final_feedback.py`)
- Overall feedback modal
- Per-question feedback modal
- Rating selection
- Comment entry
- Feedback submission
### Complete Flow Tests (`test_06_complete_assessment_flow.py`)
- Single domain completion (questions + feedback)
- Full assessment completion (all domains + final feedback)
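The full-assessment test loops the per-domain steps and finishes with final feedback. A sketch of that control flow, with all helper names invented for illustration (the real test drives page objects instead):

```python
# Illustrative outline of the end-to-end flow: answer every domain,
# collect per-domain feedback, then submit final feedback.
# All callables here are hypothetical stand-ins for page-object methods.
def complete_assessment(domain_ids, answer_domain,
                        give_domain_feedback, give_final_feedback):
    """Complete each domain in order, then the overall feedback step."""
    results = {}
    for domain_id in domain_ids:
        results[domain_id] = answer_domain(domain_id)
        give_domain_feedback(domain_id)
    give_final_feedback()
    return results
```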
## Running Tests
### Run All Assessment Tests
```bash
pytest tests/student_assessment/ -v
```
### Run Specific Test Category
```bash
# Assessments page only
pytest tests/student_assessment/test_01_assessments_page.py -v
# Domains page only
pytest tests/student_assessment/test_02_domains_page.py -v
# Domain assessment only
pytest tests/student_assessment/test_03_domain_assessment.py -v
# Domain feedback only
pytest tests/student_assessment/test_04_domain_feedback.py -v
# Final feedback only
pytest tests/student_assessment/test_05_final_feedback.py -v
# Complete flow only
pytest tests/student_assessment/test_06_complete_assessment_flow.py -v
```
### Run with Markers
```bash
# All assessment tests
pytest -m assessment -v
# End-to-end tests only
pytest -m e2e -v
# Complete flow tests
pytest -m complete_flow -v
# Slow tests (complete flow)
pytest -m slow -v
```
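For the marker commands above to run without `PytestUnknownMarkWarning`, the custom markers must be registered. A sketch of the registration, assuming a `pytest.ini` at the repo root (it may equally live in `pyproject.toml` or `setup.cfg`):

```ini
[pytest]
markers =
    assessment: student assessment flow tests
    e2e: end-to-end tests
    complete_flow: full assessment completion tests
    slow: long-running tests
```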
### Run Single Test
```bash
pytest tests/student_assessment/test_01_assessments_page.py::TestAssessmentsPage::test_assessments_page_loads -v
```
## Test Coverage
### ✅ Covered
- [x] Assessments page rendering
- [x] Assessment card structure
- [x] Domain listing and navigation
- [x] Domain lock/unlock status
- [x] Question type detection
- [x] All question types (MC, T/F, Rating, Open-ended, Matrix)
- [x] Question navigation (Previous/Next)
- [x] Domain submission flow
- [x] Domain feedback collection
- [x] Final feedback collection
- [x] End-to-end flow
### 🔄 Future Enhancements
- [ ] Question validation testing
- [ ] Timer expiration handling
- [ ] Resume incomplete assessments
- [ ] Assessment results verification
- [ ] Export functionality
- [ ] Performance metrics collection
## Test Data
Tests use:
- **Default test student**: From `config.config.TEST_USERNAME`
- **Smart password handling**: Tries the Excel-provided password first, then falls back to `Admin@123`
- **Default profile data**: For profile completion
- **Automated answers**: For question answering
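The "smart password" order described above can be sketched as a small helper; the fallback value `Admin@123` comes from this README, while the helper name is invented:

```python
# Sketch of the password-fallback order: Excel-provided first, then the
# documented default. Helper name is hypothetical.
def password_candidates(excel_password):
    """Return passwords to try, in order, without duplicates."""
    candidates = []
    if excel_password:
        candidates.append(excel_password)
    if "Admin@123" not in candidates:
        candidates.append("Admin@123")
    return candidates
```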
## Best Practices
1. **Fixtures**: Use fixtures to ensure test isolation and proper setup
2. **Explicit Waits**: All synchronization uses explicit waits (no hard sleeps)
3. **Error Handling**: Comprehensive error handling with meaningful messages
4. **Logging**: Detailed logging for debugging and tracking
5. **Skip Logic**: Tests skip gracefully when prerequisites aren't met
6. **Data-testid**: All locators use `data-testid` attributes
## Troubleshooting
### "No assessments available"
- Ensure student profile is 100% complete
- Check if assessments are assigned to the student
- Verify environment (local vs live)
### "No unlocked domains available"
- Some domains may require previous domains to be completed
- Check domain dependencies
- Verify assessment structure
### "Question not found"
- Question may have been answered already
- Check if on last question (Submit button should be visible)
- Verify question elements are loaded
### "Feedback modal not present"
- Domain may not be completed yet
- Check if feedback is optional
- Verify modal timing
## Notes
- Tests are designed to be **independent** and **idempotent**
- Each test handles its own setup and cleanup
- Tests use **smart skipping** when prerequisites aren't met
- **End-to-end tests** are marked as `@pytest.mark.slow` for selective execution
- All tests follow **Page Object Model** pattern
- Comprehensive **logging** for debugging and tracking