World-Class Load Testing Script - Complete Guide
📖 Table of Contents
- What is This?
- Prerequisites
- Quick Start
- Understanding the Flow
- Command Line Arguments
- Multi-Device Execution
- Understanding Results
- Troubleshooting
- Best Practices
- Advanced Usage
🎯 What is This?
This is a world-class load testing script that simulates multiple students completing the full assessment flow simultaneously. It's designed to:
- ✅ Test backend/server performance under load
- ✅ Verify system stability with concurrent users
- ✅ Provide transparent, real-time metrics
- ✅ Support multi-device execution (distributed load testing)
- ✅ Use only 100% verified, reliable automation flows
What Each Student Does
Complete Flow (9 Steps):
- Login → Excel password first, fallback to Admin@123
- Password Reset → If modal appears (smart detection)
- Profile Fill → Complete to 100% if incomplete (smart detection)
- Navigate to Assessments → Go to assessments page
- Start Assessment → Click first available assessment
- Select Domain → Click first unlocked domain
- Answer All Questions → Answer all questions in domain (handles all 5 question types)
- Submit Assessment → Submit when all questions answered
- Feedback → Submit domain feedback if modal appears
Total Time: ~3-5 minutes per student (depending on number of questions)
📋 Prerequisites
Required
- ✅ Python 3.8+
- ✅ Virtual environment activated
- ✅ Chrome browser installed
- ✅ ChromeDriver (auto-managed by webdriver-manager)
- ✅ CSV file with student data
- ✅ Backend server running (localhost:3983 for local)
🚀 Quick Setup (Fresh System)
For a completely fresh system, run the setup script:
cd /home/tech4biz/work/CP_Front_Automation_Test
chmod +x setup_load_test_environment.sh
./setup_load_test_environment.sh
This script will:
- ✅ Check Python version (3.8+)
- ✅ Create virtual environment
- ✅ Install all dependencies from requirements.txt
- ✅ Check Chrome/ChromeDriver
- ✅ Validate project structure
- ✅ Verify function signature
- ✅ Create necessary directories
Manual Setup (if needed):
# 1. Create virtual environment
python3 -m venv venv
# 2. Activate it
source venv/bin/activate
# 3. Install dependencies
pip install -r requirements.txt
CSV File Format
Your CSV file must have these columns (case-insensitive):
- Student CPID (or student_cpid / cpid / CPID): required
- Password (or password / PASSWORD): optional; Admin@123 is used if missing
- First Name: optional, for display
- Last Name: optional, for display
Example CSV:
Student CPID,Password,First Name,Last Name
STU001,Pass123,John,Doe
STU002,Pass456,Jane,Smith
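The column handling described above can be sketched in Python. This is an illustrative helper only; `load_students` and the `ALIASES` table are hypothetical names, not the script's actual internals:

```python
import csv
import io

# Hypothetical alias table mapping header variants to canonical keys,
# matched case-insensitively as described above.
ALIASES = {
    "student cpid": "cpid", "student_cpid": "cpid", "cpid": "cpid",
    "password": "password",
    "first name": "first_name",
    "last name": "last_name",
}

def load_students(csv_text):
    students = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        student = {}
        for header, value in row.items():
            key = ALIASES.get(header.strip().lower())
            if key:
                student[key] = value
        if not student.get("password"):
            student["password"] = "Admin@123"  # fallback when column is missing/blank
        students.append(student)
    return students

sample = (
    "Student CPID,Password,First Name,Last Name\n"
    "STU001,Pass123,John,Doe\n"
    "STU002,,Jane,Smith\n"
)
print(load_students(sample))
```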
🚀 Quick Start
Step 1: Setup (First Time Only)
If this is a fresh system, run the setup script first:
cd /home/tech4biz/work/CP_Front_Automation_Test
./setup_load_test_environment.sh
If already set up, just activate the virtual environment:
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
Step 2: Validate Function Signature (Recommended)
python3 tests/load_tests/validate_function_signature.py
Expected Output: ✅ Function signature is valid!
Step 3: Run Your First Test (1 Student)
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 0 \
--end 1 \
--workers 1 \
--headless \
--metrics-interval 1
Step 4: Scale Up (10 Students)
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 0 \
--end 10 \
--workers 10 \
--headless \
--metrics-interval 5
Step 5: Full Load Test (100 Students)
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 0 \
--end 100 \
--workers 100 \
--headless \
--metrics-interval 10
🔍 Understanding the Flow
What Happens Behind the Scenes
- Script starts → Loads students from CSV (with range filter)
- For each student (in parallel):
- Creates Chrome browser (headless or visible)
- Logs in (smart password handling)
- Handles password reset if needed
- Completes profile if needed
- Navigates to assessments
- Starts first assessment
- Selects first domain
- Answers ALL questions
- Submits assessment
- Handles feedback
- Closes browser
- Metrics collected → Real-time performance tracking
- Results saved → JSON report generated
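The parallel fan-out above follows the standard thread-pool pattern. A minimal sketch, assuming a `run_student` callable that wraps the per-student flow (the real script's internals may differ):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_student(user_id, student):
    # Placeholder for the real per-student flow:
    # create driver -> login -> profile -> assessment -> cleanup
    return {"user_id": user_id, "cpid": student["cpid"], "success": True}

def run_load_test(students, workers):
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_student, i, s) for i, s in enumerate(students)]
        for future in as_completed(futures):
            results.append(future.result())  # collect as each student finishes
    return results

demo = [{"cpid": f"STU{i:03d}"} for i in range(5)]
print(run_load_test(demo, workers=2))
```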
Smart Features
- Smart Login: Tries Excel password first, falls back to Admin@123
- Smart Password Reset: Only resets if needed (checks current password state)
- Smart Profile Completion: Only completes if profile is incomplete
- Smart Question Answering: Handles all 5 question types automatically
- Smart Error Recovery: Retries on transient failures
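The retry behaviour can be illustrated with a generic wrapper; this is a sketch of the pattern, not the script's exact recovery logic:

```python
import time

def with_retries(action, attempts=3, delay=0.05):
    """Run `action`, retrying with linear backoff on exceptions."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as exc:  # in practice, catch specific WebDriver errors
            last_error = exc
            time.sleep(delay * attempt)
    raise last_error

# Demo: an action that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))
```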
⚙️ Command Line Arguments
Required Arguments
| Argument | Description | Example |
|---|---|---|
| `--csv` | Path to CSV file | `students_with_passwords_2025-12-12T13-19-32.csv` |
Optional Arguments
| Argument | Description | Default | Example |
|---|---|---|---|
| `--start` | Start index (0-based, excluding header) | `0` | `0`, `100`, `200` |
| `--end` | End index (exclusive; `None` = all remaining) | `None` | `100`, `200`, `500` |
| `--workers` | Max concurrent workers | All students | `10`, `50`, `100` |
| `--headless` | Run in headless mode | `True` | (flag) |
| `--visible` | Run in visible mode (overrides headless) | `False` | (flag) |
| `--metrics-interval` | Print metrics every N students | `10` | `5`, `10`, `20` |
Examples
Basic usage:
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students.csv \
--start 0 \
--end 100
With all options:
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students.csv \
--start 0 \
--end 100 \
--workers 50 \
--headless \
--metrics-interval 10
Visible mode (for debugging):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students.csv \
--start 0 \
--end 5 \
--workers 5 \
--visible
🌐 Multi-Device Execution
This script supports distributed load testing across multiple devices. Each device runs a different range of students.
Example: 500 Students on 5 Devices
Device 1 (Students 0-99):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 0 \
--end 100 \
--workers 100 \
--headless
Device 2 (Students 100-199):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 100 \
--end 200 \
--workers 100 \
--headless
Device 3 (Students 200-299):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 200 \
--end 300 \
--workers 100 \
--headless
Device 4 (Students 300-399):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 300 \
--end 400 \
--workers 100 \
--headless
Device 5 (Students 400-500):
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-12T13-19-32.csv \
--start 400 \
--end 500 \
--workers 100 \
--headless
Range Calculation
- `--start 0 --end 100` = students at indices 0-99 (100 students)
- `--start 100 --end 200` = students at indices 100-199 (100 students)
- `--start 0 --end 500` = students at indices 0-499 (500 students)
Note: Index 0 is the first student (after header row in CSV)
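The range semantics match Python list slicing (`--start` inclusive, `--end` exclusive), which can be checked directly:

```python
# --start/--end behave like Python slicing over the data rows (header excluded)
students = [f"STU{i:03d}" for i in range(500)]

batch = students[0:100]      # --start 0 --end 100
assert batch[0] == "STU000" and batch[-1] == "STU099" and len(batch) == 100

batch = students[100:200]    # --start 100 --end 200
assert batch[0] == "STU100" and len(batch) == 100

batch = students[400:None]   # --end omitted: all remaining students
assert len(batch) == 100

print("range semantics verified")
```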
📊 Understanding Results
Real-Time Metrics
During execution, you'll see metrics printed every N students:
📊 REAL-TIME METRICS
================================================================================
⏱️ Elapsed Time: 125.3s
✅ Completed: 50
❌ Failed: 2
📈 Success Rate: 96.2%
⚡ Rate: 0.40 students/sec
⏳ Avg Duration: 245.6s
❓ Avg Questions: 12.3
📊 Total Questions: 615
📋 STEP METRICS:
login : 100.0% success, 2.1s avg
password_reset : 95.0% success, 3.5s avg
profile_completion : 90.0% success, 15.2s avg
assessment : 96.0% success, 220.5s avg
================================================================================
Final Summary
After completion, you'll see:
================================================================================
LOAD TEST SUMMARY: Complete Assessment Flow
================================================================================
📊 OVERALL METRICS
Total Users: 100
Successful: 95 (95.00%)
Failed: 5
Skipped: 0
Total Duration: 1250.5 seconds
⏱️ PERFORMANCE METRICS
Average Duration: 245.6 seconds
Min Duration: 180.2 seconds
Max Duration: 320.5 seconds
Median (P50): 240.1 seconds
95th Percentile (P95): 310.2 seconds
99th Percentile (P99): 318.5 seconds
📄 PAGE METRICS
Avg Page Load Time: 2.3 seconds
Avg Scroll Time: 0.5 seconds
Scroll Smooth Rate: 98.0%
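The percentile figures can be reproduced from the per-student durations. One common approach is the nearest-rank method (the script's exact method may differ), sketched here on illustrative data:

```python
import statistics

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative per-student durations in seconds
durations = [180.2, 200.0, 240.1, 245.6, 260.0, 280.0, 300.0, 310.2, 318.5, 320.5]
print(f"P50: {statistics.median(durations):.1f}s")
print(f"P95: {percentile(durations, 95):.1f}s")
print(f"P99: {percentile(durations, 99):.1f}s")
```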
JSON Report
Results are saved to:
reports/load_tests/load_test_Complete_Assessment_Flow_{N}users_{timestamp}.json
Report contains:
- Summary metrics (success rate, durations, percentiles)
- Individual student results
- Step-by-step completion status
- Error details (if any)
- Page load metrics
🔧 Troubleshooting
Issue: "No module named 'utils'"
Solution:
# Make sure you're in the project root
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
Issue: "TypeError: got multiple values for argument 'headless'"
Status: ✅ FIXED - This was resolved by adding user_id as first parameter
Verification:
python3 tests/load_tests/validate_function_signature.py
Issue: "No students loaded"
Causes:
- CSV path is incorrect
- `--start` and `--end` values are invalid
- CSV doesn't have a `Student CPID` column
Solution:
- Check the CSV path is correct
- Verify `--start` < `--end`
- Ensure the CSV has the required columns
Issue: High Failure Rate
Possible Causes:
- Too many concurrent browsers (reduce `--workers`)
- System resources exhausted (RAM/CPU)
- Backend server overloaded
- Network issues
Solutions:
- Reduce `--workers` to 20-50
- Monitor system resources (`htop`, `free -h`)
- Check backend server logs
- Test with smaller batch first
Issue: "DevToolsActivePort file doesn't exist"
Cause: Too many Chrome instances trying to start simultaneously
Solution:
- Reduce `--workers` (try 20-50)
- Add a delay between browser starts
- Use headless mode (`--headless`)
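The "delay between browser starts" workaround can be sketched by staggering submissions to the worker pool; `STAGGER_SECONDS` is an illustrative knob, not an existing script flag, and `launch` stands in for real driver creation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

STAGGER_SECONDS = 0.05  # illustrative pause between browser launches

def launch(user_id):
    # Placeholder for creating a Chrome driver and running the flow
    return f"user-{user_id} started"

def staggered_run(n_users, workers):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = []
        for i in range(n_users):
            futures.append(pool.submit(launch, i))
            time.sleep(STAGGER_SECONDS)  # spread out driver startup spikes
        return [f.result() for f in futures]

print(staggered_run(3, workers=3))
```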
Issue: Slow Execution
Causes:
- Too many concurrent browsers
- System resources limited
- Network latency
- Backend server slow
Solutions:
- Use `--headless` mode (faster)
- Reduce `--workers`
- Monitor backend performance
💡 Best Practices
1. Start Small, Scale Up
- ✅ Test with 1 student first
- ✅ Then 10 students
- ✅ Then 50 students
- ✅ Finally 100+ students
2. Use Headless Mode
- ✅ Always use `--headless` for load testing
- ✅ Much faster and more stable
- ✅ Lower resource usage
- ✅ Use `--visible` only for debugging
3. Monitor System Resources
# In another terminal, monitor:
watch -n 1 'ps aux | grep chrome | wc -l' # Count Chrome processes
htop # Monitor CPU/RAM
free -h # Check memory
4. Validate Before Running
# Always validate signature first
python3 tests/load_tests/validate_function_signature.py
5. Use Appropriate Concurrency
- Small test (1-10 students): `--workers` = number of students
- Medium test (10-50 students): `--workers` = 20-50
- Large test (50-100 students): `--workers` = 50-100
- Very large test (100+ students): `--workers` = 100 (or use multi-device)
6. Multi-Device Strategy
- Divide students evenly across devices
- Each device runs 100-200 students
- All devices run simultaneously
- Combine results from all devices
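Combining results from all devices can be done by aggregating the per-device JSON summaries. A sketch assuming each report's `summary` carries `total` and `successful` counts (field names here are assumptions; check them against your actual report):

```python
def combine_summaries(summaries):
    """Aggregate per-device summary dicts into one overall summary (assumed schema)."""
    total = sum(s["total"] for s in summaries)
    successful = sum(s["successful"] for s in summaries)
    rate = 100.0 * successful / total if total else 0.0
    return {"total": total, "successful": successful, "success_rate": rate}

# Example: summaries as they might appear in each device's report
device_summaries = [
    {"total": 100, "successful": 95},
    {"total": 100, "successful": 98},
]
print(combine_summaries(device_summaries))
```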
7. Metrics Interval
- Short tests (< 50 students): `--metrics-interval 5`
- Medium tests (50-100 students): `--metrics-interval 10`
- Long tests (100+ students): `--metrics-interval 20`
🚀 Advanced Usage
Custom Range Testing
Test specific students:
# Students 50-75
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students.csv \
--start 50 \
--end 75 \
--workers 25
Resuming Failed Tests
If a test fails, you can resume by running the same command with adjusted --start:
# Original: --start 0 --end 100 (failed at student 50)
# Resume from student 50:
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students.csv \
--start 50 \
--end 100 \
--workers 50
Performance Analysis
Analyze JSON report:
import glob
import json

# open() does not expand wildcards, so resolve the pattern first
# and pick the most recent matching report
path = sorted(glob.glob('reports/load_tests/load_test_Complete_Assessment_Flow_100users_*.json'))[-1]
with open(path) as f:
    data = json.load(f)
summary = data['summary']
print(f"Success Rate: {summary['success_rate']:.2f}%")
print(f"Avg Duration: {summary['avg_duration']:.2f}s")
print(f"P95 Duration: {summary['p95_duration']:.2f}s")
📝 Notes
Password Strategy
- Excel password is tried first (from CSV)
- Admin@123 is used as fallback
- Password reset only happens if needed (smart detection)
Assessment Flow
- Completes ONE domain (first unlocked domain)
- Answers ALL questions in that domain
- Handles all 5 question types automatically
- Submits when all questions answered
Error Handling
- Robust retry logic for transient failures
- Smart error recovery
- Comprehensive error logging
- Driver cleanup on errors
Resource Management
- Automatic browser cleanup
- Thread pool management
- Memory-efficient execution
- Progress tracking
✅ Verification Checklist
Before running a large load test:
- Virtual environment activated
- Function signature validated (`validate_function_signature.py`)
- CSV file exists and has the correct format
- Backend server is running
- Tested with 1 student first
- System resources are adequate
- Using headless mode for load testing
- Appropriate `--workers` value set
- Monitoring system resources
🆘 Getting Help
Check These First:
- Run the validation script: `python3 tests/load_tests/validate_function_signature.py`
- Test with 1 student first
- Check system resources
- Review error messages in JSON report
- Check backend server logs
Common Questions:
Q: How many students can I run simultaneously? A: Depends on system resources. Start with 20-50, scale up based on performance.
Q: Can I run this on multiple machines?
A: Yes! Use --start and --end to divide students across machines.
Q: How long does it take? A: ~3-5 minutes per student. Run sequentially, 100 students take roughly 5-8 hours; with high concurrency, wall-clock time drops toward the duration of the slowest student.
Q: What if a student fails? A: The script continues with other students. Check the JSON report for details.
Q: Can I pause and resume?
A: Not automatically, but you can use --start to resume from a specific index.
📚 Related Files
- `test_generic_load_assessments.py`: Main load test script
- `validate_function_signature.py`: Signature validation script
- `VERIFICATION_SUMMARY.md`: Issue analysis and resolution
- `LOAD_TEST_USAGE.md`: Quick reference guide
Last Updated: 2025-12-12
Status: ✅ Production Ready - All Issues Resolved