load_test_multiPC

This commit is contained in:
Kenil Bhikadiya 2025-12-16 13:04:32 +05:30
parent fa9ce0f149
commit b44e165212
9 changed files with 547 additions and 2 deletions

@@ -0,0 +1,202 @@
# 🚀 Multi-PC Load Test - 300 Students Simultaneously
## Overview
This guide provides commands for **3 PCs** to test backend capacity by running **300 students simultaneously** (100 per PC).
## 📊 Test Configuration
- **Total Students**: 300
- **Students per PC**: 100
- **Concurrent Browsers per PC**: 30 (adjustable)
- **Total Concurrent Browsers**: 90 (30 × 3)
- **Backend Load**: 300 students exercised in one coordinated run, with up to 90 browser sessions active at any moment
## 🖥️ Commands for Each PC
### PC 1 - First CSV (100 students)
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T10-49-08_01.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10
```
### PC 2 - Second CSV (100 students)
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T10-59-02_03.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10
```
### PC 3 - Third CSV (100 students)
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T11-06-37_05.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10
```
## ⚡ Execution Steps
1. **Prepare all 3 PCs**:
- Ensure all have the project cloned
- Ensure all have virtual environment activated
- Ensure all have dependencies installed
- Ensure backend is running and accessible
2. **Coordinate start time**:
- Use a countdown (3... 2... 1... GO!)
- Or use a shared timer
- **CRITICAL**: All 3 must start at the same time
3. **Run commands simultaneously**:
- PC 1: Run PC 1 command
- PC 2: Run PC 2 command
- PC 3: Run PC 3 command
- All at the exact same time
4. **Monitor progress**:
- Each PC will show real-time metrics
- Check backend logs for API performance
- Monitor backend server resources
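If a voice countdown proves error-prone, a small wrapper can hold each PC until an agreed epoch second before launching its command (a sketch; the agreed timestamp and which script to launch are placeholders you must fill in):

```shell
#!/bin/bash
# Sleep until the agreed start time (epoch seconds), then return.
wait_until_epoch() {
  local start_at=$1
  local now wait
  now=$(date +%s)
  wait=$(( start_at - now ))
  if [ "$wait" -gt 0 ]; then
    echo "Sleeping ${wait}s until synchronized start..."
    sleep "$wait"
  fi
}

# Demo: agreed time 1 second from now. On the real PCs, share one epoch
# timestamp and replace the echo with ./scripts/PC1_100_students.sh etc.
wait_until_epoch "$(( $(date +%s) + 1 ))"
echo "GO"
```

All three machines then fire within clock-skew of each other, which NTP keeps well under a second.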
## 🔧 Adjusting Workers (if PC crashes)
If a PC crashes or runs out of resources, reduce workers:
### Option 1: Reduce to 20 workers per PC
```bash
--workers 20 # Instead of 30
```
### Option 2: Reduce to 15 workers per PC
```bash
--workers 15 # More conservative
```
### Option 3: Reduce to 10 workers per PC
```bash
--workers 10 # Very conservative, but safer
```
**Note**: Lower workers = slower execution, but more stable
## 📈 Expected Results
### Backend Load
- **300 students** exercising the backend in a single coordinated run
- **Up to 90 concurrent browsers** (30 per PC) active at any moment
- **All students** going through complete flow:
- Login
- Password reset (if needed)
- Profile completion (if needed)
- Assessment completion
- Feedback submission
### Performance Metrics
- Each PC will generate a report in `reports/load_tests/`
- Check backend logs for:
- API response times
- Database query performance
- Server resource usage (CPU, RAM)
- Error rates
## 🎯 Success Criteria
### Backend Should Handle:
- ✅ 300 concurrent logins
- ✅ 300 concurrent password resets (if needed)
- ✅ 300 concurrent profile completions (if needed)
- ✅ 300 concurrent assessment submissions
- ✅ 300 concurrent feedback submissions
### What to Monitor:
1. **Backend Response Times**: Should stay reasonable (< 5 seconds)
2. **Error Rates**: Should be minimal (< 5%)
3. **Server Resources**: CPU/RAM should not max out
4. **Database Performance**: Queries should complete in time
5. **API Timeouts**: Should be minimal
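To capture server resources over the whole run (rather than eyeballing `htop`), a minimal logger on the backend host can append load and memory samples to a CSV (Linux-only sketch: it reads `/proc`; the interval, iteration count, and output path are arbitrary choices):

```shell
# Minimal resource logger for the backend host.
log_resources() {
  local out=${1:-resource_log.csv}
  local iterations=${2:-3}   # in practice, loop for the whole test window
  echo "timestamp,load1,mem_available_kb" > "$out"
  local i
  for (( i = 0; i < iterations; i++ )); do
    local load mem
    load=$(cut -d' ' -f1 /proc/loadavg)
    mem=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    echo "$(date +%s),$load,$mem" >> "$out"
    sleep 1
  done
}
```

Graphing the CSV afterwards makes it easy to correlate CPU/RAM spikes with the progress metrics each PC prints.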
## 📊 Results Analysis
After test completion:
1. **Collect reports from all 3 PCs**:
```bash
# On each PC, check:
ls -lh reports/load_tests/load_test_Complete_Assessment_Flow_*users_*.json
```
2. **Check backend logs**:
- API response times
- Error logs
- Database query logs
- Server resource usage
3. **Analyze results**:
- Success rate across all 300 students
- Average completion time
- Error patterns
- Backend bottlenecks
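A small script can merge the three per-PC JSON reports into one summary after you copy them to a single directory (a sketch; the field names `results`, `success`, and `duration` are assumptions about the report schema and may need adjusting):

```python
import glob
import json

def aggregate_reports(pattern):
    """Merge per-PC load-test reports matched by a glob pattern."""
    total = passed = 0
    durations = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            report = json.load(f)
        for result in report.get("results", []):
            total += 1
            if result.get("success"):
                passed += 1
            if "duration" in result:
                durations.append(result["duration"])
    return {
        "total_students": total,
        "passed": passed,
        "success_rate": passed / total if total else 0.0,
        "avg_duration_s": sum(durations) / len(durations) if durations else 0.0,
    }

# Example: run after collecting all three PCs' reports into one directory.
# print(aggregate_reports("reports/load_tests/load_test_*users_*.json"))
```

This gives the cross-PC success rate and average completion time in one place instead of three partial views.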
## ⚠️ Troubleshooting
### PC Crashes
- **Symptom**: PC becomes unresponsive
- **Solution**: Reduce `--workers` to 20 or 15
### Backend Timeouts
- **Symptom**: Many "timeout" errors in results
- **Solution**: Backend may need scaling or optimization
### High Error Rate
- **Symptom**: > 10% failure rate
- **Solution**: Check backend logs for root cause
### Slow Performance
- **Symptom**: Tests taking very long (> 2 hours)
- **Solution**: Normal for 300 students, but check backend performance
## 📝 Quick Reference
### All 3 Commands (Copy-Paste Ready)
**PC 1:**
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && python3 tests/load_tests/test_generic_load_assessments.py --csv students_with_passwords_2025-12-15T10-49-08_01.csv --start 0 --end 100 --workers 30 --headless --metrics-interval 10
```
**PC 2:**
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && python3 tests/load_tests/test_generic_load_assessments.py --csv students_with_passwords_2025-12-15T10-59-02_03.csv --start 0 --end 100 --workers 30 --headless --metrics-interval 10
```
**PC 3:**
```bash
cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && python3 tests/load_tests/test_generic_load_assessments.py --csv students_with_passwords_2025-12-15T11-06-37_05.csv --start 0 --end 100 --workers 30 --headless --metrics-interval 10
```
---
**Ready to test backend capacity for 300 concurrent users!** 🚀

scripts/PC1_100_students.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
# PC 1 - Load Test Command
# 100 students from first CSV
# Run this on PC 1
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T10-49-08_01.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10

scripts/PC2_100_students.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
# PC 2 - Load Test Command
# 100 students from second CSV
# Run this on PC 2
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T10-59-02_03.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10

scripts/PC3_100_students.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
# PC 3 - Load Test Command
# 100 students from third CSV
# Run this on PC 3
cd /home/tech4biz/work/CP_Front_Automation_Test
source venv/bin/activate
python3 tests/load_tests/test_generic_load_assessments.py \
--csv students_with_passwords_2025-12-15T11-06-37_05.csv \
--start 0 --end 100 \
--workers 30 \
--headless \
--metrics-interval 10

@@ -0,0 +1,86 @@
#!/bin/bash
# Multi-PC Load Test Script - 300 Students Simultaneously
#
# This script provides commands for 3 PCs to test backend capacity
# Each PC runs 100 students simultaneously
# Total: 300 concurrent students hitting the backend
#
# Usage:
# PC 1: Run the command for PC 1
# PC 2: Run the command for PC 2
# PC 3: Run the command for PC 3
# All at the same time to test 300 concurrent users
echo "=================================================================================="
echo "🚀 MULTI-PC LOAD TEST - 300 STUDENTS SIMULTANEOUSLY"
echo "=================================================================================="
echo ""
echo "📋 INSTRUCTIONS:"
echo " 1. Copy the command for your PC (PC 1, PC 2, or PC 3)"
echo " 2. Run it on your respective machine"
echo " 3. Start all 3 commands at the SAME TIME (coordinate via chat/phone)"
echo " 4. This will test backend capacity for 300 concurrent users"
echo ""
echo "=================================================================================="
echo ""
# Configuration
WORKERS_PER_PC=30 # 30 concurrent browsers per PC (adjust based on PC specs)
METRICS_INTERVAL=10
echo "📊 CONFIGURATION:"
echo " Workers per PC: $WORKERS_PER_PC"
echo " Total concurrent browsers: $((WORKERS_PER_PC * 3))"
echo " Total students: 300 (100 per PC)"
echo " Metrics interval: Every $METRICS_INTERVAL students"
echo ""
echo "=================================================================================="
echo ""
echo "🖥️ PC 1 COMMAND (First CSV - 100 students):"
echo "----------------------------------------------------------------------"
echo "cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && \\"
echo "python3 tests/load_tests/test_generic_load_assessments.py \\"
echo " --csv students_with_passwords_2025-12-15T10-49-08_01.csv \\"
echo " --start 0 --end 100 \\"
echo " --workers $WORKERS_PER_PC \\"
echo " --headless \\"
echo " --metrics-interval $METRICS_INTERVAL"
echo ""
echo "=================================================================================="
echo ""
echo "🖥️ PC 2 COMMAND (Second CSV - 100 students):"
echo "----------------------------------------------------------------------"
echo "cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && \\"
echo "python3 tests/load_tests/test_generic_load_assessments.py \\"
echo " --csv students_with_passwords_2025-12-15T10-59-02_03.csv \\"
echo " --start 0 --end 100 \\"
echo " --workers $WORKERS_PER_PC \\"
echo " --headless \\"
echo " --metrics-interval $METRICS_INTERVAL"
echo ""
echo "=================================================================================="
echo ""
echo "🖥️ PC 3 COMMAND (Third CSV - 100 students):"
echo "----------------------------------------------------------------------"
echo "cd /home/tech4biz/work/CP_Front_Automation_Test && source venv/bin/activate && \\"
echo "python3 tests/load_tests/test_generic_load_assessments.py \\"
echo " --csv students_with_passwords_2025-12-15T11-06-37_05.csv \\"
echo " --start 0 --end 100 \\"
echo " --workers $WORKERS_PER_PC \\"
echo " --headless \\"
echo " --metrics-interval $METRICS_INTERVAL"
echo ""
echo "=================================================================================="
echo ""
echo "💡 TIPS:"
echo " - If PC crashes, reduce --workers to 20 or 15"
echo " - Monitor system resources: htop, free -h"
echo " - Results saved in: reports/load_tests/"
echo " - Check backend logs for API performance"
echo ""
echo "=================================================================================="

@@ -0,0 +1,182 @@
# Load Test Issue Analysis - 100 Students Failed
## 🔴 Issue 1: Chrome User Data Directory Conflict (FIXED)
### Error
```
SessionNotCreatedException: session not created: probably user data directory is already in use,
please specify a unique value for --user-data-dir argument
```
### Root Cause
- **Problem**: All 30 Chrome browsers were trying to use the same user data directory
- **Result**: Chrome instances conflicted with each other and failed to start
- **Impact**: 100% failure rate (all browsers crashed at startup)
### Solution Applied ✅
- **Fix**: Each browser now gets a **unique temporary user data directory**
- **Implementation**: `tempfile.mkdtemp(prefix=f'chrome_user_data_{user_id}_')`
- **Cleanup**: Temp directories are automatically cleaned up after driver quits
### Status
**FIXED** - This issue is now resolved in the code
---
## 🔴 Issue 2: Metrics Interval Confusion (CLARIFICATION)
### Your Question
> "how can we get metrics interval at 10,10, because it's continuous process, all the Students should work simultaneously"
### Clarification
**`--metrics-interval 10` does NOT mean:**
- ❌ Students run in batches of 10
- ❌ Only 10 students run at a time
- ❌ Students wait for each other
**`--metrics-interval 10` ACTUALLY means:**
- ✅ **Print metrics every 10 students complete**
- ✅ All students run **simultaneously** (30 at a time with 30 workers)
- ✅ Metrics are just printed for visibility, not controlling execution
### How It Works
```
Timeline:
0s: Start 30 browsers (workers=30)
5s: Student 1 completes → metrics NOT printed (1 < 10)
8s: Student 2 completes → metrics NOT printed (2 < 10)
...
15s: Student 10 completes → ✅ METRICS PRINTED (10 % 10 == 0)
20s: Student 11 completes → metrics NOT printed
...
30s: Student 20 completes → ✅ METRICS PRINTED (20 % 10 == 0)
```
**All 30 browsers are running simultaneously!** The metrics interval just controls when you see the progress report.
### Visual Explanation
```
With --workers 30:
┌─────────────────────────────────────────┐
│ Browser 1 │ Browser 2 │ Browser 3 │ ← All start at same time
│ Browser 4 │ Browser 5 │ Browser 6 │
│ ... │ ... │ ... │
│ Browser 28 │ Browser 29 │ Browser 30 │
└─────────────────────────────────────────┘
↓ ↓ ↓
Running Running Running
(simultaneously, not in batches!)
```
---
## 📊 What Actually Happened
### Execution Flow
1. ✅ Script started successfully
2. ✅ Loaded 100 students from CSV
3. ✅ Started 30 concurrent browsers (workers=30)
4. ❌ **Chrome user data directory conflict** → All browsers failed to start
5. ❌ All 100 students failed within 60 seconds
### Error Breakdown
- **User 1, 3, 4, etc.**: `SessionNotCreatedException` (user data dir conflict)
- **User 2, etc.**: `InvalidSessionIdException` (browser crashed after conflict)
### Why So Fast?
- All failures happened at **browser startup** (within seconds)
- No students even reached the backend
- This is a **local Chrome configuration issue**, NOT a backend issue
---
## ✅ Solution Applied
### Code Changes
1. **Added unique user data directory** for each browser:
```python
user_data_dir = tempfile.mkdtemp(prefix=f'chrome_user_data_{user_id}_')
options.add_argument(f'--user-data-dir={user_data_dir}')
```
2. **Added cleanup** for temp directories after driver quits
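In isolation, the create/clean-up pair looks like this (a standalone sketch of the pattern, separate from the test framework's code):

```python
import os
import shutil
import tempfile

def make_user_data_dir(user_id):
    # Unique per-browser Chrome profile dir; prevents the
    # SessionNotCreatedException conflict described above.
    return tempfile.mkdtemp(prefix=f"chrome_user_data_{user_id}_")

def cleanup_user_data_dir(path):
    # Safe to call twice; ignore_errors avoids masking the original failure.
    if path and os.path.exists(path):
        shutil.rmtree(path, ignore_errors=True)

# Usage: create before webdriver.Chrome(options=...), clean up after quit().
```

`tempfile.mkdtemp` guarantees a fresh directory per call, so 30 browsers per PC can no longer collide on one profile.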
### Test Again
```bash
./scripts/PC1_100_students.sh
```
**Expected Result**: Browsers should start successfully now!
---
## 🎯 Is This a Backend Issue?
### Answer: **NO** - This is a local Chrome configuration issue
**Evidence:**
- ❌ No students reached the backend (all failed at browser startup)
- ❌ Error is `SessionNotCreatedException` (Chrome issue, not API issue)
- ❌ All failures happened within seconds (before any API calls)
**Backend was never tested** because browsers couldn't even start!
---
## 📈 Next Steps
1. **Test again with the fix**:
```bash
./scripts/PC1_100_students.sh
```
2. **If it works**, then proceed with 3 PCs (300 students)
3. **Monitor backend** during the test:
- Check backend logs
- Monitor API response times
- Check database performance
- Monitor server resources (CPU, RAM)
---
## 💡 Understanding Metrics Interval
### Current Behavior (Correct)
- **30 workers** = 30 browsers running simultaneously
- **Metrics interval 10** = Print progress every 10 completions
- **All students process in parallel**, not sequentially
### If You Want Different Behavior
**Option 1: Print metrics more frequently**
```bash
--metrics-interval 5 # Print every 5 completions
```
**Option 2: Print metrics less frequently**
```bash
--metrics-interval 20 # Print every 20 completions
```
**Option 3: Print only at the end**
```bash
--metrics-interval 1000 # Print only at the end (if < 1000 students)
```
**Note**: This does NOT change how students run - they still run simultaneously!
---
## ✅ Summary
1. **Issue 1 (FIXED)**: Chrome user data directory conflict → Each browser now has unique directory
2. **Issue 2 (CLARIFIED)**: Metrics interval is just for printing, not batching
3. **Backend**: Was never tested (browsers failed before reaching backend)
4. **Next**: Test again with the fix to actually test backend capacity
**The fix is ready - test again!** 🚀

```diff
@@ -237,6 +237,7 @@ def complete_assessment_flow_for_student(
         raise ValueError(f"Missing 'data' in student_info: {student_info}")
     driver = None
+    user_data_dir = None  # Track temp directory for cleanup
     steps_completed = []
     cpid = student_info['cpid']
     student_data = student_info['data']
@@ -256,6 +257,13 @@ def complete_assessment_flow_for_student(
     options.add_argument('--disable-software-rasterizer')
     options.add_argument('--disable-extensions')
+    # CRITICAL: Each browser needs unique user data directory to avoid conflicts
+    import tempfile
+    import os
+    import shutil
+    user_data_dir = tempfile.mkdtemp(prefix=f'chrome_user_data_{user_id}_')
+    options.add_argument(f'--user-data-dir={user_data_dir}')
     for attempt in range(3):
         try:
             driver = webdriver.Chrome(options=options)
@@ -550,13 +558,17 @@ def complete_assessment_flow_for_student(
         performance_metrics['total_durations'].append(duration)
         performance_metrics['questions_answered'].append(questions_answered)
+    # Note: Driver cleanup is handled by LoadTestBase
+    # Temp directory cleanup will be done after driver.quit() in LoadTestBase
+    # Store user_data_dir in result for cleanup
     return {
         'driver': driver,
         'steps_completed': steps_completed,
         'success': True,
         'questions_answered': questions_answered,
         'cpid': cpid,
-        'duration': duration
+        'duration': duration,
+        'user_data_dir': user_data_dir  # For cleanup
     }
 except Exception as e:
@@ -566,13 +578,20 @@ def complete_assessment_flow_for_student(
     with progress_lock:
         performance_metrics['failed_students'] += 1
-    # Always cleanup driver on error
+    # Always cleanup driver and temp directory on error
     if driver:
         try:
             driver.quit()
         except:
             pass
+    # Cleanup temporary user data directory
+    if user_data_dir and os.path.exists(user_data_dir):
+        try:
+            shutil.rmtree(user_data_dir, ignore_errors=True)
+        except:
+            pass
     # Re-raise with more context for LoadTestBase to handle
     raise Exception(error_msg)
```

```diff
@@ -202,6 +202,17 @@ class LoadTestBase:
         try:
             from utils.driver_manager import DriverManager
             DriverManager.quit_driver(driver)
+            # Clean up temporary user data directory if it exists
+            user_data_dir = result.get('user_data_dir')
+            if user_data_dir:
+                import os
+                import shutil
+                try:
+                    if os.path.exists(user_data_dir):
+                        shutil.rmtree(user_data_dir, ignore_errors=True)
+                except:
+                    pass
         except:
             pass
```