# Cognitive Prism Automation Testing - Cursor Rules & Best Practices
## Core Principles
- **Zero Tolerance for Assumptions**: Never assume, always verify
- **100% Confidence**: Only document/implement what we're 100% certain about
- **Concrete Locators Only**: Use stable, permanent locators (data-testid, ID, Name); avoid CSS selectors, dynamic classes, and text-based XPath wherever possible
- **Best Practices Always**: Follow industry best practices for automation testing
- **Perfectionist Approach**: Every line of code must be production-ready
## Locator Strategy (CRITICAL)
### ✅ ALLOWED Locators (Priority Order)
1. **data-testid** - `By.CSS_SELECTOR, "[data-testid='value']"` - **PRIMARY** - Most stable; standardized in the local environment
2. **ID** - `By.ID` - If unique and static
3. **Name** - `By.NAME` - If unique and stable
4. **XPath with ID/Name** - `By.XPATH, "//*[@id='value']"` - As fallback only
5. **XPath with stable attributes** - Only if above not available
### ❌ FORBIDDEN Locators
- **CSS Selectors** - Unless they target a `data-testid` attribute
- **Dynamic Classes** - Classes that change (e.g., `class="button-1234"`)
- **XPath with text** - `//button[text()='Click']` - Only use if absolutely necessary
- **XPath with position** - `//div[1]`, `//div[last()]` - Fragile
- **XPath with contains(text())** - Only as last resort
- **Dynamic IDs** - IDs that change on each load
### data-testid Naming Convention
- Format: `scope__element_name`
- Scopes: `student_login`, `mandatory_reset`, `profile_incomplete`, `profile_editor`, `student_nav`, `assessment_card`, `domains_page`, `domain_card`, `domain_assessment`, `domain_question`, `domain_feedback`, `domains_final_feedback`, `feedback_survey`
- Dynamic elements: `assessment_card__{assignmentId}_action`, `domain_card__{domainId}_action`, `domain_question__{questionId}__option_{label}`
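
A minimal sketch of how these locators look in code, assuming hypothetical `data-testid` values that follow the convention above:

```python
from selenium.webdriver.common.by import By

# Preferred: data-testid via a CSS attribute selector
SUBMIT_BUTTON = (By.CSS_SELECTOR, "[data-testid='student_login__submit']")

# Fallbacks, in priority order, when no data-testid exists
USERNAME_FIELD = (By.ID, "username")    # only if unique and static
PASSWORD_FIELD = (By.NAME, "password")  # only if unique and stable


def assessment_action(assignment_id: str) -> tuple:
    """Build the locator for a dynamic assessment card action button."""
    return (By.CSS_SELECTOR, f"[data-testid='assessment_card__{assignment_id}_action']")
```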
## Code Quality Standards
### Python Best Practices
- Follow PEP 8 style guide strictly
- Use type hints where possible
- Maximum line length: 100 characters
- Use descriptive variable and function names
- Always include docstrings for classes and methods
### Page Object Model (POM)
- One page object per page/section
- All locators must be class-level constants
- All page interactions must go through page objects
- Never use WebDriver directly in test files
- Page objects should return other page objects for navigation
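
A minimal page-object sketch under these rules; the page, locator values, and method names are illustrative assumptions, not confirmed locators:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


class AssessmentsPage:
    """Stub for the post-login page object (lives in its own module)."""

    def __init__(self, driver: WebDriver) -> None:
        self.driver = driver


class StudentLoginPage:
    """Page object for the student sign-in page."""

    # Locators are class-level constants; tests never touch them directly
    USERNAME_INPUT = (By.CSS_SELECTOR, "[data-testid='student_login__username']")
    PASSWORD_INPUT = (By.CSS_SELECTOR, "[data-testid='student_login__password']")
    SUBMIT_BUTTON = (By.CSS_SELECTOR, "[data-testid='student_login__submit']")

    def __init__(self, driver: WebDriver, timeout: int = 10) -> None:
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def login(self, username: str, password: str) -> AssessmentsPage:
        """Sign in and hand navigation off to the next page object."""
        self.wait.until(EC.visibility_of_element_located(self.USERNAME_INPUT)).send_keys(username)
        self.driver.find_element(*self.PASSWORD_INPUT).send_keys(password)
        self.driver.find_element(*self.SUBMIT_BUTTON).click()
        return AssessmentsPage(self.driver)
```

Returning `AssessmentsPage` keeps navigation flowing through page objects, so tests read as a chain of domain actions rather than raw WebDriver calls.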
### Wait Strategies
- **ALWAYS** use explicit waits - never use `time.sleep()` except for specific timing requirements
- Use `WebDriverWait` with appropriate expected conditions
- Wait for element visibility before interaction
- Wait for page load completion
- Handle loading states explicitly
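
A sketch of the pattern, assuming hypothetical `data-testid` values for a loading spinner and a domain card:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

LOADING_SPINNER = (By.CSS_SELECTOR, "[data-testid='domains_page__loading']")
FIRST_DOMAIN = (By.CSS_SELECTOR, "[data-testid='domain_card__verbal_action']")


def open_first_domain(driver, timeout: int = 10) -> None:
    """Wait out the loading state, then interact only once the card is clickable."""
    wait = WebDriverWait(driver, timeout)
    wait.until(EC.invisibility_of_element_located(LOADING_SPINNER))
    wait.until(EC.element_to_be_clickable(FIRST_DOMAIN)).click()
```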
### Test Structure
- One test class per feature/page
- Use descriptive test method names: `test_<action>_<expected_result>`
- Tests must be independent - no dependencies between tests
- Use pytest fixtures for setup/teardown
- Use pytest markers appropriately
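
A skeleton that follows these rules; the module path, fixtures, and URL fragment are assumptions for illustration:

```python
import pytest

from pages.student_login_page import StudentLoginPage  # hypothetical module path


@pytest.mark.smoke
class TestStudentLogin:
    """Tests for the student sign-in feature."""

    def test_login_with_valid_credentials_shows_assessments(self, driver, credentials):
        """Signing in with valid credentials should land on the assessments page."""
        StudentLoginPage(driver).login(*credentials)
        # "assessments" in the URL is an assumed marker for the landing page
        assert "assessments" in driver.current_url, "Expected to land on the assessments page"
```

The `driver` and `credentials` fixtures come from `conftest.py` (sketched in later sections), which is what keeps tests independent of setup details.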
### Error Handling
- Use try-except blocks only when necessary
- Log errors appropriately
- Take screenshots on failures
- Provide meaningful error messages
- Never swallow exceptions silently
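
One common way to wire failure screenshots into pytest, assuming tests request a fixture named `driver`:

```python
# conftest.py
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """After each test phase, capture a screenshot if the test body failed."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")
        if driver is not None:
            # Assumes a reports/ directory exists (see File Organization below)
            driver.save_screenshot(f"reports/{item.name}_failure.png")
```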
## File Organization
### Directory Structure
```
pages/ - Page Object Model classes
tests/ - Test cases
utils/ - Utility classes
config/ - Configuration files
reports/ - Test reports
logs/ - Test logs
downloads/ - Downloaded files
```
### Naming Conventions
- **Files**: `snake_case.py`
- **Classes**: `PascalCase`
- **Methods/Functions**: `snake_case`
- **Constants**: `UPPER_SNAKE_CASE`
- **Variables**: `snake_case`
## Documentation Requirements
### Code Documentation
- Every class must have a docstring
- Every public method must have a docstring
- Document parameters and return values
- Include usage examples for complex methods
### Test Documentation
- Document test purpose in docstring
- Explain any test data requirements
- Document expected behavior
- Note any known limitations
## Testing Standards
### Test Data
- Use configuration files for test data
- Never hardcode credentials (use environment variables)
- Use fixtures for test data setup
- Clean up test data after tests
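
A fixture sketch for pulling credentials from the environment; the variable names are placeholders, not established project names:

```python
import os

import pytest


@pytest.fixture
def credentials():
    """Read test credentials from the environment; never hardcode them."""
    username = os.environ.get("CP_TEST_USERNAME")  # placeholder variable names
    password = os.environ.get("CP_TEST_PASSWORD")
    if not username or not password:
        pytest.fail("Set CP_TEST_USERNAME and CP_TEST_PASSWORD before running tests")
    return username, password
```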
### Assertions
- Use meaningful assertion messages
- Verify multiple aspects when possible
- Use appropriate assertion methods
- Don't over-assert or under-assert: verify everything the test claims to cover, and nothing incidental to it
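
For example (the page object and its methods are hypothetical):

```python
def test_profile_completion_reaches_100_percent(profile_editor_page):
    """Completing all required fields should bring the profile to 100%."""
    profile_editor_page.fill_all_required_fields()
    percent = profile_editor_page.completion_percent()
    assert percent == 100, f"Expected profile completion of 100%, got {percent}%"
```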
### Test Execution
- Tests must be able to run independently
- Tests should be idempotent
- Handle cleanup in fixtures
- Support parallel execution where possible
## Environment Configuration
### Local vs Live
- **Default**: LOCAL environment (`localhost:3983`)
- Switch to live only after local automation is complete
- Use `ENVIRONMENT` variable to switch: `local` or `live`
- All URLs are dynamic based on environment
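
A sketch of environment-driven URL resolution; the live URL is deliberately left as a placeholder:

```python
import os

BASE_URLS = {
    "local": "http://localhost:3983",
    "live": "https://REPLACE-WITH-LIVE-URL",  # placeholder, not the real live URL
}


def base_url() -> str:
    """Resolve the base URL from the ENVIRONMENT variable, defaulting to local."""
    env = os.environ.get("ENVIRONMENT", "local").lower()
    return BASE_URLS[env]
```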
## Security Best Practices
### Credentials
- Never commit credentials to repository
- Use environment variables or .env files
- Use .gitignore for sensitive files
- Rotate test credentials regularly
### Data Privacy
- Don't log sensitive information
- Clean up test data after execution
- Respect data privacy regulations
## Performance Considerations
### Test Execution Speed
- Minimize unnecessary waits
- Use appropriate wait timeouts
- Optimize locator strategies
- Avoid redundant operations
### Resource Management
- Always quit WebDriver instances
- Clean up temporary files
- Release resources in fixtures
- Monitor memory usage
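
A fixture that guarantees cleanup, since pytest runs the code after `yield` even when the test fails:

```python
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    """Yield a WebDriver and always quit it in teardown."""
    drv = webdriver.Chrome()
    yield drv
    drv.quit()
```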
## Maintenance Guidelines
### Code Reviews
- All code must be reviewed
- Follow consistent coding style
- Ensure all tests pass
- Verify documentation is updated
### Refactoring
- Refactor when code duplication occurs
- Improve locators when better options available
- Update documentation with changes
- Maintain backward compatibility when possible
## Specific Rules for This Project
### Cognitive Prism Platform (Local)
- All locators use `data-testid` attributes (standardized in the local environment)
- Document all findings in analysis documents
- Update locators if platform changes
- Follow exact student journey flow:
  1. Sign-In (student_login)
  2. Password Reset (mandatory_reset) - if first login
  3. Profile Completion (profile_incomplete) - if incomplete
  4. Profile Editor (profile_editor) - complete to 100%
  5. Assessments (assessment_card)
  6. Domains (domains_page, domain_card)
  7. Domain Assessment (domain_assessment, domain_question)
  8. Domain Feedback (domain_feedback)
  9. Final Feedback (domains_final_feedback, feedback_survey)
### Test Flows
- Complete flows must be documented before automation
- Handle all test phases (password reset, profile completion, assessments, domains, questions, feedback)
- Verify export functionality where applicable
### Framework Requirements
- Use pytest as test runner
- Use Page Object Model pattern
- Use explicit waits exclusively
- Support multiple browsers
- Generate HTML reports
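
A sketch of the multi-browser requirement via a parametrized fixture; with the pytest-html plugin installed, HTML reports come from `pytest --html=reports/report.html --self-contained-html`:

```python
import pytest
from selenium import webdriver


@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    """Run every test once per browser, quitting the driver afterwards."""
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()
```

This would replace the single-browser fixture sketched earlier; the teardown guarantee stays the same.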
## Common Pitfalls to Avoid
1. **Fragile Locators**: Don't use locators that break easily
2. **Hard-coded Waits**: Don't use time.sleep() unnecessarily
3. **Duplicate Code**: Refactor common functionality
4. **Missing Documentation**: Document all code
5. **Incomplete Error Handling**: Handle errors appropriately
6. **Test Dependencies**: Keep tests independent
7. **Ignoring Best Practices**: Always follow best practices
8. **Assumptions**: Never assume, always verify
## Quality Checklist
Before committing code, ensure:
- [ ] All locators use data-testid (or fallback to ID/Name)
- [ ] No hard-coded waits (except specific timing needs)
- [ ] All code has docstrings
- [ ] All tests pass
- [ ] Code follows PEP 8
- [ ] No duplicate code
- [ ] Error handling is appropriate
- [ ] Screenshots on failure are working
- [ ] Documentation is updated
- [ ] No sensitive data in code
## Continuous Improvement
- Regularly review and update locators
- Refactor code for better maintainability
- Update documentation as needed
- Stay updated with best practices
- Verify all locators match local implementation standards
---
**Remember**: We are building a world-class automation framework. Every line of code must reflect this commitment to excellence.