Compare commits


No commits in common. "main" and "dev_branch" have entirely different histories.

289 changed files with 3298 additions and 63406 deletions

1
.gitignore vendored

@@ -135,4 +135,3 @@ uploads/
# GCP Service Account Key
config/gcp-key.json
Jenkinsfile

535
Data_Collection_Analysis.md Normal file

@@ -0,0 +1,535 @@
# Data Collection Analysis - What We Have vs What We're Collecting
## Overview
This document compares the database structure with what we're currently collecting and recommends what we should start collecting for the Detailed Reports.
---
## 1. ACTIVITIES TABLE
### ✅ **Database Fields Available:**
```sql
- activity_id (PK)
- request_id (FK) ✅ COLLECTING
- user_id (FK) ✅ COLLECTING
- user_name ✅ COLLECTING
- activity_type ✅ COLLECTING
- activity_description ✅ COLLECTING
- activity_category ❌ NOT COLLECTING (set to NULL)
- severity ❌ NOT COLLECTING (set to NULL)
- metadata ✅ COLLECTING (partially)
- is_system_event ✅ COLLECTING
- ip_address ❌ NOT COLLECTING (set to NULL)
- user_agent ❌ NOT COLLECTING (set to NULL)
- created_at ✅ COLLECTING
```
### 🔴 **Currently NOT Collecting (But Should):**
1. **IP Address** (`ip_address`)
- **Status:** Field exists, but always set to `null`
- **Impact:** Cannot show IP in User Activity Log Report
- **Fix:** Extract from `req.ip` or `req.headers['x-forwarded-for']` in controllers
- **Priority:** HIGH (needed for security/audit)
2. **User Agent** (`user_agent`)
- **Status:** Field exists, but always set to `null`
- **Impact:** Cannot show device/browser info in reports
- **Fix:** Extract from `req.headers['user-agent']` in controllers
- **Priority:** MEDIUM (nice to have for analytics)
3. **Activity Category** (`activity_category`)
- **Status:** Field exists, but always set to `null`
- **Impact:** Cannot categorize activities (e.g., "AUTHENTICATION", "WORKFLOW", "DOCUMENT")
- **Fix:** Map `activity_type` to category:
- `created`, `approval`, `rejection`, `status_change` → "WORKFLOW"
- `comment` → "COLLABORATION"
- `document_added` → "DOCUMENT"
- `sla_warning` → "SYSTEM"
- **Priority:** MEDIUM (helps with filtering/reporting)
4. **Severity** (`severity`)
- **Status:** Field exists, but always set to `null`
- **Impact:** Cannot prioritize critical activities
- **Fix:** Map based on activity type:
- `rejection`, `sla_warning` → "WARNING"
- `approval`, `closed` → "INFO"
- `status_change` → "INFO"
- **Priority:** LOW (optional enhancement)
### 📝 **Recommendation:**
**Update `activity.service.ts` to accept and store:**
```typescript
async log(entry: ActivityEntry & {
ipAddress?: string;
userAgent?: string;
category?: string;
severity?: string;
}) {
// ... existing code ...
const activityData = {
// ... existing fields ...
ipAddress: entry.ipAddress || null,
userAgent: entry.userAgent || null,
activityCategory: entry.category || this.inferCategory(entry.type),
severity: entry.severity || this.inferSeverity(entry.type),
};
}
```
**Update all controller calls to pass IP and User Agent:**
```typescript
activityService.log({
// ... existing fields ...
ipAddress: req.ip || req.headers['x-forwarded-for'] || null,
userAgent: req.headers['user-agent'] || null,
});
```
---
## 2. APPROVAL_LEVELS TABLE
### ✅ **Database Fields Available:**
```sql
- level_id (PK)
- request_id (FK) ✅ COLLECTING
- level_number ✅ COLLECTING
- level_name ❌ OPTIONAL (may not be set)
- approver_id (FK) ✅ COLLECTING
- approver_email ✅ COLLECTING
- approver_name ✅ COLLECTING
- tat_hours ✅ COLLECTING
- tat_days ✅ COLLECTING (auto-calculated)
- status ✅ COLLECTING
- level_start_time ✅ COLLECTING
- level_end_time ✅ COLLECTING
- action_date ✅ COLLECTING
- comments ✅ COLLECTING
- rejection_reason ✅ COLLECTING
- is_final_approver ✅ COLLECTING
- elapsed_hours ✅ COLLECTING
- remaining_hours ✅ COLLECTING
- tat_percentage_used ✅ COLLECTING
- tat50_alert_sent ✅ COLLECTING
- tat75_alert_sent ✅ COLLECTING
- tat_breached ✅ COLLECTING
- tat_start_time ✅ COLLECTING
- created_at ✅ COLLECTING
- updated_at ✅ COLLECTING
```
### 🔴 **Currently NOT Collecting (But Should):**
1. **Level Name** (`level_name`)
- **Status:** Field exists, but may be NULL
- **Impact:** Cannot show stage name in reports (only level number)
- **Fix:** When creating approval levels, prompt for or auto-generate level names:
- "Department Head Review"
- "Finance Approval"
- "Final Approval"
- **Priority:** MEDIUM (improves report readability)
### 📝 **Recommendation:**
**Ensure level_name is set when creating approval levels:**
```typescript
await ApprovalLevel.create({
// ... existing fields ...
levelName: levelData.levelName || `Level ${levelNumber}`,
});
```
---
## 3. USER_SESSIONS TABLE
### ✅ **Database Fields Available:**
```sql
- session_id (PK)
- user_id (FK)
- session_token ✅ COLLECTING
- refresh_token ✅ COLLECTING
- ip_address ❓ CHECK IF COLLECTING
- user_agent ❓ CHECK IF COLLECTING
- device_type ❓ CHECK IF COLLECTING
- browser ❓ CHECK IF COLLECTING
- os ❓ CHECK IF COLLECTING
- login_at ✅ COLLECTING
- last_activity_at ✅ COLLECTING
- logout_at ❓ CHECK IF COLLECTING
- expires_at ✅ COLLECTING
- is_active ✅ COLLECTING
- logout_reason ❓ CHECK IF COLLECTING
```
### 🔴 **Missing for Login Activity Tracking:**
1. **Login Activities in Activities Table**
- **Status:** Login events are NOT logged in `activities` table
- **Impact:** Cannot show login activities in User Activity Log Report
- **Fix:** Add login activity logging in auth middleware/controller:
```typescript
// After successful login
await activityService.log({
requestId: 'SYSTEM_LOGIN', // Special request ID for system events
type: 'login',
user: { userId, name: user.displayName },
ipAddress: req.ip,
userAgent: req.headers['user-agent'],
category: 'AUTHENTICATION',
severity: 'INFO',
timestamp: new Date().toISOString(),
action: 'User Login',
details: `User logged in from ${req.ip}`
});
```
- **Priority:** HIGH (needed for security audit)
2. **Device/Browser Parsing**
- **Status:** Fields exist but may not be populated
- **Impact:** Cannot show device type in reports
- **Fix:** Parse user agent to extract:
- `device_type`: "WEB", "MOBILE"
- `browser`: "Chrome", "Firefox", "Safari"
- `os`: "Windows", "macOS", "iOS", "Android"
- **Priority:** MEDIUM (nice to have)
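For reference, a dependency-free version of that parsing could look like the sketch below. The field names mirror the `user_sessions` columns above; the file name and the regex checks are illustrative assumptions, not the final utility.
```typescript
// userAgentParser.ts (illustrative sketch; the real utility may differ)
export interface ParsedUserAgent {
  deviceType: 'WEB' | 'MOBILE';
  browser: string;
  os: string;
}

export function parseUserAgent(ua: string | undefined): ParsedUserAgent {
  const value = ua || '';

  // Rough device detection: common mobile markers mean MOBILE, everything else WEB.
  const deviceType: 'WEB' | 'MOBILE' =
    /Mobile|Android|iPhone|iPad/i.test(value) ? 'MOBILE' : 'WEB';

  // Browser detection: order matters because Chrome and Edge UAs also contain "Safari".
  let browser = 'Unknown';
  if (/Edg\//.test(value)) browser = 'Edge';
  else if (/Chrome\//.test(value)) browser = 'Chrome';
  else if (/Firefox\//.test(value)) browser = 'Firefox';
  else if (/Safari\//.test(value)) browser = 'Safari';

  // OS detection: check iOS/Android before macOS/Linux because their UAs embed those strings too.
  let os = 'Unknown';
  if (/iPhone|iPad|iPod/i.test(value)) os = 'iOS';
  else if (/Android/i.test(value)) os = 'Android';
  else if (/Windows/i.test(value)) os = 'Windows';
  else if (/Mac OS X/i.test(value)) os = 'macOS';
  else if (/Linux/i.test(value)) os = 'Linux';

  return { deviceType, browser, os };
}
```
The parsed values would then be written to `user_sessions` (`device_type`, `browser`, `os`) when a session is created.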
---
## 4. WORKFLOW_REQUESTS TABLE
### ✅ **All Fields Are Being Collected:**
- All fields in `workflow_requests` are properly collected
- No missing data here
### 📝 **Note:**
- `submission_date` vs `created_at`: Use `submission_date` for "days open" calculation
- `closure_date`: Available for completed requests
---
## 5. TAT_TRACKING TABLE
### ✅ **Database Fields Available:**
```sql
- tracking_id (PK)
- request_id (FK)
- level_id (FK)
- tracking_type ✅ COLLECTING
- tat_status ✅ COLLECTING
- total_tat_hours ✅ COLLECTING
- elapsed_hours ✅ COLLECTING
- remaining_hours ✅ COLLECTING
- percentage_used ✅ COLLECTING
- threshold_50_breached ✅ COLLECTING
- threshold_50_alerted_at ✅ COLLECTING
- threshold_80_breached ✅ COLLECTING
- threshold_80_alerted_at ✅ COLLECTING
- threshold_100_breached ✅ COLLECTING
- threshold_100_alerted_at ✅ COLLECTING
- alert_count ✅ COLLECTING
- last_calculated_at ✅ COLLECTING
```
### ✅ **All Fields Are Being Collected:**
- TAT tracking appears to be fully implemented
---
## 6. AUDIT_LOGS TABLE
### ✅ **Database Fields Available:**
```sql
- audit_id (PK)
- user_id (FK)
- entity_type
- entity_id
- action
- action_category
- old_values (JSONB)
- new_values (JSONB)
- changes_summary
- ip_address
- user_agent
- session_id
- request_method
- request_url
- response_status
- execution_time_ms
- created_at
```
### 🔴 **Status:**
- **Audit logging may not be fully implemented**
- **Impact:** Cannot track all system changes for audit purposes
- **Priority:** MEDIUM (for compliance/security)
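If audit logging is built out, one plausible shape is an Express middleware that writes one row per mutating request using the columns listed above. The sketch below is an assumption for illustration: the `AuditLog` model import path and the `req.user` shape are hypothetical, and capturing `old_values`/`new_values` would still need hooks at the service layer.
```typescript
// auditLog.middleware.ts (sketch; model and field names assumed from the table above)
import { Request, Response, NextFunction } from 'express';
import AuditLog from '../models/auditLog.model'; // hypothetical model

export function auditLogger(req: Request, res: Response, next: NextFunction) {
  const startedAt = Date.now();

  res.on('finish', async () => {
    // Only audit mutating requests to keep the table manageable.
    if (!['POST', 'PUT', 'PATCH', 'DELETE'].includes(req.method)) return;

    try {
      await AuditLog.create({
        userId: (req as any).user?.userId || null,
        action: `${req.method} ${req.path}`,
        actionCategory: 'API',
        ipAddress: req.ip || null,
        userAgent: req.headers['user-agent'] || null,
        requestMethod: req.method,
        requestUrl: req.originalUrl,
        responseStatus: res.statusCode,
        executionTimeMs: Date.now() - startedAt,
      });
    } catch (err) {
      // Auditing must never break the request path.
      console.error('[Audit] Failed to write audit log:', err);
    }
  });

  next();
}
```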
---
## SUMMARY: What to Start Collecting
### 🔴 **HIGH PRIORITY (Must Have for Reports):**
1. **IP Address in Activities** ✅ Field exists, just need to populate
- Extract from `req.ip` or `req.headers['x-forwarded-for']`
- Update `activity.service.ts` to accept IP
- Update all controller calls
2. **User Agent in Activities** ✅ Field exists, just need to populate
- Extract from `req.headers['user-agent']`
- Update `activity.service.ts` to accept user agent
- Update all controller calls
3. **Login Activities** ❌ Not currently logged
- Add login activity logging in auth controller
- Use special `requestId: 'SYSTEM_LOGIN'` for system events
- Include IP and user agent
### 🟡 **MEDIUM PRIORITY (Nice to Have):**
4. **Activity Category** ✅ Field exists, just need to populate
- Auto-infer from `activity_type`
- Helps with filtering and reporting
5. **Level Names** ✅ Field exists, ensure it's set
- Improve readability in reports
- Auto-generate if not provided
6. **Severity** ✅ Field exists, just need to populate
- Auto-infer from `activity_type`
- Helps prioritize critical activities
### 🟢 **LOW PRIORITY (Future Enhancement):**
7. **Device/Browser Parsing**
- Parse user agent to extract device type, browser, OS
- Store in `user_sessions` table
8. **Audit Logging**
- Implement comprehensive audit logging
- Track all system changes
---
## 7. BUSINESS DAYS CALCULATION FOR WORKFLOW AGING
### ✅ **Available:**
- `calculateElapsedWorkingHours()` - Calculates working hours (excludes weekends/holidays)
- Working hours configuration (9 AM - 6 PM, Mon-Fri)
- Holiday support (from database)
- Priority-based calculation (express vs standard)
### ❌ **Missing:**
1. **Business Days Count Function**
- Need a function to calculate business days (not hours)
- For Workflow Aging Report: "Days Open" should be business days
- Currently only have working hours calculation
2. **TAT Processor Using Wrong Calculation**
- `tatProcessor.ts` uses simple calendar hours:
```typescript
const elapsedMs = now.getTime() - new Date(levelStartTime).getTime();
const elapsedHours = elapsedMs / (1000 * 60 * 60);
```
- Should use `calculateElapsedWorkingHours()` instead
- This causes incorrect TAT breach calculations
### 🔧 **What Needs to be Built:**
1. **Add Business Days Calculation Function:**
```typescript
// In tatTimeUtils.ts
export async function calculateBusinessDays(
startDate: Date | string,
endDate: Date | string = new Date(),
priority: string = 'standard'
): Promise<number> {
await loadWorkingHoursCache();
await loadHolidaysCache();
let start = dayjs(startDate);
const end = dayjs(endDate);
const config = workingHoursCache || { /* defaults */ };
let businessDays = 0;
let current = start.startOf('day');
while (current.isBefore(end) || current.isSame(end, 'day')) {
const dayOfWeek = current.day();
const dateStr = current.format('YYYY-MM-DD');
const isWorkingDay = priority === 'express'
? true
: (dayOfWeek >= config.startDay && dayOfWeek <= config.endDay);
const isNotHoliday = !holidaysCache.has(dateStr);
if (isWorkingDay && isNotHoliday) {
businessDays++;
}
current = current.add(1, 'day');
}
return businessDays;
}
```
2. **Fix TAT Processor:**
- Replace calendar hours calculation with `calculateElapsedWorkingHours()`
- This will fix TAT breach alerts to use proper working hours
3. **Update Workflow Aging Report:**
- Use `calculateBusinessDays()` instead of calendar days
- Filter by business days threshold
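A minimal sketch of how the aging report might consume these helpers, assuming `calculateElapsedWorkingHours()` accepts `(start, end, priority)` like `calculateBusinessDays()` and that the request fields shown are available; the function and field names are illustrative:
```typescript
// Sketch: business-day aging for open requests (model/field names assumed)
import { calculateBusinessDays, calculateElapsedWorkingHours } from '../utils/tatTimeUtils';

export async function buildWorkflowAgingRows(openRequests: any[], minBusinessDays = 0) {
  const rows = [];
  for (const request of openRequests) {
    // "Days Open" counted in business days from submission_date (not created_at).
    const daysOpen = await calculateBusinessDays(
      request.submissionDate,
      new Date(),
      request.priority
    );

    // Elapsed working hours for the current level, replacing the old
    // calendar-hours arithmetic in tatProcessor.ts.
    const elapsedHours = request.currentLevelStartTime
      ? await calculateElapsedWorkingHours(request.currentLevelStartTime, new Date(), request.priority)
      : 0;

    if (daysOpen >= minBusinessDays) {
      rows.push({ requestNumber: request.requestNumber, daysOpen, elapsedHours });
    }
  }
  return rows;
}
```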
---
## IMPLEMENTATION CHECKLIST
### Phase 1: Quick Wins (Fields Exist, Just Need to Populate)
- [ ] Update `activity.service.ts` to accept `ipAddress` and `userAgent`
- [ ] Update all controller calls to pass IP and user agent
- [ ] Add activity category inference
- [ ] Add severity inference
### Phase 2: Fix TAT Calculations (CRITICAL)
- [x] Fix `tatProcessor.ts` to use `calculateElapsedWorkingHours()` instead of calendar hours ✅
- [x] Add `calculateBusinessDays()` function to `tatTimeUtils.ts`
- [ ] Test TAT breach calculations with working hours
### Phase 3: New Functionality
- [x] Add login activity logging ✅ (Implemented in auth.controller.ts for SSO and token exchange)
- [x] Ensure level names are set when creating approval levels ✅ (levelName set in workflow.service.ts)
- [x] Add device/browser parsing for user sessions ✅ (userAgentParser.ts utility created - can be used for parsing user agent strings)
### Phase 4: Enhanced Reporting
- [x] Build report endpoints using collected data ✅ (getLifecycleReport, getActivityLogReport, getWorkflowAgingReport)
- [x] Add filtering by category, severity ✅ (Filtering by category and severity added to getActivityLogReport, frontend UI added)
- [x] Add IP/user agent to activity log reports ✅ (IP and user agent captured and displayed)
- [x] Use business days in Workflow Aging Report ✅ (calculateBusinessDays implemented and used)
---
## CODE CHANGES NEEDED
### 1. Update Activity Service (`activity.service.ts`)
```typescript
export type ActivityEntry = {
requestId: string;
type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning' | 'ai_conclusion_generated' | 'closed' | 'login';
user?: { userId: string; name?: string; email?: string };
timestamp: string;
action: string;
details: string;
metadata?: any;
ipAddress?: string; // NEW
userAgent?: string; // NEW
category?: string; // NEW
severity?: string; // NEW
};
class ActivityService {
private inferCategory(type: string): string {
const categoryMap: Record<string, string> = {
'created': 'WORKFLOW',
'approval': 'WORKFLOW',
'rejection': 'WORKFLOW',
'status_change': 'WORKFLOW',
'assignment': 'WORKFLOW',
'comment': 'COLLABORATION',
'document_added': 'DOCUMENT',
'sla_warning': 'SYSTEM',
'reminder': 'SYSTEM',
'ai_conclusion_generated': 'SYSTEM',
'closed': 'WORKFLOW',
'login': 'AUTHENTICATION'
};
return categoryMap[type] || 'OTHER';
}
private inferSeverity(type: string): string {
const severityMap: Record<string, string> = {
'rejection': 'WARNING',
'sla_warning': 'WARNING',
'approval': 'INFO',
'closed': 'INFO',
'status_change': 'INFO',
'login': 'INFO',
'created': 'INFO',
'comment': 'INFO',
'document_added': 'INFO'
};
return severityMap[type] || 'INFO';
}
async log(entry: ActivityEntry) {
// ... existing code ...
const activityData = {
requestId: entry.requestId,
userId: entry.user?.userId || null,
userName: entry.user?.name || entry.user?.email || null,
activityType: entry.type,
activityDescription: entry.details,
activityCategory: entry.category || this.inferCategory(entry.type),
severity: entry.severity || this.inferSeverity(entry.type),
metadata: entry.metadata || null,
isSystemEvent: !entry.user,
ipAddress: entry.ipAddress || null, // NEW
userAgent: entry.userAgent || null, // NEW
};
// ... rest of code ...
}
}
```
### 2. Update Controller Calls (Example)
```typescript
// In workflow.controller.ts, approval.controller.ts, etc.
activityService.log({
requestId: workflow.requestId,
type: 'created',
user: { userId, name: user.displayName },
timestamp: new Date().toISOString(),
action: 'Request Created',
details: `Request ${workflow.requestNumber} created`,
ipAddress: req.ip || req.headers['x-forwarded-for'] || null, // NEW
userAgent: req.headers['user-agent'] || null, // NEW
});
```
### 3. Add Login Activity Logging
```typescript
// In auth.controller.ts after successful login
await activityService.log({
requestId: 'SYSTEM_LOGIN', // Special ID for system events
type: 'login',
user: { userId: user.userId, name: user.displayName },
timestamp: new Date().toISOString(),
action: 'User Login',
details: `User logged in successfully`,
ipAddress: req.ip || req.headers['x-forwarded-for'] || null,
userAgent: req.headers['user-agent'] || null,
category: 'AUTHENTICATION',
severity: 'INFO'
});
```
---
## CONCLUSION
**Good News:** Most fields already exist in the database! We just need to:
1. Populate existing fields (IP, user agent, category, severity)
2. Add login activity logging
3. Ensure level names are set
**Estimated Effort:**
- Quick Wins: 2-4 hours
- New Functionality: 4-6 hours
- Enhanced Reporting: 8-12 hours
**Total: ~14-22 hours of development work**


@@ -1,125 +0,0 @@
# GCS (Google Cloud Storage) Configuration Guide
## Overview
All document uploads (workflow documents, work note attachments) are now configured to use Google Cloud Storage (GCS) instead of local file storage.
## Configuration Steps
### 1. Update `.env` File
Add or update the following environment variables in your `.env` file:
```env
# Cloud Storage (GCP)
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_BUCKET_NAME=your-bucket-name-here
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
```
**Important Notes:**
- `GCP_PROJECT_ID`: Should match the `project_id` in your credentials JSON file (currently: `re-platform-workflow-dealer`)
- `GCP_BUCKET_NAME`: The name of your GCS bucket (create one in GCP Console if needed)
- `GCP_KEY_FILE`: Path to your service account credentials JSON file (relative to project root or absolute path)
### 2. Create GCS Bucket (if not exists)
1. Go to [Google Cloud Console](https://console.cloud.google.com/)
2. Navigate to **Cloud Storage** > **Buckets**
3. Click **Create Bucket**
4. Choose a unique bucket name (e.g., `re-workflow-documents`)
5. Select a location for your bucket
6. Set permissions:
- Make bucket publicly readable (for public URLs) OR
- Keep private and use signed URLs (more secure)
### 3. Grant Service Account Permissions
Your service account (`re-bridge-workflow@re-platform-workflow-dealer.iam.gserviceaccount.com`) needs:
- **Storage Object Admin** role (to upload/delete files)
- **Storage Object Viewer** role (to read files)
### 4. Verify Configuration
The system will:
- ✅ Automatically detect if GCS is configured
- ✅ Fall back to local storage if GCS is not configured
- ✅ Upload files to GCS when configured
- ✅ Store GCS URLs in the database
- ✅ Redirect downloads/previews to GCS URLs
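The detection step is presumably just a check that all three environment variables are present; a minimal sketch (the helper name is illustrative):
```typescript
// Sketch: decide whether uploads should go to GCS or fall back to local disk
export function isGcsConfigured(): boolean {
  return Boolean(
    process.env.GCP_PROJECT_ID &&
    process.env.GCP_BUCKET_NAME &&
    process.env.GCP_KEY_FILE
  );
}
```
When this returns `false`, the upload path falls back to local storage as described below.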
## File Storage Structure
Files are organized in GCS by request number with subfolders for documents and attachments:
```
reflow-documents-uat/
├── requests/
│ ├── REQ-2025-12-0001/
│ │ ├── documents/
│ │ │ ├── {timestamp}-{hash}-{filename}
│ │ │ └── ...
│ │ └── attachments/
│ │ ├── {timestamp}-{hash}-{filename}
│ │ └── ...
│ ├── REQ-2025-12-0002/
│ │ ├── documents/
│ │ └── attachments/
│ └── ...
```
- **Documents**: `requests/{requestNumber}/documents/{timestamp}-{hash}-{filename}`
- **Work Note Attachments**: `requests/{requestNumber}/attachments/{timestamp}-{hash}-{filename}`
This structure makes it easy to:
- Track all files for a specific request
- Organize documents vs attachments separately
- Navigate and manage files in GCS console
## How It Works
### Upload Flow
1. File is received via multer (memory storage)
2. File buffer is uploaded to GCS
3. GCS returns a public URL
4. URL is stored in database (`storage_url` field)
5. Local file is deleted (if it existed)
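A hedged sketch of step 2 using the `@google-cloud/storage` client. The destination path follows the `requests/{requestNumber}/documents/{timestamp}-{hash}-{filename}` convention from the previous section; the function name and hash scheme are illustrative, not the exact service code.
```typescript
// Sketch: upload a multer memory-storage buffer to GCS and return its public URL
import { Storage } from '@google-cloud/storage';
import crypto from 'crypto';

const storage = new Storage({
  projectId: process.env.GCP_PROJECT_ID,
  keyFilename: process.env.GCP_KEY_FILE,
});

export async function uploadWorkflowDocument(
  requestNumber: string,
  file: Express.Multer.File // buffer is populated because multer uses memory storage
): Promise<string> {
  const bucketName = process.env.GCP_BUCKET_NAME as string;
  const hash = crypto.randomBytes(4).toString('hex');
  const destination = `requests/${requestNumber}/documents/${Date.now()}-${hash}-${file.originalname}`;

  await storage
    .bucket(bucketName)
    .file(destination)
    .save(file.buffer, { contentType: file.mimetype, resumable: false });

  // For a publicly readable bucket the object URL is deterministic;
  // a private bucket would return a signed URL instead.
  return `https://storage.googleapis.com/${bucketName}/${destination}`;
}
```
The returned URL is what gets stored in the `storage_url` column (step 4).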
### Download/Preview Flow
1. System checks if `storage_url` is a GCS URL
2. If GCS URL: Redirects to GCS public URL
3. If local path: Serves file from local storage
## Troubleshooting
### Files not uploading to GCS
- Check `.env` configuration matches your credentials
- Verify service account has correct permissions
- Check bucket name exists and is accessible
- Review application logs for GCS errors
### Files uploading but not accessible
- Verify bucket permissions (public read or signed URLs)
- Check CORS configuration if accessing from browser
- Ensure `storage_url` is being saved correctly in database
### Fallback to Local Storage
If GCS is not configured or fails, the system will:
- Log a warning
- Continue using local file storage
- Store local paths in database
## Testing
After configuration:
1. Upload a document via API
2. Check database - `storage_url` should contain GCS URL
3. Try downloading/previewing the document
4. Verify file is accessible at GCS URL
## Security Notes
- **Public Buckets**: Files are publicly accessible via URL
- **Private Buckets**: Consider using signed URLs for better security
- **Service Account**: Keep credentials file secure, never commit to git
- **Bucket Policies**: Configure bucket-level permissions as needed


@@ -1,124 +0,0 @@
# VAPID Key Generation Guide
## What are VAPID Keys?
VAPID (Voluntary Application Server Identification) keys are cryptographic keys used for web push notifications. They identify your application server to push services and ensure secure communication.
## Generating VAPID Keys
### Method 1: Using npx (Recommended - No Installation Required)
The easiest way to generate VAPID keys is using `npx`, which doesn't require any global installation:
```bash
npx web-push generate-vapid-keys
```
This command will output something like:
```
=======================================
Public Key:
BEl62iUYgUivxIkvpY5kXK3t3b9i5X8YzA1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7R8S9T0U1V2W3X4Y5Z6
Private Key:
aBcDeFgHiJkLmNoPqRsTuVwXyZ1234567890AbCdEfGhIjKlMnOpQrStUvWxYz
=======================================
```
### Method 2: Using Node.js Script
If you prefer to generate keys programmatically, you can create a simple script:
```javascript
// generate-vapid-keys.js
const webpush = require('web-push');
const vapidKeys = webpush.generateVAPIDKeys();
console.log('=======================================');
console.log('Public Key:');
console.log(vapidKeys.publicKey);
console.log('');
console.log('Private Key:');
console.log(vapidKeys.privateKey);
console.log('=======================================');
```
Then run:
```bash
node generate-vapid-keys.js
```
## Configuration
### Backend Configuration
Add the generated keys to your backend `.env` file:
```env
# Notification Service Worker credentials (Web Push / VAPID)
VAPID_PUBLIC_KEY=BEl62iUYgUivxIkvpY5kXK3t3b9i5X8YzA1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7R8S9T0U1V2W3X4Y5Z6
VAPID_PRIVATE_KEY=aBcDeFgHiJkLmNoPqRsTuVwXyZ1234567890AbCdEfGhIjKlMnOpQrStUvWxYz
VAPID_CONTACT=mailto:admin@royalenfield.com
```
**Important Notes:**
- The `VAPID_CONTACT` should be a valid `mailto:` URL
- Keep your `VAPID_PRIVATE_KEY` secure and **never commit it to version control**
- The private key should only be stored on your backend server
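For reference, the backend consumes these variables roughly as follows with the `web-push` package; the exact wiring in this codebase may differ, and the subscription object comes from the browser's Push API.
```typescript
import * as webpush from 'web-push';

// Configure VAPID once at startup from the environment variables above.
webpush.setVapidDetails(
  process.env.VAPID_CONTACT as string,     // must be a mailto: URL
  process.env.VAPID_PUBLIC_KEY as string,
  process.env.VAPID_PRIVATE_KEY as string
);

// Send a payload to a stored browser subscription.
export async function sendPush(
  subscription: webpush.PushSubscription,
  title: string,
  body: string
): Promise<void> {
  await webpush.sendNotification(subscription, JSON.stringify({ title, body }));
}
```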
### Frontend Configuration
Add the **SAME** `VAPID_PUBLIC_KEY` to your frontend `.env` file:
```env
# Push Notifications (Web Push / VAPID)
VITE_PUBLIC_VAPID_KEY=BEl62iUYgUivxIkvpY5kXK3t3b9i5X8YzA1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7R8S9T0U1V2W3X4Y5Z6
```
**Important:**
- Only the **public key** goes in the frontend
- The **private key** stays on the backend only
- Both frontend and backend must use the **same public key** from the same key pair
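On the frontend, the public key is passed to the browser's Push API when subscribing. A typical subscription call looks like the sketch below (Vite-style env access assumed); the resulting subscription object is then sent to the backend and stored for use with `sendNotification()`.
```typescript
// Convert the base64url-encoded VAPID public key into the Uint8Array the Push API expects.
function urlBase64ToUint8Array(base64String: string): Uint8Array {
  const padding = '='.repeat((4 - (base64String.length % 4)) % 4);
  const base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
  const rawData = atob(base64);
  return Uint8Array.from([...rawData].map((char) => char.charCodeAt(0)));
}

export async function subscribeToPush(): Promise<PushSubscription> {
  const registration = await navigator.serviceWorker.ready;
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: urlBase64ToUint8Array(import.meta.env.VITE_PUBLIC_VAPID_KEY),
  });
}
```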
## Security Best Practices
1. **Never commit private keys to version control**
- Add `.env` to `.gitignore`
- Use environment variables in production
2. **Use different keys for different environments**
- Development, staging, and production should have separate VAPID key pairs
3. **Keep private keys secure**
- Store them in secure environment variable management systems
- Use secrets management tools in production (AWS Secrets Manager, Azure Key Vault, etc.)
4. **Rotate keys if compromised**
- If a private key is ever exposed, generate new keys immediately
- Update both frontend and backend configurations
## Troubleshooting
### Issue: Push notifications not working
1. **Verify keys match**: Ensure the public key in frontend matches the public key in backend
2. **Check VAPID_CONTACT**: Must be a valid `mailto:` URL
3. **Verify HTTPS**: Web push requires HTTPS (except for localhost)
4. **Check browser console**: Look for errors in the browser console
5. **Verify service worker**: Ensure service worker is properly registered
### Issue: "Invalid VAPID key" error
- Ensure you're using the public key (not private key) in the frontend
- Verify the key format is correct (no extra spaces or line breaks)
- Make sure both frontend and backend are using keys from the same key pair
## Additional Resources
- [Web Push Protocol](https://web.dev/push-notifications-overview/)
- [VAPID Specification](https://tools.ietf.org/html/rfc8292)
- [web-push npm package](https://www.npmjs.com/package/web-push)

134
INSTALL_REDIS.txt Normal file

@@ -0,0 +1,134 @@
========================================
REDIS SETUP FOR TAT NOTIFICATIONS
========================================
-----------------------------------------
OPTION 1: UPSTASH (★ RECOMMENDED ★)
-----------------------------------------
✅ NO INSTALLATION NEEDED
✅ 100% FREE FOR DEVELOPMENT
✅ WORKS ON WINDOWS, MAC, LINUX
✅ PRODUCTION READY
SETUP (2 MINUTES):
1. Go to: https://console.upstash.com/
2. Sign up (GitHub/Google/Email)
3. Click "Create Database"
- Name: redis-tat-dev
- Type: Regional
- Region: Choose closest to you
- Click "Create"
4. Copy the Redis URL (looks like):
rediss://default:AbC123...@us1-mighty-12345.upstash.io:6379
5. Add to Re_Backend/.env:
REDIS_URL=rediss://default:AbC123...@us1-mighty-12345.upstash.io:6379
TAT_TEST_MODE=true
6. Restart backend:
cd Re_Backend
npm run dev
7. ✅ Done! Look for: "[TAT Queue] Connected to Redis"
-----------------------------------------
OPTION 2: DOCKER (IF YOU PREFER LOCAL)
-----------------------------------------
If you have Docker Desktop:
1. Run Redis container:
docker run -d --name redis-tat -p 6379:6379 redis:latest
2. Add to Re_Backend/.env:
REDIS_URL=redis://localhost:6379
TAT_TEST_MODE=true
3. Restart backend
-----------------------------------------
OPTION 3: PRODUCTION (LINUX SERVER)
-----------------------------------------
For Ubuntu/Debian servers:
1. Install Redis:
sudo apt update
sudo apt install redis-server -y
2. Enable and start:
sudo systemctl enable redis-server
sudo systemctl start redis-server
3. Verify:
redis-cli ping
# → PONG
4. Add to .env on server:
REDIS_URL=redis://localhost:6379
TAT_TEST_MODE=false
✅ FREE, NO LICENSE, PRODUCTION READY
-----------------------------------------
VERIFY CONNECTION
-----------------------------------------
After setup, check backend logs for:
✅ [TAT Queue] Connected to Redis
✅ [TAT Worker] Initialized and listening
Or test manually:
For Upstash:
- Use Upstash Console → CLI tab
- Type: PING → Should return PONG
For Local/Docker:
Test-NetConnection localhost -Port 6379
# Should show: TcpTestSucceeded : True
-----------------------------------------
RESTART BACKEND
-----------------------------------------
After Redis is running:
cd Re_Backend
npm run dev
You should see:
✅ [TAT Queue] Connected to Redis
✅ [TAT Worker] Initialized and listening
-----------------------------------------
TEST TAT NOTIFICATIONS
-----------------------------------------
1. Create a new workflow request
2. Set a short TAT (e.g., 2 hours for testing)
3. Submit the request
4. Check logs for:
- "TAT jobs scheduled"
- Notifications at 50%, 75%, 100%
For testing, you can modify working hours in:
Re_Backend/src/utils/tatTimeUtils.ts
-----------------------------------------
CURRENT STATUS
-----------------------------------------
❌ Redis: NOT RUNNING
❌ TAT Notifications: DISABLED
After installing Redis:
✅ Redis: RUNNING
✅ TAT Notifications: ENABLED
========================================

325
Jenkinsfile vendored

@@ -1,325 +0,0 @@
pipeline {
agent any
environment {
SSH_CREDENTIALS = 'cloudtopiaa'
REMOTE_SERVER = 'ubuntu@160.187.166.17'
PROJECT_NAME = 'Royal-Enfield-Backend'
DEPLOY_PATH = '/home/ubuntu/Royal-Enfield/Re_Backend'
GIT_CREDENTIALS = 'git-cred'
REPO_URL = 'https://git.tech4biz.wiki/laxmanhalaki/Re_Backend.git'
GIT_BRANCH = 'main'
NPM_PATH = '/home/ubuntu/.nvm/versions/node/v22.21.1/bin/npm'
NODE_PATH = '/home/ubuntu/.nvm/versions/node/v22.21.1/bin/node'
PM2_PATH = '/home/ubuntu/.nvm/versions/node/v22.21.1/bin/pm2'
PM2_APP_NAME = 'royal-enfield-backend'
APP_PORT = '5000'
EMAIL_RECIPIENT = 'laxman.halaki@tech4biz.org'
}
options {
timeout(time: 20, unit: 'MINUTES')
disableConcurrentBuilds()
timestamps()
buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '30'))
}
stages {
stage('Pre-deployment Check') {
steps {
script {
echo "═══════════════════════════════════════════"
echo "🚀 Starting ${PROJECT_NAME} Deployment"
echo "═══════════════════════════════════════════"
echo "Server: ${REMOTE_SERVER}"
echo "Deploy Path: ${DEPLOY_PATH}"
echo "PM2 App: ${PM2_APP_NAME}"
echo "Build #: ${BUILD_NUMBER}"
echo "═══════════════════════════════════════════"
}
}
}
stage('Pull Latest Code') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
withCredentials([usernamePassword(credentialsId: GIT_CREDENTIALS, usernameVariable: 'GIT_USER', passwordVariable: 'GIT_PASS')]) {
sh """
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 ${REMOTE_SERVER} << 'ENDSSH'
set -e
echo "📦 Git Operations..."
if [ -d "${DEPLOY_PATH}/.git" ]; then
cd ${DEPLOY_PATH}
echo "Configuring git..."
git config --global --add safe.directory ${DEPLOY_PATH}
git config credential.helper store
echo "Fetching updates..."
git fetch https://${GIT_USER}:${GIT_PASS}@git.tech4biz.wiki/laxmanhalaki/Re_Backend.git ${GIT_BRANCH}
CURRENT_COMMIT=\$(git rev-parse HEAD)
LATEST_COMMIT=\$(git rev-parse FETCH_HEAD)
if [ "\$CURRENT_COMMIT" = "\$LATEST_COMMIT" ]; then
echo "⚠️ Already up to date. No changes to deploy."
echo "Current: \$CURRENT_COMMIT"
else
echo "Pulling new changes..."
git reset --hard FETCH_HEAD
git clean -fd
echo "✓ Updated from \${CURRENT_COMMIT:0:7} to \${LATEST_COMMIT:0:7}"
fi
else
echo "Cloning repository..."
rm -rf ${DEPLOY_PATH}
mkdir -p /home/ubuntu/Royal-Enfield
cd /home/ubuntu/Royal-Enfield
git clone https://${GIT_USER}:${GIT_PASS}@git.tech4biz.wiki/laxmanhalaki/Re_Backend.git Re_Backend
cd ${DEPLOY_PATH}
git checkout ${GIT_BRANCH}
git config --global --add safe.directory ${DEPLOY_PATH}
echo "✓ Repository cloned successfully"
fi
cd ${DEPLOY_PATH}
echo "Current commit: \$(git log -1 --oneline)"
ENDSSH
"""
}
}
}
}
stage('Install Dependencies') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
sh """
ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} << 'ENDSSH'
set -e
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
cd ${DEPLOY_PATH}
echo "🔧 Environment Check..."
echo "Node: \$(${NODE_PATH} -v)"
echo "NPM: \$(${NPM_PATH} -v)"
echo ""
echo "📥 Installing Dependencies..."
${NPM_PATH} install --prefer-offline --no-audit --progress=false
echo ""
echo "✅ Dependencies installed successfully!"
ENDSSH
"""
}
}
}
stage('Build Application') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
sh """
ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} << 'ENDSSH'
set -e
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
cd ${DEPLOY_PATH}
echo "🔨 Building application..."
${NPM_PATH} run build
echo "✅ Build completed successfully!"
ENDSSH
"""
}
}
}
stage('Stop PM2 Process') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
sh """
ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} << 'ENDSSH'
set -e
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
echo "🛑 Stopping existing PM2 process..."
if ${PM2_PATH} list | grep -q "${PM2_APP_NAME}"; then
echo "Stopping ${PM2_APP_NAME}..."
${PM2_PATH} stop ${PM2_APP_NAME} || true
${PM2_PATH} delete ${PM2_APP_NAME} || true
echo "✓ Process stopped"
else
echo "No existing process found"
fi
ENDSSH
"""
}
}
}
stage('Start with PM2') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
sh """
ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} << 'ENDSSH'
set -e
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
cd ${DEPLOY_PATH}
echo "🚀 Starting application with PM2..."
# Start with PM2
${PM2_PATH} start ${NPM_PATH} --name "${PM2_APP_NAME}" -- start
echo ""
echo "⏳ Waiting for application to start..."
sleep 5
# Save PM2 configuration
${PM2_PATH} save
# Show PM2 status
echo ""
echo "📊 PM2 Process Status:"
${PM2_PATH} list
# Show logs (last 20 lines)
echo ""
echo "📝 Application Logs:"
${PM2_PATH} logs ${PM2_APP_NAME} --lines 20 --nostream || true
echo ""
echo "✅ Application started successfully!"
ENDSSH
"""
}
}
}
stage('Health Check') {
steps {
sshagent(credentials: [SSH_CREDENTIALS]) {
sh """
ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} << 'ENDSSH'
set -e
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
echo "🔍 Deployment Verification..."
# Check if PM2 process is running
if ${PM2_PATH} list | grep -q "${PM2_APP_NAME}.*online"; then
echo "✓ PM2 process is running"
else
echo "✗ PM2 process is NOT running!"
${PM2_PATH} logs ${PM2_APP_NAME} --lines 50 --nostream || true
exit 1
fi
# Check if port is listening
echo ""
echo "Checking if port ${APP_PORT} is listening..."
if ss -tuln | grep -q ":${APP_PORT} "; then
echo "✓ Application is listening on port ${APP_PORT}"
else
echo "⚠️ Port ${APP_PORT} not detected (may take a moment to start)"
fi
# Show process info
echo ""
echo "📊 Process Information:"
${PM2_PATH} info ${PM2_APP_NAME}
echo ""
echo "═══════════════════════════════════════════"
echo "✅ DEPLOYMENT SUCCESSFUL"
echo "═══════════════════════════════════════════"
ENDSSH
"""
}
}
}
}
post {
always {
cleanWs()
}
success {
script {
def duration = currentBuild.durationString.replace(' and counting', '')
mail to: "${EMAIL_RECIPIENT}",
subject: "✅ ${PROJECT_NAME} - Deployment Successful #${BUILD_NUMBER}",
body: """
Deployment completed successfully!
Project: ${PROJECT_NAME}
Build: #${BUILD_NUMBER}
Duration: ${duration}
Server: ${REMOTE_SERVER}
PM2 App: ${PM2_APP_NAME}
Port: ${APP_PORT}
Deployed at: ${new Date().format('yyyy-MM-dd HH:mm:ss')}
Console: ${BUILD_URL}console
Commands to manage:
- View logs: pm2 logs ${PM2_APP_NAME}
- Restart: pm2 restart ${PM2_APP_NAME}
- Stop: pm2 stop ${PM2_APP_NAME}
"""
}
}
failure {
script {
sshagent(credentials: [SSH_CREDENTIALS]) {
try {
def logs = sh(
script: """ssh -o StrictHostKeyChecking=no ${REMOTE_SERVER} '
export PATH="/home/ubuntu/.nvm/versions/node/v22.21.1/bin:\$PATH"
${PM2_PATH} logs ${PM2_APP_NAME} --lines 50 --nostream || echo "No logs available"
'""",
returnStdout: true
).trim()
mail to: "${EMAIL_RECIPIENT}",
subject: "❌ ${PROJECT_NAME} - Deployment Failed #${BUILD_NUMBER}",
body: """
Deployment FAILED!
Project: ${PROJECT_NAME}
Build: #${BUILD_NUMBER}
Server: ${REMOTE_SERVER}
Failed at: ${new Date().format('yyyy-MM-dd HH:mm:ss')}
Console Log: ${BUILD_URL}console
Recent PM2 Logs:
${logs}
Action required immediately!
"""
} catch (Exception e) {
mail to: "${EMAIL_RECIPIENT}",
subject: "❌ ${PROJECT_NAME} - Deployment Failed #${BUILD_NUMBER}",
body: """
Deployment FAILED!
Project: ${PROJECT_NAME}
Build: #${BUILD_NUMBER}
Server: ${REMOTE_SERVER}
Failed at: ${new Date().format('yyyy-MM-dd HH:mm:ss')}
Console Log: ${BUILD_URL}console
Could not retrieve PM2 logs. Please check manually.
"""
}
}
}
}
}
}


@@ -1,63 +0,0 @@
# Migration Merge Complete ✅
## Status: All Conflicts Resolved
Both migration files have been successfully merged with all conflicts resolved.
## Files Merged
### 1. `src/scripts/auto-setup.ts`
- **Status**: Clean, no conflict markers
- **Migrations**: All 40 migrations in correct order
- **Format**: Uses `require()` for CommonJS compatibility
### 2. `src/scripts/migrate.ts`
- **Status**: Clean, no conflict markers
- **Migrations**: All 40 migrations in correct order
- **Format**: Uses ES6 `import * as` syntax
## Migration Order (Final)
### Base Branch Migrations (m0-m29)
1. m0-m27: Core system migrations
2. m28: `20250130-migrate-to-vertex-ai`
3. m29: `20251203-add-user-notification-preferences`
### Dealer Claim Branch Migrations (m30-m39)
4. m30: `20251210-add-workflow-type-support`
5. m31: `20251210-enhance-workflow-templates`
6. m32: `20251210-add-template-id-foreign-key`
7. m33: `20251210-create-dealer-claim-tables`
8. m34: `20251210-create-proposal-cost-items-table`
9. m35: `20251211-create-internal-orders-table`
10. m36: `20251211-create-claim-budget-tracking-table`
11. m37: `20251213-drop-claim-details-invoice-columns`
12. m38: `20251213-create-claim-invoice-credit-note-tables`
13. m39: `20251214-create-dealer-completion-expenses`
## Verification
✅ No conflict markers (`<<<<<<<`, `=======`, `>>>>>>>`) found
✅ All migrations properly ordered
✅ Base branch migrations come first
✅ Dealer claim migrations follow
✅ Both files synchronized
## Next Steps
1. **If you see conflicts in your IDE/Git client:**
- Refresh your IDE/editor
- Run `git status` to check Git state
- If conflicts show in Git, run: `git add src/scripts/auto-setup.ts src/scripts/migrate.ts`
2. **Test the migrations:**
```bash
npm run migrate
# or
npm run setup
```
## Files Are Ready ✅
Both files are properly merged and ready to use. All 40 migrations are in the correct order with base branch migrations first, followed by dealer claim branch migrations.

350
README.md

@@ -50,21 +50,9 @@ A comprehensive backend API for the Royal Enfield Workflow Management System bui
```
3. **Setup environment**
**Option A: Automated Setup (Recommended - Unix/Linux/Mac)**
```bash
chmod +x setup-env.sh
./setup-env.sh
# Follow the interactive prompts
# The script will generate secure secrets automatically
```
**Option B: Manual Setup (Windows or Custom Configuration)**
```bash
cp env.example .env
# Edit .env with your configuration
# Generate JWT secrets manually:
# openssl rand -base64 32 | tr -d "=+/" | cut -c1-32
```
4. **Setup database**
@@ -86,14 +74,8 @@ The API will be available at `http://localhost:5000`
### Docker Setup
```bash
# Setup environment (use automated script or manual)
# Option 1: Automated (if running on Unix/Linux/Mac host)
chmod +x setup-env.sh
./setup-env.sh
# Option 2: Manual
# Copy environment file
cp env.example .env
# Edit .env with your configuration
# Start services
docker-compose up --build -d
@@ -102,291 +84,25 @@ docker-compose up --build -d
docker-compose logs -f
```
**Note:** Ensure your `.env` file is properly configured before starting Docker containers.
## API Endpoints
### Base URL
All API endpoints are prefixed with `/api/v1` unless otherwise specified.
### Authentication
- `POST /api/v1/auth/sso-callback` - SSO callback from frontend
- `GET /api/v1/auth/me` - Get current user profile
- `POST /api/v1/auth/refresh` - Refresh access token
- `POST /api/v1/auth/logout` - Logout user
- `GET /api/v1/auth/validate` - Validate token
### Authentication & Authorization
- `GET /health` - API health status (no auth required)
- `GET /api/v1/config` - Get public system configuration (no auth required)
### Authentication Endpoints (`/api/v1/auth`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `POST` | `/token-exchange` | Token exchange for localhost development | No |
| `POST` | `/sso-callback` | SSO callback from frontend | No |
| `POST` | `/refresh` | Refresh access token | No |
| `GET` | `/me` | Get current user profile | Yes |
| `GET` | `/validate` | Validate authentication token | Yes |
| `POST` | `/logout` | Logout user and clear cookies | Optional |
### Workflow Management (`/api/v1/workflows`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/` | List all workflows | Yes |
| `GET` | `/my` | List workflows created by current user | Yes |
| `GET` | `/open-for-me` | List workflows open for current user | Yes |
| `GET` | `/closed-by-me` | List workflows closed by current user | Yes |
| `POST` | `/` | Create new workflow | Yes |
| `POST` | `/multipart` | Create workflow with file uploads | Yes |
| `GET` | `/:id` | Get workflow by ID | Yes |
| `GET` | `/:id/details` | Get detailed workflow information | Yes |
| `PUT` | `/:id` | Update workflow | Yes |
| `PUT` | `/:id/multipart` | Update workflow with file uploads | Yes |
| `PATCH` | `/:id/submit` | Submit workflow for approval | Yes |
### Approval Management (`/api/v1/workflows`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/:id/approvals` | Get all approval levels for workflow | Yes |
| `GET` | `/:id/approvals/current` | Get current approval level | Yes |
| `PATCH` | `/:id/approvals/:levelId/approve` | Approve at specific level | Yes (Approver) |
| `PATCH` | `/:id/approvals/:levelId/reject` | Reject at specific level | Yes (Approver) |
| `POST` | `/:id/approvals/:levelId/skip` | Skip approver at level | Yes (Initiator/Approver) |
| `POST` | `/:id/approvers/at-level` | Add approver at specific level | Yes (Initiator/Approver) |
### Participants Management (`/api/v1/workflows`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `POST` | `/:id/participants/approver` | Add approver to workflow | Yes |
| `POST` | `/:id/participants/spectator` | Add spectator to workflow | Yes |
### Work Notes (`/api/v1/workflows`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/:id/work-notes` | Get all work notes for workflow | Yes |
| `POST` | `/:id/work-notes` | Create work note with attachments | Yes |
| `GET` | `/work-notes/attachments/:attachmentId/preview` | Preview work note attachment | Yes |
| `GET` | `/work-notes/attachments/:attachmentId/download` | Download work note attachment | Yes |
### Document Management (`/api/v1/workflows` & `/api/v1/documents`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/workflows/documents/:documentId/preview` | Preview workflow document | Yes |
| `GET` | `/workflows/documents/:documentId/download` | Download workflow document | Yes |
| `POST` | `/documents` | Upload document (multipart/form-data) | Yes |
### Activities & Notifications (`/api/v1/workflows`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/:id/activity` | Get activity log for workflow | Yes |
| `POST` | `/notifications/subscribe` | Subscribe to push notifications | Yes |
| `POST` | `/notifications/test` | Send test push notification | Yes |
### User Management (`/api/v1/users`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/search?q=<email or name>` | Search users by email or name | Yes |
| `POST` | `/ensure` | Ensure user exists (create if not) | Yes |
### Dashboard & Analytics (`/api/v1/dashboard`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/kpis` | Get KPI summary (all KPI cards) | Yes |
| `GET` | `/stats/requests` | Get detailed request statistics | Yes |
| `GET` | `/stats/tat-efficiency` | Get TAT efficiency metrics | Yes |
| `GET` | `/stats/approver-load` | Get approver load statistics | Yes |
| `GET` | `/stats/engagement` | Get engagement & quality metrics | Yes |
| `GET` | `/stats/ai-insights` | Get AI & closure insights | Yes |
| `GET` | `/stats/ai-remark-utilization` | Get AI remark utilization with trends | Yes |
| `GET` | `/stats/approver-performance` | Get approver performance metrics | Yes |
| `GET` | `/stats/by-department` | Get department-wise summary | Yes |
| `GET` | `/stats/priority-distribution` | Get priority distribution | Yes |
| `GET` | `/activity/recent` | Get recent activity feed | Yes |
| `GET` | `/requests/critical` | Get high priority/critical requests | Yes |
| `GET` | `/deadlines/upcoming` | Get upcoming deadlines | Yes |
| `GET` | `/reports/lifecycle` | Get Request Lifecycle Report | Yes |
| `GET` | `/reports/activity-log` | Get enhanced User Activity Log Report | Yes |
| `GET` | `/reports/workflow-aging` | Get Workflow Aging Report | Yes |
### Notifications (`/api/v1/notifications`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/?page=&limit=&unreadOnly=` | Get user's notifications (paginated) | Yes |
| `GET` | `/unread-count` | Get unread notification count | Yes |
| `PATCH` | `/:notificationId/read` | Mark notification as read | Yes |
| `POST` | `/mark-all-read` | Mark all notifications as read | Yes |
| `DELETE` | `/:notificationId` | Delete notification | Yes |
### TAT (Turnaround Time) Monitoring (`/api/v1/tat`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/alerts/request/:requestId` | Get all TAT alerts for a request | Yes |
| `GET` | `/alerts/level/:levelId` | Get TAT alerts for a specific level | Yes |
| `GET` | `/compliance/summary?startDate=&endDate=` | Get TAT compliance summary | Yes |
| `GET` | `/breaches` | Get TAT breach report | Yes |
| `GET` | `/performance/:approverId` | Get TAT performance for approver | Yes |
### AI & Conclusion Management (`/api/v1/conclusions` & `/api/v1/ai`)
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `POST` | `/conclusions/:requestId/generate` | Generate AI-powered conclusion remark | Yes (Initiator) |
| `PUT` | `/conclusions/:requestId` | Update conclusion remark | Yes (Initiator) |
| `POST` | `/conclusions/:requestId/finalize` | Finalize conclusion and close request | Yes (Initiator) |
| `GET` | `/conclusions/:requestId` | Get conclusion for a request | Yes |
| `GET` | `/ai/status` | Get AI service status | Yes (Admin) |
| `POST` | `/ai/reinitialize` | Reinitialize AI service | Yes (Admin) |
### Admin Management (`/api/v1/admin`)
**All admin routes require authentication and admin role.**
#### Holiday Management
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/holidays?year=` | Get all holidays (optional year filter) |
| `GET` | `/holidays/calendar/:year` | Get holiday calendar for specific year |
| `POST` | `/holidays` | Create new holiday |
| `PUT` | `/holidays/:holidayId` | Update holiday |
| `DELETE` | `/holidays/:holidayId` | Delete (deactivate) holiday |
| `POST` | `/holidays/bulk-import` | Bulk import holidays from CSV/JSON |
#### Configuration Management
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/configurations?category=` | Get all admin configurations |
| `PUT` | `/configurations/:configKey` | Update configuration value |
| `POST` | `/configurations/:configKey/reset` | Reset configuration to default |
#### User Role Management (RBAC)
| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/users/assign-role` | Assign role to user by email |
| `PUT` | `/users/:userId/role` | Update user's role |
| `GET` | `/users/by-role?role=` | Get users filtered by role |
| `GET` | `/users/role-statistics` | Get count of users in each role |
### Debug Endpoints (`/api/v1/debug`)
**Note:** Debug endpoints are for development/testing purposes.
| Method | Endpoint | Description | Auth Required |
|--------|----------|-------------|---------------|
| `GET` | `/tat-jobs/:requestId` | Check scheduled TAT jobs for request | No |
| `GET` | `/tat-jobs` | Check all queued TAT jobs | No |
| `POST` | `/tat-calculate` | Calculate TAT times (debug) | No |
| `GET` | `/queue-status` | Check queue and worker status | No |
| `POST` | `/trigger-test-tat` | Manually trigger test TAT job | No |
### Health Check
- `GET /health` - API health status
- `GET /api/v1/health` - Detailed health check
## Environment Variables
### Quick Setup Options
1. **Automated Setup (Recommended)**
```bash
chmod +x setup-env.sh
./setup-env.sh
```
The script will:
- Guide you through all required configuration
- Automatically generate secure JWT and session secrets
- Provide VAPID key generation instructions for web push notifications
- Create a properly formatted `.env` file
2. **Manual Setup**
- Copy `env.example` to `.env`
- Fill in all required values
- Generate secrets using:
```bash
openssl rand -base64 32 | tr -d "=+/" | cut -c1-32
```
### Required Environment Variables
#### Application Configuration
- `NODE_ENV` - Environment (development/production)
- `PORT` - Server port (default: 5000)
- `BASE_URL` - Backend deployed URL
- `FRONTEND_URL` - Frontend URL for CORS
#### Database Configuration
- `DB_HOST` - PostgreSQL host (default: localhost)
- `DB_PORT` - PostgreSQL port (default: 5432)
- `DB_NAME` - Database name (default: re_workflow_db)
- `DB_USER` - Database user
- `DB_PASSWORD` - Database password
#### Authentication & Security
- `JWT_SECRET` - JWT signing secret (min 32 characters, auto-generated by setup script)
- `JWT_EXPIRY` - JWT expiration time (default: 24h)
- `REFRESH_TOKEN_SECRET` - Refresh token secret (auto-generated by setup script)
- `REFRESH_TOKEN_EXPIRY` - Refresh token expiration (default: 7d)
- `SESSION_SECRET` - Session secret (min 32 characters, auto-generated by setup script)
#### SSO Configuration (Okta)
- `OKTA_DOMAIN` - Okta domain URL
- `OKTA_CLIENT_ID` - Okta application client ID
- `OKTA_CLIENT_SECRET` - Okta application client secret
- `OKTA_API_TOKEN` - Okta API token (optional, for user management)
#### Web Push Notifications (VAPID)
- `VAPID_PUBLIC_KEY` - VAPID public key for web push
- `VAPID_PRIVATE_KEY` - VAPID private key for web push
- `VAPID_CONTACT` - Contact email (format: mailto:admin@example.com)
**To generate VAPID keys:**
```bash
npx web-push generate-vapid-keys
```
The setup script provides detailed instructions, or run:
```bash
./setup-env.sh
# Select option 2 for VAPID key generation instructions
```
#### Redis Configuration (TAT Queue)
- `REDIS_URL` - Redis connection URL (default: redis://localhost:6379)
#### Optional Services
**Email Service (SMTP)**
- `SMTP_HOST` - SMTP server host
- `SMTP_PORT` - SMTP port (default: 587)
- `SMTP_SECURE` - Use TLS (default: false)
- `SMTP_USER` - SMTP username
- `SMTP_PASSWORD` - SMTP password
- `EMAIL_FROM` - Sender email address
**Cloud Storage (GCP)**
- `GCP_PROJECT_ID` - Google Cloud Project ID
- `GCP_BUCKET_NAME` - GCS bucket name
- `GCP_KEY_FILE` - Path to GCP service account key file
**AI Service (Claude)**
- `CLAUDE_MODEL` - Claude model name (default: claude-sonnet-4-20250514)
### Environment Variables Reference
See `env.example` for the complete list with default values and descriptions.
See `env.example` for all required environment variables.
## Development
### Prerequisites
- Backend API server running
- PostgreSQL database configured and accessible
- Environment variables set up (use `./setup-env.sh` or manually)
- Redis server (optional, for TAT queue)
### Available Scripts
```bash
# Run in development mode
npm run dev
@@ -404,18 +120,6 @@ npm run type-check
npm run build
```
### Setup Checklist
- [ ] Node.js 22.x LTS installed
- [ ] PostgreSQL 16.x installed and running
- [ ] Environment variables configured (`.env` file created)
- [ ] Database created: `createdb re_workflow_db` (or your DB_NAME)
- [ ] Database schema applied (if applicable)
- [ ] Redis installed and running (for TAT monitoring)
- [ ] VAPID keys generated (for web push notifications)
- [ ] Optional: SMTP configured (for email notifications)
- [ ] Optional: GCP credentials configured (for cloud storage)
## Project Structure
```
@@ -449,39 +153,11 @@ The database schema includes all tables from the ERD:
## Authentication Flow
1. Frontend handles SSO authentication (Okta/Auth0)
1. Frontend handles SSO authentication
2. Frontend sends user data to `/api/v1/auth/sso-callback`
3. Backend creates/updates user record
4. Backend generates JWT access and refresh tokens
4. Backend generates JWT tokens
5. Frontend uses tokens for subsequent API calls
6. Tokens are automatically refreshed when expired
## Web Push Notifications
The backend supports web push notifications via VAPID (Voluntary Application Server Identification).
### Setup Instructions
1. **Generate VAPID Keys:**
```bash
npx web-push generate-vapid-keys
```
2. **Add to Backend `.env`:**
```
VAPID_PUBLIC_KEY=<your-public-key>
VAPID_PRIVATE_KEY=<your-private-key>
VAPID_CONTACT=mailto:admin@royalenfield.com
```
3. **Add to Frontend `.env`:**
```
VITE_PUBLIC_VAPID_KEY=<same-public-key-as-backend>
```
4. **Important:** The VAPID public key must be the same in both backend and frontend `.env` files.
The `setup-env.sh` script provides detailed VAPID key generation instructions (select option 2).
## Contributing


@@ -50,47 +50,6 @@
{
"name": "Authentication",
"item": [
{
"name": "Login with Username/Password",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Okta username (email)\n \"username\": \"user@royalenfield.com\",\n \n // Okta password\n \"password\": \"YourOktaPassword123\"\n}"
},
"url": {
"raw": "{{baseUrl}}/auth/login",
"host": ["{{baseUrl}}"],
"path": ["auth", "login"]
},
"description": "Authenticate with username (Okta email) and password.\n\nFlow:\n1. Validates credentials against Okta\n2. If user exists in Okta but not in our DB: creates user automatically\n3. Returns JWT access token and refresh token\n\nPerfect for:\n- Postman testing\n- Mobile apps\n- API clients\n- Development/testing\n\nResponse includes:\n- User profile (created if didn't exist)\n- Access token (24hr validity)\n- Refresh token (7 days validity)"
},
"response": [],
"event": [
{
"listen": "test",
"script": {
"exec": [
"// Auto-save access token for subsequent requests",
"if (pm.response.code === 200) {",
" var jsonData = pm.response.json();",
" if (jsonData.data && jsonData.data.accessToken) {",
" pm.collectionVariables.set('accessToken', jsonData.data.accessToken);",
" console.log('✅ Access token saved to collection variable');",
" }",
"}"
],
"type": "text/javascript"
}
}
]
},
{
"name": "Token Exchange (Development)",
"request": {
@@ -260,19 +219,6 @@
},
"description": "Ensure user exists in database - creates if not exists"
}
},
{
"name": "Get Public Configurations",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/users/configurations",
"host": ["{{baseUrl}}"],
"path": ["users", "configurations"]
},
"description": "Get public configurations (document policy, workflow sharing, TAT settings)"
}
}
]
},
@@ -293,7 +239,7 @@
}
},
{
"name": "List My Requests (DEPRECATED)",
"name": "List My Requests",
"request": {
"method": "GET",
"header": [],
@@ -302,75 +248,7 @@
"host": ["{{baseUrl}}"],
"path": ["workflows", "my"]
},
"description": "DEPRECATED - Use /participant-requests instead. Get workflows where user is participant"
}
},
{
"name": "List Participant Requests",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/workflows/participant-requests?page=1&limit=10",
"host": ["{{baseUrl}}"],
"path": ["workflows", "participant-requests"],
"query": [
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "10",
"description": "Items per page"
},
{
"key": "status",
"value": "",
"description": "Filter by status (optional)",
"disabled": true
},
{
"key": "priority",
"value": "",
"description": "Filter by priority (optional)",
"disabled": true
}
]
},
"description": "Get all requests where user is initiator OR participant (approver/spectator) - for regular users' All Requests page"
}
},
{
"name": "List My Initiated Requests",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/workflows/my-initiated?page=1&limit=10",
"host": ["{{baseUrl}}"],
"path": ["workflows", "my-initiated"],
"query": [
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "10",
"description": "Items per page"
},
{
"key": "status",
"value": "",
"description": "Filter by status (optional)",
"disabled": true
}
]
},
"description": "Get only requests where current user is the initiator - for My Requests page"
"description": "Get workflows initiated by current user"
}
},
{
@@ -400,7 +278,7 @@
}
},
{
"name": "Create Workflow (JSON) - Simplified",
"name": "Create Workflow (JSON)",
"request": {
"method": "POST",
"header": [
@@ -411,18 +289,18 @@
],
"body": {
"mode": "raw",
"raw": "{\n \"templateType\": \"CUSTOM\",\n \"title\": \"Purchase Order Approval for Office Equipment\",\n \"description\": \"Approval needed for purchasing new office equipment including laptops, monitors, and office furniture. Total budget: $50,000\",\n \"priority\": \"STANDARD\",\n \"approvalLevels\": [\n {\n \"email\": \"manager@royalenfield.com\",\n \"tatHours\": 24\n },\n {\n \"email\": \"director@royalenfield.com\",\n \"tatHours\": 48\n },\n {\n \"email\": \"cfo@royalenfield.com\",\n \"tatHours\": 72\n }\n ],\n \"spectators\": [\n {\n \"email\": \"hr@royalenfield.com\"\n },\n {\n \"email\": \"finance@royalenfield.com\"\n }\n ]\n}"
"raw": "{\n // Request title - brief description\n \"requestTitle\": \"Purchase Order Approval for Office Equipment\",\n \n // Detailed description of the request\n \"requestDescription\": \"Approval needed for purchasing new office equipment including laptops, monitors, and office furniture. Total budget: $50,000\",\n \n // Priority: STANDARD | EXPRESS\n \"priority\": \"STANDARD\",\n \n // Department requesting approval\n \"requestingDepartment\": \"IT\",\n \n // Category of request\n \"requestCategory\": \"PURCHASE_ORDER\",\n \n // Approvers list - array of approval levels\n \"approvers\": [\n {\n // Approver's email\n \"email\": \"manager@example.com\",\n \n // TAT (Turn Around Time) in hours\n \"tatHours\": 24,\n \n // Level number (sequential)\n \"level\": 1\n },\n {\n \"email\": \"director@example.com\",\n \"tatHours\": 48,\n \"level\": 2\n },\n {\n \"email\": \"cfo@example.com\",\n \"tatHours\": 72,\n \"level\": 3\n }\n ],\n \n // Spectators (optional) - users who can view but not approve\n \"spectators\": [\n {\n \"email\": \"hr@example.com\"\n },\n {\n \"email\": \"finance@example.com\"\n }\n ],\n \n // Document IDs (if documents uploaded separately)\n \"documentIds\": []\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows",
"host": ["{{baseUrl}}"],
"path": ["workflows"]
},
"description": "Create new workflow request with JSON payload. Backend automatically:\n- Finds/creates users from Okta/AD\n- Generates level names from designation/department\n- Auto-detects final approver (last level)\n- Sets proper permissions\n\nOnly email and tatHours required per approver!"
"description": "Create new workflow request with JSON payload"
}
},
{
"name": "Create Workflow (Multipart with Files) - Simplified",
"name": "Create Workflow (Multipart with Files)",
"request": {
"method": "POST",
"header": [],
@ -430,22 +308,52 @@
"mode": "formdata",
"formdata": [
{
"key": "payload",
"value": "{\"templateType\":\"CUSTOM\",\"title\":\"Purchase Order Approval with Documents\",\"description\":\"Approval needed for office equipment purchase with supporting documents\",\"priority\":\"STANDARD\",\"approvalLevels\":[{\"email\":\"manager@royalenfield.com\",\"tatHours\":24},{\"email\":\"director@royalenfield.com\",\"tatHours\":48}],\"spectators\":[{\"email\":\"hr@royalenfield.com\"}]}",
"key": "requestTitle",
"value": "Purchase Order Approval for Office Equipment",
"type": "text",
"description": "JSON payload with simplified format (email + tatHours only)"
"description": "Request title"
},
{
"key": "requestDescription",
"value": "Approval needed for purchasing new office equipment",
"type": "text",
"description": "Detailed description"
},
{
"key": "priority",
"value": "STANDARD",
"type": "text",
"description": "STANDARD or EXPRESS"
},
{
"key": "requestingDepartment",
"value": "IT",
"type": "text",
"description": "Department name"
},
{
"key": "requestCategory",
"value": "PURCHASE_ORDER",
"type": "text",
"description": "Category of request"
},
{
"key": "approvers",
"value": "[{\"email\":\"manager@example.com\",\"tatHours\":24,\"level\":1},{\"email\":\"director@example.com\",\"tatHours\":48,\"level\":2}]",
"type": "text",
"description": "JSON array of approvers"
},
{
"key": "spectators",
"value": "[{\"email\":\"hr@example.com\"}]",
"type": "text",
"description": "JSON array of spectators (optional)"
},
{
"key": "files",
"type": "file",
"src": [],
"description": "Upload files (multiple files supported)"
},
{
"key": "category",
"value": "SUPPORTING",
"type": "text",
"description": "Document category: SUPPORTING | APPROVAL | REFERENCE | FINAL | OTHER"
}
]
},
@ -454,7 +362,7 @@
"host": ["{{baseUrl}}"],
"path": ["workflows", "multipart"]
},
"description": "Create workflow with file uploads. Backend automatically:\n- Finds/creates users from Okta/AD\n- Generates level names\n- Auto-detects final approver\n- Uploads and attaches documents\n\nOnly email and tatHours required per approver!"
"description": "Create workflow with file uploads using multipart/form-data"
}
},
{
@ -583,64 +491,6 @@
"description": "Submit workflow for approval (changes status from DRAFT to OPEN)"
}
},
{
"name": "Add Approver at Level - Simplified",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n \"email\": \"newapprover@royalenfield.com\",\n \"tatHours\": 24,\n \"level\": 2\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows/:id/approvers/at-level",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "approvers", "at-level"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001",
"description": "Workflow ID or Request Number"
}
]
},
"description": "Add a new approver at specific level. Backend automatically:\n- Finds/creates user from Okta/AD\n- Generates level name from designation/department\n- Shifts existing levels if needed\n- Updates final approver flag\n- Sends notifications\n\nOnly email, tatHours, and level required!"
}
},
{
"name": "Add Spectator - Simplified",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n \"email\": \"spectator@royalenfield.com\"\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows/:id/participants/spectator",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "participants", "spectator"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001",
"description": "Workflow ID or Request Number"
}
]
},
"description": "Add a spectator to request. Backend automatically:\n- Finds/creates user from Okta/AD\n- Sets spectator permissions (view + comment, no download)\n- Sends notification\n\nOnly email required!"
}
},
{
"name": "Get Workflow Activity",
"request": {
@ -659,109 +509,6 @@
},
"description": "Get activity log/history for a workflow"
}
},
{
"name": "Pause Workflow",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Reason for pausing the workflow\n \"reason\": \"Waiting for additional documentation from vendor\",\n \n // Expected resume date (optional)\n \"expectedResumeDate\": \"2024-12-30T00:00:00Z\"\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows/:id/pause",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "pause"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001"
}
]
},
"description": "Pause a workflow (approver only). Sets status to PAUSED."
}
},
{
"name": "Resume Workflow",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Reason for resuming (optional)\n \"reason\": \"Documentation received, continuing approval process\"\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows/:id/resume",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "resume"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001"
}
]
},
"description": "Resume a paused workflow (approver who paused or initiator)"
}
},
{
"name": "Retrigger Pause",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Message to approver requesting resume\n \"message\": \"Documents have been uploaded. Please review and resume the workflow.\"\n}"
},
"url": {
"raw": "{{baseUrl}}/workflows/:id/pause/retrigger",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "pause", "retrigger"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001"
}
]
},
"description": "Initiator requests approver to resume paused workflow (sends notification)"
}
},
{
"name": "Get Pause Details",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/workflows/:id/pause",
"host": ["{{baseUrl}}"],
"path": ["workflows", ":id", "pause"],
"variable": [
{
"key": "id",
"value": "REQ-2024-0001"
}
]
},
"description": "Get pause details for a workflow (pause history, current pause status)"
}
}
]
},
@ -1565,137 +1312,6 @@
},
"description": "Get priority distribution statistics"
}
},
{
"name": "Get Lifecycle Report",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/dashboard/reports/lifecycle?dateRange=month",
"host": ["{{baseUrl}}"],
"path": ["dashboard", "reports", "lifecycle"],
"query": [
{
"key": "dateRange",
"value": "month",
"description": "Date range: today, week, month, quarter, year, all"
},
{
"key": "startDate",
"value": "",
"description": "Custom start date (ISO format)",
"disabled": true
},
{
"key": "endDate",
"value": "",
"description": "Custom end date (ISO format)",
"disabled": true
}
]
},
"description": "Get request lifecycle report with stage-wise breakdown"
}
},
{
"name": "Get Activity Log Report",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/dashboard/reports/activity-log?page=1&limit=50",
"host": ["{{baseUrl}}"],
"path": ["dashboard", "reports", "activity-log"],
"query": [
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "50",
"description": "Items per page"
},
{
"key": "dateRange",
"value": "month",
"description": "Date range filter",
"disabled": true
},
{
"key": "userId",
"value": "",
"description": "Filter by user ID",
"disabled": true
}
]
},
"description": "Get enhanced user activity log report with detailed actions"
}
},
{
"name": "Get Workflow Aging Report",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/dashboard/reports/workflow-aging",
"host": ["{{baseUrl}}"],
"path": ["dashboard", "reports", "workflow-aging"]
},
"description": "Get workflow aging report showing requests by age bucket"
}
},
{
"name": "Get Departments Metadata",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/dashboard/metadata/departments",
"host": ["{{baseUrl}}"],
"path": ["dashboard", "metadata", "departments"]
},
"description": "Get list of all departments (for filtering)"
}
},
{
"name": "Get Requests By Approver",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/dashboard/requests/by-approver?approverId=approver-uuid-here&page=1&limit=10",
"host": ["{{baseUrl}}"],
"path": ["dashboard", "requests", "by-approver"],
"query": [
{
"key": "approverId",
"value": "approver-uuid-here",
"description": "Approver's user ID"
},
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "10",
"description": "Items per page"
},
{
"key": "status",
"value": "",
"description": "Filter by status (optional)",
"disabled": true
}
]
},
"description": "Get requests handled by a specific approver (for performance analysis)"
}
}
]
},
@ -2131,190 +1747,6 @@
}
]
},
{
"name": "Summaries",
"item": [
{
"name": "Create Summary",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Request ID of the closed workflow\n \"requestId\": \"request-uuid-here\",\n \n // Summary title (optional - defaults to request title)\n \"title\": \"Summary: Purchase Order Approval\",\n \n // Summary content/notes (optional - can be AI generated)\n \"content\": \"This purchase order was approved with all requirements met.\",\n \n // Whether to generate AI summary\n \"generateAISummary\": true\n}"
},
"url": {
"raw": "{{baseUrl}}/summaries",
"host": ["{{baseUrl}}"],
"path": ["summaries"]
},
"description": "Create a summary for a closed workflow request (initiator only)"
}
},
{
"name": "List My Summaries",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/my?page=1&limit=10",
"host": ["{{baseUrl}}"],
"path": ["summaries", "my"],
"query": [
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "10",
"description": "Items per page"
}
]
},
"description": "List summaries created by current user"
}
},
{
"name": "List Shared Summaries",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/shared?page=1&limit=10",
"host": ["{{baseUrl}}"],
"path": ["summaries", "shared"],
"query": [
{
"key": "page",
"value": "1",
"description": "Page number"
},
{
"key": "limit",
"value": "10",
"description": "Items per page"
}
]
},
"description": "List summaries shared with current user"
}
},
{
"name": "Get Summary By Request ID",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/request/:requestId",
"host": ["{{baseUrl}}"],
"path": ["summaries", "request", ":requestId"],
"variable": [
{
"key": "requestId",
"value": "request-uuid-here",
"description": "Request ID to get summary for"
}
]
},
"description": "Get summary by workflow request ID"
}
},
{
"name": "Get Summary Details",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/:summaryId",
"host": ["{{baseUrl}}"],
"path": ["summaries", ":summaryId"],
"variable": [
{
"key": "summaryId",
"value": "summary-uuid-here",
"description": "Summary ID"
}
]
},
"description": "Get summary details by summary ID"
}
},
{
"name": "Share Summary",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": "{\n // Array of user IDs to share with\n \"userIds\": [\"user-uuid-1\", \"user-uuid-2\"],\n \n // Optional message to include\n \"message\": \"Please review this workflow summary\",\n \n // Optional: Share with all participants of the original request\n \"shareWithParticipants\": false\n}"
},
"url": {
"raw": "{{baseUrl}}/summaries/:summaryId/share",
"host": ["{{baseUrl}}"],
"path": ["summaries", ":summaryId", "share"],
"variable": [
{
"key": "summaryId",
"value": "summary-uuid-here"
}
]
},
"description": "Share summary with other users"
}
},
{
"name": "Get Shared Recipients",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/:summaryId/recipients",
"host": ["{{baseUrl}}"],
"path": ["summaries", ":summaryId", "recipients"],
"variable": [
{
"key": "summaryId",
"value": "summary-uuid-here"
}
]
},
"description": "Get list of users this summary has been shared with"
}
},
{
"name": "Mark Shared Summary as Viewed",
"request": {
"method": "PATCH",
"header": [],
"url": {
"raw": "{{baseUrl}}/summaries/shared/:sharedSummaryId/view",
"host": ["{{baseUrl}}"],
"path": ["summaries", "shared", ":sharedSummaryId", "view"],
"variable": [
{
"key": "sharedSummaryId",
"value": "shared-summary-uuid-here",
"description": "Shared summary record ID"
}
]
},
"description": "Mark a shared summary as viewed by current user"
}
}
]
},
{
"name": "Debug & Testing",
"item": [

View File

@ -1,216 +0,0 @@
# Testing GCS File Uploads from Frontend
## ✅ Pre-Testing Checklist
Before testing, ensure the following are configured:
### 1. Environment Variables (.env file)
Make sure your `.env` file has these values:
```env
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_BUCKET_NAME=your-bucket-name-here
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
```
**Important:**
- Replace `your-bucket-name-here` with your actual GCS bucket name
- Ensure the credentials file path is correct
- The credentials file should exist at the specified path
### 2. GCS Bucket Setup
- [ ] Bucket exists in GCP Console
- [ ] Service account has permissions (Storage Object Admin)
- [ ] Bucket is accessible (public or with proper IAM)
### 3. Backend Server
- [ ] Backend server is running
- [ ] Check backend logs for GCS initialization message:
```
[GCS] Initialized successfully
```
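If you want to confirm this programmatically rather than by scanning logs, a minimal startup check along these lines could produce the log output above. This is a sketch only — it assumes `@google-cloud/storage` and the environment variables from step 1, not the project's actual initialization code:
```typescript
// Sketch: verify GCS configuration at startup (hypothetical helper, not the real init code).
import { Storage } from '@google-cloud/storage';

export async function verifyGcs(): Promise<void> {
  const { GCP_PROJECT_ID, GCP_BUCKET_NAME, GCP_KEY_FILE } = process.env;
  if (!GCP_PROJECT_ID || !GCP_BUCKET_NAME || !GCP_KEY_FILE) {
    console.warn('[GCS] GCP configuration missing. File uploads will fail.');
    return;
  }
  const storage = new Storage({ projectId: GCP_PROJECT_ID, keyFilename: GCP_KEY_FILE });
  const [exists] = await storage.bucket(GCP_BUCKET_NAME).exists();
  if (exists) {
    console.log('[GCS] Initialized successfully', { projectId: GCP_PROJECT_ID, bucketName: GCP_BUCKET_NAME });
  } else {
    console.warn('[GCS] Bucket not found or not accessible:', GCP_BUCKET_NAME);
  }
}
```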
## 🧪 Testing Steps
### Test 1: Upload Document (Standalone)
1. **Navigate to a Request Detail page**
- Open any existing workflow request
- Go to the "Documents" tab
2. **Upload a document**
- Click "Upload Document" or browse button
- Select a file (PDF, DOCX, etc.)
- Wait for upload to complete
3. **Verify in Backend Logs:**
```
[GCS] File uploaded successfully
```
4. **Check Database:**
- `storage_url` field should contain GCS URL like:
```
https://storage.googleapis.com/BUCKET_NAME/requests/REQ-2025-12-0001/documents/...
```
5. **Verify in GCS Console:**
- Go to GCS Console
- Navigate to: `requests/{requestNumber}/documents/`
- File should be there
### Test 2: Upload Document During Workflow Creation
1. **Create New Workflow**
- Go to "Create Request"
- Fill in workflow details
- In "Documents" step, upload files
- Submit workflow
2. **Verify:**
- Check backend logs for GCS upload
- Check GCS bucket: `requests/{requestNumber}/documents/`
- Files should be organized by request number
### Test 3: Upload Work Note Attachment
1. **Open Work Notes/Chat**
- Go to any request
- Open the work notes/chat section
2. **Attach File to Comment**
- Type a comment
- Click attachment icon
- Select a file
- Send the comment
3. **Verify:**
- Check backend logs
- Check GCS bucket: `requests/{requestNumber}/attachments/`
- File should appear in attachments folder
### Test 4: Download/Preview Files
1. **Download Document**
- Click download on any document
- Should redirect to GCS URL or download from GCS
2. **Preview Document**
- Click preview on any document
- Should open from GCS URL
## 🔍 What to Check
### Backend Logs
**Success:**
```
[GCS] Initialized successfully { projectId: '...', bucketName: '...' }
[GCS] File uploaded successfully { fileName: '...', gcsPath: '...' }
```
**Error (Falls back to local):**
```
[GCS] GCP configuration missing. File uploads will fail.
[GCS] GCS upload failed, falling back to local storage
```
### Database Verification
Check the `documents` and `work_note_attachments` tables:
```sql
-- Check documents
SELECT document_id, file_name, storage_url, file_path
FROM documents
WHERE request_id = 'YOUR_REQUEST_ID';
-- Check attachments
SELECT attachment_id, file_name, storage_url, file_path
FROM work_note_attachments
WHERE note_id IN (
SELECT note_id FROM work_notes WHERE request_id = 'YOUR_REQUEST_ID'
);
```
**Expected:**
- `storage_url` should contain GCS URL (if GCS configured)
- `file_path` should contain GCS path like `requests/REQ-2025-12-0001/documents/...`
### GCS Console Verification
1. Go to [GCS Console](https://console.cloud.google.com/storage)
2. Navigate to your bucket
3. Check folder structure:
```
requests/
├── REQ-2025-12-0001/
│ ├── documents/
│ │ └── {timestamp}-{hash}-{filename}
│ └── attachments/
│ └── {timestamp}-{hash}-{filename}
```
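Based on this layout, each object key is presumably assembled from the request number, a folder per category, and a timestamped file name. A minimal sketch of that idea (the helper name and hash length are illustrative, not the actual backend code):
```typescript
// Sketch: build an object key matching the folder structure above (illustrative names only).
import { randomBytes } from 'crypto';

function buildGcsPath(
  requestNumber: string,
  originalName: string,
  kind: 'documents' | 'attachments'
): string {
  const hash = randomBytes(4).toString('hex');              // short random segment
  const safeName = originalName.replace(/[^\w.\-]+/g, '_'); // keep object keys URL-friendly
  return `requests/${requestNumber}/${kind}/${Date.now()}-${hash}-${safeName}`;
}

// buildGcsPath('REQ-2025-12-0001', 'invoice.pdf', 'documents')
// -> requests/REQ-2025-12-0001/documents/1733900000000-a1b2c3d4-invoice.pdf
```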
## 🐛 Troubleshooting
### Issue: Files not uploading to GCS
**Check:**
1. `.env` file has correct values
2. Credentials file exists at specified path
3. Service account has correct permissions
4. Bucket name is correct
5. Backend logs for errors
**Solution:**
- System will automatically fall back to local storage
- Fix configuration and restart backend
- Re-upload files
### Issue: "GCP configuration missing" in logs
**Cause:** Missing or incorrect environment variables
**Fix:**
```env
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_BUCKET_NAME=your-actual-bucket-name
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
```
### Issue: "Key file not found"
**Cause:** Credentials file path is incorrect
**Fix:**
- Verify file exists at: `Re_Backend/credentials/re-platform-workflow-dealer-3d5738fcc1f9.json`
- Update `GCP_KEY_FILE` path in `.env` if needed
### Issue: Files upload but can't download/preview
**Cause:** Bucket permissions or CORS configuration
**Fix:**
- Check bucket IAM permissions
- Verify CORS is configured (see GCP_STORAGE_SETUP.md)
- Check if bucket is public or using signed URLs
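If the bucket is kept private, an alternative to public access is serving short-lived signed URLs from the backend. A minimal sketch using `@google-cloud/storage` (the function name and env usage are illustrative, not the project's actual download handler):
```typescript
// Sketch: generate a time-limited download link for a private object (illustrative helper).
import { Storage } from '@google-cloud/storage';

const storage = new Storage({
  projectId: process.env.GCP_PROJECT_ID,
  keyFilename: process.env.GCP_KEY_FILE,
});

export async function getDownloadUrl(gcsPath: string): Promise<string> {
  const [url] = await storage
    .bucket(process.env.GCP_BUCKET_NAME as string)
    .file(gcsPath)
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
    });
  return url;
}
```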
## ✅ Success Indicators
- ✅ Backend logs show "GCS Initialized successfully"
- ✅ Files upload without errors
- ✅ Database `storage_url` contains GCS URLs
- ✅ Files visible in GCS Console under correct folder structure
- ✅ Downloads/previews work from GCS URLs
- ✅ Files organized by request number with documents/attachments separation
## 📝 Notes
- **No Frontend Changes Required:** The frontend uses the same API endpoints
- **Automatic Fallback:** If GCS is not configured, system uses local storage
- **Backward Compatible:** Existing local files continue to work
- **Folder Structure:** Files are automatically organized by request number

Binary file not shown.


File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,2 +0,0 @@
import{a as s}from"./index-7JN9lLwu.js";import"./radix-vendor-DIkYAdWy.js";import"./charts-vendor-Bme4E5cb.js";import"./utils-vendor-DNMmNUQL.js";import"./ui-vendor-DbB0YGPu.js";import"./socket-vendor-TjCxX7sJ.js";import"./redux-vendor-tbZCm13o.js";import"./router-vendor-B1UBYWWO.js";async function m(n){return(await s.post(`/conclusions/${n}/generate`)).data.data}async function f(n,t){return(await s.post(`/conclusions/${n}/finalize`,{finalRemark:t})).data.data}async function d(n){var t;try{return(await s.get(`/conclusions/${n}`)).data.data}catch(o){if(((t=o.response)==null?void 0:t.status)===404)return null;throw o}}export{f as finalizeConclusion,m as generateConclusion,d as getConclusion};
//# sourceMappingURL=conclusionApi-CMghC3Jo.js.map

View File

@ -1 +0,0 @@
{"version":3,"file":"conclusionApi-CMghC3Jo.js","sources":["../../src/services/conclusionApi.ts"],"sourcesContent":["import apiClient from './authApi';\r\n\r\nexport interface ConclusionRemark {\r\n conclusionId: string;\r\n requestId: string;\r\n aiGeneratedRemark: string | null;\r\n aiModelUsed: string | null;\r\n aiConfidenceScore: number | null;\r\n finalRemark: string | null;\r\n editedBy: string | null;\r\n isEdited: boolean;\r\n editCount: number;\r\n approvalSummary: any;\r\n documentSummary: any;\r\n keyDiscussionPoints: string[];\r\n generatedAt: string | null;\r\n finalizedAt: string | null;\r\n createdAt: string;\r\n updatedAt: string;\r\n}\r\n\r\n/**\r\n * Generate AI-powered conclusion remark\r\n */\r\nexport async function generateConclusion(requestId: string): Promise<{\r\n conclusionId: string;\r\n aiGeneratedRemark: string;\r\n keyDiscussionPoints: string[];\r\n confidence: number;\r\n generatedAt: string;\r\n}> {\r\n const response = await apiClient.post(`/conclusions/${requestId}/generate`);\r\n return response.data.data;\r\n}\r\n\r\n/**\r\n * Update conclusion remark (edit by initiator)\r\n */\r\nexport async function updateConclusion(requestId: string, finalRemark: string): Promise<ConclusionRemark> {\r\n const response = await apiClient.put(`/conclusions/${requestId}`, { finalRemark });\r\n return response.data.data;\r\n}\r\n\r\n/**\r\n * Finalize conclusion and close request\r\n */\r\nexport async function finalizeConclusion(requestId: string, finalRemark: string): Promise<{\r\n conclusionId: string;\r\n requestNumber: string;\r\n status: string;\r\n finalRemark: string;\r\n finalizedAt: string;\r\n}> {\r\n const response = await apiClient.post(`/conclusions/${requestId}/finalize`, { finalRemark });\r\n return response.data.data;\r\n}\r\n\r\n/**\r\n * Get conclusion for a request\r\n * Returns null if conclusion doesn't exist (404) instead of throwing error\r\n */\r\nexport async function getConclusion(requestId: string): Promise<ConclusionRemark | null> {\r\n try {\r\n const response = await apiClient.get(`/conclusions/${requestId}`);\r\n return response.data.data;\r\n } catch (error: any) {\r\n // Handle 404 gracefully - conclusion doesn't exist yet, which is normal\r\n if (error.response?.status === 404) {\r\n return null;\r\n }\r\n // Re-throw other errors\r\n throw error;\r\n }\r\n}\r\n\r\n"],"names":["generateConclusion","requestId","apiClient","finalizeConclusion","finalRemark","getConclusion","error","_a"],"mappings":"6RAwBA,eAAsBA,EAAmBC,EAMtC,CAED,OADiB,MAAMC,EAAU,KAAK,gBAAgBD,CAAS,WAAW,GAC1D,KAAK,IACvB,CAaA,eAAsBE,EAAmBF,EAAmBG,EAMzD,CAED,OADiB,MAAMF,EAAU,KAAK,gBAAgBD,CAAS,YAAa,CAAE,YAAAG,EAAa,GAC3E,KAAK,IACvB,CAMA,eAAsBC,EAAcJ,EAAqD,OACvF,GAAI,CAEF,OADiB,MAAMC,EAAU,IAAI,gBAAgBD,CAAS,EAAE,GAChD,KAAK,IACvB,OAASK,EAAY,CAEnB,KAAIC,EAAAD,EAAM,WAAN,YAAAC,EAAgB,UAAW,IAC7B,OAAO,KAGT,MAAMD,CACR,CACF"}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown.


File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown.


File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,69 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<!-- CSP: Allows blob URLs for file previews and cross-origin API calls during development -->
<meta http-equiv="Content-Security-Policy" content="default-src 'self' blob:; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; script-src 'self'; img-src 'self' data: https: blob:; connect-src 'self' blob: data: http://localhost:5000 http://localhost:3000 ws://localhost:5000 ws://localhost:3000 wss://localhost:5000 wss://localhost:3000; frame-src 'self' blob:; font-src 'self' https://fonts.gstatic.com data:; object-src 'none'; base-uri 'self'; form-action 'self';" />
<link rel="icon" type="image/svg+xml" href="/royal_enfield_logo.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="description" content="Royal Enfield Approval & Request Management Portal - Streamlined approval workflows for enterprise operations" />
<meta name="theme-color" content="#2d4a3e" />
<title>Royal Enfield | Approval Portal</title>
<!-- Preload critical fonts and icons -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Ensure proper icon rendering and layout -->
<style>
/* Ensure Lucide icons render properly */
svg {
display: inline-block;
vertical-align: middle;
}
/* Fix for icon alignment in buttons */
button svg {
flex-shrink: 0;
}
/* Ensure proper text rendering */
body {
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-rendering: optimizeLegibility;
}
/* Fix for mobile viewport and sidebar */
@media (max-width: 768px) {
html {
overflow-x: hidden;
}
}
/* Ensure proper sidebar toggle behavior */
.sidebar-toggle {
transition: all 0.3s ease-in-out;
}
/* Fix for icon button hover states */
button:hover svg {
transform: scale(1.05);
transition: transform 0.2s ease;
}
</style>
<script type="module" crossorigin src="/assets/index-7JN9lLwu.js"></script>
<link rel="modulepreload" crossorigin href="/assets/charts-vendor-Bme4E5cb.js">
<link rel="modulepreload" crossorigin href="/assets/radix-vendor-DIkYAdWy.js">
<link rel="modulepreload" crossorigin href="/assets/utils-vendor-DNMmNUQL.js">
<link rel="modulepreload" crossorigin href="/assets/ui-vendor-DbB0YGPu.js">
<link rel="modulepreload" crossorigin href="/assets/socket-vendor-TjCxX7sJ.js">
<link rel="modulepreload" crossorigin href="/assets/redux-vendor-tbZCm13o.js">
<link rel="modulepreload" crossorigin href="/assets/router-vendor-B1UBYWWO.js">
<link rel="stylesheet" crossorigin href="/assets/index-B-mLDzJe.css">
</head>
<body>
<div id="root"></div>
</body>
</html>

Binary file not shown.


File diff suppressed because one or more lines are too long


View File

@ -1,26 +0,0 @@
self.addEventListener('push', event => {
const data = event.data ? event.data.json() : {};
const title = data.title || 'Notification';
console.log('notification data received', data);
const rawUrl = data.url || (data.requestNumber ? `/request/${data.requestNumber}` : '/');
const absoluteUrl = /^https?:\/\//i.test(rawUrl) ? rawUrl : (self.location.origin + rawUrl);
const options = {
body: data.body || 'New message',
icon: '/royal_enfield_logo.png',
badge: '/royal_enfield_logo.png',
data: { url: absoluteUrl }
};
console.log('options', options);
event.waitUntil(self.registration.showNotification(title, options));
});
self.addEventListener('notificationclick', function (event) {
event.notification.close();
const targetUrl = (event.notification && event.notification.data && event.notification.data.url) || (self.location.origin + '/');
event.waitUntil((async () => {
// Always open a new window/tab to ensure SPA router picks up the correct path
if (clients.openWindow) return clients.openWindow(targetUrl);
})());
});

View File

@ -1,2 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>


View File

@ -1,13 +0,0 @@
{
"type": "service_account",
"project_id": "re-platform-workflow-dealer",
"private_key_id": "3d5738fcc1f9d44e4521f86d690d09317cb40f3b",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC2PM6D3wtRBSHm\nrbDqPraUt+EdJkZDOABC2U7sLeO8fIJjdwC3tzDBiCFJkUF2LoFWgI4S0xNFLNk2\njvK6+J8DsTv1feZ6UwrFazYyC1Xexcm6OAQBsfIZdVHsBjzOLNvVI/83Sl+siQv9\nKteN/OoBnAC+ietxk9RGzW706m6irte7nJ4BhOW+SDMaB8QuKJFSDQfpraLL7osI\ntcxG+n7LhO+Qi4slvcCrEIo0jUEHREbjlBagjpJnfnuVpi2Le0UBzRd8seMCzH3c\n4d4sxSI6ChaJBldv0TRKSpj2O0Vc35+tGCd0D/iUzSlLvdMkv7ettYGSjhM2rL4b\nc6O0vQbDAgMBAAECggEAWgki0v5Qvf+2Jx0rah//3uwWOWuejTlOz7hDiOaHPKmb\nVf8GiL3mRce3AnzUhcomNpGfH+fO/n9Q9eacQAnzrkRTZk+Enm0GxlDY3tLA4yZ/\nKxTfzeKXxUI0blMKmaaKGf0F69BAAqNXHAadptYM2yyzJXBItb2exDhdGH32mULI\nG8ZPFnw+pNwJkxGPy60CZvbbwTp4dfGwVabPLx08B0hRLjggke0dCm7I5SgPxTwa\nrqemkF0M+OMGNi87eTuhgYVG8ApGgW11fvFOtvQBZ9VCQgQiqLl4nvraSdGBmKtf\nZQKxsqMHfpqrcndF7m07hWgk/mn6rRnsnj8BHn0XcQKBgQDyFjO9SEl++byJ99FJ\nEsUvj6IUq22wejJNMtdyETyT5rPCD3nMiPjU73HbYsggZ0m1C5cCU9VIHPRy3jy+\nO3WW2pv5YeIyUmfZqk5FWJktFOPisDEggZAOZE3D9V47tfvd7L5uK5yo83ncDRrz\n8p60v7imf2eMKdTjF8wB08xkCQKBgQDAtgycmJmQrbTjj5CJD1EWADxMBeIyBoNW\nV6qHCiKOdNw+NME0RDhy5Uuv70bjHnc41fhHRZprzoUjpNQSEbgg/eQI7dKKQjHP\n4ISb9y7rbfIbV9BUvR+TLTBEyTxknPmwRnknYmSy9e4XjzZOduGgZ0glFPIJWKkR\nYozHimk/awKBgQCWwkbUUKkcfw/v57mYxSyxUsSAFMYJif+7XbcX3S4ZeSlm59ZV\nDtPPX5JLKngw3cHkEmSnWWfQMd/1jPrNCSBQorFRm6iO6AyuW8XEn8k8bu7/4/Ok\nJ6t7mvFm4G4fx1Qjv2RUHarA+GdiJ3MqimRVcbPfVCY6/m4KQm6UkL6PaQKBgGLg\nhZQLkC91kSx5SvWoEDizojx3gFmekeDJVku3XYeuWhrowoDox/XbxHvez4ZU6WMW\nFi+rfNH3wsRJHC6xPMJgwpH6RF6AHELGtgO4TjCp1uFEqzXvW7YOJ4gDoKMXD93s\nKtmUWIqiOKmJ55lW0emVVKUCHDXDcevjnsv7LolFAoGAeDo7II0y/iUtb9Pni8V2\nnqwdZ9h+RyxD8ua374/fTTnwKDrt8+XkL1oU2Zca6aaF5NDudjta9ZyLuga3/RjH\nCKOyT1nuWBKW67fVS7yosOCksoFygs5O/ZvfC3D1b7hrJN8oaMJCECB5sJSCjyM9\nyjsJCTPGSnE9LKEJURCZYsM=\n-----END PRIVATE KEY-----\n",
"client_email": "re-bridge-workflow@re-platform-workflow-dealer.iam.gserviceaccount.com",
"client_id": "108776059196607325512",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/re-bridge-workflow%40re-platform-workflow-dealer.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}

56
debug_tat_alerts.sql Normal file
View File

@ -0,0 +1,56 @@
-- Debug script to check TAT alerts
-- Run this to see if alerts are being created
-- 1. Check if tat_alerts table exists
SELECT
table_name,
column_name,
data_type
FROM information_schema.columns
WHERE table_name = 'tat_alerts'
ORDER BY ordinal_position;
-- 2. Count total TAT alerts
SELECT COUNT(*) as total_alerts FROM tat_alerts;
-- 3. Show recent TAT alerts (if any)
SELECT
alert_id,
threshold_percentage,
alert_sent_at,
alert_message,
metadata
FROM tat_alerts
ORDER BY alert_sent_at DESC
LIMIT 5;
-- 4. Check approval levels with TAT status
SELECT
level_id,
request_id,
level_number,
approver_name,
tat_hours,
status,
tat50_alert_sent,
tat75_alert_sent,
tat_breached,
tat_start_time
FROM approval_levels
WHERE tat_start_time IS NOT NULL
ORDER BY tat_start_time DESC
LIMIT 5;
-- 5. Check if Redis is needed (are there any pending/in-progress levels?)
SELECT
w.request_number,
al.level_number,
al.approver_name,
al.status,
al.level_start_time,
al.tat_hours
FROM approval_levels al
JOIN workflow_requests w ON al.request_id = w.request_id
WHERE al.status IN ('PENDING', 'IN_PROGRESS')
ORDER BY al.level_start_time DESC;

View File

@ -1,228 +0,0 @@
# =============================================================================
# RE Workflow - Full Stack Docker Compose
# Includes: Application + Database + Monitoring Stack
# =============================================================================
# Usage:
# docker-compose -f docker-compose.full.yml up -d
# =============================================================================
version: '3.8'
services:
# ===========================================================================
# APPLICATION SERVICES
# ===========================================================================
postgres:
image: postgres:16-alpine
container_name: re_workflow_db
environment:
POSTGRES_USER: ${DB_USER:-laxman}
POSTGRES_PASSWORD: ${DB_PASSWORD:-Admin@123}
POSTGRES_DB: ${DB_NAME:-re_workflow_db}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./database/schema:/docker-entrypoint-initdb.d
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-laxman}"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: re_workflow_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
backend:
build:
context: .
dockerfile: Dockerfile
container_name: re_workflow_backend
environment:
NODE_ENV: development
DB_HOST: postgres
DB_PORT: 5432
DB_USER: ${DB_USER:-laxman}
DB_PASSWORD: ${DB_PASSWORD:-Admin@123}
DB_NAME: ${DB_NAME:-re_workflow_db}
REDIS_URL: redis://redis:6379
PORT: 5000
# Loki for logging
LOKI_HOST: http://loki:3100
ports:
- "5000:5000"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
volumes:
- ./logs:/app/logs
- ./uploads:/app/uploads
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:5000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})\""]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# ===========================================================================
# MONITORING SERVICES
# ===========================================================================
prometheus:
image: prom/prometheus:v2.47.2
container_name: re_prometheus
ports:
- "9090:9090"
volumes:
- ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- ./monitoring/prometheus/alert.rules.yml:/etc/prometheus/alert.rules.yml:ro
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=15d'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--web.enable-lifecycle'
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
loki:
image: grafana/loki:2.9.2
container_name: re_loki
ports:
- "3100:3100"
volumes:
- ./monitoring/loki/loki-config.yml:/etc/loki/local-config.yaml:ro
- loki_data:/loki
command: -config.file=/etc/loki/local-config.yaml
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1"]
interval: 30s
timeout: 10s
retries: 5
promtail:
image: grafana/promtail:2.9.2
container_name: re_promtail
volumes:
- ./monitoring/promtail/promtail-config.yml:/etc/promtail/config.yml:ro
- ./logs:/var/log/app:ro
- promtail_data:/tmp/promtail
command: -config.file=/etc/promtail/config.yml
depends_on:
- loki
networks:
- re_workflow_network
restart: unless-stopped
grafana:
image: grafana/grafana:10.2.2
container_name: re_grafana
ports:
- "3001:3000"
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=REWorkflow@2024
- GF_USERS_ALLOW_SIGN_UP=false
- GF_FEATURE_TOGGLES_ENABLE=publicDashboards
- GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource,grafana-piechart-panel
volumes:
- grafana_data:/var/lib/grafana
- ./monitoring/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:ro
- ./monitoring/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards:ro
- ./monitoring/grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- prometheus
- loki
networks:
- re_workflow_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
node-exporter:
image: prom/node-exporter:v1.6.1
container_name: re_node_exporter
ports:
- "9100:9100"
networks:
- re_workflow_network
restart: unless-stopped
alertmanager:
image: prom/alertmanager:v0.26.0
container_name: re_alertmanager
ports:
- "9093:9093"
volumes:
- ./monitoring/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
- alertmanager_data:/alertmanager
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--storage.path=/alertmanager'
networks:
- re_workflow_network
restart: unless-stopped
# ===========================================================================
# NETWORKS
# ===========================================================================
networks:
re_workflow_network:
driver: bridge
name: re_workflow_network
# ===========================================================================
# VOLUMES
# ===========================================================================
volumes:
postgres_data:
name: re_postgres_data
redis_data:
name: re_redis_data
prometheus_data:
name: re_prometheus_data
loki_data:
name: re_loki_data
promtail_data:
name: re_promtail_data
grafana_data:
name: re_grafana_data
alertmanager_data:
name: re_alertmanager_data

View File

@ -1,726 +0,0 @@
# AI Conclusion Remark Generation Documentation
## Table of Contents
1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Configuration](#configuration)
4. [API Usage](#api-usage)
5. [Implementation Details](#implementation-details)
6. [Prompt Engineering](#prompt-engineering)
7. [Error Handling](#error-handling)
8. [Best Practices](#best-practices)
9. [Troubleshooting](#troubleshooting)
---
## Overview
The AI Conclusion Remark Generation feature automatically generates professional, context-aware conclusion remarks for workflow requests that have been approved or rejected. This feature uses **Google Cloud Vertex AI Gemini** to analyze the entire request lifecycle and create a comprehensive summary suitable for permanent archiving.
### Key Features
- **Vertex AI Integration**: Uses Google Cloud Vertex AI Gemini with service account authentication
- **Context-Aware**: Analyzes approval flow, work notes, documents, and activities
- **Configurable**: Admin-configurable max length, model selection, and enable/disable
- **Automatic Generation**: Can be triggered automatically when a request is approved/rejected
- **Manual Generation**: Users can regenerate conclusions on demand
- **Editable**: Generated remarks can be edited before finalization
- **Enterprise Security**: Uses same service account credentials as Google Cloud Storage
### Use Cases
1. **Automatic Generation**: When the final approver approves/rejects a request, an AI conclusion is generated in the background
2. **Manual Generation**: Initiator can click "Generate AI Conclusion" button to create or regenerate a conclusion
3. **Finalization**: Initiator reviews, edits (if needed), and finalizes the conclusion to close the request
---
## Architecture
### Component Diagram
```
┌─────────────────────────────────────────────────────────────┐
│ Frontend (React) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ useConclusionRemark Hook │ │
│ │ - handleGenerateConclusion() │ │
│ │ - handleFinalizeConclusion() │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ conclusionApi Service │ │
│ │ - generateConclusion(requestId) │ │
│ │ - finalizeConclusion(requestId, remark) │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│ HTTP API
┌─────────────────────────────────────────────────────────────┐
│ Backend (Node.js/Express) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ ConclusionController │ │
│ │ - generateConclusion() │ │
│ │ - finalizeConclusion() │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ AIService │ │
│ │ - generateConclusionRemark(context) │ │
│ │ - buildConclusionPrompt(context) │ │
│ │ - extractKeyPoints(remark) │ │
│ │ - calculateConfidence(remark, context) │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Vertex AI Gemini (Google Cloud) │ │
│ │ - VertexAI Client │ │
│ │ - Service Account Authentication │ │
│ │ - Gemini Models (gemini-2.5-flash, etc.) │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Database (PostgreSQL) │ │
│ │ - conclusion_remarks table │ │
│ │ - workflow_requests table │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
### Data Flow
1. **Request Approval/Rejection** → `ApprovalService.approveLevel()`
- Automatically triggers AI generation in background
- Saves to `conclusion_remarks` table
2. **Manual Generation** → `ConclusionController.generateConclusion()`
- User clicks "Generate AI Conclusion"
- Fetches request context
- Calls `AIService.generateConclusionRemark()`
- Returns generated remark
3. **Finalization** → `ConclusionController.finalizeConclusion()`
- User reviews and edits (optional)
- Submits final remark
- Updates request status to `CLOSED`
- Saves `finalRemark` to database
---
## Configuration
### Environment Variables
```bash
# Google Cloud Configuration (required - same as GCS)
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
# Vertex AI Configuration (optional - defaults provided)
VERTEX_AI_MODEL=gemini-2.5-flash
VERTEX_AI_LOCATION=asia-south1
AI_ENABLED=true
```
**Note**: The service account key file is the same one used for Google Cloud Storage, ensuring consistent authentication across services.
### Admin Configuration (Database)
The system reads configuration from the `system_config` table. Key settings:
| Config Key | Default | Description |
|------------|---------|-------------|
| `AI_ENABLED` | `true` | Enable/disable all AI features |
| `AI_REMARK_GENERATION_ENABLED` | `true` | Enable/disable conclusion generation |
| `AI_MAX_REMARK_LENGTH` | `2000` | Maximum characters for generated remarks |
| `VERTEX_AI_MODEL` | `gemini-2.5-flash` | Vertex AI Gemini model name |
### Available Models
| Model Name | Description | Use Case |
|------------|-------------|----------|
| `gemini-2.5-flash` | Latest fast model (default) | General purpose, quick responses |
| `gemini-1.5-flash` | Previous fast model | General purpose |
| `gemini-1.5-pro` | Advanced model | Complex tasks, better quality |
| `gemini-1.5-pro-latest` | Latest Pro version | Best quality, complex reasoning |
### Supported Regions
| Region Code | Location | Availability |
|-------------|----------|--------------|
| `us-central1` | Iowa, USA | ✅ |
| `us-east1` | South Carolina, USA | ✅ |
| `us-west1` | Oregon, USA | ✅ |
| `europe-west1` | Belgium | ✅ |
| `asia-south1` | Mumbai, India | ✅ (Current default) |
**Note**: Model and region are configured via environment variables, not database config.
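Putting the two sources together, the effective settings can be thought of as database overrides for the feature flags and length, with model and region coming from the environment. The sketch below only illustrates that precedence — the actual config loader is not shown here and the names are assumptions:
```typescript
// Sketch: resolve effective AI settings (illustrative; the real loader may differ).
interface AiSettings {
  enabled: boolean;
  remarkGenerationEnabled: boolean;
  maxRemarkLength: number;
  model: string;
  location: string;
}

export function resolveAiSettings(dbConfig: Record<string, string | undefined>): AiSettings {
  return {
    // Feature flags and length come from system_config, falling back to documented defaults.
    enabled: (dbConfig.AI_ENABLED ?? 'true') === 'true',
    remarkGenerationEnabled: (dbConfig.AI_REMARK_GENERATION_ENABLED ?? 'true') === 'true',
    maxRemarkLength: Number(dbConfig.AI_MAX_REMARK_LENGTH ?? 2000),
    // Model and region come from environment variables, per the note above.
    model: process.env.VERTEX_AI_MODEL ?? 'gemini-2.5-flash',
    location: process.env.VERTEX_AI_LOCATION ?? 'asia-south1',
  };
}
```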
---
## API Usage
### Generate AI Conclusion
**Endpoint**: `POST /api/v1/conclusions/:requestId/generate`
**Authentication**: Required (JWT token)
**Authorization**: Only the request initiator can generate conclusions
**Request**:
```http
POST /api/v1/conclusions/REQ-2025-00123/generate
Authorization: Bearer <token>
```
**Response** (Success - 200):
```json
{
"success": true,
"data": {
"conclusionId": "concl-123",
"aiGeneratedRemark": "This request for [title] was approved through [levels]...",
"keyDiscussionPoints": [
"Approved by John Doe at Level 1",
"TAT compliance: 85%",
"3 documents attached"
],
"confidence": 0.85,
"generatedAt": "2025-01-15T10:30:00Z",
"provider": "Vertex AI (Gemini)"
}
}
```
**Response** (Error - 403):
```json
{
"success": false,
"error": "Only the initiator can generate conclusion remarks"
}
```
**Response** (Error - 400):
```json
{
"success": false,
"error": "Conclusion can only be generated for approved or rejected requests"
}
```
### Finalize Conclusion
**Endpoint**: `POST /api/v1/conclusions/:requestId/finalize`
**Authentication**: Required (JWT token)
**Authorization**: Only the request initiator can finalize
**Request**:
```http
POST /api/v1/conclusions/REQ-2025-00123/finalize
Authorization: Bearer <token>
Content-Type: application/json
{
"finalRemark": "This request was approved through all levels. The implementation will begin next week."
}
```
**Response** (Success - 200):
```json
{
"success": true,
"data": {
"conclusionId": "concl-123",
"finalRemark": "This request was approved through all levels...",
"finalizedAt": "2025-01-15T10:35:00Z",
"requestStatus": "CLOSED"
}
}
```
### Get Existing Conclusion
**Endpoint**: `GET /api/v1/conclusions/:requestId`
**Response**:
```json
{
"success": true,
"data": {
"conclusionId": "concl-123",
"requestId": "REQ-2025-00123",
"aiGeneratedRemark": "Generated text...",
"finalRemark": "Finalized text...",
"isEdited": true,
"editCount": 2,
"aiModelUsed": "Vertex AI (Gemini)",
"aiConfidenceScore": 0.85,
"keyDiscussionPoints": ["Point 1", "Point 2"],
"generatedAt": "2025-01-15T10:30:00Z",
"finalizedAt": "2025-01-15T10:35:00Z"
}
}
```
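On the frontend, these three endpoints are wrapped by `src/services/conclusionApi.ts` (`generateConclusion`, `finalizeConclusion`, `getConclusion`). A simplified usage example — the import alias and the surrounding component logic are assumptions, not the actual UI code:
```typescript
// Simplified flow: reuse or generate a conclusion, then finalize it to close the request.
import { generateConclusion, finalizeConclusion, getConclusion } from '@/services/conclusionApi';

async function closeRequest(requestId: string): Promise<void> {
  // getConclusion returns null (not an error) when no conclusion exists yet.
  const existing = await getConclusion(requestId);
  const draft =
    existing?.aiGeneratedRemark ?? (await generateConclusion(requestId)).aiGeneratedRemark;

  // In the UI the initiator can edit `draft` before submitting it.
  const result = await finalizeConclusion(requestId, draft);
  console.log('Closed', result.requestNumber, 'with status', result.status); // expected: CLOSED
}
```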
---
## Implementation Details
### Context Data Structure
The `generateConclusionRemark()` method accepts a context object with the following structure:
```typescript
interface ConclusionContext {
requestTitle: string;
requestDescription: string;
requestNumber: string;
priority: string;
approvalFlow: Array<{
levelNumber: number;
approverName: string;
status: 'APPROVED' | 'REJECTED' | 'PENDING' | 'IN_PROGRESS';
comments?: string;
actionDate?: string;
tatHours?: number;
elapsedHours?: number;
tatPercentageUsed?: number;
}>;
workNotes: Array<{
userName: string;
message: string;
createdAt: string;
}>;
documents: Array<{
fileName: string;
uploadedBy: string;
uploadedAt: string;
}>;
activities: Array<{
type: string;
action: string;
details: string;
timestamp: string;
}>;
rejectionReason?: string;
rejectedBy?: string;
}
```
### Generation Process
1. **Context Collection**:
- Fetches request details from `workflow_requests`
- Fetches approval levels from `approval_levels`
- Fetches work notes from `work_notes`
- Fetches documents from `documents`
- Fetches activities from `activities`
2. **Prompt Building**:
- Constructs a detailed prompt with all context
- Includes TAT risk information (ON_TRACK, AT_RISK, CRITICAL, BREACHED)
- Includes rejection context if applicable
- Sets target word count based on `AI_MAX_REMARK_LENGTH`
3. **AI Generation**:
- Sends prompt to Vertex AI Gemini
- Receives generated text (up to 4096 tokens)
- Preserves full AI response (no truncation)
- Extracts key points
- Calculates confidence score
4. **Storage**:
- Saves to `conclusion_remarks` table
- Links to `workflow_requests` via `requestId`
- Stores metadata (provider, confidence, key points)
### Automatic Generation
When a request is approved/rejected, `ApprovalService.approveLevel()` automatically generates a conclusion in the background:
```typescript
// In ApprovalService.approveLevel()
if (isFinalApproval) {
// Background task - doesn't block the approval response
(async () => {
const context = { /* ... */ };
const aiResult = await aiService.generateConclusionRemark(context);
await ConclusionRemark.create({ /* ... */ });
})();
}
```
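For reference, the underlying Vertex AI call inside `AIService` might look roughly like the sketch below. It mirrors the documented defaults (model, region, temperature, token limit); the exact wiring in the real service may differ:
```typescript
// Sketch of the Vertex AI Gemini call (illustrative; not the exact AIService implementation).
import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({
  project: process.env.GCP_PROJECT_ID as string,
  location: process.env.VERTEX_AI_LOCATION || 'asia-south1',
  googleAuthOptions: { keyFile: process.env.GCP_KEY_FILE }, // same service account as GCS
});

export async function generateRemark(prompt: string): Promise<string> {
  const model = vertexAI.getGenerativeModel({
    model: process.env.VERTEX_AI_MODEL || 'gemini-2.5-flash',
    generationConfig: { maxOutputTokens: 4096, temperature: 0.3 },
  });
  const result = await model.generateContent(prompt);
  // Take the first candidate's text; callers handle empty responses and errors.
  return result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
}
```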
---
## Prompt Engineering
### Prompt Structure
The prompt is designed to generate professional, archival-quality conclusions:
```
You are writing a closure summary for a workflow request at Royal Enfield.
Write a practical, realistic conclusion that an employee would write when closing a request.
**Request:**
[Request Number] - [Title]
Description: [Description]
Priority: [Priority]
**What Happened:**
[Approval Summary with TAT info]
[Rejection Context if applicable]
**Discussions (if any):**
[Work Notes Summary]
**Documents:**
[Document List]
**YOUR TASK:**
Write a brief, professional conclusion (approximately X words, max Y characters) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Mentions if any approval levels were AT_RISK, CRITICAL, or BREACHED
- Notes the outcome and next steps (if applicable)
- Uses clear, factual language without time-specific references
- Is suitable for permanent archiving and future reference
- Sounds natural and human-written (not AI-generated)
**IMPORTANT:**
- Be concise and direct
- MUST stay within [maxLength] characters limit
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
- Write like a professional documenting a completed process
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions
```
### Key Prompt Features
1. **TAT Risk Integration**: Includes TAT percentage usage and risk status for each approval level
2. **Rejection Handling**: Different instructions for rejected vs approved requests
3. **Length Control**: Dynamically sets target word count based on config
4. **Tone Guidelines**: Emphasizes natural, professional, archival-quality writing
5. **Context Awareness**: Includes all relevant data (approvals, notes, documents, activities)
### Vertex AI Settings
| Setting | Value | Description |
|---------|-------|-------------|
| Model | `gemini-2.5-flash` (default) | Fast, efficient model for conclusion generation |
| Max Output Tokens | `4096` | Maximum tokens in response (technical limit) |
| Character Limit | `2000` (configurable) | Actual limit enforced via prompt (`AI_MAX_REMARK_LENGTH`) |
| Temperature | `0.3` | Lower temperature for more focused, consistent output |
| Location | `asia-south1` (default) | Google Cloud region for API calls |
| Authentication | Service Account | Uses same credentials as Google Cloud Storage |
**Note on Token vs Character Limits:**
- **4096 tokens** is the technical maximum Vertex AI can generate
- **2000 characters** (default) is the actual limit enforced by the prompt
- Token-to-character conversion: ~1 token ≈ 3-4 characters
- With HTML tags: 4096 tokens ≈ 12,000-16,000 characters (including tags)
- The AI is instructed to stay within the character limit, not the token limit
- The token limit provides headroom but the character limit is what matters for storage
---
## Error Handling
### Common Errors
1. **No AI Provider Available**
```
Error: AI features are currently unavailable. Please verify Vertex AI configuration and service account credentials.
```
**Solution**:
- Verify service account key file exists at path specified in `GCP_KEY_FILE`
- Ensure Vertex AI API is enabled in Google Cloud Console
- Check service account has `Vertex AI User` role (`roles/aiplatform.user`)
2. **Vertex AI API Error**
```
Error: AI generation failed (Vertex AI): Model was not found or your project does not have access
```
**Solution**:
- Verify model name is correct (e.g., `gemini-2.5-flash`)
- Ensure model is available in selected region
- Check Vertex AI API is enabled in Google Cloud Console
3. **Request Not Found**
```
Error: Request not found
```
**Solution**: Verify requestId is correct and request exists
4. **Unauthorized Access**
```
Error: Only the initiator can generate conclusion remarks
```
**Solution**: Ensure user is the request initiator
5. **Invalid Request Status**
```
Error: Conclusion can only be generated for approved or rejected requests
```
**Solution**: Request must be in APPROVED or REJECTED status
### Error Recovery
- **Graceful Degradation**: If AI generation fails, user can write conclusion manually
- **Retry Logic**: Manual regeneration is always available
- **Logging**: All errors are logged with context for debugging
- **Token Limit Handling**: If response hits token limit, full response is preserved (no truncation)
---
## Best Practices
### For Developers
1. **Error Handling**: Always wrap AI calls in try-catch blocks
2. **Async Operations**: Use background tasks for automatic generation (don't block approval)
3. **Validation**: Validate context data before sending to AI
4. **Logging**: Log all AI operations for debugging and monitoring
5. **Configuration**: Use database config for flexibility (not hardcoded values)
### For Administrators
1. **Service Account Setup**:
- Ensure service account key file exists and is accessible
- Verify service account has `Vertex AI User` role
- Use same credentials as Google Cloud Storage for consistency
2. **Model Selection**: Choose model based on needs:
- **gemini-2.5-flash**: Fast, cost-effective (default, recommended)
- **gemini-1.5-pro**: Better quality for complex requests
3. **Length Configuration**: Set `AI_MAX_REMARK_LENGTH` based on your archival needs
4. **Monitoring**: Monitor AI usage and costs through Google Cloud Console
5. **Testing**: Test with sample requests before enabling in production
6. **Region Selection**: Choose region closest to your deployment for lower latency
### For Users
1. **Review Before Finalizing**: Always review AI-generated conclusions
2. **Edit if Needed**: Don't hesitate to edit the generated text
3. **Regenerate**: If not satisfied, regenerate with updated context
4. **Finalize Promptly**: Finalize conclusions soon after generation for accuracy
---
## Troubleshooting
### Issue: AI Generation Not Working
**Symptoms**: Error message "AI features are currently unavailable"
**Diagnosis**:
1. Check `AI_ENABLED` config value
2. Check `AI_REMARK_GENERATION_ENABLED` config value
3. Verify service account key file exists and is accessible
4. Check Vertex AI API is enabled in Google Cloud Console
5. Verify service account has `Vertex AI User` role
6. Check provider initialization logs
**Solution**:
```bash
# Check logs
tail -f logs/app.log | grep "AI Service"
# Verify config
SELECT * FROM system_config WHERE config_key LIKE 'AI_%';
# Verify service account key file
ls -la credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
# Check environment variables
echo $GCP_PROJECT_ID
echo $GCP_KEY_FILE
echo $VERTEX_AI_MODEL
```
### Issue: Generated Text Too Long/Short
**Symptoms**: Generated remarks exceed or are much shorter than expected
**Solution**:
1. Adjust `AI_MAX_REMARK_LENGTH` in admin config
2. Check prompt target word count calculation
3. Note: Vertex AI max output tokens is 4096 (system handles this automatically)
4. AI is instructed to stay within character limit, but full response is preserved
### Issue: Poor Quality Conclusions
**Symptoms**: Generated text is generic or inaccurate
**Solution**:
1. Verify context data is complete (approvals, notes, documents)
2. Check prompt includes all relevant information
3. Try different model (e.g., `gemini-1.5-pro` for better quality)
4. Temperature is set to 0.3 for focused output (can be adjusted in code if needed)
### Issue: Slow Generation
**Symptoms**: AI generation takes too long
**Solution**:
1. Check Vertex AI API status in Google Cloud Console
2. Verify network connectivity
3. Consider using `gemini-2.5-flash` model (fastest option)
4. Check for rate limiting in Google Cloud Console
5. Verify region selection (closer region = lower latency)
### Issue: Vertex AI Not Initializing
**Symptoms**: Provider shows as "None" or initialization fails in logs
**Diagnosis**:
1. Check service account key file exists and is valid
2. Verify `@google-cloud/vertexai` package is installed
3. Check environment variables (`GCP_PROJECT_ID`, `GCP_KEY_FILE`)
4. Verify Vertex AI API is enabled in Google Cloud Console
5. Check service account permissions
**Solution**:
```bash
# Install missing SDK
npm install @google-cloud/vertexai
# Verify service account key file
ls -la credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
# Verify environment variables
echo $GCP_PROJECT_ID
echo $GCP_KEY_FILE
echo $VERTEX_AI_MODEL
echo $VERTEX_AI_LOCATION
# Check Google Cloud Console
# 1. Go to APIs & Services > Library
# 2. Search for "Vertex AI API"
# 3. Ensure it's enabled
# 4. Verify service account has "Vertex AI User" role
```
---
## Database Schema
### conclusion_remarks Table
```sql
CREATE TABLE conclusion_remarks (
conclusion_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
request_id VARCHAR(50) NOT NULL UNIQUE,
ai_generated_remark TEXT,
ai_model_used VARCHAR(100),
ai_confidence_score DECIMAL(3,2),
final_remark TEXT,
edited_by UUID,
is_edited BOOLEAN DEFAULT false,
edit_count INTEGER DEFAULT 0,
approval_summary JSONB,
document_summary JSONB,
key_discussion_points TEXT[],
generated_at TIMESTAMP,
finalized_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
FOREIGN KEY (request_id) REFERENCES workflow_requests(request_id),
FOREIGN KEY (edited_by) REFERENCES users(user_id)
);
```
### Key Fields
- `ai_generated_remark`: Original AI-generated text
- `final_remark`: User-edited/finalized text
- `ai_confidence_score`: Quality score (0.0 - 1.0)
- `key_discussion_points`: Extracted key points array
- `approval_summary`: JSON with approval statistics
- `document_summary`: JSON with document information
---
## Examples
### Example 1: Approved Request Conclusion
**Context**:
- Request: "Purchase 50 laptops for IT department"
- Priority: STANDARD
- 3 approval levels, all approved
- TAT: 100%, 85%, 90% usage
- 2 documents attached
**Generated Conclusion**:
```
This request for the purchase of 50 laptops for the IT department was approved
through all three approval levels. The request was reviewed and approved by
John Doe at Level 1, Jane Smith at Level 2, and Bob Johnson at Level 3. All
approval levels completed within their respective TAT windows, with Level 1
using 100% of allocated time. The purchase order has been generated and
forwarded to the procurement team for processing. Implementation is expected
to begin within the next two weeks.
```
### Example 2: Rejected Request Conclusion
**Context**:
- Request: "Implement new HR policy"
- Priority: EXPRESS
- Rejected at Level 2 by Jane Smith
- Reason: "Budget constraints"
**Generated Conclusion**:
```
This request for implementing a new HR policy was reviewed through two approval
levels but was ultimately rejected. The request was approved by John Doe at
Level 1, but rejected by Jane Smith at Level 2 due to budget constraints.
The rejection was communicated to the initiator, and alternative approaches
are being considered. The request documentation has been archived for future
reference.
```
---
## Version History
- **v2.0.0**: Vertex AI Migration
- Migrated to Google Cloud Vertex AI Gemini
- Service account authentication (same as GCS)
- Removed multi-provider support
- Increased max output tokens to 4096
- Full response preservation (no truncation)
- HTML format support for rich text editor
---
## Support
For issues or questions:
1. Check logs: `logs/app.log`
2. Review admin configuration panel
3. Contact development team
4. Refer to Vertex AI documentation:
- [Vertex AI Documentation](https://cloud.google.com/vertex-ai/docs)
- [Gemini Models](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini)
- [Vertex AI Setup Guide](../VERTEX_AI_INTEGRATION.md)
---
**Maintained By**: Royal Enfield Development Team
---
## Related Documentation
- [Vertex AI Integration Guide](./VERTEX_AI_INTEGRATION.md) - Detailed setup and migration information

View File

@ -1,134 +0,0 @@
# Claim Management - Approver Mapping Documentation
## Overview
The Claim Management workflow has **8 fixed steps** with specific approvers and action types. This document explains how approvers are mapped when a claim request is created.
## 8-Step Workflow Structure
### Step 1: Dealer Proposal Submission
- **Approver Type**: Dealer (External)
- **Action Type**: **SUBMIT** (Dealer submits proposal documents)
- **TAT**: 72 hours
- **Mapping**: Uses `dealerEmail` from claim data
- **Status**: PENDING (waiting for dealer to submit)
### Step 2: Requestor Evaluation
- **Approver Type**: Initiator (Internal RE Employee)
- **Action Type**: **APPROVE/REJECT** (Requestor reviews dealer proposal)
- **TAT**: 48 hours
- **Mapping**: Uses `initiatorId` (the person who created the request)
- **Status**: PENDING (waiting for requestor to evaluate)
### Step 3: Department Lead Approval
- **Approver Type**: Department Lead (Internal RE Employee)
- **Action Type**: **APPROVE/REJECT** (Department lead approves and blocks IO budget)
- **TAT**: 72 hours
- **Mapping**:
- Option 1: Find user with role `MANAGEMENT` in same department as initiator
- Option 2: Use initiator's `manager` field from User model
- Option 3: Find user with designation containing "Lead" or "Head" in same department
- **Status**: PENDING (waiting for department lead approval)
### Step 4: Activity Creation
- **Approver Type**: System (Auto-processed)
- **Action Type**: **AUTO** (System automatically creates activity)
- **TAT**: 1 hour
- **Mapping**: System user (`system@royalenfield.com`)
- **Status**: Auto-approved when triggered
### Step 5: Dealer Completion Documents
- **Approver Type**: Dealer (External)
- **Action Type**: **SUBMIT** (Dealer submits completion documents)
- **TAT**: 120 hours
- **Mapping**: Uses `dealerEmail` from claim data
- **Status**: PENDING (waiting for dealer to submit)
### Step 6: Requestor Claim Approval
- **Approver Type**: Initiator (Internal RE Employee)
- **Action Type**: **APPROVE/REJECT** (Requestor approves completion)
- **TAT**: 48 hours
- **Mapping**: Uses `initiatorId`
- **Status**: PENDING (waiting for requestor approval)
### Step 7: E-Invoice Generation
- **Approver Type**: System (Auto-processed via DMS)
- **Action Type**: **AUTO** (System generates e-invoice via DMS integration)
- **TAT**: 1 hour
- **Mapping**: System user (`system@royalenfield.com`)
- **Status**: Auto-approved when triggered
### Step 8: Credit Note Confirmation
- **Approver Type**: Finance Team (Internal RE Employee)
- **Action Type**: **APPROVE/REJECT** (Finance confirms credit note)
- **TAT**: 48 hours
- **Mapping**:
- Option 1: Find user with role `MANAGEMENT` and department contains "Finance"
- Option 2: Find user with designation containing "Finance" or "Accountant"
- Option 3: Use configured finance team email from admin settings
- **Status**: PENDING (waiting for finance confirmation)
- **Is Final Approver**: Yes (final step)
## Current Implementation Issues
### Problems:
1. **Step 1 & 5**: Dealer email not being used - using placeholder UUID
2. **Step 3**: Department Lead not resolved - using placeholder UUID
3. **Step 8**: Finance team not resolved - using placeholder UUID
4. **All steps**: Using initiator email for non-initiator steps
### Impact:
- Steps 1, 3, 5, 8 won't have correct approvers assigned
- Notifications won't be sent to correct users
- Workflow will be stuck waiting for non-existent approvers
## Action Types Summary
| Step | Action Type | Description |
|------|-------------|-------------|
| 1 | SUBMIT | Dealer submits proposal (not approve/reject) |
| 2 | APPROVE/REJECT | Requestor evaluates proposal |
| 3 | APPROVE/REJECT | Department Lead approves and blocks budget |
| 4 | AUTO | System creates activity automatically |
| 5 | SUBMIT | Dealer submits completion documents |
| 6 | APPROVE/REJECT | Requestor approves completion |
| 7 | AUTO | System generates e-invoice via DMS |
| 8 | APPROVE/REJECT | Finance confirms credit note (FINAL) |
## Approver Resolution Logic
### For Dealer Steps (1, 5):
```typescript
// Use dealer email from claim data
const dealerEmail = claimData.dealerEmail;
// Find or create dealer user (if dealer is external, may need special handling)
const dealerUser = await User.findOne({ where: { email: dealerEmail } });
// If dealer doesn't exist in system, create participant entry
```
### For Department Lead (Step 3):
```typescript
// Priority order:
// 1. Find user with same department and role = 'MANAGEMENT'
// 2. Use initiator.manager field to find manager
// 3. Find user with designation containing "Lead" or "Head" in same department
// 4. Fallback: Use initiator's manager email from User model
```
### For Finance Team (Step 8):
```typescript
// Priority order:
// 1. Find user with department containing "Finance" and role = 'MANAGEMENT'
// 2. Find user with designation containing "Finance" or "Accountant"
// 3. Use configured finance team email from admin_configurations table
// 4. Fallback: Use default finance email (e.g., finance@royalenfield.com)
```
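The priority orders above could be implemented with Sequelize roughly as follows. This is a minimal sketch for the Department Lead case, assuming the shared `User` model; the import path, the `manager` field holding an email, and the helper name are assumptions, not the current implementation.
```typescript
import { Op } from 'sequelize';
import { User } from '../models'; // hypothetical import path

interface InitiatorInfo {
  department: string;
  manager?: string | null; // assumed to hold the manager's email
}

// Resolve the Step 3 Department Lead following the priority order above.
export async function resolveDepartmentLead(initiator: InitiatorInfo) {
  // 1. MANAGEMENT user in the same department
  const lead = await User.findOne({
    where: { department: initiator.department, role: 'MANAGEMENT', isActive: true },
  });
  if (lead) return lead;

  // 2. Initiator's manager field (assumed to be an email)
  if (initiator.manager) {
    const manager = await User.findOne({ where: { email: initiator.manager } });
    if (manager) return manager;
  }

  // 3. "Lead" or "Head" designation in the same department
  return User.findOne({
    where: {
      department: initiator.department,
      designation: { [Op.or]: [{ [Op.iLike]: '%lead%' }, { [Op.iLike]: '%head%' }] },
      isActive: true,
    },
  });
}
```
The Finance resolution for Step 8 follows the same pattern with the department/designation filters swapped for "Finance"/"Accountant" and a configured fallback email.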
## Next Steps
The `createClaimApprovalLevels()` method needs to be updated to:
1. Accept `dealerEmail` parameter
2. Resolve Department Lead dynamically
3. Resolve Finance team member dynamically
4. Handle cases where approvers don't exist in the system

View File

@ -1,149 +0,0 @@
# Cost Breakup Table Architecture
## Overview
This document describes the enhanced architecture for storing cost breakups in the Dealer Claim Management system. Instead of storing cost breakups as JSONB arrays, we now use a dedicated relational table for better querying, reporting, and data integrity.
## Architecture Decision
### Previous Approach (JSONB)
- **Storage**: Cost breakups stored as JSONB array in `dealer_proposal_details.cost_breakup`
- **Limitations**:
- Difficult to query individual cost items
- Hard to update specific items
- Not ideal for reporting and analytics
- No referential integrity
### New Approach (Separate Table)
- **Storage**: Dedicated `dealer_proposal_cost_items` table
- **Benefits**:
- Better querying and filtering capabilities
- Easier to update individual cost items
- Better for analytics and reporting
- Maintains referential integrity
- Supports proper ordering of items
## Database Schema
### Table: `dealer_proposal_cost_items`
```sql
CREATE TABLE dealer_proposal_cost_items (
cost_item_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
proposal_id UUID NOT NULL REFERENCES dealer_proposal_details(proposal_id) ON DELETE CASCADE,
request_id UUID NOT NULL REFERENCES workflow_requests(request_id) ON DELETE CASCADE,
item_description VARCHAR(500) NOT NULL,
amount DECIMAL(15, 2) NOT NULL,
item_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
```
**Indexes**:
- `idx_proposal_cost_items_proposal_id` on `proposal_id`
- `idx_proposal_cost_items_request_id` on `request_id`
- `idx_proposal_cost_items_proposal_order` on `(proposal_id, item_order)`
## Backward Compatibility
The system maintains backward compatibility by:
1. **Dual Storage**: Still saves cost breakups to JSONB field for backward compatibility
2. **Smart Retrieval**: When fetching proposal details:
- First tries to get cost items from the new table
- Falls back to JSONB field if table is empty
3. **Migration**: Automatically migrates existing JSONB data to the new table during migration
## API Response Format
The API always returns cost breakups as an array, regardless of storage method:
```json
{
"proposalDetails": {
"proposalId": "uuid",
"costBreakup": [
{
"description": "Item 1",
"amount": 10000
},
{
"description": "Item 2",
"amount": 20000
}
],
"costItems": [
{
"costItemId": "uuid",
"itemDescription": "Item 1",
"amount": 10000,
"itemOrder": 0
}
]
}
}
```
## Implementation Details
### Saving Cost Items
When a proposal is submitted:
1. Save proposal details to `dealer_proposal_details` (with JSONB for backward compatibility)
2. Delete existing cost items for the proposal (if updating)
3. Insert new cost items into `dealer_proposal_cost_items` table
4. Items are ordered by `itemOrder` field
### Retrieving Cost Items
When fetching proposal details:
1. Query `dealer_proposal_details` with `include` for `costItems`
2. If cost items exist in the table, use them
3. If not, fall back to parsing JSONB `costBreakup` field
4. Always return as a normalized array format
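A minimal sketch of that fallback logic, assuming the Sequelize models and the `costItems` association shown later in this document; import paths and the helper name are illustrative.
```typescript
import DealerProposalDetails from '../models/DealerProposalDetails';   // hypothetical path
import DealerProposalCostItem from '../models/DealerProposalCostItem'; // hypothetical path

interface CostBreakupEntry {
  description: string;
  amount: number;
}

// Fetch cost breakups for a proposal, preferring the relational table
// and falling back to the legacy JSONB column.
export async function getCostBreakup(proposalId: string): Promise<CostBreakupEntry[]> {
  const proposal: any = await DealerProposalDetails.findOne({
    where: { proposalId },
    include: [{ model: DealerProposalCostItem, as: 'costItems' }],
  });
  if (!proposal) return [];

  const costItems: any[] = proposal.costItems ?? [];
  if (costItems.length > 0) {
    // Preferred path: dedicated table, ordered by itemOrder
    return [...costItems]
      .sort((a, b) => a.itemOrder - b.itemOrder)
      .map((item) => ({ description: item.itemDescription, amount: Number(item.amount) }));
  }

  // Fallback: legacy JSONB array stored on the proposal row
  const legacy = proposal.costBreakup;
  return Array.isArray(legacy) ? legacy : [];
}
```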
## Migration
The migration (`20251210-create-proposal-cost-items-table.ts`):
1. Creates the new table
2. Creates indexes for performance
3. Migrates existing JSONB data to the new table automatically
4. Handles errors gracefully (doesn't fail if migration of existing data fails)
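The data backfill (step 3) could look roughly like the sketch below inside the migration's `up` function. Table and column names follow the schema above, but the actual code in `20251210-create-proposal-cost-items-table.ts` may differ.
```typescript
import { QueryInterface, QueryTypes } from 'sequelize';

// Abbreviated sketch of the JSONB -> table backfill inside the migration's `up` step.
export async function backfillCostItems(queryInterface: QueryInterface): Promise<void> {
  const proposals = (await queryInterface.sequelize.query(
    'SELECT proposal_id, request_id, cost_breakup FROM dealer_proposal_details WHERE cost_breakup IS NOT NULL',
    { type: QueryTypes.SELECT }
  )) as Array<{ proposal_id: string; request_id: string; cost_breakup: unknown }>;

  for (const proposal of proposals) {
    const items = Array.isArray(proposal.cost_breakup) ? proposal.cost_breakup : [];
    if (items.length === 0) continue;
    try {
      await queryInterface.bulkInsert(
        'dealer_proposal_cost_items',
        items.map((item: any, index: number) => ({
          proposal_id: proposal.proposal_id,
          request_id: proposal.request_id,
          item_description: String(item.description ?? ''),
          amount: item.amount ?? 0,
          item_order: index,
        }))
      );
    } catch (error) {
      // Handle errors gracefully (step 4): skip the row instead of failing the migration
      console.warn(`Skipping cost-item backfill for proposal ${proposal.proposal_id}`, error);
    }
  }
}
```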
## Model Associations
```typescript
DealerProposalDetails.hasMany(DealerProposalCostItem, {
as: 'costItems',
foreignKey: 'proposalId',
sourceKey: 'proposalId'
});
DealerProposalCostItem.belongsTo(DealerProposalDetails, {
as: 'proposal',
foreignKey: 'proposalId',
targetKey: 'proposalId'
});
```
## Benefits for Frontend
1. **Consistent Format**: Always receives cost breakups as an array
2. **No Changes Required**: Frontend code doesn't need to change
3. **Better Performance**: Can query specific cost items if needed
4. **Future Extensibility**: Easy to add features like:
- Cost item categories
- Approval status per item
- Historical tracking of cost changes
## Future Enhancements
Potential future improvements:
- Add `category` field to cost items
- Add `approved_amount` vs `requested_amount` for budget approval workflows
- Add `notes` field for item-level comments
- Add audit trail for cost item changes
- Add `is_approved` flag for individual item approval

View File

@ -1,224 +0,0 @@
-- ============================================================
-- DEALERS CSV IMPORT - WORKING SOLUTION
-- ============================================================
-- This script provides a working solution for importing dealers
-- from CSV with auto-generated columns (dealer_id, created_at, updated_at, is_active)
-- ============================================================
-- METHOD 1: If your CSV does NOT have dealer_id, created_at, updated_at, is_active
-- ============================================================
-- Use this COPY command if your CSV has exactly 44 columns (without the auto-generated ones)
\copy public.dealers (sales_code,service_code,gear_code,gma_code,region,dealership,state,district,city,location,city_category_pst,layout_format,tier_city_category,on_boarding_charges,"date",single_format_month_year,domain_id,replacement,termination_resignation_status,date_of_termination_resignation,last_date_of_operations,old_codes,branch_details,dealer_principal_name,dealer_principal_email_id,dp_contact_number,dp_contacts,showroom_address,showroom_pincode,workshop_address,workshop_pincode,location_district,state_workshop,no_of_studios,website_update,gst,pan,firm_type,prop_managing_partners_directors,total_prop_partners_directors,docs_folder_link,workshop_gma_codes,existing_new,dlrcode) FROM 'C:/Users/BACKPACKERS/Downloads/Dealer_Master.csv' CSV HEADER ENCODING 'WIN1252';
-- ============================================================
-- METHOD 2: If your CSV HAS dealer_id, created_at, updated_at, is_active columns
-- ============================================================
-- Use this approach if your CSV has 48 columns (including the auto-generated ones)
-- This creates a temporary table, imports, then inserts with defaults
-- Step 1: Create temporary table matching your CSV structure
-- This matches a 48-column CSV (including the auto-generated columns); use METHOD 1 if your CSV has only 44 columns
CREATE TEMP TABLE dealers_temp (
dealer_id TEXT,
sales_code TEXT,
service_code TEXT,
gear_code TEXT,
gma_code TEXT,
region TEXT,
dealership TEXT,
state TEXT,
district TEXT,
city TEXT,
location TEXT,
city_category_pst TEXT,
layout_format TEXT,
tier_city_category TEXT,
on_boarding_charges TEXT,
date TEXT,
single_format_month_year TEXT,
domain_id TEXT,
replacement TEXT,
termination_resignation_status TEXT,
date_of_termination_resignation TEXT,
last_date_of_operations TEXT,
old_codes TEXT,
branch_details TEXT,
dealer_principal_name TEXT,
dealer_principal_email_id TEXT,
dp_contact_number TEXT,
dp_contacts TEXT,
showroom_address TEXT,
showroom_pincode TEXT,
workshop_address TEXT,
workshop_pincode TEXT,
location_district TEXT,
state_workshop TEXT,
no_of_studios TEXT,
website_update TEXT,
gst TEXT,
pan TEXT,
firm_type TEXT,
prop_managing_partners_directors TEXT,
total_prop_partners_directors TEXT,
docs_folder_link TEXT,
workshop_gma_codes TEXT,
existing_new TEXT,
dlrcode TEXT,
created_at TEXT,
updated_at TEXT,
is_active TEXT
);
-- Step 2: Import CSV into temporary table
-- The CSV must contain all 48 columns listed above (for a 44-column CSV, use METHOD 1 instead)
\copy dealers_temp FROM 'C:/Users/COMP/Downloads/DEALERS_CLEAN.csv' WITH (FORMAT csv, HEADER true, ENCODING 'UTF8');
-- Optional: Check what was imported
-- SELECT COUNT(*) FROM dealers_temp;
-- Step 3: Insert into actual dealers table
-- IMPORTANT: We IGNORE dealer_id, created_at, updated_at, is_active from CSV
-- These will use database DEFAULT values (auto-generated UUID, current timestamp, true)
INSERT INTO public.dealers (
sales_code,
service_code,
gear_code,
gma_code,
region,
dealership,
state,
district,
city,
location,
city_category_pst,
layout_format,
tier_city_category,
on_boarding_charges,
date,
single_format_month_year,
domain_id,
replacement,
termination_resignation_status,
date_of_termination_resignation,
last_date_of_operations,
old_codes,
branch_details,
dealer_principal_name,
dealer_principal_email_id,
dp_contact_number,
dp_contacts,
showroom_address,
showroom_pincode,
workshop_address,
workshop_pincode,
location_district,
state_workshop,
no_of_studios,
website_update,
gst,
pan,
firm_type,
prop_managing_partners_directors,
total_prop_partners_directors,
docs_folder_link,
workshop_gma_codes,
existing_new,
dlrcode
)
SELECT
NULLIF(sales_code, ''),
NULLIF(service_code, ''),
NULLIF(gear_code, ''),
NULLIF(gma_code, ''),
NULLIF(region, ''),
NULLIF(dealership, ''),
NULLIF(state, ''),
NULLIF(district, ''),
NULLIF(city, ''),
NULLIF(location, ''),
NULLIF(city_category_pst, ''),
NULLIF(layout_format, ''),
NULLIF(tier_city_category, ''),
NULLIF(on_boarding_charges, ''),
NULLIF(date, ''),
NULLIF(single_format_month_year, ''),
NULLIF(domain_id, ''),
NULLIF(replacement, ''),
NULLIF(termination_resignation_status, ''),
NULLIF(date_of_termination_resignation, ''),
NULLIF(last_date_of_operations, ''),
NULLIF(old_codes, ''),
NULLIF(branch_details, ''),
NULLIF(dealer_principal_name, ''),
NULLIF(dealer_principal_email_id, ''),
NULLIF(dp_contact_number, ''),
NULLIF(dp_contacts, ''),
NULLIF(showroom_address, ''),
NULLIF(showroom_pincode, ''),
NULLIF(workshop_address, ''),
NULLIF(workshop_pincode, ''),
NULLIF(location_district, ''),
NULLIF(state_workshop, ''),
CASE WHEN no_of_studios = '' THEN 0 ELSE no_of_studios::INTEGER END,
NULLIF(website_update, ''),
NULLIF(gst, ''),
NULLIF(pan, ''),
NULLIF(firm_type, ''),
NULLIF(prop_managing_partners_directors, ''),
NULLIF(total_prop_partners_directors, ''),
NULLIF(docs_folder_link, ''),
NULLIF(workshop_gma_codes, ''),
NULLIF(existing_new, ''),
NULLIF(dlrcode, '')
FROM dealers_temp;
-- Step 4: Clean up temporary table
DROP TABLE dealers_temp;
-- ============================================================
-- METHOD 3: Function-based import with defaults (placeholder)
-- ============================================================
-- Alternative approach: wrap the import in a helper function.
-- NOTE: This is only a stub and performs no work yet.
-- Use METHOD 1 or METHOD 2 above for the actual import.
CREATE OR REPLACE FUNCTION import_dealers_from_csv()
RETURNS void AS $$
BEGIN
    -- Placeholder body; see METHOD 1 for the actual COPY command
    NULL;
END;
$$ LANGUAGE plpgsql;
-- ============================================================
-- VERIFICATION QUERIES
-- ============================================================
-- Check import results
SELECT
COUNT(*) as total_dealers,
COUNT(dealer_id) as has_dealer_id,
COUNT(created_at) as has_created_at,
COUNT(updated_at) as has_updated_at,
COUNT(*) FILTER (WHERE is_active = true) as active_count
FROM dealers;
-- View sample records with auto-generated values
SELECT
dealer_id,
dlrcode,
dealership,
created_at,
updated_at,
is_active
FROM dealers
LIMIT 5;
-- Check for any issues
SELECT
COUNT(*) FILTER (WHERE dealer_id IS NULL) as missing_dealer_id,
COUNT(*) FILTER (WHERE created_at IS NULL) as missing_created_at,
COUNT(*) FILTER (WHERE updated_at IS NULL) as missing_updated_at
FROM dealers;

View File

@ -1,515 +0,0 @@
# Dealers CSV Import Guide
This guide explains how to format and import dealer data from a CSV file into the PostgreSQL `dealers` table.
## ⚠️ Important: Auto-Generated Columns
**DO NOT include these columns in your CSV file** - they are automatically generated by the database:
- ❌ `dealer_id` - Auto-generated UUID (e.g., `550e8400-e29b-41d4-a716-446655440000`)
- ❌ `created_at` - Auto-generated timestamp (current time on import)
- ❌ `updated_at` - Auto-generated timestamp (current time on import)
- ❌ `is_active` - Defaults to `true`
Your CSV should have **exactly 44 columns** (the data columns listed below).
## Table of Contents
- [CSV File Format Requirements](#csv-file-format-requirements)
- [Column Mapping](#column-mapping)
- [Preparing Your CSV File](#preparing-your-csv-file)
- [Import Methods](#import-methods)
- [Troubleshooting](#troubleshooting)
---
## CSV File Format Requirements
### File Requirements
- **Format**: CSV (Comma-Separated Values)
- **Encoding**: UTF-8
- **Header Row**: Required (first row must contain column names)
- **Delimiter**: Comma (`,`)
- **Text Qualifier**: Double quotes (`"`) for fields containing commas or special characters
### Required Columns (in exact order)
**Important Notes:**
- **DO NOT include** `dealer_id`, `created_at`, `updated_at`, or `is_active` in your CSV file
- These columns will be automatically generated by the database:
- `dealer_id`: Auto-generated UUID
- `created_at`: Auto-generated timestamp (current time)
- `updated_at`: Auto-generated timestamp (current time)
- `is_active`: Defaults to `true`
Your CSV file must have these **44 columns** in the following order:
1. `sales_code`
2. `service_code`
3. `gear_code`
4. `gma_code`
5. `region`
6. `dealership`
7. `state`
8. `district`
9. `city`
10. `location`
11. `city_category_pst`
12. `layout_format`
13. `tier_city_category`
14. `on_boarding_charges`
15. `date`
16. `single_format_month_year`
17. `domain_id`
18. `replacement`
19. `termination_resignation_status`
20. `date_of_termination_resignation`
21. `last_date_of_operations`
22. `old_codes`
23. `branch_details`
24. `dealer_principal_name`
25. `dealer_principal_email_id`
26. `dp_contact_number`
27. `dp_contacts`
28. `showroom_address`
29. `showroom_pincode`
30. `workshop_address`
31. `workshop_pincode`
32. `location_district`
33. `state_workshop`
34. `no_of_studios`
35. `website_update`
36. `gst`
37. `pan`
38. `firm_type`
39. `prop_managing_partners_directors`
40. `total_prop_partners_directors`
41. `docs_folder_link`
42. `workshop_gma_codes`
43. `existing_new`
44. `dlrcode`
---
## Column Mapping
### Column Details
| Column Name | Type | Required | Notes |
|------------|------|----------|-------|
| `sales_code` | String(50) | No | Sales code identifier |
| `service_code` | String(50) | No | Service code identifier |
| `gear_code` | String(50) | No | Gear code identifier |
| `gma_code` | String(50) | No | GMA code identifier |
| `region` | String(50) | No | Geographic region |
| `dealership` | String(255) | No | Dealership business name |
| `state` | String(100) | No | State name |
| `district` | String(100) | No | District name |
| `city` | String(100) | No | City name |
| `location` | String(255) | No | Location details |
| `city_category_pst` | String(50) | No | City category (PST) |
| `layout_format` | String(50) | No | Layout format |
| `tier_city_category` | String(100) | No | TIER City Category |
| `on_boarding_charges` | Decimal | No | Numeric value (e.g., 1000.50) |
| `date` | Date | No | Format: YYYY-MM-DD (e.g., 2014-09-30) |
| `single_format_month_year` | String(50) | No | Format: Sep-2014 |
| `domain_id` | String(255) | No | Email domain (e.g., dealer@royalenfield.com) |
| `replacement` | String(50) | No | Replacement status |
| `termination_resignation_status` | String(255) | No | Termination/Resignation status |
| `date_of_termination_resignation` | Date | No | Format: YYYY-MM-DD |
| `last_date_of_operations` | Date | No | Format: YYYY-MM-DD |
| `old_codes` | String(255) | No | Old code references |
| `branch_details` | Text | No | Branch information |
| `dealer_principal_name` | String(255) | No | Principal's full name |
| `dealer_principal_email_id` | String(255) | No | Principal's email |
| `dp_contact_number` | String(20) | No | Contact phone number |
| `dp_contacts` | String(20) | No | Additional contacts |
| `showroom_address` | Text | No | Full showroom address |
| `showroom_pincode` | String(10) | No | Showroom postal code |
| `workshop_address` | Text | No | Full workshop address |
| `workshop_pincode` | String(10) | No | Workshop postal code |
| `location_district` | String(100) | No | Location/District |
| `state_workshop` | String(100) | No | State for workshop |
| `no_of_studios` | Integer | No | Number of studios (default: 0) |
| `website_update` | String(10) | No | Yes/No value |
| `gst` | String(50) | No | GST number |
| `pan` | String(50) | No | PAN number |
| `firm_type` | String(100) | No | Type of firm (e.g., Proprietorship) |
| `prop_managing_partners_directors` | String(255) | No | Proprietor/Partners/Directors names |
| `total_prop_partners_directors` | String(255) | No | Total count or names |
| `docs_folder_link` | Text | No | Google Drive or document folder URL |
| `workshop_gma_codes` | String(255) | No | Workshop GMA codes |
| `existing_new` | String(50) | No | Existing/New status |
| `dlrcode` | String(50) | No | Dealer code |
---
## Preparing Your CSV File
### Step 1: Create/Edit Your CSV File
1. **Open your CSV file** in Excel, Google Sheets, or a text editor
2. **Remove auto-generated columns** (if present):
- ❌ **DO NOT include**: `dealer_id`, `created_at`, `updated_at`, `is_active`
- ✅ These will be automatically generated by the database
3. **Ensure the header row** matches the column names exactly (see [Column Mapping](#column-mapping))
4. **Verify column order** - columns must be in the exact order listed above (44 columns total)
5. **Check data formats**:
- Dates: Use `YYYY-MM-DD` format (e.g., `2014-09-30`)
- Numbers: Use decimal format for `on_boarding_charges` (e.g., `1000.50`)
- Empty values: Leave cells empty (don't use "NULL" or "N/A" as text)
### Step 2: Handle Special Characters
- **Commas in text**: Wrap the entire field in double quotes
- Example: `"No.335, HVP RR Nagar Sector B"`
- **Quotes in text**: Use double quotes to escape: `""quoted text""`
- **Newlines in text**: Wrap field in double quotes
### Step 3: Date Formatting
Ensure dates are in `YYYY-MM-DD` format:
- ✅ Correct: `2014-09-30`
- ❌ Wrong: `30-Sep-14`, `09/30/2014`, `30-09-2014`
### Step 4: Save the File
1. **Save as CSV** (UTF-8 encoding)
2. **File location**: Save to an accessible path (e.g., `C:/Users/COMP/Downloads/DEALERS_CLEAN.csv`)
3. **File name**: Use a descriptive name (e.g., `DEALERS_CLEAN.csv`)
### Sample CSV Format
**Important:** Your CSV should **NOT** include `dealer_id`, `created_at`, `updated_at`, or `is_active` columns. These are auto-generated.
```csv
sales_code,service_code,gear_code,gma_code,region,dealership,state,district,city,location,city_category_pst,layout_format,tier_city_category,on_boarding_charges,date,single_format_month_year,domain_id,replacement,termination_resignation_status,date_of_termination_resignation,last_date_of_operations,old_codes,branch_details,dealer_principal_name,dealer_principal_email_id,dp_contact_number,dp_contacts,showroom_address,showroom_pincode,workshop_address,workshop_pincode,location_district,state_workshop,no_of_studios,website_update,gst,pan,firm_type,prop_managing_partners_directors,total_prop_partners_directors,docs_folder_link,workshop_gma_codes,existing_new,dlrcode
5124,5125,5573,9430,S3,Accelerate Motors,Karnataka,Bengaluru,Bengaluru,RAJA RAJESHWARI NAGAR,A+,A+,Tier 1 City,,2014-09-30,Sep-2014,acceleratemotors.rrnagar@dealer.royalenfield.com,,,,,,,N. Shyam Charmanna,shyamcharmanna@yahoo.co.in,7022049621,7022049621,"No.335, HVP RR Nagar Sector B, Ideal Homes Town Ship, Bangalore - 560098, Dist Bangalore, Karnataka",560098,"Works Shop No.460, 80ft Road, 2nd Phase R R Nagar, Bangalore - 560098, Dist Bangalore, Karnataka",560098,Bangalore,Karnataka,0,Yes,29ARCPS1311D1Z6,ARCPS1311D,Proprietorship,CHARMANNA SHYAM NELLAMAKADA,CHARMANNA SHYAM NELLAMAKADA,https://drive.google.com/drive/folders/1sGtg3s1h9aBXX9fhxJufYuBWar8gVvnb,,,3386
```
**What gets auto-generated:**
- `dealer_id`: `550e8400-e29b-41d4-a716-446655440000` (example UUID)
- `created_at`: `2025-01-20 10:30:45.123` (current timestamp)
- `updated_at`: `2025-01-20 10:30:45.123` (current timestamp)
- `is_active`: `true`
---
## Import Methods
### Method 1: PostgreSQL COPY Command (Recommended - If CSV has 44 columns)
**Use this if your CSV does NOT include `dealer_id`, `created_at`, `updated_at`, `is_active` columns.**
**Prerequisites:**
- PostgreSQL client (psql) installed
- Access to PostgreSQL server
- CSV file path accessible from PostgreSQL server
**Steps:**
1. **Connect to PostgreSQL:**
```bash
psql -U your_username -d royal_enfield_workflow -h localhost
```
2. **Run the COPY command:**
**Note:** The COPY command explicitly lists only the columns from your CSV. The following columns are **automatically handled by the database** and should **NOT** be in your CSV:
- `dealer_id` - Auto-generated UUID
- `created_at` - Auto-generated timestamp
- `updated_at` - Auto-generated timestamp
- `is_active` - Defaults to `true`
```sql
\copy public.dealers(
sales_code,
service_code,
gear_code,
gma_code,
region,
dealership,
state,
district,
city,
location,
city_category_pst,
layout_format,
tier_city_category,
on_boarding_charges,
date,
single_format_month_year,
domain_id,
replacement,
termination_resignation_status,
date_of_termination_resignation,
last_date_of_operations,
old_codes,
branch_details,
dealer_principal_name,
dealer_principal_email_id,
dp_contact_number,
dp_contacts,
showroom_address,
showroom_pincode,
workshop_address,
workshop_pincode,
location_district,
state_workshop,
no_of_studios,
website_update,
gst,
pan,
firm_type,
prop_managing_partners_directors,
total_prop_partners_directors,
docs_folder_link,
workshop_gma_codes,
existing_new,
dlrcode
)
FROM 'C:/Users/COMP/Downloads/DEALERS_CLEAN.csv'
WITH (
FORMAT csv,
HEADER true,
ENCODING 'UTF8'
);
```
**What happens:**
- `dealer_id` will be automatically generated as a UUID for each row
- `created_at` will be set to the current timestamp
- `updated_at` will be set to the current timestamp
- `is_active` will default to `true`
3. **Verify import:**
```sql
SELECT COUNT(*) FROM dealers;
SELECT * FROM dealers LIMIT 5;
```
### Method 2: Using Temporary Table (If CSV has 48 columns including auto-generated ones)
**Use this if your CSV includes `dealer_id`, `created_at`, `updated_at`, `is_active` columns and you're getting errors.**
This method uses a temporary table to import the CSV, then inserts into the actual table while ignoring the auto-generated columns:
```sql
-- Step 1: Create temporary table
CREATE TEMP TABLE dealers_temp (
dealer_id TEXT,
sales_code TEXT,
service_code TEXT,
-- ... (all 48 columns as TEXT)
);
-- Step 2: Import CSV into temp table
\copy dealers_temp FROM 'C:/Users/COMP/Downloads/DEALERS_CLEAN.csv' WITH (FORMAT csv, HEADER true, ENCODING 'UTF8');
-- Step 3: Insert into actual table (ignoring dealer_id, created_at, updated_at, is_active)
INSERT INTO public.dealers (
sales_code,
service_code,
-- ... (only the 44 data columns)
)
SELECT
NULLIF(sales_code, ''),
NULLIF(service_code, ''),
-- ... (convert and handle empty strings)
FROM dealers_temp
WHERE sales_code IS NOT NULL OR dealership IS NOT NULL; -- Skip completely empty rows
-- Step 4: Clean up
DROP TABLE dealers_temp;
```
**See `DEALERS_CSV_IMPORT_FIX.sql` for the complete working script.**
### Method 3: Using pgAdmin
1. Open pgAdmin and connect to your database
2. Right-click on `dealers` table → **Import/Export Data**
3. Select **Import**
4. Configure:
- **Filename**: Browse to your CSV file
- **Format**: CSV
- **Header**: Yes
- **Encoding**: UTF8
- **Delimiter**: Comma
5. Click **OK** to import
### Method 4: Using Node.js Script
Create a script to import CSV programmatically (useful for automation):
```typescript
import { sequelize } from '../config/database';
import * as fs from 'fs';
import csv from 'csv-parser';

async function importDealersFromCSV(csvFilePath: string): Promise<number> {
  // Each row's keys match the CSV header, which mirrors the dealers column names
  const dealers: Record<string, string>[] = [];

  return new Promise<number>((resolve, reject) => {
    fs.createReadStream(csvFilePath)
      .pipe(csv())
      .on('data', (row) => {
        dealers.push(row);
      })
      .on('error', reject)
      .on('end', async () => {
        try {
          // Bulk insert into dealers; dealer_id, created_at, updated_at and
          // is_active are omitted so the database defaults apply (see above).
          // Note: empty CSV cells arrive as '' and may need NULLIF-style cleanup.
          await sequelize.getQueryInterface().bulkInsert('dealers', dealers);
          console.log(`Imported ${dealers.length} dealers`);
          resolve(dealers.length);
        } catch (error) {
          reject(error);
        }
      });
  });
}
```
---
## Troubleshooting
### Common Issues and Solutions
#### 1. **"Column count mismatch" Error**
- **Problem**: CSV has different number of columns than expected
- **Solution**:
- Verify your CSV has exactly **44 columns** (excluding header)
- **Remove** `dealer_id`, `created_at`, `updated_at`, and `is_active` if they exist in your CSV
- These columns are auto-generated and should NOT be in the CSV file
#### 2. **"Invalid date format" Error**
- **Problem**: Dates not in `YYYY-MM-DD` format
- **Solution**: Convert dates to `YYYY-MM-DD` format (e.g., `2014-09-30`)
#### 3. **"Encoding error" or "Special characters not displaying correctly**
- **Problem**: CSV file not saved in UTF-8 encoding
- **Solution**:
- In Excel: Save As → CSV UTF-8 (Comma delimited) (*.csv)
- In Notepad++: Encoding → Convert to UTF-8 → Save
#### 4. **"Permission denied" Error (COPY command)**
- **Problem**: PostgreSQL server cannot access the file path
- **Solution**:
- Use absolute path with forward slashes: `C:/Users/COMP/Downloads/DEALERS_CLEAN.csv`
- Ensure file permissions allow read access
- For remote servers, upload file to server first
#### 5. **"Duplicate key" Error**
- **Problem**: Trying to import duplicate records
- **Solution**:
- Use `ON CONFLICT` handling in your import
- Or clean CSV to remove duplicates before import
#### 6. **Empty values showing as "NULL" text**
- **Problem**: CSV contains literal "NULL" or "N/A" strings
- **Solution**: Replace with empty cells in CSV
#### 7. **Commas in address fields breaking import**
- **Problem**: Address fields contain commas not properly quoted
- **Solution**: Wrap fields containing commas in double quotes:
```csv
"No.335, HVP RR Nagar Sector B, Ideal Homes Town Ship"
```
### Pre-Import Checklist
- [ ] CSV file saved in UTF-8 encoding
- [ ] **Removed** `dealer_id`, `created_at`, `updated_at`, and `is_active` columns (if present)
- [ ] Header row matches column names exactly
- [ ] All 44 columns present in correct order
- [ ] Dates formatted as `YYYY-MM-DD`
- [ ] Numeric fields contain valid numbers (or are empty)
- [ ] Text fields with commas are wrapped in quotes
- [ ] File path is accessible from PostgreSQL server
- [ ] Database connection credentials are correct
### Verification Queries
After import, run these queries to verify:
```sql
-- Count total dealers
SELECT COUNT(*) as total_dealers FROM dealers;
-- Verify auto-generated columns
SELECT
dealer_id,
created_at,
updated_at,
is_active,
dlrcode,
dealership
FROM dealers
LIMIT 5;
-- Check for null values in key fields
SELECT
COUNT(*) FILTER (WHERE dlrcode IS NULL) as null_dlrcode,
COUNT(*) FILTER (WHERE domain_id IS NULL) as null_domain_id,
COUNT(*) FILTER (WHERE dealership IS NULL) as null_dealership
FROM dealers;
-- View sample records
SELECT
dealer_id,
dlrcode,
dealership,
city,
state,
domain_id,
created_at,
is_active
FROM dealers
LIMIT 10;
-- Check date formats
SELECT
dlrcode,
date,
date_of_termination_resignation,
last_date_of_operations
FROM dealers
WHERE date IS NOT NULL
LIMIT 5;
-- Verify all dealers have dealer_id and timestamps
SELECT
COUNT(*) as total,
COUNT(dealer_id) as has_dealer_id,
COUNT(created_at) as has_created_at,
COUNT(updated_at) as has_updated_at,
COUNT(*) FILTER (WHERE is_active = true) as active_count
FROM dealers;
```
---
## Additional Notes
- **Backup**: Always backup your database before bulk imports
- **Testing**: Test import with a small sample (5-10 rows) first
- **Validation**: Validate data quality before import
- **Updates**: Use `UPSERT` logic if you need to update existing records
---
## Support
For issues or questions:
1. Check the troubleshooting section above
2. Review PostgreSQL COPY documentation
3. Verify CSV format matches the sample provided
4. Check database logs for detailed error messages
---
**Last Updated**: December 2025
**Version**: 1.0

View File

@ -1,181 +0,0 @@
# Dealer Claim Management - Fresh Start Guide
## Overview
This guide helps you start fresh with the dealer claim management system by cleaning up all existing data and ensuring the database structure is ready for new requests.
## Prerequisites
1. **Database Migrations**: Ensure all migrations are up to date, including the new tables:
- `internal_orders` (for IO details)
- `claim_budget_tracking` (for comprehensive budget tracking)
2. **Backup** (Optional but Recommended):
- If you have important data, backup your database before running cleanup
## Fresh Start Steps
### Step 1: Run Database Migrations
Ensure all new tables are created:
```bash
cd Re_Backend
npm run migrate
```
This will create:
- ✅ `internal_orders` table (for IO details with `ioRemark`)
- ✅ `claim_budget_tracking` table (for comprehensive budget tracking)
- ✅ All other dealer claim related tables
### Step 2: Clean Up All Existing Dealer Claims
Run the cleanup script to remove all existing CLAIM_MANAGEMENT requests:
```bash
npm run cleanup:dealer-claims
```
**What this script does:**
- Finds all workflow requests with `workflow_type = 'CLAIM_MANAGEMENT'`
- Deletes all related data from:
- `claim_budget_tracking`
- `internal_orders`
- `dealer_proposal_cost_items`
- `dealer_completion_details`
- `dealer_proposal_details`
- `dealer_claim_details`
- `activities`
- `work_notes`
- `documents`
- `participants`
- `approval_levels`
- `subscriptions`
- `notifications`
- `request_summaries`
- `shared_summaries`
- `conclusion_remarks`
- `tat_alerts`
- `workflow_requests` (finally)
**Note:** This script uses a database transaction, so if any step fails, all changes will be rolled back.
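For reference, the transactional deletion pattern described above can be sketched roughly as follows. The table list is abbreviated and the import path is assumed; the actual script behind `npm run cleanup:dealer-claims` is the source of truth.
```typescript
import { QueryTypes } from 'sequelize';
import { sequelize } from '../config/database'; // hypothetical path

// Delete all CLAIM_MANAGEMENT requests and their child rows inside one transaction.
export async function cleanupDealerClaims(): Promise<void> {
  await sequelize.transaction(async (t) => {
    const rows = (await sequelize.query(
      `SELECT request_id FROM workflow_requests WHERE workflow_type = 'CLAIM_MANAGEMENT'`,
      { type: QueryTypes.SELECT, transaction: t }
    )) as Array<{ request_id: string }>;
    const requestIds = rows.map((r) => r.request_id);
    if (requestIds.length === 0) return;

    // Child tables first, parent table last (list abbreviated)
    const childTables = [
      'claim_budget_tracking',
      'internal_orders',
      'dealer_proposal_cost_items',
      'dealer_completion_details',
      'dealer_proposal_details',
      'dealer_claim_details',
      'activities',
      'approval_levels',
    ];
    for (const table of childTables) {
      await sequelize.query(`DELETE FROM ${table} WHERE request_id IN (:requestIds)`, {
        replacements: { requestIds },
        transaction: t,
      });
    }
    await sequelize.query('DELETE FROM workflow_requests WHERE request_id IN (:requestIds)', {
      replacements: { requestIds },
      transaction: t,
    });
  });
}
```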
### Step 3: Verify Cleanup
After running the cleanup script, verify that no CLAIM_MANAGEMENT requests remain:
```sql
SELECT COUNT(*) FROM workflow_requests WHERE workflow_type = 'CLAIM_MANAGEMENT';
-- Should return 0
```
### Step 4: Seed Dealers (If Needed)
If you need to seed dealer users:
```bash
npm run seed:dealers
```
## Database Structure Summary
### New Tables Created
1. **`internal_orders`** - Dedicated table for IO (Internal Order) details
- `io_id` (PK)
- `request_id` (FK, unique)
- `io_number`
- `io_remark` ✅ (dedicated field, not in comments)
- `io_available_balance`
- `io_blocked_amount`
- `io_remaining_balance`
- `organized_by` (FK to users)
- `organized_at`
- `status` (PENDING, BLOCKED, RELEASED, CANCELLED)
2. **`claim_budget_tracking`** - Comprehensive budget tracking
- `budget_id` (PK)
- `request_id` (FK, unique)
- `initial_estimated_budget`
- `proposal_estimated_budget`
- `approved_budget`
- `io_blocked_amount`
- `closed_expenses`
- `final_claim_amount`
- `credit_note_amount`
- `budget_status` (DRAFT, PROPOSED, APPROVED, BLOCKED, CLOSED, SETTLED)
- `variance_amount` & `variance_percentage`
- Audit fields (last_modified_by, last_modified_at, modification_reason)
### Existing Tables (Enhanced)
- `dealer_claim_details` - Main claim information
- `dealer_proposal_details` - Step 1: Dealer proposal
- `dealer_proposal_cost_items` - Cost breakdown items
- `dealer_completion_details` - Step 5: Completion documents
## What's New
### 1. IO Details in Separate Table
- ✅ IO remark is now stored in `internal_orders.io_remark` (not parsed from comments)
- ✅ Tracks who organized the IO (`organized_by`, `organized_at`)
- ✅ Better data integrity and querying
### 2. Comprehensive Budget Tracking
- ✅ All budget-related values in one place
- ✅ Tracks budget lifecycle (DRAFT → PROPOSED → APPROVED → BLOCKED → CLOSED → SETTLED)
- ✅ Calculates variance automatically
- ✅ Audit trail for budget modifications
### 3. Proper Data Structure
- ✅ Estimated budget: `claimDetails.estimatedBudget` or `proposalDetails.totalEstimatedBudget`
- ✅ Claim amount: `completionDetails.totalClosedExpenses` or `budgetTracking.finalClaimAmount`
- ✅ IO details: `internalOrder` table (separate, dedicated)
- ✅ E-Invoice: `claimDetails.eInvoiceNumber`, `claimDetails.eInvoiceDate`
- ✅ Credit Note: `claimDetails.creditNoteNumber`, `claimDetails.creditNoteAmount`
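As a quick illustration of where the figures now come from, a read-side helper might resolve them as sketched below. Field names follow the structure above; the helper itself is hypothetical.
```typescript
// Resolve budget figures from the dedicated tables, with legacy fallbacks.
interface ClaimAggregates {
  estimatedBudget: number | null;
  claimAmount: number | null;
  ioRemark: string | null;
}

export function resolveClaimAggregates(data: {
  claimDetails?: { estimatedBudget?: number | null };
  proposalDetails?: { totalEstimatedBudget?: number | null };
  completionDetails?: { totalClosedExpenses?: number | null };
  budgetTracking?: { finalClaimAmount?: number | null };
  internalOrder?: { ioRemark?: string | null };
}): ClaimAggregates {
  return {
    // Estimated budget: claim details first, otherwise the proposal total
    estimatedBudget:
      data.claimDetails?.estimatedBudget ?? data.proposalDetails?.totalEstimatedBudget ?? null,
    // Claim amount: closed expenses from completion, otherwise the tracked final claim amount
    claimAmount:
      data.completionDetails?.totalClosedExpenses ?? data.budgetTracking?.finalClaimAmount ?? null,
    // IO remark now lives on the dedicated internal_orders record
    ioRemark: data.internalOrder?.ioRemark ?? null,
  };
}
```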
## Next Steps After Cleanup
1. **Create New Claim Requests**: Use the API or frontend to create fresh dealer claim requests
2. **Test Workflow**: Go through the 8-step workflow to ensure everything works correctly
3. **Verify Data Storage**: Check that IO details and budget tracking are properly stored
## Troubleshooting
### If Cleanup Fails
1. Check database connection
2. Verify foreign key constraints are not blocking deletion
3. Check logs for specific error messages
4. The script uses transactions, so partial deletions won't occur
### If Tables Don't Exist
Run migrations again:
```bash
npm run migrate
```
### If You Need to Restore Data
If you backed up before cleanup, restore from your backup. The cleanup script does not create backups automatically.
## API Endpoints Ready
After cleanup, you can use these endpoints:
- `POST /api/v1/dealer-claims` - Create new claim request
- `POST /api/v1/dealer-claims/:requestId/proposal` - Submit proposal (Step 1)
- `PUT /api/v1/dealer-claims/:requestId/io` - Update IO details (Step 3)
- `POST /api/v1/dealer-claims/:requestId/completion` - Submit completion (Step 5)
- `PUT /api/v1/dealer-claims/:requestId/e-invoice` - Update e-invoice (Step 7)
- `PUT /api/v1/dealer-claims/:requestId/credit-note` - Update credit note (Step 8)
## Summary
**Cleanup Script**: `npm run cleanup:dealer-claims`
**Migrations**: `npm run migrate`
**Fresh Start**: Database is ready for new dealer claim requests
**Proper Structure**: IO details and budget tracking in dedicated tables

View File

@ -1,134 +0,0 @@
# Dealer User Architecture
## Overview
**Dealers and regular users are stored in the SAME `users` table.** This is the correct approach because dealers ARE users in the system - they login via SSO, participate in workflows, receive notifications, etc.
## Why Single Table?
### ✅ Advantages:
1. **Unified Authentication**: Dealers login via the same Okta SSO as regular users
2. **Shared Functionality**: Dealers need all user features (notifications, workflow participation, etc.)
3. **Simpler Architecture**: No need for joins or complex queries
4. **Data Consistency**: Single source of truth for all users
5. **Workflow Integration**: Dealers can be approvers, participants, or action takers seamlessly
### ❌ Why NOT Separate Table:
- Would require complex joins for every query
- Data duplication (email, name, etc. in both tables)
- Dealers still need user authentication and permissions
- More complex to maintain
## How Dealers Are Identified
Dealers are identified using **three criteria** (any one matches):
1. **`employeeId` field starts with `'RE-'`** (e.g., `RE-MH-001`, `RE-DL-002`)
- This is the **primary identifier** for dealers
- Dealer code is stored in `employeeId` field
2. **`designation` contains `'dealer'`** (case-insensitive)
- Example: `"Dealer"`, `"Senior Dealer"`, etc.
3. **`department` contains `'dealer'`** (case-insensitive)
- Example: `"Dealer Operations"`, `"Dealer Management"`, etc.
## Database Schema
```sql
users {
user_id UUID PK
email VARCHAR(255) UNIQUE
okta_sub VARCHAR(100) UNIQUE -- From Okta SSO
employee_id VARCHAR(50) -- For dealers: stores dealer code (RE-MH-001)
display_name VARCHAR(255)
designation VARCHAR(255) -- For dealers: "Dealer"
department VARCHAR(255) -- For dealers: "Dealer Operations"
role ENUM('USER', 'MANAGEMENT', 'ADMIN')
is_active BOOLEAN
-- ... other user fields
}
```
## Example Data
### Regular User:
```json
{
"userId": "uuid-1",
"email": "john.doe@royalenfield.com",
"employeeId": "E12345", // Regular employee ID
"designation": "Software Engineer",
"department": "IT",
"role": "USER"
}
```
### Dealer User:
```json
{
"userId": "uuid-2",
"email": "test.2@royalenfield.com",
"employeeId": "RE-MH-001", // Dealer code stored here
"designation": "Dealer",
"department": "Dealer Operations",
"role": "USER"
}
```
## Querying Dealers
The `dealer.service.ts` uses these filters to find dealers:
```typescript
User.findAll({
where: {
[Op.or]: [
{ designation: { [Op.iLike]: '%dealer%' } },
{ employeeId: { [Op.like]: 'RE-%' } },
{ department: { [Op.iLike]: '%dealer%' } },
],
isActive: true,
}
});
```
## Seed Script Behavior
When running `npm run seed:dealers`:
1. **If user exists (from Okta SSO)**:
- ✅ Preserves `oktaSub` (real Okta subject ID)
- ✅ Preserves `role` (from Okta)
- ✅ Updates `employeeId` with dealer code
- ✅ Updates `designation` to "Dealer" (if not already)
- ✅ Updates `department` to "Dealer Operations" (if not already)
2. **If user doesn't exist**:
- Creates placeholder user
- Sets `oktaSub` to `dealer-{code}-pending-sso`
- When dealer logs in via SSO, `oktaSub` gets updated automatically
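A rough sketch of that upsert behavior, assuming the same `User` model; the import path, field names, and helper name are illustrative, not the actual seed script.
```typescript
import { User } from '../models'; // hypothetical import path

interface DealerSeed {
  dealerCode: string;   // e.g. 'RE-MH-001'
  email: string;
  displayName: string;
}

// Create or update a dealer row in the shared users table.
export async function upsertDealerUser(dealer: DealerSeed) {
  const existing = await User.findOne({ where: { email: dealer.email } });

  if (existing) {
    // Existing SSO user: keep oktaSub and role, only stamp dealer identifiers
    return existing.update({
      employeeId: dealer.dealerCode,
      designation: existing.designation || 'Dealer',
      department: existing.department || 'Dealer Operations',
    });
  }

  // No user yet: create a placeholder; oktaSub is fixed up on first SSO login
  return User.create({
    email: dealer.email,
    displayName: dealer.displayName,
    employeeId: dealer.dealerCode,
    designation: 'Dealer',
    department: 'Dealer Operations',
    role: 'USER',
    oktaSub: `dealer-${dealer.dealerCode}-pending-sso`,
    isActive: true,
  });
}
```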
## Workflow Integration
Dealers participate in workflows just like regular users:
- **As Approvers**: In Steps 1 & 5 of claim management workflow
- **As Participants**: Can be added to any workflow
- **As Action Takers**: Can submit proposals, completion documents, etc.
The system identifies them as dealers by checking `employeeId` starting with `'RE-'` or `designation` containing `'dealer'`.
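For in-memory checks (for example when rendering a participant list), the same criteria can be expressed as a small predicate. This helper is illustrative, not an existing utility.
```typescript
interface UserLike {
  employeeId?: string | null;
  designation?: string | null;
  department?: string | null;
}

// Mirrors the dealer identification criteria used by dealer.service.ts
export function isDealer(user: UserLike): boolean {
  const employeeId = user.employeeId ?? '';
  const designation = (user.designation ?? '').toLowerCase();
  const department = (user.department ?? '').toLowerCase();

  return (
    employeeId.startsWith('RE-') ||
    designation.includes('dealer') ||
    department.includes('dealer')
  );
}
```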
## API Endpoints
- `GET /api/v1/dealers` - Get all dealers (filters users table)
- `GET /api/v1/dealers/code/:dealerCode` - Get dealer by code
- `GET /api/v1/dealers/email/:email` - Get dealer by email
- `GET /api/v1/dealers/search?q=term` - Search dealers
All endpoints query the same `users` table with dealer-specific filters.
## Conclusion
**✅ Single `users` table is the correct approach.** No separate dealer table needed. Dealers are users with special identification markers (dealer code in `employeeId`, dealer designation, etc.).

View File

@ -1,695 +0,0 @@
# DMS Integration API Documentation
## Overview
This document describes the data exchange between the Royal Enfield Workflow System (RE-Flow) and the DMS (Document Management System) for:
1. **E-Invoice Generation** - Submitting claim data to DMS for e-invoice creation
2. **Credit Note Generation** - Fetching/Generating credit note from DMS
## Data Flow Overview
### Inputs from RE-Flow System
The following data is sent **FROM** RE-Flow System **TO** DMS:
1. **Dealer Code** - Unique dealer identifier
2. **Dealer Name** - Dealer business name
3. **Activity Name** - Name of the activity/claim type (see Activity Types below)
4. **Activity Description** - Detailed description of the activity
5. **Claim Amount** - Total claim amount (before taxes)
6. **Request Number** - Unique request identifier from RE-Flow (e.g., "REQ-2025-12-0001")
7. **IO Number** - Internal Order number (if available)
### Inputs from DMS Team
The following data is **PROVIDED BY** DMS Team **TO** RE-Flow System (via webhook):
3. **Document No** - Generated invoice/credit note number
4. **Document Type** - Type of document ("E-INVOICE", "INVOICE", or "CREDIT_NOTE")
10. **Item Code No** - Item code number (same as provided in request, used for GST calculation)
11. **HSN/SAC Code** - HSN/SAC code for tax calculation (determined by DMS based on Item Code No)
12. **CGST %** - CGST percentage (e.g., 9.0 for 9%) - calculated by DMS based on Item Code No and dealer location
13. **SGST %** - SGST percentage (e.g., 9.0 for 9%) - calculated by DMS based on Item Code No and dealer location
14. **IGST %** - IGST percentage (0.0 for intra-state, >0 for inter-state) - calculated by DMS based on Item Code No and dealer location
15. **CGST Amount** - CGST amount in INR - calculated by DMS
16. **SGST Amount** - SGST amount in INR - calculated by DMS
17. **IGST Amount** - IGST amount in INR - calculated by DMS
18. **Credit Type** - Type of credit: "GST" or "Commercial Credit" (for credit notes only)
19. **IRN No** - Invoice Reference Number from GST portal (response from GST system)
20. **SAP Credit Note No** - SAP Credit Note Number (response from SAP system, for credit notes only)
**Important:** Item Code No is used by DMS for GST calculation. DMS determines HSN/SAC code, tax percentages, and tax amounts based on the Item Code No and dealer location.
### Predefined Activity Types
The following is the complete list of predefined Activity Types that RE-Flow System uses. DMS Team must provide **Item Code No** mapping for each Activity Type:
- **Riders Mania Claims**
- **Marketing Cost Bike to Vendor**
- **Media Bike Service**
- **ARAI Motorcycle Liquidation**
- **ARAI Certification STA Approval CNR**
- **Procurement of Spares/Apparel/GMA for Events**
- **Fuel for Media Bike Used for Event**
- **Motorcycle Buyback and Goodwill Support**
- **Liquidation of Used Motorcycle**
- **Motorcycle Registration CNR (Owned or Gifted by RE)**
- **Legal Claims Reimbursement**
- **Service Camp Claims**
- **Corporate Claims Institutional Sales PDI**
**Item Code No Lookup Process:**
1. RE-Flow sends `activity_name` to DMS
2. DMS responds with corresponding `item_code_no` based on activity type mapping
3. RE-Flow includes the `item_code_no` in invoice/credit note generation payload
4. DMS uses `item_code_no` to determine HSN/SAC code and calculate GST (CGST/SGST/IGST percentages and amounts)
**Note:** DMS Team must configure the Activity Type → Item Code No mapping in their system. This mapping is used for GST calculation.
---
## 1. E-Invoice Generation (DMS Push)
### When It's Called
This API is called when:
- **Step 6** of the claim management workflow is approved (Requestor approves the claim)
- User manually pushes claim data to DMS via the "Push to DMS" action
- System auto-generates e-invoice after claim approval
### Request Details
**Endpoint:** `POST {DMS_BASE_URL}/api/invoices/generate`
**Headers:**
```http
Authorization: Bearer {DMS_API_KEY}
Content-Type: application/json
```
**Request Body (Complete Payload):**
```json
{
"request_number": "REQ-2025-12-0001",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001"
}
```
**Complete Webhook Response Payload (from DMS to RE-Flow):**
After processing, DMS will send the following complete payload to RE-Flow webhook endpoint `POST /api/v1/webhooks/dms/invoice`:
```json
{
"request_number": "REQ-2025-12-0001",
"document_no": "EINV-2025-001234",
"document_type": "E-INVOICE",
"document_date": "2025-12-17T10:30:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 13500.00,
"sgst_amount": 13500.00,
"igst_amount": 0.00,
"total_amount": 177000.00,
"irn_no": "IRN123456789012345678901234567890123456789012345678901234567890",
"invoice_file_path": "https://dms.example.com/invoices/EINV-2025-001234.pdf",
"error_message": null,
"timestamp": "2025-12-17T10:30:00.000Z"
}
```
**Important Notes:**
- RE-Flow sends all required details including `item_code_no` (determined by DMS based on `activity_name` mapping)
- DMS processes the invoice generation **asynchronously**
- DMS responds with acknowledgment only
- **Status Verification (Primary Method):** DMS sends webhook to RE-Flow webhook URL `POST /api/v1/webhooks/dms/invoice` (see DMS_WEBHOOK_API.md) to notify when invoice is generated with complete details
- `item_code_no` is used by DMS for GST calculation (HSN/SAC code, tax percentages, tax amounts)
- **Status Verification (Backup Method):** If webhook fails, RE-Flow can use backup status check API (see section "Backup: Status Check API" below)
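On the RE-Flow side, the webhook payload shown above could be handled roughly as in the Express-style sketch below. The route path matches the documented endpoint, but the handler body, model update, and error handling are assumptions, not the actual implementation.
```typescript
import express, { Request, Response } from 'express';

const router = express.Router();

// Receives the asynchronous e-invoice result pushed by DMS.
router.post('/api/v1/webhooks/dms/invoice', async (req: Request, res: Response) => {
  const { request_number, document_no, document_type, irn_no, total_amount, error_message } = req.body;

  if (!request_number) {
    return res.status(400).json({ success: false, error: 'request_number is required' });
  }

  if (error_message) {
    // DMS reported a failure; acknowledge it instead of storing partial data
    console.error(`DMS invoice generation failed for ${request_number}: ${error_message}`);
    return res.status(200).json({ success: true, message: 'Failure acknowledged' });
  }

  // Persist the generated e-invoice details against the claim (model name assumed):
  // await DealerClaimDetails.update(
  //   { eInvoiceNumber: document_no, eInvoiceDate: new Date(req.body.document_date) },
  //   { where: { requestNumber: request_number } }
  // );
  console.log(`E-invoice ${document_no} (${document_type}) received, IRN ${irn_no}, total ${total_amount}`);

  return res.status(200).json({ success: true, message: 'Invoice webhook processed' });
});

export default router;
```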
### Request Field Descriptions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `request_number` | string | ✅ Yes | Unique request number from RE-Flow System (e.g., "REQ-2025-12-0001") |
| `dealer_code` | string | ✅ Yes | Dealer's unique code/identifier |
| `dealer_name` | string | ✅ Yes | Dealer's business name |
| `activity_name` | string | ✅ Yes | Activity type name (must match one of the predefined Activity Types) |
| `activity_description` | string | ✅ Yes | Detailed description of the activity/claim |
| `claim_amount` | number | ✅ Yes | Total claim amount before taxes (in INR, decimal format) |
| `io_number` | string | No | Internal Order (IO) number if available |
| `item_code_no` | string | ✅ Yes | Item code number determined by DMS based on `activity_name` mapping. RE-Flow includes this in the request. Used by DMS for GST calculation. |
### Expected Response
**Success Response (200 OK):**
**Note:** DMS should respond with a simple acknowledgment. The actual invoice details (document number, tax calculations, IRN, etc.) will be sent back to RE-Flow via **webhook** (see DMS_WEBHOOK_API.md).
```json
{
"success": true,
"message": "Invoice generation request received and queued for processing",
"request_number": "REQ-2025-12-0001"
}
```
### Response Field Descriptions
| Field | Type | Description |
|-------|------|-------------|
| `success` | boolean | Indicates if the request was accepted |
| `message` | string | Status message |
| `request_number` | string | Echo of the request number for reference |
**Important:**
- The actual invoice generation happens **asynchronously**
- DMS will send the complete invoice details (including document number, tax calculations, IRN, file path, `item_code_no`, etc.) via **webhook** to RE-Flow System once processing is complete
- Webhook endpoint: `POST /api/v1/webhooks/dms/invoice` (see DMS_WEBHOOK_API.md for details)
- If webhook delivery fails, RE-Flow can use the backup status check API (see section "Backup: Status Check API" below)
### Error Response
**Error Response (400/500):**
```json
{
"success": false,
"error": "Error message describing what went wrong",
"error_code": "INVALID_DEALER_CODE"
}
```
### Error Scenarios
| Error Code | Description | Possible Causes |
|------------|-------------|-----------------|
| `INVALID_DEALER_CODE` | Dealer code not found in DMS | Dealer not registered in DMS |
| `INVALID_AMOUNT` | Amount validation failed | Negative amount or invalid format |
| `IO_NOT_FOUND` | IO number not found | Invalid or non-existent IO number |
| `DMS_SERVICE_ERROR` | DMS internal error | DMS system unavailable or processing error |
### Example cURL Request
```bash
curl -X POST "https://dms.example.com/api/invoices/generate" \
-H "Authorization: Bearer YOUR_DMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"request_number": "REQ-2025-12-0001",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001"
}'
```
---
## 2. Credit Note Generation (DMS Fetch)
### When It's Called
This API is called when:
- **Step 8** of the claim management workflow is initiated (Credit Note Confirmation)
- User requests to generate/fetch credit note from DMS
- System auto-generates credit note after e-invoice is confirmed
### Request Details
**Endpoint:** `POST {DMS_BASE_URL}/api/credit-notes/generate`
**Headers:**
```http
Authorization: Bearer {DMS_API_KEY}
Content-Type: application/json
```
**Request Body (Complete Payload):**
```json
{
"request_number": "REQ-2025-12-0001",
"e_invoice_number": "EINV-2025-001234",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001"
}
```
**Complete Webhook Response Payload (from DMS to RE-Flow):**
After processing, DMS will send the following complete payload to RE-Flow webhook endpoint `POST /api/v1/webhooks/dms/credit-note`:
```json
{
"request_number": "REQ-2025-12-0001",
"document_no": "CN-2025-001234",
"document_type": "CREDIT_NOTE",
"document_date": "2025-12-17T11:00:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 13500.00,
"sgst_amount": 13500.00,
"igst_amount": 0.00,
"total_amount": 177000.00,
"credit_type": "GST",
"irn_no": "IRN987654321098765432109876543210987654321098765432109876543210",
"sap_credit_note_no": "SAP-CN-2025-001234",
"credit_note_file_path": "https://dms.example.com/credit-notes/CN-2025-001234.pdf",
"error_message": null,
"timestamp": "2025-12-17T11:00:00.000Z"
}
```
**Important Notes:**
- RE-Flow sends `activity_name` in the request
- DMS should use the same Item Code No from the original invoice (determined by `activity_name`)
- DMS returns `item_code_no` in the webhook response (see DMS_WEBHOOK_API.md)
- `item_code_no` is used by DMS for GST calculation (HSN/SAC code, tax percentages, tax amounts)
### Request Field Descriptions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `request_number` | string | ✅ Yes | Original request number from RE-Flow System |
| `e_invoice_number` | string | ✅ Yes | E-invoice number that was generated earlier (must exist in DMS) |
| `dealer_code` | string | ✅ Yes | Dealer's unique code/identifier (must match invoice) |
| `dealer_name` | string | ✅ Yes | Dealer's business name |
| `activity_name` | string | ✅ Yes | Activity type name (must match original invoice) |
| `activity_description` | string | ✅ Yes | Activity description (must match original invoice) |
| `claim_amount` | number | ✅ Yes | Credit note amount (in INR, decimal format) - typically matches invoice amount |
| `io_number` | string | No | Internal Order (IO) number if available |
| `item_code_no` | string | ✅ Yes | Item code number (same as original invoice, determined by `activity_name` mapping). RE-Flow includes this in the request. Used by DMS for GST calculation. |
### Expected Response
**Success Response (200 OK):**
**Note:** DMS should respond with a simple acknowledgment. The actual credit note details (document number, tax calculations, SAP credit note number, IRN, etc.) will be sent back to RE-Flow via **webhook** (see DMS_WEBHOOK_API.md).
```json
{
"success": true,
"message": "Credit note generation request received and queued for processing",
"request_number": "REQ-2025-12-0001"
}
```
### Response Field Descriptions
| Field | Type | Description |
|-------|------|-------------|
| `success` | boolean | Indicates if the request was accepted |
| `message` | string | Status message |
| `request_number` | string | Echo of the request number for reference |
**Important:** The actual credit note generation happens asynchronously. DMS will send the complete credit note details (including document number, tax calculations, SAP credit note number, IRN, file path, etc.) via webhook to RE-Flow System once processing is complete.
### Error Response
**Error Response (400/500):**
```json
{
"success": false,
"error": "Error message describing what went wrong",
"error_code": "INVOICE_NOT_FOUND"
}
```
### Error Scenarios
| Error Code | Description | Possible Causes |
|------------|-------------|-----------------|
| `INVOICE_NOT_FOUND` | E-invoice number not found in DMS | Invoice was not generated or invalid invoice number |
| `INVALID_AMOUNT` | Amount validation failed | Amount mismatch with invoice or invalid format |
| `DEALER_MISMATCH` | Dealer code/name doesn't match invoice | Different dealer code than original invoice |
| `CREDIT_NOTE_EXISTS` | Credit note already generated for this invoice | Duplicate request for same invoice |
| `DMS_SERVICE_ERROR` | DMS internal error | DMS system unavailable or processing error |
### Example cURL Request
```bash
curl -X POST "https://dms.example.com/api/credit-notes/generate" \
-H "Authorization: Bearer YOUR_DMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"request_number": "REQ-2025-12-0001",
"e_invoice_number": "EINV-2025-001234",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001"
}'
```
---
## Configuration
### Environment Variables
The following environment variables need to be configured in the RE Workflow System:
```env
# DMS Integration Configuration
DMS_BASE_URL=https://dms.example.com
DMS_API_KEY=your_dms_api_key_here
# Alternative: Username/Password Authentication
DMS_USERNAME=your_dms_username
DMS_PASSWORD=your_dms_password
```
### Authentication Methods
DMS supports two authentication methods:
1. **API Key Authentication** (Recommended)
- Set `DMS_API_KEY` in environment variables
- Header: `Authorization: Bearer {DMS_API_KEY}`
2. **Username/Password Authentication**
- Set `DMS_USERNAME` and `DMS_PASSWORD` in environment variables
- Use Basic Auth or custom authentication as per DMS requirements
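Both methods reduce to setting the right `Authorization` header on outgoing DMS calls. A minimal sketch, assuming `axios` and the environment variable names above; the helper name is illustrative and is not the actual `dmsIntegration.service.ts` implementation:
```typescript
import axios from 'axios';

// Builds an axios instance for DMS calls using whichever credentials are configured.
// Falls back to Basic Auth when no API key is present.
function createDmsClient() {
  const baseURL = process.env.DMS_BASE_URL;
  const apiKey = process.env.DMS_API_KEY;

  if (apiKey) {
    // API Key authentication (recommended)
    return axios.create({
      baseURL,
      headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    });
  }

  // Username/password authentication (Basic Auth)
  return axios.create({
    baseURL,
    auth: {
      username: process.env.DMS_USERNAME || '',
      password: process.env.DMS_PASSWORD || '',
    },
    headers: { 'Content-Type': 'application/json' },
  });
}
```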
---
## Integration Flow
### E-Invoice Generation Flow
```
┌─────────────────┐
│ RE-Flow System  │
│    (Step 6)     │
└────────┬────────┘
         │ POST /api/invoices/generate
         │ { request_number, dealer_code, activity_name,
         │   claim_amount, item_code_no, ... }
         ▼
┌─────────────────┐
│   DMS System    │
│                 │
│ - Validates     │
│ - Queues for    │
│   processing    │
│                 │
│ Response:       │
│{ success: true }│
└────────┬────────┘
         │ (Asynchronous Processing)
         │ - Determines Item Code No
         │ - Calculates GST
         │ - Generates E-Invoice
         │ - Gets IRN from GST
         │
         │ POST /api/v1/webhooks/dms/invoice
         │ { document_no, item_code_no,
         │   hsn_sac_code, tax details,
         │   irn_no, invoice_file_path, ... }
         ▼
┌─────────────────┐
│ RE-Flow System  │
│                 │
│ - Receives      │
│   webhook       │
│ - Stores        │
│   invoice data  │
│ - Updates       │
│   workflow      │
│ - Moves to      │
│   Step 8        │
└─────────────────┘

Backup (if webhook fails):

┌─────────────────┐
│ RE-Flow System  │
│                 │
│ GET /api/invoices/status/{request_number}
│                 │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   DMS System    │
│                 │
│ Returns current │
│ invoice status  │
│ and details     │
└─────────────────┘
```
### Credit Note Generation Flow
```
┌─────────────────┐
│ RE-Flow System  │
│    (Step 8)     │
└────────┬────────┘
         │ POST /api/credit-notes/generate
         │ { e_invoice_number, request_number,
         │   activity_name, claim_amount,
         │   item_code_no, ... }
         ▼
┌─────────────────┐
│   DMS System    │
│                 │
│ - Validates     │
│   invoice       │
│ - Queues for    │
│   processing    │
│                 │
│ Response:       │
│{ success: true }│
└────────┬────────┘
         │ (Asynchronous Processing)
         │ - Uses Item Code No from invoice
         │ - Calculates GST
         │ - Generates Credit Note
         │ - Gets IRN from GST
         │ - Gets SAP Credit Note No
         │
         │ POST /api/v1/webhooks/dms/credit-note
         │ { document_no, item_code_no,
         │   hsn_sac_code, tax details,
         │   irn_no, sap_credit_note_no,
         │   credit_note_file_path, ... }
         ▼
┌─────────────────┐
│ RE-Flow System  │
│                 │
│ - Receives      │
│   webhook       │
│ - Stores        │
│   credit note   │
│ - Updates       │
│   workflow      │
│ - Completes     │
│   request       │
└─────────────────┘

Backup (if webhook fails):

┌─────────────────┐
│ RE-Flow System  │
│                 │
│ GET /api/credit-notes/status/{request_number}
│                 │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   DMS System    │
│                 │
│ Returns current │
│ credit note     │
│ status and      │
│ details         │
└─────────────────┘
```
---
## Data Mapping
### RE-Flow System → DMS (API Request)
| RE-Flow Field | DMS Request Field | Notes |
|----------------|-------------------|-------|
| `request.requestNumber` | `request_number` | Direct mapping |
| `claimDetails.dealerCode` | `dealer_code` | Direct mapping |
| `claimDetails.dealerName` | `dealer_name` | Direct mapping |
| `claimDetails.activityName` | `activity_name` | Must match predefined Activity Types |
| `claimDetails.activityDescription` | `activity_description` | Direct mapping |
| `budgetTracking.closedExpenses` | `claim_amount` | Total claim amount (before taxes) |
| `internalOrder.ioNumber` | `io_number` | Optional, if available |
| `itemCodeNo` (determined by DMS) | `item_code_no` | Included in payload. DMS determines this based on `activity_name` mapping. Used by DMS for GST calculation. |
| `claimInvoice.invoiceNumber` | `e_invoice_number` | For credit note request only |
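As an illustration of the mapping table above, the invoice request payload could be assembled roughly as follows. The input interface and function name are illustrative; the real mapping lives in `dmsIntegration.service.ts`:
```typescript
// Sketch: map internal claim data (camelCase, per the table above)
// to the snake_case DMS invoice request payload.
interface ClaimContext {
  requestNumber: string;
  dealerCode: string;
  dealerName: string;
  activityName: string;
  activityDescription: string;
  closedExpenses: number;   // from budget tracking
  ioNumber?: string;        // optional
  itemCodeNo: string;       // per the DMS activity mapping
}

function buildInvoicePayload(claim: ClaimContext) {
  return {
    request_number: claim.requestNumber,
    dealer_code: claim.dealerCode,
    dealer_name: claim.dealerName,
    activity_name: claim.activityName,
    activity_description: claim.activityDescription,
    claim_amount: claim.closedExpenses,
    io_number: claim.ioNumber ?? null,
    item_code_no: claim.itemCodeNo,
  };
}
```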
### DMS → RE-Flow System (Webhook Response)
**Note:** All invoice and credit note details are sent via webhook (see DMS_WEBHOOK_API.md), not in the API response.
| DMS Webhook Field | RE-Flow Database Field | Table | Notes |
|-------------------|------------------------|-------|-------|
| `document_no` | `invoice_number` / `credit_note_number` | `claim_invoices` / `claim_credit_notes` | Generated by DMS |
| `document_date` | `invoice_date` / `credit_note_date` | `claim_invoices` / `claim_credit_notes` | Converted to Date object |
| `total_amount` | `invoice_amount` / `credit_note_amount` | `claim_invoices` / `claim_credit_notes` | Includes taxes |
| `invoice_file_path` | `invoice_file_path` | `claim_invoices` | URL/path to PDF |
| `credit_note_file_path` | `credit_note_file_path` | `claim_credit_notes` | URL/path to PDF |
| `irn_no` | Stored in `description` field | Both tables | From GST portal |
| `sap_credit_note_no` | `sap_document_number` | `claim_credit_notes` | From SAP system |
| `item_code_no` | Stored in `description` field | Both tables | Provided by DMS based on activity |
| `hsn_sac_code` | Stored in `description` field | Both tables | Provided by DMS |
| `cgst_amount`, `sgst_amount`, `igst_amount` | Stored in `description` field | Both tables | Tax breakdown |
| `credit_type` | Stored in `description` field | `claim_credit_notes` | "GST" or "Commercial Credit" |
---
## Testing
### Mock Mode
When DMS is not configured, the system operates in **mock mode**:
- Returns mock invoice/credit note numbers
- Logs warnings instead of making actual API calls
- Useful for development and testing
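A minimal sketch of how such a fallback can be gated on configuration; the mock values and the `fetch` call are illustrative, not the actual service code:
```typescript
// Sketch: return a mock acknowledgment when DMS is not configured (mock mode).
async function generateInvoice(payload: { request_number: string }) {
  if (!process.env.DMS_BASE_URL || !process.env.DMS_API_KEY) {
    console.warn('[DMS] Not configured - running in mock mode, no API call made');
    return {
      success: true,
      message: 'Mock mode: invoice generation simulated',
      request_number: payload.request_number,
    };
  }

  // Real call to DMS (see the cURL example above)
  const response = await fetch(`${process.env.DMS_BASE_URL}/api/invoices/generate`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.DMS_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  return response.json();
}
```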
### Test Data
**E-Invoice Test Request:**
```json
{
"request_number": "REQ-TEST-001",
"dealer_code": "TEST-DLR-001",
"dealer_name": "Test Dealer",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Test invoice generation for marketing activity",
"claim_amount": 10000.00,
"io_number": "IO-TEST-001",
"item_code_no": "ITEM-001"
}
```
**Credit Note Test Request:**
```json
{
"request_number": "REQ-TEST-001",
"e_invoice_number": "EINV-TEST-001",
"dealer_code": "TEST-DLR-001",
"dealer_name": "Test Dealer",
"activity_name": "Marketing Cost Bike to Vendor",
"activity_description": "Test credit note generation for marketing activity",
"claim_amount": 10000.00,
"io_number": "IO-TEST-001",
"item_code_no": "ITEM-001"
}
```
---
## Notes
1. **Asynchronous Processing**: Invoice and credit note generation happens asynchronously. DMS should:
- Accept the request immediately and return a success acknowledgment
- Process the invoice/credit note in the background
- Send complete details via webhook once processing is complete
2. **Activity Type to Item Code No Mapping**:
- DMS Team must provide **Item Code No** mapping for each predefined Activity Type
- This mapping should be configured in DMS system
- RE-Flow includes `item_code_no` in the request payload (determined by DMS based on `activity_name` mapping); an illustrative mapping is sketched after this list
- DMS uses Item Code No to determine HSN/SAC code and calculate GST (CGST/SGST/IGST percentages and amounts)
- DMS returns `item_code_no` in the webhook response for verification
3. **Tax Calculation**: DMS is responsible for:
- Determining CGST/SGST/IGST percentages based on dealer location and activity type
- Calculating tax amounts
- Providing HSN/SAC codes
4. **Amount Validation**: DMS should validate that credit note amount matches or is less than the original invoice amount.
5. **Invoice Dependency**: Credit note generation requires a valid e-invoice to exist in DMS first.
6. **Error Handling**: RE-Flow System handles DMS errors gracefully and allows manual entry if DMS is unavailable.
7. **Retry Logic**: Consider implementing retry logic for transient DMS failures.
8. **Webhooks (Primary Method)**: DMS **MUST** send webhooks to notify RE-Flow System when invoice/credit note processing is complete. See DMS_WEBHOOK_API.md for webhook specifications. This is the **primary method** for status verification.
9. **Status Check API (Backup Method)**: If webhook delivery fails, RE-Flow can use the backup status check API to verify invoice/credit note generation status. See section "Backup: Status Check API" above.
10. **IRN Generation**: DMS should generate IRN (Invoice Reference Number) from GST portal and include it in the webhook response.
11. **SAP Integration**: For credit notes, DMS should generate SAP Credit Note Number and include it in the webhook response.
12. **Webhook URL Configuration**: DMS must be configured with RE-Flow webhook URLs:
- Invoice Webhook: `POST /api/v1/webhooks/dms/invoice`
- Credit Note Webhook: `POST /api/v1/webhooks/dms/credit-note`
- See DMS_WEBHOOK_API.md for complete webhook specifications
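For illustration only, the Activity Type → Item Code No mapping referenced in note 2 might take the shape below. The codes shown are hypothetical placeholders; the authoritative mapping is configured and owned by the DMS Team:
```typescript
// Hypothetical Activity Type -> Item Code mapping (placeholder values only).
// The real mapping is configured in DMS; this only illustrates the expected shape.
const ACTIVITY_ITEM_CODE_MAP: Record<string, { itemCodeNo: string; hsnSacCode: string }> = {
  'Marketing Cost Bike to Vendor': { itemCodeNo: 'ITEM-001', hsnSacCode: '998314' },
  // ...one entry per predefined Activity Type
};

function resolveItemCode(activityName: string): string | undefined {
  return ACTIVITY_ITEM_CODE_MAP[activityName]?.itemCodeNo;
}
```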
---
## Support
For issues or questions regarding DMS integration:
- **Backend Team**: Check logs in `Re_Backend/src/services/dmsIntegration.service.ts`
- **DMS Team**: Contact DMS support for API-related issues
- **Documentation**: Refer to DMS API documentation for latest updates
---
**Last Updated:** December 19, 2025
**Version:** 2.0
## Changelog
### Version 2.0 (December 19, 2025)
- Added clear breakdown of inputs from RE-Flow vs DMS Team
- Added predefined Activity Types list
- Updated request/response structure to reflect asynchronous processing
- Clarified that detailed responses come via webhook, not API response
- Updated field names to match actual implementation (`claim_amount` instead of `amount`, `activity_name`, `activity_description`)
- Added notes about Item Code No mapping requirement for DMS Team
- Updated data mapping section with webhook fields
### Version 1.0 (December 17, 2025)
- Initial documentation

View File

@ -1,574 +0,0 @@
# DMS Webhook API Documentation
## Overview
This document describes the webhook endpoints that DMS (Document Management System) will call to notify the RE Workflow System after processing invoice and credit note generation requests.
---
## Table of Contents
1. [Webhook Overview](#1-webhook-overview)
2. [Authentication](#2-authentication)
3. [Invoice Webhook](#3-invoice-webhook)
4. [Credit Note Webhook](#4-credit-note-webhook)
5. [Payload Specifications](#5-payload-specifications)
6. [Error Handling](#6-error-handling)
7. [Testing](#7-testing)
---
## 1. Webhook Overview
### 1.1 Purpose
After RE Workflow System pushes invoice/credit note generation requests to DMS, DMS processes them and sends webhook callbacks with the generated document details, tax information, and other metadata.
### 1.2 Webhook Flow
```
┌─────────────────┐                        ┌─────────────────┐
│   RE Workflow   │                        │   DMS System    │
│     System      │                        │                 │
└────────┬────────┘                        └────────┬────────┘
         │                                          │
         │ POST /api/invoices/generate              │
         │ { request_number, dealer_code, ... }     │
         ├─────────────────────────────────────────►│
         │                                          │
         │                                          │ Process Invoice
         │                                          │ Generate Document
         │                                          │ Calculate GST
         │                                          │
         │                                          │ POST /api/v1/webhooks/dms/invoice
         │                                          │ { document_no, irn_no, ... }
         │◄─────────────────────────────────────────┤
         │                                          │
         │ Update Invoice Record                    │
         │ Store IRN, GST Details, etc.             │
         │                                          │
```
---
## 2. Authentication
### 2.1 Webhook Signature
DMS must include a signature in the request header for security validation:
**Header:**
```
X-DMS-Signature: <HMAC-SHA256-signature>
```
**Signature Generation:**
1. Create HMAC-SHA256 hash of the request body (JSON string)
2. Use the shared secret key (`DMS_WEBHOOK_SECRET`)
3. Send the hex-encoded signature in the `X-DMS-Signature` header
**Example:**
```javascript
const crypto = require('crypto');
const body = JSON.stringify(payload);
const signature = crypto
.createHmac('sha256', DMS_WEBHOOK_SECRET)
.update(body)
.digest('hex');
// Send in header: X-DMS-Signature: <signature>
```
### 2.2 Environment Variable
Configure the webhook secret in RE Workflow System:
```env
DMS_WEBHOOK_SECRET=your_shared_secret_key_here
```
**Note:** If `DMS_WEBHOOK_SECRET` is not configured, signature validation is skipped (development mode only).
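On the RE Workflow side, verification mirrors the generation above. A minimal Express middleware sketch, assuming the JSON body is re-serialized exactly as DMS serialized it (in practice the raw request body should be captured for hashing); the middleware name is illustrative:
```typescript
import crypto from 'crypto';
import { Request, Response, NextFunction } from 'express';

// Sketch: verify the X-DMS-Signature header against the request body.
export function verifyDmsSignature(req: Request, res: Response, next: NextFunction) {
  const secret = process.env.DMS_WEBHOOK_SECRET;
  if (!secret) {
    // Development mode: signature validation skipped when no secret is configured
    return next();
  }

  const received = String(req.headers['x-dms-signature'] || '');
  const expected = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(req.body))
    .digest('hex');

  const valid =
    received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));

  if (!valid) {
    return res.status(401).json({ success: false, message: 'Invalid webhook signature' });
  }
  next();
}
```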
---
## 3. Invoice Webhook
### 3.1 Endpoint
**URL:** `POST /api/v1/webhooks/dms/invoice`
**Base URL Examples:**
- Development: `http://localhost:5000/api/v1/webhooks/dms/invoice`
- UAT: `https://reflow-uat.royalenfield.com/api/v1/webhooks/dms/invoice`
- Production: `https://reflow.royalenfield.com/api/v1/webhooks/dms/invoice`
### 3.2 Request Headers
```http
Content-Type: application/json
X-DMS-Signature: <HMAC-SHA256-signature>
User-Agent: DMS-Webhook-Client/1.0
```
### 3.3 Request Payload
```json
{
"request_number": "REQ-2025-12-0001",
"document_no": "EINV-2025-001234",
"document_type": "E-INVOICE",
"document_date": "2025-12-17T10:30:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Campaign",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 13500.00,
"sgst_amount": 13500.00,
"igst_amount": 0.00,
"total_amount": 177000.00,
"irn_no": "IRN123456789012345678901234567890123456789012345678901234567890",
"invoice_file_path": "https://dms.example.com/invoices/EINV-2025-001234.pdf",
"error_message": null,
"timestamp": "2025-12-17T10:30:00.000Z"
}
```
### 3.4 Payload Field Descriptions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `request_number` | string | ✅ Yes | Original request number from RE Workflow System (e.g., "REQ-2025-12-0001") |
| `document_no` | string | ✅ Yes | Generated invoice/document number from DMS |
| `document_type` | string | ✅ Yes | Type of document: "E-INVOICE" or "INVOICE" |
| `document_date` | string (ISO 8601) | ✅ Yes | Date when invoice was generated |
| `dealer_code` | string | ✅ Yes | Dealer code (should match original request) |
| `dealer_name` | string | ✅ Yes | Dealer name (should match original request) |
| `activity_name` | string | ✅ Yes | Activity name from original request |
| `activity_description` | string | ✅ Yes | Activity description from original request |
| `claim_amount` | number | ✅ Yes | Original claim amount (before tax) |
| `io_number` | string | No | Internal Order number (if provided in original request) |
| `item_code_no` | string | ✅ Yes | Item code number (provided by DMS team based on activity list) |
| `hsn_sac_code` | string | ✅ Yes | HSN/SAC code for the invoice |
| `cgst_percentage` | number | ✅ Yes | CGST percentage (e.g., 9.0 for 9%) |
| `sgst_percentage` | number | ✅ Yes | SGST percentage (e.g., 9.0 for 9%) |
| `igst_percentage` | number | ✅ Yes | IGST percentage (0.0 for intra-state, >0 for inter-state) |
| `cgst_amount` | number | ✅ Yes | CGST amount in INR |
| `sgst_amount` | number | ✅ Yes | SGST amount in INR |
| `igst_amount` | number | ✅ Yes | IGST amount in INR |
| `total_amount` | number | ✅ Yes | Total invoice amount (claim_amount + all taxes) |
| `irn_no` | string | No | Invoice Reference Number (IRN) from GST portal (if generated) |
| `invoice_file_path` | string | ✅ Yes | URL or path to the generated invoice PDF/document file |
| `error_message` | string | No | Error message if invoice generation failed |
| `timestamp` | string (ISO 8601) | ✅ Yes | Timestamp when webhook is sent |
### 3.5 Success Response
**Status Code:** `200 OK`
```json
{
"success": true,
"message": "Invoice webhook processed successfully",
"data": {
"message": "Invoice webhook processed successfully",
"invoiceNumber": "EINV-2025-001234",
"requestNumber": "REQ-2025-12-0001"
}
}
```
### 3.6 Error Response
**Status Code:** `400 Bad Request` or `500 Internal Server Error`
```json
{
"success": false,
"message": "Failed to process invoice webhook",
"error": "Request not found: REQ-2025-12-0001"
}
```
---
## 4. Credit Note Webhook
### 4.1 Endpoint
**URL:** `POST /api/v1/webhooks/dms/credit-note`
**Base URL Examples:**
- Development: `http://localhost:5000/api/v1/webhooks/dms/credit-note`
- UAT: `https://reflow-uat.royalenfield.com/api/v1/webhooks/dms/credit-note`
- Production: `https://reflow.royalenfield.com/api/v1/webhooks/dms/credit-note`
### 4.2 Request Headers
```http
Content-Type: application/json
X-DMS-Signature: <HMAC-SHA256-signature>
User-Agent: DMS-Webhook-Client/1.0
```
### 4.3 Request Payload
```json
{
"request_number": "REQ-2025-12-0001",
"document_no": "CN-2025-001234",
"document_type": "CREDIT_NOTE",
"document_date": "2025-12-17T11:00:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "ABC Motors",
"activity_name": "Marketing Campaign",
"activity_description": "Q4 Marketing Campaign for Royal Enfield",
"claim_amount": 150000.00,
"io_number": "IO-2025-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 13500.00,
"sgst_amount": 13500.00,
"igst_amount": 0.00,
"total_amount": 177000.00,
"credit_type": "GST",
"irn_no": "IRN987654321098765432109876543210987654321098765432109876543210",
"sap_credit_note_no": "SAP-CN-2025-001234",
"credit_note_file_path": "https://dms.example.com/credit-notes/CN-2025-001234.pdf",
"error_message": null,
"timestamp": "2025-12-17T11:00:00.000Z"
}
```
### 4.4 Payload Field Descriptions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `request_number` | string | ✅ Yes | Original request number from RE Workflow System |
| `document_no` | string | ✅ Yes | Generated credit note number from DMS |
| `document_type` | string | ✅ Yes | Type of document: "CREDIT_NOTE" |
| `document_date` | string (ISO 8601) | ✅ Yes | Date when credit note was generated |
| `dealer_code` | string | ✅ Yes | Dealer code (should match original request) |
| `dealer_name` | string | ✅ Yes | Dealer name (should match original request) |
| `activity_name` | string | ✅ Yes | Activity name from original request |
| `activity_description` | string | ✅ Yes | Activity description from original request |
| `claim_amount` | number | ✅ Yes | Original claim amount (before tax) |
| `io_number` | string | No | Internal Order number (if provided) |
| `item_code_no` | string | ✅ Yes | Item code number (provided by DMS team) |
| `hsn_sac_code` | string | ✅ Yes | HSN/SAC code for the credit note |
| `cgst_percentage` | number | ✅ Yes | CGST percentage |
| `sgst_percentage` | number | ✅ Yes | SGST percentage |
| `igst_percentage` | number | ✅ Yes | IGST percentage |
| `cgst_amount` | number | ✅ Yes | CGST amount in INR |
| `sgst_amount` | number | ✅ Yes | SGST amount in INR |
| `igst_amount` | number | ✅ Yes | IGST amount in INR |
| `total_amount` | number | ✅ Yes | Total credit note amount (claim_amount + all taxes) |
| `credit_type` | string | ✅ Yes | Type of credit: "GST" or "Commercial Credit" |
| `irn_no` | string | No | Invoice Reference Number (IRN) for credit note (if generated) |
| `sap_credit_note_no` | string | ✅ Yes | SAP Credit Note Number (generated by SAP system) |
| `credit_note_file_path` | string | ✅ Yes | URL or path to the generated credit note PDF/document file |
| `error_message` | string | No | Error message if credit note generation failed |
| `timestamp` | string (ISO 8601) | ✅ Yes | Timestamp when webhook is sent |
### 4.5 Success Response
**Status Code:** `200 OK`
```json
{
"success": true,
"message": "Credit note webhook processed successfully",
"data": {
"message": "Credit note webhook processed successfully",
"creditNoteNumber": "CN-2025-001234",
"requestNumber": "REQ-2025-12-0001"
}
}
```
### 4.6 Error Response
**Status Code:** `400 Bad Request` or `500 Internal Server Error`
```json
{
"success": false,
"message": "Failed to process credit note webhook",
"error": "Credit note record not found for request: REQ-2025-12-0001"
}
```
---
## 5. Payload Specifications
### 5.1 Data Mapping: RE Workflow → DMS
When RE Workflow System sends data to DMS, it includes:
| RE Workflow Field | DMS Receives | Notes |
|-------------------|--------------|-------|
| `requestNumber` | `request_number` | Direct mapping |
| `dealerCode` | `dealer_code` | Direct mapping |
| `dealerName` | `dealer_name` | Direct mapping |
| `activityName` | `activity_name` | From claim details |
| `activityDescription` | `activity_description` | From claim details |
| `claimAmount` | `claim_amount` | Total claim amount |
| `ioNumber` | `io_number` | If available |
### 5.2 Data Mapping: DMS → RE Workflow
When DMS sends webhook, RE Workflow System stores:
| DMS Webhook Field | RE Workflow Database Field | Table |
|-------------------|---------------------------|-------|
| `document_no` | `invoice_number` / `credit_note_number` | `claim_invoices` / `claim_credit_notes` |
| `document_date` | `invoice_date` / `credit_note_date` | `claim_invoices` / `claim_credit_notes` |
| `total_amount` | `invoice_amount` / `credit_note_amount` | `claim_invoices` / `claim_credit_notes` |
| `invoice_file_path` | `invoice_file_path` | `claim_invoices` |
| `credit_note_file_path` | `credit_note_file_path` | `claim_credit_notes` |
| `irn_no` | Stored in `description` field | Both tables |
| `sap_credit_note_no` | `sap_document_number` | `claim_credit_notes` |
| `item_code_no` | Stored in `description` field | Both tables |
| `hsn_sac_code` | Stored in `description` field | Both tables |
| GST amounts | Stored in `description` field | Both tables |
| `credit_type` | Stored in `description` field | `claim_credit_notes` |
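Because several webhook fields land in the free-text `description` column rather than dedicated columns, one option is to store them as a JSON string so they stay machine-readable. A sketch under that assumption (the actual serialization used by RE Workflow may differ):
```typescript
// Sketch: bundle the webhook's tax/reference fields into the description column as JSON.
function buildInvoiceDescription(webhook: {
  irn_no?: string | null;
  item_code_no: string;
  hsn_sac_code: string;
  cgst_amount: number;
  sgst_amount: number;
  igst_amount: number;
}): string {
  return JSON.stringify({
    irnNo: webhook.irn_no ?? null,
    itemCodeNo: webhook.item_code_no,
    hsnSacCode: webhook.hsn_sac_code,
    taxBreakdown: {
      cgst: webhook.cgst_amount,
      sgst: webhook.sgst_amount,
      igst: webhook.igst_amount,
    },
  });
}
```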
### 5.3 GST Calculation Logic
**Intra-State (Same State):**
- CGST: Applied (e.g., 9%)
- SGST: Applied (e.g., 9%)
- IGST: 0%
**Inter-State (Different State):**
- CGST: 0%
- SGST: 0%
- IGST: Applied (e.g., 18%)
**Total Amount Calculation:**
```
total_amount = claim_amount + cgst_amount + sgst_amount + igst_amount
```
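The same calculation as a sketch; the 18% rate is an example, and the real percentages come from DMS based on dealer location and activity type:
```typescript
// Sketch: GST split for intra-state vs inter-state supplies (example 18% total rate).
function calculateGst(claimAmount: number, isInterState: boolean, gstRate = 18) {
  const cgstPct = isInterState ? 0 : gstRate / 2;
  const sgstPct = isInterState ? 0 : gstRate / 2;
  const igstPct = isInterState ? gstRate : 0;

  const cgstAmount = (claimAmount * cgstPct) / 100;
  const sgstAmount = (claimAmount * sgstPct) / 100;
  const igstAmount = (claimAmount * igstPct) / 100;

  return {
    cgstPct, sgstPct, igstPct,
    cgstAmount, sgstAmount, igstAmount,
    totalAmount: claimAmount + cgstAmount + sgstAmount + igstAmount,
  };
}

// Example: calculateGst(150000, false) -> CGST 13500, SGST 13500, IGST 0, total 177000
```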
---
## 6. Error Handling
### 6.1 Common Error Scenarios
| Error | Status Code | Description | Solution |
|-------|-------------|-------------|----------|
| Invalid Signature | 401 | Webhook signature validation failed | Check `DMS_WEBHOOK_SECRET` and signature generation |
| Missing Required Field | 400 | Required field is missing in payload | Ensure all required fields are included |
| Request Not Found | 400 | Request number doesn't exist in system | Verify request number matches original request |
| Invoice Not Found | 400 | Invoice record not found for request | Ensure invoice was created before webhook |
| Credit Note Not Found | 400 | Credit note record not found for request | Ensure credit note was created before webhook |
| Database Error | 500 | Internal database error | Check database connection and logs |
### 6.2 Retry Logic
DMS should implement retry logic for failed webhook deliveries:
- **Initial Retry:** After 1 minute
- **Second Retry:** After 5 minutes
- **Third Retry:** After 15 minutes
- **Final Retry:** After 1 hour
**Maximum Retries:** 4 attempts
**Retry Conditions:**
- HTTP 5xx errors (server errors)
- Network timeouts
- Connection failures
**Do NOT Retry:**
- HTTP 400 errors (client errors - invalid payload)
- HTTP 401 errors (authentication errors)
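A sketch of that retry schedule on the DMS side; the delays match the table above, and the `fetch`-based delivery call is a placeholder for DMS's own HTTP client:
```typescript
// Sketch: retry webhook delivery on 5xx/network errors using the schedule above.
const RETRY_DELAYS_MS = [60_000, 300_000, 900_000, 3_600_000]; // 1m, 5m, 15m, 1h

async function deliverWithRetry(url: string, payload: unknown, signature: string) {
  for (let attempt = 0; attempt <= RETRY_DELAYS_MS.length; attempt++) {
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-DMS-Signature': signature,
        },
        body: JSON.stringify(payload),
      });
      if (res.ok) return;                                  // delivered
      if (res.status >= 400 && res.status < 500) return;   // do NOT retry 4xx (client/auth errors)
      // 5xx: fall through to retry
    } catch {
      // network timeout / connection failure: retry
    }
    if (attempt === RETRY_DELAYS_MS.length) {
      throw new Error('Webhook delivery failed after 4 retries');
    }
    await new Promise((resolve) => setTimeout(resolve, RETRY_DELAYS_MS[attempt]));
  }
}
```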
### 6.3 Idempotency
Webhooks should be idempotent. If DMS sends the same webhook multiple times:
- RE Workflow System will update the record with the latest data
- No duplicate records will be created
- Status will be updated to reflect the latest state
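A sketch of idempotent webhook handling keyed on `request_number`; `ClaimInvoice` and its field names are assumptions for the `claim_invoices` table, not the actual RE Workflow models:
```typescript
import { WorkflowRequest } from '../models/WorkflowRequest';
// Assumed model name/path for the claim_invoices table
import { ClaimInvoice } from '../models/ClaimInvoice';

// Sketch: process the same webhook any number of times without creating duplicates.
async function upsertInvoiceFromWebhook(payload: {
  request_number: string;
  document_no: string;
  document_date: string;
  total_amount: number;
}) {
  // Resolve the workflow request by its human-readable request number
  const request = await WorkflowRequest.findOne({
    where: { requestNumber: payload.request_number },
  });
  if (!request) throw new Error(`Request not found: ${payload.request_number}`);

  const values = {
    invoiceNumber: payload.document_no,
    invoiceDate: new Date(payload.document_date),
    invoiceAmount: payload.total_amount,
  };

  const existing = await ClaimInvoice.findOne({ where: { requestId: request.requestId } });
  if (existing) {
    await existing.update(values);   // repeat delivery: update in place, no duplicate row
    return existing;
  }
  return ClaimInvoice.create({ requestId: request.requestId, ...values }); // first delivery
}
```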
---
## 7. Testing
### 7.1 Test Invoice Webhook
```bash
curl -X POST "http://localhost:5000/api/v1/webhooks/dms/invoice" \
-H "Content-Type: application/json" \
-H "X-DMS-Signature: <calculated-signature>" \
-d '{
"request_number": "REQ-2025-12-0001",
"document_no": "EINV-TEST-001",
"document_type": "E-INVOICE",
"document_date": "2025-12-17T10:30:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "Test Dealer",
"activity_name": "Test Activity",
"activity_description": "Test Description",
"claim_amount": 100000.00,
"io_number": "IO-TEST-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 9000.00,
"sgst_amount": 9000.00,
"igst_amount": 0.00,
"total_amount": 118000.00,
"irn_no": "IRN123456789012345678901234567890123456789012345678901234567890",
"invoice_file_path": "https://dms.example.com/invoices/EINV-TEST-001.pdf",
"timestamp": "2025-12-17T10:30:00.000Z"
}'
```
### 7.2 Test Credit Note Webhook
```bash
curl -X POST "http://localhost:5000/api/v1/webhooks/dms/credit-note" \
-H "Content-Type: application/json" \
-H "X-DMS-Signature: <calculated-signature>" \
-d '{
"request_number": "REQ-2025-12-0001",
"document_no": "CN-TEST-001",
"document_type": "CREDIT_NOTE",
"document_date": "2025-12-17T11:00:00.000Z",
"dealer_code": "DLR001",
"dealer_name": "Test Dealer",
"activity_name": "Test Activity",
"activity_description": "Test Description",
"claim_amount": 100000.00,
"io_number": "IO-TEST-001",
"item_code_no": "ITEM-001",
"hsn_sac_code": "998314",
"cgst_percentage": 9.0,
"sgst_percentage": 9.0,
"igst_percentage": 0.0,
"cgst_amount": 9000.00,
"sgst_amount": 9000.00,
"igst_amount": 0.00,
"total_amount": 118000.00,
"credit_type": "GST",
"irn_no": "IRN987654321098765432109876543210987654321098765432109876543210",
"sap_credit_note_no": "SAP-CN-TEST-001",
"credit_note_file_path": "https://dms.example.com/credit-notes/CN-TEST-001.pdf",
"timestamp": "2025-12-17T11:00:00.000Z"
}'
```
### 7.3 Signature Calculation (Node.js Example)
```javascript
const crypto = require('crypto');
function calculateSignature(payload, secret) {
const body = JSON.stringify(payload);
return crypto
.createHmac('sha256', secret)
.update(body)
.digest('hex');
}
const payload = { /* webhook payload */ };
const secret = process.env.DMS_WEBHOOK_SECRET;
const signature = calculateSignature(payload, secret);
// Use in header: X-DMS-Signature: <signature>
```
---
## 8. Integration Checklist
### 8.1 DMS Team Checklist
- [ ] Configure webhook URLs in DMS system
- [ ] Set up `DMS_WEBHOOK_SECRET` (shared secret)
- [ ] Implement signature generation (HMAC-SHA256)
- [ ] Test webhook delivery to RE Workflow endpoints
- [ ] Implement retry logic for failed deliveries
- [ ] Set up monitoring/alerting for webhook failures
- [ ] Document webhook payload structure
- [ ] Coordinate with RE Workflow team for testing
### 8.2 RE Workflow Team Checklist
- [ ] Configure `DMS_WEBHOOK_SECRET` in environment variables
- [ ] Deploy webhook endpoints to UAT/Production
- [ ] Test webhook endpoints with sample payloads
- [ ] Verify database updates after webhook processing
- [ ] Set up monitoring/alerting for webhook failures
- [ ] Document webhook endpoints for DMS team
- [ ] Coordinate with DMS team for integration testing
---
## 9. Support & Troubleshooting
### 9.1 Logs
RE Workflow System logs webhook processing:
- **Success:** `[DMSWebhook] Invoice webhook processed successfully`
- **Error:** `[DMSWebhook] Error processing invoice webhook: <error>`
- **Validation:** `[DMSWebhook] Invalid webhook signature`
### 9.2 Common Issues
**Issue: Webhook signature validation fails**
- Verify `DMS_WEBHOOK_SECRET` matches in both systems
- Check signature calculation method (HMAC-SHA256)
- Ensure request body is JSON stringified correctly
**Issue: Request not found**
- Verify `request_number` matches the original request
- Check if request exists in RE Workflow database
- Ensure request was created before webhook is sent
**Issue: Invoice/Credit Note record not found**
- Verify invoice/credit note was created in RE Workflow
- Check if webhook is sent before record creation
- Review workflow step sequence
---
## 10. Environment Configuration
### 10.1 Environment Variables
Add to RE Workflow System `.env` file:
```env
# DMS Webhook Configuration
DMS_WEBHOOK_SECRET=your_shared_secret_key_here
```
### 10.2 Webhook URLs by Environment
| Environment | Invoice Webhook URL | Credit Note Webhook URL |
|-------------|---------------------|-------------------------|
| Development | `http://localhost:5000/api/v1/webhooks/dms/invoice` | `http://localhost:5000/api/v1/webhooks/dms/credit-note` |
| UAT | `https://reflow-uat.royalenfield.com/api/v1/webhooks/dms/invoice` | `https://reflow-uat.royalenfield.com/api/v1/webhooks/dms/credit-note` |
| Production | `https://reflow.royalenfield.com/api/v1/webhooks/dms/invoice` | `https://reflow.royalenfield.com/api/v1/webhooks/dms/credit-note` |
---
**Document Version:** 1.0
**Last Updated:** December 2024
**Maintained By:** RE Workflow Development Team

File diff suppressed because it is too large

View File

@ -1,507 +0,0 @@
erDiagram
users ||--o{ workflow_requests : initiates
users ||--o{ approval_levels : approves
users ||--o{ participants : participates
users ||--o{ work_notes : posts
users ||--o{ documents : uploads
users ||--o{ activities : performs
users ||--o{ notifications : receives
users ||--o{ user_sessions : has
workflow_requests ||--|{ approval_levels : has
workflow_requests ||--o{ participants : involves
workflow_requests ||--o{ documents : contains
workflow_requests ||--o{ work_notes : has
workflow_requests ||--o{ activities : logs
workflow_requests ||--o{ tat_tracking : monitors
workflow_requests ||--o{ notifications : triggers
workflow_requests ||--|| conclusion_remarks : concludes
workflow_requests ||--|| dealer_claim_details : claim_details
workflow_requests ||--|| dealer_proposal_details : proposal_details
dealer_proposal_details ||--o{ dealer_proposal_cost_items : cost_items
workflow_requests ||--|| dealer_completion_details : completion_details
workflow_requests ||--|| internal_orders : internal_order
workflow_requests ||--|| claim_budget_tracking : budget_tracking
workflow_requests ||--|| claim_invoices : claim_invoice
workflow_requests ||--|| claim_credit_notes : claim_credit_note
work_notes ||--o{ work_note_attachments : has
notifications ||--o{ email_logs : sends
notifications ||--o{ sms_logs : sends
workflow_requests ||--o{ report_cache : caches
workflow_requests ||--o{ audit_logs : audits
workflow_requests ||--o{ workflow_templates : templates
users ||--o{ system_settings : updates
users {
uuid user_id PK
varchar employee_id
varchar okta_sub
varchar email
varchar first_name
varchar last_name
varchar display_name
varchar department
varchar designation
varchar phone
varchar manager
varchar second_email
text job_title
varchar employee_number
varchar postal_address
varchar mobile_phone
jsonb ad_groups
jsonb location
boolean is_active
enum role
timestamp last_login
timestamp created_at
timestamp updated_at
}
workflow_requests {
uuid request_id PK
varchar request_number
uuid initiator_id FK
varchar template_type
varchar title
text description
enum priority
enum status
integer current_level
integer total_levels
decimal total_tat_hours
timestamp submission_date
timestamp closure_date
text conclusion_remark
text ai_generated_conclusion
boolean is_draft
boolean is_deleted
timestamp created_at
timestamp updated_at
}
approval_levels {
uuid level_id PK
uuid request_id FK
integer level_number
varchar level_name
uuid approver_id FK
varchar approver_email
varchar approver_name
decimal tat_hours
integer tat_days
enum status
timestamp level_start_time
timestamp level_end_time
timestamp action_date
text comments
text rejection_reason
boolean is_final_approver
decimal elapsed_hours
decimal remaining_hours
decimal tat_percentage_used
timestamp created_at
timestamp updated_at
}
participants {
uuid participant_id PK
uuid request_id FK
uuid user_id FK
varchar user_email
varchar user_name
enum participant_type
boolean can_comment
boolean can_view_documents
boolean can_download_documents
boolean notification_enabled
uuid added_by FK
timestamp added_at
boolean is_active
}
documents {
uuid document_id PK
uuid request_id FK
uuid uploaded_by FK
varchar file_name
varchar original_file_name
varchar file_type
varchar file_extension
bigint file_size
varchar file_path
varchar storage_url
varchar mime_type
varchar checksum
boolean is_google_doc
varchar google_doc_url
enum category
integer version
uuid parent_document_id
boolean is_deleted
integer download_count
timestamp uploaded_at
}
work_notes {
uuid note_id PK
uuid request_id FK
uuid user_id FK
varchar user_name
varchar user_role
text message
varchar message_type
boolean is_priority
boolean has_attachment
uuid parent_note_id
uuid[] mentioned_users
jsonb reactions
boolean is_edited
boolean is_deleted
timestamp created_at
timestamp updated_at
}
work_note_attachments {
uuid attachment_id PK
uuid note_id FK
varchar file_name
varchar file_type
bigint file_size
varchar file_path
varchar storage_url
boolean is_downloadable
integer download_count
timestamp uploaded_at
}
activities {
uuid activity_id PK
uuid request_id FK
uuid user_id FK
varchar user_name
varchar activity_type
text activity_description
varchar activity_category
varchar severity
jsonb metadata
boolean is_system_event
varchar ip_address
text user_agent
timestamp created_at
}
notifications {
uuid notification_id PK
uuid user_id FK
uuid request_id FK
varchar notification_type
varchar title
text message
boolean is_read
enum priority
varchar action_url
boolean action_required
jsonb metadata
varchar[] sent_via
boolean email_sent
boolean sms_sent
boolean push_sent
timestamp read_at
timestamp expires_at
timestamp created_at
}
tat_tracking {
uuid tracking_id PK
uuid request_id FK
uuid level_id FK
varchar tracking_type
enum tat_status
decimal total_tat_hours
decimal elapsed_hours
decimal remaining_hours
decimal percentage_used
boolean threshold_50_breached
timestamp threshold_50_alerted_at
boolean threshold_80_breached
timestamp threshold_80_alerted_at
boolean threshold_100_breached
timestamp threshold_100_alerted_at
integer alert_count
timestamp last_calculated_at
}
conclusion_remarks {
uuid conclusion_id PK
uuid request_id FK
text ai_generated_remark
varchar ai_model_used
decimal ai_confidence_score
text final_remark
uuid edited_by FK
boolean is_edited
integer edit_count
jsonb approval_summary
jsonb document_summary
text[] key_discussion_points
timestamp generated_at
timestamp finalized_at
}
audit_logs {
uuid audit_id PK
uuid user_id FK
varchar entity_type
uuid entity_id
varchar action
varchar action_category
jsonb old_values
jsonb new_values
text changes_summary
varchar ip_address
text user_agent
varchar session_id
varchar request_method
varchar request_url
integer response_status
integer execution_time_ms
timestamp created_at
}
user_sessions {
uuid session_id PK
uuid user_id FK
varchar session_token
varchar refresh_token
varchar ip_address
text user_agent
varchar device_type
varchar browser
varchar os
timestamp login_at
timestamp last_activity_at
timestamp logout_at
timestamp expires_at
boolean is_active
varchar logout_reason
}
email_logs {
uuid email_log_id PK
uuid request_id FK
uuid notification_id FK
varchar recipient_email
uuid recipient_user_id FK
text[] cc_emails
text[] bcc_emails
varchar subject
text body
varchar email_type
varchar status
integer send_attempts
timestamp sent_at
timestamp failed_at
text failure_reason
timestamp opened_at
timestamp clicked_at
timestamp created_at
}
sms_logs {
uuid sms_log_id PK
uuid request_id FK
uuid notification_id FK
varchar recipient_phone
uuid recipient_user_id FK
text message
varchar sms_type
varchar status
integer send_attempts
timestamp sent_at
timestamp delivered_at
timestamp failed_at
text failure_reason
varchar sms_provider
varchar sms_provider_message_id
decimal cost
timestamp created_at
}
system_settings {
uuid setting_id PK
varchar setting_key
text setting_value
varchar setting_type
varchar setting_category
text description
boolean is_editable
boolean is_sensitive
jsonb validation_rules
text default_value
uuid updated_by FK
timestamp created_at
timestamp updated_at
}
workflow_templates {
uuid template_id PK
varchar template_name
text template_description
varchar template_category
jsonb approval_levels_config
decimal default_tat_hours
boolean is_active
integer usage_count
uuid created_by FK
timestamp created_at
timestamp updated_at
}
report_cache {
uuid cache_id PK
varchar report_type
jsonb report_params
jsonb report_data
uuid generated_by FK
timestamp generated_at
timestamp expires_at
integer access_count
timestamp last_accessed_at
}
dealer_claim_details {
uuid claim_id PK
uuid request_id
varchar activity_name
varchar activity_type
varchar dealer_code
varchar dealer_name
varchar dealer_email
varchar dealer_phone
text dealer_address
date activity_date
varchar location
date period_start_date
date period_end_date
timestamp created_at
timestamp updated_at
}
dealer_proposal_details {
uuid proposal_id PK
uuid request_id
string proposal_document_path
string proposal_document_url
decimal total_estimated_budget
string timeline_mode
date expected_completion_date
int expected_completion_days
text dealer_comments
date submitted_at
timestamp created_at
timestamp updated_at
}
dealer_proposal_cost_items {
uuid cost_item_id PK
uuid proposal_id FK
uuid request_id FK
string item_description
decimal amount
int item_order
timestamp created_at
timestamp updated_at
}
dealer_completion_details {
uuid completion_id PK
uuid request_id
date activity_completion_date
int number_of_participants
decimal total_closed_expenses
date submitted_at
timestamp created_at
timestamp updated_at
}
dealer_completion_expenses {
uuid expense_id PK
uuid request_id
uuid completion_id
string description
decimal amount
timestamp created_at
timestamp updated_at
}
internal_orders {
uuid io_id PK
uuid request_id
string io_number
text io_remark
decimal io_available_balance
decimal io_blocked_amount
decimal io_remaining_balance
uuid organized_by FK
date organized_at
string sap_document_number
enum status
timestamp created_at
timestamp updated_at
}
claim_budget_tracking {
uuid budget_id PK
uuid request_id
decimal initial_estimated_budget
decimal proposal_estimated_budget
date proposal_submitted_at
decimal approved_budget
date approved_at
uuid approved_by FK
decimal io_blocked_amount
date io_blocked_at
decimal closed_expenses
date closed_expenses_submitted_at
decimal final_claim_amount
date final_claim_amount_approved_at
uuid final_claim_amount_approved_by FK
decimal credit_note_amount
date credit_note_issued_at
enum budget_status
string currency
decimal variance_amount
decimal variance_percentage
uuid last_modified_by FK
date last_modified_at
text modification_reason
timestamp created_at
timestamp updated_at
}
claim_invoices {
uuid invoice_id PK
uuid request_id
string invoice_number
date invoice_date
string dms_number
decimal amount
string status
text description
timestamp created_at
timestamp updated_at
}
claim_credit_notes {
uuid credit_note_id PK
uuid request_id
string credit_note_number
date credit_note_date
decimal credit_note_amount
string status
text reason
text description
timestamp created_at
timestamp updated_at
}

View File

@ -1,583 +0,0 @@
# Extensible Workflow Architecture Plan
## Supporting Multiple Template Types (Claim Management, Non-Templatized, Future Templates)
## Overview
This document outlines how to design the backend architecture to support:
1. **Unified Request System**: All requests (templatized, non-templatized, claim management) use the same `workflow_requests` table
2. **Template Identification**: Distinguish between different workflow types
3. **Extensibility**: Easy addition of new templates by admins without code changes
4. **Unified Views**: All requests appear in "My Requests", "Open Requests", etc. automatically
---
## Architecture Principles
### 1. **Single Source of Truth: `workflow_requests` Table**
All requests, regardless of type, are stored in the same table:
```sql
workflow_requests {
request_id UUID PK
request_number VARCHAR(20) UK
initiator_id UUID FK
template_type VARCHAR(20) -- 'CUSTOM' | 'TEMPLATE' (high-level)
workflow_type VARCHAR(50) -- 'NON_TEMPLATIZED' | 'CLAIM_MANAGEMENT' | 'DEALER_ONBOARDING' | etc.
template_id UUID FK (nullable) -- Reference to workflow_templates if using admin template
title VARCHAR(500)
description TEXT
status workflow_status
current_level INTEGER
total_levels INTEGER
-- ... common fields
}
```
**Key Fields:**
- `template_type`: High-level classification ('CUSTOM' for user-created, 'TEMPLATE' for admin templates)
- `workflow_type`: Specific workflow identifier (e.g., 'CLAIM_MANAGEMENT', 'NON_TEMPLATIZED')
- `template_id`: Optional reference to `workflow_templates` table if using an admin-created template
### 2. **Template-Specific Data Storage**
Each workflow type can have its own extension table for type-specific data:
```sql
-- For Claim Management
dealer_claim_details {
claim_id UUID PK
request_id UUID FK -> workflow_requests(request_id)
activity_name VARCHAR(500)
activity_type VARCHAR(100)
dealer_code VARCHAR(50)
dealer_name VARCHAR(200)
dealer_email VARCHAR(255)
dealer_phone VARCHAR(20)
dealer_address TEXT
activity_date DATE
location VARCHAR(255)
period_start_date DATE
period_end_date DATE
estimated_budget DECIMAL(15,2)
closed_expenses DECIMAL(15,2)
io_number VARCHAR(50)
io_blocked_amount DECIMAL(15,2)
sap_document_number VARCHAR(100)
dms_number VARCHAR(100)
e_invoice_number VARCHAR(100)
credit_note_number VARCHAR(100)
-- ... claim-specific fields
}
-- For Non-Templatized (if needed)
non_templatized_details {
detail_id UUID PK
request_id UUID FK -> workflow_requests(request_id)
custom_fields JSONB -- Flexible storage for any custom data
-- ... any specific fields
}
-- For Future Templates
-- Each new template can have its own extension table
```
### 3. **Workflow Templates Table (Admin-Created Templates)**
```sql
workflow_templates {
template_id UUID PK
template_name VARCHAR(200) -- Display name: "Claim Management", "Dealer Onboarding"
template_code VARCHAR(50) UK -- Unique identifier: "CLAIM_MANAGEMENT", "DEALER_ONBOARDING"
template_description TEXT
template_category VARCHAR(100) -- "Dealer Operations", "HR", "Finance", etc.
workflow_type VARCHAR(50) -- Maps to workflow_requests.workflow_type
approval_levels_config JSONB -- Step definitions, TAT, roles, etc.
default_tat_hours DECIMAL(10,2)
form_fields_config JSONB -- Form field definitions for wizard
is_active BOOLEAN
is_system_template BOOLEAN -- True for built-in (Claim Management), False for admin-created
created_by UUID FK
created_at TIMESTAMP
updated_at TIMESTAMP
}
```
---
## Database Schema Changes
### Migration: Add Workflow Type Support
```sql
-- Migration: 20251210-add-workflow-type-support.ts
-- 1. Add workflow_type column to workflow_requests
ALTER TABLE workflow_requests
ADD COLUMN IF NOT EXISTS workflow_type VARCHAR(50) DEFAULT 'NON_TEMPLATIZED';
-- 2. Add template_id column (nullable, for admin templates)
ALTER TABLE workflow_requests
ADD COLUMN IF NOT EXISTS template_id UUID REFERENCES workflow_templates(template_id);
-- 3. Create index for workflow_type
CREATE INDEX IF NOT EXISTS idx_workflow_requests_workflow_type
ON workflow_requests(workflow_type);
-- 4. Create index for template_id
CREATE INDEX IF NOT EXISTS idx_workflow_requests_template_id
ON workflow_requests(template_id);
-- 5. Create dealer_claim_details table
CREATE TABLE IF NOT EXISTS dealer_claim_details (
claim_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
request_id UUID NOT NULL UNIQUE REFERENCES workflow_requests(request_id) ON DELETE CASCADE,
activity_name VARCHAR(500) NOT NULL,
activity_type VARCHAR(100) NOT NULL,
dealer_code VARCHAR(50) NOT NULL,
dealer_name VARCHAR(200) NOT NULL,
dealer_email VARCHAR(255),
dealer_phone VARCHAR(20),
dealer_address TEXT,
activity_date DATE,
location VARCHAR(255),
period_start_date DATE,
period_end_date DATE,
estimated_budget DECIMAL(15,2),
closed_expenses DECIMAL(15,2),
io_number VARCHAR(50),
io_available_balance DECIMAL(15,2),
io_blocked_amount DECIMAL(15,2),
io_remaining_balance DECIMAL(15,2),
sap_document_number VARCHAR(100),
dms_number VARCHAR(100),
e_invoice_number VARCHAR(100),
e_invoice_date DATE,
credit_note_number VARCHAR(100),
credit_note_date DATE,
credit_note_amount DECIMAL(15,2),
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_dealer_claim_details_request_id ON dealer_claim_details(request_id);
CREATE INDEX idx_dealer_claim_details_dealer_code ON dealer_claim_details(dealer_code);
-- 6. Create proposal_details table (Step 1: Dealer Proposal)
CREATE TABLE IF NOT EXISTS dealer_proposal_details (
proposal_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
request_id UUID NOT NULL REFERENCES workflow_requests(request_id) ON DELETE CASCADE,
proposal_document_path VARCHAR(500),
proposal_document_url VARCHAR(500),
cost_breakup JSONB, -- Array of {description, amount}
total_estimated_budget DECIMAL(15,2),
timeline_mode VARCHAR(10), -- 'date' | 'days'
expected_completion_date DATE,
expected_completion_days INTEGER,
dealer_comments TEXT,
submitted_at TIMESTAMP WITH TIME ZONE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_dealer_proposal_details_request_id ON dealer_proposal_details(request_id);
-- 7. Create completion_documents table (Step 5: Dealer Completion)
CREATE TABLE IF NOT EXISTS dealer_completion_details (
completion_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
request_id UUID NOT NULL REFERENCES workflow_requests(request_id) ON DELETE CASCADE,
activity_completion_date DATE NOT NULL,
number_of_participants INTEGER,
closed_expenses JSONB, -- Array of {description, amount}
total_closed_expenses DECIMAL(15,2),
completion_documents JSONB, -- Array of document references
activity_photos JSONB, -- Array of photo references
submitted_at TIMESTAMP WITH TIME ZONE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_dealer_completion_details_request_id ON dealer_completion_details(request_id);
```
---
## Model Updates
### 1. Update WorkflowRequest Model
```typescript
// Re_Backend/src/models/WorkflowRequest.ts
interface WorkflowRequestAttributes {
requestId: string;
requestNumber: string;
initiatorId: string;
templateType: 'CUSTOM' | 'TEMPLATE';
workflowType: string; // NEW: 'NON_TEMPLATIZED' | 'CLAIM_MANAGEMENT' | etc.
templateId?: string; // NEW: Reference to workflow_templates
title: string;
description: string;
// ... existing fields
}
// Add association
WorkflowRequest.hasOne(DealerClaimDetails, {
as: 'claimDetails',
foreignKey: 'requestId',
sourceKey: 'requestId'
});
```
### 2. Create DealerClaimDetails Model
```typescript
// Re_Backend/src/models/DealerClaimDetails.ts
import { DataTypes, Model } from 'sequelize';
import { sequelize } from '@config/database';
import { WorkflowRequest } from './WorkflowRequest';
interface DealerClaimDetailsAttributes {
claimId: string;
requestId: string;
activityName: string;
activityType: string;
dealerCode: string;
dealerName: string;
// ... all claim-specific fields
}
class DealerClaimDetails extends Model<DealerClaimDetailsAttributes> {
public claimId!: string;
public requestId!: string;
// ... fields
}
DealerClaimDetails.init({
claimId: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
field: 'claim_id'
},
requestId: {
type: DataTypes.UUID,
allowNull: false,
unique: true,
field: 'request_id',
references: {
model: 'workflow_requests',
key: 'request_id'
}
},
// ... all other fields
}, {
sequelize,
modelName: 'DealerClaimDetails',
tableName: 'dealer_claim_details',
timestamps: true
});
// Association
DealerClaimDetails.belongsTo(WorkflowRequest, {
as: 'workflowRequest',
foreignKey: 'requestId',
targetKey: 'requestId'
});
export { DealerClaimDetails };
```
---
## Service Layer Pattern
### 1. Template-Aware Service Factory
```typescript
// Re_Backend/src/services/templateService.factory.ts
import { WorkflowRequest } from '../models/WorkflowRequest';
import { DealerClaimService } from './dealerClaim.service';
import { NonTemplatizedService } from './nonTemplatized.service';
export class TemplateServiceFactory {
static getService(workflowType: string) {
switch (workflowType) {
case 'CLAIM_MANAGEMENT':
return new DealerClaimService();
case 'NON_TEMPLATIZED':
return new NonTemplatizedService();
default:
// For future templates, use a generic service or throw error
throw new Error(`Unsupported workflow type: ${workflowType}`);
}
}
static async getRequestDetails(requestId: string) {
const request = await WorkflowRequest.findByPk(requestId);
if (!request) return null;
const service = this.getService(request.workflowType);
return service.getRequestDetails(request);
}
}
```
### 2. Unified Workflow Service (No Changes Needed)
The existing `WorkflowService.listMyRequests()` and `listOpenForMe()` methods will **automatically** include all request types because they query `workflow_requests` table without filtering by `workflow_type`.
```typescript
// Existing code works as-is - no changes needed!
async listMyRequests(userId: string, page: number, limit: number, filters?: {...}) {
// This query automatically includes ALL workflow types
const requests = await WorkflowRequest.findAll({
where: {
initiatorId: userId,
isDraft: false,
// ... filters
// NO workflow_type filter - includes everything!
}
});
return requests;
}
```
---
## API Endpoints
### 1. Create Claim Management Request
```typescript
// Re_Backend/src/controllers/dealerClaim.controller.ts
async createClaimRequest(req: AuthenticatedRequest, res: Response) {
const userId = req.user?.userId;
const {
activityName,
activityType,
dealerCode,
// ... claim-specific fields
} = req.body;
// 1. Create workflow request (common)
const workflowRequest = await WorkflowRequest.create({
initiatorId: userId,
templateType: 'CUSTOM',
workflowType: 'CLAIM_MANAGEMENT', // Identify as claim
title: `${activityName} - Claim Request`,
description: req.body.requestDescription,
totalLevels: 8, // Fixed 8-step workflow
// ... other common fields
});
// 2. Create claim-specific details
const claimDetails = await DealerClaimDetails.create({
requestId: workflowRequest.requestId,
activityName,
activityType,
dealerCode,
// ... claim-specific fields
});
// 3. Create approval levels (8 steps)
await this.createClaimApprovalLevels(workflowRequest.requestId);
return ResponseHandler.success(res, {
request: workflowRequest,
claimDetails
});
}
```
### 2. Get Request Details (Template-Aware)
```typescript
async getRequestDetails(req: Request, res: Response) {
  const { requestId } = req.params;

  // Look up the request once to learn its workflow type,
  // then reload with the template-specific includes
  const request = await WorkflowRequest.findByPk(requestId);
  if (!request) {
    return res.status(404).json({ success: false, message: 'Request not found' });
  }

  await request.reload({
    include: [
      { model: User, as: 'initiator' },
      // Conditionally include template-specific data
      ...(request.workflowType === 'CLAIM_MANAGEMENT'
        ? [{ model: DealerClaimDetails, as: 'claimDetails' }]
        : [])
    ]
  });

  // Use factory to get template-specific service
  const templateService = TemplateServiceFactory.getService(request.workflowType);
  const enrichedDetails = await templateService.enrichRequestDetails(request);
  return ResponseHandler.success(res, enrichedDetails);
}
```
---
## Frontend Integration
### 1. Request List Views (No Changes Needed)
The existing "My Requests" and "Open Requests" pages will automatically show all request types because the backend doesn't filter by `workflow_type`.
```typescript
// Frontend: MyRequests.tsx - No changes needed!
const fetchMyRequests = async () => {
const result = await workflowApi.listMyInitiatedWorkflows({
page,
limit: itemsPerPage
});
// Returns ALL request types automatically
};
```
### 2. Request Detail Page (Template-Aware Rendering)
```typescript
// Frontend: RequestDetail.tsx
const RequestDetail = ({ requestId }) => {
const request = useRequestDetails(requestId);
// Render based on workflow type
if (request.workflowType === 'CLAIM_MANAGEMENT') {
return <ClaimManagementDetail request={request} />;
} else if (request.workflowType === 'NON_TEMPLATIZED') {
return <NonTemplatizedDetail request={request} />;
} else {
// Future templates - use generic renderer or template config
return <GenericWorkflowDetail request={request} />;
}
};
```
---
## Adding New Templates (Future)
### Step 1: Admin Creates Template in UI
1. Admin goes to "Template Management" page
2. Creates new template with:
- Template name: "Vendor Payment"
- Template code: "VENDOR_PAYMENT"
- Approval levels configuration
- Form fields configuration
### Step 2: Database Entry Created
```sql
INSERT INTO workflow_templates (
template_name,
template_code,
workflow_type,
approval_levels_config,
form_fields_config,
is_active,
is_system_template
) VALUES (
'Vendor Payment',
'VENDOR_PAYMENT',
'VENDOR_PAYMENT',
'{"levels": [...], "tat": {...}}'::jsonb,
'{"fields": [...]}'::jsonb,
true,
false -- Admin-created, not system template
);
```
### Step 3: Create Extension Table (If Needed)
```sql
CREATE TABLE vendor_payment_details (
payment_id UUID PRIMARY KEY,
request_id UUID UNIQUE REFERENCES workflow_requests(request_id),
vendor_code VARCHAR(50),
invoice_number VARCHAR(100),
payment_amount DECIMAL(15,2),
-- ... vendor-specific fields
);
```
### Step 4: Create Service (Optional - Can Use Generic Service)
```typescript
// Re_Backend/src/services/vendorPayment.service.ts
export class VendorPaymentService {
async getRequestDetails(request: WorkflowRequest) {
const paymentDetails = await VendorPaymentDetails.findOne({
where: { requestId: request.requestId }
});
return {
...request.toJSON(),
paymentDetails
};
}
}
// Update the factory to return the new service
class TemplateServiceFactory {
  static getService(workflowType: string) {
    switch (workflowType) {
      case 'VENDOR_PAYMENT':
        return new VendorPaymentService();
      // ... existing cases
    }
  }
}
```
### Step 5: Frontend Component (Optional)
```typescript
// Frontend: components/VendorPaymentDetail.tsx
export function VendorPaymentDetail({ request }) {
// Render vendor payment specific UI
}
```
---
## Benefits of This Architecture
1. **Unified Data Model**: All requests in one table, easy to query
2. **Automatic Inclusion**: My Requests/Open Requests show all types automatically
3. **Extensibility**: Add new templates without modifying existing code
4. **Type Safety**: Template-specific data in separate tables
5. **Flexibility**: Support both system templates and admin-created templates
6. **Backward Compatible**: Existing non-templatized requests continue to work
---
## Migration Strategy
1. **Phase 1**: Add `workflow_type` column with a default of 'NON_TEMPLATIZED' for existing requests (see the migration sketch after this list)
2. **Phase 2**: Create `dealer_claim_details` table and models
3. **Phase 3**: Update claim management creation flow to use new structure
4. **Phase 4**: Update request detail endpoints to be template-aware
5. **Phase 5**: Frontend updates (if needed) for template-specific rendering
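For Phase 1, a minimal Sequelize migration sketch (the actual `20251210-add-workflow-type-support.ts` migration may differ in details such as column length and indexes):
```typescript
import { QueryInterface, DataTypes } from 'sequelize';

export async function up(queryInterface: QueryInterface): Promise<void> {
  await queryInterface.addColumn('workflow_requests', 'workflow_type', {
    type: DataTypes.STRING(50),
    allowNull: false,
    defaultValue: 'NON_TEMPLATIZED', // existing rows fall back to the legacy type
  });
}

export async function down(queryInterface: QueryInterface): Promise<void> {
  await queryInterface.removeColumn('workflow_requests', 'workflow_type');
}
```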
---
## Summary
- **All requests** use `workflow_requests` table
- **Template identification** via `workflow_type` field
- **Template-specific data** in extension tables (e.g., `dealer_claim_details`)
- **Unified views** automatically include all types
- **Future templates** can be added by admins without code changes
- **Existing functionality** remains unchanged
This architecture ensures that:
- ✅ Claim Management requests appear in My Requests/Open Requests
- ✅ Non-templatized requests continue to work
- ✅ Future templates can be added easily
- ✅ No code duplication
- ✅ Single source of truth for all requests

View File

@ -1,212 +0,0 @@
# File Path Storage in Database - How It Works
This document explains how file paths and storage URLs are stored in the database for different storage scenarios (GCS vs Local Storage).
## Database Schema
### Documents Table
- **`file_path`** (VARCHAR(500), NOT NULL): Stores the relative path or GCS path
- **`storage_url`** (VARCHAR(500), NULLABLE): Stores the full URL for accessing the file
### Work Note Attachments Table
- **`file_path`** (VARCHAR(500), NOT NULL): Stores the relative path or GCS path
- **`storage_url`** (VARCHAR(500), NULLABLE): Stores the full URL for accessing the file
## Storage Scenarios
### Scenario 1: File Uploaded to GCS (Successfully)
When GCS is configured and the upload succeeds:
**Database Values:**
```sql
file_path = "requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf"
storage_url = "https://storage.googleapis.com/bucket-name/requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf"
```
**File Location:**
- Physical: Google Cloud Storage bucket
- Path Structure: `requests/{requestNumber}/{fileType}/{fileName}`
- Access: Public URL or signed URL (depending on bucket configuration)
---
### Scenario 2: File Saved to Local Storage (GCS Not Configured or Failed)
When GCS is not configured or upload fails, files are saved to local storage:
**Database Values:**
```sql
file_path = "requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf"
storage_url = "/uploads/requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf"
```
**File Location:**
- Physical: Local filesystem at `{UPLOAD_DIR}/requests/{requestNumber}/{fileType}/{fileName}`
- Path Structure: Same as GCS structure for consistency
- Access: Served via Express static middleware at `/uploads/*`
**Example:**
```
uploads/
└── requests/
└── REQ-2025-12-0001/
├── documents/
│ └── 1701234567890-abc123-proposal.pdf
└── attachments/
└── 1701234567890-xyz789-note.pdf
```
---
### Scenario 3: Legacy Files (Before This Implementation)
Older files may have different path formats:
**Possible Database Values:**
```sql
file_path = "/absolute/path/to/uploads/file.pdf" -- Absolute path
-- OR
file_path = "file.pdf" -- Simple filename (in root uploads folder)
storage_url = "/uploads/file.pdf" -- Simple URL
```
**File Location:**
- Physical: Various locations depending on when file was uploaded
- Access: Handled by legacy route logic
---
## How Download/Preview Routes Handle Different Storage Types
### Document Preview Route (`GET /workflows/documents/:documentId/preview`)
1. **Check if GCS URL:**
```typescript
const isGcsUrl = storageUrl && (
storageUrl.startsWith('https://storage.googleapis.com') ||
storageUrl.startsWith('gs://')
);
```
- If yes → Redirect to GCS URL
2. **Check if Local Storage URL:**
```typescript
if (storageUrl && storageUrl.startsWith('/uploads/')) {
res.redirect(storageUrl); // Express static serves it
return;
}
```
- If yes → Redirect to `/uploads/...` (served by Express static middleware)
3. **Legacy File Handling:**
```typescript
const absolutePath = filePath && !path.isAbsolute(filePath)
? path.join(UPLOAD_DIR, filePath)
: filePath;
```
- Resolve relative path to absolute
- Serve file directly using `res.sendFile()`
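Putting the three branches together, the preview handler boils down to something like the following sketch (assuming an Express route, a `Document` Sequelize model, and the `UPLOAD_DIR` constant referenced above; the actual controller may differ):
```typescript
import path from 'path';
import { Request, Response } from 'express';
import { Document } from '../models/Document'; // assumed model path

const UPLOAD_DIR = process.env.UPLOAD_DIR || 'uploads';

export async function previewDocument(req: Request, res: Response) {
  const document = await Document.findByPk(req.params.documentId);
  if (!document) return res.status(404).json({ message: 'Document not found' });

  const { storageUrl, filePath } = document;

  // 1. GCS file -> redirect to the stored GCS URL
  if (storageUrl && (storageUrl.startsWith('https://storage.googleapis.com') || storageUrl.startsWith('gs://'))) {
    return res.redirect(storageUrl);
  }

  // 2. New local file -> redirect to /uploads/... (served by Express static)
  if (storageUrl && storageUrl.startsWith('/uploads/')) {
    return res.redirect(storageUrl);
  }

  // 3. Legacy file -> resolve relative path and serve it directly
  const absolutePath = filePath && !path.isAbsolute(filePath)
    ? path.join(UPLOAD_DIR, filePath)
    : filePath;
  return res.sendFile(absolutePath);
}
```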
### Work Note Attachment Routes
Same logic as document routes:
- Preview: `/workflows/work-notes/attachments/:attachmentId/preview`
- Download: `/workflows/work-notes/attachments/:attachmentId/download`
---
## Key Points
### 1. Consistent Path Structure
- **Both GCS and local storage** use the same path structure: `requests/{requestNumber}/{fileType}/{fileName}`
- This makes migration seamless when moving from local to GCS
### 2. Storage URL Format
- **GCS:** Full HTTPS URL (`https://storage.googleapis.com/...`)
- **Local:** Relative URL (`/uploads/requests/...`)
- **Legacy:** May vary
### 3. File Path Format
- **GCS:** Relative path in bucket (`requests/REQ-.../documents/file.pdf`)
- **Local:** Same relative path format for consistency
- **Legacy:** May be absolute path or simple filename
### 4. Automatic Fallback
- When GCS fails, system automatically saves to local storage
- Same folder structure maintained
- No data loss
### 5. Serving Files
- **GCS files:** Redirect to public/signed URL
- **Local files (new):** Redirect to `/uploads/...` (Express static)
- **Local files (legacy):** Direct file serving with `res.sendFile()`
---
## Migration Path
When migrating from local storage to GCS:
1. **Files already follow same structure** - No path changes needed
2. **Upload new files** - They automatically go to GCS if configured
3. **Existing files** - Can remain in local storage until migrated
4. **Database** - Only `storage_url` field changes (from `/uploads/...` to `https://...`)
---
## Example Database Records
### GCS File (New Upload)
```json
{
"document_id": "uuid-123",
"file_path": "requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf",
"storage_url": "https://storage.googleapis.com/my-bucket/requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf",
"file_name": "1701234567890-abc123-proposal.pdf",
"original_file_name": "proposal.pdf"
}
```
### Local Storage File (Fallback)
```json
{
"document_id": "uuid-456",
"file_path": "requests/REQ-2025-12-0001/documents/1701234567891-def456-report.pdf",
"storage_url": "/uploads/requests/REQ-2025-12-0001/documents/1701234567891-def456-report.pdf",
"file_name": "1701234567891-def456-report.pdf",
"original_file_name": "report.pdf"
}
```
### Legacy File (Old Format)
```json
{
"document_id": "uuid-789",
"file_path": "/var/app/uploads/old-file.pdf",
"storage_url": "/uploads/old-file.pdf",
"file_name": "old-file.pdf",
"original_file_name": "old-file.pdf"
}
```
---
## Troubleshooting
### Issue: File not found when downloading
**Check:**
1. Verify `storage_url` format in database
2. Check if file exists at expected location:
- GCS: Check bucket and path
- Local: Check `{UPLOAD_DIR}/requests/...` path
3. Verify Express static middleware is mounted at `/uploads`
### Issue: Files not organizing correctly
**Check:**
1. Verify `requestNumber` is being passed correctly to upload functions
2. Check folder structure matches: `requests/{requestNumber}/{fileType}/`
3. Verify `fileType` is either `'documents'` or `'attachments'`

View File

@ -1,669 +0,0 @@
# GCP Cloud Storage - Production Setup Guide
## Overview
This guide provides step-by-step instructions for setting up Google Cloud Storage (GCS) for the **Royal Enfield Workflow System** in **Production** environment. This document focuses specifically on production deployment requirements, folder structure, and environment configuration.
---
## Table of Contents
1. [Production Requirements](#1-production-requirements)
2. [GCP Bucket Configuration](#2-gcp-bucket-configuration)
3. [Service Account Setup](#3-service-account-setup)
4. [Environment Variables Configuration](#4-environment-variables-configuration)
5. [Folder Structure in GCS](#5-folder-structure-in-gcs)
6. [Security & Access Control](#6-security--access-control)
7. [CORS Configuration](#7-cors-configuration)
8. [Lifecycle Management](#8-lifecycle-management)
9. [Monitoring & Alerts](#9-monitoring--alerts)
10. [Verification & Testing](#10-verification--testing)
---
## 1. Production Requirements
### 1.1 Application Details
| Item | Production Value |
|------|------------------|
| **Application** | Royal Enfield Workflow System |
| **Environment** | Production |
| **Domain** | `https://reflow.royalenfield.com` |
| **Purpose** | Store workflow documents, attachments, invoices, and credit notes |
| **Storage Type** | Google Cloud Storage (GCS) |
| **Region** | `asia-south1` (Mumbai) |
### 1.2 Storage Requirements
The application stores:
- **Workflow Documents**: Initial documents uploaded during request creation
- **Work Note Attachments**: Files attached during approval workflow
- **Invoice Files**: Generated e-invoice PDFs
- **Credit Note Files**: Generated credit note PDFs
- **Dealer Claim Documents**: Proposal documents, completion documents
---
## 2. GCP Bucket Configuration
### 2.1 Production Bucket Settings
| Setting | Production Value |
|---------|------------------|
| **Bucket Name** | `reflow-documents-prod` |
| **Location Type** | Region |
| **Region** | `asia-south1` (Mumbai) |
| **Storage Class** | Standard (for active files) |
| **Access Control** | Uniform bucket-level access |
| **Public Access Prevention** | Enforced (Block all public access) |
| **Versioning** | Enabled (for recovery) |
| **Lifecycle Rules** | Configured (see section 8) |
### 2.2 Create Production Bucket
```bash
# Create production bucket
gcloud storage buckets create gs://reflow-documents-prod \
--project=re-platform-workflow-dealer \
--location=asia-south1 \
--uniform-bucket-level-access \
--public-access-prevention
# Enable versioning
gcloud storage buckets update gs://reflow-documents-prod \
--versioning
# Verify bucket creation
gcloud storage buckets describe gs://reflow-documents-prod
```
### 2.3 Bucket Naming Convention
| Environment | Bucket Name | Purpose |
|-------------|-------------|---------|
| Development | `reflow-documents-dev` | Development testing |
| UAT | `reflow-documents-uat` | User acceptance testing |
| Production | `reflow-documents-prod` | Live production data |
---
## 3. Service Account Setup
### 3.1 Create Production Service Account
```bash
# Create service account for production
gcloud iam service-accounts create reflow-storage-prod-sa \
--display-name="RE Workflow Production Storage Service Account" \
--description="Service account for production file storage operations" \
--project=re-platform-workflow-dealer
```
### 3.2 Assign Required Roles
The service account needs the following IAM roles:
| Role | Purpose | Required For |
|------|---------|--------------|
| `roles/storage.objectAdmin` | Full control over objects | Upload, delete, update files |
| `roles/storage.objectViewer` | Read objects | Download and preview files |
| `roles/storage.legacyBucketReader` | Read bucket metadata | List files and check bucket status |
```bash
# Grant Storage Object Admin role
gcloud projects add-iam-policy-binding re-platform-workflow-dealer \
--member="serviceAccount:reflow-storage-prod-sa@re-platform-workflow-dealer.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
# Grant Storage Object Viewer role (for read operations)
gcloud projects add-iam-policy-binding re-platform-workflow-dealer \
--member="serviceAccount:reflow-storage-prod-sa@re-platform-workflow-dealer.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"
```
### 3.3 Generate Service Account Key
```bash
# Generate JSON key file for production
gcloud iam service-accounts keys create ./config/gcp-key-prod.json \
--iam-account=reflow-storage-prod-sa@re-platform-workflow-dealer.iam.gserviceaccount.com \
--project=re-platform-workflow-dealer
```
⚠️ **Security Warning:**
- Store the key file securely (not in Git)
- Use secure file transfer methods
- Rotate keys periodically (every 90 days recommended)
- Restrict file permissions: `chmod 600 ./config/gcp-key-prod.json`
---
## 4. Environment Variables Configuration
### 4.1 Required Environment Variables
Add the following environment variables to your production `.env` file:
```env
# ============================================
# Google Cloud Storage (GCP) Configuration
# ============================================
# GCP Project ID - Must match the project_id in your service account key file
GCP_PROJECT_ID=re-platform-workflow-dealer
# GCP Bucket Name - Production bucket name
GCP_BUCKET_NAME=reflow-documents-prod
# GCP Service Account Key File Path
# Can be relative to project root or absolute path
# Example: ./config/gcp-key-prod.json
# Example: /etc/reflow/config/gcp-key-prod.json
GCP_KEY_FILE=./config/gcp-key-prod.json
```
### 4.2 Environment Variable Details
| Variable | Description | Example Value | Required |
|----------|-------------|---------------|----------|
| `GCP_PROJECT_ID` | Your GCP project ID. Must match the `project_id` field in the service account JSON key file. | `re-platform-workflow-dealer` | ✅ Yes |
| `GCP_BUCKET_NAME` | Name of the GCS bucket where files will be stored. Must exist in your GCP project. | `reflow-documents-prod` | ✅ Yes |
| `GCP_KEY_FILE` | Path to the service account JSON key file. Can be relative (from project root) or absolute path. | `./config/gcp-key-prod.json` | ✅ Yes |
### 4.3 File Path Configuration
**Relative Path (Recommended for Development):**
```env
GCP_KEY_FILE=./config/gcp-key-prod.json
```
**Absolute Path (Recommended for Production):**
```env
GCP_KEY_FILE=/etc/reflow/config/gcp-key-prod.json
```
### 4.4 Verification
After setting environment variables, verify the configuration:
```bash
# Check if variables are set
echo $GCP_PROJECT_ID
echo $GCP_BUCKET_NAME
echo $GCP_KEY_FILE
# Verify key file exists
ls -la $GCP_KEY_FILE
# Verify key file permissions (should be 600)
stat -c "%a %n" $GCP_KEY_FILE
```
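The same check can be done in code at application startup so the service fails fast on a misconfigured environment. A small sketch (not part of the current codebase):
```typescript
import fs from 'fs';

// Throws at startup if the GCS configuration is incomplete
export function assertGcsConfig(): void {
  const required = ['GCP_PROJECT_ID', 'GCP_BUCKET_NAME', 'GCP_KEY_FILE'] as const;
  for (const name of required) {
    if (!process.env[name]) {
      throw new Error(`Missing required environment variable: ${name}`);
    }
  }
  if (!fs.existsSync(process.env.GCP_KEY_FILE as string)) {
    throw new Error(`GCP key file not found at ${process.env.GCP_KEY_FILE}`);
  }
}
```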
---
## 5. Folder Structure in GCS
### 5.1 Production Bucket Structure
```
reflow-documents-prod/
├── requests/ # All workflow-related files
│ ├── REQ-2025-12-0001/ # Request-specific folder
│ │ ├── documents/ # Initial request documents
│ │ │ ├── 1701234567890-abc123-proposal.pdf
│ │ │ ├── 1701234567891-def456-specification.docx
│ │ │ └── 1701234567892-ghi789-budget.xlsx
│ │ │
│ │ ├── attachments/ # Work note attachments
│ │ │ ├── 1701234567893-jkl012-approval_note.pdf
│ │ │ ├── 1701234567894-mno345-signature.png
│ │ │ └── 1701234567895-pqr678-supporting_doc.pdf
│ │ │
│ │ ├── invoices/ # Generated invoice files
│ │ │ └── 1701234567896-stu901-invoice_REQ-2025-12-0001.pdf
│ │ │
│ │ └── credit-notes/ # Generated credit note files
│ │ └── 1701234567897-vwx234-credit_note_REQ-2025-12-0001.pdf
│ │
│ ├── REQ-2025-12-0002/
│ │ ├── documents/
│ │ ├── attachments/
│ │ ├── invoices/
│ │ └── credit-notes/
│ │
│ └── REQ-2025-12-0003/
│ └── ...
└── temp/ # Temporary uploads (auto-deleted after 24h)
└── (temporary files before processing)
```
### 5.2 File Path Patterns
| File Type | Path Pattern | Example |
|-----------|--------------|---------|
| **Documents** | `requests/{requestNumber}/documents/{timestamp}-{hash}-{filename}` | `requests/REQ-2025-12-0001/documents/1701234567890-abc123-proposal.pdf` |
| **Attachments** | `requests/{requestNumber}/attachments/{timestamp}-{hash}-{filename}` | `requests/REQ-2025-12-0001/attachments/1701234567893-jkl012-approval_note.pdf` |
| **Invoices** | `requests/{requestNumber}/invoices/{timestamp}-{hash}-{filename}` | `requests/REQ-2025-12-0001/invoices/1701234567896-stu901-invoice_REQ-2025-12-0001.pdf` |
| **Credit Notes** | `requests/{requestNumber}/credit-notes/{timestamp}-{hash}-{filename}` | `requests/REQ-2025-12-0001/credit-notes/1701234567897-vwx234-credit_note_REQ-2025-12-0001.pdf` |
### 5.3 File Naming Convention
Files are automatically renamed with the following pattern:
```
{timestamp}-{randomHash}-{sanitizedOriginalName}
```
**Example:**
- Original: `My Proposal Document (Final).pdf`
- Stored: `1701234567890-abc123-My_Proposal_Document__Final_.pdf`
**Benefits:**
- Prevents filename conflicts
- Maintains original filename for reference
- Ensures unique file identifiers
- Safe for URL encoding
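A minimal sketch of how such a name can be generated (the helper name and exact sanitisation rules are illustrative, not the project's actual implementation):
```typescript
import crypto from 'crypto';

export function buildStoredFileName(originalName: string): string {
  const timestamp = Date.now();
  const randomHash = crypto.randomBytes(3).toString('hex'); // e.g. "abc123"
  // Replace anything that is not alphanumeric, dot, underscore or hyphen
  const sanitized = originalName.replace(/[^a-zA-Z0-9._-]/g, '_');
  return `${timestamp}-${randomHash}-${sanitized}`;
}

// buildStoredFileName('My Proposal Document (Final).pdf')
// -> "1701234567890-abc123-My_Proposal_Document__Final_.pdf"
```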
---
## 6. Security & Access Control
### 6.1 Bucket Security Settings
```bash
# Enforce public access prevention
gcloud storage buckets update gs://reflow-documents-prod \
--public-access-prevention
# Enable uniform bucket-level access
gcloud storage buckets update gs://reflow-documents-prod \
--uniform-bucket-level-access
```
### 6.2 Access Control Strategy
**Production Approach:**
- **Private Bucket**: All files are private by default
- **Signed URLs**: Generate time-limited signed URLs for file access (recommended)
- **Service Account**: Only service account has direct access
- **IAM Policies**: Restrict access to specific service accounts only
### 6.3 Signed URL Configuration (Recommended)
For production, use signed URLs instead of public URLs:
```typescript
// Example: Generate signed URL (valid for 1 hour)
const [url] = await file.getSignedUrl({
action: 'read',
expires: Date.now() + 60 * 60 * 1000, // 1 hour
});
```
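For context, a fuller sketch showing how `file` can be obtained from the bucket configured in section 4 (environment variable names as above; the actual service wrapper in the codebase may differ):
```typescript
import { Storage } from '@google-cloud/storage';

// Client initialised from the environment variables described in section 4
const storage = new Storage({
  projectId: process.env.GCP_PROJECT_ID,
  keyFilename: process.env.GCP_KEY_FILE,
});

export async function getSignedReadUrl(gcsPath: string): Promise<string> {
  const file = storage.bucket(process.env.GCP_BUCKET_NAME as string).file(gcsPath);
  // Time-limited read access (1 hour) - suitable for a private bucket
  const [url] = await file.getSignedUrl({
    version: 'v4',
    action: 'read',
    expires: Date.now() + 60 * 60 * 1000,
  });
  return url;
}

// Usage: const url = await getSignedReadUrl('requests/REQ-2025-12-0001/documents/proposal.pdf');
```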
### 6.4 Security Checklist
- [ ] Public access prevention enabled
- [ ] Uniform bucket-level access enabled
- [ ] Service account has minimal required permissions
- [ ] JSON key file stored securely (not in Git)
- [ ] Key file permissions set to 600
- [ ] CORS configured for specific domains only
- [ ] Bucket versioning enabled
- [ ] Access logging enabled
- [ ] Signed URLs used for file access (if applicable)
---
## 7. CORS Configuration
### 7.1 Production CORS Policy
Create `cors-config-prod.json`:
```json
[
{
"origin": [
"https://reflow.royalenfield.com",
"https://www.royalenfield.com"
],
"method": ["GET", "PUT", "POST", "DELETE", "HEAD", "OPTIONS"],
"responseHeader": [
"Content-Type",
"Content-Disposition",
"Content-Length",
"Cache-Control",
"x-goog-meta-*"
],
"maxAgeSeconds": 3600
}
]
```
### 7.2 Apply CORS Configuration
```bash
gcloud storage buckets update gs://reflow-documents-prod \
--cors-file=cors-config-prod.json
```
### 7.3 Verify CORS
```bash
# Check CORS configuration
gcloud storage buckets describe gs://reflow-documents-prod \
--format="value(cors)"
```
---
## 8. Lifecycle Management
### 8.1 Lifecycle Rules Configuration
Create `lifecycle-config-prod.json`:
```json
{
"lifecycle": {
"rule": [
{
"action": { "type": "Delete" },
"condition": {
"age": 1,
"matchesPrefix": ["temp/"]
},
"description": "Delete temporary files after 24 hours"
},
{
"action": { "type": "SetStorageClass", "storageClass": "NEARLINE" },
"condition": {
"age": 90,
"matchesPrefix": ["requests/"]
},
"description": "Move old files to Nearline storage after 90 days"
},
{
"action": { "type": "SetStorageClass", "storageClass": "COLDLINE" },
"condition": {
"age": 365,
"matchesPrefix": ["requests/"]
},
"description": "Move archived files to Coldline storage after 1 year"
}
]
}
}
```
### 8.2 Apply Lifecycle Rules
```bash
gcloud storage buckets update gs://reflow-documents-prod \
--lifecycle-file=lifecycle-config-prod.json
```
### 8.3 Lifecycle Rule Benefits
| Rule | Purpose | Cost Savings |
|------|---------|--------------|
| Delete temp files | Remove temporary uploads after 24h | Prevents storage bloat |
| Move to Nearline | Archive files older than 90 days | ~50% cost reduction |
| Move to Coldline | Archive files older than 1 year | ~70% cost reduction |
---
## 9. Monitoring & Alerts
### 9.1 Enable Access Logging
```bash
# Create logging bucket (if not exists)
gcloud storage buckets create gs://reflow-logs-prod \
--project=re-platform-workflow-dealer \
--location=asia-south1
# Enable access logging
gcloud storage buckets update gs://reflow-documents-prod \
--log-bucket=gs://reflow-logs-prod \
--log-object-prefix=reflow-storage-logs/
```
### 9.2 Set Up Monitoring Alerts
**Recommended Alerts:**
1. **Storage Quota Alert**
- Trigger: Storage exceeds 80% of quota
- Action: Notify DevOps team
2. **Unusual Access Patterns**
- Trigger: Unusual download patterns detected
- Action: Security team notification
3. **Failed Access Attempts**
- Trigger: Multiple failed authentication attempts
- Action: Immediate security alert
4. **High Upload Volume**
- Trigger: Upload volume exceeds normal threshold
- Action: Performance team notification
### 9.3 Cost Monitoring
Monitor storage costs via:
- GCP Console → Billing → Reports
- Set up budget alerts at 50%, 75%, 90% of monthly budget
- Review storage class usage (Standard vs Nearline vs Coldline)
---
## 10. Verification & Testing
### 10.1 Pre-Deployment Verification
```bash
# 1. Verify bucket exists
gcloud storage buckets describe gs://reflow-documents-prod
# 2. Verify service account has access
gcloud storage ls gs://reflow-documents-prod \
--impersonate-service-account=reflow-storage-prod-sa@re-platform-workflow-dealer.iam.gserviceaccount.com
# 3. Test file upload
echo "test file" > test-upload.txt
gcloud storage cp test-upload.txt gs://reflow-documents-prod/temp/test-upload.txt
# 4. Test file download
gcloud storage cp gs://reflow-documents-prod/temp/test-upload.txt ./test-download.txt
# 5. Test file delete
gcloud storage rm gs://reflow-documents-prod/temp/test-upload.txt
# 6. Clean up
rm test-upload.txt test-download.txt
```
### 10.2 Application-Level Testing
1. **Upload Test:**
- Upload a document via API
- Verify file appears in GCS bucket
- Check database `storage_url` field contains GCS URL
2. **Download Test:**
- Download file via API
- Verify file is accessible
- Check response headers
3. **Delete Test:**
- Delete file via API
- Verify file is removed from GCS
- Check database record is updated
### 10.3 Production Readiness Checklist
- [ ] Bucket created and configured
- [ ] Service account created with correct permissions
- [ ] JSON key file generated and stored securely
- [ ] Environment variables configured in `.env`
- [ ] CORS policy applied
- [ ] Lifecycle rules configured
- [ ] Versioning enabled
- [ ] Access logging enabled
- [ ] Monitoring alerts configured
- [ ] Upload/download/delete operations tested
- [ ] Backup and recovery procedures documented
---
## 11. Troubleshooting
### 11.1 Common Issues
**Issue: Files not uploading to GCS**
- ✅ Check `.env` configuration matches credentials
- ✅ Verify service account has correct permissions
- ✅ Check bucket name exists and is accessible
- ✅ Review application logs for GCS errors
- ✅ Verify key file path is correct
**Issue: Files uploading but not accessible**
- ✅ Verify bucket permissions (private vs public)
- ✅ Check CORS configuration if accessing from browser
- ✅ Ensure `storage_url` is being saved correctly in database
- ✅ Verify signed URL generation (if using private bucket)
**Issue: Permission denied errors**
- ✅ Verify service account has `roles/storage.objectAdmin`
- ✅ Check bucket IAM policies
- ✅ Verify key file is valid and not expired
### 11.2 Log Analysis
Check application logs for GCS-related messages:
```bash
# Search for GCS initialization
grep "GCS.*Initialized" logs/app.log
# Search for GCS errors
grep "GCS.*Error" logs/app.log
# Search for upload failures
grep "GCS.*upload.*failed" logs/app.log
```
---
## 12. Production Deployment Steps
### 12.1 Deployment Checklist
1. **Pre-Deployment:**
- [ ] Create production bucket
- [ ] Create production service account
- [ ] Generate and secure key file
- [ ] Configure environment variables
- [ ] Test upload/download operations
2. **Deployment:**
- [ ] Deploy application with new environment variables
- [ ] Verify GCS initialization in logs
- [ ] Test file upload functionality
- [ ] Monitor for errors
3. **Post-Deployment:**
- [ ] Verify files are being stored in GCS
- [ ] Check database `storage_url` fields
- [ ] Monitor storage costs
- [ ] Review access logs
---
## 13. Cost Estimation (Production)
| Item | Monthly Estimate | Notes |
|------|------------------|-------|
| **Storage (500GB)** | ~$10.00 | Standard storage class |
| **Operations (100K)** | ~$0.50 | Upload/download operations |
| **Network Egress** | Variable | Depends on download volume |
| **Nearline Storage** | ~$5.00 | Files older than 90 days |
| **Coldline Storage** | ~$2.00 | Files older than 1 year |
**Total Estimated Monthly Cost:** ~$17.50 (excluding network egress)
---
## 14. Support & Contacts
| Role | Responsibility | Contact |
|------|----------------|---------|
| **DevOps Team** | GCP infrastructure setup | [DevOps Email] |
| **Application Team** | Application configuration | [App Team Email] |
| **Security Team** | Access control and permissions | [Security Email] |
---
## 15. Quick Reference
### 15.1 Essential Commands
```bash
# Create bucket
gcloud storage buckets create gs://reflow-documents-prod \
--project=re-platform-workflow-dealer \
--location=asia-south1 \
--uniform-bucket-level-access \
--public-access-prevention
# Create service account
gcloud iam service-accounts create reflow-storage-prod-sa \
--display-name="RE Workflow Production Storage" \
--project=re-platform-workflow-dealer
# Generate key
gcloud iam service-accounts keys create ./config/gcp-key-prod.json \
--iam-account=reflow-storage-prod-sa@re-platform-workflow-dealer.iam.gserviceaccount.com
# Set CORS
gcloud storage buckets update gs://reflow-documents-prod \
--cors-file=cors-config-prod.json
# Enable versioning
gcloud storage buckets update gs://reflow-documents-prod \
--versioning
```
### 15.2 Environment Variables Template
```env
# Production GCP Configuration
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_BUCKET_NAME=reflow-documents-prod
GCP_KEY_FILE=./config/gcp-key-prod.json
```
---
## Appendix: File Structure Reference
### Database Storage Fields
The application stores file information in the database:
| Table | Field | Description |
|-------|-------|-------------|
| `documents` | `file_path` | GCS path: `requests/{requestNumber}/documents/{filename}` |
| `documents` | `storage_url` | Full GCS URL: `https://storage.googleapis.com/bucket/path` |
| `work_note_attachments` | `file_path` | GCS path: `requests/{requestNumber}/attachments/{filename}` |
| `work_note_attachments` | `storage_url` | Full GCS URL |
| `claim_invoices` | `invoice_file_path` | GCS path: `requests/{requestNumber}/invoices/{filename}` |
| `claim_credit_notes` | `credit_note_file_path` | GCS path: `requests/{requestNumber}/credit-notes/{filename}` |
---
**Document Version:** 1.0
**Last Updated:** December 2024
**Maintained By:** RE Workflow Development Team

View File

@ -1,326 +0,0 @@
# GCP Cloud Storage Setup Guide for RE Workflow
## Project Information
| Item | Value |
|------|-------|
| **Application** | RE Workflow System |
| **Environment** | UAT |
| **Domain** | https://reflow-uat.royalenfield.com |
| **Purpose** | Store workflow documents and attachments |
---
## 1. Requirements Overview
The RE Workflow application needs Google Cloud Storage to store:
- Request documents (uploaded during workflow creation)
- Attachments (added during approval process)
- Supporting documents
### Folder Structure in Bucket
```
reflow-documents-uat/
├── requests/
│ ├── REQ-2025-12-0001/
│ │ ├── documents/
│ │ │ ├── proposal.pdf
│ │ │ └── specification.docx
│ │ └── attachments/
│ │ ├── approval_note.pdf
│ │ └── signature.png
│ │
│ ├── REQ-2025-12-0002/
│ │ ├── documents/
│ │ │ └── budget_report.xlsx
│ │ └── attachments/
│ │ └── manager_approval.pdf
│ │
│ └── REQ-2025-12-0003/
│ ├── documents/
│ └── attachments/
└── temp/
└── (temporary uploads before processing)
```
---
## 2. GCP Bucket Configuration
### 2.1 Create Bucket
| Setting | Value |
|---------|-------|
| **Bucket Name** | `reflow-documents-uat` (UAT) / `reflow-documents-prod` (Production) |
| **Location Type** | Region |
| **Region** | `asia-south1` (Mumbai) |
| **Storage Class** | Standard |
| **Access Control** | Uniform |
| **Public Access Prevention** | Enforced (Block all public access) |
### 2.2 Console Commands (gcloud CLI)
```bash
# Create bucket
gcloud storage buckets create gs://reflow-documents-uat \
--project=YOUR_PROJECT_ID \
--location=asia-south1 \
--uniform-bucket-level-access
# Block public access
gcloud storage buckets update gs://reflow-documents-uat \
--public-access-prevention
```
---
## 3. Service Account Setup
### 3.1 Create Service Account
| Setting | Value |
|---------|-------|
| **Name** | `reflow-storage-sa` |
| **Description** | Service account for RE Workflow file storage |
```bash
# Create service account
gcloud iam service-accounts create reflow-storage-sa \
--display-name="RE Workflow Storage Service Account" \
--project=YOUR_PROJECT_ID
```
### 3.2 Assign Permissions
The service account needs these roles:
| Role | Purpose |
|------|---------|
| `roles/storage.objectCreator` | Upload files |
| `roles/storage.objectViewer` | Download/preview files |
| `roles/storage.objectAdmin` | Delete files |
```bash
# Grant permissions
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:reflow-storage-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
```
### 3.3 Generate JSON Key
```bash
# Generate key file
gcloud iam service-accounts keys create gcp-key.json \
--iam-account=reflow-storage-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com
```
⚠️ **Security:** Share this key file securely (not via email). Use a secure file transfer method.
---
## 4. CORS Configuration
Apply this CORS policy to allow browser uploads:
### 4.1 Create `cors-config.json`
```json
[
{
"origin": [
"https://reflow-uat.royalenfield.com",
"https://reflow.royalenfield.com"
],
"method": ["GET", "PUT", "POST", "DELETE", "HEAD", "OPTIONS"],
"responseHeader": [
"Content-Type",
"Content-Disposition",
"Content-Length",
"Cache-Control",
"x-goog-meta-*"
],
"maxAgeSeconds": 3600
}
]
```
### 4.2 Apply CORS Policy
```bash
gcloud storage buckets update gs://reflow-documents-uat \
--cors-file=cors-config.json
```
---
## 5. Lifecycle Rules (Optional but Recommended)
### 5.1 Auto-delete Temporary Files
Delete files in `temp/` folder after 24 hours:
```json
{
"lifecycle": {
"rule": [
{
"action": { "type": "Delete" },
"condition": {
"age": 1,
"matchesPrefix": ["temp/"]
}
}
]
}
}
```
```bash
gcloud storage buckets update gs://reflow-documents-uat \
--lifecycle-file=lifecycle-config.json
```
---
## 6. Bucket Versioning (Recommended)
Enable versioning for accidental delete recovery:
```bash
gcloud storage buckets update gs://reflow-documents-uat \
--versioning
```
---
## 7. Deliverables to Application Team
Please provide the following to the development team:
### 7.1 Environment Variables
| Variable | Value |
|----------|-------|
| `GCP_PROJECT_ID` | `your-gcp-project-id` |
| `GCP_BUCKET_NAME` | `reflow-documents-uat` |
| `GCP_KEY_FILE` | `./config/gcp-key.json` |
### 7.2 Files to Share
| File | Description | How to Share |
|------|-------------|--------------|
| `gcp-key.json` | Service account key | Secure transfer (not email) |
---
## 8. Verification Steps
After setup, verify with:
```bash
# List bucket contents
gcloud storage ls gs://reflow-documents-uat/
# Test upload
echo "test" > test.txt
gcloud storage cp test.txt gs://reflow-documents-uat/temp/
# Test download
gcloud storage cp gs://reflow-documents-uat/temp/test.txt ./downloaded.txt
# Test delete
gcloud storage rm gs://reflow-documents-uat/temp/test.txt
```
---
## 9. Environment-Specific Buckets
| Environment | Bucket Name | Region |
|-------------|-------------|--------|
| Development | `reflow-documents-dev` | asia-south1 |
| UAT | `reflow-documents-uat` | asia-south1 |
| Production | `reflow-documents-prod` | asia-south1 |
---
## 10. Monitoring & Alerts (Optional)
### 10.1 Enable Logging
```bash
gcloud storage buckets update gs://reflow-documents-uat \
--log-bucket=gs://your-logging-bucket \
--log-object-prefix=reflow-storage-logs/
```
### 10.2 Storage Alerts
Set up alerts for:
- Storage exceeds 80% of quota
- Unusual download patterns
- Failed access attempts
---
## 11. Cost Estimation
| Item | Estimate (Monthly) |
|------|-------------------|
| Storage (100GB) | ~$2.00 |
| Operations (10K) | ~$0.05 |
| Network Egress | Varies by usage |
---
## 12. Security Checklist
- [ ] Public access prevention enabled
- [ ] Service account has minimal required permissions
- [ ] JSON key stored securely (not in Git)
- [ ] CORS configured for specific domains only
- [ ] Bucket versioning enabled
- [ ] Lifecycle rules for temp files
- [ ] Access logging enabled
---
## 13. Contact
| Role | Contact |
|------|---------|
| Application Team | [Your Email] |
| DevOps Team | [DevOps Email] |
---
## Appendix: Quick Reference
### GCP Console URLs
- **Buckets:** https://console.cloud.google.com/storage/browser
- **Service Accounts:** https://console.cloud.google.com/iam-admin/serviceaccounts
- **IAM:** https://console.cloud.google.com/iam-admin/iam
### gcloud Commands Summary
```bash
# Create bucket
gcloud storage buckets create gs://BUCKET_NAME --location=asia-south1
# Create service account
gcloud iam service-accounts create SA_NAME
# Generate key
gcloud iam service-accounts keys create key.json --iam-account=SA@PROJECT.iam.gserviceaccount.com
# Set CORS
gcloud storage buckets update gs://BUCKET_NAME --cors-file=cors.json
# Enable versioning
gcloud storage buckets update gs://BUCKET_NAME --versioning
```

View File

@ -1,277 +0,0 @@
# Google Secret Manager Integration Guide
This guide explains how to integrate Google Cloud Secret Manager with your Node.js application to securely manage environment variables.
## Overview
The Google Secret Manager integration allows you to:
- Store sensitive configuration values (passwords, API keys, tokens) in Google Cloud Secret Manager
- Load secrets at application startup and merge them with your existing environment variables
- Maintain backward compatibility with `.env` files for local development
- Use minimal code changes - existing `process.env.VARIABLE_NAME` access continues to work
## Prerequisites
1. **Google Cloud Project** with Secret Manager API enabled
2. **Service Account** with Secret Manager Secret Accessor role
3. **Authentication** - Service account credentials configured (via `GCP_KEY_FILE` or default credentials)
## Setup Instructions
### 1. Enable Secret Manager API
```bash
gcloud services enable secretmanager.googleapis.com --project=YOUR_PROJECT_ID
```
### 2. Create Secrets in Google Secret Manager
Create secrets using the Google Cloud Console or gcloud CLI:
```bash
# Example: Create a database password secret
echo -n "your-secure-password" | gcloud secrets create DB_PASSWORD \
--project=YOUR_PROJECT_ID \
--data-file=-
# Example: Create a JWT secret
echo -n "your-jwt-secret-key" | gcloud secrets create JWT_SECRET \
--project=YOUR_PROJECT_ID \
--data-file=-
# Grant service account access to secrets
gcloud secrets add-iam-policy-binding DB_PASSWORD \
--member="serviceAccount:YOUR_SERVICE_ACCOUNT@YOUR_PROJECT.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" \
--project=YOUR_PROJECT_ID
```
### 3. Configure Environment Variables
Add the following to your `.env` file:
```env
# Google Secret Manager Configuration
USE_GOOGLE_SECRET_MANAGER=true
GCP_PROJECT_ID=your-project-id
# Optional: Prefix for all secret names (e.g., "prod" -> looks for "prod-DB_PASSWORD")
GCP_SECRET_PREFIX=
# Optional: JSON file mapping secret names to env var names
GCP_SECRET_MAP_FILE=./secret-map.json
```
**Important Notes:**
- Set `USE_GOOGLE_SECRET_MANAGER=true` to enable the integration
- `GCP_PROJECT_ID` must be set (same as used for GCS/Vertex AI)
- `GCP_KEY_FILE` should already be configured for other GCP services
- When `USE_GOOGLE_SECRET_MANAGER=false` or not set, the app uses `.env` file only
### 4. Secret Name Mapping
By default, secrets in Google Secret Manager are automatically mapped to environment variables:
- Secret name: `DB_PASSWORD` → Environment variable: `DB_PASSWORD`
- Secret name: `db-password` → Environment variable: `DB_PASSWORD` (hyphens converted to underscores, uppercase)
- Secret name: `jwt-secret-key` → Environment variable: `JWT_SECRET_KEY`
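In code, the default normalisation amounts to something like this sketch (the real mapping logic lives in `googleSecretManager.service` and may differ):
```typescript
// Convert a Secret Manager secret name to its environment variable name
function toEnvVarName(secretName: string): string {
  return secretName.replace(/-/g, '_').toUpperCase();
}

// toEnvVarName('db-password')    -> 'DB_PASSWORD'
// toEnvVarName('jwt-secret-key') -> 'JWT_SECRET_KEY'
```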
#### Custom Mapping (Optional)
If you need custom mappings, create a JSON file (e.g., `secret-map.json`):
```json
{
"db-password-prod": "DB_PASSWORD",
"jwt-secret-key": "JWT_SECRET",
"okta-client-secret-prod": "OKTA_CLIENT_SECRET"
}
```
Then set in `.env`:
```env
GCP_SECRET_MAP_FILE=./secret-map.json
```
### 5. Secret Prefix (Optional)
If all your secrets share a common prefix:
```env
GCP_SECRET_PREFIX=prod
```
This will look for secrets named `prod-DB_PASSWORD`, `prod-JWT_SECRET`, etc.
## How It Works
1. **Application Startup:**
- `.env` file is loaded first (provides fallback values)
- If `USE_GOOGLE_SECRET_MANAGER=true`, secrets are fetched from Google Secret Manager
- Secrets are merged into `process.env`, overriding `.env` values if they exist
- Application continues with merged environment variables
2. **Fallback Behavior:**
- If Secret Manager is disabled or fails, the app falls back to `.env` file
- No errors are thrown - the app continues with available configuration
- Logs indicate whether secrets were loaded successfully
3. **Existing Code Compatibility:**
- No changes needed to existing code
- Continue using `process.env.VARIABLE_NAME` as before
- Secrets from Secret Manager automatically populate `process.env`
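The merge step described above can be sketched as follows (assuming `@google-cloud/secret-manager` is installed; the actual `googleSecretManager.service` implementation may differ):
```typescript
import { SecretManagerServiceClient } from '@google-cloud/secret-manager';

const client = new SecretManagerServiceClient();

// Fetch one secret's latest version and merge it into process.env
async function loadSecretIntoEnv(projectId: string, secretName: string, envVar: string): Promise<void> {
  try {
    const [version] = await client.accessSecretVersion({
      name: `projects/${projectId}/secrets/${secretName}/versions/latest`,
    });
    const value = version.payload?.data?.toString();
    if (value) {
      process.env[envVar] = value; // overrides any .env fallback value
    }
  } catch {
    // Secret missing or inaccessible: keep the .env value and continue
  }
}
```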
## Default Secrets Loaded
The service automatically attempts to load these common secrets (if they exist in Secret Manager):
**Database:**
- `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASSWORD`
**Authentication:**
- `JWT_SECRET`, `REFRESH_TOKEN_SECRET`, `SESSION_SECRET`
**SSO/Okta:**
- `OKTA_DOMAIN`, `OKTA_CLIENT_ID`, `OKTA_CLIENT_SECRET`, `OKTA_API_TOKEN`
**Email:**
- `SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASSWORD`
**Web Push (VAPID):**
- `VAPID_PUBLIC_KEY`, `VAPID_PRIVATE_KEY`
**Logging:**
- `LOKI_HOST`, `LOKI_USER`, `LOKI_PASSWORD`
### Loading Custom Secrets
To load additional secrets, modify the code:
```typescript
// In server.ts or app.ts
import { googleSecretManager } from './services/googleSecretManager.service';
// Load default secrets + custom ones
await googleSecretManager.loadSecrets([
'DB_PASSWORD',
'JWT_SECRET',
'CUSTOM_API_KEY', // Your custom secret
'CUSTOM_SECRET_2'
]);
```
Or load a single secret on-demand:
```typescript
import { googleSecretManager } from './services/googleSecretManager.service';
const apiKey = await googleSecretManager.getSecretValue('CUSTOM_API_KEY', 'API_KEY');
```
## Security Best Practices
1. **Service Account Permissions:**
- Grant only `roles/secretmanager.secretAccessor` role
- Use separate service accounts for different environments
- Never grant `roles/owner` or `roles/editor` to service accounts
2. **Secret Rotation:**
- Rotate secrets regularly in Google Secret Manager
- The app automatically uses the `latest` version of each secret
- No code changes needed when secrets are rotated
3. **Environment Separation:**
- Use different Google Cloud projects for dev/staging/prod
- Use `GCP_SECRET_PREFIX` to namespace secrets by environment
- Never commit `.env` files with production secrets to version control
4. **Access Control:**
- Use IAM policies to control who can read secrets
- Enable audit logging for secret access
- Regularly review secret access logs
## Troubleshooting
### Secrets Not Loading
**Check logs for:**
```
[Secret Manager] Google Secret Manager is disabled (USE_GOOGLE_SECRET_MANAGER != true)
[Secret Manager] GCP_PROJECT_ID not set, skipping Google Secret Manager
[Secret Manager] Failed to load secrets: [error message]
```
**Common issues:**
1. `USE_GOOGLE_SECRET_MANAGER` not set to `true`
2. `GCP_PROJECT_ID` not configured
3. Service account lacks Secret Manager permissions
4. Secrets don't exist in Secret Manager
5. Incorrect secret names (check case sensitivity)
### Service Account Authentication
Ensure service account credentials are available:
- Set `GCP_KEY_FILE` to point to service account JSON file
- Or configure Application Default Credentials (ADC)
- Test with: `gcloud auth application-default login`
### Secret Not Found
If a secret doesn't exist in Secret Manager:
- The app logs a debug message and continues
- Falls back to `.env` file value
- This is expected behavior - not all secrets need to be in Secret Manager
### Debugging
Enable debug logging by setting:
```env
LOG_LEVEL=debug
```
This will show detailed logs about which secrets are being loaded.
## Example Configuration
**Local Development (.env):**
```env
USE_GOOGLE_SECRET_MANAGER=false
DB_PASSWORD=local-dev-password
JWT_SECRET=local-jwt-secret
```
**Production (.env):**
```env
USE_GOOGLE_SECRET_MANAGER=true
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_SECRET_PREFIX=prod
GCP_KEY_FILE=./credentials/service-account.json
# DB_PASSWORD and other secrets loaded from Secret Manager
```
## Migration Strategy
1. **Phase 1: Setup**
- Create secrets in Google Secret Manager
- Keep `.env` file with current values (as backup)
2. **Phase 2: Test**
- Set `USE_GOOGLE_SECRET_MANAGER=true` in development
- Verify secrets are loaded correctly
- Test application functionality
3. **Phase 3: Production**
- Deploy with `USE_GOOGLE_SECRET_MANAGER=true`
- Monitor logs for secret loading success
- Remove sensitive values from `.env` file (keep placeholders)
4. **Phase 4: Cleanup**
- Remove production secrets from `.env` file
- Ensure all secrets are in Secret Manager
- Document secret names and mappings
## Additional Resources
- [Google Secret Manager Documentation](https://cloud.google.com/secret-manager/docs)
- [Secret Manager Client Library](https://cloud.google.com/nodejs/docs/reference/secret-manager/latest)
- [Service Account Best Practices](https://cloud.google.com/iam/docs/best-practices-service-accounts)

View File

@ -1,78 +0,0 @@
# Dealer Claim Management - Implementation Progress
## ✅ Completed
### 1. Database Migrations
- ✅ `20251210-add-workflow-type-support.ts` - Adds `workflow_type` and `template_id` to `workflow_requests`
- ✅ `20251210-enhance-workflow-templates.ts` - Enhances `workflow_templates` with form configuration fields
- ✅ `20251210-create-dealer-claim-tables.ts` - Creates dealer claim related tables:
- `dealer_claim_details` - Main claim information
- `dealer_proposal_details` - Step 1: Dealer proposal submission
- `dealer_completion_details` - Step 5: Dealer completion documents
### 2. Models
- ✅ Updated `WorkflowRequest` model with `workflowType` and `templateId` fields
- ✅ Created `DealerClaimDetails` model
- ✅ Created `DealerProposalDetails` model
- ✅ Created `DealerCompletionDetails` model
### 3. Services
- ✅ Created `TemplateFieldResolver` service for dynamic user field references
## 🚧 In Progress
### 4. Services (Next Steps)
- ⏳ Create `EnhancedTemplateService` - Main service for template operations
- ⏳ Create `DealerClaimService` - Claim-specific business logic
### 5. Controllers & Routes
- ⏳ Create `DealerClaimController` - API endpoints for claim management
- ⏳ Create routes for dealer claim operations
- ⏳ Create template management endpoints
## 📋 Next Steps
1. **Create EnhancedTemplateService**
- Get form configuration with resolved user references
- Save step data
- Validate form data
2. **Create DealerClaimService**
- Create claim request
- Handle 8-step workflow transitions
- Manage proposal and completion submissions
3. **Create Controllers**
- POST `/api/v1/dealer-claims` - Create claim request
- GET `/api/v1/dealer-claims/:requestId` - Get claim details
- POST `/api/v1/dealer-claims/:requestId/proposal` - Submit proposal (Step 1)
- POST `/api/v1/dealer-claims/:requestId/completion` - Submit completion (Step 5)
- GET `/api/v1/templates/:templateId/form-config` - Get form configuration
4. **Integration Services**
- SAP integration for IO validation and budget blocking
- DMS integration for e-invoice and credit note generation
## 📝 Notes
- All migrations are ready to run
- Models are created with proper associations
- Template field resolver supports dynamic user references
- System is designed to be extensible for future templates
## 🔄 Running Migrations
To apply the migrations:
```bash
cd Re_Backend
npm run migrate
```
Or run individually:
```bash
npx ts-node src/scripts/run-migration.ts 20251210-add-workflow-type-support
npx ts-node src/scripts/run-migration.ts 20251210-enhance-workflow-templates
npx ts-node src/scripts/run-migration.ts 20251210-create-dealer-claim-tables
```

View File

@ -1,159 +0,0 @@
# Dealer Claim Management - Implementation Summary
## ✅ Completed Implementation
### 1. Database Migrations (4 files)
- ✅ `20251210-add-workflow-type-support.ts` - Adds `workflow_type` and `template_id` to `workflow_requests`
- ✅ `20251210-enhance-workflow-templates.ts` - Enhances `workflow_templates` with form configuration
- ✅ `20251210-add-template-id-foreign-key.ts` - Adds FK constraint for `template_id`
- ✅ `20251210-create-dealer-claim-tables.ts` - Creates dealer claim tables:
- `dealer_claim_details` - Main claim information
- `dealer_proposal_details` - Step 1: Dealer proposal
- `dealer_completion_details` - Step 5: Completion documents
### 2. Models (5 files)
- ✅ Updated `WorkflowRequest` - Added `workflowType` and `templateId` fields
- ✅ Created `DealerClaimDetails` - Main claim information model
- ✅ Created `DealerProposalDetails` - Proposal submission model
- ✅ Created `DealerCompletionDetails` - Completion documents model
- ✅ Created `WorkflowTemplate` - Template configuration model
### 3. Services (3 files)
- ✅ Created `TemplateFieldResolver` - Resolves dynamic user field references
- ✅ Created `EnhancedTemplateService` - Template form management
- ✅ Created `DealerClaimService` - Claim-specific business logic:
- `createClaimRequest()` - Create new claim with 8-step workflow
- `getClaimDetails()` - Get complete claim information
- `submitDealerProposal()` - Step 1: Dealer proposal submission
- `submitCompletionDocuments()` - Step 5: Completion submission
- `updateIODetails()` - Step 3: IO budget blocking
- `updateEInvoiceDetails()` - Step 7: E-Invoice generation
- `updateCreditNoteDetails()` - Step 8: Credit note issuance
### 4. Controllers & Routes (2 files)
- ✅ Created `DealerClaimController` - API endpoints for claim operations
- ✅ Created `dealerClaim.routes.ts` - Route definitions
- ✅ Registered routes in `routes/index.ts`
### 5. Frontend Utilities (1 file)
- ✅ Created `claimRequestUtils.ts` - Utility functions for detecting claim requests
## 📋 API Endpoints Created
### Dealer Claim Management
- `POST /api/v1/dealer-claims` - Create claim request
- `GET /api/v1/dealer-claims/:requestId` - Get claim details
- `POST /api/v1/dealer-claims/:requestId/proposal` - Submit dealer proposal (Step 1)
- `POST /api/v1/dealer-claims/:requestId/completion` - Submit completion (Step 5)
- `PUT /api/v1/dealer-claims/:requestId/io` - Update IO details (Step 3)
- `PUT /api/v1/dealer-claims/:requestId/e-invoice` - Update e-invoice (Step 7)
- `PUT /api/v1/dealer-claims/:requestId/credit-note` - Update credit note (Step 8)
## 🔄 8-Step Workflow Implementation
The system automatically creates 8 approval levels:
1. **Dealer Proposal Submission** (72h) - Dealer submits proposal
2. **Requestor Evaluation** (48h) - Initiator reviews and confirms
3. **Department Lead Approval** (72h) - Dept lead approves and blocks IO
4. **Activity Creation** (1h, Auto) - System creates activity record
5. **Dealer Completion Documents** (120h) - Dealer submits completion docs
6. **Requestor Claim Approval** (48h) - Initiator approves claim
7. **E-Invoice Generation** (1h, Auto) - System generates e-invoice via DMS
8. **Credit Note Confirmation** (48h) - Finance confirms credit note
## 🎯 Key Features
1. **Unified Request System**
- All requests use same `workflow_requests` table
- Identified by `workflowType: 'CLAIM_MANAGEMENT'`
- Automatically appears in "My Requests" and "Open Requests"
2. **Template-Specific Data Storage**
- Claim data stored in extension tables
- Linked via `request_id` foreign key
- Supports future templates with their own tables
3. **Dynamic User References**
- Auto-populate fields from initiator, dealer, approvers
- Supports team lead, department lead references
- Configurable per template
4. **File Upload Integration**
- Uses GCS with local fallback
- Organized by request number and file type
- Supports proposal documents and completion files
## 📝 Next Steps
### Backend
1. ⏳ Add SAP integration for IO validation and budget blocking
2. ⏳ Add DMS integration for e-invoice and credit note generation
3. ⏳ Create template management API endpoints
4. ⏳ Add validation for dealer codes (SAP integration)
### Frontend
1. ⏳ Create `claimDataMapper.ts` utility functions
2. ⏳ Update `RequestDetail.tsx` to conditionally render claim components
3. ⏳ Update API services to include `workflowType`
4. ⏳ Create `dealerClaimApi.ts` service
5. ⏳ Update request cards to show workflow type
## 🚀 Running the Implementation
### 1. Run Migrations
```bash
cd Re_Backend
npm run migrate
```
### 2. Test API Endpoints
```bash
# Create claim request
POST /api/v1/dealer-claims
{
"activityName": "Diwali Campaign",
"activityType": "Marketing Activity",
"dealerCode": "RE-MH-001",
"dealerName": "Royal Motors Mumbai",
"location": "Mumbai",
"requestDescription": "Marketing campaign details..."
}
# Submit proposal
POST /api/v1/dealer-claims/:requestId/proposal
FormData with proposalDocument file and JSON data
```
## 📊 Database Structure
```
workflow_requests (common)
├── workflow_type: 'CLAIM_MANAGEMENT'
└── template_id: (nullable)
dealer_claim_details (claim-specific)
└── request_id → workflow_requests
dealer_proposal_details (Step 1)
└── request_id → workflow_requests
dealer_completion_details (Step 5)
└── request_id → workflow_requests
approval_levels (8 steps)
└── request_id → workflow_requests
```
## ✅ Testing Checklist
- [ ] Run migrations successfully
- [ ] Create claim request via API
- [ ] Submit dealer proposal
- [ ] Update IO details
- [ ] Submit completion documents
- [ ] Verify request appears in "My Requests"
- [ ] Verify request appears in "Open Requests"
- [ ] Test file uploads (GCS and local fallback)
- [ ] Test workflow progression through 8 steps

View File

@ -0,0 +1,222 @@
# In-App Notification System - Setup Guide
## 🎯 Overview
Complete real-time in-app notification system for Royal Enfield Workflow Management System.
## ✅ Features Implemented
### Backend:
1. **Notification Model** (`models/Notification.ts`)
- Stores all in-app notifications
- Tracks read/unread status
- Supports priority levels (LOW, MEDIUM, HIGH, URGENT)
- Metadata for request context
2. **Notification Controller** (`controllers/notification.controller.ts`)
- GET `/api/v1/notifications` - List user's notifications with pagination
- GET `/api/v1/notifications/unread-count` - Get unread count
- PATCH `/api/v1/notifications/:notificationId/read` - Mark as read
- POST `/api/v1/notifications/mark-all-read` - Mark all as read
- DELETE `/api/v1/notifications/:notificationId` - Delete notification
3. **Enhanced Notification Service** (`services/notification.service.ts`)
- Saves notifications to database (for in-app display)
- Emits real-time socket.io events
- Sends push notifications (if subscribed)
- All in one call: `notificationService.sendToUsers()`
4. **Socket.io Enhancement** (`realtime/socket.ts`)
- Added `join:user` event for personal notification room
- Added `emitToUser()` function for targeted notifications
- Real-time delivery without page refresh
### Frontend:
1. **Notification API Service** (`services/notificationApi.ts`)
- Complete API client for all notification endpoints
2. **PageLayout Integration** (`components/layout/PageLayout/PageLayout.tsx`)
- Real-time notification bell with unread count badge
- Dropdown showing latest 10 notifications
- Click to mark as read and navigate to request
- "Mark all as read" functionality
- Auto-refreshes when new notifications arrive
- Works even if browser push notifications disabled
3. **Data Freshness** (MyRequests, OpenRequests, ClosedRequests)
- Fixed stale data after DB deletion
- Always shows fresh data from API
## 📦 Database Setup
### Step 1: Run Migration
Execute this SQL in your PostgreSQL database:
```bash
psql -U postgres -d re_workflow_db -f migrations/create_notifications_table.sql
```
OR run manually in pgAdmin/SQL tool:
```sql
-- See: migrations/create_notifications_table.sql
```
### Step 2: Verify Table Created
```sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'notifications';
```
## 🚀 How It Works
### 1. When an Event Occurs (e.g., Request Assigned):
**Backend:**
```typescript
await notificationService.sendToUsers(
[approverId],
{
title: 'New request assigned',
body: 'Marketing Campaign Approval - REQ-2025-12345',
requestId: workflowId,
requestNumber: 'REQ-2025-12345',
url: `/request/REQ-2025-12345`,
type: 'assignment',
priority: 'HIGH',
actionRequired: true
}
);
```
This automatically:
- ✅ Saves notification to `notifications` table
- ✅ Emits `notification:new` socket event to user
- ✅ Sends browser push notification (if enabled)
### 2. Frontend Receives Notification:
**PageLayout** automatically:
- ✅ Receives socket event in real-time
- ✅ Updates notification count badge
- ✅ Adds to notification dropdown
- ✅ Shows blue dot for unread
- ✅ User clicks → marks as read → navigates to request
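On the client side, the real-time piece boils down to joining the personal room and listening for `notification:new`. A sketch using `socket.io-client` (component wiring and callback names are illustrative):
```typescript
import { io, Socket } from 'socket.io-client';

interface InAppNotification {
  title: string;
  body: string;
  requestNumber?: string;
  url?: string;
}

// Subscribe a logged-in user to their personal notification room
export function subscribeToNotifications(
  socket: Socket,
  userId: string,
  onNotification: (n: InAppNotification) => void
): void {
  // The backend's emitToUser() targets the room joined here
  socket.emit('join:user', userId);
  socket.on('notification:new', onNotification);
}

// Usage (illustrative):
// const socket = io('http://localhost:5000');
// subscribeToNotifications(socket, user.userId, (n) => refreshBellBadge(n));
```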
## 📌 Notification Events (Major)
Based on your requirement, here are the key events that trigger notifications:
| Event | Type | Sent To | Priority |
|-------|------|---------|----------|
| Request Created | `created` | Initiator | MEDIUM |
| Request Assigned | `assignment` | Approver | HIGH |
| Approval Given | `approved` | Initiator | HIGH |
| Request Rejected | `rejected` | Initiator | URGENT |
| TAT Alert (50%) | `tat_alert` | Approver | MEDIUM |
| TAT Alert (75%) | `tat_alert` | Approver | HIGH |
| TAT Breached | `tat_breach` | Approver + Initiator | URGENT |
| Work Note Mention | `mention` | Tagged Users | MEDIUM |
| New Comment | `comment` | Participants | LOW |
## 🔧 Configuration
### Backend (.env):
```env
# Already configured - no changes needed
VAPID_PUBLIC_KEY=your_vapid_public_key
VAPID_PRIVATE_KEY=your_vapid_private_key
```
### Frontend (.env):
```env
# Already configured
VITE_API_BASE_URL=http://localhost:5000/api/v1
```
## ✅ Testing
### 1. Test Basic Notification:
```bash
# Create a workflow and assign to an approver
# Check approver's notification bell - should show count
```
### 2. Test Real-Time Delivery:
```bash
# Have 2 users logged in (different browsers)
# User A creates request, assigns to User B
# User B should see notification appear immediately (no refresh needed)
```
### 3. Test TAT Notifications:
```bash
# Create request with 1-hour TAT
# Wait for threshold notifications (50%, 75%, 100%)
# Approver should receive in-app notifications
```
### 4. Test Work Note Mentions:
```bash
# Add work note with @mention
# Tagged user should receive notification
```
## 🎨 UI Features
- **Unread Badge**: Shows count (1-9, or "9+" for 10+)
- **Blue Dot**: Indicates unread notifications
- **Blue Background**: Highlights unread items
- **Time Ago**: "5 minutes ago", "2 hours ago", etc.
- **Click to Navigate**: Automatically opens the related request
- **Mark All Read**: Single click to clear all unread
- **Scrollable**: Shows latest 10, with "View all" link
## 📱 Fallback for Disabled Push Notifications
Even if user denies browser push notifications:
- ✅ In-app notifications ALWAYS work
- ✅ Notifications saved to database
- ✅ Real-time delivery via socket.io
- ✅ No permission required
- ✅ Works on all browsers
## 🔍 Debug Endpoints
```bash
# Get notifications for current user
GET /api/v1/notifications?page=1&limit=10
# Get only unread
GET /api/v1/notifications?unreadOnly=true
# Get unread count
GET /api/v1/notifications/unread-count
```
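A hedged example of calling these endpoints from a script or the browser console, assuming a valid JWT in `token` (the exact response shape is an assumption):

```typescript
// Assumes a valid JWT in `token`; endpoint paths are taken from the list above.
const baseUrl = 'http://localhost:5000/api/v1';

async function getUnreadCount(token: string): Promise<number> {
  const res = await fetch(`${baseUrl}/notifications/unread-count`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const body = await res.json();
  // Response shape is an assumption - adjust to the actual API contract.
  return body.data?.count ?? body.count ?? 0;
}
```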
## 🎉 Benefits
1. **No Browser Permission Needed** - Always works, unlike push notifications
2. **Real-Time Updates** - Instant delivery via socket.io
3. **Persistent** - Saved in database, available after login
4. **Actionable** - Click to navigate to related request
5. **User-Friendly** - Clean UI integrated into header
6. **Complete Tracking** - Know what was sent via which channel
## 🔥 Next Steps (Optional)
1. **Email Integration**: Send email for URGENT priority notifications
2. **SMS Integration**: Critical alerts via SMS
3. **Notification Preferences**: Let users choose which events to receive
4. **Notification History Page**: Full-page view with filters
5. **Sound Alerts**: Play sound when new notification arrives
6. **Desktop Notifications**: Browser native notifications (if permitted)
---
**✅ In-App Notifications are now fully operational!**
Users will receive instant notifications for all major workflow events, even without browser push permissions enabled.
@ -1,726 +0,0 @@
# Loki + Grafana Deployment Guide for RE Workflow
## Overview
This guide covers deploying **Loki with Grafana** for log aggregation in the RE Workflow application.
```
┌─────────────────────────┐ ┌─────────────────────────┐
│ RE Workflow Backend │──────────▶│ Loki │
│ (Node.js + Winston) │ HTTP │ (Log Storage) │
└─────────────────────────┘ :3100 └───────────┬─────────────┘
┌───────────▼─────────────┐
│ Grafana │
│ monitoring.cloudtopiaa │
│ (Your existing!) │
└─────────────────────────┘
```
**Why Loki + Grafana?**
- ✅ Lightweight - designed for logs (unlike ELK)
- ✅ Uses your existing Grafana instance
- ✅ Same query language as Prometheus (LogQL)
- ✅ Cost-effective - indexes labels, not content
---
# Part 1: Windows Development Setup
## Prerequisites (Windows)
- Docker Desktop for Windows installed
- WSL2 enabled (recommended)
- 4GB+ RAM available for Docker
---
## Step 1: Install Docker Desktop
1. Download from: https://www.docker.com/products/docker-desktop/
2. Run installer
3. Enable WSL2 integration when prompted
4. Restart computer
---
## Step 2: Create Project Directory
Open PowerShell as Administrator:
```powershell
# Create directory
mkdir C:\loki
cd C:\loki
```
---
## Step 3: Create Loki Configuration (Windows)
Create file `C:\loki\loki-config.yaml`:
```powershell
# Using PowerShell
notepad C:\loki\loki-config.yaml
```
**Paste this configuration:**
```yaml
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
common:
instance_addr: 127.0.0.1
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
query_range:
results_cache:
cache:
embedded_cache:
enabled: true
max_size_mb: 100
schema_config:
configs:
- from: 2020-10-24
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
limits_config:
retention_period: 7d
ingestion_rate_mb: 10
ingestion_burst_size_mb: 20
```
---
## Step 4: Create Docker Compose (Windows)
Create file `C:\loki\docker-compose.yml`:
```powershell
notepad C:\loki\docker-compose.yml
```
**Paste this configuration:**
```yaml
version: '3.8'
services:
loki:
image: grafana/loki:2.9.2
container_name: loki
ports:
- "3100:3100"
volumes:
- ./loki-config.yaml:/etc/loki/local-config.yaml
- loki-data:/loki
command: -config.file=/etc/loki/local-config.yaml
restart: unless-stopped
grafana:
image: grafana/grafana:latest
container_name: grafana
ports:
- "3001:3000" # Using 3001 since 3000 is used by React frontend
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin123
volumes:
- grafana-data:/var/lib/grafana
depends_on:
- loki
restart: unless-stopped
volumes:
loki-data:
grafana-data:
```
---
## Step 5: Start Services (Windows)
```powershell
cd C:\loki
docker-compose up -d
```
**Wait 30 seconds for services to initialize.**
---
## Step 6: Verify Services (Windows)
```powershell
# Check containers are running
docker ps
# Test Loki health
Invoke-WebRequest -Uri http://localhost:3100/ready
# Or using curl (if installed)
curl http://localhost:3100/ready
```
---
## Step 7: Configure Grafana (Windows Dev)
1. Open browser: `http://localhost:3001` *(port 3001 to avoid conflict with React on 3000)*
2. Login: `admin` / `admin123`
3. Go to: **Connections → Data Sources → Add data source**
4. Select: **Loki**
5. Configure:
- URL: `http://loki:3100`
6. Click: **Save & Test**
---
## Step 8: Configure Backend .env (Windows Dev)
```env
# Development - Local Loki
LOKI_HOST=http://localhost:3100
```
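The backend pushes logs to Loki over HTTP via Winston (see the architecture diagram above). A minimal transport sketch, assuming the `winston-loki` package and the `app="re-workflow"` label used by the Grafana queries later in this guide; the real logger configuration may differ:

```typescript
// Sketch only - assumes the `winston-loki` transport; options shown are its common ones.
import winston from 'winston';
import LokiTransport from 'winston-loki';

export const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new LokiTransport({
      host: process.env.LOKI_HOST || 'http://localhost:3100', // matches LOKI_HOST above
      labels: { app: 're-workflow' },                          // label used by the LogQL queries
      json: true,
      batching: true,
      interval: 5, // seconds between batched pushes
      onConnectionError: (err: unknown) => console.error('Loki connection error', err),
    }),
  ],
});
```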
---
## Windows Commands Reference
| Command | Purpose |
|---------|---------|
| `docker-compose up -d` | Start Loki + Grafana |
| `docker-compose down` | Stop services |
| `docker-compose logs -f loki` | View Loki logs |
| `docker-compose restart` | Restart services |
| `docker ps` | Check running containers |
---
# Part 2: Linux Production Setup (DevOps)
## Prerequisites (Linux)
- Ubuntu 20.04+ / CentOS 7+ / RHEL 8+
- Docker & Docker Compose installed
- 2GB+ RAM (4GB recommended)
- 10GB+ disk space
- Grafana running at `http://monitoring.cloudtopiaa.com/`
---
## Step 1: Install Docker (if not installed)
**Ubuntu/Debian:**
```bash
# Update packages
sudo apt update
# Install Docker
sudo apt install -y docker.io docker-compose
# Start Docker
sudo systemctl start docker
sudo systemctl enable docker
# Add user to docker group
sudo usermod -aG docker $USER
```
**CentOS/RHEL:**
```bash
# Install Docker
sudo yum install -y docker docker-compose
# Start Docker
sudo systemctl start docker
sudo systemctl enable docker
```
---
## Step 2: Create Loki Directory
```bash
sudo mkdir -p /opt/loki
cd /opt/loki
```
---
## Step 3: Create Loki Configuration (Linux)
```bash
sudo nano /opt/loki/loki-config.yaml
```
**Paste this configuration:**
```yaml
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
common:
instance_addr: 127.0.0.1
path_prefix: /tmp/loki
storage:
filesystem:
chunks_directory: /tmp/loki/chunks
rules_directory: /tmp/loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
query_range:
results_cache:
cache:
embedded_cache:
enabled: true
max_size_mb: 100
schema_config:
configs:
- from: 2020-10-24
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
ruler:
alertmanager_url: http://localhost:9093
limits_config:
retention_period: 30d
ingestion_rate_mb: 10
ingestion_burst_size_mb: 20
# Storage retention
compactor:
working_directory: /tmp/loki/compactor
retention_enabled: true
retention_delete_delay: 2h
delete_request_store: filesystem
```
---
## Step 4: Create Docker Compose (Linux Production)
```bash
sudo nano /opt/loki/docker-compose.yml
```
**Paste this configuration (Loki only - uses existing Grafana):**
```yaml
version: '3.8'
services:
loki:
image: grafana/loki:2.9.2
container_name: loki
ports:
- "3100:3100"
volumes:
- ./loki-config.yaml:/etc/loki/local-config.yaml
- loki-data:/tmp/loki
command: -config.file=/etc/loki/local-config.yaml
networks:
- monitoring
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1"]
interval: 30s
timeout: 10s
retries: 5
networks:
monitoring:
driver: bridge
volumes:
loki-data:
driver: local
```
---
## Step 5: Start Loki (Linux)
```bash
cd /opt/loki
sudo docker-compose up -d
```
**Wait 30 seconds for Loki to initialize.**
---
## Step 6: Verify Loki (Linux)
```bash
# Check container
sudo docker ps | grep loki
# Test Loki health
curl http://localhost:3100/ready
# Test Loki is accepting logs
curl http://localhost:3100/loki/api/v1/labels
```
**Expected response:**
```json
{"status":"success","data":[]}
```
---
## Step 7: Open Firewall Port (Linux)
**Ubuntu/Debian:**
```bash
sudo ufw allow 3100/tcp
sudo ufw reload
```
**CentOS/RHEL:**
```bash
sudo firewall-cmd --permanent --add-port=3100/tcp
sudo firewall-cmd --reload
```
---
## Step 8: Add Loki to Existing Grafana
1. **Open Grafana:** `http://monitoring.cloudtopiaa.com/`
2. **Login** with admin credentials
3. **Go to:** Connections → Data Sources → Add data source
4. **Select:** Loki
5. **Configure:**
| Field | Value |
|-------|-------|
| Name | `RE-Workflow-Logs` |
| URL | `http://<loki-server-ip>:3100` |
| Timeout | `60` |
6. **Click:** Save & Test
7. **Should see:** ✅ "Data source successfully connected"
---
## Step 9: Configure Backend .env (Production)
```env
# Production - Remote Loki
LOKI_HOST=http://<loki-server-ip>:3100
# LOKI_USER= # Optional: if basic auth enabled
# LOKI_PASSWORD= # Optional: if basic auth enabled
```
---
## Linux Commands Reference
| Command | Purpose |
|---------|---------|
| `sudo docker-compose up -d` | Start Loki |
| `sudo docker-compose down` | Stop Loki |
| `sudo docker-compose logs -f` | View logs |
| `sudo docker-compose restart` | Restart |
| `sudo docker ps` | Check containers |
---
## Step 10: Enable Basic Auth (Optional - Production)
For added security, enable basic auth:
```bash
# Install apache2-utils for htpasswd
sudo apt install apache2-utils
# Create password file
sudo htpasswd -c /opt/loki/.htpasswd lokiuser
# Update docker-compose.yml to use nginx reverse proxy with auth
```
---
# Part 3: Grafana Dashboard Setup
## Create Dashboard
1. Go to: `http://monitoring.cloudtopiaa.com/dashboards` (or `http://localhost:3001` for dev)
2. Click: **New → New Dashboard**
3. Add panels as described below
---
### Panel 1: Error Count (Stat)
**Query (LogQL):**
```
count_over_time({app="re-workflow"} |= "error" [24h])
```
- Visualization: **Stat**
- Title: "Errors (24h)"
---
### Panel 2: Error Timeline (Time Series)
**Query (LogQL):**
```
sum by (level) (count_over_time({app="re-workflow"} | json | level=~"error|warn" [5m]))
```
- Visualization: **Time Series**
- Title: "Errors Over Time"
---
### Panel 3: Recent Errors (Logs)
**Query (LogQL):**
```
{app="re-workflow"} | json | level="error"
```
- Visualization: **Logs**
- Title: "Recent Errors"
---
### Panel 4: TAT Breaches (Stat)
**Query (LogQL):**
```
count_over_time({app="re-workflow"} | json | tatEvent="breached" [24h])
```
- Visualization: **Stat**
- Title: "TAT Breaches"
- Color: Red
---
### Panel 5: Workflow Events (Pie)
**Query (LogQL):**
```
sum by (workflowEvent) (count_over_time({app="re-workflow"} | json | workflowEvent!="" [24h]))
```
- Visualization: **Pie Chart**
- Title: "Workflow Events"
---
### Panel 6: Auth Failures (Table)
**Query (LogQL):**
```
{app="re-workflow"} | json | authEvent="auth_failure"
```
- Visualization: **Table**
- Title: "Authentication Failures"
---
## Useful LogQL Queries
| Purpose | Query |
|---------|-------|
| All errors | `{app="re-workflow"} \| json \| level="error"` |
| Specific request | `{app="re-workflow"} \| json \| requestId="REQ-2024-001"` |
| User activity | `{app="re-workflow"} \| json \| userId="user-123"` |
| TAT breaches | `{app="re-workflow"} \| json \| tatEvent="breached"` |
| Auth failures | `{app="re-workflow"} \| json \| authEvent="auth_failure"` |
| Workflow created | `{app="re-workflow"} \| json \| workflowEvent="created"` |
| API errors (5xx) | `{app="re-workflow"} \| json \| statusCode>=500` |
| Slow requests | `{app="re-workflow"} \| json \| duration>3000` |
| Error rate | `sum(rate({app="re-workflow"} \| json \| level="error"[5m]))` |
| By department | `{app="re-workflow"} \| json \| department="Engineering"` |
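The same queries can also be run programmatically against Loki's `query_range` HTTP API; a minimal sketch using native `fetch` (the helper name is illustrative):

```typescript
// Runs a LogQL query against Loki's documented /loki/api/v1/query_range endpoint.
const LOKI_HOST = process.env.LOKI_HOST || 'http://localhost:3100';

async function queryLoki(logql: string, rangeMinutes = 60) {
  const end = new Date();
  const start = new Date(end.getTime() - rangeMinutes * 60 * 1000);
  const params = new URLSearchParams({
    query: logql,
    start: start.toISOString(), // Loki accepts RFC3339 timestamps
    end: end.toISOString(),
    limit: '100',
  });
  const res = await fetch(`${LOKI_HOST}/loki/api/v1/query_range?${params}`);
  if (!res.ok) throw new Error(`Loki query failed: ${res.status}`);
  return res.json();
}

// Example: recent errors (same query as the table above)
queryLoki('{app="re-workflow"} | json | level="error"').then((r) => console.log(r.data.result));
```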
---
# Part 4: Alerting Setup
## Alert 1: High Error Rate
1. Go to: **Alerting → Alert Rules → New Alert Rule**
2. Configure:
- Name: `RE Workflow - High Error Rate`
- Data source: `RE-Workflow-Logs`
- Query: `count_over_time({app="re-workflow"} | json | level="error" [5m])`
- Condition: IS ABOVE 10
3. Add notification (Slack, Email)
## Alert 2: TAT Breach
1. Create new alert rule
2. Configure:
- Name: `RE Workflow - TAT Breach`
- Query: `count_over_time({app="re-workflow"} | json | tatEvent="breached" [15m])`
- Condition: IS ABOVE 0
3. Add notification
## Alert 3: Auth Attack Detection
1. Create new alert rule
2. Configure:
- Name: `RE Workflow - Auth Attack`
- Query: `count_over_time({app="re-workflow"} | json | authEvent="auth_failure" [5m])`
- Condition: IS ABOVE 20
3. Add notification to Security team
---
# Part 5: Troubleshooting
## Windows Issues
### Docker Desktop not starting
```powershell
# Restart Docker Desktop service
Restart-Service docker
# Or restart Docker Desktop from system tray
```
### Port 3100 already in use
```powershell
# Find process using port
netstat -ano | findstr :3100
# Kill process
taskkill /PID <pid> /F
```
### WSL2 issues
```powershell
# Update WSL
wsl --update
# Restart WSL
wsl --shutdown
```
---
## Linux Issues
### Loki won't start
```bash
# Check logs
sudo docker logs loki
# Common fix - permissions
sudo chown -R 10001:10001 /opt/loki
```
### Grafana can't connect to Loki
```bash
# Verify Loki is healthy
curl http://localhost:3100/ready
# Check network from Grafana server
curl http://loki-server:3100/ready
# Restart Loki
sudo docker-compose restart
```
### Logs not appearing in Grafana
1. Check application env has correct `LOKI_HOST`
2. Verify network connectivity: `curl http://loki:3100/ready`
3. Check labels: `curl http://localhost:3100/loki/api/v1/labels`
4. Wait for application to send first logs
### High memory usage
```bash
# Reduce retention period in loki-config.yaml
limits_config:
retention_period: 7d # Reduce from 30d
```
---
# Quick Reference
## Environment Comparison
| Setting | Windows Dev | Linux Production |
|---------|-------------|------------------|
| LOKI_HOST | `http://localhost:3100` | `http://<server-ip>:3100` |
| Grafana URL | `http://localhost:3001` | `http://monitoring.cloudtopiaa.com` |
| Config Path | `C:\loki\` | `/opt/loki/` |
| Retention | 7 days | 30 days |
## Port Reference
| Service | Port | URL |
|---------|------|-----|
| Loki | 3100 | `http://server:3100` |
| Grafana (Dev) | 3001 | `http://localhost:3001` |
| Grafana (Prod) | 80/443 | `http://monitoring.cloudtopiaa.com/` |
| React Frontend | 3000 | `http://localhost:3000` |
---
# Verification Checklist
## Windows Development
- [ ] Docker Desktop running
- [ ] `docker ps` shows loki and grafana containers
- [ ] `http://localhost:3100/ready` returns "ready"
- [ ] `http://localhost:3001` shows Grafana login
- [ ] Loki data source connected in Grafana
- [ ] Backend `.env` has `LOKI_HOST=http://localhost:3100`
## Linux Production
- [ ] Loki container running (`docker ps`)
- [ ] `curl localhost:3100/ready` returns "ready"
- [ ] Firewall port 3100 open
- [ ] Grafana connected to Loki
- [ ] Backend `.env` has correct `LOKI_HOST`
- [ ] Logs appearing in Grafana Explore
- [ ] Dashboard created
- [ ] Alerts configured
---
# Contact
For issues with this setup:
- Backend logs: Check Grafana dashboard
- Infrastructure: Contact DevOps team
@ -1,164 +0,0 @@
# Migration and Setup Summary
## ✅ Current Status
### Tables Created by Migrations
All **6 new dealer claim tables** are included in the migration system:
1. ✅ `dealer_claim_details` - Main claim information
2. ✅ `dealer_proposal_details` - Step 1: Dealer proposal
3. ✅ `dealer_completion_details` - Step 5: Completion documents
4. ✅ `dealer_proposal_cost_items` - Cost breakdown items
5. ✅ `internal_orders` ⭐ - IO details with dedicated fields
6. ✅ `claim_budget_tracking` ⭐ - Comprehensive budget tracking
## Migration Commands
### 1. **`npm run migrate`** ✅
**Status:** ✅ **Fully configured**
This command runs `src/scripts/migrate.ts` which includes **ALL** migrations including:
- ✅ All dealer claim tables (m25-m28)
- ✅ New tables: `internal_orders` (m27) and `claim_budget_tracking` (m28)
**Usage:**
```bash
npm run migrate
```
**What it does:**
- Checks which migrations have already run (via `migrations` table)
- Runs only pending migrations
- Marks them as executed
- Creates all new tables automatically
---
### 2. **`npm run dev`** ✅
**Status:** ✅ **Now fixed and configured**
This command runs:
```bash
npm run setup && nodemon --exec ts-node ...
```
Which calls `npm run setup` → `src/scripts/auto-setup.ts`
**What `auto-setup.ts` does:**
1. ✅ Checks if database exists, creates if missing
2. ✅ Installs PostgreSQL extensions (uuid-ossp)
3. ✅ **Runs all pending migrations** (including dealer claim tables)
4. ✅ Tests database connection
**Fixed:** ✅ Now includes all dealer claim migrations (m29-m35)
**Usage:**
```bash
npm run dev
```
This will automatically:
- Create database if needed
- Run all migrations (including new tables)
- Start the development server
---
### 3. **`npm run setup`** ✅
**Status:** ✅ **Now fixed and configured**
Same as what `npm run dev` calls - runs `auto-setup.ts`
**Usage:**
```bash
npm run setup
```
---
## Migration Files Included
### In `migrate.ts` (for `npm run migrate`):
- ✅ `20251210-add-workflow-type-support` (m22)
- ✅ `20251210-enhance-workflow-templates` (m23)
- ✅ `20251210-add-template-id-foreign-key` (m24)
- ✅ `20251210-create-dealer-claim-tables` (m25) - Creates 3 tables
- ✅ `20251210-create-proposal-cost-items-table` (m26)
- ✅ `20251211-create-internal-orders-table` (m27) ⭐ NEW
- ✅ `20251211-create-claim-budget-tracking-table` (m28) ⭐ NEW
### In `auto-setup.ts` (for `npm run dev` / `npm run setup`):
- ✅ All migrations from `migrate.ts` are now included (m29-m35)
---
## What Gets Created
When you run either `npm run migrate` or `npm run dev`, these tables will be created:
### Dealer Claim Tables (from `20251210-create-dealer-claim-tables.ts`):
1. `dealer_claim_details`
2. `dealer_proposal_details`
3. `dealer_completion_details`
### Additional Tables:
4. `dealer_proposal_cost_items` (from `20251210-create-proposal-cost-items-table.ts`)
5. `internal_orders` ⭐ (from `20251211-create-internal-orders-table.ts`)
6. `claim_budget_tracking` ⭐ (from `20251211-create-claim-budget-tracking-table.ts`)
---
## Verification
After running migrations, verify tables exist:
```sql
-- Check if new tables exist
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name IN (
'dealer_claim_details',
'dealer_proposal_details',
'dealer_completion_details',
'dealer_proposal_cost_items',
'internal_orders',
'claim_budget_tracking'
)
ORDER BY table_name;
```
Should return 6 rows.
---
## Summary
| Command | Runs Migrations? | Includes New Tables? | Status |
|---------|------------------|---------------------|--------|
| `npm run migrate` | ✅ Yes | ✅ Yes | ✅ Working |
| `npm run dev` | ✅ Yes | ✅ Yes | ✅ Fixed |
| `npm run setup` | ✅ Yes | ✅ Yes | ✅ Fixed |
**All commands now create the new tables automatically!** 🎉
---
## Next Steps
1. **Run migrations:**
```bash
npm run migrate
```
OR
```bash
npm run dev # This will also run migrations via setup
```
2. **Verify tables created:**
Check the database to confirm all 6 tables exist.
3. **Start using:**
The tables are ready for dealer claim management!
@ -1,216 +0,0 @@
# New Tables Created for Dealer Claim Management
## Overview
This document lists all the new database tables created specifically for the Dealer Claim Management system.
## Tables Created
### 1. **`dealer_claim_details`**
**Migration:** `20251210-create-dealer-claim-tables.ts`
**Purpose:** Main table storing claim-specific information
**Key Fields:**
- `claim_id` (PK)
- `request_id` (FK to `workflow_requests`, unique)
- `activity_name`, `activity_type`
- `dealer_code`, `dealer_name`, `dealer_email`, `dealer_phone`, `dealer_address`
- `activity_date`, `location`
- `period_start_date`, `period_end_date`
- `estimated_budget`, `closed_expenses`
- `io_number`, `io_available_balance`, `io_blocked_amount`, `io_remaining_balance` (legacy - now in `internal_orders`)
- `sap_document_number`, `dms_number`
- `e_invoice_number`, `e_invoice_date`
- `credit_note_number`, `credit_note_date`, `credit_note_amount`
**Created:** December 10, 2025
---
### 2. **`dealer_proposal_details`**
**Migration:** `20251210-create-dealer-claim-tables.ts`
**Purpose:** Stores dealer proposal submission data (Step 1 of workflow)
**Key Fields:**
- `proposal_id` (PK)
- `request_id` (FK to `workflow_requests`, unique)
- `proposal_document_path`, `proposal_document_url`
- `cost_breakup` (JSONB - legacy, now use `dealer_proposal_cost_items`)
- `total_estimated_budget`
- `timeline_mode` ('date' | 'days')
- `expected_completion_date`, `expected_completion_days`
- `dealer_comments`
- `submitted_at`
**Created:** December 10, 2025
---
### 3. **`dealer_completion_details`**
**Migration:** `20251210-create-dealer-claim-tables.ts`
**Purpose:** Stores dealer completion documents and expenses (Step 5 of workflow)
**Key Fields:**
- `completion_id` (PK)
- `request_id` (FK to `workflow_requests`, unique)
- `activity_completion_date`
- `number_of_participants`
- `closed_expenses` (JSONB array)
- `total_closed_expenses`
- `completion_documents` (JSONB array)
- `activity_photos` (JSONB array)
- `submitted_at`
**Created:** December 10, 2025
---
### 4. **`dealer_proposal_cost_items`**
**Migration:** `20251210-create-proposal-cost-items-table.ts`
**Purpose:** Separate table for cost breakdown items (replaces JSONB in `dealer_proposal_details`)
**Key Fields:**
- `cost_item_id` (PK)
- `proposal_id` (FK to `dealer_proposal_details`)
- `request_id` (FK to `workflow_requests` - denormalized for easier querying)
- `item_description`
- `amount` (DECIMAL 15,2)
- `item_order` (for maintaining order in cost breakdown)
**Benefits:**
- Better querying and filtering
- Easier to update individual cost items
- Better for analytics and reporting
- Maintains referential integrity
**Created:** December 10, 2025
---
### 5. **`internal_orders`** ⭐ NEW
**Migration:** `20251211-create-internal-orders-table.ts`
**Purpose:** Dedicated table for IO (Internal Order) details with proper structure
**Key Fields:**
- `io_id` (PK)
- `request_id` (FK to `workflow_requests`, unique - one IO per request)
- `io_number` (STRING 50)
- `io_remark` (TEXT) ⭐ - Dedicated field for IO remarks (not in comments)
- `io_available_balance` (DECIMAL 15,2)
- `io_blocked_amount` (DECIMAL 15,2)
- `io_remaining_balance` (DECIMAL 15,2)
- `organized_by` (FK to `users`) ⭐ - Tracks who organized the IO
- `organized_at` (DATE) ⭐ - When IO was organized
- `sap_document_number` (STRING 100)
- `status` (ENUM: 'PENDING', 'BLOCKED', 'RELEASED', 'CANCELLED')
**Why This Table:**
- Previously IO details were stored in `dealer_claim_details` table
- IO remark was being parsed from comments
- Now dedicated table with proper fields and relationships
- Better data integrity and querying
**Created:** December 11, 2025
---
### 6. **`claim_budget_tracking`** ⭐ NEW
**Migration:** `20251211-create-claim-budget-tracking-table.ts`
**Purpose:** Comprehensive budget tracking throughout the claim lifecycle
**Key Fields:**
- `budget_id` (PK)
- `request_id` (FK to `workflow_requests`, unique - one budget record per request)
**Budget Values:**
- `initial_estimated_budget` - From claim creation
- `proposal_estimated_budget` - From Step 1 (Dealer Proposal)
- `approved_budget` - From Step 2 (Requestor Evaluation)
- `io_blocked_amount` - From Step 3 (Department Lead - IO blocking)
- `closed_expenses` - From Step 5 (Dealer Completion)
- `final_claim_amount` - From Step 6 (Requestor Claim Approval)
- `credit_note_amount` - From Step 8 (Finance)
**Tracking Fields:**
- `proposal_submitted_at`
- `approved_at`, `approved_by` (FK to `users`)
- `io_blocked_at`
- `closed_expenses_submitted_at`
- `final_claim_amount_approved_at`, `final_claim_amount_approved_by` (FK to `users`)
- `credit_note_issued_at`
**Status & Analysis:**
- `budget_status` (ENUM: 'DRAFT', 'PROPOSED', 'APPROVED', 'BLOCKED', 'CLOSED', 'SETTLED')
- `currency` (STRING 3, default: 'INR')
- `variance_amount` - Difference between approved and closed expenses
- `variance_percentage` - Variance as percentage
**Audit Fields:**
- `last_modified_by` (FK to `users`)
- `last_modified_at`
- `modification_reason` (TEXT)
**Why This Table:**
- Previously budget data was scattered across multiple tables
- No single source of truth for budget lifecycle
- No audit trail for budget modifications
- Now comprehensive tracking with status and variance calculation
**Created:** December 11, 2025
---
## Summary
### Total New Tables: **6**
1. ✅ `dealer_claim_details` - Main claim information
2. ✅ `dealer_proposal_details` - Step 1: Dealer proposal
3. ✅ `dealer_completion_details` - Step 5: Completion documents
4. ✅ `dealer_proposal_cost_items` - Cost breakdown items
5. ✅ `internal_orders` ⭐ - IO details with dedicated fields
6. ✅ `claim_budget_tracking` ⭐ - Comprehensive budget tracking
### Most Recent Additions (December 11, 2025):
- **`internal_orders`** - Proper IO data structure with `ioRemark` field
- **`claim_budget_tracking`** - Complete budget lifecycle tracking
## Migration Order
Run migrations in this order:
```bash
npm run migrate
```
The migrations will run in chronological order:
1. `20251210-create-dealer-claim-tables.ts` (creates tables 1-3)
2. `20251210-create-proposal-cost-items-table.ts` (creates table 4)
3. `20251211-create-internal-orders-table.ts` (creates table 5)
4. `20251211-create-claim-budget-tracking-table.ts` (creates table 6)
## Relationships
```
workflow_requests (1)
├── dealer_claim_details (1:1)
├── dealer_proposal_details (1:1)
│ └── dealer_proposal_cost_items (1:many)
├── dealer_completion_details (1:1)
├── internal_orders (1:1) ⭐ NEW
└── claim_budget_tracking (1:1) ⭐ NEW
```
## Notes
- All tables have `request_id` foreign key to `workflow_requests`
- Most tables have unique constraint on `request_id` (one record per request)
- `dealer_proposal_cost_items` can have multiple items per proposal
- All tables use UUID primary keys
- All tables have `created_at` and `updated_at` timestamps
@ -1,159 +0,0 @@
# Notification Triggers Documentation
This document lists all notification triggers in the Royal Enfield Workflow Management System.
---
## Overview
The system sends push notifications to users based on various workflow events. Notifications are sent via the `notificationService.sendToUsers()` method and support web push notifications.
---
## 1. Workflow Service Notifications
**File:** `src/services/workflow.service.ts`
| # | Trigger Event | Recipient | Title | Body | Priority |
|---|---------------|-----------|-------|------|----------|
| 1 | New approver added to request | New Approver | "New Request Assignment" | "You have been added as an approver to request {requestNumber}: {title}" | DEFAULT |
| 2 | Approver skipped → Next approver notified | Next Approver | "Request Escalated" | "Previous approver was skipped. Request {requestNumber} is now awaiting your approval." | DEFAULT |
| 3 | New approver added at specific level | New Approver | "New Request Assignment" | "You have been added as Level {X} approver to request {requestNumber}: {title}" | DEFAULT |
| 4 | Spectator added to request | Spectator | "Added to Request" | "You have been added as a spectator to request {requestNumber}: {title}" | DEFAULT |
| 5 | Request submitted by initiator | Initiator | "Request Submitted Successfully" | "Your request '{title}' has been submitted and is now with the first approver." | MEDIUM |
| 6 | Request submitted → First approver assigned | First Approver | "New Request Assigned" | "{title}" | HIGH |
---
## 2. Approval Service Notifications
**File:** `src/services/approval.service.ts`
| # | Trigger Event | Recipient | Title | Body | Priority |
|---|---------------|-----------|-------|------|----------|
| 7 | Final approval complete (closure pending) | Initiator | "Request Approved - Closure Pending" | "Your request '{title}' has been fully approved. Please review and finalize the conclusion remark to close the request." | HIGH |
| 8 | Final approval complete (info only) | All Participants | "Request Approved" | "Request '{title}' has been fully approved. The initiator will finalize the conclusion remark to close the request." | MEDIUM |
| 9 | Level approved → Next level activated | Next Approver | "Action required: {requestNumber}" | "{title}" | DEFAULT |
| 10 | Request fully approved (auto-close) | Initiator | "Approved: {requestNumber}" | "{title}" | DEFAULT |
| 11 | Request rejected | Initiator + All Participants | "Rejected: {requestNumber}" | "{title}" | DEFAULT |
---
## 3. Pause Service Notifications
**File:** `src/services/pause.service.ts`
| # | Trigger Event | Recipient | Title | Body | Priority |
|---|---------------|-----------|-------|------|----------|
| 12 | Workflow paused by approver | Initiator | "Workflow Paused" | "Your request '{title}' has been paused by {userName}. Reason: {reason}. Will resume on {date}." | HIGH |
| 13 | Workflow paused (confirmation) | Approver (self) | "Workflow Paused Successfully" | "You have paused request '{title}'. It will automatically resume on {date}." | MEDIUM |
| 14 | Workflow resumed | Initiator | "Workflow Resumed" | "Your request '{title}' has been resumed {by user/automatically}." | HIGH |
| 15 | Workflow resumed | Approver | "Workflow Resumed" | "Request '{title}' has been resumed. Please continue with your review." | HIGH |
| 16 | Initiator requests pause cancellation | Approver who paused | "Pause Retrigger Request" | "{initiatorName} is requesting you to cancel the pause and resume work on request '{title}'." | HIGH |
---
## 4. TAT (Turnaround Time) Notifications
**File:** `src/queues/tatProcessor.ts`
| # | Trigger Event | Recipient | Title | Body | Priority |
|---|---------------|-----------|-------|------|----------|
| 17 | TAT 50% threshold reached | Approver | "TAT Reminder" | "50% TAT Alert: {message}" | MEDIUM |
| 18 | TAT 75% threshold reached | Approver | "TAT Reminder" | "75% TAT Alert: {message}" | HIGH |
| 19 | TAT 100% breach | Approver | "TAT Breach Alert" | "TAT Breached: {message}" | URGENT |
| 20 | TAT breach (initiator notification) | Initiator | "TAT Breach - Request Delayed" | "Your request {requestNumber}: '{title}' has exceeded its TAT. The approver has been notified." | HIGH |
---
## 5. Work Note Notifications
**File:** `src/services/worknote.service.ts`
| # | Trigger Event | Recipient | Title | Body | Priority |
|---|---------------|-----------|-------|------|----------|
| 21 | User mentioned in work note | Mentioned Users | "💬 Mentioned in Work Note" | "{userName} mentioned you in {requestNumber}: '{message preview}'" | DEFAULT |
---
## Summary by Recipient
### Initiator Receives:
- ✅ Request submitted confirmation
- ✅ Approval pending closure (action required)
- ✅ Request approved
- ✅ Request rejected
- ✅ Workflow paused
- ✅ Workflow resumed
- ✅ TAT breach notification
### Approver Receives:
- ✅ New request assignment
- ✅ Request escalation (previous approver skipped)
- ✅ Action required (next level activated)
- ✅ Workflow paused confirmation
- ✅ Workflow resumed
- ✅ Pause retrigger request from initiator
- ✅ TAT 50% reminder
- ✅ TAT 75% reminder
- ✅ TAT 100% breach alert
### Spectator/Participant Receives:
- ✅ Added to request
- ✅ Request approved (info only)
- ✅ Request rejected
### Mentioned Users Receive:
- ✅ Work note mention notification
---
## Notification Payload Structure
```typescript
interface NotificationPayload {
title: string; // Notification title
body: string; // Notification message
requestId?: string; // UUID of the request
requestNumber?: string; // Human-readable request number (e.g., REQ-2025-11-0001)
url?: string; // Deep link URL (e.g., /request/REQ-2025-11-0001)
type?: string; // Notification type for categorization
priority?: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
actionRequired?: boolean; // Whether user action is required
}
```
---
## Priority Levels
| Priority | Use Case |
|----------|----------|
| **URGENT** | TAT breach alerts |
| **HIGH** | Action required notifications, assignments, resumptions |
| **MEDIUM** | Informational updates, confirmations |
| **LOW/DEFAULT** | General updates, spectator notifications |
---
## Configuration
Notifications can be enabled/disabled via environment variables:
```env
ENABLE_EMAIL_NOTIFICATIONS=true
ENABLE_PUSH_NOTIFICATIONS=true
```
Web push requires VAPID keys:
```env
VAPID_PUBLIC_KEY=your_public_key
VAPID_PRIVATE_KEY=your_private_key
VAPID_SUBJECT=mailto:admin@example.com
```
---
*Last Updated: November 2025*
@ -1,167 +0,0 @@
# Okta Users API Integration
## Overview
The authentication service now uses the Okta Users API (`/api/v1/users/{userId}`) to fetch complete user profile information including manager, employeeID, designation, and other fields that may not be available in the standard OAuth2 userinfo endpoint.
## Configuration
Add the following environment variable to your `.env` file:
```env
OKTA_API_TOKEN=your_okta_api_token_here
```
This is the SSWS (Server-Side Web Service) token for Okta API access. You can generate this token from your Okta Admin Console under **Security > API > Tokens**.
## How It Works
### 1. Primary Method: Okta Users API
When a user logs in for the first time:
1. The system exchanges the authorization code for tokens (OAuth2 flow)
2. Gets the `oktaSub` (subject identifier) from the userinfo endpoint
3. **Attempts to fetch full user profile from Users API** using:
- First: Email address (as shown in curl example)
- Fallback: oktaSub (user ID) if email lookup fails
4. Extracts complete user information including:
- `profile.employeeID` - Employee ID
- `profile.manager` - Manager name
- `profile.title` - Job title/designation
- `profile.department` - Department
- `profile.mobilePhone` - Phone number
- `profile.firstName`, `profile.lastName`, `profile.displayName`
- And other profile fields
### 2. Fallback Method: OAuth2 Userinfo Endpoint
If the Users API:
- Is not configured (missing `OKTA_API_TOKEN`)
- Returns an error (4xx/5xx)
- Fails for any reason
The system automatically falls back to the standard OAuth2 userinfo endpoint (`/oauth2/default/v1/userinfo`) which provides basic user information.
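A minimal sketch of this lookup-with-fallback flow using native `fetch`; the function and environment-variable names are illustrative, not the actual `auth.service.ts` implementation:

```typescript
// Illustrative sketch of the Users API lookup with userinfo fallback described above.
const OKTA_DOMAIN = process.env.OKTA_DOMAIN!;       // e.g. your Okta org domain (assumed env var)
const OKTA_API_TOKEN = process.env.OKTA_API_TOKEN;  // SSWS token (optional)

async function fetchOktaUser(emailOrId: string, accessToken: string): Promise<any> {
  if (OKTA_API_TOKEN) {
    const res = await fetch(`https://${OKTA_DOMAIN}/api/v1/users/${encodeURIComponent(emailOrId)}`, {
      headers: { Authorization: `SSWS ${OKTA_API_TOKEN}`, Accept: 'application/json' },
    });
    if (res.ok) {
      return res.json(); // full user object with profile.manager, profile.employeeID, etc.
    }
    // 4xx/5xx from the Users API -> fall through to the userinfo endpoint
  }
  // Fallback: standard OAuth2 userinfo endpoint (basic claims only)
  const res = await fetch(`https://${OKTA_DOMAIN}/oauth2/default/v1/userinfo`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Okta userinfo failed: ${res.status}`);
  return res.json();
}
```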
## API Endpoint
```
GET https://{oktaDomain}/api/v1/users/{userId}
Authorization: SSWS {OKTA_API_TOKEN}
Accept: application/json
```
Where `{userId}` can be:
- Email address (e.g., `testuser10@eichergroup.com`)
- Okta user ID (e.g., `00u1e1japegDV2DkP0h8`)
## Response Structure
The Users API returns a complete user object:
```json
{
"id": "00u1e1japegDV2DkP0h8",
"status": "ACTIVE",
"profile": {
"firstName": "Sanjay",
"lastName": "Sahu",
"manager": "Ezhilan subramanian",
"mobilePhone": "8826740087",
"displayName": "Sanjay Sahu",
"employeeID": "E09994",
"title": "Supports Business Applications (SAP) portfolio",
"department": "Deputy Manager - Digital & IT",
"login": "sanjaysahu@Royalenfield.com",
"email": "sanjaysahu@royalenfield.com"
},
...
}
```
## Field Mapping
| Users API Field | Database Field | Notes |
|----------------|----------------|-------|
| `profile.employeeID` | `employeeId` | Employee ID from HR system |
| `profile.manager` | `manager` | Manager name |
| `profile.title` | `designation` | Job title/designation |
| `profile.department` | `department` | Department name |
| `profile.mobilePhone` | `phone` | Phone number |
| `profile.firstName` | `firstName` | First name |
| `profile.lastName` | `lastName` | Last name |
| `profile.displayName` | `displayName` | Display name |
| `profile.email` | `email` | Email address |
| `id` | `oktaSub` | Okta subject identifier |
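A hedged sketch of this mapping; the helper mirrors what `extractUserDataFromUsersAPI` is described as doing, but the body below is illustrative, not the actual implementation:

```typescript
// Maps an Okta Users API response onto the database fields listed in the table above.
interface OktaUsersApiResponse {
  id: string;
  profile: {
    firstName?: string;
    lastName?: string;
    displayName?: string;
    email?: string;
    employeeID?: string;
    manager?: string;
    title?: string;
    department?: string;
    mobilePhone?: string;
  };
}

function mapOktaUserToDbFields(user: OktaUsersApiResponse) {
  const p = user.profile;
  return {
    oktaSub: user.id,
    email: p.email,
    firstName: p.firstName,
    lastName: p.lastName,
    displayName: p.displayName,
    employeeId: p.employeeID, // Employee ID from HR system
    manager: p.manager,
    designation: p.title,     // Job title/designation
    department: p.department,
    phone: p.mobilePhone,
  };
}
```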
## Benefits
1. **Complete User Profile**: Gets all available user information including manager, employeeID, and other custom attributes
2. **Automatic Fallback**: If Users API is unavailable, gracefully falls back to userinfo endpoint
3. **No Breaking Changes**: Existing functionality continues to work even without API token
4. **Better Data Quality**: Reduces missing user information (manager, employeeID, etc.)
## Logging
The service logs:
- When Users API is used vs. userinfo fallback
- Which lookup method succeeded (email or oktaSub)
- Extracted fields (employeeId, manager, department, etc.)
- Any errors or warnings
Example log:
```
[AuthService] Fetching user from Okta Users API (using email)
[AuthService] Successfully fetched user from Okta Users API (using email)
[AuthService] Extracted user data from Okta Users API
- oktaSub: 00u1e1japegDV2DkP0h8
- email: testuser10@eichergroup.com
- employeeId: E09994
- hasManager: true
- hasDepartment: true
- hasDesignation: true
```
## Testing
### Test with curl
```bash
curl --location 'https://dev-830839.oktapreview.com/api/v1/users/testuser10@eichergroup.com' \
--header 'Authorization: SSWS YOUR_OKTA_API_TOKEN' \
--header 'Accept: application/json'
```
### Test in Application
1. Set `OKTA_API_TOKEN` in `.env`
2. Log in with a user
3. Check logs to see if Users API was used
4. Verify user record in database has complete information (manager, employeeID, etc.)
## Troubleshooting
### Users API Not Being Used
- Check if `OKTA_API_TOKEN` is set in `.env`
- Check logs for warnings about missing API token
- Verify API token has correct permissions in Okta
### Users API Returns 404
- User may not exist in Okta
- Email format may be incorrect
- Try using oktaSub (user ID) instead
### Missing Fields in Database
- Check if fields exist in Okta user profile
- Verify field mapping in `extractUserDataFromUsersAPI` method
- Check logs to see which fields were extracted
## Security Notes
- **API Token Security**: Store `OKTA_API_TOKEN` securely, never commit to version control
- **Token Permissions**: Ensure API token has read access to user profiles
- **Rate Limiting**: Be aware of Okta API rate limits when fetching user data
@ -1,92 +0,0 @@
# Request Summary Feature - Database Design
## Overview
This feature allows initiators to create and share comprehensive summaries of closed requests with other users.
## Database Schema
### 1. `request_summaries` Table
Stores the summary data for a closed request.
```sql
CREATE TABLE request_summaries (
summary_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
request_id UUID NOT NULL REFERENCES workflow_requests(request_id) ON DELETE CASCADE,
initiator_id UUID NOT NULL REFERENCES users(user_id) ON DELETE CASCADE,
title VARCHAR(500) NOT NULL, -- Request title
description TEXT, -- Request description
closing_remarks TEXT, -- Final conclusion remarks (from conclusion_remarks or manual)
is_ai_generated BOOLEAN DEFAULT false, -- Whether closing remarks are AI-generated
conclusion_id UUID REFERENCES conclusion_remarks(conclusion_id) ON DELETE SET NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT uk_request_summary UNIQUE (request_id)
);
CREATE INDEX idx_request_summaries_request_id ON request_summaries(request_id);
CREATE INDEX idx_request_summaries_initiator_id ON request_summaries(initiator_id);
CREATE INDEX idx_request_summaries_created_at ON request_summaries(created_at);
```
### 2. `shared_summaries` Table
Stores sharing relationships - who shared which summary with whom.
```sql
CREATE TABLE shared_summaries (
shared_summary_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
summary_id UUID NOT NULL REFERENCES request_summaries(summary_id) ON DELETE CASCADE,
shared_by UUID NOT NULL REFERENCES users(user_id) ON DELETE CASCADE, -- Who shared it
shared_with UUID NOT NULL REFERENCES users(user_id) ON DELETE CASCADE, -- Who can view it
shared_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
viewed_at TIMESTAMP, -- When the recipient viewed it
is_read BOOLEAN DEFAULT false, -- Whether recipient has viewed it
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT uk_shared_summary UNIQUE (summary_id, shared_with) -- Prevent duplicate shares
);
CREATE INDEX idx_shared_summaries_summary_id ON shared_summaries(summary_id);
CREATE INDEX idx_shared_summaries_shared_by ON shared_summaries(shared_by);
CREATE INDEX idx_shared_summaries_shared_with ON shared_summaries(shared_with);
CREATE INDEX idx_shared_summaries_shared_at ON shared_summaries(shared_at);
```
## Data Flow
1. **Request Closure**: When a request is closed, the initiator can create a summary
2. **Summary Creation**:
- Pulls data from `workflow_requests`, `approval_levels`, and `conclusion_remarks`
- Creates a record in `request_summaries`
3. **Sharing**:
- Initiator selects users to share with
- Creates records in `shared_summaries` for each recipient
4. **Viewing**:
- Users see shared summaries in "Shared Summary" menu
- When viewed, `viewed_at` and `is_read` are updated
## Summary Content Structure
The summary will contain:
- **Request Information**: Request number, title, description
- **Initiator Details**: Name, designation, status, timestamp, remarks
- **Approver Details** (for each level): Name, designation, status, timestamp, remarks
- **Closing Remarks**: Final conclusion (AI-generated or manual)
## API Endpoints
### Summary Management
- `POST /api/v1/summaries` - Create summary for a closed request
- `GET /api/v1/summaries/:summaryId` - Get summary details
- `POST /api/v1/summaries/:summaryId/share` - Share summary with users
- `DELETE /api/v1/summaries/:summaryId/share/:userId` - Unshare summary
### Shared Summaries (for recipients)
- `GET /api/v1/summaries/shared` - List summaries shared with current user
- `GET /api/v1/summaries/shared/:sharedSummaryId` - Get shared summary details
- `PATCH /api/v1/summaries/shared/:sharedSummaryId/view` - Mark as viewed
### My Summaries (for initiators)
- `GET /api/v1/summaries/my` - List summaries created by current user
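A hedged usage example of the share and view endpoints above; the token, IDs, and request-body shape are placeholders/assumptions:

```typescript
// Placeholder values - baseUrl, token, and IDs are illustrative; request body shapes are assumptions.
async function shareAndAcknowledge(token: string, summaryId: string, recipientIds: string[], sharedSummaryId: string) {
  const baseUrl = '/api/v1';

  // Initiator shares a summary with selected users
  await fetch(`${baseUrl}/summaries/${summaryId}/share`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ userIds: recipientIds }), // body shape is an assumption
  });

  // Recipient marks the shared summary as viewed (sets viewed_at / is_read)
  await fetch(`${baseUrl}/summaries/shared/${sharedSummaryId}/view`, {
    method: 'PATCH',
    headers: { Authorization: `Bearer ${token}` },
  });
}
```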
@ -1,214 +0,0 @@
# SAP Integration Testing Guide
## Postman Testing
### 1. Testing IO Validation API
**Endpoint:** `GET /api/v1/dealer-claims/:requestId/io`
**Method:** GET
**Headers:**
```
Authorization: Bearer <your_jwt_token>
Content-Type: application/json
```
**Note:** The CSRF error in Postman is likely coming from SAP, not our backend. Our backend doesn't have CSRF protection enabled.
### 2. Testing Budget Blocking API
**Endpoint:** `PUT /api/v1/dealer-claims/:requestId/io`
**Method:** PUT
**Headers:**
```
Authorization: Bearer <your_jwt_token>
Content-Type: application/json
```
**Body:**
```json
{
"ioNumber": "600060",
"ioRemark": "Test remark",
"availableBalance": 1000000,
"blockedAmount": 500,
"remainingBalance": 999500
}
```
### 3. Direct SAP API Testing in Postman
If you want to test SAP API directly (bypassing our backend):
#### IO Validation
- **URL:** `https://RENOIHND01.Eichergroup.com:1443/sap/opu/odata/sap/ZFI_BUDGET_CHECK_API_SRV/GetSenderDataSet?$filter=IONumber eq '600060'&$select=Sender,ResponseDate,GetIODetailsSet01&$expand=GetIODetailsSet01&$format=json`
- **Method:** GET
- **Authentication:** Basic Auth
- Username: Your SAP username
- Password: Your SAP password
- **Headers:**
- `Accept: application/json`
- `Content-Type: application/json`
#### Budget Blocking
- **URL:** `https://RENOIHND01.Eichergroup.com:1443/sap/opu/odata/sap/ZFI_BUDGET_BLOCK_API_SRV/RequesterInputSet`
- **Method:** POST
- **Authentication:** Basic Auth
- Username: Your SAP username
- Password: Your SAP password
- **Headers:**
- `Accept: application/json`
- `Content-Type: application/json`
- **Body:**
```json
{
"Request_Date_Time": "2025-08-29T10:51:00",
"Requester": "REFMS",
"lt_io_input": [
{
"IONumber": "600060",
"Amount": "500"
}
],
"lt_io_output": [],
"ls_response": []
}
```
## Common Errors and Solutions
### 1. CSRF Token Validation Error
**Error:** "CSRF token validation error"
**Possible Causes:**
- SAP API requires CSRF tokens for POST/PUT requests
- SAP might be checking for specific headers
**Solutions:**
1. **Get CSRF Token First:**
- Make a GET request to the SAP service root to get CSRF token
- Example: `GET https://RENOIHND01.Eichergroup.com:1443/sap/opu/odata/sap/ZFI_BUDGET_BLOCK_API_SRV/`
- Look for `x-csrf-token` header in response
- Add this token to subsequent POST/PUT requests as header: `X-CSRF-Token: <token>`
2. **Add Required Headers:**
```
X-CSRF-Token: Fetch
X-Requested-With: XMLHttpRequest
```
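A minimal sketch of that fetch-then-submit pattern, assuming Basic Auth built from `SAP_USERNAME`/`SAP_PASSWORD`; SAP typically also expects the session cookies returned with the token to be sent back on the POST:

```typescript
// Sketch of the CSRF flow described above - not the production SAP client.
const SAP_BASE_URL = process.env.SAP_BASE_URL!;
const basicAuth = 'Basic ' + Buffer.from(
  `${process.env.SAP_USERNAME}:${process.env.SAP_PASSWORD}`
).toString('base64');

// servicePath example: '/sap/opu/odata/sap/ZFI_BUDGET_BLOCK_API_SRV'
async function postWithCsrf(servicePath: string, entitySet: string, payload: unknown) {
  // 1. Fetch a CSRF token (and session cookies) from the service root
  const tokenRes = await fetch(`${SAP_BASE_URL}${servicePath}/`, {
    headers: { Authorization: basicAuth, 'X-CSRF-Token': 'Fetch', Accept: 'application/json' },
  });
  const csrfToken = tokenRes.headers.get('x-csrf-token') ?? '';
  const setCookies: string[] = (tokenRes.headers as any).getSetCookie?.() ?? [];
  const cookies = setCookies.map((c) => c.split(';')[0]).join('; '); // SAP usually requires these on the POST

  // 2. Send the POST with the token (and cookies) attached
  const res = await fetch(`${SAP_BASE_URL}${servicePath}/${entitySet}`, {
    method: 'POST',
    headers: {
      Authorization: basicAuth,
      'X-CSRF-Token': csrfToken,
      Cookie: cookies,
      Accept: 'application/json',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`SAP POST failed: ${res.status}`);
  return res.json();
}
```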
### 2. Authentication Failed
**Error:** "Authentication failed" or "401 Unauthorized"
**Possible Causes:**
1. Wrong username/password
2. Basic auth not being sent correctly
3. SSL certificate issues
4. SAP account locked or expired
**Solutions:**
1. **Verify Credentials:**
- Double-check `SAP_USERNAME` and `SAP_PASSWORD` in `.env`
- Ensure no extra spaces or special characters
- Test credentials in browser first
2. **Check SSL Certificate:**
- If using self-signed certificate, set `SAP_DISABLE_SSL_VERIFY=true` in `.env` (testing only!)
- For production, ensure proper SSL certificates are configured
3. **Test Basic Auth Manually:**
- Use Postman with Basic Auth enabled
- Verify the Authorization header format: `Basic <base64(username:password)>`
4. **Check SAP Account Status:**
- Verify account is active and not locked
- Check if password has expired
- Contact SAP administrator if needed
### 3. Connection Errors
**Error:** "ECONNREFUSED" or "ENOTFOUND"
**Solutions:**
1. Verify `SAP_BASE_URL` is correct
2. Check network connectivity to SAP server
3. Ensure firewall allows connections to port 1443
4. Verify Zscaler is configured correctly
### 4. Timeout Errors
**Error:** "Request timeout"
**Solutions:**
1. Increase `SAP_TIMEOUT_MS` in `.env` (default: 30000ms = 30 seconds)
2. Check SAP server response time
3. Verify network latency
## Debugging
### Enable Debug Logging
Set log level to debug in your `.env`:
```
LOG_LEVEL=debug
```
This will log:
- Request URLs
- Request payloads
- Response status codes
- Response data
- Error details
### Check Backend Logs
Look for `[SAP]` prefixed log messages:
```bash
# In development
npm run dev
# Check logs for SAP-related messages
```
### Test SAP Connection
You can test if SAP is reachable:
```bash
curl -u "username:password" \
"https://RENOIHND01.Eichergroup.com:1443/sap/opu/odata/sap/ZFI_BUDGET_CHECK_API_SRV/"
```
## Environment Variables Checklist
Ensure these are set in your `.env`:
```bash
# Required
SAP_BASE_URL=https://RENOIHND01.Eichergroup.com:1443
SAP_USERNAME=your_username
SAP_PASSWORD=your_password
# Optional (with defaults)
SAP_TIMEOUT_MS=30000
SAP_SERVICE_NAME=ZFI_BUDGET_CHECK_API_SRV
SAP_BLOCK_SERVICE_NAME=ZFI_BUDGET_BLOCK_API_SRV
SAP_REQUESTER=REFMS
SAP_DISABLE_SSL_VERIFY=false # Only for testing
```
## Next Steps
If you're still getting errors:
1. **Check Backend Logs:** Look for detailed error messages
2. **Test Directly in Postman:** Bypass backend and test SAP API directly
3. **Verify SAP Credentials:** Test with SAP administrator
4. **Check Network:** Ensure server can reach SAP URL
5. **Review SAP Documentation:** Check if there are additional requirements
@ -1,299 +0,0 @@
# Step 3 (Department Lead Approval) - User Addition Flow Analysis
## Overview
This document analyzes how Step 3 approvers (Department Lead) are added to the dealer claim workflow, covering both frontend and backend implementation.
---
## Backend Implementation
### 1. Request Creation Flow (`dealerClaim.service.ts`)
#### Entry Point: `createClaimRequest()`
- **Location**: `Re_Backend/src/services/dealerClaim.service.ts:37`
- **Parameters**:
- `userId`: Initiator's user ID
- `claimData`: Includes optional `selectedManagerEmail` for user selection
#### Step 3 Approver Resolution Process:
**Phase 1: Pre-Validation (Before Creating Records)**
```typescript
// Lines 67-87: Resolve Department Lead BEFORE creating workflow
let departmentLead: User | null = null;
if (claimData.selectedManagerEmail) {
// User selected a manager from multiple options
departmentLead = await this.userService.ensureUserExists({
email: claimData.selectedManagerEmail,
});
} else {
// Search Okta using manager displayName from initiator's user record
departmentLead = await this.resolveDepartmentLeadFromManager(initiator);
// If no manager found, throw error BEFORE creating any records
if (!departmentLead) {
throw new Error(`No reporting manager found...`);
}
}
```
**Phase 2: Approval Level Creation**
```typescript
// Line 136: Create approval levels with pre-resolved department lead
await this.createClaimApprovalLevels(
workflowRequest.requestId,
userId,
claimData.dealerEmail,
claimData.selectedManagerEmail,
departmentLead // Pre-resolved to avoid re-searching
);
```
### 2. Approval Level Creation (`createClaimApprovalLevels()`)
#### Location: `Re_Backend/src/services/dealerClaim.service.ts:253`
#### Step 3 Configuration:
```typescript
// Lines 310-318: Step 3 definition
{
level: 3,
name: 'Department Lead Approval',
tatHours: 72,
isAuto: false,
approverType: 'department_lead' as const,
approverId: departmentLead?.userId || null,
approverEmail: departmentLead?.email || initiator.manager || 'deptlead@royalenfield.com',
}
```
#### Approver Resolution Logic:
```typescript
// Lines 405-417: Department Lead resolution
else if (step.approverType === 'department_lead') {
if (finalDepartmentLead) {
approverId = finalDepartmentLead.userId;
approverName = finalDepartmentLead.displayName || finalDepartmentLead.email || 'Department Lead';
approverEmail = finalDepartmentLead.email;
} else {
// This should never happen as we validate manager before creating records
throw new Error('Department lead not found...');
}
}
```
#### Database Record Creation:
```typescript
// Lines 432-454: Create ApprovalLevel record
await ApprovalLevel.create({
requestId,
levelNumber: 3,
levelName: 'Department Lead Approval',
approverId: approverId, // Department Lead's userId
approverEmail,
approverName,
tatHours: 72,
status: ApprovalStatus.PENDING, // Will be activated when Step 2 is approved
isFinalApprover: false,
// ... other fields
});
```
### 3. Department Lead Resolution Methods
#### Method 1: `resolveDepartmentLeadFromManager()` (Primary)
- **Location**: `Re_Backend/src/services/dealerClaim.service.ts:622`
- **Flow**:
1. Get `manager` displayName from initiator's User record
2. Search Okta directory by displayName using `userService.searchOktaByDisplayName()`
3. **If 0 matches**: Return `null` (fallback to legacy method)
4. **If 1 match**: Create user in DB if needed, return User object
5. **If multiple matches**: Throw error with `MULTIPLE_MANAGERS_FOUND` code and list of managers
#### Method 2: `resolveDepartmentLead()` (Fallback/Legacy)
- **Location**: `Re_Backend/src/services/dealerClaim.service.ts:699`
- **Priority Order**:
1. User with `MANAGEMENT` role in same department
2. User with designation containing "Lead"/"Head"/"Manager" in same department
3. User matching `initiator.manager` email field
4. Any user in same department (excluding initiator)
5. Any user with "Department Lead" designation (across all departments)
6. Any user with `MANAGEMENT` role (across all departments)
7. Any user with `ADMIN` role (across all departments)
### 4. Participant Creation
#### Location: `Re_Backend/src/services/dealerClaim.service.ts:463`
- Department Lead is automatically added as a participant when approval levels are created
- Participant type: `APPROVER`
- Allows department lead to view, comment, and approve the request
---
## Frontend Implementation
### 1. Request Creation (`ClaimManagementWizard.tsx`)
#### Location: `Re_Figma_Code/src/dealer-claim/components/request-creation/ClaimManagementWizard.tsx`
#### Current Implementation:
- **No UI for selecting Step 3 approver during creation**
- Step 3 approver is automatically resolved by backend based on:
- Initiator's manager field
- Department hierarchy
- Role-based lookup
#### Form Data Structure:
```typescript
// Lines 61-75: Form data structure
const [formData, setFormData] = useState({
activityName: '',
activityType: '',
dealerCode: '',
// ... other fields
// Note: No selectedManagerEmail field in wizard
});
```
#### Submission:
```typescript
// Lines 152-216: handleSubmit()
const claimData = {
...formData,
templateType: 'claim-management',
// selectedManagerEmail is NOT included in current wizard
// Backend will auto-resolve department lead
};
```
### 2. Request Detail View (`RequestDetail.tsx`)
#### Location: `Re_Figma_Code/src/dealer-claim/pages/RequestDetail.tsx`
#### Step 3 Approver Detection:
```typescript
// Lines 147-173: Finding Step 3 approver
const step3Level = approvalFlow.find((level: any) =>
(level.step || level.levelNumber || level.level_number) === 3
) || approvals.find((level: any) =>
(level.levelNumber || level.level_number) === 3
);
const deptLeadUserId = step3Level?.approverId || step3Level?.approver_id || step3Level?.approver?.userId;
const deptLeadEmail = (step3Level?.approverEmail || '').toLowerCase().trim();
// User is department lead if they match Step 3 approver
const isDeptLead = (deptLeadUserId && deptLeadUserId === currentUserId) ||
(deptLeadEmail && currentUserEmail && deptLeadEmail === currentUserEmail);
```
#### Add Approver Functionality:
- **Lines 203-217, 609, 621, 688, 701, 711**: References to `handleAddApprover` and `AddApproverModal`
- **Note**: This appears to be generic approver addition (for other workflow types), not specifically for Step 3
- Step 3 approver is **fixed** and cannot be changed after request creation
### 3. Workflow Tab (`WorkflowTab.tsx`)
#### Location: `Re_Figma_Code/src/dealer-claim/components/request-detail/WorkflowTab.tsx`
#### Step 3 Action Button Visibility:
```typescript
// Lines 1109-1126: Step 3 approval button
{step.step === 3 && (() => {
// Find step 3 from approvalFlow to get approverEmail
const step3Level = approvalFlow.find((l: any) => (l.step || l.levelNumber || l.level_number) === 3);
const step3ApproverEmail = (step3Level?.approverEmail || '').toLowerCase();
const isStep3ApproverByEmail = step3ApproverEmail && userEmail === step3ApproverEmail;
return isStep3ApproverByEmail || isStep3Approver || isCurrentApprover;
})() && (
<Button onClick={() => setShowIOApprovalModal(true)}>
Approve and Organise IO
</Button>
)}
```
#### Step 3 Approval Handler:
```typescript
// Lines 535-583: handleIOApproval()
// 1. Finds Step 3 levelId from approval levels
// 2. Updates IO details (ioNumber, ioRemark)
// 3. Approves Step 3 using approveLevel() API
// 4. Moves workflow to Step 4 (auto-processed)
```
---
## Key Findings
### Current Flow Summary:
1. **Request Creation**:
- User creates claim request via `ClaimManagementWizard`
- **No UI for selecting Step 3 approver**
- Backend automatically resolves department lead using:
- Initiator's `manager` displayName → Okta search
- Fallback to legacy resolution methods
2. **Multiple Managers Scenario**:
- If Okta search returns multiple managers:
- Backend throws `MULTIPLE_MANAGERS_FOUND` error
- Error includes list of manager options
- **Frontend needs to handle this** (currently not implemented in wizard)
3. **Approval Level Creation**:
- Step 3 approver is **fixed** at request creation
- Stored in `ApprovalLevel` table with:
- `levelNumber: 3`
- `approverId`: Department Lead's userId
- `approverEmail`: Department Lead's email
- `status: PENDING` (activated when Step 2 is approved)
4. **After Request Creation**:
- Step 3 approver **cannot be changed** via UI
- Generic `AddApproverModal` exists but is not used for Step 3
- Step 3 approver is determined by backend logic only
### Limitations:
1. **No User Selection During Creation**:
- Wizard doesn't allow user to select/override Step 3 approver
- If multiple managers found, error handling not implemented in frontend
2. **No Post-Creation Modification**:
- No UI to change Step 3 approver after request is created
- Would require backend API to update `ApprovalLevel.approverId`
3. **Fixed Resolution Logic**:
- Department lead resolution is hardcoded in backend
- No configuration or override mechanism
---
## Potential Enhancement Areas
1. **Frontend**: Add manager selection UI in wizard when multiple managers found
2. **Frontend**: Add "Change Approver" option for Step 3 (if allowed by business rules)
3. **Backend**: Add API endpoint to update Step 3 approver after request creation
4. **Backend**: Add configuration for department lead resolution rules
5. **Both**: Handle `MULTIPLE_MANAGERS_FOUND` error gracefully in frontend
---
## Related Files
### Backend:
- `Re_Backend/src/services/dealerClaim.service.ts` - Main service
- `Re_Backend/src/controllers/dealerClaim.controller.ts` - API endpoints
- `Re_Backend/src/services/user.service.ts` - User/Okta integration
- `Re_Backend/src/models/ApprovalLevel.ts` - Database model
### Frontend:
- `Re_Figma_Code/src/dealer-claim/components/request-creation/ClaimManagementWizard.tsx` - Request creation
- `Re_Figma_Code/src/dealer-claim/pages/RequestDetail.tsx` - Request detail view
- `Re_Figma_Code/src/dealer-claim/components/request-detail/WorkflowTab.tsx` - Workflow display
- `Re_Figma_Code/src/dealer-claim/components/request-detail/modals/DeptLeadIOApprovalModal.tsx` - Step 3 approval modal
### Documentation:
- `Re_Backend/docs/CLAIM_MANAGEMENT_APPROVER_MAPPING.md` - Approver mapping rules

View File

@ -1,201 +0,0 @@
# Tanflow SSO User Data Mapping
This document outlines all user information available from Tanflow IAM Suite and how it maps to our User model for user creation.
## Tanflow Userinfo Endpoint Response
Tanflow uses **OpenID Connect (OIDC) standard claims** via the `/protocol/openid-connect/userinfo` endpoint. The following fields are available:
### Standard OIDC Claims (Available from Tanflow)
| Tanflow Field | OIDC Standard Claim | Type | Description | Currently Extracted |
|--------------|---------------------|------|--------------|-------------------|
| `sub` | `sub` | string | **REQUIRED** - Subject identifier (unique user ID) | ✅ Yes (as `oktaSub`) |
| `email` | `email` | string | Email address | ✅ Yes |
| `email_verified` | `email_verified` | boolean | Email verification status | ❌ No |
| `preferred_username` | `preferred_username` | string | Preferred username (fallback for email) | ✅ Yes (fallback) |
| `name` | `name` | string | Full display name | ✅ Yes (as `displayName`) |
| `given_name` | `given_name` | string | First name | ✅ Yes (as `firstName`) |
| `family_name` | `family_name` | string | Last name | ✅ Yes (as `lastName`) |
| `phone_number` | `phone_number` | string | Phone number | ✅ Yes (as `phone`) |
| `phone_number_verified` | `phone_number_verified` | boolean | Phone verification status | ❌ No |
| `address` | `address` | object | Address object (structured) | ❌ No |
| `locale` | `locale` | string | User locale (e.g., "en-US") | ❌ No |
| `picture` | `picture` | string | Profile picture URL | ❌ No |
| `website` | `website` | string | Website URL | ❌ No |
| `profile` | `profile` | string | Profile page URL | ❌ No |
| `birthdate` | `birthdate` | string | Date of birth | ❌ No |
| `gender` | `gender` | string | Gender | ❌ No |
| `zoneinfo` | `zoneinfo` | string | Timezone (e.g., "America/New_York") | ❌ No |
| `updated_at` | `updated_at` | number | Last update timestamp | ❌ No |
### Custom Tanflow Claims (May be available)
These are **custom claims** that Tanflow may include based on their configuration:
| Tanflow Field | Type | Description | Currently Extracted |
|--------------|------|-------------|-------------------|
| `employeeId` | string | Employee ID from HR system | ✅ Yes |
| `employee_id` | string | Alternative employee ID field | ✅ Yes (fallback) |
| `department` | string | Department/Division | ✅ Yes |
| `designation` | string | Job designation/position | ✅ Yes |
| `title` | string | Job title | ❌ No |
| `employeeType` | string | Employee type (Dealer, Full-time, Contract, etc.) | ✅ Yes (as `jobTitle`) |
| `organization` | string | Organization name | ❌ No |
| `division` | string | Division name | ❌ No |
| `location` | string | Office location | ❌ No |
| `manager` | string | Manager name/email | ❌ No |
| `manager_id` | string | Manager employee ID | ❌ No |
| `cost_center` | string | Cost center code | ❌ No |
| `hire_date` | string | Date of hire | ❌ No |
| `office_location` | string | Office location | ❌ No |
| `country` | string | Country code | ❌ No |
| `city` | string | City name | ❌ No |
| `state` | string | State/Province | ❌ No |
| `postal_code` | string | Postal/ZIP code | ❌ No |
| `groups` | array | Group memberships | ❌ No |
| `roles` | array | User roles | ❌ No |
## Current Extraction Logic
**Location:** `Re_Backend/src/services/auth.service.ts` → `exchangeTanflowCodeForTokens()`
```typescript
const userData: SSOUserData = {
oktaSub: tanflowSub, // Reuse oktaSub field for Tanflow sub
email: tanflowUserInfo.email || tanflowUserInfo.preferred_username || '',
employeeId: tanflowUserInfo.employeeId || tanflowUserInfo.employee_id || undefined,
firstName: tanflowUserInfo.given_name || tanflowUserInfo.firstName || undefined,
lastName: tanflowUserInfo.family_name || tanflowUserInfo.lastName || undefined,
displayName: tanflowUserInfo.name || tanflowUserInfo.displayName || undefined,
department: tanflowUserInfo.department || undefined,
designation: tanflowUserInfo.designation || undefined, // Tanflow `designation` maps directly to our `designation` field
phone: tanflowUserInfo.phone_number || tanflowUserInfo.phone || undefined,
// Additional fields
manager: tanflowUserInfo.manager || undefined,
jobTitle: tanflowUserInfo.employeeType || undefined, // Map employeeType to jobTitle
postalAddress: tanflowUserInfo.address ? (typeof tanflowUserInfo.address === 'string' ? tanflowUserInfo.address : JSON.stringify(tanflowUserInfo.address)) : undefined,
mobilePhone: tanflowUserInfo.mobile_phone || tanflowUserInfo.mobilePhone || undefined,
adGroups: Array.isArray(tanflowUserInfo.groups) ? tanflowUserInfo.groups : undefined,
};
```
## User Model Fields Mapping
**Location:** `Re_Backend/src/models/User.ts`
| User Model Field | Tanflow Source | Required | Notes |
|-----------------|----------------|----------|-------|
| `userId` | Auto-generated UUID | ✅ | Primary key |
| `oktaSub` | `sub` | ✅ | Unique identifier from Tanflow |
| `email` | `email` or `preferred_username` | ✅ | Primary identifier |
| `employeeId` | `employeeId` or `employee_id` | ❌ | Optional HR system ID |
| `firstName` | `given_name` or `firstName` | ❌ | Optional |
| `lastName` | `family_name` or `lastName` | ❌ | Optional |
| `displayName` | `name` or `displayName` | ❌ | Auto-generated if missing |
| `department` | `department` | ❌ | Optional |
| `designation` | `designation` | ❌ | Optional |
| `phone` | `phone_number` or `phone` | ❌ | Optional |
| `manager` | `manager` | ❌ | Optional (extracted if available) |
| `secondEmail` | N/A | ❌ | Not available from Tanflow |
| `jobTitle` | `employeeType` | ❌ | Optional (maps employeeType to jobTitle) |
| `employeeNumber` | N/A | ❌ | Not available from Tanflow |
| `postalAddress` | `address` | ❌ | Extracted (stringified when the address is structured) |
| `mobilePhone` | `mobile_phone` / `mobilePhone` | ❌ | Extracted when Tanflow provides it |
| `adGroups` | `groups` | ❌ | Extracted when `groups` is an array |
| `location` | `address`, `city`, `state`, `country` | ❌ | **NOT currently extracted** |
| `role` | Default: 'USER' | ✅ | Default role assigned |
| `isActive` | Default: true | ✅ | Auto-set to true |
| `lastLogin` | Current timestamp | ✅ | Auto-set on login |
## Recommended Enhancements
### 1. Extract Additional Fields
Most additional fields (manager, groups, address) are already extracted; the remaining enhancement is building a structured `location` object from Tanflow's address/locale claims, if available:
```typescript
// Enhanced extraction (to be implemented)
const userData: SSOUserData = {
// ... existing fields ...
// Additional fields (already implemented)
manager: tanflowUserInfo.manager || undefined,
jobTitle: tanflowUserInfo.employeeType || undefined, // Map employeeType to jobTitle
postalAddress: tanflowUserInfo.address ? (typeof tanflowUserInfo.address === 'string' ? tanflowUserInfo.address : JSON.stringify(tanflowUserInfo.address)) : undefined,
mobilePhone: tanflowUserInfo.mobile_phone || tanflowUserInfo.mobilePhone || undefined,
adGroups: Array.isArray(tanflowUserInfo.groups) ? tanflowUserInfo.groups : undefined,
// Location object
location: {
city: tanflowUserInfo.city || undefined,
state: tanflowUserInfo.state || undefined,
country: tanflowUserInfo.country || undefined,
office: tanflowUserInfo.office_location || undefined,
timezone: tanflowUserInfo.zoneinfo || undefined,
},
};
```
### 2. Log Available Fields
Add logging to see what Tanflow actually returns:
```typescript
logger.info('Tanflow userinfo response', {
availableFields: Object.keys(tanflowUserInfo),
hasEmail: !!tanflowUserInfo.email,
hasEmployeeId: !!(tanflowUserInfo.employeeId || tanflowUserInfo.employee_id),
hasDepartment: !!tanflowUserInfo.department,
hasManager: !!tanflowUserInfo.manager,
hasGroups: Array.isArray(tanflowUserInfo.groups),
groupsCount: Array.isArray(tanflowUserInfo.groups) ? tanflowUserInfo.groups.length : 0,
sampleData: {
sub: tanflowUserInfo.sub?.substring(0, 10) + '...',
email: tanflowUserInfo.email?.substring(0, 10) + '...',
name: tanflowUserInfo.name,
}
});
```
## User Creation Flow
1. **Token Exchange** → Get `access_token` from Tanflow
2. **Userinfo Call** → Call `/protocol/openid-connect/userinfo` with `access_token`
3. **Extract Data** → Map Tanflow fields to `SSOUserData` interface
4. **User Lookup** → Check if user exists by `email`
5. **Create/Update** → Create new user or update existing user
6. **Generate Tokens** → Generate JWT access/refresh tokens
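A minimal sketch of steps 4–5 (lookup and create/update), assuming a Sequelize-style `User` model and the `SSOUserData` shape described above; the real logic lives in `auth.service.ts` and may differ in detail.
```typescript
// Assumed import paths — adjust to the actual project structure.
import { User } from '../models/User';
import type { SSOUserData } from './auth.service';

export async function findOrCreateSsoUser(data: SSOUserData) {
  const existing = await User.findOne({ where: { email: data.email } });

  if (existing) {
    // Existing user: refresh profile fields and last login timestamp.
    await existing.update({ ...data, lastLogin: new Date() });
    return existing;
  }

  // New user: create with the documented defaults.
  return User.create({
    ...data,
    role: 'USER',
    isActive: true,
    lastLogin: new Date(),
  });
}
```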
## Testing Recommendations
1. **Test with Real Tanflow Account**
- Log actual userinfo response
- Document all available fields
- Verify field mappings
2. **Handle Missing Fields**
- Ensure graceful fallbacks
- Don't fail if optional fields are missing
- Log warnings for missing expected fields
3. **Validate Required Fields**
- `sub` (oktaSub) - REQUIRED
- `email` or `preferred_username` - REQUIRED
## Next Steps
1. ✅ **Current Implementation** - Basic OIDC claims extraction
2. 🔄 **Enhancement** - Extract additional custom claims (manager, groups, location)
3. 🔄 **Logging** - Add detailed logging of Tanflow response
4. 🔄 **Testing** - Test with real Tanflow account to see actual fields
5. 🔄 **Documentation** - Update this doc with actual Tanflow response structure
## Notes
- Tanflow uses **Keycloak** under the hood (based on URL structure)
- Keycloak supports custom user attributes that may be available
- Some fields may require specific realm/client configuration in Tanflow
- Contact Tanflow support to confirm available custom claims

View File

@ -1,254 +0,0 @@
# Username/Password Authentication Endpoint
## Overview
This endpoint allows users to authenticate using their Okta username (email) and password directly via API, without any browser redirects. Perfect for testing with Postman, mobile apps, or other API clients.
## Endpoint Details
**URL:** `POST /api/v1/auth/login`
**Authentication Required:** No
**Content-Type:** `application/json`
## How It Works
1. **Client sends credentials** → Backend validates with Okta
2. **Okta authenticates** → Returns access token
3. **Backend fetches user info** from Okta
4. **User exists in DB?**
- ✅ Yes → Update user info and last login
- ❌ No → **Create new user** in database (like spectator/approver flow)
5. **Return JWT tokens** → Access token + Refresh token
## Request Body
```json
{
"username": "user@example.com",
"password": "YourOktaPassword123"
}
```
### Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `username` | string | Yes | User's Okta email/username |
| `password` | string | Yes | User's Okta password |
## Response
### Success Response (200 OK)
```json
{
"success": true,
"message": "Login successful",
"data": {
"user": {
"userId": "123e4567-e89b-12d3-a456-426614174000",
"employeeId": "EMP001",
"email": "user@example.com",
"firstName": "John",
"lastName": "Doe",
"displayName": "John Doe",
"department": "Engineering",
"designation": "Senior Developer",
"role": "USER"
},
"accessToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
"refreshToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
},
"timestamp": "2024-01-15T10:30:00.000Z"
}
```
### Error Response (401 Unauthorized)
```json
{
"success": false,
"error": "Login failed",
"message": "Authentication failed: Invalid username or password",
"timestamp": "2024-01-15T10:30:00.000Z"
}
```
## Postman Testing
### Step 1: Create New Request
1. Open Postman
2. Create new request: **POST**
3. URL: `http://localhost:5000/api/v1/auth/login`
- Or your backend URL (e.g., `https://re-workflow-nt-dev.siplsolutions.com/api/v1/auth/login`)
### Step 2: Set Headers
```
Content-Type: application/json
```
### Step 3: Set Request Body
Choose **Body** → **raw** → **JSON**
```json
{
"username": "your-okta-email@example.com",
"password": "your-okta-password"
}
```
### Step 4: Send Request
Click **Send** and you should receive:
- ✅ User object with details
- ✅ Access token (valid for 24 hours)
- ✅ Refresh token (valid for 7 days)
### Step 5: Use Access Token
Copy the `accessToken` from the response and use it in subsequent API calls:
**Headers:**
```
Authorization: Bearer <your-access-token>
```
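For API clients outside Postman, a minimal fetch-based sketch using only the documented endpoints looks like this (the base URL is an example; the response shapes follow the sample responses above):
```typescript
const BASE_URL = 'http://localhost:5000/api/v1';

async function loginAndFetchProfile(username: string, password: string) {
  // 1. Authenticate with Okta credentials via the backend
  const loginRes = await fetch(`${BASE_URL}/auth/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  });
  if (!loginRes.ok) throw new Error(`Login failed: ${loginRes.status}`);

  const { data } = await loginRes.json();

  // 2. Use the access token on protected routes
  const meRes = await fetch(`${BASE_URL}/auth/me`, {
    headers: { Authorization: `Bearer ${data.accessToken}` },
  });
  return meRes.json();
}
```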
## User Creation Logic
### Scenario 1: User Exists in Okta & Our DB
- ✅ User authenticated
- ✅ User info updated (department, designation, last login, etc.)
- ✅ Tokens returned
### Scenario 2: User Exists in Okta but NOT in Our DB
- ✅ User authenticated with Okta
- ✅ **New user created** in our database with info from Okta:
- `oktaSub` (Okta subject ID)
- `email` (primary identifier)
- `employeeId` (if available in Okta profile)
- `firstName`, `lastName`, `displayName`
- `department`, `designation`, `phone`
- `role` = "USER" (default)
- `isActive` = true
- ✅ Tokens returned
This is the **same behavior** as when adding a spectator or approver for the first time!
### Scenario 3: User Does NOT Exist in Okta
- ❌ Authentication fails
- ❌ Error: "Invalid username or password"
- ❌ No user created in our DB
## Important Notes
### 🔐 Security
1. **HTTPS Required in Production**: Always use HTTPS to protect credentials in transit
2. **Rate Limiting**: Consider adding rate limiting to prevent brute force attacks
3. **Okta Password Policy**: Follows Okta's password complexity requirements
### ⚙️ Okta Configuration Required
This endpoint uses **Resource Owner Password Credentials** grant type. Your Okta application must have this enabled:
1. Go to Okta Admin Console
2. Navigate to **Applications** → Your Application
3. Under **General Settings** → **Grant Types**
4. Enable: ✅ **Resource Owner Password**
### 📝 Token Management
**Access Token:**
- Valid for: 24 hours
- Used for: API authentication
- Header: `Authorization: Bearer <token>`
**Refresh Token:**
- Valid for: 7 days
- Used for: Getting new access tokens
- Endpoint: `POST /api/v1/auth/refresh`
## Example Postman Collection
### 1. Login
```http
POST http://localhost:5000/api/v1/auth/login
Content-Type: application/json
{
"username": "john.doe@royalenfield.com",
"password": "SecurePassword123!"
}
```
### 2. Get Current User (Protected Route)
```http
GET http://localhost:5000/api/v1/auth/me
Authorization: Bearer <access-token-from-login>
```
### 3. Refresh Access Token
```http
POST http://localhost:5000/api/v1/auth/refresh
Content-Type: application/json
{
"refreshToken": "<refresh-token-from-login>"
}
```
## Troubleshooting
### Error: "OKTA_CLIENT_SECRET is not configured"
**Solution:** Check your `.env` file has valid Okta credentials:
```env
OKTA_DOMAIN=https://dev-xxxxx.okta.com
OKTA_CLIENT_ID=your_client_id
OKTA_CLIENT_SECRET=your_client_secret
```
### Error: "Authentication failed: invalid_grant"
**Possible causes:**
1. Username or password is incorrect
2. User account is locked/suspended in Okta
3. Resource Owner Password grant not enabled in Okta
### Error: "Authentication failed: invalid_client"
**Solution:** Verify `OKTA_CLIENT_ID` and `OKTA_CLIENT_SECRET` are correct
### New User Not Created
**Check logs:** Backend logs will show if user creation failed and why
```bash
npm run dev # Check console output
```
## Comparison: SSO vs Password Login
| Feature | SSO Flow (Browser) | Password Login (API) |
|---------|-------------------|---------------------|
| **Use Case** | Web application login | Postman, Mobile apps, API clients |
| **User Experience** | Browser redirect to Okta | Direct username/password |
| **Security** | OAuth 2.0 Authorization Code | Resource Owner Password |
| **Best For** | Production web apps | Testing, Mobile apps, Internal tools |
| **Endpoint** | `/api/v1/auth/token-exchange` | `/api/v1/auth/login` |
## Next Steps
After successful authentication:
1. Store the `accessToken` securely
2. Use it in `Authorization` header for protected endpoints
3. Refresh it using `refreshToken` before expiry
4. Call `/api/v1/auth/logout` when user logs out
## Related Endpoints
- `POST /api/v1/auth/refresh` - Refresh access token
- `GET /api/v1/auth/me` - Get current user profile
- `GET /api/v1/auth/validate` - Validate current token
- `POST /api/v1/auth/logout` - Logout and clear cookies

View File

@ -1,202 +0,0 @@
# Vertex AI Gemini Integration
## Overview
The AI service has been migrated from multi-provider support (Claude, OpenAI, Gemini API) to **Google Cloud Vertex AI Gemini** using service account authentication. This provides better enterprise-grade security and uses the same credentials as Google Cloud Storage.
## Changes Made
### 1. Package Dependencies
**Removed:**
- `@anthropic-ai/sdk` (Claude SDK)
- `@google/generative-ai` (Gemini API SDK)
- `openai` (OpenAI SDK)
**Added:**
- `@google-cloud/vertexai` (Vertex AI SDK)
### 2. Service Account Authentication
The AI service now uses the same service account JSON file as GCS:
- **Location**: `credentials/re-platform-workflow-dealer-3d5738fcc1f9.json`
- **Project ID**: `re-platform-workflow-dealer`
- **Default Region**: `us-central1`
### 3. Configuration
**Environment Variables:**
```env
# Required (already configured for GCS)
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
# Optional (defaults provided)
VERTEX_AI_MODEL=gemini-2.5-flash
VERTEX_AI_LOCATION=asia-south1
AI_ENABLED=true
```
**Admin Panel Configuration:**
- `AI_ENABLED` - Enable/disable AI features
- `VERTEX_AI_MODEL` - Model name (default: `gemini-2.5-flash`)
- `AI_MAX_REMARK_LENGTH` - Maximum characters for conclusion remarks (default: 2000)
### 4. Available Models
| Model Name | Description | Use Case |
|------------|-------------|----------|
| `gemini-2.5-flash` | Latest fast model (default) | General purpose, quick responses |
| `gemini-1.5-flash` | Previous fast model | General purpose |
| `gemini-1.5-pro` | Advanced model | Complex tasks, better quality |
| `gemini-1.5-pro-latest` | Latest Pro version | Best quality, complex reasoning |
### 5. Supported Regions
| Region Code | Location | Availability |
|-------------|----------|--------------|
| `us-central1` | Iowa, USA | ✅ Default |
| `us-east1` | South Carolina, USA | ✅ |
| `us-west1` | Oregon, USA | ✅ |
| `europe-west1` | Belgium | ✅ |
| `asia-south1` | Mumbai, India | ✅ |
## Setup Instructions
### Step 1: Install Dependencies
```bash
cd Re_Backend
npm install
```
This will install `@google-cloud/vertexai` and remove old AI SDKs.
### Step 2: Verify Service Account Permissions
Ensure your service account (`re-bridge-workflow@re-platform-workflow-dealer.iam.gserviceaccount.com`) has:
- **Vertex AI User** role (`roles/aiplatform.user`)
### Step 3: Enable Vertex AI API
1. Go to [Google Cloud Console](https://console.cloud.google.com/)
2. Navigate to **APIs & Services** > **Library**
3. Search for **"Vertex AI API"**
4. Click **Enable**
### Step 4: Verify Configuration
Check that your `.env` file has:
```env
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json
VERTEX_AI_MODEL=gemini-2.5-flash
VERTEX_AI_LOCATION=us-central1
```
### Step 5: Test the Integration
Start the backend and check logs for:
```
[AI Service] ✅ Vertex AI provider initialized successfully with model: gemini-2.5-flash
```
## API Interface (Unchanged)
The public API remains the same - no changes needed in controllers or routes:
```typescript
// Check if AI is available
aiService.isAvailable(): boolean
// Get provider name
aiService.getProviderName(): string
// Generate conclusion remark
aiService.generateConclusionRemark(context): Promise<{
remark: string;
confidence: number;
keyPoints: string[];
provider: string;
}>
// Reinitialize (after config changes)
aiService.reinitialize(): Promise<void>
```
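A usage sketch against this interface (the `context` object passed in is an assumption; it should match whatever the conclusion controller actually builds):
```typescript
import { aiService } from '../services/ai.service'; // assumed path

async function buildConclusion(requestContext: Record<string, unknown>) {
  if (!aiService.isAvailable()) {
    // AI disabled or misconfigured: fall back to a manual remark.
    return { remark: '', keyPoints: [], provider: 'none', confidence: 0 };
  }

  const result = await aiService.generateConclusionRemark(requestContext);
  console.log(`Generated by ${result.provider} (confidence ${result.confidence})`);
  return result;
}
```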
## Troubleshooting
### Error: "Module not found: @google-cloud/vertexai"
**Solution:**
```bash
npm install @google-cloud/vertexai
```
### Error: "Service account key file not found"
**Solution:**
- Verify file exists at: `credentials/re-platform-workflow-dealer-3d5738fcc1f9.json`
- Check `GCP_KEY_FILE` path in `.env` is correct
- Ensure file has read permissions
### Error: "Model was not found or your project does not have access"
**Solution:**
- Verify Vertex AI API is enabled in Google Cloud Console
- Check model name is correct (e.g., `gemini-2.5-flash`)
- Ensure model is available in your selected region
- Verify service account has `roles/aiplatform.user` role
### Error: "Permission denied"
**Solution:**
- Verify service account has Vertex AI User role
- Check service account key hasn't been revoked
- Regenerate service account key if needed
### Error: "API not enabled"
**Solution:**
- Enable Vertex AI API in Google Cloud Console
- Wait a few minutes for propagation
- Restart the backend service
## Migration Notes
### What Changed
- ✅ Removed multi-provider support (Claude, OpenAI, Gemini API)
- ✅ Now uses Vertex AI Gemini exclusively
- ✅ Uses service account authentication (same as GCS)
- ✅ Simplified configuration (no API keys needed)
### What Stayed the Same
- ✅ Public API interface (`aiService` methods)
- ✅ Conclusion generation functionality
- ✅ Admin panel configuration structure
- ✅ Error handling and logging
### Backward Compatibility
- ✅ Existing code using `aiService` will work without changes
- ✅ Conclusion controller unchanged
- ✅ Admin panel can still enable/disable AI features
- ✅ Configuration cache system still works
## Verification Checklist
- [ ] `@google-cloud/vertexai` package installed
- [ ] Service account key file exists and is valid
- [ ] Vertex AI API is enabled in Google Cloud Console
- [ ] Service account has `Vertex AI User` role
- [ ] `.env` file has correct `GCP_PROJECT_ID` and `GCP_KEY_FILE`
- [ ] Backend logs show successful initialization
- [ ] AI conclusion generation works for test requests
## Support
For issues or questions:
1. Check backend logs for detailed error messages
2. Verify Google Cloud Console settings
3. Ensure service account permissions are correct
4. Test with a simple request to isolate issues

View File

@ -2,15 +2,14 @@
NODE_ENV=development
PORT=5000
API_VERSION=v1
BASE_URL={{CURRENT_BACKEND_DEPLOYED_URL}}
FRONTEND_URL={{FrontEND_BASE_URL}}
BASE_URL=http://localhost:5000
# Database
DB_HOST={{DB_HOST}}
DB_HOST=localhost
DB_PORT=5432
DB_NAME=re_workflow_db
DB_USER={{DB_USER}}
DB_PASSWORD={{DB_PASWORD}}
DB_USER=postgres
DB_PASSWORD=postgres
DB_SSL=false
DB_POOL_MIN=2
DB_POOL_MAX=10
@ -22,6 +21,12 @@ JWT_EXPIRY=24h
REFRESH_TOKEN_SECRET=your_refresh_token_secret_here
REFRESH_TOKEN_EXPIRY=7d
# Okta/Auth0 Configuration (for backend token exchange in localhost)
OKTA_DOMAIN=https://dev-830839.oktapreview.com
OKTA_CLIENT_ID=0oa2j8slwj5S4bG5k0h8
OKTA_CLIENT_SECRET=your_okta_client_secret_here
OKTA_API_TOKEN=your_okta_api_token_here # For Okta User Management API (user search)
# Session
SESSION_SECRET=your_session_secret_here_min_32_chars
@ -30,13 +35,6 @@ GCP_PROJECT_ID=re-workflow-project
GCP_BUCKET_NAME=re-workflow-documents
GCP_KEY_FILE=./config/gcp-key.json
# Google Secret Manager (Optional - for production)
# Set USE_GOOGLE_SECRET_MANAGER=true to enable loading secrets from Google Secret Manager
# Secrets from GCS will override .env file values
USE_GOOGLE_SECRET_MANAGER=false
# GCP_SECRET_PREFIX=optional-prefix-for-secret-names (e.g., "prod" -> looks for "prod-DB_PASSWORD")
# GCP_SECRET_MAP_FILE=./secret-map.json (optional JSON file to map secret names to env var names)
# Email Service (Optional)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
@ -45,25 +43,20 @@ SMTP_USER=notifications@royalenfield.com
SMTP_PASSWORD=your_smtp_password
EMAIL_FROM=RE Workflow System <notifications@royalenfield.com>
# AI Service (for conclusion generation) - Vertex AI Gemini
# Uses service account credentials from GCP_KEY_FILE
# Vertex AI Model Configuration (optional - defaults used if not set)
VERTEX_AI_MODEL=gemini-2.5-flash
VERTEX_AI_LOCATION=asia-south1
# Note: GCP_PROJECT_ID and GCP_KEY_FILE are already configured above for GCS
# AI Service (for conclusion generation)
AI_API_KEY=your_ai_api_key
AI_MODEL=gpt-4
AI_MAX_TOKENS=500
# Logging
LOG_LEVEL=info
LOG_FILE_PATH=./logs
APP_VERSION=1.2.0
# ============ Loki Configuration (Grafana Log Aggregation) ============
LOKI_HOST= # e.g., http://loki:3100 or http://monitoring.cloudtopiaa.com:3100
LOKI_USER= # Optional: Basic auth username
LOKI_PASSWORD= # Optional: Basic auth password
# CORS
CORS_ORIGIN="*"
# CORS - Comma-separated list of allowed origins
# Local: http://localhost:3000
# Production: https://your-frontend-domain.com
# Multiple: http://localhost:3000,http://localhost:5173,https://your-frontend-domain.com
CORS_ORIGIN=http://localhost:3000
# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
@ -77,32 +70,15 @@ ALLOWED_FILE_TYPES=pdf,doc,docx,xls,xlsx,ppt,pptx,jpg,jpeg,png,gif
TAT_CHECK_INTERVAL_MINUTES=30
TAT_REMINDER_THRESHOLD_1=50
TAT_REMINDER_THRESHOLD_2=80
OKTA_API_TOKEN={{api token given fto access octa users}}
OKTA_DOMAIN={{okta_domain_given for the envirnment}}
OKTA_CLIENT_ID={{okta_client_id}}
OKTA_CLIENT_SECRET={{okta_client_secret}}
# Notificaton Service Worker credentials
VAPID_PUBLIC_KEY={{vapid_public_key}} note: same key need to add on front end for web push
VAPID_PRIVATE_KEY={{vapid_private_key}}
VAPID_CONTACT=mailto:you@example.com
# Redis (for TAT Queue)
REDIS_URL=redis://localhost:6379
#Redis
REDIS_URL={{REDIS_URL_FOR DELAY JoBS create redis setup and add url here}}
TAT_TEST_MODE=false (on true it will consider 1 hour==1min)
# SAP Integration (OData Service via Zscaler)
SAP_BASE_URL=https://RENOIHND01.Eichergroup.com:1443
SAP_USERNAME={{SAP_USERNAME}}
SAP_PASSWORD={{SAP_PASSWORD}}
SAP_TIMEOUT_MS=30000
# SAP OData Service Name for IO Validation (default: ZFI_BUDGET_CHECK_API_SRV)
SAP_SERVICE_NAME=ZFI_BUDGET_CHECK_API_SRV
# SAP OData Service Name for Budget Blocking (default: ZFI_BUDGET_BLOCK_API_SRV)
SAP_BLOCK_SERVICE_NAME=ZFI_BUDGET_BLOCK_API_SRV
# SAP Requester identifier for budget blocking API (default: REFMS)
SAP_REQUESTER=REFMS
# SAP SSL Verification (set to 'true' to disable SSL verification for testing with self-signed certs)
# WARNING: Only use in development/testing environments
SAP_DISABLE_SSL_VERIFY=false
# TAT Test Mode (for development/testing)
# When enabled, 1 TAT hour = 1 minute (for faster testing)
# Example: 48-hour TAT becomes 48 minutes
TAT_TEST_MODE=false
# Working Hours Configuration (optional)
WORK_START_HOUR=9
WORK_END_HOUR=18

View File

@ -1,12 +0,0 @@
# =============================================================================
# MONITORING STACK ENVIRONMENT VARIABLES
# =============================================================================
# Copy this file to .env and update with your actual values
# Command: copy .env.example .env
# =============================================================================
# REDIS CONNECTION (External Redis Server)
# =============================================================================
REDIS_HOST=160.187.166.17
REDIS_PORT=6379
REDIS_PASSWORD=Redis@123

12
monitoring/.gitignore vendored
View File

@ -1,12 +0,0 @@
# Data volumes (mounted from Docker containers)
prometheus_data/
grafana_data/
alertmanager_data/
loki_data/
# Environment files with sensitive data
.env
# Logs
*.log

View File

@ -1,167 +0,0 @@
# RE Workflow Dashboard - Metrics Reference
## 📊 Complete KPI List with Data Sources
### **Section 1: API Overview**
| Panel Name | Metric Query | Data Source | What It Measures |
|------------|--------------|-------------|------------------|
| **Request Rate** | `sum(rate(http_requests_total{job="re-workflow-backend"}[5m]))` | Backend metrics | HTTP requests per second (all endpoints) |
| **Error Rate** | `sum(rate(http_request_errors_total{job="re-workflow-backend"}[5m])) / sum(rate(http_requests_total{job="re-workflow-backend"}[5m]))` | Backend metrics | Percentage of failed HTTP requests |
| **P95 Latency** | `histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{job="re-workflow-backend"}[5m])) by (le))` | Backend metrics | 95th percentile response time (seconds) |
| **API Status** | `up{job="re-workflow-backend"}` | Prometheus | Backend service up/down status (1=up, 0=down) |
| **Request Rate by Method** | `sum(rate(http_requests_total{job="re-workflow-backend"}[5m])) by (method)` | Backend metrics | Requests per method (GET, POST, etc.) |
| **Response Time Percentiles** | `histogram_quantile(0.50/0.95/0.99, ...)` | Backend metrics | Response time distribution (P50, P95, P99) |
---
### **Section 2: Logs**
| Panel Name | Metric Query | Data Source | What It Measures |
|------------|--------------|-------------|------------------|
| **Errors (Time Range)** | `count_over_time({job="re-workflow-backend", level="error"}[...])` | Loki logs | Total error log entries in selected time range |
| **Warnings (Time Range)** | `count_over_time({job="re-workflow-backend", level="warn"}[...])` | Loki logs | Total warning log entries in selected time range |
| **TAT Breaches (Time Range)** | Log filter for TAT breaches | Loki logs | TAT breach events logged |
| **Auth Failures (Time Range)** | Log filter for auth failures | Loki logs | Authentication failure events |
| **Recent Errors & Warnings** | `{job="re-workflow-backend"} \|= "error" or "warn"` | Loki logs | Live log stream of errors and warnings |
---
### **Section 3: Node.js Runtime** (Process-Level Metrics)
| Panel Name | Metric Query | Data Source | What It Measures |
|------------|--------------|-------------|------------------|
| **Node.js Process Memory (Heap)** | `process_resident_memory_bytes{job="re-workflow-backend"}` <br> `nodejs_heap_size_used_bytes{job="re-workflow-backend"}` <br> `nodejs_heap_size_total_bytes{job="re-workflow-backend"}` | Node.js metrics (prom-client) | Node.js process memory usage: <br>- RSS (Resident Set Size) <br>- Heap Used <br>- Heap Total |
| **Node.js Event Loop Lag** | `nodejs_eventloop_lag_seconds{job="re-workflow-backend"}` | Node.js metrics | Event loop lag in seconds (high = performance issue) |
| **Node.js Active Handles & Requests** | `nodejs_active_handles_total{job="re-workflow-backend"}` <br> `nodejs_active_requests_total{job="re-workflow-backend"}` | Node.js metrics | Active file handles and pending async requests |
| **Node.js Process CPU Usage** | `rate(process_cpu_seconds_total{job="re-workflow-backend"}[5m])` | Node.js metrics | CPU usage by Node.js process only (0-1 = 0-100%) |
**Key Point**: These metrics track the **Node.js application process** specifically, not the entire host system.
---
### **Section 4: Redis & Queue Status**
| Panel Name | Metric Query | Data Source | What It Measures |
|------------|--------------|-------------|------------------|
| **Redis Status** | `redis_up` | Redis Exporter | Redis server status (1=up, 0=down) |
| **Redis Connections** | `redis_connected_clients` | Redis Exporter | Number of active client connections to Redis |
| **Redis Memory** | `redis_memory_used_bytes` | Redis Exporter | Memory used by Redis (bytes) |
| **TAT Queue Waiting** | `queue_jobs_waiting{queue_name="tatQueue"}` | Backend queue metrics | Jobs waiting in TAT notification queue |
| **Pause/Resume Queue Waiting** | `queue_jobs_waiting{queue_name="pauseResumeQueue"}` | Backend queue metrics | Jobs waiting in pause/resume queue |
| **TAT Queue Failed** | `queue_jobs_failed{queue_name="tatQueue"}` | Backend queue metrics | Failed TAT notification jobs (should be 0) |
| **Pause/Resume Queue Failed** | `queue_jobs_failed{queue_name="pauseResumeQueue"}` | Backend queue metrics | Failed pause/resume jobs (should be 0) |
| **All Queues - Job Status** | `queue_jobs_waiting` <br> `queue_jobs_active` <br> `queue_jobs_delayed` | Backend queue metrics | Timeline of job status across all queues (stacked) |
| **Redis Commands Rate** | `rate(redis_commands_processed_total[1m])` | Redis Exporter | Redis commands executed per second |
**Key Point**: Queue metrics are collected by the backend every 15 seconds via BullMQ queue API.
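A minimal sketch of that collection loop, assuming BullMQ `Queue` instances and `prom-client` gauges whose names match the queries above (the actual backend code may structure this differently):
```typescript
import { Gauge } from 'prom-client';
import { Queue } from 'bullmq';

const waitingGauge = new Gauge({
  name: 'queue_jobs_waiting',
  help: 'Jobs waiting per queue',
  labelNames: ['queue_name'],
});
const failedGauge = new Gauge({
  name: 'queue_jobs_failed',
  help: 'Failed jobs per queue',
  labelNames: ['queue_name'],
});

// Poll each queue every 15 seconds and update the gauges Prometheus scrapes.
export function startQueueMetrics(queues: Record<string, Queue>, intervalMs = 15_000) {
  setInterval(async () => {
    for (const [name, queue] of Object.entries(queues)) {
      const counts = await queue.getJobCounts('waiting', 'failed');
      waitingGauge.set({ queue_name: name }, counts.waiting ?? 0);
      failedGauge.set({ queue_name: name }, counts.failed ?? 0);
    }
  }, intervalMs);
}
```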
---
### **Section 5: System Resources (Host)** (Host-Level Metrics)
| Panel Name | Metric Query | Data Source | What It Measures |
|------------|--------------|-------------|------------------|
| **Host CPU Usage (All Cores)** | `100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)` | Node Exporter | Total CPU usage across all cores on host machine (%) |
| **Host Memory Usage (RAM)** | `(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100` | Node Exporter | RAM usage on host machine (%) |
| **Host Disk Usage (/root)** | `100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="tmpfs"} / node_filesystem_size_bytes{mountpoint="/",fstype!="tmpfs"}) * 100)` | Node Exporter | Disk usage of root filesystem (%) |
| **Disk Space Left** | `node_filesystem_avail_bytes{mountpoint="/",fstype!="tmpfs"}` | Node Exporter | Available disk space in gigabytes |
**Key Point**: These metrics track the **entire host system**, not just the Node.js process.
---
## 🔍 Data Source Summary
| Exporter/Service | Port | Metrics Provided | Collection Interval |
|------------------|------|------------------|---------------------|
| **RE Workflow Backend** | 5000 | HTTP metrics, custom business metrics, Node.js runtime | 10s (Prometheus scrape) |
| **Node Exporter** | 9100 | Host system metrics (CPU, memory, disk, network) | 15s (Prometheus scrape) |
| **Redis Exporter** | 9121 | Redis server metrics (connections, memory, commands) | 15s (Prometheus scrape) |
| **Queue Metrics** | 5000 | BullMQ queue job counts (via backend) | 15s (internal collection) |
| **Loki** | 3100 | Application logs | Real-time streaming |
---
## 🎯 Renamed Panels for Clarity
### Before → After
**Node.js Runtime Section:**
- ❌ "Memory Usage" → ✅ "Node.js Process Memory (Heap)"
- ❌ "CPU Usage" → ✅ "Node.js Process CPU Usage"
- ❌ "Event Loop Lag" → ✅ "Node.js Event Loop Lag"
- ❌ "Active Handles & Requests" → ✅ "Node.js Active Handles & Requests"
**System Resources Section:**
- ❌ "System CPU Usage" → ✅ "Host CPU Usage (All Cores)"
- ❌ "System Memory Usage" → ✅ "Host Memory Usage (RAM)"
- ❌ "System Disk Usage" → ✅ "Host Disk Usage (/root)"
---
## 📈 Understanding the Difference
### **Process vs Host Metrics**
| Aspect | Node.js Process Metrics | Host System Metrics |
|--------|------------------------|---------------------|
| **Scope** | Single Node.js application | Entire server/container |
| **CPU** | CPU used by Node.js only | CPU used by all processes |
| **Memory** | Node.js heap memory | Total RAM on machine |
| **Purpose** | Application performance | Infrastructure health |
| **Example Use** | Detect memory leaks in app | Ensure server has capacity |
**Example Scenario:**
- **Node.js Process CPU**: 15% → Your app is using 15% of one CPU core
- **Host CPU Usage**: 75% → The entire server is at 75% CPU (all processes combined)
---
## 🚨 Alert Thresholds (Recommended)
| Metric | Warning | Critical | Action |
|--------|---------|----------|--------|
| **Node.js Process Memory** | 80% of heap | 90% of heap | Investigate memory leaks |
| **Host Memory Usage** | 70% | 85% | Scale up or optimize |
| **Host CPU Usage** | 60% | 80% | Scale horizontally |
| **Redis Memory** | 500MB | 1GB | Review Redis usage |
| **Queue Jobs Waiting** | >10 | >50 | Check worker health |
| **Queue Jobs Failed** | >0 | >5 | Immediate investigation |
| **Event Loop Lag** | >100ms | >500ms | Performance optimization needed |
---
## 🔧 Troubleshooting
### No Data Showing?
1. **Check Prometheus Targets**: http://localhost:9090/targets
- All targets should show "UP" status
2. **Test Metric Availability**:
```promql
up{job="re-workflow-backend"}
```
Should return `1`
3. **Check Time Range**: Set to "Last 15 minutes" in Grafana
4. **Verify Backend**: http://localhost:5000/metrics should show all metrics
### Metrics Not Updating?
1. **Backend**: Ensure backend is running with metrics collection enabled
2. **Prometheus**: Check scrape interval in prometheus.yml
3. **Queue Metrics**: Verify queue metrics collection started (check backend logs for "Queue Metrics ✅")
---
## 📚 Additional Resources
- **Prometheus Query Language**: https://prometheus.io/docs/prometheus/latest/querying/basics/
- **Grafana Dashboard Guide**: https://grafana.com/docs/grafana/latest/dashboards/
- **Node Exporter Metrics**: https://github.com/prometheus/node_exporter
- **Redis Exporter Metrics**: https://github.com/oliver006/redis_exporter
- **BullMQ Monitoring**: https://docs.bullmq.io/guide/metrics

View File

@ -1,291 +0,0 @@
# RE Workflow Monitoring Stack
Complete monitoring solution with **Grafana**, **Prometheus**, **Loki**, and **Promtail** for the RE Workflow Management System.
## 🏗️ Architecture
```
┌────────────────────────────────────────────────────────────────────────┐
│ RE Workflow System │
├────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Node.js API │────│ PostgreSQL │────│ Redis │ │
│ │ (Port 5000) │ │ (Port 5432) │ │ (Port 6379) │ │
│ └────────┬────────┘ └─────────────────┘ └─────────────────┘ │
│ │ │
│ │ /metrics endpoint │
│ │ Log files (./logs/) │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Monitoring Stack │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │ │
│ │ │ Prometheus │──│ Loki │──│ Promtail │ │ │
│ │ │ (Port 9090)│ │ (Port 3100) │ │ (Collects log files) │ │ │
│ │ └──────┬──────┘ └──────┬──────┘ └─────────────────────────┘ │ │
│ │ │ │ │ │
│ │ └────────┬───────┘ │ │
│ │ ▼ │ │
│ │ ┌─────────────────┐ │ │
│ │ │ Grafana │ │ │
│ │ │ (Port 3001) │◄── Pre-configured Dashboards │ │
│ │ └─────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────────┘
```
## 📦 What's Included
The monitoring stack includes:
- **Redis** - In-memory data store for BullMQ job queues
- **Prometheus** - Metrics collection and storage
- **Grafana** - Visualization and dashboards
- **Loki** - Log aggregation
- **Promtail** - Log shipping agent
- **Node Exporter** - Host system metrics
- **Redis Exporter** - Redis server metrics
- **Alertmanager** - Alert routing and notifications
## 🚀 Quick Start
### Prerequisites
- **Docker Desktop** installed and running
- **WSL2** enabled (recommended for Windows)
- Backend API running on port 5000
### Step 1: Start Monitoring Stack
```powershell
# Navigate to monitoring folder
cd C:\Laxman\Royal_Enfield\Re_Backend\monitoring
# Start all monitoring services
docker-compose -f docker-compose.monitoring.yml up -d
# Check status
docker ps
```
### Step 2: Configure Backend Environment
Add these to your backend `.env` file:
```env
# Loki configuration (for direct log shipping from Winston)
LOKI_HOST=http://localhost:3100
# Optional: Basic auth if enabled
# LOKI_USER=your_username
# LOKI_PASSWORD=your_password
```
### Step 3: Access Dashboards
| Service | URL | Credentials |
|---------|-----|-------------|
| **Grafana** | http://localhost:3001 | admin / REWorkflow@2024 |
| **Prometheus** | http://localhost:9090 | - |
| **Loki** | http://localhost:3100 | - |
| **Alertmanager** | http://localhost:9093 | - |
## 📊 Available Dashboards
### **RE Workflow Overview** (Enhanced!)
**URL**: http://localhost:3001/d/re-workflow-overview
**Sections:**
1. **📊 API Overview**
- Request rate, error rate, response times
- HTTP status codes distribution
2. **🔴 Redis & Queue Status** (NEW!)
- Redis connection status (Up/Down)
- Redis active connections
- Redis memory usage
- TAT Queue waiting/failed jobs
- Pause/Resume Queue waiting/failed jobs
- All queues job status timeline
- Redis commands rate
3. **💻 System Resources** (NEW!)
- System CPU Usage (gauge)
- System Memory Usage (gauge)
- System Disk Usage (gauge)
- Disk Space Left (GB available)
4. **🔄 Business Metrics**
- Workflow operations
- TAT breaches
- Node.js process metrics
**Refresh Rate**: Auto-refresh every 30 seconds
### 1. RE Workflow Overview
Pre-configured dashboard with:
- **API Metrics**: Request rate, error rate, latency percentiles
- **Logs Overview**: Error count, warnings, TAT breaches
- **Node.js Runtime**: Memory usage, event loop lag, CPU
### 2. Custom LogQL Queries
| Purpose | Query |
|---------|-------|
| All errors | `{app="re-workflow"} \| json \| level="error"` |
| TAT breaches | `{app="re-workflow"} \| json \| tatEvent="breached"` |
| Auth failures | `{app="re-workflow"} \| json \| authEvent="auth_failure"` |
| Slow requests (>3s) | `{app="re-workflow"} \| json \| duration>3000` |
| By user | `{app="re-workflow"} \| json \| userId="USER-ID"` |
| By request | `{app="re-workflow"} \| json \| requestId="REQ-XXX"` |
### 3. PromQL Queries (Prometheus)
| Purpose | Query |
|---------|-------|
| Request rate | `rate(http_requests_total{job="re-workflow-backend"}[5m])` |
| Error rate | `rate(http_request_errors_total[5m]) / rate(http_requests_total[5m])` |
| P95 latency | `histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))` |
| Memory usage | `process_resident_memory_bytes{job="re-workflow-backend"}` |
| Event loop lag | `nodejs_eventloop_lag_seconds{job="re-workflow-backend"}` |
## 📁 File Structure
```
monitoring/
├── docker-compose.monitoring.yml # Main compose file
├── prometheus/
│ ├── prometheus.yml # Prometheus configuration
│ └── alert.rules.yml # Alert rules
├── loki/
│ └── loki-config.yml # Loki configuration
├── promtail/
│ └── promtail-config.yml # Promtail log shipper config
├── alertmanager/
│ └── alertmanager.yml # Alert notification config
└── grafana/
├── provisioning/
│ ├── datasources/
│ │ └── datasources.yml # Auto-configure data sources
│ └── dashboards/
│ └── dashboards.yml # Dashboard provisioning
└── dashboards/
└── re-workflow-overview.json # Pre-built dashboard
```
## 🔧 Configuration
### Prometheus Scrape Targets
Edit `prometheus/prometheus.yml` to add/modify scrape targets:
```yaml
scrape_configs:
- job_name: 're-workflow-backend'
static_configs:
# For local development (backend outside Docker)
- targets: ['host.docker.internal:5000']
# For Docker deployment (backend in Docker)
# - targets: ['re_workflow_backend:5000']
```
### Log Retention
Edit `loki/loki-config.yml`:
```yaml
limits_config:
retention_period: 15d # Adjust retention period
```
### Alert Notifications
Edit `alertmanager/alertmanager.yml` to configure:
- **Email** notifications
- **Slack** webhooks
- **Custom** webhook endpoints
## 🛠️ Common Commands
```powershell
# Start services
docker-compose -f docker-compose.monitoring.yml up -d
# Stop services
docker-compose -f docker-compose.monitoring.yml down
# View logs
docker-compose -f docker-compose.monitoring.yml logs -f
# View specific service logs
docker-compose -f docker-compose.monitoring.yml logs -f grafana
# Restart a service
docker-compose -f docker-compose.monitoring.yml restart prometheus
# Check service health
docker ps
# Remove all data (fresh start)
docker-compose -f docker-compose.monitoring.yml down -v
```
## ⚡ Metrics Exposed by Backend
The backend exposes these metrics at `/metrics`:
### HTTP Metrics
- `http_requests_total` - Total HTTP requests (by method, route, status)
- `http_request_duration_seconds` - Request latency histogram
- `http_request_errors_total` - Error count (4xx, 5xx)
- `http_active_connections` - Current active connections
### Business Metrics
- `tat_breaches_total` - TAT breach events
- `pending_workflows_count` - Pending workflow gauge
- `workflow_operations_total` - Workflow operation count
- `auth_events_total` - Authentication events
### Node.js Runtime
- `nodejs_heap_size_*` - Heap memory metrics
- `nodejs_eventloop_lag_*` - Event loop lag
- `process_cpu_*` - CPU usage
- `process_resident_memory_bytes` - RSS memory
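As an illustration of how the HTTP metrics above are typically produced, here is a minimal middleware sketch assuming Express and `prom-client` (metric names match the queries in this document; the real backend implementation may differ):
```typescript
import express from 'express';
import client from 'prom-client';

const httpRequests = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests',
  labelNames: ['method', 'route', 'status'],
});
const httpDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request latency in seconds',
  labelNames: ['method', 'route', 'status'],
});

export function metricsMiddleware(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
) {
  const endTimer = httpDuration.startTimer();
  res.on('finish', () => {
    const labels = {
      method: req.method,
      route: req.route?.path ?? req.path,
      status: String(res.statusCode),
    };
    httpRequests.inc(labels); // count the request
    endTimer(labels);         // record its duration
  });
  next();
}

// Expose /metrics for Prometheus to scrape, e.g.:
// app.get('/metrics', async (_req, res) =>
//   res.type('text/plain').send(await client.register.metrics()));
```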
## 🔒 Security Notes
1. **Change default passwords** in production
2. **Enable TLS** for external access
3. **Configure firewall** to restrict access to monitoring ports
4. **Use reverse proxy** (nginx) for HTTPS
## 🐛 Troubleshooting
### Prometheus can't scrape backend
1. Ensure backend is running on port 5000
2. Check `/metrics` endpoint: `curl http://localhost:5000/metrics`
3. For Docker: use `host.docker.internal:5000`
### Logs not appearing in Loki
1. Check Promtail logs: `docker logs re_promtail`
2. Verify log file path in `promtail-config.yml`
3. Ensure backend has `LOKI_HOST` configured
### Grafana dashboards empty
1. Wait 30-60 seconds for data collection
2. Check data source configuration in Grafana
3. Verify time range selection
### Docker memory issues
```powershell
# Increase Docker Desktop memory allocation
# Settings → Resources → Memory → 4GB+
```
## 📞 Support
For issues with the monitoring stack:
1. Check container logs: `docker logs <container_name>`
2. Verify configuration files syntax
3. Ensure Docker Desktop is running with sufficient resources

View File

@ -1,176 +0,0 @@
# 🔄 Redis Migration Guide
## Overview
Redis is now part of the monitoring stack and running locally in Docker.
---
## ✅ What Was Done
1. **Added Redis to monitoring stack**
- Image: `redis:7-alpine`
- Container name: `re_redis`
- Port: `6379` (mapped to host)
- Password: Uses `REDIS_PASSWORD` from environment (default: `Redis@123`)
- Data persistence: Volume `re_redis_data`
2. **Updated Redis Exporter**
- Now connects to local Redis container
- Automatically starts after Redis is healthy
---
## 🔧 Update Backend Configuration
### Step 1: Update `.env` file
Open `Re_Backend/.env` and change:
```bash
# OLD (External Redis)
REDIS_HOST=160.187.166.17
# NEW (Local Docker Redis)
REDIS_HOST=localhost
```
Or if you want to use Docker network (recommended for production):
```bash
REDIS_HOST=re_redis # Use container name if backend is also in Docker
```
### Step 2: Restart Backend
```powershell
# Stop backend (Ctrl+C in terminal)
# Then restart
npm run dev
```
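For reference, a minimal sketch of how the backend can consume these variables, assuming `ioredis` (which BullMQ uses under the hood); check your actual Redis/queue setup module for the real implementation:
```typescript
import Redis from 'ioredis';

export const redisConnection = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT || 6379),
  password: process.env.REDIS_PASSWORD, // e.g. Redis@123 for the local Docker container
  maxRetriesPerRequest: null,           // required by BullMQ workers
});
```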
---
## 📊 Verify Everything Works
### 1. Check Redis is Running
```powershell
docker ps --filter "name=redis"
```
Should show:
```
re_redis Up (healthy)
re_redis_exporter Up
```
### 2. Test Redis Connection
```powershell
# Test from host
redis-cli -h localhost -p 6379 -a Redis@123 ping
# Should return: PONG
```
### 3. Check Backend Logs
```
[info]: [Redis] ✅ Connected successfully
[info]: [TAT Queue] ✅ Queue initialized
[info]: [Pause Resume Queue] ✅ Queue initialized
```
### 4. Refresh Grafana Dashboard
- Go to: http://localhost:3001/d/re-workflow-overview
- **Redis Status** should show "Up" (green)
- **Redis Connections** should show a number
- **Redis Memory** should show bytes used
- **Queue metrics** should work as before
---
## 🎯 Benefits of Local Redis
- ✅ **Simpler Setup** - Everything in one place
- ✅ **Faster Performance** - Local network, no external latency
- ✅ **Data Persistence** - Redis data saved in Docker volume
- ✅ **Easy Monitoring** - Redis Exporter automatically connected
- ✅ **Environment Isolation** - No conflicts with external Redis
---
## 🔄 Docker Commands
### Start all monitoring services (including Redis)
```powershell
cd Re_Backend/monitoring
docker-compose -f docker-compose.monitoring.yml up -d
```
### Stop all services
```powershell
docker-compose -f docker-compose.monitoring.yml down
```
### View Redis logs
```powershell
docker logs re_redis
```
### Redis CLI access
```powershell
docker exec -it re_redis redis-cli -a Redis@123
```
### Check Redis data
```powershell
# Inside redis-cli
INFO
DBSIZE
KEYS *
```
---
## 🗄️ Data Persistence
Redis data is persisted in Docker volume:
```
Volume: re_redis_data
Location: Docker managed volume
```
To backup Redis data:
```powershell
docker exec re_redis redis-cli -a Redis@123 SAVE
docker cp re_redis:/data/dump.rdb ./redis-backup.rdb
```
---
## ⚠️ If You Want to Keep External Redis
If you prefer to keep using the external Redis server, simply:
1. Update `docker-compose.monitoring.yml`:
```yaml
redis-exporter:
environment:
- REDIS_ADDR=redis://160.187.166.17:6379
command:
- '--redis.addr=160.187.166.17:6379'
- '--redis.password=Redis@123'
```
2. Don't change `.env` in backend
3. Remove the `redis` service from docker-compose if you don't need it locally
---
## 🎉 Summary
Your setup now includes:
- ✅ Redis running locally in Docker
- ✅ Redis Exporter connected and working
- ✅ Backend ready to connect (just update `REDIS_HOST=localhost` in `.env`)
- ✅ All monitoring metrics available in Grafana
**Next step**: Update `Re_Backend/.env` with `REDIS_HOST=localhost` and restart your backend!

View File

@ -1,88 +0,0 @@
# =============================================================================
# Alertmanager Configuration for RE Workflow
# =============================================================================
global:
# Global configuration options
resolve_timeout: 5m
# Route configuration
route:
# Default receiver
receiver: 'default-receiver'
# Group alerts by these labels
group_by: ['alertname', 'service', 'severity']
# Wait before sending grouped notifications
group_wait: 30s
# Interval for sending updates for a group
group_interval: 5m
# Interval for resending notifications
repeat_interval: 4h
# Child routes for specific routing
routes:
# Critical alerts - immediate notification
- match:
severity: critical
receiver: 'critical-receiver'
group_wait: 10s
repeat_interval: 1h
# Warning alerts
- match:
severity: warning
receiver: 'warning-receiver'
group_wait: 1m
repeat_interval: 4h
# Receivers configuration
receivers:
# Default receiver (logs to console)
- name: 'default-receiver'
# Webhook receiver for testing
webhook_configs:
- url: 'http://localhost:5000/api/webhooks/alerts'
send_resolved: true
# Critical alerts receiver
- name: 'critical-receiver'
# Configure email notifications
# email_configs:
# - to: 'devops@royalenfield.com'
# from: 'alerts@royalenfield.com'
# smarthost: 'smtp.gmail.com:587'
# auth_username: 'alerts@royalenfield.com'
# auth_password: 'your-app-password'
# send_resolved: true
# Slack notifications (uncomment and configure)
# slack_configs:
# - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
# channel: '#alerts-critical'
# send_resolved: true
# title: '{{ .Status | toUpper }}: {{ .CommonAnnotations.summary }}'
# text: '{{ .CommonAnnotations.description }}'
webhook_configs:
- url: 'http://host.docker.internal:5000/api/webhooks/alerts'
send_resolved: true
# Warning alerts receiver
- name: 'warning-receiver'
webhook_configs:
- url: 'http://host.docker.internal:5000/api/webhooks/alerts'
send_resolved: true
# Inhibition rules - prevent duplicate alerts
inhibit_rules:
# If critical alert fires, inhibit warning alerts for same alertname
- source_match:
severity: 'critical'
target_match:
severity: 'warning'
equal: ['alertname', 'service']

View File

@ -1 +0,0 @@
Redis

View File

@ -1,213 +0,0 @@
# =============================================================================
# RE Workflow - Complete Monitoring Stack
# Docker Compose for Grafana, Prometheus, Loki, and Promtail
# =============================================================================
# Usage:
# cd monitoring
# docker-compose -f docker-compose.monitoring.yml up -d
# =============================================================================
version: '3.8'
services:
# ===========================================================================
# REDIS - In-Memory Data Store (for BullMQ queues)
# ===========================================================================
redis:
image: redis:7-alpine
container_name: re_redis
ports:
- "${REDIS_PORT:-6379}:6379"
command: redis-server --requirepass ${REDIS_PASSWORD:-Redis@123}
volumes:
- redis_data:/data
networks:
- monitoring_network
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
interval: 10s
timeout: 3s
retries: 5
# ===========================================================================
# PROMETHEUS - Metrics Collection
# ===========================================================================
prometheus:
image: prom/prometheus:v2.47.2
container_name: re_prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- ./prometheus/alert.rules.yml:/etc/prometheus/alert.rules.yml:ro
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=15d'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--web.enable-lifecycle'
networks:
- monitoring_network
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
# ===========================================================================
# LOKI - Log Aggregation
# ===========================================================================
loki:
image: grafana/loki:2.9.2
container_name: re_loki
ports:
- "3100:3100"
volumes:
- ./loki/loki-config.yml:/etc/loki/local-config.yaml:ro
- loki_data:/loki
command: -config.file=/etc/loki/local-config.yaml
networks:
- monitoring_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1"]
interval: 30s
timeout: 10s
retries: 5
# ===========================================================================
# PROMTAIL - Log Shipping Agent
# ===========================================================================
promtail:
image: grafana/promtail:2.9.2
container_name: re_promtail
volumes:
- ./promtail/promtail-config.yml:/etc/promtail/config.yml:ro
- ../logs:/var/log/app:ro
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- promtail_data:/tmp/promtail
command: -config.file=/etc/promtail/config.yml
depends_on:
- loki
networks:
- monitoring_network
restart: unless-stopped
# ===========================================================================
# GRAFANA - Visualization & Dashboards
# ===========================================================================
grafana:
image: grafana/grafana:10.2.2
container_name: re_grafana
ports:
- "3001:3000" # Using 3001 to avoid conflict with React frontend (3000)
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=REWorkflow@2024
- GF_USERS_ALLOW_SIGN_UP=false
- GF_FEATURE_TOGGLES_ENABLE=publicDashboards
- GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource,grafana-piechart-panel
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:ro
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards:ro
- ./grafana/dashboards:/var/lib/grafana/dashboards:ro
depends_on:
- prometheus
- loki
networks:
- monitoring_network
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
# ===========================================================================
# NODE EXPORTER - Host Metrics (Optional but recommended)
# ===========================================================================
node-exporter:
image: prom/node-exporter:v1.6.1
container_name: re_node_exporter
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- monitoring_network
restart: unless-stopped
# ===========================================================================
# REDIS EXPORTER - Redis Metrics
# ===========================================================================
redis-exporter:
image: oliver006/redis_exporter:v1.55.0
container_name: re_redis_exporter
ports:
- "9121:9121"
environment:
- REDIS_ADDR=redis://redis:6379
- REDIS_PASSWORD=Redis@123
command:
- '--redis.addr=redis:6379'
- '--redis.password=Redis@123'
networks:
- monitoring_network
depends_on:
redis:
condition: service_healthy
restart: unless-stopped
# ===========================================================================
# ALERTMANAGER - Alert Notifications (Optional)
# ===========================================================================
alertmanager:
image: prom/alertmanager:v0.26.0
container_name: re_alertmanager
ports:
- "9093:9093"
volumes:
- ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
- alertmanager_data:/alertmanager
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--storage.path=/alertmanager'
networks:
- monitoring_network
restart: unless-stopped
# ===========================================================================
# NETWORKS
# ===========================================================================
networks:
monitoring_network:
driver: bridge
name: re_monitoring_network
# ===========================================================================
# VOLUMES
# ===========================================================================
volumes:
redis_data:
name: re_redis_data
prometheus_data:
name: re_prometheus_data
loki_data:
name: re_loki_data
promtail_data:
name: re_promtail_data
grafana_data:
name: re_grafana_data
alertmanager_data:
name: re_alertmanager_data

File diff suppressed because it is too large


@ -1,19 +0,0 @@
# =============================================================================
# Grafana Dashboards Provisioning
# Auto-loads dashboards from JSON files
# =============================================================================
apiVersion: 1
providers:
- name: 'RE Workflow Dashboards'
orgId: 1
folder: 'RE Workflow'
folderUid: 're-workflow'
type: file
disableDeletion: false
updateIntervalSeconds: 30
allowUiUpdates: true
options:
path: /var/lib/grafana/dashboards


@ -1,43 +0,0 @@
# =============================================================================
# Grafana Datasources Provisioning
# Auto-configures Prometheus and Loki as data sources
# =============================================================================
apiVersion: 1
datasources:
# Prometheus - Metrics
- name: Prometheus
uid: prometheus
type: prometheus
access: proxy
url: http://prometheus:9090
isDefault: true
editable: false
jsonData:
httpMethod: POST
manageAlerts: true
prometheusType: Prometheus
prometheusVersion: 2.47.2
# Loki - Logs
- name: Loki
uid: loki
type: loki
access: proxy
url: http://loki:3100
editable: false
jsonData:
maxLines: 1000
timeout: 60
# Alertmanager
- name: Alertmanager
uid: alertmanager
type: alertmanager
access: proxy
url: http://alertmanager:9093
editable: false
jsonData:
implementation: prometheus


@ -1,79 +0,0 @@
# =============================================================================
# Loki Configuration for RE Workflow
# =============================================================================
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
log_level: info
common:
instance_addr: 127.0.0.1
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
# Query range settings
query_range:
results_cache:
cache:
embedded_cache:
enabled: true
max_size_mb: 100
# Schema configuration
schema_config:
configs:
- from: 2020-10-24
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
# Ingestion limits
limits_config:
retention_period: 15d # Keep logs for 15 days
ingestion_rate_mb: 10 # 10MB/s ingestion rate
ingestion_burst_size_mb: 20 # 20MB burst
max_streams_per_user: 10000 # Max number of streams
max_line_size: 256kb # Max log line size
max_entries_limit_per_query: 5000 # Max entries per query
max_query_length: 721h # Max query time range (30 days)
# Compactor for retention
compactor:
working_directory: /loki/compactor
retention_enabled: true
retention_delete_delay: 2h
delete_request_store: filesystem
compaction_interval: 10m
# Ruler configuration (for alerting)
ruler:
alertmanager_url: http://alertmanager:9093
storage:
type: local
local:
directory: /loki/rules
rule_path: /loki/rules-temp
enable_api: true
# Table manager (for index retention)
table_manager:
retention_deletes_enabled: true
retention_period: 360h # 15 days
# Analytics (optional - disable for privacy)
analytics:
reporting_enabled: false


@ -1,150 +0,0 @@
# =============================================================================
# Prometheus Alert Rules for RE Workflow
# =============================================================================
groups:
# ===========================================================================
# Backend API Alerts
# ===========================================================================
- name: re-workflow-backend
interval: 30s
rules:
# High Error Rate
- alert: HighErrorRate
expr: rate(http_request_errors_total{job="re-workflow-backend"}[5m]) > 0.1
for: 5m
labels:
severity: critical
service: backend
annotations:
summary: "High error rate detected in RE Workflow Backend"
description: "Error rate is {{ $value | printf \"%.2f\" }} errors/sec for the last 5 minutes."
# High Request Latency
- alert: HighRequestLatency
expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket{job="re-workflow-backend"}[5m])) > 2
for: 5m
labels:
severity: warning
service: backend
annotations:
summary: "High API latency detected"
description: "95th percentile latency is {{ $value | printf \"%.2f\" }}s"
# API Down
- alert: BackendDown
expr: up{job="re-workflow-backend"} == 0
for: 1m
labels:
severity: critical
service: backend
annotations:
summary: "RE Workflow Backend is DOWN"
description: "Backend API has been unreachable for more than 1 minute."
# High Memory Usage
- alert: HighMemoryUsage
expr: process_resident_memory_bytes{job="re-workflow-backend"} / 1024 / 1024 > 500
for: 10m
labels:
severity: warning
service: backend
annotations:
summary: "High memory usage in Backend"
description: "Memory usage is {{ $value | printf \"%.0f\" }}MB"
# Event Loop Lag
- alert: HighEventLoopLag
expr: nodejs_eventloop_lag_seconds{job="re-workflow-backend"} > 0.5
for: 5m
labels:
severity: warning
service: backend
annotations:
summary: "High Node.js event loop lag"
description: "Event loop lag is {{ $value | printf \"%.3f\" }}s"
# ===========================================================================
# TAT/Workflow Alerts
# ===========================================================================
- name: re-workflow-tat
interval: 1m
rules:
# TAT Breach Rate
- alert: HighTATBreachRate
expr: increase(tat_breaches_total[1h]) > 10
for: 5m
labels:
severity: warning
service: workflow
annotations:
summary: "High TAT breach rate detected"
description: "{{ $value | printf \"%.0f\" }} TAT breaches in the last hour"
# Pending Workflows Queue
- alert: LargePendingQueue
expr: pending_workflows_count > 100
for: 30m
labels:
severity: warning
service: workflow
annotations:
summary: "Large number of pending workflows"
description: "{{ $value | printf \"%.0f\" }} workflows pending approval"
# ===========================================================================
# Infrastructure Alerts
# ===========================================================================
- name: infrastructure
interval: 30s
rules:
# High CPU Usage (Node Exporter)
- alert: HighCPUUsage
expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
for: 10m
labels:
severity: warning
service: infrastructure
annotations:
summary: "High CPU usage on {{ $labels.instance }}"
description: "CPU usage is {{ $value | printf \"%.1f\" }}%"
# High Disk Usage
- alert: HighDiskUsage
expr: (node_filesystem_size_bytes - node_filesystem_free_bytes) / node_filesystem_size_bytes * 100 > 85
for: 10m
labels:
severity: warning
service: infrastructure
annotations:
summary: "High disk usage on {{ $labels.instance }}"
description: "Disk usage is {{ $value | printf \"%.1f\" }}%"
# Low Memory
- alert: LowMemory
expr: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 < 15
for: 5m
labels:
severity: critical
service: infrastructure
annotations:
summary: "Low memory on {{ $labels.instance }}"
description: "Available memory is {{ $value | printf \"%.1f\" }}%"
# ===========================================================================
# Loki/Logging Alerts
# ===========================================================================
- name: logging
interval: 1m
rules:
# Loki Down
- alert: LokiDown
expr: up{job="loki"} == 0
for: 2m
labels:
severity: critical
service: loki
annotations:
summary: "Loki is DOWN"
description: "Loki has been unreachable for more than 2 minutes."


@ -1,61 +0,0 @@
# =============================================================================
# Prometheus Configuration for RE Workflow (Full Docker Stack)
# Use this when running docker-compose.full.yml
# =============================================================================
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
monitor: 're-workflow-monitor'
environment: 'docker'
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
rule_files:
- /etc/prometheus/alert.rules.yml
scrape_configs:
# Prometheus Self-Monitoring
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
labels:
service: 'prometheus'
# RE Workflow Backend (running in Docker)
- job_name: 're-workflow-backend'
static_configs:
- targets: ['re_workflow_backend:5000']
labels:
service: 'backend'
environment: 'docker'
metrics_path: /metrics
scrape_interval: 10s
scrape_timeout: 5s
# Node Exporter
- job_name: 'node-exporter'
static_configs:
- targets: ['node-exporter:9100']
labels:
service: 'node-exporter'
# Loki
- job_name: 'loki'
static_configs:
- targets: ['loki:3100']
labels:
service: 'loki'
# Grafana
- job_name: 'grafana'
static_configs:
- targets: ['grafana:3000']
labels:
service: 'grafana'


@ -1,101 +0,0 @@
# =============================================================================
# Prometheus Configuration for RE Workflow
# =============================================================================
global:
scrape_interval: 15s # How frequently to scrape targets
evaluation_interval: 15s # How frequently to evaluate rules
external_labels:
monitor: 're-workflow-monitor'
environment: 'development'
# Alerting configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
# Rule files
rule_files:
- /etc/prometheus/alert.rules.yml
# Scrape configurations
scrape_configs:
# ============================================
# Prometheus Self-Monitoring
# ============================================
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
labels:
service: 'prometheus'
# ============================================
# RE Workflow Backend API Metrics
# ============================================
- job_name: 're-workflow-backend'
static_configs:
# Option 1: Backend running locally (outside Docker monitoring stack)
- targets: ['host.docker.internal:5000']
labels:
service: 'backend'
environment: 'development'
deployment: 'local'
# Option 2: Backend running in Docker (docker-compose.full.yml)
# Uncomment below and comment above when using full stack
# - targets: ['re_workflow_backend:5000']
# labels:
# service: 'backend'
# environment: 'development'
# deployment: 'docker'
metrics_path: /metrics
scrape_interval: 10s
scrape_timeout: 5s
# ============================================
# Node Exporter - Host Metrics
# ============================================
- job_name: 'node-exporter'
static_configs:
- targets: ['node-exporter:9100']
labels:
service: 'node-exporter'
# ============================================
# PostgreSQL Metrics (if using pg_exporter)
# ============================================
# - job_name: 'postgres'
# static_configs:
# - targets: ['postgres-exporter:9187']
# labels:
# service: 'postgresql'
# ============================================
# Redis Metrics
# ============================================
- job_name: 'redis'
static_configs:
- targets: ['redis-exporter:9121']
labels:
service: 'redis'
environment: 'development'
# ============================================
# Loki Metrics
# ============================================
- job_name: 'loki'
static_configs:
- targets: ['loki:3100']
labels:
service: 'loki'
# ============================================
# Grafana Metrics
# ============================================
- job_name: 'grafana'
static_configs:
- targets: ['grafana:3000']
labels:
service: 'grafana'


@ -1,129 +0,0 @@
# =============================================================================
# Promtail Configuration for RE Workflow
# Ships logs from application log files to Loki
# =============================================================================
server:
http_listen_port: 9080
grpc_listen_port: 0
# Positions file (tracks what's been read)
positions:
filename: /tmp/promtail/positions.yaml
# Loki client configuration
clients:
- url: http://loki:3100/loki/api/v1/push
batchwait: 1s
batchsize: 1048576 # 1MB
timeout: 10s
# Scrape configurations
scrape_configs:
# ============================================
# RE Workflow Backend Application Logs
# ============================================
- job_name: re-workflow-app
static_configs:
- targets:
- localhost
labels:
job: re-workflow
app: re-workflow
service: backend
__path__: /var/log/app/*.log
pipeline_stages:
# Parse JSON logs
- json:
expressions:
level: level
message: message
timestamp: timestamp
requestId: requestId
userId: userId
method: method
url: url
statusCode: statusCode
duration: duration
workflowEvent: workflowEvent
tatEvent: tatEvent
authEvent: authEvent
error: error
# Set log level as label
- labels:
level:
requestId:
workflowEvent:
tatEvent:
authEvent:
# Timestamp parsing
- timestamp:
source: timestamp
format: "2006-01-02 15:04:05"
fallback_formats:
- RFC3339
# Output stage
- output:
source: message
# ============================================
# Docker Container Logs (if running in Docker)
# ============================================
- job_name: docker-containers
static_configs:
- targets:
- localhost
labels:
job: docker
__path__: /var/lib/docker/containers/*/*-json.log
pipeline_stages:
# Parse Docker JSON format
- json:
expressions:
output: log
stream: stream
time: time
# Extract container info from path
- regex:
source: filename
expression: '/var/lib/docker/containers/(?P<container_id>[a-f0-9]+)/.*'
# Add labels
- labels:
stream:
container_id:
# Parse application JSON from log field
- json:
source: output
expressions:
level: level
message: message
service: service
# Add level as label if present
- labels:
level:
service:
# Output the log message
- output:
source: output
# ============================================
# System Logs (optional - for infrastructure monitoring)
# ============================================
# - job_name: system
# static_configs:
# - targets:
# - localhost
# labels:
# job: system
# __path__: /var/log/syslog
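The pipeline above expects each line under `/var/log/app/*.log` (mounted from `../logs` in the compose file) to be JSON carrying fields such as `level`, `message`, `timestamp`, `requestId`, and `workflowEvent`. A minimal winston sketch, assuming only the winston dependency from package.json, of a logger that would emit compatible lines; the sample field values are illustrative, and the project's own `@utils/logger` is not shown in this diff.

```typescript
import winston from 'winston';

// Writes JSON lines to ./logs so Promtail (which mounts ../logs at /var/log/app) can tail them.
// Field names mirror the pipeline_stages above.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }), // matches "2006-01-02 15:04:05" above
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ filename: 'logs/app.log' }),
    new winston.transports.Console(),
  ],
});

// Example structured entry; requestId/workflowEvent are promoted to labels by the labels stage
logger.info('Workflow approved', {
  requestId: 'REQ-1024',
  userId: 'user-42',
  workflowEvent: 'approval',
  duration: 182,
});

export default logger;
```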


@ -1,68 +0,0 @@
@echo off
echo ============================================================
echo RE Workflow Monitoring Stack - Startup Script
echo ============================================================
echo.
:: Check if Docker is running
docker info >nul 2>&1
if errorlevel 1 (
echo [ERROR] Docker is not running. Please start Docker Desktop first.
pause
exit /b 1
)
echo [INFO] Docker is running.
echo.
:: Navigate to monitoring directory
cd /d "%~dp0"
echo [INFO] Working directory: %cd%
echo.
:: Start monitoring stack
echo [INFO] Starting monitoring stack...
echo.
docker-compose -f docker-compose.monitoring.yml up -d
if errorlevel 1 (
echo.
echo [ERROR] Failed to start monitoring stack.
pause
exit /b 1
)
echo.
echo ============================================================
echo Monitoring Stack Started Successfully!
echo ============================================================
echo.
echo Services:
echo ---------------------------------------------------------
echo Grafana: http://localhost:3001
echo Username: admin
echo Password: REWorkflow@2024
echo.
echo Prometheus: http://localhost:9090
echo.
echo Loki: http://localhost:3100
echo.
echo Alertmanager: http://localhost:9093
echo ---------------------------------------------------------
echo.
echo Next Steps:
echo 1. Add LOKI_HOST=http://localhost:3100 to your .env file
echo 2. Restart your backend application
echo 3. Open Grafana at http://localhost:3001
echo 4. Navigate to Dashboards ^> RE Workflow
echo.
echo ============================================================
:: Show container status
echo.
echo [INFO] Container Status:
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo.
pause
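The "Next Steps" above suggest setting `LOKI_HOST=http://localhost:3100` so the backend can also push logs to Loki directly, complementing the file tailing Promtail already does. A hedged sketch using the winston-loki transport from package.json; only its commonly documented options (`host`, `labels`, `json`) are used, and the label values are illustrative.

```typescript
import winston from 'winston';
import LokiTransport from 'winston-loki';

// Sketch only: pushes structured logs straight to Loki at LOKI_HOST.
const logger = winston.createLogger({
  transports: [
    new LokiTransport({
      host: process.env.LOKI_HOST || 'http://localhost:3100',
      labels: { app: 're-workflow', service: 'backend' },
      json: true,
    }),
    new winston.transports.Console(),
  ],
});

logger.info('Backend connected to Loki');
```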


@ -1,36 +0,0 @@
@echo off
echo ============================================================
echo RE Workflow Monitoring Stack - Shutdown Script
echo ============================================================
echo.
:: Navigate to monitoring directory
cd /d "%~dp0"
echo [INFO] Stopping monitoring stack...
echo.
docker-compose -f docker-compose.monitoring.yml down
if errorlevel 1 (
echo.
echo [ERROR] Failed to stop monitoring stack.
pause
exit /b 1
)
echo.
echo ============================================================
echo Monitoring Stack Stopped Successfully!
echo ============================================================
echo.
echo Note: Data volumes are preserved. Use the following
echo command to remove all data:
echo.
echo docker-compose -f docker-compose.monitoring.yml down -v
echo.
echo ============================================================
echo.
pause

2544
package-lock.json generated

File diff suppressed because it is too large


@ -4,28 +4,32 @@
"description": "Royal Enfield Workflow Management System - Backend API (TypeScript)",
"main": "dist/server.js",
"scripts": {
"start": "npm run build && npm run start:prod && npm run setup",
"start": "node dist/server.js",
"dev": "npm run setup && nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
"dev:no-setup": "nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
"build": "tsc && tsc-alias",
"build": "tsc",
"build:watch": "tsc --watch",
"start:prod": "node dist/server.js",
"start:prod": "NODE_ENV=production node dist/server.js",
"test": "jest --coverage",
"test:unit": "jest --testPathPattern=tests/unit",
"test:integration": "jest --testPathPattern=tests/integration",
"test:watch": "jest --watch",
"lint": "eslint src/**/*.ts",
"lint:fix": "eslint src/**/*.ts --fix",
"format": "prettier --write \"src/**/*.ts\"",
"type-check": "tsc --noEmit",
"db:migrate": "sequelize-cli db:migrate",
"db:migrate:undo": "sequelize-cli db:migrate:undo",
"db:seed": "sequelize-cli db:seed:all",
"clean": "rm -rf dist",
"setup": "ts-node -r tsconfig-paths/register src/scripts/auto-setup.ts",
"migrate": "ts-node -r tsconfig-paths/register src/scripts/migrate.ts",
"seed:config": "ts-node -r tsconfig-paths/register src/scripts/seed-admin-config.ts",
"seed:test-dealer": "ts-node -r tsconfig-paths/register src/scripts/seed-test-dealer.ts",
"cleanup:dealer-claims": "ts-node -r tsconfig-paths/register src/scripts/cleanup-dealer-claims.ts"
"seed:config": "ts-node -r tsconfig-paths/register src/scripts/seed-admin-config.ts"
},
"dependencies": {
"@google-cloud/secret-manager": "^6.1.1",
"@google-cloud/storage": "^7.18.0",
"@google-cloud/vertexai": "^1.10.0",
"@types/nodemailer": "^7.0.4",
"@anthropic-ai/sdk": "^0.68.0",
"@google-cloud/storage": "^7.14.0",
"@google/generative-ai": "^0.24.1",
"@types/uuid": "^8.3.4",
"axios": "^1.7.9",
"bcryptjs": "^2.4.3",
@ -36,26 +40,22 @@
"dotenv": "^16.4.7",
"express": "^4.21.2",
"express-rate-limit": "^7.5.0",
"fast-xml-parser": "^5.3.3",
"helmet": "^8.0.0",
"ioredis": "^5.8.2",
"jsonwebtoken": "^9.0.2",
"morgan": "^1.10.0",
"multer": "^1.4.5-lts.1",
"node-cron": "^3.0.3",
"nodemailer": "^7.0.11",
"openai": "^6.8.1",
"passport": "^0.7.0",
"passport-jwt": "^4.0.1",
"pg": "^8.13.1",
"pg-hstore": "^2.3.4",
"prom-client": "^15.1.3",
"sequelize": "^6.37.5",
"socket.io": "^4.8.1",
"uuid": "^8.3.2",
"web-push": "^3.6.7",
"winston": "^3.17.0",
"winston-loki": "^6.1.3",
"zod": "^3.24.1"
},
"devDependencies": {
@ -67,7 +67,7 @@
"@types/jsonwebtoken": "^9.0.7",
"@types/morgan": "^1.9.9",
"@types/multer": "^1.4.12",
"@types/node": "^22.19.1",
"@types/node": "^22.10.5",
"@types/passport": "^1.0.16",
"@types/passport-jwt": "^4.0.1",
"@types/pg": "^8.15.6",
@ -84,7 +84,6 @@
"ts-jest": "^29.2.5",
"ts-node": "^10.9.2",
"ts-node-dev": "^2.0.0",
"tsc-alias": "^1.8.16",
"tsconfig-paths": "^4.2.0",
"typescript": "^5.7.2"
},


@ -1,11 +0,0 @@
{
"_comment": "Optional: Map Google Secret Manager secret names to environment variable names",
"_comment2": "If not provided, secrets are mapped automatically: secret-name -> SECRET_NAME (uppercase)",
"examples": {
"db-password": "DB_PASSWORD",
"jwt-secret-key": "JWT_SECRET",
"okta-client-secret": "OKTA_CLIENT_SECRET"
}
}
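The comments above describe an automatic fallback (secret-name → SECRET_NAME), with this file only needed for overrides. A tiny sketch of that rule as implied by the listed examples (uppercase plus dash-to-underscore); the real loader, `initializeGoogleSecretManager` imported in app.ts, is not shown here.

```typescript
// Fallback rule implied by the examples above: "db-password" -> "DB_PASSWORD".
// The override file matters where the automatic rule is not what you want,
// e.g. "jwt-secret-key" should populate JWT_SECRET rather than JWT_SECRET_KEY.
function secretToEnvName(secretName: string): string {
  return secretName.toUpperCase().replace(/-/g, '_');
}

console.log(secretToEnvName('db-password'));        // DB_PASSWORD
console.log(secretToEnvName('okta-client-secret')); // OKTA_CLIENT_SECRET
console.log(secretToEnvName('jwt-secret-key'));     // JWT_SECRET_KEY (overridden to JWT_SECRET by this file)
```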


@ -1,289 +0,0 @@
#!/bin/bash
# Environment Setup Script for Royal Enfield Workflow Backend
echo "=================================================="
echo "Royal Enfield - Backend Environment Setup"
echo "=================================================="
echo ""
# Function to generate random secret
generate_secret() {
openssl rand -base64 32 | tr -d "=+/" | cut -c1-32
}
# Function to create .env file
create_env_file() {
local env_type=$1
local file_name=$2
echo ""
echo "=================================================="
echo "Creating ${env_type} Environment File"
echo "=================================================="
echo ""
# Application Configuration
read -p "Enter NODE_ENV (development/production) [default: development]: " NODE_ENV
NODE_ENV=${NODE_ENV:-development}
read -p "Enter PORT [default: 5000]: " PORT
PORT=${PORT:-5000}
read -p "Enter BASE_URL (backend deployed URL): " BASE_URL
read -p "Enter FRONTEND_URL (frontend URL for CORS): " FRONTEND_URL
# Database Configuration
echo ""
echo "--- Database Configuration ---"
read -p "Enter DB_HOST [default: localhost]: " DB_HOST
DB_HOST=${DB_HOST:-localhost}
read -p "Enter DB_PORT [default: 5432]: " DB_PORT
DB_PORT=${DB_PORT:-5432}
read -p "Enter DB_NAME [default: re_workflow_db]: " DB_NAME
DB_NAME=${DB_NAME:-re_workflow_db}
read -p "Enter DB_USER: " DB_USER
read -p "Enter DB_PASSWORD: " DB_PASSWORD
# JWT Secrets
echo ""
echo "--- JWT Configuration ---"
read -p "Generate JWT_SECRET automatically? (y/n) [default: y]: " GEN_JWT
GEN_JWT=${GEN_JWT:-y}
if [ "$GEN_JWT" = "y" ]; then
JWT_SECRET=$(generate_secret)
echo "✅ Generated JWT_SECRET"
else
read -p "Enter JWT_SECRET (min 32 chars): " JWT_SECRET
fi
read -p "Generate REFRESH_TOKEN_SECRET automatically? (y/n) [default: y]: " GEN_REFRESH
GEN_REFRESH=${GEN_REFRESH:-y}
if [ "$GEN_REFRESH" = "y" ]; then
REFRESH_TOKEN_SECRET=$(generate_secret)
echo "✅ Generated REFRESH_TOKEN_SECRET"
else
read -p "Enter REFRESH_TOKEN_SECRET: " REFRESH_TOKEN_SECRET
fi
# Session Secret
read -p "Generate SESSION_SECRET automatically? (y/n) [default: y]: " GEN_SESSION
GEN_SESSION=${GEN_SESSION:-y}
if [ "$GEN_SESSION" = "y" ]; then
SESSION_SECRET=$(generate_secret)
echo "✅ Generated SESSION_SECRET"
else
read -p "Enter SESSION_SECRET (min 32 chars): " SESSION_SECRET
fi
# Okta Configuration
echo ""
echo "--- Okta SSO Configuration ---"
read -p "Enter OKTA_DOMAIN: " OKTA_DOMAIN
read -p "Enter OKTA_CLIENT_ID: " OKTA_CLIENT_ID
read -p "Enter OKTA_CLIENT_SECRET: " OKTA_CLIENT_SECRET
read -p "Enter OKTA_API_TOKEN (optional): " OKTA_API_TOKEN
# VAPID Keys for Web Push
echo ""
echo "--- Web Push (VAPID) Configuration ---"
echo "Note: VAPID keys are required for push notifications."
echo "Run 'npx web-push generate-vapid-keys' to generate them, or enter manually."
read -p "Enter VAPID_PUBLIC_KEY (or press Enter to skip): " VAPID_PUBLIC_KEY
read -p "Enter VAPID_PRIVATE_KEY (or press Enter to skip): " VAPID_PRIVATE_KEY
read -p "Enter VAPID_CONTACT email [default: mailto:admin@example.com]: " VAPID_CONTACT
VAPID_CONTACT=${VAPID_CONTACT:-mailto:admin@example.com}
# Redis Configuration
echo ""
echo "--- Redis Configuration (for TAT Queue) ---"
read -p "Enter REDIS_URL [default: redis://localhost:6379]: " REDIS_URL
REDIS_URL=${REDIS_URL:-redis://localhost:6379}
# Optional Services
echo ""
echo "--- Optional Services ---"
read -p "Enter SMTP_HOST (or press Enter to skip): " SMTP_HOST
read -p "Enter SMTP_USER (or press Enter to skip): " SMTP_USER
read -p "Enter SMTP_PASSWORD (or press Enter to skip): " SMTP_PASSWORD
read -p "Enter GCP_PROJECT_ID (or press Enter to skip): " GCP_PROJECT_ID
read -p "Enter GCP_BUCKET_NAME (or press Enter to skip): " GCP_BUCKET_NAME
# Vertex AI Configuration
echo ""
echo "--- Vertex AI Gemini Configuration (Optional) ---"
echo "Note: These have defaults and are optional. Service account credentials are required."
read -p "Enter VERTEX_AI_MODEL [default: gemini-2.5-flash]: " VERTEX_AI_MODEL
VERTEX_AI_MODEL=${VERTEX_AI_MODEL:-gemini-2.5-flash}
read -p "Enter VERTEX_AI_LOCATION [default: us-central1]: " VERTEX_AI_LOCATION
VERTEX_AI_LOCATION=${VERTEX_AI_LOCATION:-us-central1}
# Create .env file
cat > "$file_name" << EOF
# Application
NODE_ENV=${NODE_ENV}
PORT=${PORT}
API_VERSION=v1
BASE_URL=${BASE_URL}
FRONTEND_URL=${FRONTEND_URL}
# Database
DB_HOST=${DB_HOST}
DB_PORT=${DB_PORT}
DB_NAME=${DB_NAME}
DB_USER=${DB_USER}
DB_PASSWORD=${DB_PASSWORD}
DB_SSL=false
DB_POOL_MIN=2
DB_POOL_MAX=10
# SSO Configuration (Frontend-handled)
# Backend only needs JWT secrets for token validation
JWT_SECRET=${JWT_SECRET}
JWT_EXPIRY=24h
REFRESH_TOKEN_SECRET=${REFRESH_TOKEN_SECRET}
REFRESH_TOKEN_EXPIRY=7d
# Session
SESSION_SECRET=${SESSION_SECRET}
# Cloud Storage (GCP)
GCP_PROJECT_ID=${GCP_PROJECT_ID}
GCP_BUCKET_NAME=${GCP_BUCKET_NAME}
GCP_KEY_FILE=./config/gcp-key.json
# Email Service (Optional)
SMTP_HOST=${SMTP_HOST}
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=${SMTP_USER}
SMTP_PASSWORD=${SMTP_PASSWORD}
EMAIL_FROM=RE Workflow System <notifications@royalenfield.com>
# Vertex AI Gemini Configuration (for conclusion generation)
# Service account credentials should be placed in ./credentials/ folder
VERTEX_AI_MODEL=${VERTEX_AI_MODEL}
VERTEX_AI_LOCATION=${VERTEX_AI_LOCATION}
# Logging
LOG_LEVEL=info
LOG_FILE_PATH=./logs
# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
# File Upload
MAX_FILE_SIZE_MB=10
ALLOWED_FILE_TYPES=pdf,doc,docx,xls,xlsx,ppt,pptx,jpg,jpeg,png,gif
# TAT Monitoring
TAT_CHECK_INTERVAL_MINUTES=30
TAT_REMINDER_THRESHOLD_1=50
TAT_REMINDER_THRESHOLD_2=80
OKTA_API_TOKEN=${OKTA_API_TOKEN}
OKTA_DOMAIN=${OKTA_DOMAIN}
OKTA_CLIENT_ID=${OKTA_CLIENT_ID}
OKTA_CLIENT_SECRET=${OKTA_CLIENT_SECRET}
# Notification Service Worker credentials (Web Push / VAPID)
VAPID_PUBLIC_KEY=${VAPID_PUBLIC_KEY}
VAPID_PRIVATE_KEY=${VAPID_PRIVATE_KEY}
VAPID_CONTACT=${VAPID_CONTACT}
# Redis (for TAT Queue)
REDIS_URL=${REDIS_URL}
TAT_TEST_MODE=false
EOF
echo ""
echo "✅ Created ${file_name}"
}
# Function to show VAPID key generation instructions
show_vapid_instructions() {
echo ""
echo "=================================================="
echo "VAPID Key Generation Instructions"
echo "=================================================="
echo ""
echo "VAPID (Voluntary Application Server Identification) keys are required"
echo "for web push notifications. You need to generate a key pair:"
echo ""
echo "1. Generate VAPID keys using npx (no installation needed):"
echo " npx web-push generate-vapid-keys"
echo ""
echo " This will output:"
echo " ================================================"
echo " Public Key: <your-public-key>"
echo " Private Key: <your-private-key>"
echo " ================================================"
echo ""
echo "3. Add the keys to your .env file:"
echo " VAPID_PUBLIC_KEY=<your-public-key>"
echo " VAPID_PRIVATE_KEY=<your-private-key>"
echo " VAPID_CONTACT=mailto:your-email@example.com"
echo ""
echo "4. IMPORTANT: Add the SAME VAPID_PUBLIC_KEY to your frontend .env file:"
echo " VITE_PUBLIC_VAPID_KEY=<your-public-key>"
echo ""
echo "5. The VAPID_CONTACT should be a valid mailto: URL"
echo " Example: mailto:admin@royalenfield.com"
echo ""
echo "Note: Keep your VAPID_PRIVATE_KEY secure and never commit it to version control!"
echo ""
}
# Main execution
echo "This script will help you create environment configuration files for your backend."
echo ""
echo "Options:"
echo "1. Create .env file (interactive)"
echo "2. Show VAPID key generation instructions"
echo "3. Exit"
echo ""
read -p "Select an option (1-3): " OPTION
case $OPTION in
1)
create_env_file "Development" ".env"
echo ""
echo "=================================================="
echo "Setup Complete!"
echo "=================================================="
echo ""
echo "Next Steps:"
echo ""
echo "1. Generate VAPID keys for web push notifications:"
echo " npx web-push generate-vapid-keys"
echo " Then add them to your .env file"
echo ""
echo "2. Set up your database:"
echo " - Ensure PostgreSQL is running"
echo " - Run migrations if needed"
echo ""
echo "3. Set up Redis (for TAT queue):"
echo " - Install and start Redis"
echo " - Update REDIS_URL in .env"
echo ""
echo "4. Start the backend:"
echo " npm run dev"
echo ""
;;
2)
show_vapid_instructions
;;
3)
echo "Exiting..."
exit 0
;;
*)
echo "Invalid option. Exiting..."
exit 1
;;
esac
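The script above asks for VAPID keys produced by `npx web-push generate-vapid-keys` and stresses keeping the private key out of version control. A hedged sketch with the web-push package from package.json showing the same keys generated and wired up in Node; the subscription object is a hypothetical placeholder normally captured from the browser's PushManager.

```typescript
import webpush from 'web-push';

// One-off key generation, equivalent to `npx web-push generate-vapid-keys`
const { publicKey, privateKey } = webpush.generateVAPIDKeys();
console.log('VAPID_PUBLIC_KEY=', publicKey);
console.log('VAPID_PRIVATE_KEY=', privateKey);

// At runtime the backend configures web-push from the .env values created above
webpush.setVapidDetails(
  process.env.VAPID_CONTACT || 'mailto:admin@example.com',
  process.env.VAPID_PUBLIC_KEY || publicKey,
  process.env.VAPID_PRIVATE_KEY || privateKey
);

// Hypothetical subscription object (normally stored per user after the browser subscribes)
const subscription = {
  endpoint: 'https://fcm.googleapis.com/fcm/send/example',
  keys: { p256dh: '<client-public-key>', auth: '<client-auth-secret>' },
};

webpush
  .sendNotification(subscription, JSON.stringify({ title: 'TAT reminder' }))
  .catch((err) => console.error('Push failed:', err));
```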


@ -7,17 +7,13 @@ import { UserService } from './services/user.service';
import { SSOUserData } from './types/auth.types';
import { sequelize } from './config/database';
import { corsMiddleware } from './middlewares/cors.middleware';
import { metricsMiddleware, createMetricsRouter } from './middlewares/metrics.middleware';
import routes from './routes/index';
import { ensureUploadDir, UPLOAD_DIR } from './config/storage';
import { initializeGoogleSecretManager } from './services/googleSecretManager.service';
import path from 'path';
// Load environment variables from .env file first
// Load environment variables
dotenv.config();
// Secrets are now initialized in server.ts before app is imported
const app: express.Application = express();
const userService = new UserService();
@ -47,51 +43,16 @@ if (process.env.TRUST_PROXY === 'true' || process.env.NODE_ENV === 'production')
app.use(corsMiddleware);
// Security middleware - Configure Helmet to work with CORS
// Get frontend URL for CSP - allow cross-origin connections in development
const frontendUrl = process.env.FRONTEND_URL || 'http://localhost:3000';
const isDevelopment = process.env.NODE_ENV !== 'production';
// Build connect-src directive - allow backend API and blob URLs
const connectSrc = ["'self'", "blob:", "data:"];
if (isDevelopment) {
// In development, allow connections to common dev ports
connectSrc.push("http://localhost:3000", "http://localhost:5000", "ws://localhost:3000", "ws://localhost:5000");
// Also allow the configured frontend URL if it's a localhost URL
if (frontendUrl.includes('localhost')) {
connectSrc.push(frontendUrl);
}
} else {
// In production, only allow the configured frontend URL
if (frontendUrl && frontendUrl !== '*') {
const frontendOrigins = frontendUrl.split(',').map(url => url.trim()).filter(Boolean);
connectSrc.push(...frontendOrigins);
}
}
// Build CSP directives - conditionally include upgradeInsecureRequests
const cspDirectives: any = {
defaultSrc: ["'self'", "blob:"],
styleSrc: ["'self'", "'unsafe-inline'", "https://fonts.googleapis.com"],
scriptSrc: ["'self'"],
imgSrc: ["'self'", "data:", "https:", "blob:"],
connectSrc: connectSrc,
frameSrc: ["'self'", "blob:"],
fontSrc: ["'self'", "https://fonts.gstatic.com", "data:"],
objectSrc: ["'none'"],
baseUri: ["'self'"],
formAction: ["'self'"],
};
// Only add upgradeInsecureRequests in production (it forces HTTPS)
if (!isDevelopment) {
cspDirectives.upgradeInsecureRequests = [];
}
app.use(helmet({
crossOriginEmbedderPolicy: false,
crossOriginResourcePolicy: { policy: "cross-origin" },
contentSecurityPolicy: {
directives: cspDirectives,
directives: {
defaultSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"],
scriptSrc: ["'self'"],
imgSrc: ["'self'", "data:", "https:"],
},
},
}));
@ -105,13 +66,7 @@ app.use(express.urlencoded({ extended: true, limit: '10mb' }));
// Logging middleware
app.use(morgan('combined'));
// Prometheus metrics middleware - collect request metrics
app.use(metricsMiddleware);
// Prometheus metrics endpoint - expose metrics for scraping
app.use(createMetricsRouter());
// Health check endpoint (before API routes)
// Health check endpoint
app.get('/health', (_req: express.Request, res: express.Response) => {
res.status(200).json({
status: 'OK',
@ -121,13 +76,23 @@ app.get('/health', (_req: express.Request, res: express.Response) => {
});
});
// Mount API routes - MUST be before static file serving
// Mount API routes
app.use('/api/v1', routes);
// Serve uploaded files statically
ensureUploadDir();
app.use('/uploads', express.static(UPLOAD_DIR));
// Root endpoint
app.get('/', (_req: express.Request, res: express.Response) => {
res.status(200).json({
message: 'Royal Enfield Workflow Management System API',
version: '1.0.0',
status: 'running',
timestamp: new Date()
});
});
// Legacy SSO Callback endpoint for user creation/update (kept for backward compatibility)
app.post('/api/v1/auth/sso-callback', async (req: express.Request, res: express.Response): Promise<void> => {
try {
@ -218,71 +183,13 @@ app.get('/api/v1/users', async (_req: express.Request, res: express.Response): P
}
});
// Serve React build static files (only in production or when build folder exists)
// Check for both 'build' (Create React App) and 'dist' (Vite) folders
const buildPath = path.join(__dirname, "..", "build");
const distPath = path.join(__dirname, "..", "dist");
const fs = require('fs');
// Try to find React build directory
let reactBuildPath: string | null = null;
if (fs.existsSync(buildPath)) {
reactBuildPath = buildPath;
} else if (fs.existsSync(distPath)) {
reactBuildPath = distPath;
}
// Serve static files if React build exists
if (reactBuildPath && fs.existsSync(path.join(reactBuildPath, "index.html"))) {
// Serve static assets (JS, CSS, images, etc.) - these will have CSP headers from Helmet
app.use(express.static(reactBuildPath, {
setHeaders: (res: express.Response, filePath: string) => {
// Apply CSP headers to HTML files served as static files
if (filePath.endsWith('.html')) {
// CSP headers are already set by Helmet middleware, but ensure they're applied
// The meta tag in index.html will also enforce CSP
}
}
}));
// Catch-all handler: serve React app for all non-API routes
// This must be AFTER all API routes to avoid intercepting API requests
app.get('*', (req: express.Request, res: express.Response): void => {
// Don't serve React for API routes, uploads, or health check
if (req.path.startsWith('/api/') || req.path.startsWith('/uploads/') || req.path === '/health') {
// Error handling middleware
app.use((req: express.Request, res: express.Response) => {
res.status(404).json({
success: false,
message: `Route ${req.originalUrl} not found`,
timestamp: new Date(),
});
return;
}
// Serve React app for all other routes (SPA routing)
// This handles client-side routing in React Router
// CSP headers from Helmet will be applied to this response
res.sendFile(path.join(reactBuildPath!, "index.html"));
});
} else {
// No React build found - provide API info at root and use standard 404 handler
app.get('/', (_req: express.Request, res: express.Response): void => {
res.status(200).json({
message: 'Royal Enfield Workflow Management System API',
version: '1.0.0',
status: 'running',
timestamp: new Date(),
note: 'React build not found. API is available at /api/v1'
});
});
// Standard 404 handler for non-existent routes
app.use((req: express.Request, res: express.Response): void => {
res.status(404).json({
success: false,
message: `Route ${req.originalUrl} not found`,
timestamp: new Date(),
});
});
}
});
export default app;


@ -66,8 +66,6 @@ export const constants = {
REFERENCE: 'REFERENCE',
FINAL: 'FINAL',
OTHER: 'OTHER',
COMPLETION_DOC: 'COMPLETION_DOC',
ACTIVITY_PHOTO: 'ACTIVITY_PHOTO',
},
// Work Note Types


@ -1,25 +1,15 @@
import { SSOConfig, SSOUserData } from '../types/auth.types';
// Use getter functions to read from process.env dynamically
// This ensures values are read after secrets are loaded from Google Secret Manager
const ssoConfig: SSOConfig = {
get jwtSecret() { return process.env.JWT_SECRET || ''; },
get jwtExpiry() { return process.env.JWT_EXPIRY || '24h'; },
get refreshTokenExpiry() { return process.env.REFRESH_TOKEN_EXPIRY || '7d'; },
get sessionSecret() { return process.env.SESSION_SECRET || ''; },
// Use only FRONTEND_URL from environment - no fallbacks
get allowedOrigins() {
return process.env.FRONTEND_URL?.split(',').map(s => s.trim()).filter(Boolean) || [];
},
jwtSecret: process.env.JWT_SECRET || '',
jwtExpiry: process.env.JWT_EXPIRY || '24h',
refreshTokenExpiry: process.env.REFRESH_TOKEN_EXPIRY || '7d',
sessionSecret: process.env.SESSION_SECRET || '',
allowedOrigins: process.env.CORS_ORIGIN?.split(',') || ['http://localhost:3000'],
// Okta/Auth0 configuration for token exchange
get oktaDomain() { return process.env.OKTA_DOMAIN || 'https://dev-830839.oktapreview.com'; },
get oktaClientId() { return process.env.OKTA_CLIENT_ID || ''; },
get oktaClientSecret() { return process.env.OKTA_CLIENT_SECRET || ''; },
get oktaApiToken() { return process.env.OKTA_API_TOKEN || ''; }, // SSWS token for Users API
// Tanflow configuration for token exchange
get tanflowBaseUrl() { return process.env.TANFLOW_BASE_URL || 'https://ssodev.rebridge.co.in/realms/RE'; },
get tanflowClientId() { return process.env.TANFLOW_CLIENT_ID || 'REFLOW'; },
get tanflowClientSecret() { return process.env.TANFLOW_CLIENT_SECRET || 'cfIzMlwAMF1m4QWAP5StzZbV47HIrCox'; },
oktaDomain: process.env.OKTA_DOMAIN || 'https://dev-830839.oktapreview.com',
oktaClientId: process.env.OKTA_CLIENT_ID || '',
oktaClientSecret: process.env.OKTA_CLIENT_SECRET || '',
};
export { ssoConfig };
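The getter-based variant above exists so that JWT and Okta values are read from `process.env` at access time, after Google Secret Manager has populated them (secrets are initialized in server.ts before app is imported), rather than captured once at module load. A small standalone sketch of the difference; the values are illustrative.

```typescript
// Illustrative values only: shows why the config uses getters instead of
// snapshotting process.env at import time.
process.env.JWT_SECRET = ''; // not yet loaded when this module is first imported

const eagerConfig = { jwtSecret: process.env.JWT_SECRET };                        // captured once: ''
const lazyConfig = { get jwtSecret() { return process.env.JWT_SECRET || ''; } };  // read on access

// Later, secrets arrive (see initializeGoogleSecretManager during server startup)
process.env.JWT_SECRET = 'value-from-secret-manager';

console.log(eagerConfig.jwtSecret); // ''  -> stale snapshot
console.log(lazyConfig.jwtSecret);  // 'value-from-secret-manager'
```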


@ -145,18 +145,16 @@ export async function getPublicConfig() {
ui: SYSTEM_CONFIG.UI
};
// Get configuration from database first (always try to read from DB)
// Try to get AI service status and configuration (gracefully handle if not available)
try {
const { aiService } = require('../services/ai.service');
const { getConfigValue } = require('../services/configReader.service');
// Get AI configuration from admin settings (database)
// Get AI configuration from admin settings
const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
const maxRemarkLength = parseInt(await getConfigValue('AI_MAX_REMARK_LENGTH', '2000') || '2000', 10);
// Try to get AI service status (gracefully handle if not available)
try {
const { aiService } = require('../services/ai.service');
return {
...baseConfig,
ai: {
@ -170,14 +168,14 @@ export async function getPublicConfig() {
}
};
} catch (error) {
// AI service not available - return config with database values but AI disabled
// AI service not available - return config without AI info
return {
...baseConfig,
ai: {
enabled: false,
provider: 'None',
remarkGenerationEnabled: false,
maxRemarkLength: maxRemarkLength, // Use database value, not hardcoded
maxRemarkLength: 2000,
features: {
conclusionGeneration: false
}


@ -1,7 +1,6 @@
import { Request, Response } from 'express';
import { Holiday, HolidayType } from '@models/Holiday';
import { holidayService } from '@services/holiday.service';
import { activityTypeService } from '@services/activityType.service';
import { sequelize } from '@config/database';
import { QueryTypes, Op } from 'sequelize';
import logger from '@utils/logger';
@ -241,68 +240,6 @@ export const bulkImportHolidays = async (req: Request, res: Response): Promise<v
}
};
/**
* Get public configurations (read-only, non-sensitive)
* Accessible to all authenticated users
*/
export const getPublicConfigurations = async (req: Request, res: Response): Promise<void> => {
try {
const { category } = req.query;
// Only allow certain categories for public access
const allowedCategories = ['DOCUMENT_POLICY', 'TAT_SETTINGS', 'WORKFLOW_SHARING', 'SYSTEM_SETTINGS'];
if (category && !allowedCategories.includes(category as string)) {
res.status(403).json({
success: false,
error: 'Access denied to this configuration category'
});
return;
}
let whereClause = '';
if (category) {
whereClause = `WHERE config_category = '${category}' AND is_sensitive = false`;
} else {
whereClause = `WHERE config_category IN ('DOCUMENT_POLICY', 'TAT_SETTINGS', 'WORKFLOW_SHARING', 'SYSTEM_SETTINGS') AND is_sensitive = false`;
}
const rawConfigurations = await sequelize.query(`
SELECT
config_key,
config_category,
config_value,
value_type,
display_name,
description
FROM admin_configurations
${whereClause}
ORDER BY config_category, sort_order
`, { type: QueryTypes.SELECT });
// Map snake_case to camelCase for frontend
const configurations = (rawConfigurations as any[]).map((config: any) => ({
configKey: config.config_key,
configCategory: config.config_category,
configValue: config.config_value,
valueType: config.value_type,
displayName: config.display_name,
description: config.description
}));
res.json({
success: true,
data: configurations,
count: configurations.length
});
} catch (error) {
logger.error('[Admin] Error fetching public configurations:', error);
res.status(500).json({
success: false,
error: 'Failed to fetch configurations'
});
}
};
/**
* Get all admin configurations
*/
@ -434,7 +371,7 @@ export const updateConfiguration = async (req: Request, res: Response): Promise<
}
// If AI config was updated, reinitialize AI service
const aiConfigKeys = ['AI_ENABLED'];
const aiConfigKeys = ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED'];
if (aiConfigKeys.includes(configKey)) {
try {
const { aiService } = require('../services/ai.service');
@ -783,15 +720,15 @@ export const assignRoleByEmail = async (req: Request, res: Response): Promise<vo
// User doesn't exist, need to fetch from Okta and create
logger.info(`[Admin] User ${email} not found in database, fetching from Okta...`);
// Import UserService to fetch full profile from Okta
// Import UserService to search Okta
const { UserService } = await import('@services/user.service');
const userService = new UserService();
try {
// Fetch full user profile from Okta Users API (includes manager, jobTitle, etc.)
const oktaUserData = await userService.fetchAndExtractOktaUserByEmail(email);
// Search Okta for this user
const oktaUsers = await userService.searchUsers(email, 1);
if (!oktaUserData) {
if (!oktaUsers || oktaUsers.length === 0) {
res.status(404).json({
success: false,
error: 'User not found in Okta. Please ensure the email is correct.'
@ -799,15 +736,25 @@ export const assignRoleByEmail = async (req: Request, res: Response): Promise<vo
return;
}
// Create user in our database via centralized userService with all fields including manager
const ensured = await userService.createOrUpdateUser({
...oktaUserData,
role, // Set the assigned role
isActive: true, // Ensure user is active
});
user = ensured;
const oktaUser = oktaUsers[0];
logger.info(`[Admin] Created new user ${email} with role ${role} (manager: ${oktaUserData.manager || 'N/A'})`);
// Create user in our database
user = await User.create({
email: oktaUser.email,
oktaSub: (oktaUser as any).userId || (oktaUser as any).oktaSub, // Okta user ID as oktaSub
employeeId: (oktaUser as any).employeeNumber || (oktaUser as any).employeeId || null,
firstName: oktaUser.firstName || null,
lastName: oktaUser.lastName || null,
displayName: oktaUser.displayName || `${oktaUser.firstName || ''} ${oktaUser.lastName || ''}`.trim() || oktaUser.email,
department: oktaUser.department || null,
designation: (oktaUser as any).designation || (oktaUser as any).title || null,
phone: (oktaUser as any).phone || (oktaUser as any).mobilePhone || null,
isActive: true,
role: role, // Assign the requested role
lastLogin: undefined // Not logged in yet
});
logger.info(`[Admin] Created new user ${email} with role ${role}`);
} catch (oktaError: any) {
logger.error('[Admin] Error fetching from Okta:', oktaError);
res.status(500).json({
@ -817,7 +764,7 @@ export const assignRoleByEmail = async (req: Request, res: Response): Promise<vo
return;
}
} else {
// User exists - fetch latest data from Okta and sync all fields including role
// User exists, update their role
const previousRole = user.role;
// Prevent self-demotion
@ -829,35 +776,9 @@ export const assignRoleByEmail = async (req: Request, res: Response): Promise<vo
return;
}
// Import UserService to fetch latest data from Okta
const { UserService } = await import('@services/user.service');
const userService = new UserService();
try {
// Fetch full user profile from Okta Users API to sync manager and other fields
const oktaUserData = await userService.fetchAndExtractOktaUserByEmail(email);
if (oktaUserData) {
// Sync all fields from Okta including the new role using centralized method
const updated = await userService.createOrUpdateUser({
...oktaUserData, // Includes all fields: manager, jobTitle, postalAddress, etc.
role, // Set the new role
isActive: true, // Ensure user is active
});
user = updated;
logger.info(`[Admin] Synced user ${email} from Okta (manager: ${oktaUserData.manager || 'N/A'}) and updated role from ${previousRole} to ${role}`);
} else {
// Okta user not found, just update role
await user.update({ role });
logger.info(`[Admin] Updated user ${email} role from ${previousRole} to ${role} (Okta data not available)`);
}
} catch (oktaError: any) {
// If Okta fetch fails, just update the role
logger.warn(`[Admin] Failed to fetch Okta data for ${email}, updating role only:`, oktaError.message);
await user.update({ role });
logger.info(`[Admin] Updated user ${email} role from ${previousRole} to ${role} (Okta sync failed)`);
}
logger.info(`[Admin] Updated user ${email} role from ${previousRole} to ${role}`);
}
res.json({
@ -879,174 +800,3 @@ export const assignRoleByEmail = async (req: Request, res: Response): Promise<vo
}
};
// ==================== Activity Type Management Routes ====================
/**
* Get all activity types (optionally filtered by active status)
*/
export const getAllActivityTypes = async (req: Request, res: Response): Promise<void> => {
try {
const { activeOnly } = req.query;
const activeOnlyBool = activeOnly === 'true';
const activityTypes = await activityTypeService.getAllActivityTypes(activeOnlyBool);
res.json({
success: true,
data: activityTypes,
count: activityTypes.length
});
} catch (error: any) {
logger.error('[Admin] Error fetching activity types:', error);
res.status(500).json({
success: false,
error: error.message || 'Failed to fetch activity types'
});
}
};
/**
* Get a single activity type by ID
*/
export const getActivityTypeById = async (req: Request, res: Response): Promise<void> => {
try {
const { activityTypeId } = req.params;
const activityType = await activityTypeService.getActivityTypeById(activityTypeId);
if (!activityType) {
res.status(404).json({
success: false,
error: 'Activity type not found'
});
return;
}
res.json({
success: true,
data: activityType
});
} catch (error: any) {
logger.error('[Admin] Error fetching activity type:', error);
res.status(500).json({
success: false,
error: error.message || 'Failed to fetch activity type'
});
}
};
/**
* Create a new activity type
*/
export const createActivityType = async (req: Request, res: Response): Promise<void> => {
try {
const userId = req.user?.userId;
if (!userId) {
res.status(401).json({
success: false,
error: 'User not authenticated'
});
return;
}
const {
title,
itemCode,
taxationType,
sapRefNo
} = req.body;
// Validate required fields
if (!title) {
res.status(400).json({
success: false,
error: 'Activity type title is required'
});
return;
}
const activityType = await activityTypeService.createActivityType({
title,
itemCode: itemCode || null,
taxationType: taxationType || null,
sapRefNo: sapRefNo || null,
createdBy: userId
});
res.status(201).json({
success: true,
message: 'Activity type created successfully',
data: activityType
});
} catch (error: any) {
logger.error('[Admin] Error creating activity type:', error);
res.status(500).json({
success: false,
error: error.message || 'Failed to create activity type'
});
}
};
/**
* Update an activity type
*/
export const updateActivityType = async (req: Request, res: Response): Promise<void> => {
try {
const userId = req.user?.userId;
if (!userId) {
res.status(401).json({
success: false,
error: 'User not authenticated'
});
return;
}
const { activityTypeId } = req.params;
const updates = req.body;
const activityType = await activityTypeService.updateActivityType(activityTypeId, updates, userId);
if (!activityType) {
res.status(404).json({
success: false,
error: 'Activity type not found'
});
return;
}
res.json({
success: true,
message: 'Activity type updated successfully',
data: activityType
});
} catch (error: any) {
logger.error('[Admin] Error updating activity type:', error);
res.status(500).json({
success: false,
error: error.message || 'Failed to update activity type'
});
}
};
/**
* Delete (deactivate) an activity type
*/
export const deleteActivityType = async (req: Request, res: Response): Promise<void> => {
try {
const { activityTypeId } = req.params;
await activityTypeService.deleteActivityType(activityTypeId);
res.json({
success: true,
message: 'Activity type deleted successfully'
});
} catch (error: any) {
logger.error('[Admin] Error deleting activity type:', error);
res.status(500).json({
success: false,
error: error.message || 'Failed to delete activity type'
});
}
};


@ -1,15 +1,11 @@
import { Request, Response } from 'express';
import { ApprovalService } from '@services/approval.service';
import { DealerClaimApprovalService } from '@services/dealerClaimApproval.service';
import { ApprovalLevel } from '@models/ApprovalLevel';
import { WorkflowRequest } from '@models/WorkflowRequest';
import { validateApprovalAction } from '@validators/approval.validator';
import { ResponseHandler } from '@utils/responseHandler';
import type { AuthenticatedRequest } from '../types/express';
import { getRequestMetadata } from '@utils/requestUtils';
const approvalService = new ApprovalService();
const dealerClaimApprovalService = new DealerClaimApprovalService();
export class ApprovalController {
async approveLevel(req: AuthenticatedRequest, res: Response): Promise<void> {
@ -17,54 +13,18 @@ export class ApprovalController {
const { levelId } = req.params;
const validatedData = validateApprovalAction(req.body);
// Determine which service to use based on workflow type
const level = await ApprovalLevel.findByPk(levelId);
const requestMeta = getRequestMetadata(req);
const level = await approvalService.approveLevel(levelId, validatedData, req.user.userId, {
ipAddress: requestMeta.ipAddress,
userAgent: requestMeta.userAgent
});
if (!level) {
ResponseHandler.notFound(res, 'Approval level not found');
return;
}
const workflow = await WorkflowRequest.findByPk(level.requestId);
if (!workflow) {
ResponseHandler.notFound(res, 'Workflow not found');
return;
}
const workflowType = (workflow as any)?.workflowType;
const requestMeta = getRequestMetadata(req);
// Route to appropriate service based on workflow type
let approvedLevel: any;
if (workflowType === 'CLAIM_MANAGEMENT') {
// Use DealerClaimApprovalService for claim management workflows
approvedLevel = await dealerClaimApprovalService.approveLevel(
levelId,
validatedData,
req.user.userId,
{
ipAddress: requestMeta.ipAddress,
userAgent: requestMeta.userAgent
}
);
} else {
// Use ApprovalService for custom workflows
approvedLevel = await approvalService.approveLevel(
levelId,
validatedData,
req.user.userId,
{
ipAddress: requestMeta.ipAddress,
userAgent: requestMeta.userAgent
}
);
}
if (!approvedLevel) {
ResponseHandler.notFound(res, 'Approval level not found');
return;
}
ResponseHandler.success(res, approvedLevel, 'Approval level updated successfully');
ResponseHandler.success(res, level, 'Approval level updated successfully');
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
ResponseHandler.error(res, 'Failed to update approval level', 400, errorMessage);
@ -74,23 +34,7 @@ export class ApprovalController {
async getCurrentApprovalLevel(req: Request, res: Response): Promise<void> {
try {
const { id } = req.params;
// Determine which service to use based on workflow type
const workflow = await WorkflowRequest.findByPk(id);
if (!workflow) {
ResponseHandler.notFound(res, 'Workflow not found');
return;
}
const workflowType = (workflow as any)?.workflowType;
// Route to appropriate service based on workflow type
let level: any;
if (workflowType === 'CLAIM_MANAGEMENT') {
level = await dealerClaimApprovalService.getCurrentApprovalLevel(id);
} else {
level = await approvalService.getCurrentApprovalLevel(id);
}
const level = await approvalService.getCurrentApprovalLevel(id);
ResponseHandler.success(res, level, 'Current approval level retrieved successfully');
} catch (error) {
@ -102,23 +46,7 @@ export class ApprovalController {
async getApprovalLevels(req: Request, res: Response): Promise<void> {
try {
const { id } = req.params;
// Determine which service to use based on workflow type
const workflow = await WorkflowRequest.findByPk(id);
if (!workflow) {
ResponseHandler.notFound(res, 'Workflow not found');
return;
}
const workflowType = (workflow as any)?.workflowType;
// Route to appropriate service based on workflow type
let levels: any[];
if (workflowType === 'CLAIM_MANAGEMENT') {
levels = await dealerClaimApprovalService.getApprovalLevels(id);
} else {
levels = await approvalService.getApprovalLevels(id);
}
const levels = await approvalService.getApprovalLevels(id);
ResponseHandler.success(res, levels, 'Approval levels retrieved successfully');
} catch (error) {

Some files were not shown because too many files have changed in this diff.