Compare commits
11 Commits
cfbb1c8b04
...
43b0a493f4
| Author | SHA1 | Date | |
|---|---|---|---|
| 43b0a493f4 | |||
| 8968b86a9f | |||
| b4407f59a6 | |||
| cbca9d1b15 | |||
| 56258205ea | |||
| 54ecae5b7b | |||
| 8c7965e469 | |||
| 826c0eedea | |||
| c7c9b62358 | |||
| c76b799cf7 | |||
| 1aa7fb9056 |
@ -1,270 +0,0 @@
|
|||||||
# Admin Configurable Settings - Complete Reference
|
|
||||||
|
|
||||||
## 📋 All 18 Settings Across 7 Categories
|
|
||||||
|
|
||||||
This document lists all admin-configurable settings as per the SRS document requirements.
|
|
||||||
All settings are **editable via the Settings page** (Admin users only) and stored in the `admin_configurations` table.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 1️⃣ **TAT Settings** (6 Settings)
|
|
||||||
|
|
||||||
Settings that control Turnaround Time calculations and reminders.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Default TAT - Express | `DEFAULT_TAT_EXPRESS_HOURS` | Number | 24 | 1-168 | Default TAT hours for express priority (calendar days) |
|
|
||||||
| Default TAT - Standard | `DEFAULT_TAT_STANDARD_HOURS` | Number | 48 | 1-720 | Default TAT hours for standard priority (working days) |
|
|
||||||
| First Reminder Threshold | `TAT_REMINDER_THRESHOLD_1` | Number | 50 | 1-100 | Send gentle reminder at this % of TAT elapsed |
|
|
||||||
| Second Reminder Threshold | `TAT_REMINDER_THRESHOLD_2` | Number | 75 | 1-100 | Send escalation warning at this % of TAT elapsed |
|
|
||||||
| Work Start Hour | `WORK_START_HOUR` | Number | 9 | 0-23 | Hour when working day starts (24h format) |
|
|
||||||
| Work End Hour | `WORK_END_HOUR` | Number | 18 | 0-23 | Hour when working day ends (24h format) |
|
|
||||||
|
|
||||||
**UI Component:** Number input + Slider for thresholds
|
|
||||||
**Category Color:** Blue 🔵
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 2️⃣ **Document Policy** (3 Settings)
|
|
||||||
|
|
||||||
Settings that control file uploads and document management.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Max File Size | `MAX_FILE_SIZE_MB` | Number | 10 | 1-100 | Maximum file upload size in MB |
|
|
||||||
| Allowed File Types | `ALLOWED_FILE_TYPES` | String | pdf,doc,docx... | - | Comma-separated list of allowed extensions |
|
|
||||||
| Document Retention Period | `DOCUMENT_RETENTION_DAYS` | Number | 365 | 30-3650 | Days to retain documents after closure |
|
|
||||||
|
|
||||||
**UI Component:** Number input + Text input
|
|
||||||
**Category Color:** Purple 🟣
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 3️⃣ **AI Configuration** (2 Settings)
|
|
||||||
|
|
||||||
Settings for AI-generated conclusion remarks.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Enable AI Remarks | `AI_REMARK_GENERATION_ENABLED` | Boolean | true | - | Toggle AI-generated conclusion remarks |
|
|
||||||
| Max Remark Characters | `AI_REMARK_MAX_CHARACTERS` | Number | 500 | 100-2000 | Maximum character limit for AI remarks |
|
|
||||||
|
|
||||||
**UI Component:** Toggle + Number input
|
|
||||||
**Category Color:** Pink 💗
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 4️⃣ **Notification Rules** (3 Settings)
|
|
||||||
|
|
||||||
Settings for notification channels and frequency.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Enable Email Notifications | `ENABLE_EMAIL_NOTIFICATIONS` | Boolean | true | - | Send email notifications for events |
|
|
||||||
| Enable Push Notifications | `ENABLE_PUSH_NOTIFICATIONS` | Boolean | true | - | Send browser push notifications |
|
|
||||||
| Notification Batch Delay | `NOTIFICATION_BATCH_DELAY_MS` | Number | 5000 | 1000-30000 | Delay (ms) before sending batched notifications |
|
|
||||||
|
|
||||||
**UI Component:** Toggle + Number input
|
|
||||||
**Category Color:** Amber 🟠
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 5️⃣ **Dashboard Layout** (4 Settings)
|
|
||||||
|
|
||||||
Settings to enable/disable KPI cards on dashboard per role.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Description |
|
|
||||||
|---------|-----|------|---------|-------------|
|
|
||||||
| Show Total Requests | `DASHBOARD_SHOW_TOTAL_REQUESTS` | Boolean | true | Display total requests KPI card |
|
|
||||||
| Show Open Requests | `DASHBOARD_SHOW_OPEN_REQUESTS` | Boolean | true | Display open requests KPI card |
|
|
||||||
| Show TAT Compliance | `DASHBOARD_SHOW_TAT_COMPLIANCE` | Boolean | true | Display TAT compliance KPI card |
|
|
||||||
| Show Pending Actions | `DASHBOARD_SHOW_PENDING_ACTIONS` | Boolean | true | Display pending actions KPI card |
|
|
||||||
|
|
||||||
**UI Component:** Toggle switches
|
|
||||||
**Category Color:** Teal 🟢
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 6️⃣ **Workflow Sharing Policy** (3 Settings)
|
|
||||||
|
|
||||||
Settings to control who can add spectators and share workflows.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Allow Add Spectator | `ALLOW_ADD_SPECTATOR` | Boolean | true | - | Enable users to add spectators |
|
|
||||||
| Max Spectators | `MAX_SPECTATORS_PER_REQUEST` | Number | 20 | 1-100 | Maximum spectators per workflow |
|
|
||||||
| Allow External Sharing | `ALLOW_EXTERNAL_SHARING` | Boolean | false | - | Allow sharing with external users |
|
|
||||||
|
|
||||||
**UI Component:** Toggle + Number input
|
|
||||||
**Category Color:** Emerald 💚
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 7️⃣ **Workflow Limits** (2 Settings)
|
|
||||||
|
|
||||||
System limits for workflow structure.
|
|
||||||
|
|
||||||
| Setting | Key | Type | Default | Range | Description |
|
|
||||||
|---------|-----|------|---------|-------|-------------|
|
|
||||||
| Max Approval Levels | `MAX_APPROVAL_LEVELS` | Number | 10 | 1-20 | Maximum approval levels per workflow |
|
|
||||||
| Max Participants | `MAX_PARTICIPANTS_PER_REQUEST` | Number | 50 | 2-200 | Maximum total participants per workflow |
|
|
||||||
|
|
||||||
**UI Component:** Number input
|
|
||||||
**Category Color:** Gray ⚪
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Total Settings Summary
|
|
||||||
|
|
||||||
| Category | Count | Editable | UI |
|
|
||||||
|----------|-------|----------|-----|
|
|
||||||
| TAT Settings | 6 | ✅ All | Number + Slider |
|
|
||||||
| Document Policy | 3 | ✅ All | Number + Text |
|
|
||||||
| AI Configuration | 2 | ✅ All | Toggle + Number |
|
|
||||||
| Notification Rules | 3 | ✅ All | Toggle + Number |
|
|
||||||
| Dashboard Layout | 4 | ✅ All | Toggle |
|
|
||||||
| Workflow Sharing | 3 | ✅ All | Toggle + Number |
|
|
||||||
| Workflow Limits | 2 | ✅ All | Number |
|
|
||||||
| **TOTAL** | **18** | **18/18** | **All Editable** |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 SRS Document Compliance
|
|
||||||
|
|
||||||
### Required Config Areas (from SRS Section 7):
|
|
||||||
|
|
||||||
1. ✅ **TAT Settings** - Default TAT per priority, auto-reminder thresholds
|
|
||||||
2. ✅ **User Roles** - Covered via Workflow Limits (max participants, levels)
|
|
||||||
3. ✅ **Notification Rules** - Channels (email/push), frequency (batch delay)
|
|
||||||
4. ✅ **Document Policy** - Max upload size, allowed types, retention period
|
|
||||||
5. ✅ **Dashboard Layout** - Enable/disable KPI cards per role
|
|
||||||
6. ✅ **AI Configuration** - Toggle AI, set max characters
|
|
||||||
7. ✅ **Workflow Sharing Policy** - Control spectators, external sharing
|
|
||||||
|
|
||||||
**All 7 required areas are fully covered!** ✅
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔧 How to Edit Settings
|
|
||||||
|
|
||||||
### **Step 1: Access Settings** (Admin Only)
|
|
||||||
1. Login as Admin user
|
|
||||||
2. Navigate to **Settings** from sidebar
|
|
||||||
3. Click **"System Configuration"** tab
|
|
||||||
|
|
||||||
### **Step 2: Select Category**
|
|
||||||
Choose from 7 category tabs:
|
|
||||||
- TAT Settings
|
|
||||||
- Document Policy
|
|
||||||
- AI Configuration
|
|
||||||
- Notification Rules
|
|
||||||
- Dashboard Layout
|
|
||||||
- Workflow Sharing
|
|
||||||
- Workflow Limits
|
|
||||||
|
|
||||||
### **Step 3: Modify Values**
|
|
||||||
- **Number fields**: Enter numeric value within allowed range
|
|
||||||
- **Toggles**: Switch ON/OFF
|
|
||||||
- **Sliders**: Drag to set percentage
|
|
||||||
- **Text fields**: Enter comma-separated values
|
|
||||||
|
|
||||||
### **Step 4: Save Changes**
|
|
||||||
1. Click **"Save"** button for each modified setting
|
|
||||||
2. See success message confirmation
|
|
||||||
3. Some settings may show **"Requires Restart"** badge
|
|
||||||
|
|
||||||
### **Step 5: Reset if Needed**
|
|
||||||
- Click **"Reset to Default"** to revert any setting
|
|
||||||
- Confirmation dialog appears before reset
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Initial Setup
|
|
||||||
|
|
||||||
### **First Time Setup:**
|
|
||||||
|
|
||||||
1. **Start backend** - Configurations auto-seed on first run:
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Check logs** - Should see:
|
|
||||||
```
|
|
||||||
⚙️ System configurations initialized
|
|
||||||
✅ Default configurations seeded (18 settings across 7 categories)
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Login as Admin** and verify settings are editable
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🗄️ Database Storage
|
|
||||||
|
|
||||||
**Table:** `admin_configurations`
|
|
||||||
|
|
||||||
**Key Columns:**
|
|
||||||
- `config_key` - Unique identifier
|
|
||||||
- `config_category` - Grouping (TAT_SETTINGS, DOCUMENT_POLICY, etc.)
|
|
||||||
- `config_value` - Current value
|
|
||||||
- `default_value` - Reset value
|
|
||||||
- `is_editable` - Whether admin can edit (all are `true`)
|
|
||||||
- `ui_component` - UI type (toggle, number, slider, text)
|
|
||||||
- `validation_rules` - JSON with min/max constraints
|
|
||||||
- `sort_order` - Display order within category
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔄 How Settings Are Applied
|
|
||||||
|
|
||||||
### **Backend:**
|
|
||||||
```typescript
|
|
||||||
import { SYSTEM_CONFIG } from '@config/system.config';
|
|
||||||
|
|
||||||
const workStartHour = SYSTEM_CONFIG.WORKING_HOURS.START_HOUR;
|
|
||||||
// Value is loaded from admin_configurations table
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Frontend:**
|
|
||||||
```typescript
|
|
||||||
import { configService } from '@/services/configService';
|
|
||||||
|
|
||||||
const config = await configService.getConfig();
|
|
||||||
const maxFileSize = config.upload.maxFileSizeMB;
|
|
||||||
// Fetched from backend API: GET /api/v1/config
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Benefits
|
|
||||||
|
|
||||||
✅ **No hardcoded values** - Everything configurable
|
|
||||||
✅ **Admin-friendly UI** - No technical knowledge needed
|
|
||||||
✅ **Validation built-in** - Prevents invalid values
|
|
||||||
✅ **Audit trail** - All changes logged with timestamps
|
|
||||||
✅ **Reset capability** - Can revert to defaults anytime
|
|
||||||
✅ **Real-time effect** - Most changes apply immediately
|
|
||||||
✅ **SRS compliant** - All 7 required areas covered
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 Notes
|
|
||||||
|
|
||||||
- **User Role Management** is handled separately via user administration (not in this config)
|
|
||||||
- **Holiday Calendar** has its own dedicated management interface
|
|
||||||
- All settings have **validation rules** to prevent invalid configurations
|
|
||||||
- Settings marked **"Requires Restart"** need backend restart to take effect
|
|
||||||
- Non-admin users cannot see or edit system configurations
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Result
|
|
||||||
|
|
||||||
Your system now has **complete admin configurability** as specified in the SRS document with:
|
|
||||||
|
|
||||||
📌 **18 editable settings**
|
|
||||||
📌 **7 configuration categories**
|
|
||||||
📌 **100% SRS compliance**
|
|
||||||
📌 **Admin-friendly UI**
|
|
||||||
📌 **Database-driven** (not hardcoded)
|
|
||||||
|
|
||||||
@ -1,276 +0,0 @@
|
|||||||
# ✅ Auto-Migration Setup Complete
|
|
||||||
|
|
||||||
## 🎯 What Was Done
|
|
||||||
|
|
||||||
### 1. Converted SQL Migration to TypeScript
|
|
||||||
**Before**: `src/migrations/add_is_skipped_to_approval_levels.sql` (manual SQL)
|
|
||||||
**After**: `src/migrations/20251105-add-skip-fields-to-approval-levels.ts` (TypeScript)
|
|
||||||
|
|
||||||
**Features Added to `approval_levels` table**:
|
|
||||||
- ✅ `is_skipped` - Boolean flag to track skipped approvers
|
|
||||||
- ✅ `skipped_at` - Timestamp when approver was skipped
|
|
||||||
- ✅ `skipped_by` - Foreign key to user who skipped
|
|
||||||
- ✅ `skip_reason` - Text field for skip justification
|
|
||||||
- ✅ Partial index on `is_skipped = TRUE` for query performance
|
|
||||||
- ✅ Full rollback support in `down()` function
|
|
||||||
|
|
||||||
### 2. Updated Migration Runner
|
|
||||||
**File**: `src/scripts/migrate.ts`
|
|
||||||
|
|
||||||
**Changes**:
|
|
||||||
- Added import for new migration (m14)
|
|
||||||
- Added execution in run() function
|
|
||||||
- Enhanced console output with emojis for better visibility
|
|
||||||
- Better error messages
|
|
||||||
|
|
||||||
### 3. Auto-Run Migrations on Development Start
|
|
||||||
**File**: `package.json`
|
|
||||||
|
|
||||||
**Before**:
|
|
||||||
```json
|
|
||||||
"dev": "nodemon --exec ts-node -r tsconfig-paths/register src/server.ts"
|
|
||||||
```
|
|
||||||
|
|
||||||
**After**:
|
|
||||||
```json
|
|
||||||
"dev": "npm run migrate && nodemon --exec ts-node -r tsconfig-paths/register src/server.ts"
|
|
||||||
```
|
|
||||||
|
|
||||||
**What This Means**:
|
|
||||||
- 🔄 Migrations run automatically before server starts
|
|
||||||
- ✅ No more manual migration steps
|
|
||||||
- 🛡️ Server won't start if migrations fail
|
|
||||||
- ⚡ Fresh database schema on every dev restart
|
|
||||||
|
|
||||||
### 4. Created Documentation
|
|
||||||
- 📘 `MIGRATION_WORKFLOW.md` - Complete migration guide
|
|
||||||
- 📗 `MIGRATION_QUICK_REFERENCE.md` - Quick reference card
|
|
||||||
- 📕 `AUTO_MIGRATION_SETUP_COMPLETE.md` - This file
|
|
||||||
|
|
||||||
## 🚀 How to Use
|
|
||||||
|
|
||||||
### Starting Development (Most Common)
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
This will:
|
|
||||||
1. Connect to database
|
|
||||||
2. Run all 14 migrations sequentially
|
|
||||||
3. Start development server with hot reload
|
|
||||||
4. Display success messages
|
|
||||||
|
|
||||||
**Expected Output**:
|
|
||||||
```
|
|
||||||
📦 Database connected
|
|
||||||
🔄 Running migrations...
|
|
||||||
|
|
||||||
✅ Created workflow_requests table
|
|
||||||
✅ Created approval_levels table
|
|
||||||
...
|
|
||||||
✅ Added skip-related fields to approval_levels table
|
|
||||||
|
|
||||||
✅ All migrations applied successfully
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
```
|
|
||||||
|
|
||||||
### Running Migrations Only
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
Use when you want to update database without starting server.
|
|
||||||
|
|
||||||
## 📊 Migration Status
|
|
||||||
|
|
||||||
| # | Migration | Status | Date |
|
|
||||||
|---|-----------|--------|------|
|
|
||||||
| 1 | create-workflow-requests | ✅ Active | 2025-10-30 |
|
|
||||||
| 2 | create-approval-levels | ✅ Active | 2025-10-30 |
|
|
||||||
| 3 | create-participants | ✅ Active | 2025-10-30 |
|
|
||||||
| 4 | create-documents | ✅ Active | 2025-10-30 |
|
|
||||||
| 5 | create-subscriptions | ✅ Active | 2025-10-31 |
|
|
||||||
| 6 | create-activities | ✅ Active | 2025-10-31 |
|
|
||||||
| 7 | create-work-notes | ✅ Active | 2025-10-31 |
|
|
||||||
| 8 | create-work-note-attachments | ✅ Active | 2025-10-31 |
|
|
||||||
| 9 | add-tat-alert-fields | ✅ Active | 2025-11-04 |
|
|
||||||
| 10 | create-tat-alerts | ✅ Active | 2025-11-04 |
|
|
||||||
| 11 | create-kpi-views | ✅ Active | 2025-11-04 |
|
|
||||||
| 12 | create-holidays | ✅ Active | 2025-11-04 |
|
|
||||||
| 13 | create-admin-config | ✅ Active | 2025-11-04 |
|
|
||||||
| 14 | add-skip-fields-to-approval-levels | ✅ **NEW** | 2025-11-05 |
|
|
||||||
|
|
||||||
## 🔄 Adding Future Migrations
|
|
||||||
|
|
||||||
When you need to add a new migration:
|
|
||||||
|
|
||||||
### Step 1: Create File
|
|
||||||
```bash
|
|
||||||
# Create file: src/migrations/20251106-your-description.ts
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Write Migration
|
|
||||||
```typescript
|
|
||||||
import { QueryInterface, DataTypes } from 'sequelize';
|
|
||||||
|
|
||||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// Your changes here
|
|
||||||
await queryInterface.addColumn('table', 'column', {
|
|
||||||
type: DataTypes.STRING
|
|
||||||
});
|
|
||||||
console.log('✅ Your migration completed');
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// Rollback here
|
|
||||||
await queryInterface.removeColumn('table', 'column');
|
|
||||||
console.log('✅ Rollback completed');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Register in migrate.ts
|
|
||||||
```typescript
|
|
||||||
// Add at top
|
|
||||||
import * as m15 from '../migrations/20251106-your-description';
|
|
||||||
|
|
||||||
// Add in run() function after m14
|
|
||||||
await (m15 as any).up(sequelize.getQueryInterface());
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Test
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
# or
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🎯 Benefits
|
|
||||||
|
|
||||||
### For Development
|
|
||||||
- ✅ **No manual steps** - migrations run automatically
|
|
||||||
- ✅ **Consistent state** - everyone on team has same schema
|
|
||||||
- ✅ **Error prevention** - server won't start with schema mismatch
|
|
||||||
- ✅ **Fast iteration** - add migration, restart, test
|
|
||||||
|
|
||||||
### For Production
|
|
||||||
- ✅ **Idempotent** - safe to run multiple times
|
|
||||||
- ✅ **Versioned** - migrations tracked in git
|
|
||||||
- ✅ **Rollback support** - down() functions for reverting
|
|
||||||
- ✅ **Error handling** - clear failure messages
|
|
||||||
|
|
||||||
### For Team
|
|
||||||
- ✅ **TypeScript** - type-safe migrations
|
|
||||||
- ✅ **Documentation** - comprehensive guides
|
|
||||||
- ✅ **Best practices** - professional .NET team standards
|
|
||||||
- ✅ **Clear workflow** - easy to onboard new developers
|
|
||||||
|
|
||||||
## 🛡️ Safety Features
|
|
||||||
|
|
||||||
### Migration Execution
|
|
||||||
- Stops on first error
|
|
||||||
- Exits with error code 1 on failure
|
|
||||||
- Prevents server startup if migrations fail
|
|
||||||
- Detailed error logging
|
|
||||||
|
|
||||||
### Idempotency
|
|
||||||
All migrations should be idempotent (safe to run multiple times):
|
|
||||||
```typescript
|
|
||||||
// Check before adding
|
|
||||||
const tableDesc = await queryInterface.describeTable('table');
|
|
||||||
if (!tableDesc.column) {
|
|
||||||
await queryInterface.addColumn(/* ... */);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Transactions
|
|
||||||
For complex migrations, wrap in transaction:
|
|
||||||
```typescript
|
|
||||||
const transaction = await queryInterface.sequelize.transaction();
|
|
||||||
try {
|
|
||||||
await queryInterface.addColumn(/* ... */, { transaction });
|
|
||||||
await queryInterface.addIndex(/* ... */, { transaction });
|
|
||||||
await transaction.commit();
|
|
||||||
} catch (error) {
|
|
||||||
await transaction.rollback();
|
|
||||||
throw error;
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## 📝 Database Structure Reference
|
|
||||||
|
|
||||||
Always refer to **`backend_structure.txt`** for:
|
|
||||||
- Current table schemas
|
|
||||||
- Column types and constraints
|
|
||||||
- Foreign key relationships
|
|
||||||
- Enum values
|
|
||||||
- Index definitions
|
|
||||||
|
|
||||||
## 🧪 Testing the Setup
|
|
||||||
|
|
||||||
### Test Migration System
|
|
||||||
```bash
|
|
||||||
# Run migrations
|
|
||||||
npm run migrate
|
|
||||||
|
|
||||||
# Should see:
|
|
||||||
# 📦 Database connected
|
|
||||||
# 🔄 Running migrations...
|
|
||||||
# ✅ [migration messages]
|
|
||||||
# ✅ All migrations applied successfully
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test Auto-Run on Dev
|
|
||||||
```bash
|
|
||||||
# Start development
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Should see migrations run, then:
|
|
||||||
# 🚀 Server running on port 5000
|
|
||||||
# 📊 Environment: development
|
|
||||||
# ...
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test New Migration
|
|
||||||
1. Create test migration file
|
|
||||||
2. Register in migrate.ts
|
|
||||||
3. Run `npm run dev`
|
|
||||||
4. Verify migration executed
|
|
||||||
5. Check database schema
|
|
||||||
|
|
||||||
## 🎓 Pro Tips
|
|
||||||
|
|
||||||
1. **Always test locally first** - never test migrations in production
|
|
||||||
2. **Backup before migrating** - especially in production
|
|
||||||
3. **Keep migrations atomic** - one logical change per file
|
|
||||||
4. **Write descriptive names** - make purpose clear
|
|
||||||
5. **Add comments** - explain why, not just what
|
|
||||||
6. **Test rollbacks** - verify down() functions work
|
|
||||||
7. **Update documentation** - keep backend_structure.txt current
|
|
||||||
8. **Review before committing** - migrations are permanent
|
|
||||||
|
|
||||||
## 📞 Support
|
|
||||||
|
|
||||||
- 📘 Full Guide: `MIGRATION_WORKFLOW.md`
|
|
||||||
- 📗 Quick Reference: `MIGRATION_QUICK_REFERENCE.md`
|
|
||||||
- 📊 Database Structure: `backend_structure.txt`
|
|
||||||
|
|
||||||
## ✨ Summary
|
|
||||||
|
|
||||||
Your development workflow is now streamlined:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# That's it! This one command does everything:
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# 1. Runs all migrations ✅
|
|
||||||
# 2. Starts development server ✅
|
|
||||||
# 3. Enables hot reload ✅
|
|
||||||
# 4. You focus on coding ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Setup Date**: November 5, 2025
|
|
||||||
**Total Migrations**: 14
|
|
||||||
**Auto-Run**: ✅ Enabled
|
|
||||||
**Status**: 🟢 Production Ready
|
|
||||||
**Team**: Royal Enfield .NET Expert Team
|
|
||||||
|
|
||||||
241
Business_Days_Calculation_Recommendations.md
Normal file
241
Business_Days_Calculation_Recommendations.md
Normal file
@ -0,0 +1,241 @@
|
|||||||
|
# Business Days Calculation - Current Issues & Recommendations
|
||||||
|
|
||||||
|
## 🔴 **CRITICAL ISSUE: TAT Processor Using Wrong Calculation**
|
||||||
|
|
||||||
|
### Current Problem:
|
||||||
|
In `Re_Backend/src/queues/tatProcessor.ts` (lines 64-65), the TAT calculation uses **simple calendar hours**:
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
const elapsedMs = now.getTime() - new Date(levelStartTime).getTime();
|
||||||
|
const elapsedHours = elapsedMs / (1000 * 60 * 60);
|
||||||
|
```
|
||||||
|
|
||||||
|
**This is WRONG because:**
|
||||||
|
- ❌ Counts ALL hours (24/7), including nights, weekends, holidays
|
||||||
|
- ❌ Doesn't respect working hours (9 AM - 6 PM)
|
||||||
|
- ❌ Doesn't exclude weekends for STANDARD priority
|
||||||
|
- ❌ Doesn't exclude holidays
|
||||||
|
- ❌ Causes incorrect TAT breach alerts
|
||||||
|
|
||||||
|
### ✅ **Solution Available:**
|
||||||
|
You already have a proper function `calculateElapsedWorkingHours()` in `tatTimeUtils.ts` that:
|
||||||
|
- ✅ Respects working hours (9 AM - 6 PM)
|
||||||
|
- ✅ Excludes weekends for STANDARD priority
|
||||||
|
- ✅ Excludes holidays
|
||||||
|
- ✅ Handles EXPRESS vs STANDARD differently
|
||||||
|
- ✅ Uses minute-by-minute precision
|
||||||
|
|
||||||
|
### 🔧 **Fix Required:**
|
||||||
|
|
||||||
|
**Update `tatProcessor.ts` to use proper working hours calculation:**
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// BEFORE (WRONG):
|
||||||
|
const elapsedMs = now.getTime() - new Date(levelStartTime).getTime();
|
||||||
|
const elapsedHours = elapsedMs / (1000 * 60 * 60);
|
||||||
|
|
||||||
|
// AFTER (CORRECT):
|
||||||
|
import { calculateElapsedWorkingHours } from '@utils/tatTimeUtils';
|
||||||
|
const priority = ((workflow as any).priority || 'STANDARD').toString().toLowerCase();
|
||||||
|
const elapsedHours = await calculateElapsedWorkingHours(levelStartTime, now, priority);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 **Business Days Calculation for Workflow Aging Report**
|
||||||
|
|
||||||
|
### Current Situation:
|
||||||
|
- ✅ You have `calculateElapsedWorkingHours()` - calculates hours
|
||||||
|
- ❌ You DON'T have `calculateBusinessDays()` - calculates days
|
||||||
|
|
||||||
|
### Need:
|
||||||
|
For the **Workflow Aging Report**, you need to show "Days Open" as **business days** (excluding weekends and holidays), not calendar days.
|
||||||
|
|
||||||
|
### 🔧 **Solution: Add Business Days Function**
|
||||||
|
|
||||||
|
Add this function to `Re_Backend/src/utils/tatTimeUtils.ts`:
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
/**
|
||||||
|
* Calculate business days between two dates
|
||||||
|
* Excludes weekends and holidays
|
||||||
|
* @param startDate - Start date
|
||||||
|
* @param endDate - End date (defaults to now)
|
||||||
|
* @param priority - 'express' or 'standard' (express includes weekends, standard excludes)
|
||||||
|
* @returns Number of business days
|
||||||
|
*/
|
||||||
|
export async function calculateBusinessDays(
|
||||||
|
startDate: Date | string,
|
||||||
|
endDate: Date | string | null = null,
|
||||||
|
priority: string = 'standard'
|
||||||
|
): Promise<number> {
|
||||||
|
await loadWorkingHoursCache();
|
||||||
|
await loadHolidaysCache();
|
||||||
|
|
||||||
|
let start = dayjs(startDate).startOf('day');
|
||||||
|
const end = dayjs(endDate || new Date()).startOf('day');
|
||||||
|
|
||||||
|
// In test mode, use calendar days
|
||||||
|
if (isTestMode()) {
|
||||||
|
return end.diff(start, 'day') + 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
const config = workingHoursCache || {
|
||||||
|
startHour: TAT_CONFIG.WORK_START_HOUR,
|
||||||
|
endHour: TAT_CONFIG.WORK_END_HOUR,
|
||||||
|
startDay: TAT_CONFIG.WORK_START_DAY,
|
||||||
|
endDay: TAT_CONFIG.WORK_END_DAY
|
||||||
|
};
|
||||||
|
|
||||||
|
let businessDays = 0;
|
||||||
|
let current = start;
|
||||||
|
|
||||||
|
// Count each day from start to end (inclusive)
|
||||||
|
while (current.isBefore(end) || current.isSame(end, 'day')) {
|
||||||
|
const dayOfWeek = current.day(); // 0 = Sunday, 6 = Saturday
|
||||||
|
const dateStr = current.format('YYYY-MM-DD');
|
||||||
|
|
||||||
|
// For express priority: count all days (including weekends) but exclude holidays
|
||||||
|
// For standard priority: count only working days (Mon-Fri) and exclude holidays
|
||||||
|
const isWorkingDay = priority === 'express'
|
||||||
|
? true // Express includes weekends
|
||||||
|
: (dayOfWeek >= config.startDay && dayOfWeek <= config.endDay);
|
||||||
|
|
||||||
|
const isNotHoliday = !holidaysCache.has(dateStr);
|
||||||
|
|
||||||
|
if (isWorkingDay && isNotHoliday) {
|
||||||
|
businessDays++;
|
||||||
|
}
|
||||||
|
|
||||||
|
current = current.add(1, 'day');
|
||||||
|
|
||||||
|
// Safety check to prevent infinite loops
|
||||||
|
if (current.diff(start, 'day') > 730) { // 2 years
|
||||||
|
console.error('[TAT] Safety break - exceeded 2 years in business days calculation');
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return businessDays;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 **Summary of Issues & Fixes**
|
||||||
|
|
||||||
|
### Issue 1: TAT Processor Using Calendar Hours ✅ **FIXED**
|
||||||
|
- **File:** `Re_Backend/src/queues/tatProcessor.ts`
|
||||||
|
- **Line:** 64-65 (now 66-77)
|
||||||
|
- **Problem:** Uses simple calendar hours instead of working hours
|
||||||
|
- **Impact:** Incorrect TAT breach calculations
|
||||||
|
- **Fix:** ✅ Replaced with `calculateElapsedWorkingHours()` and `addWorkingHours()`/`addWorkingHoursExpress()`
|
||||||
|
- **Status:** ✅ **COMPLETED** - Now uses proper working hours calculation
|
||||||
|
|
||||||
|
### Issue 2: Missing Business Days Function ✅ **FIXED**
|
||||||
|
- **File:** `Re_Backend/src/utils/tatTimeUtils.ts`
|
||||||
|
- **Problem:** No function to calculate business days count
|
||||||
|
- **Impact:** Workflow Aging Report shows calendar days instead of business days
|
||||||
|
- **Fix:** ✅ Added `calculateBusinessDays()` function (lines 697-758)
|
||||||
|
- **Status:** ✅ **COMPLETED** - Function implemented and exported
|
||||||
|
|
||||||
|
### Issue 3: Workflow Aging Report Using Calendar Days ✅ **FIXED**
|
||||||
|
- **File:** `Re_Backend/src/services/dashboard.service.ts`
|
||||||
|
- **Problem:** Will use calendar days if not fixed
|
||||||
|
- **Impact:** Incorrect "Days Open" calculation
|
||||||
|
- **Fix:** ✅ Uses `calculateBusinessDays()` in report endpoint (getWorkflowAgingReport method)
|
||||||
|
- **Status:** ✅ **COMPLETED** - Report now uses business days calculation
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🛠️ **Implementation Steps** ✅ **ALL COMPLETED**
|
||||||
|
|
||||||
|
### Step 1: Fix TAT Processor (CRITICAL) ✅ **DONE**
|
||||||
|
1. ✅ Opened `Re_Backend/src/queues/tatProcessor.ts`
|
||||||
|
2. ✅ Imported `calculateElapsedWorkingHours`, `addWorkingHours`, `addWorkingHoursExpress` from `@utils/tatTimeUtils`
|
||||||
|
3. ✅ Replaced lines 64-65 with proper working hours calculation (now lines 66-77)
|
||||||
|
4. ✅ Gets priority from workflow
|
||||||
|
5. ⏳ **TODO:** Test TAT breach alerts
|
||||||
|
|
||||||
|
### Step 2: Add Business Days Function ✅ **DONE**
|
||||||
|
1. ✅ Opened `Re_Backend/src/utils/tatTimeUtils.ts`
|
||||||
|
2. ✅ Added `calculateBusinessDays()` function (lines 697-758)
|
||||||
|
3. ✅ Exported the function
|
||||||
|
4. ⏳ **TODO:** Test with various date ranges
|
||||||
|
|
||||||
|
### Step 3: Update Workflow Aging Report ✅ **DONE**
|
||||||
|
1. ✅ Built report endpoint using `calculateBusinessDays()`
|
||||||
|
2. ✅ Filters requests where `businessDays > threshold`
|
||||||
|
3. ✅ Displays business days instead of calendar days
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ✅ **What's Already Working**
|
||||||
|
|
||||||
|
- ✅ `calculateElapsedWorkingHours()` - Properly calculates working hours
|
||||||
|
- ✅ `calculateSLAStatus()` - Comprehensive SLA calculation
|
||||||
|
- ✅ Working hours configuration (from admin settings)
|
||||||
|
- ✅ Holiday support (from database)
|
||||||
|
- ✅ Priority-based calculation (express vs standard)
|
||||||
|
- ✅ Used correctly in `approval.service.ts` and `dashboard.service.ts`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 **Priority Order**
|
||||||
|
|
||||||
|
1. **🔴 CRITICAL:** Fix TAT Processor (affects all TAT calculations)
|
||||||
|
2. **🟡 HIGH:** Add Business Days Function (needed for reports)
|
||||||
|
3. **🟡 HIGH:** Update Workflow Aging Report to use business days
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📝 **Code Example: Fixed TAT Processor**
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// In tatProcessor.ts, around line 60-70
|
||||||
|
import { calculateElapsedWorkingHours } from '@utils/tatTimeUtils';
|
||||||
|
|
||||||
|
// ... existing code ...
|
||||||
|
|
||||||
|
const tatHours = Number((approvalLevel as any).tatHours || 0);
|
||||||
|
const levelStartTime = (approvalLevel as any).levelStartTime || (approvalLevel as any).createdAt;
|
||||||
|
const now = new Date();
|
||||||
|
|
||||||
|
// FIXED: Use proper working hours calculation
|
||||||
|
const priority = ((workflow as any).priority || 'STANDARD').toString().toLowerCase();
|
||||||
|
const elapsedHours = await calculateElapsedWorkingHours(levelStartTime, now, priority);
|
||||||
|
const remainingHours = Math.max(0, tatHours - elapsedHours);
|
||||||
|
const expectedCompletionTime = dayjs(levelStartTime).add(tatHours, 'hour').toDate();
|
||||||
|
|
||||||
|
// ... rest of code ...
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🧪 **Testing Recommendations**
|
||||||
|
|
||||||
|
1. **Test TAT Breach Calculation:**
|
||||||
|
- Create a request with 8-hour TAT
|
||||||
|
- Submit on Friday 5 PM
|
||||||
|
- Should NOT breach until Monday 1 PM (next working hour)
|
||||||
|
- Currently will breach on Saturday 1 AM (wrong!)
|
||||||
|
|
||||||
|
2. **Test Business Days:**
|
||||||
|
- Start: Monday, Jan 1
|
||||||
|
- End: Friday, Jan 5
|
||||||
|
- Should return: 5 business days (not 5 calendar days if there are holidays)
|
||||||
|
|
||||||
|
3. **Test Express vs Standard:**
|
||||||
|
- Express: Should count weekends
|
||||||
|
- Standard: Should exclude weekends
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📚 **Related Files**
|
||||||
|
|
||||||
|
- `Re_Backend/src/queues/tatProcessor.ts` - ✅ **FIXED** - Now uses `calculateElapsedWorkingHours()` and proper deadline calculation
|
||||||
|
- `Re_Backend/src/utils/tatTimeUtils.ts` - ✅ **FIXED** - Added `calculateBusinessDays()` function
|
||||||
|
- `Re_Backend/src/services/approval.service.ts` - ✅ Already using correct calculation
|
||||||
|
- `Re_Backend/src/services/dashboard.service.ts` - ✅ **FIXED** - Uses `calculateBusinessDays()` in Workflow Aging Report
|
||||||
|
- `Re_Backend/src/services/workflow.service.ts` - ✅ Already using correct calculation
|
||||||
|
|
||||||
@ -1,571 +0,0 @@
|
|||||||
# 🎉 Complete TAT Implementation Guide
|
|
||||||
|
|
||||||
## ✅ EVERYTHING IS READY!
|
|
||||||
|
|
||||||
You now have a **production-ready TAT notification system** with:
|
|
||||||
- ✅ Automated notifications to approvers (50%, 75%, 100%)
|
|
||||||
- ✅ Complete alert storage in database
|
|
||||||
- ✅ Enhanced UI display with detailed time tracking
|
|
||||||
- ✅ Full KPI reporting capabilities
|
|
||||||
- ✅ Test mode for fast development
|
|
||||||
- ✅ API endpoints for custom queries
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Enhanced Alert Display
|
|
||||||
|
|
||||||
### **What Approvers See in Workflow Tab:**
|
|
||||||
|
|
||||||
```
|
|
||||||
┌────────────────────────────────────────────────────────────┐
|
|
||||||
│ Step 2: Lisa Wong (Finance Manager) │
|
|
||||||
│ Status: pending TAT: 12h Elapsed: 6.5h │
|
|
||||||
│ │
|
|
||||||
│ ┌────────────────────────────────────────────────────┐ │
|
|
||||||
│ │ ⏳ Reminder 1 - 50% TAT Threshold [WARNING] │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ 50% of SLA breach reminder have been sent │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ Allocated: 12h │ Elapsed: 6.0h │ │
|
|
||||||
│ │ Remaining: 6.0h │ Due by: Oct 7 │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ Reminder sent by system automatically │ │
|
|
||||||
│ │ Sent at: Oct 6 at 2:30 PM │ │
|
|
||||||
│ └────────────────────────────────────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ ┌────────────────────────────────────────────────────┐ │
|
|
||||||
│ │ ⚠️ Reminder 2 - 75% TAT Threshold [WARNING] │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ 75% of SLA breach reminder have been sent │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ Allocated: 12h │ Elapsed: 9.0h │ │
|
|
||||||
│ │ Remaining: 3.0h │ Due by: Oct 7 │ │
|
|
||||||
│ │ │ │
|
|
||||||
│ │ Reminder sent by system automatically │ │
|
|
||||||
│ │ Sent at: Oct 6 at 6:30 PM │ │
|
|
||||||
│ └────────────────────────────────────────────────────┘ │
|
|
||||||
└────────────────────────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Start (3 Steps)
|
|
||||||
|
|
||||||
### **Step 1: Setup Upstash Redis** (2 minutes)
|
|
||||||
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Create free account
|
|
||||||
3. Create database: `redis-tat-dev`
|
|
||||||
4. Copy URL: `rediss://default:PASSWORD@host.upstash.io:6379`
|
|
||||||
|
|
||||||
### **Step 2: Configure Backend**
|
|
||||||
|
|
||||||
Edit `Re_Backend/.env`:
|
|
||||||
```bash
|
|
||||||
# Add these lines:
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_HOST.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 3: Restart & Test**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**You should see:**
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Worker is ready and listening
|
|
||||||
⏰ TAT Configuration:
|
|
||||||
- Test Mode: ENABLED (1 hour = 1 minute)
|
|
||||||
- Working Hours: 9:00 - 18:00
|
|
||||||
- Redis: rediss://***@upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Test It (6 Minutes)
|
|
||||||
|
|
||||||
1. **Create Request** with 6-hour TAT
|
|
||||||
2. **Submit Request**
|
|
||||||
3. **Open Request Detail** → Workflow tab
|
|
||||||
4. **Watch Alerts Appear**:
|
|
||||||
- 3 min: ⏳ 50% alert with full details
|
|
||||||
- 4.5 min: ⚠️ 75% alert with full details
|
|
||||||
- 6 min: ⏰ 100% breach with full details
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📦 What's Been Implemented
|
|
||||||
|
|
||||||
### **Backend Components:**
|
|
||||||
|
|
||||||
| Component | Purpose | File |
|
|
||||||
|-----------|---------|------|
|
|
||||||
| **TAT Time Utils** | Working hours calculation | `utils/tatTimeUtils.ts` |
|
|
||||||
| **TAT Queue** | BullMQ queue setup | `queues/tatQueue.ts` |
|
|
||||||
| **TAT Worker** | Background job processor | `queues/tatWorker.ts` |
|
|
||||||
| **TAT Processor** | Alert handler | `queues/tatProcessor.ts` |
|
|
||||||
| **TAT Scheduler** | Job scheduling service | `services/tatScheduler.service.ts` |
|
|
||||||
| **TAT Alert Model** | Database model | `models/TatAlert.ts` |
|
|
||||||
| **TAT Controller** | API endpoints | `controllers/tat.controller.ts` |
|
|
||||||
| **TAT Routes** | API routes | `routes/tat.routes.ts` |
|
|
||||||
| **TAT Config** | Configuration | `config/tat.config.ts` |
|
|
||||||
|
|
||||||
### **Database:**
|
|
||||||
|
|
||||||
| Object | Purpose |
|
|
||||||
|--------|---------|
|
|
||||||
| `tat_alerts` table | Store all TAT notifications |
|
|
||||||
| `approval_levels` (updated) | Added 4 TAT status fields |
|
|
||||||
| 8 KPI Views | Pre-aggregated reporting data |
|
|
||||||
|
|
||||||
### **Frontend:**
|
|
||||||
|
|
||||||
| Component | Change |
|
|
||||||
|-----------|--------|
|
|
||||||
| `RequestDetail.tsx` | Display TAT alerts in workflow tab |
|
|
||||||
| Enhanced cards | Show detailed time tracking |
|
|
||||||
| Test mode indicator | Purple badge when in test mode |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔑 Key Features
|
|
||||||
|
|
||||||
### **1. Approver-Specific Alerts** ✅
|
|
||||||
- Sent ONLY to current approver
|
|
||||||
- NOT to initiator or previous approvers
|
|
||||||
- Each level gets its own alert set
|
|
||||||
|
|
||||||
### **2. Detailed Time Tracking** ✅
|
|
||||||
- Allocated hours
|
|
||||||
- Elapsed hours (when alert sent)
|
|
||||||
- Remaining hours (color-coded if critical)
|
|
||||||
- Due date/time
|
|
||||||
|
|
||||||
### **3. Test Mode Support** ✅
|
|
||||||
- 1 hour = 1 minute for fast testing
|
|
||||||
- Purple badge indicator
|
|
||||||
- Clear note to prevent confusion
|
|
||||||
- Easy toggle in `.env`
|
|
||||||
|
|
||||||
### **4. Complete Audit Trail** ✅
|
|
||||||
- Every alert stored in database
|
|
||||||
- Completion status tracked
|
|
||||||
- Response time measured
|
|
||||||
- KPI-ready data
|
|
||||||
|
|
||||||
### **5. Visual Clarity** ✅
|
|
||||||
- Color-coded by threshold (yellow/orange/red)
|
|
||||||
- Icons (⏳/⚠️/⏰)
|
|
||||||
- Status badges (WARNING/BREACHED)
|
|
||||||
- Grid layout for time details
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 KPI Capabilities
|
|
||||||
|
|
||||||
### **All Your Required KPIs Supported:**
|
|
||||||
|
|
||||||
#### Request Volume & Status ✅
|
|
||||||
- Total Requests Created
|
|
||||||
- Open Requests (with age)
|
|
||||||
- Approved/Rejected Requests
|
|
||||||
|
|
||||||
#### TAT Efficiency ✅
|
|
||||||
- Average TAT Compliance %
|
|
||||||
- Avg Approval Cycle Time
|
|
||||||
- Delayed Workflows
|
|
||||||
- Breach History & Trends
|
|
||||||
|
|
||||||
#### Approver Load ✅
|
|
||||||
- Pending Actions (My Queue)
|
|
||||||
- Approvals Completed
|
|
||||||
- Response Time After Alerts
|
|
||||||
|
|
||||||
#### Engagement & Quality ✅
|
|
||||||
- Comments/Work Notes
|
|
||||||
- Documents Uploaded
|
|
||||||
- Collaboration Metrics
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Production Deployment
|
|
||||||
|
|
||||||
### **When Ready for Production:**
|
|
||||||
|
|
||||||
1. **Disable Test Mode:**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Choose Redis Option:**
|
|
||||||
|
|
||||||
**Option A: Keep Upstash** (Recommended)
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:...@upstash.io:6379
|
|
||||||
```
|
|
||||||
- ✅ Zero maintenance
|
|
||||||
- ✅ Global CDN
|
|
||||||
- ✅ Auto-scaling
|
|
||||||
|
|
||||||
**Option B: Self-Hosted Redis**
|
|
||||||
```bash
|
|
||||||
# On Linux server:
|
|
||||||
sudo apt install redis-server -y
|
|
||||||
sudo systemctl start redis-server
|
|
||||||
|
|
||||||
# .env
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
```
|
|
||||||
- ✅ Full control
|
|
||||||
- ✅ No external dependency
|
|
||||||
- ✅ Free forever
|
|
||||||
|
|
||||||
3. **Set Working Hours:**
|
|
||||||
```bash
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Restart Backend**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Complete Documentation Index
|
|
||||||
|
|
||||||
| Document | Purpose | When to Read |
|
|
||||||
|----------|---------|--------------|
|
|
||||||
| **START_HERE.md** | Quick setup | Read first! |
|
|
||||||
| **TAT_QUICK_START.md** | 5-min guide | Getting started |
|
|
||||||
| **TAT_ENHANCED_DISPLAY_SUMMARY.md** | UI guide | Understanding display |
|
|
||||||
| **COMPLETE_TAT_IMPLEMENTATION_GUIDE.md** | This doc | Overview |
|
|
||||||
| **docs/TAT_NOTIFICATION_SYSTEM.md** | Architecture | Deep dive |
|
|
||||||
| **docs/KPI_REPORTING_SYSTEM.md** | KPI queries | Building reports |
|
|
||||||
| **docs/UPSTASH_SETUP_GUIDE.md** | Redis setup | Redis config |
|
|
||||||
| **UPSTASH_QUICK_REFERENCE.md** | Commands | Daily reference |
|
|
||||||
| **KPI_SETUP_COMPLETE.md** | KPI summary | KPI overview |
|
|
||||||
| **TAT_ALERTS_DISPLAY_COMPLETE.md** | Display docs | UI integration |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Troubleshooting
|
|
||||||
|
|
||||||
### **No Alerts Showing in UI?**
|
|
||||||
|
|
||||||
**Check:**
|
|
||||||
1. Redis connected? Look for "Connected to Redis" in logs
|
|
||||||
2. Request submitted? (Not just created)
|
|
||||||
3. Waited long enough? (3 min in test mode, 12h in production for 24h TAT)
|
|
||||||
4. Check browser console for errors
|
|
||||||
5. Verify `tatAlerts` in API response
|
|
||||||
|
|
||||||
**Debug:**
|
|
||||||
```sql
|
|
||||||
-- Check if alerts exist in database
|
|
||||||
SELECT * FROM tat_alerts
|
|
||||||
WHERE request_id = 'YOUR_REQUEST_ID'
|
|
||||||
ORDER BY alert_sent_at;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Alerts Not Triggering?**
|
|
||||||
|
|
||||||
**Check:**
|
|
||||||
1. TAT worker running? Look for "TAT Worker: Initialized" in logs
|
|
||||||
2. Jobs scheduled? Look for "TAT jobs scheduled" in logs
|
|
||||||
3. Redis queue status:
|
|
||||||
```bash
|
|
||||||
# In Upstash Console → CLI:
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Confusing Times in Test Mode?**
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
- Look for purple "TEST MODE" badge
|
|
||||||
- Read note: "Test mode active (1 hour = 1 minute)"
|
|
||||||
- For production feel, set `TAT_TEST_MODE=false`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 Sample KPI Queries
|
|
||||||
|
|
||||||
### **TAT Compliance This Month:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
ROUND(
|
|
||||||
COUNT(CASE WHEN was_completed_on_time = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(*), 0),
|
|
||||||
2
|
|
||||||
) as compliance_rate
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE DATE(alert_sent_at) >= DATE_TRUNC('month', CURRENT_DATE)
|
|
||||||
AND was_completed_on_time IS NOT NULL;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Top Performers (On-Time Completion):**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
u.display_name,
|
|
||||||
u.department,
|
|
||||||
COUNT(DISTINCT ta.level_id) as total_approvals,
|
|
||||||
COUNT(CASE WHEN ta.was_completed_on_time = true THEN 1 END) as on_time,
|
|
||||||
ROUND(
|
|
||||||
COUNT(CASE WHEN ta.was_completed_on_time = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(DISTINCT ta.level_id), 0),
|
|
||||||
2
|
|
||||||
) as compliance_rate
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN users u ON ta.approver_id = u.user_id
|
|
||||||
WHERE ta.was_completed_on_time IS NOT NULL
|
|
||||||
GROUP BY u.user_id, u.display_name, u.department
|
|
||||||
ORDER BY compliance_rate DESC
|
|
||||||
LIMIT 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Breach Trend (Last 30 Days):**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
DATE(alert_sent_at) as date,
|
|
||||||
COUNT(CASE WHEN alert_type = 'TAT_50' THEN 1 END) as warnings_50,
|
|
||||||
COUNT(CASE WHEN alert_type = 'TAT_75' THEN 1 END) as warnings_75,
|
|
||||||
COUNT(CASE WHEN is_breached = true THEN 1 END) as breaches
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE alert_sent_at >= CURRENT_DATE - INTERVAL '30 days'
|
|
||||||
GROUP BY DATE(alert_sent_at)
|
|
||||||
ORDER BY date DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✨ Benefits Recap
|
|
||||||
|
|
||||||
### **For Approvers:**
|
|
||||||
- 📧 Get timely notifications (50%, 75%, 100%)
|
|
||||||
- 📊 See historical reminders in request details
|
|
||||||
- ⏱️ Know exactly how much time remaining
|
|
||||||
- 🎯 Clear deadlines and expectations
|
|
||||||
|
|
||||||
### **For Management:**
|
|
||||||
- 📈 Track TAT compliance rates
|
|
||||||
- 👥 Identify bottlenecks and delays
|
|
||||||
- 📊 Generate performance reports
|
|
||||||
- 🎯 Data-driven decision making
|
|
||||||
|
|
||||||
### **For System Admins:**
|
|
||||||
- 🔧 Easy configuration
|
|
||||||
- 📝 Complete audit trail
|
|
||||||
- 🚀 Scalable architecture
|
|
||||||
- 🛠️ Robust error handling
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Next Steps
|
|
||||||
|
|
||||||
1. ✅ **Setup Redis** (Upstash recommended)
|
|
||||||
2. ✅ **Enable Test Mode** (`TAT_TEST_MODE=true`)
|
|
||||||
3. ✅ **Test with 6-hour TAT** (becomes 6 minutes)
|
|
||||||
4. ✅ **Verify alerts display** in Request Detail
|
|
||||||
5. ✅ **Check database** for stored alerts
|
|
||||||
6. ✅ **Run KPI queries** to verify data
|
|
||||||
7. ✅ **Build dashboards** using KPI views
|
|
||||||
8. ✅ **Deploy to production** when ready
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📞 Support
|
|
||||||
|
|
||||||
**Documentation:**
|
|
||||||
- Read `START_HERE.md` for immediate setup
|
|
||||||
- Check `TAT_QUICK_START.md` for testing
|
|
||||||
- Review `docs/` folder for detailed guides
|
|
||||||
|
|
||||||
**Troubleshooting:**
|
|
||||||
- Check backend logs: `logs/app.log`
|
|
||||||
- Verify Redis: Upstash Console → CLI → `PING`
|
|
||||||
- Query database: See KPI queries above
|
|
||||||
- Review worker status: Look for "TAT Worker" in logs
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 Status Summary
|
|
||||||
|
|
||||||
| Component | Status | Notes |
|
|
||||||
|-----------|--------|-------|
|
|
||||||
| **Packages Installed** | ✅ | bullmq, ioredis, dayjs |
|
|
||||||
| **Database Schema** | ✅ | tat_alerts table + 4 fields in approval_levels |
|
|
||||||
| **KPI Views** | ✅ | 8 views created |
|
|
||||||
| **Backend Services** | ✅ | Scheduler, processor, worker |
|
|
||||||
| **API Endpoints** | ✅ | 5 TAT endpoints |
|
|
||||||
| **Frontend Display** | ✅ | Enhanced cards in workflow tab |
|
|
||||||
| **Test Mode** | ✅ | Configurable via .env |
|
|
||||||
| **Documentation** | ✅ | 10+ guides created |
|
|
||||||
| **Migrations** | ✅ | All applied successfully |
|
|
||||||
| **Redis Connection** | ⏳ | **You need to setup** |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Final Checklist
|
|
||||||
|
|
||||||
- [ ] Read `START_HERE.md`
|
|
||||||
- [ ] Setup Upstash Redis (https://console.upstash.com/)
|
|
||||||
- [ ] Add `REDIS_URL` to `.env`
|
|
||||||
- [ ] Set `TAT_TEST_MODE=true`
|
|
||||||
- [ ] Restart backend server
|
|
||||||
- [ ] Verify logs show "Connected to Redis"
|
|
||||||
- [ ] Create test request (6-hour TAT)
|
|
||||||
- [ ] Submit request
|
|
||||||
- [ ] Open Request Detail → Workflow tab
|
|
||||||
- [ ] See first alert at 3 minutes ⏳
|
|
||||||
- [ ] See second alert at 4.5 minutes ⚠️
|
|
||||||
- [ ] See third alert at 6 minutes ⏰
|
|
||||||
- [ ] Verify in database: `SELECT * FROM tat_alerts`
|
|
||||||
- [ ] Test KPI queries
|
|
||||||
- [ ] Approve request and verify completion tracking
|
|
||||||
|
|
||||||
✅ **All done? You're production ready!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📂 Files Created/Modified
|
|
||||||
|
|
||||||
### **New Files (35):**
|
|
||||||
|
|
||||||
**Backend:**
|
|
||||||
- `src/utils/tatTimeUtils.ts`
|
|
||||||
- `src/queues/tatQueue.ts`
|
|
||||||
- `src/queues/tatWorker.ts`
|
|
||||||
- `src/queues/tatProcessor.ts`
|
|
||||||
- `src/services/tatScheduler.service.ts`
|
|
||||||
- `src/models/TatAlert.ts`
|
|
||||||
- `src/controllers/tat.controller.ts`
|
|
||||||
- `src/routes/tat.routes.ts`
|
|
||||||
- `src/config/tat.config.ts`
|
|
||||||
- `src/migrations/20251104-add-tat-alert-fields.ts`
|
|
||||||
- `src/migrations/20251104-create-tat-alerts.ts`
|
|
||||||
- `src/migrations/20251104-create-kpi-views.ts`
|
|
||||||
|
|
||||||
**Documentation:**
|
|
||||||
- `START_HERE.md`
|
|
||||||
- `TAT_QUICK_START.md`
|
|
||||||
- `UPSTASH_QUICK_REFERENCE.md`
|
|
||||||
- `INSTALL_REDIS.txt`
|
|
||||||
- `KPI_SETUP_COMPLETE.md`
|
|
||||||
- `TAT_ALERTS_DISPLAY_COMPLETE.md`
|
|
||||||
- `TAT_ENHANCED_DISPLAY_SUMMARY.md`
|
|
||||||
- `COMPLETE_TAT_IMPLEMENTATION_GUIDE.md` (this file)
|
|
||||||
- `docs/TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
- `docs/TAT_TESTING_GUIDE.md`
|
|
||||||
- `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
- `docs/KPI_REPORTING_SYSTEM.md`
|
|
||||||
- `docs/REDIS_SETUP_WINDOWS.md`
|
|
||||||
|
|
||||||
### **Modified Files (7):**
|
|
||||||
|
|
||||||
**Backend:**
|
|
||||||
- `src/models/ApprovalLevel.ts` - Added TAT status fields
|
|
||||||
- `src/models/index.ts` - Export TatAlert
|
|
||||||
- `src/services/workflow.service.ts` - Include TAT alerts, schedule jobs
|
|
||||||
- `src/services/approval.service.ts` - Cancel jobs, update alerts
|
|
||||||
- `src/server.ts` - Initialize worker, log config
|
|
||||||
- `src/routes/index.ts` - Register TAT routes
|
|
||||||
- `src/scripts/migrate.ts` - Include new migrations
|
|
||||||
|
|
||||||
**Frontend:**
|
|
||||||
- `src/pages/RequestDetail/RequestDetail.tsx` - Display TAT alerts
|
|
||||||
|
|
||||||
**Infrastructure:**
|
|
||||||
- `env.example` - Added Redis and test mode config
|
|
||||||
- `docker-compose.yml` - Added Redis service
|
|
||||||
- `package.json` - Added dependencies
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 💾 Database Schema Summary
|
|
||||||
|
|
||||||
### **New Table: `tat_alerts`**
|
|
||||||
```
|
|
||||||
17 columns, 7 indexes
|
|
||||||
Stores every TAT notification sent
|
|
||||||
Tracks completion status for KPIs
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Updated Table: `approval_levels`**
|
|
||||||
```
|
|
||||||
Added 4 columns:
|
|
||||||
- tat50_alert_sent
|
|
||||||
- tat75_alert_sent
|
|
||||||
- tat_breached
|
|
||||||
- tat_start_time
|
|
||||||
```
|
|
||||||
|
|
||||||
### **New Views: 8 KPI Views**
|
|
||||||
```
|
|
||||||
- vw_request_volume_summary
|
|
||||||
- vw_tat_compliance
|
|
||||||
- vw_approver_performance
|
|
||||||
- vw_tat_alerts_summary
|
|
||||||
- vw_department_summary
|
|
||||||
- vw_daily_kpi_metrics
|
|
||||||
- vw_workflow_aging
|
|
||||||
- vw_engagement_metrics
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🌟 Production Best Practices
|
|
||||||
|
|
||||||
1. **Monitor Redis Health**
|
|
||||||
- Check connection in logs
|
|
||||||
- Monitor queue size
|
|
||||||
- Set up alerts for failures
|
|
||||||
|
|
||||||
2. **Regular Database Maintenance**
|
|
||||||
- Archive old TAT alerts (> 1 year)
|
|
||||||
- Refresh materialized views if using
|
|
||||||
- Monitor query performance
|
|
||||||
|
|
||||||
3. **Test Mode Management**
|
|
||||||
- NEVER use test mode in production
|
|
||||||
- Document when test mode is on
|
|
||||||
- Clear test data regularly
|
|
||||||
|
|
||||||
4. **Alert Thresholds**
|
|
||||||
- Adjust if needed (currently 50%, 75%, 100%)
|
|
||||||
- Can be configured in `tat.config.ts`
|
|
||||||
- Consider business requirements
|
|
||||||
|
|
||||||
5. **Working Hours**
|
|
||||||
- Verify for your organization
|
|
||||||
- Update holidays if needed
|
|
||||||
- Consider time zones for global teams
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎊 Congratulations!
|
|
||||||
|
|
||||||
You've implemented a **world-class TAT notification system** with:
|
|
||||||
|
|
||||||
✅ Automated notifications
|
|
||||||
✅ Complete tracking
|
|
||||||
✅ Beautiful UI display
|
|
||||||
✅ Comprehensive KPIs
|
|
||||||
✅ Production-ready architecture
|
|
||||||
✅ Excellent documentation
|
|
||||||
|
|
||||||
**Just connect Redis and you're live!** 🚀
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**See `START_HERE.md` for immediate next steps!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Version**: 1.0.0
|
|
||||||
**Status**: ✅ Production Ready
|
|
||||||
**Team**: Royal Enfield Workflow System
|
|
||||||
|
|
||||||
363
CONFIGURATION.md
363
CONFIGURATION.md
@ -1,363 +0,0 @@
|
|||||||
# Royal Enfield Workflow Management System - Configuration Guide
|
|
||||||
|
|
||||||
## 📋 Overview
|
|
||||||
|
|
||||||
All system configurations are centralized in `src/config/system.config.ts` and can be customized via environment variables.
|
|
||||||
|
|
||||||
## ⚙️ Configuration Structure
|
|
||||||
|
|
||||||
### 1. **Working Hours**
|
|
||||||
Controls when TAT tracking is active.
|
|
||||||
|
|
||||||
```env
|
|
||||||
WORK_START_HOUR=9 # 9 AM (default)
|
|
||||||
WORK_END_HOUR=18 # 6 PM (default)
|
|
||||||
TZ=Asia/Kolkata # Timezone
|
|
||||||
```
|
|
||||||
|
|
||||||
**Working Days:** Monday - Friday (hardcoded)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 2. **TAT (Turnaround Time) Settings**
|
|
||||||
|
|
||||||
```env
|
|
||||||
TAT_TEST_MODE=false # Enable for testing (1 hour = 1 minute)
|
|
||||||
DEFAULT_EXPRESS_TAT=24 # Express priority default TAT (hours)
|
|
||||||
DEFAULT_STANDARD_TAT=72 # Standard priority default TAT (hours)
|
|
||||||
```
|
|
||||||
|
|
||||||
**TAT Thresholds** (hardcoded):
|
|
||||||
- 50% - Warning notification
|
|
||||||
- 75% - Critical notification
|
|
||||||
- 100% - Breach notification
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 3. **File Upload Limits**
|
|
||||||
|
|
||||||
```env
|
|
||||||
MAX_FILE_SIZE_MB=10 # Max file size per upload
|
|
||||||
MAX_FILES_PER_REQUEST=10 # Max files per request
|
|
||||||
ALLOWED_FILE_TYPES=pdf,doc,docx,xls,xlsx,ppt,pptx,jpg,jpeg,png,gif,txt
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 4. **Workflow Limits**
|
|
||||||
|
|
||||||
```env
|
|
||||||
MAX_APPROVAL_LEVELS=10 # Max approval stages
|
|
||||||
MAX_PARTICIPANTS_PER_REQUEST=50 # Max total participants
|
|
||||||
MAX_SPECTATORS=20 # Max spectators
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 5. **Work Notes Configuration**
|
|
||||||
|
|
||||||
```env
|
|
||||||
MAX_MESSAGE_LENGTH=2000 # Max characters per message
|
|
||||||
MAX_ATTACHMENTS_PER_NOTE=5 # Max files per work note
|
|
||||||
ENABLE_REACTIONS=true # Allow emoji reactions
|
|
||||||
ENABLE_MENTIONS=true # Allow @mentions
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 6. **Redis & Queue**
|
|
||||||
|
|
||||||
```env
|
|
||||||
REDIS_URL=redis://localhost:6379 # Redis connection string
|
|
||||||
QUEUE_CONCURRENCY=5 # Concurrent job processing
|
|
||||||
RATE_LIMIT_MAX=10 # Max requests per duration
|
|
||||||
RATE_LIMIT_DURATION=1000 # Rate limit window (ms)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 7. **Security & Session**
|
|
||||||
|
|
||||||
```env
|
|
||||||
JWT_SECRET=your_secret_min_32_characters # JWT signing key
|
|
||||||
JWT_EXPIRY=8h # Token expiration
|
|
||||||
SESSION_TIMEOUT_MINUTES=480 # 8 hours
|
|
||||||
ENABLE_2FA=false # Two-factor authentication
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 8. **Notifications**
|
|
||||||
|
|
||||||
```env
|
|
||||||
ENABLE_EMAIL_NOTIFICATIONS=true # Email alerts
|
|
||||||
ENABLE_PUSH_NOTIFICATIONS=true # Browser push
|
|
||||||
NOTIFICATION_BATCH_DELAY=5000 # Batch delay (ms)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Email SMTP** (if email enabled):
|
|
||||||
```env
|
|
||||||
SMTP_HOST=smtp.gmail.com
|
|
||||||
SMTP_PORT=587
|
|
||||||
SMTP_USER=your_email@royalenfield.com
|
|
||||||
SMTP_PASSWORD=your_password
|
|
||||||
SMTP_FROM=noreply@royalenfield.com
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 9. **Feature Flags**
|
|
||||||
|
|
||||||
```env
|
|
||||||
ENABLE_AI_CONCLUSION=true # AI-generated conclusion remarks
|
|
||||||
ENABLE_TEMPLATES=false # Template-based workflows (future)
|
|
||||||
ENABLE_ANALYTICS=true # Dashboard analytics
|
|
||||||
ENABLE_EXPORT=true # Export to CSV/PDF
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 10. **Database**
|
|
||||||
|
|
||||||
```env
|
|
||||||
DB_HOST=localhost
|
|
||||||
DB_PORT=5432
|
|
||||||
DB_NAME=re_workflow
|
|
||||||
DB_USER=postgres
|
|
||||||
DB_PASSWORD=your_password
|
|
||||||
DB_SSL=false
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 11. **Storage**
|
|
||||||
|
|
||||||
```env
|
|
||||||
STORAGE_TYPE=local # Options: local, s3, gcs
|
|
||||||
STORAGE_PATH=./uploads # Local storage path
|
|
||||||
```
|
|
||||||
|
|
||||||
**For S3 (if using cloud storage):**
|
|
||||||
```env
|
|
||||||
AWS_ACCESS_KEY_ID=your_access_key
|
|
||||||
AWS_SECRET_ACCESS_KEY=your_secret
|
|
||||||
AWS_REGION=ap-south-1
|
|
||||||
AWS_S3_BUCKET=re-workflow-documents
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Setup
|
|
||||||
|
|
||||||
### Development Environment
|
|
||||||
|
|
||||||
1. Copy example configuration:
|
|
||||||
```bash
|
|
||||||
cp .env.example .env
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Update critical values:
|
|
||||||
```env
|
|
||||||
DB_PASSWORD=your_local_postgres_password
|
|
||||||
JWT_SECRET=generate_random_32_char_string
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Enable test mode for faster TAT testing:
|
|
||||||
```env
|
|
||||||
TAT_TEST_MODE=true # 1 hour = 1 minute
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Production Environment
|
|
||||||
|
|
||||||
1. Set environment to production:
|
|
||||||
```env
|
|
||||||
NODE_ENV=production
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Configure secure secrets:
|
|
||||||
```env
|
|
||||||
JWT_SECRET=use_very_strong_secret_here
|
|
||||||
DB_PASSWORD=strong_database_password
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Disable test mode:
|
|
||||||
```env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
```
|
|
||||||
|
|
||||||
4. Enable SSL:
|
|
||||||
```env
|
|
||||||
DB_SSL=true
|
|
||||||
```
|
|
||||||
|
|
||||||
5. Configure email/push notifications with real credentials
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Configuration API
|
|
||||||
|
|
||||||
### GET `/api/v1/config`
|
|
||||||
Returns public (non-sensitive) configuration for frontend.
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"data": {
|
|
||||||
"appName": "Royal Enfield Workflow Management",
|
|
||||||
"appVersion": "1.2.0",
|
|
||||||
"workingHours": {
|
|
||||||
"START_HOUR": 9,
|
|
||||||
"END_HOUR": 18,
|
|
||||||
"START_DAY": 1,
|
|
||||||
"END_DAY": 5,
|
|
||||||
"TIMEZONE": "Asia/Kolkata"
|
|
||||||
},
|
|
||||||
"tat": {
|
|
||||||
"thresholds": {
|
|
||||||
"warning": 50,
|
|
||||||
"critical": 75,
|
|
||||||
"breach": 100
|
|
||||||
},
|
|
||||||
"testMode": false
|
|
||||||
},
|
|
||||||
"upload": {
|
|
||||||
"maxFileSizeMB": 10,
|
|
||||||
"allowedFileTypes": ["pdf", "doc", "docx", ...],
|
|
||||||
"maxFilesPerRequest": 10
|
|
||||||
},
|
|
||||||
"workflow": {
|
|
||||||
"maxApprovalLevels": 10,
|
|
||||||
"maxParticipants": 50,
|
|
||||||
"maxSpectators": 20
|
|
||||||
},
|
|
||||||
"workNotes": {
|
|
||||||
"maxMessageLength": 2000,
|
|
||||||
"maxAttachmentsPerNote": 5,
|
|
||||||
"enableReactions": true,
|
|
||||||
"enableMentions": true
|
|
||||||
},
|
|
||||||
"features": {
|
|
||||||
"ENABLE_AI_CONCLUSION": true,
|
|
||||||
"ENABLE_TEMPLATES": false,
|
|
||||||
"ENABLE_ANALYTICS": true,
|
|
||||||
"ENABLE_EXPORT": true
|
|
||||||
},
|
|
||||||
"ui": {
|
|
||||||
"DEFAULT_THEME": "light",
|
|
||||||
"DEFAULT_LANGUAGE": "en",
|
|
||||||
"DATE_FORMAT": "DD/MM/YYYY",
|
|
||||||
"TIME_FORMAT": "12h",
|
|
||||||
"CURRENCY": "INR",
|
|
||||||
"CURRENCY_SYMBOL": "₹"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Usage in Code
|
|
||||||
|
|
||||||
### Backend
|
|
||||||
```typescript
|
|
||||||
import { SYSTEM_CONFIG } from '@config/system.config';
|
|
||||||
|
|
||||||
// Access configuration
|
|
||||||
const maxLevels = SYSTEM_CONFIG.WORKFLOW.MAX_APPROVAL_LEVELS;
|
|
||||||
const workStart = SYSTEM_CONFIG.WORKING_HOURS.START_HOUR;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Frontend
|
|
||||||
```typescript
|
|
||||||
import { configService } from '@/services/configService';
|
|
||||||
|
|
||||||
// Async usage
|
|
||||||
const config = await configService.getConfig();
|
|
||||||
const maxFileSize = config.upload.maxFileSizeMB;
|
|
||||||
|
|
||||||
// Helper functions
|
|
||||||
import { getWorkingHours, getTATThresholds } from '@/services/configService';
|
|
||||||
const workingHours = await getWorkingHours();
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔐 Security Best Practices
|
|
||||||
|
|
||||||
1. **Never commit `.env`** with real credentials
|
|
||||||
2. **Use strong JWT secrets** (min 32 characters)
|
|
||||||
3. **Rotate secrets regularly** in production
|
|
||||||
4. **Use environment-specific configs** for dev/staging/prod
|
|
||||||
5. **Store secrets in secure vaults** (AWS Secrets Manager, Azure Key Vault)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 Configuration Checklist
|
|
||||||
|
|
||||||
### Before Deployment
|
|
||||||
|
|
||||||
- [ ] Set `NODE_ENV=production`
|
|
||||||
- [ ] Configure database with SSL
|
|
||||||
- [ ] Set strong JWT secret
|
|
||||||
- [ ] Disable TAT test mode
|
|
||||||
- [ ] Configure email SMTP
|
|
||||||
- [ ] Set up Redis connection
|
|
||||||
- [ ] Configure file storage (local/S3/GCS)
|
|
||||||
- [ ] Test working hours match business hours
|
|
||||||
- [ ] Verify TAT thresholds are correct
|
|
||||||
- [ ] Enable/disable feature flags as needed
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🛠️ Adding New Configuration
|
|
||||||
|
|
||||||
1. Add to `system.config.ts`:
|
|
||||||
```typescript
|
|
||||||
export const SYSTEM_CONFIG = {
|
|
||||||
// ...existing config
|
|
||||||
MY_NEW_SETTING: {
|
|
||||||
VALUE: process.env.MY_VALUE || 'default',
|
|
||||||
},
|
|
||||||
};
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Add to `getPublicConfig()` if needed on frontend:
|
|
||||||
```typescript
|
|
||||||
export function getPublicConfig() {
|
|
||||||
return {
|
|
||||||
// ...existing
|
|
||||||
myNewSetting: SYSTEM_CONFIG.MY_NEW_SETTING,
|
|
||||||
};
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Access in code:
|
|
||||||
```typescript
|
|
||||||
const value = SYSTEM_CONFIG.MY_NEW_SETTING.VALUE;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Related Files
|
|
||||||
|
|
||||||
- `src/config/system.config.ts` - Central configuration
|
|
||||||
- `src/config/tat.config.ts` - TAT-specific (re-exports from system.config)
|
|
||||||
- `src/config/constants.ts` - Legacy constants (being migrated)
|
|
||||||
- `src/routes/config.routes.ts` - Configuration API endpoint
|
|
||||||
- Frontend: `src/services/configService.ts` - Configuration fetching service
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Benefits of Centralized Configuration
|
|
||||||
|
|
||||||
✅ **Single Source of Truth** - All settings in one place
|
|
||||||
✅ **Environment-based** - Different configs for dev/staging/prod
|
|
||||||
✅ **Frontend Sync** - Frontend fetches config from backend
|
|
||||||
✅ **No Hardcoding** - All values configurable via .env
|
|
||||||
✅ **Type-Safe** - TypeScript interfaces ensure correctness
|
|
||||||
✅ **Easy Updates** - Change .env without code changes
|
|
||||||
|
|
||||||
@ -1,281 +0,0 @@
|
|||||||
# 📊 Design Document vs Actual Implementation
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
The `backend_structure.txt` is a **DESIGN DOCUMENT** that shows the intended/planned database structure. However, not all tables have been implemented yet.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ **Currently Implemented Tables**
|
|
||||||
|
|
||||||
| Table | Status | Migration File | Notes |
|
|
||||||
|-------|--------|---------------|-------|
|
|
||||||
| `users` | ✅ Implemented | (Okta-based, external) | User management |
|
|
||||||
| `workflow_requests` | ✅ Implemented | 2025103001-create-workflow-requests.ts | Core workflow |
|
|
||||||
| `approval_levels` | ✅ Implemented | 2025103002-create-approval-levels.ts | Approval hierarchy |
|
|
||||||
| `participants` | ✅ Implemented | 2025103003-create-participants.ts | Spectators, etc. |
|
|
||||||
| `documents` | ✅ Implemented | 2025103004-create-documents.ts | File uploads |
|
|
||||||
| `subscriptions` | ✅ Implemented | 20251031_01_create_subscriptions.ts | Push notifications |
|
|
||||||
| `activities` | ✅ Implemented | 20251031_02_create_activities.ts | Activity log |
|
|
||||||
| `work_notes` | ✅ Implemented | 20251031_03_create_work_notes.ts | Chat/comments |
|
|
||||||
| `work_note_attachments` | ✅ Implemented | 20251031_04_create_work_note_attachments.ts | Chat attachments |
|
|
||||||
| `tat_alerts` | ✅ Implemented | 20251104-create-tat-alerts.ts | TAT notification history |
|
|
||||||
| **`holidays`** | ✅ Implemented | 20251104-create-holidays.ts | **NEW - Not in design** |
|
|
||||||
| **`admin_configurations`** | ✅ Implemented | 20251104-create-admin-config.ts | **Similar to planned `system_settings`** |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ❌ **Planned But Not Yet Implemented**
|
|
||||||
|
|
||||||
| Table | Status | Design Location | Purpose |
|
|
||||||
|-------|--------|----------------|---------|
|
|
||||||
| `notifications` | ❌ Not Implemented | Lines 186-205 | Notification management |
|
|
||||||
| **`tat_tracking`** | ❌ Not Implemented | Lines 207-225 | **Real-time TAT tracking** |
|
|
||||||
| `conclusion_remarks` | ❌ Not Implemented | Lines 227-242 | AI-generated conclusions |
|
|
||||||
| `audit_logs` | ❌ Not Implemented | Lines 244-262 | Comprehensive audit trail |
|
|
||||||
| `user_sessions` | ❌ Not Implemented | Lines 264-280 | Session management |
|
|
||||||
| `email_logs` | ❌ Not Implemented | Lines 282-301 | Email tracking |
|
|
||||||
| `sms_logs` | ❌ Not Implemented | Lines 303-321 | SMS tracking |
|
|
||||||
| **`system_settings`** | ❌ Not Implemented | Lines 323-337 | **System configuration** |
|
|
||||||
| `workflow_templates` | ❌ Not Implemented | Lines 339-351 | Template system |
|
|
||||||
| `report_cache` | ❌ Not Implemented | Lines 353-362 | Report caching |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚠️ **Key Discrepancies**
|
|
||||||
|
|
||||||
### **1. `admin_configurations` vs `system_settings`**
|
|
||||||
|
|
||||||
**Problem:** I created `admin_configurations` which overlaps with the planned `system_settings`.
|
|
||||||
|
|
||||||
**Design (`system_settings`):**
|
|
||||||
```sql
|
|
||||||
system_settings {
|
|
||||||
setting_id PK
|
|
||||||
setting_key UK
|
|
||||||
setting_value
|
|
||||||
setting_type
|
|
||||||
setting_category
|
|
||||||
is_editable
|
|
||||||
is_sensitive
|
|
||||||
validation_rules
|
|
||||||
...
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**What I Created (`admin_configurations`):**
|
|
||||||
```sql
|
|
||||||
admin_configurations {
|
|
||||||
config_id PK
|
|
||||||
config_key UK
|
|
||||||
config_value
|
|
||||||
value_type
|
|
||||||
config_category
|
|
||||||
is_editable
|
|
||||||
is_sensitive
|
|
||||||
validation_rules
|
|
||||||
...
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Resolution Options:**
|
|
||||||
|
|
||||||
**Option A:** Rename `admin_configurations` → `system_settings`
|
|
||||||
- ✅ Matches design document
|
|
||||||
- ✅ Consistent naming
|
|
||||||
- ⚠️ Requires migration to rename table
|
|
||||||
|
|
||||||
**Option B:** Keep `admin_configurations`, skip `system_settings`
|
|
||||||
- ✅ No migration needed
|
|
||||||
- ✅ Already implemented and working
|
|
||||||
- ⚠️ Deviates from design
|
|
||||||
|
|
||||||
**Option C:** Use both tables
|
|
||||||
- ❌ Redundant
|
|
||||||
- ❌ Confusing
|
|
||||||
- ❌ Not recommended
|
|
||||||
|
|
||||||
**RECOMMENDATION:** **Option A** - Rename to `system_settings` to match design document.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **2. `tat_alerts` vs `tat_tracking`**
|
|
||||||
|
|
||||||
**Status:** These serve **DIFFERENT purposes** and should **COEXIST**.
|
|
||||||
|
|
||||||
**`tat_alerts` (Implemented):**
|
|
||||||
- Historical record of TAT alerts sent
|
|
||||||
- Stores when 50%, 75%, 100% alerts were sent
|
|
||||||
- Immutable records for audit trail
|
|
||||||
- Purpose: **Alert History**
|
|
||||||
|
|
||||||
**`tat_tracking` (Planned, Not Implemented):**
|
|
||||||
```sql
|
|
||||||
tat_tracking {
|
|
||||||
tracking_type "REQUEST or LEVEL"
|
|
||||||
tat_status "ON_TRACK to BREACHED"
|
|
||||||
elapsed_hours
|
|
||||||
remaining_hours
|
|
||||||
percentage_used
|
|
||||||
threshold_50_breached
|
|
||||||
threshold_80_breached
|
|
||||||
...
|
|
||||||
}
|
|
||||||
```
|
|
||||||
- Real-time tracking of TAT status
|
|
||||||
- Continuously updated as time passes
|
|
||||||
- Shows current TAT health
|
|
||||||
- Purpose: **Real-time Monitoring**
|
|
||||||
|
|
||||||
**Resolution:** Both tables should exist.
|
|
||||||
|
|
||||||
**RECOMMENDATION:** Implement `tat_tracking` table as per design document.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **3. `holidays` Table**
|
|
||||||
|
|
||||||
**Status:** **NEW addition** not in original design.
|
|
||||||
|
|
||||||
**Resolution:** This is fine! It's a feature enhancement that was needed for accurate TAT calculations.
|
|
||||||
|
|
||||||
**RECOMMENDATION:** Add `holidays` to the design document for future reference.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 **Recommended Actions**
|
|
||||||
|
|
||||||
### **Immediate Actions:**
|
|
||||||
|
|
||||||
1. **Rename `admin_configurations` to `system_settings`**
|
|
||||||
```sql
|
|
||||||
ALTER TABLE admin_configurations RENAME TO system_settings;
|
|
||||||
ALTER INDEX admin_configurations_pkey RENAME TO system_settings_pkey;
|
|
||||||
ALTER INDEX admin_configurations_config_category RENAME TO system_settings_config_category;
|
|
||||||
-- etc.
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Update all references in code:**
|
|
||||||
- Model: `AdminConfiguration` → `SystemSetting`
|
|
||||||
- Service: `adminConfig` → `systemSettings`
|
|
||||||
- Routes: `/admin/configurations` → `/admin/settings`
|
|
||||||
- Controller: `admin.controller.ts` → Update variable names
|
|
||||||
|
|
||||||
3. **Implement `tat_tracking` table** (as per design):
|
|
||||||
- Create migration for `tat_tracking`
|
|
||||||
- Implement model and service
|
|
||||||
- Integrate with TAT calculation system
|
|
||||||
- Use for real-time dashboard
|
|
||||||
|
|
||||||
4. **Update `backend_structure.txt`**:
|
|
||||||
- Add `holidays` table to design
|
|
||||||
- Update `system_settings` if we made any changes
|
|
||||||
- Add `tat_alerts` if not present
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Future Implementations (Phase 2):**
|
|
||||||
|
|
||||||
Based on the design document, these should be implemented next:
|
|
||||||
|
|
||||||
1. **`notifications` table** - In-app notification system
|
|
||||||
2. **`conclusion_remarks` table** - AI-generated conclusions
|
|
||||||
3. **`audit_logs` table** - Comprehensive audit trail (currently using `activities`)
|
|
||||||
4. **`email_logs` & `sms_logs`** - Communication tracking
|
|
||||||
5. **`workflow_templates`** - Template system for common workflows
|
|
||||||
6. **`report_cache`** - Performance optimization for reports
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 **Implementation Progress**
|
|
||||||
|
|
||||||
### **Core Workflow:**
|
|
||||||
- ✅ Users
|
|
||||||
- ✅ Workflow Requests
|
|
||||||
- ✅ Approval Levels
|
|
||||||
- ✅ Participants
|
|
||||||
- ✅ Documents
|
|
||||||
- ✅ Work Notes
|
|
||||||
- ✅ Activities
|
|
||||||
|
|
||||||
### **TAT & Monitoring:**
|
|
||||||
- ✅ TAT Alerts (historical)
|
|
||||||
- ✅ Holidays (for TAT calculation)
|
|
||||||
- ❌ TAT Tracking (real-time) **← MISSING**
|
|
||||||
|
|
||||||
### **Configuration & Admin:**
|
|
||||||
- ✅ Admin Configurations (needs rename to `system_settings`)
|
|
||||||
- ❌ Workflow Templates **← MISSING**
|
|
||||||
|
|
||||||
### **Notifications & Logs:**
|
|
||||||
- ✅ Subscriptions (push notifications)
|
|
||||||
- ❌ Notifications table **← MISSING**
|
|
||||||
- ❌ Email Logs **← MISSING**
|
|
||||||
- ❌ SMS Logs **← MISSING**
|
|
||||||
|
|
||||||
### **Advanced Features:**
|
|
||||||
- ❌ Conclusion Remarks (AI) **← MISSING**
|
|
||||||
- ❌ Audit Logs **← MISSING**
|
|
||||||
- ❌ Report Cache **← MISSING**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 **Alignment with Design Document**
|
|
||||||
|
|
||||||
### **What Matches Design:**
|
|
||||||
- ✅ Core workflow tables (90% match)
|
|
||||||
- ✅ Work notes system
|
|
||||||
- ✅ Document management
|
|
||||||
- ✅ Activity logging
|
|
||||||
|
|
||||||
### **What Differs:**
|
|
||||||
- ⚠️ `admin_configurations` should be `system_settings`
|
|
||||||
- ⚠️ `tat_alerts` exists but `tat_tracking` doesn't
|
|
||||||
- ✅ `holidays` is a new addition (enhancement)
|
|
||||||
|
|
||||||
### **What's Missing:**
|
|
||||||
- ❌ 10 tables from design not yet implemented
|
|
||||||
- ❌ Some relationships not fully realized
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 💡 **Recommendations Summary**
|
|
||||||
|
|
||||||
### **Critical (Do Now):**
|
|
||||||
1. ✅ **Rename `admin_configurations` to `system_settings`** - Align with design
|
|
||||||
2. ✅ **Implement `tat_tracking` table** - Complete TAT system
|
|
||||||
3. ✅ **Update design document** - Add holidays table
|
|
||||||
|
|
||||||
### **Important (Phase 2):**
|
|
||||||
4. ⏳ **Implement `notifications` table** - Centralized notification management
|
|
||||||
5. ⏳ **Implement `audit_logs` table** - Enhanced audit trail
|
|
||||||
6. ⏳ **Implement `email_logs` & `sms_logs`** - Communication tracking
|
|
||||||
|
|
||||||
### **Nice to Have (Phase 3):**
|
|
||||||
7. 🔮 **Implement `conclusion_remarks`** - AI integration
|
|
||||||
8. 🔮 **Implement `workflow_templates`** - Template system
|
|
||||||
9. 🔮 **Implement `report_cache`** - Performance optimization
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 **Conclusion**
|
|
||||||
|
|
||||||
**Answer to the question:** "Did you consider backend_structure.txt?"
|
|
||||||
|
|
||||||
**Honest Answer:** Not fully. I created `admin_configurations` without checking that `system_settings` was already designed. However:
|
|
||||||
|
|
||||||
1. ✅ The functionality is the same
|
|
||||||
2. ⚠️ The naming is different
|
|
||||||
3. 🔧 Easy to fix with a rename migration
|
|
||||||
|
|
||||||
**Next Steps:**
|
|
||||||
1. Decide: Rename to `system_settings` (recommended) or keep as-is?
|
|
||||||
2. Implement missing `tat_tracking` table
|
|
||||||
3. Update design document with new `holidays` table
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Created:** November 4, 2025
|
|
||||||
**Status:** Analysis Complete
|
|
||||||
**Action Required:** Yes - Table rename + implement tat_tracking
|
|
||||||
|
|
||||||
@ -1,341 +0,0 @@
|
|||||||
# Dynamic TAT Thresholds Implementation
|
|
||||||
|
|
||||||
## Problem Statement
|
|
||||||
|
|
||||||
### Original Issue
|
|
||||||
The TAT system had **hardcoded threshold percentages** (50%, 75%, 100%) which created several problems:
|
|
||||||
|
|
||||||
1. **Job Naming Conflict**: Jobs were named using threshold percentages (`tat50-{reqId}-{levelId}`)
|
|
||||||
2. **Configuration Changes Didn't Apply**: Changing threshold in settings didn't affect scheduled jobs
|
|
||||||
3. **Message Mismatch**: Messages always said "50% elapsed" even if admin configured 55%
|
|
||||||
4. **Cancellation Issues**: Uncertainty about whether jobs could be properly cancelled after config changes
|
|
||||||
|
|
||||||
### Critical Edge Case Identified by User
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
1. Request created → TAT jobs scheduled:
|
|
||||||
- tat50-REQ123-LEVEL456 (fires at 8 hours, says "50% elapsed")
|
|
||||||
- tat75-REQ123-LEVEL456 (fires at 12 hours)
|
|
||||||
- tatBreach-REQ123-LEVEL456 (fires at 16 hours)
|
|
||||||
|
|
||||||
2. Admin changes threshold from 50% → 55%
|
|
||||||
|
|
||||||
3. User approves at 9 hours (after old 50% fired)
|
|
||||||
→ Job already fired with "50% elapsed" message ❌
|
|
||||||
→ But admin configured 55% ❌
|
|
||||||
→ Inconsistent!
|
|
||||||
|
|
||||||
4. Even if approval happens before old 50%:
|
|
||||||
→ System cancels `tat50-REQ123-LEVEL456` ✅
|
|
||||||
→ But message would still say "50%" (hardcoded) ❌
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Solution: Generic Job Names + Dynamic Thresholds
|
|
||||||
|
|
||||||
### 1. **Generic Job Naming**
|
|
||||||
Changed from percentage-based to generic names:
|
|
||||||
|
|
||||||
**Before:**
|
|
||||||
```typescript
|
|
||||||
tat50-{requestId}-{levelId}
|
|
||||||
tat75-{requestId}-{levelId}
|
|
||||||
tatBreach-{requestId}-{levelId}
|
|
||||||
```
|
|
||||||
|
|
||||||
**After:**
|
|
||||||
```typescript
|
|
||||||
tat-threshold1-{requestId}-{levelId} // First threshold (configurable: 50%, 55%, 60%, etc.)
|
|
||||||
tat-threshold2-{requestId}-{levelId} // Second threshold (configurable: 75%, 80%, etc.)
|
|
||||||
tat-breach-{requestId}-{levelId} // Always 100% (deadline)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. **Store Threshold in Job Data**
|
|
||||||
Instead of relying on job name, we store the actual percentage in job payload:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
interface TatJobData {
|
|
||||||
type: 'threshold1' | 'threshold2' | 'breach';
|
|
||||||
threshold: number; // Actual % (e.g., 55, 80, 100)
|
|
||||||
requestId: string;
|
|
||||||
levelId: string;
|
|
||||||
approverId: string;
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. **Dynamic Message Generation**
|
|
||||||
Messages use the threshold from job data:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
case 'threshold1':
|
|
||||||
message = `⏳ ${threshold}% of TAT elapsed for Request ${requestNumber}`;
|
|
||||||
// If threshold = 55, message says "55% of TAT elapsed" ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. **Configuration Cache Management**
|
|
||||||
- Configurations are cached for 5 minutes (performance)
|
|
||||||
- Cache is **automatically cleared** when admin updates settings
|
|
||||||
- Next scheduled job will use new thresholds
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## How It Solves the Edge Cases
|
|
||||||
|
|
||||||
### ✅ **Case 1: Config Changed After Job Creation**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
1. Request created with TAT = 16 hours (thresholds: 50%, 75%)
|
|
||||||
Jobs scheduled:
|
|
||||||
- tat-threshold1-REQ123 → fires at 8h, threshold=50
|
|
||||||
- tat-threshold2-REQ123 → fires at 12h, threshold=75
|
|
||||||
|
|
||||||
2. Admin changes threshold from 50% → 55%
|
|
||||||
|
|
||||||
3. Old request jobs STILL fire at 8h (50%)
|
|
||||||
✅ BUT message correctly shows "50% elapsed" (from job data)
|
|
||||||
✅ No confusion because that request WAS scheduled at 50%
|
|
||||||
|
|
||||||
4. NEW requests created after config change:
|
|
||||||
Jobs scheduled:
|
|
||||||
- tat-threshold1-REQ456 → fires at 8.8h, threshold=55 ✅
|
|
||||||
- tat-threshold2-REQ456 → fires at 12h, threshold=75
|
|
||||||
|
|
||||||
5. Message says "55% of TAT elapsed" ✅ CORRECT!
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ Existing jobs maintain their original thresholds (consistent)
|
|
||||||
- ✅ New jobs use updated thresholds (respects config changes)
|
|
||||||
- ✅ Messages always match actual threshold used
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ **Case 2: User Approves Before Threshold**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
1. Job scheduled: tat-threshold1-REQ123 (fires at 55%)
|
|
||||||
|
|
||||||
2. User approves at 40% elapsed
|
|
||||||
|
|
||||||
3. cancelTatJobs('REQ123', 'LEVEL456') is called:
|
|
||||||
→ Looks for: tat-threshold1-REQ123-LEVEL456 ✅ FOUND
|
|
||||||
→ Removes job ✅ SUCCESS
|
|
||||||
|
|
||||||
4. No notification sent ✅ CORRECT!
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ Generic names allow consistent cancellation
|
|
||||||
- ✅ Works regardless of threshold percentage
|
|
||||||
- ✅ No ambiguity in job identification
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ **Case 3: User Approves After Threshold Fired**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
1. Job scheduled: tat-threshold1-REQ123 (fires at 55%)
|
|
||||||
|
|
||||||
2. Job fires at 55% → notification sent
|
|
||||||
|
|
||||||
3. User approves at 60%
|
|
||||||
|
|
||||||
4. cancelTatJobs called:
|
|
||||||
→ Tries to cancel tat-threshold1-REQ123
|
|
||||||
→ Job already processed and removed (removeOnComplete: true)
|
|
||||||
→ No error (gracefully handled) ✅
|
|
||||||
|
|
||||||
5. Later jobs (threshold2, breach) are still cancelled ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ Already-fired jobs don't cause errors
|
|
||||||
- ✅ Remaining jobs are still cancelled
|
|
||||||
- ✅ System behaves correctly in all scenarios
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration Flow
|
|
||||||
|
|
||||||
### **Admin Updates Threshold**
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Admin changes "First TAT Threshold" from 50% → 55%
|
|
||||||
↓
|
|
||||||
2. Frontend sends: PUT /api/v1/admin/configurations/TAT_REMINDER_THRESHOLD_1
|
|
||||||
Body: { configValue: '55' }
|
|
||||||
↓
|
|
||||||
3. Backend updates database:
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '55'
|
|
||||||
WHERE config_key = 'TAT_REMINDER_THRESHOLD_1'
|
|
||||||
↓
|
|
||||||
4. Backend clears config cache:
|
|
||||||
clearConfigCache() ✅
|
|
||||||
↓
|
|
||||||
5. Next request created:
|
|
||||||
- getTatThresholds() → reads '55' from DB
|
|
||||||
- Schedules job at 55% (8.8 hours for 16h TAT)
|
|
||||||
- Job data: { threshold: 55 }
|
|
||||||
↓
|
|
||||||
6. Job fires at 55%:
|
|
||||||
- Message: "55% of TAT elapsed" ✅ CORRECT!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Database Impact
|
|
||||||
|
|
||||||
### **No Database Changes Required!**
|
|
||||||
|
|
||||||
The `admin_configurations` table already has all required fields:
|
|
||||||
- ✅ `TAT_REMINDER_THRESHOLD_1` → First threshold (50% default)
|
|
||||||
- ✅ `TAT_REMINDER_THRESHOLD_2` → Second threshold (75% default)
|
|
||||||
|
|
||||||
### **Job Queue Data Structure**
|
|
||||||
|
|
||||||
**Old Job Data:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"type": "tat50",
|
|
||||||
"requestId": "...",
|
|
||||||
"levelId": "...",
|
|
||||||
"approverId": "..."
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**New Job Data:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"type": "threshold1",
|
|
||||||
"threshold": 55,
|
|
||||||
"requestId": "...",
|
|
||||||
"levelId": "...",
|
|
||||||
"approverId": "..."
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Scenarios
|
|
||||||
|
|
||||||
### **Test 1: Change Threshold, Create New Request**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Change threshold from 50% to 55%
|
|
||||||
curl -X PUT http://localhost:5000/api/v1/admin/configurations/TAT_REMINDER_THRESHOLD_1 \
|
|
||||||
-H "Authorization: Bearer TOKEN" \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-d '{"configValue": "55"}'
|
|
||||||
|
|
||||||
# 2. Create new workflow request
|
|
||||||
# → Jobs scheduled at 55%, 75%, 100%
|
|
||||||
|
|
||||||
# 3. Wait for 55% elapsed
|
|
||||||
# → Notification says "55% of TAT elapsed" ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 2: Approve Before Threshold**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Request created (TAT = 16 hours)
|
|
||||||
# → threshold1 scheduled at 8.8 hours (55%)
|
|
||||||
|
|
||||||
# 2. Approve at 6 hours (before 55%)
|
|
||||||
curl -X POST http://localhost:5000/api/v1/workflows/REQ123/approve/LEVEL456
|
|
||||||
|
|
||||||
# 3. cancelTatJobs is called internally
|
|
||||||
# → tat-threshold1-REQ123-LEVEL456 removed ✅
|
|
||||||
# → tat-threshold2-REQ123-LEVEL456 removed ✅
|
|
||||||
# → tat-breach-REQ123-LEVEL456 removed ✅
|
|
||||||
|
|
||||||
# 4. No notifications sent ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 3: Mixed Old and New Jobs**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create Request A with old threshold (50%)
|
|
||||||
# → Jobs use threshold=50
|
|
||||||
|
|
||||||
# 2. Admin changes to 55%
|
|
||||||
|
|
||||||
# 3. Create Request B with new threshold (55%)
|
|
||||||
# → Jobs use threshold=55
|
|
||||||
|
|
||||||
# 4. Both requests work correctly:
|
|
||||||
# → Request A fires at 50%, message says "50%" ✅
|
|
||||||
# → Request B fires at 55%, message says "55%" ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Summary
|
|
||||||
|
|
||||||
### **What Changed:**
|
|
||||||
1. ✅ Job names: `tat50` → `tat-threshold1` (generic)
|
|
||||||
2. ✅ Job data: Now includes actual threshold percentage
|
|
||||||
3. ✅ Messages: Dynamic based on threshold from job data
|
|
||||||
4. ✅ Scheduling: Reads thresholds from database configuration
|
|
||||||
5. ✅ Cache: Automatically cleared on config update
|
|
||||||
|
|
||||||
### **What Didn't Change:**
|
|
||||||
1. ✅ Database schema (admin_configurations already has all needed fields)
|
|
||||||
2. ✅ API endpoints (no breaking changes)
|
|
||||||
3. ✅ Frontend UI (works exactly the same)
|
|
||||||
4. ✅ Cancellation logic (still works, just uses new names)
|
|
||||||
|
|
||||||
### **Benefits:**
|
|
||||||
1. ✅ **No Job Name Conflicts**: Generic names work for any percentage
|
|
||||||
2. ✅ **Accurate Messages**: Always show actual threshold used
|
|
||||||
3. ✅ **Config Flexibility**: Admin can change thresholds anytime
|
|
||||||
4. ✅ **Backward Compatible**: Existing jobs complete normally
|
|
||||||
5. ✅ **Reliable Cancellation**: Works regardless of threshold value
|
|
||||||
6. ✅ **Immediate Effect**: New requests use updated thresholds immediately
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
|
|
||||||
1. `Re_Backend/src/services/configReader.service.ts` - **NEW** (configuration reader)
|
|
||||||
2. `Re_Backend/src/services/tatScheduler.service.ts` - Updated job scheduling
|
|
||||||
3. `Re_Backend/src/queues/tatProcessor.ts` - Updated job processing
|
|
||||||
4. `Re_Backend/src/controllers/admin.controller.ts` - Added cache clearing
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration Keys
|
|
||||||
|
|
||||||
| Key | Description | Default | Example |
|
|
||||||
|-----|-------------|---------|---------|
|
|
||||||
| `TAT_REMINDER_THRESHOLD_1` | First warning threshold | 50 | 55 (sends alert at 55%) |
|
|
||||||
| `TAT_REMINDER_THRESHOLD_2` | Critical warning threshold | 75 | 80 (sends alert at 80%) |
|
|
||||||
| Breach | Deadline reached (always 100%) | 100 | 100 (non-configurable) |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Example Timeline
|
|
||||||
|
|
||||||
**TAT = 16 hours, Thresholds: 55%, 80%**
|
|
||||||
|
|
||||||
```
|
|
||||||
Hour 0 ─────────────────────────────────────► Hour 16
|
|
||||||
│ │ │
|
|
||||||
START 55% (8.8h) 80% (12.8h) 100%
|
|
||||||
│ │ │
|
|
||||||
threshold1 threshold2 breach
|
|
||||||
"55% elapsed" "80% elapsed" "BREACHED"
|
|
||||||
⏳ ⚠️ ⏰
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ Job names don't hardcode percentages
|
|
||||||
- ✅ Messages show actual configured thresholds
|
|
||||||
- ✅ Cancellation works consistently
|
|
||||||
- ✅ No edge cases or race conditions
|
|
||||||
|
|
||||||
@ -1,562 +0,0 @@
|
|||||||
# Dynamic Working Hours Configuration
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
Working hours for TAT (Turn Around Time) calculations are now **dynamically configurable** through the admin settings interface. Admins can change these settings at any time, and the changes will be reflected in all future TAT calculations.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## What's Configurable
|
|
||||||
|
|
||||||
### **Working Hours Settings:**
|
|
||||||
|
|
||||||
| Setting | Description | Default | Example |
|
|
||||||
|---------|-------------|---------|---------|
|
|
||||||
| `WORK_START_HOUR` | Working day starts at (hour) | 9 | 8 (8:00 AM) |
|
|
||||||
| `WORK_END_HOUR` | Working day ends at (hour) | 18 | 19 (7:00 PM) |
|
|
||||||
| `WORK_START_DAY` | First working day of week | 1 (Monday) | 1 (Monday) |
|
|
||||||
| `WORK_END_DAY` | Last working day of week | 5 (Friday) | 6 (Saturday) |
|
|
||||||
|
|
||||||
**Days:** 0 = Sunday, 1 = Monday, 2 = Tuesday, ..., 6 = Saturday
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## How It Works
|
|
||||||
|
|
||||||
### **1. Admin Changes Working Hours**
|
|
||||||
|
|
||||||
```
|
|
||||||
Settings → System Configuration → Working Hours
|
|
||||||
- Work Start Hour: 9:00 → Change to 8:00
|
|
||||||
- Work End Hour: 18:00 → Change to 20:00
|
|
||||||
✅ Save
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Backend Updates Database**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '8'
|
|
||||||
WHERE config_key = 'WORK_START_HOUR';
|
|
||||||
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '20'
|
|
||||||
WHERE config_key = 'WORK_END_HOUR';
|
|
||||||
```
|
|
||||||
|
|
||||||
### **3. Cache is Cleared Automatically**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// In admin.controller.ts
|
|
||||||
clearConfigCache(); // Clear general config cache
|
|
||||||
clearWorkingHoursCache(); // Clear TAT working hours cache
|
|
||||||
```
|
|
||||||
|
|
||||||
### **4. Next TAT Calculation Uses New Values**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// TAT calculation loads fresh values
|
|
||||||
await loadWorkingHoursCache();
|
|
||||||
// → Reads: startHour=8, endHour=20 from database
|
|
||||||
|
|
||||||
// Applies new working hours
|
|
||||||
if (hour >= 8 && hour < 20) {
|
|
||||||
// This hour counts as working time ✅
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Cache Management
|
|
||||||
|
|
||||||
### **Working Hours Cache:**
|
|
||||||
|
|
||||||
**Cache Duration:** 5 minutes (shorter than holidays since it's more critical)
|
|
||||||
|
|
||||||
**Why Cache?**
|
|
||||||
- Performance: Avoids repeated database queries
|
|
||||||
- Speed: TAT calculations can happen hundreds of times per hour
|
|
||||||
- Efficiency: Reading from memory is ~1000x faster than DB query
|
|
||||||
|
|
||||||
**Cache Lifecycle:**
|
|
||||||
```
|
|
||||||
1. First TAT Calculation:
|
|
||||||
→ loadWorkingHoursCache() called
|
|
||||||
→ Database query: SELECT config_value WHERE config_key IN (...)
|
|
||||||
→ Store in memory: workingHoursCache = { startHour: 9, endHour: 18, ... }
|
|
||||||
→ Set expiry: now + 5 minutes
|
|
||||||
|
|
||||||
2. Next 5 Minutes (Cache Valid):
|
|
||||||
→ All TAT calculations use cached values
|
|
||||||
→ No database queries ✅ FAST
|
|
||||||
|
|
||||||
3. After 5 Minutes (Cache Expired):
|
|
||||||
→ Next TAT calculation reloads from database
|
|
||||||
→ New cache created with 5-minute expiry
|
|
||||||
|
|
||||||
4. Admin Updates Config:
|
|
||||||
→ clearWorkingHoursCache() called immediately
|
|
||||||
→ Cache invalidated
|
|
||||||
→ Next calculation loads fresh values ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Example Scenarios
|
|
||||||
|
|
||||||
### **Scenario 1: Extend Working Hours**
|
|
||||||
|
|
||||||
**Before:**
|
|
||||||
```
|
|
||||||
Working Hours: 9:00 AM - 6:00 PM (9 hours/day)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Admin Changes To:**
|
|
||||||
```
|
|
||||||
Working Hours: 8:00 AM - 8:00 PM (12 hours/day)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Impact on TAT:**
|
|
||||||
```
|
|
||||||
Request: STANDARD Priority, 24 working hours
|
|
||||||
Created: Monday 9:00 AM
|
|
||||||
|
|
||||||
OLD Calculation (9 hours/day):
|
|
||||||
Monday 9 AM - 6 PM = 9 hours (15h remaining)
|
|
||||||
Tuesday 9 AM - 6 PM = 9 hours (6h remaining)
|
|
||||||
Wednesday 9 AM - 3 PM = 6 hours (0h remaining)
|
|
||||||
Deadline: Wednesday 3:00 PM
|
|
||||||
|
|
||||||
NEW Calculation (12 hours/day):
|
|
||||||
Monday 9 AM - 8 PM = 11 hours (13h remaining)
|
|
||||||
Tuesday 8 AM - 8 PM = 12 hours (1h remaining)
|
|
||||||
Wednesday 8 AM - 9 AM = 1 hour (0h remaining)
|
|
||||||
Deadline: Wednesday 9:00 AM ✅ FASTER!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Scenario 2: Include Saturday as Working Day**
|
|
||||||
|
|
||||||
**Before:**
|
|
||||||
```
|
|
||||||
Working Days: Monday - Friday (1-5)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Admin Changes To:**
|
|
||||||
```
|
|
||||||
Working Days: Monday - Saturday (1-6)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Impact on TAT:**
|
|
||||||
```
|
|
||||||
Request: STANDARD Priority, 16 working hours
|
|
||||||
Created: Friday 2:00 PM
|
|
||||||
|
|
||||||
OLD Calculation (Mon-Fri only):
|
|
||||||
Friday 2 PM - 6 PM = 4 hours (12h remaining)
|
|
||||||
Saturday-Sunday = SKIPPED
|
|
||||||
Monday 9 AM - 6 PM = 9 hours (3h remaining)
|
|
||||||
Tuesday 9 AM - 12 PM = 3 hours (0h remaining)
|
|
||||||
Deadline: Tuesday 12:00 PM
|
|
||||||
|
|
||||||
NEW Calculation (Mon-Sat):
|
|
||||||
Friday 2 PM - 6 PM = 4 hours (12h remaining)
|
|
||||||
Saturday 9 AM - 6 PM = 9 hours (3h remaining) ✅ Saturday counts!
|
|
||||||
Sunday = SKIPPED
|
|
||||||
Monday 9 AM - 12 PM = 3 hours (0h remaining)
|
|
||||||
Deadline: Monday 12:00 PM ✅ EARLIER!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Scenario 3: Reduce Working Hours (After-Hours Emergency)**
|
|
||||||
|
|
||||||
**Before:**
|
|
||||||
```
|
|
||||||
Working Hours: 9:00 AM - 6:00 PM
|
|
||||||
```
|
|
||||||
|
|
||||||
**Admin Changes To:**
|
|
||||||
```
|
|
||||||
Working Hours: 9:00 AM - 10:00 PM (extended for emergency)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Impact:**
|
|
||||||
```
|
|
||||||
Request created at 7:00 PM (after old hours but within new hours)
|
|
||||||
|
|
||||||
OLD System:
|
|
||||||
7:00 PM → Not working time
|
|
||||||
First working hour: Tomorrow 9:00 AM
|
|
||||||
TAT starts counting from tomorrow ❌
|
|
||||||
|
|
||||||
NEW System:
|
|
||||||
7:00 PM → Still working time! ✅
|
|
||||||
TAT starts counting immediately
|
|
||||||
Faster response for urgent requests ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Implementation Details
|
|
||||||
|
|
||||||
### **Configuration Reader Service**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Re_Backend/src/services/configReader.service.ts
|
|
||||||
|
|
||||||
export async function getWorkingHours(): Promise<{ startHour: number; endHour: number }> {
|
|
||||||
const startHour = await getConfigNumber('WORK_START_HOUR', 9);
|
|
||||||
const endHour = await getConfigNumber('WORK_END_HOUR', 18);
|
|
||||||
|
|
||||||
return { startHour, endHour };
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **TAT Time Utils (Working Hours Cache)**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Re_Backend/src/utils/tatTimeUtils.ts
|
|
||||||
|
|
||||||
let workingHoursCache: WorkingHoursConfig | null = null;
|
|
||||||
let workingHoursCacheExpiry: Date | null = null;
|
|
||||||
|
|
||||||
async function loadWorkingHoursCache(): Promise<void> {
|
|
||||||
// Check if cache is still valid
|
|
||||||
if (workingHoursCacheExpiry && new Date() < workingHoursCacheExpiry) {
|
|
||||||
return; // Use cached values
|
|
||||||
}
|
|
||||||
|
|
||||||
// Load from database
|
|
||||||
const { getWorkingHours, getConfigNumber } = await import('../services/configReader.service');
|
|
||||||
const hours = await getWorkingHours();
|
|
||||||
const startDay = await getConfigNumber('WORK_START_DAY', 1);
|
|
||||||
const endDay = await getConfigNumber('WORK_END_DAY', 5);
|
|
||||||
|
|
||||||
// Store in cache
|
|
||||||
workingHoursCache = {
|
|
||||||
startHour: hours.startHour,
|
|
||||||
endHour: hours.endHour,
|
|
||||||
startDay: startDay,
|
|
||||||
endDay: endDay
|
|
||||||
};
|
|
||||||
|
|
||||||
// Set 5-minute expiry
|
|
||||||
workingHoursCacheExpiry = dayjs().add(5, 'minute').toDate();
|
|
||||||
|
|
||||||
console.log(`[TAT Utils] Loaded working hours: ${hours.startHour}:00-${hours.endHour}:00`);
|
|
||||||
}
|
|
||||||
|
|
||||||
function isWorkingTime(date: Dayjs): boolean {
|
|
||||||
// Use cached working hours (with fallback to defaults)
|
|
||||||
const config = workingHoursCache || {
|
|
||||||
startHour: 9,
|
|
||||||
endHour: 18,
|
|
||||||
startDay: 1,
|
|
||||||
endDay: 5
|
|
||||||
};
|
|
||||||
|
|
||||||
const day = date.day();
|
|
||||||
const hour = date.hour();
|
|
||||||
|
|
||||||
// Check based on configured values
|
|
||||||
if (day < config.startDay || day > config.endDay) return false;
|
|
||||||
if (hour < config.startHour || hour >= config.endHour) return false;
|
|
||||||
if (isHoliday(date)) return false;
|
|
||||||
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Admin Controller (Cache Invalidation)**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Re_Backend/src/controllers/admin.controller.ts
|
|
||||||
|
|
||||||
export const updateConfiguration = async (req: Request, res: Response): Promise<void> => {
|
|
||||||
// ... update database ...
|
|
||||||
|
|
||||||
// Clear config cache
|
|
||||||
clearConfigCache();
|
|
||||||
|
|
||||||
// If working hours config was updated, also clear TAT cache
|
|
||||||
const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
|
|
||||||
if (workingHoursKeys.includes(configKey)) {
|
|
||||||
clearWorkingHoursCache(); // ✅ Immediate cache clear
|
|
||||||
logger.info(`Working hours config '${configKey}' updated - cache cleared`);
|
|
||||||
}
|
|
||||||
|
|
||||||
res.json({ success: true });
|
|
||||||
};
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Priority Behavior
|
|
||||||
|
|
||||||
### **STANDARD Priority**
|
|
||||||
|
|
||||||
✅ **Uses configured working hours**
|
|
||||||
- Respects `WORK_START_HOUR` and `WORK_END_HOUR`
|
|
||||||
- Respects `WORK_START_DAY` and `WORK_END_DAY`
|
|
||||||
- Excludes holidays
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
Config: 9:00 AM - 6:00 PM, Monday-Friday
|
|
||||||
TAT: 16 working hours
|
|
||||||
→ Only hours between 9 AM - 6 PM on Mon-Fri count
|
|
||||||
→ Weekends and holidays are skipped
|
|
||||||
```
|
|
||||||
|
|
||||||
### **EXPRESS Priority**
|
|
||||||
|
|
||||||
❌ **Ignores working hours configuration**
|
|
||||||
- Counts ALL 24 hours per day
|
|
||||||
- Counts ALL 7 days per week
|
|
||||||
- Counts holidays
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
Config: 9:00 AM - 6:00 PM (ignored)
|
|
||||||
TAT: 16 hours
|
|
||||||
→ Simply add 16 hours to start time
|
|
||||||
→ No exclusions
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Scenarios
|
|
||||||
|
|
||||||
### **Test 1: Change Working Hours, Create Request**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Check current working hours
|
|
||||||
curl http://localhost:5000/api/v1/admin/configurations \
|
|
||||||
| grep WORK_START_HOUR
|
|
||||||
# → Returns: "configValue": "9"
|
|
||||||
|
|
||||||
# 2. Update working hours to start at 8:00 AM
|
|
||||||
curl -X PUT http://localhost:5000/api/v1/admin/configurations/WORK_START_HOUR \
|
|
||||||
-H "Authorization: Bearer TOKEN" \
|
|
||||||
-d '{"configValue": "8"}'
|
|
||||||
# → Response: "Configuration updated successfully"
|
|
||||||
|
|
||||||
# 3. Check logs
|
|
||||||
# → Should see: "Working hours configuration 'WORK_START_HOUR' updated - cache cleared"
|
|
||||||
|
|
||||||
# 4. Create new STANDARD request
|
|
||||||
curl -X POST http://localhost:5000/api/v1/workflows \
|
|
||||||
-d '{"priority": "STANDARD", "tatHours": 16}'
|
|
||||||
|
|
||||||
# 5. Check TAT calculation logs
|
|
||||||
# → Should see: "Loaded working hours: 8:00-18:00" ✅
|
|
||||||
# → Deadline calculation uses new hours ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 2: Verify Cache Expiry**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create request (loads working hours into cache)
|
|
||||||
# → Cache expires in 5 minutes
|
|
||||||
|
|
||||||
# 2. Wait 6 minutes
|
|
||||||
|
|
||||||
# 3. Create another request
|
|
||||||
# → Should see log: "Loaded working hours: ..." (cache reloaded)
|
|
||||||
|
|
||||||
# 4. Create third request immediately
|
|
||||||
# → No log (uses cached values)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 3: Extend to 6-Day Week**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Update end day to Saturday
|
|
||||||
curl -X PUT http://localhost:5000/api/v1/admin/configurations/WORK_END_DAY \
|
|
||||||
-d '{"configValue": "6"}'
|
|
||||||
|
|
||||||
# 2. Create request on Friday afternoon
|
|
||||||
# → Deadline should include Saturday ✅
|
|
||||||
# → Sunday still excluded ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Database Configuration
|
|
||||||
|
|
||||||
### **Configuration Keys:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT config_key, config_value, display_name
|
|
||||||
FROM admin_configurations
|
|
||||||
WHERE config_key IN (
|
|
||||||
'WORK_START_HOUR',
|
|
||||||
'WORK_END_HOUR',
|
|
||||||
'WORK_START_DAY',
|
|
||||||
'WORK_END_DAY'
|
|
||||||
);
|
|
||||||
|
|
||||||
-- Example results:
|
|
||||||
-- WORK_START_HOUR | 9 | Work Start Hour
|
|
||||||
-- WORK_END_HOUR | 18 | Work End Hour
|
|
||||||
-- WORK_START_DAY | 1 | Work Start Day (Monday)
|
|
||||||
-- WORK_END_DAY | 5 | Work End Day (Friday)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Update Example:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Change working hours to 8 AM - 8 PM
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '8', updated_at = NOW()
|
|
||||||
WHERE config_key = 'WORK_START_HOUR';
|
|
||||||
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '20', updated_at = NOW()
|
|
||||||
WHERE config_key = 'WORK_END_HOUR';
|
|
||||||
|
|
||||||
-- Include Saturday as working day
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET config_value = '6', updated_at = NOW()
|
|
||||||
WHERE config_key = 'WORK_END_DAY';
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Logging Examples
|
|
||||||
|
|
||||||
### **Configuration Update:**
|
|
||||||
|
|
||||||
```
|
|
||||||
[Admin] Working hours configuration 'WORK_START_HOUR' updated - cache cleared
|
|
||||||
[ConfigReader] Configuration cache cleared
|
|
||||||
[TAT Utils] Working hours cache cleared
|
|
||||||
```
|
|
||||||
|
|
||||||
### **TAT Calculation:**
|
|
||||||
|
|
||||||
```
|
|
||||||
[TAT Utils] Loaded working hours: 8:00-20:00, Days: 1-6
|
|
||||||
[TAT Scheduler] Using STANDARD mode - excludes holidays, weekends, non-working hours
|
|
||||||
[TAT Scheduler] Calculating TAT milestones for request REQ-2025-001
|
|
||||||
[TAT Scheduler] Priority: STANDARD, TAT Hours: 16
|
|
||||||
[TAT Scheduler] Start: 2025-11-05 09:00
|
|
||||||
[TAT Scheduler] Threshold 1 (55%): 2025-11-05 17:48 (using 8-20 working hours)
|
|
||||||
[TAT Scheduler] Threshold 2 (80%): 2025-11-06 10:48
|
|
||||||
[TAT Scheduler] Breach (100%): 2025-11-06 15:00
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Migration from Hardcoded Values
|
|
||||||
|
|
||||||
### **Before (Hardcoded):**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// ❌ Hardcoded in code
|
|
||||||
const WORK_START_HOUR = 9;
|
|
||||||
const WORK_END_HOUR = 18;
|
|
||||||
const WORK_START_DAY = 1;
|
|
||||||
const WORK_END_DAY = 5;
|
|
||||||
|
|
||||||
// To change: Need code update + deployment
|
|
||||||
```
|
|
||||||
|
|
||||||
### **After (Dynamic):**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// ✅ Read from database
|
|
||||||
const config = await getWorkingHours();
|
|
||||||
// config = { startHour: 9, endHour: 18 }
|
|
||||||
|
|
||||||
// To change: Just update in admin UI
|
|
||||||
// No code changes needed ✅
|
|
||||||
// No deployment needed ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Benefits
|
|
||||||
|
|
||||||
### **1. Flexibility**
|
|
||||||
- ✅ Change working hours anytime without code changes
|
|
||||||
- ✅ No deployment needed
|
|
||||||
- ✅ Takes effect within 5 minutes
|
|
||||||
|
|
||||||
### **2. Global Organizations**
|
|
||||||
- ✅ Adjust for different time zones
|
|
||||||
- ✅ Support 24/5 or 24/6 operations
|
|
||||||
- ✅ Extended hours for urgent periods
|
|
||||||
|
|
||||||
### **3. Seasonal Adjustments**
|
|
||||||
- ✅ Extend hours during busy seasons
|
|
||||||
- ✅ Reduce hours during slow periods
|
|
||||||
- ✅ Special hours for events
|
|
||||||
|
|
||||||
### **4. Performance**
|
|
||||||
- ✅ Cache prevents repeated DB queries
|
|
||||||
- ✅ Fast lookups (memory vs database)
|
|
||||||
- ✅ Auto-refresh every 5 minutes
|
|
||||||
|
|
||||||
### **5. Consistency**
|
|
||||||
- ✅ All TAT calculations use same values
|
|
||||||
- ✅ Immediate cache invalidation on update
|
|
||||||
- ✅ Fallback to defaults if DB unavailable
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Summary
|
|
||||||
|
|
||||||
| Aspect | Details |
|
|
||||||
|--------|---------|
|
|
||||||
| **Configurable** | ✅ Working hours, working days |
|
|
||||||
| **Admin UI** | ✅ Settings → System Configuration |
|
|
||||||
| **Cache Duration** | 5 minutes |
|
|
||||||
| **Cache Invalidation** | Automatic on config update |
|
|
||||||
| **Applies To** | STANDARD priority only |
|
|
||||||
| **Express Mode** | Ignores working hours (24/7) |
|
|
||||||
| **Performance** | Optimized with caching |
|
|
||||||
| **Fallback** | Uses TAT_CONFIG defaults if DB fails |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
|
|
||||||
1. `Re_Backend/src/utils/tatTimeUtils.ts` - Dynamic working hours loading
|
|
||||||
2. `Re_Backend/src/controllers/admin.controller.ts` - Cache invalidation on update
|
|
||||||
3. `Re_Backend/src/services/configReader.service.ts` - `getWorkingHours()` function
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration Flow Diagram
|
|
||||||
|
|
||||||
```
|
|
||||||
Admin Updates Working Hours (8:00 AM - 8:00 PM)
|
|
||||||
↓
|
|
||||||
Database Updated (admin_configurations table)
|
|
||||||
↓
|
|
||||||
clearConfigCache() + clearWorkingHoursCache()
|
|
||||||
↓
|
|
||||||
Caches Invalidated (both config and working hours)
|
|
||||||
↓
|
|
||||||
Next TAT Calculation
|
|
||||||
↓
|
|
||||||
loadWorkingHoursCache() called
|
|
||||||
↓
|
|
||||||
Read from Database (startHour=8, endHour=20)
|
|
||||||
↓
|
|
||||||
Store in Memory (5-minute cache)
|
|
||||||
↓
|
|
||||||
TAT Calculation Uses New Hours ✅
|
|
||||||
↓
|
|
||||||
All Future Requests (for 5 min) Use Cached Values
|
|
||||||
↓
|
|
||||||
After 5 Minutes → Reload from Database
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
Working hours are now fully dynamic and admin-controlled! 🎉
|
|
||||||
|
|
||||||
535
Data_Collection_Analysis.md
Normal file
535
Data_Collection_Analysis.md
Normal file
@ -0,0 +1,535 @@
|
|||||||
|
# Data Collection Analysis - What We Have vs What We're Collecting
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This document compares the database structure with what we're currently collecting and recommends what we should start collecting for the Detailed Reports.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. ACTIVITIES TABLE
|
||||||
|
|
||||||
|
### ✅ **Database Fields Available:**
|
||||||
|
```sql
|
||||||
|
- activity_id (PK)
|
||||||
|
- request_id (FK) ✅ COLLECTING
|
||||||
|
- user_id (FK) ✅ COLLECTING
|
||||||
|
- user_name ✅ COLLECTING
|
||||||
|
- activity_type ✅ COLLECTING
|
||||||
|
- activity_description ✅ COLLECTING
|
||||||
|
- activity_category ❌ NOT COLLECTING (set to NULL)
|
||||||
|
- severity ❌ NOT COLLECTING (set to NULL)
|
||||||
|
- metadata ✅ COLLECTING (partially)
|
||||||
|
- is_system_event ✅ COLLECTING
|
||||||
|
- ip_address ❌ NOT COLLECTING (set to NULL)
|
||||||
|
- user_agent ❌ NOT COLLECTING (set to NULL)
|
||||||
|
- created_at ✅ COLLECTING
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔴 **Currently NOT Collecting (But Should):**
|
||||||
|
|
||||||
|
1. **IP Address** (`ip_address`)
|
||||||
|
- **Status:** Field exists, but always set to `null`
|
||||||
|
- **Impact:** Cannot show IP in User Activity Log Report
|
||||||
|
- **Fix:** Extract from `req.ip` or `req.headers['x-forwarded-for']` in controllers
|
||||||
|
- **Priority:** HIGH (needed for security/audit)
|
||||||
|
|
||||||
|
2. **User Agent** (`user_agent`)
|
||||||
|
- **Status:** Field exists, but always set to `null`
|
||||||
|
- **Impact:** Cannot show device/browser info in reports
|
||||||
|
- **Fix:** Extract from `req.headers['user-agent']` in controllers
|
||||||
|
- **Priority:** MEDIUM (nice to have for analytics)
|
||||||
|
|
||||||
|
3. **Activity Category** (`activity_category`)
|
||||||
|
- **Status:** Field exists, but always set to `null`
|
||||||
|
- **Impact:** Cannot categorize activities (e.g., "AUTHENTICATION", "WORKFLOW", "DOCUMENT")
|
||||||
|
- **Fix:** Map `activity_type` to category:
|
||||||
|
- `created`, `approval`, `rejection`, `status_change` → "WORKFLOW"
|
||||||
|
- `comment` → "COLLABORATION"
|
||||||
|
- `document_added` → "DOCUMENT"
|
||||||
|
- `sla_warning` → "SYSTEM"
|
||||||
|
- **Priority:** MEDIUM (helps with filtering/reporting)
|
||||||
|
|
||||||
|
4. **Severity** (`severity`)
|
||||||
|
- **Status:** Field exists, but always set to `null`
|
||||||
|
- **Impact:** Cannot prioritize critical activities
|
||||||
|
- **Fix:** Map based on activity type:
|
||||||
|
- `rejection`, `sla_warning` → "WARNING"
|
||||||
|
- `approval`, `closed` → "INFO"
|
||||||
|
- `status_change` → "INFO"
|
||||||
|
- **Priority:** LOW (optional enhancement)
|
||||||
|
|
||||||
|
### 📝 **Recommendation:**
|
||||||
|
**Update `activity.service.ts` to accept and store:**
|
||||||
|
```typescript
|
||||||
|
async log(entry: ActivityEntry & {
|
||||||
|
ipAddress?: string;
|
||||||
|
userAgent?: string;
|
||||||
|
category?: string;
|
||||||
|
severity?: string;
|
||||||
|
}) {
|
||||||
|
// ... existing code ...
|
||||||
|
const activityData = {
|
||||||
|
// ... existing fields ...
|
||||||
|
ipAddress: entry.ipAddress || null,
|
||||||
|
userAgent: entry.userAgent || null,
|
||||||
|
activityCategory: entry.category || this.inferCategory(entry.type),
|
||||||
|
severity: entry.severity || this.inferSeverity(entry.type),
|
||||||
|
};
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Update all controller calls to pass IP and User Agent:**
|
||||||
|
```typescript
|
||||||
|
activityService.log({
|
||||||
|
// ... existing fields ...
|
||||||
|
ipAddress: req.ip || req.headers['x-forwarded-for'] || null,
|
||||||
|
userAgent: req.headers['user-agent'] || null,
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. APPROVAL_LEVELS TABLE
|
||||||
|
|
||||||
|
### ✅ **Database Fields Available:**
|
||||||
|
```sql
|
||||||
|
- level_id (PK)
|
||||||
|
- request_id (FK) ✅ COLLECTING
|
||||||
|
- level_number ✅ COLLECTING
|
||||||
|
- level_name ❌ OPTIONAL (may not be set)
|
||||||
|
- approver_id (FK) ✅ COLLECTING
|
||||||
|
- approver_email ✅ COLLECTING
|
||||||
|
- approver_name ✅ COLLECTING
|
||||||
|
- tat_hours ✅ COLLECTING
|
||||||
|
- tat_days ✅ COLLECTING (auto-calculated)
|
||||||
|
- status ✅ COLLECTING
|
||||||
|
- level_start_time ✅ COLLECTING
|
||||||
|
- level_end_time ✅ COLLECTING
|
||||||
|
- action_date ✅ COLLECTING
|
||||||
|
- comments ✅ COLLECTING
|
||||||
|
- rejection_reason ✅ COLLECTING
|
||||||
|
- is_final_approver ✅ COLLECTING
|
||||||
|
- elapsed_hours ✅ COLLECTING
|
||||||
|
- remaining_hours ✅ COLLECTING
|
||||||
|
- tat_percentage_used ✅ COLLECTING
|
||||||
|
- tat50_alert_sent ✅ COLLECTING
|
||||||
|
- tat75_alert_sent ✅ COLLECTING
|
||||||
|
- tat_breached ✅ COLLECTING
|
||||||
|
- tat_start_time ✅ COLLECTING
|
||||||
|
- created_at ✅ COLLECTING
|
||||||
|
- updated_at ✅ COLLECTING
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔴 **Currently NOT Collecting (But Should):**
|
||||||
|
|
||||||
|
1. **Level Name** (`level_name`)
|
||||||
|
- **Status:** Field exists, but may be NULL
|
||||||
|
- **Impact:** Cannot show stage name in reports (only level number)
|
||||||
|
- **Fix:** When creating approval levels, prompt for or auto-generate level names:
|
||||||
|
- "Department Head Review"
|
||||||
|
- "Finance Approval"
|
||||||
|
- "Final Approval"
|
||||||
|
- **Priority:** MEDIUM (improves report readability)
|
||||||
|
|
||||||
|
### 📝 **Recommendation:**
|
||||||
|
**Ensure level_name is set when creating approval levels:**
|
||||||
|
```typescript
|
||||||
|
await ApprovalLevel.create({
|
||||||
|
// ... existing fields ...
|
||||||
|
levelName: levelData.levelName || `Level ${levelNumber}`,
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 3. USER_SESSIONS TABLE
|
||||||
|
|
||||||
|
### ✅ **Database Fields Available:**
|
||||||
|
```sql
|
||||||
|
- session_id (PK)
|
||||||
|
- user_id (FK)
|
||||||
|
- session_token ✅ COLLECTING
|
||||||
|
- refresh_token ✅ COLLECTING
|
||||||
|
- ip_address ❓ CHECK IF COLLECTING
|
||||||
|
- user_agent ❓ CHECK IF COLLECTING
|
||||||
|
- device_type ❓ CHECK IF COLLECTING
|
||||||
|
- browser ❓ CHECK IF COLLECTING
|
||||||
|
- os ❓ CHECK IF COLLECTING
|
||||||
|
- login_at ✅ COLLECTING
|
||||||
|
- last_activity_at ✅ COLLECTING
|
||||||
|
- logout_at ❓ CHECK IF COLLECTING
|
||||||
|
- expires_at ✅ COLLECTING
|
||||||
|
- is_active ✅ COLLECTING
|
||||||
|
- logout_reason ❓ CHECK IF COLLECTING
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔴 **Missing for Login Activity Tracking:**
|
||||||
|
|
||||||
|
1. **Login Activities in Activities Table**
|
||||||
|
- **Status:** Login events are NOT logged in `activities` table
|
||||||
|
- **Impact:** Cannot show login activities in User Activity Log Report
|
||||||
|
- **Fix:** Add login activity logging in auth middleware/controller:
|
||||||
|
```typescript
|
||||||
|
// After successful login
|
||||||
|
await activityService.log({
|
||||||
|
requestId: 'SYSTEM_LOGIN', // Special request ID for system events
|
||||||
|
type: 'login',
|
||||||
|
user: { userId, name: user.displayName },
|
||||||
|
ipAddress: req.ip,
|
||||||
|
userAgent: req.headers['user-agent'],
|
||||||
|
category: 'AUTHENTICATION',
|
||||||
|
severity: 'INFO',
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'User Login',
|
||||||
|
details: `User logged in from ${req.ip}`
|
||||||
|
});
|
||||||
|
```
|
||||||
|
- **Priority:** HIGH (needed for security audit)
|
||||||
|
|
||||||
|
2. **Device/Browser Parsing**
|
||||||
|
- **Status:** Fields exist but may not be populated
|
||||||
|
- **Impact:** Cannot show device type in reports
|
||||||
|
- **Fix:** Parse user agent to extract:
|
||||||
|
- `device_type`: "WEB", "MOBILE"
|
||||||
|
- `browser`: "Chrome", "Firefox", "Safari"
|
||||||
|
- `os`: "Windows", "macOS", "iOS", "Android"
|
||||||
|
- **Priority:** MEDIUM (nice to have)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 4. WORKFLOW_REQUESTS TABLE
|
||||||
|
|
||||||
|
### ✅ **All Fields Are Being Collected:**
|
||||||
|
- All fields in `workflow_requests` are properly collected
|
||||||
|
- No missing data here
|
||||||
|
|
||||||
|
### 📝 **Note:**
|
||||||
|
- `submission_date` vs `created_at`: Use `submission_date` for "days open" calculation
|
||||||
|
- `closure_date`: Available for completed requests
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. TAT_TRACKING TABLE
|
||||||
|
|
||||||
|
### ✅ **Database Fields Available:**
|
||||||
|
```sql
|
||||||
|
- tracking_id (PK)
|
||||||
|
- request_id (FK)
|
||||||
|
- level_id (FK)
|
||||||
|
- tracking_type ✅ COLLECTING
|
||||||
|
- tat_status ✅ COLLECTING
|
||||||
|
- total_tat_hours ✅ COLLECTING
|
||||||
|
- elapsed_hours ✅ COLLECTING
|
||||||
|
- remaining_hours ✅ COLLECTING
|
||||||
|
- percentage_used ✅ COLLECTING
|
||||||
|
- threshold_50_breached ✅ COLLECTING
|
||||||
|
- threshold_50_alerted_at ✅ COLLECTING
|
||||||
|
- threshold_80_breached ✅ COLLECTING
|
||||||
|
- threshold_80_alerted_at ✅ COLLECTING
|
||||||
|
- threshold_100_breached ✅ COLLECTING
|
||||||
|
- threshold_100_alerted_at ✅ COLLECTING
|
||||||
|
- alert_count ✅ COLLECTING
|
||||||
|
- last_calculated_at ✅ COLLECTING
|
||||||
|
```
|
||||||
|
|
||||||
|
### ✅ **All Fields Are Being Collected:**
|
||||||
|
- TAT tracking appears to be fully implemented
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 6. AUDIT_LOGS TABLE
|
||||||
|
|
||||||
|
### ✅ **Database Fields Available:**
|
||||||
|
```sql
|
||||||
|
- audit_id (PK)
|
||||||
|
- user_id (FK)
|
||||||
|
- entity_type
|
||||||
|
- entity_id
|
||||||
|
- action
|
||||||
|
- action_category
|
||||||
|
- old_values (JSONB)
|
||||||
|
- new_values (JSONB)
|
||||||
|
- changes_summary
|
||||||
|
- ip_address
|
||||||
|
- user_agent
|
||||||
|
- session_id
|
||||||
|
- request_method
|
||||||
|
- request_url
|
||||||
|
- response_status
|
||||||
|
- execution_time_ms
|
||||||
|
- created_at
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔴 **Status:**
|
||||||
|
- **Audit logging may not be fully implemented**
|
||||||
|
- **Impact:** Cannot track all system changes for audit purposes
|
||||||
|
- **Priority:** MEDIUM (for compliance/security)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## SUMMARY: What to Start Collecting
|
||||||
|
|
||||||
|
### 🔴 **HIGH PRIORITY (Must Have for Reports):**
|
||||||
|
|
||||||
|
1. **IP Address in Activities** ✅ Field exists, just need to populate
|
||||||
|
- Extract from `req.ip` or `req.headers['x-forwarded-for']`
|
||||||
|
- Update `activity.service.ts` to accept IP
|
||||||
|
- Update all controller calls
|
||||||
|
|
||||||
|
2. **User Agent in Activities** ✅ Field exists, just need to populate
|
||||||
|
- Extract from `req.headers['user-agent']`
|
||||||
|
- Update `activity.service.ts` to accept user agent
|
||||||
|
- Update all controller calls
|
||||||
|
|
||||||
|
3. **Login Activities** ❌ Not currently logged
|
||||||
|
- Add login activity logging in auth controller
|
||||||
|
- Use special `requestId: 'SYSTEM_LOGIN'` for system events
|
||||||
|
- Include IP and user agent
|
||||||
|
|
||||||
|
### 🟡 **MEDIUM PRIORITY (Nice to Have):**
|
||||||
|
|
||||||
|
4. **Activity Category** ✅ Field exists, just need to populate
|
||||||
|
- Auto-infer from `activity_type`
|
||||||
|
- Helps with filtering and reporting
|
||||||
|
|
||||||
|
5. **Level Names** ✅ Field exists, ensure it's set
|
||||||
|
- Improve readability in reports
|
||||||
|
- Auto-generate if not provided
|
||||||
|
|
||||||
|
6. **Severity** ✅ Field exists, just need to populate
|
||||||
|
- Auto-infer from `activity_type`
|
||||||
|
- Helps prioritize critical activities
|
||||||
|
|
||||||
|
### 🟢 **LOW PRIORITY (Future Enhancement):**
|
||||||
|
|
||||||
|
7. **Device/Browser Parsing**
|
||||||
|
- Parse user agent to extract device type, browser, OS
|
||||||
|
- Store in `user_sessions` table
|
||||||
|
|
||||||
|
8. **Audit Logging**
|
||||||
|
- Implement comprehensive audit logging
|
||||||
|
- Track all system changes
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 7. BUSINESS DAYS CALCULATION FOR WORKFLOW AGING
|
||||||
|
|
||||||
|
### ✅ **Available:**
|
||||||
|
- `calculateElapsedWorkingHours()` - Calculates working hours (excludes weekends/holidays)
|
||||||
|
- Working hours configuration (9 AM - 6 PM, Mon-Fri)
|
||||||
|
- Holiday support (from database)
|
||||||
|
- Priority-based calculation (express vs standard)
|
||||||
|
|
||||||
|
### ❌ **Missing:**
|
||||||
|
1. **Business Days Count Function**
|
||||||
|
- Need a function to calculate business days (not hours)
|
||||||
|
- For Workflow Aging Report: "Days Open" should be business days
|
||||||
|
- Currently only have working hours calculation
|
||||||
|
|
||||||
|
2. **TAT Processor Using Wrong Calculation**
|
||||||
|
- `tatProcessor.ts` uses simple calendar hours:
|
||||||
|
```typescript
|
||||||
|
const elapsedMs = now.getTime() - new Date(levelStartTime).getTime();
|
||||||
|
const elapsedHours = elapsedMs / (1000 * 60 * 60);
|
||||||
|
```
|
||||||
|
- Should use `calculateElapsedWorkingHours()` instead
|
||||||
|
- This causes incorrect TAT breach calculations
|
||||||
|
|
||||||
|
### 🔧 **What Needs to be Built:**
|
||||||
|
|
||||||
|
1. **Add Business Days Calculation Function:**
|
||||||
|
```typescript
|
||||||
|
// In tatTimeUtils.ts
|
||||||
|
export async function calculateBusinessDays(
|
||||||
|
startDate: Date | string,
|
||||||
|
endDate: Date | string = new Date(),
|
||||||
|
priority: string = 'standard'
|
||||||
|
): Promise<number> {
|
||||||
|
await loadWorkingHoursCache();
|
||||||
|
await loadHolidaysCache();
|
||||||
|
|
||||||
|
let start = dayjs(startDate);
|
||||||
|
const end = dayjs(endDate);
|
||||||
|
const config = workingHoursCache || { /* defaults */ };
|
||||||
|
|
||||||
|
let businessDays = 0;
|
||||||
|
let current = start.startOf('day');
|
||||||
|
|
||||||
|
while (current.isBefore(end) || current.isSame(end, 'day')) {
|
||||||
|
const dayOfWeek = current.day();
|
||||||
|
const dateStr = current.format('YYYY-MM-DD');
|
||||||
|
|
||||||
|
const isWorkingDay = priority === 'express'
|
||||||
|
? true
|
||||||
|
: (dayOfWeek >= config.startDay && dayOfWeek <= config.endDay);
|
||||||
|
const isNotHoliday = !holidaysCache.has(dateStr);
|
||||||
|
|
||||||
|
if (isWorkingDay && isNotHoliday) {
|
||||||
|
businessDays++;
|
||||||
|
}
|
||||||
|
|
||||||
|
current = current.add(1, 'day');
|
||||||
|
}
|
||||||
|
|
||||||
|
return businessDays;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Fix TAT Processor:**
|
||||||
|
- Replace calendar hours calculation with `calculateElapsedWorkingHours()`
|
||||||
|
- This will fix TAT breach alerts to use proper working hours
|
||||||
|
|
||||||
|
3. **Update Workflow Aging Report:**
|
||||||
|
- Use `calculateBusinessDays()` instead of calendar days
|
||||||
|
- Filter by business days threshold
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## IMPLEMENTATION CHECKLIST
|
||||||
|
|
||||||
|
### Phase 1: Quick Wins (Fields Exist, Just Need to Populate)
|
||||||
|
- [ ] Update `activity.service.ts` to accept `ipAddress` and `userAgent`
|
||||||
|
- [ ] Update all controller calls to pass IP and user agent
|
||||||
|
- [ ] Add activity category inference
|
||||||
|
- [ ] Add severity inference
|
||||||
|
|
||||||
|
### Phase 2: Fix TAT Calculations (CRITICAL)
|
||||||
|
- [x] Fix `tatProcessor.ts` to use `calculateElapsedWorkingHours()` instead of calendar hours ✅
|
||||||
|
- [x] Add `calculateBusinessDays()` function to `tatTimeUtils.ts` ✅
|
||||||
|
- [ ] Test TAT breach calculations with working hours
|
||||||
|
|
||||||
|
### Phase 3: New Functionality
|
||||||
|
- [x] Add login activity logging ✅ (Implemented in auth.controller.ts for SSO and token exchange)
|
||||||
|
- [x] Ensure level names are set when creating approval levels ✅ (levelName set in workflow.service.ts)
|
||||||
|
- [x] Add device/browser parsing for user sessions ✅ (userAgentParser.ts utility created - can be used for parsing user agent strings)
|
||||||
|
|
||||||
|
### Phase 4: Enhanced Reporting
|
||||||
|
- [x] Build report endpoints using collected data ✅ (getLifecycleReport, getActivityLogReport, getWorkflowAgingReport)
|
||||||
|
- [x] Add filtering by category, severity ✅ (Filtering by category and severity added to getActivityLogReport, frontend UI added)
|
||||||
|
- [x] Add IP/user agent to activity log reports ✅ (IP and user agent captured and displayed)
|
||||||
|
- [x] Use business days in Workflow Aging Report ✅ (calculateBusinessDays implemented and used)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## CODE CHANGES NEEDED
|
||||||
|
|
||||||
|
### 1. Update Activity Service (`activity.service.ts`)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
export type ActivityEntry = {
|
||||||
|
requestId: string;
|
||||||
|
type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning' | 'ai_conclusion_generated' | 'closed' | 'login';
|
||||||
|
user?: { userId: string; name?: string; email?: string };
|
||||||
|
timestamp: string;
|
||||||
|
action: string;
|
||||||
|
details: string;
|
||||||
|
metadata?: any;
|
||||||
|
ipAddress?: string; // NEW
|
||||||
|
userAgent?: string; // NEW
|
||||||
|
category?: string; // NEW
|
||||||
|
severity?: string; // NEW
|
||||||
|
};
|
||||||
|
|
||||||
|
class ActivityService {
|
||||||
|
private inferCategory(type: string): string {
|
||||||
|
const categoryMap: Record<string, string> = {
|
||||||
|
'created': 'WORKFLOW',
|
||||||
|
'approval': 'WORKFLOW',
|
||||||
|
'rejection': 'WORKFLOW',
|
||||||
|
'status_change': 'WORKFLOW',
|
||||||
|
'assignment': 'WORKFLOW',
|
||||||
|
'comment': 'COLLABORATION',
|
||||||
|
'document_added': 'DOCUMENT',
|
||||||
|
'sla_warning': 'SYSTEM',
|
||||||
|
'reminder': 'SYSTEM',
|
||||||
|
'ai_conclusion_generated': 'SYSTEM',
|
||||||
|
'closed': 'WORKFLOW',
|
||||||
|
'login': 'AUTHENTICATION'
|
||||||
|
};
|
||||||
|
return categoryMap[type] || 'OTHER';
|
||||||
|
}
|
||||||
|
|
||||||
|
private inferSeverity(type: string): string {
|
||||||
|
const severityMap: Record<string, string> = {
|
||||||
|
'rejection': 'WARNING',
|
||||||
|
'sla_warning': 'WARNING',
|
||||||
|
'approval': 'INFO',
|
||||||
|
'closed': 'INFO',
|
||||||
|
'status_change': 'INFO',
|
||||||
|
'login': 'INFO',
|
||||||
|
'created': 'INFO',
|
||||||
|
'comment': 'INFO',
|
||||||
|
'document_added': 'INFO'
|
||||||
|
};
|
||||||
|
return severityMap[type] || 'INFO';
|
||||||
|
}
|
||||||
|
|
||||||
|
async log(entry: ActivityEntry) {
|
||||||
|
// ... existing code ...
|
||||||
|
const activityData = {
|
||||||
|
requestId: entry.requestId,
|
||||||
|
userId: entry.user?.userId || null,
|
||||||
|
userName: entry.user?.name || entry.user?.email || null,
|
||||||
|
activityType: entry.type,
|
||||||
|
activityDescription: entry.details,
|
||||||
|
activityCategory: entry.category || this.inferCategory(entry.type),
|
||||||
|
severity: entry.severity || this.inferSeverity(entry.type),
|
||||||
|
metadata: entry.metadata || null,
|
||||||
|
isSystemEvent: !entry.user,
|
||||||
|
ipAddress: entry.ipAddress || null, // NEW
|
||||||
|
userAgent: entry.userAgent || null, // NEW
|
||||||
|
};
|
||||||
|
// ... rest of code ...
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Update Controller Calls (Example)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// In workflow.controller.ts, approval.controller.ts, etc.
|
||||||
|
activityService.log({
|
||||||
|
requestId: workflow.requestId,
|
||||||
|
type: 'created',
|
||||||
|
user: { userId, name: user.displayName },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'Request Created',
|
||||||
|
details: `Request ${workflow.requestNumber} created`,
|
||||||
|
ipAddress: req.ip || req.headers['x-forwarded-for'] || null, // NEW
|
||||||
|
userAgent: req.headers['user-agent'] || null, // NEW
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Add Login Activity Logging
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// In auth.controller.ts after successful login
|
||||||
|
await activityService.log({
|
||||||
|
requestId: 'SYSTEM_LOGIN', // Special ID for system events
|
||||||
|
type: 'login',
|
||||||
|
user: { userId: user.userId, name: user.displayName },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'User Login',
|
||||||
|
details: `User logged in successfully`,
|
||||||
|
ipAddress: req.ip || req.headers['x-forwarded-for'] || null,
|
||||||
|
userAgent: req.headers['user-agent'] || null,
|
||||||
|
category: 'AUTHENTICATION',
|
||||||
|
severity: 'INFO'
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## CONCLUSION
|
||||||
|
|
||||||
|
**Good News:** Most fields already exist in the database! We just need to:
|
||||||
|
1. Populate existing fields (IP, user agent, category, severity)
|
||||||
|
2. Add login activity logging
|
||||||
|
3. Ensure level names are set
|
||||||
|
|
||||||
|
**Estimated Effort:**
|
||||||
|
- Phase 1 (Quick Wins): 2-4 hours
|
||||||
|
- Phase 2 (New Functionality): 4-6 hours
|
||||||
|
- Phase 3 (Enhanced Reporting): 8-12 hours
|
||||||
|
|
||||||
|
**Total: ~14-22 hours of development work**
|
||||||
|
|
||||||
129
FIXES_APPLIED.md
129
FIXES_APPLIED.md
@ -1,129 +0,0 @@
|
|||||||
# 🔧 Backend Fixes Applied - November 4, 2025
|
|
||||||
|
|
||||||
## ✅ Issue 1: TypeScript Compilation Error
|
|
||||||
|
|
||||||
### **Error:**
|
|
||||||
```
|
|
||||||
src/services/tatScheduler.service.ts:30:15 - error TS2339:
|
|
||||||
Property 'halfTime' does not exist on type 'Promise<{ halfTime: Date; ... }>'.
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Root Cause:**
|
|
||||||
`calculateTatMilestones()` was changed from sync to async (to support holiday checking), but `tatScheduler.service.ts` was calling it without `await`.
|
|
||||||
|
|
||||||
### **Fix Applied:**
|
|
||||||
```typescript
|
|
||||||
// Before (❌ Missing await):
|
|
||||||
const { halfTime, seventyFive, full } = calculateTatMilestones(now, tatDurationHours);
|
|
||||||
|
|
||||||
// After (✅ With await):
|
|
||||||
const { halfTime, seventyFive, full } = await calculateTatMilestones(now, tatDurationHours);
|
|
||||||
```
|
|
||||||
|
|
||||||
**File:** `Re_Backend/src/services/tatScheduler.service.ts` (line 30)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Issue 2: Empty Configurations Table
|
|
||||||
|
|
||||||
### **Problem:**
|
|
||||||
`admin_configurations` table created but empty → Frontend can't fetch any configurations.
|
|
||||||
|
|
||||||
### **Fix Applied:**
|
|
||||||
Created auto-seeding service that runs on server startup:
|
|
||||||
|
|
||||||
**File:** `Re_Backend/src/services/configSeed.service.ts`
|
|
||||||
- Checks if configurations exist
|
|
||||||
- If empty, seeds 10 default configurations:
|
|
||||||
- 6 TAT Settings (default hours, thresholds, working hours)
|
|
||||||
- 3 Document Policy settings
|
|
||||||
- 2 AI Configuration settings
|
|
||||||
|
|
||||||
### **Integration:**
|
|
||||||
Updated `Re_Backend/src/server.ts` to call `seedDefaultConfigurations()` on startup.
|
|
||||||
|
|
||||||
**Output on server start:**
|
|
||||||
```
|
|
||||||
⚙️ System configurations initialized
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 **Default Configurations Seeded**
|
|
||||||
|
|
||||||
| Config Key | Value | Category | UI Component |
|
|
||||||
|------------|-------|----------|--------------|
|
|
||||||
| `DEFAULT_TAT_EXPRESS_HOURS` | 24 | TAT_SETTINGS | number |
|
|
||||||
| `DEFAULT_TAT_STANDARD_HOURS` | 48 | TAT_SETTINGS | number |
|
|
||||||
| `TAT_REMINDER_THRESHOLD_1` | 50 | TAT_SETTINGS | slider |
|
|
||||||
| `TAT_REMINDER_THRESHOLD_2` | 75 | TAT_SETTINGS | slider |
|
|
||||||
| `WORK_START_HOUR` | 9 | TAT_SETTINGS | number |
|
|
||||||
| `WORK_END_HOUR` | 18 | TAT_SETTINGS | number |
|
|
||||||
| `MAX_FILE_SIZE_MB` | 10 | DOCUMENT_POLICY | number |
|
|
||||||
| `ALLOWED_FILE_TYPES` | pdf,doc,... | DOCUMENT_POLICY | text |
|
|
||||||
| `DOCUMENT_RETENTION_DAYS` | 365 | DOCUMENT_POLICY | number |
|
|
||||||
| `AI_REMARK_GENERATION_ENABLED` | true | AI_CONFIGURATION | toggle |
|
|
||||||
| `AI_REMARK_MAX_CHARACTERS` | 500 | AI_CONFIGURATION | number |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 **How to Verify**
|
|
||||||
|
|
||||||
### **Step 1: Restart Backend**
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Expected Output:**
|
|
||||||
```
|
|
||||||
⚙️ System configurations initialized
|
|
||||||
📅 Holiday calendar loaded for TAT calculations
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Check Database**
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) FROM admin_configurations;
|
|
||||||
-- Should return: 11 (10 default configs)
|
|
||||||
|
|
||||||
SELECT config_key, config_value FROM admin_configurations ORDER BY sort_order;
|
|
||||||
-- Should show all seeded configurations
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 3: Test Frontend**
|
|
||||||
```bash
|
|
||||||
# Login as admin
|
|
||||||
# Navigate to Settings → System Configuration tab
|
|
||||||
# Should see all configurations grouped by category
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ **Status: Both Issues Resolved**
|
|
||||||
|
|
||||||
| Issue | Status | Fix |
|
|
||||||
|-------|--------|-----|
|
|
||||||
| TypeScript compilation error | ✅ Fixed | Added `await` to async function call |
|
|
||||||
| Empty configurations table | ✅ Fixed | Auto-seeding on server startup |
|
|
||||||
| Holiday list not fetching | ✅ Will work | Backend now starts successfully |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 **Next Steps**
|
|
||||||
|
|
||||||
1. ✅ **Restart backend** - `npm run dev`
|
|
||||||
2. ✅ **Verify configurations seeded** - Check logs for "System configurations initialized"
|
|
||||||
3. ✅ **Test frontend** - Login as admin and view Settings
|
|
||||||
4. ✅ **Add holidays** - Use the Holiday Calendar tab
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**All systems ready! 🚀**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Fixed:** November 4, 2025
|
|
||||||
**Files Modified:** 3
|
|
||||||
**Status:** Complete
|
|
||||||
|
|
||||||
@ -1,731 +0,0 @@
|
|||||||
# ✅ Holiday Calendar & Admin Configuration System - Complete
|
|
||||||
|
|
||||||
## 🎉 What's Been Implemented
|
|
||||||
|
|
||||||
### **1. Holiday Calendar System** 📅
|
|
||||||
- ✅ Admin can add/edit/delete organization holidays
|
|
||||||
- ✅ Holidays automatically excluded from STANDARD priority TAT calculations
|
|
||||||
- ✅ Weekends (Saturday/Sunday) + Holidays = Non-working days
|
|
||||||
- ✅ Supports recurring holidays (annual)
|
|
||||||
- ✅ Department/location-specific holidays
|
|
||||||
- ✅ Bulk import from JSON/CSV
|
|
||||||
- ✅ Year-based calendar view
|
|
||||||
- ✅ Automatic cache refresh
|
|
||||||
|
|
||||||
### **2. Admin Configuration System** ⚙️
|
|
||||||
- ✅ Centralized configuration management
|
|
||||||
- ✅ All planned config areas supported:
|
|
||||||
- TAT Settings
|
|
||||||
- User Roles
|
|
||||||
- Notification Rules
|
|
||||||
- Document Policy
|
|
||||||
- Dashboard Layout
|
|
||||||
- AI Configuration
|
|
||||||
- Workflow Sharing Policy
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Database Schema
|
|
||||||
|
|
||||||
### **New Tables Created:**
|
|
||||||
|
|
||||||
**1. `holidays` Table:**
|
|
||||||
```sql
|
|
||||||
- holiday_id (UUID, PK)
|
|
||||||
- holiday_date (DATE, UNIQUE) -- YYYY-MM-DD
|
|
||||||
- holiday_name (VARCHAR) -- "Diwali", "Republic Day"
|
|
||||||
- description (TEXT) -- Optional details
|
|
||||||
- is_recurring (BOOLEAN) -- Annual holidays
|
|
||||||
- recurrence_rule (VARCHAR) -- RRULE format
|
|
||||||
- holiday_type (ENUM) -- NATIONAL, REGIONAL, ORGANIZATIONAL, OPTIONAL
|
|
||||||
- is_active (BOOLEAN) -- Enable/disable
|
|
||||||
- applies_to_departments (TEXT[]) -- NULL = all
|
|
||||||
- applies_to_locations (TEXT[]) -- NULL = all
|
|
||||||
- created_by (UUID FK)
|
|
||||||
- updated_by (UUID FK)
|
|
||||||
- created_at, updated_at
|
|
||||||
```
|
|
||||||
|
|
||||||
**2. `admin_configurations` Table:**
|
|
||||||
```sql
|
|
||||||
- config_id (UUID, PK)
|
|
||||||
- config_key (VARCHAR, UNIQUE) -- "DEFAULT_TAT_EXPRESS_HOURS"
|
|
||||||
- config_category (ENUM) -- TAT_SETTINGS, NOTIFICATION_RULES, etc.
|
|
||||||
- config_value (TEXT) -- Actual value
|
|
||||||
- value_type (ENUM) -- STRING, NUMBER, BOOLEAN, JSON, ARRAY
|
|
||||||
- display_name (VARCHAR) -- UI-friendly name
|
|
||||||
- description (TEXT)
|
|
||||||
- default_value (TEXT) -- Reset value
|
|
||||||
- is_editable (BOOLEAN)
|
|
||||||
- is_sensitive (BOOLEAN) -- For API keys, passwords
|
|
||||||
- validation_rules (JSONB) -- Min, max, regex
|
|
||||||
- ui_component (VARCHAR) -- input, select, toggle, slider
|
|
||||||
- options (JSONB) -- For dropdown options
|
|
||||||
- sort_order (INTEGER) -- Display order
|
|
||||||
- requires_restart (BOOLEAN)
|
|
||||||
- last_modified_by (UUID FK)
|
|
||||||
- last_modified_at (TIMESTAMP)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔌 API Endpoints
|
|
||||||
|
|
||||||
### **Holiday Management:**
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| GET | `/api/admin/holidays` | Get all holidays (with year filter) |
|
|
||||||
| GET | `/api/admin/holidays/calendar/:year` | Get calendar for specific year |
|
|
||||||
| POST | `/api/admin/holidays` | Create new holiday |
|
|
||||||
| PUT | `/api/admin/holidays/:holidayId` | Update holiday |
|
|
||||||
| DELETE | `/api/admin/holidays/:holidayId` | Delete (deactivate) holiday |
|
|
||||||
| POST | `/api/admin/holidays/bulk-import` | Bulk import holidays |
|
|
||||||
|
|
||||||
### **Configuration Management:**
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| GET | `/api/admin/configurations` | Get all configurations |
|
|
||||||
| GET | `/api/admin/configurations?category=TAT_SETTINGS` | Get by category |
|
|
||||||
| PUT | `/api/admin/configurations/:configKey` | Update configuration |
|
|
||||||
| POST | `/api/admin/configurations/:configKey/reset` | Reset to default |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 TAT Calculation with Holidays
|
|
||||||
|
|
||||||
### **STANDARD Priority (Working Days):**
|
|
||||||
|
|
||||||
**Excludes:**
|
|
||||||
- ✅ Saturdays (day 6)
|
|
||||||
- ✅ Sundays (day 0)
|
|
||||||
- ✅ Holidays from `holidays` table
|
|
||||||
- ✅ Outside working hours (before 9 AM, after 6 PM)
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
Submit: Monday Oct 20 at 10:00 AM
|
|
||||||
TAT: 48 hours (STANDARD priority)
|
|
||||||
Holiday: Tuesday Oct 21 (Diwali)
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Monday 10 AM - 6 PM = 8 hours (total: 8h)
|
|
||||||
Tuesday = HOLIDAY (skipped)
|
|
||||||
Wednesday 9 AM - 6 PM = 9 hours (total: 17h)
|
|
||||||
Thursday 9 AM - 6 PM = 9 hours (total: 26h)
|
|
||||||
Friday 9 AM - 6 PM = 9 hours (total: 35h)
|
|
||||||
Saturday-Sunday = WEEKEND (skipped)
|
|
||||||
Monday 9 AM - 10 PM = 13 hours (total: 48h)
|
|
||||||
|
|
||||||
Due: Monday Oct 27 at 10:00 AM
|
|
||||||
```
|
|
||||||
|
|
||||||
### **EXPRESS Priority (Calendar Days):**
|
|
||||||
|
|
||||||
**Excludes: NOTHING**
|
|
||||||
- All days included (weekends, holidays, 24/7)
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
Submit: Monday Oct 20 at 10:00 AM
|
|
||||||
TAT: 48 hours (EXPRESS priority)
|
|
||||||
|
|
||||||
Due: Wednesday Oct 22 at 10:00 AM (exactly 48 hours later)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔄 Holiday Cache System
|
|
||||||
|
|
||||||
### **How It Works:**
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Server Starts
|
|
||||||
↓
|
|
||||||
2. Load holidays from database (current year + next year)
|
|
||||||
↓
|
|
||||||
3. Store in memory cache (Set of date strings)
|
|
||||||
↓
|
|
||||||
4. Cache expires after 6 hours
|
|
||||||
↓
|
|
||||||
5. Auto-reload when expired
|
|
||||||
↓
|
|
||||||
6. Manual reload when admin adds/updates/deletes holiday
|
|
||||||
```
|
|
||||||
|
|
||||||
**Benefits:**
|
|
||||||
- ⚡ Fast lookups (O(1) Set lookup)
|
|
||||||
- 💾 Minimal memory (just date strings)
|
|
||||||
- 🔄 Auto-refresh every 6 hours
|
|
||||||
- 🎯 Immediate update when admin changes holidays
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎨 Frontend UI (To Be Built)
|
|
||||||
|
|
||||||
### **Admin Dashboard → Holiday Management:**
|
|
||||||
|
|
||||||
```tsx
|
|
||||||
<HolidayManagementPage>
|
|
||||||
{/* Year Selector */}
|
|
||||||
<YearSelector
|
|
||||||
currentYear={2025}
|
|
||||||
onChange={loadHolidaysForYear}
|
|
||||||
/>
|
|
||||||
|
|
||||||
{/* Calendar View */}
|
|
||||||
<CalendarGrid year={2025}>
|
|
||||||
{/* Days with holidays highlighted */}
|
|
||||||
<Day date="2025-01-26" isHoliday holidayName="Republic Day" />
|
|
||||||
<Day date="2025-08-15" isHoliday holidayName="Independence Day" />
|
|
||||||
</CalendarGrid>
|
|
||||||
|
|
||||||
{/* List View */}
|
|
||||||
<HolidayList>
|
|
||||||
<HolidayCard
|
|
||||||
date="2025-01-26"
|
|
||||||
name="Republic Day"
|
|
||||||
type="NATIONAL"
|
|
||||||
recurring={true}
|
|
||||||
onEdit={handleEdit}
|
|
||||||
onDelete={handleDelete}
|
|
||||||
/>
|
|
||||||
</HolidayList>
|
|
||||||
|
|
||||||
{/* Actions */}
|
|
||||||
<div className="actions">
|
|
||||||
<Button onClick={openAddHolidayModal}>
|
|
||||||
+ Add Holiday
|
|
||||||
</Button>
|
|
||||||
<Button onClick={openBulkImportDialog}>
|
|
||||||
📁 Import Holidays
|
|
||||||
</Button>
|
|
||||||
</div>
|
|
||||||
</HolidayManagementPage>
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Default Configurations
|
|
||||||
|
|
||||||
### **Pre-seeded in database:**
|
|
||||||
|
|
||||||
| Config Key | Value | Category | Description |
|
|
||||||
|------------|-------|----------|-------------|
|
|
||||||
| `DEFAULT_TAT_EXPRESS_HOURS` | 24 | TAT_SETTINGS | Default TAT for express |
|
|
||||||
| `DEFAULT_TAT_STANDARD_HOURS` | 48 | TAT_SETTINGS | Default TAT for standard |
|
|
||||||
| `TAT_REMINDER_THRESHOLD_1` | 50 | TAT_SETTINGS | First reminder at 50% |
|
|
||||||
| `TAT_REMINDER_THRESHOLD_2` | 75 | TAT_SETTINGS | Second reminder at 75% |
|
|
||||||
| `WORK_START_HOUR` | 9 | TAT_SETTINGS | Work day starts at 9 AM |
|
|
||||||
| `WORK_END_HOUR` | 18 | TAT_SETTINGS | Work day ends at 6 PM |
|
|
||||||
| `MAX_FILE_SIZE_MB` | 10 | DOCUMENT_POLICY | Max upload size |
|
|
||||||
| `ALLOWED_FILE_TYPES` | pdf,doc,... | DOCUMENT_POLICY | Allowed extensions |
|
|
||||||
| `DOCUMENT_RETENTION_DAYS` | 365 | DOCUMENT_POLICY | Retention period |
|
|
||||||
| `AI_REMARK_GENERATION_ENABLED` | true | AI_CONFIGURATION | Enable AI remarks |
|
|
||||||
| `AI_REMARK_MAX_CHARACTERS` | 500 | AI_CONFIGURATION | Max AI text length |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Start
|
|
||||||
|
|
||||||
### **Step 1: Run Migrations**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
**You'll see:**
|
|
||||||
```
|
|
||||||
✅ Holidays table created successfully
|
|
||||||
✅ Admin configurations table created and seeded
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Import Indian Holidays (Optional)**
|
|
||||||
|
|
||||||
Create a script or use the API:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Using curl (requires admin token):
|
|
||||||
curl -X POST http://localhost:5000/api/admin/holidays/bulk-import \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
|
||||||
-d @data/indian_holidays_2025.json
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 3: Verify Holidays Loaded**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) FROM holidays WHERE is_active = true;
|
|
||||||
-- Should return 14 (or however many you imported)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 4: Restart Backend**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**You'll see:**
|
|
||||||
```
|
|
||||||
📅 Holiday calendar loaded for TAT calculations
|
|
||||||
Loaded 14 holidays into cache
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Testing
|
|
||||||
|
|
||||||
### **Test 1: Create Holiday**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
POST /api/admin/holidays
|
|
||||||
{
|
|
||||||
"holidayDate": "2025-12-31",
|
|
||||||
"holidayName": "New Year's Eve",
|
|
||||||
"description": "Last day of the year",
|
|
||||||
"holidayType": "ORGANIZATIONAL"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 2: Verify Holiday Affects TAT**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create STANDARD priority request on Dec 30
|
|
||||||
# 2. Set TAT: 16 hours (2 working days)
|
|
||||||
# 3. Expected due: Jan 2 (skips Dec 31 holiday + weekend)
|
|
||||||
# 4. Actual due should be: Jan 2
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 3: Verify EXPRESS Not Affected**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create EXPRESS priority request on Dec 30
|
|
||||||
# 2. Set TAT: 48 hours
|
|
||||||
# 3. Expected due: Jan 1 (exactly 48 hours, includes holiday)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Admin Configuration UI (To Be Built)
|
|
||||||
|
|
||||||
### **Admin Settings Page:**
|
|
||||||
|
|
||||||
```tsx
|
|
||||||
<AdminSettings>
|
|
||||||
<Tabs>
|
|
||||||
<Tab value="tat">TAT Settings</Tab>
|
|
||||||
<Tab value="holidays">Holiday Calendar</Tab>
|
|
||||||
<Tab value="documents">Document Policy</Tab>
|
|
||||||
<Tab value="notifications">Notifications</Tab>
|
|
||||||
<Tab value="ai">AI Configuration</Tab>
|
|
||||||
</Tabs>
|
|
||||||
|
|
||||||
<TabPanel value="tat">
|
|
||||||
<ConfigSection>
|
|
||||||
<ConfigItem
|
|
||||||
label="Default TAT for Express (hours)"
|
|
||||||
type="number"
|
|
||||||
value={24}
|
|
||||||
min={1}
|
|
||||||
max={168}
|
|
||||||
onChange={handleUpdate}
|
|
||||||
/>
|
|
||||||
<ConfigItem
|
|
||||||
label="Default TAT for Standard (hours)"
|
|
||||||
type="number"
|
|
||||||
value={48}
|
|
||||||
min={1}
|
|
||||||
max={720}
|
|
||||||
/>
|
|
||||||
<ConfigItem
|
|
||||||
label="First Reminder Threshold (%)"
|
|
||||||
type="slider"
|
|
||||||
value={50}
|
|
||||||
min={1}
|
|
||||||
max={100}
|
|
||||||
/>
|
|
||||||
<ConfigItem
|
|
||||||
label="Working Hours"
|
|
||||||
type="timerange"
|
|
||||||
value={{ start: 9, end: 18 }}
|
|
||||||
/>
|
|
||||||
</ConfigSection>
|
|
||||||
</TabPanel>
|
|
||||||
|
|
||||||
<TabPanel value="holidays">
|
|
||||||
<HolidayCalendar />
|
|
||||||
</TabPanel>
|
|
||||||
</AdminSettings>
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Sample Queries
|
|
||||||
|
|
||||||
### **Get Holidays for Current Year:**
|
|
||||||
```sql
|
|
||||||
SELECT * FROM holidays
|
|
||||||
WHERE EXTRACT(YEAR FROM holiday_date) = EXTRACT(YEAR FROM CURRENT_DATE)
|
|
||||||
AND is_active = true
|
|
||||||
ORDER BY holiday_date;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Check if Date is Holiday:**
|
|
||||||
```sql
|
|
||||||
SELECT EXISTS(
|
|
||||||
SELECT 1 FROM holidays
|
|
||||||
WHERE holiday_date = '2025-08-15'
|
|
||||||
AND is_active = true
|
|
||||||
) as is_holiday;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Upcoming Holidays (Next 3 Months):**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
holiday_name,
|
|
||||||
holiday_date,
|
|
||||||
holiday_type,
|
|
||||||
description
|
|
||||||
FROM holidays
|
|
||||||
WHERE holiday_date BETWEEN CURRENT_DATE AND CURRENT_DATE + INTERVAL '90 days'
|
|
||||||
AND is_active = true
|
|
||||||
ORDER BY holiday_date;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Complete Feature Set
|
|
||||||
|
|
||||||
### **Holiday Management:**
|
|
||||||
- ✅ Create individual holidays
|
|
||||||
- ✅ Update holiday details
|
|
||||||
- ✅ Delete (deactivate) holidays
|
|
||||||
- ✅ Bulk import from JSON
|
|
||||||
- ✅ Year-based calendar view
|
|
||||||
- ✅ Recurring holidays support
|
|
||||||
- ✅ Department-specific holidays
|
|
||||||
- ✅ Location-specific holidays
|
|
||||||
|
|
||||||
### **TAT Integration:**
|
|
||||||
- ✅ STANDARD priority skips holidays
|
|
||||||
- ✅ EXPRESS priority ignores holidays
|
|
||||||
- ✅ Automatic cache management
|
|
||||||
- ✅ Performance optimized (in-memory cache)
|
|
||||||
- ✅ Real-time updates when holidays change
|
|
||||||
|
|
||||||
### **Admin Configuration:**
|
|
||||||
- ✅ TAT default values
|
|
||||||
- ✅ Reminder thresholds
|
|
||||||
- ✅ Working hours
|
|
||||||
- ✅ Document policies
|
|
||||||
- ✅ AI settings
|
|
||||||
- ✅ All configs with validation rules
|
|
||||||
- ✅ UI component hints
|
|
||||||
- ✅ Reset to default option
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📦 Files Created
|
|
||||||
|
|
||||||
### **Backend (10 new files):**
|
|
||||||
1. `src/models/Holiday.ts` - Holiday model
|
|
||||||
2. `src/services/holiday.service.ts` - Holiday management service
|
|
||||||
3. `src/controllers/admin.controller.ts` - Admin API controllers
|
|
||||||
4. `src/routes/admin.routes.ts` - Admin API routes
|
|
||||||
5. `src/migrations/20251104-create-holidays.ts` - Holidays table migration
|
|
||||||
6. `src/migrations/20251104-create-admin-config.ts` - Admin config migration
|
|
||||||
7. `data/indian_holidays_2025.json` - Sample holidays data
|
|
||||||
8. `docs/HOLIDAY_CALENDAR_SYSTEM.md` - Complete documentation
|
|
||||||
|
|
||||||
### **Modified Files (6):**
|
|
||||||
1. `src/utils/tatTimeUtils.ts` - Added holiday checking
|
|
||||||
2. `src/server.ts` - Initialize holidays cache
|
|
||||||
3. `src/models/index.ts` - Export Holiday model
|
|
||||||
4. `src/routes/index.ts` - Register admin routes
|
|
||||||
5. `src/middlewares/authorization.middleware.ts` - Added requireAdmin
|
|
||||||
6. `src/scripts/migrate.ts` - Include new migrations
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 How to Use
|
|
||||||
|
|
||||||
### **Step 1: Run Migrations**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Output:**
|
|
||||||
```
|
|
||||||
✅ Holidays table created successfully
|
|
||||||
✅ Admin configurations table created and seeded
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Restart Backend**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Output:**
|
|
||||||
```
|
|
||||||
📅 Holiday calendar loaded for TAT calculations
|
|
||||||
[TAT Utils] Loaded 0 holidays into cache (will load when admin adds holidays)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 3: Add Holidays via API**
|
|
||||||
|
|
||||||
**Option A: Add Individual Holiday:**
|
|
||||||
```bash
|
|
||||||
curl -X POST http://localhost:5000/api/admin/holidays \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
|
||||||
-d '{
|
|
||||||
"holidayDate": "2025-11-05",
|
|
||||||
"holidayName": "Diwali",
|
|
||||||
"description": "Festival of Lights",
|
|
||||||
"holidayType": "NATIONAL"
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
|
|
||||||
**Option B: Bulk Import:**
|
|
||||||
```bash
|
|
||||||
# Use the sample data file:
|
|
||||||
curl -X POST http://localhost:5000/api/admin/holidays/bulk-import \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
|
||||||
-d @data/indian_holidays_2025.json
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 4: Test TAT with Holidays**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create STANDARD priority request
|
|
||||||
# 2. TAT calculation will now skip holidays
|
|
||||||
# 3. Due date will be later if holidays fall within TAT period
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 TAT Calculation Examples
|
|
||||||
|
|
||||||
### **Example 1: No Holidays in TAT Period**
|
|
||||||
|
|
||||||
```
|
|
||||||
Submit: Monday Dec 1, 10:00 AM
|
|
||||||
TAT: 24 hours (STANDARD)
|
|
||||||
Holidays: None in this period
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Monday 10 AM - 6 PM = 8 hours
|
|
||||||
Tuesday 9 AM - 1 PM = 4 hours
|
|
||||||
Total = 12 hours (needs 12 more)
|
|
||||||
...
|
|
||||||
Due: Tuesday 1:00 PM
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Example 2: Holiday in TAT Period**
|
|
||||||
|
|
||||||
```
|
|
||||||
Submit: Friday Oct 31, 10:00 AM
|
|
||||||
TAT: 24 hours (STANDARD)
|
|
||||||
Holiday: Monday Nov 3 (Diwali)
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Friday 10 AM - 6 PM = 8 hours
|
|
||||||
Saturday-Sunday = WEEKEND (skipped)
|
|
||||||
Monday = HOLIDAY (skipped)
|
|
||||||
Tuesday 9 AM - 6 PM = 9 hours (total: 17h)
|
|
||||||
Wednesday 9 AM - 2 PM = 5 hours (total: 22h)
|
|
||||||
...
|
|
||||||
Due: Wednesday Nov 5 at 2:00 PM
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔒 Security
|
|
||||||
|
|
||||||
### **Admin Access Required:**
|
|
||||||
|
|
||||||
All holiday and configuration endpoints check:
|
|
||||||
1. ✅ User is authenticated (`authenticateToken`)
|
|
||||||
2. ✅ User has admin role (`requireAdmin`)
|
|
||||||
|
|
||||||
**Non-admins get:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": false,
|
|
||||||
"error": "Admin access required"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Admin Configuration Categories
|
|
||||||
|
|
||||||
### **1. TAT Settings**
|
|
||||||
- Default TAT hours (Express/Standard)
|
|
||||||
- Reminder thresholds (50%, 75%)
|
|
||||||
- Working hours (9 AM - 6 PM)
|
|
||||||
|
|
||||||
### **2. User Roles** (Future)
|
|
||||||
- Add/deactivate users
|
|
||||||
- Change roles (Initiator, Approver, Spectator)
|
|
||||||
|
|
||||||
### **3. Notification Rules**
|
|
||||||
- Channels (in-app, email)
|
|
||||||
- Frequency
|
|
||||||
- Template messages
|
|
||||||
|
|
||||||
### **4. Document Policy**
|
|
||||||
- Max upload size (10 MB)
|
|
||||||
- Allowed file types
|
|
||||||
- Retention period (365 days)
|
|
||||||
|
|
||||||
### **5. Dashboard Layout** (Future)
|
|
||||||
- Enable/disable KPI cards per role
|
|
||||||
|
|
||||||
### **6. AI Configuration**
|
|
||||||
- Toggle AI remark generation
|
|
||||||
- Max characters (500)
|
|
||||||
|
|
||||||
### **7. Workflow Sharing Policy** (Future)
|
|
||||||
- Control who can add spectators
|
|
||||||
- Share links permissions
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Implementation Summary
|
|
||||||
|
|
||||||
| Feature | Status | Notes |
|
|
||||||
|---------|--------|-------|
|
|
||||||
| **Holidays Table** | ✅ Created | With 4 indexes |
|
|
||||||
| **Admin Config Table** | ✅ Created | Pre-seeded with defaults |
|
|
||||||
| **Holiday Service** | ✅ Implemented | CRUD + bulk import |
|
|
||||||
| **Admin Controller** | ✅ Implemented | All endpoints |
|
|
||||||
| **Admin Routes** | ✅ Implemented | Secured with requireAdmin |
|
|
||||||
| **TAT Integration** | ✅ Implemented | Holidays excluded for STANDARD |
|
|
||||||
| **Holiday Cache** | ✅ Implemented | 6-hour expiry, auto-refresh |
|
|
||||||
| **Sample Data** | ✅ Created | 14 Indian holidays for 2025 |
|
|
||||||
| **Documentation** | ✅ Complete | Full guide created |
|
|
||||||
| **Migrations** | ✅ Ready | 2 new migrations added |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Next Steps
|
|
||||||
|
|
||||||
### **Immediate:**
|
|
||||||
1. ✅ Run migrations: `npm run migrate`
|
|
||||||
2. ✅ Restart backend: `npm run dev`
|
|
||||||
3. ✅ Verify holidays table exists
|
|
||||||
4. ✅ Import sample holidays (optional)
|
|
||||||
|
|
||||||
### **Frontend Development:**
|
|
||||||
1. 📋 Build Holiday Management page
|
|
||||||
2. 📋 Build Admin Configuration page
|
|
||||||
3. 📋 Build Calendar view component
|
|
||||||
4. 📋 Build Bulk import UI
|
|
||||||
5. 📋 Add to Admin Dashboard
|
|
||||||
|
|
||||||
### **Future Enhancements:**
|
|
||||||
1. 📋 Recurring holiday auto-generation
|
|
||||||
2. 📋 Holiday templates by country
|
|
||||||
3. 📋 Email notifications for upcoming holidays
|
|
||||||
4. 📋 Holiday impact reports (how many requests affected)
|
|
||||||
5. 📋 Multi-year holiday planning
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Impact on Existing Requests
|
|
||||||
|
|
||||||
### **For Existing Requests:**
|
|
||||||
|
|
||||||
**Before Holidays Table:**
|
|
||||||
- TAT calculation: Weekends only
|
|
||||||
|
|
||||||
**After Holidays Table:**
|
|
||||||
- TAT calculation: Weekends + Holidays
|
|
||||||
- Due dates may change for active requests
|
|
||||||
- Historical requests unchanged
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🆘 Troubleshooting
|
|
||||||
|
|
||||||
### **Holidays Not Excluded from TAT?**
|
|
||||||
|
|
||||||
**Check:**
|
|
||||||
1. Holidays cache loaded? Look for "Loaded X holidays into cache" in logs
|
|
||||||
2. Priority is STANDARD? (EXPRESS doesn't use holidays)
|
|
||||||
3. Holiday is active? `is_active = true`
|
|
||||||
4. Holiday date is correct format? `YYYY-MM-DD`
|
|
||||||
|
|
||||||
**Debug:**
|
|
||||||
```sql
|
|
||||||
-- Check if holiday exists
|
|
||||||
SELECT * FROM holidays
|
|
||||||
WHERE holiday_date = '2025-11-05'
|
|
||||||
AND is_active = true;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Cache Not Updating After Adding Holiday?**
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
- Cache refreshes automatically when admin adds/updates/deletes
|
|
||||||
- If not working, restart backend server
|
|
||||||
- Cache refreshes every 6 hours automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 Future Admin Features
|
|
||||||
|
|
||||||
Based on your requirements, these can be added:
|
|
||||||
|
|
||||||
### **User Role Management:**
|
|
||||||
- Add/remove users
|
|
||||||
- Change user roles
|
|
||||||
- Activate/deactivate accounts
|
|
||||||
|
|
||||||
### **Notification Templates:**
|
|
||||||
- Customize email/push templates
|
|
||||||
- Set notification frequency
|
|
||||||
- Channel preferences
|
|
||||||
|
|
||||||
### **Dashboard Customization:**
|
|
||||||
- Enable/disable KPI cards
|
|
||||||
- Customize card order
|
|
||||||
- Role-based dashboard views
|
|
||||||
|
|
||||||
### **Workflow Policies:**
|
|
||||||
- Who can add spectators
|
|
||||||
- Sharing permissions
|
|
||||||
- Approval flow templates
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 Status: COMPLETE!
|
|
||||||
|
|
||||||
✅ **Holiday Calendar System** - Fully implemented
|
|
||||||
✅ **Admin Configuration** - Schema and API ready
|
|
||||||
✅ **TAT Integration** - Holidays excluded for STANDARD priority
|
|
||||||
✅ **API Endpoints** - All CRUD operations
|
|
||||||
✅ **Security** - Admin-only access
|
|
||||||
✅ **Performance** - Optimized with caching
|
|
||||||
✅ **Sample Data** - Indian holidays 2025
|
|
||||||
✅ **Documentation** - Complete guide
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Just run migrations and you're ready to go! 🚀**
|
|
||||||
|
|
||||||
See `docs/HOLIDAY_CALENDAR_SYSTEM.md` for detailed API documentation.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Version**: 1.0.0
|
|
||||||
**Team**: Royal Enfield Workflow System
|
|
||||||
|
|
||||||
@ -1,516 +0,0 @@
|
|||||||
# Holiday Handling & EXPRESS Mode TAT Calculation
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
The TAT (Turn Around Time) system now supports:
|
|
||||||
1. **Holiday Exclusions** - Configured holidays are excluded from STANDARD priority TAT calculations
|
|
||||||
2. **EXPRESS Mode** - EXPRESS priority requests use 24/7 calculation (no exclusions)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## How It Works
|
|
||||||
|
|
||||||
### **STANDARD Priority (Default)**
|
|
||||||
|
|
||||||
**Calculation:**
|
|
||||||
- ✅ Excludes weekends (Saturday, Sunday)
|
|
||||||
- ✅ Excludes non-working hours (9 AM - 6 PM by default)
|
|
||||||
- ✅ **Excludes holidays configured in Admin Settings**
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
TAT = 16 working hours
|
|
||||||
Start: Monday 2:00 PM
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Monday 2:00 PM - 6:00 PM = 4 hours (remaining: 12h)
|
|
||||||
Tuesday 9:00 AM - 6:00 PM = 9 hours (remaining: 3h)
|
|
||||||
Wednesday 9:00 AM - 12:00 PM = 3 hours (remaining: 0h)
|
|
||||||
|
|
||||||
If Wednesday is a HOLIDAY → Skip to Thursday:
|
|
||||||
Wednesday (HOLIDAY) = 0 hours (skipped)
|
|
||||||
Thursday 9:00 AM - 12:00 PM = 3 hours (remaining: 0h)
|
|
||||||
|
|
||||||
Final deadline: Thursday 12:00 PM ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **EXPRESS Priority**
|
|
||||||
|
|
||||||
**Calculation:**
|
|
||||||
- ✅ Counts ALL hours (24/7)
|
|
||||||
- ✅ **No weekend exclusion**
|
|
||||||
- ✅ **No non-working hours exclusion**
|
|
||||||
- ✅ **No holiday exclusion**
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
```
|
|
||||||
TAT = 16 hours
|
|
||||||
Start: Monday 2:00 PM
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Simply add 16 hours:
|
|
||||||
Monday 2:00 PM + 16 hours = Tuesday 6:00 AM
|
|
||||||
|
|
||||||
Final deadline: Tuesday 6:00 AM ✅
|
|
||||||
|
|
||||||
(Even if Tuesday is a holiday, it still counts)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Holiday Configuration Flow
|
|
||||||
|
|
||||||
### **1. Admin Adds Holiday**
|
|
||||||
|
|
||||||
```
|
|
||||||
Settings Page → Holiday Manager → Add Holiday
|
|
||||||
Name: "Christmas Day"
|
|
||||||
Date: 2025-12-25
|
|
||||||
Type: Public Holiday
|
|
||||||
✅ Save
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Holiday Stored in Database**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
INSERT INTO holidays (holiday_date, holiday_name, holiday_type, is_active)
|
|
||||||
VALUES ('2025-12-25', 'Christmas Day', 'PUBLIC_HOLIDAY', true);
|
|
||||||
```
|
|
||||||
|
|
||||||
### **3. Holiday Cache Updated**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Holidays are cached in memory for 6 hours
|
|
||||||
await loadHolidaysCache();
|
|
||||||
// → holidaysCache = Set(['2025-12-25', '2025-01-01', ...])
|
|
||||||
```
|
|
||||||
|
|
||||||
### **4. TAT Calculation Uses Holiday Cache**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// When scheduling TAT jobs
|
|
||||||
if (priority === 'STANDARD') {
|
|
||||||
// Working hours calculation - checks holidays
|
|
||||||
const threshold1 = await addWorkingHours(start, hours * 0.55);
|
|
||||||
// → If date is in holidaysCache, it's skipped ✅
|
|
||||||
} else {
|
|
||||||
// EXPRESS: 24/7 calculation - ignores holidays
|
|
||||||
const threshold1 = addCalendarHours(start, hours * 0.55);
|
|
||||||
// → Adds hours directly, no checks ✅
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Implementation Details
|
|
||||||
|
|
||||||
### **Function: `addWorkingHours()` (STANDARD Mode)**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
export async function addWorkingHours(start: Date, hoursToAdd: number): Promise<Dayjs> {
|
|
||||||
let current = dayjs(start);
|
|
||||||
|
|
||||||
// Load holidays from database (cached)
|
|
||||||
await loadHolidaysCache();
|
|
||||||
|
|
||||||
let remaining = hoursToAdd;
|
|
||||||
|
|
||||||
while (remaining > 0) {
|
|
||||||
current = current.add(1, 'hour');
|
|
||||||
|
|
||||||
// Check if current hour is working time
|
|
||||||
if (isWorkingTime(current)) { // ✅ Checks holidays here
|
|
||||||
remaining -= 1;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return current;
|
|
||||||
}
|
|
||||||
|
|
||||||
function isWorkingTime(date: Dayjs): boolean {
|
|
||||||
// Check weekend
|
|
||||||
if (date.day() === 0 || date.day() === 6) return false;
|
|
||||||
|
|
||||||
// Check working hours
|
|
||||||
if (date.hour() < 9 || date.hour() >= 18) return false;
|
|
||||||
|
|
||||||
// Check if holiday ✅
|
|
||||||
if (isHoliday(date)) return false;
|
|
||||||
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
function isHoliday(date: Dayjs): boolean {
|
|
||||||
const dateStr = date.format('YYYY-MM-DD');
|
|
||||||
return holidaysCache.has(dateStr); // ✅ Checks cached holidays
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Function: `addCalendarHours()` (EXPRESS Mode)**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
export function addCalendarHours(start: Date, hoursToAdd: number): Dayjs {
|
|
||||||
// Simple addition - no checks ✅
|
|
||||||
return dayjs(start).add(hoursToAdd, 'hour');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## TAT Scheduler Integration
|
|
||||||
|
|
||||||
### **Updated Method Signature:**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
async scheduleTatJobs(
|
|
||||||
requestId: string,
|
|
||||||
levelId: string,
|
|
||||||
approverId: string,
|
|
||||||
tatDurationHours: number,
|
|
||||||
startTime?: Date,
|
|
||||||
priority: Priority = Priority.STANDARD // ✅ New parameter
|
|
||||||
): Promise<void>
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Priority-Based Calculation:**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
const isExpress = priority === Priority.EXPRESS;
|
|
||||||
|
|
||||||
if (isExpress) {
|
|
||||||
// EXPRESS: 24/7 calculation
|
|
||||||
threshold1Time = addCalendarHours(now, hours * 0.55).toDate();
|
|
||||||
threshold2Time = addCalendarHours(now, hours * 0.80).toDate();
|
|
||||||
breachTime = addCalendarHours(now, hours).toDate();
|
|
||||||
logger.info('Using EXPRESS mode (24/7) - no holiday/weekend exclusions');
|
|
||||||
} else {
|
|
||||||
// STANDARD: Working hours, exclude holidays
|
|
||||||
const t1 = await addWorkingHours(now, hours * 0.55);
|
|
||||||
const t2 = await addWorkingHours(now, hours * 0.80);
|
|
||||||
const tBreach = await addWorkingHours(now, hours);
|
|
||||||
threshold1Time = t1.toDate();
|
|
||||||
threshold2Time = t2.toDate();
|
|
||||||
breachTime = tBreach.toDate();
|
|
||||||
logger.info('Using STANDARD mode - excludes holidays, weekends, non-working hours');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Example Scenarios
|
|
||||||
|
|
||||||
### **Scenario 1: STANDARD with Holiday**
|
|
||||||
|
|
||||||
```
|
|
||||||
Request Details:
|
|
||||||
- Priority: STANDARD
|
|
||||||
- TAT: 16 working hours
|
|
||||||
- Start: Monday 2:00 PM
|
|
||||||
- Holiday: Wednesday (Christmas)
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Monday 2:00 PM - 6:00 PM = 4 hours (12h remaining)
|
|
||||||
Tuesday 9:00 AM - 6:00 PM = 9 hours (3h remaining)
|
|
||||||
Wednesday (HOLIDAY) = SKIPPED ✅
|
|
||||||
Thursday 9:00 AM - 12:00 PM = 3 hours (0h remaining)
|
|
||||||
|
|
||||||
TAT Milestones:
|
|
||||||
- Threshold 1 (55%): Tuesday 4:40 PM (8.8 working hours)
|
|
||||||
- Threshold 2 (80%): Thursday 10:48 AM (12.8 working hours)
|
|
||||||
- Breach (100%): Thursday 12:00 PM (16 working hours)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Scenario 2: EXPRESS with Holiday**
|
|
||||||
|
|
||||||
```
|
|
||||||
Request Details:
|
|
||||||
- Priority: EXPRESS
|
|
||||||
- TAT: 16 hours
|
|
||||||
- Start: Monday 2:00 PM
|
|
||||||
- Holiday: Wednesday (Christmas) - IGNORED ✅
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Monday 2:00 PM + 16 hours = Tuesday 6:00 AM
|
|
||||||
|
|
||||||
TAT Milestones:
|
|
||||||
- Threshold 1 (55%): Monday 10:48 PM (8.8 hours)
|
|
||||||
- Threshold 2 (80%): Tuesday 2:48 AM (12.8 hours)
|
|
||||||
- Breach (100%): Tuesday 6:00 AM (16 hours)
|
|
||||||
|
|
||||||
Note: Even though Wednesday is a holiday, EXPRESS doesn't care ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Scenario 3: Multiple Holidays**
|
|
||||||
|
|
||||||
```
|
|
||||||
Request Details:
|
|
||||||
- Priority: STANDARD
|
|
||||||
- TAT: 40 working hours
|
|
||||||
- Start: Friday 10:00 AM
|
|
||||||
- Holidays: Monday (New Year), Tuesday (Day After)
|
|
||||||
|
|
||||||
Calculation:
|
|
||||||
Friday 10:00 AM - 6:00 PM = 8 hours (32h remaining)
|
|
||||||
Saturday-Sunday = SKIPPED (weekend)
|
|
||||||
Monday (HOLIDAY) = SKIPPED ✅
|
|
||||||
Tuesday (HOLIDAY) = SKIPPED ✅
|
|
||||||
Wednesday 9:00 AM - 6:00 PM = 9 hours (23h remaining)
|
|
||||||
Thursday 9:00 AM - 6:00 PM = 9 hours (14h remaining)
|
|
||||||
Friday 9:00 AM - 6:00 PM = 9 hours (5h remaining)
|
|
||||||
Monday 9:00 AM - 2:00 PM = 5 hours (0h remaining)
|
|
||||||
|
|
||||||
Final deadline: Next Monday 2:00 PM ✅
|
|
||||||
(Skipped 2 weekends + 2 holidays)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Holiday Cache Management
|
|
||||||
|
|
||||||
### **Cache Lifecycle:**
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Server Startup
|
|
||||||
→ initializeHolidaysCache() called
|
|
||||||
→ Holidays loaded into memory
|
|
||||||
|
|
||||||
2. Cache Valid for 6 Hours
|
|
||||||
→ holidaysCacheExpiry = now + 6 hours
|
|
||||||
→ Subsequent calls use cached data (fast)
|
|
||||||
|
|
||||||
3. Cache Expires After 6 Hours
|
|
||||||
→ Next TAT calculation reloads cache from DB
|
|
||||||
→ New cache expires in 6 hours
|
|
||||||
|
|
||||||
4. Manual Cache Refresh (Optional)
|
|
||||||
→ Admin adds/updates holiday
|
|
||||||
→ Call initializeHolidaysCache() to refresh immediately
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Cache Performance:**
|
|
||||||
|
|
||||||
```
|
|
||||||
Without Cache:
|
|
||||||
- Every TAT calculation → DB query → SLOW ❌
|
|
||||||
- 100 requests/hour → 100 DB queries
|
|
||||||
|
|
||||||
With Cache:
|
|
||||||
- Load once per 6 hours → DB query → FAST ✅
|
|
||||||
- 100 requests/hour → 0 DB queries (use cache)
|
|
||||||
- Cache refresh: Every 6 hours or on-demand
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Priority Detection in Services
|
|
||||||
|
|
||||||
### **Workflow Service (Submission):**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// When submitting workflow
|
|
||||||
const workflowPriority = (updated as any).priority || 'STANDARD';
|
|
||||||
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
levelId,
|
|
||||||
approverId,
|
|
||||||
tatHours,
|
|
||||||
now,
|
|
||||||
workflowPriority // ✅ Pass priority
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Approval Service (Next Level):**
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// When moving to next approval level
|
|
||||||
const workflowPriority = (wf as any)?.priority || 'STANDARD';
|
|
||||||
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
nextLevelId,
|
|
||||||
nextApproverId,
|
|
||||||
tatHours,
|
|
||||||
now,
|
|
||||||
workflowPriority // ✅ Pass priority
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Database Schema
|
|
||||||
|
|
||||||
### **Holidays Table:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE TABLE holidays (
|
|
||||||
holiday_id UUID PRIMARY KEY,
|
|
||||||
holiday_date DATE NOT NULL,
|
|
||||||
holiday_name VARCHAR(255) NOT NULL,
|
|
||||||
holiday_type VARCHAR(50),
|
|
||||||
description TEXT,
|
|
||||||
is_active BOOLEAN DEFAULT true,
|
|
||||||
created_at TIMESTAMP DEFAULT NOW(),
|
|
||||||
updated_at TIMESTAMP DEFAULT NOW()
|
|
||||||
);
|
|
||||||
|
|
||||||
-- Example data
|
|
||||||
INSERT INTO holidays (holiday_date, holiday_name, holiday_type)
|
|
||||||
VALUES
|
|
||||||
('2025-12-25', 'Christmas Day', 'PUBLIC_HOLIDAY'),
|
|
||||||
('2025-01-01', 'New Year''s Day', 'PUBLIC_HOLIDAY'),
|
|
||||||
('2025-07-04', 'Independence Day', 'PUBLIC_HOLIDAY');
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Workflow Request Priority:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- WorkflowRequest table already has priority field
|
|
||||||
SELECT request_id, priority, tat_hours
|
|
||||||
FROM workflow_requests
|
|
||||||
WHERE priority = 'EXPRESS'; -- 24/7 calculation
|
|
||||||
-- OR
|
|
||||||
WHERE priority = 'STANDARD'; -- Working hours + holiday exclusion
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Scenarios
|
|
||||||
|
|
||||||
### **Test 1: Add Holiday, Create STANDARD Request**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Add holiday for tomorrow
|
|
||||||
curl -X POST http://localhost:5000/api/v1/admin/holidays \
|
|
||||||
-H "Authorization: Bearer TOKEN" \
|
|
||||||
-d '{
|
|
||||||
"holidayDate": "2025-11-06",
|
|
||||||
"holidayName": "Test Holiday",
|
|
||||||
"holidayType": "PUBLIC_HOLIDAY"
|
|
||||||
}'
|
|
||||||
|
|
||||||
# 2. Create STANDARD request with 24h TAT
|
|
||||||
curl -X POST http://localhost:5000/api/v1/workflows \
|
|
||||||
-d '{
|
|
||||||
"priority": "STANDARD",
|
|
||||||
"tatHours": 24
|
|
||||||
}'
|
|
||||||
|
|
||||||
# 3. Check scheduled TAT jobs in logs
|
|
||||||
# → Should show deadline skipping the holiday ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 2: Same Holiday, EXPRESS Request**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Holiday still exists (tomorrow)
|
|
||||||
|
|
||||||
# 2. Create EXPRESS request with 24h TAT
|
|
||||||
curl -X POST http://localhost:5000/api/v1/workflows \
|
|
||||||
-d '{
|
|
||||||
"priority": "EXPRESS",
|
|
||||||
"tatHours": 24
|
|
||||||
}'
|
|
||||||
|
|
||||||
# 3. Check scheduled TAT jobs in logs
|
|
||||||
# → Should show deadline NOT skipping the holiday ✅
|
|
||||||
# → Exactly 24 hours from now (includes holiday)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 3: Verify Holiday Exclusion**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Create request on Friday afternoon
|
|
||||||
# With 16 working hours TAT
|
|
||||||
# Should skip weekend and land on Monday/Tuesday
|
|
||||||
|
|
||||||
# If Monday is a holiday:
|
|
||||||
# → STANDARD: Should land on Tuesday ✅
|
|
||||||
# → EXPRESS: Should land on Sunday ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Logging Examples
|
|
||||||
|
|
||||||
### **STANDARD Mode Log:**
|
|
||||||
|
|
||||||
```
|
|
||||||
[TAT Scheduler] Using STANDARD mode - excludes holidays, weekends, non-working hours
|
|
||||||
[TAT Scheduler] Calculating TAT milestones for request REQ-123, level LEVEL-456
|
|
||||||
[TAT Scheduler] Priority: STANDARD, TAT Hours: 16
|
|
||||||
[TAT Scheduler] Start: 2025-11-05 14:00
|
|
||||||
[TAT Scheduler] Threshold 1 (55%): 2025-11-07 11:48 (skipped 1 holiday)
|
|
||||||
[TAT Scheduler] Threshold 2 (80%): 2025-11-08 09:48
|
|
||||||
[TAT Scheduler] Breach (100%): 2025-11-08 14:00
|
|
||||||
```
|
|
||||||
|
|
||||||
### **EXPRESS Mode Log:**
|
|
||||||
|
|
||||||
```
|
|
||||||
[TAT Scheduler] Using EXPRESS mode (24/7) - no holiday/weekend exclusions
|
|
||||||
[TAT Scheduler] Calculating TAT milestones for request REQ-456, level LEVEL-789
|
|
||||||
[TAT Scheduler] Priority: EXPRESS, TAT Hours: 16
|
|
||||||
[TAT Scheduler] Start: 2025-11-05 14:00
|
|
||||||
[TAT Scheduler] Threshold 1 (55%): 2025-11-05 22:48 (8.8 hours)
|
|
||||||
[TAT Scheduler] Threshold 2 (80%): 2025-11-06 02:48 (12.8 hours)
|
|
||||||
[TAT Scheduler] Breach (100%): 2025-11-06 06:00 (16 hours)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Summary
|
|
||||||
|
|
||||||
### **What Changed:**
|
|
||||||
|
|
||||||
1. ✅ Added `addCalendarHours()` for EXPRESS mode (24/7 calculation)
|
|
||||||
2. ✅ Updated `addWorkingHours()` to check holidays from admin settings
|
|
||||||
3. ✅ Added `priority` parameter to `scheduleTatJobs()`
|
|
||||||
4. ✅ Updated workflow/approval services to pass priority
|
|
||||||
5. ✅ Holiday cache for performance (6-hour expiry)
|
|
||||||
|
|
||||||
### **How Holidays Are Used:**
|
|
||||||
|
|
||||||
| Priority | Calculation Method | Holidays | Weekends | Non-Working Hours |
|
|
||||||
|----------|-------------------|----------|----------|-------------------|
|
|
||||||
| **STANDARD** | Working hours only | ✅ Excluded | ✅ Excluded | ✅ Excluded |
|
|
||||||
| **EXPRESS** | 24/7 calendar hours | ❌ Counted | ❌ Counted | ❌ Counted |
|
|
||||||
|
|
||||||
### **Benefits:**
|
|
||||||
|
|
||||||
1. ✅ **Accurate TAT for STANDARD** - Respects holidays, no false breaches
|
|
||||||
2. ✅ **Fast EXPRESS** - True 24/7 calculation for urgent requests
|
|
||||||
3. ✅ **Centralized Holiday Management** - Admin can add/edit holidays
|
|
||||||
4. ✅ **Performance** - Holiday cache prevents repeated DB queries
|
|
||||||
5. ✅ **Flexible** - Priority can be changed per request
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
|
|
||||||
1. `Re_Backend/src/utils/tatTimeUtils.ts` - Added `addCalendarHours()` for EXPRESS mode
|
|
||||||
2. `Re_Backend/src/services/tatScheduler.service.ts` - Added priority parameter and logic
|
|
||||||
3. `Re_Backend/src/services/workflow.service.ts` - Pass priority when scheduling TAT
|
|
||||||
4. `Re_Backend/src/services/approval.service.ts` - Pass priority for next level TAT
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configuration Keys
|
|
||||||
|
|
||||||
| Config Key | Default | Description |
|
|
||||||
|------------|---------|-------------|
|
|
||||||
| `WORK_START_HOUR` | 9 | Working hours start (STANDARD mode only) |
|
|
||||||
| `WORK_END_HOUR` | 18 | Working hours end (STANDARD mode only) |
|
|
||||||
| `WORK_START_DAY` | 1 | Monday (STANDARD mode only) |
|
|
||||||
| `WORK_END_DAY` | 5 | Friday (STANDARD mode only) |
|
|
||||||
|
|
||||||
**Note:** EXPRESS mode ignores all these configurations and uses 24/7 calculation.
|
|
||||||
|
|
||||||
@ -1,307 +0,0 @@
|
|||||||
# ✅ KPI & TAT Reporting System - Setup Complete!
|
|
||||||
|
|
||||||
## 🎉 What's Been Implemented
|
|
||||||
|
|
||||||
### 1. TAT Alerts Table (`tat_alerts`)
|
|
||||||
|
|
||||||
**Purpose**: Store every TAT notification (50%, 75%, 100%) for display and KPI analysis
|
|
||||||
|
|
||||||
**Features**:
|
|
||||||
- ✅ Records all TAT notifications sent
|
|
||||||
- ✅ Tracks timing, completion status, and compliance
|
|
||||||
- ✅ Stores metadata for rich reporting
|
|
||||||
- ✅ Displays like the shared image: "Reminder 1: 50% of SLA breach reminder have been sent"
|
|
||||||
|
|
||||||
**Example Query**:
|
|
||||||
```sql
|
|
||||||
-- Get TAT alerts for a specific request (for UI display)
|
|
||||||
SELECT
|
|
||||||
alert_type,
|
|
||||||
threshold_percentage,
|
|
||||||
alert_sent_at,
|
|
||||||
alert_message
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE request_id = 'YOUR_REQUEST_ID'
|
|
||||||
ORDER BY alert_sent_at ASC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 2. Eight KPI Views Created
|
|
||||||
|
|
||||||
All views are ready to use for reporting and dashboards:
|
|
||||||
|
|
||||||
| View Name | Purpose | KPI Category |
|
|
||||||
|-----------|---------|--------------|
|
|
||||||
| `vw_request_volume_summary` | Request counts, status, cycle times | Volume & Status |
|
|
||||||
| `vw_tat_compliance` | TAT compliance tracking | TAT Efficiency |
|
|
||||||
| `vw_approver_performance` | Approver metrics, response times | Approver Load |
|
|
||||||
| `vw_tat_alerts_summary` | TAT alerts with response times | TAT Efficiency |
|
|
||||||
| `vw_department_summary` | Department-wise statistics | Volume & Status |
|
|
||||||
| `vw_daily_kpi_metrics` | Daily trends and metrics | Trends |
|
|
||||||
| `vw_workflow_aging` | Aging analysis | Volume & Status |
|
|
||||||
| `vw_engagement_metrics` | Comments, documents, collaboration | Engagement & Quality |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 3. Complete KPI Coverage
|
|
||||||
|
|
||||||
All KPIs from your requirements are now supported:
|
|
||||||
|
|
||||||
#### ✅ Request Volume & Status
|
|
||||||
- Total Requests Created
|
|
||||||
- Open Requests (with age)
|
|
||||||
- Approved Requests
|
|
||||||
- Rejected Requests
|
|
||||||
|
|
||||||
#### ✅ TAT Efficiency
|
|
||||||
- Average TAT Compliance %
|
|
||||||
- Avg Approval Cycle Time
|
|
||||||
- Delayed Workflows
|
|
||||||
- TAT Breach History
|
|
||||||
|
|
||||||
#### ✅ Approver Load
|
|
||||||
- Pending Actions (My Queue)
|
|
||||||
- Approvals Completed (Today/Week)
|
|
||||||
- Approver Performance Metrics
|
|
||||||
|
|
||||||
#### ✅ Engagement & Quality
|
|
||||||
- Comments/Work Notes Added
|
|
||||||
- Attachments Uploaded
|
|
||||||
- Spectator Participation
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Example Queries
|
|
||||||
|
|
||||||
### Show TAT Reminders (Like Your Image)
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- For displaying TAT alerts in Request Detail screen
|
|
||||||
SELECT
|
|
||||||
CASE
|
|
||||||
WHEN alert_type = 'TAT_50' THEN '⏳ 50% of SLA breach reminder have been sent'
|
|
||||||
WHEN alert_type = 'TAT_75' THEN '⚠️ 75% of SLA breach reminder have been sent'
|
|
||||||
WHEN alert_type = 'TAT_100' THEN '⏰ TAT breached - Immediate action required'
|
|
||||||
END as reminder_text,
|
|
||||||
'Reminder sent by system automatically' as description,
|
|
||||||
alert_sent_at
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE request_id = 'REQUEST_ID'
|
|
||||||
AND level_id = 'LEVEL_ID'
|
|
||||||
ORDER BY threshold_percentage ASC;
|
|
||||||
```
|
|
||||||
|
|
||||||
### TAT Compliance Rate
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
ROUND(
|
|
||||||
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(CASE WHEN completed_within_tat IS NOT NULL THEN 1 END), 0),
|
|
||||||
2
|
|
||||||
) as compliance_percentage
|
|
||||||
FROM vw_tat_compliance;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Approver Performance Leaderboard
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
approver_name,
|
|
||||||
department,
|
|
||||||
ROUND(tat_compliance_percentage, 2) as compliance_percent,
|
|
||||||
approved_count,
|
|
||||||
ROUND(avg_response_time_hours, 2) as avg_response_hours,
|
|
||||||
breaches_count
|
|
||||||
FROM vw_approver_performance
|
|
||||||
WHERE total_assignments > 0
|
|
||||||
ORDER BY tat_compliance_percentage DESC
|
|
||||||
LIMIT 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Department Comparison
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
department,
|
|
||||||
total_requests,
|
|
||||||
approved_requests,
|
|
||||||
ROUND(approved_requests * 100.0 / NULLIF(total_requests, 0), 2) as approval_rate,
|
|
||||||
ROUND(avg_cycle_time_hours / 24, 2) as avg_cycle_days
|
|
||||||
FROM vw_department_summary
|
|
||||||
WHERE department IS NOT NULL
|
|
||||||
ORDER BY total_requests DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 How TAT Alerts Work
|
|
||||||
|
|
||||||
### 1. When Request is Submitted
|
|
||||||
|
|
||||||
```
|
|
||||||
✅ TAT monitoring starts for Level 1
|
|
||||||
✅ Jobs scheduled: 50%, 75%, 100%
|
|
||||||
✅ level_start_time and tat_start_time set
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. When Notification Fires
|
|
||||||
|
|
||||||
```
|
|
||||||
✅ Notification sent to approver
|
|
||||||
✅ Record created in tat_alerts table
|
|
||||||
✅ Activity logged
|
|
||||||
✅ Flags updated in approval_levels
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Display in UI
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// Frontend can fetch and display like:
|
|
||||||
const alerts = await getTATAlerts(requestId, levelId);
|
|
||||||
|
|
||||||
alerts.forEach(alert => {
|
|
||||||
console.log(`Reminder ${alert.threshold_percentage}%: ${alert.alert_message}`);
|
|
||||||
console.log(`Sent at: ${formatDate(alert.alert_sent_at)}`);
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 Analytical Reports Supported
|
|
||||||
|
|
||||||
1. **Request Lifecycle Report** - Complete timeline with TAT
|
|
||||||
2. **Approver Performance Report** - Leaderboard & metrics
|
|
||||||
3. **Department-wise Summary** - Cross-department comparison
|
|
||||||
4. **TAT Breach Report** - All breached requests with reasons
|
|
||||||
5. **Priority Distribution** - Express vs Standard analysis
|
|
||||||
6. **Workflow Aging** - Long-running requests
|
|
||||||
7. **Daily/Weekly Trends** - Time-series analysis
|
|
||||||
8. **Engagement Metrics** - Collaboration tracking
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Next Steps
|
|
||||||
|
|
||||||
### 1. Setup Upstash Redis (REQUIRED)
|
|
||||||
|
|
||||||
TAT notifications need Redis to work:
|
|
||||||
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Create free Redis database
|
|
||||||
3. Copy connection URL
|
|
||||||
4. Add to `.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:PASSWORD@host.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
5. Restart backend
|
|
||||||
|
|
||||||
See: `START_HERE.md` or `TAT_QUICK_START.md`
|
|
||||||
|
|
||||||
### 2. Test TAT Notifications
|
|
||||||
|
|
||||||
1. Create request with 6-hour TAT (becomes 6 minutes in test mode)
|
|
||||||
2. Submit request
|
|
||||||
3. Wait for notifications: 3min, 4.5min, 6min
|
|
||||||
4. Check `tat_alerts` table
|
|
||||||
5. Verify display in Request Detail screen
|
|
||||||
|
|
||||||
### 3. Build Frontend Reports
|
|
||||||
|
|
||||||
Use the KPI views to build:
|
|
||||||
- Dashboard cards
|
|
||||||
- Charts (pie, bar, line)
|
|
||||||
- Tables with filters
|
|
||||||
- Export to CSV
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Documentation
|
|
||||||
|
|
||||||
| Document | Purpose |
|
|
||||||
|----------|---------|
|
|
||||||
| `docs/KPI_REPORTING_SYSTEM.md` | Complete KPI guide with all queries |
|
|
||||||
| `docs/TAT_NOTIFICATION_SYSTEM.md` | TAT system architecture |
|
|
||||||
| `TAT_QUICK_START.md` | Quick setup for TAT |
|
|
||||||
| `START_HERE.md` | Start here for TAT setup |
|
|
||||||
| `backend_structure.txt` | Database schema reference |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Database Schema Summary
|
|
||||||
|
|
||||||
```
|
|
||||||
tat_alerts (NEW)
|
|
||||||
├─ alert_id (PK)
|
|
||||||
├─ request_id (FK → workflow_requests)
|
|
||||||
├─ level_id (FK → approval_levels)
|
|
||||||
├─ approver_id (FK → users)
|
|
||||||
├─ alert_type (TAT_50, TAT_75, TAT_100)
|
|
||||||
├─ threshold_percentage (50, 75, 100)
|
|
||||||
├─ tat_hours_allocated
|
|
||||||
├─ tat_hours_elapsed
|
|
||||||
├─ tat_hours_remaining
|
|
||||||
├─ level_start_time
|
|
||||||
├─ alert_sent_at
|
|
||||||
├─ expected_completion_time
|
|
||||||
├─ alert_message
|
|
||||||
├─ notification_sent
|
|
||||||
├─ notification_channels (array)
|
|
||||||
├─ is_breached
|
|
||||||
├─ was_completed_on_time
|
|
||||||
├─ completion_time
|
|
||||||
├─ metadata (JSONB)
|
|
||||||
└─ created_at
|
|
||||||
|
|
||||||
approval_levels (UPDATED)
|
|
||||||
├─ ... existing fields ...
|
|
||||||
├─ tat50_alert_sent (NEW)
|
|
||||||
├─ tat75_alert_sent (NEW)
|
|
||||||
├─ tat_breached (NEW)
|
|
||||||
└─ tat_start_time (NEW)
|
|
||||||
|
|
||||||
8 Views Created:
|
|
||||||
├─ vw_request_volume_summary
|
|
||||||
├─ vw_tat_compliance
|
|
||||||
├─ vw_approver_performance
|
|
||||||
├─ vw_tat_alerts_summary
|
|
||||||
├─ vw_department_summary
|
|
||||||
├─ vw_daily_kpi_metrics
|
|
||||||
├─ vw_workflow_aging
|
|
||||||
└─ vw_engagement_metrics
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Implementation Checklist
|
|
||||||
|
|
||||||
- [x] Create `tat_alerts` table
|
|
||||||
- [x] Add TAT status fields to `approval_levels`
|
|
||||||
- [x] Create 8 KPI views for reporting
|
|
||||||
- [x] Update TAT processor to log alerts
|
|
||||||
- [x] Export `TatAlert` model
|
|
||||||
- [x] Run all migrations successfully
|
|
||||||
- [x] Create comprehensive documentation
|
|
||||||
- [ ] Setup Upstash Redis (YOU DO THIS)
|
|
||||||
- [ ] Test TAT notifications (YOU DO THIS)
|
|
||||||
- [ ] Build frontend KPI dashboards (YOU DO THIS)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 Status: READY TO USE!
|
|
||||||
|
|
||||||
- ✅ Database schema complete
|
|
||||||
- ✅ TAT alerts logging ready
|
|
||||||
- ✅ KPI views optimized
|
|
||||||
- ✅ All migrations applied
|
|
||||||
- ✅ Documentation complete
|
|
||||||
|
|
||||||
**Just connect Redis and you're good to go!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Team**: Royal Enfield Workflow System
|
|
||||||
|
|
||||||
@ -1,120 +0,0 @@
|
|||||||
# 🚀 Migration Quick Reference
|
|
||||||
|
|
||||||
## Daily Development Workflow
|
|
||||||
|
|
||||||
### Starting Development (Auto-runs Migrations)
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
✅ **This will automatically run all new migrations before starting the server!**
|
|
||||||
|
|
||||||
### Run Migrations Only
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
## Adding a New Migration (3 Steps)
|
|
||||||
|
|
||||||
### 1️⃣ Create Migration File
|
|
||||||
Location: `src/migrations/YYYYMMDD-description.ts`
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
import { QueryInterface, DataTypes } from 'sequelize';
|
|
||||||
|
|
||||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
await queryInterface.addColumn('table_name', 'column_name', {
|
|
||||||
type: DataTypes.STRING,
|
|
||||||
allowNull: true,
|
|
||||||
});
|
|
||||||
console.log('✅ Migration completed');
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
await queryInterface.removeColumn('table_name', 'column_name');
|
|
||||||
console.log('✅ Rollback completed');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2️⃣ Register in `src/scripts/migrate.ts`
|
|
||||||
```typescript
|
|
||||||
// Add import at top
|
|
||||||
import * as m15 from '../migrations/YYYYMMDD-description';
|
|
||||||
|
|
||||||
// Add execution in run() function
|
|
||||||
await (m15 as any).up(sequelize.getQueryInterface());
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3️⃣ Test
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
## Common Operations
|
|
||||||
|
|
||||||
### Add Column
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addColumn('table', 'column', {
|
|
||||||
type: DataTypes.STRING(100),
|
|
||||||
allowNull: false,
|
|
||||||
defaultValue: 'value'
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Add Foreign Key
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addColumn('table', 'foreign_id', {
|
|
||||||
type: DataTypes.UUID,
|
|
||||||
references: { model: 'other_table', key: 'id' },
|
|
||||||
onUpdate: 'CASCADE',
|
|
||||||
onDelete: 'SET NULL'
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Add Index
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addIndex('table', ['column'], {
|
|
||||||
name: 'idx_table_column'
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Create Table
|
|
||||||
```typescript
|
|
||||||
await queryInterface.createTable('new_table', {
|
|
||||||
id: {
|
|
||||||
type: DataTypes.UUID,
|
|
||||||
defaultValue: DataTypes.UUIDV4,
|
|
||||||
primaryKey: true
|
|
||||||
},
|
|
||||||
name: DataTypes.STRING(100),
|
|
||||||
created_at: DataTypes.DATE,
|
|
||||||
updated_at: DataTypes.DATE
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## What's New ✨
|
|
||||||
|
|
||||||
### Latest Migration: Skip Approver Functionality
|
|
||||||
- **File**: `20251105-add-skip-fields-to-approval-levels.ts`
|
|
||||||
- **Added Fields**:
|
|
||||||
- `is_skipped` - Boolean flag
|
|
||||||
- `skipped_at` - Timestamp
|
|
||||||
- `skipped_by` - User reference
|
|
||||||
- `skip_reason` - Text explanation
|
|
||||||
- **Index**: Partial index on `is_skipped = TRUE`
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
| Issue | Solution |
|
|
||||||
|-------|----------|
|
|
||||||
| Migration fails | Check console error, fix migration file, re-run |
|
|
||||||
| Column exists error | Migration partially ran - add idempotent checks |
|
|
||||||
| Server won't start | Fix migration first, it blocks startup |
|
|
||||||
|
|
||||||
## 📚 Full Documentation
|
|
||||||
See `MIGRATION_WORKFLOW.md` for comprehensive guide.
|
|
||||||
|
|
||||||
---
|
|
||||||
**Auto-Migration**: ✅ Enabled
|
|
||||||
**Total Migrations**: 14
|
|
||||||
**Latest**: 2025-11-05
|
|
||||||
|
|
||||||
@ -1,284 +0,0 @@
|
|||||||
# Migration Workflow Guide
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
This project uses a TypeScript-based migration system for database schema changes. All migrations are automatically executed when you start the development server.
|
|
||||||
|
|
||||||
## 🚀 Quick Start
|
|
||||||
|
|
||||||
### Running Development Server with Migrations
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
This command will:
|
|
||||||
1. ✅ Run all pending migrations automatically
|
|
||||||
2. 🚀 Start the development server with hot reload
|
|
||||||
|
|
||||||
### Running Migrations Only
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
Use this when you only want to apply migrations without starting the server.
|
|
||||||
|
|
||||||
## 📝 Creating New Migrations
|
|
||||||
|
|
||||||
### Step 1: Create Migration File
|
|
||||||
Create a new TypeScript file in `src/migrations/` with the naming pattern:
|
|
||||||
```
|
|
||||||
YYYYMMDD-descriptive-name.ts
|
|
||||||
```
|
|
||||||
|
|
||||||
Example: `20251105-add-new-field.ts`
|
|
||||||
|
|
||||||
### Step 2: Migration Template
|
|
||||||
```typescript
|
|
||||||
import { QueryInterface, DataTypes } from 'sequelize';
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Migration: Brief description
|
|
||||||
* Purpose: Detailed explanation
|
|
||||||
* Date: YYYY-MM-DD
|
|
||||||
*/
|
|
||||||
|
|
||||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// Add your forward migration logic here
|
|
||||||
await queryInterface.addColumn('table_name', 'column_name', {
|
|
||||||
type: DataTypes.STRING,
|
|
||||||
allowNull: true,
|
|
||||||
});
|
|
||||||
|
|
||||||
console.log('✅ Migration description completed');
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// Add your rollback logic here
|
|
||||||
await queryInterface.removeColumn('table_name', 'column_name');
|
|
||||||
|
|
||||||
console.log('✅ Migration rolled back');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Register Migration
|
|
||||||
Add your new migration to `src/scripts/migrate.ts`:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// 1. Import at the top
|
|
||||||
import * as m15 from '../migrations/20251105-add-new-field';
|
|
||||||
|
|
||||||
// 2. Execute in the run() function
|
|
||||||
await (m15 as any).up(sequelize.getQueryInterface());
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Test
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
## 📋 Current Migrations
|
|
||||||
|
|
||||||
The following migrations are configured and will run in order:
|
|
||||||
|
|
||||||
1. `2025103001-create-workflow-requests` - Core workflow requests table
|
|
||||||
2. `2025103002-create-approval-levels` - Approval hierarchy structure
|
|
||||||
3. `2025103003-create-participants` - Workflow participants
|
|
||||||
4. `2025103004-create-documents` - Document attachments
|
|
||||||
5. `20251031_01_create_subscriptions` - User subscriptions
|
|
||||||
6. `20251031_02_create_activities` - Activity tracking
|
|
||||||
7. `20251031_03_create_work_notes` - Work notes/comments
|
|
||||||
8. `20251031_04_create_work_note_attachments` - Note attachments
|
|
||||||
9. `20251104-add-tat-alert-fields` - TAT alert fields
|
|
||||||
10. `20251104-create-tat-alerts` - TAT alerts table
|
|
||||||
11. `20251104-create-kpi-views` - KPI database views
|
|
||||||
12. `20251104-create-holidays` - Holiday calendar
|
|
||||||
13. `20251104-create-admin-config` - Admin configurations
|
|
||||||
14. `20251105-add-skip-fields-to-approval-levels` - Skip approver functionality
|
|
||||||
|
|
||||||
## 🔄 Migration Safety Features
|
|
||||||
|
|
||||||
### Idempotent Migrations
|
|
||||||
All migrations should be **idempotent** (safe to run multiple times). Use checks like:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Check if column exists before adding
|
|
||||||
const tableDescription = await queryInterface.describeTable('table_name');
|
|
||||||
if (!tableDescription.column_name) {
|
|
||||||
await queryInterface.addColumn(/* ... */);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if table exists before creating
|
|
||||||
const tables = await queryInterface.showAllTables();
|
|
||||||
if (!tables.includes('table_name')) {
|
|
||||||
await queryInterface.createTable(/* ... */);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Error Handling
|
|
||||||
Migrations automatically:
|
|
||||||
- ✅ Stop on first error
|
|
||||||
- ❌ Exit with error code 1 on failure
|
|
||||||
- 📝 Log detailed error messages
|
|
||||||
- 🔄 Prevent server startup if migrations fail
|
|
||||||
|
|
||||||
## 🛠️ Common Migration Operations
|
|
||||||
|
|
||||||
### Adding a Column
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addColumn('table_name', 'new_column', {
|
|
||||||
type: DataTypes.STRING(100),
|
|
||||||
allowNull: false,
|
|
||||||
defaultValue: 'default_value',
|
|
||||||
comment: 'Column description'
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Adding Foreign Key
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addColumn('table_name', 'foreign_key_id', {
|
|
||||||
type: DataTypes.UUID,
|
|
||||||
allowNull: true,
|
|
||||||
references: {
|
|
||||||
model: 'referenced_table',
|
|
||||||
key: 'id'
|
|
||||||
},
|
|
||||||
onUpdate: 'CASCADE',
|
|
||||||
onDelete: 'SET NULL'
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Creating Index
|
|
||||||
```typescript
|
|
||||||
await queryInterface.addIndex('table_name', ['column_name'], {
|
|
||||||
name: 'idx_table_column',
|
|
||||||
unique: false
|
|
||||||
});
|
|
||||||
|
|
||||||
// Partial index with WHERE clause
|
|
||||||
await queryInterface.addIndex('table_name', ['status'], {
|
|
||||||
name: 'idx_table_active',
|
|
||||||
where: {
|
|
||||||
is_active: true
|
|
||||||
}
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Creating Table
|
|
||||||
```typescript
|
|
||||||
await queryInterface.createTable('new_table', {
|
|
||||||
id: {
|
|
||||||
type: DataTypes.UUID,
|
|
||||||
defaultValue: DataTypes.UUIDV4,
|
|
||||||
primaryKey: true
|
|
||||||
},
|
|
||||||
name: {
|
|
||||||
type: DataTypes.STRING(100),
|
|
||||||
allowNull: false
|
|
||||||
},
|
|
||||||
created_at: {
|
|
||||||
type: DataTypes.DATE,
|
|
||||||
allowNull: false,
|
|
||||||
defaultValue: DataTypes.NOW
|
|
||||||
},
|
|
||||||
updated_at: {
|
|
||||||
type: DataTypes.DATE,
|
|
||||||
allowNull: false,
|
|
||||||
defaultValue: DataTypes.NOW
|
|
||||||
}
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Modifying Column
|
|
||||||
```typescript
|
|
||||||
await queryInterface.changeColumn('table_name', 'column_name', {
|
|
||||||
type: DataTypes.STRING(200), // Changed from 100
|
|
||||||
allowNull: true // Changed from false
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### Dropping Column
|
|
||||||
```typescript
|
|
||||||
await queryInterface.removeColumn('table_name', 'old_column');
|
|
||||||
```
|
|
||||||
|
|
||||||
### Raw SQL Queries
|
|
||||||
```typescript
|
|
||||||
await queryInterface.sequelize.query(`
|
|
||||||
CREATE OR REPLACE VIEW view_name AS
|
|
||||||
SELECT * FROM table_name WHERE condition
|
|
||||||
`);
|
|
||||||
```
|
|
||||||
|
|
||||||
## 📊 Database Structure Reference
|
|
||||||
|
|
||||||
Always refer to `backend_structure.txt` for the authoritative database structure including:
|
|
||||||
- All tables and their columns
|
|
||||||
- Data types and constraints
|
|
||||||
- Relationships and foreign keys
|
|
||||||
- Enum values
|
|
||||||
- Indexes
|
|
||||||
|
|
||||||
## 🚨 Troubleshooting
|
|
||||||
|
|
||||||
### Migration Fails with "Column Already Exists"
|
|
||||||
- The migration might have partially run
|
|
||||||
- Add idempotent checks or manually rollback the failed migration
|
|
||||||
|
|
||||||
### Server Won't Start After Migration
|
|
||||||
- Check the migration error in console
|
|
||||||
- Fix the migration file
|
|
||||||
- Run `npm run migrate` to retry
|
|
||||||
|
|
||||||
### Need to Rollback a Migration
|
|
||||||
```bash
|
|
||||||
# Manual rollback (requires implementing down() function)
|
|
||||||
ts-node src/scripts/rollback.ts
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🎯 Best Practices
|
|
||||||
|
|
||||||
1. **Always test migrations** on development database first
|
|
||||||
2. **Write rollback logic** in `down()` function
|
|
||||||
3. **Use descriptive names** for migrations
|
|
||||||
4. **Add comments** explaining the purpose
|
|
||||||
5. **Keep migrations small** - one logical change per file
|
|
||||||
6. **Never modify** existing migration files after they run in production
|
|
||||||
7. **Use transactions** for complex multi-step migrations
|
|
||||||
8. **Backup production** before running new migrations
|
|
||||||
|
|
||||||
## 📝 Migration Checklist
|
|
||||||
|
|
||||||
Before running migrations in production:
|
|
||||||
|
|
||||||
- [ ] Tested on local development database
|
|
||||||
- [ ] Verified rollback functionality works
|
|
||||||
- [ ] Checked for data loss scenarios
|
|
||||||
- [ ] Reviewed index impact on performance
|
|
||||||
- [ ] Confirmed migration is idempotent
|
|
||||||
- [ ] Updated `backend_structure.txt` documentation
|
|
||||||
- [ ] Added migration to version control
|
|
||||||
- [ ] Registered in `migrate.ts`
|
|
||||||
|
|
||||||
## 🔗 Related Files
|
|
||||||
|
|
||||||
- **Migration Scripts**: `src/migrations/`
|
|
||||||
- **Migration Runner**: `src/scripts/migrate.ts`
|
|
||||||
- **Database Config**: `src/config/database.ts`
|
|
||||||
- **Database Structure**: `backend_structure.txt`
|
|
||||||
- **Package Scripts**: `package.json`
|
|
||||||
|
|
||||||
## 💡 Example: Recent Migration
|
|
||||||
|
|
||||||
The latest migration (`20251105-add-skip-fields-to-approval-levels`) demonstrates best practices:
|
|
||||||
|
|
||||||
- ✅ Descriptive naming
|
|
||||||
- ✅ Clear documentation
|
|
||||||
- ✅ Multiple related columns added together
|
|
||||||
- ✅ Foreign key relationships
|
|
||||||
- ✅ Indexed for query performance
|
|
||||||
- ✅ Includes rollback logic
|
|
||||||
- ✅ Helpful console messages
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 5, 2025
|
|
||||||
**Migration Count**: 14 migrations
|
|
||||||
**Auto-Run**: Enabled for `npm run dev`
|
|
||||||
|
|
||||||
@ -1,220 +0,0 @@
|
|||||||
# Quick Fix: Settings Not Editable Issue
|
|
||||||
|
|
||||||
## 🔴 Problem
|
|
||||||
Settings showing as "not editable" in the frontend.
|
|
||||||
|
|
||||||
## 🎯 Root Cause
|
|
||||||
**Field Mapping Issue:** Database uses `is_editable` (snake_case) but frontend expects `isEditable` (camelCase).
|
|
||||||
|
|
||||||
## ✅ Solution Applied
|
|
||||||
|
|
||||||
### **1. Fixed Admin Controller** ✅
|
|
||||||
Added field mapping from snake_case to camelCase:
|
|
||||||
```typescript
|
|
||||||
// Re_Backend/src/controllers/admin.controller.ts
|
|
||||||
const configurations = rawConfigurations.map(config => ({
|
|
||||||
configId: config.config_id, // ✅ Mapped
|
|
||||||
isEditable: config.is_editable, // ✅ Mapped
|
|
||||||
isSensitive: config.is_sensitive, // ✅ Mapped
|
|
||||||
requiresRestart: config.requires_restart, // ✅ Mapped
|
|
||||||
// ... all other fields
|
|
||||||
}));
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Database Fix Required**
|
|
||||||
|
|
||||||
**Option A: Delete and Re-seed** (Recommended if no custom configs)
|
|
||||||
```sql
|
|
||||||
-- Connect to your database
|
|
||||||
DELETE FROM admin_configurations;
|
|
||||||
|
|
||||||
-- Restart backend - auto-seeding will run
|
|
||||||
-- Check logs for: "✅ Default configurations seeded (18 settings)"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Option B: Fix Existing Records** (If you have custom values)
|
|
||||||
```sql
|
|
||||||
-- Update existing records to add missing fields
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET
|
|
||||||
is_sensitive = COALESCE(is_sensitive, false),
|
|
||||||
requires_restart = COALESCE(requires_restart, false),
|
|
||||||
is_editable = COALESCE(is_editable, true)
|
|
||||||
WHERE is_sensitive IS NULL
|
|
||||||
OR requires_restart IS NULL
|
|
||||||
OR is_editable IS NULL;
|
|
||||||
|
|
||||||
-- Set requires_restart = true for settings that need it
|
|
||||||
UPDATE admin_configurations
|
|
||||||
SET requires_restart = true
|
|
||||||
WHERE config_key IN (
|
|
||||||
'WORK_START_HOUR',
|
|
||||||
'WORK_END_HOUR',
|
|
||||||
'MAX_FILE_SIZE_MB',
|
|
||||||
'ALLOWED_FILE_TYPES'
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Step-by-Step Fix
|
|
||||||
|
|
||||||
### **Step 1: Stop Backend**
|
|
||||||
```bash
|
|
||||||
# Press Ctrl+C to stop the server
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Clear Configurations** (if any exist)
|
|
||||||
```sql
|
|
||||||
-- Connect to PostgreSQL
|
|
||||||
psql -U postgres -d re_workflow
|
|
||||||
|
|
||||||
-- Check if configurations exist
|
|
||||||
SELECT COUNT(*) FROM admin_configurations;
|
|
||||||
|
|
||||||
-- If count > 0, delete them
|
|
||||||
DELETE FROM admin_configurations;
|
|
||||||
|
|
||||||
-- Verify
|
|
||||||
SELECT COUNT(*) FROM admin_configurations;
|
|
||||||
-- Should show: 0
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 3: Restart Backend** (Auto-seeds)
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 4: Verify Seeding in Logs**
|
|
||||||
Look for:
|
|
||||||
```
|
|
||||||
⚙️ System configurations initialized
|
|
||||||
✅ Default configurations seeded successfully (18 settings across 7 categories)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 5: Test in Frontend**
|
|
||||||
1. Login as Admin user
|
|
||||||
2. Go to **Settings → System Configuration**
|
|
||||||
3. You should see **7 category tabs**
|
|
||||||
4. Click any tab (e.g., "TAT SETTINGS")
|
|
||||||
5. All settings should now have:
|
|
||||||
- ✅ Editable input fields
|
|
||||||
- ✅ **Save** button enabled
|
|
||||||
- ✅ **Reset to Default** button
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Verify Configuration Loaded Correctly
|
|
||||||
|
|
||||||
**Test API Endpoint:**
|
|
||||||
```bash
|
|
||||||
# Get all configurations
|
|
||||||
curl http://localhost:5000/api/v1/admin/configurations \
|
|
||||||
-H "Authorization: Bearer YOUR_JWT_TOKEN"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"data": [
|
|
||||||
{
|
|
||||||
"configId": "uuid...",
|
|
||||||
"configKey": "DEFAULT_TAT_EXPRESS_HOURS",
|
|
||||||
"configCategory": "TAT_SETTINGS",
|
|
||||||
"configValue": "24",
|
|
||||||
"valueType": "NUMBER",
|
|
||||||
"displayName": "Default TAT for Express Priority",
|
|
||||||
"isEditable": true, // ✅ Should be true
|
|
||||||
"isSensitive": false,
|
|
||||||
"validationRules": {"min": 1, "max": 168},
|
|
||||||
"uiComponent": "number",
|
|
||||||
"sortOrder": 1,
|
|
||||||
"requiresRestart": false
|
|
||||||
},
|
|
||||||
// ... 17 more configurations
|
|
||||||
],
|
|
||||||
"count": 18
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Check the `isEditable` field - should be `true` for all!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🐛 Common Issues & Solutions
|
|
||||||
|
|
||||||
### Issue 1: "Configurations already exist. Skipping seed."
|
|
||||||
**Cause:** Old configurations in database
|
|
||||||
**Fix:** Delete them and restart backend
|
|
||||||
|
|
||||||
### Issue 2: Settings show as gray/disabled
|
|
||||||
**Cause:** `is_editable = false` in database
|
|
||||||
**Fix:** Run SQL update to set all to `true`
|
|
||||||
|
|
||||||
### Issue 3: "Configuration not found or not editable" error when saving
|
|
||||||
**Cause:** Backend can't find the config or `is_editable = false`
|
|
||||||
**Fix:** Verify database has correct values
|
|
||||||
|
|
||||||
### Issue 4: Empty settings page
|
|
||||||
**Cause:** No configurations in database
|
|
||||||
**Fix:** Check backend logs for seeding errors, run seed manually
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Expected Database State
|
|
||||||
|
|
||||||
After successful seeding, your `admin_configurations` table should have:
|
|
||||||
|
|
||||||
| Count | Category | All Editable? |
|
|
||||||
|-------|----------|---------------|
|
|
||||||
| 6 | TAT_SETTINGS | ✅ Yes |
|
|
||||||
| 3 | DOCUMENT_POLICY | ✅ Yes |
|
|
||||||
| 2 | AI_CONFIGURATION | ✅ Yes |
|
|
||||||
| 3 | NOTIFICATION_RULES | ✅ Yes |
|
|
||||||
| 4 | DASHBOARD_LAYOUT | ✅ Yes |
|
|
||||||
| 3 | WORKFLOW_SHARING | ✅ Yes |
|
|
||||||
| 2 | WORKFLOW_LIMITS | ✅ Yes |
|
|
||||||
| **18 Total** | **7 Categories** | **✅ All Editable** |
|
|
||||||
|
|
||||||
Query to verify:
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
config_category,
|
|
||||||
COUNT(*) as total,
|
|
||||||
SUM(CASE WHEN is_editable = true THEN 1 ELSE 0 END) as editable_count
|
|
||||||
FROM admin_configurations
|
|
||||||
GROUP BY config_category
|
|
||||||
ORDER BY config_category;
|
|
||||||
```
|
|
||||||
|
|
||||||
Should show 100% editable in all categories!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ After Fix - Settings UI Will Show:
|
|
||||||
|
|
||||||
```
|
|
||||||
Settings → System Configuration
|
|
||||||
|
|
||||||
┌─────────────────────────────────────────┐
|
|
||||||
│ [TAT SETTINGS] [DOCUMENT POLICY] [...] │ ← 7 tabs
|
|
||||||
├─────────────────────────────────────────┤
|
|
||||||
│ │
|
|
||||||
│ ⏰ Default TAT for Express Priority │
|
|
||||||
│ (Description...) │
|
|
||||||
│ ┌──────┐ ← EDITABLE │
|
|
||||||
│ │ 24 │ │
|
|
||||||
│ └──────┘ │
|
|
||||||
│ [💾 Save] [🔄 Reset] ← ENABLED │
|
|
||||||
│ │
|
|
||||||
│ ⏰ First TAT Reminder (%) │
|
|
||||||
│ ━━━━●━━━━ 50% ← SLIDER WORKS │
|
|
||||||
│ [💾 Save] [🔄 Reset] │
|
|
||||||
│ │
|
|
||||||
└─────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
**All inputs should be EDITABLE and Save buttons ENABLED!** ✅
|
|
||||||
|
|
||||||
263
QUICK_START.md
Normal file
263
QUICK_START.md
Normal file
@ -0,0 +1,263 @@
|
|||||||
|
# Royal Enfield Workflow - Quick Start Guide
|
||||||
|
|
||||||
|
## 🚀 **One-Command Setup (New!)**
|
||||||
|
|
||||||
|
Everything is now automated! Just run:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd Re_Backend
|
||||||
|
npm run dev
|
||||||
|
```
|
||||||
|
|
||||||
|
That's it! The setup script will automatically:
|
||||||
|
- ✅ Check if PostgreSQL database exists
|
||||||
|
- ✅ Create database if missing
|
||||||
|
- ✅ Install required extensions (`uuid-ossp`)
|
||||||
|
- ✅ Run all migrations (18 total: create tables, enums, indexes)
|
||||||
|
- ✅ Auto-seed 30 admin configurations
|
||||||
|
- ✅ Start the development server
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 **Prerequisites**
|
||||||
|
|
||||||
|
Before running `npm run dev`, ensure:
|
||||||
|
|
||||||
|
1. **PostgreSQL is installed and running**
|
||||||
|
```bash
|
||||||
|
# Windows
|
||||||
|
# PostgreSQL should be running as a service
|
||||||
|
|
||||||
|
# Verify it's running
|
||||||
|
psql -U postgres -c "SELECT version();"
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Dependencies are installed**
|
||||||
|
```bash
|
||||||
|
npm install
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Environment variables are configured**
|
||||||
|
- Copy `.env.example` to `.env`
|
||||||
|
- Update database credentials:
|
||||||
|
```env
|
||||||
|
DB_HOST=localhost
|
||||||
|
DB_PORT=5432
|
||||||
|
DB_USER=postgres
|
||||||
|
DB_PASSWORD=your_password
|
||||||
|
DB_NAME=royal_enfield_workflow
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 **First Time Setup**
|
||||||
|
|
||||||
|
### Step 1: Install & Configure
|
||||||
|
```bash
|
||||||
|
cd Re_Backend
|
||||||
|
npm install
|
||||||
|
cp .env.example .env
|
||||||
|
# Edit .env with your database credentials
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Run Development Server
|
||||||
|
```bash
|
||||||
|
npm run dev
|
||||||
|
```
|
||||||
|
|
||||||
|
**Output:**
|
||||||
|
```
|
||||||
|
========================================
|
||||||
|
🚀 Royal Enfield Workflow - Auto Setup
|
||||||
|
========================================
|
||||||
|
|
||||||
|
🔍 Checking if database exists...
|
||||||
|
📦 Database 'royal_enfield_workflow' not found. Creating...
|
||||||
|
✅ Database 'royal_enfield_workflow' created successfully!
|
||||||
|
📦 Installing uuid-ossp extension...
|
||||||
|
✅ Extension installed!
|
||||||
|
🔌 Testing database connection...
|
||||||
|
✅ Database connection established!
|
||||||
|
🔄 Running migrations...
|
||||||
|
|
||||||
|
📋 Creating users table with RBAC and extended SSO fields...
|
||||||
|
✅ 2025103000-create-users
|
||||||
|
✅ 2025103001-create-workflow-requests
|
||||||
|
✅ 2025103002-create-approval-levels
|
||||||
|
... (18 migrations total)
|
||||||
|
|
||||||
|
✅ Migrations completed successfully!
|
||||||
|
|
||||||
|
========================================
|
||||||
|
✅ Setup completed successfully!
|
||||||
|
========================================
|
||||||
|
|
||||||
|
📝 Note: Admin configurations will be auto-seeded on server start.
|
||||||
|
|
||||||
|
💡 Next steps:
|
||||||
|
1. Server will start automatically
|
||||||
|
2. Log in via SSO
|
||||||
|
3. Run this SQL to make yourself admin:
|
||||||
|
UPDATE users SET role = 'ADMIN' WHERE email = 'your-email@royalenfield.com';
|
||||||
|
|
||||||
|
[Config Seed] ✅ Default configurations seeded successfully (30 settings)
|
||||||
|
info: ✅ Server started successfully on port 5000
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 3: Make Yourself Admin
|
||||||
|
After logging in via SSO:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
psql -d royal_enfield_workflow
|
||||||
|
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN'
|
||||||
|
WHERE email = 'your-email@royalenfield.com';
|
||||||
|
|
||||||
|
\q
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔄 **Subsequent Runs**
|
||||||
|
|
||||||
|
After initial setup, `npm run dev` will:
|
||||||
|
- ✅ Skip database creation (already exists)
|
||||||
|
- ✅ Run any pending migrations (if you pulled new code)
|
||||||
|
- ✅ Skip config seeding (already has data)
|
||||||
|
- ✅ Start server immediately
|
||||||
|
|
||||||
|
**Typical Output:**
|
||||||
|
```
|
||||||
|
========================================
|
||||||
|
🚀 Royal Enfield Workflow - Auto Setup
|
||||||
|
========================================
|
||||||
|
|
||||||
|
🔍 Checking if database exists...
|
||||||
|
✅ Database 'royal_enfield_workflow' already exists.
|
||||||
|
🔌 Testing database connection...
|
||||||
|
✅ Database connection established!
|
||||||
|
🔄 Running migrations...
|
||||||
|
ℹ️ No pending migrations
|
||||||
|
✅ Migrations completed successfully!
|
||||||
|
|
||||||
|
========================================
|
||||||
|
✅ Setup completed successfully!
|
||||||
|
========================================
|
||||||
|
|
||||||
|
info: ✅ Server started successfully on port 5000
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🛠️ **Manual Commands (If Needed)**
|
||||||
|
|
||||||
|
### Run Setup Only (Without Starting Server)
|
||||||
|
```bash
|
||||||
|
npm run setup
|
||||||
|
```
|
||||||
|
|
||||||
|
### Start Server Without Setup
|
||||||
|
```bash
|
||||||
|
npm run dev:no-setup
|
||||||
|
```
|
||||||
|
|
||||||
|
### Run Migrations Only
|
||||||
|
```bash
|
||||||
|
npm run migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
### Seed Admin Configs Manually
|
||||||
|
```bash
|
||||||
|
npm run seed:config
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔥 **Fresh Database Reset**
|
||||||
|
|
||||||
|
If you want to completely reset and start fresh:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Drop database
|
||||||
|
psql -U postgres -c "DROP DATABASE IF EXISTS royal_enfield_workflow;"
|
||||||
|
|
||||||
|
# Then just run dev (it will recreate everything)
|
||||||
|
npm run dev
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 **Database Structure**
|
||||||
|
|
||||||
|
After setup, you'll have:
|
||||||
|
- **18 migrations** run successfully
|
||||||
|
- **30 admin configurations** seeded
|
||||||
|
- **12+ tables** created:
|
||||||
|
- `users` (with RBAC roles)
|
||||||
|
- `workflow_requests`
|
||||||
|
- `approval_levels`
|
||||||
|
- `participants`
|
||||||
|
- `documents`
|
||||||
|
- `work_notes`
|
||||||
|
- `tat_alerts`
|
||||||
|
- `admin_configurations`
|
||||||
|
- `holidays`
|
||||||
|
- `notifications`
|
||||||
|
- `conclusion_remarks`
|
||||||
|
- And more...
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎉 **That's It!**
|
||||||
|
|
||||||
|
Now you can:
|
||||||
|
- Access API at: `http://localhost:5000`
|
||||||
|
- View health check: `http://localhost:5000/health`
|
||||||
|
- Access API docs: `http://localhost:5000/api/v1`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ❓ **Troubleshooting**
|
||||||
|
|
||||||
|
### Database Connection Failed
|
||||||
|
```
|
||||||
|
Error: Unable to connect to database
|
||||||
|
```
|
||||||
|
**Fix:**
|
||||||
|
- Ensure PostgreSQL is running
|
||||||
|
- Check credentials in `.env`
|
||||||
|
- Verify database user has `CREATEDB` permission
|
||||||
|
|
||||||
|
### Setup Script Permission Error
|
||||||
|
```
|
||||||
|
Error: permission denied to create database
|
||||||
|
```
|
||||||
|
**Fix:**
|
||||||
|
```sql
|
||||||
|
-- Grant CREATEDB permission to your user
|
||||||
|
ALTER USER postgres CREATEDB;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Port Already in Use
|
||||||
|
```
|
||||||
|
Error: Port 5000 is already in use
|
||||||
|
```
|
||||||
|
**Fix:**
|
||||||
|
- Change `PORT` in `.env`
|
||||||
|
- Or kill process using port 5000
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 **Production Deployment**
|
||||||
|
|
||||||
|
For production:
|
||||||
|
1. Set `NODE_ENV=production` in `.env`
|
||||||
|
2. Use `npm run build` to compile TypeScript
|
||||||
|
3. Use `npm start` (no auto-setup in production)
|
||||||
|
4. Run migrations separately: `npm run migrate`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Happy Coding!** 🎉
|
||||||
|
|
||||||
@ -1,253 +0,0 @@
|
|||||||
# Quick Start: Skip & Add Approver Features
|
|
||||||
|
|
||||||
## 🚀 Setup (One-Time)
|
|
||||||
|
|
||||||
### **Step 1: Run Database Migration**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Connect to database
|
|
||||||
psql -U postgres -d re_workflow
|
|
||||||
|
|
||||||
# Run migration
|
|
||||||
\i Re_Backend/src/migrations/add_is_skipped_to_approval_levels.sql
|
|
||||||
|
|
||||||
# Verify columns added
|
|
||||||
\d approval_levels
|
|
||||||
# Should show: is_skipped, skipped_at, skipped_by, skip_reason
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Restart Backend**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📖 User Guide
|
|
||||||
|
|
||||||
### **How to Skip an Approver (Initiator/Approver)**
|
|
||||||
|
|
||||||
1. Go to **Request Detail** → **Workflow** tab
|
|
||||||
2. Find the approver who is pending/in-review
|
|
||||||
3. Click **"Skip This Approver"** button
|
|
||||||
4. Enter reason (e.g., "On vacation")
|
|
||||||
5. Click OK
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ Approver marked as SKIPPED
|
|
||||||
- ✅ Next approver becomes active
|
|
||||||
- ✅ Notification sent to next approver
|
|
||||||
- ✅ Activity logged
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **How to Add New Approver (Initiator/Approver)**
|
|
||||||
|
|
||||||
1. Go to **Request Detail** → **Quick Actions**
|
|
||||||
2. Click **"Add Approver"**
|
|
||||||
3. Review **Current Levels** (shows all existing approvers with status)
|
|
||||||
4. Select **Approval Level** (where to insert new approver)
|
|
||||||
5. Enter **TAT Hours** (e.g., 48)
|
|
||||||
6. Enter **Email** (use @ to search: `@john`)
|
|
||||||
7. Click **"Add at Level X"**
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
- ✅ New approver inserted at chosen level
|
|
||||||
- ✅ Existing approvers shifted automatically
|
|
||||||
- ✅ TAT jobs scheduled if level is active
|
|
||||||
- ✅ Notification sent to new approver
|
|
||||||
- ✅ Activity logged
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Examples
|
|
||||||
|
|
||||||
### **Example 1: Skip Non-Responding Approver**
|
|
||||||
|
|
||||||
**Scenario:** Mike (Level 2) hasn't responded for 3 days, deadline approaching
|
|
||||||
|
|
||||||
**Steps:**
|
|
||||||
1. Open request REQ-2025-001
|
|
||||||
2. Go to Workflow tab
|
|
||||||
3. Find Mike's card (Level 2 - In Review)
|
|
||||||
4. Click "Skip This Approver"
|
|
||||||
5. Reason: "Approver on extended leave - deadline critical"
|
|
||||||
6. Confirm
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
```
|
|
||||||
Before: After:
|
|
||||||
Level 1: Sarah ✅ Level 1: Sarah ✅
|
|
||||||
Level 2: Mike ⏳ → Level 2: Mike ⏭️ (SKIPPED)
|
|
||||||
Level 3: Lisa ⏸️ Level 3: Lisa ⏳ (ACTIVE!)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Example 2: Add Finance Review**
|
|
||||||
|
|
||||||
**Scenario:** Need Finance Manager approval between existing levels
|
|
||||||
|
|
||||||
**Steps:**
|
|
||||||
1. Click "Add Approver" in Quick Actions
|
|
||||||
2. See current levels:
|
|
||||||
- Level 1: Sarah (Approved)
|
|
||||||
- Level 2: Mike (In Review)
|
|
||||||
- Level 3: Lisa (Waiting)
|
|
||||||
3. Select Level: **3** (to insert before Lisa)
|
|
||||||
4. TAT Hours: **48**
|
|
||||||
5. Email: `@john` → Select "John Doe (john@finance.com)"
|
|
||||||
6. Click "Add at Level 3"
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
```
|
|
||||||
Before: After:
|
|
||||||
Level 1: Sarah ✅ Level 1: Sarah ✅
|
|
||||||
Level 2: Mike ⏳ Level 2: Mike ⏳
|
|
||||||
Level 3: Lisa ⏸️ → Level 3: John ⏸️ (NEW!)
|
|
||||||
Level 4: Lisa ⏸️ (shifted)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚙️ API Reference
|
|
||||||
|
|
||||||
### **Skip Approver**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
POST /api/v1/workflows/:requestId/approvals/:levelId/skip
|
|
||||||
|
|
||||||
Headers:
|
|
||||||
Authorization: Bearer <token>
|
|
||||||
|
|
||||||
Body:
|
|
||||||
{
|
|
||||||
"reason": "Approver on vacation"
|
|
||||||
}
|
|
||||||
|
|
||||||
Response:
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"message": "Approver skipped successfully"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Add Approver at Level**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
POST /api/v1/workflows/:requestId/approvers/at-level
|
|
||||||
|
|
||||||
Headers:
|
|
||||||
Authorization: Bearer <token>
|
|
||||||
|
|
||||||
Body:
|
|
||||||
{
|
|
||||||
"email": "john@example.com",
|
|
||||||
"tatHours": 48,
|
|
||||||
"level": 3
|
|
||||||
}
|
|
||||||
|
|
||||||
Response:
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"message": "Approver added successfully",
|
|
||||||
"data": {
|
|
||||||
"levelId": "...",
|
|
||||||
"levelNumber": 3,
|
|
||||||
"approverName": "John Doe",
|
|
||||||
"tatHours": 48
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🛡️ Permissions
|
|
||||||
|
|
||||||
| Action | Who Can Do It |
|
|
||||||
|--------|---------------|
|
|
||||||
| Skip Approver | ✅ INITIATOR, ✅ APPROVER |
|
|
||||||
| Add Approver | ✅ INITIATOR, ✅ APPROVER |
|
|
||||||
| View Skip Reason | ✅ All participants |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚠️ Limitations
|
|
||||||
|
|
||||||
| Limitation | Reason |
|
|
||||||
|------------|--------|
|
|
||||||
| Cannot skip approved levels | Data integrity |
|
|
||||||
| Cannot skip rejected levels | Already closed |
|
|
||||||
| Cannot skip already skipped levels | Already handled |
|
|
||||||
| Cannot skip future levels | Not yet active |
|
|
||||||
| Cannot add before completed levels | Would break workflow state |
|
|
||||||
| Must provide valid TAT (1-720h) | Business rules |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Dashboard Impact
|
|
||||||
|
|
||||||
### **Skipped Approvers in Reports:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Count skipped approvers
|
|
||||||
SELECT COUNT(*)
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE is_skipped = TRUE;
|
|
||||||
|
|
||||||
-- Find requests with skipped levels
|
|
||||||
SELECT r.request_number, al.level_number, al.approver_name, al.skip_reason
|
|
||||||
FROM workflow_requests r
|
|
||||||
JOIN approval_levels al ON r.request_id = al.request_id
|
|
||||||
WHERE al.is_skipped = TRUE;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **KPIs Affected:**
|
|
||||||
|
|
||||||
- **Avg Approval Time** - Skipped levels excluded from calculation
|
|
||||||
- **Approver Response Rate** - Skipped marked separately
|
|
||||||
- **Workflow Bottlenecks** - Identify frequently skipped approvers
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Troubleshooting
|
|
||||||
|
|
||||||
### **"Cannot skip approver - level is already APPROVED"**
|
|
||||||
- The level has already been approved
|
|
||||||
- You cannot skip completed levels
|
|
||||||
|
|
||||||
### **"Cannot skip future approval levels"**
|
|
||||||
- You're trying to skip a level that hasn't been reached yet
|
|
||||||
- Only current level can be skipped
|
|
||||||
|
|
||||||
### **"Cannot add approver at level X. Minimum allowed level is Y"**
|
|
||||||
- You're trying to add before a completed level
|
|
||||||
- Must add after all approved/rejected/skipped levels
|
|
||||||
|
|
||||||
### **"User is already a participant in this request"**
|
|
||||||
- The user is already an approver, initiator, or spectator
|
|
||||||
- Cannot add same user twice
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Testing Checklist
|
|
||||||
|
|
||||||
- [ ] Run database migration
|
|
||||||
- [ ] Restart backend server
|
|
||||||
- [ ] Create test workflow with 3 approvers
|
|
||||||
- [ ] Approve Level 1
|
|
||||||
- [ ] Skip Level 2 (test skip functionality)
|
|
||||||
- [ ] Verify Level 3 becomes active
|
|
||||||
- [ ] Add new approver at Level 3 (test add functionality)
|
|
||||||
- [ ] Verify levels shifted correctly
|
|
||||||
- [ ] Check activity log shows both actions
|
|
||||||
- [ ] Verify notifications sent correctly
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
Ready to use! 🎉
|
|
||||||
|
|
||||||
1846
Royal_Enfield_API_Collection.postman_collection.json
Normal file
1846
Royal_Enfield_API_Collection.postman_collection.json
Normal file
File diff suppressed because it is too large
Load Diff
@ -1,216 +0,0 @@
|
|||||||
# ✅ Holiday Calendar & Admin Configuration - Setup Complete!
|
|
||||||
|
|
||||||
## 🎉 Successfully Implemented
|
|
||||||
|
|
||||||
### **Database Tables Created:**
|
|
||||||
1. ✅ `holidays` - Organization holiday calendar
|
|
||||||
2. ✅ `admin_configurations` - System-wide admin settings
|
|
||||||
|
|
||||||
### **API Endpoints Created:**
|
|
||||||
- ✅ `/api/admin/holidays` - CRUD operations for holidays
|
|
||||||
- ✅ `/api/admin/configurations` - Manage admin settings
|
|
||||||
|
|
||||||
### **Features Implemented:**
|
|
||||||
- ✅ Holiday management (add/edit/delete/bulk import)
|
|
||||||
- ✅ TAT calculation excludes holidays for STANDARD priority
|
|
||||||
- ✅ Automatic holiday cache with 6-hour refresh
|
|
||||||
- ✅ Admin configuration system ready for future UI
|
|
||||||
- ✅ Sample Indian holidays data (2025) prepared for import
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Start
|
|
||||||
|
|
||||||
### **1. Verify Tables:**
|
|
||||||
```bash
|
|
||||||
# Check if tables were created
|
|
||||||
psql -d your_database -c "\dt holidays"
|
|
||||||
psql -d your_database -c "\dt admin_configurations"
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Start the Backend:**
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**You should see:**
|
|
||||||
```
|
|
||||||
📅 Holiday calendar loaded for TAT calculations
|
|
||||||
[TAT Utils] Loaded 0 holidays into cache
|
|
||||||
```
|
|
||||||
|
|
||||||
### **3. Add Your First Holiday (via API):**
|
|
||||||
|
|
||||||
**As Admin user:**
|
|
||||||
```bash
|
|
||||||
curl -X POST http://localhost:5000/api/admin/holidays \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
|
||||||
-d '{
|
|
||||||
"holidayDate": "2025-11-05",
|
|
||||||
"holidayName": "Diwali",
|
|
||||||
"description": "Festival of Lights",
|
|
||||||
"holidayType": "NATIONAL"
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
|
|
||||||
### **4. Bulk Import Indian Holidays (Optional):**
|
|
||||||
```bash
|
|
||||||
curl -X POST http://localhost:5000/api/admin/holidays/bulk-import \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
|
||||||
-d @data/indian_holidays_2025.json
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 How It Works
|
|
||||||
|
|
||||||
### **TAT Calculation with Holidays:**
|
|
||||||
|
|
||||||
**STANDARD Priority:**
|
|
||||||
- ❌ Skips **weekends** (Saturday/Sunday)
|
|
||||||
- ❌ Skips **holidays** (from holidays table)
|
|
||||||
- ✅ Only counts **working hours** (9 AM - 6 PM)
|
|
||||||
|
|
||||||
**EXPRESS Priority:**
|
|
||||||
- ✅ Includes **all days** (24/7)
|
|
||||||
- ✅ No holidays or weekends excluded
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Documentation
|
|
||||||
|
|
||||||
- **Full Guide:** `docs/HOLIDAY_CALENDAR_SYSTEM.md`
|
|
||||||
- **Complete Summary:** `HOLIDAY_AND_ADMIN_CONFIG_COMPLETE.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Next Steps
|
|
||||||
|
|
||||||
### **For Backend Developers:**
|
|
||||||
1. Test holiday API endpoints
|
|
||||||
2. Verify TAT calculations with holidays
|
|
||||||
3. Add more admin configurations as needed
|
|
||||||
|
|
||||||
### **For Frontend Developers:**
|
|
||||||
1. Build Admin Holiday Management UI
|
|
||||||
2. Create Holiday Calendar view
|
|
||||||
3. Implement Configuration Settings page
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Verify Setup
|
|
||||||
|
|
||||||
### **Check Holidays Table:**
|
|
||||||
```sql
|
|
||||||
SELECT * FROM holidays;
|
|
||||||
-- Should return 0 rows (no holidays added yet)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Check Admin Configurations:**
|
|
||||||
```sql
|
|
||||||
SELECT * FROM admin_configurations;
|
|
||||||
-- Should return 0 rows (will be seeded on first use)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test Holiday API:**
|
|
||||||
```bash
|
|
||||||
# Get all holidays for 2025
|
|
||||||
curl http://localhost:5000/api/admin/holidays?year=2025 \
|
|
||||||
-H "Authorization: Bearer YOUR_ADMIN_TOKEN"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Sample Holidays Data
|
|
||||||
|
|
||||||
**File:** `data/indian_holidays_2025.json`
|
|
||||||
|
|
||||||
Contains 14 Indian national holidays for 2025:
|
|
||||||
- Republic Day (Jan 26)
|
|
||||||
- Holi
|
|
||||||
- Independence Day (Aug 15)
|
|
||||||
- Gandhi Jayanti (Oct 2)
|
|
||||||
- Diwali
|
|
||||||
- Christmas
|
|
||||||
- And more...
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Setup Status
|
|
||||||
|
|
||||||
| Component | Status | Notes |
|
|
||||||
|-----------|--------|-------|
|
|
||||||
| **Holidays Table** | ✅ Created | With 4 indexes |
|
|
||||||
| **Admin Config Table** | ✅ Created | With 3 indexes |
|
|
||||||
| **Holiday Model** | ✅ Implemented | Full CRUD support |
|
|
||||||
| **Holiday Service** | ✅ Implemented | Including bulk import |
|
|
||||||
| **Admin Controller** | ✅ Implemented | All endpoints ready |
|
|
||||||
| **Admin Routes** | ✅ Implemented | Secured with admin middleware |
|
|
||||||
| **TAT Integration** | ✅ Implemented | Holidays excluded for STANDARD |
|
|
||||||
| **Holiday Cache** | ✅ Implemented | 6-hour expiry, auto-refresh |
|
|
||||||
| **Sample Data** | ✅ Created | 14 holidays for 2025 |
|
|
||||||
| **Documentation** | ✅ Complete | Full guide available |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Example Usage
|
|
||||||
|
|
||||||
### **Create Request with Holiday in TAT Period:**
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// Create STANDARD priority request
|
|
||||||
POST /api/workflows
|
|
||||||
{
|
|
||||||
"title": "Test Request",
|
|
||||||
"priority": "STANDARD",
|
|
||||||
"approvers": [
|
|
||||||
{ "email": "approver@example.com", "tatHours": 48 }
|
|
||||||
]
|
|
||||||
}
|
|
||||||
|
|
||||||
// If holidays exist between now and +48 hours:
|
|
||||||
// - Due date will be calculated skipping those holidays
|
|
||||||
// - TAT calculation will be accurate
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🛠️ Troubleshooting
|
|
||||||
|
|
||||||
### **Holidays not excluded from TAT?**
|
|
||||||
|
|
||||||
1. Check if holidays cache is loaded:
|
|
||||||
- Look for "Loaded X holidays into cache" in server logs
|
|
||||||
2. Verify priority is STANDARD (EXPRESS doesn't use holidays)
|
|
||||||
3. Check if holiday exists and is active:
|
|
||||||
```sql
|
|
||||||
SELECT * FROM holidays WHERE holiday_date = '2025-11-05' AND is_active = true;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Cache not updating after adding holiday?**
|
|
||||||
|
|
||||||
- Cache refreshes automatically when admin adds/updates/deletes holidays
|
|
||||||
- If not working, restart backend server
|
|
||||||
- Cache also refreshes every 6 hours automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📞 Support
|
|
||||||
|
|
||||||
For issues or questions:
|
|
||||||
1. Check documentation in `docs/` folder
|
|
||||||
2. Review complete guide in `HOLIDAY_AND_ADMIN_CONFIG_COMPLETE.md`
|
|
||||||
3. Consult with backend team
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**🎉 You're all set! Start adding holidays and enjoy accurate TAT calculations!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated:** November 4, 2025
|
|
||||||
**Version:** 1.0.0
|
|
||||||
**Team:** Royal Enfield Workflow System
|
|
||||||
|
|
||||||
310
SETUP_SUMMARY.md
310
SETUP_SUMMARY.md
@ -1,310 +0,0 @@
|
|||||||
# 🎉 Auto-Migration Setup Summary
|
|
||||||
|
|
||||||
## ✅ Setup Complete!
|
|
||||||
|
|
||||||
Your development environment now automatically runs all migrations when you start the server.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 What Changed
|
|
||||||
|
|
||||||
### 1. ✨ New Migration Created
|
|
||||||
```
|
|
||||||
src/migrations/20251105-add-skip-fields-to-approval-levels.ts
|
|
||||||
```
|
|
||||||
**Adds "Skip Approver" functionality to approval_levels table:**
|
|
||||||
- `is_skipped` - Boolean flag
|
|
||||||
- `skipped_at` - Timestamp
|
|
||||||
- `skipped_by` - User reference (FK)
|
|
||||||
- `skip_reason` - Text explanation
|
|
||||||
- Optimized index for skipped approvers
|
|
||||||
|
|
||||||
### 2. 🔧 Migration Runner Updated
|
|
||||||
```
|
|
||||||
src/scripts/migrate.ts
|
|
||||||
```
|
|
||||||
**Enhancements:**
|
|
||||||
- ✅ Added m14 migration import
|
|
||||||
- ✅ Added m14 execution
|
|
||||||
- ✅ Better console output with emojis
|
|
||||||
- ✅ Enhanced error messages
|
|
||||||
|
|
||||||
### 3. 🚀 Auto-Run on Development Start
|
|
||||||
```json
|
|
||||||
// package.json - "dev" script
|
|
||||||
"npm run migrate && nodemon --exec ts-node ..."
|
|
||||||
```
|
|
||||||
**Before**: Manual migration required
|
|
||||||
**After**: Automatic migration on `npm run dev`
|
|
||||||
|
|
||||||
### 4. 🗑️ Cleanup
|
|
||||||
```
|
|
||||||
❌ Deleted: src/migrations/add_is_skipped_to_approval_levels.sql
|
|
||||||
```
|
|
||||||
Converted SQL → TypeScript for consistency
|
|
||||||
|
|
||||||
### 5. 📚 Documentation Created
|
|
||||||
- ✅ `MIGRATION_WORKFLOW.md` - Complete guide
|
|
||||||
- ✅ `MIGRATION_QUICK_REFERENCE.md` - Quick reference
|
|
||||||
- ✅ `AUTO_MIGRATION_SETUP_COMPLETE.md` - Detailed setup docs
|
|
||||||
- ✅ `SETUP_SUMMARY.md` - This file
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 How to Use
|
|
||||||
|
|
||||||
### Start Development (Most Common)
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
**What happens:**
|
|
||||||
```
|
|
||||||
1. 📦 Connect to database
|
|
||||||
2. 🔄 Run all 14 migrations
|
|
||||||
3. ✅ Apply any new schema changes
|
|
||||||
4. 🚀 Start development server
|
|
||||||
5. ♻️ Enable hot reload
|
|
||||||
```
|
|
||||||
|
|
||||||
### Run Migrations Only
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
**When to use:**
|
|
||||||
- After pulling new migration files
|
|
||||||
- Testing migrations before dev start
|
|
||||||
- Updating database without starting server
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Current Migration Status
|
|
||||||
|
|
||||||
| # | Migration | Date |
|
|
||||||
|---|-----------|------|
|
|
||||||
| 1 | create-workflow-requests | 2025-10-30 |
|
|
||||||
| 2 | create-approval-levels | 2025-10-30 |
|
|
||||||
| 3 | create-participants | 2025-10-30 |
|
|
||||||
| 4 | create-documents | 2025-10-30 |
|
|
||||||
| 5 | create-subscriptions | 2025-10-31 |
|
|
||||||
| 6 | create-activities | 2025-10-31 |
|
|
||||||
| 7 | create-work-notes | 2025-10-31 |
|
|
||||||
| 8 | create-work-note-attachments | 2025-10-31 |
|
|
||||||
| 9 | add-tat-alert-fields | 2025-11-04 |
|
|
||||||
| 10 | create-tat-alerts | 2025-11-04 |
|
|
||||||
| 11 | create-kpi-views | 2025-11-04 |
|
|
||||||
| 12 | create-holidays | 2025-11-04 |
|
|
||||||
| 13 | create-admin-config | 2025-11-04 |
|
|
||||||
| 14 | **add-skip-fields-to-approval-levels** | 2025-11-05 ✨ **NEW** |
|
|
||||||
|
|
||||||
**Total**: 14 migrations configured and ready
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔥 Key Features
|
|
||||||
|
|
||||||
### Automated Workflow
|
|
||||||
```
|
|
||||||
npm run dev
|
|
||||||
↓
|
|
||||||
Runs migrations
|
|
||||||
↓
|
|
||||||
Starts server
|
|
||||||
↓
|
|
||||||
Ready to code! 🎉
|
|
||||||
```
|
|
||||||
|
|
||||||
### Safety Features
|
|
||||||
- ✅ **Idempotent** - Safe to run multiple times
|
|
||||||
- ✅ **Error Handling** - Stops on first error
|
|
||||||
- ✅ **Blocks Startup** - Server won't start if migration fails
|
|
||||||
- ✅ **Rollback Support** - Every migration has down() function
|
|
||||||
- ✅ **TypeScript** - Type-safe schema changes
|
|
||||||
|
|
||||||
### Developer Experience
|
|
||||||
- ✅ **Zero Manual Steps** - Everything automatic
|
|
||||||
- ✅ **Consistent State** - Everyone has same schema
|
|
||||||
- ✅ **Fast Iteration** - Quick dev cycle
|
|
||||||
- ✅ **Clear Feedback** - Visual console output
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📖 Quick Reference
|
|
||||||
|
|
||||||
### File Locations
|
|
||||||
```
|
|
||||||
src/
|
|
||||||
├── migrations/ ← Migration files
|
|
||||||
│ ├── 2025103001-create-workflow-requests.ts
|
|
||||||
│ ├── ...
|
|
||||||
│ └── 20251105-add-skip-fields-to-approval-levels.ts ✨
|
|
||||||
├── scripts/
|
|
||||||
│ └── migrate.ts ← Migration runner
|
|
||||||
└── config/
|
|
||||||
└── database.ts ← Database config
|
|
||||||
|
|
||||||
Root:
|
|
||||||
├── package.json ← Dev script with auto-migration
|
|
||||||
├── backend_structure.txt ← Database schema reference
|
|
||||||
└── MIGRATION_*.md ← Documentation
|
|
||||||
```
|
|
||||||
|
|
||||||
### Common Commands
|
|
||||||
```bash
|
|
||||||
# Development with auto-migration
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Migrations only
|
|
||||||
npm run migrate
|
|
||||||
|
|
||||||
# Build for production
|
|
||||||
npm run build
|
|
||||||
|
|
||||||
# Type check
|
|
||||||
npm run type-check
|
|
||||||
|
|
||||||
# Linting
|
|
||||||
npm run lint
|
|
||||||
npm run lint:fix
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🆕 Adding New Migrations
|
|
||||||
|
|
||||||
### Quick Steps
|
|
||||||
1. **Create** migration file in `src/migrations/`
|
|
||||||
2. **Register** in `src/scripts/migrate.ts`
|
|
||||||
3. **Test** with `npm run dev` or `npm run migrate`
|
|
||||||
|
|
||||||
### Detailed Guide
|
|
||||||
See `MIGRATION_WORKFLOW.md` for:
|
|
||||||
- Migration templates
|
|
||||||
- Common operations
|
|
||||||
- Best practices
|
|
||||||
- Troubleshooting
|
|
||||||
- Safety guidelines
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✨ Benefits
|
|
||||||
|
|
||||||
### For You
|
|
||||||
- ✅ No more manual migration steps
|
|
||||||
- ✅ Always up-to-date database schema
|
|
||||||
- ✅ Less context switching
|
|
||||||
- ✅ Focus on feature development
|
|
||||||
|
|
||||||
### For Team
|
|
||||||
- ✅ Consistent development environment
|
|
||||||
- ✅ Easy onboarding for new developers
|
|
||||||
- ✅ Clear migration history
|
|
||||||
- ✅ Professional workflow
|
|
||||||
|
|
||||||
### For Production
|
|
||||||
- ✅ Tested migration process
|
|
||||||
- ✅ Rollback capabilities
|
|
||||||
- ✅ Version controlled schema changes
|
|
||||||
- ✅ Audit trail of database changes
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Example Session
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# You just pulled latest code with new migration
|
|
||||||
git pull origin main
|
|
||||||
|
|
||||||
# Start development - migrations run automatically
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Console output:
|
|
||||||
📦 Database connected
|
|
||||||
🔄 Running migrations...
|
|
||||||
|
|
||||||
✅ Created workflow_requests table
|
|
||||||
✅ Created approval_levels table
|
|
||||||
...
|
|
||||||
✅ Added skip-related fields to approval_levels table
|
|
||||||
|
|
||||||
✅ All migrations applied successfully
|
|
||||||
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
📊 Environment: development
|
|
||||||
⏰ TAT Worker: Initialized and listening
|
|
||||||
|
|
||||||
# Your database is now up-to-date!
|
|
||||||
# Server is running!
|
|
||||||
# Ready to code! 🎉
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔗 Next Steps
|
|
||||||
|
|
||||||
### Immediate
|
|
||||||
1. ✅ Run `npm run dev` to test auto-migration
|
|
||||||
2. ✅ Verify all 14 migrations execute successfully
|
|
||||||
3. ✅ Check database schema for new skip fields
|
|
||||||
|
|
||||||
### When Adding Features
|
|
||||||
1. Create migration for schema changes
|
|
||||||
2. Register in migrate.ts
|
|
||||||
3. Test with `npm run dev`
|
|
||||||
4. Commit migration with feature code
|
|
||||||
|
|
||||||
### Before Production Deploy
|
|
||||||
1. Backup production database
|
|
||||||
2. Test migrations in staging
|
|
||||||
3. Review migration execution order
|
|
||||||
4. Deploy with confidence
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📞 Support & Resources
|
|
||||||
|
|
||||||
| Resource | Location |
|
|
||||||
|----------|----------|
|
|
||||||
| Full Guide | `MIGRATION_WORKFLOW.md` |
|
|
||||||
| Quick Reference | `MIGRATION_QUICK_REFERENCE.md` |
|
|
||||||
| Setup Details | `AUTO_MIGRATION_SETUP_COMPLETE.md` |
|
|
||||||
| Database Schema | `backend_structure.txt` |
|
|
||||||
| Migration Files | `src/migrations/` |
|
|
||||||
| Migration Runner | `src/scripts/migrate.ts` |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🏆 Success Criteria
|
|
||||||
|
|
||||||
- ✅ Auto-migration configured
|
|
||||||
- ✅ All 14 migrations registered
|
|
||||||
- ✅ TypeScript migration created for skip fields
|
|
||||||
- ✅ SQL file converted and cleaned up
|
|
||||||
- ✅ Documentation completed
|
|
||||||
- ✅ Package.json updated
|
|
||||||
- ✅ Migration runner enhanced
|
|
||||||
- ✅ Ready for development
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 You're All Set!
|
|
||||||
|
|
||||||
Just run:
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
And watch the magic happen! ✨
|
|
||||||
|
|
||||||
All new migrations will automatically run before your server starts.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Setup Date**: November 5, 2025
|
|
||||||
**Migration System**: TypeScript-based
|
|
||||||
**Auto-Run**: ✅ Enabled
|
|
||||||
**Total Migrations**: 14
|
|
||||||
**Status**: 🟢 Production Ready
|
|
||||||
|
|
||||||
**Team**: Royal Enfield .NET Expert Team
|
|
||||||
**Project**: Workflow Management System
|
|
||||||
|
|
||||||
@ -1,751 +0,0 @@
|
|||||||
# Skip Approver & Dynamic Approver Addition
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
This feature allows initiators and approvers to manage approval workflows dynamically when approvers are unavailable or additional approval is needed.
|
|
||||||
|
|
||||||
### **Key Features:**
|
|
||||||
|
|
||||||
1. **Skip Approver** - Skip non-responding approvers and move to next level
|
|
||||||
2. **Add Approver at Specific Level** - Insert new approver at any position
|
|
||||||
3. **Automatic Level Shifting** - Existing approvers are automatically renumbered
|
|
||||||
4. **Smart Validation** - Cannot modify completed levels (approved/rejected/skipped)
|
|
||||||
5. **TAT Management** - New approvers get their own TAT, jobs scheduled automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Use Cases
|
|
||||||
|
|
||||||
### **Use Case 1: Approver on Leave**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (Pending) ⏳ ← On vacation, not responding
|
|
||||||
Level 3: Lisa (Waiting) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```
|
|
||||||
Initiator clicks "Skip This Approver" on Level 2
|
|
||||||
→ Mike is marked as SKIPPED
|
|
||||||
→ Level 3 (Lisa) becomes active
|
|
||||||
→ Lisa receives notification
|
|
||||||
→ TAT jobs cancelled for Mike, scheduled for Lisa
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (Skipped) ⏭️ ← Skipped
|
|
||||||
Level 3: Lisa (In Review) ⏳ ← Now active
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Use Case 2: Add Additional Reviewer**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (In Review) ⏳
|
|
||||||
Level 3: Lisa (Waiting) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Need:** Add Finance Manager (John) between Mike and Lisa
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```
|
|
||||||
Click "Add Approver"
|
|
||||||
→ Email: john@example.com
|
|
||||||
→ TAT: 48 hours
|
|
||||||
→ Level: 3 (between Mike and Lisa)
|
|
||||||
→ Submit
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (In Review) ⏳ ← Still at level 2
|
|
||||||
Level 3: John (Waiting) ⏸️ ← NEW! Inserted here
|
|
||||||
Level 4: Lisa (Waiting) ⏸️ ← Shifted from 3 to 4
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Use Case 3: Replace Skipped Approver**
|
|
||||||
|
|
||||||
**Scenario:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (Skipped) ⏭️
|
|
||||||
Level 3: Lisa (In Review) ⏳
|
|
||||||
```
|
|
||||||
|
|
||||||
**Need:** Add replacement for Mike at level 2
|
|
||||||
|
|
||||||
**Solution:**
|
|
||||||
```
|
|
||||||
Click "Add Approver"
|
|
||||||
→ Email: john@example.com
|
|
||||||
→ TAT: 24 hours
|
|
||||||
→ Level: 2 (Mike's old position)
|
|
||||||
→ Submit
|
|
||||||
```
|
|
||||||
|
|
||||||
**Result:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: John (Waiting) ⏸️ ← NEW! Inserted at level 2
|
|
||||||
Level 3: Mike (Skipped) ⏭️ ← Shifted from 2 to 3
|
|
||||||
Level 4: Lisa (In Review) ⏳ ← Shifted from 3 to 4
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Database Schema
|
|
||||||
|
|
||||||
### **New Fields in `approval_levels` Table:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Migration: add_is_skipped_to_approval_levels.sql
|
|
||||||
|
|
||||||
ALTER TABLE approval_levels
|
|
||||||
ADD COLUMN is_skipped BOOLEAN DEFAULT FALSE,
|
|
||||||
ADD COLUMN skipped_at TIMESTAMP,
|
|
||||||
ADD COLUMN skipped_by UUID REFERENCES users(user_id),
|
|
||||||
ADD COLUMN skip_reason TEXT;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Status Enum Update:**
|
|
||||||
|
|
||||||
Already includes `SKIPPED` status:
|
|
||||||
```sql
|
|
||||||
status ENUM('PENDING', 'IN_PROGRESS', 'APPROVED', 'REJECTED', 'SKIPPED')
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Example Data:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Level 2 was skipped
|
|
||||||
SELECT
|
|
||||||
level_number,
|
|
||||||
approver_name,
|
|
||||||
status,
|
|
||||||
is_skipped,
|
|
||||||
skipped_at,
|
|
||||||
skip_reason
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE request_id = 'xxx';
|
|
||||||
|
|
||||||
-- Results:
|
|
||||||
-- 1 | Sarah | APPROVED | FALSE | NULL | NULL
|
|
||||||
-- 2 | Mike | SKIPPED | TRUE | 2025-11-05 | On vacation
|
|
||||||
-- 3 | Lisa | PENDING | FALSE | NULL | NULL
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## API Endpoints
|
|
||||||
|
|
||||||
### **1. Skip Approver**
|
|
||||||
|
|
||||||
**Endpoint:**
|
|
||||||
```
|
|
||||||
POST /api/v1/workflows/:id/approvals/:levelId/skip
|
|
||||||
```
|
|
||||||
|
|
||||||
**Request Body:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"reason": "Approver on vacation - deadline approaching"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"message": "Approver skipped successfully",
|
|
||||||
"data": {
|
|
||||||
"levelId": "...",
|
|
||||||
"levelNumber": 2,
|
|
||||||
"status": "SKIPPED",
|
|
||||||
"skippedAt": "2025-11-05T10:30:00Z"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Logic:**
|
|
||||||
1. ✅ Mark level as `SKIPPED`
|
|
||||||
2. ✅ Cancel TAT jobs for skipped level
|
|
||||||
3. ✅ Activate next level (move to level+1)
|
|
||||||
4. ✅ Schedule TAT jobs for next level
|
|
||||||
5. ✅ Notify next approver
|
|
||||||
6. ✅ Log activity
|
|
||||||
|
|
||||||
**Validation:**
|
|
||||||
- ❌ Cannot skip already approved/rejected/skipped levels
|
|
||||||
- ❌ Cannot skip future levels (only current level)
|
|
||||||
- ✅ Only INITIATOR or APPROVER can skip
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **2. Add Approver at Specific Level**
|
|
||||||
|
|
||||||
**Endpoint:**
|
|
||||||
```
|
|
||||||
POST /api/v1/workflows/:id/approvers/at-level
|
|
||||||
```
|
|
||||||
|
|
||||||
**Request Body:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"email": "john@example.com",
|
|
||||||
"tatHours": 48,
|
|
||||||
"level": 3
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"message": "Approver added successfully",
|
|
||||||
"data": {
|
|
||||||
"levelId": "...",
|
|
||||||
"levelNumber": 3,
|
|
||||||
"approverName": "John Doe",
|
|
||||||
"tatHours": 48,
|
|
||||||
"status": "PENDING"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Logic:**
|
|
||||||
1. ✅ Find user by email
|
|
||||||
2. ✅ Validate target level (must be after completed levels)
|
|
||||||
3. ✅ Shift existing levels at and after target level (+1)
|
|
||||||
4. ✅ Create new approval level at target position
|
|
||||||
5. ✅ Add as participant (APPROVER type)
|
|
||||||
6. ✅ If new level is current level, schedule TAT jobs
|
|
||||||
7. ✅ Notify new approver
|
|
||||||
8. ✅ Log activity
|
|
||||||
|
|
||||||
**Validation:**
|
|
||||||
- ❌ User must exist in system
|
|
||||||
- ❌ User cannot be existing participant
|
|
||||||
- ❌ Level must be after completed levels (approved/rejected/skipped)
|
|
||||||
- ✅ Automatic level shifting for existing approvers
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Level Shifting Logic
|
|
||||||
|
|
||||||
### **Example: Add at Level 3**
|
|
||||||
|
|
||||||
**Before:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (In Review) ⏳
|
|
||||||
Level 3: Lisa (Waiting) ⏸️
|
|
||||||
Level 4: Tom (Waiting) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Action:**
|
|
||||||
```
|
|
||||||
Add John at Level 3 with 48h TAT
|
|
||||||
```
|
|
||||||
|
|
||||||
**Backend Processing:**
|
|
||||||
```typescript
|
|
||||||
// Step 1: Get levels to shift (levelNumber >= 3)
|
|
||||||
levelsToShift = [Lisa (Level 3), Tom (Level 4)]
|
|
||||||
|
|
||||||
// Step 2: Shift each level
|
|
||||||
Lisa: Level 3 → Level 4
|
|
||||||
Tom: Level 4 → Level 5
|
|
||||||
|
|
||||||
// Step 3: Insert new approver
|
|
||||||
John: Create at Level 3
|
|
||||||
|
|
||||||
// Step 4: Update workflow.totalLevels
|
|
||||||
totalLevels: 4 → 5
|
|
||||||
```
|
|
||||||
|
|
||||||
**After:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (Approved) ✅
|
|
||||||
Level 2: Mike (In Review) ⏳
|
|
||||||
Level 3: John (Waiting) ⏸️ ← NEW!
|
|
||||||
Level 4: Lisa (Waiting) ⏸️ ← Shifted from 3
|
|
||||||
Level 5: Tom (Waiting) ⏸️ ← Shifted from 4
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Frontend Implementation
|
|
||||||
|
|
||||||
### **AddApproverModal Enhancements:**
|
|
||||||
|
|
||||||
**New Props:**
|
|
||||||
```typescript
|
|
||||||
interface AddApproverModalProps {
|
|
||||||
open: boolean;
|
|
||||||
onClose: () => void;
|
|
||||||
onConfirm: (email: string, tatHours: number, level: number) => Promise<void>;
|
|
||||||
currentLevels?: ApprovalLevelInfo[]; // ✅ NEW!
|
|
||||||
}
|
|
||||||
|
|
||||||
interface ApprovalLevelInfo {
|
|
||||||
levelNumber: number;
|
|
||||||
approverName: string;
|
|
||||||
status: string;
|
|
||||||
tatHours: number;
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**UI Components:**
|
|
||||||
1. **Current Levels Display** - Shows all existing levels with status badges
|
|
||||||
2. **Level Selector** - Dropdown with available levels (after completed)
|
|
||||||
3. **TAT Hours Input** - Number input for TAT (1-720 hours)
|
|
||||||
4. **Email Search** - Existing @ mention search
|
|
||||||
|
|
||||||
**Example Modal:**
|
|
||||||
```
|
|
||||||
┌─────────────────────────────────────────────────┐
|
|
||||||
│ Add Approver │
|
|
||||||
├─────────────────────────────────────────────────┤
|
|
||||||
│ Current Approval Levels │
|
|
||||||
│ ┌─────────────────────────────────────────────┐ │
|
|
||||||
│ │ [1] Sarah 50h TAT [✓] approved │ │
|
|
||||||
│ │ [2] Mike 24h TAT [⏳] pending │ │
|
|
||||||
│ │ [3] Lisa 36h TAT [⏸] waiting │ │
|
|
||||||
│ └─────────────────────────────────────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ Approval Level * │
|
|
||||||
│ [Select: Level 2 (will shift existing Level 2)] │
|
|
||||||
│ │
|
|
||||||
│ TAT (Turn Around Time) * │
|
|
||||||
│ [48] hours │
|
|
||||||
│ │
|
|
||||||
│ Email Address * │
|
|
||||||
│ [@john or john@example.com] │
|
|
||||||
│ │
|
|
||||||
│ [Cancel] [Add at Level 2] │
|
|
||||||
└─────────────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **RequestDetail Skip Button:**
|
|
||||||
|
|
||||||
Added to Workflow tab for each pending/in-review level:
|
|
||||||
|
|
||||||
```tsx
|
|
||||||
{/* Skip Approver Button - Only for active levels */}
|
|
||||||
{(isActive || step.status === 'pending') && !isCompleted && !isRejected && (
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="sm"
|
|
||||||
className="w-full border-orange-300 text-orange-700 hover:bg-orange-50"
|
|
||||||
onClick={() => {
|
|
||||||
const reason = prompt('Provide reason for skipping:');
|
|
||||||
if (reason !== null) {
|
|
||||||
handleSkipApprover(step.levelId, reason);
|
|
||||||
}
|
|
||||||
}}
|
|
||||||
>
|
|
||||||
<AlertCircle className="w-4 h-4 mr-2" />
|
|
||||||
Skip This Approver
|
|
||||||
</Button>
|
|
||||||
)}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Validation Rules
|
|
||||||
|
|
||||||
### **Skip Approver Validation:**
|
|
||||||
|
|
||||||
| Rule | Validation | Error Message |
|
|
||||||
|------|-----------|---------------|
|
|
||||||
| Already completed | ❌ Cannot skip APPROVED level | "Cannot skip approver - level is already APPROVED" |
|
|
||||||
| Already rejected | ❌ Cannot skip REJECTED level | "Cannot skip approver - level is already REJECTED" |
|
|
||||||
| Already skipped | ❌ Cannot skip SKIPPED level | "Cannot skip approver - level is already SKIPPED" |
|
|
||||||
| Future level | ❌ Cannot skip level > currentLevel | "Cannot skip future approval levels" |
|
|
||||||
| Authorization | ✅ Only INITIATOR or APPROVER | 403 Forbidden |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Add Approver Validation:**
|
|
||||||
|
|
||||||
| Rule | Validation | Error Message |
|
|
||||||
|------|-----------|---------------|
|
|
||||||
| User exists | ✅ User must exist in system | "User not found with this email" |
|
|
||||||
| Already participant | ❌ Cannot add existing participant | "User is already a participant" |
|
|
||||||
| Level range | ❌ Level must be ≥ (completed levels + 1) | "Cannot add at level X. Minimum is Y" |
|
|
||||||
| TAT hours | ✅ 1 ≤ hours ≤ 720 | "TAT hours must be between 1 and 720" |
|
|
||||||
| Email format | ✅ Valid email format | "Please enter a valid email" |
|
|
||||||
| Authorization | ✅ Only INITIATOR or APPROVER | 403 Forbidden |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
### **Example 1: Skip Current Approver**
|
|
||||||
|
|
||||||
**Initial State:**
|
|
||||||
```
|
|
||||||
Request: REQ-2025-001
|
|
||||||
Current Level: 2
|
|
||||||
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (IN_PROGRESS) ⏳ ← Taking too long
|
|
||||||
Level 3: Lisa (PENDING) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Action:**
|
|
||||||
```bash
|
|
||||||
# Initiator skips Mike
|
|
||||||
POST /api/v1/workflows/REQ-2025-001/approvals/LEVEL-ID-2/skip
|
|
||||||
Body: { "reason": "Approver on extended leave" }
|
|
||||||
```
|
|
||||||
|
|
||||||
**Backend Processing:**
|
|
||||||
```typescript
|
|
||||||
1. Get Level 2 (Mike) → Status: IN_PROGRESS ✅
|
|
||||||
2. Validate: Not already completed ✅
|
|
||||||
3. Update Level 2:
|
|
||||||
- status: 'SKIPPED'
|
|
||||||
- is_skipped: TRUE
|
|
||||||
- skipped_at: NOW()
|
|
||||||
- skipped_by: initiator userId
|
|
||||||
- skip_reason: "Approver on extended leave"
|
|
||||||
4. Cancel TAT jobs for Level 2
|
|
||||||
5. Get Level 3 (Lisa)
|
|
||||||
6. Activate Level 3:
|
|
||||||
- status: 'IN_PROGRESS'
|
|
||||||
- levelStartTime: NOW()
|
|
||||||
- tatStartTime: NOW()
|
|
||||||
7. Schedule TAT jobs for Level 3
|
|
||||||
8. Update workflow.currentLevel = 3
|
|
||||||
9. Notify Lisa
|
|
||||||
10. Log activity: "Level 2 approver (Mike) was skipped"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Final State:**
|
|
||||||
```
|
|
||||||
Request: REQ-2025-001
|
|
||||||
Current Level: 3
|
|
||||||
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (SKIPPED) ⏭️ ← Skipped!
|
|
||||||
Level 3: Lisa (IN_PROGRESS) ⏳ ← Now active!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Example 2: Add Approver Between Levels**
|
|
||||||
|
|
||||||
**Initial State:**
|
|
||||||
```
|
|
||||||
Request: REQ-2025-001
|
|
||||||
Current Level: 2
|
|
||||||
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (IN_PROGRESS) ⏳
|
|
||||||
Level 3: Lisa (PENDING) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Action:**
|
|
||||||
```bash
|
|
||||||
# Add John at Level 3 (between Mike and Lisa)
|
|
||||||
POST /api/v1/workflows/REQ-2025-001/approvers/at-level
|
|
||||||
Body: {
|
|
||||||
"email": "john@example.com",
|
|
||||||
"tatHours": 48,
|
|
||||||
"level": 3
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Backend Processing:**
|
|
||||||
```typescript
|
|
||||||
1. Find user: john@example.com ✅
|
|
||||||
2. Validate: Not existing participant ✅
|
|
||||||
3. Validate: Level 3 ≥ minLevel (2) ✅
|
|
||||||
4. Get levels to shift: [Lisa (Level 3)]
|
|
||||||
5. Shift Lisa:
|
|
||||||
- Level 3 → Level 4
|
|
||||||
- levelName: "Level 4"
|
|
||||||
6. Create new Level 3:
|
|
||||||
- levelNumber: 3
|
|
||||||
- approverId: John's userId
|
|
||||||
- approverEmail: john@example.com
|
|
||||||
- tatHours: 48
|
|
||||||
- status: PENDING (not current level)
|
|
||||||
7. Update workflow.totalLevels: 3 → 4
|
|
||||||
8. Add John to participants (APPROVER type)
|
|
||||||
9. Notify John
|
|
||||||
10. Log activity: "John added as approver at Level 3 with TAT of 48 hours"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Final State:**
|
|
||||||
```
|
|
||||||
Request: REQ-2025-001
|
|
||||||
Current Level: 2
|
|
||||||
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (IN_PROGRESS) ⏳ ← Still working
|
|
||||||
Level 3: John (PENDING) ⏸️ ← NEW! Will review after Mike
|
|
||||||
Level 4: Lisa (PENDING) ⏸️ ← Shifted from 3 to 4
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Example 3: Complex Scenario - Skip and Add**
|
|
||||||
|
|
||||||
**Initial State:**
|
|
||||||
```
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (APPROVED) ✅
|
|
||||||
Level 3: David (IN_PROGRESS) ⏳ ← Taking too long
|
|
||||||
Level 4: Lisa (PENDING) ⏸️
|
|
||||||
Level 5: Tom (PENDING) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Action 1: Skip David**
|
|
||||||
```
|
|
||||||
Result:
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (APPROVED) ✅
|
|
||||||
Level 3: David (SKIPPED) ⏭️
|
|
||||||
Level 4: Lisa (IN_PROGRESS) ⏳ ← Now active
|
|
||||||
Level 5: Tom (PENDING) ⏸️
|
|
||||||
```
|
|
||||||
|
|
||||||
**Action 2: Add John at Level 4 (before Tom)**
|
|
||||||
```
|
|
||||||
Result:
|
|
||||||
Level 1: Sarah (APPROVED) ✅
|
|
||||||
Level 2: Mike (APPROVED) ✅
|
|
||||||
Level 3: David (SKIPPED) ⏭️
|
|
||||||
Level 4: Lisa (IN_PROGRESS) ⏳
|
|
||||||
Level 5: John (PENDING) ⏸️ ← NEW!
|
|
||||||
Level 6: Tom (PENDING) ⏸️ ← Shifted
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## UI/UX
|
|
||||||
|
|
||||||
### **RequestDetail - Workflow Tab:**
|
|
||||||
|
|
||||||
**Skip Button Visibility:**
|
|
||||||
- ✅ Shows for levels with status: `pending` or `in-review`
|
|
||||||
- ❌ Hidden for `approved`, `rejected`, `skipped`, or `waiting`
|
|
||||||
- ✅ Orange/amber styling to indicate caution
|
|
||||||
- ✅ Requires reason via prompt
|
|
||||||
|
|
||||||
**Button Appearance:**
|
|
||||||
```tsx
|
|
||||||
┌───────────────────────────────────────────┐
|
|
||||||
│ Level 2: Mike (In Review) │
|
|
||||||
│ TAT: 24h • Elapsed: 15h │
|
|
||||||
│ │
|
|
||||||
│ [⚠ Skip This Approver] │
|
|
||||||
│ Skip if approver is unavailable... │
|
|
||||||
└───────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **AddApproverModal - Enhanced UI:**
|
|
||||||
|
|
||||||
**Sections:**
|
|
||||||
1. **Current Levels** - Scrollable list showing all existing levels with status
|
|
||||||
2. **Level Selector** - Dropdown with available levels (grayed out completed levels)
|
|
||||||
3. **TAT Input** - Hours input with validation (1-720)
|
|
||||||
4. **Email Search** - @ mention search (existing)
|
|
||||||
|
|
||||||
**Features:**
|
|
||||||
- ✅ Auto-selects first available level
|
|
||||||
- ✅ Shows which existing level will be shifted
|
|
||||||
- ✅ Visual indicators for completed vs pending levels
|
|
||||||
- ✅ Prevents selecting invalid levels
|
|
||||||
- ✅ Real-time validation
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Activity Log Examples
|
|
||||||
|
|
||||||
### **Skip Approver Log:**
|
|
||||||
```
|
|
||||||
Action: Approver Skipped
|
|
||||||
Details: Level 2 approver (Mike Johnson) was skipped by Sarah Smith.
|
|
||||||
Reason: Approver on extended leave
|
|
||||||
Timestamp: 2025-11-05 10:30:00
|
|
||||||
User: Sarah Smith (Initiator)
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Add Approver Log:**
|
|
||||||
```
|
|
||||||
Action: Added new approver
|
|
||||||
Details: John Doe (john@example.com) has been added as approver at
|
|
||||||
Level 3 with TAT of 48 hours by Sarah Smith
|
|
||||||
Timestamp: 2025-11-05 11:15:00
|
|
||||||
User: Sarah Smith (Initiator)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Notifications
|
|
||||||
|
|
||||||
### **Skip Approver Notifications:**
|
|
||||||
|
|
||||||
**To Next Approver:**
|
|
||||||
```
|
|
||||||
Title: Request Escalated
|
|
||||||
Body: Previous approver was skipped. Request REQ-2025-001 is now
|
|
||||||
awaiting your approval.
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Add Approver Notifications:**
|
|
||||||
|
|
||||||
**To New Approver:**
|
|
||||||
```
|
|
||||||
Title: New Request Assignment
|
|
||||||
Body: You have been added as Level 3 approver to request REQ-2025-001:
|
|
||||||
New Office Location Approval
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## TAT Handling
|
|
||||||
|
|
||||||
### **Skip Approver:**
|
|
||||||
```typescript
|
|
||||||
// Skipped level's TAT jobs are cancelled
|
|
||||||
await tatSchedulerService.cancelTatJobs(requestId, skippedLevelId);
|
|
||||||
|
|
||||||
// Next level's TAT jobs are scheduled
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
nextLevelId,
|
|
||||||
nextApproverId,
|
|
||||||
nextLevelTatHours,
|
|
||||||
now,
|
|
||||||
workflowPriority
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Add Approver:**
|
|
||||||
```typescript
|
|
||||||
// If new approver is at current level, schedule TAT immediately
|
|
||||||
if (newLevel === currentLevel) {
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
newLevelId,
|
|
||||||
newApproverId,
|
|
||||||
tatHours,
|
|
||||||
now,
|
|
||||||
workflowPriority
|
|
||||||
);
|
|
||||||
}
|
|
||||||
// Otherwise, jobs will be scheduled when level becomes active
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Scenarios
|
|
||||||
|
|
||||||
### **Test 1: Skip Current Approver**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Create workflow with 3 approvers
|
|
||||||
# 2. Level 1 approves
|
|
||||||
# 3. Level 2 receives notification
|
|
||||||
# 4. Level 2 doesn't respond for extended time
|
|
||||||
# 5. Initiator clicks "Skip This Approver"
|
|
||||||
# 6. Provide reason: "On vacation"
|
|
||||||
# 7. Verify:
|
|
||||||
# ✅ Level 2 status = SKIPPED
|
|
||||||
# ✅ Level 3 status = IN_PROGRESS
|
|
||||||
# ✅ Level 3 receives notification
|
|
||||||
# ✅ TAT jobs scheduled for Level 3
|
|
||||||
# ✅ Activity logged
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 2: Add Approver at Middle Level**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Workflow has 3 levels
|
|
||||||
# 2. Level 1 approved
|
|
||||||
# 3. Click "Add Approver"
|
|
||||||
# 4. Select Level 2 (between current levels)
|
|
||||||
# 5. Enter TAT: 48
|
|
||||||
# 6. Enter email: new@example.com
|
|
||||||
# 7. Submit
|
|
||||||
# 8. Verify:
|
|
||||||
# ✅ Old Level 2 becomes Level 3
|
|
||||||
# ✅ Old Level 3 becomes Level 4
|
|
||||||
# ✅ New approver at Level 2
|
|
||||||
# ✅ totalLevels increased by 1
|
|
||||||
# ✅ New approver receives notification
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Test 3: Cannot Add Before Completed Level**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Workflow: Level 1 (Approved), Level 2 (Pending)
|
|
||||||
# 2. Try to add at Level 1
|
|
||||||
# 3. Modal shows: "Minimum allowed level is 2"
|
|
||||||
# 4. Level 1 is grayed out in selector
|
|
||||||
# 5. Cannot submit ✅
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
|
|
||||||
### **Backend:**
|
|
||||||
1. `Re_Backend/src/migrations/add_is_skipped_to_approval_levels.sql` - Database migration
|
|
||||||
2. `Re_Backend/src/services/workflow.service.ts` - Skip and add approver logic
|
|
||||||
3. `Re_Backend/src/routes/workflow.routes.ts` - API endpoints
|
|
||||||
|
|
||||||
### **Frontend:**
|
|
||||||
4. `Re_Figma_Code/src/services/workflowApi.ts` - API client methods
|
|
||||||
5. `Re_Figma_Code/src/components/participant/AddApproverModal/AddApproverModal.tsx` - Enhanced modal
|
|
||||||
6. `Re_Figma_Code/src/pages/RequestDetail/RequestDetail.tsx` - Skip button and handlers
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Summary
|
|
||||||
|
|
||||||
| Feature | Description | Benefit |
|
|
||||||
|---------|-------------|---------|
|
|
||||||
| **Skip Approver** | Mark approver as skipped, move to next | Handle unavailable approvers |
|
|
||||||
| **Add at Level** | Insert approver at specific position | Flexible workflow modification |
|
|
||||||
| **Auto Shifting** | Existing levels automatically renumbered | No manual level management |
|
|
||||||
| **Smart Validation** | Cannot modify completed levels | Data integrity |
|
|
||||||
| **TAT Management** | Jobs cancelled/scheduled automatically | Accurate time tracking |
|
|
||||||
| **Activity Logging** | All actions tracked in audit trail | Full transparency |
|
|
||||||
| **Notifications** | Affected users notified automatically | Keep everyone informed |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Benefits
|
|
||||||
|
|
||||||
1. ✅ **Flexibility** - Handle real-world workflow changes
|
|
||||||
2. ✅ **No Bottlenecks** - Skip unavailable approvers
|
|
||||||
3. ✅ **Dynamic Addition** - Add approvers mid-workflow
|
|
||||||
4. ✅ **Data Integrity** - Cannot modify completed levels
|
|
||||||
5. ✅ **Audit Trail** - Full history of all changes
|
|
||||||
6. ✅ **Automatic Notifications** - All affected parties notified
|
|
||||||
7. ✅ **TAT Accuracy** - Time tracking updated correctly
|
|
||||||
8. ✅ **User-Friendly** - Intuitive UI with clear feedback
|
|
||||||
|
|
||||||
The approval workflow is now fully dynamic and can adapt to changing business needs! 🚀
|
|
||||||
|
|
||||||
@ -1,524 +0,0 @@
|
|||||||
# ✅ Smart Migration System Complete
|
|
||||||
|
|
||||||
## 🎯 What You Asked For
|
|
||||||
|
|
||||||
> "Every time if I do npm run dev, migrations are running right? If that already exist then skip, if it is new tables then do migrations"
|
|
||||||
|
|
||||||
**✅ DONE!** Your migration system is now intelligent and efficient.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧠 How It Works Now
|
|
||||||
|
|
||||||
### Smart Migration Tracking
|
|
||||||
|
|
||||||
The system now includes:
|
|
||||||
|
|
||||||
1. **🗃️ Migrations Tracking Table**
|
|
||||||
- Automatically created on first run
|
|
||||||
- Stores which migrations have been executed
|
|
||||||
- Prevents duplicate execution
|
|
||||||
|
|
||||||
2. **⏭️ Smart Detection**
|
|
||||||
- Checks which migrations already ran
|
|
||||||
- Only executes **new/pending** migrations
|
|
||||||
- Skips already-completed ones
|
|
||||||
|
|
||||||
3. **🛡️ Idempotent Migrations**
|
|
||||||
- Safe to run multiple times
|
|
||||||
- Checks if tables/columns exist before creating
|
|
||||||
- No errors if schema already matches
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 What Happens When You Run `npm run dev`
|
|
||||||
|
|
||||||
### First Time (Fresh Database)
|
|
||||||
```
|
|
||||||
📦 Database connected
|
|
||||||
✅ Created migrations tracking table
|
|
||||||
🔄 Running 14 pending migration(s)...
|
|
||||||
|
|
||||||
⏳ Running: 2025103001-create-workflow-requests
|
|
||||||
✅ Created workflow_requests table
|
|
||||||
✅ Completed: 2025103001-create-workflow-requests
|
|
||||||
|
|
||||||
⏳ Running: 2025103002-create-approval-levels
|
|
||||||
✅ Created approval_levels table
|
|
||||||
✅ Completed: 2025103002-create-approval-levels
|
|
||||||
|
|
||||||
... (all 14 migrations run)
|
|
||||||
|
|
||||||
✅ Successfully applied 14 migration(s)
|
|
||||||
📊 Total migrations: 14
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
```
|
|
||||||
|
|
||||||
### Second Time (All Migrations Already Run)
|
|
||||||
```
|
|
||||||
📦 Database connected
|
|
||||||
✅ All migrations are up-to-date (no new migrations to run)
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
```
|
|
||||||
**⚡ Instant startup! No migration overhead!**
|
|
||||||
|
|
||||||
### When You Add a New Migration
|
|
||||||
```
|
|
||||||
📦 Database connected
|
|
||||||
🔄 Running 1 pending migration(s)...
|
|
||||||
|
|
||||||
⏳ Running: 20251106-new-feature
|
|
||||||
✅ Added new column
|
|
||||||
✅ Completed: 20251106-new-feature
|
|
||||||
|
|
||||||
✅ Successfully applied 1 migration(s)
|
|
||||||
📊 Total migrations: 15
|
|
||||||
🚀 Server running on port 5000
|
|
||||||
```
|
|
||||||
**Only the NEW migration runs!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔧 Technical Implementation
|
|
||||||
|
|
||||||
### 1. Migration Tracking Database
|
|
||||||
|
|
||||||
Automatically created table:
|
|
||||||
```sql
|
|
||||||
CREATE TABLE migrations (
|
|
||||||
id SERIAL PRIMARY KEY,
|
|
||||||
name VARCHAR(255) NOT NULL UNIQUE,
|
|
||||||
executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
Tracks:
|
|
||||||
- ✅ Which migrations have been executed
|
|
||||||
- ✅ When they were executed
|
|
||||||
- ✅ Prevents duplicate execution via UNIQUE constraint
|
|
||||||
|
|
||||||
### 2. Smart Migration Runner
|
|
||||||
|
|
||||||
**File**: `src/scripts/migrate.ts`
|
|
||||||
|
|
||||||
**Key Features**:
|
|
||||||
```typescript
|
|
||||||
// 1. Check what's already been run
|
|
||||||
const executedMigrations = await getExecutedMigrations();
|
|
||||||
|
|
||||||
// 2. Find only new/pending migrations
|
|
||||||
const pendingMigrations = migrations.filter(
|
|
||||||
m => !executedMigrations.includes(m.name)
|
|
||||||
);
|
|
||||||
|
|
||||||
// 3. Skip if nothing to do
|
|
||||||
if (pendingMigrations.length === 0) {
|
|
||||||
console.log('✅ All migrations up-to-date');
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
// 4. Run only pending migrations
|
|
||||||
for (const migration of pendingMigrations) {
|
|
||||||
await migration.module.up(queryInterface);
|
|
||||||
await markMigrationExecuted(migration.name);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Idempotent Migrations
|
|
||||||
|
|
||||||
**Example**: `20251105-add-skip-fields-to-approval-levels.ts`
|
|
||||||
|
|
||||||
**Checks before acting**:
|
|
||||||
```typescript
|
|
||||||
// Check if table exists
|
|
||||||
const tables = await queryInterface.showAllTables();
|
|
||||||
if (!tables.includes('approval_levels')) {
|
|
||||||
return; // Skip if table doesn't exist
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if column exists
|
|
||||||
const tableDescription = await queryInterface.describeTable('approval_levels');
|
|
||||||
if (!tableDescription.is_skipped) {
|
|
||||||
await queryInterface.addColumn(/* ... */);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if index exists
|
|
||||||
const indexes = await queryInterface.showIndex('approval_levels');
|
|
||||||
const indexExists = indexes.some(idx => idx.name === 'idx_name');
|
|
||||||
if (!indexExists) {
|
|
||||||
await queryInterface.addIndex(/* ... */);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Safe to run multiple times!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Usage Examples
|
|
||||||
|
|
||||||
### Daily Development Workflow
|
|
||||||
```bash
|
|
||||||
# Morning - start work
|
|
||||||
npm run dev
|
|
||||||
# ✅ All up-to-date - server starts immediately
|
|
||||||
|
|
||||||
# After pulling new code with migration
|
|
||||||
git pull origin main
|
|
||||||
npm run dev
|
|
||||||
# 🔄 Runs only the new migration
|
|
||||||
# ✅ Server starts
|
|
||||||
```
|
|
||||||
|
|
||||||
### Adding a New Migration
|
|
||||||
```bash
|
|
||||||
# 1. Create migration file
|
|
||||||
# src/migrations/20251106-add-user-preferences.ts
|
|
||||||
|
|
||||||
# 2. Register in migrate.ts
|
|
||||||
# (add import and execution)
|
|
||||||
|
|
||||||
# 3. Test
|
|
||||||
npm run dev
|
|
||||||
# 🔄 Runs only your new migration
|
|
||||||
|
|
||||||
# 4. Run again to verify idempotency
|
|
||||||
npm run dev
|
|
||||||
# ✅ All up-to-date (doesn't run again)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Manual Migration Run
|
|
||||||
```bash
|
|
||||||
npm run migrate
|
|
||||||
# Same smart behavior, without starting server
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Current Migration Status
|
|
||||||
|
|
||||||
All 14 migrations are now tracked:
|
|
||||||
|
|
||||||
| # | Migration | Status |
|
|
||||||
|---|-----------|--------|
|
|
||||||
| 1 | 2025103001-create-workflow-requests | ✅ Tracked |
|
|
||||||
| 2 | 2025103002-create-approval-levels | ✅ Tracked |
|
|
||||||
| 3 | 2025103003-create-participants | ✅ Tracked |
|
|
||||||
| 4 | 2025103004-create-documents | ✅ Tracked |
|
|
||||||
| 5 | 20251031_01_create_subscriptions | ✅ Tracked |
|
|
||||||
| 6 | 20251031_02_create_activities | ✅ Tracked |
|
|
||||||
| 7 | 20251031_03_create_work_notes | ✅ Tracked |
|
|
||||||
| 8 | 20251031_04_create_work_note_attachments | ✅ Tracked |
|
|
||||||
| 9 | 20251104-add-tat-alert-fields | ✅ Tracked |
|
|
||||||
| 10 | 20251104-create-tat-alerts | ✅ Tracked |
|
|
||||||
| 11 | 20251104-create-kpi-views | ✅ Tracked |
|
|
||||||
| 12 | 20251104-create-holidays | ✅ Tracked |
|
|
||||||
| 13 | 20251104-create-admin-config | ✅ Tracked |
|
|
||||||
| 14 | 20251105-add-skip-fields-to-approval-levels | ✅ Tracked & Idempotent |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✨ Key Benefits
|
|
||||||
|
|
||||||
### For You (Developer)
|
|
||||||
- ✅ **Fast Restarts** - No waiting for already-run migrations
|
|
||||||
- ✅ **No Errors** - Safe to run `npm run dev` anytime
|
|
||||||
- ✅ **Auto-Detection** - System knows what's new
|
|
||||||
- ✅ **Zero Configuration** - Just works
|
|
||||||
|
|
||||||
### For Team
|
|
||||||
- ✅ **Consistent State** - Everyone's database in sync
|
|
||||||
- ✅ **Easy Onboarding** - New devs run once, all migrates
|
|
||||||
- ✅ **No Coordination** - No "did you run migrations?" questions
|
|
||||||
- ✅ **Pull & Run** - Git pull + npm run dev = ready
|
|
||||||
|
|
||||||
### For Production
|
|
||||||
- ✅ **Safe Deployments** - Won't break if run multiple times
|
|
||||||
- ✅ **Version Control** - Clear migration history
|
|
||||||
- ✅ **Rollback Support** - Each migration has down() function
|
|
||||||
- ✅ **Audit Trail** - migrations table shows execution history
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Best Practices Implemented
|
|
||||||
|
|
||||||
### 1. Idempotency
|
|
||||||
✅ All migrations check existence before creating
|
|
||||||
✅ Safe to run multiple times
|
|
||||||
✅ No duplicate errors
|
|
||||||
|
|
||||||
### 2. Tracking
|
|
||||||
✅ Dedicated migrations table
|
|
||||||
✅ Unique constraint prevents duplicates
|
|
||||||
✅ Timestamp for audit trail
|
|
||||||
|
|
||||||
### 3. Smart Execution
|
|
||||||
✅ Only runs pending migrations
|
|
||||||
✅ Maintains execution order
|
|
||||||
✅ Fails fast on errors
|
|
||||||
|
|
||||||
### 4. Developer Experience
|
|
||||||
✅ Clear console output
|
|
||||||
✅ Progress indicators
|
|
||||||
✅ Helpful error messages
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 Adding New Migrations
|
|
||||||
|
|
||||||
### Template for Idempotent Migrations
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
import { QueryInterface, DataTypes } from 'sequelize';
|
|
||||||
|
|
||||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// 1. Check if table exists (for new tables)
|
|
||||||
const tables = await queryInterface.showAllTables();
|
|
||||||
if (!tables.includes('my_table')) {
|
|
||||||
await queryInterface.createTable('my_table', {/* ... */});
|
|
||||||
console.log(' ✅ Created my_table');
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
// 2. Check if column exists (for new columns)
|
|
||||||
const tableDesc = await queryInterface.describeTable('existing_table');
|
|
||||||
if (!tableDesc.new_column) {
|
|
||||||
await queryInterface.addColumn('existing_table', 'new_column', {
|
|
||||||
type: DataTypes.STRING
|
|
||||||
});
|
|
||||||
console.log(' ✅ Added new_column');
|
|
||||||
}
|
|
||||||
|
|
||||||
// 3. Check if index exists (for new indexes)
|
|
||||||
try {
|
|
||||||
const indexes: any[] = await queryInterface.showIndex('my_table') as any[];
|
|
||||||
const indexExists = Array.isArray(indexes) &&
|
|
||||||
indexes.some((idx: any) => idx.name === 'idx_name');
|
|
||||||
|
|
||||||
if (!indexExists) {
|
|
||||||
await queryInterface.addIndex('my_table', ['column'], {
|
|
||||||
name: 'idx_name'
|
|
||||||
});
|
|
||||||
console.log(' ✅ Added idx_name');
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
console.log(' ℹ️ Index handling skipped');
|
|
||||||
}
|
|
||||||
|
|
||||||
console.log('✅ Migration completed');
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
|
||||||
// Rollback logic
|
|
||||||
await queryInterface.removeColumn('my_table', 'new_column');
|
|
||||||
console.log('✅ Rollback completed');
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Steps to Add New Migration
|
|
||||||
|
|
||||||
1. **Create File**: `src/migrations/YYYYMMDD-description.ts`
|
|
||||||
2. **Write Migration**: Use idempotent template above
|
|
||||||
3. **Register**: Add to `src/scripts/migrate.ts`:
|
|
||||||
```typescript
|
|
||||||
import * as m15 from '../migrations/20251106-description';
|
|
||||||
|
|
||||||
const migrations: Migration[] = [
|
|
||||||
// ... existing ...
|
|
||||||
{ name: '20251106-description', module: m15 },
|
|
||||||
];
|
|
||||||
```
|
|
||||||
4. **Test**: Run `npm run dev` - only new migration executes
|
|
||||||
5. **Verify**: Run `npm run dev` again - should skip (already executed)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Testing the System
|
|
||||||
|
|
||||||
### Test 1: First Run
|
|
||||||
```bash
|
|
||||||
# Drop database (if testing)
|
|
||||||
# Then run:
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Expected: All 14 migrations run
|
|
||||||
# migrations table created
|
|
||||||
# Server starts
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test 2: Second Run
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Expected: "All migrations up-to-date"
|
|
||||||
# No migrations run
|
|
||||||
# Instant server start
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test 3: New Migration
|
|
||||||
```bash
|
|
||||||
# Add migration #15
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# Expected: Only migration #15 runs
|
|
||||||
# Shows "Running 1 pending migration"
|
|
||||||
# Server starts
|
|
||||||
```
|
|
||||||
|
|
||||||
### Test 4: Verify Tracking
|
|
||||||
```bash
|
|
||||||
# In PostgreSQL:
|
|
||||||
SELECT * FROM migrations ORDER BY id;
|
|
||||||
|
|
||||||
# Should show all executed migrations with timestamps
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Monitoring Migration Status
|
|
||||||
|
|
||||||
### Check Database Directly
|
|
||||||
```sql
|
|
||||||
-- See all executed migrations
|
|
||||||
SELECT id, name, executed_at
|
|
||||||
FROM migrations
|
|
||||||
ORDER BY id;
|
|
||||||
|
|
||||||
-- Count migrations
|
|
||||||
SELECT COUNT(*) as total_migrations FROM migrations;
|
|
||||||
|
|
||||||
-- Latest migration
|
|
||||||
SELECT name, executed_at
|
|
||||||
FROM migrations
|
|
||||||
ORDER BY id DESC
|
|
||||||
LIMIT 1;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check via Application
|
|
||||||
```bash
|
|
||||||
# Run migration script
|
|
||||||
npm run migrate
|
|
||||||
|
|
||||||
# Output shows:
|
|
||||||
# - Total migrations in code
|
|
||||||
# - Already executed count
|
|
||||||
# - Pending count
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚨 Troubleshooting
|
|
||||||
|
|
||||||
### Issue: "Table already exists"
|
|
||||||
**Solution**: This shouldn't happen now! But if it does:
|
|
||||||
- Migration might not be idempotent
|
|
||||||
- Add table existence check
|
|
||||||
- See idempotent template above
|
|
||||||
|
|
||||||
### Issue: "Column already exists"
|
|
||||||
**Solution**: Add column existence check:
|
|
||||||
```typescript
|
|
||||||
const tableDesc = await queryInterface.describeTable('table');
|
|
||||||
if (!tableDesc.column_name) {
|
|
||||||
await queryInterface.addColumn(/* ... */);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Issue: Migration runs every time
|
|
||||||
**Cause**: Not being marked as executed
|
|
||||||
**Check**:
|
|
||||||
```sql
|
|
||||||
SELECT * FROM migrations WHERE name = 'migration-name';
|
|
||||||
```
|
|
||||||
If missing, the marking step failed.
|
|
||||||
|
|
||||||
### Issue: Need to rerun a migration
|
|
||||||
**Solution**:
|
|
||||||
```sql
|
|
||||||
-- Remove from tracking (use with caution!)
|
|
||||||
DELETE FROM migrations WHERE name = 'migration-name';
|
|
||||||
|
|
||||||
-- Then run
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 System Architecture
|
|
||||||
|
|
||||||
```
|
|
||||||
npm run dev
|
|
||||||
↓
|
|
||||||
migrate.ts runs
|
|
||||||
↓
|
|
||||||
Check: migrations table exists?
|
|
||||||
↓ No → Create it
|
|
||||||
↓ Yes → Continue
|
|
||||||
↓
|
|
||||||
Query: SELECT * FROM migrations
|
|
||||||
↓
|
|
||||||
Compare: Code migrations vs DB migrations
|
|
||||||
↓
|
|
||||||
Pending = Code - DB
|
|
||||||
↓
|
|
||||||
If pending = 0
|
|
||||||
↓ → "All up-to-date" → Start server
|
|
||||||
↓
|
|
||||||
If pending > 0
|
|
||||||
↓
|
|
||||||
For each pending migration:
|
|
||||||
↓
|
|
||||||
Run migration.up()
|
|
||||||
↓
|
|
||||||
INSERT INTO migrations
|
|
||||||
↓
|
|
||||||
Mark as complete
|
|
||||||
↓
|
|
||||||
All done → Start server
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Summary
|
|
||||||
|
|
||||||
### What Changed
|
|
||||||
|
|
||||||
| Before | After |
|
|
||||||
|--------|-------|
|
|
||||||
| All migrations run every time | Only new migrations run |
|
|
||||||
| Errors if tables exist | Smart checks prevent errors |
|
|
||||||
| No tracking | Migrations table tracks history |
|
|
||||||
| Slow restarts | Fast restarts |
|
|
||||||
| Manual coordination needed | Automatic detection |
|
|
||||||
|
|
||||||
### What You Get
|
|
||||||
|
|
||||||
✅ **Smart Detection** - Knows what's already been run
|
|
||||||
✅ **Fast Execution** - Only runs new migrations
|
|
||||||
✅ **Error Prevention** - Idempotent checks
|
|
||||||
✅ **Clear Feedback** - Detailed console output
|
|
||||||
✅ **Audit Trail** - migrations table for history
|
|
||||||
✅ **Team-Friendly** - Everyone stays in sync automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 You're Ready!
|
|
||||||
|
|
||||||
Just run:
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**First time**: All migrations execute, database is set up
|
|
||||||
**Every time after**: Lightning fast, only new migrations run
|
|
||||||
**Pull new code**: Automatically detects and runs new migrations
|
|
||||||
|
|
||||||
**No manual steps. No coordination needed. Just works!** ✨
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**System**: Smart Migration Tracking ✅
|
|
||||||
**Idempotency**: Enabled ✅
|
|
||||||
**Auto-Detect**: Active ✅
|
|
||||||
**Status**: Production Ready 🟢
|
|
||||||
**Date**: November 5, 2025
|
|
||||||
|
|
||||||
209
START_HERE.md
209
START_HERE.md
@ -1,209 +0,0 @@
|
|||||||
# 🎯 START HERE - TAT Notifications Setup
|
|
||||||
|
|
||||||
## What You Need to Do RIGHT NOW
|
|
||||||
|
|
||||||
### ⚡ 2-Minute Setup (Upstash Redis)
|
|
||||||
|
|
||||||
1. **Open this link**: https://console.upstash.com/
|
|
||||||
- Sign up with GitHub/Google (it's free)
|
|
||||||
|
|
||||||
2. **Create Redis Database**:
|
|
||||||
- Click "Create Database"
|
|
||||||
- Name: `redis-tat-dev`
|
|
||||||
- Type: Regional
|
|
||||||
- Region: Pick closest to you
|
|
||||||
- Click "Create"
|
|
||||||
|
|
||||||
3. **Copy the Redis URL**:
|
|
||||||
- You'll see: `rediss://default:AbC123xyz...@us1-mighty-12345.upstash.io:6379`
|
|
||||||
- Click the copy button 📋
|
|
||||||
|
|
||||||
4. **Open** `Re_Backend/.env` and add:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:AbC123xyz...@us1-mighty-12345.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
5. **Restart Backend**:
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
6. **Look for this** in the logs:
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Initialized and listening
|
|
||||||
⏰ TAT Configuration:
|
|
||||||
- Test Mode: ENABLED (1 hour = 1 minute)
|
|
||||||
```
|
|
||||||
|
|
||||||
✅ **DONE!** You're ready to test!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Test It Now (6 Minutes)
|
|
||||||
|
|
||||||
1. **Create a workflow request** via your frontend
|
|
||||||
2. **Set TAT: 6 hours** (will become 6 minutes in test mode)
|
|
||||||
3. **Submit the request**
|
|
||||||
4. **Watch for notifications**:
|
|
||||||
- **3 minutes**: ⏳ 50% notification
|
|
||||||
- **4.5 minutes**: ⚠️ 75% warning
|
|
||||||
- **6 minutes**: ⏰ 100% breach
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Verify It's Working
|
|
||||||
|
|
||||||
### Check Backend Logs:
|
|
||||||
```bash
|
|
||||||
# You should see:
|
|
||||||
[TAT Scheduler] Calculating TAT milestones...
|
|
||||||
[TAT Scheduler] ✅ TAT jobs scheduled
|
|
||||||
[TAT Processor] Processing tat50...
|
|
||||||
[TAT Processor] tat50 notification sent
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Upstash Console:
|
|
||||||
1. Go to https://console.upstash.com/
|
|
||||||
2. Click your database
|
|
||||||
3. Click "CLI" tab
|
|
||||||
4. Type: `KEYS bull:tatQueue:*`
|
|
||||||
5. Should see your scheduled jobs
|
|
||||||
|
|
||||||
### Check Database:
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
approver_name,
|
|
||||||
tat50_alert_sent,
|
|
||||||
tat75_alert_sent,
|
|
||||||
tat_breached,
|
|
||||||
status
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE status = 'IN_PROGRESS';
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## What Test Mode Does
|
|
||||||
|
|
||||||
```
|
|
||||||
Normal Mode: Test Mode:
|
|
||||||
48 hours → 48 minutes
|
|
||||||
24 hours → 24 minutes
|
|
||||||
6 hours → 6 minutes
|
|
||||||
2 hours → 2 minutes
|
|
||||||
|
|
||||||
✅ Perfect for quick testing!
|
|
||||||
✅ Turn off for production: TAT_TEST_MODE=false
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### ❌ "ECONNREFUSED" Error?
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Check your `.env` file has `REDIS_URL=rediss://...`
|
|
||||||
2. Verify the URL is correct (copy from Upstash again)
|
|
||||||
3. Make sure it starts with `rediss://` (double 's')
|
|
||||||
4. Restart backend: `npm run dev`
|
|
||||||
|
|
||||||
### ❌ No Logs About Redis?
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Check `.env` file exists in `Re_Backend/` folder
|
|
||||||
2. Make sure you restarted the backend
|
|
||||||
3. Look for any errors in console
|
|
||||||
|
|
||||||
### ❌ Jobs Not Running?
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Verify `TAT_TEST_MODE=true` in `.env`
|
|
||||||
2. Make sure request is SUBMITTED (not just created)
|
|
||||||
3. Check Upstash Console → Metrics (see if commands are running)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Next Steps
|
|
||||||
|
|
||||||
Once you see the first notification working:
|
|
||||||
|
|
||||||
1. ✅ Test multi-level approvals
|
|
||||||
2. ✅ Test early approval (jobs should cancel)
|
|
||||||
3. ✅ Test rejection flow
|
|
||||||
4. ✅ Check activity logs
|
|
||||||
5. ✅ Verify database flags
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Documentation
|
|
||||||
|
|
||||||
- **Quick Start**: `TAT_QUICK_START.md`
|
|
||||||
- **Upstash Guide**: `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
- **Full System Docs**: `docs/TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
- **Testing Guide**: `docs/TAT_TESTING_GUIDE.md`
|
|
||||||
- **Quick Reference**: `UPSTASH_QUICK_REFERENCE.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Why Upstash?
|
|
||||||
|
|
||||||
✅ **No installation** (works on Windows immediately)
|
|
||||||
✅ **100% free** for development
|
|
||||||
✅ **Same setup** for production
|
|
||||||
✅ **No maintenance** required
|
|
||||||
✅ **Fast** (global CDN)
|
|
||||||
✅ **Secure** (TLS by default)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Production Deployment
|
|
||||||
|
|
||||||
When ready for production:
|
|
||||||
|
|
||||||
1. Keep using Upstash OR install Redis on Linux server:
|
|
||||||
```bash
|
|
||||||
sudo apt install redis-server -y
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Update `.env` on server:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=redis://localhost:6379 # or keep Upstash URL
|
|
||||||
TAT_TEST_MODE=false # Use real hours
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Deploy and monitor!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Need Help?
|
|
||||||
|
|
||||||
**Upstash Console**: https://console.upstash.com/
|
|
||||||
**Our Docs**: See `docs/` folder
|
|
||||||
**Redis Commands**: Use Upstash Console CLI tab
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Status Checklist
|
|
||||||
|
|
||||||
- [ ] Upstash account created
|
|
||||||
- [ ] Redis database created
|
|
||||||
- [ ] REDIS_URL copied to `.env`
|
|
||||||
- [ ] TAT_TEST_MODE=true set
|
|
||||||
- [ ] Backend restarted
|
|
||||||
- [ ] Logs show "Connected to Redis"
|
|
||||||
- [ ] Test request created and submitted
|
|
||||||
- [ ] First notification received
|
|
||||||
|
|
||||||
✅ **All done? Congratulations!** 🎉
|
|
||||||
|
|
||||||
Your TAT notification system is now LIVE!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Team**: Royal Enfield Workflow
|
|
||||||
|
|
||||||
@ -1,591 +0,0 @@
|
|||||||
# ✅ TAT Alerts Display System - Complete Implementation
|
|
||||||
|
|
||||||
## 🎉 What's Been Implemented
|
|
||||||
|
|
||||||
Your TAT notification system now **stores every alert** in the database and **displays them in the UI** exactly like your shared screenshot!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Complete Flow
|
|
||||||
|
|
||||||
### 1. When Request is Submitted
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// First level approver assigned
|
|
||||||
Level 1: John (TAT: 24 hours)
|
|
||||||
↓
|
|
||||||
TAT jobs scheduled for John:
|
|
||||||
- 50% alert (12 hours)
|
|
||||||
- 75% alert (18 hours)
|
|
||||||
- 100% breach (24 hours)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. When Notification Fires (e.g., 50%)
|
|
||||||
|
|
||||||
**Backend (`tatProcessor.ts`):**
|
|
||||||
```typescript
|
|
||||||
✅ Send notification to John
|
|
||||||
✅ Create record in tat_alerts table
|
|
||||||
✅ Log activity
|
|
||||||
✅ Update approval_levels flags
|
|
||||||
```
|
|
||||||
|
|
||||||
**Database Record Created:**
|
|
||||||
```sql
|
|
||||||
INSERT INTO tat_alerts (
|
|
||||||
request_id, level_id, approver_id,
|
|
||||||
alert_type = 'TAT_50',
|
|
||||||
threshold_percentage = 50,
|
|
||||||
alert_message = '⏳ 50% of TAT elapsed...',
|
|
||||||
alert_sent_at = NOW(),
|
|
||||||
...
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. When Displayed in Frontend
|
|
||||||
|
|
||||||
**API Response** (`workflow.service.ts`):
|
|
||||||
```typescript
|
|
||||||
{
|
|
||||||
workflow: {...},
|
|
||||||
approvals: [...],
|
|
||||||
tatAlerts: [ // ← NEW!
|
|
||||||
{
|
|
||||||
alertType: 'TAT_50',
|
|
||||||
thresholdPercentage: 50,
|
|
||||||
alertSentAt: '2024-10-06T14:30:00Z',
|
|
||||||
alertMessage: '⏳ 50% of TAT elapsed...',
|
|
||||||
levelId: 'abc-123',
|
|
||||||
...
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Frontend Display** (`RequestDetail.tsx`):
|
|
||||||
```tsx
|
|
||||||
<div className="bg-yellow-50 border-yellow-200 p-3 rounded-lg">
|
|
||||||
⏳ Reminder 1
|
|
||||||
50% of SLA breach reminder have been sent
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: Oct 6 at 2:30 PM
|
|
||||||
</div>
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎨 UI Display (Matches Your Screenshot)
|
|
||||||
|
|
||||||
### Reminder Card Styling:
|
|
||||||
|
|
||||||
**50% Alert (⏳):**
|
|
||||||
- Background: `bg-yellow-50`
|
|
||||||
- Border: `border-yellow-200`
|
|
||||||
- Icon: ⏳
|
|
||||||
|
|
||||||
**75% Alert (⚠️):**
|
|
||||||
- Background: `bg-orange-50`
|
|
||||||
- Border: `border-orange-200`
|
|
||||||
- Icon: ⚠️
|
|
||||||
|
|
||||||
**100% Breach (⏰):**
|
|
||||||
- Background: `bg-red-50`
|
|
||||||
- Border: `border-red-200`
|
|
||||||
- Icon: ⏰
|
|
||||||
|
|
||||||
### Display Format:
|
|
||||||
|
|
||||||
```
|
|
||||||
┌─────────────────────────────────────────┐
|
|
||||||
│ ⏳ Reminder 1 │
|
|
||||||
│ │
|
|
||||||
│ 50% of SLA breach reminder have been │
|
|
||||||
│ sent │
|
|
||||||
│ │
|
|
||||||
│ Reminder sent by system automatically │
|
|
||||||
│ │
|
|
||||||
│ Sent at: Oct 6 at 2:30 PM │
|
|
||||||
└─────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📍 Where Alerts Appear
|
|
||||||
|
|
||||||
### In Workflow Tab:
|
|
||||||
|
|
||||||
Alerts appear **under each approval level card** in the workflow tab:
|
|
||||||
|
|
||||||
```
|
|
||||||
┌────────────────────────────────────────┐
|
|
||||||
│ Step 2: Lisa Wong (Finance Manager) │
|
|
||||||
│ Status: pending │
|
|
||||||
│ TAT: 12 hours │
|
|
||||||
│ │
|
|
||||||
│ ⏳ Reminder 1 │ ← TAT Alert #1
|
|
||||||
│ 50% of SLA breach reminder... │
|
|
||||||
│ Sent at: Oct 6 at 2:30 PM │
|
|
||||||
│ │
|
|
||||||
│ ⚠️ Reminder 2 │ ← TAT Alert #2
|
|
||||||
│ 75% of SLA breach reminder... │
|
|
||||||
│ Sent at: Oct 6 at 6:30 PM │
|
|
||||||
└────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔄 Complete Data Flow
|
|
||||||
|
|
||||||
### Backend:
|
|
||||||
|
|
||||||
1. **TAT Processor** (`tatProcessor.ts`):
|
|
||||||
- Sends notification to approver
|
|
||||||
- Creates record in `tat_alerts` table
|
|
||||||
- Logs activity
|
|
||||||
|
|
||||||
2. **Workflow Service** (`workflow.service.ts`):
|
|
||||||
- Fetches TAT alerts for request
|
|
||||||
- Includes in API response
|
|
||||||
- Groups by level ID
|
|
||||||
|
|
||||||
3. **Approval Service** (`approval.service.ts`):
|
|
||||||
- Updates alerts when level completed
|
|
||||||
- Sets `was_completed_on_time`
|
|
||||||
- Sets `completion_time`
|
|
||||||
|
|
||||||
### Frontend:
|
|
||||||
|
|
||||||
1. **Request Detail** (`RequestDetail.tsx`):
|
|
||||||
- Receives TAT alerts from API
|
|
||||||
- Filters alerts by level ID
|
|
||||||
- Displays under each approval level
|
|
||||||
- Color-codes by threshold
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Database Schema
|
|
||||||
|
|
||||||
### TAT Alerts Table:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
alert_type, -- TAT_50, TAT_75, TAT_100
|
|
||||||
threshold_percentage, -- 50, 75, 100
|
|
||||||
alert_sent_at, -- When alert was sent
|
|
||||||
alert_message, -- Full message text
|
|
||||||
level_id, -- Which approval level
|
|
||||||
approver_id, -- Who was notified
|
|
||||||
was_completed_on_time, -- Completed within TAT?
|
|
||||||
completion_time -- When completed
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE request_id = 'YOUR_REQUEST_ID'
|
|
||||||
ORDER BY alert_sent_at ASC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Testing the Display
|
|
||||||
|
|
||||||
### Step 1: Setup Upstash Redis
|
|
||||||
|
|
||||||
See `START_HERE.md` for quick setup (2 minutes)
|
|
||||||
|
|
||||||
### Step 2: Enable Test Mode
|
|
||||||
|
|
||||||
In `.env`:
|
|
||||||
```bash
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Create Test Request
|
|
||||||
|
|
||||||
- TAT: 6 hours (becomes 6 minutes in test mode)
|
|
||||||
- Submit the request
|
|
||||||
|
|
||||||
### Step 4: Watch Alerts Appear
|
|
||||||
|
|
||||||
**At 3 minutes (50%):**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1
|
|
||||||
50% of SLA breach reminder have been sent
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: [timestamp]
|
|
||||||
```
|
|
||||||
|
|
||||||
**At 4.5 minutes (75%):**
|
|
||||||
```
|
|
||||||
⚠️ Reminder 2
|
|
||||||
75% of SLA breach reminder have been sent
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: [timestamp]
|
|
||||||
```
|
|
||||||
|
|
||||||
**At 6 minutes (100%):**
|
|
||||||
```
|
|
||||||
⏰ Reminder 3
|
|
||||||
100% of SLA breach reminder have been sent
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: [timestamp]
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Verify in Database
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
threshold_percentage,
|
|
||||||
alert_sent_at,
|
|
||||||
was_completed_on_time,
|
|
||||||
completion_time
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE request_id = 'YOUR_REQUEST_ID'
|
|
||||||
ORDER BY threshold_percentage;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Approver-Specific Alerts
|
|
||||||
|
|
||||||
### Confirmation: Alerts are Approver-Specific
|
|
||||||
|
|
||||||
✅ **Each level's alerts** are sent to **that level's approver only**
|
|
||||||
✅ **Previous approver** does NOT receive alerts for next level
|
|
||||||
✅ **Current approver** receives all their level's alerts (50%, 75%, 100%)
|
|
||||||
|
|
||||||
### Example:
|
|
||||||
|
|
||||||
```
|
|
||||||
Request Flow:
|
|
||||||
Level 1: John (TAT: 24h)
|
|
||||||
→ Alerts sent to: John
|
|
||||||
→ At: 12h, 18h, 24h
|
|
||||||
|
|
||||||
Level 2: Sarah (TAT: 12h)
|
|
||||||
→ Alerts sent to: Sarah (NOT John)
|
|
||||||
→ At: 6h, 9h, 12h
|
|
||||||
|
|
||||||
Level 3: Mike (TAT: 8h)
|
|
||||||
→ Alerts sent to: Mike (NOT Sarah, NOT John)
|
|
||||||
→ At: 4h, 6h, 8h
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 KPI Queries
|
|
||||||
|
|
||||||
### Get All Alerts for a Request:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
al.level_number,
|
|
||||||
al.approver_name,
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.was_completed_on_time
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN approval_levels al ON ta.level_id = al.level_id
|
|
||||||
WHERE ta.request_id = 'REQUEST_ID'
|
|
||||||
ORDER BY al.level_number, ta.threshold_percentage;
|
|
||||||
```
|
|
||||||
|
|
||||||
### TAT Compliance by Approver:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
ta.approver_id,
|
|
||||||
u.display_name,
|
|
||||||
COUNT(*) as total_alerts_received,
|
|
||||||
COUNT(CASE WHEN ta.was_completed_on_time = true THEN 1 END) as completed_on_time,
|
|
||||||
COUNT(CASE WHEN ta.was_completed_on_time = false THEN 1 END) as completed_late,
|
|
||||||
ROUND(
|
|
||||||
COUNT(CASE WHEN ta.was_completed_on_time = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(CASE WHEN ta.was_completed_on_time IS NOT NULL THEN 1 END), 0),
|
|
||||||
2
|
|
||||||
) as compliance_rate
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN users u ON ta.approver_id = u.user_id
|
|
||||||
GROUP BY ta.approver_id, u.display_name;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Alert Effectiveness (Response Time After Alert):
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
alert_type,
|
|
||||||
AVG(
|
|
||||||
EXTRACT(EPOCH FROM (completion_time - alert_sent_at)) / 3600
|
|
||||||
) as avg_response_hours_after_alert
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE completion_time IS NOT NULL
|
|
||||||
GROUP BY alert_type;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📁 Files Modified
|
|
||||||
|
|
||||||
### Backend:
|
|
||||||
- ✅ `src/models/TatAlert.ts` - TAT alert model
|
|
||||||
- ✅ `src/migrations/20251104-create-tat-alerts.ts` - Table creation
|
|
||||||
- ✅ `src/queues/tatProcessor.ts` - Create alert records
|
|
||||||
- ✅ `src/services/workflow.service.ts` - Include alerts in API response
|
|
||||||
- ✅ `src/services/approval.service.ts` - Update alerts on completion
|
|
||||||
- ✅ `src/models/index.ts` - Export TatAlert model
|
|
||||||
|
|
||||||
### Frontend:
|
|
||||||
- ✅ `src/pages/RequestDetail/RequestDetail.tsx` - Display alerts in workflow tab
|
|
||||||
|
|
||||||
### Database:
|
|
||||||
- ✅ `tat_alerts` table created with 7 indexes
|
|
||||||
- ✅ 8 KPI views created for reporting
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎨 Visual Example
|
|
||||||
|
|
||||||
Based on your screenshot, the display looks like:
|
|
||||||
|
|
||||||
```
|
|
||||||
┌──────────────────────────────────────────────────┐
|
|
||||||
│ Step 2: Lisa Wong (Finance Manager) │
|
|
||||||
│ Status: pending │
|
|
||||||
│ TAT: 12 hours │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────────────────────────────┐│
|
|
||||||
│ │ ⏳ Reminder 1 ││
|
|
||||||
│ │ 50% of SLA breach reminder have been sent ││
|
|
||||||
│ │ Reminder sent by system automatically ││
|
|
||||||
│ │ Sent at: Oct 6 at 2:30 PM ││
|
|
||||||
│ └──────────────────────────────────────────────┘│
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────────────────────────────────────┐│
|
|
||||||
│ │ ⚠️ Reminder 2 ││
|
|
||||||
│ │ 75% of SLA breach reminder have been sent ││
|
|
||||||
│ │ Reminder sent by system automatically ││
|
|
||||||
│ │ Sent at: Oct 6 at 6:30 PM ││
|
|
||||||
│ └──────────────────────────────────────────────┘│
|
|
||||||
└──────────────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Status: READY TO TEST!
|
|
||||||
|
|
||||||
### What Works Now:
|
|
||||||
|
|
||||||
- ✅ TAT alerts stored in database
|
|
||||||
- ✅ Alerts fetched with workflow details
|
|
||||||
- ✅ Alerts grouped by approval level
|
|
||||||
- ✅ Alerts displayed in workflow tab
|
|
||||||
- ✅ Color-coded by threshold
|
|
||||||
- ✅ Formatted like your screenshot
|
|
||||||
- ✅ Completion status tracked
|
|
||||||
- ✅ KPI-ready data structure
|
|
||||||
|
|
||||||
### What You Need to Do:
|
|
||||||
|
|
||||||
1. **Setup Redis** (Upstash recommended - see `START_HERE.md`)
|
|
||||||
2. **Add to `.env`**:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:...@upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
3. **Restart backend**
|
|
||||||
4. **Create test request** (6-hour TAT)
|
|
||||||
5. **Watch alerts appear** in 3, 4.5, 6 minutes!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 Documentation
|
|
||||||
|
|
||||||
- **Setup Guide**: `START_HERE.md`
|
|
||||||
- **Quick Start**: `TAT_QUICK_START.md`
|
|
||||||
- **Upstash Guide**: `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
- **KPI Reporting**: `docs/KPI_REPORTING_SYSTEM.md`
|
|
||||||
- **Full System Docs**: `docs/TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Example API Response
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"workflow": {...},
|
|
||||||
"approvals": [
|
|
||||||
{
|
|
||||||
"levelId": "abc-123",
|
|
||||||
"levelNumber": 2,
|
|
||||||
"approverName": "Lisa Wong",
|
|
||||||
"status": "PENDING",
|
|
||||||
"tatHours": 12,
|
|
||||||
...
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"tatAlerts": [
|
|
||||||
{
|
|
||||||
"levelId": "abc-123",
|
|
||||||
"alertType": "TAT_50",
|
|
||||||
"thresholdPercentage": 50,
|
|
||||||
"alertSentAt": "2024-10-06T14:30:00Z",
|
|
||||||
"alertMessage": "⏳ 50% of TAT elapsed...",
|
|
||||||
"isBreached": false,
|
|
||||||
"wasCompletedOnTime": null,
|
|
||||||
"metadata": {
|
|
||||||
"requestNumber": "REQ-2024-001",
|
|
||||||
"approverName": "Lisa Wong",
|
|
||||||
"priority": "express"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"levelId": "abc-123",
|
|
||||||
"alertType": "TAT_75",
|
|
||||||
"thresholdPercentage": 75,
|
|
||||||
"alertSentAt": "2024-10-06T18:30:00Z",
|
|
||||||
"alertMessage": "⚠️ 75% of TAT elapsed...",
|
|
||||||
"isBreached": false,
|
|
||||||
"wasCompletedOnTime": null,
|
|
||||||
"metadata": {...}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Verify Implementation
|
|
||||||
|
|
||||||
### Check Backend Logs:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# When notification fires:
|
|
||||||
[TAT Processor] Processing tat50 for request...
|
|
||||||
[TAT Processor] TAT alert record created for tat50
|
|
||||||
[TAT Processor] tat50 notification sent
|
|
||||||
|
|
||||||
# When workflow details fetched:
|
|
||||||
[Workflow] Found 2 TAT alerts for request REQ-2024-001
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Database:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- See all alerts for a request
|
|
||||||
SELECT * FROM tat_alerts
|
|
||||||
WHERE request_id = 'YOUR_REQUEST_ID'
|
|
||||||
ORDER BY alert_sent_at;
|
|
||||||
|
|
||||||
-- See alerts with approval info
|
|
||||||
SELECT
|
|
||||||
al.approver_name,
|
|
||||||
al.level_number,
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.was_completed_on_time
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN approval_levels al ON ta.level_id = al.level_id
|
|
||||||
WHERE ta.request_id = 'YOUR_REQUEST_ID';
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Frontend:
|
|
||||||
|
|
||||||
1. Open Request Detail
|
|
||||||
2. Click "Workflow" tab
|
|
||||||
3. Look under each approval level card
|
|
||||||
4. You should see reminder boxes with:
|
|
||||||
- ⏳ 50% reminder (yellow background)
|
|
||||||
- ⚠️ 75% reminder (orange background)
|
|
||||||
- ⏰ 100% breach (red background)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 KPI Reporting Ready
|
|
||||||
|
|
||||||
### All TAT alerts are now queryable for KPIs:
|
|
||||||
|
|
||||||
**TAT Compliance Rate:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
COUNT(CASE WHEN was_completed_on_time = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(*), 0) as compliance_rate
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE was_completed_on_time IS NOT NULL;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Approver Response Time After Alert:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
approver_id,
|
|
||||||
alert_type,
|
|
||||||
AVG(
|
|
||||||
EXTRACT(EPOCH FROM (completion_time - alert_sent_at)) / 3600
|
|
||||||
) as avg_hours_to_respond
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE completion_time IS NOT NULL
|
|
||||||
GROUP BY approver_id, alert_type;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Breach Analysis:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
DATE(alert_sent_at) as date,
|
|
||||||
COUNT(CASE WHEN alert_type = 'TAT_50' THEN 1 END) as alerts_50,
|
|
||||||
COUNT(CASE WHEN alert_type = 'TAT_75' THEN 1 END) as alerts_75,
|
|
||||||
COUNT(CASE WHEN alert_type = 'TAT_100' THEN 1 END) as breaches
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE alert_sent_at >= CURRENT_DATE - INTERVAL '30 days'
|
|
||||||
GROUP BY DATE(alert_sent_at)
|
|
||||||
ORDER BY date DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Ready to Use!
|
|
||||||
|
|
||||||
### Complete System Features:
|
|
||||||
|
|
||||||
✅ **Notification System** - Sends alerts to approvers
|
|
||||||
✅ **Storage System** - All alerts stored in database
|
|
||||||
✅ **Display System** - Alerts shown in UI (matches screenshot)
|
|
||||||
✅ **Tracking System** - Completion status tracked
|
|
||||||
✅ **KPI System** - Full reporting and analytics
|
|
||||||
✅ **Test Mode** - Fast testing (1 hour = 1 minute)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Quick Test
|
|
||||||
|
|
||||||
1. **Setup Upstash** (2 minutes): https://console.upstash.com/
|
|
||||||
2. **Add to `.env`**:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://...
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
3. **Restart backend**
|
|
||||||
4. **Create request** with 6-hour TAT
|
|
||||||
5. **Submit request**
|
|
||||||
6. **Wait 3 minutes** → See first alert in UI
|
|
||||||
7. **Wait 4.5 minutes** → See second alert
|
|
||||||
8. **Wait 6 minutes** → See third alert
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✨ Benefits
|
|
||||||
|
|
||||||
1. **Full Audit Trail** - Every alert stored and queryable
|
|
||||||
2. **Visual Feedback** - Users see exactly when reminders were sent
|
|
||||||
3. **KPI Ready** - Data ready for all reporting needs
|
|
||||||
4. **Compliance Tracking** - Know who completed on time vs late
|
|
||||||
5. **Effectiveness Analysis** - Measure response time after alerts
|
|
||||||
6. **Historical Data** - All past alerts preserved
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**🎉 Implementation Complete! Connect Redis and start testing!**
|
|
||||||
|
|
||||||
See `START_HERE.md` for immediate next steps.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Status**: ✅ Production Ready
|
|
||||||
**Team**: Royal Enfield Workflow
|
|
||||||
|
|
||||||
@ -1,650 +0,0 @@
|
|||||||
# ✅ Enhanced TAT Alerts Display - Complete Guide
|
|
||||||
|
|
||||||
## 🎯 What's Been Enhanced
|
|
||||||
|
|
||||||
TAT alerts now display **detailed time tracking information** inline with each approver, making it crystal clear what's happening!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Enhanced Alert Display
|
|
||||||
|
|
||||||
### **What Shows Now:**
|
|
||||||
|
|
||||||
```
|
|
||||||
┌──────────────────────────────────────────────────────┐
|
|
||||||
│ ⏳ Reminder 1 - 50% TAT Threshold [WARNING] │
|
|
||||||
│ │
|
|
||||||
│ 50% of SLA breach reminder have been sent │
|
|
||||||
│ │
|
|
||||||
│ ┌──────────────┬──────────────┐ │
|
|
||||||
│ │ Allocated: │ Elapsed: │ │
|
|
||||||
│ │ 12h │ 6.0h │ │
|
|
||||||
│ ├──────────────┼──────────────┤ │
|
|
||||||
│ │ Remaining: │ Due by: │ │
|
|
||||||
│ │ 6.0h │ Oct 7, 2024 │ │
|
|
||||||
│ └──────────────┴──────────────┘ │
|
|
||||||
│ │
|
|
||||||
│ Reminder sent by system automatically [TEST MODE] │
|
|
||||||
│ Sent at: Oct 6 at 2:30 PM │
|
|
||||||
│ Note: Test mode active (1 hour = 1 minute) │
|
|
||||||
└──────────────────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔑 Key Information Displayed
|
|
||||||
|
|
||||||
### **For Each Alert:**
|
|
||||||
|
|
||||||
| Field | Description | Example |
|
|
||||||
|-------|-------------|---------|
|
|
||||||
| **Reminder #** | Sequential number | "Reminder 1" |
|
|
||||||
| **Threshold** | Percentage reached | "50% TAT Threshold" |
|
|
||||||
| **Status Badge** | Warning or Breach | `WARNING` / `BREACHED` |
|
|
||||||
| **Allocated** | Total TAT hours | "12h" |
|
|
||||||
| **Elapsed** | Hours used when alert sent | "6.0h" |
|
|
||||||
| **Remaining** | Hours left when alert sent | "6.0h" |
|
|
||||||
| **Due by** | Expected completion date | "Oct 7, 2024" |
|
|
||||||
| **Sent at** | When reminder was sent | "Oct 6 at 2:30 PM" |
|
|
||||||
| **Test Mode** | If in test mode | Purple badge + note |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎨 Color Coding
|
|
||||||
|
|
||||||
### **50% Alert (⏳):**
|
|
||||||
- Background: `bg-yellow-50`
|
|
||||||
- Border: `border-yellow-200`
|
|
||||||
- Badge: `bg-amber-100 text-amber-800`
|
|
||||||
- Icon: ⏳
|
|
||||||
|
|
||||||
### **75% Alert (⚠️):**
|
|
||||||
- Background: `bg-orange-50`
|
|
||||||
- Border: `border-orange-200`
|
|
||||||
- Badge: `bg-amber-100 text-amber-800`
|
|
||||||
- Icon: ⚠️
|
|
||||||
|
|
||||||
### **100% Breach (⏰):**
|
|
||||||
- Background: `bg-red-50`
|
|
||||||
- Border: `border-red-200`
|
|
||||||
- Badge: `bg-red-100 text-red-800`
|
|
||||||
- Icon: ⏰
|
|
||||||
- Text: Shows "BREACHED" instead of "WARNING"
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Test Mode vs Production Mode
|
|
||||||
|
|
||||||
### **Test Mode (TAT_TEST_MODE=true):**
|
|
||||||
|
|
||||||
**Purpose**: Fast testing during development
|
|
||||||
|
|
||||||
**Behavior:**
|
|
||||||
- ✅ 1 hour = 1 minute
|
|
||||||
- ✅ 6-hour TAT = 6 minutes
|
|
||||||
- ✅ Purple "TEST MODE" badge shown
|
|
||||||
- ✅ Note: "Test mode active (1 hour = 1 minute)"
|
|
||||||
- ✅ All times are in working time (no weekend skip)
|
|
||||||
|
|
||||||
**Example Alert (Test Mode):**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1 - 50% TAT Threshold [WARNING] [TEST MODE]
|
|
||||||
|
|
||||||
Allocated: 6h | Elapsed: 3.0h
|
|
||||||
Remaining: 3.0h | Due by: Today 2:06 PM
|
|
||||||
|
|
||||||
Note: Test mode active (1 hour = 1 minute)
|
|
||||||
Sent at: Today at 2:03 PM
|
|
||||||
```
|
|
||||||
|
|
||||||
**Timeline:**
|
|
||||||
- Submit at 2:00 PM
|
|
||||||
- 50% alert at 2:03 PM (3 minutes)
|
|
||||||
- 75% alert at 2:04:30 PM (4.5 minutes)
|
|
||||||
- 100% breach at 2:06 PM (6 minutes)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### **Production Mode (TAT_TEST_MODE=false):**
|
|
||||||
|
|
||||||
**Purpose**: Real-world usage
|
|
||||||
|
|
||||||
**Behavior:**
|
|
||||||
- ✅ 1 hour = 1 hour (real time)
|
|
||||||
- ✅ 48-hour TAT = 48 hours
|
|
||||||
- ✅ No "TEST MODE" badge
|
|
||||||
- ✅ No test mode note
|
|
||||||
- ✅ Respects working hours (Mon-Fri, 9 AM-6 PM)
|
|
||||||
- ✅ Skips weekends
|
|
||||||
|
|
||||||
**Example Alert (Production Mode):**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1 - 50% TAT Threshold [WARNING]
|
|
||||||
|
|
||||||
Allocated: 48h | Elapsed: 24.0h
|
|
||||||
Remaining: 24.0h | Due by: Oct 8, 2024
|
|
||||||
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: Oct 6 at 10:00 AM
|
|
||||||
```
|
|
||||||
|
|
||||||
**Timeline:**
|
|
||||||
- Submit Monday 10:00 AM
|
|
||||||
- 50% alert Tuesday 10:00 AM (24 hours)
|
|
||||||
- 75% alert Wednesday 10:00 AM (36 hours)
|
|
||||||
- 100% breach Thursday 10:00 AM (48 hours)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📡 New API Endpoints
|
|
||||||
|
|
||||||
### **1. Get TAT Alerts for Request**
|
|
||||||
```
|
|
||||||
GET /api/tat/alerts/request/:requestId
|
|
||||||
```
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"data": [
|
|
||||||
{
|
|
||||||
"alertId": "...",
|
|
||||||
"alertType": "TAT_50",
|
|
||||||
"thresholdPercentage": 50,
|
|
||||||
"tatHoursAllocated": 12,
|
|
||||||
"tatHoursElapsed": 6.0,
|
|
||||||
"tatHoursRemaining": 6.0,
|
|
||||||
"alertSentAt": "2024-10-06T14:30:00Z",
|
|
||||||
"level": {
|
|
||||||
"levelNumber": 2,
|
|
||||||
"approverName": "Lisa Wong",
|
|
||||||
"status": "PENDING"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Get TAT Compliance Summary**
|
|
||||||
```
|
|
||||||
GET /api/tat/compliance/summary?startDate=2024-10-01&endDate=2024-10-31
|
|
||||||
```
|
|
||||||
|
|
||||||
**Response:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"data": {
|
|
||||||
"total_alerts": 150,
|
|
||||||
"alerts_50": 50,
|
|
||||||
"alerts_75": 45,
|
|
||||||
"breaches": 25,
|
|
||||||
"completed_on_time": 35,
|
|
||||||
"completed_late": 15,
|
|
||||||
"compliance_percentage": 70.00
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **3. Get TAT Breach Report**
|
|
||||||
```
|
|
||||||
GET /api/tat/breaches
|
|
||||||
```
|
|
||||||
|
|
||||||
### **4. Get Approver Performance**
|
|
||||||
```
|
|
||||||
GET /api/tat/performance/:approverId
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Database Fields Available
|
|
||||||
|
|
||||||
### **In `tat_alerts` Table:**
|
|
||||||
|
|
||||||
| Field | Type | Use In UI |
|
|
||||||
|-------|------|-----------|
|
|
||||||
| `alert_type` | ENUM | Determine icon (⏳/⚠️/⏰) |
|
|
||||||
| `threshold_percentage` | INT | Show "50%", "75%", "100%" |
|
|
||||||
| `tat_hours_allocated` | DECIMAL | Display "Allocated: Xh" |
|
|
||||||
| `tat_hours_elapsed` | DECIMAL | Display "Elapsed: Xh" |
|
|
||||||
| `tat_hours_remaining` | DECIMAL | Display "Remaining: Xh" (red if < 2h) |
|
|
||||||
| `level_start_time` | TIMESTAMP | Calculate time since start |
|
|
||||||
| `alert_sent_at` | TIMESTAMP | Show "Sent at: ..." |
|
|
||||||
| `expected_completion_time` | TIMESTAMP | Show "Due by: ..." |
|
|
||||||
| `alert_message` | TEXT | Full notification message |
|
|
||||||
| `is_breached` | BOOLEAN | Show "BREACHED" badge |
|
|
||||||
| `metadata` | JSONB | Test mode indicator, priority, etc. |
|
|
||||||
| `was_completed_on_time` | BOOLEAN | Show compliance status |
|
|
||||||
| `completion_time` | TIMESTAMP | Show actual completion |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 💡 Production Recommendation
|
|
||||||
|
|
||||||
### **For Development/Testing:**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
**Benefits:**
|
|
||||||
- ✅ Fast feedback (minutes instead of hours/days)
|
|
||||||
- ✅ Easy to test multiple scenarios
|
|
||||||
- ✅ Clear test mode indicators prevent confusion
|
|
||||||
|
|
||||||
### **For Production:**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
**Benefits:**
|
|
||||||
- ✅ Real-world timing
|
|
||||||
- ✅ Accurate TAT tracking
|
|
||||||
- ✅ Meaningful metrics
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Complete Alert Card Template
|
|
||||||
|
|
||||||
### **Full Display Structure:**
|
|
||||||
|
|
||||||
```tsx
|
|
||||||
<div className="bg-yellow-50 border-yellow-200 p-3 rounded-lg">
|
|
||||||
{/* Header */}
|
|
||||||
<div className="flex items-center justify-between">
|
|
||||||
<span>⏳ Reminder 1 - 50% TAT Threshold</span>
|
|
||||||
<Badge>WARNING</Badge>
|
|
||||||
{testMode && <Badge>TEST MODE</Badge>}
|
|
||||||
</div>
|
|
||||||
|
|
||||||
{/* Main Message */}
|
|
||||||
<p>50% of SLA breach reminder have been sent</p>
|
|
||||||
|
|
||||||
{/* Time Grid */}
|
|
||||||
<div className="grid grid-cols-2 gap-2">
|
|
||||||
<div>Allocated: 12h</div>
|
|
||||||
<div>Elapsed: 6.0h</div>
|
|
||||||
<div>Remaining: 6.0h</div>
|
|
||||||
<div>Due by: Oct 7</div>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
{/* Footer */}
|
|
||||||
<div className="border-t pt-2">
|
|
||||||
<p>Reminder sent by system automatically</p>
|
|
||||||
<p>Sent at: Oct 6 at 2:30 PM</p>
|
|
||||||
{testMode && <p>Note: Test mode (1h = 1min)</p>}
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Key Benefits of Enhanced Display
|
|
||||||
|
|
||||||
### **1. Full Transparency**
|
|
||||||
Users see exactly:
|
|
||||||
- How much time was allocated
|
|
||||||
- How much was used when alert fired
|
|
||||||
- How much was remaining
|
|
||||||
- When it's due
|
|
||||||
|
|
||||||
### **2. Context Awareness**
|
|
||||||
- Test mode clearly indicated
|
|
||||||
- Color-coded by severity
|
|
||||||
- Badge shows warning vs breach
|
|
||||||
|
|
||||||
### **3. Actionable Information**
|
|
||||||
- "Remaining: 2.5h" → Approver knows they have 2.5h left
|
|
||||||
- "Due by: Oct 7 at 6 PM" → Clear deadline
|
|
||||||
- "Elapsed: 6h" → Understand how long it's been
|
|
||||||
|
|
||||||
### **4. Confusion Prevention**
|
|
||||||
- Test mode badge prevents misunderstanding
|
|
||||||
- Note explains "1 hour = 1 minute" in test mode
|
|
||||||
- Clear visual distinction from production
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Testing Workflow
|
|
||||||
|
|
||||||
### **Step 1: Enable Detailed Logging**
|
|
||||||
|
|
||||||
In `Re_Backend/.env`:
|
|
||||||
```bash
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
LOG_LEVEL=debug
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Step 2: Create Test Request**
|
|
||||||
|
|
||||||
- TAT: 6 hours
|
|
||||||
- Priority: Standard or Express
|
|
||||||
- Submit request
|
|
||||||
|
|
||||||
### **Step 3: Watch Alerts Populate**
|
|
||||||
|
|
||||||
**At 3 minutes (50%):**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1 - 50% TAT Threshold [WARNING] [TEST MODE]
|
|
||||||
|
|
||||||
Allocated: 6h | Elapsed: 3.0h
|
|
||||||
Remaining: 3.0h | Due by: Today 2:06 PM
|
|
||||||
|
|
||||||
Note: Test mode active (1 hour = 1 minute)
|
|
||||||
```
|
|
||||||
|
|
||||||
**At 4.5 minutes (75%):**
|
|
||||||
```
|
|
||||||
⚠️ Reminder 2 - 75% TAT Threshold [WARNING] [TEST MODE]
|
|
||||||
|
|
||||||
Allocated: 6h | Elapsed: 4.5h
|
|
||||||
Remaining: 1.5h | Due by: Today 2:06 PM
|
|
||||||
|
|
||||||
Note: Test mode active (1 hour = 1 minute)
|
|
||||||
```
|
|
||||||
|
|
||||||
**At 6 minutes (100%):**
|
|
||||||
```
|
|
||||||
⏰ Reminder 3 - 100% TAT Threshold [BREACHED] [TEST MODE]
|
|
||||||
|
|
||||||
Allocated: 6h | Elapsed: 6.0h
|
|
||||||
Remaining: 0.0h | Due by: Today 2:06 PM (OVERDUE)
|
|
||||||
|
|
||||||
Note: Test mode active (1 hour = 1 minute)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 KPI Queries Using Alert Data
|
|
||||||
|
|
||||||
### **Average Response Time After Each Alert Type:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
alert_type,
|
|
||||||
ROUND(AVG(tat_hours_elapsed), 2) as avg_elapsed,
|
|
||||||
ROUND(AVG(tat_hours_remaining), 2) as avg_remaining,
|
|
||||||
COUNT(*) as alert_count,
|
|
||||||
COUNT(CASE WHEN was_completed_on_time = true THEN 1 END) as completed_on_time
|
|
||||||
FROM tat_alerts
|
|
||||||
GROUP BY alert_type
|
|
||||||
ORDER BY threshold_percentage;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Approvers Who Frequently Breach:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
u.display_name,
|
|
||||||
u.department,
|
|
||||||
COUNT(CASE WHEN ta.is_breached = true THEN 1 END) as breach_count,
|
|
||||||
AVG(ta.tat_hours_elapsed) as avg_time_taken,
|
|
||||||
COUNT(DISTINCT ta.level_id) as total_approvals
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN users u ON ta.approver_id = u.user_id
|
|
||||||
WHERE ta.is_breached = true
|
|
||||||
GROUP BY u.user_id, u.display_name, u.department
|
|
||||||
ORDER BY breach_count DESC
|
|
||||||
LIMIT 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Time-to-Action After Alert:**
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
alert_type,
|
|
||||||
threshold_percentage,
|
|
||||||
ROUND(AVG(
|
|
||||||
EXTRACT(EPOCH FROM (completion_time - alert_sent_at)) / 3600
|
|
||||||
), 2) as avg_hours_to_respond_after_alert
|
|
||||||
FROM tat_alerts
|
|
||||||
WHERE completion_time IS NOT NULL
|
|
||||||
GROUP BY alert_type, threshold_percentage
|
|
||||||
ORDER BY threshold_percentage;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔄 Alert Lifecycle
|
|
||||||
|
|
||||||
### **1. Alert Created (When Threshold Reached)**
|
|
||||||
```typescript
|
|
||||||
{
|
|
||||||
alertType: 'TAT_50',
|
|
||||||
thresholdPercentage: 50,
|
|
||||||
tatHoursAllocated: 12,
|
|
||||||
tatHoursElapsed: 6.0,
|
|
||||||
tatHoursRemaining: 6.0,
|
|
||||||
alertSentAt: '2024-10-06T14:30:00Z',
|
|
||||||
expectedCompletionTime: '2024-10-06T18:00:00Z',
|
|
||||||
isBreached: false,
|
|
||||||
wasCompletedOnTime: null, // Not completed yet
|
|
||||||
metadata: { testMode: true, ... }
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **2. Approver Takes Action**
|
|
||||||
```typescript
|
|
||||||
// Updated when level is approved/rejected
|
|
||||||
{
|
|
||||||
...existingFields,
|
|
||||||
wasCompletedOnTime: true, // or false
|
|
||||||
completionTime: '2024-10-06T16:00:00Z'
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### **3. Displayed in UI**
|
|
||||||
```tsx
|
|
||||||
// Shows all historical alerts for that level
|
|
||||||
// Color-coded by threshold
|
|
||||||
// Shows completion status if completed
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Understanding the Data
|
|
||||||
|
|
||||||
### **Allocated Hours (tat_hours_allocated)**
|
|
||||||
Total TAT time given to approver for this level
|
|
||||||
```
|
|
||||||
Example: 12 hours
|
|
||||||
Meaning: Approver has 12 hours to approve/reject
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Elapsed Hours (tat_hours_elapsed)**
|
|
||||||
Time used when the alert was sent
|
|
||||||
```
|
|
||||||
Example: 6.0 hours (at 50% alert)
|
|
||||||
Meaning: 6 hours have passed since level started
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Remaining Hours (tat_hours_remaining)**
|
|
||||||
Time left when the alert was sent
|
|
||||||
```
|
|
||||||
Example: 6.0 hours (at 50% alert)
|
|
||||||
Meaning: 6 hours remaining before TAT breach
|
|
||||||
Note: Turns red if < 2 hours
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Expected Completion Time**
|
|
||||||
When the level should be completed
|
|
||||||
```
|
|
||||||
Example: Oct 6 at 6:00 PM
|
|
||||||
Meaning: Deadline for this approval level
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚙️ Configuration Options
|
|
||||||
|
|
||||||
### **Disable Test Mode for Production:**
|
|
||||||
|
|
||||||
Edit `.env`:
|
|
||||||
```bash
|
|
||||||
# Production settings
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Adjust Working Hours:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Custom working hours (e.g., 8 AM - 5 PM)
|
|
||||||
WORK_START_HOUR=8
|
|
||||||
WORK_END_HOUR=17
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Redis Configuration:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Upstash (recommended)
|
|
||||||
REDIS_URL=rediss://default:PASSWORD@host.upstash.io:6379
|
|
||||||
|
|
||||||
# Local Redis
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
|
|
||||||
# Production Redis with auth
|
|
||||||
REDIS_URL=redis://username:password@prod-redis.com:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📱 Mobile Responsive
|
|
||||||
|
|
||||||
The alert cards are responsive:
|
|
||||||
- ✅ 2-column grid on desktop
|
|
||||||
- ✅ Single column on mobile
|
|
||||||
- ✅ All information remains visible
|
|
||||||
- ✅ Touch-friendly spacing
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 API Endpoints Available
|
|
||||||
|
|
||||||
### **Get Alerts for Request:**
|
|
||||||
```bash
|
|
||||||
GET /api/tat/alerts/request/:requestId
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Get Alerts for Level:**
|
|
||||||
```bash
|
|
||||||
GET /api/tat/alerts/level/:levelId
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Get Compliance Summary:**
|
|
||||||
```bash
|
|
||||||
GET /api/tat/compliance/summary
|
|
||||||
GET /api/tat/compliance/summary?startDate=2024-10-01&endDate=2024-10-31
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Get Breach Report:**
|
|
||||||
```bash
|
|
||||||
GET /api/tat/breaches
|
|
||||||
```
|
|
||||||
|
|
||||||
### **Get Approver Performance:**
|
|
||||||
```bash
|
|
||||||
GET /api/tat/performance/:approverId
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Benefits Summary
|
|
||||||
|
|
||||||
### **For Users:**
|
|
||||||
1. **Clear Visibility** - See exact time tracking
|
|
||||||
2. **No Confusion** - Test mode clearly labeled
|
|
||||||
3. **Actionable Data** - Know exactly how much time left
|
|
||||||
4. **Historical Record** - All alerts preserved
|
|
||||||
|
|
||||||
### **For Management:**
|
|
||||||
1. **KPI Ready** - All data for reporting
|
|
||||||
2. **Compliance Tracking** - On-time vs late completion
|
|
||||||
3. **Performance Analysis** - Response time after alerts
|
|
||||||
4. **Trend Analysis** - Breach patterns
|
|
||||||
|
|
||||||
### **For System:**
|
|
||||||
1. **Audit Trail** - Every alert logged
|
|
||||||
2. **Scalable** - Queue-based architecture
|
|
||||||
3. **Reliable** - Automatic retries
|
|
||||||
4. **Maintainable** - Clear configuration
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Quick Switch Between Modes
|
|
||||||
|
|
||||||
### **Development (Fast Testing):**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
Restart backend → Alerts fire in minutes
|
|
||||||
|
|
||||||
### **Staging (Semi-Real):**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
# But use shorter TATs (2-4 hours instead of 48 hours)
|
|
||||||
```
|
|
||||||
Restart backend → Alerts fire in hours
|
|
||||||
|
|
||||||
### **Production (Real):**
|
|
||||||
```bash
|
|
||||||
# .env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
# Use actual TATs (24-48 hours)
|
|
||||||
```
|
|
||||||
Restart backend → Alerts fire in days
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 What You See in Workflow Tab
|
|
||||||
|
|
||||||
For each approval level, you'll see:
|
|
||||||
|
|
||||||
```
|
|
||||||
┌────────────────────────────────────────────┐
|
|
||||||
│ Step 2: Lisa Wong (Finance Manager) │
|
|
||||||
│ Status: pending │
|
|
||||||
│ TAT: 12 hours │
|
|
||||||
│ Elapsed: 8h │
|
|
||||||
│ │
|
|
||||||
│ [50% Alert Card with full details] │
|
|
||||||
│ [75% Alert Card with full details] │
|
|
||||||
│ │
|
|
||||||
│ Comment: (if any) │
|
|
||||||
└────────────────────────────────────────────┘
|
|
||||||
```
|
|
||||||
|
|
||||||
**Clear, informative, and actionable!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 Status: READY!
|
|
||||||
|
|
||||||
✅ **Enhanced display** with all timing details
|
|
||||||
✅ **Test mode indicator** to prevent confusion
|
|
||||||
✅ **Color-coded** by severity
|
|
||||||
✅ **Responsive** design
|
|
||||||
✅ **API endpoints** for custom queries
|
|
||||||
✅ **KPI-ready** data structure
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Just setup Upstash Redis and start testing!**
|
|
||||||
|
|
||||||
See: `START_HERE.md` for 2-minute Redis setup
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Team**: Royal Enfield Workflow
|
|
||||||
|
|
||||||
@ -1,269 +0,0 @@
|
|||||||
# ⏰ TAT Notifications - Quick Start Guide
|
|
||||||
|
|
||||||
## 🎯 Goal
|
|
||||||
Get TAT (Turnaround Time) notifications working in **under 5 minutes**!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Step 1: Setup Redis (Required)
|
|
||||||
|
|
||||||
### 🚀 Option A: Upstash (RECOMMENDED - No Installation!)
|
|
||||||
|
|
||||||
**Best for Windows/Development - 100% Free**
|
|
||||||
|
|
||||||
1. **Sign up**: Go to https://console.upstash.com/
|
|
||||||
2. **Create Database**: Click "Create Database"
|
|
||||||
- Name: `redis-tat-dev`
|
|
||||||
- Type: Regional
|
|
||||||
- Region: Choose closest to you
|
|
||||||
- Click "Create"
|
|
||||||
3. **Copy Connection URL**: You'll get a URL like:
|
|
||||||
```
|
|
||||||
rediss://default:AbCd1234...@us1-mighty-shark-12345.upstash.io:6379
|
|
||||||
```
|
|
||||||
4. **Update `.env` in Re_Backend/**:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:AbCd1234...@us1-mighty-shark-12345.upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
✅ **Done!** No installation, no setup, works everywhere!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Alternative: Docker (If you prefer local)
|
|
||||||
|
|
||||||
If you have Docker Desktop:
|
|
||||||
```bash
|
|
||||||
docker run -d --name redis-tat -p 6379:6379 redis:latest
|
|
||||||
```
|
|
||||||
|
|
||||||
Then in `.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚡ Step 2: Enable Test Mode (HIGHLY RECOMMENDED)
|
|
||||||
|
|
||||||
For testing, enable **fast mode** where **1 hour = 1 minute**:
|
|
||||||
|
|
||||||
### Edit `.env` file in `Re_Backend/`:
|
|
||||||
```bash
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
This means:
|
|
||||||
- ✅ 6-hour TAT = 6 minutes (instead of 6 hours)
|
|
||||||
- ✅ 48-hour TAT = 48 minutes (instead of 48 hours)
|
|
||||||
- ✅ Perfect for quick testing!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Step 3: Restart Backend
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### You Should See:
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Initialized and listening
|
|
||||||
⏰ TAT Configuration:
|
|
||||||
- Test Mode: ENABLED (1 hour = 1 minute)
|
|
||||||
- Redis: rediss://***@upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
💡 If you see connection errors, double-check your `REDIS_URL` in `.env`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🧪 Step 4: Test It!
|
|
||||||
|
|
||||||
### Create a Request:
|
|
||||||
1. **Frontend**: Create a new workflow request
|
|
||||||
2. **Set TAT**: 6 hours (becomes 6 minutes in test mode)
|
|
||||||
3. **Submit** the request
|
|
||||||
|
|
||||||
### Watch the Magic:
|
|
||||||
```
|
|
||||||
✨ At 3 minutes: ⏳ 50% notification
|
|
||||||
✨ At 4.5 minutes: ⚠️ 75% notification
|
|
||||||
✨ At 6 minutes: ⏰ 100% breach notification
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Logs:
|
|
||||||
```bash
|
|
||||||
# You'll see:
|
|
||||||
[TAT Scheduler] ✅ TAT jobs scheduled for request...
|
|
||||||
[TAT Processor] Processing tat50 for request...
|
|
||||||
[TAT Processor] tat50 notification sent for request...
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Verify in Database
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
approver_name,
|
|
||||||
tat_hours,
|
|
||||||
tat50_alert_sent,
|
|
||||||
tat75_alert_sent,
|
|
||||||
tat_breached,
|
|
||||||
status
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE status = 'IN_PROGRESS';
|
|
||||||
```
|
|
||||||
|
|
||||||
You should see the flags change as notifications are sent!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ❌ Troubleshooting
|
|
||||||
|
|
||||||
### "ECONNREFUSED" or Connection Error?
|
|
||||||
**Problem**: Can't connect to Redis
|
|
||||||
|
|
||||||
**Solution**:
|
|
||||||
1. **Check `.env` file**:
|
|
||||||
```bash
|
|
||||||
# Make sure REDIS_URL is set correctly
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Verify Upstash Database**:
|
|
||||||
- Go to https://console.upstash.com/
|
|
||||||
- Check database status (should be "Active")
|
|
||||||
- Copy connection URL again if needed
|
|
||||||
|
|
||||||
3. **Test Connection**:
|
|
||||||
- Use Upstash's Redis CLI in their console
|
|
||||||
- Or install `redis-cli` and test:
|
|
||||||
```bash
|
|
||||||
redis-cli -u "rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379" ping
|
|
||||||
# Should return: PONG
|
|
||||||
```
|
|
||||||
|
|
||||||
### No Notifications?
|
|
||||||
**Checklist**:
|
|
||||||
- ✅ REDIS_URL set in `.env`?
|
|
||||||
- ✅ Backend restarted after setting REDIS_URL?
|
|
||||||
- ✅ TAT_TEST_MODE=true in `.env`?
|
|
||||||
- ✅ Request submitted (not just created)?
|
|
||||||
- ✅ Logs show "Connected to Redis"?
|
|
||||||
|
|
||||||
### Still Issues?
|
|
||||||
```bash
|
|
||||||
# Check detailed logs
|
|
||||||
Get-Content Re_Backend/logs/app.log -Tail 50 -Wait
|
|
||||||
|
|
||||||
# Look for:
|
|
||||||
# ✅ [TAT Queue] Connected to Redis
|
|
||||||
# ❌ [TAT Queue] Redis connection error
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎓 Testing Scenarios
|
|
||||||
|
|
||||||
### Quick Test (6 minutes):
|
|
||||||
```
|
|
||||||
TAT: 6 hours (6 minutes in test mode)
|
|
||||||
├─ 3 min ⏳ 50% reminder
|
|
||||||
├─ 4.5 min ⚠️ 75% warning
|
|
||||||
└─ 6 min ⏰ 100% breach
|
|
||||||
```
|
|
||||||
|
|
||||||
### Medium Test (24 minutes):
|
|
||||||
```
|
|
||||||
TAT: 24 hours (24 minutes in test mode)
|
|
||||||
├─ 12 min ⏳ 50% reminder
|
|
||||||
├─ 18 min ⚠️ 75% warning
|
|
||||||
└─ 24 min ⏰ 100% breach
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📚 More Information
|
|
||||||
|
|
||||||
- **Full Documentation**: `Re_Backend/docs/TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
- **Testing Guide**: `Re_Backend/docs/TAT_TESTING_GUIDE.md`
|
|
||||||
- **Redis Setup**: `Re_Backend/INSTALL_REDIS.txt`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎉 Production Mode
|
|
||||||
|
|
||||||
When ready for production:
|
|
||||||
|
|
||||||
1. **Disable Test Mode**:
|
|
||||||
```bash
|
|
||||||
# In .env
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Restart Backend**
|
|
||||||
|
|
||||||
3. **TAT will now use real hours**:
|
|
||||||
- 48-hour TAT = actual 48 hours
|
|
||||||
- Working hours: Mon-Fri, 9 AM - 6 PM
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🆘 Need Help?
|
|
||||||
|
|
||||||
Common fixes:
|
|
||||||
|
|
||||||
### 1. Verify Upstash Connection
|
|
||||||
```bash
|
|
||||||
# In Upstash Console (https://console.upstash.com/)
|
|
||||||
# - Click your database
|
|
||||||
# - Use the "CLI" tab to test: PING
|
|
||||||
# - Should return: PONG
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Check Environment Variables
|
|
||||||
```bash
|
|
||||||
# In Re_Backend/.env, verify:
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Clear Redis Queue (if needed)
|
|
||||||
```bash
|
|
||||||
# In Upstash Console CLI tab:
|
|
||||||
FLUSHALL
|
|
||||||
# This clears all jobs - use only if you need a fresh start
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. Restart Backend
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### 5. Check Logs
|
|
||||||
```bash
|
|
||||||
Get-Content logs/app.log -Tail 50 -Wait
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Status Check**:
|
|
||||||
- [ ] Upstash Redis database created
|
|
||||||
- [ ] REDIS_URL copied to `.env`
|
|
||||||
- [ ] TAT_TEST_MODE=true in `.env`
|
|
||||||
- [ ] Backend restarted
|
|
||||||
- [ ] Logs show "TAT Queue: Connected to Redis"
|
|
||||||
- [ ] Test request submitted
|
|
||||||
|
|
||||||
✅ All checked? **You're ready!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Author**: Royal Enfield Workflow Team
|
|
||||||
|
|
||||||
@ -1,420 +0,0 @@
|
|||||||
# 🔍 Troubleshooting TAT Alerts Not Showing
|
|
||||||
|
|
||||||
## Quick Diagnosis Steps
|
|
||||||
|
|
||||||
### Step 1: Check if Redis is Connected
|
|
||||||
|
|
||||||
**Look at your backend console when you start the server:**
|
|
||||||
|
|
||||||
✅ **Good** - Redis is working:
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Worker is ready and listening
|
|
||||||
```
|
|
||||||
|
|
||||||
❌ **Bad** - Redis is NOT working:
|
|
||||||
```
|
|
||||||
⚠️ [TAT Worker] Redis connection failed
|
|
||||||
⚠️ [TAT Queue] Redis connection failed after 3 attempts
|
|
||||||
```
|
|
||||||
|
|
||||||
**If you see the bad message:**
|
|
||||||
→ TAT alerts will NOT be created because the worker isn't running
|
|
||||||
→ You MUST setup Redis first (see `START_HERE.md`)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 2: Verify TAT Alerts Table Exists
|
|
||||||
|
|
||||||
**Run this SQL:**
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) FROM tat_alerts;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- If table exists: You'll see a count (maybe 0)
|
|
||||||
- If table doesn't exist: Error "relation tat_alerts does not exist"
|
|
||||||
|
|
||||||
**If table doesn't exist:**
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 3: Check if TAT Alerts Exist in Database
|
|
||||||
|
|
||||||
**Run this SQL:**
|
|
||||||
```sql
|
|
||||||
-- Check if ANY alerts exist
|
|
||||||
SELECT
|
|
||||||
ta.alert_id,
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.alert_message,
|
|
||||||
ta.metadata->>'requestNumber' as request_number,
|
|
||||||
ta.metadata->>'approverName' as approver_name
|
|
||||||
FROM tat_alerts ta
|
|
||||||
ORDER BY ta.alert_sent_at DESC
|
|
||||||
LIMIT 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
**If query returns 0 rows:**
|
|
||||||
→ No alerts have been created yet
|
|
||||||
→ This means:
|
|
||||||
1. Redis is not connected, OR
|
|
||||||
2. No requests have been submitted, OR
|
|
||||||
3. Not enough time has passed (wait 3 min in test mode)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 4: Check API Response
|
|
||||||
|
|
||||||
**Option A: Use Debug Endpoint**
|
|
||||||
|
|
||||||
Call this URL in your browser or Postman:
|
|
||||||
```
|
|
||||||
GET http://localhost:5000/api/debug/tat-status
|
|
||||||
```
|
|
||||||
|
|
||||||
**You'll see:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"status": {
|
|
||||||
"redis": {
|
|
||||||
"configured": true,
|
|
||||||
"url": "rediss://****@upstash.io:6379",
|
|
||||||
"testMode": true
|
|
||||||
},
|
|
||||||
"database": {
|
|
||||||
"connected": true,
|
|
||||||
"tatAlertsTableExists": true,
|
|
||||||
"totalAlerts": 0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Option B: Check Workflow Details Response**
|
|
||||||
|
|
||||||
For a specific request:
|
|
||||||
```
|
|
||||||
GET http://localhost:5000/api/debug/workflow-details/REQ-2025-XXXXX
|
|
||||||
```
|
|
||||||
|
|
||||||
**You'll see:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"success": true,
|
|
||||||
"structure": {
|
|
||||||
"hasTatAlerts": true,
|
|
||||||
"tatAlertsCount": 2
|
|
||||||
},
|
|
||||||
"tatAlerts": [
|
|
||||||
{
|
|
||||||
"alertType": "TAT_50",
|
|
||||||
"thresholdPercentage": 50,
|
|
||||||
"alertSentAt": "...",
|
|
||||||
...
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 5: Check Frontend Console
|
|
||||||
|
|
||||||
**Open browser DevTools (F12) → Console**
|
|
||||||
|
|
||||||
**When you open Request Detail, you should see:**
|
|
||||||
```javascript
|
|
||||||
// Look for the API response
|
|
||||||
Object {
|
|
||||||
workflow: {...},
|
|
||||||
approvals: [...],
|
|
||||||
tatAlerts: [...] // ← Check if this exists
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**If `tatAlerts` is missing or empty:**
|
|
||||||
→ Backend is not returning it (go back to Step 3)
|
|
||||||
|
|
||||||
**If `tatAlerts` exists but not showing:**
|
|
||||||
→ Frontend rendering issue (check Step 6)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 6: Verify Frontend Code
|
|
||||||
|
|
||||||
**Check if tatAlerts are being processed:**
|
|
||||||
|
|
||||||
Open `Re_Figma_Code/src/pages/RequestDetail/RequestDetail.tsx`
|
|
||||||
|
|
||||||
**Search for this line (around line 235 and 493):**
|
|
||||||
```typescript
|
|
||||||
const tatAlerts = Array.isArray(details.tatAlerts) ? details.tatAlerts : [];
|
|
||||||
```
|
|
||||||
|
|
||||||
**This should be there!** If not, the code wasn't applied.
|
|
||||||
|
|
||||||
**Then search for (around line 271 and 531):**
|
|
||||||
```typescript
|
|
||||||
const levelAlerts = tatAlerts.filter((alert: any) => alert.levelId === levelId);
|
|
||||||
```
|
|
||||||
|
|
||||||
**And in the JSX (around line 1070):**
|
|
||||||
```tsx
|
|
||||||
{step.tatAlerts && step.tatAlerts.length > 0 && (
|
|
||||||
<div className="mt-3 space-y-2">
|
|
||||||
{step.tatAlerts.map((alert: any, alertIndex: number) => (
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🐛 Common Issues & Fixes
|
|
||||||
|
|
||||||
### Issue 1: "TAT alerts not showing in UI"
|
|
||||||
|
|
||||||
**Cause**: Redis not connected
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Setup Upstash: https://console.upstash.com/
|
|
||||||
2. Add to `.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:...@upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
3. Restart backend
|
|
||||||
4. Look for "Connected to Redis" in logs
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Issue 2: "tat_alerts table doesn't exist"
|
|
||||||
|
|
||||||
**Cause**: Migrations not run
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Issue 3: "No alerts in database"
|
|
||||||
|
|
||||||
**Cause**: No requests submitted or not enough time passed
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Create a new workflow request
|
|
||||||
2. **SUBMIT** the request (not just save as draft)
|
|
||||||
3. Wait:
|
|
||||||
- Test mode: 3 minutes for 50% alert
|
|
||||||
- Production: Depends on TAT (e.g., 12 hours for 24-hour TAT)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Issue 4: "tatAlerts is undefined in API response"
|
|
||||||
|
|
||||||
**Cause**: Backend code not updated
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
Check `Re_Backend/src/services/workflow.service.ts` line 698:
|
|
||||||
```typescript
|
|
||||||
return { workflow, approvals, participants, documents, activities, summary, tatAlerts };
|
|
||||||
// ^^^^^^^^
|
|
||||||
// Make sure tatAlerts is included!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Issue 5: "Frontend not displaying alerts even though they exist"
|
|
||||||
|
|
||||||
**Cause**: Frontend code not applied or missing key
|
|
||||||
|
|
||||||
**Fix**:
|
|
||||||
1. Check browser console for errors
|
|
||||||
2. Verify `step.tatAlerts` is defined in approval flow
|
|
||||||
3. Check if alerts have correct `levelId` matching approval level
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Manual Test Steps
|
|
||||||
|
|
||||||
### Step-by-Step Debugging:
|
|
||||||
|
|
||||||
**1. Check Redis Connection:**
|
|
||||||
```bash
|
|
||||||
# Start backend and look for:
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
```
|
|
||||||
|
|
||||||
**2. Create and Submit Request:**
|
|
||||||
```bash
|
|
||||||
# Via frontend or API:
|
|
||||||
POST /api/workflows/create
|
|
||||||
POST /api/workflows/{id}/submit
|
|
||||||
```
|
|
||||||
|
|
||||||
**3. Wait for Alert (Test Mode):**
|
|
||||||
```bash
|
|
||||||
# For 6-hour TAT in test mode:
|
|
||||||
# Wait 3 minutes for 50% alert
|
|
||||||
```
|
|
||||||
|
|
||||||
**4. Check Database:**
|
|
||||||
```sql
|
|
||||||
SELECT * FROM tat_alerts ORDER BY alert_sent_at DESC LIMIT 5;
|
|
||||||
```
|
|
||||||
|
|
||||||
**5. Check API Response:**
|
|
||||||
```bash
|
|
||||||
GET /api/workflows/{requestNumber}/details
|
|
||||||
# Look for tatAlerts array in response
|
|
||||||
```
|
|
||||||
|
|
||||||
**6. Check Frontend:**
|
|
||||||
```javascript
|
|
||||||
// Open DevTools Console
|
|
||||||
// Navigate to Request Detail
|
|
||||||
// Check the console log for API response
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔧 Debug Commands
|
|
||||||
|
|
||||||
### Check TAT System Status:
|
|
||||||
```bash
|
|
||||||
curl http://localhost:5000/api/debug/tat-status
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Workflow Details for Specific Request:
|
|
||||||
```bash
|
|
||||||
curl http://localhost:5000/api/debug/workflow-details/REQ-2025-XXXXX
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Database Directly:
|
|
||||||
```sql
|
|
||||||
-- Total alerts
|
|
||||||
SELECT COUNT(*) FROM tat_alerts;
|
|
||||||
|
|
||||||
-- Alerts for specific request
|
|
||||||
SELECT * FROM tat_alerts
|
|
||||||
WHERE request_id = (
|
|
||||||
SELECT request_id FROM workflow_requests
|
|
||||||
WHERE request_number = 'REQ-2025-XXXXX'
|
|
||||||
);
|
|
||||||
|
|
||||||
-- Pending levels that should get alerts
|
|
||||||
SELECT
|
|
||||||
w.request_number,
|
|
||||||
al.approver_name,
|
|
||||||
al.status,
|
|
||||||
al.tat_start_time,
|
|
||||||
CASE
|
|
||||||
WHEN al.tat_start_time IS NULL THEN 'No TAT monitoring started'
|
|
||||||
ELSE 'TAT monitoring active'
|
|
||||||
END as tat_status
|
|
||||||
FROM approval_levels al
|
|
||||||
JOIN workflow_requests w ON al.request_id = w.request_id
|
|
||||||
WHERE al.status IN ('PENDING', 'IN_PROGRESS');
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 Checklist for TAT Alerts to Show
|
|
||||||
|
|
||||||
Must have ALL of these:
|
|
||||||
|
|
||||||
- [ ] Redis connected (see "Connected to Redis" in logs)
|
|
||||||
- [ ] TAT worker running (see "Worker is ready" in logs)
|
|
||||||
- [ ] Request SUBMITTED (not draft)
|
|
||||||
- [ ] Enough time passed (3 min in test mode for 50%)
|
|
||||||
- [ ] tat_alerts table exists in database
|
|
||||||
- [ ] Alert records created in tat_alerts table
|
|
||||||
- [ ] API returns tatAlerts in workflow details
|
|
||||||
- [ ] Frontend receives tatAlerts from API
|
|
||||||
- [ ] Frontend displays tatAlerts in workflow tab
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🆘 Still Not Working?
|
|
||||||
|
|
||||||
### Provide These Details:
|
|
||||||
|
|
||||||
1. **Backend console output** when starting server
|
|
||||||
2. **Result of**:
|
|
||||||
```bash
|
|
||||||
curl http://localhost:5000/api/debug/tat-status
|
|
||||||
```
|
|
||||||
3. **Database query result**:
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) FROM tat_alerts;
|
|
||||||
```
|
|
||||||
4. **Browser console** errors (F12 → Console)
|
|
||||||
5. **Request number** you're testing with
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Most Common Cause
|
|
||||||
|
|
||||||
**99% of the time, TAT alerts don't show because:**
|
|
||||||
|
|
||||||
❌ **Redis is not connected**
|
|
||||||
|
|
||||||
**How to verify:**
|
|
||||||
```bash
|
|
||||||
# When you start backend, you should see:
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
|
|
||||||
# If you see this instead:
|
|
||||||
⚠️ [TAT Queue] Redis connection failed
|
|
||||||
|
|
||||||
# Then:
|
|
||||||
# 1. Setup Upstash: https://console.upstash.com/
|
|
||||||
# 2. Add REDIS_URL to .env
|
|
||||||
# 3. Restart backend
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 Quick Fix Steps
|
|
||||||
|
|
||||||
If alerts aren't showing, do this IN ORDER:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Check .env file has Redis URL
|
|
||||||
cat Re_Backend/.env | findstr REDIS_URL
|
|
||||||
|
|
||||||
# 2. Restart backend
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# 3. Look for "Connected to Redis" in console
|
|
||||||
|
|
||||||
# 4. Create NEW request (don't use old ones)
|
|
||||||
|
|
||||||
# 5. SUBMIT the request
|
|
||||||
|
|
||||||
# 6. Wait 3 minutes (in test mode)
|
|
||||||
|
|
||||||
# 7. Refresh Request Detail page
|
|
||||||
|
|
||||||
# 8. Go to Workflow tab
|
|
||||||
|
|
||||||
# 9. Alerts should appear under approver card
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Need more help? Share the output of `/api/debug/tat-status` endpoint!**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Team**: Royal Enfield Workflow
|
|
||||||
|
|
||||||
@ -1,215 +0,0 @@
|
|||||||
# 🚀 Upstash Redis - Quick Reference
|
|
||||||
|
|
||||||
## One-Time Setup (2 Minutes)
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Visit: https://console.upstash.com/
|
|
||||||
└─ Sign up (free)
|
|
||||||
|
|
||||||
2. Create Database
|
|
||||||
└─ Name: redis-tat-dev
|
|
||||||
└─ Type: Regional
|
|
||||||
└─ Region: US-East-1 (or closest)
|
|
||||||
└─ Click "Create"
|
|
||||||
|
|
||||||
3. Copy Redis URL
|
|
||||||
└─ Format: rediss://default:PASSWORD@host.upstash.io:6379
|
|
||||||
└─ Click copy button 📋
|
|
||||||
|
|
||||||
4. Paste into .env
|
|
||||||
└─ Re_Backend/.env
|
|
||||||
└─ REDIS_URL=rediss://default:...
|
|
||||||
└─ TAT_TEST_MODE=true
|
|
||||||
|
|
||||||
5. Start Backend
|
|
||||||
└─ cd Re_Backend
|
|
||||||
└─ npm run dev
|
|
||||||
└─ ✅ See: "Connected to Redis"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Environment Variables
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Re_Backend/.env
|
|
||||||
|
|
||||||
# Upstash Redis (paste your URL)
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_HOST.upstash.io:6379
|
|
||||||
|
|
||||||
# Test Mode (1 hour = 1 minute)
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
|
|
||||||
# Working Hours (optional)
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Test TAT Notifications
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Create Request
|
|
||||||
└─ TAT: 6 hours
|
|
||||||
└─ Submit request
|
|
||||||
|
|
||||||
2. Wait for Notifications (Test Mode)
|
|
||||||
└─ 3 minutes → ⏳ 50% alert
|
|
||||||
└─ 4.5 minutes → ⚠️ 75% warning
|
|
||||||
└─ 6 minutes → ⏰ 100% breach
|
|
||||||
|
|
||||||
3. Check Logs
|
|
||||||
└─ [TAT Scheduler] ✅ TAT jobs scheduled
|
|
||||||
└─ [TAT Processor] Processing tat50...
|
|
||||||
└─ [TAT Processor] tat50 notification sent
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Monitor in Upstash Console
|
|
||||||
|
|
||||||
```
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Click your database
|
|
||||||
3. Click "CLI" tab
|
|
||||||
4. Run commands:
|
|
||||||
|
|
||||||
PING
|
|
||||||
→ PONG
|
|
||||||
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
→ Shows all queued TAT jobs
|
|
||||||
|
|
||||||
INFO
|
|
||||||
→ Shows Redis stats
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### ❌ Connection Error
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Check .env
|
|
||||||
REDIS_URL=rediss://... (correct URL?)
|
|
||||||
|
|
||||||
# Test in Upstash Console
|
|
||||||
# CLI tab → PING → should return PONG
|
|
||||||
|
|
||||||
# Restart backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### ❌ No Notifications
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Checklist:
|
|
||||||
- ✅ REDIS_URL in .env?
|
|
||||||
- ✅ TAT_TEST_MODE=true?
|
|
||||||
- ✅ Backend restarted?
|
|
||||||
- ✅ Request SUBMITTED (not just created)?
|
|
||||||
- ✅ Logs show "Connected to Redis"?
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Production Setup
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Option 1: Use Upstash (same as dev)
|
|
||||||
REDIS_URL=rediss://default:PROD_PASSWORD@prod.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
|
|
||||||
# Option 2: Linux server with native Redis
|
|
||||||
sudo apt install redis-server -y
|
|
||||||
sudo systemctl start redis-server
|
|
||||||
|
|
||||||
# Then in .env:
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
TAT_TEST_MODE=false
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Upstash Free Tier
|
|
||||||
|
|
||||||
```
|
|
||||||
✅ 10,000 commands/day (FREE forever)
|
|
||||||
✅ 256 MB storage
|
|
||||||
✅ TLS encryption
|
|
||||||
✅ Global CDN
|
|
||||||
✅ Zero maintenance
|
|
||||||
|
|
||||||
Perfect for:
|
|
||||||
- Development
|
|
||||||
- Testing
|
|
||||||
- Small production (<100 users)
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Commands Cheat Sheet
|
|
||||||
|
|
||||||
### Upstash Console CLI
|
|
||||||
|
|
||||||
```redis
|
|
||||||
# Test connection
|
|
||||||
PING
|
|
||||||
|
|
||||||
# List all keys
|
|
||||||
KEYS *
|
|
||||||
|
|
||||||
# Count keys
|
|
||||||
DBSIZE
|
|
||||||
|
|
||||||
# View queued jobs
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
|
|
||||||
# Get job details
|
|
||||||
HGETALL bull:tatQueue:tat50-<REQUEST_ID>-<LEVEL_ID>
|
|
||||||
|
|
||||||
# Clear all data (CAREFUL!)
|
|
||||||
FLUSHALL
|
|
||||||
|
|
||||||
# Get server info
|
|
||||||
INFO
|
|
||||||
|
|
||||||
# Monitor live commands
|
|
||||||
MONITOR
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Quick Links
|
|
||||||
|
|
||||||
- **Upstash Console**: https://console.upstash.com/
|
|
||||||
- **Upstash Docs**: https://docs.upstash.com/redis
|
|
||||||
- **Full Setup Guide**: `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
- **TAT System Docs**: `docs/TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
- **Quick Start**: `TAT_QUICK_START.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Support
|
|
||||||
|
|
||||||
**Connection Issues?**
|
|
||||||
1. Verify URL format: `rediss://` (double 's')
|
|
||||||
2. Check Upstash database status (should be "Active")
|
|
||||||
3. Test in Upstash Console CLI
|
|
||||||
|
|
||||||
**Need Help?**
|
|
||||||
- Check logs: `Get-Content logs/app.log -Tail 50 -Wait`
|
|
||||||
- Review docs: `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**✅ Setup Complete? Start Testing!**
|
|
||||||
|
|
||||||
Create a 6-hour TAT request and watch notifications arrive in 3, 4.5, and 6 minutes!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
|
|
||||||
@ -1,345 +0,0 @@
|
|||||||
# ❓ Why Are TAT Alerts Not Showing?
|
|
||||||
|
|
||||||
## 🎯 Follow These Steps IN ORDER
|
|
||||||
|
|
||||||
### ✅ Step 1: Is Redis Connected?
|
|
||||||
|
|
||||||
**Check your backend console:**
|
|
||||||
|
|
||||||
```
|
|
||||||
Look for one of these messages:
|
|
||||||
```
|
|
||||||
|
|
||||||
**✅ GOOD (Redis is working):**
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Worker is ready and listening
|
|
||||||
```
|
|
||||||
|
|
||||||
**❌ BAD (Redis NOT working):**
|
|
||||||
```
|
|
||||||
⚠️ [TAT Worker] Redis connection failed
|
|
||||||
⚠️ [TAT Queue] Redis connection failed after 3 attempts
|
|
||||||
```
|
|
||||||
|
|
||||||
**If you see the BAD message:**
|
|
||||||
|
|
||||||
→ **STOP HERE!** TAT alerts will NOT work without Redis!
|
|
||||||
|
|
||||||
→ **Setup Upstash Redis NOW:**
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Sign up (free)
|
|
||||||
3. Create database
|
|
||||||
4. Copy Redis URL
|
|
||||||
5. Add to `Re_Backend/.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:PASSWORD@host.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
6. Restart backend
|
|
||||||
7. Verify you see "Connected to Redis"
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 2: Have You Submitted a Request?
|
|
||||||
|
|
||||||
**TAT monitoring starts ONLY when:**
|
|
||||||
- ✅ Request is **SUBMITTED** (not just created/saved)
|
|
||||||
- ✅ Status changes from DRAFT → PENDING
|
|
||||||
|
|
||||||
**To verify:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
request_number,
|
|
||||||
status,
|
|
||||||
submission_date
|
|
||||||
FROM workflow_requests
|
|
||||||
WHERE request_number = 'YOUR_REQUEST_NUMBER';
|
|
||||||
```
|
|
||||||
|
|
||||||
**Check:**
|
|
||||||
- `status` should be `PENDING`, `IN_PROGRESS`, or later
|
|
||||||
- `submission_date` should NOT be NULL
|
|
||||||
|
|
||||||
**If status is DRAFT:**
|
|
||||||
→ Click "Submit" button on the request
|
|
||||||
→ TAT monitoring will start
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 3: Has Enough Time Passed?
|
|
||||||
|
|
||||||
**In TEST MODE (TAT_TEST_MODE=true):**
|
|
||||||
- 1 hour = 1 minute
|
|
||||||
- For 6-hour TAT:
|
|
||||||
- 50% alert at: **3 minutes**
|
|
||||||
- 75% alert at: **4.5 minutes**
|
|
||||||
- 100% breach at: **6 minutes**
|
|
||||||
|
|
||||||
**In PRODUCTION MODE:**
|
|
||||||
- 1 hour = 1 hour (real time)
|
|
||||||
- For 24-hour TAT:
|
|
||||||
- 50% alert at: **12 hours**
|
|
||||||
- 75% alert at: **18 hours**
|
|
||||||
- 100% breach at: **24 hours**
|
|
||||||
|
|
||||||
**Check when request was submitted:**
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
request_number,
|
|
||||||
submission_date,
|
|
||||||
NOW() - submission_date as time_since_submission
|
|
||||||
FROM workflow_requests
|
|
||||||
WHERE request_number = 'YOUR_REQUEST_NUMBER';
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 4: Are Alerts in the Database?
|
|
||||||
|
|
||||||
**Run this SQL:**
|
|
||||||
```sql
|
|
||||||
-- Check if table exists
|
|
||||||
SELECT COUNT(*) FROM tat_alerts;
|
|
||||||
|
|
||||||
-- If table exists, check for your request
|
|
||||||
SELECT
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.alert_message,
|
|
||||||
ta.metadata
|
|
||||||
FROM tat_alerts ta
|
|
||||||
JOIN workflow_requests w ON ta.request_id = w.request_id
|
|
||||||
WHERE w.request_number = 'YOUR_REQUEST_NUMBER'
|
|
||||||
ORDER BY ta.alert_sent_at;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- **0 rows** → No alerts sent yet (wait longer OR Redis not connected)
|
|
||||||
- **1+ rows** → Alerts exist! (Go to Step 5)
|
|
||||||
|
|
||||||
**If table doesn't exist:**
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run migrate
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 5: Is API Returning tatAlerts?
|
|
||||||
|
|
||||||
**Test the API directly:**
|
|
||||||
|
|
||||||
**Method 1: Use Debug Endpoint**
|
|
||||||
```bash
|
|
||||||
curl http://localhost:5000/api/debug/workflow-details/YOUR_REQUEST_NUMBER
|
|
||||||
```
|
|
||||||
|
|
||||||
**Look for:**
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"structure": {
|
|
||||||
"hasTatAlerts": true, ← Should be true
|
|
||||||
"tatAlertsCount": 2 ← Should be > 0
|
|
||||||
},
|
|
||||||
"tatAlerts": [...] ← Should have data
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Method 2: Check Network Tab in Browser**
|
|
||||||
|
|
||||||
1. Open Request Detail page
|
|
||||||
2. Open DevTools (F12) → Network tab
|
|
||||||
3. Find the API call to `/workflows/{requestNumber}/details`
|
|
||||||
4. Click on it
|
|
||||||
5. Check Response tab
|
|
||||||
6. Look for `tatAlerts` array
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 6: Is Frontend Receiving tatAlerts?
|
|
||||||
|
|
||||||
**Open Browser Console (F12 → Console)**
|
|
||||||
|
|
||||||
**When you open Request Detail, you should see:**
|
|
||||||
```javascript
|
|
||||||
[RequestDetail] TAT Alerts received from API: 2 [Array(2)]
|
|
||||||
```
|
|
||||||
|
|
||||||
**If you see:**
|
|
||||||
```javascript
|
|
||||||
[RequestDetail] TAT Alerts received from API: 0 []
|
|
||||||
```
|
|
||||||
|
|
||||||
→ API is NOT returning alerts (go back to Step 4)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ✅ Step 7: Are Alerts Being Displayed?
|
|
||||||
|
|
||||||
**In Request Detail:**
|
|
||||||
1. Click **"Workflow" tab**
|
|
||||||
2. Scroll to the approver card
|
|
||||||
3. Look under the approver's comment section
|
|
||||||
|
|
||||||
**You should see yellow/orange/red boxes with:**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1 - 50% TAT Threshold
|
|
||||||
```
|
|
||||||
|
|
||||||
**If you DON'T see them:**
|
|
||||||
→ Check browser console for JavaScript errors
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 Quick Diagnostic
|
|
||||||
|
|
||||||
**Run ALL of these and share the results:**
|
|
||||||
|
|
||||||
### 1. Backend Status:
|
|
||||||
```bash
|
|
||||||
curl http://localhost:5000/api/debug/tat-status
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Database Query:
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) as total FROM tat_alerts;
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Browser Console:
|
|
||||||
```javascript
|
|
||||||
// Open Request Detail
|
|
||||||
// Check console for:
|
|
||||||
[RequestDetail] TAT Alerts received from API: X [...]
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. Network Response:
|
|
||||||
```
|
|
||||||
DevTools → Network → workflow details call → Response tab
|
|
||||||
Look for "tatAlerts" field
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 Most Likely Issues (In Order)
|
|
||||||
|
|
||||||
### 1. Redis Not Connected (90% of cases)
|
|
||||||
**Symptom**: No "Connected to Redis" in logs
|
|
||||||
**Fix**: Setup Upstash, add REDIS_URL, restart
|
|
||||||
|
|
||||||
### 2. Request Not Submitted (5%)
|
|
||||||
**Symptom**: Request status is DRAFT
|
|
||||||
**Fix**: Click Submit button
|
|
||||||
|
|
||||||
### 3. Not Enough Time Passed (3%)
|
|
||||||
**Symptom**: Submitted < 3 minutes ago (in test mode)
|
|
||||||
**Fix**: Wait 3 minutes for first alert
|
|
||||||
|
|
||||||
### 4. TAT Worker Not Running (1%)
|
|
||||||
**Symptom**: Redis connected but no "Worker is ready" message
|
|
||||||
**Fix**: Restart backend server
|
|
||||||
|
|
||||||
### 5. Frontend Code Not Applied (1%)
|
|
||||||
**Symptom**: API returns tatAlerts but UI doesn't show them
|
|
||||||
**Fix**: Refresh browser, clear cache
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚨 Emergency Checklist
|
|
||||||
|
|
||||||
**Do this RIGHT NOW to verify everything:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Check backend console for Redis connection
|
|
||||||
# Look for: ✅ [TAT Queue] Connected to Redis
|
|
||||||
|
|
||||||
# 2. If NOT connected, setup Upstash:
|
|
||||||
# https://console.upstash.com/
|
|
||||||
|
|
||||||
# 3. Add to .env:
|
|
||||||
# REDIS_URL=rediss://...
|
|
||||||
# TAT_TEST_MODE=true
|
|
||||||
|
|
||||||
# 4. Restart backend
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# 5. Check you see "Connected to Redis"
|
|
||||||
|
|
||||||
# 6. Create NEW request with 6-hour TAT
|
|
||||||
|
|
||||||
# 7. SUBMIT the request
|
|
||||||
|
|
||||||
# 8. Wait 3 minutes
|
|
||||||
|
|
||||||
# 9. Open browser console (F12)
|
|
||||||
|
|
||||||
# 10. Open Request Detail page
|
|
||||||
|
|
||||||
# 11. Check console log for:
|
|
||||||
# [RequestDetail] TAT Alerts received from API: X [...]
|
|
||||||
|
|
||||||
# 12. Go to Workflow tab
|
|
||||||
|
|
||||||
# 13. Alerts should appear!
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📞 Share These for Help
|
|
||||||
|
|
||||||
If still not working, share:
|
|
||||||
|
|
||||||
1. **Backend console output** (first 50 lines after starting)
|
|
||||||
2. **Result of**: `curl http://localhost:5000/api/debug/tat-status`
|
|
||||||
3. **SQL result**: `SELECT COUNT(*) FROM tat_alerts;`
|
|
||||||
4. **Browser console** when opening Request Detail
|
|
||||||
5. **Request number** you're testing with
|
|
||||||
6. **How long ago** was the request submitted?
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Working Example
|
|
||||||
|
|
||||||
**When everything works, you'll see:**
|
|
||||||
|
|
||||||
**Backend Console:**
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Worker is ready
|
|
||||||
[TAT Scheduler] ✅ TAT jobs scheduled for request REQ-2025-001
|
|
||||||
```
|
|
||||||
|
|
||||||
**After 3 minutes (test mode):**
|
|
||||||
```
|
|
||||||
[TAT Processor] Processing tat50 for request REQ-2025-001
|
|
||||||
[TAT Processor] TAT alert record created for tat50
|
|
||||||
[TAT Processor] tat50 notification sent
|
|
||||||
```
|
|
||||||
|
|
||||||
**Browser Console:**
|
|
||||||
```javascript
|
|
||||||
[RequestDetail] TAT Alerts received from API: 1 [
|
|
||||||
{
|
|
||||||
alertType: "TAT_50",
|
|
||||||
thresholdPercentage: 50,
|
|
||||||
alertSentAt: "2025-11-04T...",
|
|
||||||
...
|
|
||||||
}
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
**UI Display:**
|
|
||||||
```
|
|
||||||
⏳ Reminder 1 - 50% TAT Threshold
|
|
||||||
50% of SLA breach reminder have been sent
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Most likely you just need to setup Redis! See `START_HERE.md`**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
|
|
||||||
@ -35,19 +35,28 @@ notifications ||--o{ sms_logs : "sends"
|
|||||||
|
|
||||||
users {
|
users {
|
||||||
uuid user_id PK
|
uuid user_id PK
|
||||||
varchar employee_id UK "HR System ID"
|
varchar employee_id UK "HR System ID - Optional"
|
||||||
|
varchar okta_sub UK "Okta Subject ID"
|
||||||
varchar email UK "Primary Email"
|
varchar email UK "Primary Email"
|
||||||
varchar first_name
|
varchar first_name "Optional"
|
||||||
varchar last_name
|
varchar last_name "Optional"
|
||||||
varchar display_name "Full Name"
|
varchar display_name "Full Name"
|
||||||
varchar department
|
varchar department "Optional"
|
||||||
varchar designation
|
varchar designation "Optional"
|
||||||
varchar phone
|
varchar phone "Office Phone - Optional"
|
||||||
boolean is_active "Account Status"
|
varchar manager "Reporting Manager - SSO Optional"
|
||||||
boolean is_admin "Super User Flag"
|
varchar second_email "Alternate Email - SSO Optional"
|
||||||
timestamp last_login
|
text job_title "Detailed Job Title - SSO Optional"
|
||||||
timestamp created_at
|
varchar employee_number "HR Employee Number - SSO Optional"
|
||||||
timestamp updated_at
|
varchar postal_address "Work Location - SSO Optional"
|
||||||
|
varchar mobile_phone "Mobile Contact - SSO Optional"
|
||||||
|
jsonb ad_groups "AD Group Memberships - SSO Optional"
|
||||||
|
jsonb location "Location Details - Optional"
|
||||||
|
boolean is_active "Account Status Default true"
|
||||||
|
enum role "USER, MANAGEMENT, ADMIN - RBAC Default USER"
|
||||||
|
timestamp last_login "Last Login Time"
|
||||||
|
timestamp created_at "Record Created"
|
||||||
|
timestamp updated_at "Record Updated"
|
||||||
}
|
}
|
||||||
|
|
||||||
workflow_requests {
|
workflow_requests {
|
||||||
|
|||||||
506
docs/FRESH_DATABASE_SETUP.md
Normal file
506
docs/FRESH_DATABASE_SETUP.md
Normal file
@ -0,0 +1,506 @@
|
|||||||
|
# Fresh Database Setup Guide
|
||||||
|
|
||||||
|
## 🎯 Overview
|
||||||
|
|
||||||
|
This guide walks you through setting up a **completely fresh database** for the Royal Enfield Workflow Management System.
|
||||||
|
|
||||||
|
**Use this when:**
|
||||||
|
- First-time installation
|
||||||
|
- Major schema changes require full rebuild
|
||||||
|
- Moving to production environment
|
||||||
|
- Resetting development database
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ⚡ Quick Start (Automated)
|
||||||
|
|
||||||
|
### Linux/Mac:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd Re_Backend
|
||||||
|
chmod +x scripts/fresh-database-setup.sh
|
||||||
|
./scripts/fresh-database-setup.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### Windows:
|
||||||
|
|
||||||
|
```cmd
|
||||||
|
cd Re_Backend
|
||||||
|
scripts\fresh-database-setup.bat
|
||||||
|
```
|
||||||
|
|
||||||
|
**The automated script will:**
|
||||||
|
1. ✅ Drop existing database (with confirmation)
|
||||||
|
2. ✅ Create fresh database
|
||||||
|
3. ✅ Install PostgreSQL extensions
|
||||||
|
4. ✅ Run all migrations in order
|
||||||
|
5. ✅ Seed admin configuration
|
||||||
|
6. ✅ Verify database structure
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 Manual Setup (Step-by-Step)
|
||||||
|
|
||||||
|
### Prerequisites
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Check PostgreSQL
|
||||||
|
psql --version
|
||||||
|
# Required: PostgreSQL 16.x
|
||||||
|
|
||||||
|
# Check Redis
|
||||||
|
redis-cli ping
|
||||||
|
# Expected: PONG
|
||||||
|
|
||||||
|
# Check Node.js
|
||||||
|
node --version
|
||||||
|
# Required: 18.x or higher
|
||||||
|
|
||||||
|
# Configure environment
|
||||||
|
cp env.example .env
|
||||||
|
# Edit .env with your settings
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 1: Drop Existing Database
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Connect to PostgreSQL
|
||||||
|
psql -U postgres
|
||||||
|
|
||||||
|
# Drop database if exists
|
||||||
|
DROP DATABASE IF EXISTS royal_enfield_workflow;
|
||||||
|
|
||||||
|
# Exit psql
|
||||||
|
\q
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 2: Create Fresh Database
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Create new database
|
||||||
|
psql -U postgres -c "CREATE DATABASE royal_enfield_workflow OWNER postgres;"
|
||||||
|
|
||||||
|
# Verify
|
||||||
|
psql -U postgres -l | grep royal_enfield
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 3: Install Extensions
|
||||||
|
|
||||||
|
```bash
|
||||||
|
psql -U postgres -d royal_enfield_workflow <<EOF
|
||||||
|
-- UUID extension
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
|
||||||
|
|
||||||
|
-- Text search
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
|
||||||
|
|
||||||
|
-- JSONB operators
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "btree_gin";
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 4: Run Migrations
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd Re_Backend
|
||||||
|
npm install # If not already done
|
||||||
|
npm run migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected Output:**
|
||||||
|
```
|
||||||
|
✅ Migration: 2025103000-create-users.ts
|
||||||
|
✅ Migration: 2025103001-create-workflow-requests.ts
|
||||||
|
✅ Migration: 2025103002-create-approval-levels.ts
|
||||||
|
✅ Migration: 2025103003-create-participants.ts
|
||||||
|
✅ Migration: 2025103004-create-documents.ts
|
||||||
|
✅ Migration: 20251031_01_create_subscriptions.ts
|
||||||
|
✅ Migration: 20251031_02_create_activities.ts
|
||||||
|
✅ Migration: 20251031_03_create_work_notes.ts
|
||||||
|
✅ Migration: 20251031_04_create_work_note_attachments.ts
|
||||||
|
✅ Migration: 20251104-create-tat-alerts.ts
|
||||||
|
✅ Migration: 20251104-create-holidays.ts
|
||||||
|
✅ Migration: 20251104-create-admin-config.ts
|
||||||
|
✅ Migration: 20251111-create-conclusion-remarks.ts
|
||||||
|
✅ Migration: 20251111-create-notifications.ts
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 5: Seed Admin Configuration
|
||||||
|
|
||||||
|
```bash
|
||||||
|
npm run seed:config
|
||||||
|
```
|
||||||
|
|
||||||
|
**This creates default settings for:**
|
||||||
|
- Email notifications
|
||||||
|
- TAT thresholds
|
||||||
|
- Business hours
|
||||||
|
- Holiday calendar
|
||||||
|
- AI provider settings
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 6: Assign Admin User
|
||||||
|
|
||||||
|
**Option A: Via SQL Script (Replace YOUR_EMAIL first)**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Edit the script
|
||||||
|
nano scripts/assign-admin-user.sql
|
||||||
|
# Change: YOUR_EMAIL@royalenfield.com
|
||||||
|
|
||||||
|
# Run it
|
||||||
|
psql -d royal_enfield_workflow -f scripts/assign-admin-user.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
**Option B: Via Direct SQL**
|
||||||
|
|
||||||
|
```sql
|
||||||
|
psql -d royal_enfield_workflow
|
||||||
|
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN'
|
||||||
|
WHERE email = 'your-email@royalenfield.com';
|
||||||
|
|
||||||
|
-- Verify
|
||||||
|
SELECT email, role FROM users WHERE role = 'ADMIN';
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 7: Verify Setup
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Check all tables created
|
||||||
|
psql -d royal_enfield_workflow -c "\dt"
|
||||||
|
|
||||||
|
# Check user role enum
|
||||||
|
psql -d royal_enfield_workflow -c "\dT+ user_role_enum"
|
||||||
|
|
||||||
|
# Check your user
|
||||||
|
psql -d royal_enfield_workflow -c "SELECT email, role FROM users WHERE email = 'your-email@royalenfield.com';"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 8: Start Backend
|
||||||
|
|
||||||
|
```bash
|
||||||
|
npm run dev
|
||||||
|
```
|
||||||
|
|
||||||
|
Expected output:
|
||||||
|
```
|
||||||
|
🚀 Server started on port 5000
|
||||||
|
🗄️ Database connected
|
||||||
|
🔴 Redis connected
|
||||||
|
📡 WebSocket server ready
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 Database Schema (Fresh Setup)
|
||||||
|
|
||||||
|
### Tables Created (in order):
|
||||||
|
|
||||||
|
1. ✅ **users** - User accounts with RBAC (role field)
|
||||||
|
2. ✅ **workflow_requests** - Workflow requests
|
||||||
|
3. ✅ **approval_levels** - Approval workflow steps
|
||||||
|
4. ✅ **participants** - Request participants
|
||||||
|
5. ✅ **documents** - Document attachments
|
||||||
|
6. ✅ **subscriptions** - User notification preferences
|
||||||
|
7. ✅ **activities** - Audit trail
|
||||||
|
8. ✅ **work_notes** - Collaboration messages
|
||||||
|
9. ✅ **work_note_attachments** - Work note files
|
||||||
|
10. ✅ **tat_alerts** - TAT/SLA alerts
|
||||||
|
11. ✅ **holidays** - Holiday calendar
|
||||||
|
12. ✅ **admin_config** - System configuration
|
||||||
|
13. ✅ **conclusion_remarks** - AI-generated conclusions
|
||||||
|
14. ✅ **notifications** - Notification queue
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔑 User Roles (RBAC)
|
||||||
|
|
||||||
|
### Default Role: USER
|
||||||
|
|
||||||
|
**All new users automatically get `USER` role on first login.**
|
||||||
|
|
||||||
|
### Assign MANAGEMENT Role
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Single user
|
||||||
|
UPDATE users SET role = 'MANAGEMENT'
|
||||||
|
WHERE email = 'manager@royalenfield.com';
|
||||||
|
|
||||||
|
-- Multiple users
|
||||||
|
UPDATE users SET role = 'MANAGEMENT'
|
||||||
|
WHERE email IN (
|
||||||
|
'manager1@royalenfield.com',
|
||||||
|
'manager2@royalenfield.com'
|
||||||
|
);
|
||||||
|
|
||||||
|
-- By department
|
||||||
|
UPDATE users SET role = 'MANAGEMENT'
|
||||||
|
WHERE department = 'Management' AND is_active = true;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Assign ADMIN Role
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Single user
|
||||||
|
UPDATE users SET role = 'ADMIN'
|
||||||
|
WHERE email = 'admin@royalenfield.com';
|
||||||
|
|
||||||
|
-- Multiple admins
|
||||||
|
UPDATE users SET role = 'ADMIN'
|
||||||
|
WHERE email IN (
|
||||||
|
'admin1@royalenfield.com',
|
||||||
|
'admin2@royalenfield.com'
|
||||||
|
);
|
||||||
|
|
||||||
|
-- By department
|
||||||
|
UPDATE users SET role = 'ADMIN'
|
||||||
|
WHERE department = 'IT' AND is_active = true;
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔍 Verification Queries
|
||||||
|
|
||||||
|
### Check All Tables
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT tablename
|
||||||
|
FROM pg_tables
|
||||||
|
WHERE schemaname = 'public'
|
||||||
|
ORDER BY tablename;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Role Distribution
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as user_count,
|
||||||
|
COUNT(CASE WHEN is_active = true THEN 1 END) as active_users
|
||||||
|
FROM users
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Admin Users
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
department,
|
||||||
|
role,
|
||||||
|
created_at,
|
||||||
|
last_login
|
||||||
|
FROM users
|
||||||
|
WHERE role = 'ADMIN' AND is_active = true
|
||||||
|
ORDER BY email;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Extended SSO Fields
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
manager,
|
||||||
|
job_title,
|
||||||
|
postal_address,
|
||||||
|
mobile_phone,
|
||||||
|
array_length(ad_groups, 1) as ad_group_count
|
||||||
|
FROM users
|
||||||
|
WHERE email = 'your-email@royalenfield.com';
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🧪 Test Your Setup
|
||||||
|
|
||||||
|
### 1. Create Test User (via API)
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -X POST http://localhost:5000/api/v1/auth/okta/callback \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{
|
||||||
|
"email": "test@royalenfield.com",
|
||||||
|
"displayName": "Test User",
|
||||||
|
"oktaSub": "test-sub-123"
|
||||||
|
}'
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Check User Created with Default Role
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT email, role FROM users WHERE email = 'test@royalenfield.com';
|
||||||
|
-- Expected: role = 'USER'
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Update to ADMIN
|
||||||
|
|
||||||
|
```sql
|
||||||
|
UPDATE users SET role = 'ADMIN' WHERE email = 'test@royalenfield.com';
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Verify API Access
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Login and get token
|
||||||
|
curl -X POST http://localhost:5000/api/v1/auth/login \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"email": "test@royalenfield.com", ...}'
|
||||||
|
|
||||||
|
# Try admin endpoint (should work if ADMIN role)
|
||||||
|
curl http://localhost:5000/api/v1/admin/configurations \
|
||||||
|
-H "Authorization: Bearer YOUR_TOKEN"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📦 Migration Files (Execution Order)
|
||||||
|
|
||||||
|
| Order | Migration File | Purpose |
|
||||||
|
|-------|---------------|---------|
|
||||||
|
| 1 | `2025103000-create-users.ts` | Users table with role + SSO fields |
|
||||||
|
| 2 | `2025103001-create-workflow-requests.ts` | Workflow requests |
|
||||||
|
| 3 | `2025103002-create-approval-levels.ts` | Approval levels |
|
||||||
|
| 4 | `2025103003-create-participants.ts` | Participants |
|
||||||
|
| 5 | `2025103004-create-documents.ts` | Documents |
|
||||||
|
| 6 | `20251031_01_create_subscriptions.ts` | Subscriptions |
|
||||||
|
| 7 | `20251031_02_create_activities.ts` | Activities/Audit trail |
|
||||||
|
| 8 | `20251031_03_create_work_notes.ts` | Work notes |
|
||||||
|
| 9 | `20251031_04_create_work_note_attachments.ts` | Work note attachments |
|
||||||
|
| 10 | `20251104-create-tat-alerts.ts` | TAT alerts |
|
||||||
|
| 11 | `20251104-create-holidays.ts` | Holidays |
|
||||||
|
| 12 | `20251104-create-admin-config.ts` | Admin configuration |
|
||||||
|
| 13 | `20251111-create-conclusion-remarks.ts` | Conclusion remarks |
|
||||||
|
| 14 | `20251111-create-notifications.ts` | Notifications |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ⚠️ Important Notes
|
||||||
|
|
||||||
|
### is_admin Field REMOVED
|
||||||
|
|
||||||
|
❌ **OLD (Don't use):**
|
||||||
|
```typescript
|
||||||
|
if (user.is_admin) { ... }
|
||||||
|
```
|
||||||
|
|
||||||
|
✅ **NEW (Use this):**
|
||||||
|
```typescript
|
||||||
|
if (user.role === 'ADMIN') { ... }
|
||||||
|
```
|
||||||
|
|
||||||
|
### Default Values
|
||||||
|
|
||||||
|
| Field | Default Value | Notes |
|
||||||
|
|-------|---------------|-------|
|
||||||
|
| `role` | `USER` | Everyone starts as USER |
|
||||||
|
| `is_active` | `true` | Accounts active by default |
|
||||||
|
| All SSO fields | `null` | Optional, populated from Okta |
|
||||||
|
|
||||||
|
### Automatic Behaviors
|
||||||
|
|
||||||
|
- 🔄 **First Login**: User created with role=USER
|
||||||
|
- 🔒 **Admin Assignment**: Manual via SQL or API
|
||||||
|
- 📧 **Email**: Required and unique
|
||||||
|
- 🆔 **oktaSub**: Required and unique from SSO
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚨 Troubleshooting
|
||||||
|
|
||||||
|
### Migration Fails
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Check which migrations ran
|
||||||
|
psql -d royal_enfield_workflow -c "SELECT * FROM SequelizeMeta ORDER BY name;"
|
||||||
|
|
||||||
|
# Rollback if needed
|
||||||
|
npm run migrate:undo
|
||||||
|
|
||||||
|
# Re-run
|
||||||
|
npm run migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
### User Not Created on Login
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Check if user exists
|
||||||
|
SELECT * FROM users WHERE email = 'your-email@royalenfield.com';
|
||||||
|
|
||||||
|
-- Check Okta sub
|
||||||
|
SELECT * FROM users WHERE okta_sub = 'your-okta-sub';
|
||||||
|
```
|
||||||
|
|
||||||
|
### Role Not Working
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Verify role
|
||||||
|
SELECT email, role, is_active FROM users WHERE email = 'your-email@royalenfield.com';
|
||||||
|
|
||||||
|
-- Check role enum
|
||||||
|
\dT+ user_role_enum
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📞 Quick Commands Reference
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Fresh setup (automated)
|
||||||
|
./scripts/fresh-database-setup.sh
|
||||||
|
|
||||||
|
# Make yourself admin
|
||||||
|
psql -d royal_enfield_workflow -c "UPDATE users SET role = 'ADMIN' WHERE email = 'your@email.com';"
|
||||||
|
|
||||||
|
# Check your role
|
||||||
|
psql -d royal_enfield_workflow -c "SELECT email, role FROM users WHERE email = 'your@email.com';"
|
||||||
|
|
||||||
|
# Start server
|
||||||
|
npm run dev
|
||||||
|
|
||||||
|
# Check logs
|
||||||
|
tail -f logs/application.log
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ✅ Success Checklist
|
||||||
|
|
||||||
|
- [ ] PostgreSQL 16.x installed
|
||||||
|
- [ ] Redis running
|
||||||
|
- [ ] .env configured
|
||||||
|
- [ ] Database created
|
||||||
|
- [ ] All migrations completed (14 tables)
|
||||||
|
- [ ] Admin config seeded
|
||||||
|
- [ ] At least one ADMIN user assigned
|
||||||
|
- [ ] Backend server starts without errors
|
||||||
|
- [ ] Can login and access admin endpoints
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Your fresh database is now production-ready!** 🎉
|
||||||
|
|
||||||
222
docs/IN_APP_NOTIFICATIONS_SETUP.md
Normal file
222
docs/IN_APP_NOTIFICATIONS_SETUP.md
Normal file
@ -0,0 +1,222 @@
|
|||||||
|
# In-App Notification System - Setup Guide
|
||||||
|
|
||||||
|
## 🎯 Overview
|
||||||
|
|
||||||
|
Complete real-time in-app notification system for Royal Enfield Workflow Management System.
|
||||||
|
|
||||||
|
## ✅ Features Implemented
|
||||||
|
|
||||||
|
### Backend:
|
||||||
|
1. **Notification Model** (`models/Notification.ts`)
|
||||||
|
- Stores all in-app notifications
|
||||||
|
- Tracks read/unread status
|
||||||
|
- Supports priority levels (LOW, MEDIUM, HIGH, URGENT)
|
||||||
|
- Metadata for request context
|
||||||
|
|
||||||
|
2. **Notification Controller** (`controllers/notification.controller.ts`)
|
||||||
|
- GET `/api/v1/notifications` - List user's notifications with pagination
|
||||||
|
- GET `/api/v1/notifications/unread-count` - Get unread count
|
||||||
|
- PATCH `/api/v1/notifications/:notificationId/read` - Mark as read
|
||||||
|
- POST `/api/v1/notifications/mark-all-read` - Mark all as read
|
||||||
|
- DELETE `/api/v1/notifications/:notificationId` - Delete notification
|
||||||
|
|
||||||
|
3. **Enhanced Notification Service** (`services/notification.service.ts`)
|
||||||
|
- Saves notifications to database (for in-app display)
|
||||||
|
- Emits real-time socket.io events
|
||||||
|
- Sends push notifications (if subscribed)
|
||||||
|
- All in one call: `notificationService.sendToUsers()`
|
||||||
|
|
||||||
|
4. **Socket.io Enhancement** (`realtime/socket.ts`)
|
||||||
|
- Added `join:user` event for personal notification room
|
||||||
|
- Added `emitToUser()` function for targeted notifications
|
||||||
|
- Real-time delivery without page refresh
|
||||||
|
|
||||||
|
### Frontend:
|
||||||
|
1. **Notification API Service** (`services/notificationApi.ts`)
|
||||||
|
- Complete API client for all notification endpoints
|
||||||
|
|
||||||
|
2. **PageLayout Integration** (`components/layout/PageLayout/PageLayout.tsx`)
|
||||||
|
- Real-time notification bell with unread count badge
|
||||||
|
- Dropdown showing latest 10 notifications
|
||||||
|
- Click to mark as read and navigate to request
|
||||||
|
- "Mark all as read" functionality
|
||||||
|
- Auto-refreshes when new notifications arrive
|
||||||
|
- Works even if browser push notifications disabled
|
||||||
|
|
||||||
|
3. **Data Freshness** (MyRequests, OpenRequests, ClosedRequests)
|
||||||
|
- Fixed stale data after DB deletion
|
||||||
|
- Always shows fresh data from API
|
||||||
|
|
||||||
|
## 📦 Database Setup
|
||||||
|
|
||||||
|
### Step 1: Run Migration
|
||||||
|
|
||||||
|
Execute this SQL in your PostgreSQL database:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
psql -U postgres -d re_workflow_db -f migrations/create_notifications_table.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
OR run manually in pgAdmin/SQL tool:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- See: migrations/create_notifications_table.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Verify Table Created
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT table_name FROM information_schema.tables
|
||||||
|
WHERE table_schema = 'public' AND table_name = 'notifications';
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🚀 How It Works
|
||||||
|
|
||||||
|
### 1. When an Event Occurs (e.g., Request Assigned):
|
||||||
|
|
||||||
|
**Backend:**
|
||||||
|
```typescript
|
||||||
|
await notificationService.sendToUsers(
|
||||||
|
[approverId],
|
||||||
|
{
|
||||||
|
title: 'New request assigned',
|
||||||
|
body: 'Marketing Campaign Approval - REQ-2025-12345',
|
||||||
|
requestId: workflowId,
|
||||||
|
requestNumber: 'REQ-2025-12345',
|
||||||
|
url: `/request/REQ-2025-12345`,
|
||||||
|
type: 'assignment',
|
||||||
|
priority: 'HIGH',
|
||||||
|
actionRequired: true
|
||||||
|
}
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
This automatically:
|
||||||
|
- ✅ Saves notification to `notifications` table
|
||||||
|
- ✅ Emits `notification:new` socket event to user
|
||||||
|
- ✅ Sends browser push notification (if enabled)
|
||||||
|
|
||||||
|
### 2. Frontend Receives Notification:
|
||||||
|
|
||||||
|
**PageLayout** automatically:
|
||||||
|
- ✅ Receives socket event in real-time
|
||||||
|
- ✅ Updates notification count badge
|
||||||
|
- ✅ Adds to notification dropdown
|
||||||
|
- ✅ Shows blue dot for unread
|
||||||
|
- ✅ User clicks → marks as read → navigates to request
|
||||||
|
|
||||||
|
## 📌 Notification Events (Major)
|
||||||
|
|
||||||
|
Based on your requirement, here are the key events that trigger notifications:
|
||||||
|
|
||||||
|
| Event | Type | Sent To | Priority |
|
||||||
|
|-------|------|---------|----------|
|
||||||
|
| Request Created | `created` | Initiator | MEDIUM |
|
||||||
|
| Request Assigned | `assignment` | Approver | HIGH |
|
||||||
|
| Approval Given | `approved` | Initiator | HIGH |
|
||||||
|
| Request Rejected | `rejected` | Initiator | URGENT |
|
||||||
|
| TAT Alert (50%) | `tat_alert` | Approver | MEDIUM |
|
||||||
|
| TAT Alert (75%) | `tat_alert` | Approver | HIGH |
|
||||||
|
| TAT Breached | `tat_breach` | Approver + Initiator | URGENT |
|
||||||
|
| Work Note Mention | `mention` | Tagged Users | MEDIUM |
|
||||||
|
| New Comment | `comment` | Participants | LOW |
|
||||||
|
|
||||||
|
## 🔧 Configuration
|
||||||
|
|
||||||
|
### Backend (.env):
|
||||||
|
```env
|
||||||
|
# Already configured - no changes needed
|
||||||
|
VAPID_PUBLIC_KEY=your_vapid_public_key
|
||||||
|
VAPID_PRIVATE_KEY=your_vapid_private_key
|
||||||
|
```
|
||||||
|
|
||||||
|
### Frontend (.env):
|
||||||
|
```env
|
||||||
|
# Already configured
|
||||||
|
VITE_API_BASE_URL=http://localhost:5000/api/v1
|
||||||
|
```
|
||||||
|
|
||||||
|
## ✅ Testing
|
||||||
|
|
||||||
|
### 1. Test Basic Notification:
|
||||||
|
```bash
|
||||||
|
# Create a workflow and assign to an approver
|
||||||
|
# Check approver's notification bell - should show count
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Test Real-Time Delivery:
|
||||||
|
```bash
|
||||||
|
# Have 2 users logged in (different browsers)
|
||||||
|
# User A creates request, assigns to User B
|
||||||
|
# User B should see notification appear immediately (no refresh needed)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Test TAT Notifications:
|
||||||
|
```bash
|
||||||
|
# Create request with 1-hour TAT
|
||||||
|
# Wait for threshold notifications (50%, 75%, 100%)
|
||||||
|
# Approver should receive in-app notifications
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Test Work Note Mentions:
|
||||||
|
```bash
|
||||||
|
# Add work note with @mention
|
||||||
|
# Tagged user should receive notification
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🎨 UI Features
|
||||||
|
|
||||||
|
- **Unread Badge**: Shows count (1-9, or "9+" for 10+)
|
||||||
|
- **Blue Dot**: Indicates unread notifications
|
||||||
|
- **Blue Background**: Highlights unread items
|
||||||
|
- **Time Ago**: "5 minutes ago", "2 hours ago", etc.
|
||||||
|
- **Click to Navigate**: Automatically opens the related request
|
||||||
|
- **Mark All Read**: Single click to clear all unread
|
||||||
|
- **Scrollable**: Shows latest 10, with "View all" link
|
||||||
|
|
||||||
|
## 📱 Fallback for Disabled Push Notifications
|
||||||
|
|
||||||
|
Even if user denies browser push notifications:
|
||||||
|
- ✅ In-app notifications ALWAYS work
|
||||||
|
- ✅ Notifications saved to database
|
||||||
|
- ✅ Real-time delivery via socket.io
|
||||||
|
- ✅ No permission required
|
||||||
|
- ✅ Works on all browsers
|
||||||
|
|
||||||
|
## 🔍 Debug Endpoints
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Get notifications for current user
|
||||||
|
GET /api/v1/notifications?page=1&limit=10
|
||||||
|
|
||||||
|
# Get only unread
|
||||||
|
GET /api/v1/notifications?unreadOnly=true
|
||||||
|
|
||||||
|
# Get unread count
|
||||||
|
GET /api/v1/notifications/unread-count
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🎉 Benefits
|
||||||
|
|
||||||
|
1. **No Browser Permission Needed** - Always works, unlike push notifications
|
||||||
|
2. **Real-Time Updates** - Instant delivery via socket.io
|
||||||
|
3. **Persistent** - Saved in database, available after login
|
||||||
|
4. **Actionable** - Click to navigate to related request
|
||||||
|
5. **User-Friendly** - Clean UI integrated into header
|
||||||
|
6. **Complete Tracking** - Know what was sent via which channel
|
||||||
|
|
||||||
|
## 🔥 Next Steps (Optional)
|
||||||
|
|
||||||
|
1. **Email Integration**: Send email for URGENT priority notifications
|
||||||
|
2. **SMS Integration**: Critical alerts via SMS
|
||||||
|
3. **Notification Preferences**: Let users choose which events to receive
|
||||||
|
4. **Notification History Page**: Full-page view with filters
|
||||||
|
5. **Sound Alerts**: Play sound when new notification arrives
|
||||||
|
6. **Desktop Notifications**: Browser native notifications (if permitted)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**✅ In-App Notifications are now fully operational!**
|
||||||
|
|
||||||
|
Users will receive instant notifications for all major workflow events, even without browser push permissions enabled.
|
||||||
|
|
||||||
@ -1,549 +0,0 @@
|
|||||||
# KPI Reporting System - Complete Guide
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
This document describes the complete KPI (Key Performance Indicator) reporting system for the Royal Enfield Workflow Management System, including database schema, views, and query examples.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Database Schema
|
|
||||||
|
|
||||||
### 1. TAT Alerts Table (`tat_alerts`)
|
|
||||||
|
|
||||||
**Purpose**: Store all TAT notification records for display and KPI analysis
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE TABLE tat_alerts (
|
|
||||||
alert_id UUID PRIMARY KEY,
|
|
||||||
request_id UUID REFERENCES workflow_requests(request_id),
|
|
||||||
level_id UUID REFERENCES approval_levels(level_id),
|
|
||||||
approver_id UUID REFERENCES users(user_id),
|
|
||||||
alert_type ENUM('TAT_50', 'TAT_75', 'TAT_100'),
|
|
||||||
threshold_percentage INTEGER, -- 50, 75, or 100
|
|
||||||
tat_hours_allocated DECIMAL(10,2),
|
|
||||||
tat_hours_elapsed DECIMAL(10,2),
|
|
||||||
tat_hours_remaining DECIMAL(10,2),
|
|
||||||
level_start_time TIMESTAMP,
|
|
||||||
alert_sent_at TIMESTAMP DEFAULT NOW(),
|
|
||||||
expected_completion_time TIMESTAMP,
|
|
||||||
alert_message TEXT,
|
|
||||||
notification_sent BOOLEAN DEFAULT true,
|
|
||||||
notification_channels TEXT[], -- ['push', 'email', 'sms']
|
|
||||||
is_breached BOOLEAN DEFAULT false,
|
|
||||||
was_completed_on_time BOOLEAN, -- Set when level completed
|
|
||||||
completion_time TIMESTAMP,
|
|
||||||
metadata JSONB DEFAULT '{}',
|
|
||||||
created_at TIMESTAMP DEFAULT NOW()
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
**Key Features**:
|
|
||||||
- ✅ Tracks every TAT notification sent (50%, 75%, 100%)
|
|
||||||
- ✅ Records timing information for KPI calculation
|
|
||||||
- ✅ Stores completion status for compliance reporting
|
|
||||||
- ✅ Metadata includes request title, approver name, priority
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 KPI Categories & Metrics
|
|
||||||
|
|
||||||
### Category 1: Request Volume & Status
|
|
||||||
|
|
||||||
| KPI Name | Description | SQL View | Primary Users |
|
|
||||||
|----------|-------------|----------|---------------|
|
|
||||||
| Total Requests Created | Count of all workflow requests | `vw_request_volume_summary` | All |
|
|
||||||
| Open Requests | Requests currently in progress with age | `vw_workflow_aging` | All |
|
|
||||||
| Approved Requests | Fully approved and closed | `vw_request_volume_summary` | All |
|
|
||||||
| Rejected Requests | Rejected at any stage | `vw_request_volume_summary` | All |
|
|
||||||
|
|
||||||
**Query Examples**:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Total requests created this month
|
|
||||||
SELECT COUNT(*) as total_requests
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE);
|
|
||||||
|
|
||||||
-- Open requests with age
|
|
||||||
SELECT request_number, title, status, age_hours, status_category
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
WHERE status_category = 'IN_PROGRESS'
|
|
||||||
ORDER BY age_hours DESC;
|
|
||||||
|
|
||||||
-- Approved vs Rejected (last 30 days)
|
|
||||||
SELECT
|
|
||||||
status,
|
|
||||||
COUNT(*) as count,
|
|
||||||
ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (), 2) as percentage
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
WHERE closure_date >= CURRENT_DATE - INTERVAL '30 days'
|
|
||||||
AND status IN ('APPROVED', 'REJECTED')
|
|
||||||
GROUP BY status;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Category 2: TAT Efficiency
|
|
||||||
|
|
||||||
| KPI Name | Description | SQL View | Primary Users |
|
|
||||||
|----------|-------------|----------|---------------|
|
|
||||||
| Average TAT Compliance % | % of workflows completed within TAT | `vw_tat_compliance` | All |
|
|
||||||
| Avg Approval Cycle Time | Average time from creation to closure | `vw_request_volume_summary` | All |
|
|
||||||
| Delayed Workflows | Requests currently breaching TAT | `vw_tat_compliance` | All |
|
|
||||||
|
|
||||||
**Query Examples**:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Overall TAT compliance rate
|
|
||||||
SELECT
|
|
||||||
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(CASE WHEN completed_within_tat IS NOT NULL THEN 1 END), 0) as compliance_rate,
|
|
||||||
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) as on_time_count,
|
|
||||||
COUNT(CASE WHEN completed_within_tat = false THEN 1 END) as breached_count
|
|
||||||
FROM vw_tat_compliance;
|
|
||||||
|
|
||||||
-- Average cycle time by priority
|
|
||||||
SELECT
|
|
||||||
priority,
|
|
||||||
ROUND(AVG(cycle_time_hours), 2) as avg_hours,
|
|
||||||
ROUND(AVG(cycle_time_hours) / 24, 2) as avg_days,
|
|
||||||
COUNT(*) as total_requests
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
WHERE closure_date IS NOT NULL
|
|
||||||
GROUP BY priority;
|
|
||||||
|
|
||||||
-- Currently delayed workflows
|
|
||||||
SELECT
|
|
||||||
request_number,
|
|
||||||
approver_name,
|
|
||||||
level_number,
|
|
||||||
tat_status,
|
|
||||||
tat_percentage_used,
|
|
||||||
remaining_hours
|
|
||||||
FROM vw_tat_compliance
|
|
||||||
WHERE tat_status IN ('CRITICAL', 'BREACHED')
|
|
||||||
AND level_status IN ('PENDING', 'IN_PROGRESS')
|
|
||||||
ORDER BY tat_percentage_used DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Category 3: Approver Load
|
|
||||||
|
|
||||||
| KPI Name | Description | SQL View | Primary Users |
|
|
||||||
|----------|-------------|----------|---------------|
|
|
||||||
| Pending Actions (My Queue) | Requests awaiting user approval | `vw_approver_performance` | Approvers |
|
|
||||||
| Approvals Completed | Count of actions in timeframe | `vw_approver_performance` | Approvers |
|
|
||||||
|
|
||||||
**Query Examples**:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- My pending queue (for specific approver)
|
|
||||||
SELECT
|
|
||||||
pending_count,
|
|
||||||
in_progress_count,
|
|
||||||
oldest_pending_hours
|
|
||||||
FROM vw_approver_performance
|
|
||||||
WHERE approver_id = 'USER_ID_HERE';
|
|
||||||
|
|
||||||
-- Approvals completed today
|
|
||||||
SELECT
|
|
||||||
approver_name,
|
|
||||||
COUNT(*) as approvals_today
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE action_date >= CURRENT_DATE
|
|
||||||
AND status IN ('APPROVED', 'REJECTED')
|
|
||||||
GROUP BY approver_name
|
|
||||||
ORDER BY approvals_today DESC;
|
|
||||||
|
|
||||||
-- Approvals completed this week
|
|
||||||
SELECT
|
|
||||||
approver_name,
|
|
||||||
approved_count,
|
|
||||||
rejected_count,
|
|
||||||
(approved_count + rejected_count) as total_actions
|
|
||||||
FROM vw_approver_performance
|
|
||||||
ORDER BY total_actions DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Category 4: Engagement & Quality
|
|
||||||
|
|
||||||
| KPI Name | Description | SQL View | Primary Users |
|
|
||||||
|----------|-------------|----------|---------------|
|
|
||||||
| Comments/Work Notes Added | Collaboration activity | `vw_engagement_metrics` | All |
|
|
||||||
| Attachments Uploaded | Documents added | `vw_engagement_metrics` | All |
|
|
||||||
|
|
||||||
**Query Examples**:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Engagement metrics summary
|
|
||||||
SELECT
|
|
||||||
engagement_level,
|
|
||||||
COUNT(*) as requests_count,
|
|
||||||
AVG(work_notes_count) as avg_comments,
|
|
||||||
AVG(documents_count) as avg_documents
|
|
||||||
FROM vw_engagement_metrics
|
|
||||||
GROUP BY engagement_level;
|
|
||||||
|
|
||||||
-- Most active requests (by comments)
|
|
||||||
SELECT
|
|
||||||
request_number,
|
|
||||||
title,
|
|
||||||
work_notes_count,
|
|
||||||
documents_count,
|
|
||||||
spectators_count
|
|
||||||
FROM vw_engagement_metrics
|
|
||||||
ORDER BY work_notes_count DESC
|
|
||||||
LIMIT 10;
|
|
||||||
|
|
||||||
-- Document upload trends (last 7 days)
|
|
||||||
SELECT
|
|
||||||
DATE(uploaded_at) as date,
|
|
||||||
COUNT(*) as documents_uploaded
|
|
||||||
FROM documents
|
|
||||||
WHERE uploaded_at >= CURRENT_DATE - INTERVAL '7 days'
|
|
||||||
AND is_deleted = false
|
|
||||||
GROUP BY DATE(uploaded_at)
|
|
||||||
ORDER BY date DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 Analytical Reports
|
|
||||||
|
|
||||||
### 1. Request Lifecycle Report
|
|
||||||
|
|
||||||
**Purpose**: End-to-end status with timeline, approvers, and TAT compliance
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
w.request_number,
|
|
||||||
w.title,
|
|
||||||
w.status,
|
|
||||||
w.priority,
|
|
||||||
w.submission_date,
|
|
||||||
w.closure_date,
|
|
||||||
w.cycle_time_hours / 24 as cycle_days,
|
|
||||||
al.level_number,
|
|
||||||
al.approver_name,
|
|
||||||
al.status as level_status,
|
|
||||||
al.completed_within_tat,
|
|
||||||
al.elapsed_hours,
|
|
||||||
al.tat_hours as allocated_hours,
|
|
||||||
ta.threshold_percentage as last_alert_threshold,
|
|
||||||
ta.alert_sent_at as last_alert_time
|
|
||||||
FROM vw_request_volume_summary w
|
|
||||||
LEFT JOIN vw_tat_compliance al ON w.request_id = al.request_id
|
|
||||||
LEFT JOIN vw_tat_alerts_summary ta ON al.level_id = ta.level_id
|
|
||||||
WHERE w.request_number = 'REQ-YYYY-NNNNN'
|
|
||||||
ORDER BY al.level_number;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Export**: Can be exported as CSV using `\copy` or application-level export
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 2. Approver Performance Report
|
|
||||||
|
|
||||||
**Purpose**: Track response time, pending count, TAT compliance by approver
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
ap.approver_name,
|
|
||||||
ap.department,
|
|
||||||
ap.pending_count,
|
|
||||||
ap.approved_count,
|
|
||||||
ap.rejected_count,
|
|
||||||
ROUND(ap.avg_response_time_hours, 2) as avg_response_hours,
|
|
||||||
ROUND(ap.tat_compliance_percentage, 2) as compliance_percent,
|
|
||||||
ap.breaches_count,
|
|
||||||
ROUND(ap.oldest_pending_hours, 2) as oldest_pending_hours
|
|
||||||
FROM vw_approver_performance ap
|
|
||||||
WHERE ap.total_assignments > 0
|
|
||||||
ORDER BY ap.tat_compliance_percentage DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Bar chart or leaderboard
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 3. Department-wise Workflow Summary
|
|
||||||
|
|
||||||
**Purpose**: Compare requests by department
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
department,
|
|
||||||
total_requests,
|
|
||||||
open_requests,
|
|
||||||
approved_requests,
|
|
||||||
rejected_requests,
|
|
||||||
ROUND(approved_requests * 100.0 / NULLIF(total_requests, 0), 2) as approval_rate,
|
|
||||||
ROUND(avg_cycle_time_hours / 24, 2) as avg_cycle_days,
|
|
||||||
express_priority_count,
|
|
||||||
standard_priority_count
|
|
||||||
FROM vw_department_summary
|
|
||||||
WHERE department IS NOT NULL
|
|
||||||
ORDER BY total_requests DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Pie chart or stacked bar chart
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 4. TAT Breach Report
|
|
||||||
|
|
||||||
**Purpose**: List all requests that breached TAT with reasons
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
ta.request_number,
|
|
||||||
ta.request_title,
|
|
||||||
ta.priority,
|
|
||||||
ta.level_number,
|
|
||||||
u.display_name as approver_name,
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.expected_completion_time,
|
|
||||||
ta.completion_time,
|
|
||||||
ta.was_completed_on_time,
|
|
||||||
CASE
|
|
||||||
WHEN ta.completion_time IS NULL THEN 'Still Pending'
|
|
||||||
WHEN ta.was_completed_on_time = false THEN 'Completed Late'
|
|
||||||
ELSE 'Completed On Time'
|
|
||||||
END as status,
|
|
||||||
ta.response_time_after_alert_hours
|
|
||||||
FROM vw_tat_alerts_summary ta
|
|
||||||
LEFT JOIN users u ON ta.approver_id = u.user_id
|
|
||||||
WHERE ta.is_breached = true
|
|
||||||
ORDER BY ta.alert_sent_at DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Table with filters
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 5. Priority Distribution Report
|
|
||||||
|
|
||||||
**Purpose**: Express vs Standard workflows and cycle times
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
priority,
|
|
||||||
COUNT(*) as total_requests,
|
|
||||||
COUNT(CASE WHEN status_category = 'IN_PROGRESS' THEN 1 END) as open_requests,
|
|
||||||
COUNT(CASE WHEN status_category = 'COMPLETED' THEN 1 END) as completed_requests,
|
|
||||||
ROUND(AVG(CASE WHEN closure_date IS NOT NULL THEN cycle_time_hours END), 2) as avg_cycle_hours,
|
|
||||||
ROUND(AVG(CASE WHEN closure_date IS NOT NULL THEN cycle_time_hours / 24 END), 2) as avg_cycle_days
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
GROUP BY priority;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Pie chart + KPI cards
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 6. Workflow Aging Report
|
|
||||||
|
|
||||||
**Purpose**: Workflows open beyond threshold
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT
|
|
||||||
request_number,
|
|
||||||
title,
|
|
||||||
age_days,
|
|
||||||
age_category,
|
|
||||||
current_approver,
|
|
||||||
current_level_age_hours,
|
|
||||||
current_level_tat_hours,
|
|
||||||
current_level_tat_used
|
|
||||||
FROM vw_workflow_aging
|
|
||||||
WHERE age_category IN ('AGING', 'CRITICAL')
|
|
||||||
ORDER BY age_days DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Table with age color-coding
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### 7. Daily/Weekly Trends
|
|
||||||
|
|
||||||
**Purpose**: Track volume and performance trends
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Daily KPIs for last 30 days
|
|
||||||
SELECT
|
|
||||||
date,
|
|
||||||
requests_created,
|
|
||||||
requests_submitted,
|
|
||||||
requests_closed,
|
|
||||||
requests_approved,
|
|
||||||
requests_rejected,
|
|
||||||
ROUND(avg_completion_time_hours, 2) as avg_completion_hours
|
|
||||||
FROM vw_daily_kpi_metrics
|
|
||||||
WHERE date >= CURRENT_DATE - INTERVAL '30 days'
|
|
||||||
ORDER BY date DESC;
|
|
||||||
|
|
||||||
-- Weekly aggregation
|
|
||||||
SELECT
|
|
||||||
DATE_TRUNC('week', date) as week_start,
|
|
||||||
SUM(requests_created) as weekly_created,
|
|
||||||
SUM(requests_closed) as weekly_closed,
|
|
||||||
ROUND(AVG(avg_completion_time_hours), 2) as avg_completion_hours
|
|
||||||
FROM vw_daily_kpi_metrics
|
|
||||||
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
|
|
||||||
GROUP BY DATE_TRUNC('week', date)
|
|
||||||
ORDER BY week_start DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualization**: Line chart or area chart
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 TAT Alerts - Display in UI
|
|
||||||
|
|
||||||
### Get TAT Alerts for a Request
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- For displaying in Request Detail screen (like the image shared)
|
|
||||||
SELECT
|
|
||||||
ta.alert_type,
|
|
||||||
ta.threshold_percentage,
|
|
||||||
ta.alert_sent_at,
|
|
||||||
ta.alert_message,
|
|
||||||
ta.tat_hours_elapsed,
|
|
||||||
ta.tat_hours_remaining,
|
|
||||||
ta.notification_sent,
|
|
||||||
CASE
|
|
||||||
WHEN ta.alert_type = 'TAT_50' THEN '⏳ 50% of TAT elapsed'
|
|
||||||
WHEN ta.alert_type = 'TAT_75' THEN '⚠️ 75% of TAT elapsed - Escalation warning'
|
|
||||||
WHEN ta.alert_type = 'TAT_100' THEN '⏰ TAT breached - Immediate action required'
|
|
||||||
END as alert_title
|
|
||||||
FROM tat_alerts ta
|
|
||||||
WHERE ta.request_id = 'REQUEST_ID_HERE'
|
|
||||||
AND ta.level_id = 'LEVEL_ID_HERE'
|
|
||||||
ORDER BY ta.created_at ASC;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Display Format (like image):
|
|
||||||
|
|
||||||
```
|
|
||||||
Reminder 1
|
|
||||||
⏳ 50% of SLA breach reminder have been sent
|
|
||||||
Reminder sent by system automatically
|
|
||||||
Sent at: Oct 6 at 2:30 PM
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 KPI Dashboard Queries
|
|
||||||
|
|
||||||
### Executive Dashboard
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Overall KPIs for dashboard cards
|
|
||||||
SELECT
|
|
||||||
(SELECT COUNT(*) FROM vw_request_volume_summary WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE)) as requests_this_month,
|
|
||||||
(SELECT COUNT(*) FROM vw_request_volume_summary WHERE status_category = 'IN_PROGRESS') as open_requests,
|
|
||||||
(SELECT ROUND(AVG(cycle_time_hours / 24), 2) FROM vw_request_volume_summary WHERE closure_date IS NOT NULL) as avg_cycle_days,
|
|
||||||
(SELECT ROUND(COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 / NULLIF(COUNT(*), 0), 2) FROM vw_tat_compliance WHERE completed_within_tat IS NOT NULL) as tat_compliance_percent;
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 API Endpoint Examples
|
|
||||||
|
|
||||||
### Example Service Method (TypeScript)
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// services/kpi.service.ts
|
|
||||||
|
|
||||||
export class KPIService {
|
|
||||||
/**
|
|
||||||
* Get Request Volume Summary
|
|
||||||
*/
|
|
||||||
async getRequestVolumeSummary(startDate: string, endDate: string) {
|
|
||||||
const query = `
|
|
||||||
SELECT
|
|
||||||
status_category,
|
|
||||||
COUNT(*) as count
|
|
||||||
FROM vw_request_volume_summary
|
|
||||||
WHERE created_at BETWEEN :startDate AND :endDate
|
|
||||||
GROUP BY status_category
|
|
||||||
`;
|
|
||||||
|
|
||||||
return await sequelize.query(query, {
|
|
||||||
replacements: { startDate, endDate },
|
|
||||||
type: QueryTypes.SELECT
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Get TAT Compliance Rate
|
|
||||||
*/
|
|
||||||
async getTATComplianceRate(period: 'daily' | 'weekly' | 'monthly') {
|
|
||||||
const query = `
|
|
||||||
SELECT
|
|
||||||
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 /
|
|
||||||
NULLIF(COUNT(*), 0) as compliance_rate
|
|
||||||
FROM vw_tat_compliance
|
|
||||||
WHERE action_date >= NOW() - INTERVAL '1 ${period}'
|
|
||||||
`;
|
|
||||||
|
|
||||||
return await sequelize.query(query, { type: QueryTypes.SELECT });
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Get TAT Alerts for Request
|
|
||||||
*/
|
|
||||||
async getTATAlertsForRequest(requestId: string) {
|
|
||||||
return await TatAlert.findAll({
|
|
||||||
where: { requestId },
|
|
||||||
order: [['alertSentAt', 'ASC']],
|
|
||||||
include: [
|
|
||||||
{ model: ApprovalLevel, as: 'level' },
|
|
||||||
{ model: User, as: 'approver' }
|
|
||||||
]
|
|
||||||
});
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 Maintenance & Performance
|
|
||||||
|
|
||||||
### Indexes
|
|
||||||
|
|
||||||
All views use indexed columns for optimal performance:
|
|
||||||
- `request_id`, `level_id`, `approver_id`
|
|
||||||
- `status`, `created_at`, `alert_sent_at`
|
|
||||||
- `is_deleted` (for soft deletes)
|
|
||||||
|
|
||||||
### Refresh Materialized Views (if needed)
|
|
||||||
|
|
||||||
If you convert views to materialized views for better performance:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Refresh all materialized views
|
|
||||||
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_request_volume_summary;
|
|
||||||
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_tat_compliance;
|
|
||||||
-- etc.
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📖 Related Documentation
|
|
||||||
|
|
||||||
- **TAT Notification System**: `TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
- **Database Structure**: `backend_structure.txt`
|
|
||||||
- **API Documentation**: `API_DOCUMENTATION.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Version**: 1.0.0
|
|
||||||
**Maintained By**: Royal Enfield Workflow Team
|
|
||||||
|
|
||||||
632
docs/RBAC_IMPLEMENTATION.md
Normal file
632
docs/RBAC_IMPLEMENTATION.md
Normal file
@ -0,0 +1,632 @@
|
|||||||
|
# Role-Based Access Control (RBAC) Implementation
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
The system now supports **three user roles** for granular access control:
|
||||||
|
|
||||||
|
| Role | Access Level | Use Case |
|
||||||
|
|------|--------------|----------|
|
||||||
|
| **USER** | Standard | Default role for all users - create/view own requests |
|
||||||
|
| **MANAGEMENT** | Enhanced Read | View all requests across organization (read-only) |
|
||||||
|
| **ADMIN** | Full Access | System configuration, user management, all workflows |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Roles
|
||||||
|
|
||||||
|
### 1. USER (Default)
|
||||||
|
**Permissions:**
|
||||||
|
- ✅ Create new workflow requests
|
||||||
|
- ✅ View own requests
|
||||||
|
- ✅ Participate in assigned workflows (as approver/spectator)
|
||||||
|
- ✅ Add work notes to requests they're involved in
|
||||||
|
- ✅ Upload documents to own requests
|
||||||
|
- ❌ Cannot view other users' requests (unless added as participant)
|
||||||
|
- ❌ Cannot access system configuration
|
||||||
|
- ❌ Cannot manage users or roles
|
||||||
|
|
||||||
|
**Use Case:** Regular employees creating and managing their workflow requests
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 2. MANAGEMENT
|
||||||
|
**Permissions:**
|
||||||
|
- ✅ All USER permissions
|
||||||
|
- ✅ View ALL requests across organization (read-only)
|
||||||
|
- ✅ Access comprehensive dashboards with organization-wide analytics
|
||||||
|
- ✅ Export reports across all departments
|
||||||
|
- ✅ View TAT performance metrics for all approvers
|
||||||
|
- ❌ Cannot approve/reject requests (unless explicitly added as approver)
|
||||||
|
- ❌ Cannot modify system configuration
|
||||||
|
- ❌ Cannot manage user roles
|
||||||
|
|
||||||
|
**Use Case:** Department heads, managers, auditors needing visibility into all workflows
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 3. ADMIN
|
||||||
|
**Permissions:**
|
||||||
|
- ✅ All MANAGEMENT permissions
|
||||||
|
- ✅ All USER permissions
|
||||||
|
- ✅ Manage system configuration
|
||||||
|
- ✅ Assign user roles
|
||||||
|
- ✅ Manage holiday calendar
|
||||||
|
- ✅ Configure email/notification settings
|
||||||
|
- ✅ Access audit logs
|
||||||
|
- ✅ Manage AI provider settings
|
||||||
|
|
||||||
|
**Use Case:** System administrators, IT staff managing the workflow platform
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Database Schema
|
||||||
|
|
||||||
|
### Migration Applied
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Create ENUM type for roles
|
||||||
|
CREATE TYPE user_role_enum AS ENUM ('USER', 'MANAGEMENT', 'ADMIN');
|
||||||
|
|
||||||
|
-- Add role column to users table
|
||||||
|
ALTER TABLE users
|
||||||
|
ADD COLUMN role user_role_enum NOT NULL DEFAULT 'USER';
|
||||||
|
|
||||||
|
-- Migrate existing data
|
||||||
|
UPDATE users
|
||||||
|
SET role = CASE
|
||||||
|
WHEN is_admin = true THEN 'ADMIN'
|
||||||
|
ELSE 'USER'
|
||||||
|
END;
|
||||||
|
|
||||||
|
-- Create index for performance
|
||||||
|
CREATE INDEX idx_users_role ON users(role);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Updated Users Table
|
||||||
|
|
||||||
|
```
|
||||||
|
users {
|
||||||
|
uuid user_id PK
|
||||||
|
varchar email UK
|
||||||
|
varchar display_name
|
||||||
|
varchar department
|
||||||
|
varchar designation
|
||||||
|
boolean is_active
|
||||||
|
user_role_enum role ← NEW FIELD
|
||||||
|
boolean is_admin ← DEPRECATED (kept for compatibility)
|
||||||
|
timestamp created_at
|
||||||
|
timestamp updated_at
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Backend Implementation
|
||||||
|
|
||||||
|
### Model (User.ts)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
export type UserRole = 'USER' | 'MANAGEMENT' | 'ADMIN';
|
||||||
|
|
||||||
|
interface UserAttributes {
|
||||||
|
// ... other fields
|
||||||
|
role: UserRole; // RBAC role
|
||||||
|
isAdmin: boolean; // DEPRECATED
|
||||||
|
}
|
||||||
|
|
||||||
|
class User extends Model<UserAttributes> {
|
||||||
|
public role!: UserRole;
|
||||||
|
|
||||||
|
// Helper methods
|
||||||
|
public isUserRole(): boolean {
|
||||||
|
return this.role === 'USER';
|
||||||
|
}
|
||||||
|
|
||||||
|
public isManagementRole(): boolean {
|
||||||
|
return this.role === 'MANAGEMENT';
|
||||||
|
}
|
||||||
|
|
||||||
|
public isAdminRole(): boolean {
|
||||||
|
return this.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
public hasManagementAccess(): boolean {
|
||||||
|
return this.role === 'MANAGEMENT' || this.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
public hasAdminAccess(): boolean {
|
||||||
|
return this.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Middleware Usage
|
||||||
|
|
||||||
|
### 1. Require Admin Only
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { requireAdmin } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
// Only ADMIN can access
|
||||||
|
router.post('/admin/config', authenticate, requireAdmin, adminController.updateConfig);
|
||||||
|
router.post('/admin/users/:userId/role', authenticate, requireAdmin, adminController.updateUserRole);
|
||||||
|
router.post('/admin/holidays', authenticate, requireAdmin, adminController.addHoliday);
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Require Management or Admin
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { requireManagement } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
// MANAGEMENT and ADMIN can access (read-only for management)
|
||||||
|
router.get('/reports/all-requests', authenticate, requireManagement, reportController.getAllRequests);
|
||||||
|
router.get('/analytics/department', authenticate, requireManagement, analyticsController.getDepartmentStats);
|
||||||
|
router.get('/dashboard/organization', authenticate, requireManagement, dashboardController.getOrgWideStats);
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Flexible Role Checking
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { requireRole } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
// Multiple role options
|
||||||
|
router.get('/workflows/search', authenticate, requireRole(['MANAGEMENT', 'ADMIN']), workflowController.search);
|
||||||
|
router.post('/workflows/export', authenticate, requireRole(['MANAGEMENT', 'ADMIN']), workflowController.export);
|
||||||
|
|
||||||
|
// Any authenticated user
|
||||||
|
router.get('/profile', authenticate, requireRole(['USER', 'MANAGEMENT', 'ADMIN']), userController.getProfile);
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Programmatic Role Checking in Controllers
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { hasManagementAccess, hasAdminAccess } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
export async function getWorkflows(req: Request, res: Response) {
|
||||||
|
const user = req.user;
|
||||||
|
|
||||||
|
// Management and Admin can see ALL workflows
|
||||||
|
if (hasManagementAccess(user)) {
|
||||||
|
const allWorkflows = await WorkflowRequest.findAll();
|
||||||
|
return res.json({ success: true, data: allWorkflows });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Regular users only see their own workflows
|
||||||
|
const userWorkflows = await WorkflowRequest.findAll({
|
||||||
|
where: { initiatorId: user.userId }
|
||||||
|
});
|
||||||
|
|
||||||
|
return res.json({ success: true, data: userWorkflows });
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Example Route Implementations
|
||||||
|
|
||||||
|
### Admin Routes (ADMIN only)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// src/routes/admin.routes.ts
|
||||||
|
import { Router } from 'express';
|
||||||
|
import { authenticate } from '@middlewares/auth.middleware';
|
||||||
|
import { requireAdmin } from '@middlewares/authorization.middleware';
|
||||||
|
import * as adminController from '@controllers/admin.controller';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
|
||||||
|
// All admin routes require ADMIN role
|
||||||
|
router.use(authenticate, requireAdmin);
|
||||||
|
|
||||||
|
// System configuration
|
||||||
|
router.get('/config', adminController.getConfig);
|
||||||
|
router.put('/config', adminController.updateConfig);
|
||||||
|
|
||||||
|
// User role management
|
||||||
|
router.put('/users/:userId/role', adminController.updateUserRole);
|
||||||
|
router.get('/users/admins', adminController.getAllAdmins);
|
||||||
|
router.get('/users/management', adminController.getAllManagement);
|
||||||
|
|
||||||
|
// Holiday management
|
||||||
|
router.post('/holidays', adminController.createHoliday);
|
||||||
|
router.delete('/holidays/:holidayId', adminController.deleteHoliday);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Management Routes (MANAGEMENT + ADMIN)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// src/routes/management.routes.ts
|
||||||
|
import { Router } from 'express';
|
||||||
|
import { authenticate } from '@middlewares/auth.middleware';
|
||||||
|
import { requireManagement } from '@middlewares/authorization.middleware';
|
||||||
|
import * as managementController from '@controllers/management.controller';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
|
||||||
|
// All management routes require MANAGEMENT or ADMIN role
|
||||||
|
router.use(authenticate, requireManagement);
|
||||||
|
|
||||||
|
// Organization-wide dashboards (read-only)
|
||||||
|
router.get('/dashboard/organization', managementController.getOrgDashboard);
|
||||||
|
router.get('/requests/all', managementController.getAllRequests);
|
||||||
|
router.get('/analytics/tat-performance', managementController.getTATPerformance);
|
||||||
|
router.get('/analytics/approver-stats', managementController.getApproverStats);
|
||||||
|
router.get('/reports/export', managementController.exportReports);
|
||||||
|
|
||||||
|
// Department-wise analytics
|
||||||
|
router.get('/analytics/department/:deptName', managementController.getDepartmentAnalytics);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Workflow Routes (Mixed Permissions)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// src/routes/workflow.routes.ts
|
||||||
|
import { Router } from 'express';
|
||||||
|
import { authenticate } from '@middlewares/auth.middleware';
|
||||||
|
import { requireManagement, requireRole } from '@middlewares/authorization.middleware';
|
||||||
|
import * as workflowController from '@controllers/workflow.controller';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
|
||||||
|
// USER: Create own request (all roles can do this)
|
||||||
|
router.post('/workflows', authenticate, workflowController.create);
|
||||||
|
|
||||||
|
// USER: View own requests (filtered by role in controller)
|
||||||
|
router.get('/workflows/my-requests', authenticate, workflowController.getMyRequests);
|
||||||
|
|
||||||
|
// MANAGEMENT + ADMIN: Search all requests
|
||||||
|
router.get('/workflows/search', authenticate, requireManagement, workflowController.searchAll);
|
||||||
|
|
||||||
|
// ADMIN: Delete workflow
|
||||||
|
router.delete('/workflows/:id', authenticate, requireRole(['ADMIN']), workflowController.delete);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Controller Implementation Examples
|
||||||
|
|
||||||
|
### Example 1: Dashboard with Role-Based Data
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// src/controllers/dashboard.controller.ts
|
||||||
|
import { hasManagementAccess } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
export async function getDashboard(req: Request, res: Response) {
|
||||||
|
const user = req.user;
|
||||||
|
|
||||||
|
// MANAGEMENT and ADMIN: See organization-wide stats
|
||||||
|
if (hasManagementAccess(user)) {
|
||||||
|
const stats = await dashboardService.getOrganizationStats();
|
||||||
|
return res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
...stats,
|
||||||
|
scope: 'organization', // Indicates full visibility
|
||||||
|
userRole: user.role
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// USER: See only personal stats
|
||||||
|
const stats = await dashboardService.getUserStats(user.userId);
|
||||||
|
return res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
...stats,
|
||||||
|
scope: 'personal', // Indicates limited visibility
|
||||||
|
userRole: user.role
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Example 2: User Role Update (ADMIN only)
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// src/controllers/admin.controller.ts
|
||||||
|
export async function updateUserRole(req: Request, res: Response) {
|
||||||
|
const { userId } = req.params;
|
||||||
|
const { role } = req.body;
|
||||||
|
|
||||||
|
// Validate role
|
||||||
|
if (!['USER', 'MANAGEMENT', 'ADMIN'].includes(role)) {
|
||||||
|
return res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Invalid role. Must be USER, MANAGEMENT, or ADMIN'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update user role
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
if (!user) {
|
||||||
|
return res.status(404).json({
|
||||||
|
success: false,
|
||||||
|
error: 'User not found'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
const oldRole = user.role;
|
||||||
|
user.role = role;
|
||||||
|
|
||||||
|
// Sync is_admin for backward compatibility
|
||||||
|
user.isAdmin = (role === 'ADMIN');
|
||||||
|
await user.save();
|
||||||
|
|
||||||
|
// Log role change
|
||||||
|
console.log(`✅ User role updated: ${user.email} - ${oldRole} → ${role}`);
|
||||||
|
|
||||||
|
return res.json({
|
||||||
|
success: true,
|
||||||
|
message: `User role updated from ${oldRole} to ${role}`,
|
||||||
|
data: {
|
||||||
|
userId: user.userId,
|
||||||
|
email: user.email,
|
||||||
|
role: user.role
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Frontend Integration
|
||||||
|
|
||||||
|
### Update Auth Context
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// Frontend: src/contexts/AuthContext.tsx
|
||||||
|
interface User {
|
||||||
|
userId: string;
|
||||||
|
email: string;
|
||||||
|
displayName: string;
|
||||||
|
role: 'USER' | 'MANAGEMENT' | 'ADMIN'; // ← Add role
|
||||||
|
}
|
||||||
|
|
||||||
|
// Helper functions
|
||||||
|
export function isAdmin(user: User | null): boolean {
|
||||||
|
return user?.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
export function isManagement(user: User | null): boolean {
|
||||||
|
return user?.role === 'MANAGEMENT' || user?.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
export function hasManagementAccess(user: User | null): boolean {
|
||||||
|
return user?.role === 'MANAGEMENT' || user?.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Role-Based UI Rendering
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// Show admin menu only for ADMIN
|
||||||
|
{user?.role === 'ADMIN' && (
|
||||||
|
<NavItem to="/admin/config">
|
||||||
|
<Settings /> System Configuration
|
||||||
|
</NavItem>
|
||||||
|
)}
|
||||||
|
|
||||||
|
// Show management dashboard for MANAGEMENT and ADMIN
|
||||||
|
{(user?.role === 'MANAGEMENT' || user?.role === 'ADMIN') && (
|
||||||
|
<NavItem to="/dashboard/organization">
|
||||||
|
<TrendingUp /> Organization Dashboard
|
||||||
|
</NavItem>
|
||||||
|
)}
|
||||||
|
|
||||||
|
// Show all requests for MANAGEMENT and ADMIN
|
||||||
|
{hasManagementAccess(user) && (
|
||||||
|
<NavItem to="/requests/all">
|
||||||
|
<FileText /> All Requests
|
||||||
|
</NavItem>
|
||||||
|
)}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Migration Guide
|
||||||
|
|
||||||
|
### Running the Migration
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Run migration to add role column
|
||||||
|
npm run migrate
|
||||||
|
|
||||||
|
# Verify migration
|
||||||
|
psql -d royal_enfield_db -c "SELECT email, role, is_admin FROM users LIMIT 10;"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Expected Results
|
||||||
|
|
||||||
|
```
|
||||||
|
Before Migration:
|
||||||
|
+-------------------------+-----------+
|
||||||
|
| email | is_admin |
|
||||||
|
+-------------------------+-----------+
|
||||||
|
| admin@royalenfield.com | true |
|
||||||
|
| user1@royalenfield.com | false |
|
||||||
|
+-------------------------+-----------+
|
||||||
|
|
||||||
|
After Migration:
|
||||||
|
+-------------------------+-----------+-----------+
|
||||||
|
| email | role | is_admin |
|
||||||
|
+-------------------------+-----------+-----------+
|
||||||
|
| admin@royalenfield.com | ADMIN | true |
|
||||||
|
| user1@royalenfield.com | USER | false |
|
||||||
|
+-------------------------+-----------+-----------+
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Assigning Roles
|
||||||
|
|
||||||
|
### Via SQL (Direct Database)
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Make user a MANAGEMENT role
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'MANAGEMENT', is_admin = false
|
||||||
|
WHERE email = 'manager@royalenfield.com';
|
||||||
|
|
||||||
|
-- Make user an ADMIN role
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN', is_admin = true
|
||||||
|
WHERE email = 'admin@royalenfield.com';
|
||||||
|
|
||||||
|
-- Revert to USER role
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'USER', is_admin = false
|
||||||
|
WHERE email = 'user@royalenfield.com';
|
||||||
|
```
|
||||||
|
|
||||||
|
### Via API (Admin Endpoint)
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Update user role (ADMIN only)
|
||||||
|
POST /api/v1/admin/users/:userId/role
|
||||||
|
Authorization: Bearer <admin-token>
|
||||||
|
Content-Type: application/json
|
||||||
|
|
||||||
|
{
|
||||||
|
"role": "MANAGEMENT"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
### Test Scenarios
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
describe('RBAC Tests', () => {
|
||||||
|
test('USER cannot access admin config', async () => {
|
||||||
|
const response = await request(app)
|
||||||
|
.get('/api/v1/admin/config')
|
||||||
|
.set('Authorization', `Bearer ${userToken}`);
|
||||||
|
|
||||||
|
expect(response.status).toBe(403);
|
||||||
|
expect(response.body.error).toContain('Admin access required');
|
||||||
|
});
|
||||||
|
|
||||||
|
test('MANAGEMENT can view all requests', async () => {
|
||||||
|
const response = await request(app)
|
||||||
|
.get('/api/v1/management/requests/all')
|
||||||
|
.set('Authorization', `Bearer ${managementToken}`);
|
||||||
|
|
||||||
|
expect(response.status).toBe(200);
|
||||||
|
expect(response.body.data).toBeInstanceOf(Array);
|
||||||
|
});
|
||||||
|
|
||||||
|
test('ADMIN can update user roles', async () => {
|
||||||
|
const response = await request(app)
|
||||||
|
.put(`/api/v1/admin/users/${userId}/role`)
|
||||||
|
.set('Authorization', `Bearer ${adminToken}`)
|
||||||
|
.send({ role: 'MANAGEMENT' });
|
||||||
|
|
||||||
|
expect(response.status).toBe(200);
|
||||||
|
expect(response.body.data.role).toBe('MANAGEMENT');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Best Practices
|
||||||
|
|
||||||
|
### 1. Always Use Role Column
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// ✅ GOOD: Use new role system
|
||||||
|
if (user.role === 'ADMIN') {
|
||||||
|
// Admin logic
|
||||||
|
}
|
||||||
|
|
||||||
|
// ❌ BAD: Don't use deprecated is_admin
|
||||||
|
if (user.isAdmin) {
|
||||||
|
// Deprecated approach
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Use Helper Functions
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// ✅ GOOD: Use provided helpers
|
||||||
|
if (user.hasManagementAccess()) {
|
||||||
|
// Management or Admin logic
|
||||||
|
}
|
||||||
|
|
||||||
|
// ❌ BAD: Manual checking
|
||||||
|
if (user.role === 'MANAGEMENT' || user.role === 'ADMIN') {
|
||||||
|
// Verbose
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Route Protection
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// ✅ GOOD: Clear role requirements
|
||||||
|
router.get('/sensitive-data', authenticate, requireManagement, controller.getData);
|
||||||
|
|
||||||
|
// ❌ BAD: Role checking in controller only
|
||||||
|
router.get('/sensitive-data', authenticate, controller.getData); // No middleware check
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Backward Compatibility
|
||||||
|
|
||||||
|
The `is_admin` field is **DEPRECATED** but kept for backward compatibility:
|
||||||
|
|
||||||
|
- ✅ Existing code using `is_admin` will continue to work
|
||||||
|
- ✅ Migration automatically syncs `role` and `is_admin`
|
||||||
|
- ⚠️ New code should use `role` instead of `is_admin`
|
||||||
|
- 📅 `is_admin` will be removed in future version
|
||||||
|
|
||||||
|
### Sync Logic
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// When updating role, sync is_admin
|
||||||
|
user.role = 'ADMIN';
|
||||||
|
user.isAdmin = true; // Auto-sync
|
||||||
|
|
||||||
|
user.role = 'USER';
|
||||||
|
user.isAdmin = false; // Auto-sync
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Reference
|
||||||
|
|
||||||
|
| Task | Code Example |
|
||||||
|
|------|--------------|
|
||||||
|
| Check if ADMIN | `user.role === 'ADMIN'` or `user.isAdminRole()` |
|
||||||
|
| Check if MANAGEMENT | `user.role === 'MANAGEMENT'` or `user.isManagementRole()` |
|
||||||
|
| Check if USER | `user.role === 'USER'` or `user.isUserRole()` |
|
||||||
|
| Check Management+ | `user.hasManagementAccess()` |
|
||||||
|
| Middleware: Admin only | `requireAdmin` |
|
||||||
|
| Middleware: Management+ | `requireManagement` |
|
||||||
|
| Middleware: Custom roles | `requireRole(['ADMIN', 'MANAGEMENT'])` |
|
||||||
|
| Update role (SQL) | `UPDATE users SET role = 'MANAGEMENT' WHERE email = '...'` |
|
||||||
|
| Update role (API) | `PUT /admin/users/:userId/role { role: 'MANAGEMENT' }` |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Support
|
||||||
|
|
||||||
|
For questions or issues:
|
||||||
|
- Check migration logs: `logs/migration.log`
|
||||||
|
- Review user roles: `SELECT email, role FROM users;`
|
||||||
|
- Test role access: Use provided test scenarios
|
||||||
|
|
||||||
|
**Migration File:** `src/migrations/20251112-add-user-roles.ts`
|
||||||
|
**Model File:** `src/models/User.ts`
|
||||||
|
**Middleware File:** `src/middlewares/authorization.middleware.ts`
|
||||||
|
|
||||||
372
docs/RBAC_QUICK_START.md
Normal file
372
docs/RBAC_QUICK_START.md
Normal file
@ -0,0 +1,372 @@
|
|||||||
|
# RBAC Quick Start Guide
|
||||||
|
|
||||||
|
## ✅ **Implementation Complete!**
|
||||||
|
|
||||||
|
Role-Based Access Control (RBAC) has been successfully implemented with **three roles**:
|
||||||
|
|
||||||
|
| Role | Description | Default on Creation |
|
||||||
|
|------|-------------|---------------------|
|
||||||
|
| **USER** | Standard workflow participant | ✅ YES |
|
||||||
|
| **MANAGEMENT** | Read access to all data | ❌ Must assign |
|
||||||
|
| **ADMIN** | Full system access | ❌ Must assign |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 **Quick Start - 3 Steps**
|
||||||
|
|
||||||
|
### Step 1: Run Migration
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd Re_Backend
|
||||||
|
npm run migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
**What it does:**
|
||||||
|
- ✅ Creates `user_role_enum` type
|
||||||
|
- ✅ Adds `role` column to `users` table
|
||||||
|
- ✅ Migrates existing `is_admin` data to `role`
|
||||||
|
- ✅ Creates index for performance
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 2: Assign Roles to Users
|
||||||
|
|
||||||
|
**Option A: Via SQL Script (Recommended for initial setup)**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Edit the script first with your user emails
|
||||||
|
nano scripts/assign-user-roles.sql
|
||||||
|
|
||||||
|
# Run the script
|
||||||
|
psql -d royal_enfield_db -f scripts/assign-user-roles.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
**Option B: Via SQL Command (Quick assignment)**
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Make specific users ADMIN
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN', is_admin = true
|
||||||
|
WHERE email IN ('admin@royalenfield.com', 'it.admin@royalenfield.com');
|
||||||
|
|
||||||
|
-- Make specific users MANAGEMENT
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'MANAGEMENT', is_admin = false
|
||||||
|
WHERE email IN ('manager@royalenfield.com', 'auditor@royalenfield.com');
|
||||||
|
|
||||||
|
-- Verify roles
|
||||||
|
SELECT email, display_name, role, is_admin FROM users ORDER BY role, email;
|
||||||
|
```
|
||||||
|
|
||||||
|
**Option C: Via API (After system is running)**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Update user role (requires ADMIN token)
|
||||||
|
curl -X PUT http://localhost:5000/api/v1/admin/users/{userId}/role \
|
||||||
|
-H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"role": "MANAGEMENT"}'
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 3: Restart Backend
|
||||||
|
|
||||||
|
```bash
|
||||||
|
npm run dev # Development
|
||||||
|
# or
|
||||||
|
npm start # Production
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📡 **New API Endpoints (ADMIN Only)**
|
||||||
|
|
||||||
|
### 1. Update User Role
|
||||||
|
|
||||||
|
```http
|
||||||
|
PUT /api/v1/admin/users/:userId/role
|
||||||
|
Authorization: Bearer {admin-token}
|
||||||
|
Content-Type: application/json
|
||||||
|
|
||||||
|
{
|
||||||
|
"role": "MANAGEMENT"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": true,
|
||||||
|
"message": "User role updated from USER to MANAGEMENT",
|
||||||
|
"data": {
|
||||||
|
"userId": "uuid",
|
||||||
|
"email": "user@example.com",
|
||||||
|
"role": "MANAGEMENT",
|
||||||
|
"previousRole": "USER"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Get Users by Role
|
||||||
|
|
||||||
|
```http
|
||||||
|
GET /api/v1/admin/users/by-role?role=MANAGEMENT
|
||||||
|
Authorization: Bearer {admin-token}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": true,
|
||||||
|
"data": [...users...],
|
||||||
|
"summary": {
|
||||||
|
"ADMIN": 2,
|
||||||
|
"MANAGEMENT": 5,
|
||||||
|
"USER": 150,
|
||||||
|
"total": 157
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Get Role Statistics
|
||||||
|
|
||||||
|
```http
|
||||||
|
GET /api/v1/admin/users/role-statistics
|
||||||
|
Authorization: Bearer {admin-token}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": true,
|
||||||
|
"data": [
|
||||||
|
{ "role": "ADMIN", "count": 2, "active_count": 2, "inactive_count": 0 },
|
||||||
|
{ "role": "MANAGEMENT", "count": 5, "active_count": 5, "inactive_count": 0 },
|
||||||
|
{ "role": "USER", "count": 150, "active_count": 148, "inactive_count": 2 }
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🛡️ **Using RBAC in Your Code**
|
||||||
|
|
||||||
|
### Middleware Examples
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { requireAdmin, requireManagement, requireRole } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
// ADMIN only
|
||||||
|
router.post('/admin/config', authenticate, requireAdmin, controller.updateConfig);
|
||||||
|
|
||||||
|
// MANAGEMENT or ADMIN
|
||||||
|
router.get('/reports/all', authenticate, requireManagement, controller.getAllReports);
|
||||||
|
|
||||||
|
// Flexible (custom roles)
|
||||||
|
router.get('/analytics', authenticate, requireRole(['MANAGEMENT', 'ADMIN']), controller.getAnalytics);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Controller Examples
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import { hasManagementAccess, hasAdminAccess } from '@middlewares/authorization.middleware';
|
||||||
|
|
||||||
|
export async function getWorkflows(req: Request, res: Response) {
|
||||||
|
const user = req.user;
|
||||||
|
|
||||||
|
// MANAGEMENT & ADMIN: See all workflows
|
||||||
|
if (hasManagementAccess(user)) {
|
||||||
|
return await WorkflowRequest.findAll();
|
||||||
|
}
|
||||||
|
|
||||||
|
// USER: See only own workflows
|
||||||
|
return await WorkflowRequest.findAll({
|
||||||
|
where: { initiatorId: user.userId }
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📋 **Role Permissions Matrix**
|
||||||
|
|
||||||
|
| Feature | USER | MANAGEMENT | ADMIN |
|
||||||
|
|---------|------|------------|-------|
|
||||||
|
| Create requests | ✅ | ✅ | ✅ |
|
||||||
|
| View own requests | ✅ | ✅ | ✅ |
|
||||||
|
| View all requests | ❌ | ✅ Read-only | ✅ Full access |
|
||||||
|
| Approve/Reject (if assigned) | ✅ | ✅ | ✅ |
|
||||||
|
| Organization dashboard | ❌ | ✅ | ✅ |
|
||||||
|
| Export reports | ❌ | ✅ | ✅ |
|
||||||
|
| System configuration | ❌ | ❌ | ✅ |
|
||||||
|
| Manage user roles | ❌ | ❌ | ✅ |
|
||||||
|
| Holiday management | ❌ | ❌ | ✅ |
|
||||||
|
| Audit logs | ❌ | ❌ | ✅ |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🧪 **Testing Your RBAC**
|
||||||
|
|
||||||
|
### Test 1: Verify Migration
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Check role distribution
|
||||||
|
SELECT role, COUNT(*) as count
|
||||||
|
FROM users
|
||||||
|
GROUP BY role;
|
||||||
|
|
||||||
|
-- Check specific user
|
||||||
|
SELECT email, role, is_admin
|
||||||
|
FROM users
|
||||||
|
WHERE email = 'your-email@royalenfield.com';
|
||||||
|
```
|
||||||
|
|
||||||
|
### Test 2: Test API Access
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Try accessing admin endpoint with USER role (should fail)
|
||||||
|
curl -X GET http://localhost:5000/api/v1/admin/configurations \
|
||||||
|
-H "Authorization: Bearer {user-token}"
|
||||||
|
# Expected: 403 Forbidden
|
||||||
|
|
||||||
|
# Try accessing admin endpoint with ADMIN role (should succeed)
|
||||||
|
curl -X GET http://localhost:5000/api/v1/admin/configurations \
|
||||||
|
-H "Authorization: Bearer {admin-token}"
|
||||||
|
# Expected: 200 OK
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔄 **Migration Path**
|
||||||
|
|
||||||
|
### Existing Code Compatibility
|
||||||
|
|
||||||
|
✅ **All existing code continues to work!**
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// Old code (still works)
|
||||||
|
if (user.isAdmin) {
|
||||||
|
// Admin logic
|
||||||
|
}
|
||||||
|
|
||||||
|
// New code (recommended)
|
||||||
|
if (user.role === 'ADMIN') {
|
||||||
|
// Admin logic
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### When to Update `is_admin`
|
||||||
|
|
||||||
|
The system **automatically syncs** `is_admin` with `role`:
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
user.role = 'ADMIN'; → is_admin = true (auto-synced)
|
||||||
|
user.role = 'USER'; → is_admin = false (auto-synced)
|
||||||
|
user.role = 'MANAGEMENT'; → is_admin = false (auto-synced)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📝 **Files Created/Modified**
|
||||||
|
|
||||||
|
### Created Files:
|
||||||
|
1. ✅ `src/migrations/20251112-add-user-roles.ts` - Database migration
|
||||||
|
2. ✅ `scripts/assign-user-roles.sql` - Role assignment script
|
||||||
|
3. ✅ `docs/RBAC_IMPLEMENTATION.md` - Full documentation
|
||||||
|
4. ✅ `docs/RBAC_QUICK_START.md` - This guide
|
||||||
|
|
||||||
|
### Modified Files:
|
||||||
|
1. ✅ `src/models/User.ts` - Added role field + helper methods
|
||||||
|
2. ✅ `src/middlewares/authorization.middleware.ts` - Added RBAC middleware
|
||||||
|
3. ✅ `src/controllers/admin.controller.ts` - Added role management endpoints
|
||||||
|
4. ✅ `src/routes/admin.routes.ts` - Added role management routes
|
||||||
|
5. ✅ `src/types/user.types.ts` - Added UserRole type
|
||||||
|
6. ✅ `backend_structure.txt` - Updated users table schema
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 **Next Steps**
|
||||||
|
|
||||||
|
### 1. Run Migration
|
||||||
|
```bash
|
||||||
|
npm run migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Assign Initial Roles
|
||||||
|
```bash
|
||||||
|
# Edit with your emails
|
||||||
|
nano scripts/assign-user-roles.sql
|
||||||
|
|
||||||
|
# Run script
|
||||||
|
psql -d royal_enfield_db -f scripts/assign-user-roles.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Test the System
|
||||||
|
```bash
|
||||||
|
# Restart backend
|
||||||
|
npm run dev
|
||||||
|
|
||||||
|
# Check roles
|
||||||
|
curl http://localhost:5000/api/v1/admin/users/role-statistics \
|
||||||
|
-H "Authorization: Bearer {admin-token}"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Update Frontend (Optional - for role-based UI)
|
||||||
|
```typescript
|
||||||
|
// In AuthContext or user service
|
||||||
|
interface User {
|
||||||
|
role: 'USER' | 'MANAGEMENT' | 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
// Show admin menu only for ADMIN
|
||||||
|
{user.role === 'ADMIN' && <AdminMenu />}
|
||||||
|
|
||||||
|
// Show management dashboard for MANAGEMENT + ADMIN
|
||||||
|
{(user.role === 'MANAGEMENT' || user.role === 'ADMIN') && <OrgDashboard />}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## ⚠️ **Important Notes**
|
||||||
|
|
||||||
|
1. **Backward Compatibility**: `is_admin` field is kept but DEPRECATED
|
||||||
|
2. **Self-Demotion Prevention**: Admins cannot remove their own admin role
|
||||||
|
3. **Default Role**: All new users get 'USER' role automatically
|
||||||
|
4. **Role Sync**: `is_admin` is automatically synced with `role === 'ADMIN'`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 💡 **Pro Tips**
|
||||||
|
|
||||||
|
### Assign Roles by Department
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Make all IT dept users ADMIN
|
||||||
|
UPDATE users SET role = 'ADMIN', is_admin = true
|
||||||
|
WHERE department = 'IT' AND is_active = true;
|
||||||
|
|
||||||
|
-- Make all managers MANAGEMENT role
|
||||||
|
UPDATE users SET role = 'MANAGEMENT', is_admin = false
|
||||||
|
WHERE designation ILIKE '%manager%' OR designation ILIKE '%head%';
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Your Own Role
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT email, role, is_admin
|
||||||
|
FROM users
|
||||||
|
WHERE email = 'your-email@royalenfield.com';
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📞 **Support**
|
||||||
|
|
||||||
|
For issues or questions:
|
||||||
|
- **Documentation**: `docs/RBAC_IMPLEMENTATION.md`
|
||||||
|
- **Migration File**: `src/migrations/20251112-add-user-roles.ts`
|
||||||
|
- **Assignment Script**: `scripts/assign-user-roles.sql`
|
||||||
|
|
||||||
|
**Your RBAC system is production-ready!** 🎉
|
||||||
|
|
||||||
@ -1,113 +0,0 @@
|
|||||||
# Redis Setup for Windows
|
|
||||||
|
|
||||||
## Method 1: Using Memurai (Redis-compatible for Windows)
|
|
||||||
|
|
||||||
Memurai is a Redis-compatible server for Windows.
|
|
||||||
|
|
||||||
1. **Download Memurai**:
|
|
||||||
- Visit: https://www.memurai.com/get-memurai
|
|
||||||
- Download the installer
|
|
||||||
|
|
||||||
2. **Install**:
|
|
||||||
- Run the installer
|
|
||||||
- Choose default options
|
|
||||||
- It will automatically start as a Windows service
|
|
||||||
|
|
||||||
3. **Verify**:
|
|
||||||
```powershell
|
|
||||||
# Check if service is running
|
|
||||||
Get-Service Memurai
|
|
||||||
|
|
||||||
# Or connect with redis-cli
|
|
||||||
memurai-cli ping
|
|
||||||
# Should return: PONG
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Configure** (if needed):
|
|
||||||
- Default port: 6379
|
|
||||||
- Service runs automatically on startup
|
|
||||||
|
|
||||||
## Method 2: Using Docker Desktop
|
|
||||||
|
|
||||||
1. **Install Docker Desktop**:
|
|
||||||
- Download from: https://www.docker.com/products/docker-desktop
|
|
||||||
|
|
||||||
2. **Start Redis Container**:
|
|
||||||
```powershell
|
|
||||||
docker run -d --name redis -p 6379:6379 redis:7-alpine
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Verify**:
|
|
||||||
```powershell
|
|
||||||
docker ps | Select-String redis
|
|
||||||
```
|
|
||||||
|
|
||||||
## Method 3: Using WSL2 (Windows Subsystem for Linux)
|
|
||||||
|
|
||||||
1. **Enable WSL2**:
|
|
||||||
```powershell
|
|
||||||
wsl --install
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Install Redis in WSL**:
|
|
||||||
```bash
|
|
||||||
sudo apt update
|
|
||||||
sudo apt install redis-server
|
|
||||||
sudo service redis-server start
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Verify**:
|
|
||||||
```bash
|
|
||||||
redis-cli ping
|
|
||||||
# Should return: PONG
|
|
||||||
```
|
|
||||||
|
|
||||||
## Quick Test
|
|
||||||
|
|
||||||
After starting Redis, test the connection:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
# If you have redis-cli or memurai-cli
|
|
||||||
redis-cli ping
|
|
||||||
|
|
||||||
# Or use telnet
|
|
||||||
Test-NetConnection -ComputerName localhost -Port 6379
|
|
||||||
```
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Port Already in Use
|
|
||||||
```powershell
|
|
||||||
# Check what's using port 6379
|
|
||||||
netstat -ano | findstr :6379
|
|
||||||
|
|
||||||
# Kill the process if needed
|
|
||||||
taskkill /PID <PID> /F
|
|
||||||
```
|
|
||||||
|
|
||||||
### Service Not Starting
|
|
||||||
```powershell
|
|
||||||
# For Memurai
|
|
||||||
net start Memurai
|
|
||||||
|
|
||||||
# Check logs
|
|
||||||
Get-EventLog -LogName Application -Source Memurai -Newest 10
|
|
||||||
```
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
Default Redis/Memurai configuration works out of the box. No changes needed for development.
|
|
||||||
|
|
||||||
**Connection String**: `redis://localhost:6379`
|
|
||||||
|
|
||||||
## Production Considerations
|
|
||||||
|
|
||||||
- Use Redis authentication in production
|
|
||||||
- Configure persistence (RDB/AOF)
|
|
||||||
- Set up monitoring and alerts
|
|
||||||
- Consider Redis Cluster for high availability
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Recommended for Windows Development**: Memurai (easiest) or Docker Desktop
|
|
||||||
|
|
||||||
@ -1,387 +0,0 @@
|
|||||||
# TAT (Turnaround Time) Notification System
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
The TAT Notification System automatically tracks and notifies approvers about their approval deadlines at key milestones (50%, 75%, and 100% of allotted time). It uses a queue-based architecture with BullMQ and Redis to ensure reliable, scheduled notifications.
|
|
||||||
|
|
||||||
## Architecture
|
|
||||||
|
|
||||||
```
|
|
||||||
┌─────────────────┐
|
|
||||||
│ Workflow │
|
|
||||||
│ Submission │
|
|
||||||
└────────┬────────┘
|
|
||||||
│
|
|
||||||
├──> Schedule TAT Jobs (50%, 75%, 100%)
|
|
||||||
│
|
|
||||||
┌────────▼────────┐ ┌──────────────┐ ┌─────────────┐
|
|
||||||
│ TAT Queue │────>│ TAT Worker │────>│ Processor │
|
|
||||||
│ (BullMQ) │ │ (Background)│ │ Handler │
|
|
||||||
└─────────────────┘ └──────────────┘ └──────┬──────┘
|
|
||||||
│
|
|
||||||
├──> Send Notification
|
|
||||||
├──> Update Database
|
|
||||||
└──> Log Activity
|
|
||||||
```
|
|
||||||
|
|
||||||
## Components
|
|
||||||
|
|
||||||
### 1. TAT Time Utilities (`tatTimeUtils.ts`)
|
|
||||||
|
|
||||||
Handles working hours calculations (Monday-Friday, 9 AM - 6 PM):
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Calculate TAT milestones considering working hours
|
|
||||||
const { halfTime, seventyFive, full } = calculateTatMilestones(startDate, tatHours);
|
|
||||||
```
|
|
||||||
|
|
||||||
**Key Functions:**
|
|
||||||
- `addWorkingHours()`: Adds working hours to a start date, skipping weekends
|
|
||||||
- `calculateTatMilestones()`: Calculates 50%, 75%, and 100% time points
|
|
||||||
- `calculateDelay()`: Computes delay in milliseconds from now to target
|
|
||||||
|
|
||||||
### 2. TAT Queue (`tatQueue.ts`)
|
|
||||||
|
|
||||||
BullMQ queue configuration with Redis:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
export const tatQueue = new Queue('tatQueue', {
|
|
||||||
connection: IORedis,
|
|
||||||
defaultJobOptions: {
|
|
||||||
removeOnComplete: true,
|
|
||||||
removeOnFail: false,
|
|
||||||
attempts: 3,
|
|
||||||
backoff: { type: 'exponential', delay: 2000 }
|
|
||||||
}
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. TAT Processor (`tatProcessor.ts`)
|
|
||||||
|
|
||||||
Handles job execution when TAT milestones are reached:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
export async function handleTatJob(job: Job<TatJobData>) {
|
|
||||||
// Process tat50, tat75, or tatBreach
|
|
||||||
// - Send notification to approver
|
|
||||||
// - Update database flags
|
|
||||||
// - Log activity
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Job Types:**
|
|
||||||
- `tat50`: ⏳ 50% of TAT elapsed (gentle reminder)
|
|
||||||
- `tat75`: ⚠️ 75% of TAT elapsed (escalation warning)
|
|
||||||
- `tatBreach`: ⏰ 100% of TAT elapsed (breach notification)
|
|
||||||
|
|
||||||
### 4. TAT Worker (`tatWorker.ts`)
|
|
||||||
|
|
||||||
Background worker that processes jobs from the queue:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
export const tatWorker = new Worker('tatQueue', handleTatJob, {
|
|
||||||
connection,
|
|
||||||
concurrency: 5,
|
|
||||||
limiter: { max: 10, duration: 1000 }
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
**Features:**
|
|
||||||
- Concurrent job processing (up to 5 jobs)
|
|
||||||
- Rate limiting (10 jobs/second)
|
|
||||||
- Automatic retry on failure
|
|
||||||
- Graceful shutdown on SIGTERM/SIGINT
|
|
||||||
|
|
||||||
### 5. TAT Scheduler Service (`tatScheduler.service.ts`)
|
|
||||||
|
|
||||||
Service for scheduling and managing TAT jobs:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// Schedule TAT jobs for an approval level
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
levelId,
|
|
||||||
approverId,
|
|
||||||
tatHours,
|
|
||||||
startTime
|
|
||||||
);
|
|
||||||
|
|
||||||
// Cancel TAT jobs when level is completed
|
|
||||||
await tatSchedulerService.cancelTatJobs(requestId, levelId);
|
|
||||||
```
|
|
||||||
|
|
||||||
## Database Schema
|
|
||||||
|
|
||||||
### New Fields in `approval_levels` Table
|
|
||||||
|
|
||||||
```sql
|
|
||||||
ALTER TABLE approval_levels ADD COLUMN tat50_alert_sent BOOLEAN NOT NULL DEFAULT false;
|
|
||||||
ALTER TABLE approval_levels ADD COLUMN tat75_alert_sent BOOLEAN NOT NULL DEFAULT false;
|
|
||||||
ALTER TABLE approval_levels ADD COLUMN tat_breached BOOLEAN NOT NULL DEFAULT false;
|
|
||||||
ALTER TABLE approval_levels ADD COLUMN tat_start_time TIMESTAMP WITH TIME ZONE;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Field Descriptions:**
|
|
||||||
- `tat50_alert_sent`: Tracks if 50% notification was sent
|
|
||||||
- `tat75_alert_sent`: Tracks if 75% notification was sent
|
|
||||||
- `tat_breached`: Tracks if TAT deadline was breached
|
|
||||||
- `tat_start_time`: Timestamp when TAT monitoring started
|
|
||||||
|
|
||||||
## Integration Points
|
|
||||||
|
|
||||||
### 1. Workflow Submission
|
|
||||||
|
|
||||||
When a workflow is submitted, TAT monitoring starts for the first approval level:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// workflow.service.ts - submitWorkflow()
|
|
||||||
await current.update({
|
|
||||||
levelStartTime: now,
|
|
||||||
tatStartTime: now,
|
|
||||||
status: ApprovalStatus.IN_PROGRESS
|
|
||||||
});
|
|
||||||
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
requestId,
|
|
||||||
levelId,
|
|
||||||
approverId,
|
|
||||||
tatHours,
|
|
||||||
now
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Approval Flow
|
|
||||||
|
|
||||||
When a level is approved, TAT jobs are cancelled and new ones are scheduled for the next level:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// approval.service.ts - approveLevel()
|
|
||||||
// Cancel current level TAT jobs
|
|
||||||
await tatSchedulerService.cancelTatJobs(requestId, levelId);
|
|
||||||
|
|
||||||
// Schedule TAT jobs for next level
|
|
||||||
await tatSchedulerService.scheduleTatJobs(
|
|
||||||
nextRequestId,
|
|
||||||
nextLevelId,
|
|
||||||
nextApproverId,
|
|
||||||
nextTatHours,
|
|
||||||
now
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Rejection Flow
|
|
||||||
|
|
||||||
When a level is rejected, all pending TAT jobs are cancelled:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
// approval.service.ts - approveLevel()
|
|
||||||
await tatSchedulerService.cancelTatJobs(requestId, levelId);
|
|
||||||
```
|
|
||||||
|
|
||||||
## Notification Flow
|
|
||||||
|
|
||||||
### 50% TAT Alert (⏳)
|
|
||||||
|
|
||||||
**Message:** "50% of TAT elapsed for Request REQ-XXX: [Title]"
|
|
||||||
|
|
||||||
**Actions:**
|
|
||||||
- Send push notification to approver
|
|
||||||
- Update `tat50_alert_sent = true`
|
|
||||||
- Update `tat_percentage_used = 50`
|
|
||||||
- Log activity: "50% of TAT time has elapsed"
|
|
||||||
|
|
||||||
### 75% TAT Alert (⚠️)
|
|
||||||
|
|
||||||
**Message:** "75% of TAT elapsed for Request REQ-XXX: [Title]. Please take action soon."
|
|
||||||
|
|
||||||
**Actions:**
|
|
||||||
- Send push notification to approver
|
|
||||||
- Update `tat75_alert_sent = true`
|
|
||||||
- Update `tat_percentage_used = 75`
|
|
||||||
- Log activity: "75% of TAT time has elapsed - Escalation warning"
|
|
||||||
|
|
||||||
### 100% TAT Breach (⏰)
|
|
||||||
|
|
||||||
**Message:** "TAT breached for Request REQ-XXX: [Title]. Immediate action required!"
|
|
||||||
|
|
||||||
**Actions:**
|
|
||||||
- Send push notification to approver
|
|
||||||
- Update `tat_breached = true`
|
|
||||||
- Update `tat_percentage_used = 100`
|
|
||||||
- Log activity: "TAT deadline reached - Breach notification"
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
### Environment Variables
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Redis connection for TAT queue
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
|
|
||||||
# Optional: TAT monitoring settings
|
|
||||||
TAT_CHECK_INTERVAL_MINUTES=30
|
|
||||||
TAT_REMINDER_THRESHOLD_1=50
|
|
||||||
TAT_REMINDER_THRESHOLD_2=80
|
|
||||||
```
|
|
||||||
|
|
||||||
### Docker Compose
|
|
||||||
|
|
||||||
Redis service is automatically configured:
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
redis:
|
|
||||||
image: redis:7-alpine
|
|
||||||
container_name: re_workflow_redis
|
|
||||||
ports:
|
|
||||||
- "6379:6379"
|
|
||||||
volumes:
|
|
||||||
- redis_data:/data
|
|
||||||
networks:
|
|
||||||
- re_workflow_network
|
|
||||||
restart: unless-stopped
|
|
||||||
```
|
|
||||||
|
|
||||||
## Working Hours Configuration
|
|
||||||
|
|
||||||
**Default Schedule:**
|
|
||||||
- Working Days: Monday - Friday
|
|
||||||
- Working Hours: 9:00 AM - 6:00 PM (9 hours/day)
|
|
||||||
- Timezone: Server timezone
|
|
||||||
|
|
||||||
**To Modify:**
|
|
||||||
Edit `WORK_START_HOUR` and `WORK_END_HOUR` in `tatTimeUtils.ts`
|
|
||||||
|
|
||||||
## Example Scenario
|
|
||||||
|
|
||||||
### Scenario: 48-hour TAT Approval
|
|
||||||
|
|
||||||
1. **Workflow Submitted**: Monday 10:00 AM
|
|
||||||
2. **50% Alert (24 hours)**: Tuesday 10:00 AM
|
|
||||||
- Notification sent to approver
|
|
||||||
- Database updated: `tat50_alert_sent = true`
|
|
||||||
3. **75% Alert (36 hours)**: Wednesday 10:00 AM
|
|
||||||
- Escalation warning sent
|
|
||||||
- Database updated: `tat75_alert_sent = true`
|
|
||||||
4. **100% Breach (48 hours)**: Thursday 10:00 AM
|
|
||||||
- Breach alert sent
|
|
||||||
- Database updated: `tat_breached = true`
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
|
|
||||||
### Queue Job Failures
|
|
||||||
|
|
||||||
- **Automatic Retry**: Failed jobs retry up to 3 times with exponential backoff
|
|
||||||
- **Error Logging**: All failures logged to console and logs
|
|
||||||
- **Non-Blocking**: TAT failures don't block workflow approval process
|
|
||||||
|
|
||||||
### Redis Connection Failures
|
|
||||||
|
|
||||||
- **Graceful Degradation**: Application continues to work even if Redis is down
|
|
||||||
- **Reconnection**: Automatic reconnection attempts
|
|
||||||
- **Logging**: Connection status logged
|
|
||||||
|
|
||||||
## Monitoring & Debugging
|
|
||||||
|
|
||||||
### Check Queue Status
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# View jobs in Redis
|
|
||||||
redis-cli
|
|
||||||
> KEYS bull:tatQueue:*
|
|
||||||
> LRANGE bull:tatQueue:delayed 0 -1
|
|
||||||
```
|
|
||||||
|
|
||||||
### View Worker Logs
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Check worker status in application logs
|
|
||||||
grep "TAT Worker" logs/app.log
|
|
||||||
grep "TAT Scheduler" logs/app.log
|
|
||||||
grep "TAT Processor" logs/app.log
|
|
||||||
```
|
|
||||||
|
|
||||||
### Database Queries
|
|
||||||
|
|
||||||
```sql
|
|
||||||
-- Check TAT status for all approval levels
|
|
||||||
SELECT
|
|
||||||
level_id,
|
|
||||||
request_id,
|
|
||||||
approver_name,
|
|
||||||
tat_hours,
|
|
||||||
tat_percentage_used,
|
|
||||||
tat50_alert_sent,
|
|
||||||
tat75_alert_sent,
|
|
||||||
tat_breached,
|
|
||||||
level_start_time,
|
|
||||||
tat_start_time
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE status IN ('PENDING', 'IN_PROGRESS');
|
|
||||||
|
|
||||||
-- Find breached TATs
|
|
||||||
SELECT * FROM approval_levels WHERE tat_breached = true;
|
|
||||||
```
|
|
||||||
|
|
||||||
## Best Practices
|
|
||||||
|
|
||||||
1. **Always Schedule on Level Start**: Ensure `tatStartTime` is set when a level becomes active
|
|
||||||
2. **Always Cancel on Level Complete**: Cancel jobs when level is approved/rejected to avoid duplicate notifications
|
|
||||||
3. **Use Job IDs**: Unique job IDs (`tat50-{requestId}-{levelId}`) allow easy cancellation
|
|
||||||
4. **Monitor Queue Health**: Regularly check Redis and worker status
|
|
||||||
5. **Test with Short TATs**: Use short TAT durations in development for testing
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Notifications Not Sent
|
|
||||||
|
|
||||||
1. Check Redis connection: `redis-cli ping`
|
|
||||||
2. Verify worker is running: Check logs for "TAT Worker: Initialized"
|
|
||||||
3. Check job scheduling: Look for "TAT jobs scheduled" logs
|
|
||||||
4. Verify VAPID configuration for push notifications
|
|
||||||
|
|
||||||
### Duplicate Notifications
|
|
||||||
|
|
||||||
1. Ensure jobs are cancelled when level is completed
|
|
||||||
2. Check for duplicate job IDs in Redis
|
|
||||||
3. Verify `tat50_alert_sent` and `tat75_alert_sent` flags
|
|
||||||
|
|
||||||
### Jobs Not Executing
|
|
||||||
|
|
||||||
1. Check system time (jobs use timestamps)
|
|
||||||
2. Verify working hours calculation
|
|
||||||
3. Check job delays in Redis
|
|
||||||
4. Review worker concurrency and rate limits
|
|
||||||
|
|
||||||
## Future Enhancements
|
|
||||||
|
|
||||||
1. **Configurable Working Hours**: Allow per-organization working hours
|
|
||||||
2. **Holiday Calendar**: Skip public holidays in TAT calculations
|
|
||||||
3. **Escalation Rules**: Auto-escalate to manager on breach
|
|
||||||
4. **TAT Dashboard**: Real-time visualization of TAT statuses
|
|
||||||
5. **Email Notifications**: Add email alerts alongside push notifications
|
|
||||||
6. **SMS Notifications**: Critical breach alerts via SMS
|
|
||||||
|
|
||||||
## API Endpoints (Future)
|
|
||||||
|
|
||||||
Potential API endpoints for TAT management:
|
|
||||||
|
|
||||||
```
|
|
||||||
GET /api/tat/status/:requestId - Get TAT status for request
|
|
||||||
GET /api/tat/breaches - List all breached requests
|
|
||||||
POST /api/tat/extend/:levelId - Extend TAT for a level
|
|
||||||
GET /api/tat/analytics - TAT analytics and reports
|
|
||||||
```
|
|
||||||
|
|
||||||
## References
|
|
||||||
|
|
||||||
- [BullMQ Documentation](https://docs.bullmq.io/)
|
|
||||||
- [Redis Documentation](https://redis.io/documentation)
|
|
||||||
- [Day.js Documentation](https://day.js.org/)
|
|
||||||
- [Web Push Notifications](https://developer.mozilla.org/en-US/docs/Web/API/Push_API)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Version**: 1.0.0
|
|
||||||
**Maintained By**: Royal Enfield Workflow Team
|
|
||||||
|
|
||||||
@ -1,411 +0,0 @@
|
|||||||
# TAT Notification Testing Guide
|
|
||||||
|
|
||||||
## Quick Setup for Testing
|
|
||||||
|
|
||||||
### Step 1: Setup Redis
|
|
||||||
|
|
||||||
**You MUST have Redis for TAT notifications to work.**
|
|
||||||
|
|
||||||
#### 🚀 Option A: Upstash (RECOMMENDED - No Installation!)
|
|
||||||
|
|
||||||
**Best choice for Windows development:**
|
|
||||||
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Sign up (free)
|
|
||||||
3. Create Database:
|
|
||||||
- Name: `redis-tat-dev`
|
|
||||||
- Type: Regional
|
|
||||||
- Region: Choose closest
|
|
||||||
4. Copy Redis URL (format: `rediss://default:...@host.upstash.io:6379`)
|
|
||||||
5. Add to `Re_Backend/.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_HOST.upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
**✅ Done!** No installation, works everywhere!
|
|
||||||
|
|
||||||
See detailed guide: `docs/UPSTASH_SETUP_GUIDE.md`
|
|
||||||
|
|
||||||
#### Option B: Docker (If you prefer local)
|
|
||||||
```bash
|
|
||||||
docker run -d --name redis-tat -p 6379:6379 redis:latest
|
|
||||||
```
|
|
||||||
|
|
||||||
Then in `.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Option C: Linux Production
|
|
||||||
```bash
|
|
||||||
sudo apt install redis-server -y
|
|
||||||
sudo systemctl start redis-server
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Verify Connection
|
|
||||||
- **Upstash**: Use Console CLI → `PING` → should return `PONG`
|
|
||||||
- **Local**: `Test-NetConnection localhost -Port 6379`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 2: Enable Test Mode (Optional but Recommended)
|
|
||||||
|
|
||||||
For faster testing, enable test mode where **1 hour = 1 minute**:
|
|
||||||
|
|
||||||
1. **Edit your `.env` file**:
|
|
||||||
```bash
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Restart your backend**:
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Verify test mode is enabled** - You should see:
|
|
||||||
```
|
|
||||||
⏰ TAT Configuration:
|
|
||||||
- Test Mode: ENABLED (1 hour = 1 minute)
|
|
||||||
- Working Hours: 9:00 - 18:00
|
|
||||||
- Working Days: Monday - Friday
|
|
||||||
- Redis: redis://localhost:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 3: Create a Test Workflow
|
|
||||||
|
|
||||||
#### Production Mode (TAT_TEST_MODE=false)
|
|
||||||
- Create a request with **2 hours TAT**
|
|
||||||
- Notifications will come at:
|
|
||||||
- **1 hour** (50%)
|
|
||||||
- **1.5 hours** (75%)
|
|
||||||
- **2 hours** (100% breach)
|
|
||||||
|
|
||||||
#### Test Mode (TAT_TEST_MODE=true) ⚡ FASTER
|
|
||||||
- Create a request with **6 hours TAT** (becomes 6 minutes)
|
|
||||||
- Notifications will come at:
|
|
||||||
- **3 minutes** (50%)
|
|
||||||
- **4.5 minutes** (75%)
|
|
||||||
- **6 minutes** (100% breach)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Step 4: Submit and Monitor
|
|
||||||
|
|
||||||
1. **Create and Submit Request** via your frontend or API
|
|
||||||
|
|
||||||
2. **Check Backend Logs** - You should see:
|
|
||||||
```
|
|
||||||
[TAT Scheduler] Calculating TAT milestones for request...
|
|
||||||
[TAT Scheduler] Start: 2025-11-04 12:00
|
|
||||||
[TAT Scheduler] 50%: 2025-11-04 12:03
|
|
||||||
[TAT Scheduler] 75%: 2025-11-04 12:04
|
|
||||||
[TAT Scheduler] 100%: 2025-11-04 12:06
|
|
||||||
[TAT Scheduler] Scheduled tat50 for level...
|
|
||||||
[TAT Scheduler] Scheduled tat75 for level...
|
|
||||||
[TAT Scheduler] Scheduled tatBreach for level...
|
|
||||||
[TAT Scheduler] ✅ TAT jobs scheduled for request...
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Wait for Notifications**
|
|
||||||
- Watch the logs
|
|
||||||
- Check push notifications
|
|
||||||
- Verify database updates
|
|
||||||
|
|
||||||
4. **Verify Notifications** - Look for:
|
|
||||||
```
|
|
||||||
[TAT Processor] Processing tat50 for request...
|
|
||||||
[TAT Processor] tat50 notification sent for request...
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Scenarios
|
|
||||||
|
|
||||||
### Scenario 1: Normal Flow (Happy Path)
|
|
||||||
```
|
|
||||||
1. Create request with TAT = 6 hours (6 min in test mode)
|
|
||||||
2. Submit request
|
|
||||||
3. Wait for 50% notification (3 min)
|
|
||||||
4. Wait for 75% notification (4.5 min)
|
|
||||||
5. Wait for 100% breach (6 min)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- ✅ 3 notifications sent
|
|
||||||
- ✅ Database flags updated
|
|
||||||
- ✅ Activity logs created
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Scenario 2: Early Approval
|
|
||||||
```
|
|
||||||
1. Create request with TAT = 6 hours
|
|
||||||
2. Submit request
|
|
||||||
3. Wait for 50% notification (3 min)
|
|
||||||
4. Approve immediately
|
|
||||||
5. Remaining notifications should be cancelled
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- ✅ 50% notification received
|
|
||||||
- ✅ 75% and 100% notifications cancelled
|
|
||||||
- ✅ TAT jobs for next level scheduled
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Scenario 3: Multi-Level Approval
|
|
||||||
```
|
|
||||||
1. Create request with 3 approval levels (2 hours each)
|
|
||||||
2. Submit request
|
|
||||||
3. Level 1: Wait for notifications, then approve
|
|
||||||
4. Level 2: Should schedule new TAT jobs
|
|
||||||
5. Level 2: Wait for notifications, then approve
|
|
||||||
6. Level 3: Should schedule new TAT jobs
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- ✅ Each level gets its own TAT monitoring
|
|
||||||
- ✅ Previous level jobs cancelled on approval
|
|
||||||
- ✅ New level jobs scheduled
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Scenario 4: Rejection
|
|
||||||
```
|
|
||||||
1. Create request with TAT = 6 hours
|
|
||||||
2. Submit request
|
|
||||||
3. Wait for 50% notification
|
|
||||||
4. Reject the request
|
|
||||||
5. All remaining notifications should be cancelled
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Result:**
|
|
||||||
- ✅ TAT jobs cancelled
|
|
||||||
- ✅ No further notifications
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Verification Checklist
|
|
||||||
|
|
||||||
### Backend Logs ✅
|
|
||||||
```bash
|
|
||||||
# Should see these messages:
|
|
||||||
✓ [TAT Queue] Connected to Redis
|
|
||||||
✓ [TAT Worker] Initialized and listening
|
|
||||||
✓ [TAT Scheduler] TAT jobs scheduled
|
|
||||||
✓ [TAT Processor] Processing tat50
|
|
||||||
✓ [TAT Processor] tat50 notification sent
|
|
||||||
```
|
|
||||||
|
|
||||||
### Database Check ✅
|
|
||||||
```sql
|
|
||||||
-- Check approval level TAT status
|
|
||||||
SELECT
|
|
||||||
request_id,
|
|
||||||
level_number,
|
|
||||||
approver_name,
|
|
||||||
tat_hours,
|
|
||||||
tat_percentage_used,
|
|
||||||
tat50_alert_sent,
|
|
||||||
tat75_alert_sent,
|
|
||||||
tat_breached,
|
|
||||||
tat_start_time,
|
|
||||||
status
|
|
||||||
FROM approval_levels
|
|
||||||
WHERE request_id = '<YOUR_REQUEST_ID>';
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Fields:**
|
|
||||||
- `tat_start_time`: Should be set when level starts
|
|
||||||
- `tat50_alert_sent`: true after 50% notification
|
|
||||||
- `tat75_alert_sent`: true after 75% notification
|
|
||||||
- `tat_breached`: true after 100% notification
|
|
||||||
- `tat_percentage_used`: 50, 75, or 100
|
|
||||||
|
|
||||||
### Activity Logs ✅
|
|
||||||
```sql
|
|
||||||
-- Check activity timeline
|
|
||||||
SELECT
|
|
||||||
activity_type,
|
|
||||||
activity_description,
|
|
||||||
user_name,
|
|
||||||
created_at
|
|
||||||
FROM activities
|
|
||||||
WHERE request_id = '<YOUR_REQUEST_ID>'
|
|
||||||
ORDER BY created_at DESC;
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Entries:**
|
|
||||||
- "50% of TAT time has elapsed"
|
|
||||||
- "75% of TAT time has elapsed - Escalation warning"
|
|
||||||
- "TAT deadline reached - Breach notification"
|
|
||||||
|
|
||||||
### Redis Queue ✅
|
|
||||||
```bash
|
|
||||||
# Connect to Redis
|
|
||||||
redis-cli
|
|
||||||
|
|
||||||
# Check scheduled jobs
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
LRANGE bull:tatQueue:delayed 0 -1
|
|
||||||
|
|
||||||
# Check job details
|
|
||||||
HGETALL bull:tatQueue:tat50-<REQUEST_ID>-<LEVEL_ID>
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### ❌ No Notifications Received
|
|
||||||
|
|
||||||
**Problem:** TAT jobs scheduled but no notifications
|
|
||||||
|
|
||||||
**Solutions:**
|
|
||||||
1. Check Redis is running:
|
|
||||||
```powershell
|
|
||||||
Test-NetConnection localhost -Port 6379
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Check worker is running:
|
|
||||||
```bash
|
|
||||||
# Look for in backend logs:
|
|
||||||
[TAT Worker] Worker is ready and listening
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Check job delays:
|
|
||||||
```bash
|
|
||||||
redis-cli
|
|
||||||
> LRANGE bull:tatQueue:delayed 0 -1
|
|
||||||
```
|
|
||||||
|
|
||||||
4. Verify VAPID keys for push notifications:
|
|
||||||
```bash
|
|
||||||
# In .env file:
|
|
||||||
VAPID_PUBLIC_KEY=...
|
|
||||||
VAPID_PRIVATE_KEY=...
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ❌ Jobs Not Executing
|
|
||||||
|
|
||||||
**Problem:** Jobs scheduled but never execute
|
|
||||||
|
|
||||||
**Solutions:**
|
|
||||||
1. Check system time is correct
|
|
||||||
2. Verify test mode settings
|
|
||||||
3. Check worker logs for errors
|
|
||||||
4. Restart worker:
|
|
||||||
```bash
|
|
||||||
# Restart backend server
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ❌ Duplicate Notifications
|
|
||||||
|
|
||||||
**Problem:** Receiving multiple notifications for same milestone
|
|
||||||
|
|
||||||
**Solutions:**
|
|
||||||
1. Check database flags are being set:
|
|
||||||
```sql
|
|
||||||
SELECT tat50_alert_sent, tat75_alert_sent FROM approval_levels;
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Verify job cancellation on approval:
|
|
||||||
```bash
|
|
||||||
# Should see in logs:
|
|
||||||
[Approval] TAT jobs cancelled for level...
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Check for duplicate job IDs in Redis
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### ❌ Redis Connection Errors
|
|
||||||
|
|
||||||
**Problem:** `ECONNREFUSED` errors
|
|
||||||
|
|
||||||
**Solutions:**
|
|
||||||
1. **Start Redis** - See Step 1
|
|
||||||
2. Check Redis URL in `.env`:
|
|
||||||
```bash
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
```
|
|
||||||
3. Verify port 6379 is not blocked:
|
|
||||||
```powershell
|
|
||||||
Test-NetConnection localhost -Port 6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Testing Timeline Examples
|
|
||||||
|
|
||||||
### Test Mode Enabled (1 hour = 1 minute)
|
|
||||||
|
|
||||||
| TAT Hours | Real Time | 50% | 75% | 100% |
|
|
||||||
|-----------|-----------|-----|-----|------|
|
|
||||||
| 2 hours | 2 minutes | 1m | 1.5m| 2m |
|
|
||||||
| 6 hours | 6 minutes | 3m | 4.5m| 6m |
|
|
||||||
| 24 hours | 24 minutes| 12m | 18m | 24m |
|
|
||||||
| 48 hours | 48 minutes| 24m | 36m | 48m |
|
|
||||||
|
|
||||||
### Production Mode (Normal)
|
|
||||||
|
|
||||||
| TAT Hours | 50% | 75% | 100% |
|
|
||||||
|-----------|--------|--------|--------|
|
|
||||||
| 2 hours | 1h | 1.5h | 2h |
|
|
||||||
| 6 hours | 3h | 4.5h | 6h |
|
|
||||||
| 24 hours | 12h | 18h | 24h |
|
|
||||||
| 48 hours | 24h | 36h | 48h |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Quick Test Commands
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
# 1. Check Redis
|
|
||||||
Test-NetConnection localhost -Port 6379
|
|
||||||
|
|
||||||
# 2. Start Backend (with test mode)
|
|
||||||
cd Re_Backend
|
|
||||||
$env:TAT_TEST_MODE="true"
|
|
||||||
npm run dev
|
|
||||||
|
|
||||||
# 3. Monitor Logs (in another terminal)
|
|
||||||
cd Re_Backend
|
|
||||||
Get-Content -Path "logs/app.log" -Wait -Tail 50
|
|
||||||
|
|
||||||
# 4. Check Redis Jobs
|
|
||||||
redis-cli KEYS "bull:tatQueue:*"
|
|
||||||
|
|
||||||
# 5. Query Database
|
|
||||||
psql -U laxman -d re_workflow_db -c "SELECT * FROM approval_levels WHERE tat_start_time IS NOT NULL;"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Support
|
|
||||||
|
|
||||||
If you encounter issues:
|
|
||||||
|
|
||||||
1. **Check Logs**: `Re_Backend/logs/`
|
|
||||||
2. **Enable Debug**: Set `LOG_LEVEL=debug` in `.env`
|
|
||||||
3. **Redis Status**: `redis-cli ping` should return `PONG`
|
|
||||||
4. **Worker Status**: Look for "TAT Worker: Initialized" in logs
|
|
||||||
5. **Database**: Verify TAT fields exist in `approval_levels` table
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Happy Testing!** 🎉
|
|
||||||
|
|
||||||
For more information, see:
|
|
||||||
- `TAT_NOTIFICATION_SYSTEM.md` - Full system documentation
|
|
||||||
- `INSTALL_REDIS.txt` - Redis installation guide
|
|
||||||
- `backend_structure.txt` - Database schema reference
|
|
||||||
|
|
||||||
@ -1,381 +0,0 @@
|
|||||||
# Upstash Redis Setup Guide
|
|
||||||
|
|
||||||
## Why Upstash?
|
|
||||||
|
|
||||||
✅ **No Installation**: Works instantly on Windows, Mac, Linux
|
|
||||||
✅ **100% Free Tier**: 10,000 commands/day (more than enough for dev)
|
|
||||||
✅ **Production Ready**: Same service for dev and production
|
|
||||||
✅ **Global CDN**: Fast from anywhere
|
|
||||||
✅ **Zero Maintenance**: No Redis server to manage
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Step-by-Step Setup (3 minutes)
|
|
||||||
|
|
||||||
### 1. Create Upstash Account
|
|
||||||
|
|
||||||
1. Go to: https://console.upstash.com/
|
|
||||||
2. Sign up with GitHub, Google, or Email
|
|
||||||
3. Verify your email (if required)
|
|
||||||
|
|
||||||
### 2. Create Redis Database
|
|
||||||
|
|
||||||
1. **Click "Create Database"**
|
|
||||||
2. **Fill in details**:
|
|
||||||
- **Name**: `redis-tat-dev` (or any name you like)
|
|
||||||
- **Type**: Select "Regional"
|
|
||||||
- **Region**: Choose closest to you (e.g., US East, EU West)
|
|
||||||
- **TLS**: Keep enabled (recommended)
|
|
||||||
- **Eviction**: Choose "No Eviction"
|
|
||||||
|
|
||||||
3. **Click "Create"**
|
|
||||||
|
|
||||||
### 3. Copy Connection URL
|
|
||||||
|
|
||||||
After creation, you'll see your database dashboard:
|
|
||||||
|
|
||||||
1. **Find "REST API" section**
|
|
||||||
2. **Look for "Redis URL"** - it looks like:
|
|
||||||
```
|
|
||||||
rediss://default:AbCdEfGh1234567890XyZ@us1-mighty-shark-12345.upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Click the copy button** 📋
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Configure Your Application
|
|
||||||
|
|
||||||
### Edit `.env` File
|
|
||||||
|
|
||||||
Open `Re_Backend/.env` and add/update:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Upstash Redis URL
|
|
||||||
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379
|
|
||||||
|
|
||||||
# Enable test mode for faster testing
|
|
||||||
TAT_TEST_MODE=true
|
|
||||||
```
|
|
||||||
|
|
||||||
**Important**:
|
|
||||||
- Note the **double `s`** in `rediss://` (TLS enabled)
|
|
||||||
- Copy the entire URL including the password
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Verify Connection
|
|
||||||
|
|
||||||
### Start Your Backend
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd Re_Backend
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
### Check Logs
|
|
||||||
|
|
||||||
You should see:
|
|
||||||
```
|
|
||||||
✅ [TAT Queue] Connected to Redis
|
|
||||||
✅ [TAT Worker] Initialized and listening
|
|
||||||
⏰ TAT Configuration:
|
|
||||||
- Test Mode: ENABLED (1 hour = 1 minute)
|
|
||||||
- Redis: rediss://***@upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Test Using Upstash Console
|
|
||||||
|
|
||||||
### Method 1: Web CLI (Easiest)
|
|
||||||
|
|
||||||
1. Go to your database in Upstash Console
|
|
||||||
2. Click the **"CLI"** tab
|
|
||||||
3. Type commands:
|
|
||||||
```redis
|
|
||||||
PING
|
|
||||||
# → PONG
|
|
||||||
|
|
||||||
KEYS *
|
|
||||||
# → Shows all keys (should see TAT jobs after submitting request)
|
|
||||||
|
|
||||||
INFO
|
|
||||||
# → Shows Redis server info
|
|
||||||
```
|
|
||||||
|
|
||||||
### Method 2: Redis CLI (Optional)
|
|
||||||
|
|
||||||
If you have `redis-cli` installed:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
redis-cli -u "rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379" ping
|
|
||||||
# → PONG
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Monitor Your TAT Jobs
|
|
||||||
|
|
||||||
### View Queued Jobs
|
|
||||||
|
|
||||||
In Upstash Console CLI:
|
|
||||||
|
|
||||||
```redis
|
|
||||||
# List all TAT jobs
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
|
|
||||||
# See delayed jobs
|
|
||||||
LRANGE bull:tatQueue:delayed 0 -1
|
|
||||||
|
|
||||||
# Get specific job details
|
|
||||||
HGETALL bull:tatQueue:tat50-<REQUEST_ID>-<LEVEL_ID>
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example Output
|
|
||||||
|
|
||||||
After submitting a request, you should see:
|
|
||||||
```redis
|
|
||||||
KEYS bull:tatQueue:*
|
|
||||||
# Returns:
|
|
||||||
# 1) "bull:tatQueue:id"
|
|
||||||
# 2) "bull:tatQueue:delayed"
|
|
||||||
# 3) "bull:tatQueue:tat50-abc123-xyz789"
|
|
||||||
# 4) "bull:tatQueue:tat75-abc123-xyz789"
|
|
||||||
# 5) "bull:tatQueue:tatBreach-abc123-xyz789"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Upstash Features for Development
|
|
||||||
|
|
||||||
### 1. Data Browser
|
|
||||||
- View all keys and values
|
|
||||||
- Edit data directly
|
|
||||||
- Delete specific keys
|
|
||||||
|
|
||||||
### 2. CLI Tab
|
|
||||||
- Run Redis commands
|
|
||||||
- Test queries
|
|
||||||
- Debug issues
|
|
||||||
|
|
||||||
### 3. Metrics
|
|
||||||
- Monitor requests/sec
|
|
||||||
- Track data usage
|
|
||||||
- View connection count
|
|
||||||
|
|
||||||
### 4. Logs
|
|
||||||
- See all commands executed
|
|
||||||
- Debug connection issues
|
|
||||||
- Monitor performance
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Free Tier Limits
|
|
||||||
|
|
||||||
**Upstash Free Tier includes:**
|
|
||||||
- ✅ 10,000 commands per day
|
|
||||||
- ✅ 256 MB storage
|
|
||||||
- ✅ TLS/SSL encryption
|
|
||||||
- ✅ Global edge caching
|
|
||||||
- ✅ REST API access
|
|
||||||
|
|
||||||
**Perfect for:**
|
|
||||||
- ✅ Development
|
|
||||||
- ✅ Testing
|
|
||||||
- ✅ Small production apps (up to ~100 users)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Production Considerations
|
|
||||||
|
|
||||||
### Upgrade When Needed
|
|
||||||
|
|
||||||
For production with high traffic:
|
|
||||||
- **Pro Plan**: $0.2 per 100K commands
|
|
||||||
- **Pay as you go**: No monthly fee
|
|
||||||
- **Auto-scaling**: Handles any load
|
|
||||||
|
|
||||||
### Security Best Practices
|
|
||||||
|
|
||||||
1. **Use TLS**: Always use `rediss://` (double s)
|
|
||||||
2. **Rotate Passwords**: Change regularly in production
|
|
||||||
3. **IP Restrictions**: Add allowed IPs in Upstash console
|
|
||||||
4. **Environment Variables**: Never commit REDIS_URL to Git
|
|
||||||
|
|
||||||
### Production Setup
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# .env.production
|
|
||||||
REDIS_URL=rediss://default:PROD_PASSWORD@prod-region.upstash.io:6379
|
|
||||||
TAT_TEST_MODE=false # Use real hours in production
|
|
||||||
WORK_START_HOUR=9
|
|
||||||
WORK_END_HOUR=18
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Connection Refused Error
|
|
||||||
|
|
||||||
**Problem**: `ECONNREFUSED` or timeout
|
|
||||||
|
|
||||||
**Solutions**:
|
|
||||||
|
|
||||||
1. **Check URL format**:
|
|
||||||
```bash
|
|
||||||
# Should be:
|
|
||||||
rediss://default:password@host.upstash.io:6379
|
|
||||||
|
|
||||||
# NOT:
|
|
||||||
redis://... (missing second 's' for TLS)
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Verify database is active**:
|
|
||||||
- Go to Upstash Console
|
|
||||||
- Check database status (should be green "Active")
|
|
||||||
|
|
||||||
3. **Test connection**:
|
|
||||||
- Use Upstash Console CLI tab
|
|
||||||
- Type `PING` - should return `PONG`
|
|
||||||
|
|
||||||
### Slow Response Times
|
|
||||||
|
|
||||||
**Problem**: High latency
|
|
||||||
|
|
||||||
**Solutions**:
|
|
||||||
|
|
||||||
1. **Choose closer region**:
|
|
||||||
- Delete database
|
|
||||||
- Create new one in region closer to you
|
|
||||||
|
|
||||||
2. **Use REST API** (alternative):
|
|
||||||
```bash
|
|
||||||
UPSTASH_REDIS_REST_URL=https://YOUR_URL.upstash.io
|
|
||||||
UPSTASH_REDIS_REST_TOKEN=YOUR_TOKEN
|
|
||||||
```
|
|
||||||
|
|
||||||
### Command Limit Exceeded
|
|
||||||
|
|
||||||
**Problem**: "Daily request limit exceeded"
|
|
||||||
|
|
||||||
**Solutions**:
|
|
||||||
|
|
||||||
1. **Check usage**:
|
|
||||||
- Go to Upstash Console → Metrics
|
|
||||||
- See command count
|
|
||||||
|
|
||||||
2. **Optimize**:
|
|
||||||
- Remove unnecessary Redis calls
|
|
||||||
- Batch operations where possible
|
|
||||||
|
|
||||||
3. **Upgrade** (if needed):
|
|
||||||
- Pro plan: $0.2 per 100K commands
|
|
||||||
- No monthly fee
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Comparison: Upstash vs Local Redis
|
|
||||||
|
|
||||||
| Feature | Upstash | Local Redis |
|
|
||||||
|---------|---------|-------------|
|
|
||||||
| **Setup Time** | 2 minutes | 10-30 minutes |
|
|
||||||
| **Installation** | None | Docker/Memurai |
|
|
||||||
| **Maintenance** | Zero | Manual updates |
|
|
||||||
| **Cost (Dev)** | Free | Free |
|
|
||||||
| **Works Offline** | No | Yes |
|
|
||||||
| **Production** | Same setup | Need migration |
|
|
||||||
| **Monitoring** | Built-in | Setup required |
|
|
||||||
| **Backup** | Automatic | Manual |
|
|
||||||
|
|
||||||
**Verdict**:
|
|
||||||
- ✅ **Upstash for most cases** (especially Windows dev)
|
|
||||||
- Local Redis only if you need offline development
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Migration from Local Redis
|
|
||||||
|
|
||||||
If you were using local Redis:
|
|
||||||
|
|
||||||
### 1. Export Data (Optional)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# From local Redis
|
|
||||||
redis-cli --rdb dump.rdb
|
|
||||||
|
|
||||||
# Import to Upstash (use Upstash REST API or CLI)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Update Configuration
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Old (.env)
|
|
||||||
REDIS_URL=redis://localhost:6379
|
|
||||||
|
|
||||||
# New (.env)
|
|
||||||
REDIS_URL=rediss://default:PASSWORD@host.upstash.io:6379
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Restart Application
|
|
||||||
|
|
||||||
```bash
|
|
||||||
npm run dev
|
|
||||||
```
|
|
||||||
|
|
||||||
**That's it!** No code changes needed - BullMQ works identically.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## FAQs
|
|
||||||
|
|
||||||
### Q: Is Upstash free forever?
|
|
||||||
**A**: Yes, 10,000 commands/day free tier is permanent.
|
|
||||||
|
|
||||||
### Q: Can I use it in production?
|
|
||||||
**A**: Absolutely! Many companies use Upstash in production.
|
|
||||||
|
|
||||||
### Q: What if I exceed free tier?
|
|
||||||
**A**: You get notified. Either optimize or upgrade to pay-as-you-go.
|
|
||||||
|
|
||||||
### Q: Is my data secure?
|
|
||||||
**A**: Yes, TLS encryption by default, SOC 2 compliant.
|
|
||||||
|
|
||||||
### Q: Can I have multiple databases?
|
|
||||||
**A**: Yes, unlimited databases on free tier.
|
|
||||||
|
|
||||||
### Q: What about data persistence?
|
|
||||||
**A**: Full Redis persistence (RDB + AOF) with automatic backups.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Resources
|
|
||||||
|
|
||||||
- **Upstash Docs**: https://docs.upstash.com/redis
|
|
||||||
- **Redis Commands**: https://redis.io/commands
|
|
||||||
- **BullMQ Docs**: https://docs.bullmq.io/
|
|
||||||
- **Our TAT System**: See `TAT_NOTIFICATION_SYSTEM.md`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Next Steps
|
|
||||||
|
|
||||||
✅ Upstash setup complete? Now:
|
|
||||||
|
|
||||||
1. **Enable Test Mode**: Set `TAT_TEST_MODE=true` in `.env`
|
|
||||||
2. **Create Test Request**: Submit a 6-hour TAT request
|
|
||||||
3. **Watch Logs**: See notifications at 3min, 4.5min, 6min
|
|
||||||
4. **Check Upstash CLI**: Monitor jobs in real-time
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Setup Complete!** 🎉
|
|
||||||
|
|
||||||
Your TAT notification system is now powered by Upstash Redis!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Last Updated**: November 4, 2025
|
|
||||||
**Contact**: Royal Enfield Workflow Team
|
|
||||||
|
|
||||||
@ -25,6 +25,7 @@ REFRESH_TOKEN_EXPIRY=7d
|
|||||||
OKTA_DOMAIN=https://dev-830839.oktapreview.com
|
OKTA_DOMAIN=https://dev-830839.oktapreview.com
|
||||||
OKTA_CLIENT_ID=0oa2j8slwj5S4bG5k0h8
|
OKTA_CLIENT_ID=0oa2j8slwj5S4bG5k0h8
|
||||||
OKTA_CLIENT_SECRET=your_okta_client_secret_here
|
OKTA_CLIENT_SECRET=your_okta_client_secret_here
|
||||||
|
OKTA_API_TOKEN=your_okta_api_token_here # For Okta User Management API (user search)
|
||||||
|
|
||||||
# Session
|
# Session
|
||||||
SESSION_SECRET=your_session_secret_here_min_32_chars
|
SESSION_SECRET=your_session_secret_here_min_32_chars
|
||||||
|
|||||||
144
package-lock.json
generated
144
package-lock.json
generated
@ -8,7 +8,9 @@
|
|||||||
"name": "re-workflow-backend",
|
"name": "re-workflow-backend",
|
||||||
"version": "1.0.0",
|
"version": "1.0.0",
|
||||||
"dependencies": {
|
"dependencies": {
|
||||||
|
"@anthropic-ai/sdk": "^0.68.0",
|
||||||
"@google-cloud/storage": "^7.14.0",
|
"@google-cloud/storage": "^7.14.0",
|
||||||
|
"@google/generative-ai": "^0.24.1",
|
||||||
"@types/uuid": "^8.3.4",
|
"@types/uuid": "^8.3.4",
|
||||||
"axios": "^1.7.9",
|
"axios": "^1.7.9",
|
||||||
"bcryptjs": "^2.4.3",
|
"bcryptjs": "^2.4.3",
|
||||||
@ -25,6 +27,7 @@
|
|||||||
"morgan": "^1.10.0",
|
"morgan": "^1.10.0",
|
||||||
"multer": "^1.4.5-lts.1",
|
"multer": "^1.4.5-lts.1",
|
||||||
"node-cron": "^3.0.3",
|
"node-cron": "^3.0.3",
|
||||||
|
"openai": "^6.8.1",
|
||||||
"passport": "^0.7.0",
|
"passport": "^0.7.0",
|
||||||
"passport-jwt": "^4.0.1",
|
"passport-jwt": "^4.0.1",
|
||||||
"pg": "^8.13.1",
|
"pg": "^8.13.1",
|
||||||
@ -48,6 +51,7 @@
|
|||||||
"@types/node": "^22.10.5",
|
"@types/node": "^22.10.5",
|
||||||
"@types/passport": "^1.0.16",
|
"@types/passport": "^1.0.16",
|
||||||
"@types/passport-jwt": "^4.0.1",
|
"@types/passport-jwt": "^4.0.1",
|
||||||
|
"@types/pg": "^8.15.6",
|
||||||
"@types/supertest": "^6.0.2",
|
"@types/supertest": "^6.0.2",
|
||||||
"@types/web-push": "^3.6.4",
|
"@types/web-push": "^3.6.4",
|
||||||
"@typescript-eslint/eslint-plugin": "^8.19.1",
|
"@typescript-eslint/eslint-plugin": "^8.19.1",
|
||||||
@ -69,6 +73,26 @@
|
|||||||
"npm": ">=10.0.0"
|
"npm": ">=10.0.0"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/@anthropic-ai/sdk": {
|
||||||
|
"version": "0.68.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/@anthropic-ai/sdk/-/sdk-0.68.0.tgz",
|
||||||
|
"integrity": "sha512-SMYAmbbiprG8k1EjEPMTwaTqssDT7Ae+jxcR5kWXiqTlbwMR2AthXtscEVWOHkRfyAV5+y3PFYTJRNa3OJWIEw==",
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"json-schema-to-ts": "^3.1.1"
|
||||||
|
},
|
||||||
|
"bin": {
|
||||||
|
"anthropic-ai-sdk": "bin/cli"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"zod": "^3.25.0 || ^4.0.0"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"zod": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/@babel/code-frame": {
|
"node_modules/@babel/code-frame": {
|
||||||
"version": "7.27.1",
|
"version": "7.27.1",
|
||||||
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.27.1.tgz",
|
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.27.1.tgz",
|
||||||
@ -530,6 +554,15 @@
|
|||||||
"@babel/core": "^7.0.0-0"
|
"@babel/core": "^7.0.0-0"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/@babel/runtime": {
|
||||||
|
"version": "7.28.4",
|
||||||
|
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.4.tgz",
|
||||||
|
"integrity": "sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==",
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=6.9.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/@babel/template": {
|
"node_modules/@babel/template": {
|
||||||
"version": "7.27.2",
|
"version": "7.27.2",
|
||||||
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.27.2.tgz",
|
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.27.2.tgz",
|
||||||
@ -875,6 +908,15 @@
|
|||||||
"node": ">=14"
|
"node": ">=14"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/@google/generative-ai": {
|
||||||
|
"version": "0.24.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/@google/generative-ai/-/generative-ai-0.24.1.tgz",
|
||||||
|
"integrity": "sha512-MqO+MLfM6kjxcKoy0p1wRzG3b4ZZXtPI+z2IE26UogS2Cm/XHO+7gGRBh6gcJsOiIVoH93UwKvW4HdgiOZCy9Q==",
|
||||||
|
"license": "Apache-2.0",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=18.0.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/@humanfs/core": {
|
"node_modules/@humanfs/core": {
|
||||||
"version": "0.19.1",
|
"version": "0.19.1",
|
||||||
"resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
|
"resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
|
||||||
@ -2032,6 +2074,18 @@
|
|||||||
"@types/passport": "*"
|
"@types/passport": "*"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/@types/pg": {
|
||||||
|
"version": "8.15.6",
|
||||||
|
"resolved": "https://registry.npmjs.org/@types/pg/-/pg-8.15.6.tgz",
|
||||||
|
"integrity": "sha512-NoaMtzhxOrubeL/7UZuNTrejB4MPAJ0RpxZqXQf2qXuVlTPuG6Y8p4u9dKRaue4yjmC7ZhzVO2/Yyyn25znrPQ==",
|
||||||
|
"dev": true,
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"@types/node": "*",
|
||||||
|
"pg-protocol": "*",
|
||||||
|
"pg-types": "^2.2.0"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/@types/qs": {
|
"node_modules/@types/qs": {
|
||||||
"version": "6.14.0",
|
"version": "6.14.0",
|
||||||
"resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.14.0.tgz",
|
"resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.14.0.tgz",
|
||||||
@ -3975,6 +4029,27 @@
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/engine.io/node_modules/ws": {
|
||||||
|
"version": "8.17.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
||||||
|
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=10.0.0"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"bufferutil": "^4.0.1",
|
||||||
|
"utf-8-validate": ">=5.0.2"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"bufferutil": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"utf-8-validate": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/error-ex": {
|
"node_modules/error-ex": {
|
||||||
"version": "1.3.4",
|
"version": "1.3.4",
|
||||||
"resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.4.tgz",
|
"resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.4.tgz",
|
||||||
@ -6264,6 +6339,19 @@
|
|||||||
"dev": true,
|
"dev": true,
|
||||||
"license": "MIT"
|
"license": "MIT"
|
||||||
},
|
},
|
||||||
|
"node_modules/json-schema-to-ts": {
|
||||||
|
"version": "3.1.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/json-schema-to-ts/-/json-schema-to-ts-3.1.1.tgz",
|
||||||
|
"integrity": "sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g==",
|
||||||
|
"license": "MIT",
|
||||||
|
"dependencies": {
|
||||||
|
"@babel/runtime": "^7.18.3",
|
||||||
|
"ts-algebra": "^2.0.0"
|
||||||
|
},
|
||||||
|
"engines": {
|
||||||
|
"node": ">=16"
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/json-schema-traverse": {
|
"node_modules/json-schema-traverse": {
|
||||||
"version": "0.4.1",
|
"version": "0.4.1",
|
||||||
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
|
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
|
||||||
@ -7148,6 +7236,27 @@
|
|||||||
"url": "https://github.com/sponsors/sindresorhus"
|
"url": "https://github.com/sponsors/sindresorhus"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/openai": {
|
||||||
|
"version": "6.8.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/openai/-/openai-6.8.1.tgz",
|
||||||
|
"integrity": "sha512-ACifslrVgf+maMz9vqwMP4+v9qvx5Yzssydizks8n+YUJ6YwUoxj51sKRQ8HYMfR6wgKLSIlaI108ZwCk+8yig==",
|
||||||
|
"license": "Apache-2.0",
|
||||||
|
"bin": {
|
||||||
|
"openai": "bin/cli"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"ws": "^8.18.0",
|
||||||
|
"zod": "^3.25 || ^4.0"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"ws": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"zod": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/optionator": {
|
"node_modules/optionator": {
|
||||||
"version": "0.9.4",
|
"version": "0.9.4",
|
||||||
"resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
|
"resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
|
||||||
@ -8443,6 +8552,27 @@
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/socket.io-adapter/node_modules/ws": {
|
||||||
|
"version": "8.17.1",
|
||||||
|
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
||||||
|
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
||||||
|
"license": "MIT",
|
||||||
|
"engines": {
|
||||||
|
"node": ">=10.0.0"
|
||||||
|
},
|
||||||
|
"peerDependencies": {
|
||||||
|
"bufferutil": "^4.0.1",
|
||||||
|
"utf-8-validate": ">=5.0.2"
|
||||||
|
},
|
||||||
|
"peerDependenciesMeta": {
|
||||||
|
"bufferutil": {
|
||||||
|
"optional": true
|
||||||
|
},
|
||||||
|
"utf-8-validate": {
|
||||||
|
"optional": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"node_modules/socket.io-parser": {
|
"node_modules/socket.io-parser": {
|
||||||
"version": "4.2.4",
|
"version": "4.2.4",
|
||||||
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.4.tgz",
|
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.4.tgz",
|
||||||
@ -8972,6 +9102,12 @@
|
|||||||
"node": ">= 14.0.0"
|
"node": ">= 14.0.0"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
|
"node_modules/ts-algebra": {
|
||||||
|
"version": "2.0.0",
|
||||||
|
"resolved": "https://registry.npmjs.org/ts-algebra/-/ts-algebra-2.0.0.tgz",
|
||||||
|
"integrity": "sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw==",
|
||||||
|
"license": "MIT"
|
||||||
|
},
|
||||||
"node_modules/ts-api-utils": {
|
"node_modules/ts-api-utils": {
|
||||||
"version": "2.1.0",
|
"version": "2.1.0",
|
||||||
"resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.1.0.tgz",
|
"resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.1.0.tgz",
|
||||||
@ -9627,10 +9763,12 @@
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
"node_modules/ws": {
|
"node_modules/ws": {
|
||||||
"version": "8.17.1",
|
"version": "8.18.3",
|
||||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
"resolved": "https://registry.npmjs.org/ws/-/ws-8.18.3.tgz",
|
||||||
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
"integrity": "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==",
|
||||||
"license": "MIT",
|
"license": "MIT",
|
||||||
|
"optional": true,
|
||||||
|
"peer": true,
|
||||||
"engines": {
|
"engines": {
|
||||||
"node": ">=10.0.0"
|
"node": ">=10.0.0"
|
||||||
},
|
},
|
||||||
|
|||||||
@ -5,7 +5,8 @@
|
|||||||
"main": "dist/server.js",
|
"main": "dist/server.js",
|
||||||
"scripts": {
|
"scripts": {
|
||||||
"start": "node dist/server.js",
|
"start": "node dist/server.js",
|
||||||
"dev": "npm run migrate && nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
|
"dev": "npm run setup && nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
|
||||||
|
"dev:no-setup": "nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
|
||||||
"build": "tsc",
|
"build": "tsc",
|
||||||
"build:watch": "tsc --watch",
|
"build:watch": "tsc --watch",
|
||||||
"start:prod": "NODE_ENV=production node dist/server.js",
|
"start:prod": "NODE_ENV=production node dist/server.js",
|
||||||
@ -21,11 +22,14 @@
|
|||||||
"db:migrate:undo": "sequelize-cli db:migrate:undo",
|
"db:migrate:undo": "sequelize-cli db:migrate:undo",
|
||||||
"db:seed": "sequelize-cli db:seed:all",
|
"db:seed": "sequelize-cli db:seed:all",
|
||||||
"clean": "rm -rf dist",
|
"clean": "rm -rf dist",
|
||||||
|
"setup": "ts-node -r tsconfig-paths/register src/scripts/auto-setup.ts",
|
||||||
"migrate": "ts-node -r tsconfig-paths/register src/scripts/migrate.ts",
|
"migrate": "ts-node -r tsconfig-paths/register src/scripts/migrate.ts",
|
||||||
"seed:config": "ts-node -r tsconfig-paths/register src/scripts/seed-admin-config.ts"
|
"seed:config": "ts-node -r tsconfig-paths/register src/scripts/seed-admin-config.ts"
|
||||||
},
|
},
|
||||||
"dependencies": {
|
"dependencies": {
|
||||||
|
"@anthropic-ai/sdk": "^0.68.0",
|
||||||
"@google-cloud/storage": "^7.14.0",
|
"@google-cloud/storage": "^7.14.0",
|
||||||
|
"@google/generative-ai": "^0.24.1",
|
||||||
"@types/uuid": "^8.3.4",
|
"@types/uuid": "^8.3.4",
|
||||||
"axios": "^1.7.9",
|
"axios": "^1.7.9",
|
||||||
"bcryptjs": "^2.4.3",
|
"bcryptjs": "^2.4.3",
|
||||||
@ -42,6 +46,7 @@
|
|||||||
"morgan": "^1.10.0",
|
"morgan": "^1.10.0",
|
||||||
"multer": "^1.4.5-lts.1",
|
"multer": "^1.4.5-lts.1",
|
||||||
"node-cron": "^3.0.3",
|
"node-cron": "^3.0.3",
|
||||||
|
"openai": "^6.8.1",
|
||||||
"passport": "^0.7.0",
|
"passport": "^0.7.0",
|
||||||
"passport-jwt": "^4.0.1",
|
"passport-jwt": "^4.0.1",
|
||||||
"pg": "^8.13.1",
|
"pg": "^8.13.1",
|
||||||
@ -65,6 +70,7 @@
|
|||||||
"@types/node": "^22.10.5",
|
"@types/node": "^22.10.5",
|
||||||
"@types/passport": "^1.0.16",
|
"@types/passport": "^1.0.16",
|
||||||
"@types/passport-jwt": "^4.0.1",
|
"@types/passport-jwt": "^4.0.1",
|
||||||
|
"@types/pg": "^8.15.6",
|
||||||
"@types/supertest": "^6.0.2",
|
"@types/supertest": "^6.0.2",
|
||||||
"@types/web-push": "^3.6.4",
|
"@types/web-push": "^3.6.4",
|
||||||
"@typescript-eslint/eslint-plugin": "^8.19.1",
|
"@typescript-eslint/eslint-plugin": "^8.19.1",
|
||||||
|
|||||||
55
scripts/assign-admin-user.sql
Normal file
55
scripts/assign-admin-user.sql
Normal file
@ -0,0 +1,55 @@
|
|||||||
|
/**
|
||||||
|
* Assign First Admin User
|
||||||
|
*
|
||||||
|
* Purpose: Quick script to make your first user an ADMIN after fresh setup
|
||||||
|
*
|
||||||
|
* Usage:
|
||||||
|
* 1. Replace YOUR_EMAIL below with your actual email
|
||||||
|
* 2. Run: psql -d royal_enfield_workflow -f scripts/assign-admin-user.sql
|
||||||
|
*/
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- UPDATE THIS EMAIL WITH YOUR ACTUAL EMAIL
|
||||||
|
-- ============================================
|
||||||
|
|
||||||
|
\echo 'Assigning ADMIN role to user...\n'
|
||||||
|
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN'
|
||||||
|
WHERE email = 'YOUR_EMAIL@royalenfield.com' -- ← CHANGE THIS
|
||||||
|
RETURNING
|
||||||
|
user_id,
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
role,
|
||||||
|
updated_at;
|
||||||
|
|
||||||
|
\echo '\n✅ Admin role assigned!\n'
|
||||||
|
|
||||||
|
-- Display all current admins
|
||||||
|
\echo 'Current ADMIN users:'
|
||||||
|
SELECT
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
department,
|
||||||
|
role,
|
||||||
|
created_at
|
||||||
|
FROM users
|
||||||
|
WHERE role = 'ADMIN' AND is_active = true
|
||||||
|
ORDER BY email;
|
||||||
|
|
||||||
|
-- Display role summary
|
||||||
|
\echo '\nRole Summary:'
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as count
|
||||||
|
FROM users
|
||||||
|
WHERE is_active = true
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END;
|
||||||
|
|
||||||
123
scripts/assign-user-roles.sql
Normal file
123
scripts/assign-user-roles.sql
Normal file
@ -0,0 +1,123 @@
|
|||||||
|
/**
|
||||||
|
* User Role Assignment Script
|
||||||
|
*
|
||||||
|
* Purpose: Assign roles to specific users after fresh database setup
|
||||||
|
*
|
||||||
|
* Usage:
|
||||||
|
* 1. Update the email addresses below with your actual users
|
||||||
|
* 2. Run: psql -d royal_enfield_workflow -f scripts/assign-user-roles.sql
|
||||||
|
*
|
||||||
|
* Roles:
|
||||||
|
* - USER: Default role for all employees
|
||||||
|
* - MANAGEMENT: Department heads, managers, auditors
|
||||||
|
* - ADMIN: IT administrators, system managers
|
||||||
|
*/
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- ASSIGN ADMIN ROLES
|
||||||
|
-- ============================================
|
||||||
|
-- Replace with your actual admin email addresses
|
||||||
|
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'ADMIN'
|
||||||
|
WHERE email IN (
|
||||||
|
'admin@royalenfield.com',
|
||||||
|
'it.admin@royalenfield.com',
|
||||||
|
'system.admin@royalenfield.com'
|
||||||
|
-- Add more admin emails here
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Verify ADMIN users
|
||||||
|
SELECT
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
role,
|
||||||
|
updated_at
|
||||||
|
FROM users
|
||||||
|
WHERE role = 'ADMIN'
|
||||||
|
ORDER BY email;
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- ASSIGN MANAGEMENT ROLES
|
||||||
|
-- ============================================
|
||||||
|
-- Replace with your actual management email addresses
|
||||||
|
|
||||||
|
UPDATE users
|
||||||
|
SET role = 'MANAGEMENT'
|
||||||
|
WHERE email IN (
|
||||||
|
'manager1@royalenfield.com',
|
||||||
|
'dept.head@royalenfield.com',
|
||||||
|
'auditor@royalenfield.com'
|
||||||
|
-- Add more management emails here
|
||||||
|
);
|
||||||
|
|
||||||
|
-- Verify MANAGEMENT users
|
||||||
|
SELECT
|
||||||
|
email,
|
||||||
|
display_name,
|
||||||
|
department,
|
||||||
|
role,
|
||||||
|
updated_at
|
||||||
|
FROM users
|
||||||
|
WHERE role = 'MANAGEMENT'
|
||||||
|
ORDER BY department, email;
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- VERIFY ALL ROLES
|
||||||
|
-- ============================================
|
||||||
|
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as user_count
|
||||||
|
FROM users
|
||||||
|
WHERE is_active = true
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END;
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- EXAMPLE: Assign role by department
|
||||||
|
-- ============================================
|
||||||
|
|
||||||
|
-- Make all users in "IT" department as ADMIN
|
||||||
|
-- UPDATE users
|
||||||
|
-- SET role = 'ADMIN'
|
||||||
|
-- WHERE department = 'IT' AND is_active = true;
|
||||||
|
|
||||||
|
-- Make all users in "Management" department as MANAGEMENT
|
||||||
|
-- UPDATE users
|
||||||
|
-- SET role = 'MANAGEMENT'
|
||||||
|
-- WHERE department = 'Management' AND is_active = true;
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- EXAMPLE: Assign role by designation
|
||||||
|
-- ============================================
|
||||||
|
|
||||||
|
-- Make all "Department Head" as MANAGEMENT
|
||||||
|
-- UPDATE users
|
||||||
|
-- SET role = 'MANAGEMENT'
|
||||||
|
-- WHERE (designation ILIKE '%head%' OR designation ILIKE '%manager%')
|
||||||
|
-- AND is_active = true;
|
||||||
|
|
||||||
|
-- ============================================
|
||||||
|
-- Display role summary
|
||||||
|
-- ============================================
|
||||||
|
|
||||||
|
\echo '\n✅ Role assignment complete!\n'
|
||||||
|
\echo 'Role Summary:'
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as total_users,
|
||||||
|
COUNT(CASE WHEN is_active = true THEN 1 END) as active_users
|
||||||
|
FROM users
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END;
|
||||||
136
scripts/fresh-database-setup.bat
Normal file
136
scripts/fresh-database-setup.bat
Normal file
@ -0,0 +1,136 @@
|
|||||||
|
@echo off
|
||||||
|
REM ############################################################################
|
||||||
|
REM Fresh Database Setup Script (Windows)
|
||||||
|
REM
|
||||||
|
REM Purpose: Complete fresh database setup for Royal Enfield Workflow System
|
||||||
|
REM
|
||||||
|
REM Prerequisites:
|
||||||
|
REM 1. PostgreSQL 16.x installed
|
||||||
|
REM 2. Redis installed and running
|
||||||
|
REM 3. Node.js 18+ installed
|
||||||
|
REM 4. Environment variables configured in .env
|
||||||
|
REM
|
||||||
|
REM Usage: scripts\fresh-database-setup.bat
|
||||||
|
REM ############################################################################
|
||||||
|
|
||||||
|
setlocal enabledelayedexpansion
|
||||||
|
|
||||||
|
echo.
|
||||||
|
echo ===============================================================
|
||||||
|
echo Royal Enfield Workflow System - Fresh Database Setup
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
REM Load .env file
|
||||||
|
if exist .env (
|
||||||
|
echo [*] Loading environment variables...
|
||||||
|
for /f "usebackq tokens=1,2 delims==" %%a in (".env") do (
|
||||||
|
set "%%a=%%b"
|
||||||
|
)
|
||||||
|
) else (
|
||||||
|
echo [ERROR] .env file not found!
|
||||||
|
echo Please copy env.example to .env and configure it
|
||||||
|
pause
|
||||||
|
exit /b 1
|
||||||
|
)
|
||||||
|
|
||||||
|
REM Set default values if not in .env
|
||||||
|
if not defined DB_NAME set DB_NAME=royal_enfield_workflow
|
||||||
|
if not defined DB_USER set DB_USER=postgres
|
||||||
|
if not defined DB_HOST set DB_HOST=localhost
|
||||||
|
if not defined DB_PORT set DB_PORT=5432
|
||||||
|
|
||||||
|
echo.
|
||||||
|
echo WARNING: This will DROP the existing database!
|
||||||
|
echo Database: %DB_NAME%
|
||||||
|
echo Host: %DB_HOST%:%DB_PORT%
|
||||||
|
echo.
|
||||||
|
set /p CONFIRM="Are you sure you want to continue? (yes/no): "
|
||||||
|
|
||||||
|
if /i not "%CONFIRM%"=="yes" (
|
||||||
|
echo Setup cancelled.
|
||||||
|
exit /b 0
|
||||||
|
)
|
||||||
|
|
||||||
|
echo.
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 1: Dropping existing database (if exists)...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d postgres -c "DROP DATABASE IF EXISTS %DB_NAME%;" 2>nul
|
||||||
|
|
||||||
|
echo [OK] Old database dropped
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 2: Creating fresh database...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d postgres -c "CREATE DATABASE %DB_NAME% OWNER %DB_USER%;"
|
||||||
|
|
||||||
|
echo [OK] Fresh database created: %DB_NAME%
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 3: Installing PostgreSQL extensions...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d %DB_NAME% -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d %DB_NAME% -c "CREATE EXTENSION IF NOT EXISTS \"pg_trgm\";"
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d %DB_NAME% -c "CREATE EXTENSION IF NOT EXISTS \"btree_gin\";"
|
||||||
|
|
||||||
|
echo [OK] PostgreSQL extensions installed
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 4: Running database migrations...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
call npm run migrate
|
||||||
|
|
||||||
|
echo [OK] All migrations completed
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 5: Seeding admin configuration...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
call npm run seed:config
|
||||||
|
|
||||||
|
echo [OK] Admin configuration seeded
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo Step 6: Database verification...
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
|
||||||
|
psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d %DB_NAME% -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public' ORDER BY tablename;"
|
||||||
|
|
||||||
|
echo [OK] Database structure verified
|
||||||
|
echo.
|
||||||
|
|
||||||
|
echo ===============================================================
|
||||||
|
echo FRESH DATABASE SETUP COMPLETE!
|
||||||
|
echo ===============================================================
|
||||||
|
echo.
|
||||||
|
echo Next Steps:
|
||||||
|
echo 1. Assign admin role to your user:
|
||||||
|
echo psql -d %DB_NAME% -f scripts\assign-admin-user.sql
|
||||||
|
echo.
|
||||||
|
echo 2. Start the backend server:
|
||||||
|
echo npm run dev
|
||||||
|
echo.
|
||||||
|
echo 3. Access the application:
|
||||||
|
echo http://localhost:5000
|
||||||
|
echo.
|
||||||
|
echo Database is ready for production use!
|
||||||
|
echo.
|
||||||
|
|
||||||
|
pause
|
||||||
|
|
||||||
168
scripts/fresh-database-setup.sh
Normal file
168
scripts/fresh-database-setup.sh
Normal file
@ -0,0 +1,168 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# Fresh Database Setup Script
|
||||||
|
#
|
||||||
|
# Purpose: Complete fresh database setup for Royal Enfield Workflow System
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. PostgreSQL 16.x installed
|
||||||
|
# 2. Redis installed and running
|
||||||
|
# 3. Node.js 18+ installed
|
||||||
|
# 4. Environment variables configured in .env
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# chmod +x scripts/fresh-database-setup.sh
|
||||||
|
# ./scripts/fresh-database-setup.sh
|
||||||
|
#
|
||||||
|
# What this script does:
|
||||||
|
# 1. Drops existing database (if exists)
|
||||||
|
# 2. Creates fresh database
|
||||||
|
# 3. Runs all migrations in order
|
||||||
|
# 4. Seeds admin configuration
|
||||||
|
# 5. Creates initial admin user
|
||||||
|
# 6. Verifies setup
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
set -e # Exit on error
|
||||||
|
|
||||||
|
# Colors for output
|
||||||
|
RED='\033[0;31m'
|
||||||
|
GREEN='\033[0;32m'
|
||||||
|
YELLOW='\033[1;33m'
|
||||||
|
BLUE='\033[0;34m'
|
||||||
|
NC='\033[0m' # No Color
|
||||||
|
|
||||||
|
# Load environment variables
|
||||||
|
if [ -f .env ]; then
|
||||||
|
echo -e "${BLUE}📋 Loading environment variables...${NC}"
|
||||||
|
export $(cat .env | grep -v '^#' | xargs)
|
||||||
|
else
|
||||||
|
echo -e "${RED}❌ .env file not found!${NC}"
|
||||||
|
echo -e "${YELLOW}Please copy env.example to .env and configure it${NC}"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Database variables
|
||||||
|
DB_NAME="${DB_NAME:-royal_enfield_workflow}"
|
||||||
|
DB_USER="${DB_USER:-postgres}"
|
||||||
|
DB_HOST="${DB_HOST:-localhost}"
|
||||||
|
DB_PORT="${DB_PORT:-5432}"
|
||||||
|
|
||||||
|
echo -e "${BLUE}╔═══════════════════════════════════════════════════════════════╗${NC}"
|
||||||
|
echo -e "${BLUE}║ Royal Enfield Workflow System - Fresh Database Setup ║${NC}"
|
||||||
|
echo -e "${BLUE}╚═══════════════════════════════════════════════════════════════╝${NC}"
|
||||||
|
echo ""
|
||||||
|
echo -e "${YELLOW}⚠️ WARNING: This will DROP the existing database!${NC}"
|
||||||
|
echo -e "${YELLOW} Database: ${DB_NAME}${NC}"
|
||||||
|
echo -e "${YELLOW} Host: ${DB_HOST}:${DB_PORT}${NC}"
|
||||||
|
echo ""
|
||||||
|
read -p "Are you sure you want to continue? (yes/no): " -r
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ ! $REPLY =~ ^[Yy]es$ ]]; then
|
||||||
|
echo -e "${YELLOW}Setup cancelled.${NC}"
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 1: Dropping existing database (if exists)...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d postgres -c "DROP DATABASE IF EXISTS $DB_NAME;" || true
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ Old database dropped${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 2: Creating fresh database...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER;"
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ Fresh database created: $DB_NAME${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 3: Installing PostgreSQL extensions...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME <<EOF
|
||||||
|
-- UUID extension for primary keys
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
|
||||||
|
|
||||||
|
-- pg_trgm for text search
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
|
||||||
|
|
||||||
|
-- Enable JSONB operators
|
||||||
|
CREATE EXTENSION IF NOT EXISTS "btree_gin";
|
||||||
|
EOF
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ PostgreSQL extensions installed${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 4: Running database migrations...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
npm run migrate
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ All migrations completed${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 5: Seeding admin configuration...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
npm run seed:config
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ Admin configuration seeded${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
echo -e "${BLUE}Step 6: Database verification...${NC}"
|
||||||
|
echo -e "${BLUE}════════════════════════════════════════════════════════════════${NC}"
|
||||||
|
|
||||||
|
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME <<EOF
|
||||||
|
-- Check tables created
|
||||||
|
SELECT
|
||||||
|
schemaname,
|
||||||
|
tablename
|
||||||
|
FROM pg_tables
|
||||||
|
WHERE schemaname = 'public'
|
||||||
|
ORDER BY tablename;
|
||||||
|
|
||||||
|
-- Check role enum
|
||||||
|
SELECT
|
||||||
|
enumlabel
|
||||||
|
FROM pg_enum
|
||||||
|
WHERE enumtypid = 'user_role_enum'::regtype;
|
||||||
|
|
||||||
|
-- Check indexes
|
||||||
|
SELECT
|
||||||
|
tablename,
|
||||||
|
indexname
|
||||||
|
FROM pg_indexes
|
||||||
|
WHERE schemaname = 'public' AND tablename = 'users'
|
||||||
|
ORDER BY tablename, indexname;
|
||||||
|
EOF
|
||||||
|
|
||||||
|
echo -e "${GREEN}✅ Database structure verified${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
echo -e "${GREEN}╔═══════════════════════════════════════════════════════════════╗${NC}"
|
||||||
|
echo -e "${GREEN}║ ✅ FRESH DATABASE SETUP COMPLETE! ║${NC}"
|
||||||
|
echo -e "${GREEN}╚═══════════════════════════════════════════════════════════════╝${NC}"
|
||||||
|
echo ""
|
||||||
|
echo -e "${YELLOW}📋 Next Steps:${NC}"
|
||||||
|
echo -e " 1. Assign admin role to your user:"
|
||||||
|
echo -e " ${BLUE}psql -d $DB_NAME -f scripts/assign-admin-user.sql${NC}"
|
||||||
|
echo ""
|
||||||
|
echo -e " 2. Start the backend server:"
|
||||||
|
echo -e " ${BLUE}npm run dev${NC}"
|
||||||
|
echo ""
|
||||||
|
echo -e " 3. Access the application:"
|
||||||
|
echo -e " ${BLUE}http://localhost:5000${NC}"
|
||||||
|
echo ""
|
||||||
|
echo -e "${GREEN}🎉 Database is ready for production use!${NC}"
|
||||||
|
|
||||||
14
src/app.ts
14
src/app.ts
@ -29,6 +29,16 @@ const initializeDatabase = async () => {
|
|||||||
// Initialize database
|
// Initialize database
|
||||||
initializeDatabase();
|
initializeDatabase();
|
||||||
|
|
||||||
|
// Trust proxy - Enable this when behind a reverse proxy (nginx, load balancer, etc.)
|
||||||
|
// This allows Express to read X-Forwarded-* headers correctly
|
||||||
|
// Set to true in production, false in development
|
||||||
|
if (process.env.TRUST_PROXY === 'true' || process.env.NODE_ENV === 'production') {
|
||||||
|
app.set('trust proxy', true);
|
||||||
|
} else {
|
||||||
|
// In development, trust first proxy (useful for local testing with nginx)
|
||||||
|
app.set('trust proxy', 1);
|
||||||
|
}
|
||||||
|
|
||||||
// CORS middleware - MUST be before other middleware
|
// CORS middleware - MUST be before other middleware
|
||||||
app.use(corsMiddleware);
|
app.use(corsMiddleware);
|
||||||
|
|
||||||
@ -117,7 +127,7 @@ app.post('/api/v1/auth/sso-callback', async (req: express.Request, res: express.
|
|||||||
designation: user.designation || null,
|
designation: user.designation || null,
|
||||||
phone: user.phone || null,
|
phone: user.phone || null,
|
||||||
location: user.location || null,
|
location: user.location || null,
|
||||||
isAdmin: user.isAdmin,
|
role: user.role,
|
||||||
lastLogin: user.lastLogin
|
lastLogin: user.lastLogin
|
||||||
},
|
},
|
||||||
isNewUser: user.createdAt.getTime() === user.updatedAt.getTime()
|
isNewUser: user.createdAt.getTime() === user.updatedAt.getTime()
|
||||||
@ -155,7 +165,7 @@ app.get('/api/v1/users', async (_req: express.Request, res: express.Response): P
|
|||||||
designation: user.designation || null,
|
designation: user.designation || null,
|
||||||
phone: user.phone || null,
|
phone: user.phone || null,
|
||||||
location: user.location || null,
|
location: user.location || null,
|
||||||
isAdmin: user.isAdmin,
|
role: user.role,
|
||||||
lastLogin: user.lastLogin,
|
lastLogin: user.lastLogin,
|
||||||
createdAt: user.createdAt
|
createdAt: user.createdAt
|
||||||
})),
|
})),
|
||||||
|
|||||||
@ -111,8 +111,9 @@ export const SYSTEM_CONFIG = {
|
|||||||
* Get configuration for frontend consumption
|
* Get configuration for frontend consumption
|
||||||
* Returns only non-sensitive configuration values
|
* Returns only non-sensitive configuration values
|
||||||
*/
|
*/
|
||||||
export function getPublicConfig() {
|
export async function getPublicConfig() {
|
||||||
return {
|
// Get base configuration
|
||||||
|
const baseConfig = {
|
||||||
appName: SYSTEM_CONFIG.APP_NAME,
|
appName: SYSTEM_CONFIG.APP_NAME,
|
||||||
appVersion: SYSTEM_CONFIG.APP_VERSION,
|
appVersion: SYSTEM_CONFIG.APP_VERSION,
|
||||||
workingHours: SYSTEM_CONFIG.WORKING_HOURS,
|
workingHours: SYSTEM_CONFIG.WORKING_HOURS,
|
||||||
@ -141,8 +142,46 @@ export function getPublicConfig() {
|
|||||||
enableMentions: SYSTEM_CONFIG.WORK_NOTES.ENABLE_MENTIONS,
|
enableMentions: SYSTEM_CONFIG.WORK_NOTES.ENABLE_MENTIONS,
|
||||||
},
|
},
|
||||||
features: SYSTEM_CONFIG.FEATURES,
|
features: SYSTEM_CONFIG.FEATURES,
|
||||||
ui: SYSTEM_CONFIG.UI,
|
ui: SYSTEM_CONFIG.UI
|
||||||
};
|
};
|
||||||
|
|
||||||
|
// Try to get AI service status and configuration (gracefully handle if not available)
|
||||||
|
try {
|
||||||
|
const { aiService } = require('../services/ai.service');
|
||||||
|
const { getConfigValue } = require('../services/configReader.service');
|
||||||
|
|
||||||
|
// Get AI configuration from admin settings
|
||||||
|
const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
const maxRemarkLength = parseInt(await getConfigValue('AI_MAX_REMARK_LENGTH', '2000') || '2000', 10);
|
||||||
|
|
||||||
|
return {
|
||||||
|
...baseConfig,
|
||||||
|
ai: {
|
||||||
|
enabled: aiEnabled && aiService.isAvailable(),
|
||||||
|
provider: aiService.getProviderName(),
|
||||||
|
remarkGenerationEnabled: remarkGenerationEnabled && aiEnabled && aiService.isAvailable(),
|
||||||
|
maxRemarkLength: maxRemarkLength,
|
||||||
|
features: {
|
||||||
|
conclusionGeneration: remarkGenerationEnabled && aiEnabled && aiService.isAvailable()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
} catch (error) {
|
||||||
|
// AI service not available - return config without AI info
|
||||||
|
return {
|
||||||
|
...baseConfig,
|
||||||
|
ai: {
|
||||||
|
enabled: false,
|
||||||
|
provider: 'None',
|
||||||
|
remarkGenerationEnabled: false,
|
||||||
|
maxRemarkLength: 2000,
|
||||||
|
features: {
|
||||||
|
conclusionGeneration: false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
|||||||
@ -2,10 +2,11 @@ import { Request, Response } from 'express';
|
|||||||
import { Holiday, HolidayType } from '@models/Holiday';
|
import { Holiday, HolidayType } from '@models/Holiday';
|
||||||
import { holidayService } from '@services/holiday.service';
|
import { holidayService } from '@services/holiday.service';
|
||||||
import { sequelize } from '@config/database';
|
import { sequelize } from '@config/database';
|
||||||
import { QueryTypes } from 'sequelize';
|
import { QueryTypes, Op } from 'sequelize';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
import { initializeHolidaysCache, clearWorkingHoursCache } from '@utils/tatTimeUtils';
|
import { initializeHolidaysCache, clearWorkingHoursCache } from '@utils/tatTimeUtils';
|
||||||
import { clearConfigCache } from '@services/configReader.service';
|
import { clearConfigCache } from '@services/configReader.service';
|
||||||
|
import { User, UserRole } from '@models/User';
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Get all holidays (with optional year filter)
|
* Get all holidays (with optional year filter)
|
||||||
@ -365,8 +366,20 @@ export const updateConfiguration = async (req: Request, res: Response): Promise<
|
|||||||
// If working hours config was updated, also clear working hours cache
|
// If working hours config was updated, also clear working hours cache
|
||||||
const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
|
const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
|
||||||
if (workingHoursKeys.includes(configKey)) {
|
if (workingHoursKeys.includes(configKey)) {
|
||||||
clearWorkingHoursCache();
|
await clearWorkingHoursCache();
|
||||||
logger.info(`[Admin] Working hours configuration '${configKey}' updated - cache cleared`);
|
logger.info(`[Admin] Working hours configuration '${configKey}' updated - cache cleared and reloaded`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// If AI config was updated, reinitialize AI service
|
||||||
|
const aiConfigKeys = ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED'];
|
||||||
|
if (aiConfigKeys.includes(configKey)) {
|
||||||
|
try {
|
||||||
|
const { aiService } = require('../services/ai.service');
|
||||||
|
await aiService.reinitialize();
|
||||||
|
logger.info(`[Admin] AI configuration '${configKey}' updated - AI service reinitialized with ${aiService.getProviderName()}`);
|
||||||
|
} catch (error) {
|
||||||
|
logger.error(`[Admin] Failed to reinitialize AI service:`, error);
|
||||||
|
}
|
||||||
} else {
|
} else {
|
||||||
logger.info(`[Admin] Configuration '${configKey}' updated and cache cleared`);
|
logger.info(`[Admin] Configuration '${configKey}' updated and cache cleared`);
|
||||||
}
|
}
|
||||||
@ -407,8 +420,8 @@ export const resetConfiguration = async (req: Request, res: Response): Promise<v
|
|||||||
// If working hours config was reset, also clear working hours cache
|
// If working hours config was reset, also clear working hours cache
|
||||||
const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
|
const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
|
||||||
if (workingHoursKeys.includes(configKey)) {
|
if (workingHoursKeys.includes(configKey)) {
|
||||||
clearWorkingHoursCache();
|
await clearWorkingHoursCache();
|
||||||
logger.info(`[Admin] Working hours configuration '${configKey}' reset to default - cache cleared`);
|
logger.info(`[Admin] Working hours configuration '${configKey}' reset to default - cache cleared and reloaded`);
|
||||||
} else {
|
} else {
|
||||||
logger.info(`[Admin] Configuration '${configKey}' reset to default and cache cleared`);
|
logger.info(`[Admin] Configuration '${configKey}' reset to default and cache cleared`);
|
||||||
}
|
}
|
||||||
@ -426,3 +439,364 @@ export const resetConfiguration = async (req: Request, res: Response): Promise<v
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
/**
|
||||||
|
* ============================================
|
||||||
|
* USER ROLE MANAGEMENT (RBAC)
|
||||||
|
* ============================================
|
||||||
|
*/
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Update User Role
|
||||||
|
*
|
||||||
|
* Purpose: Change user's role (USER, MANAGEMENT, ADMIN)
|
||||||
|
*
|
||||||
|
* Access: ADMIN only
|
||||||
|
*
|
||||||
|
* Body: { role: 'USER' | 'MANAGEMENT' | 'ADMIN' }
|
||||||
|
*/
|
||||||
|
export const updateUserRole = async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const { userId } = req.params;
|
||||||
|
const { role } = req.body;
|
||||||
|
|
||||||
|
// Validate role
|
||||||
|
const validRoles: UserRole[] = ['USER', 'MANAGEMENT', 'ADMIN'];
|
||||||
|
if (!role || !validRoles.includes(role)) {
|
||||||
|
res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Invalid role. Must be USER, MANAGEMENT, or ADMIN'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find user
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
if (!user) {
|
||||||
|
res.status(404).json({
|
||||||
|
success: false,
|
||||||
|
error: 'User not found'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Store old role for logging
|
||||||
|
const oldRole = user.role;
|
||||||
|
|
||||||
|
// Prevent self-demotion from ADMIN (safety check)
|
||||||
|
const adminUser = req.user;
|
||||||
|
if (adminUser?.userId === userId && role !== 'ADMIN') {
|
||||||
|
res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Cannot remove your own admin privileges. Ask another admin to change your role.'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update role
|
||||||
|
user.role = role;
|
||||||
|
await user.save();
|
||||||
|
|
||||||
|
logger.info(`✅ User role updated by ${adminUser?.email}: ${user.email} - ${oldRole} → ${role}`);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: `User role updated from ${oldRole} to ${role}`,
|
||||||
|
data: {
|
||||||
|
userId: user.userId,
|
||||||
|
email: user.email,
|
||||||
|
displayName: user.displayName,
|
||||||
|
role: user.role,
|
||||||
|
previousRole: oldRole,
|
||||||
|
updatedAt: user.updatedAt
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Admin] Error updating user role:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to update user role'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get All Users by Role (with pagination and filtering)
|
||||||
|
*
|
||||||
|
* Purpose: List all users with optional role filtering and pagination
|
||||||
|
*
|
||||||
|
* Access: ADMIN only
|
||||||
|
*
|
||||||
|
* Query:
|
||||||
|
* - ?role=ADMIN | MANAGEMENT | USER | ALL | ELEVATED (default: ELEVATED for ADMIN+MANAGEMENT only)
|
||||||
|
* - ?page=1 (default)
|
||||||
|
* - ?limit=10 (default)
|
||||||
|
*/
|
||||||
|
export const getUsersByRole = async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const { role, page = '1', limit = '10' } = req.query;
|
||||||
|
|
||||||
|
const pageNum = parseInt(page as string) || 1;
|
||||||
|
const limitNum = Math.min(parseInt(limit as string) || 10, 100); // Max 100 per page
|
||||||
|
const offset = (pageNum - 1) * limitNum;
|
||||||
|
|
||||||
|
const whereClause: any = { isActive: true };
|
||||||
|
|
||||||
|
// Handle role filtering
|
||||||
|
if (role && role !== 'ALL' && role !== 'ELEVATED') {
|
||||||
|
const validRoles: UserRole[] = ['USER', 'MANAGEMENT', 'ADMIN'];
|
||||||
|
if (!validRoles.includes(role as UserRole)) {
|
||||||
|
res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Invalid role. Must be USER, MANAGEMENT, ADMIN, ALL, or ELEVATED'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
whereClause.role = role;
|
||||||
|
} else if (role === 'ELEVATED' || !role) {
|
||||||
|
// Default: Show only ADMIN and MANAGEMENT (elevated users)
|
||||||
|
whereClause.role = { [Op.in]: ['ADMIN', 'MANAGEMENT'] };
|
||||||
|
}
|
||||||
|
// If role === 'ALL', don't filter by role (show all users)
|
||||||
|
|
||||||
|
// Get total count for pagination
|
||||||
|
const totalUsers = await User.count({ where: whereClause });
|
||||||
|
const totalPages = Math.ceil(totalUsers / limitNum);
|
||||||
|
|
||||||
|
// Get paginated users
|
||||||
|
const users = await User.findAll({
|
||||||
|
where: whereClause,
|
||||||
|
attributes: [
|
||||||
|
'userId',
|
||||||
|
'email',
|
||||||
|
'displayName',
|
||||||
|
'firstName',
|
||||||
|
'lastName',
|
||||||
|
'department',
|
||||||
|
'designation',
|
||||||
|
'role',
|
||||||
|
'manager',
|
||||||
|
'postalAddress',
|
||||||
|
'lastLogin',
|
||||||
|
'createdAt'
|
||||||
|
],
|
||||||
|
order: [
|
||||||
|
['role', 'ASC'], // ADMIN first, then MANAGEMENT, then USER
|
||||||
|
['displayName', 'ASC']
|
||||||
|
],
|
||||||
|
limit: limitNum,
|
||||||
|
offset: offset
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get role summary (across all users, not just current page)
|
||||||
|
const roleStats = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as count
|
||||||
|
FROM users
|
||||||
|
WHERE is_active = true
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END
|
||||||
|
`, {
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const summary = {
|
||||||
|
ADMIN: parseInt((roleStats.find((s: any) => s.role === 'ADMIN') as any)?.count || '0'),
|
||||||
|
MANAGEMENT: parseInt((roleStats.find((s: any) => s.role === 'MANAGEMENT') as any)?.count || '0'),
|
||||||
|
USER: parseInt((roleStats.find((s: any) => s.role === 'USER') as any)?.count || '0')
|
||||||
|
};
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
users: users,
|
||||||
|
pagination: {
|
||||||
|
currentPage: pageNum,
|
||||||
|
totalPages: totalPages,
|
||||||
|
totalUsers: totalUsers,
|
||||||
|
limit: limitNum,
|
||||||
|
hasNextPage: pageNum < totalPages,
|
||||||
|
hasPrevPage: pageNum > 1
|
||||||
|
},
|
||||||
|
summary,
|
||||||
|
filter: role || 'ELEVATED'
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Admin] Error fetching users by role:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch users'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get Role Statistics
|
||||||
|
*
|
||||||
|
* Purpose: Get count of users in each role
|
||||||
|
*
|
||||||
|
* Access: ADMIN only
|
||||||
|
*/
|
||||||
|
export const getRoleStatistics = async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const stats = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
role,
|
||||||
|
COUNT(*) as count,
|
||||||
|
COUNT(CASE WHEN is_active = true THEN 1 END) as active_count,
|
||||||
|
COUNT(CASE WHEN is_active = false THEN 1 END) as inactive_count
|
||||||
|
FROM users
|
||||||
|
GROUP BY role
|
||||||
|
ORDER BY
|
||||||
|
CASE role
|
||||||
|
WHEN 'ADMIN' THEN 1
|
||||||
|
WHEN 'MANAGEMENT' THEN 2
|
||||||
|
WHEN 'USER' THEN 3
|
||||||
|
END
|
||||||
|
`, {
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
statistics: stats,
|
||||||
|
total: stats.reduce((sum: number, stat: any) => sum + parseInt(stat.count), 0)
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Admin] Error fetching role statistics:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch role statistics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Assign role to user by email
|
||||||
|
*
|
||||||
|
* Purpose: Search user in Okta, create if doesn't exist, then assign role
|
||||||
|
*
|
||||||
|
* Access: ADMIN only
|
||||||
|
*
|
||||||
|
* Body: { email: string, role: 'USER' | 'MANAGEMENT' | 'ADMIN' }
|
||||||
|
*/
|
||||||
|
export const assignRoleByEmail = async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const { email, role } = req.body;
|
||||||
|
const currentUserId = req.user?.userId;
|
||||||
|
|
||||||
|
// Validate inputs
|
||||||
|
if (!email || !role) {
|
||||||
|
res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Email and role are required'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Validate role
|
||||||
|
if (!['USER', 'MANAGEMENT', 'ADMIN'].includes(role)) {
|
||||||
|
res.status(400).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Invalid role. Must be USER, MANAGEMENT, or ADMIN'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.info(`[Admin] Assigning role ${role} to ${email} by user ${currentUserId}`);
|
||||||
|
|
||||||
|
// First, check if user already exists in our database
|
||||||
|
let user = await User.findOne({ where: { email } });
|
||||||
|
|
||||||
|
if (!user) {
|
||||||
|
// User doesn't exist, need to fetch from Okta and create
|
||||||
|
logger.info(`[Admin] User ${email} not found in database, fetching from Okta...`);
|
||||||
|
|
||||||
|
// Import UserService to search Okta
|
||||||
|
const { UserService } = await import('@services/user.service');
|
||||||
|
const userService = new UserService();
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Search Okta for this user
|
||||||
|
const oktaUsers = await userService.searchUsers(email, 1);
|
||||||
|
|
||||||
|
if (!oktaUsers || oktaUsers.length === 0) {
|
||||||
|
res.status(404).json({
|
||||||
|
success: false,
|
||||||
|
error: 'User not found in Okta. Please ensure the email is correct.'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const oktaUser = oktaUsers[0];
|
||||||
|
|
||||||
|
// Create user in our database
|
||||||
|
user = await User.create({
|
||||||
|
email: oktaUser.email,
|
||||||
|
oktaSub: (oktaUser as any).userId || (oktaUser as any).oktaSub, // Okta user ID as oktaSub
|
||||||
|
employeeId: (oktaUser as any).employeeNumber || (oktaUser as any).employeeId || null,
|
||||||
|
firstName: oktaUser.firstName || null,
|
||||||
|
lastName: oktaUser.lastName || null,
|
||||||
|
displayName: oktaUser.displayName || `${oktaUser.firstName || ''} ${oktaUser.lastName || ''}`.trim() || oktaUser.email,
|
||||||
|
department: oktaUser.department || null,
|
||||||
|
designation: (oktaUser as any).designation || (oktaUser as any).title || null,
|
||||||
|
phone: (oktaUser as any).phone || (oktaUser as any).mobilePhone || null,
|
||||||
|
isActive: true,
|
||||||
|
role: role, // Assign the requested role
|
||||||
|
lastLogin: undefined // Not logged in yet
|
||||||
|
});
|
||||||
|
|
||||||
|
logger.info(`[Admin] Created new user ${email} with role ${role}`);
|
||||||
|
} catch (oktaError: any) {
|
||||||
|
logger.error('[Admin] Error fetching from Okta:', oktaError);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch user from Okta: ' + (oktaError.message || 'Unknown error')
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// User exists, update their role
|
||||||
|
const previousRole = user.role;
|
||||||
|
|
||||||
|
// Prevent self-demotion
|
||||||
|
if (user.userId === currentUserId && role !== 'ADMIN') {
|
||||||
|
res.status(403).json({
|
||||||
|
success: false,
|
||||||
|
error: 'You cannot demote yourself from ADMIN role'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
await user.update({ role });
|
||||||
|
|
||||||
|
logger.info(`[Admin] Updated user ${email} role from ${previousRole} to ${role}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: `Successfully assigned ${role} role to ${user.displayName || email}`,
|
||||||
|
data: {
|
||||||
|
userId: user.userId,
|
||||||
|
email: user.email,
|
||||||
|
displayName: user.displayName,
|
||||||
|
role: user.role
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Admin] Error assigning role by email:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to assign role'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
|||||||
@ -3,6 +3,7 @@ import { ApprovalService } from '@services/approval.service';
|
|||||||
import { validateApprovalAction } from '@validators/approval.validator';
|
import { validateApprovalAction } from '@validators/approval.validator';
|
||||||
import { ResponseHandler } from '@utils/responseHandler';
|
import { ResponseHandler } from '@utils/responseHandler';
|
||||||
import type { AuthenticatedRequest } from '../types/express';
|
import type { AuthenticatedRequest } from '../types/express';
|
||||||
|
import { getRequestMetadata } from '@utils/requestUtils';
|
||||||
|
|
||||||
const approvalService = new ApprovalService();
|
const approvalService = new ApprovalService();
|
||||||
|
|
||||||
@ -12,7 +13,11 @@ export class ApprovalController {
|
|||||||
const { levelId } = req.params;
|
const { levelId } = req.params;
|
||||||
const validatedData = validateApprovalAction(req.body);
|
const validatedData = validateApprovalAction(req.body);
|
||||||
|
|
||||||
const level = await approvalService.approveLevel(levelId, validatedData, req.user.userId);
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
const level = await approvalService.approveLevel(levelId, validatedData, req.user.userId, {
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
|
|
||||||
if (!level) {
|
if (!level) {
|
||||||
ResponseHandler.notFound(res, 'Approval level not found');
|
ResponseHandler.notFound(res, 'Approval level not found');
|
||||||
|
|||||||
@ -4,6 +4,8 @@ import { validateSSOCallback, validateRefreshToken, validateTokenExchange } from
|
|||||||
import { ResponseHandler } from '../utils/responseHandler';
|
import { ResponseHandler } from '../utils/responseHandler';
|
||||||
import type { AuthenticatedRequest } from '../types/express';
|
import type { AuthenticatedRequest } from '../types/express';
|
||||||
import logger from '../utils/logger';
|
import logger from '../utils/logger';
|
||||||
|
import { activityService, SYSTEM_EVENT_REQUEST_ID } from '../services/activity.service';
|
||||||
|
import { getRequestMetadata } from '../utils/requestUtils';
|
||||||
|
|
||||||
export class AuthController {
|
export class AuthController {
|
||||||
private authService: AuthService;
|
private authService: AuthService;
|
||||||
@ -23,6 +25,31 @@ export class AuthController {
|
|||||||
|
|
||||||
const result = await this.authService.handleSSOCallback(validatedData as any);
|
const result = await this.authService.handleSSOCallback(validatedData as any);
|
||||||
|
|
||||||
|
// Log login activity
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
await activityService.log({
|
||||||
|
requestId: SYSTEM_EVENT_REQUEST_ID, // Special UUID for system events
|
||||||
|
type: 'login',
|
||||||
|
user: {
|
||||||
|
userId: result.user.userId,
|
||||||
|
name: result.user.displayName || result.user.email,
|
||||||
|
email: result.user.email
|
||||||
|
},
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'User Login',
|
||||||
|
details: `User logged in via SSO from ${requestMeta.ipAddress || 'unknown IP'}`,
|
||||||
|
metadata: {
|
||||||
|
loginMethod: 'SSO',
|
||||||
|
employeeId: result.user.employeeId,
|
||||||
|
department: result.user.department,
|
||||||
|
role: result.user.role
|
||||||
|
},
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent,
|
||||||
|
category: 'AUTHENTICATION',
|
||||||
|
severity: 'INFO'
|
||||||
|
});
|
||||||
|
|
||||||
ResponseHandler.success(res, {
|
ResponseHandler.success(res, {
|
||||||
user: result.user,
|
user: result.user,
|
||||||
accessToken: result.accessToken,
|
accessToken: result.accessToken,
|
||||||
@ -59,7 +86,7 @@ export class AuthController {
|
|||||||
designation: user.designation,
|
designation: user.designation,
|
||||||
phone: user.phone,
|
phone: user.phone,
|
||||||
location: user.location,
|
location: user.location,
|
||||||
isAdmin: user.isAdmin,
|
role: user.role,
|
||||||
isActive: user.isActive,
|
isActive: user.isActive,
|
||||||
lastLogin: user.lastLogin,
|
lastLogin: user.lastLogin,
|
||||||
createdAt: user.createdAt,
|
createdAt: user.createdAt,
|
||||||
@ -274,6 +301,31 @@ export class AuthController {
|
|||||||
|
|
||||||
const result = await this.authService.exchangeCodeForTokens(code, redirectUri);
|
const result = await this.authService.exchangeCodeForTokens(code, redirectUri);
|
||||||
|
|
||||||
|
// Log login activity
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
await activityService.log({
|
||||||
|
requestId: SYSTEM_EVENT_REQUEST_ID, // Special UUID for system events
|
||||||
|
type: 'login',
|
||||||
|
user: {
|
||||||
|
userId: result.user.userId,
|
||||||
|
name: result.user.displayName || result.user.email,
|
||||||
|
email: result.user.email
|
||||||
|
},
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'User Login',
|
||||||
|
details: `User logged in via token exchange from ${requestMeta.ipAddress || 'unknown IP'}`,
|
||||||
|
metadata: {
|
||||||
|
loginMethod: 'TOKEN_EXCHANGE',
|
||||||
|
employeeId: result.user.employeeId,
|
||||||
|
department: result.user.department,
|
||||||
|
role: result.user.role
|
||||||
|
},
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent,
|
||||||
|
category: 'AUTHENTICATION',
|
||||||
|
severity: 'INFO'
|
||||||
|
});
|
||||||
|
|
||||||
// Set cookies with httpOnly flag for security
|
// Set cookies with httpOnly flag for security
|
||||||
const isProduction = process.env.NODE_ENV === 'production';
|
const isProduction = process.env.NODE_ENV === 'production';
|
||||||
const cookieOptions = {
|
const cookieOptions = {
|
||||||
|
|||||||
404
src/controllers/conclusion.controller.ts
Normal file
404
src/controllers/conclusion.controller.ts
Normal file
@ -0,0 +1,404 @@
|
|||||||
|
import { Request, Response } from 'express';
|
||||||
|
import { WorkflowRequest, ApprovalLevel, WorkNote, Document, Activity, ConclusionRemark } from '@models/index';
|
||||||
|
import { aiService } from '@services/ai.service';
|
||||||
|
import { activityService } from '@services/activity.service';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
import { getRequestMetadata } from '@utils/requestUtils';
|
||||||
|
|
||||||
|
export class ConclusionController {
|
||||||
|
/**
|
||||||
|
* Generate AI conclusion remark for a request
|
||||||
|
* POST /api/v1/conclusions/:requestId/generate
|
||||||
|
*/
|
||||||
|
async generateConclusion(req: Request, res: Response) {
|
||||||
|
try {
|
||||||
|
const { requestId } = req.params;
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
|
||||||
|
// Fetch request with all related data
|
||||||
|
const request = await WorkflowRequest.findOne({
|
||||||
|
where: { requestId },
|
||||||
|
include: [
|
||||||
|
{ association: 'initiator', attributes: ['userId', 'displayName', 'email'] }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!request) {
|
||||||
|
return res.status(404).json({ error: 'Request not found' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if user is the initiator
|
||||||
|
if ((request as any).initiatorId !== userId) {
|
||||||
|
return res.status(403).json({ error: 'Only the initiator can generate conclusion remarks' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if request is approved
|
||||||
|
if ((request as any).status !== 'APPROVED') {
|
||||||
|
return res.status(400).json({ error: 'Conclusion can only be generated for approved requests' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if AI features are enabled in admin config
|
||||||
|
const { getConfigValue } = await import('../services/configReader.service');
|
||||||
|
const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
|
||||||
|
if (!aiEnabled) {
|
||||||
|
logger.warn(`[Conclusion] AI features disabled in admin config for request ${requestId}`);
|
||||||
|
return res.status(400).json({
|
||||||
|
error: 'AI features disabled',
|
||||||
|
message: 'AI features are currently disabled by administrator. Please write the conclusion manually.',
|
||||||
|
canContinueManually: true
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!remarkGenerationEnabled) {
|
||||||
|
logger.warn(`[Conclusion] AI remark generation disabled in admin config for request ${requestId}`);
|
||||||
|
return res.status(400).json({
|
||||||
|
error: 'AI remark generation disabled',
|
||||||
|
message: 'AI-powered conclusion generation is currently disabled by administrator. Please write the conclusion manually.',
|
||||||
|
canContinueManually: true
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if AI service is available
|
||||||
|
if (!aiService.isAvailable()) {
|
||||||
|
logger.warn(`[Conclusion] AI service unavailable for request ${requestId}`);
|
||||||
|
return res.status(503).json({
|
||||||
|
error: 'AI service not available',
|
||||||
|
message: 'AI features are currently unavailable. Please configure an AI provider (Claude, OpenAI, or Gemini) in the admin panel, or write the conclusion manually.',
|
||||||
|
canContinueManually: true
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Gather context for AI generation
|
||||||
|
const approvalLevels = await ApprovalLevel.findAll({
|
||||||
|
where: { requestId },
|
||||||
|
order: [['levelNumber', 'ASC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
const workNotes = await WorkNote.findAll({
|
||||||
|
where: { requestId },
|
||||||
|
order: [['createdAt', 'ASC']],
|
||||||
|
limit: 20 // Last 20 work notes
|
||||||
|
});
|
||||||
|
|
||||||
|
const documents = await Document.findAll({
|
||||||
|
where: { requestId },
|
||||||
|
order: [['uploadedAt', 'DESC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
const activities = await Activity.findAll({
|
||||||
|
where: { requestId },
|
||||||
|
order: [['createdAt', 'ASC']],
|
||||||
|
limit: 50 // Last 50 activities
|
||||||
|
});
|
||||||
|
|
||||||
|
// Build context object
|
||||||
|
const context = {
|
||||||
|
requestTitle: (request as any).title,
|
||||||
|
requestDescription: (request as any).description,
|
||||||
|
requestNumber: (request as any).requestNumber,
|
||||||
|
priority: (request as any).priority,
|
||||||
|
approvalFlow: approvalLevels.map((level: any) => ({
|
||||||
|
levelNumber: level.levelNumber,
|
||||||
|
approverName: level.approverName,
|
||||||
|
status: level.status,
|
||||||
|
comments: level.comments,
|
||||||
|
actionDate: level.actionDate,
|
||||||
|
tatHours: Number(level.tatHours || 0),
|
||||||
|
elapsedHours: Number(level.elapsedHours || 0)
|
||||||
|
})),
|
||||||
|
workNotes: workNotes.map((note: any) => ({
|
||||||
|
userName: note.userName,
|
||||||
|
message: note.message,
|
||||||
|
createdAt: note.createdAt
|
||||||
|
})),
|
||||||
|
documents: documents.map((doc: any) => ({
|
||||||
|
fileName: doc.originalFileName || doc.fileName,
|
||||||
|
uploadedBy: doc.uploadedBy,
|
||||||
|
uploadedAt: doc.uploadedAt
|
||||||
|
})),
|
||||||
|
activities: activities.map((activity: any) => ({
|
||||||
|
type: activity.activityType,
|
||||||
|
action: activity.activityDescription,
|
||||||
|
details: activity.activityDescription,
|
||||||
|
timestamp: activity.createdAt
|
||||||
|
}))
|
||||||
|
};
|
||||||
|
|
||||||
|
logger.info(`[Conclusion] Generating AI remark for request ${requestId}...`);
|
||||||
|
|
||||||
|
// Generate AI conclusion
|
||||||
|
const aiResult = await aiService.generateConclusionRemark(context);
|
||||||
|
|
||||||
|
// Check if conclusion already exists
|
||||||
|
let conclusionInstance = await ConclusionRemark.findOne({ where: { requestId } });
|
||||||
|
|
||||||
|
const conclusionData = {
|
||||||
|
aiGeneratedRemark: aiResult.remark,
|
||||||
|
aiModelUsed: aiResult.provider,
|
||||||
|
aiConfidenceScore: aiResult.confidence,
|
||||||
|
approvalSummary: {
|
||||||
|
totalLevels: approvalLevels.length,
|
||||||
|
approvedLevels: approvalLevels.filter((l: any) => l.status === 'APPROVED').length,
|
||||||
|
averageTatUsage: approvalLevels.reduce((sum: number, l: any) =>
|
||||||
|
sum + Number(l.tatPercentageUsed || 0), 0) / (approvalLevels.length || 1)
|
||||||
|
},
|
||||||
|
documentSummary: {
|
||||||
|
totalDocuments: documents.length,
|
||||||
|
documentNames: documents.map((d: any) => d.originalFileName || d.fileName)
|
||||||
|
},
|
||||||
|
keyDiscussionPoints: aiResult.keyPoints,
|
||||||
|
generatedAt: new Date()
|
||||||
|
};
|
||||||
|
|
||||||
|
if (conclusionInstance) {
|
||||||
|
// Update existing conclusion (allow regeneration)
|
||||||
|
await conclusionInstance.update(conclusionData as any);
|
||||||
|
logger.info(`[Conclusion] ✅ AI conclusion regenerated for request ${requestId}`);
|
||||||
|
} else {
|
||||||
|
// Create new conclusion
|
||||||
|
conclusionInstance = await ConclusionRemark.create({
|
||||||
|
requestId,
|
||||||
|
...conclusionData,
|
||||||
|
finalRemark: null,
|
||||||
|
editedBy: null,
|
||||||
|
isEdited: false,
|
||||||
|
editCount: 0,
|
||||||
|
finalizedAt: null
|
||||||
|
} as any);
|
||||||
|
logger.info(`[Conclusion] ✅ AI conclusion generated for request ${requestId}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Log activity
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
await activityService.log({
|
||||||
|
requestId,
|
||||||
|
type: 'ai_conclusion_generated',
|
||||||
|
user: { userId, name: (request as any).initiator?.displayName || 'Initiator' },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'AI Conclusion Generated',
|
||||||
|
details: 'AI-powered conclusion remark generated for review',
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
|
|
||||||
|
return res.status(200).json({
|
||||||
|
message: 'Conclusion generated successfully',
|
||||||
|
data: {
|
||||||
|
conclusionId: (conclusionInstance as any).conclusionId,
|
||||||
|
aiGeneratedRemark: aiResult.remark,
|
||||||
|
keyDiscussionPoints: aiResult.keyPoints,
|
||||||
|
confidence: aiResult.confidence,
|
||||||
|
provider: aiResult.provider,
|
||||||
|
generatedAt: new Date()
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Conclusion] Error generating conclusion:', error);
|
||||||
|
|
||||||
|
// Provide helpful error messages
|
||||||
|
const isConfigError = error.message?.includes('not configured') ||
|
||||||
|
error.message?.includes('not available') ||
|
||||||
|
error.message?.includes('not initialized');
|
||||||
|
|
||||||
|
return res.status(isConfigError ? 503 : 500).json({
|
||||||
|
error: isConfigError ? 'AI service not configured' : 'Failed to generate conclusion',
|
||||||
|
message: error.message || 'An unexpected error occurred',
|
||||||
|
canContinueManually: true // User can still write manual conclusion
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Update conclusion remark (edit by initiator)
|
||||||
|
* PUT /api/v1/conclusions/:requestId
|
||||||
|
*/
|
||||||
|
async updateConclusion(req: Request, res: Response) {
|
||||||
|
try {
|
||||||
|
const { requestId } = req.params;
|
||||||
|
const { finalRemark } = req.body;
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
|
||||||
|
if (!finalRemark || typeof finalRemark !== 'string') {
|
||||||
|
return res.status(400).json({ error: 'Final remark is required' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch request
|
||||||
|
const request = await WorkflowRequest.findOne({ where: { requestId } });
|
||||||
|
if (!request) {
|
||||||
|
return res.status(404).json({ error: 'Request not found' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if user is the initiator
|
||||||
|
if ((request as any).initiatorId !== userId) {
|
||||||
|
return res.status(403).json({ error: 'Only the initiator can update conclusion remarks' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find conclusion
|
||||||
|
const conclusion = await ConclusionRemark.findOne({ where: { requestId } });
|
||||||
|
if (!conclusion) {
|
||||||
|
return res.status(404).json({ error: 'Conclusion not found. Generate it first.' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update conclusion
|
||||||
|
const wasEdited = (conclusion as any).aiGeneratedRemark !== finalRemark;
|
||||||
|
|
||||||
|
await conclusion.update({
|
||||||
|
finalRemark: finalRemark,
|
||||||
|
editedBy: userId,
|
||||||
|
isEdited: wasEdited,
|
||||||
|
editCount: wasEdited ? (conclusion as any).editCount + 1 : (conclusion as any).editCount
|
||||||
|
} as any);
|
||||||
|
|
||||||
|
logger.info(`[Conclusion] Updated conclusion for request ${requestId} (edited: ${wasEdited})`);
|
||||||
|
|
||||||
|
return res.status(200).json({
|
||||||
|
message: 'Conclusion updated successfully',
|
||||||
|
data: conclusion
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Conclusion] Error updating conclusion:', error);
|
||||||
|
return res.status(500).json({ error: 'Failed to update conclusion' });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Finalize conclusion and close request
|
||||||
|
* POST /api/v1/conclusions/:requestId/finalize
|
||||||
|
*/
|
||||||
|
async finalizeConclusion(req: Request, res: Response) {
|
||||||
|
try {
|
||||||
|
const { requestId } = req.params;
|
||||||
|
const { finalRemark } = req.body;
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
|
||||||
|
if (!finalRemark || typeof finalRemark !== 'string') {
|
||||||
|
return res.status(400).json({ error: 'Final remark is required' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch request
|
||||||
|
const request = await WorkflowRequest.findOne({
|
||||||
|
where: { requestId },
|
||||||
|
include: [
|
||||||
|
{ association: 'initiator', attributes: ['userId', 'displayName', 'email'] }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!request) {
|
||||||
|
return res.status(404).json({ error: 'Request not found' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if user is the initiator
|
||||||
|
if ((request as any).initiatorId !== userId) {
|
||||||
|
return res.status(403).json({ error: 'Only the initiator can finalize conclusion remarks' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if request is approved
|
||||||
|
if ((request as any).status !== 'APPROVED') {
|
||||||
|
return res.status(400).json({ error: 'Only approved requests can be closed' });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find or create conclusion
|
||||||
|
let conclusion = await ConclusionRemark.findOne({ where: { requestId } });
|
||||||
|
|
||||||
|
if (!conclusion) {
|
||||||
|
// Create if doesn't exist (manual conclusion without AI)
|
||||||
|
conclusion = await ConclusionRemark.create({
|
||||||
|
requestId,
|
||||||
|
aiGeneratedRemark: null,
|
||||||
|
aiModelUsed: null,
|
||||||
|
aiConfidenceScore: null,
|
||||||
|
finalRemark: finalRemark,
|
||||||
|
editedBy: userId,
|
||||||
|
isEdited: false,
|
||||||
|
editCount: 0,
|
||||||
|
approvalSummary: {},
|
||||||
|
documentSummary: {},
|
||||||
|
keyDiscussionPoints: [],
|
||||||
|
generatedAt: null,
|
||||||
|
finalizedAt: new Date()
|
||||||
|
} as any);
|
||||||
|
} else {
|
||||||
|
// Update existing conclusion
|
||||||
|
const wasEdited = (conclusion as any).aiGeneratedRemark !== finalRemark;
|
||||||
|
|
||||||
|
await conclusion.update({
|
||||||
|
finalRemark: finalRemark,
|
||||||
|
editedBy: userId,
|
||||||
|
isEdited: wasEdited,
|
||||||
|
editCount: wasEdited ? (conclusion as any).editCount + 1 : (conclusion as any).editCount,
|
||||||
|
finalizedAt: new Date()
|
||||||
|
} as any);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update request status to CLOSED
|
||||||
|
await request.update({
|
||||||
|
status: 'CLOSED',
|
||||||
|
conclusionRemark: finalRemark,
|
||||||
|
closureDate: new Date()
|
||||||
|
} as any);
|
||||||
|
|
||||||
|
logger.info(`[Conclusion] ✅ Request ${requestId} finalized and closed`);
|
||||||
|
|
||||||
|
// Log activity
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
await activityService.log({
|
||||||
|
requestId,
|
||||||
|
type: 'closed',
|
||||||
|
user: { userId, name: (request as any).initiator?.displayName || 'Initiator' },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'Request Closed',
|
||||||
|
details: `Request closed with conclusion remark by ${(request as any).initiator?.displayName}`,
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
|
|
||||||
|
return res.status(200).json({
|
||||||
|
message: 'Request finalized and closed successfully',
|
||||||
|
data: {
|
||||||
|
conclusionId: (conclusion as any).conclusionId,
|
||||||
|
requestNumber: (request as any).requestNumber,
|
||||||
|
status: 'CLOSED',
|
||||||
|
finalRemark: finalRemark,
|
||||||
|
finalizedAt: (conclusion as any).finalizedAt
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Conclusion] Error finalizing conclusion:', error);
|
||||||
|
return res.status(500).json({ error: 'Failed to finalize conclusion' });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get conclusion for a request
|
||||||
|
* GET /api/v1/conclusions/:requestId
|
||||||
|
*/
|
||||||
|
async getConclusion(req: Request, res: Response) {
|
||||||
|
try {
|
||||||
|
const { requestId } = req.params;
|
||||||
|
|
||||||
|
const conclusion = await ConclusionRemark.findOne({
|
||||||
|
where: { requestId },
|
||||||
|
include: [
|
||||||
|
{ association: 'editor', attributes: ['userId', 'displayName', 'email'] }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!conclusion) {
|
||||||
|
return res.status(404).json({ error: 'Conclusion not found' });
|
||||||
|
}
|
||||||
|
|
||||||
|
return res.status(200).json({
|
||||||
|
message: 'Conclusion retrieved successfully',
|
||||||
|
data: conclusion
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Conclusion] Error getting conclusion:', error);
|
||||||
|
return res.status(500).json({ error: 'Failed to get conclusion' });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export const conclusionController = new ConclusionController();
|
||||||
|
|
||||||
472
src/controllers/dashboard.controller.ts
Normal file
472
src/controllers/dashboard.controller.ts
Normal file
@ -0,0 +1,472 @@
|
|||||||
|
import { Request, Response } from 'express';
|
||||||
|
import { DashboardService } from '../services/dashboard.service';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
export class DashboardController {
|
||||||
|
private dashboardService: DashboardService;
|
||||||
|
|
||||||
|
constructor() {
|
||||||
|
this.dashboardService = new DashboardService();
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get all KPI metrics for dashboard
|
||||||
|
*/
|
||||||
|
async getKPIs(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const kpis = await this.dashboardService.getKPIs(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: kpis
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching KPIs:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch dashboard KPIs'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get request volume and status statistics
|
||||||
|
*/
|
||||||
|
async getRequestStats(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const stats = await this.dashboardService.getRequestStats(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: stats
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching request stats:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch request statistics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get TAT efficiency metrics
|
||||||
|
*/
|
||||||
|
async getTATEfficiency(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const efficiency = await this.dashboardService.getTATEfficiency(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: efficiency
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching TAT efficiency:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch TAT efficiency metrics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get approver load statistics
|
||||||
|
*/
|
||||||
|
async getApproverLoad(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const load = await this.dashboardService.getApproverLoad(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: load
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching approver load:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch approver load statistics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get engagement and quality metrics
|
||||||
|
*/
|
||||||
|
async getEngagementStats(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const engagement = await this.dashboardService.getEngagementStats(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: engagement
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching engagement stats:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch engagement statistics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get AI insights and closure metrics
|
||||||
|
*/
|
||||||
|
async getAIInsights(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const insights = await this.dashboardService.getAIInsights(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: insights
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching AI insights:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch AI insights'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get AI Remark Utilization metrics with monthly trends
|
||||||
|
*/
|
||||||
|
async getAIRemarkUtilization(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const utilization = await this.dashboardService.getAIRemarkUtilization(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: utilization
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching AI remark utilization:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch AI remark utilization'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get Approver Performance metrics with pagination
|
||||||
|
*/
|
||||||
|
async getApproverPerformance(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 10);
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getApproverPerformance(userId, dateRange, page, limit, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.performance,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching approver performance:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch approver performance metrics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get recent activity feed
|
||||||
|
*/
|
||||||
|
async getRecentActivity(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 10);
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getRecentActivity(userId, page, limit);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.activities,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching recent activity:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch recent activity'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get critical/high priority requests with pagination
|
||||||
|
*/
|
||||||
|
async getCriticalRequests(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 10);
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getCriticalRequests(userId, page, limit);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.criticalRequests,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching critical requests:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch critical requests'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get upcoming deadlines with pagination
|
||||||
|
*/
|
||||||
|
async getUpcomingDeadlines(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 10);
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getUpcomingDeadlines(userId, page, limit);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.deadlines,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching upcoming deadlines:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch upcoming deadlines'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get department-wise statistics
|
||||||
|
*/
|
||||||
|
async getDepartmentStats(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const stats = await this.dashboardService.getDepartmentStats(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: stats
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching department stats:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch department statistics'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get priority distribution statistics
|
||||||
|
*/
|
||||||
|
async getPriorityDistribution(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const startDate = req.query.startDate as string | undefined;
|
||||||
|
const endDate = req.query.endDate as string | undefined;
|
||||||
|
|
||||||
|
const distribution = await this.dashboardService.getPriorityDistribution(userId, dateRange, startDate, endDate);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: distribution
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching priority distribution:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch priority distribution'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get Request Lifecycle Report
|
||||||
|
*/
|
||||||
|
async getLifecycleReport(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 50);
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getLifecycleReport(userId, page, limit);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.lifecycleData,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching lifecycle report:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch lifecycle report'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get enhanced User Activity Log Report
|
||||||
|
*/
|
||||||
|
async getActivityLogReport(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 50);
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
const filterUserId = req.query.filterUserId as string | undefined;
|
||||||
|
const filterType = req.query.filterType as string | undefined;
|
||||||
|
const filterCategory = req.query.filterCategory as string | undefined;
|
||||||
|
const filterSeverity = req.query.filterSeverity as string | undefined;
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getActivityLogReport(
|
||||||
|
userId,
|
||||||
|
page,
|
||||||
|
limit,
|
||||||
|
dateRange,
|
||||||
|
filterUserId,
|
||||||
|
filterType,
|
||||||
|
filterCategory,
|
||||||
|
filterSeverity
|
||||||
|
);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.activities,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching activity log report:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch activity log report'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get Workflow Aging Report
|
||||||
|
*/
|
||||||
|
async getWorkflowAgingReport(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const threshold = Number(req.query.threshold || 7);
|
||||||
|
const page = Number(req.query.page || 1);
|
||||||
|
const limit = Number(req.query.limit || 50);
|
||||||
|
const dateRange = req.query.dateRange as string | undefined;
|
||||||
|
|
||||||
|
const result = await this.dashboardService.getWorkflowAgingReport(
|
||||||
|
userId,
|
||||||
|
threshold,
|
||||||
|
page,
|
||||||
|
limit,
|
||||||
|
dateRange
|
||||||
|
);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: result.agingData,
|
||||||
|
pagination: {
|
||||||
|
currentPage: result.currentPage,
|
||||||
|
totalPages: result.totalPages,
|
||||||
|
totalRecords: result.totalRecords,
|
||||||
|
limit: result.limit
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Dashboard] Error fetching workflow aging report:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to fetch workflow aging report'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
@ -6,6 +6,7 @@ import { User } from '@models/User';
|
|||||||
import { ResponseHandler } from '@utils/responseHandler';
|
import { ResponseHandler } from '@utils/responseHandler';
|
||||||
import { activityService } from '@services/activity.service';
|
import { activityService } from '@services/activity.service';
|
||||||
import type { AuthenticatedRequest } from '../types/express';
|
import type { AuthenticatedRequest } from '../types/express';
|
||||||
|
import { getRequestMetadata } from '@utils/requestUtils';
|
||||||
|
|
||||||
export class DocumentController {
|
export class DocumentController {
|
||||||
async upload(req: AuthenticatedRequest, res: Response): Promise<void> {
|
async upload(req: AuthenticatedRequest, res: Response): Promise<void> {
|
||||||
@ -58,6 +59,7 @@ export class DocumentController {
|
|||||||
const uploaderName = (user as any)?.displayName || (user as any)?.email || 'User';
|
const uploaderName = (user as any)?.displayName || (user as any)?.email || 'User';
|
||||||
|
|
||||||
// Log activity for document upload
|
// Log activity for document upload
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
await activityService.log({
|
await activityService.log({
|
||||||
requestId,
|
requestId,
|
||||||
type: 'document_added',
|
type: 'document_added',
|
||||||
@ -70,7 +72,9 @@ export class DocumentController {
|
|||||||
fileSize: file.size,
|
fileSize: file.size,
|
||||||
fileType: extension,
|
fileType: extension,
|
||||||
category
|
category
|
||||||
}
|
},
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
});
|
});
|
||||||
|
|
||||||
ResponseHandler.success(res, doc, 'File uploaded', 201);
|
ResponseHandler.success(res, doc, 'File uploaded', 201);
|
||||||
|
|||||||
176
src/controllers/notification.controller.ts
Normal file
176
src/controllers/notification.controller.ts
Normal file
@ -0,0 +1,176 @@
|
|||||||
|
import { Request, Response } from 'express';
|
||||||
|
import { Notification } from '@models/Notification';
|
||||||
|
import { Op } from 'sequelize';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
export class NotificationController {
|
||||||
|
/**
|
||||||
|
* Get user's notifications with pagination
|
||||||
|
*/
|
||||||
|
async getUserNotifications(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const { page = 1, limit = 20, unreadOnly = false } = req.query;
|
||||||
|
|
||||||
|
if (!userId) {
|
||||||
|
res.status(401).json({ success: false, message: 'Unauthorized' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const where: any = { userId };
|
||||||
|
if (unreadOnly === 'true') {
|
||||||
|
where.isRead = false;
|
||||||
|
}
|
||||||
|
|
||||||
|
const offset = (Number(page) - 1) * Number(limit);
|
||||||
|
|
||||||
|
const { rows, count } = await Notification.findAndCountAll({
|
||||||
|
where,
|
||||||
|
order: [['createdAt', 'DESC']],
|
||||||
|
limit: Number(limit),
|
||||||
|
offset
|
||||||
|
});
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
notifications: rows,
|
||||||
|
pagination: {
|
||||||
|
page: Number(page),
|
||||||
|
limit: Number(limit),
|
||||||
|
total: count,
|
||||||
|
totalPages: Math.ceil(count / Number(limit))
|
||||||
|
},
|
||||||
|
unreadCount: unreadOnly === 'true' ? count : await Notification.count({ where: { userId, isRead: false } })
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Notification Controller] Error fetching notifications:', error);
|
||||||
|
res.status(500).json({ success: false, message: error.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get unread notification count
|
||||||
|
*/
|
||||||
|
async getUnreadCount(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
|
||||||
|
if (!userId) {
|
||||||
|
res.status(401).json({ success: false, message: 'Unauthorized' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const count = await Notification.count({
|
||||||
|
where: { userId, isRead: false }
|
||||||
|
});
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: { unreadCount: count }
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Notification Controller] Error fetching unread count:', error);
|
||||||
|
res.status(500).json({ success: false, message: error.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Mark notification as read
|
||||||
|
*/
|
||||||
|
async markAsRead(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const { notificationId } = req.params;
|
||||||
|
|
||||||
|
if (!userId) {
|
||||||
|
res.status(401).json({ success: false, message: 'Unauthorized' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const notification = await Notification.findOne({
|
||||||
|
where: { notificationId, userId }
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!notification) {
|
||||||
|
res.status(404).json({ success: false, message: 'Notification not found' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
await notification.update({
|
||||||
|
isRead: true,
|
||||||
|
readAt: new Date()
|
||||||
|
});
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: 'Notification marked as read',
|
||||||
|
data: { notification }
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Notification Controller] Error marking notification as read:', error);
|
||||||
|
res.status(500).json({ success: false, message: error.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Mark all notifications as read
|
||||||
|
*/
|
||||||
|
async markAllAsRead(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
|
||||||
|
if (!userId) {
|
||||||
|
res.status(401).json({ success: false, message: 'Unauthorized' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
await Notification.update(
|
||||||
|
{ isRead: true, readAt: new Date() },
|
||||||
|
{ where: { userId, isRead: false } }
|
||||||
|
);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: 'All notifications marked as read'
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Notification Controller] Error marking all as read:', error);
|
||||||
|
res.status(500).json({ success: false, message: error.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Delete notification
|
||||||
|
*/
|
||||||
|
async deleteNotification(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const userId = (req as any).user?.userId;
|
||||||
|
const { notificationId } = req.params;
|
||||||
|
|
||||||
|
if (!userId) {
|
||||||
|
res.status(401).json({ success: false, message: 'Unauthorized' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const deleted = await Notification.destroy({
|
||||||
|
where: { notificationId, userId }
|
||||||
|
});
|
||||||
|
|
||||||
|
if (deleted === 0) {
|
||||||
|
res.status(404).json({ success: false, message: 'Notification not found' });
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: 'Notification deleted'
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Notification Controller] Error deleting notification:', error);
|
||||||
|
res.status(500).json({ success: false, message: error.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
@ -16,8 +16,6 @@ export class UserController {
|
|||||||
const limit = Number(req.query.limit || 10);
|
const limit = Number(req.query.limit || 10);
|
||||||
const currentUserId = (req as any).user?.userId || (req as any).user?.id;
|
const currentUserId = (req as any).user?.userId || (req as any).user?.id;
|
||||||
|
|
||||||
logger.info('User search requested', { q, limit });
|
|
||||||
|
|
||||||
const users = await this.userService.searchUsers(q, limit, currentUserId);
|
const users = await this.userService.searchUsers(q, limit, currentUserId);
|
||||||
|
|
||||||
const result = users.map(u => ({
|
const result = users.map(u => ({
|
||||||
@ -37,6 +35,44 @@ export class UserController {
|
|||||||
ResponseHandler.error(res, 'User search failed', 500);
|
ResponseHandler.error(res, 'User search failed', 500);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Ensure user exists in database (create if not exists)
|
||||||
|
* Called when user is selected/tagged in the frontend
|
||||||
|
*/
|
||||||
|
async ensureUserExists(req: Request, res: Response): Promise<void> {
|
||||||
|
try {
|
||||||
|
const { userId, email, displayName, firstName, lastName, department, phone } = req.body;
|
||||||
|
|
||||||
|
if (!userId || !email) {
|
||||||
|
ResponseHandler.error(res, 'userId and email are required', 400);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const user = await this.userService.ensureUserExists({
|
||||||
|
userId,
|
||||||
|
email,
|
||||||
|
displayName,
|
||||||
|
firstName,
|
||||||
|
lastName,
|
||||||
|
department,
|
||||||
|
phone
|
||||||
|
});
|
||||||
|
|
||||||
|
ResponseHandler.success(res, {
|
||||||
|
userId: user.userId,
|
||||||
|
email: user.email,
|
||||||
|
displayName: user.displayName,
|
||||||
|
firstName: user.firstName,
|
||||||
|
lastName: user.lastName,
|
||||||
|
department: user.department,
|
||||||
|
isActive: user.isActive
|
||||||
|
}, 'User ensured in database');
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('Ensure user failed', { error });
|
||||||
|
ResponseHandler.error(res, error.message || 'Failed to ensure user', 500);
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@ -10,6 +10,7 @@ import { User } from '@models/User';
|
|||||||
import fs from 'fs';
|
import fs from 'fs';
|
||||||
import path from 'path';
|
import path from 'path';
|
||||||
import crypto from 'crypto';
|
import crypto from 'crypto';
|
||||||
|
import { getRequestMetadata } from '@utils/requestUtils';
|
||||||
|
|
||||||
const workflowService = new WorkflowService();
|
const workflowService = new WorkflowService();
|
||||||
|
|
||||||
@ -22,7 +23,11 @@ export class WorkflowController {
|
|||||||
...validatedData,
|
...validatedData,
|
||||||
priority: validatedData.priority as Priority
|
priority: validatedData.priority as Priority
|
||||||
};
|
};
|
||||||
const workflow = await workflowService.createWorkflow(req.user.userId, workflowData);
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
const workflow = await workflowService.createWorkflow(req.user.userId, workflowData, {
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
|
|
||||||
ResponseHandler.success(res, workflow, 'Workflow created successfully', 201);
|
ResponseHandler.success(res, workflow, 'Workflow created successfully', 201);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@ -49,7 +54,11 @@ export class WorkflowController {
|
|||||||
const validated = validateCreateWorkflow(parsed);
|
const validated = validateCreateWorkflow(parsed);
|
||||||
const workflowData = { ...validated, priority: validated.priority as Priority } as any;
|
const workflowData = { ...validated, priority: validated.priority as Priority } as any;
|
||||||
|
|
||||||
const workflow = await workflowService.createWorkflow(userId, workflowData);
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
const workflow = await workflowService.createWorkflow(userId, workflowData, {
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
|
|
||||||
// Attach files as documents (category defaults to SUPPORTING)
|
// Attach files as documents (category defaults to SUPPORTING)
|
||||||
const files = (req as any).files as Express.Multer.File[] | undefined;
|
const files = (req as any).files as Express.Multer.File[] | undefined;
|
||||||
@ -87,6 +96,7 @@ export class WorkflowController {
|
|||||||
docs.push(doc);
|
docs.push(doc);
|
||||||
|
|
||||||
// Log document upload activity
|
// Log document upload activity
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
activityService.log({
|
activityService.log({
|
||||||
requestId: workflow.requestId,
|
requestId: workflow.requestId,
|
||||||
type: 'document_added',
|
type: 'document_added',
|
||||||
@ -94,7 +104,9 @@ export class WorkflowController {
|
|||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Document Added',
|
action: 'Document Added',
|
||||||
details: `Added ${file.originalname} as supporting document by ${uploaderName}`,
|
details: `Added ${file.originalname} as supporting document by ${uploaderName}`,
|
||||||
metadata: { fileName: file.originalname, fileSize: file.size, fileType: extension }
|
metadata: { fileName: file.originalname, fileSize: file.size, fileType: extension },
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -155,7 +167,15 @@ export class WorkflowController {
|
|||||||
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
||||||
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
||||||
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
||||||
const result = await workflowService.listMyRequests(userId, page, limit);
|
|
||||||
|
// Extract filter parameters
|
||||||
|
const filters = {
|
||||||
|
search: req.query.search as string | undefined,
|
||||||
|
status: req.query.status as string | undefined,
|
||||||
|
priority: req.query.priority as string | undefined
|
||||||
|
};
|
||||||
|
|
||||||
|
const result = await workflowService.listMyRequests(userId, page, limit, filters);
|
||||||
ResponseHandler.success(res, result, 'My requests fetched');
|
ResponseHandler.success(res, result, 'My requests fetched');
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
||||||
@ -168,7 +188,19 @@ export class WorkflowController {
|
|||||||
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
||||||
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
||||||
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
||||||
const result = await workflowService.listOpenForMe(userId, page, limit);
|
|
||||||
|
// Extract filter parameters
|
||||||
|
const filters = {
|
||||||
|
search: req.query.search as string | undefined,
|
||||||
|
status: req.query.status as string | undefined,
|
||||||
|
priority: req.query.priority as string | undefined
|
||||||
|
};
|
||||||
|
|
||||||
|
// Extract sorting parameters
|
||||||
|
const sortBy = req.query.sortBy as string | undefined;
|
||||||
|
const sortOrder = (req.query.sortOrder as string | undefined) || 'desc';
|
||||||
|
|
||||||
|
const result = await workflowService.listOpenForMe(userId, page, limit, filters, sortBy, sortOrder);
|
||||||
ResponseHandler.success(res, result, 'Open requests for user fetched');
|
ResponseHandler.success(res, result, 'Open requests for user fetched');
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
||||||
@ -181,7 +213,19 @@ export class WorkflowController {
|
|||||||
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
const userId = (req as any).user?.userId || (req as any).user?.id || (req as any).auth?.userId;
|
||||||
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
const page = Math.max(parseInt(String(req.query.page || '1'), 10), 1);
|
||||||
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
const limit = Math.min(Math.max(parseInt(String(req.query.limit || '20'), 10), 1), 100);
|
||||||
const result = await workflowService.listClosedByMe(userId, page, limit);
|
|
||||||
|
// Extract filter parameters
|
||||||
|
const filters = {
|
||||||
|
search: req.query.search as string | undefined,
|
||||||
|
status: req.query.status as string | undefined,
|
||||||
|
priority: req.query.priority as string | undefined
|
||||||
|
};
|
||||||
|
|
||||||
|
// Extract sorting parameters
|
||||||
|
const sortBy = req.query.sortBy as string | undefined;
|
||||||
|
const sortOrder = (req.query.sortOrder as string | undefined) || 'desc';
|
||||||
|
|
||||||
|
const result = await workflowService.listClosedByMe(userId, page, limit, filters, sortBy, sortOrder);
|
||||||
ResponseHandler.success(res, result, 'Closed requests by user fetched');
|
ResponseHandler.success(res, result, 'Closed requests by user fetched');
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
|
||||||
|
|||||||
@ -1,6 +1,7 @@
|
|||||||
import type { Request, Response } from 'express';
|
import type { Request, Response } from 'express';
|
||||||
import { workNoteService } from '../services/worknote.service';
|
import { workNoteService } from '../services/worknote.service';
|
||||||
import { WorkflowService } from '../services/workflow.service';
|
import { WorkflowService } from '../services/workflow.service';
|
||||||
|
import { getRequestMetadata } from '@utils/requestUtils';
|
||||||
|
|
||||||
export class WorkNoteController {
|
export class WorkNoteController {
|
||||||
private workflowService = new WorkflowService();
|
private workflowService = new WorkflowService();
|
||||||
@ -40,7 +41,21 @@ export class WorkNoteController {
|
|||||||
|
|
||||||
const payload = req.body?.payload ? JSON.parse(req.body.payload) : (req.body || {});
|
const payload = req.body?.payload ? JSON.parse(req.body.payload) : (req.body || {});
|
||||||
const files = (req.files as any[])?.map(f => ({ path: f.path, originalname: f.originalname, mimetype: f.mimetype, size: f.size })) || [];
|
const files = (req.files as any[])?.map(f => ({ path: f.path, originalname: f.originalname, mimetype: f.mimetype, size: f.size })) || [];
|
||||||
const note = await workNoteService.create(requestId, user, payload, files);
|
|
||||||
|
// Extract mentions from payload (sent by frontend)
|
||||||
|
const mentions = payload.mentions || [];
|
||||||
|
const workNotePayload = {
|
||||||
|
message: payload.message,
|
||||||
|
isPriority: payload.isPriority,
|
||||||
|
parentNoteId: payload.parentNoteId,
|
||||||
|
mentionedUsers: mentions // Pass mentioned user IDs to service
|
||||||
|
};
|
||||||
|
|
||||||
|
const requestMeta = getRequestMetadata(req);
|
||||||
|
const note = await workNoteService.create(requestId, user, workNotePayload, files, {
|
||||||
|
ipAddress: requestMeta.ipAddress,
|
||||||
|
userAgent: requestMeta.userAgent
|
||||||
|
});
|
||||||
res.status(201).json({ success: true, data: note });
|
res.status(201).json({ success: true, data: note });
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@ -49,7 +49,7 @@ export const authenticateToken = async (
|
|||||||
userId: user.userId,
|
userId: user.userId,
|
||||||
email: user.email,
|
email: user.email,
|
||||||
employeeId: user.employeeId || null, // Optional - schema not finalized
|
employeeId: user.employeeId || null, // Optional - schema not finalized
|
||||||
role: user.isAdmin ? 'admin' : 'user'
|
role: user.role // Keep uppercase: USER, MANAGEMENT, ADMIN
|
||||||
};
|
};
|
||||||
|
|
||||||
next();
|
next();
|
||||||
@ -70,7 +70,7 @@ export const requireAdmin = (
|
|||||||
res: Response,
|
res: Response,
|
||||||
next: NextFunction
|
next: NextFunction
|
||||||
): void => {
|
): void => {
|
||||||
if (req.user?.role !== 'admin') {
|
if (req.user?.role !== 'ADMIN') {
|
||||||
ResponseHandler.forbidden(res, 'Admin access required');
|
ResponseHandler.forbidden(res, 'Admin access required');
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -95,7 +95,7 @@ export const optionalAuth = async (
|
|||||||
userId: user.userId,
|
userId: user.userId,
|
||||||
email: user.email,
|
email: user.email,
|
||||||
employeeId: user.employeeId || null, // Optional - schema not finalized
|
employeeId: user.employeeId || null, // Optional - schema not finalized
|
||||||
role: user.isAdmin ? 'admin' : 'user'
|
role: user.role // Keep uppercase: USER, MANAGEMENT, ADMIN
|
||||||
};
|
};
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@ -98,16 +98,36 @@ export function requireParticipantTypes(allowed: AllowedType[]) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Middleware to require admin role
|
* Role-Based Access Control (RBAC) Middleware
|
||||||
|
*
|
||||||
|
* Roles:
|
||||||
|
* - USER: Default role - can create/view own requests, participate in assigned workflows
|
||||||
|
* - MANAGEMENT: Read access to all requests, enhanced dashboard visibility
|
||||||
|
* - ADMIN: Full system access - configuration, user management, all workflows
|
||||||
|
*/
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Middleware: requireAdmin
|
||||||
|
*
|
||||||
|
* Purpose: Restrict access to ADMIN role only
|
||||||
|
*
|
||||||
|
* Use Cases:
|
||||||
|
* - System configuration management
|
||||||
|
* - User role assignment
|
||||||
|
* - Holiday calendar management
|
||||||
|
* - Email/notification settings
|
||||||
|
* - Audit log access
|
||||||
|
*
|
||||||
|
* Returns: 403 Forbidden if user is not ADMIN
|
||||||
*/
|
*/
|
||||||
export function requireAdmin(req: Request, res: Response, next: NextFunction): void {
|
export function requireAdmin(req: Request, res: Response, next: NextFunction): void {
|
||||||
try {
|
try {
|
||||||
const userRole = req.user?.role;
|
const userRole = req.user?.role;
|
||||||
|
|
||||||
if (userRole !== 'admin') {
|
if (userRole !== 'ADMIN') {
|
||||||
res.status(403).json({
|
res.status(403).json({
|
||||||
success: false,
|
success: false,
|
||||||
error: 'Admin access required'
|
error: 'Admin access required. Only administrators can perform this action.'
|
||||||
});
|
});
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -122,4 +142,117 @@ export function requireAdmin(req: Request, res: Response, next: NextFunction): v
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Middleware: requireManagement
|
||||||
|
*
|
||||||
|
* Purpose: Restrict access to MANAGEMENT and ADMIN roles
|
||||||
|
*
|
||||||
|
* Use Cases:
|
||||||
|
* - View all requests (read-only)
|
||||||
|
* - Access comprehensive dashboards
|
||||||
|
* - Export reports
|
||||||
|
* - View analytics across all departments
|
||||||
|
*
|
||||||
|
* Permissions:
|
||||||
|
* - MANAGEMENT: Read access to all data
|
||||||
|
* - ADMIN: Read + Write access
|
||||||
|
*
|
||||||
|
* Returns: 403 Forbidden if user is only USER role
|
||||||
|
*/
|
||||||
|
export function requireManagement(req: Request, res: Response, next: NextFunction): void {
|
||||||
|
try {
|
||||||
|
const userRole = req.user?.role;
|
||||||
|
|
||||||
|
if (userRole !== 'MANAGEMENT' && userRole !== 'ADMIN') {
|
||||||
|
res.status(403).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Management access required. This feature is available to management and admin users only.'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
next();
|
||||||
|
} catch (error) {
|
||||||
|
console.error('❌ Management authorization failed:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Authorization check failed'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Middleware: requireRole
|
||||||
|
*
|
||||||
|
* Purpose: Flexible role checking - accepts multiple allowed roles
|
||||||
|
*
|
||||||
|
* Example Usage:
|
||||||
|
* - requireRole(['ADMIN']) - Admin only
|
||||||
|
* - requireRole(['MANAGEMENT', 'ADMIN']) - Management or Admin
|
||||||
|
* - requireRole(['USER', 'MANAGEMENT', 'ADMIN']) - Any authenticated user
|
||||||
|
*
|
||||||
|
* @param allowedRoles - Array of allowed role strings
|
||||||
|
* @returns Express middleware function
|
||||||
|
*/
|
||||||
|
export function requireRole(allowedRoles: ('USER' | 'MANAGEMENT' | 'ADMIN')[]) {
|
||||||
|
return (req: Request, res: Response, next: NextFunction): void => {
|
||||||
|
try {
|
||||||
|
const userRole = req.user?.role;
|
||||||
|
|
||||||
|
if (!userRole || !allowedRoles.includes(userRole as any)) {
|
||||||
|
res.status(403).json({
|
||||||
|
success: false,
|
||||||
|
error: `Access denied. Required roles: ${allowedRoles.join(' or ')}`
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
next();
|
||||||
|
} catch (error) {
|
||||||
|
console.error('❌ Role authorization failed:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Authorization check failed'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Helper: Check if user has specific role
|
||||||
|
*
|
||||||
|
* Purpose: Programmatic role checking within controllers
|
||||||
|
*
|
||||||
|
* @param user - Express req.user object
|
||||||
|
* @param role - Role to check
|
||||||
|
* @returns boolean
|
||||||
|
*/
|
||||||
|
export function hasRole(user: any, role: 'USER' | 'MANAGEMENT' | 'ADMIN'): boolean {
|
||||||
|
return user?.role === role;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Helper: Check if user has management or admin access
|
||||||
|
*
|
||||||
|
* Purpose: Quick check for enhanced permissions
|
||||||
|
*
|
||||||
|
* @param user - Express req.user object
|
||||||
|
* @returns boolean
|
||||||
|
*/
|
||||||
|
export function hasManagementAccess(user: any): boolean {
|
||||||
|
return user?.role === 'MANAGEMENT' || user?.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Helper: Check if user has admin access
|
||||||
|
*
|
||||||
|
* Purpose: Quick check for admin-only permissions
|
||||||
|
*
|
||||||
|
* @param user - Express req.user object
|
||||||
|
* @returns boolean
|
||||||
|
*/
|
||||||
|
export function hasAdminAccess(user: any): boolean {
|
||||||
|
return user?.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@ -17,16 +17,21 @@ export const corsMiddleware = cors({
|
|||||||
origin: (origin, callback) => {
|
origin: (origin, callback) => {
|
||||||
const allowedOrigins = getOrigins();
|
const allowedOrigins = getOrigins();
|
||||||
|
|
||||||
// Allow requests with no origin (like mobile apps or curl requests) in development
|
// In development, be more permissive
|
||||||
if (!origin && process.env.NODE_ENV === 'development') {
|
if (process.env.NODE_ENV !== 'production') {
|
||||||
|
// Allow localhost on any port
|
||||||
|
if (!origin || origin.includes('localhost') || origin.includes('127.0.0.1')) {
|
||||||
|
return callback(null, true);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Allow requests with no origin (like mobile apps or curl requests)
|
||||||
|
if (!origin) {
|
||||||
return callback(null, true);
|
return callback(null, true);
|
||||||
}
|
}
|
||||||
|
|
||||||
if (origin && allowedOrigins.includes(origin)) {
|
if (origin && allowedOrigins.includes(origin)) {
|
||||||
callback(null, true);
|
callback(null, true);
|
||||||
} else if (!origin) {
|
|
||||||
// Allow requests with no origin
|
|
||||||
callback(null, true);
|
|
||||||
} else {
|
} else {
|
||||||
callback(new Error('Not allowed by CORS'));
|
callback(new Error('Not allowed by CORS'));
|
||||||
}
|
}
|
||||||
|
|||||||
@ -2,114 +2,236 @@ import { QueryInterface, DataTypes } from 'sequelize';
|
|||||||
|
|
||||||
/**
|
/**
|
||||||
* Migration: Create users table
|
* Migration: Create users table
|
||||||
|
*
|
||||||
|
* Purpose: Create the main users table with all fields including RBAC and SSO fields
|
||||||
|
*
|
||||||
* This must run FIRST before other tables that reference users
|
* This must run FIRST before other tables that reference users
|
||||||
|
*
|
||||||
|
* Includes:
|
||||||
|
* - Basic user information (email, name, etc.)
|
||||||
|
* - SSO/Okta fields (manager, job_title, etc.)
|
||||||
|
* - RBAC role system (USER, MANAGEMENT, ADMIN)
|
||||||
|
* - Location and AD group information
|
||||||
|
*
|
||||||
|
* Created: 2025-11-12 (Updated for fresh setup)
|
||||||
*/
|
*/
|
||||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||||
// Create users table
|
console.log('📋 Creating users table with RBAC and extended SSO fields...');
|
||||||
await queryInterface.createTable('users', {
|
|
||||||
user_id: {
|
try {
|
||||||
type: DataTypes.UUID,
|
// Step 1: Create ENUM type for roles
|
||||||
primaryKey: true,
|
console.log(' ✓ Creating user_role_enum...');
|
||||||
defaultValue: DataTypes.UUIDV4,
|
await queryInterface.sequelize.query(`
|
||||||
field: 'user_id'
|
CREATE TYPE user_role_enum AS ENUM ('USER', 'MANAGEMENT', 'ADMIN');
|
||||||
},
|
`);
|
||||||
employee_id: {
|
|
||||||
type: DataTypes.STRING(50),
|
// Step 2: Create users table
|
||||||
allowNull: true,
|
console.log(' ✓ Creating users table...');
|
||||||
field: 'employee_id'
|
await queryInterface.createTable('users', {
|
||||||
},
|
user_id: {
|
||||||
okta_sub: {
|
type: DataTypes.UUID,
|
||||||
type: DataTypes.STRING(100),
|
primaryKey: true,
|
||||||
allowNull: false,
|
defaultValue: DataTypes.UUIDV4,
|
||||||
unique: true,
|
field: 'user_id',
|
||||||
field: 'okta_sub'
|
comment: 'Primary key - UUID'
|
||||||
},
|
},
|
||||||
email: {
|
employee_id: {
|
||||||
type: DataTypes.STRING(255),
|
type: DataTypes.STRING(50),
|
||||||
allowNull: false,
|
allowNull: true,
|
||||||
unique: true,
|
field: 'employee_id',
|
||||||
field: 'email'
|
comment: 'HR System Employee ID (optional) - some users may not have'
|
||||||
},
|
},
|
||||||
first_name: {
|
okta_sub: {
|
||||||
type: DataTypes.STRING(100),
|
type: DataTypes.STRING(100),
|
||||||
allowNull: true,
|
allowNull: false,
|
||||||
field: 'first_name'
|
unique: true,
|
||||||
},
|
field: 'okta_sub',
|
||||||
last_name: {
|
comment: 'Okta user subject identifier - unique identifier from SSO'
|
||||||
type: DataTypes.STRING(100),
|
},
|
||||||
allowNull: true,
|
email: {
|
||||||
field: 'last_name'
|
type: DataTypes.STRING(255),
|
||||||
},
|
allowNull: false,
|
||||||
display_name: {
|
unique: true,
|
||||||
type: DataTypes.STRING(200),
|
field: 'email',
|
||||||
allowNull: true,
|
comment: 'Primary email address - unique and required'
|
||||||
field: 'display_name'
|
},
|
||||||
},
|
first_name: {
|
||||||
department: {
|
type: DataTypes.STRING(100),
|
||||||
type: DataTypes.STRING(100),
|
allowNull: true,
|
||||||
allowNull: true
|
defaultValue: '',
|
||||||
},
|
field: 'first_name',
|
||||||
designation: {
|
comment: 'First name from SSO (optional)'
|
||||||
type: DataTypes.STRING(100),
|
},
|
||||||
allowNull: true
|
last_name: {
|
||||||
},
|
type: DataTypes.STRING(100),
|
||||||
phone: {
|
allowNull: true,
|
||||||
type: DataTypes.STRING(20),
|
defaultValue: '',
|
||||||
allowNull: true
|
field: 'last_name',
|
||||||
},
|
comment: 'Last name from SSO (optional)'
|
||||||
location: {
|
},
|
||||||
type: DataTypes.JSONB,
|
display_name: {
|
||||||
allowNull: true
|
type: DataTypes.STRING(200),
|
||||||
},
|
allowNull: true,
|
||||||
is_active: {
|
defaultValue: '',
|
||||||
type: DataTypes.BOOLEAN,
|
field: 'display_name',
|
||||||
defaultValue: true,
|
comment: 'Full display name for UI'
|
||||||
field: 'is_active'
|
},
|
||||||
},
|
department: {
|
||||||
is_admin: {
|
type: DataTypes.STRING(100),
|
||||||
type: DataTypes.BOOLEAN,
|
allowNull: true,
|
||||||
defaultValue: false,
|
comment: 'Department/Division from SSO'
|
||||||
field: 'is_admin'
|
},
|
||||||
},
|
designation: {
|
||||||
last_login: {
|
type: DataTypes.STRING(100),
|
||||||
type: DataTypes.DATE,
|
allowNull: true,
|
||||||
allowNull: true,
|
comment: 'Job designation/position'
|
||||||
field: 'last_login'
|
},
|
||||||
},
|
phone: {
|
||||||
created_at: {
|
type: DataTypes.STRING(20),
|
||||||
type: DataTypes.DATE,
|
allowNull: true,
|
||||||
allowNull: false,
|
comment: 'Office phone number'
|
||||||
defaultValue: DataTypes.NOW,
|
},
|
||||||
field: 'created_at'
|
|
||||||
},
|
// ============ Extended SSO/Okta Fields ============
|
||||||
updated_at: {
|
manager: {
|
||||||
type: DataTypes.DATE,
|
type: DataTypes.STRING(200),
|
||||||
allowNull: false,
|
allowNull: true,
|
||||||
defaultValue: DataTypes.NOW,
|
comment: 'Reporting manager name from SSO/AD'
|
||||||
field: 'updated_at'
|
},
|
||||||
}
|
second_email: {
|
||||||
});
|
type: DataTypes.STRING(255),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'second_email',
|
||||||
|
comment: 'Alternate email address from SSO'
|
||||||
|
},
|
||||||
|
job_title: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'job_title',
|
||||||
|
comment: 'Detailed job title/description from SSO'
|
||||||
|
},
|
||||||
|
employee_number: {
|
||||||
|
type: DataTypes.STRING(50),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'employee_number',
|
||||||
|
comment: 'HR system employee number from SSO (e.g., "00020330")'
|
||||||
|
},
|
||||||
|
postal_address: {
|
||||||
|
type: DataTypes.STRING(500),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'postal_address',
|
||||||
|
comment: 'Work location/office address from SSO'
|
||||||
|
},
|
||||||
|
mobile_phone: {
|
||||||
|
type: DataTypes.STRING(20),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'mobile_phone',
|
||||||
|
comment: 'Mobile contact number from SSO'
|
||||||
|
},
|
||||||
|
ad_groups: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'ad_groups',
|
||||||
|
comment: 'Active Directory group memberships from SSO (memberOf array)'
|
||||||
|
},
|
||||||
|
|
||||||
|
// ============ System Fields ============
|
||||||
|
location: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
comment: 'JSON object: {city, state, country, office, timezone}'
|
||||||
|
},
|
||||||
|
is_active: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: true,
|
||||||
|
field: 'is_active',
|
||||||
|
comment: 'Account status - true=active, false=disabled'
|
||||||
|
},
|
||||||
|
role: {
|
||||||
|
type: DataTypes.ENUM('USER', 'MANAGEMENT', 'ADMIN'),
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: 'USER',
|
||||||
|
comment: 'RBAC role: USER (default), MANAGEMENT (read all), ADMIN (full access)'
|
||||||
|
},
|
||||||
|
last_login: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'last_login',
|
||||||
|
comment: 'Last successful login timestamp'
|
||||||
|
},
|
||||||
|
created_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW,
|
||||||
|
field: 'created_at'
|
||||||
|
},
|
||||||
|
updated_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW,
|
||||||
|
field: 'updated_at'
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
// Create indexes
|
// Step 3: Create indexes
|
||||||
await queryInterface.addIndex('users', ['email'], {
|
console.log(' ✓ Creating indexes...');
|
||||||
name: 'users_email_idx',
|
|
||||||
unique: true
|
await queryInterface.addIndex('users', ['email'], {
|
||||||
});
|
name: 'users_email_idx',
|
||||||
|
unique: true
|
||||||
|
});
|
||||||
|
|
||||||
await queryInterface.addIndex('users', ['okta_sub'], {
|
await queryInterface.addIndex('users', ['okta_sub'], {
|
||||||
name: 'users_okta_sub_idx',
|
name: 'users_okta_sub_idx',
|
||||||
unique: true
|
unique: true
|
||||||
});
|
});
|
||||||
|
|
||||||
await queryInterface.addIndex('users', ['employee_id'], {
|
await queryInterface.addIndex('users', ['employee_id'], {
|
||||||
name: 'users_employee_id_idx'
|
name: 'users_employee_id_idx'
|
||||||
});
|
});
|
||||||
|
|
||||||
// Users table created
|
await queryInterface.addIndex('users', ['department'], {
|
||||||
|
name: 'idx_users_department'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('users', ['is_active'], {
|
||||||
|
name: 'idx_users_is_active'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('users', ['role'], {
|
||||||
|
name: 'idx_users_role'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('users', ['manager'], {
|
||||||
|
name: 'idx_users_manager'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('users', ['postal_address'], {
|
||||||
|
name: 'idx_users_postal_address'
|
||||||
|
});
|
||||||
|
|
||||||
|
// GIN indexes for JSONB fields
|
||||||
|
await queryInterface.sequelize.query(`
|
||||||
|
CREATE INDEX idx_users_location ON users USING gin(location jsonb_path_ops);
|
||||||
|
CREATE INDEX idx_users_ad_groups ON users USING gin(ad_groups);
|
||||||
|
`);
|
||||||
|
|
||||||
|
console.log('✅ Users table created successfully with all indexes!');
|
||||||
|
} catch (error) {
|
||||||
|
console.error('❌ Failed to create users table:', error);
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||||
|
console.log('📋 Dropping users table...');
|
||||||
|
|
||||||
await queryInterface.dropTable('users');
|
await queryInterface.dropTable('users');
|
||||||
// Users table dropped
|
|
||||||
|
// Drop ENUM type
|
||||||
|
await queryInterface.sequelize.query(`
|
||||||
|
DROP TYPE IF EXISTS user_role_enum;
|
||||||
|
`);
|
||||||
|
|
||||||
|
console.log('✅ Users table dropped!');
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
109
src/migrations/20251111-create-conclusion-remarks.ts
Normal file
109
src/migrations/20251111-create-conclusion-remarks.ts
Normal file
@ -0,0 +1,109 @@
|
|||||||
|
import { QueryInterface, DataTypes } from 'sequelize';
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Migration to create conclusion_remarks table
|
||||||
|
* Stores AI-generated and finalized conclusion remarks for workflow requests
|
||||||
|
*/
|
||||||
|
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||||
|
await queryInterface.createTable('conclusion_remarks', {
|
||||||
|
conclusion_id: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
defaultValue: DataTypes.UUIDV4,
|
||||||
|
primaryKey: true,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
request_id: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: false,
|
||||||
|
references: {
|
||||||
|
model: 'workflow_requests',
|
||||||
|
key: 'request_id'
|
||||||
|
},
|
||||||
|
onUpdate: 'CASCADE',
|
||||||
|
onDelete: 'CASCADE',
|
||||||
|
unique: true // One conclusion per request
|
||||||
|
},
|
||||||
|
ai_generated_remark: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
ai_model_used: {
|
||||||
|
type: DataTypes.STRING(100),
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
ai_confidence_score: {
|
||||||
|
type: DataTypes.DECIMAL(5, 2),
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
final_remark: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
edited_by: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: true,
|
||||||
|
references: {
|
||||||
|
model: 'users',
|
||||||
|
key: 'user_id'
|
||||||
|
},
|
||||||
|
onUpdate: 'CASCADE',
|
||||||
|
onDelete: 'SET NULL'
|
||||||
|
},
|
||||||
|
is_edited: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: false
|
||||||
|
},
|
||||||
|
edit_count: {
|
||||||
|
type: DataTypes.INTEGER,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: 0
|
||||||
|
},
|
||||||
|
approval_summary: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
document_summary: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
key_discussion_points: {
|
||||||
|
type: DataTypes.ARRAY(DataTypes.TEXT),
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: []
|
||||||
|
},
|
||||||
|
generated_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
finalized_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
created_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW
|
||||||
|
},
|
||||||
|
updated_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// Add index on request_id for faster lookups
|
||||||
|
await queryInterface.addIndex('conclusion_remarks', ['request_id'], {
|
||||||
|
name: 'idx_conclusion_remarks_request_id'
|
||||||
|
});
|
||||||
|
|
||||||
|
// Add index on finalized_at for KPI queries
|
||||||
|
await queryInterface.addIndex('conclusion_remarks', ['finalized_at'], {
|
||||||
|
name: 'idx_conclusion_remarks_finalized_at'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||||
|
await queryInterface.dropTable('conclusion_remarks');
|
||||||
|
}
|
||||||
|
|
||||||
137
src/migrations/20251111-create-notifications.ts
Normal file
137
src/migrations/20251111-create-notifications.ts
Normal file
@ -0,0 +1,137 @@
|
|||||||
|
import { QueryInterface, DataTypes } from 'sequelize';
|
||||||
|
|
||||||
|
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||||
|
// Create priority enum type
|
||||||
|
await queryInterface.sequelize.query(`
|
||||||
|
DO $$ BEGIN
|
||||||
|
CREATE TYPE notification_priority_enum AS ENUM ('LOW', 'MEDIUM', 'HIGH', 'URGENT');
|
||||||
|
EXCEPTION
|
||||||
|
WHEN duplicate_object THEN null;
|
||||||
|
END $$;
|
||||||
|
`);
|
||||||
|
|
||||||
|
// Create notifications table
|
||||||
|
await queryInterface.createTable('notifications', {
|
||||||
|
notification_id: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
defaultValue: DataTypes.UUIDV4,
|
||||||
|
primaryKey: true
|
||||||
|
},
|
||||||
|
user_id: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: false,
|
||||||
|
references: {
|
||||||
|
model: 'users',
|
||||||
|
key: 'user_id'
|
||||||
|
},
|
||||||
|
onUpdate: 'CASCADE',
|
||||||
|
onDelete: 'CASCADE'
|
||||||
|
},
|
||||||
|
request_id: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: true,
|
||||||
|
references: {
|
||||||
|
model: 'workflow_requests',
|
||||||
|
key: 'request_id'
|
||||||
|
},
|
||||||
|
onUpdate: 'CASCADE',
|
||||||
|
onDelete: 'SET NULL'
|
||||||
|
},
|
||||||
|
notification_type: {
|
||||||
|
type: DataTypes.STRING(50),
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
title: {
|
||||||
|
type: DataTypes.STRING(255),
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
message: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
is_read: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
priority: {
|
||||||
|
type: 'notification_priority_enum',
|
||||||
|
defaultValue: 'MEDIUM',
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
action_url: {
|
||||||
|
type: DataTypes.STRING(500),
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
action_required: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
metadata: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
defaultValue: {}
|
||||||
|
},
|
||||||
|
sent_via: {
|
||||||
|
type: DataTypes.ARRAY(DataTypes.STRING),
|
||||||
|
defaultValue: [],
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
email_sent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
sms_sent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
push_sent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
read_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
expires_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
created_at: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// Create indexes for better query performance
|
||||||
|
await queryInterface.addIndex('notifications', ['user_id'], {
|
||||||
|
name: 'idx_notifications_user_id'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('notifications', ['user_id', 'is_read'], {
|
||||||
|
name: 'idx_notifications_user_unread'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('notifications', ['request_id'], {
|
||||||
|
name: 'idx_notifications_request_id'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('notifications', ['created_at'], {
|
||||||
|
name: 'idx_notifications_created_at'
|
||||||
|
});
|
||||||
|
|
||||||
|
await queryInterface.addIndex('notifications', ['notification_type'], {
|
||||||
|
name: 'idx_notifications_type'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||||
|
await queryInterface.dropTable('notifications');
|
||||||
|
await queryInterface.sequelize.query('DROP TYPE IF EXISTS notification_priority_enum;');
|
||||||
|
}
|
||||||
|
|
||||||
152
src/models/ConclusionRemark.ts
Normal file
152
src/models/ConclusionRemark.ts
Normal file
@ -0,0 +1,152 @@
|
|||||||
|
import { DataTypes, Model, Optional } from 'sequelize';
|
||||||
|
import { sequelize } from '../config/database';
|
||||||
|
|
||||||
|
interface ConclusionRemarkAttributes {
|
||||||
|
conclusionId: string;
|
||||||
|
requestId: string;
|
||||||
|
aiGeneratedRemark: string | null;
|
||||||
|
aiModelUsed: string | null;
|
||||||
|
aiConfidenceScore: number | null;
|
||||||
|
finalRemark: string | null;
|
||||||
|
editedBy: string | null;
|
||||||
|
isEdited: boolean;
|
||||||
|
editCount: number;
|
||||||
|
approvalSummary: any;
|
||||||
|
documentSummary: any;
|
||||||
|
keyDiscussionPoints: string[];
|
||||||
|
generatedAt: Date | null;
|
||||||
|
finalizedAt: Date | null;
|
||||||
|
createdAt?: Date;
|
||||||
|
updatedAt?: Date;
|
||||||
|
}
|
||||||
|
|
||||||
|
interface ConclusionRemarkCreationAttributes
|
||||||
|
extends Optional<ConclusionRemarkAttributes, 'conclusionId' | 'aiGeneratedRemark' | 'aiModelUsed' | 'aiConfidenceScore' | 'finalRemark' | 'editedBy' | 'isEdited' | 'editCount' | 'approvalSummary' | 'documentSummary' | 'keyDiscussionPoints' | 'generatedAt' | 'finalizedAt'> {}
|
||||||
|
|
||||||
|
class ConclusionRemark extends Model<ConclusionRemarkAttributes, ConclusionRemarkCreationAttributes>
|
||||||
|
implements ConclusionRemarkAttributes {
|
||||||
|
public conclusionId!: string;
|
||||||
|
public requestId!: string;
|
||||||
|
public aiGeneratedRemark!: string | null;
|
||||||
|
public aiModelUsed!: string | null;
|
||||||
|
public aiConfidenceScore!: number | null;
|
||||||
|
public finalRemark!: string | null;
|
||||||
|
public editedBy!: string | null;
|
||||||
|
public isEdited!: boolean;
|
||||||
|
public editCount!: number;
|
||||||
|
public approvalSummary!: any;
|
||||||
|
public documentSummary!: any;
|
||||||
|
public keyDiscussionPoints!: string[];
|
||||||
|
public generatedAt!: Date | null;
|
||||||
|
public finalizedAt!: Date | null;
|
||||||
|
public readonly createdAt!: Date;
|
||||||
|
public readonly updatedAt!: Date;
|
||||||
|
}
|
||||||
|
|
||||||
|
ConclusionRemark.init(
|
||||||
|
{
|
||||||
|
conclusionId: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
defaultValue: DataTypes.UUIDV4,
|
||||||
|
primaryKey: true,
|
||||||
|
field: 'conclusion_id'
|
||||||
|
},
|
||||||
|
requestId: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: false,
|
||||||
|
field: 'request_id',
|
||||||
|
references: {
|
||||||
|
model: 'workflow_requests',
|
||||||
|
key: 'request_id'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
aiGeneratedRemark: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'ai_generated_remark'
|
||||||
|
},
|
||||||
|
aiModelUsed: {
|
||||||
|
type: DataTypes.STRING(100),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'ai_model_used'
|
||||||
|
},
|
||||||
|
aiConfidenceScore: {
|
||||||
|
type: DataTypes.DECIMAL(5, 2),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'ai_confidence_score'
|
||||||
|
},
|
||||||
|
finalRemark: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'final_remark'
|
||||||
|
},
|
||||||
|
editedBy: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'edited_by',
|
||||||
|
references: {
|
||||||
|
model: 'users',
|
||||||
|
key: 'user_id'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
isEdited: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'is_edited'
|
||||||
|
},
|
||||||
|
editCount: {
|
||||||
|
type: DataTypes.INTEGER,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: 0,
|
||||||
|
field: 'edit_count'
|
||||||
|
},
|
||||||
|
approvalSummary: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'approval_summary'
|
||||||
|
},
|
||||||
|
documentSummary: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'document_summary'
|
||||||
|
},
|
||||||
|
keyDiscussionPoints: {
|
||||||
|
type: DataTypes.ARRAY(DataTypes.TEXT),
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: [],
|
||||||
|
field: 'key_discussion_points'
|
||||||
|
},
|
||||||
|
generatedAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'generated_at'
|
||||||
|
},
|
||||||
|
finalizedAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'finalized_at'
|
||||||
|
},
|
||||||
|
createdAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW,
|
||||||
|
field: 'created_at'
|
||||||
|
},
|
||||||
|
updatedAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW,
|
||||||
|
field: 'updated_at'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
sequelize,
|
||||||
|
tableName: 'conclusion_remarks',
|
||||||
|
timestamps: true,
|
||||||
|
underscored: true
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
export default ConclusionRemark;
|
||||||
|
|
||||||
156
src/models/Notification.ts
Normal file
156
src/models/Notification.ts
Normal file
@ -0,0 +1,156 @@
|
|||||||
|
import { DataTypes, Model, Optional } from 'sequelize';
|
||||||
|
import { sequelize } from '../config/database';
|
||||||
|
|
||||||
|
interface NotificationAttributes {
|
||||||
|
notificationId: string;
|
||||||
|
userId: string;
|
||||||
|
requestId?: string;
|
||||||
|
notificationType: string;
|
||||||
|
title: string;
|
||||||
|
message: string;
|
||||||
|
isRead: boolean;
|
||||||
|
priority: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
|
||||||
|
actionUrl?: string;
|
||||||
|
actionRequired: boolean;
|
||||||
|
metadata?: any;
|
||||||
|
sentVia: string[];
|
||||||
|
emailSent: boolean;
|
||||||
|
smsSent: boolean;
|
||||||
|
pushSent: boolean;
|
||||||
|
readAt?: Date;
|
||||||
|
expiresAt?: Date;
|
||||||
|
createdAt: Date;
|
||||||
|
}
|
||||||
|
|
||||||
|
interface NotificationCreationAttributes extends Optional<NotificationAttributes, 'notificationId' | 'isRead' | 'priority' | 'actionRequired' | 'sentVia' | 'emailSent' | 'smsSent' | 'pushSent' | 'createdAt'> {}
|
||||||
|
|
||||||
|
class Notification extends Model<NotificationAttributes, NotificationCreationAttributes> implements NotificationAttributes {
|
||||||
|
public notificationId!: string;
|
||||||
|
public userId!: string;
|
||||||
|
public requestId?: string;
|
||||||
|
public notificationType!: string;
|
||||||
|
public title!: string;
|
||||||
|
public message!: string;
|
||||||
|
public isRead!: boolean;
|
||||||
|
public priority!: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
|
||||||
|
public actionUrl?: string;
|
||||||
|
public actionRequired!: boolean;
|
||||||
|
public metadata?: any;
|
||||||
|
public sentVia!: string[];
|
||||||
|
public emailSent!: boolean;
|
||||||
|
public smsSent!: boolean;
|
||||||
|
public pushSent!: boolean;
|
||||||
|
public readAt?: Date;
|
||||||
|
public expiresAt?: Date;
|
||||||
|
public readonly createdAt!: Date;
|
||||||
|
}
|
||||||
|
|
||||||
|
Notification.init(
|
||||||
|
{
|
||||||
|
notificationId: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
defaultValue: DataTypes.UUIDV4,
|
||||||
|
primaryKey: true,
|
||||||
|
field: 'notification_id'
|
||||||
|
},
|
||||||
|
userId: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: false,
|
||||||
|
field: 'user_id',
|
||||||
|
references: {
|
||||||
|
model: 'users',
|
||||||
|
key: 'user_id'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
requestId: {
|
||||||
|
type: DataTypes.UUID,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'request_id',
|
||||||
|
references: {
|
||||||
|
model: 'workflow_requests',
|
||||||
|
key: 'request_id'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
notificationType: {
|
||||||
|
type: DataTypes.STRING(50),
|
||||||
|
allowNull: false,
|
||||||
|
field: 'notification_type'
|
||||||
|
},
|
||||||
|
title: {
|
||||||
|
type: DataTypes.STRING(255),
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
message: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: false
|
||||||
|
},
|
||||||
|
isRead: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'is_read'
|
||||||
|
},
|
||||||
|
priority: {
|
||||||
|
type: DataTypes.ENUM('LOW', 'MEDIUM', 'HIGH', 'URGENT'),
|
||||||
|
defaultValue: 'MEDIUM'
|
||||||
|
},
|
||||||
|
actionUrl: {
|
||||||
|
type: DataTypes.STRING(500),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'action_url'
|
||||||
|
},
|
||||||
|
actionRequired: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'action_required'
|
||||||
|
},
|
||||||
|
metadata: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true
|
||||||
|
},
|
||||||
|
sentVia: {
|
||||||
|
type: DataTypes.ARRAY(DataTypes.STRING),
|
||||||
|
defaultValue: [],
|
||||||
|
field: 'sent_via'
|
||||||
|
},
|
||||||
|
emailSent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'email_sent'
|
||||||
|
},
|
||||||
|
smsSent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'sms_sent'
|
||||||
|
},
|
||||||
|
pushSent: {
|
||||||
|
type: DataTypes.BOOLEAN,
|
||||||
|
defaultValue: false,
|
||||||
|
field: 'push_sent'
|
||||||
|
},
|
||||||
|
readAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'read_at'
|
||||||
|
},
|
||||||
|
expiresAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'expires_at'
|
||||||
|
},
|
||||||
|
createdAt: {
|
||||||
|
type: DataTypes.DATE,
|
||||||
|
allowNull: false,
|
||||||
|
defaultValue: DataTypes.NOW,
|
||||||
|
field: 'created_at'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
sequelize,
|
||||||
|
tableName: 'notifications',
|
||||||
|
timestamps: false,
|
||||||
|
underscored: true
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
export { Notification };
|
||||||
|
|
||||||
@ -1,6 +1,15 @@
|
|||||||
import { DataTypes, Model, Optional } from 'sequelize';
|
import { DataTypes, Model, Optional } from 'sequelize';
|
||||||
import { sequelize } from '../config/database';
|
import { sequelize } from '../config/database';
|
||||||
|
|
||||||
|
/**
|
||||||
|
* User Role Enum
|
||||||
|
*
|
||||||
|
* USER: Default role - can create requests, view own requests, participate in workflows
|
||||||
|
* MANAGEMENT: Enhanced visibility - can view all requests, read-only access to all data
|
||||||
|
* ADMIN: Full access - can manage system configuration, users, and all workflows
|
||||||
|
*/
|
||||||
|
export type UserRole = 'USER' | 'MANAGEMENT' | 'ADMIN';
|
||||||
|
|
||||||
interface UserAttributes {
|
interface UserAttributes {
|
||||||
userId: string;
|
userId: string;
|
||||||
employeeId?: string | null;
|
employeeId?: string | null;
|
||||||
@ -12,6 +21,16 @@ interface UserAttributes {
|
|||||||
department?: string | null;
|
department?: string | null;
|
||||||
designation?: string | null;
|
designation?: string | null;
|
||||||
phone?: string | null;
|
phone?: string | null;
|
||||||
|
|
||||||
|
// Extended fields from SSO/Okta (All Optional)
|
||||||
|
manager?: string | null; // Reporting manager name
|
||||||
|
secondEmail?: string | null; // Alternate email
|
||||||
|
jobTitle?: string | null; // Detailed job description (title field from Okta)
|
||||||
|
employeeNumber?: string | null; // HR system employee number (different from employeeId)
|
||||||
|
postalAddress?: string | null; // Work location/office address
|
||||||
|
mobilePhone?: string | null; // Mobile contact (different from phone)
|
||||||
|
adGroups?: string[] | null; // Active Directory group memberships
|
||||||
|
|
||||||
// Location Information (JSON object)
|
// Location Information (JSON object)
|
||||||
location?: {
|
location?: {
|
||||||
city?: string;
|
city?: string;
|
||||||
@ -21,13 +40,13 @@ interface UserAttributes {
|
|||||||
timezone?: string;
|
timezone?: string;
|
||||||
};
|
};
|
||||||
isActive: boolean;
|
isActive: boolean;
|
||||||
isAdmin: boolean;
|
role: UserRole; // RBAC: USER, MANAGEMENT, ADMIN
|
||||||
lastLogin?: Date;
|
lastLogin?: Date;
|
||||||
createdAt: Date;
|
createdAt: Date;
|
||||||
updatedAt: Date;
|
updatedAt: Date;
|
||||||
}
|
}
|
||||||
|
|
||||||
interface UserCreationAttributes extends Optional<UserAttributes, 'userId' | 'employeeId' | 'department' | 'designation' | 'phone' | 'lastLogin' | 'createdAt' | 'updatedAt'> {}
|
interface UserCreationAttributes extends Optional<UserAttributes, 'userId' | 'employeeId' | 'department' | 'designation' | 'phone' | 'manager' | 'secondEmail' | 'jobTitle' | 'employeeNumber' | 'postalAddress' | 'mobilePhone' | 'adGroups' | 'role' | 'lastLogin' | 'createdAt' | 'updatedAt'> {}
|
||||||
|
|
||||||
class User extends Model<UserAttributes, UserCreationAttributes> implements UserAttributes {
|
class User extends Model<UserAttributes, UserCreationAttributes> implements UserAttributes {
|
||||||
public userId!: string;
|
public userId!: string;
|
||||||
@ -40,6 +59,16 @@ class User extends Model<UserAttributes, UserCreationAttributes> implements User
|
|||||||
public department?: string;
|
public department?: string;
|
||||||
public designation?: string;
|
public designation?: string;
|
||||||
public phone?: string;
|
public phone?: string;
|
||||||
|
|
||||||
|
// Extended fields from SSO/Okta (All Optional)
|
||||||
|
public manager?: string | null;
|
||||||
|
public secondEmail?: string | null;
|
||||||
|
public jobTitle?: string | null;
|
||||||
|
public employeeNumber?: string | null;
|
||||||
|
public postalAddress?: string | null;
|
||||||
|
public mobilePhone?: string | null;
|
||||||
|
public adGroups?: string[] | null;
|
||||||
|
|
||||||
// Location Information (JSON object)
|
// Location Information (JSON object)
|
||||||
public location?: {
|
public location?: {
|
||||||
city?: string;
|
city?: string;
|
||||||
@ -49,12 +78,35 @@ class User extends Model<UserAttributes, UserCreationAttributes> implements User
|
|||||||
timezone?: string;
|
timezone?: string;
|
||||||
};
|
};
|
||||||
public isActive!: boolean;
|
public isActive!: boolean;
|
||||||
public isAdmin!: boolean;
|
public role!: UserRole; // RBAC: USER, MANAGEMENT, ADMIN
|
||||||
public lastLogin?: Date;
|
public lastLogin?: Date;
|
||||||
public createdAt!: Date;
|
public createdAt!: Date;
|
||||||
public updatedAt!: Date;
|
public updatedAt!: Date;
|
||||||
|
|
||||||
// Associations
|
// Associations
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Helper Methods for Role Checking
|
||||||
|
*/
|
||||||
|
public isUserRole(): boolean {
|
||||||
|
return this.role === 'USER';
|
||||||
|
}
|
||||||
|
|
||||||
|
public isManagementRole(): boolean {
|
||||||
|
return this.role === 'MANAGEMENT';
|
||||||
|
}
|
||||||
|
|
||||||
|
public isAdminRole(): boolean {
|
||||||
|
return this.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
public hasManagementAccess(): boolean {
|
||||||
|
return this.role === 'MANAGEMENT' || this.role === 'ADMIN';
|
||||||
|
}
|
||||||
|
|
||||||
|
public hasAdminAccess(): boolean {
|
||||||
|
return this.role === 'ADMIN';
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
User.init(
|
User.init(
|
||||||
@ -117,6 +169,53 @@ User.init(
|
|||||||
type: DataTypes.STRING(20),
|
type: DataTypes.STRING(20),
|
||||||
allowNull: true
|
allowNull: true
|
||||||
},
|
},
|
||||||
|
|
||||||
|
// ============ Extended SSO/Okta Fields (All Optional) ============
|
||||||
|
manager: {
|
||||||
|
type: DataTypes.STRING(200),
|
||||||
|
allowNull: true,
|
||||||
|
comment: 'Reporting manager name from SSO/AD'
|
||||||
|
},
|
||||||
|
secondEmail: {
|
||||||
|
type: DataTypes.STRING(255),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'second_email',
|
||||||
|
validate: {
|
||||||
|
isEmail: true
|
||||||
|
},
|
||||||
|
comment: 'Alternate email address from SSO'
|
||||||
|
},
|
||||||
|
jobTitle: {
|
||||||
|
type: DataTypes.TEXT,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'job_title',
|
||||||
|
comment: 'Detailed job title/description from SSO (e.g., "Manages dealers for MotorCycle Business...")'
|
||||||
|
},
|
||||||
|
employeeNumber: {
|
||||||
|
type: DataTypes.STRING(50),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'employee_number',
|
||||||
|
comment: 'HR system employee number from SSO (e.g., "00020330")'
|
||||||
|
},
|
||||||
|
postalAddress: {
|
||||||
|
type: DataTypes.STRING(500),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'postal_address',
|
||||||
|
comment: 'Work location/office address from SSO (e.g., "Kolkata", "Chennai")'
|
||||||
|
},
|
||||||
|
mobilePhone: {
|
||||||
|
type: DataTypes.STRING(20),
|
||||||
|
allowNull: true,
|
||||||
|
field: 'mobile_phone',
|
||||||
|
comment: 'Mobile contact number from SSO (mobilePhone field)'
|
||||||
|
},
|
||||||
|
adGroups: {
|
||||||
|
type: DataTypes.JSONB,
|
||||||
|
allowNull: true,
|
||||||
|
field: 'ad_groups',
|
||||||
|
comment: 'Active Directory group memberships from SSO (memberOf field) - JSON array'
|
||||||
|
},
|
||||||
|
|
||||||
// Location Information (JSON object)
|
// Location Information (JSON object)
|
||||||
location: {
|
location: {
|
||||||
type: DataTypes.JSONB, // Use JSONB for PostgreSQL
|
type: DataTypes.JSONB, // Use JSONB for PostgreSQL
|
||||||
@ -129,11 +228,11 @@ User.init(
|
|||||||
field: 'is_active',
|
field: 'is_active',
|
||||||
comment: 'Account status'
|
comment: 'Account status'
|
||||||
},
|
},
|
||||||
isAdmin: {
|
role: {
|
||||||
type: DataTypes.BOOLEAN,
|
type: DataTypes.ENUM('USER', 'MANAGEMENT', 'ADMIN'),
|
||||||
defaultValue: false,
|
allowNull: false,
|
||||||
field: 'is_admin',
|
defaultValue: 'USER',
|
||||||
comment: 'Super user flag'
|
comment: 'User role for access control: USER (default), MANAGEMENT (read all), ADMIN (full access)'
|
||||||
},
|
},
|
||||||
lastLogin: {
|
lastLogin: {
|
||||||
type: DataTypes.DATE,
|
type: DataTypes.DATE,
|
||||||
@ -178,11 +277,24 @@ User.init(
|
|||||||
{
|
{
|
||||||
fields: ['is_active']
|
fields: ['is_active']
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
fields: ['role'], // Index for role-based queries
|
||||||
|
name: 'idx_users_role'
|
||||||
|
},
|
||||||
|
{
|
||||||
|
fields: ['manager'], // Index for org chart queries
|
||||||
|
name: 'idx_users_manager'
|
||||||
|
},
|
||||||
|
{
|
||||||
|
fields: ['postal_address'], // Index for location-based filtering
|
||||||
|
name: 'idx_users_postal_address'
|
||||||
|
},
|
||||||
{
|
{
|
||||||
fields: ['location'],
|
fields: ['location'],
|
||||||
using: 'gin', // GIN index for JSONB queries
|
using: 'gin', // GIN index for JSONB queries
|
||||||
operator: 'jsonb_path_ops'
|
operator: 'jsonb_path_ops'
|
||||||
}
|
}
|
||||||
|
// Note: ad_groups GIN index is created in migration (can't be defined here)
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
);
|
);
|
||||||
|
|||||||
@ -12,6 +12,8 @@ import { WorkNote } from './WorkNote';
|
|||||||
import { WorkNoteAttachment } from './WorkNoteAttachment';
|
import { WorkNoteAttachment } from './WorkNoteAttachment';
|
||||||
import { TatAlert } from './TatAlert';
|
import { TatAlert } from './TatAlert';
|
||||||
import { Holiday } from './Holiday';
|
import { Holiday } from './Holiday';
|
||||||
|
import { Notification } from './Notification';
|
||||||
|
import ConclusionRemark from './ConclusionRemark';
|
||||||
|
|
||||||
// Define associations
|
// Define associations
|
||||||
const defineAssociations = () => {
|
const defineAssociations = () => {
|
||||||
@ -59,6 +61,23 @@ const defineAssociations = () => {
|
|||||||
sourceKey: 'requestId'
|
sourceKey: 'requestId'
|
||||||
});
|
});
|
||||||
|
|
||||||
|
WorkflowRequest.hasOne(ConclusionRemark, {
|
||||||
|
as: 'conclusion',
|
||||||
|
foreignKey: 'requestId',
|
||||||
|
sourceKey: 'requestId'
|
||||||
|
});
|
||||||
|
|
||||||
|
ConclusionRemark.belongsTo(WorkflowRequest, {
|
||||||
|
foreignKey: 'requestId',
|
||||||
|
targetKey: 'requestId'
|
||||||
|
});
|
||||||
|
|
||||||
|
ConclusionRemark.belongsTo(User, {
|
||||||
|
as: 'editor',
|
||||||
|
foreignKey: 'editedBy',
|
||||||
|
targetKey: 'userId'
|
||||||
|
});
|
||||||
|
|
||||||
// Note: belongsTo associations are defined in individual model files to avoid duplicate alias conflicts
|
// Note: belongsTo associations are defined in individual model files to avoid duplicate alias conflicts
|
||||||
// Only hasMany associations from WorkflowRequest are defined here since they're one-way
|
// Only hasMany associations from WorkflowRequest are defined here since they're one-way
|
||||||
};
|
};
|
||||||
@ -79,7 +98,9 @@ export {
|
|||||||
WorkNote,
|
WorkNote,
|
||||||
WorkNoteAttachment,
|
WorkNoteAttachment,
|
||||||
TatAlert,
|
TatAlert,
|
||||||
Holiday
|
Holiday,
|
||||||
|
Notification,
|
||||||
|
ConclusionRemark
|
||||||
};
|
};
|
||||||
|
|
||||||
// Export default sequelize instance
|
// Export default sequelize instance
|
||||||
|
|||||||
58
src/queues/redisConnection.ts
Normal file
58
src/queues/redisConnection.ts
Normal file
@ -0,0 +1,58 @@
|
|||||||
|
import IORedis from 'ioredis';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
|
||||||
|
const redisPassword = process.env.REDIS_PASSWORD || undefined;
|
||||||
|
|
||||||
|
const redisOptions: any = {
|
||||||
|
maxRetriesPerRequest: null, // Required for BullMQ
|
||||||
|
enableReadyCheck: false,
|
||||||
|
retryStrategy: (times: number) => {
|
||||||
|
if (times > 5) {
|
||||||
|
logger.error('[Redis] Connection failed after 5 attempts');
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
return Math.min(times * 2000, 10000);
|
||||||
|
},
|
||||||
|
connectTimeout: 30000,
|
||||||
|
commandTimeout: 20000,
|
||||||
|
keepAlive: 30000,
|
||||||
|
autoResubscribe: true,
|
||||||
|
autoResendUnfulfilledCommands: true
|
||||||
|
};
|
||||||
|
|
||||||
|
if (redisPassword) {
|
||||||
|
redisOptions.password = redisPassword;
|
||||||
|
logger.info('[Redis] Using password authentication');
|
||||||
|
}
|
||||||
|
|
||||||
|
let sharedConnection: IORedis | null = null;
|
||||||
|
|
||||||
|
// Create a SINGLE shared connection for both Queue and Worker
|
||||||
|
export const getSharedRedisConnection = (): IORedis => {
|
||||||
|
if (!sharedConnection) {
|
||||||
|
logger.info(`[Redis] Connecting to ${redisUrl}`);
|
||||||
|
|
||||||
|
sharedConnection = new IORedis(redisUrl, redisOptions);
|
||||||
|
|
||||||
|
sharedConnection.on('connect', () => {
|
||||||
|
logger.info(`[Redis] ✅ Connected successfully`);
|
||||||
|
});
|
||||||
|
|
||||||
|
sharedConnection.on('error', (err) => {
|
||||||
|
logger.error('[Redis] Connection error:', err.message);
|
||||||
|
});
|
||||||
|
|
||||||
|
sharedConnection.on('close', () => {
|
||||||
|
logger.warn('[Redis] Connection closed');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return sharedConnection;
|
||||||
|
};
|
||||||
|
|
||||||
|
// Export for backwards compatibility
|
||||||
|
export const sharedRedisConnection = getSharedRedisConnection();
|
||||||
|
|
||||||
|
export default sharedRedisConnection;
|
||||||
|
|
||||||
@ -6,6 +6,7 @@ import { TatAlert, TatAlertType } from '@models/TatAlert';
|
|||||||
import { activityService } from '@services/activity.service';
|
import { activityService } from '@services/activity.service';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
import dayjs from 'dayjs';
|
import dayjs from 'dayjs';
|
||||||
|
import { calculateElapsedWorkingHours, addWorkingHours, addWorkingHoursExpress } from '@utils/tatTimeUtils';
|
||||||
|
|
||||||
interface TatJobData {
|
interface TatJobData {
|
||||||
type: 'threshold1' | 'threshold2' | 'breach';
|
type: 'threshold1' | 'threshold2' | 'breach';
|
||||||
@ -21,17 +22,17 @@ interface TatJobData {
|
|||||||
export async function handleTatJob(job: Job<TatJobData>) {
|
export async function handleTatJob(job: Job<TatJobData>) {
|
||||||
const { requestId, levelId, approverId, type, threshold } = job.data;
|
const { requestId, levelId, approverId, type, threshold } = job.data;
|
||||||
|
|
||||||
try {
|
logger.info(`[TAT Processor] Processing ${type} (${threshold}%) for request ${requestId}`);
|
||||||
logger.info(`[TAT Processor] Processing ${type} for request ${requestId}, level ${levelId}`);
|
|
||||||
|
|
||||||
|
try {
|
||||||
// Get approval level and workflow details
|
// Get approval level and workflow details
|
||||||
const approvalLevel = await ApprovalLevel.findOne({
|
const approvalLevel = await ApprovalLevel.findOne({
|
||||||
where: { levelId }
|
where: { levelId }
|
||||||
});
|
});
|
||||||
|
|
||||||
if (!approvalLevel) {
|
if (!approvalLevel) {
|
||||||
logger.warn(`[TAT Processor] Approval level ${levelId} not found`);
|
logger.warn(`[TAT Processor] Approval level ${levelId} not found - likely already approved/rejected`);
|
||||||
return;
|
return; // Skip notification for non-existent level
|
||||||
}
|
}
|
||||||
|
|
||||||
// Check if level is still pending (not already approved/rejected)
|
// Check if level is still pending (not already approved/rejected)
|
||||||
@ -61,50 +62,74 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
|||||||
const tatHours = Number((approvalLevel as any).tatHours || 0);
|
const tatHours = Number((approvalLevel as any).tatHours || 0);
|
||||||
const levelStartTime = (approvalLevel as any).levelStartTime || (approvalLevel as any).createdAt;
|
const levelStartTime = (approvalLevel as any).levelStartTime || (approvalLevel as any).createdAt;
|
||||||
const now = new Date();
|
const now = new Date();
|
||||||
const elapsedMs = now.getTime() - new Date(levelStartTime).getTime();
|
|
||||||
const elapsedHours = elapsedMs / (1000 * 60 * 60);
|
// FIXED: Use proper working hours calculation instead of calendar hours
|
||||||
|
// This respects working hours (9 AM - 6 PM), excludes weekends for STANDARD priority, and excludes holidays
|
||||||
|
const priority = ((workflow as any).priority || 'STANDARD').toString().toLowerCase();
|
||||||
|
const elapsedHours = await calculateElapsedWorkingHours(levelStartTime, now, priority);
|
||||||
const remainingHours = Math.max(0, tatHours - elapsedHours);
|
const remainingHours = Math.max(0, tatHours - elapsedHours);
|
||||||
const expectedCompletionTime = dayjs(levelStartTime).add(tatHours, 'hour').toDate();
|
|
||||||
|
// Calculate expected completion time using proper working hours calculation
|
||||||
|
// EXPRESS: includes weekends but only during working hours
|
||||||
|
// STANDARD: excludes weekends and only during working hours
|
||||||
|
const expectedCompletionTime = priority === 'express'
|
||||||
|
? (await addWorkingHoursExpress(levelStartTime, tatHours)).toDate()
|
||||||
|
: (await addWorkingHours(levelStartTime, tatHours)).toDate();
|
||||||
|
|
||||||
switch (type) {
|
switch (type) {
|
||||||
case 'threshold1':
|
case 'threshold1':
|
||||||
emoji = '⏳';
|
emoji = '';
|
||||||
alertType = TatAlertType.TAT_50; // Keep enum for backwards compatibility
|
alertType = TatAlertType.TAT_50; // Keep enum for backwards compatibility
|
||||||
thresholdPercentage = threshold;
|
thresholdPercentage = threshold;
|
||||||
message = `${emoji} ${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}`;
|
message = `${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}`;
|
||||||
activityDetails = `${threshold}% of TAT time has elapsed`;
|
activityDetails = `${threshold}% of TAT time has elapsed`;
|
||||||
|
|
||||||
// Update TAT status in database
|
// Update TAT status in database with comprehensive tracking
|
||||||
await ApprovalLevel.update(
|
await ApprovalLevel.update(
|
||||||
{ tatPercentageUsed: threshold, tat50AlertSent: true },
|
{
|
||||||
|
tatPercentageUsed: threshold,
|
||||||
|
tat50AlertSent: true,
|
||||||
|
elapsedHours: elapsedHours,
|
||||||
|
remainingHours: remainingHours
|
||||||
|
},
|
||||||
{ where: { levelId } }
|
{ where: { levelId } }
|
||||||
);
|
);
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case 'threshold2':
|
case 'threshold2':
|
||||||
emoji = '⚠️';
|
emoji = '';
|
||||||
alertType = TatAlertType.TAT_75; // Keep enum for backwards compatibility
|
alertType = TatAlertType.TAT_75; // Keep enum for backwards compatibility
|
||||||
thresholdPercentage = threshold;
|
thresholdPercentage = threshold;
|
||||||
message = `${emoji} ${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}. Please take action soon.`;
|
message = `${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}. Please take action soon.`;
|
||||||
activityDetails = `${threshold}% of TAT time has elapsed - Escalation warning`;
|
activityDetails = `${threshold}% of TAT time has elapsed - Escalation warning`;
|
||||||
|
|
||||||
// Update TAT status in database
|
// Update TAT status in database with comprehensive tracking
|
||||||
await ApprovalLevel.update(
|
await ApprovalLevel.update(
|
||||||
{ tatPercentageUsed: threshold, tat75AlertSent: true },
|
{
|
||||||
|
tatPercentageUsed: threshold,
|
||||||
|
tat75AlertSent: true,
|
||||||
|
elapsedHours: elapsedHours,
|
||||||
|
remainingHours: remainingHours
|
||||||
|
},
|
||||||
{ where: { levelId } }
|
{ where: { levelId } }
|
||||||
);
|
);
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case 'breach':
|
case 'breach':
|
||||||
emoji = '⏰';
|
emoji = '';
|
||||||
alertType = TatAlertType.TAT_100;
|
alertType = TatAlertType.TAT_100;
|
||||||
thresholdPercentage = 100;
|
thresholdPercentage = 100;
|
||||||
message = `${emoji} TAT breached for Request ${requestNumber}: ${title}. Immediate action required!`;
|
message = `TAT breached for Request ${requestNumber}: ${title}. Immediate action required!`;
|
||||||
activityDetails = 'TAT deadline reached - Breach notification';
|
activityDetails = 'TAT deadline reached - Breach notification';
|
||||||
|
|
||||||
// Update TAT status in database
|
// Update TAT status in database with comprehensive tracking
|
||||||
await ApprovalLevel.update(
|
await ApprovalLevel.update(
|
||||||
{ tatPercentageUsed: 100, tatBreached: true },
|
{
|
||||||
|
tatPercentageUsed: 100,
|
||||||
|
tatBreached: true,
|
||||||
|
elapsedHours: elapsedHours,
|
||||||
|
remainingHours: 0 // No time remaining after breach
|
||||||
|
},
|
||||||
{ where: { levelId } }
|
{ where: { levelId } }
|
||||||
);
|
);
|
||||||
break;
|
break;
|
||||||
@ -126,7 +151,7 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
|||||||
expectedCompletionTime,
|
expectedCompletionTime,
|
||||||
alertMessage: message,
|
alertMessage: message,
|
||||||
notificationSent: true,
|
notificationSent: true,
|
||||||
notificationChannels: ['push'], // Can add 'email', 'sms' if implemented
|
notificationChannels: ['push'],
|
||||||
isBreached: type === 'breach',
|
isBreached: type === 'breach',
|
||||||
metadata: {
|
metadata: {
|
||||||
requestNumber,
|
requestNumber,
|
||||||
@ -140,12 +165,17 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
|||||||
}
|
}
|
||||||
} as any);
|
} as any);
|
||||||
|
|
||||||
logger.info(`[TAT Processor] TAT alert record created for ${type}`);
|
logger.info(`[TAT Processor] ✅ Alert created: ${type} (${threshold}%)`);
|
||||||
} catch (alertError) {
|
} catch (alertError: any) {
|
||||||
logger.error(`[TAT Processor] Failed to create TAT alert record:`, alertError);
|
logger.error(`[TAT Processor] ❌ Alert creation failed for ${type}: ${alertError.message}`);
|
||||||
// Don't fail the notification if alert logging fails
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Determine notification priority based on TAT threshold
|
||||||
|
const notificationPriority =
|
||||||
|
type === 'breach' ? 'URGENT' :
|
||||||
|
type === 'threshold2' ? 'HIGH' :
|
||||||
|
'MEDIUM';
|
||||||
|
|
||||||
// Send notification to approver
|
// Send notification to approver
|
||||||
await notificationService.sendToUsers([approverId], {
|
await notificationService.sendToUsers([approverId], {
|
||||||
title: type === 'breach' ? 'TAT Breach Alert' : 'TAT Reminder',
|
title: type === 'breach' ? 'TAT Breach Alert' : 'TAT Reminder',
|
||||||
@ -153,20 +183,73 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
|||||||
requestId,
|
requestId,
|
||||||
requestNumber,
|
requestNumber,
|
||||||
url: `/request/${requestNumber}`,
|
url: `/request/${requestNumber}`,
|
||||||
type: type
|
type: type,
|
||||||
|
priority: notificationPriority,
|
||||||
|
actionRequired: type === 'breach' || type === 'threshold2' // Require action for critical alerts
|
||||||
});
|
});
|
||||||
|
|
||||||
// Log activity
|
// If breached, also notify the initiator (workflow creator)
|
||||||
await activityService.log({
|
if (type === 'breach') {
|
||||||
requestId,
|
const initiatorId = (workflow as any).initiatorId;
|
||||||
type: 'sla_warning',
|
if (initiatorId && initiatorId !== approverId) {
|
||||||
user: { userId: 'system', name: 'System' },
|
await notificationService.sendToUsers([initiatorId], {
|
||||||
timestamp: new Date().toISOString(),
|
title: 'TAT Breach - Request Delayed',
|
||||||
action: type === 'breach' ? 'TAT Breached' : 'TAT Warning',
|
body: `Your request ${requestNumber}: "${title}" has exceeded its TAT. The approver has been notified.`,
|
||||||
details: activityDetails
|
requestId,
|
||||||
});
|
requestNumber,
|
||||||
|
url: `/request/${requestNumber}`,
|
||||||
|
type: 'tat_breach_initiator',
|
||||||
|
priority: 'HIGH',
|
||||||
|
actionRequired: false
|
||||||
|
});
|
||||||
|
logger.info(`[TAT Processor] Breach notification sent to initiator ${initiatorId}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
logger.info(`[TAT Processor] ${type} notification sent for request ${requestId}`);
|
// Log activity (skip if it fails - don't break the TAT notification)
|
||||||
|
try {
|
||||||
|
await activityService.log({
|
||||||
|
requestId,
|
||||||
|
type: 'sla_warning',
|
||||||
|
user: { userId: null as any, name: 'System' }, // Use null instead of 'system' for UUID field
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: type === 'breach' ? 'TAT Breached' : 'TAT Warning',
|
||||||
|
details: activityDetails
|
||||||
|
});
|
||||||
|
logger.info(`[TAT Processor] Activity logged for ${type}`);
|
||||||
|
} catch (activityError: any) {
|
||||||
|
logger.warn(`[TAT Processor] Failed to log activity (non-critical):`, activityError.message);
|
||||||
|
// Continue - activity logging failure shouldn't break TAT notification
|
||||||
|
}
|
||||||
|
|
||||||
|
// 🔥 CRITICAL: Emit TAT alert to frontend via socket.io for real-time updates
|
||||||
|
try {
|
||||||
|
const { emitToRequestRoom } = require('../realtime/socket');
|
||||||
|
if (emitToRequestRoom) {
|
||||||
|
// Fetch the newly created alert to send complete data to frontend
|
||||||
|
const newAlert = await TatAlert.findOne({
|
||||||
|
where: { requestId, levelId, alertType },
|
||||||
|
order: [['createdAt', 'DESC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
if (newAlert) {
|
||||||
|
emitToRequestRoom(requestId, 'tat:alert', {
|
||||||
|
alert: newAlert,
|
||||||
|
requestId,
|
||||||
|
levelId,
|
||||||
|
type,
|
||||||
|
thresholdPercentage,
|
||||||
|
message
|
||||||
|
});
|
||||||
|
logger.info(`[TAT Processor] ✅ TAT alert emitted to frontend via socket.io for request ${requestId}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} catch (socketError) {
|
||||||
|
logger.error(`[TAT Processor] Failed to emit TAT alert via socket:`, socketError);
|
||||||
|
// Don't fail the job if socket emission fails
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.info(`[TAT Processor] ✅ ${type} notification sent for request ${requestId}`);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error(`[TAT Processor] Failed to process ${type} job:`, error);
|
logger.error(`[TAT Processor] Failed to process ${type} job:`, error);
|
||||||
throw error; // Re-throw to trigger retry
|
throw error; // Re-throw to trigger retry
|
||||||
|
|||||||
@ -1,61 +1,31 @@
|
|||||||
import { Queue } from 'bullmq';
|
import { Queue } from 'bullmq';
|
||||||
import IORedis from 'ioredis';
|
import { sharedRedisConnection } from './redisConnection';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
// Create Redis connection
|
|
||||||
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
|
|
||||||
let connection: IORedis | null = null;
|
|
||||||
let tatQueue: Queue | null = null;
|
let tatQueue: Queue | null = null;
|
||||||
|
|
||||||
try {
|
try {
|
||||||
connection = new IORedis(redisUrl, {
|
// Use shared Redis connection for both Queue and Worker
|
||||||
maxRetriesPerRequest: null, // Required for BullMQ
|
tatQueue = new Queue('tatQueue', {
|
||||||
enableReadyCheck: false,
|
connection: sharedRedisConnection,
|
||||||
lazyConnect: true, // Don't connect immediately
|
defaultJobOptions: {
|
||||||
retryStrategy: (times) => {
|
removeOnComplete: true,
|
||||||
if (times > 3) {
|
removeOnFail: false,
|
||||||
logger.warn('[TAT Queue] Redis connection failed after 3 attempts. TAT notifications will be disabled.');
|
attempts: 2,
|
||||||
return null; // Stop retrying
|
backoff: {
|
||||||
|
type: 'fixed',
|
||||||
|
delay: 5000
|
||||||
}
|
}
|
||||||
return Math.min(times * 1000, 3000);
|
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
// Handle connection events
|
tatQueue.on('error', (error) => {
|
||||||
connection.on('connect', () => {
|
logger.error('[TAT Queue] Queue error:', error);
|
||||||
logger.info('[TAT Queue] Connected to Redis');
|
|
||||||
});
|
});
|
||||||
|
|
||||||
connection.on('error', (err) => {
|
logger.info('[TAT Queue] ✅ Queue initialized');
|
||||||
logger.warn('[TAT Queue] Redis connection error - TAT notifications disabled:', err.message);
|
|
||||||
});
|
|
||||||
|
|
||||||
// Try to connect
|
|
||||||
connection.connect().then(() => {
|
|
||||||
logger.info('[TAT Queue] Redis connection established');
|
|
||||||
}).catch((err) => {
|
|
||||||
logger.warn('[TAT Queue] Could not connect to Redis. TAT notifications will be disabled.', err.message);
|
|
||||||
connection = null;
|
|
||||||
});
|
|
||||||
|
|
||||||
// Create TAT Queue only if connection is available
|
|
||||||
if (connection) {
|
|
||||||
tatQueue = new Queue('tatQueue', {
|
|
||||||
connection,
|
|
||||||
defaultJobOptions: {
|
|
||||||
removeOnComplete: true, // Clean up completed jobs
|
|
||||||
removeOnFail: false, // Keep failed jobs for debugging
|
|
||||||
attempts: 3, // Retry failed jobs up to 3 times
|
|
||||||
backoff: {
|
|
||||||
type: 'exponential',
|
|
||||||
delay: 2000 // Start with 2 second delay
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
logger.info('[TAT Queue] Queue initialized');
|
|
||||||
}
|
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.warn('[TAT Queue] Failed to initialize TAT queue. TAT notifications will be disabled.', error);
|
logger.error('[TAT Queue] Failed to initialize:', error);
|
||||||
tatQueue = null;
|
tatQueue = null;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@ -1,74 +1,44 @@
|
|||||||
import { Worker } from 'bullmq';
|
import { Worker } from 'bullmq';
|
||||||
import IORedis from 'ioredis';
|
import { sharedRedisConnection } from './redisConnection';
|
||||||
import { handleTatJob } from './tatProcessor';
|
import { handleTatJob } from './tatProcessor';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
// Create Redis connection for worker
|
|
||||||
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
|
|
||||||
let connection: IORedis | null = null;
|
|
||||||
let tatWorker: Worker | null = null;
|
let tatWorker: Worker | null = null;
|
||||||
|
|
||||||
try {
|
try {
|
||||||
connection = new IORedis(redisUrl, {
|
tatWorker = new Worker('tatQueue', handleTatJob, {
|
||||||
maxRetriesPerRequest: null,
|
connection: sharedRedisConnection,
|
||||||
enableReadyCheck: false,
|
concurrency: 5,
|
||||||
lazyConnect: true,
|
autorun: true,
|
||||||
retryStrategy: (times) => {
|
limiter: {
|
||||||
if (times > 3) {
|
max: 10,
|
||||||
logger.warn('[TAT Worker] Redis connection failed. TAT worker will not start.');
|
duration: 1000
|
||||||
return null;
|
|
||||||
}
|
|
||||||
return Math.min(times * 1000, 3000);
|
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
// Try to connect and create worker
|
if (tatWorker) {
|
||||||
connection.connect().then(() => {
|
|
||||||
logger.info('[TAT Worker] Connected to Redis');
|
|
||||||
|
|
||||||
// Create TAT Worker
|
|
||||||
tatWorker = new Worker('tatQueue', handleTatJob, {
|
|
||||||
connection: connection!,
|
|
||||||
concurrency: 5, // Process up to 5 jobs concurrently
|
|
||||||
limiter: {
|
|
||||||
max: 10, // Maximum 10 jobs
|
|
||||||
duration: 1000 // per second
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
// Event listeners
|
|
||||||
tatWorker.on('ready', () => {
|
tatWorker.on('ready', () => {
|
||||||
logger.info('[TAT Worker] Worker is ready and listening for jobs');
|
logger.info('[TAT Worker] ✅ Ready and listening for TAT jobs');
|
||||||
});
|
});
|
||||||
|
|
||||||
|
tatWorker.on('active', (job) => {
|
||||||
|
logger.info(`[TAT Worker] Processing: ${job.name} for request ${job.data.requestId}`);
|
||||||
|
});
|
||||||
|
|
||||||
tatWorker.on('completed', (job) => {
|
tatWorker.on('completed', (job) => {
|
||||||
logger.info(`[TAT Worker] ✅ Job ${job.id} (${job.name}) completed for request ${job.data.requestId}`);
|
logger.info(`[TAT Worker] Completed: ${job.name}`);
|
||||||
});
|
});
|
||||||
|
|
||||||
tatWorker.on('failed', (job, err) => {
|
tatWorker.on('failed', (job, err) => {
|
||||||
if (job) {
|
logger.error(`[TAT Worker] Failed: ${job?.name}`, err.message);
|
||||||
logger.error(`[TAT Worker] ❌ Job ${job.id} (${job.name}) failed for request ${job.data.requestId}:`, err);
|
|
||||||
} else {
|
|
||||||
logger.error('[TAT Worker] ❌ Job failed:', err);
|
|
||||||
}
|
|
||||||
});
|
});
|
||||||
|
|
||||||
tatWorker.on('error', (err) => {
|
tatWorker.on('error', (err) => {
|
||||||
logger.warn('[TAT Worker] Worker error:', err.message);
|
logger.error('[TAT Worker] Error:', err.message);
|
||||||
});
|
});
|
||||||
|
}
|
||||||
tatWorker.on('stalled', (jobId) => {
|
} catch (workerError: any) {
|
||||||
logger.warn(`[TAT Worker] Job ${jobId} has stalled`);
|
logger.error('[TAT Worker] Failed to create worker:', workerError);
|
||||||
});
|
|
||||||
|
|
||||||
logger.info('[TAT Worker] Worker initialized and listening for TAT jobs');
|
|
||||||
}).catch((err) => {
|
|
||||||
logger.warn('[TAT Worker] Could not connect to Redis. TAT worker will not start. TAT notifications are disabled.', err.message);
|
|
||||||
connection = null;
|
|
||||||
tatWorker = null;
|
|
||||||
});
|
|
||||||
} catch (error) {
|
|
||||||
logger.warn('[TAT Worker] Failed to initialize TAT worker. TAT notifications will be disabled.', error);
|
|
||||||
tatWorker = null;
|
tatWorker = null;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -78,9 +48,6 @@ process.on('SIGTERM', async () => {
|
|||||||
logger.info('[TAT Worker] SIGTERM received, closing worker...');
|
logger.info('[TAT Worker] SIGTERM received, closing worker...');
|
||||||
await tatWorker.close();
|
await tatWorker.close();
|
||||||
}
|
}
|
||||||
if (connection) {
|
|
||||||
await connection.quit();
|
|
||||||
}
|
|
||||||
});
|
});
|
||||||
|
|
||||||
process.on('SIGINT', async () => {
|
process.on('SIGINT', async () => {
|
||||||
@ -88,10 +55,6 @@ process.on('SIGINT', async () => {
|
|||||||
logger.info('[TAT Worker] SIGINT received, closing worker...');
|
logger.info('[TAT Worker] SIGINT received, closing worker...');
|
||||||
await tatWorker.close();
|
await tatWorker.close();
|
||||||
}
|
}
|
||||||
if (connection) {
|
|
||||||
await connection.quit();
|
|
||||||
}
|
|
||||||
});
|
});
|
||||||
|
|
||||||
export { tatWorker };
|
export { tatWorker };
|
||||||
|
|
||||||
|
|||||||
@ -29,6 +29,14 @@ export function initSocket(httpServer: any) {
|
|||||||
let currentRequestId: string | null = null;
|
let currentRequestId: string | null = null;
|
||||||
let currentUserId: string | null = null;
|
let currentUserId: string | null = null;
|
||||||
|
|
||||||
|
// Join user's personal notification room
|
||||||
|
socket.on('join:user', (data: { userId: string }) => {
|
||||||
|
const userId = typeof data === 'string' ? data : data.userId;
|
||||||
|
socket.join(`user:${userId}`);
|
||||||
|
currentUserId = userId;
|
||||||
|
console.log(`[Socket] User ${userId} joined personal notification room`);
|
||||||
|
});
|
||||||
|
|
||||||
socket.on('join:request', (data: { requestId: string; userId?: string }) => {
|
socket.on('join:request', (data: { requestId: string; userId?: string }) => {
|
||||||
const requestId = typeof data === 'string' ? data : data.requestId;
|
const requestId = typeof data === 'string' ? data : data.requestId;
|
||||||
const userId = typeof data === 'object' ? data.userId : null;
|
const userId = typeof data === 'object' ? data.userId : null;
|
||||||
@ -99,4 +107,10 @@ export function emitToRequestRoom(requestId: string, event: string, payload: any
|
|||||||
io.to(`request:${requestId}`).emit(event, payload);
|
io.to(`request:${requestId}`).emit(event, payload);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
export function emitToUser(userId: string, event: string, payload: any) {
|
||||||
|
if (!io) return;
|
||||||
|
io.to(`user:${userId}`).emit(event, payload);
|
||||||
|
console.log(`[Socket] Emitted '${event}' to user ${userId}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@ -10,7 +10,11 @@ import {
|
|||||||
bulkImportHolidays,
|
bulkImportHolidays,
|
||||||
getAllConfigurations,
|
getAllConfigurations,
|
||||||
updateConfiguration,
|
updateConfiguration,
|
||||||
resetConfiguration
|
resetConfiguration,
|
||||||
|
updateUserRole,
|
||||||
|
getUsersByRole,
|
||||||
|
getRoleStatistics,
|
||||||
|
assignRoleByEmail
|
||||||
} from '@controllers/admin.controller';
|
} from '@controllers/admin.controller';
|
||||||
|
|
||||||
const router = Router();
|
const router = Router();
|
||||||
@ -97,5 +101,39 @@ router.put('/configurations/:configKey', updateConfiguration);
|
|||||||
*/
|
*/
|
||||||
router.post('/configurations/:configKey/reset', resetConfiguration);
|
router.post('/configurations/:configKey/reset', resetConfiguration);
|
||||||
|
|
||||||
|
// ==================== User Role Management Routes (RBAC) ====================
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route POST /api/admin/users/assign-role
|
||||||
|
* @desc Assign role to user by email (creates user from Okta if doesn't exist)
|
||||||
|
* @body { email: string, role: 'USER' | 'MANAGEMENT' | 'ADMIN' }
|
||||||
|
* @access Admin
|
||||||
|
*/
|
||||||
|
router.post('/users/assign-role', assignRoleByEmail);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route PUT /api/admin/users/:userId/role
|
||||||
|
* @desc Update user's role (USER, MANAGEMENT, ADMIN)
|
||||||
|
* @params userId
|
||||||
|
* @body { role: 'USER' | 'MANAGEMENT' | 'ADMIN' }
|
||||||
|
* @access Admin
|
||||||
|
*/
|
||||||
|
router.put('/users/:userId/role', updateUserRole);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route GET /api/admin/users/by-role
|
||||||
|
* @desc Get all users filtered by role
|
||||||
|
* @query role (optional): ADMIN | MANAGEMENT | USER
|
||||||
|
* @access Admin
|
||||||
|
*/
|
||||||
|
router.get('/users/by-role', getUsersByRole);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route GET /api/admin/users/role-statistics
|
||||||
|
* @desc Get count of users in each role
|
||||||
|
* @access Admin
|
||||||
|
*/
|
||||||
|
router.get('/users/role-statistics', getRoleStatistics);
|
||||||
|
|
||||||
export default router;
|
export default router;
|
||||||
|
|
||||||
|
|||||||
76
src/routes/ai.routes.ts
Normal file
76
src/routes/ai.routes.ts
Normal file
@ -0,0 +1,76 @@
|
|||||||
|
import { Router, Request, Response } from 'express';
|
||||||
|
import { aiService } from '@services/ai.service';
|
||||||
|
import { authenticateToken } from '../middlewares/auth.middleware';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route GET /api/v1/ai/status
|
||||||
|
* @desc Get AI service status
|
||||||
|
* @access Private (Admin only)
|
||||||
|
*/
|
||||||
|
router.get('/status', authenticateToken, async (req: Request, res: Response) => {
|
||||||
|
try {
|
||||||
|
const isAvailable = aiService.isAvailable();
|
||||||
|
const provider = aiService.getProviderName();
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
data: {
|
||||||
|
available: isAvailable,
|
||||||
|
provider: provider,
|
||||||
|
status: isAvailable ? 'active' : 'unavailable'
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[AI Routes] Error getting status:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to get AI status'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route POST /api/v1/ai/reinitialize
|
||||||
|
* @desc Reinitialize AI service (after config change)
|
||||||
|
* @access Private (Admin only)
|
||||||
|
*/
|
||||||
|
router.post('/reinitialize', authenticateToken, async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
// Check if user is admin
|
||||||
|
const userRole = (req as any).user?.role;
|
||||||
|
const isAdmin = userRole?.toLowerCase() === 'admin';
|
||||||
|
if (!isAdmin) {
|
||||||
|
res.status(403).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Only admins can reinitialize AI service'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
await aiService.reinitialize();
|
||||||
|
|
||||||
|
const isAvailable = aiService.isAvailable();
|
||||||
|
const provider = aiService.getProviderName();
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: 'AI service reinitialized successfully',
|
||||||
|
data: {
|
||||||
|
available: isAvailable,
|
||||||
|
provider: provider
|
||||||
|
}
|
||||||
|
});
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[AI Routes] Error reinitializing:', error);
|
||||||
|
res.status(500).json({
|
||||||
|
success: false,
|
||||||
|
error: 'Failed to reinitialize AI service'
|
||||||
|
});
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
|
||||||
47
src/routes/conclusion.routes.ts
Normal file
47
src/routes/conclusion.routes.ts
Normal file
@ -0,0 +1,47 @@
|
|||||||
|
import { Router } from 'express';
|
||||||
|
import { conclusionController } from '@controllers/conclusion.controller';
|
||||||
|
import { authenticateToken } from '../middlewares/auth.middleware';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
|
||||||
|
// All routes require authentication
|
||||||
|
router.use(authenticateToken);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route POST /api/v1/conclusions/:requestId/generate
|
||||||
|
* @desc Generate AI-powered conclusion remark
|
||||||
|
* @access Private (Initiator only)
|
||||||
|
*/
|
||||||
|
router.post('/:requestId/generate', (req, res) =>
|
||||||
|
conclusionController.generateConclusion(req, res)
|
||||||
|
);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route PUT /api/v1/conclusions/:requestId
|
||||||
|
* @desc Update conclusion remark (edit by initiator)
|
||||||
|
* @access Private (Initiator only)
|
||||||
|
*/
|
||||||
|
router.put('/:requestId', (req, res) =>
|
||||||
|
conclusionController.updateConclusion(req, res)
|
||||||
|
);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route POST /api/v1/conclusions/:requestId/finalize
|
||||||
|
* @desc Finalize conclusion and close request
|
||||||
|
* @access Private (Initiator only)
|
||||||
|
*/
|
||||||
|
router.post('/:requestId/finalize', (req, res) =>
|
||||||
|
conclusionController.finalizeConclusion(req, res)
|
||||||
|
);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @route GET /api/v1/conclusions/:requestId
|
||||||
|
* @desc Get conclusion for a request
|
||||||
|
* @access Private
|
||||||
|
*/
|
||||||
|
router.get('/:requestId', (req, res) =>
|
||||||
|
conclusionController.getConclusion(req, res)
|
||||||
|
);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
|
||||||
@ -11,7 +11,7 @@ const router = Router();
|
|||||||
*/
|
*/
|
||||||
router.get('/',
|
router.get('/',
|
||||||
asyncHandler(async (req: Request, res: Response): Promise<void> => {
|
asyncHandler(async (req: Request, res: Response): Promise<void> => {
|
||||||
const config = getPublicConfig();
|
const config = await getPublicConfig();
|
||||||
res.json({
|
res.json({
|
||||||
success: true,
|
success: true,
|
||||||
data: config
|
data: config
|
||||||
|
|||||||
112
src/routes/dashboard.routes.ts
Normal file
112
src/routes/dashboard.routes.ts
Normal file
@ -0,0 +1,112 @@
|
|||||||
|
import { Router } from 'express';
|
||||||
|
import type { Request, Response } from 'express';
|
||||||
|
import { DashboardController } from '../controllers/dashboard.controller';
|
||||||
|
import { authenticateToken } from '../middlewares/auth.middleware';
|
||||||
|
import { asyncHandler } from '../middlewares/errorHandler.middleware';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
const dashboardController = new DashboardController();
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Dashboard Routes
|
||||||
|
* All routes require authentication
|
||||||
|
*/
|
||||||
|
|
||||||
|
// Get KPI summary (all KPI cards)
|
||||||
|
router.get('/kpis',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getKPIs.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get detailed request statistics
|
||||||
|
router.get('/stats/requests',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getRequestStats.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get TAT efficiency metrics
|
||||||
|
router.get('/stats/tat-efficiency',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getTATEfficiency.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get approver load statistics
|
||||||
|
router.get('/stats/approver-load',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getApproverLoad.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get engagement & quality metrics
|
||||||
|
router.get('/stats/engagement',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getEngagementStats.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get AI & closure insights
|
||||||
|
router.get('/stats/ai-insights',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getAIInsights.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get AI Remark Utilization with monthly trends
|
||||||
|
router.get('/stats/ai-remark-utilization',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getAIRemarkUtilization.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get Approver Performance metrics
|
||||||
|
router.get('/stats/approver-performance',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getApproverPerformance.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get recent activity feed
|
||||||
|
router.get('/activity/recent',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getRecentActivity.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get high priority/critical requests
|
||||||
|
router.get('/requests/critical',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getCriticalRequests.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get upcoming deadlines
|
||||||
|
router.get('/deadlines/upcoming',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getUpcomingDeadlines.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get department-wise summary
|
||||||
|
router.get('/stats/by-department',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getDepartmentStats.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get priority distribution
|
||||||
|
router.get('/stats/priority-distribution',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getPriorityDistribution.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get Request Lifecycle Report
|
||||||
|
router.get('/reports/lifecycle',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getLifecycleReport.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get enhanced User Activity Log Report
|
||||||
|
router.get('/reports/activity-log',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getActivityLogReport.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get Workflow Aging Report
|
||||||
|
router.get('/reports/workflow-aging',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(dashboardController.getWorkflowAgingReport.bind(dashboardController))
|
||||||
|
);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
|
||||||
@ -1,30 +1,356 @@
|
|||||||
import { Router } from 'express';
|
import { Router, Request, Response } from 'express';
|
||||||
import { authenticateToken } from '@middlewares/auth.middleware';
|
import { tatQueue } from '../queues/tatQueue';
|
||||||
import {
|
import { tatWorker } from '../queues/tatWorker';
|
||||||
checkTatSystemStatus,
|
import { TatAlert } from '@models/TatAlert';
|
||||||
checkWorkflowDetailsResponse
|
import { ApprovalLevel } from '@models/ApprovalLevel';
|
||||||
} from '@controllers/debug.controller';
|
import dayjs from 'dayjs';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
const router = Router();
|
const router = Router();
|
||||||
|
|
||||||
// Debug routes (should be disabled in production)
|
/**
|
||||||
if (process.env.NODE_ENV !== 'production') {
|
* Debug endpoint to check scheduled TAT jobs in the queue
|
||||||
router.use(authenticateToken);
|
*/
|
||||||
|
router.get('/tat-jobs/:requestId', async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const { requestId } = req.params;
|
||||||
|
|
||||||
|
if (!tatQueue) {
|
||||||
|
res.json({
|
||||||
|
error: 'TAT queue not available (Redis not connected)',
|
||||||
|
jobs: []
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
/**
|
// Get all jobs for this request
|
||||||
* @route GET /api/debug/tat-status
|
const waitingJobs = await tatQueue.getJobs(['waiting', 'delayed', 'active']);
|
||||||
* @desc Check TAT system configuration and status
|
const requestJobs = waitingJobs.filter(job => job.data.requestId === requestId);
|
||||||
* @access Private
|
|
||||||
*/
|
|
||||||
router.get('/tat-status', checkTatSystemStatus);
|
|
||||||
|
|
||||||
/**
|
const jobDetails = requestJobs.map(job => {
|
||||||
* @route GET /api/debug/workflow-details/:requestId
|
const delay = job.opts.delay || 0;
|
||||||
* @desc Check what's in workflow details response
|
const scheduledTime = job.timestamp ? new Date(job.timestamp + delay) : null;
|
||||||
* @access Private
|
const now = new Date();
|
||||||
*/
|
const timeUntilFire = scheduledTime ? Math.round((scheduledTime.getTime() - now.getTime()) / 1000 / 60) : null;
|
||||||
router.get('/workflow-details/:requestId', checkWorkflowDetailsResponse);
|
|
||||||
}
|
return {
|
||||||
|
jobId: job.id,
|
||||||
|
type: job.data.type,
|
||||||
|
threshold: job.data.threshold,
|
||||||
|
requestId: job.data.requestId,
|
||||||
|
levelId: job.data.levelId,
|
||||||
|
state: job.getState(),
|
||||||
|
delay: delay,
|
||||||
|
delayMinutes: Math.round(delay / 1000 / 60),
|
||||||
|
delayHours: (delay / 1000 / 60 / 60).toFixed(2),
|
||||||
|
timestamp: job.timestamp,
|
||||||
|
scheduledTime: scheduledTime?.toISOString(),
|
||||||
|
timeUntilFire: timeUntilFire ? `${timeUntilFire} minutes` : 'N/A',
|
||||||
|
processedOn: job.processedOn ? new Date(job.processedOn).toISOString() : null,
|
||||||
|
finishedOn: job.finishedOn ? new Date(job.finishedOn).toISOString() : null
|
||||||
|
};
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get TAT alerts from database
|
||||||
|
const alerts = await TatAlert.findAll({
|
||||||
|
where: { requestId },
|
||||||
|
order: [['alertSentAt', 'ASC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
const alertDetails = alerts.map((alert: any) => ({
|
||||||
|
alertType: alert.alertType,
|
||||||
|
thresholdPercentage: alert.thresholdPercentage,
|
||||||
|
alertSentAt: alert.alertSentAt,
|
||||||
|
levelStartTime: alert.levelStartTime,
|
||||||
|
timeSinceStart: alert.levelStartTime
|
||||||
|
? `${((new Date(alert.alertSentAt).getTime() - new Date(alert.levelStartTime).getTime()) / 1000 / 60 / 60).toFixed(2)} hours`
|
||||||
|
: 'N/A',
|
||||||
|
notificationSent: alert.notificationSent
|
||||||
|
}));
|
||||||
|
|
||||||
|
// Get approval level details
|
||||||
|
const levels = await ApprovalLevel.findAll({
|
||||||
|
where: { requestId }
|
||||||
|
});
|
||||||
|
|
||||||
|
const levelDetails = levels.map((level: any) => ({
|
||||||
|
levelId: level.levelId,
|
||||||
|
levelNumber: level.levelNumber,
|
||||||
|
status: level.status,
|
||||||
|
tatHours: level.tatHours,
|
||||||
|
levelStartTime: level.levelStartTime,
|
||||||
|
tat50AlertSent: level.tat50AlertSent,
|
||||||
|
tat75AlertSent: level.tat75AlertSent,
|
||||||
|
tatBreached: level.tatBreached,
|
||||||
|
tatPercentageUsed: level.tatPercentageUsed
|
||||||
|
}));
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
requestId,
|
||||||
|
currentTime: new Date().toISOString(),
|
||||||
|
queuedJobs: jobDetails,
|
||||||
|
jobCount: jobDetails.length,
|
||||||
|
sentAlerts: alertDetails,
|
||||||
|
alertCount: alertDetails.length,
|
||||||
|
approvalLevels: levelDetails,
|
||||||
|
testMode: process.env.TAT_TEST_MODE === 'true'
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Debug] Error checking TAT jobs:', error);
|
||||||
|
res.status(500).json({ error: error.message });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Debug endpoint to check all queued TAT jobs
|
||||||
|
*/
|
||||||
|
router.get('/tat-jobs', async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
if (!tatQueue) {
|
||||||
|
res.json({
|
||||||
|
error: 'TAT queue not available (Redis not connected)',
|
||||||
|
jobs: []
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const waitingJobs = await tatQueue.getJobs(['waiting', 'delayed', 'active']);
|
||||||
|
|
||||||
|
const jobDetails = waitingJobs.map(job => {
|
||||||
|
const delay = job.opts.delay || 0;
|
||||||
|
const scheduledTime = job.timestamp ? new Date(job.timestamp + delay) : null;
|
||||||
|
const now = new Date();
|
||||||
|
const timeUntilFire = scheduledTime ? Math.round((scheduledTime.getTime() - now.getTime()) / 1000 / 60) : null;
|
||||||
|
|
||||||
|
return {
|
||||||
|
jobId: job.id,
|
||||||
|
type: job.data.type,
|
||||||
|
threshold: job.data.threshold,
|
||||||
|
requestId: job.data.requestId,
|
||||||
|
levelId: job.data.levelId,
|
||||||
|
state: job.getState(),
|
||||||
|
delay: delay,
|
||||||
|
delayMinutes: Math.round(delay / 1000 / 60),
|
||||||
|
delayHours: (delay / 1000 / 60 / 60).toFixed(2),
|
||||||
|
scheduledTime: scheduledTime?.toISOString(),
|
||||||
|
timeUntilFire: timeUntilFire ? `${timeUntilFire} minutes` : 'N/A'
|
||||||
|
};
|
||||||
|
});
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
currentTime: new Date().toISOString(),
|
||||||
|
jobs: jobDetails,
|
||||||
|
totalJobs: jobDetails.length,
|
||||||
|
testMode: process.env.TAT_TEST_MODE === 'true'
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Debug] Error checking all TAT jobs:', error);
|
||||||
|
res.status(500).json({ error: error.message });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Debug endpoint to check TAT time calculations
|
||||||
|
*/
|
||||||
|
router.post('/tat-calculate', async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
const { startTime, tatHours, priority = 'STANDARD' } = req.body;
|
||||||
|
|
||||||
|
const { addWorkingHours, addWorkingHoursExpress, calculateDelay } = await import('@utils/tatTimeUtils');
|
||||||
|
const { getTatThresholds } = await import('../services/configReader.service');
|
||||||
|
|
||||||
|
const start = startTime ? new Date(startTime) : new Date();
|
||||||
|
const isExpress = priority === 'EXPRESS';
|
||||||
|
const thresholds = await getTatThresholds();
|
||||||
|
|
||||||
|
let threshold1Time: Date;
|
||||||
|
let threshold2Time: Date;
|
||||||
|
let breachTime: Date;
|
||||||
|
|
||||||
|
if (isExpress) {
|
||||||
|
const t1 = await addWorkingHoursExpress(start, tatHours * (thresholds.first / 100));
|
||||||
|
const t2 = await addWorkingHoursExpress(start, tatHours * (thresholds.second / 100));
|
||||||
|
const tBreach = await addWorkingHoursExpress(start, tatHours);
|
||||||
|
threshold1Time = t1.toDate();
|
||||||
|
threshold2Time = t2.toDate();
|
||||||
|
breachTime = tBreach.toDate();
|
||||||
|
} else {
|
||||||
|
const t1 = await addWorkingHours(start, tatHours * (thresholds.first / 100));
|
||||||
|
const t2 = await addWorkingHours(start, tatHours * (thresholds.second / 100));
|
||||||
|
const tBreach = await addWorkingHours(start, tatHours);
|
||||||
|
threshold1Time = t1.toDate();
|
||||||
|
threshold2Time = t2.toDate();
|
||||||
|
breachTime = tBreach.toDate();
|
||||||
|
}
|
||||||
|
|
||||||
|
const now = new Date();
|
||||||
|
const delays = {
|
||||||
|
threshold1: calculateDelay(threshold1Time),
|
||||||
|
threshold2: calculateDelay(threshold2Time),
|
||||||
|
breach: calculateDelay(breachTime)
|
||||||
|
};
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
input: {
|
||||||
|
startTime: start.toISOString(),
|
||||||
|
tatHours,
|
||||||
|
priority,
|
||||||
|
thresholds
|
||||||
|
},
|
||||||
|
calculations: {
|
||||||
|
threshold1: {
|
||||||
|
percentage: thresholds.first,
|
||||||
|
targetTime: threshold1Time.toISOString(),
|
||||||
|
delay: delays.threshold1,
|
||||||
|
delayMinutes: Math.round(delays.threshold1 / 1000 / 60),
|
||||||
|
delayHours: (delays.threshold1 / 1000 / 60 / 60).toFixed(2),
|
||||||
|
isPast: delays.threshold1 === 0
|
||||||
|
},
|
||||||
|
threshold2: {
|
||||||
|
percentage: thresholds.second,
|
||||||
|
targetTime: threshold2Time.toISOString(),
|
||||||
|
delay: delays.threshold2,
|
||||||
|
delayMinutes: Math.round(delays.threshold2 / 1000 / 60),
|
||||||
|
delayHours: (delays.threshold2 / 1000 / 60 / 60).toFixed(2),
|
||||||
|
isPast: delays.threshold2 === 0
|
||||||
|
},
|
||||||
|
breach: {
|
||||||
|
percentage: 100,
|
||||||
|
targetTime: breachTime.toISOString(),
|
||||||
|
delay: delays.breach,
|
||||||
|
delayMinutes: Math.round(delays.breach / 1000 / 60),
|
||||||
|
delayHours: (delays.breach / 1000 / 60 / 60).toFixed(2),
|
||||||
|
isPast: delays.breach === 0
|
||||||
|
}
|
||||||
|
},
|
||||||
|
currentTime: now.toISOString(),
|
||||||
|
testMode: process.env.TAT_TEST_MODE === 'true'
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Debug] Error calculating TAT times:', error);
|
||||||
|
res.status(500).json({ error: error.message });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Debug endpoint to check queue and worker status
|
||||||
|
*/
|
||||||
|
router.get('/queue-status', async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
if (!tatQueue || !tatWorker) {
|
||||||
|
res.json({
|
||||||
|
error: 'Queue or Worker not available',
|
||||||
|
queueAvailable: !!tatQueue,
|
||||||
|
workerAvailable: !!tatWorker
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get job counts
|
||||||
|
const [waiting, delayed, active, completed, failed] = await Promise.all([
|
||||||
|
tatQueue.getJobCounts('waiting'),
|
||||||
|
tatQueue.getJobCounts('delayed'),
|
||||||
|
tatQueue.getJobCounts('active'),
|
||||||
|
tatQueue.getJobCounts('completed'),
|
||||||
|
tatQueue.getJobCounts('failed')
|
||||||
|
]);
|
||||||
|
|
||||||
|
// Get all jobs in various states
|
||||||
|
const waitingJobs = await tatQueue.getJobs(['waiting'], 0, 10);
|
||||||
|
const delayedJobs = await tatQueue.getJobs(['delayed'], 0, 10);
|
||||||
|
const activeJobs = await tatQueue.getJobs(['active'], 0, 10);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
queue: {
|
||||||
|
name: tatQueue.name,
|
||||||
|
available: true
|
||||||
|
},
|
||||||
|
worker: {
|
||||||
|
available: true,
|
||||||
|
running: tatWorker.isRunning(),
|
||||||
|
paused: tatWorker.isPaused(),
|
||||||
|
closing: tatWorker.closing,
|
||||||
|
concurrency: tatWorker.opts.concurrency,
|
||||||
|
autorun: tatWorker.opts.autorun
|
||||||
|
},
|
||||||
|
jobCounts: {
|
||||||
|
waiting: waiting.waiting,
|
||||||
|
delayed: delayed.delayed,
|
||||||
|
active: active.active,
|
||||||
|
completed: completed.completed,
|
||||||
|
failed: failed.failed
|
||||||
|
},
|
||||||
|
recentJobs: {
|
||||||
|
waiting: waitingJobs.map(j => ({ id: j.id, name: j.name, data: j.data })),
|
||||||
|
delayed: delayedJobs.map(j => ({
|
||||||
|
id: j.id,
|
||||||
|
name: j.name,
|
||||||
|
data: j.data,
|
||||||
|
delay: j.opts.delay,
|
||||||
|
timestamp: j.timestamp,
|
||||||
|
scheduledFor: new Date(j.timestamp + (j.opts.delay || 0)).toISOString()
|
||||||
|
})),
|
||||||
|
active: activeJobs.map(j => ({ id: j.id, name: j.name, data: j.data }))
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Debug] Error checking queue status:', error);
|
||||||
|
res.status(500).json({ error: error.message, stack: error.stack });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Debug endpoint to manually trigger a test TAT job (immediate execution)
|
||||||
|
*/
|
||||||
|
router.post('/trigger-test-tat', async (req: Request, res: Response): Promise<void> => {
|
||||||
|
try {
|
||||||
|
if (!tatQueue) {
|
||||||
|
res.json({
|
||||||
|
error: 'TAT queue not available (Redis not connected)'
|
||||||
|
});
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const { requestId, levelId, approverId } = req.body;
|
||||||
|
|
||||||
|
// Add a test job with 5 second delay
|
||||||
|
const job = await tatQueue.add(
|
||||||
|
'test-threshold1',
|
||||||
|
{
|
||||||
|
type: 'threshold1',
|
||||||
|
threshold: 50,
|
||||||
|
requestId: requestId || 'test-request-123',
|
||||||
|
levelId: levelId || 'test-level-456',
|
||||||
|
approverId: approverId || 'test-approver-789'
|
||||||
|
},
|
||||||
|
{
|
||||||
|
delay: 5000, // 5 seconds
|
||||||
|
jobId: `test-tat-${Date.now()}`,
|
||||||
|
removeOnComplete: false, // Keep for debugging
|
||||||
|
removeOnFail: false
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
res.json({
|
||||||
|
success: true,
|
||||||
|
message: 'Test TAT job created (will fire in 5 seconds)',
|
||||||
|
job: {
|
||||||
|
id: job.id,
|
||||||
|
name: job.name,
|
||||||
|
data: job.data,
|
||||||
|
delay: 5000
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[Debug] Error triggering test TAT:', error);
|
||||||
|
res.status(500).json({ error: error.message, stack: error.stack });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
export default router;
|
export default router;
|
||||||
|
|
||||||
|
|||||||
@ -7,6 +7,10 @@ import tatRoutes from './tat.routes';
|
|||||||
import adminRoutes from './admin.routes';
|
import adminRoutes from './admin.routes';
|
||||||
import debugRoutes from './debug.routes';
|
import debugRoutes from './debug.routes';
|
||||||
import configRoutes from './config.routes';
|
import configRoutes from './config.routes';
|
||||||
|
import dashboardRoutes from './dashboard.routes';
|
||||||
|
import notificationRoutes from './notification.routes';
|
||||||
|
import conclusionRoutes from './conclusion.routes';
|
||||||
|
import aiRoutes from './ai.routes';
|
||||||
|
|
||||||
const router = Router();
|
const router = Router();
|
||||||
|
|
||||||
@ -28,12 +32,13 @@ router.use('/documents', documentRoutes);
|
|||||||
router.use('/tat', tatRoutes);
|
router.use('/tat', tatRoutes);
|
||||||
router.use('/admin', adminRoutes);
|
router.use('/admin', adminRoutes);
|
||||||
router.use('/debug', debugRoutes);
|
router.use('/debug', debugRoutes);
|
||||||
|
router.use('/dashboard', dashboardRoutes);
|
||||||
|
router.use('/notifications', notificationRoutes);
|
||||||
|
router.use('/conclusions', conclusionRoutes);
|
||||||
|
router.use('/ai', aiRoutes);
|
||||||
|
|
||||||
// TODO: Add other route modules as they are implemented
|
// TODO: Add other route modules as they are implemented
|
||||||
// router.use('/approvals', approvalRoutes);
|
// router.use('/approvals', approvalRoutes);
|
||||||
// router.use('/documents', documentRoutes);
|
|
||||||
// router.use('/notifications', notificationRoutes);
|
|
||||||
// router.use('/participants', participantRoutes);
|
// router.use('/participants', participantRoutes);
|
||||||
// router.use('/dashboard', dashboardRoutes);
|
|
||||||
|
|
||||||
export default router;
|
export default router;
|
||||||
|
|||||||
46
src/routes/notification.routes.ts
Normal file
46
src/routes/notification.routes.ts
Normal file
@ -0,0 +1,46 @@
|
|||||||
|
import { Router } from 'express';
|
||||||
|
import { NotificationController } from '../controllers/notification.controller';
|
||||||
|
import { authenticateToken } from '../middlewares/auth.middleware';
|
||||||
|
import { asyncHandler } from '../middlewares/errorHandler.middleware';
|
||||||
|
|
||||||
|
const router = Router();
|
||||||
|
const notificationController = new NotificationController();
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Notification Routes
|
||||||
|
* All routes require authentication
|
||||||
|
*/
|
||||||
|
|
||||||
|
// Get user's notifications (with pagination)
|
||||||
|
// Query params: page, limit, unreadOnly
|
||||||
|
router.get('/',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(notificationController.getUserNotifications.bind(notificationController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Get unread count
|
||||||
|
router.get('/unread-count',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(notificationController.getUnreadCount.bind(notificationController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Mark notification as read
|
||||||
|
router.patch('/:notificationId/read',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(notificationController.markAsRead.bind(notificationController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Mark all as read
|
||||||
|
router.post('/mark-all-read',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(notificationController.markAllAsRead.bind(notificationController))
|
||||||
|
);
|
||||||
|
|
||||||
|
// Delete notification
|
||||||
|
router.delete('/:notificationId',
|
||||||
|
authenticateToken,
|
||||||
|
asyncHandler(notificationController.deleteNotification.bind(notificationController))
|
||||||
|
);
|
||||||
|
|
||||||
|
export default router;
|
||||||
|
|
||||||
@ -9,6 +9,9 @@ const userController = new UserController();
|
|||||||
// GET /api/v1/users/search?q=<email or name>
|
// GET /api/v1/users/search?q=<email or name>
|
||||||
router.get('/search', authenticateToken, asyncHandler(userController.searchUsers.bind(userController)));
|
router.get('/search', authenticateToken, asyncHandler(userController.searchUsers.bind(userController)));
|
||||||
|
|
||||||
|
// POST /api/v1/users/ensure - Ensure user exists in DB (create if not exists)
|
||||||
|
router.post('/ensure', authenticateToken, asyncHandler(userController.ensureUserExists.bind(userController)));
|
||||||
|
|
||||||
export default router;
|
export default router;
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
168
src/scripts/auto-setup.ts
Normal file
168
src/scripts/auto-setup.ts
Normal file
@ -0,0 +1,168 @@
|
|||||||
|
/**
|
||||||
|
* Automatic Database Setup Script
|
||||||
|
* Runs before server starts to ensure database is ready
|
||||||
|
*
|
||||||
|
* This script:
|
||||||
|
* 1. Checks if database exists
|
||||||
|
* 2. Creates database if missing
|
||||||
|
* 3. Installs required extensions
|
||||||
|
* 4. Runs all pending migrations (18 total)
|
||||||
|
* 5. Configs are auto-seeded by configSeed.service.ts on server start (30 configs)
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { Client } from 'pg';
|
||||||
|
import { sequelize } from '../config/database';
|
||||||
|
import { exec } from 'child_process';
|
||||||
|
import { promisify } from 'util';
|
||||||
|
import dotenv from 'dotenv';
|
||||||
|
import path from 'path';
|
||||||
|
|
||||||
|
dotenv.config({ path: path.resolve(__dirname, '../../.env') });
|
||||||
|
|
||||||
|
const execAsync = promisify(exec);
|
||||||
|
|
||||||
|
const DB_HOST = process.env.DB_HOST || 'localhost';
|
||||||
|
const DB_PORT = parseInt(process.env.DB_PORT || '5432');
|
||||||
|
const DB_USER = process.env.DB_USER || 'postgres';
|
||||||
|
const DB_PASSWORD = process.env.DB_PASSWORD || '';
|
||||||
|
const DB_NAME = process.env.DB_NAME || 'royal_enfield_workflow';
|
||||||
|
|
||||||
|
async function checkAndCreateDatabase(): Promise<boolean> {
|
||||||
|
const client = new Client({
|
||||||
|
host: DB_HOST,
|
||||||
|
port: DB_PORT,
|
||||||
|
user: DB_USER,
|
||||||
|
password: DB_PASSWORD,
|
||||||
|
database: 'postgres', // Connect to default postgres database
|
||||||
|
});
|
||||||
|
|
||||||
|
try {
|
||||||
|
await client.connect();
|
||||||
|
console.log('🔍 Checking if database exists...');
|
||||||
|
|
||||||
|
// Check if database exists
|
||||||
|
const result = await client.query(
|
||||||
|
`SELECT 1 FROM pg_database WHERE datname = $1`,
|
||||||
|
[DB_NAME]
|
||||||
|
);
|
||||||
|
|
||||||
|
if (result.rows.length === 0) {
|
||||||
|
console.log(`📦 Database '${DB_NAME}' not found. Creating...`);
|
||||||
|
|
||||||
|
// Create database
|
||||||
|
await client.query(`CREATE DATABASE "${DB_NAME}"`);
|
||||||
|
console.log(`✅ Database '${DB_NAME}' created successfully!`);
|
||||||
|
|
||||||
|
await client.end();
|
||||||
|
|
||||||
|
// Connect to new database and install extensions
|
||||||
|
const newDbClient = new Client({
|
||||||
|
host: DB_HOST,
|
||||||
|
port: DB_PORT,
|
||||||
|
user: DB_USER,
|
||||||
|
password: DB_PASSWORD,
|
||||||
|
database: DB_NAME,
|
||||||
|
});
|
||||||
|
|
||||||
|
await newDbClient.connect();
|
||||||
|
console.log('📦 Installing uuid-ossp extension...');
|
||||||
|
await newDbClient.query('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"');
|
||||||
|
console.log('✅ Extension installed!');
|
||||||
|
await newDbClient.end();
|
||||||
|
|
||||||
|
return true; // Database was created
|
||||||
|
} else {
|
||||||
|
console.log(`✅ Database '${DB_NAME}' already exists.`);
|
||||||
|
await client.end();
|
||||||
|
return false; // Database already existed
|
||||||
|
}
|
||||||
|
} catch (error: any) {
|
||||||
|
console.error('❌ Database check/creation failed:', error.message);
|
||||||
|
await client.end();
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function runMigrations(): Promise<void> {
|
||||||
|
try {
|
||||||
|
console.log('🔄 Running migrations...');
|
||||||
|
|
||||||
|
// Run migrations using npm script
|
||||||
|
const { stdout, stderr } = await execAsync('npm run migrate', {
|
||||||
|
cwd: path.resolve(__dirname, '../..'),
|
||||||
|
});
|
||||||
|
|
||||||
|
if (stdout) console.log(stdout);
|
||||||
|
if (stderr && !stderr.includes('npm WARN')) console.error(stderr);
|
||||||
|
|
||||||
|
console.log('✅ Migrations completed successfully!');
|
||||||
|
} catch (error: any) {
|
||||||
|
console.error('❌ Migration failed:', error.message);
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function testConnection(): Promise<void> {
|
||||||
|
try {
|
||||||
|
console.log('🔌 Testing database connection...');
|
||||||
|
await sequelize.authenticate();
|
||||||
|
console.log('✅ Database connection established!');
|
||||||
|
} catch (error: any) {
|
||||||
|
console.error('❌ Unable to connect to database:', error.message);
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function autoSetup(): Promise<void> {
|
||||||
|
console.log('\n========================================');
|
||||||
|
console.log('🚀 Royal Enfield Workflow - Auto Setup');
|
||||||
|
console.log('========================================\n');
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Step 1: Check and create database if needed
|
||||||
|
const wasCreated = await checkAndCreateDatabase();
|
||||||
|
|
||||||
|
// Step 2: Test connection
|
||||||
|
await testConnection();
|
||||||
|
|
||||||
|
// Step 3: Run migrations (always, to catch any pending migrations)
|
||||||
|
await runMigrations();
|
||||||
|
|
||||||
|
console.log('\n========================================');
|
||||||
|
console.log('✅ Setup completed successfully!');
|
||||||
|
console.log('========================================\n');
|
||||||
|
|
||||||
|
console.log('📝 Note: Admin configurations will be auto-seeded on server start if table is empty.\n');
|
||||||
|
|
||||||
|
if (wasCreated) {
|
||||||
|
console.log('💡 Next steps:');
|
||||||
|
console.log(' 1. Server will start automatically');
|
||||||
|
console.log(' 2. Log in via SSO');
|
||||||
|
console.log(' 3. Run this SQL to make yourself admin:');
|
||||||
|
console.log(` UPDATE users SET role = 'ADMIN' WHERE email = 'your-email@royalenfield.com';\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
} catch (error: any) {
|
||||||
|
console.error('\n========================================');
|
||||||
|
console.error('❌ Setup failed!');
|
||||||
|
console.error('========================================');
|
||||||
|
console.error('Error:', error.message);
|
||||||
|
console.error('\nPlease check:');
|
||||||
|
console.error('1. PostgreSQL is running');
|
||||||
|
console.error('2. DB credentials in .env are correct');
|
||||||
|
console.error('3. User has permission to create databases\n');
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Run if called directly
|
||||||
|
if (require.main === module) {
|
||||||
|
autoSetup().then(() => {
|
||||||
|
process.exit(0);
|
||||||
|
}).catch(() => {
|
||||||
|
process.exit(1);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
export default autoSetup;
|
||||||
|
|
||||||
@ -16,6 +16,8 @@ import * as m12 from '../migrations/20251104-create-holidays';
|
|||||||
import * as m13 from '../migrations/20251104-create-admin-config';
|
import * as m13 from '../migrations/20251104-create-admin-config';
|
||||||
import * as m14 from '../migrations/20251105-add-skip-fields-to-approval-levels';
|
import * as m14 from '../migrations/20251105-add-skip-fields-to-approval-levels';
|
||||||
import * as m15 from '../migrations/2025110501-alter-tat-days-to-generated';
|
import * as m15 from '../migrations/2025110501-alter-tat-days-to-generated';
|
||||||
|
import * as m16 from '../migrations/20251111-create-notifications';
|
||||||
|
import * as m17 from '../migrations/20251111-create-conclusion-remarks';
|
||||||
|
|
||||||
interface Migration {
|
interface Migration {
|
||||||
name: string;
|
name: string;
|
||||||
@ -46,6 +48,8 @@ const migrations: Migration[] = [
|
|||||||
{ name: '20251104-create-admin-config', module: m13 },
|
{ name: '20251104-create-admin-config', module: m13 },
|
||||||
{ name: '20251105-add-skip-fields-to-approval-levels', module: m14 },
|
{ name: '20251105-add-skip-fields-to-approval-levels', module: m14 },
|
||||||
{ name: '2025110501-alter-tat-days-to-generated', module: m15 },
|
{ name: '2025110501-alter-tat-days-to-generated', module: m15 },
|
||||||
|
{ name: '20251111-create-notifications', module: m16 },
|
||||||
|
{ name: '20251111-create-conclusion-remarks', module: m17 },
|
||||||
];
|
];
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
|||||||
@ -148,7 +148,7 @@ async function seedAdminConfigurations() {
|
|||||||
(
|
(
|
||||||
gen_random_uuid(),
|
gen_random_uuid(),
|
||||||
'WORK_START_HOUR',
|
'WORK_START_HOUR',
|
||||||
'WORKING_HOURS',
|
'TAT_SETTINGS',
|
||||||
'9',
|
'9',
|
||||||
'NUMBER',
|
'NUMBER',
|
||||||
'Work Day Start Hour',
|
'Work Day Start Hour',
|
||||||
@ -166,7 +166,7 @@ async function seedAdminConfigurations() {
|
|||||||
(
|
(
|
||||||
gen_random_uuid(),
|
gen_random_uuid(),
|
||||||
'WORK_END_HOUR',
|
'WORK_END_HOUR',
|
||||||
'WORKING_HOURS',
|
'TAT_SETTINGS',
|
||||||
'18',
|
'18',
|
||||||
'NUMBER',
|
'NUMBER',
|
||||||
'Work Day End Hour',
|
'Work Day End Hour',
|
||||||
@ -184,7 +184,7 @@ async function seedAdminConfigurations() {
|
|||||||
(
|
(
|
||||||
gen_random_uuid(),
|
gen_random_uuid(),
|
||||||
'WORK_START_DAY',
|
'WORK_START_DAY',
|
||||||
'WORKING_HOURS',
|
'TAT_SETTINGS',
|
||||||
'1',
|
'1',
|
||||||
'NUMBER',
|
'NUMBER',
|
||||||
'Work Week Start Day',
|
'Work Week Start Day',
|
||||||
@ -202,7 +202,7 @@ async function seedAdminConfigurations() {
|
|||||||
(
|
(
|
||||||
gen_random_uuid(),
|
gen_random_uuid(),
|
||||||
'WORK_END_DAY',
|
'WORK_END_DAY',
|
||||||
'WORKING_HOURS',
|
'TAT_SETTINGS',
|
||||||
'5',
|
'5',
|
||||||
'NUMBER',
|
'NUMBER',
|
||||||
'Work Week End Day',
|
'Work Week End Day',
|
||||||
@ -366,7 +366,138 @@ async function seedAdminConfigurations() {
|
|||||||
true,
|
true,
|
||||||
NOW(),
|
NOW(),
|
||||||
NOW()
|
NOW()
|
||||||
|
),
|
||||||
|
|
||||||
|
-- AI Configuration (from migration 20251111-add-ai-provider-configs)
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_PROVIDER',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'claude',
|
||||||
|
'STRING',
|
||||||
|
'AI Provider',
|
||||||
|
'Active AI provider for conclusion generation (claude, openai, or gemini)',
|
||||||
|
'claude',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"enum": ["claude", "openai", "gemini"], "required": true}'::jsonb,
|
||||||
|
'select',
|
||||||
|
100,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'CLAUDE_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'Claude API Key',
|
||||||
|
'API key for Claude (Anthropic) - Get from console.anthropic.com',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"pattern": "^sk-ant-", "minLength": 40}'::jsonb,
|
||||||
|
'input',
|
||||||
|
101,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'OPENAI_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'OpenAI API Key',
|
||||||
|
'API key for OpenAI (GPT-4) - Get from platform.openai.com',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"pattern": "^sk-", "minLength": 40}'::jsonb,
|
||||||
|
'input',
|
||||||
|
102,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'GEMINI_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'Gemini API Key',
|
||||||
|
'API key for Gemini (Google) - Get from ai.google.dev',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"minLength": 20}'::jsonb,
|
||||||
|
'input',
|
||||||
|
103,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_ENABLED',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'true',
|
||||||
|
'BOOLEAN',
|
||||||
|
'Enable AI Features',
|
||||||
|
'Master toggle to enable/disable all AI-powered features in the system',
|
||||||
|
'true',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"type": "boolean"}'::jsonb,
|
||||||
|
'toggle',
|
||||||
|
104,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_REMARK_GENERATION_ENABLED',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'true',
|
||||||
|
'BOOLEAN',
|
||||||
|
'Enable AI Remark Generation',
|
||||||
|
'Enable/disable AI-powered conclusion remark generation when requests are approved',
|
||||||
|
'true',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"type": "boolean"}'::jsonb,
|
||||||
|
'toggle',
|
||||||
|
105,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_MAX_REMARK_LENGTH',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'2000',
|
||||||
|
'NUMBER',
|
||||||
|
'AI Max Remark Length',
|
||||||
|
'Maximum character length for AI-generated conclusion remarks (used as context for AI prompt)',
|
||||||
|
'2000',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"type": "number", "min": 500, "max": 5000}'::jsonb,
|
||||||
|
'number',
|
||||||
|
106,
|
||||||
|
false,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
)
|
)
|
||||||
|
ON CONFLICT (config_key) DO UPDATE SET
|
||||||
|
config_value = EXCLUDED.config_value,
|
||||||
|
updated_at = NOW()
|
||||||
`);
|
`);
|
||||||
|
|
||||||
const finalCount = await sequelize.query(
|
const finalCount = await sequelize.query(
|
||||||
|
|||||||
@ -1,18 +1,61 @@
|
|||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
// Special UUID for system events (login, etc.) - well-known UUID: 00000000-0000-0000-0000-000000000001
|
||||||
|
export const SYSTEM_EVENT_REQUEST_ID = '00000000-0000-0000-0000-000000000001';
|
||||||
|
|
||||||
export type ActivityEntry = {
|
export type ActivityEntry = {
|
||||||
requestId: string;
|
requestId: string;
|
||||||
type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning';
|
type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning' | 'ai_conclusion_generated' | 'closed' | 'login';
|
||||||
user?: { userId: string; name?: string; email?: string };
|
user?: { userId: string; name?: string; email?: string };
|
||||||
timestamp: string;
|
timestamp: string;
|
||||||
action: string;
|
action: string;
|
||||||
details: string;
|
details: string;
|
||||||
metadata?: any;
|
metadata?: any;
|
||||||
|
ipAddress?: string;
|
||||||
|
userAgent?: string;
|
||||||
|
category?: string;
|
||||||
|
severity?: string;
|
||||||
};
|
};
|
||||||
|
|
||||||
class ActivityService {
|
class ActivityService {
|
||||||
private byRequest: Map<string, ActivityEntry[]> = new Map();
|
private byRequest: Map<string, ActivityEntry[]> = new Map();
|
||||||
|
|
||||||
|
private inferCategory(type: string): string {
|
||||||
|
const categoryMap: Record<string, string> = {
|
||||||
|
'created': 'WORKFLOW',
|
||||||
|
'approval': 'WORKFLOW',
|
||||||
|
'rejection': 'WORKFLOW',
|
||||||
|
'status_change': 'WORKFLOW',
|
||||||
|
'assignment': 'WORKFLOW',
|
||||||
|
'comment': 'COLLABORATION',
|
||||||
|
'document_added': 'DOCUMENT',
|
||||||
|
'sla_warning': 'SYSTEM',
|
||||||
|
'reminder': 'SYSTEM',
|
||||||
|
'ai_conclusion_generated': 'SYSTEM',
|
||||||
|
'closed': 'WORKFLOW',
|
||||||
|
'login': 'AUTHENTICATION'
|
||||||
|
};
|
||||||
|
return categoryMap[type] || 'OTHER';
|
||||||
|
}
|
||||||
|
|
||||||
|
private inferSeverity(type: string): string {
|
||||||
|
const severityMap: Record<string, string> = {
|
||||||
|
'rejection': 'WARNING',
|
||||||
|
'sla_warning': 'WARNING',
|
||||||
|
'approval': 'INFO',
|
||||||
|
'closed': 'INFO',
|
||||||
|
'status_change': 'INFO',
|
||||||
|
'login': 'INFO',
|
||||||
|
'created': 'INFO',
|
||||||
|
'comment': 'INFO',
|
||||||
|
'document_added': 'INFO',
|
||||||
|
'assignment': 'INFO',
|
||||||
|
'reminder': 'INFO',
|
||||||
|
'ai_conclusion_generated': 'INFO'
|
||||||
|
};
|
||||||
|
return severityMap[type] || 'INFO';
|
||||||
|
}
|
||||||
|
|
||||||
async log(entry: ActivityEntry) {
|
async log(entry: ActivityEntry) {
|
||||||
const list = this.byRequest.get(entry.requestId) || [];
|
const list = this.byRequest.get(entry.requestId) || [];
|
||||||
list.push(entry);
|
list.push(entry);
|
||||||
@ -29,19 +72,20 @@ class ActivityService {
|
|||||||
userName: userName,
|
userName: userName,
|
||||||
activityType: entry.type,
|
activityType: entry.type,
|
||||||
activityDescription: entry.details,
|
activityDescription: entry.details,
|
||||||
activityCategory: null,
|
activityCategory: entry.category || this.inferCategory(entry.type),
|
||||||
severity: null,
|
severity: entry.severity || this.inferSeverity(entry.type),
|
||||||
metadata: entry.metadata || null,
|
metadata: entry.metadata || null,
|
||||||
isSystemEvent: !entry.user,
|
isSystemEvent: !entry.user,
|
||||||
ipAddress: null,
|
ipAddress: entry.ipAddress || null, // Database accepts null
|
||||||
userAgent: null,
|
userAgent: entry.userAgent || null, // Database accepts null
|
||||||
};
|
};
|
||||||
|
|
||||||
logger.info(`[Activity] Creating activity:`, {
|
logger.info(`[Activity] Creating activity:`, {
|
||||||
requestId: entry.requestId,
|
requestId: entry.requestId,
|
||||||
userName,
|
userName,
|
||||||
userId: entry.user?.userId,
|
userId: entry.user?.userId,
|
||||||
type: entry.type
|
type: entry.type,
|
||||||
|
ipAddress: entry.ipAddress ? '***' : null
|
||||||
});
|
});
|
||||||
|
|
||||||
await Activity.create(activityData);
|
await Activity.create(activityData);
|
||||||
|
|||||||
553
src/services/ai.service.ts
Normal file
553
src/services/ai.service.ts
Normal file
@ -0,0 +1,553 @@
|
|||||||
|
import logger from '@utils/logger';
|
||||||
|
import { getAIProviderConfig } from './configReader.service';
|
||||||
|
|
||||||
|
// Provider-specific interfaces
|
||||||
|
interface AIProvider {
|
||||||
|
generateText(prompt: string): Promise<string>;
|
||||||
|
isAvailable(): boolean;
|
||||||
|
getProviderName(): string;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Claude Provider
|
||||||
|
class ClaudeProvider implements AIProvider {
|
||||||
|
private client: any = null;
|
||||||
|
private model: string;
|
||||||
|
|
||||||
|
constructor(apiKey?: string) {
|
||||||
|
// Allow model override via environment variable
|
||||||
|
// Current models (November 2025):
|
||||||
|
// - claude-sonnet-4-20250514 (default - latest Claude Sonnet 4)
|
||||||
|
// - Use env variable CLAUDE_MODEL to override if needed
|
||||||
|
this.model = process.env.CLAUDE_MODEL || 'claude-sonnet-4-20250514';
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Priority: 1. Provided key, 2. Environment variable
|
||||||
|
const key = apiKey || process.env.CLAUDE_API_KEY || process.env.ANTHROPIC_API_KEY;
|
||||||
|
|
||||||
|
if (!key || key.trim() === '') {
|
||||||
|
return; // Silently skip if no key available
|
||||||
|
}
|
||||||
|
|
||||||
|
// Dynamic import to avoid hard dependency
|
||||||
|
const Anthropic = require('@anthropic-ai/sdk');
|
||||||
|
this.client = new Anthropic({ apiKey: key });
|
||||||
|
logger.info(`[AI Service] ✅ Claude provider initialized with model: ${this.model}`);
|
||||||
|
} catch (error: any) {
|
||||||
|
// Handle missing package gracefully
|
||||||
|
if (error.code === 'MODULE_NOT_FOUND') {
|
||||||
|
logger.warn('[AI Service] Claude SDK not installed. Run: npm install @anthropic-ai/sdk');
|
||||||
|
} else {
|
||||||
|
logger.error('[AI Service] Failed to initialize Claude:', error.message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async generateText(prompt: string): Promise<string> {
|
||||||
|
if (!this.client) throw new Error('Claude client not initialized');
|
||||||
|
|
||||||
|
logger.info(`[AI Service] Generating with Claude model: ${this.model}`);
|
||||||
|
|
||||||
|
const response = await this.client.messages.create({
|
||||||
|
model: this.model,
|
||||||
|
max_tokens: 2048, // Increased for longer conclusions
|
||||||
|
temperature: 0.3,
|
||||||
|
messages: [{ role: 'user', content: prompt }]
|
||||||
|
});
|
||||||
|
|
||||||
|
const content = response.content[0];
|
||||||
|
return content.type === 'text' ? content.text : '';
|
||||||
|
}
|
||||||
|
|
||||||
|
isAvailable(): boolean {
|
||||||
|
return this.client !== null;
|
||||||
|
}
|
||||||
|
|
||||||
|
getProviderName(): string {
|
||||||
|
return 'Claude (Anthropic)';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// OpenAI Provider
|
||||||
|
class OpenAIProvider implements AIProvider {
|
||||||
|
private client: any = null;
|
||||||
|
private model: string = 'gpt-4o';
|
||||||
|
|
||||||
|
constructor(apiKey?: string) {
|
||||||
|
try {
|
||||||
|
// Priority: 1. Provided key, 2. Environment variable
|
||||||
|
const key = apiKey || process.env.OPENAI_API_KEY;
|
||||||
|
|
||||||
|
if (!key || key.trim() === '') {
|
||||||
|
return; // Silently skip if no key available
|
||||||
|
}
|
||||||
|
|
||||||
|
const OpenAI = require('openai');
|
||||||
|
this.client = new OpenAI({ apiKey: key });
|
||||||
|
logger.info('[AI Service] ✅ OpenAI provider initialized');
|
||||||
|
} catch (error: any) {
|
||||||
|
// Handle missing package gracefully
|
||||||
|
if (error.code === 'MODULE_NOT_FOUND') {
|
||||||
|
logger.warn('[AI Service] OpenAI SDK not installed. Run: npm install openai');
|
||||||
|
} else {
|
||||||
|
logger.error('[AI Service] Failed to initialize OpenAI:', error.message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async generateText(prompt: string): Promise<string> {
|
||||||
|
if (!this.client) throw new Error('OpenAI client not initialized');
|
||||||
|
|
||||||
|
const response = await this.client.chat.completions.create({
|
||||||
|
model: this.model,
|
||||||
|
messages: [{ role: 'user', content: prompt }],
|
||||||
|
max_tokens: 1024,
|
||||||
|
temperature: 0.3
|
||||||
|
});
|
||||||
|
|
||||||
|
return response.choices[0]?.message?.content || '';
|
||||||
|
}
|
||||||
|
|
||||||
|
isAvailable(): boolean {
|
||||||
|
return this.client !== null;
|
||||||
|
}
|
||||||
|
|
||||||
|
getProviderName(): string {
|
||||||
|
return 'OpenAI (GPT-4)';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Gemini Provider (Google)
|
||||||
|
class GeminiProvider implements AIProvider {
|
||||||
|
private client: any = null;
|
||||||
|
private model: string = 'gemini-1.5-pro';
|
||||||
|
|
||||||
|
constructor(apiKey?: string) {
|
||||||
|
try {
|
||||||
|
// Priority: 1. Provided key, 2. Environment variable
|
||||||
|
const key = apiKey || process.env.GEMINI_API_KEY || process.env.GOOGLE_AI_API_KEY;
|
||||||
|
|
||||||
|
if (!key || key.trim() === '') {
|
||||||
|
return; // Silently skip if no key available
|
||||||
|
}
|
||||||
|
|
||||||
|
const { GoogleGenerativeAI } = require('@google/generative-ai');
|
||||||
|
this.client = new GoogleGenerativeAI(key);
|
||||||
|
logger.info('[AI Service] ✅ Gemini provider initialized');
|
||||||
|
} catch (error: any) {
|
||||||
|
// Handle missing package gracefully
|
||||||
|
if (error.code === 'MODULE_NOT_FOUND') {
|
||||||
|
logger.warn('[AI Service] Gemini SDK not installed. Run: npm install @google/generative-ai');
|
||||||
|
} else {
|
||||||
|
logger.error('[AI Service] Failed to initialize Gemini:', error.message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async generateText(prompt: string): Promise<string> {
|
||||||
|
if (!this.client) throw new Error('Gemini client not initialized');
|
||||||
|
|
||||||
|
const model = this.client.getGenerativeModel({ model: this.model });
|
||||||
|
const result = await model.generateContent(prompt);
|
||||||
|
const response = await result.response;
|
||||||
|
return response.text();
|
||||||
|
}
|
||||||
|
|
||||||
|
isAvailable(): boolean {
|
||||||
|
return this.client !== null;
|
||||||
|
}
|
||||||
|
|
||||||
|
getProviderName(): string {
|
||||||
|
return 'Gemini (Google)';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
class AIService {
|
||||||
|
private provider: AIProvider | null = null;
|
||||||
|
private providerName: string = 'None';
|
||||||
|
private isInitialized: boolean = false;
|
||||||
|
|
||||||
|
constructor() {
|
||||||
|
// Initialization happens asynchronously
|
||||||
|
this.initialize();
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Initialize AI provider from database configuration
|
||||||
|
*/
|
||||||
|
async initialize(): Promise<void> {
|
||||||
|
try {
|
||||||
|
// Read AI configuration from database (with env fallback)
|
||||||
|
const config = await getAIProviderConfig();
|
||||||
|
|
||||||
|
if (!config.enabled) {
|
||||||
|
logger.warn('[AI Service] AI features disabled in admin configuration');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const preferredProvider = config.provider.toLowerCase();
|
||||||
|
logger.info(`[AI Service] Preferred provider from config: ${preferredProvider}`);
|
||||||
|
|
||||||
|
// Try to initialize the preferred provider first
|
||||||
|
let initialized = false;
|
||||||
|
|
||||||
|
switch (preferredProvider) {
|
||||||
|
case 'openai':
|
||||||
|
case 'gpt':
|
||||||
|
initialized = this.tryProvider(new OpenAIProvider(config.openaiKey));
|
||||||
|
break;
|
||||||
|
case 'gemini':
|
||||||
|
case 'google':
|
||||||
|
initialized = this.tryProvider(new GeminiProvider(config.geminiKey));
|
||||||
|
break;
|
||||||
|
case 'claude':
|
||||||
|
case 'anthropic':
|
||||||
|
default:
|
||||||
|
initialized = this.tryProvider(new ClaudeProvider(config.claudeKey));
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fallback: Try other providers if preferred one failed
|
||||||
|
if (!initialized) {
|
||||||
|
logger.warn('[AI Service] Preferred provider unavailable. Trying fallbacks...');
|
||||||
|
|
||||||
|
const fallbackProviders = [
|
||||||
|
new ClaudeProvider(config.claudeKey),
|
||||||
|
new OpenAIProvider(config.openaiKey),
|
||||||
|
new GeminiProvider(config.geminiKey)
|
||||||
|
];
|
||||||
|
|
||||||
|
for (const provider of fallbackProviders) {
|
||||||
|
if (this.tryProvider(provider)) {
|
||||||
|
logger.info(`[AI Service] ✅ Using fallback provider: ${this.providerName}`);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!this.provider) {
|
||||||
|
logger.warn('[AI Service] ⚠️ No AI provider available. AI features will be disabled.');
|
||||||
|
logger.warn('[AI Service] To enable AI: Configure API keys in admin panel or set environment variables.');
|
||||||
|
logger.warn('[AI Service] Supported providers: Claude (CLAUDE_API_KEY), OpenAI (OPENAI_API_KEY), Gemini (GEMINI_API_KEY)');
|
||||||
|
}
|
||||||
|
|
||||||
|
this.isInitialized = true;
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[AI Service] Failed to initialize from config:', error);
|
||||||
|
// Fallback to environment variables
|
||||||
|
try {
|
||||||
|
this.initializeFromEnv();
|
||||||
|
} catch (envError) {
|
||||||
|
logger.error('[AI Service] Environment fallback also failed:', envError);
|
||||||
|
this.isInitialized = true; // Mark as initialized even if failed
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Fallback initialization from environment variables
|
||||||
|
*/
|
||||||
|
private initializeFromEnv(): void {
|
||||||
|
try {
|
||||||
|
const preferredProvider = (process.env.AI_PROVIDER || 'claude').toLowerCase();
|
||||||
|
|
||||||
|
logger.info(`[AI Service] Using environment variable configuration`);
|
||||||
|
|
||||||
|
switch (preferredProvider) {
|
||||||
|
case 'openai':
|
||||||
|
case 'gpt':
|
||||||
|
this.tryProvider(new OpenAIProvider());
|
||||||
|
break;
|
||||||
|
case 'gemini':
|
||||||
|
case 'google':
|
||||||
|
this.tryProvider(new GeminiProvider());
|
||||||
|
break;
|
||||||
|
case 'claude':
|
||||||
|
case 'anthropic':
|
||||||
|
default:
|
||||||
|
this.tryProvider(new ClaudeProvider());
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!this.provider) {
|
||||||
|
logger.warn('[AI Service] ⚠️ No provider available from environment variables either.');
|
||||||
|
}
|
||||||
|
|
||||||
|
this.isInitialized = true;
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[AI Service] Environment initialization failed:', error);
|
||||||
|
this.isInitialized = true; // Still mark as initialized to prevent infinite loops
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Reinitialize AI provider (call after admin updates config)
|
||||||
|
*/
|
||||||
|
async reinitialize(): Promise<void> {
|
||||||
|
logger.info('[AI Service] Reinitializing AI provider from updated configuration...');
|
||||||
|
this.provider = null;
|
||||||
|
this.providerName = 'None';
|
||||||
|
this.isInitialized = false;
|
||||||
|
await this.initialize();
|
||||||
|
}
|
||||||
|
|
||||||
|
private tryProvider(provider: AIProvider): boolean {
|
||||||
|
if (provider.isAvailable()) {
|
||||||
|
this.provider = provider;
|
||||||
|
this.providerName = provider.getProviderName();
|
||||||
|
logger.info(`[AI Service] ✅ Active provider: ${this.providerName}`);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get current AI provider name
|
||||||
|
*/
|
||||||
|
getProviderName(): string {
|
||||||
|
return this.providerName;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Generate conclusion remark for a workflow request
|
||||||
|
* @param context - All relevant data for generating the conclusion
|
||||||
|
* @returns AI-generated conclusion remark
|
||||||
|
*/
|
||||||
|
async generateConclusionRemark(context: {
|
||||||
|
requestTitle: string;
|
||||||
|
requestDescription: string;
|
||||||
|
requestNumber: string;
|
||||||
|
priority: string;
|
||||||
|
approvalFlow: Array<{
|
||||||
|
levelNumber: number;
|
||||||
|
approverName: string;
|
||||||
|
status: string;
|
||||||
|
comments?: string;
|
||||||
|
actionDate?: string;
|
||||||
|
tatHours?: number;
|
||||||
|
elapsedHours?: number;
|
||||||
|
}>;
|
||||||
|
workNotes: Array<{
|
||||||
|
userName: string;
|
||||||
|
message: string;
|
||||||
|
createdAt: string;
|
||||||
|
}>;
|
||||||
|
documents: Array<{
|
||||||
|
fileName: string;
|
||||||
|
uploadedBy: string;
|
||||||
|
uploadedAt: string;
|
||||||
|
}>;
|
||||||
|
activities: Array<{
|
||||||
|
type: string;
|
||||||
|
action: string;
|
||||||
|
details: string;
|
||||||
|
timestamp: string;
|
||||||
|
}>;
|
||||||
|
}): Promise<{ remark: string; confidence: number; keyPoints: string[]; provider: string }> {
|
||||||
|
// Ensure initialization is complete
|
||||||
|
if (!this.isInitialized) {
|
||||||
|
logger.warn('[AI Service] Not yet initialized, attempting initialization...');
|
||||||
|
await this.initialize();
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!this.provider) {
|
||||||
|
logger.error('[AI Service] No AI provider available');
|
||||||
|
throw new Error('AI features are currently unavailable. Please configure an AI provider (Claude, OpenAI, or Gemini) in the admin panel, or write the conclusion manually.');
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Build context prompt with max length from config
|
||||||
|
const prompt = await this.buildConclusionPrompt(context);
|
||||||
|
|
||||||
|
logger.info(`[AI Service] Generating conclusion for request ${context.requestNumber} using ${this.providerName}...`);
|
||||||
|
|
||||||
|
// Use provider's generateText method
|
||||||
|
let remarkText = await this.provider.generateText(prompt);
|
||||||
|
|
||||||
|
// Get max length from config for validation
|
||||||
|
const { getConfigValue } = require('./configReader.service');
|
||||||
|
const maxLengthStr = await getConfigValue('AI_MAX_REMARK_LENGTH', '2000');
|
||||||
|
const maxLength = parseInt(maxLengthStr || '2000', 10);
|
||||||
|
|
||||||
|
// Validate and trim if exceeds max length
|
||||||
|
if (remarkText.length > maxLength) {
|
||||||
|
logger.warn(`[AI Service] Generated remark exceeds max length (${remarkText.length} > ${maxLength}), trimming...`);
|
||||||
|
remarkText = remarkText.substring(0, maxLength - 3) + '...'; // Trim with ellipsis
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract key points (look for bullet points or numbered items)
|
||||||
|
const keyPoints = this.extractKeyPoints(remarkText);
|
||||||
|
|
||||||
|
// Calculate confidence based on response quality (simple heuristic)
|
||||||
|
const confidence = this.calculateConfidence(remarkText, context);
|
||||||
|
|
||||||
|
logger.info(`[AI Service] ✅ Generated conclusion (${remarkText.length}/${maxLength} chars, ${keyPoints.length} key points) via ${this.providerName}`);
|
||||||
|
|
||||||
|
return {
|
||||||
|
remark: remarkText,
|
||||||
|
confidence: confidence,
|
||||||
|
keyPoints: keyPoints,
|
||||||
|
provider: this.providerName
|
||||||
|
};
|
||||||
|
} catch (error: any) {
|
||||||
|
logger.error('[AI Service] Failed to generate conclusion:', error);
|
||||||
|
throw new Error(`AI generation failed (${this.providerName}): ${error.message}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Build the prompt for Claude to generate a professional conclusion remark
|
||||||
|
*/
|
||||||
|
private async buildConclusionPrompt(context: any): Promise<string> {
|
||||||
|
const {
|
||||||
|
requestTitle,
|
||||||
|
requestDescription,
|
||||||
|
requestNumber,
|
||||||
|
priority,
|
||||||
|
approvalFlow,
|
||||||
|
workNotes,
|
||||||
|
documents,
|
||||||
|
activities
|
||||||
|
} = context;
|
||||||
|
|
||||||
|
// Get max remark length from admin configuration
|
||||||
|
const { getConfigValue } = require('./configReader.service');
|
||||||
|
const maxLengthStr = await getConfigValue('AI_MAX_REMARK_LENGTH', '2000');
|
||||||
|
const maxLength = parseInt(maxLengthStr || '2000', 10);
|
||||||
|
const targetWordCount = Math.floor(maxLength / 6); // Approximate words (avg 6 chars per word)
|
||||||
|
|
||||||
|
logger.info(`[AI Service] Using max remark length: ${maxLength} characters (≈${targetWordCount} words) from admin config`);
|
||||||
|
|
||||||
|
// Summarize approvals
|
||||||
|
const approvalSummary = approvalFlow
|
||||||
|
.filter((a: any) => a.status === 'APPROVED' || a.status === 'REJECTED')
|
||||||
|
.map((a: any) => {
|
||||||
|
const tatInfo = a.elapsedHours && a.tatHours
|
||||||
|
? ` (completed in ${a.elapsedHours.toFixed(1)}h of ${a.tatHours}h TAT)`
|
||||||
|
: '';
|
||||||
|
return `- Level ${a.levelNumber}: ${a.approverName} ${a.status}${tatInfo}${a.comments ? `\n Comment: "${a.comments}"` : ''}`;
|
||||||
|
})
|
||||||
|
.join('\n');
|
||||||
|
|
||||||
|
// Summarize work notes (limit to important ones)
|
||||||
|
const workNoteSummary = workNotes
|
||||||
|
.slice(-10) // Last 10 work notes
|
||||||
|
.map((wn: any) => `- ${wn.userName}: "${wn.message.substring(0, 150)}${wn.message.length > 150 ? '...' : ''}"`)
|
||||||
|
.join('\n');
|
||||||
|
|
||||||
|
// Summarize documents
|
||||||
|
const documentSummary = documents
|
||||||
|
.map((d: any) => `- ${d.fileName} (by ${d.uploadedBy})`)
|
||||||
|
.join('\n');
|
||||||
|
|
||||||
|
const prompt = `You are writing a closure summary for a workflow request at Royal Enfield. Write a practical, realistic conclusion that an employee would write when closing a request.
|
||||||
|
|
||||||
|
**Request:**
|
||||||
|
${requestNumber} - ${requestTitle}
|
||||||
|
Description: ${requestDescription}
|
||||||
|
Priority: ${priority}
|
||||||
|
|
||||||
|
**What Happened:**
|
||||||
|
${approvalSummary || 'No approvals recorded'}
|
||||||
|
|
||||||
|
**Discussions (if any):**
|
||||||
|
${workNoteSummary || 'No work notes'}
|
||||||
|
|
||||||
|
**Documents:**
|
||||||
|
${documentSummary || 'No documents'}
|
||||||
|
|
||||||
|
**YOUR TASK:**
|
||||||
|
Write a brief, professional conclusion (approximately ${targetWordCount} words, max ${maxLength} characters) that:
|
||||||
|
- Summarizes what was requested and the final decision
|
||||||
|
- Mentions who approved it and any key comments
|
||||||
|
- Notes the outcome and next steps (if applicable)
|
||||||
|
- Uses clear, factual language without time-specific references
|
||||||
|
- Is suitable for permanent archiving and future reference
|
||||||
|
- Sounds natural and human-written (not AI-generated)
|
||||||
|
|
||||||
|
**IMPORTANT:**
|
||||||
|
- Be concise and direct
|
||||||
|
- MUST stay within ${maxLength} characters limit
|
||||||
|
- No time-specific words like "today", "now", "currently", "recently"
|
||||||
|
- No corporate jargon or buzzwords
|
||||||
|
- No emojis or excessive formatting
|
||||||
|
- Write like a professional documenting a completed process
|
||||||
|
- Focus on facts: what was requested, who approved, what was decided
|
||||||
|
- Use past tense for completed actions
|
||||||
|
|
||||||
|
Write the conclusion now (remember: max ${maxLength} characters):`;
|
||||||
|
|
||||||
|
return prompt;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Extract key points from the AI-generated remark
|
||||||
|
*/
|
||||||
|
private extractKeyPoints(remark: string): string[] {
|
||||||
|
const keyPoints: string[] = [];
|
||||||
|
|
||||||
|
// Look for bullet points (-, •, *) or numbered items (1., 2., etc.)
|
||||||
|
const lines = remark.split('\n');
|
||||||
|
|
||||||
|
for (const line of lines) {
|
||||||
|
const trimmed = line.trim();
|
||||||
|
|
||||||
|
// Match bullet points
|
||||||
|
if (trimmed.match(/^[-•*]\s+(.+)$/)) {
|
||||||
|
const point = trimmed.replace(/^[-•*]\s+/, '');
|
||||||
|
if (point.length > 10) { // Ignore very short lines
|
||||||
|
keyPoints.push(point);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Match numbered items
|
||||||
|
if (trimmed.match(/^\d+\.\s+(.+)$/)) {
|
||||||
|
const point = trimmed.replace(/^\d+\.\s+/, '');
|
||||||
|
if (point.length > 10) {
|
||||||
|
keyPoints.push(point);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// If no bullet points found, extract first few sentences
|
||||||
|
if (keyPoints.length === 0) {
|
||||||
|
const sentences = remark.split(/[.!?]+/).filter(s => s.trim().length > 20);
|
||||||
|
keyPoints.push(...sentences.slice(0, 3).map(s => s.trim()));
|
||||||
|
}
|
||||||
|
|
||||||
|
return keyPoints.slice(0, 5); // Max 5 key points
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Calculate confidence score based on response quality
|
||||||
|
*/
|
||||||
|
private calculateConfidence(remark: string, context: any): number {
|
||||||
|
let score = 0.6; // Base score (slightly higher for new prompt)
|
||||||
|
|
||||||
|
// Check if remark has good length (100-400 chars - more realistic)
|
||||||
|
if (remark.length >= 100 && remark.length <= 400) {
|
||||||
|
score += 0.2;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if remark mentions key elements
|
||||||
|
if (remark.toLowerCase().includes('approv')) {
|
||||||
|
score += 0.1;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if remark is not too generic
|
||||||
|
if (remark.length > 80 && !remark.toLowerCase().includes('lorem ipsum')) {
|
||||||
|
score += 0.1;
|
||||||
|
}
|
||||||
|
|
||||||
|
return Math.min(1.0, score);
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Check if AI service is available
|
||||||
|
*/
|
||||||
|
isAvailable(): boolean {
|
||||||
|
return this.provider !== null;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export const aiService = new AIService();
|
||||||
|
|
||||||
@ -4,7 +4,8 @@ import { Participant } from '@models/Participant';
|
|||||||
import { TatAlert } from '@models/TatAlert';
|
import { TatAlert } from '@models/TatAlert';
|
||||||
import { ApprovalAction } from '../types/approval.types';
|
import { ApprovalAction } from '../types/approval.types';
|
||||||
import { ApprovalStatus, WorkflowStatus } from '../types/common.types';
|
import { ApprovalStatus, WorkflowStatus } from '../types/common.types';
|
||||||
import { calculateElapsedHours, calculateTATPercentage } from '@utils/helpers';
|
import { calculateTATPercentage } from '@utils/helpers';
|
||||||
|
import { calculateElapsedWorkingHours } from '@utils/tatTimeUtils';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
import { Op } from 'sequelize';
|
import { Op } from 'sequelize';
|
||||||
import { notificationService } from './notification.service';
|
import { notificationService } from './notification.service';
|
||||||
@ -12,13 +13,18 @@ import { activityService } from './activity.service';
|
|||||||
import { tatSchedulerService } from './tatScheduler.service';
|
import { tatSchedulerService } from './tatScheduler.service';
|
||||||
|
|
||||||
export class ApprovalService {
|
export class ApprovalService {
|
||||||
async approveLevel(levelId: string, action: ApprovalAction, _userId: string): Promise<ApprovalLevel | null> {
|
async approveLevel(levelId: string, action: ApprovalAction, _userId: string, requestMetadata?: { ipAddress?: string | null; userAgent?: string | null }): Promise<ApprovalLevel | null> {
|
||||||
try {
|
try {
|
||||||
const level = await ApprovalLevel.findByPk(levelId);
|
const level = await ApprovalLevel.findByPk(levelId);
|
||||||
if (!level) return null;
|
if (!level) return null;
|
||||||
|
|
||||||
|
// Get workflow to determine priority for working hours calculation
|
||||||
|
const wf = await WorkflowRequest.findByPk(level.requestId);
|
||||||
|
const priority = ((wf as any)?.priority || 'standard').toString().toLowerCase();
|
||||||
|
|
||||||
const now = new Date();
|
const now = new Date();
|
||||||
const elapsedHours = calculateElapsedHours(level.levelStartTime || level.createdAt, now);
|
// Calculate elapsed hours using working hours logic (matches frontend)
|
||||||
|
const elapsedHours = await calculateElapsedWorkingHours(level.levelStartTime || level.createdAt, now, priority);
|
||||||
const tatPercentage = calculateTATPercentage(elapsedHours, level.tatHours);
|
const tatPercentage = calculateTATPercentage(elapsedHours, level.tatHours);
|
||||||
|
|
||||||
const updateData = {
|
const updateData = {
|
||||||
@ -60,10 +66,7 @@ export class ApprovalService {
|
|||||||
// Don't fail the approval if TAT alert update fails
|
// Don't fail the approval if TAT alert update fails
|
||||||
}
|
}
|
||||||
|
|
||||||
// Load workflow for titles and initiator
|
// Handle approval - move to next level or close workflow (wf already loaded above)
|
||||||
const wf = await WorkflowRequest.findByPk(level.requestId);
|
|
||||||
|
|
||||||
// Handle approval - move to next level or close workflow
|
|
||||||
if (action.action === 'APPROVE') {
|
if (action.action === 'APPROVE') {
|
||||||
if (level.isFinalApprover) {
|
if (level.isFinalApprover) {
|
||||||
// Final approver - close workflow as APPROVED
|
// Final approver - close workflow as APPROVED
|
||||||
@ -76,22 +79,167 @@ export class ApprovalService {
|
|||||||
{ where: { requestId: level.requestId } }
|
{ where: { requestId: level.requestId } }
|
||||||
);
|
);
|
||||||
logger.info(`Final approver approved. Workflow ${level.requestId} closed as APPROVED`);
|
logger.info(`Final approver approved. Workflow ${level.requestId} closed as APPROVED`);
|
||||||
// Notify initiator
|
|
||||||
|
// Log final approval activity first (so it's included in AI context)
|
||||||
|
activityService.log({
|
||||||
|
requestId: level.requestId,
|
||||||
|
type: 'approval',
|
||||||
|
user: { userId: level.approverId, name: level.approverName },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'Approved',
|
||||||
|
details: `Request approved and finalized by ${level.approverName || level.approverEmail}. Awaiting conclusion remark from initiator.`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
|
});
|
||||||
|
|
||||||
|
// Generate AI conclusion remark ASYNCHRONOUSLY (don't wait)
|
||||||
|
// This runs in the background without blocking the approval response
|
||||||
|
(async () => {
|
||||||
|
try {
|
||||||
|
const { aiService } = await import('./ai.service');
|
||||||
|
const { ConclusionRemark } = await import('@models/index');
|
||||||
|
const { ApprovalLevel } = await import('@models/ApprovalLevel');
|
||||||
|
const { WorkNote } = await import('@models/WorkNote');
|
||||||
|
const { Document } = await import('@models/Document');
|
||||||
|
const { Activity } = await import('@models/Activity');
|
||||||
|
const { getConfigValue } = await import('./configReader.service');
|
||||||
|
|
||||||
|
// Check if AI features and remark generation are enabled in admin config
|
||||||
|
const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
|
||||||
|
|
||||||
|
if (aiEnabled && remarkGenerationEnabled && aiService.isAvailable()) {
|
||||||
|
logger.info(`[Approval] 🔄 Starting background AI conclusion generation for ${level.requestId}...`);
|
||||||
|
|
||||||
|
// Gather context for AI generation
|
||||||
|
const approvalLevels = await ApprovalLevel.findAll({
|
||||||
|
where: { requestId: level.requestId },
|
||||||
|
order: [['levelNumber', 'ASC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
const workNotes = await WorkNote.findAll({
|
||||||
|
where: { requestId: level.requestId },
|
||||||
|
order: [['createdAt', 'ASC']],
|
||||||
|
limit: 20
|
||||||
|
});
|
||||||
|
|
||||||
|
const documents = await Document.findAll({
|
||||||
|
where: { requestId: level.requestId },
|
||||||
|
order: [['uploadedAt', 'DESC']]
|
||||||
|
});
|
||||||
|
|
||||||
|
const activities = await Activity.findAll({
|
||||||
|
where: { requestId: level.requestId },
|
||||||
|
order: [['createdAt', 'ASC']],
|
||||||
|
limit: 50
|
||||||
|
});
|
||||||
|
|
||||||
|
// Build context object
|
||||||
|
const context = {
|
||||||
|
requestTitle: (wf as any).title,
|
||||||
|
requestDescription: (wf as any).description,
|
||||||
|
requestNumber: (wf as any).requestNumber,
|
||||||
|
priority: (wf as any).priority,
|
||||||
|
approvalFlow: approvalLevels.map((l: any) => ({
|
||||||
|
levelNumber: l.levelNumber,
|
||||||
|
approverName: l.approverName,
|
||||||
|
status: l.status,
|
||||||
|
comments: l.comments,
|
||||||
|
actionDate: l.actionDate,
|
||||||
|
tatHours: Number(l.tatHours || 0),
|
||||||
|
elapsedHours: Number(l.elapsedHours || 0)
|
||||||
|
})),
|
||||||
|
workNotes: workNotes.map((note: any) => ({
|
||||||
|
userName: note.userName,
|
||||||
|
message: note.message,
|
||||||
|
createdAt: note.createdAt
|
||||||
|
})),
|
||||||
|
documents: documents.map((doc: any) => ({
|
||||||
|
fileName: doc.originalFileName || doc.fileName,
|
||||||
|
uploadedBy: doc.uploadedBy,
|
||||||
|
uploadedAt: doc.uploadedAt
|
||||||
|
})),
|
||||||
|
activities: activities.map((activity: any) => ({
|
||||||
|
type: activity.activityType,
|
||||||
|
action: activity.activityDescription,
|
||||||
|
details: activity.activityDescription,
|
||||||
|
timestamp: activity.createdAt
|
||||||
|
}))
|
||||||
|
};
|
||||||
|
|
||||||
|
const aiResult = await aiService.generateConclusionRemark(context);
|
||||||
|
|
||||||
|
// Save to database
|
||||||
|
await ConclusionRemark.create({
|
||||||
|
requestId: level.requestId,
|
||||||
|
aiGeneratedRemark: aiResult.remark,
|
||||||
|
aiModelUsed: aiResult.provider,
|
||||||
|
aiConfidenceScore: aiResult.confidence,
|
||||||
|
finalRemark: null,
|
||||||
|
editedBy: null,
|
||||||
|
isEdited: false,
|
||||||
|
editCount: 0,
|
||||||
|
approvalSummary: {
|
||||||
|
totalLevels: approvalLevels.length,
|
||||||
|
approvedLevels: approvalLevels.filter((l: any) => l.status === 'APPROVED').length,
|
||||||
|
averageTatUsage: approvalLevels.reduce((sum: number, l: any) =>
|
||||||
|
sum + Number(l.tatPercentageUsed || 0), 0) / (approvalLevels.length || 1)
|
||||||
|
},
|
||||||
|
documentSummary: {
|
||||||
|
totalDocuments: documents.length,
|
||||||
|
documentNames: documents.map((d: any) => d.originalFileName || d.fileName)
|
||||||
|
},
|
||||||
|
keyDiscussionPoints: aiResult.keyPoints,
|
||||||
|
generatedAt: new Date(),
|
||||||
|
finalizedAt: null
|
||||||
|
} as any);
|
||||||
|
|
||||||
|
logger.info(`[Approval] ✅ Background AI conclusion completed for ${level.requestId}`);
|
||||||
|
|
||||||
|
// Log activity
|
||||||
|
activityService.log({
|
||||||
|
requestId: level.requestId,
|
||||||
|
type: 'ai_conclusion_generated',
|
||||||
|
user: { userId: 'system', name: 'System' },
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
action: 'AI Conclusion Generated',
|
||||||
|
details: 'AI-powered conclusion remark generated for review by initiator',
|
||||||
|
ipAddress: undefined, // System-generated, no IP
|
||||||
|
userAgent: undefined // System-generated, no user agent
|
||||||
|
});
|
||||||
|
} else {
|
||||||
|
// Log why AI generation was skipped
|
||||||
|
if (!aiEnabled) {
|
||||||
|
logger.info(`[Approval] AI features disabled in admin config, skipping conclusion generation for ${level.requestId}`);
|
||||||
|
} else if (!remarkGenerationEnabled) {
|
||||||
|
logger.info(`[Approval] AI remark generation disabled in admin config, skipping for ${level.requestId}`);
|
||||||
|
} else if (!aiService.isAvailable()) {
|
||||||
|
logger.warn(`[Approval] AI service unavailable for ${level.requestId}, skipping conclusion generation`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} catch (aiError) {
|
||||||
|
logger.error(`[Approval] Background AI generation failed for ${level.requestId}:`, aiError);
|
||||||
|
// Silent failure - initiator can write manually
|
||||||
|
}
|
||||||
|
})().catch(err => {
|
||||||
|
// Catch any unhandled promise rejections
|
||||||
|
logger.error(`[Approval] Unhandled error in background AI generation:`, err);
|
||||||
|
});
|
||||||
|
|
||||||
|
// Notify initiator about approval and pending conclusion step
|
||||||
if (wf) {
|
if (wf) {
|
||||||
await notificationService.sendToUsers([ (wf as any).initiatorId ], {
|
await notificationService.sendToUsers([ (wf as any).initiatorId ], {
|
||||||
title: `Approved: ${(wf as any).requestNumber}`,
|
title: `Request Approved - Closure Pending`,
|
||||||
body: `${(wf as any).title}`,
|
body: `Your request "${(wf as any).title}" has been fully approved. Please review and finalize the conclusion remark to close the request.`,
|
||||||
requestNumber: (wf as any).requestNumber,
|
requestNumber: (wf as any).requestNumber,
|
||||||
url: `/request/${(wf as any).requestNumber}`
|
|
||||||
});
|
|
||||||
activityService.log({
|
|
||||||
requestId: level.requestId,
|
requestId: level.requestId,
|
||||||
type: 'approval',
|
url: `/request/${(wf as any).requestNumber}`,
|
||||||
user: { userId: level.approverId, name: level.approverName },
|
type: 'approval_pending_closure',
|
||||||
timestamp: new Date().toISOString(),
|
priority: 'HIGH',
|
||||||
action: 'Approved',
|
actionRequired: true
|
||||||
details: `Request approved and finalized by ${level.approverName || level.approverEmail}`
|
|
||||||
});
|
});
|
||||||
|
|
||||||
|
logger.info(`[Approval] ✅ Final approval complete for ${level.requestId}. Initiator notified to finalize conclusion.`);
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
// Not final - move to next level
|
// Not final - move to next level
|
||||||
@ -151,7 +299,9 @@ export class ApprovalService {
|
|||||||
user: { userId: level.approverId, name: level.approverName },
|
user: { userId: level.approverId, name: level.approverName },
|
||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Approved',
|
action: 'Approved',
|
||||||
details: `Request approved and forwarded to ${(nextLevel as any).approverName || (nextLevel as any).approverEmail} by ${level.approverName || level.approverEmail}`
|
details: `Request approved and forwarded to ${(nextLevel as any).approverName || (nextLevel as any).approverEmail} by ${level.approverName || level.approverEmail}`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
@ -178,7 +328,9 @@ export class ApprovalService {
|
|||||||
user: { userId: level.approverId, name: level.approverName },
|
user: { userId: level.approverId, name: level.approverName },
|
||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Approved',
|
action: 'Approved',
|
||||||
details: `Request approved and finalized by ${level.approverName || level.approverEmail}`
|
details: `Request approved and finalized by ${level.approverName || level.approverEmail}`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -229,7 +381,9 @@ export class ApprovalService {
|
|||||||
user: { userId: level.approverId, name: level.approverName },
|
user: { userId: level.approverId, name: level.approverName },
|
||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Rejected',
|
action: 'Rejected',
|
||||||
details: `Request rejected by ${level.approverName || level.approverEmail}. Reason: ${action.rejectionReason || action.comments || 'No reason provided'}`
|
details: `Request rejected by ${level.approverName || level.approverEmail}. Reason: ${action.rejectionReason || action.comments || 'No reason provided'}`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@ -89,7 +89,7 @@ export class AuthService {
|
|||||||
designation: userData.designation || null,
|
designation: userData.designation || null,
|
||||||
phone: userData.phone || null,
|
phone: userData.phone || null,
|
||||||
isActive: true,
|
isActive: true,
|
||||||
isAdmin: false,
|
role: 'USER',
|
||||||
lastLogin: new Date()
|
lastLogin: new Date()
|
||||||
});
|
});
|
||||||
|
|
||||||
@ -117,7 +117,7 @@ export class AuthService {
|
|||||||
displayName: user.displayName || null,
|
displayName: user.displayName || null,
|
||||||
department: user.department || null,
|
department: user.department || null,
|
||||||
designation: user.designation || null,
|
designation: user.designation || null,
|
||||||
isAdmin: user.isAdmin
|
role: user.role
|
||||||
},
|
},
|
||||||
accessToken,
|
accessToken,
|
||||||
refreshToken
|
refreshToken
|
||||||
@ -145,7 +145,7 @@ export class AuthService {
|
|||||||
userId: user.userId,
|
userId: user.userId,
|
||||||
employeeId: user.employeeId,
|
employeeId: user.employeeId,
|
||||||
email: user.email,
|
email: user.email,
|
||||||
role: user.isAdmin ? 'admin' : 'user'
|
role: user.role // Keep uppercase: USER, MANAGEMENT, ADMIN
|
||||||
};
|
};
|
||||||
|
|
||||||
const options: SignOptions = {
|
const options: SignOptions = {
|
||||||
|
|||||||
@ -37,10 +37,10 @@ export async function getConfigValue(configKey: string, defaultValue: string = '
|
|||||||
const value = (result[0] as any).config_value;
|
const value = (result[0] as any).config_value;
|
||||||
configCache.set(configKey, value);
|
configCache.set(configKey, value);
|
||||||
|
|
||||||
// Set cache expiry if not set
|
// Always update cache expiry when loading from database
|
||||||
if (!cacheExpiry) {
|
cacheExpiry = new Date(Date.now() + CACHE_DURATION_MS);
|
||||||
cacheExpiry = new Date(Date.now() + CACHE_DURATION_MS);
|
|
||||||
}
|
logger.info(`[ConfigReader] Loaded config '${configKey}' = '${value}' from database (cached for 5min)`);
|
||||||
|
|
||||||
return value;
|
return value;
|
||||||
}
|
}
|
||||||
@ -119,3 +119,22 @@ export async function preloadConfigurations(): Promise<void> {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get AI provider configurations
|
||||||
|
*/
|
||||||
|
export async function getAIProviderConfig(): Promise<{
|
||||||
|
provider: string;
|
||||||
|
claudeKey: string;
|
||||||
|
openaiKey: string;
|
||||||
|
geminiKey: string;
|
||||||
|
enabled: boolean;
|
||||||
|
}> {
|
||||||
|
const provider = await getConfigValue('AI_PROVIDER', 'claude');
|
||||||
|
const claudeKey = await getConfigValue('CLAUDE_API_KEY', '');
|
||||||
|
const openaiKey = await getConfigValue('OPENAI_API_KEY', '');
|
||||||
|
const geminiKey = await getConfigValue('GEMINI_API_KEY', '');
|
||||||
|
const enabled = await getConfigBoolean('AI_ENABLED', true);
|
||||||
|
|
||||||
|
return { provider, claudeKey, openaiKey, geminiKey, enabled };
|
||||||
|
}
|
||||||
|
|
||||||
|
|||||||
@ -305,6 +305,111 @@ export async function seedDefaultConfigurations(): Promise<void> {
|
|||||||
NOW(),
|
NOW(),
|
||||||
NOW()
|
NOW()
|
||||||
),
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_PROVIDER',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'claude',
|
||||||
|
'STRING',
|
||||||
|
'AI Provider',
|
||||||
|
'Active AI provider for conclusion generation (claude, openai, or gemini)',
|
||||||
|
'claude',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"enum": ["claude", "openai", "gemini"], "required": true}'::jsonb,
|
||||||
|
'select',
|
||||||
|
'["claude", "openai", "gemini"]'::jsonb,
|
||||||
|
22,
|
||||||
|
false,
|
||||||
|
NULL,
|
||||||
|
NULL,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'CLAUDE_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'Claude API Key',
|
||||||
|
'API key for Claude (Anthropic) - Get from console.anthropic.com',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"pattern": "^sk-ant-", "minLength": 40}'::jsonb,
|
||||||
|
'input',
|
||||||
|
NULL,
|
||||||
|
23,
|
||||||
|
false,
|
||||||
|
NULL,
|
||||||
|
NULL,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'OPENAI_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'OpenAI API Key',
|
||||||
|
'API key for OpenAI (GPT-4) - Get from platform.openai.com',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"pattern": "^sk-", "minLength": 40}'::jsonb,
|
||||||
|
'input',
|
||||||
|
NULL,
|
||||||
|
24,
|
||||||
|
false,
|
||||||
|
NULL,
|
||||||
|
NULL,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'GEMINI_API_KEY',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'',
|
||||||
|
'STRING',
|
||||||
|
'Gemini API Key',
|
||||||
|
'API key for Gemini (Google) - Get from ai.google.dev',
|
||||||
|
'',
|
||||||
|
true,
|
||||||
|
true,
|
||||||
|
'{"minLength": 20}'::jsonb,
|
||||||
|
'input',
|
||||||
|
NULL,
|
||||||
|
25,
|
||||||
|
false,
|
||||||
|
NULL,
|
||||||
|
NULL,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
|
(
|
||||||
|
gen_random_uuid(),
|
||||||
|
'AI_ENABLED',
|
||||||
|
'AI_CONFIGURATION',
|
||||||
|
'true',
|
||||||
|
'BOOLEAN',
|
||||||
|
'Enable AI Features',
|
||||||
|
'Master toggle to enable/disable all AI-powered features in the system',
|
||||||
|
'true',
|
||||||
|
true,
|
||||||
|
false,
|
||||||
|
'{"type": "boolean"}'::jsonb,
|
||||||
|
'toggle',
|
||||||
|
NULL,
|
||||||
|
26,
|
||||||
|
false,
|
||||||
|
NULL,
|
||||||
|
NULL,
|
||||||
|
NOW(),
|
||||||
|
NOW()
|
||||||
|
),
|
||||||
-- Notification Rules
|
-- Notification Rules
|
||||||
(
|
(
|
||||||
gen_random_uuid(),
|
gen_random_uuid(),
|
||||||
@ -563,7 +668,7 @@ export async function seedDefaultConfigurations(): Promise<void> {
|
|||||||
)
|
)
|
||||||
`, { type: QueryTypes.INSERT });
|
`, { type: QueryTypes.INSERT });
|
||||||
|
|
||||||
logger.info('[Config Seed] ✅ Default configurations seeded successfully (20 settings across 7 categories)');
|
logger.info('[Config Seed] ✅ Default configurations seeded successfully (30 settings across 7 categories)');
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error('[Config Seed] Error seeding configurations:', error);
|
logger.error('[Config Seed] Error seeding configurations:', error);
|
||||||
// Don't throw - let server start even if seeding fails
|
// Don't throw - let server start even if seeding fails
|
||||||
|
|||||||
1685
src/services/dashboard.service.ts
Normal file
1685
src/services/dashboard.service.ts
Normal file
File diff suppressed because it is too large
Load Diff
@ -1,9 +1,22 @@
|
|||||||
import webpush from 'web-push';
|
import webpush from 'web-push';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
import { Subscription } from '@models/Subscription';
|
import { Subscription } from '@models/Subscription';
|
||||||
|
import { Notification } from '@models/Notification';
|
||||||
|
|
||||||
type PushSubscription = any; // Web Push protocol JSON
|
type PushSubscription = any; // Web Push protocol JSON
|
||||||
|
|
||||||
|
interface NotificationPayload {
|
||||||
|
title: string;
|
||||||
|
body: string;
|
||||||
|
requestId?: string;
|
||||||
|
requestNumber?: string;
|
||||||
|
url?: string;
|
||||||
|
type?: string;
|
||||||
|
priority?: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
|
||||||
|
actionRequired?: boolean;
|
||||||
|
metadata?: any;
|
||||||
|
}
|
||||||
|
|
||||||
class NotificationService {
|
class NotificationService {
|
||||||
private userIdToSubscriptions: Map<string, PushSubscription[]> = new Map();
|
private userIdToSubscriptions: Map<string, PushSubscription[]> = new Map();
|
||||||
|
|
||||||
@ -44,23 +57,76 @@ class NotificationService {
|
|||||||
logger.info(`Subscription stored for user ${userId}. Total: ${list.length}`);
|
logger.info(`Subscription stored for user ${userId}. Total: ${list.length}`);
|
||||||
}
|
}
|
||||||
|
|
||||||
async sendToUsers(userIds: string[], payload: any) {
|
/**
|
||||||
|
* Send notification to users - saves to DB and sends via push/socket
|
||||||
|
*/
|
||||||
|
async sendToUsers(userIds: string[], payload: NotificationPayload) {
|
||||||
const message = JSON.stringify(payload);
|
const message = JSON.stringify(payload);
|
||||||
for (const uid of userIds) {
|
const sentVia: string[] = ['IN_APP']; // Always save to DB for in-app display
|
||||||
let subs = this.userIdToSubscriptions.get(uid) || [];
|
|
||||||
// Load from DB if memory empty
|
for (const userId of userIds) {
|
||||||
if (subs.length === 0) {
|
try {
|
||||||
|
// 1. Save notification to database for in-app display
|
||||||
|
const notification = await Notification.create({
|
||||||
|
userId,
|
||||||
|
requestId: payload.requestId,
|
||||||
|
notificationType: payload.type || 'general',
|
||||||
|
title: payload.title,
|
||||||
|
message: payload.body,
|
||||||
|
isRead: false,
|
||||||
|
priority: payload.priority || 'MEDIUM',
|
||||||
|
actionUrl: payload.url,
|
||||||
|
actionRequired: payload.actionRequired || false,
|
||||||
|
metadata: {
|
||||||
|
requestNumber: payload.requestNumber,
|
||||||
|
...payload.metadata
|
||||||
|
},
|
||||||
|
sentVia,
|
||||||
|
emailSent: false,
|
||||||
|
smsSent: false,
|
||||||
|
pushSent: false
|
||||||
|
} as any);
|
||||||
|
|
||||||
|
logger.info(`[Notification] Created in-app notification for user ${userId}: ${payload.title}`);
|
||||||
|
|
||||||
|
// 2. Emit real-time socket event for immediate delivery
|
||||||
try {
|
try {
|
||||||
const rows = await Subscription.findAll({ where: { userId: uid } });
|
const { emitToUser } = require('../realtime/socket');
|
||||||
subs = rows.map((r: any) => ({ endpoint: r.endpoint, keys: { p256dh: r.p256dh, auth: r.auth } }));
|
if (emitToUser) {
|
||||||
} catch {}
|
emitToUser(userId, 'notification:new', {
|
||||||
}
|
notification: notification.toJSON(),
|
||||||
for (const sub of subs) {
|
...payload
|
||||||
try {
|
});
|
||||||
await webpush.sendNotification(sub, message);
|
logger.info(`[Notification] Emitted socket event to user ${userId}`);
|
||||||
} catch (err) {
|
}
|
||||||
logger.error(`Failed to send push to ${uid}:`, err);
|
} catch (socketError) {
|
||||||
|
logger.warn(`[Notification] Socket emit failed (not critical):`, socketError);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// 3. Send push notification (if user has subscriptions)
|
||||||
|
let subs = this.userIdToSubscriptions.get(userId) || [];
|
||||||
|
// Load from DB if memory empty
|
||||||
|
if (subs.length === 0) {
|
||||||
|
try {
|
||||||
|
const rows = await Subscription.findAll({ where: { userId } });
|
||||||
|
subs = rows.map((r: any) => ({ endpoint: r.endpoint, keys: { p256dh: r.p256dh, auth: r.auth } }));
|
||||||
|
} catch {}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (subs.length > 0) {
|
||||||
|
for (const sub of subs) {
|
||||||
|
try {
|
||||||
|
await webpush.sendNotification(sub, message);
|
||||||
|
await notification.update({ pushSent: true });
|
||||||
|
logger.info(`[Notification] Push sent to user ${userId}`);
|
||||||
|
} catch (err) {
|
||||||
|
logger.error(`Failed to send push to ${userId}:`, err);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
logger.error(`[Notification] Failed to create notification for user ${userId}:`, error);
|
||||||
|
// Continue to next user even if one fails
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -70,3 +136,4 @@ export const notificationService = new NotificationService();
|
|||||||
notificationService.configure();
|
notificationService.configure();
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@ -1,5 +1,5 @@
|
|||||||
import { tatQueue } from '../queues/tatQueue';
|
import { tatQueue } from '../queues/tatQueue';
|
||||||
import { calculateDelay, addWorkingHours, addCalendarHours } from '@utils/tatTimeUtils';
|
import { calculateDelay, addWorkingHours, addWorkingHoursExpress } from '@utils/tatTimeUtils';
|
||||||
import { getTatThresholds } from './configReader.service';
|
import { getTatThresholds } from './configReader.service';
|
||||||
import dayjs from 'dayjs';
|
import dayjs from 'dayjs';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
@ -44,20 +44,23 @@ export class TatSchedulerService {
|
|||||||
let breachTime: Date;
|
let breachTime: Date;
|
||||||
|
|
||||||
if (isExpress) {
|
if (isExpress) {
|
||||||
// EXPRESS: 24/7 calculation - no exclusions
|
// EXPRESS: All calendar days (Mon-Sun, including weekends/holidays) but working hours only (9 AM - 6 PM)
|
||||||
threshold1Time = addCalendarHours(now, tatDurationHours * (thresholds.first / 100)).toDate();
|
const t1 = await addWorkingHoursExpress(now, tatDurationHours * (thresholds.first / 100));
|
||||||
threshold2Time = addCalendarHours(now, tatDurationHours * (thresholds.second / 100)).toDate();
|
const t2 = await addWorkingHoursExpress(now, tatDurationHours * (thresholds.second / 100));
|
||||||
breachTime = addCalendarHours(now, tatDurationHours).toDate();
|
const tBreach = await addWorkingHoursExpress(now, tatDurationHours);
|
||||||
logger.info(`[TAT Scheduler] Using EXPRESS mode (24/7) - no holiday/weekend exclusions`);
|
threshold1Time = t1.toDate();
|
||||||
|
threshold2Time = t2.toDate();
|
||||||
|
breachTime = tBreach.toDate();
|
||||||
|
logger.info(`[TAT Scheduler] Using EXPRESS mode - all days, working hours only (9 AM - 6 PM)`);
|
||||||
} else {
|
} else {
|
||||||
// STANDARD: Working hours only, excludes holidays
|
// STANDARD: Working days only (Mon-Fri), working hours (9 AM - 6 PM), excludes holidays
|
||||||
const t1 = await addWorkingHours(now, tatDurationHours * (thresholds.first / 100));
|
const t1 = await addWorkingHours(now, tatDurationHours * (thresholds.first / 100));
|
||||||
const t2 = await addWorkingHours(now, tatDurationHours * (thresholds.second / 100));
|
const t2 = await addWorkingHours(now, tatDurationHours * (thresholds.second / 100));
|
||||||
const tBreach = await addWorkingHours(now, tatDurationHours);
|
const tBreach = await addWorkingHours(now, tatDurationHours);
|
||||||
threshold1Time = t1.toDate();
|
threshold1Time = t1.toDate();
|
||||||
threshold2Time = t2.toDate();
|
threshold2Time = t2.toDate();
|
||||||
breachTime = tBreach.toDate();
|
breachTime = tBreach.toDate();
|
||||||
logger.info(`[TAT Scheduler] Using STANDARD mode - excludes holidays, weekends, non-working hours`);
|
logger.info(`[TAT Scheduler] Using STANDARD mode - weekdays only, working hours (9 AM - 6 PM), excludes holidays`);
|
||||||
}
|
}
|
||||||
|
|
||||||
logger.info(`[TAT Scheduler] Calculating TAT milestones for request ${requestId}, level ${levelId}`);
|
logger.info(`[TAT Scheduler] Calculating TAT milestones for request ${requestId}, level ${levelId}`);
|
||||||
@ -88,38 +91,62 @@ export class TatSchedulerService {
|
|||||||
}
|
}
|
||||||
];
|
];
|
||||||
|
|
||||||
|
|
||||||
|
// Check if test mode enabled (1 hour = 1 minute)
|
||||||
|
const isTestMode = process.env.TAT_TEST_MODE === 'true';
|
||||||
|
|
||||||
|
// Check if times collide (working hours calculation issue)
|
||||||
|
const uniqueTimes = new Set(jobs.map(j => j.targetTime.getTime()));
|
||||||
|
const hasCollision = uniqueTimes.size < jobs.length;
|
||||||
|
|
||||||
|
let jobIndex = 0;
|
||||||
for (const job of jobs) {
|
for (const job of jobs) {
|
||||||
// Skip if the time has already passed
|
if (job.delay < 0) {
|
||||||
if (job.delay === 0) {
|
logger.error(`[TAT Scheduler] Skipping ${job.type} - time in past`);
|
||||||
logger.warn(`[TAT Scheduler] Skipping ${job.type} (${job.threshold}%) for level ${levelId} - time already passed`);
|
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
let spacedDelay: number;
|
||||||
|
|
||||||
|
if (isTestMode) {
|
||||||
|
// Test mode: times are already in minutes (tatTimeUtils converts hours to minutes)
|
||||||
|
// Just ensure they have minimum spacing for BullMQ reliability
|
||||||
|
spacedDelay = Math.max(job.delay, 5000) + (jobIndex * 5000);
|
||||||
|
} else if (hasCollision) {
|
||||||
|
// Production with collision: add 5-minute spacing
|
||||||
|
spacedDelay = job.delay + (jobIndex * 300000);
|
||||||
|
} else {
|
||||||
|
// Production without collision: use calculated delays
|
||||||
|
spacedDelay = job.delay;
|
||||||
|
}
|
||||||
|
|
||||||
|
const jobId = `tat-${job.type}-${requestId}-${levelId}`;
|
||||||
|
|
||||||
await tatQueue.add(
|
await tatQueue.add(
|
||||||
job.type,
|
job.type,
|
||||||
{
|
{
|
||||||
type: job.type,
|
type: job.type,
|
||||||
threshold: job.threshold, // Store actual threshold percentage in job data
|
threshold: job.threshold,
|
||||||
requestId,
|
requestId,
|
||||||
levelId,
|
levelId,
|
||||||
approverId
|
approverId
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
delay: job.delay,
|
delay: spacedDelay,
|
||||||
jobId: `tat-${job.type}-${requestId}-${levelId}`, // Generic job ID
|
jobId: jobId,
|
||||||
removeOnComplete: true,
|
removeOnComplete: {
|
||||||
|
age: 3600, // Keep for 1 hour for debugging
|
||||||
|
count: 1000
|
||||||
|
},
|
||||||
removeOnFail: false
|
removeOnFail: false
|
||||||
}
|
}
|
||||||
);
|
);
|
||||||
|
|
||||||
logger.info(
|
logger.info(`[TAT Scheduler] Scheduled ${job.type} (${job.threshold}%)`);
|
||||||
`[TAT Scheduler] Scheduled ${job.type} (${job.threshold}%) for level ${levelId} ` +
|
jobIndex++;
|
||||||
`(delay: ${Math.round(job.delay / 1000 / 60)} minutes, ` +
|
|
||||||
`target: ${dayjs(job.targetTime).format('YYYY-MM-DD HH:mm')})`
|
|
||||||
);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
logger.info(`[TAT Scheduler] ✅ TAT jobs scheduled for request ${requestId}, approver ${approverId}`);
|
logger.info(`[TAT Scheduler] TAT jobs scheduled for request ${requestId}`);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error(`[TAT Scheduler] Failed to schedule TAT jobs:`, error);
|
logger.error(`[TAT Scheduler] Failed to schedule TAT jobs:`, error);
|
||||||
throw error;
|
throw error;
|
||||||
|
|||||||
@ -1,9 +1,24 @@
|
|||||||
import { User as UserModel } from '../models/User';
|
import { User as UserModel } from '../models/User';
|
||||||
import { Op } from 'sequelize';
|
import { Op } from 'sequelize';
|
||||||
import { SSOUserData } from '../types/auth.types'; // Use shared type
|
import { SSOUserData } from '../types/auth.types'; // Use shared type
|
||||||
|
import axios from 'axios';
|
||||||
|
|
||||||
// Using UserModel type directly - interface removed to avoid duplication
|
// Using UserModel type directly - interface removed to avoid duplication
|
||||||
|
|
||||||
|
interface OktaUser {
|
||||||
|
id: string;
|
||||||
|
status: string;
|
||||||
|
profile: {
|
||||||
|
firstName?: string;
|
||||||
|
lastName?: string;
|
||||||
|
displayName?: string;
|
||||||
|
email: string;
|
||||||
|
login: string;
|
||||||
|
department?: string;
|
||||||
|
mobilePhone?: string;
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
export class UserService {
|
export class UserService {
|
||||||
async createOrUpdateUser(ssoData: SSOUserData): Promise<UserModel> {
|
async createOrUpdateUser(ssoData: SSOUserData): Promise<UserModel> {
|
||||||
// Validate required fields
|
// Validate required fields
|
||||||
@ -56,7 +71,7 @@ export class UserService {
|
|||||||
phone: ssoData.phone || null,
|
phone: ssoData.phone || null,
|
||||||
// location: (ssoData as any).location || null, // Ignored for now - schema not finalized
|
// location: (ssoData as any).location || null, // Ignored for now - schema not finalized
|
||||||
isActive: true,
|
isActive: true,
|
||||||
isAdmin: false, // Default to false, can be updated later
|
role: 'USER', // Default role for new users
|
||||||
lastLogin: now
|
lastLogin: now
|
||||||
});
|
});
|
||||||
|
|
||||||
@ -78,7 +93,84 @@ export class UserService {
|
|||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
async searchUsers(query: string, limit: number = 10, excludeUserId?: string): Promise<UserModel[]> {
|
async searchUsers(query: string, limit: number = 10, excludeUserId?: string): Promise<any[]> {
|
||||||
|
const q = (query || '').trim();
|
||||||
|
if (!q) {
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get the current user's email to exclude them from results
|
||||||
|
let excludeEmail: string | undefined;
|
||||||
|
if (excludeUserId) {
|
||||||
|
try {
|
||||||
|
const currentUser = await UserModel.findByPk(excludeUserId);
|
||||||
|
if (currentUser) {
|
||||||
|
excludeEmail = (currentUser as any).email?.toLowerCase();
|
||||||
|
}
|
||||||
|
} catch (err) {
|
||||||
|
// Ignore error - filtering will still work by userId for local search
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Search Okta users
|
||||||
|
try {
|
||||||
|
const oktaDomain = process.env.OKTA_DOMAIN;
|
||||||
|
const oktaApiToken = process.env.OKTA_API_TOKEN;
|
||||||
|
|
||||||
|
if (!oktaDomain || !oktaApiToken) {
|
||||||
|
console.error('❌ Okta credentials not configured');
|
||||||
|
// Fallback to local DB search
|
||||||
|
return await this.searchUsersLocal(q, limit, excludeUserId);
|
||||||
|
}
|
||||||
|
|
||||||
|
const response = await axios.get(`${oktaDomain}/api/v1/users`, {
|
||||||
|
params: { q, limit: Math.min(limit, 50) },
|
||||||
|
headers: {
|
||||||
|
'Authorization': `SSWS ${oktaApiToken}`,
|
||||||
|
'Accept': 'application/json'
|
||||||
|
},
|
||||||
|
timeout: 5000
|
||||||
|
});
|
||||||
|
|
||||||
|
const oktaUsers: OktaUser[] = response.data || [];
|
||||||
|
|
||||||
|
// Transform Okta users to our format
|
||||||
|
return oktaUsers
|
||||||
|
.filter(u => {
|
||||||
|
// Filter out inactive users
|
||||||
|
if (u.status !== 'ACTIVE') return false;
|
||||||
|
|
||||||
|
// Filter out current user by Okta ID or email
|
||||||
|
if (excludeUserId && u.id === excludeUserId) return false;
|
||||||
|
if (excludeEmail) {
|
||||||
|
const userEmail = (u.profile.email || u.profile.login || '').toLowerCase();
|
||||||
|
if (userEmail === excludeEmail) return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return true;
|
||||||
|
})
|
||||||
|
.map(u => ({
|
||||||
|
userId: u.id,
|
||||||
|
oktaSub: u.id,
|
||||||
|
email: u.profile.email || u.profile.login,
|
||||||
|
displayName: u.profile.displayName || `${u.profile.firstName || ''} ${u.profile.lastName || ''}`.trim(),
|
||||||
|
firstName: u.profile.firstName,
|
||||||
|
lastName: u.profile.lastName,
|
||||||
|
department: u.profile.department,
|
||||||
|
phone: u.profile.mobilePhone,
|
||||||
|
isActive: true
|
||||||
|
}));
|
||||||
|
} catch (error: any) {
|
||||||
|
console.error('❌ Okta user search failed:', error.message);
|
||||||
|
// Fallback to local DB search
|
||||||
|
return await this.searchUsersLocal(q, limit, excludeUserId);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Fallback: Search users in local database
|
||||||
|
*/
|
||||||
|
private async searchUsersLocal(query: string, limit: number = 10, excludeUserId?: string): Promise<UserModel[]> {
|
||||||
const q = (query || '').trim();
|
const q = (query || '').trim();
|
||||||
if (!q) {
|
if (!q) {
|
||||||
return [];
|
return [];
|
||||||
@ -100,4 +192,66 @@ export class UserService {
|
|||||||
limit: Math.min(Math.max(limit || 10, 1), 50),
|
limit: Math.min(Math.max(limit || 10, 1), 50),
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Ensure user exists in database (create if not exists)
|
||||||
|
* Used when tagging users from Okta search results
|
||||||
|
*/
|
||||||
|
async ensureUserExists(oktaUserData: {
|
||||||
|
userId: string;
|
||||||
|
email: string;
|
||||||
|
displayName?: string;
|
||||||
|
firstName?: string;
|
||||||
|
lastName?: string;
|
||||||
|
department?: string;
|
||||||
|
phone?: string;
|
||||||
|
}): Promise<UserModel> {
|
||||||
|
const email = oktaUserData.email.toLowerCase();
|
||||||
|
|
||||||
|
// Check if user already exists
|
||||||
|
let user = await UserModel.findOne({
|
||||||
|
where: {
|
||||||
|
[Op.or]: [
|
||||||
|
{ email },
|
||||||
|
{ oktaSub: oktaUserData.userId }
|
||||||
|
]
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
if (user) {
|
||||||
|
// Update existing user with latest info from Okta
|
||||||
|
await user.update({
|
||||||
|
oktaSub: oktaUserData.userId,
|
||||||
|
email,
|
||||||
|
firstName: oktaUserData.firstName || user.firstName,
|
||||||
|
lastName: oktaUserData.lastName || user.lastName,
|
||||||
|
displayName: oktaUserData.displayName || user.displayName,
|
||||||
|
department: oktaUserData.department || user.department,
|
||||||
|
phone: oktaUserData.phone || user.phone,
|
||||||
|
isActive: true,
|
||||||
|
updatedAt: new Date()
|
||||||
|
});
|
||||||
|
return user;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create new user
|
||||||
|
user = await UserModel.create({
|
||||||
|
oktaSub: oktaUserData.userId,
|
||||||
|
email,
|
||||||
|
employeeId: null, // Will be updated on first login
|
||||||
|
firstName: oktaUserData.firstName || null,
|
||||||
|
lastName: oktaUserData.lastName || null,
|
||||||
|
displayName: oktaUserData.displayName || email.split('@')[0],
|
||||||
|
department: oktaUserData.department || null,
|
||||||
|
designation: null,
|
||||||
|
phone: oktaUserData.phone || null,
|
||||||
|
isActive: true,
|
||||||
|
role: 'USER',
|
||||||
|
lastLogin: undefined, // Not logged in yet, just created for tagging
|
||||||
|
createdAt: new Date(),
|
||||||
|
updatedAt: new Date()
|
||||||
|
});
|
||||||
|
|
||||||
|
return user;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@ -460,7 +460,7 @@ export class WorkflowService {
|
|||||||
limit,
|
limit,
|
||||||
order: [['createdAt', 'DESC']],
|
order: [['createdAt', 'DESC']],
|
||||||
include: [
|
include: [
|
||||||
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
|
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
|
||||||
],
|
],
|
||||||
});
|
});
|
||||||
const data = await this.enrichForCards(rows);
|
const data = await this.enrichForCards(rows);
|
||||||
@ -491,28 +491,54 @@ export class WorkflowService {
|
|||||||
const approvals = await ApprovalLevel.findAll({
|
const approvals = await ApprovalLevel.findAll({
|
||||||
where: { requestId: (wf as any).requestId },
|
where: { requestId: (wf as any).requestId },
|
||||||
order: [['levelNumber', 'ASC']],
|
order: [['levelNumber', 'ASC']],
|
||||||
attributes: ['levelId', 'levelNumber', 'levelName', 'approverId', 'approverEmail', 'approverName', 'tatHours', 'tatDays', 'status']
|
attributes: ['levelId', 'levelNumber', 'levelName', 'approverId', 'approverEmail', 'approverName', 'tatHours', 'tatDays', 'status', 'levelStartTime', 'tatStartTime']
|
||||||
});
|
});
|
||||||
|
|
||||||
const totalTat = Number((wf as any).totalTatHours || 0);
|
|
||||||
let percent = 0;
|
|
||||||
let remainingText = '';
|
|
||||||
if ((wf as any).submissionDate && totalTat > 0) {
|
|
||||||
const startedAt = new Date((wf as any).submissionDate);
|
|
||||||
const now = new Date();
|
|
||||||
const elapsedHrs = Math.max(0, (now.getTime() - startedAt.getTime()) / (1000 * 60 * 60));
|
|
||||||
percent = Math.min(100, Math.round((elapsedHrs / totalTat) * 100));
|
|
||||||
const remaining = Math.max(0, totalTat - elapsedHrs);
|
|
||||||
const days = Math.floor(remaining / 24);
|
|
||||||
const hours = Math.floor(remaining % 24);
|
|
||||||
remainingText = days > 0 ? `${days} days ${hours} hours remaining` : `${hours} hours remaining`;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Calculate total TAT hours from all approvals
|
// Calculate total TAT hours from all approvals
|
||||||
const totalTatHours = approvals.reduce((sum: number, a: any) => {
|
const totalTatHours = approvals.reduce((sum: number, a: any) => {
|
||||||
return sum + Number(a.tatHours || 0);
|
return sum + Number(a.tatHours || 0);
|
||||||
}, 0);
|
}, 0);
|
||||||
|
|
||||||
|
// Calculate approved levels count
|
||||||
|
const approvedLevelsCount = approvals.filter((a: any) => a.status === 'APPROVED').length;
|
||||||
|
|
||||||
|
const priority = ((wf as any).priority || 'standard').toString().toLowerCase();
|
||||||
|
|
||||||
|
// Calculate OVERALL request SLA (from submission to total deadline)
|
||||||
|
const { calculateSLAStatus } = require('@utils/tatTimeUtils');
|
||||||
|
const submissionDate = (wf as any).submissionDate;
|
||||||
|
const closureDate = (wf as any).closureDate;
|
||||||
|
// For completed requests, use closure_date; for active requests, use current time
|
||||||
|
const overallEndDate = closureDate || null;
|
||||||
|
|
||||||
|
let overallSLA = null;
|
||||||
|
|
||||||
|
if (submissionDate && totalTatHours > 0) {
|
||||||
|
try {
|
||||||
|
overallSLA = await calculateSLAStatus(submissionDate, totalTatHours, priority, overallEndDate);
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Workflow] Error calculating overall SLA:', error);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Calculate current level SLA (if there's an active level)
|
||||||
|
let currentLevelSLA = null;
|
||||||
|
if (currentLevel) {
|
||||||
|
const levelStartTime = (currentLevel as any).levelStartTime || (currentLevel as any).tatStartTime;
|
||||||
|
const levelTatHours = Number((currentLevel as any).tatHours || 0);
|
||||||
|
// For completed levels, use the level's completion time (if available)
|
||||||
|
// Otherwise, if request is completed, use closure_date
|
||||||
|
const levelEndDate = (currentLevel as any).completedAt || closureDate || null;
|
||||||
|
|
||||||
|
if (levelStartTime && levelTatHours > 0) {
|
||||||
|
try {
|
||||||
|
currentLevelSLA = await calculateSLAStatus(levelStartTime, levelTatHours, priority, levelEndDate);
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('[Workflow] Error calculating current level SLA:', error);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return {
|
return {
|
||||||
requestId: (wf as any).requestId,
|
requestId: (wf as any).requestId,
|
||||||
requestNumber: (wf as any).requestNumber,
|
requestNumber: (wf as any).requestNumber,
|
||||||
@ -521,7 +547,11 @@ export class WorkflowService {
|
|||||||
status: (wf as any).status,
|
status: (wf as any).status,
|
||||||
priority: (wf as any).priority,
|
priority: (wf as any).priority,
|
||||||
submittedAt: (wf as any).submissionDate,
|
submittedAt: (wf as any).submissionDate,
|
||||||
|
createdAt: (wf as any).createdAt,
|
||||||
|
closureDate: (wf as any).closureDate,
|
||||||
|
conclusionRemark: (wf as any).conclusionRemark,
|
||||||
initiator: (wf as any).initiator,
|
initiator: (wf as any).initiator,
|
||||||
|
department: (wf as any).initiator?.department,
|
||||||
totalLevels: (wf as any).totalLevels,
|
totalLevels: (wf as any).totalLevels,
|
||||||
totalTatHours: totalTatHours,
|
totalTatHours: totalTatHours,
|
||||||
currentLevel: currentLevel ? (currentLevel as any).levelNumber : null,
|
currentLevel: currentLevel ? (currentLevel as any).levelNumber : null,
|
||||||
@ -529,6 +559,9 @@ export class WorkflowService {
|
|||||||
userId: (currentLevel as any).approverId,
|
userId: (currentLevel as any).approverId,
|
||||||
email: (currentLevel as any).approverEmail,
|
email: (currentLevel as any).approverEmail,
|
||||||
name: (currentLevel as any).approverName,
|
name: (currentLevel as any).approverName,
|
||||||
|
levelStartTime: (currentLevel as any).levelStartTime,
|
||||||
|
tatHours: (currentLevel as any).tatHours,
|
||||||
|
sla: currentLevelSLA, // ← Backend-calculated SLA for current level
|
||||||
} : null,
|
} : null,
|
||||||
approvals: approvals.map((a: any) => ({
|
approvals: approvals.map((a: any) => ({
|
||||||
levelId: a.levelId,
|
levelId: a.levelId,
|
||||||
@ -539,30 +572,78 @@ export class WorkflowService {
|
|||||||
approverName: a.approverName,
|
approverName: a.approverName,
|
||||||
tatHours: a.tatHours,
|
tatHours: a.tatHours,
|
||||||
tatDays: a.tatDays,
|
tatDays: a.tatDays,
|
||||||
status: a.status
|
status: a.status,
|
||||||
|
levelStartTime: a.levelStartTime || a.tatStartTime
|
||||||
})),
|
})),
|
||||||
sla: { percent, remainingText },
|
summary: {
|
||||||
|
approvedLevels: approvedLevelsCount,
|
||||||
|
totalLevels: (wf as any).totalLevels,
|
||||||
|
sla: overallSLA || {
|
||||||
|
elapsedHours: 0,
|
||||||
|
remainingHours: totalTatHours,
|
||||||
|
percentageUsed: 0,
|
||||||
|
remainingText: `${totalTatHours}h remaining`,
|
||||||
|
isPaused: false,
|
||||||
|
status: 'on_track'
|
||||||
|
}
|
||||||
|
},
|
||||||
|
sla: overallSLA || {
|
||||||
|
elapsedHours: 0,
|
||||||
|
remainingHours: totalTatHours,
|
||||||
|
percentageUsed: 0,
|
||||||
|
remainingText: `${totalTatHours}h remaining`,
|
||||||
|
isPaused: false,
|
||||||
|
status: 'on_track'
|
||||||
|
}, // ← Overall request SLA (all levels combined)
|
||||||
|
currentLevelSLA: currentLevelSLA, // ← Also provide at root level for easy access
|
||||||
};
|
};
|
||||||
}));
|
}));
|
||||||
return data;
|
return data;
|
||||||
}
|
}
|
||||||
|
|
||||||
async listMyRequests(userId: string, page: number, limit: number) {
|
async listMyRequests(userId: string, page: number, limit: number, filters?: { search?: string; status?: string; priority?: string }) {
|
||||||
const offset = (page - 1) * limit;
|
const offset = (page - 1) * limit;
|
||||||
|
|
||||||
|
// Build where clause with filters
|
||||||
|
const whereConditions: any[] = [{ initiatorId: userId }];
|
||||||
|
|
||||||
|
// Apply status filter
|
||||||
|
if (filters?.status && filters.status !== 'all') {
|
||||||
|
whereConditions.push({ status: filters.status.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply priority filter
|
||||||
|
if (filters?.priority && filters.priority !== 'all') {
|
||||||
|
whereConditions.push({ priority: filters.priority.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply search filter (title, description, or requestNumber)
|
||||||
|
if (filters?.search && filters.search.trim()) {
|
||||||
|
whereConditions.push({
|
||||||
|
[Op.or]: [
|
||||||
|
{ title: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ description: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ requestNumber: { [Op.iLike]: `%${filters.search.trim()}%` } }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
const where = whereConditions.length > 0 ? { [Op.and]: whereConditions } : {};
|
||||||
|
|
||||||
const { rows, count } = await WorkflowRequest.findAndCountAll({
|
const { rows, count } = await WorkflowRequest.findAndCountAll({
|
||||||
where: { initiatorId: userId },
|
where,
|
||||||
offset,
|
offset,
|
||||||
limit,
|
limit,
|
||||||
order: [['createdAt', 'DESC']],
|
order: [['createdAt', 'DESC']],
|
||||||
include: [
|
include: [
|
||||||
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
|
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
|
||||||
],
|
],
|
||||||
});
|
});
|
||||||
const data = await this.enrichForCards(rows);
|
const data = await this.enrichForCards(rows);
|
||||||
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
||||||
}
|
}
|
||||||
|
|
||||||
async listOpenForMe(userId: string, page: number, limit: number) {
|
async listOpenForMe(userId: string, page: number, limit: number, filters?: { search?: string; status?: string; priority?: string }, sortBy?: string, sortOrder?: string) {
|
||||||
const offset = (page - 1) * limit;
|
const offset = (page - 1) * limit;
|
||||||
// Find all pending/in-progress approval levels across requests ordered by levelNumber
|
// Find all pending/in-progress approval levels across requests ordered by levelNumber
|
||||||
const pendingLevels = await ApprovalLevel.findAll({
|
const pendingLevels = await ApprovalLevel.findAll({
|
||||||
@ -604,23 +685,162 @@ export class WorkflowService {
|
|||||||
// Combine both sets of request IDs (unique)
|
// Combine both sets of request IDs (unique)
|
||||||
const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));
|
const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));
|
||||||
|
|
||||||
const { rows, count } = await WorkflowRequest.findAndCountAll({
|
// Also include APPROVED requests where the user is the initiator (awaiting closure)
|
||||||
where: {
|
const approvedAsInitiator = await WorkflowRequest.findAll({
|
||||||
requestId: { [Op.in]: allRequestIds.length ? allRequestIds : ['00000000-0000-0000-0000-000000000000'] },
|
where: {
|
||||||
status: { [Op.in]: [WorkflowStatus.PENDING as any, (WorkflowStatus as any).IN_PROGRESS ?? 'IN_PROGRESS'] as any },
|
initiatorId: userId,
|
||||||
|
status: { [Op.in]: [WorkflowStatus.APPROVED as any, 'APPROVED'] as any },
|
||||||
},
|
},
|
||||||
offset,
|
attributes: ['requestId'],
|
||||||
limit,
|
|
||||||
order: [['createdAt', 'DESC']],
|
|
||||||
include: [
|
|
||||||
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
|
|
||||||
],
|
|
||||||
});
|
});
|
||||||
const data = await this.enrichForCards(rows);
|
const approvedInitiatorRequestIds = approvedAsInitiator.map((r: any) => r.requestId);
|
||||||
|
|
||||||
|
// Combine all request IDs (approver, spectator, and approved as initiator)
|
||||||
|
const allOpenRequestIds = Array.from(new Set([...allRequestIds, ...approvedInitiatorRequestIds]));
|
||||||
|
|
||||||
|
// Build base where conditions
|
||||||
|
const baseConditions: any[] = [];
|
||||||
|
|
||||||
|
// Add the main OR condition for request IDs
|
||||||
|
if (allOpenRequestIds.length > 0) {
|
||||||
|
baseConditions.push({
|
||||||
|
requestId: { [Op.in]: allOpenRequestIds }
|
||||||
|
});
|
||||||
|
} else {
|
||||||
|
// No matching requests
|
||||||
|
baseConditions.push({
|
||||||
|
requestId: { [Op.in]: ['00000000-0000-0000-0000-000000000000'] }
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add status condition
|
||||||
|
baseConditions.push({
|
||||||
|
status: { [Op.in]: [
|
||||||
|
WorkflowStatus.PENDING as any,
|
||||||
|
(WorkflowStatus as any).IN_PROGRESS ?? 'IN_PROGRESS',
|
||||||
|
WorkflowStatus.APPROVED as any,
|
||||||
|
'PENDING',
|
||||||
|
'IN_PROGRESS',
|
||||||
|
'APPROVED'
|
||||||
|
] as any }
|
||||||
|
});
|
||||||
|
|
||||||
|
// Apply status filter if provided (overrides default status filter)
|
||||||
|
if (filters?.status && filters.status !== 'all') {
|
||||||
|
baseConditions.pop(); // Remove default status
|
||||||
|
baseConditions.push({ status: filters.status.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply priority filter
|
||||||
|
if (filters?.priority && filters.priority !== 'all') {
|
||||||
|
baseConditions.push({ priority: filters.priority.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply search filter (title, description, or requestNumber)
|
||||||
|
if (filters?.search && filters.search.trim()) {
|
||||||
|
baseConditions.push({
|
||||||
|
[Op.or]: [
|
||||||
|
{ title: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ description: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ requestNumber: { [Op.iLike]: `%${filters.search.trim()}%` } }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
const where = baseConditions.length > 0 ? { [Op.and]: baseConditions } : {};
|
||||||
|
|
||||||
|
// Build order clause based on sortBy parameter
|
||||||
|
// For computed fields (due, sla), we'll sort after enrichment
|
||||||
|
let order: any[] = [['createdAt', 'DESC']]; // Default order
|
||||||
|
const validSortOrder = (sortOrder?.toLowerCase() === 'asc' ? 'ASC' : 'DESC');
|
||||||
|
|
||||||
|
if (sortBy) {
|
||||||
|
switch (sortBy.toLowerCase()) {
|
||||||
|
case 'created':
|
||||||
|
order = [['createdAt', validSortOrder]];
|
||||||
|
break;
|
||||||
|
case 'priority':
|
||||||
|
// Map priority values: EXPRESS = 1, STANDARD = 2 for ascending (standard first), or reverse for descending
|
||||||
|
// For simplicity, we'll sort alphabetically: EXPRESS < STANDARD
|
||||||
|
order = [['priority', validSortOrder], ['createdAt', 'DESC']]; // Secondary sort by createdAt
|
||||||
|
break;
|
||||||
|
// For 'due' and 'sla', we need to sort after enrichment (handled below)
|
||||||
|
case 'due':
|
||||||
|
case 'sla':
|
||||||
|
// Keep default order - will sort after enrichment
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
// Unknown sortBy, use default
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// For computed field sorting (due, sla), we need to fetch all matching records first,
|
||||||
|
// enrich them, sort, then paginate. For DB fields, we can use SQL pagination.
|
||||||
|
const needsPostEnrichmentSort = sortBy && ['due', 'sla'].includes(sortBy.toLowerCase());
|
||||||
|
|
||||||
|
let rows: any[];
|
||||||
|
let count: number;
|
||||||
|
|
||||||
|
if (needsPostEnrichmentSort) {
|
||||||
|
// Fetch all matching records (no pagination yet)
|
||||||
|
const result = await WorkflowRequest.findAndCountAll({
|
||||||
|
where,
|
||||||
|
include: [
|
||||||
|
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
// Enrich all records
|
||||||
|
const allEnriched = await this.enrichForCards(result.rows);
|
||||||
|
|
||||||
|
// Sort enriched data
|
||||||
|
allEnriched.sort((a: any, b: any) => {
|
||||||
|
let aValue: any, bValue: any;
|
||||||
|
|
||||||
|
if (sortBy.toLowerCase() === 'due') {
|
||||||
|
aValue = a.currentLevelSLA?.deadline ? new Date(a.currentLevelSLA.deadline).getTime() : Number.MAX_SAFE_INTEGER;
|
||||||
|
bValue = b.currentLevelSLA?.deadline ? new Date(b.currentLevelSLA.deadline).getTime() : Number.MAX_SAFE_INTEGER;
|
||||||
|
} else if (sortBy.toLowerCase() === 'sla') {
|
||||||
|
aValue = a.currentLevelSLA?.percentageUsed || 0;
|
||||||
|
bValue = b.currentLevelSLA?.percentageUsed || 0;
|
||||||
|
} else {
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (validSortOrder === 'ASC') {
|
||||||
|
return aValue > bValue ? 1 : -1;
|
||||||
|
} else {
|
||||||
|
return aValue < bValue ? 1 : -1;
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
count = result.count;
|
||||||
|
|
||||||
|
// Apply pagination after sorting
|
||||||
|
const startIndex = offset;
|
||||||
|
const endIndex = startIndex + limit;
|
||||||
|
rows = allEnriched.slice(startIndex, endIndex);
|
||||||
|
} else {
|
||||||
|
// Use database sorting for simple fields (created, priority)
|
||||||
|
const result = await WorkflowRequest.findAndCountAll({
|
||||||
|
where,
|
||||||
|
offset,
|
||||||
|
limit,
|
||||||
|
order,
|
||||||
|
include: [
|
||||||
|
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
|
||||||
|
],
|
||||||
|
});
|
||||||
|
rows = result.rows;
|
||||||
|
count = result.count;
|
||||||
|
}
|
||||||
|
|
||||||
|
const data = needsPostEnrichmentSort ? rows : await this.enrichForCards(rows);
|
||||||
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
||||||
}
|
}
|
||||||
|
|
||||||
async listClosedByMe(userId: string, page: number, limit: number) {
|
async listClosedByMe(userId: string, page: number, limit: number, filters?: { search?: string; status?: string; priority?: string }, sortBy?: string, sortOrder?: string) {
|
||||||
const offset = (page - 1) * limit;
|
const offset = (page - 1) * limit;
|
||||||
|
|
||||||
// Get requests where user participated as approver
|
// Get requests where user participated as approver
|
||||||
@ -651,28 +871,142 @@ export class WorkflowService {
|
|||||||
// Combine both sets of request IDs (unique)
|
// Combine both sets of request IDs (unique)
|
||||||
const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));
|
const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));
|
||||||
|
|
||||||
// Fetch closed/rejected requests
|
// Build query conditions
|
||||||
|
const whereConditions: any[] = [];
|
||||||
|
|
||||||
|
// 1. Requests where user was approver/spectator (show APPROVED, REJECTED, CLOSED)
|
||||||
|
const approverSpectatorStatuses = [
|
||||||
|
WorkflowStatus.APPROVED as any,
|
||||||
|
WorkflowStatus.REJECTED as any,
|
||||||
|
(WorkflowStatus as any).CLOSED ?? 'CLOSED',
|
||||||
|
'APPROVED',
|
||||||
|
'REJECTED',
|
||||||
|
'CLOSED'
|
||||||
|
] as any;
|
||||||
|
|
||||||
|
if (allRequestIds.length > 0) {
|
||||||
|
const approverConditionParts: any[] = [
|
||||||
|
{ requestId: { [Op.in]: allRequestIds } }
|
||||||
|
];
|
||||||
|
|
||||||
|
// Apply status filter
|
||||||
|
if (filters?.status && filters.status !== 'all') {
|
||||||
|
approverConditionParts.push({ status: filters.status.toUpperCase() });
|
||||||
|
} else {
|
||||||
|
approverConditionParts.push({ status: { [Op.in]: approverSpectatorStatuses } });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply priority filter
|
||||||
|
if (filters?.priority && filters.priority !== 'all') {
|
||||||
|
approverConditionParts.push({ priority: filters.priority.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply search filter (title, description, or requestNumber)
|
||||||
|
if (filters?.search && filters.search.trim()) {
|
||||||
|
approverConditionParts.push({
|
||||||
|
[Op.or]: [
|
||||||
|
{ title: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ description: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ requestNumber: { [Op.iLike]: `%${filters.search.trim()}%` } }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
const approverCondition = approverConditionParts.length > 0
|
||||||
|
? { [Op.and]: approverConditionParts }
|
||||||
|
: { requestId: { [Op.in]: allRequestIds } };
|
||||||
|
|
||||||
|
whereConditions.push(approverCondition);
|
||||||
|
}
|
||||||
|
|
||||||
|
// 2. Requests where user is initiator (show ONLY REJECTED or CLOSED, NOT APPROVED)
|
||||||
|
// APPROVED means initiator still needs to finalize conclusion
|
||||||
|
const initiatorStatuses = [
|
||||||
|
WorkflowStatus.REJECTED as any,
|
||||||
|
(WorkflowStatus as any).CLOSED ?? 'CLOSED',
|
||||||
|
'REJECTED',
|
||||||
|
'CLOSED'
|
||||||
|
] as any;
|
||||||
|
|
||||||
|
const initiatorConditionParts: any[] = [
|
||||||
|
{ initiatorId: userId }
|
||||||
|
];
|
||||||
|
|
||||||
|
// Apply status filter
|
||||||
|
if (filters?.status && filters.status !== 'all') {
|
||||||
|
const filterStatus = filters.status.toUpperCase();
|
||||||
|
// Only apply if status is REJECTED or CLOSED (not APPROVED for initiator)
|
||||||
|
if (filterStatus === 'REJECTED' || filterStatus === 'CLOSED') {
|
||||||
|
initiatorConditionParts.push({ status: filterStatus });
|
||||||
|
} else {
|
||||||
|
// If filtering for APPROVED, don't include initiator requests
|
||||||
|
initiatorConditionParts.push({ status: { [Op.in]: [] } }); // Empty set - no results
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
initiatorConditionParts.push({ status: { [Op.in]: initiatorStatuses } });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply priority filter
|
||||||
|
if (filters?.priority && filters.priority !== 'all') {
|
||||||
|
initiatorConditionParts.push({ priority: filters.priority.toUpperCase() });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply search filter (title, description, or requestNumber)
|
||||||
|
if (filters?.search && filters.search.trim()) {
|
||||||
|
initiatorConditionParts.push({
|
||||||
|
[Op.or]: [
|
||||||
|
{ title: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ description: { [Op.iLike]: `%${filters.search.trim()}%` } },
|
||||||
|
{ requestNumber: { [Op.iLike]: `%${filters.search.trim()}%` } }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
const initiatorCondition = initiatorConditionParts.length > 0
|
||||||
|
? { [Op.and]: initiatorConditionParts }
|
||||||
|
: { initiatorId: userId };
|
||||||
|
|
||||||
|
whereConditions.push(initiatorCondition);
|
||||||
|
|
||||||
|
// Build where clause with OR conditions
|
||||||
|
const where: any = whereConditions.length > 0 ? { [Op.or]: whereConditions } : {};
|
||||||
|
|
||||||
|
// Build order clause based on sortBy parameter
|
||||||
|
let order: any[] = [['createdAt', 'DESC']]; // Default order
|
||||||
|
const validSortOrder = (sortOrder?.toLowerCase() === 'asc' ? 'ASC' : 'DESC');
|
||||||
|
|
||||||
|
if (sortBy) {
|
||||||
|
switch (sortBy.toLowerCase()) {
|
||||||
|
case 'created':
|
||||||
|
order = [['createdAt', validSortOrder]];
|
||||||
|
break;
|
||||||
|
case 'due':
|
||||||
|
// Sort by closureDate or updatedAt (closed date)
|
||||||
|
order = [['updatedAt', validSortOrder], ['createdAt', 'DESC']];
|
||||||
|
break;
|
||||||
|
case 'priority':
|
||||||
|
order = [['priority', validSortOrder], ['createdAt', 'DESC']];
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
// Unknown sortBy, use default
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch closed/rejected/approved requests (including finalized ones)
|
||||||
const { rows, count } = await WorkflowRequest.findAndCountAll({
|
const { rows, count } = await WorkflowRequest.findAndCountAll({
|
||||||
where: {
|
where,
|
||||||
requestId: { [Op.in]: allRequestIds.length ? allRequestIds : ['00000000-0000-0000-0000-000000000000'] },
|
|
||||||
status: { [Op.in]: [
|
|
||||||
WorkflowStatus.APPROVED as any,
|
|
||||||
WorkflowStatus.REJECTED as any,
|
|
||||||
'APPROVED',
|
|
||||||
'REJECTED'
|
|
||||||
] as any },
|
|
||||||
},
|
|
||||||
offset,
|
offset,
|
||||||
limit,
|
limit,
|
||||||
order: [['createdAt', 'DESC']],
|
order,
|
||||||
include: [
|
include: [
|
||||||
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
|
{ association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
|
||||||
],
|
],
|
||||||
});
|
});
|
||||||
const data = await this.enrichForCards(rows);
|
const data = await this.enrichForCards(rows);
|
||||||
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
return { data, pagination: { page, limit, total: count, totalPages: Math.ceil(count / limit) || 1 } };
|
||||||
}
|
}
|
||||||
async createWorkflow(initiatorId: string, workflowData: CreateWorkflowRequest): Promise<WorkflowRequest> {
|
async createWorkflow(initiatorId: string, workflowData: CreateWorkflowRequest, requestMetadata?: { ipAddress?: string | null; userAgent?: string | null }): Promise<WorkflowRequest> {
|
||||||
try {
|
try {
|
||||||
const requestNumber = generateRequestNumber();
|
const requestNumber = generateRequestNumber();
|
||||||
const totalTatHours = workflowData.approvalLevels.reduce((sum, level) => sum + level.tatHours, 0);
|
const totalTatHours = workflowData.approvalLevels.reduce((sum, level) => sum + level.tatHours, 0);
|
||||||
@ -736,22 +1070,43 @@ export class WorkflowService {
|
|||||||
const initiator = await User.findByPk(initiatorId);
|
const initiator = await User.findByPk(initiatorId);
|
||||||
const initiatorName = (initiator as any)?.displayName || (initiator as any)?.email || 'User';
|
const initiatorName = (initiator as any)?.displayName || (initiator as any)?.email || 'User';
|
||||||
|
|
||||||
|
// Log creation activity
|
||||||
activityService.log({
|
activityService.log({
|
||||||
requestId: (workflow as any).requestId,
|
requestId: (workflow as any).requestId,
|
||||||
type: 'created',
|
type: 'created',
|
||||||
user: { userId: initiatorId, name: initiatorName },
|
user: { userId: initiatorId, name: initiatorName },
|
||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Initial request submitted',
|
action: 'Initial request submitted',
|
||||||
details: `Initial request submitted for ${workflowData.title} by ${initiatorName}`
|
details: `Initial request submitted for ${workflowData.title} by ${initiatorName}`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
});
|
});
|
||||||
|
|
||||||
|
// Send notification to INITIATOR confirming submission
|
||||||
|
await notificationService.sendToUsers([initiatorId], {
|
||||||
|
title: 'Request Submitted Successfully',
|
||||||
|
body: `Your request "${workflowData.title}" has been submitted and is now with the first approver.`,
|
||||||
|
requestNumber: requestNumber,
|
||||||
|
requestId: (workflow as any).requestId,
|
||||||
|
url: `/request/${requestNumber}`,
|
||||||
|
type: 'request_submitted',
|
||||||
|
priority: 'MEDIUM'
|
||||||
|
});
|
||||||
|
|
||||||
|
// Send notification to FIRST APPROVER for assignment
|
||||||
const firstLevel = await ApprovalLevel.findOne({ where: { requestId: (workflow as any).requestId, levelNumber: 1 } });
|
const firstLevel = await ApprovalLevel.findOne({ where: { requestId: (workflow as any).requestId, levelNumber: 1 } });
|
||||||
if (firstLevel) {
|
if (firstLevel) {
|
||||||
await notificationService.sendToUsers([(firstLevel as any).approverId], {
|
await notificationService.sendToUsers([(firstLevel as any).approverId], {
|
||||||
title: 'New request assigned',
|
title: 'New Request Assigned',
|
||||||
body: `${workflowData.title}`,
|
body: `${workflowData.title}`,
|
||||||
requestNumber: requestNumber,
|
requestNumber: requestNumber,
|
||||||
url: `/request/${requestNumber}`
|
requestId: (workflow as any).requestId,
|
||||||
|
url: `/request/${requestNumber}`,
|
||||||
|
type: 'assignment',
|
||||||
|
priority: 'HIGH',
|
||||||
|
actionRequired: true
|
||||||
});
|
});
|
||||||
|
|
||||||
activityService.log({
|
activityService.log({
|
||||||
requestId: (workflow as any).requestId,
|
requestId: (workflow as any).requestId,
|
||||||
type: 'assignment',
|
type: 'assignment',
|
||||||
@ -761,6 +1116,7 @@ export class WorkflowService {
|
|||||||
details: `Request assigned to ${(firstLevel as any).approverName || (firstLevel as any).approverEmail || 'approver'} for review`
|
details: `Request assigned to ${(firstLevel as any).approverName || (firstLevel as any).approverEmail || 'approver'} for review`
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
return workflow;
|
return workflow;
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error('Failed to create workflow:', error);
|
logger.error('Failed to create workflow:', error);
|
||||||
@ -1004,7 +1360,71 @@ export class WorkflowService {
|
|||||||
tatAlerts = [];
|
tatAlerts = [];
|
||||||
}
|
}
|
||||||
|
|
||||||
return { workflow, approvals, participants, documents, activities, summary, tatAlerts };
|
// Recalculate SLA for all approval levels with comprehensive data
|
||||||
|
const priority = ((workflow as any)?.priority || 'standard').toString().toLowerCase();
|
||||||
|
const { calculateSLAStatus } = require('@utils/tatTimeUtils');
|
||||||
|
|
||||||
|
const updatedApprovals = await Promise.all(approvals.map(async (approval: any) => {
|
||||||
|
const status = (approval.status || '').toString().toUpperCase();
|
||||||
|
const approvalData = approval.toJSON();
|
||||||
|
|
||||||
|
// Calculate SLA for active approvals (pending/in-progress)
|
||||||
|
if (status === 'PENDING' || status === 'IN_PROGRESS') {
|
||||||
|
const levelStartTime = approval.levelStartTime || approval.tatStartTime || approval.createdAt;
|
||||||
|
const tatHours = Number(approval.tatHours || 0);
|
||||||
|
|
||||||
|
if (levelStartTime && tatHours > 0) {
|
||||||
|
try {
|
||||||
|
// Get comprehensive SLA status from backend utility
|
||||||
|
const slaData = await calculateSLAStatus(levelStartTime, tatHours, priority);
|
||||||
|
|
||||||
|
// Return updated approval with comprehensive SLA data
|
||||||
|
return {
|
||||||
|
...approvalData,
|
||||||
|
elapsedHours: slaData.elapsedHours,
|
||||||
|
remainingHours: slaData.remainingHours,
|
||||||
|
tatPercentageUsed: slaData.percentageUsed,
|
||||||
|
sla: slaData // ← Full SLA object with deadline, isPaused, status, etc.
|
||||||
|
};
|
||||||
|
} catch (error) {
|
||||||
|
logger.error(`[Workflow] Error calculating SLA for level ${approval.levelNumber}:`, error);
|
||||||
|
// Return with fallback values if SLA calculation fails
|
||||||
|
return {
|
||||||
|
...approvalData,
|
||||||
|
sla: {
|
||||||
|
elapsedHours: 0,
|
||||||
|
remainingHours: tatHours,
|
||||||
|
percentageUsed: 0,
|
||||||
|
isPaused: false,
|
||||||
|
status: 'on_track',
|
||||||
|
remainingText: `${tatHours}h`,
|
||||||
|
elapsedText: '0h'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// For completed/rejected levels, return as-is (already has final values from database)
|
||||||
|
return approvalData;
|
||||||
|
}));
|
||||||
|
|
||||||
|
// Calculate overall request SLA
|
||||||
|
const submissionDate = (workflow as any).submissionDate;
|
||||||
|
const totalTatHours = updatedApprovals.reduce((sum, a) => sum + Number(a.tatHours || 0), 0);
|
||||||
|
let overallSLA = null;
|
||||||
|
|
||||||
|
if (submissionDate && totalTatHours > 0) {
|
||||||
|
overallSLA = await calculateSLAStatus(submissionDate, totalTatHours, priority);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update summary to include comprehensive SLA
|
||||||
|
const updatedSummary = {
|
||||||
|
...summary,
|
||||||
|
sla: overallSLA || summary.sla
|
||||||
|
};
|
||||||
|
|
||||||
|
return { workflow, approvals: updatedApprovals, participants, documents, activities, summary: updatedSummary, tatAlerts };
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
logger.error(`Failed to get workflow details ${requestId}:`, error);
|
logger.error(`Failed to get workflow details ${requestId}:`, error);
|
||||||
throw new Error('Failed to get workflow details');
|
throw new Error('Failed to get workflow details');
|
||||||
|
|||||||
@ -2,7 +2,9 @@ import { Op } from 'sequelize';
|
|||||||
import { WorkNote } from '@models/WorkNote';
|
import { WorkNote } from '@models/WorkNote';
|
||||||
import { WorkNoteAttachment } from '@models/WorkNoteAttachment';
|
import { WorkNoteAttachment } from '@models/WorkNoteAttachment';
|
||||||
import { Participant } from '@models/Participant';
|
import { Participant } from '@models/Participant';
|
||||||
|
import { WorkflowRequest } from '@models/WorkflowRequest';
|
||||||
import { activityService } from './activity.service';
|
import { activityService } from './activity.service';
|
||||||
|
import { notificationService } from './notification.service';
|
||||||
import logger from '@utils/logger';
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
export class WorkNoteService {
|
export class WorkNoteService {
|
||||||
@ -69,7 +71,7 @@ export class WorkNoteService {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
async create(requestId: string, user: { userId: string; name?: string; role?: string }, payload: { message: string; isPriority?: boolean; parentNoteId?: string | null; mentionedUsers?: string[] | null; }, files?: Array<{ path: string; originalname: string; mimetype: string; size: number }>): Promise<any> {
|
async create(requestId: string, user: { userId: string; name?: string; role?: string }, payload: { message: string; isPriority?: boolean; parentNoteId?: string | null; mentionedUsers?: string[] | null; }, files?: Array<{ path: string; originalname: string; mimetype: string; size: number }>, requestMetadata?: { ipAddress?: string | null; userAgent?: string | null }): Promise<any> {
|
||||||
logger.info('[WorkNote] Creating note:', { requestId, user, messageLength: payload.message?.length });
|
logger.info('[WorkNote] Creating note:', { requestId, user, messageLength: payload.message?.length });
|
||||||
|
|
||||||
const note = await WorkNote.create({
|
const note = await WorkNote.create({
|
||||||
@ -121,7 +123,9 @@ export class WorkNoteService {
|
|||||||
user: { userId: user.userId, name: user.name || 'User' },
|
user: { userId: user.userId, name: user.name || 'User' },
|
||||||
timestamp: new Date().toISOString(),
|
timestamp: new Date().toISOString(),
|
||||||
action: 'Work Note Added',
|
action: 'Work Note Added',
|
||||||
details: `${user.name || 'User'} added a work note: ${payload.message.substring(0, 100)}${payload.message.length > 100 ? '...' : ''}`
|
details: `${user.name || 'User'} added a work note: ${payload.message.substring(0, 100)}${payload.message.length > 100 ? '...' : ''}`,
|
||||||
|
ipAddress: requestMetadata?.ipAddress || undefined,
|
||||||
|
userAgent: requestMetadata?.userAgent || undefined
|
||||||
});
|
});
|
||||||
|
|
||||||
try {
|
try {
|
||||||
@ -144,6 +148,35 @@ export class WorkNoteService {
|
|||||||
}
|
}
|
||||||
} catch (e) { logger.warn('Realtime emit failed (not initialized)'); }
|
} catch (e) { logger.warn('Realtime emit failed (not initialized)'); }
|
||||||
|
|
||||||
|
// Send notifications to mentioned users
|
||||||
|
if (payload.mentionedUsers && Array.isArray(payload.mentionedUsers) && payload.mentionedUsers.length > 0) {
|
||||||
|
try {
|
||||||
|
// Get workflow details for request number and title
|
||||||
|
const workflow = await WorkflowRequest.findOne({ where: { requestId } });
|
||||||
|
const requestNumber = (workflow as any)?.requestNumber || requestId;
|
||||||
|
const requestTitle = (workflow as any)?.title || 'Request';
|
||||||
|
|
||||||
|
logger.info(`[WorkNote] Sending mention notifications to ${payload.mentionedUsers.length} users`);
|
||||||
|
|
||||||
|
await notificationService.sendToUsers(
|
||||||
|
payload.mentionedUsers,
|
||||||
|
{
|
||||||
|
title: '💬 Mentioned in Work Note',
|
||||||
|
body: `${user.name || 'Someone'} mentioned you in ${requestNumber}: "${payload.message.substring(0, 50)}${payload.message.length > 50 ? '...' : ''}"`,
|
||||||
|
requestId,
|
||||||
|
requestNumber,
|
||||||
|
url: `/request/${requestNumber}`,
|
||||||
|
type: 'mention'
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
logger.info(`[WorkNote] Mention notifications sent successfully`);
|
||||||
|
} catch (notifyError) {
|
||||||
|
logger.error('[WorkNote] Failed to send mention notifications:', notifyError);
|
||||||
|
// Don't fail the work note creation if notifications fail
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return { ...note, attachments };
|
return { ...note, attachments };
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@ -37,7 +37,7 @@ export interface LoginResponse {
|
|||||||
displayName?: string | null;
|
displayName?: string | null;
|
||||||
department?: string | null;
|
department?: string | null;
|
||||||
designation?: string | null;
|
designation?: string | null;
|
||||||
isAdmin: boolean;
|
role: 'USER' | 'MANAGEMENT' | 'ADMIN';
|
||||||
};
|
};
|
||||||
accessToken: string;
|
accessToken: string;
|
||||||
refreshToken: string;
|
refreshToken: string;
|
||||||
|
|||||||
6
src/types/express.d.ts
vendored
6
src/types/express.d.ts
vendored
@ -1,5 +1,7 @@
|
|||||||
import { JwtPayload } from 'jsonwebtoken';
|
import { JwtPayload } from 'jsonwebtoken';
|
||||||
|
|
||||||
|
export type UserRole = 'USER' | 'MANAGEMENT' | 'ADMIN';
|
||||||
|
|
||||||
declare global {
|
declare global {
|
||||||
namespace Express {
|
namespace Express {
|
||||||
interface Request {
|
interface Request {
|
||||||
@ -7,7 +9,7 @@ declare global {
|
|||||||
userId: string;
|
userId: string;
|
||||||
email: string;
|
email: string;
|
||||||
employeeId?: string | null; // Optional - schema not finalized
|
employeeId?: string | null; // Optional - schema not finalized
|
||||||
role?: string;
|
role?: UserRole;
|
||||||
};
|
};
|
||||||
cookies?: {
|
cookies?: {
|
||||||
accessToken?: string;
|
accessToken?: string;
|
||||||
@ -25,7 +27,7 @@ export interface AuthenticatedRequest extends Express.Request {
|
|||||||
userId: string;
|
userId: string;
|
||||||
email: string;
|
email: string;
|
||||||
employeeId?: string | null; // Optional - schema not finalized
|
employeeId?: string | null; // Optional - schema not finalized
|
||||||
role: string;
|
role: UserRole;
|
||||||
};
|
};
|
||||||
params: any;
|
params: any;
|
||||||
body: any;
|
body: any;
|
||||||
|
|||||||
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue
Block a user