Enhanced dashboard: added new APIs for approver performance and an API for TAT breach reason
This commit is contained in:
parent
336df2023c
commit
dcb53a89ed
@ -1,222 +0,0 @@
# In-App Notification System - Setup Guide

## 🎯 Overview

Complete real-time in-app notification system for the Royal Enfield Workflow Management System.

## ✅ Features Implemented

### Backend:

1. **Notification Model** (`models/Notification.ts`)
   - Stores all in-app notifications
   - Tracks read/unread status
   - Supports priority levels (LOW, MEDIUM, HIGH, URGENT)
   - Metadata for request context

2. **Notification Controller** (`controllers/notification.controller.ts`)
   - GET `/api/v1/notifications` - List user's notifications with pagination
   - GET `/api/v1/notifications/unread-count` - Get unread count
   - PATCH `/api/v1/notifications/:notificationId/read` - Mark as read
   - POST `/api/v1/notifications/mark-all-read` - Mark all as read
   - DELETE `/api/v1/notifications/:notificationId` - Delete notification

3. **Enhanced Notification Service** (`services/notification.service.ts`)
   - Saves notifications to the database (for in-app display)
   - Emits real-time socket.io events
   - Sends push notifications (if subscribed)
   - All in one call: `notificationService.sendToUsers()`

4. **Socket.io Enhancement** (`realtime/socket.ts`)
   - Added `join:user` event for a personal notification room
   - Added `emitToUser()` function for targeted notifications
   - Real-time delivery without a page refresh (see the sketch after this list)
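The per-user targeting boils down to a personal socket room. The sketch below is condensed from the `realtime/socket.ts` changes further down in this diff; the `Server` construction options are assumptions, while the room name, `join:user` payload handling, and `emitToUser()` shape mirror the actual file.

```typescript
import { Server, Socket } from 'socket.io';

let io: Server | null = null;

export function initSocket(httpServer: any) {
  io = new Server(httpServer); // CORS/transport options omitted; configure as in the real file

  io.on('connection', (socket: Socket) => {
    // Each user joins a personal room so notifications can be targeted to them.
    socket.on('join:user', (data: string | { userId: string }) => {
      const userId = typeof data === 'string' ? data : data.userId;
      socket.join(`user:${userId}`);
    });
  });
}

// Emit an event to every open socket belonging to one user.
export function emitToUser(userId: string, event: string, payload: any) {
  if (!io) return;
  io.to(`user:${userId}`).emit(event, payload);
}
```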
### Frontend:

1. **Notification API Service** (`services/notificationApi.ts`)
   - Complete API client for all notification endpoints (a sketch follows this list)

2. **PageLayout Integration** (`components/layout/PageLayout/PageLayout.tsx`)
   - Real-time notification bell with unread count badge
   - Dropdown showing the latest 10 notifications
   - Click to mark as read and navigate to the request
   - "Mark all as read" functionality
   - Auto-refreshes when new notifications arrive
   - Works even if browser push notifications are disabled

3. **Data Freshness** (MyRequests, OpenRequests, ClosedRequests)
   - Fixed stale data after DB deletion
   - Always shows fresh data from the API
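For reference, a minimal client for the endpoints listed above could look like the following. This is a hedged sketch, not the actual `services/notificationApi.ts`: the wrapper names are illustrative, and it assumes `VITE_API_BASE_URL` is exposed via Vite and that auth rides on cookies (adjust if a bearer token is used).

```typescript
const BASE_URL = import.meta.env.VITE_API_BASE_URL as string; // e.g. http://localhost:5000/api/v1

async function request<T>(path: string, options: RequestInit = {}): Promise<T> {
  const res = await fetch(`${BASE_URL}${path}`, {
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include', // assumption: cookie-based session auth
    ...options,
  });
  if (!res.ok) throw new Error(`Notification API error: ${res.status}`);
  return res.json() as Promise<T>;
}

// Illustrative wrappers for the endpoints documented above
export const listNotifications = (page = 1, limit = 10) =>
  request(`/notifications?page=${page}&limit=${limit}`);

export const getUnreadCount = () => request(`/notifications/unread-count`);

export const markAsRead = (notificationId: string) =>
  request(`/notifications/${notificationId}/read`, { method: 'PATCH' });

export const markAllAsRead = () =>
  request(`/notifications/mark-all-read`, { method: 'POST' });

export const deleteNotification = (notificationId: string) =>
  request(`/notifications/${notificationId}`, { method: 'DELETE' });
```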
## 📦 Database Setup

### Step 1: Run Migration

Run the migration SQL against your PostgreSQL database:

```bash
psql -U postgres -d re_workflow_db -f migrations/create_notifications_table.sql
```

OR run it manually in pgAdmin/your SQL tool:

```sql
-- See: migrations/create_notifications_table.sql
```

### Step 2: Verify Table Created

```sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'notifications';
```

## 🚀 How It Works

### 1. When an Event Occurs (e.g., Request Assigned):

**Backend:**
```typescript
await notificationService.sendToUsers(
  [approverId],
  {
    title: 'New request assigned',
    body: 'Marketing Campaign Approval - REQ-2025-12345',
    requestId: workflowId,
    requestNumber: 'REQ-2025-12345',
    url: `/request/REQ-2025-12345`,
    type: 'assignment',
    priority: 'HIGH',
    actionRequired: true
  }
);
```

This automatically:
- ✅ Saves the notification to the `notifications` table
- ✅ Emits a `notification:new` socket event to the user
- ✅ Sends a browser push notification (if enabled)

### 2. Frontend Receives Notification:

**PageLayout** automatically:
- ✅ Receives the socket event in real-time (see the listener sketch below)
- ✅ Updates the notification count badge
- ✅ Adds it to the notification dropdown
- ✅ Shows a blue dot for unread
- ✅ User clicks → marks as read → navigates to the request
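A minimal version of that listener, assuming `socket.io-client` and React (the hook name and connection details are illustrative, not the exact `PageLayout` code):

```typescript
import { useEffect, useState } from 'react';
import { io } from 'socket.io-client';

// Illustrative hook: joins the user's personal room and keeps a live unread count.
export function useInAppNotifications(userId: string, socketUrl: string) {
  const [unreadCount, setUnreadCount] = useState(0);

  useEffect(() => {
    const socket = io(socketUrl);

    // Join the personal room so emitToUser() on the backend reaches this client.
    socket.emit('join:user', { userId });

    // Bump the badge whenever a new notification arrives in real time.
    socket.on('notification:new', () => {
      setUnreadCount((count) => count + 1);
    });

    return () => {
      socket.disconnect();
    };
  }, [userId, socketUrl]);

  return unreadCount;
}
```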
## 📌 Notification Events (Major)

Based on your requirement, here are the key events that trigger notifications:

| Event | Type | Sent To | Priority |
|-------|------|---------|----------|
| Request Created | `created` | Initiator | MEDIUM |
| Request Assigned | `assignment` | Approver | HIGH |
| Approval Given | `approved` | Initiator | HIGH |
| Request Rejected | `rejected` | Initiator | URGENT |
| TAT Alert (50%) | `tat_alert` | Approver | MEDIUM |
| TAT Alert (75%) | `tat_alert` | Approver | HIGH |
| TAT Breached | `tat_breach` | Approver + Initiator | URGENT |
| Work Note Mention | `mention` | Tagged Users | MEDIUM |
| New Comment | `comment` | Participants | LOW |

## 🔧 Configuration

### Backend (.env):
```env
# Already configured - no changes needed
VAPID_PUBLIC_KEY=your_vapid_public_key
VAPID_PRIVATE_KEY=your_vapid_private_key
```

### Frontend (.env):
```env
# Already configured
VITE_API_BASE_URL=http://localhost:5000/api/v1
```

## ✅ Testing

### 1. Test Basic Notification:
```bash
# Create a workflow and assign to an approver
# Check approver's notification bell - should show count
```

### 2. Test Real-Time Delivery:
```bash
# Have 2 users logged in (different browsers)
# User A creates request, assigns to User B
# User B should see notification appear immediately (no refresh needed)
```

### 3. Test TAT Notifications:
```bash
# Create request with 1-hour TAT
# Wait for threshold notifications (50%, 75%, 100%)
# Approver should receive in-app notifications
```

### 4. Test Work Note Mentions:
```bash
# Add work note with @mention
# Tagged user should receive notification
```

## 🎨 UI Features

- **Unread Badge**: Shows count (1-9, or "9+" for 10+)
- **Blue Dot**: Indicates unread notifications
- **Blue Background**: Highlights unread items
- **Time Ago**: "5 minutes ago", "2 hours ago", etc.
- **Click to Navigate**: Automatically opens the related request
- **Mark All Read**: Single click to clear all unread
- **Scrollable**: Shows latest 10, with "View all" link

## 📱 Fallback for Disabled Push Notifications

Even if user denies browser push notifications:
- ✅ In-app notifications ALWAYS work
- ✅ Notifications saved to database
- ✅ Real-time delivery via socket.io
- ✅ No permission required
- ✅ Works on all browsers

## 🔍 Debug Endpoints

```bash
# Get notifications for current user
GET /api/v1/notifications?page=1&limit=10

# Get only unread
GET /api/v1/notifications?unreadOnly=true

# Get unread count
GET /api/v1/notifications/unread-count
```

## 🎉 Benefits

1. **No Browser Permission Needed** - Always works, unlike push notifications
2. **Real-Time Updates** - Instant delivery via socket.io
3. **Persistent** - Saved in database, available after login
4. **Actionable** - Click to navigate to related request
5. **User-Friendly** - Clean UI integrated into header
6. **Complete Tracking** - Know what was sent via which channel

## 🔥 Next Steps (Optional)

1. **Email Integration**: Send email for URGENT priority notifications
2. **SMS Integration**: Critical alerts via SMS
3. **Notification Preferences**: Let users choose which events to receive
4. **Notification History Page**: Full-page view with filters
5. **Sound Alerts**: Play sound when new notification arrives
6. **Desktop Notifications**: Browser native notifications (if permitted)

---

**✅ In-App Notifications are now fully operational!**

Users will receive instant notifications for all major workflow events, even without browser push permissions enabled.
157
package-lock.json
generated
@ -65,6 +65,7 @@
|
||||
"ts-jest": "^29.2.5",
|
||||
"ts-node": "^10.9.2",
|
||||
"ts-node-dev": "^2.0.0",
|
||||
"tsc-alias": "^1.8.16",
|
||||
"tsconfig-paths": "^4.2.0",
|
||||
"typescript": "^5.7.2"
|
||||
},
|
||||
@ -2672,6 +2673,16 @@
|
||||
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/array-union": {
|
||||
"version": "2.1.0",
|
||||
"resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz",
|
||||
"integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/arrify": {
|
||||
"version": "2.0.1",
|
||||
"resolved": "https://registry.npmjs.org/arrify/-/arrify-2.0.1.tgz",
|
||||
@ -3812,6 +3823,19 @@
|
||||
"node": "^14.15.0 || ^16.10.0 || >=18.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/dir-glob": {
|
||||
"version": "3.0.1",
|
||||
"resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
|
||||
"integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"path-type": "^4.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/dotenv": {
|
||||
"version": "16.6.1",
|
||||
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.6.1.tgz",
|
||||
@ -4989,6 +5013,19 @@
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/get-tsconfig": {
|
||||
"version": "4.13.0",
|
||||
"resolved": "https://registry.npmjs.org/get-tsconfig/-/get-tsconfig-4.13.0.tgz",
|
||||
"integrity": "sha512-1VKTZJCwBrvbd+Wn3AOgQP/2Av+TfTCOlE4AcRJE72W1ksZXbAx8PPBR9RzgTeSPzlPMHrbANMH3LbltH73wxQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"resolve-pkg-maps": "^1.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/privatenumber/get-tsconfig?sponsor=1"
|
||||
}
|
||||
},
|
||||
"node_modules/glob": {
|
||||
"version": "7.2.3",
|
||||
"resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
|
||||
@ -5061,6 +5098,37 @@
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/globby": {
|
||||
"version": "11.1.0",
|
||||
"resolved": "https://registry.npmjs.org/globby/-/globby-11.1.0.tgz",
|
||||
"integrity": "sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"array-union": "^2.1.0",
|
||||
"dir-glob": "^3.0.1",
|
||||
"fast-glob": "^3.2.9",
|
||||
"ignore": "^5.2.0",
|
||||
"merge2": "^1.4.1",
|
||||
"slash": "^3.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=10"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/globby/node_modules/ignore": {
|
||||
"version": "5.3.2",
|
||||
"resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz",
|
||||
"integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 4"
|
||||
}
|
||||
},
|
||||
"node_modules/google-auth-library": {
|
||||
"version": "9.15.1",
|
||||
"resolved": "https://registry.npmjs.org/google-auth-library/-/google-auth-library-9.15.1.tgz",
|
||||
@ -6955,6 +7023,20 @@
|
||||
"node": ">= 6.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/mylas": {
|
||||
"version": "2.1.14",
|
||||
"resolved": "https://registry.npmjs.org/mylas/-/mylas-2.1.14.tgz",
|
||||
"integrity": "sha512-BzQguy9W9NJgoVn2mRWzbFrFWWztGCcng2QI9+41frfk+Athwgx3qhqhvStz7ExeUUu7Kzw427sNzHpEZNINog==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=16.0.0"
|
||||
},
|
||||
"funding": {
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/raouldeheer"
|
||||
}
|
||||
},
|
||||
"node_modules/natural-compare": {
|
||||
"version": "1.4.0",
|
||||
"resolved": "https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz",
|
||||
@ -7467,6 +7549,16 @@
|
||||
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/path-type": {
|
||||
"version": "4.0.0",
|
||||
"resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
|
||||
"integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/pause": {
|
||||
"version": "0.0.1",
|
||||
"resolved": "https://registry.npmjs.org/pause/-/pause-0.0.1.tgz",
|
||||
@ -7672,6 +7764,19 @@
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/plimit-lit": {
|
||||
"version": "1.6.1",
|
||||
"resolved": "https://registry.npmjs.org/plimit-lit/-/plimit-lit-1.6.1.tgz",
|
||||
"integrity": "sha512-B7+VDyb8Tl6oMJT9oSO2CW8XC/T4UcJGrwOVoNGwOQsQYhlpfajmrMj5xeejqaASq3V/EqThyOeATEOMuSEXiA==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"queue-lit": "^1.5.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
}
|
||||
},
|
||||
"node_modules/postgres-array": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/postgres-array/-/postgres-array-2.0.0.tgz",
|
||||
@ -7860,6 +7965,16 @@
|
||||
"url": "https://github.com/sponsors/ljharb"
|
||||
}
|
||||
},
|
||||
"node_modules/queue-lit": {
|
||||
"version": "1.5.2",
|
||||
"resolved": "https://registry.npmjs.org/queue-lit/-/queue-lit-1.5.2.tgz",
|
||||
"integrity": "sha512-tLc36IOPeMAubu8BkW8YDBV+WyIgKlYU7zUNs0J5Vk9skSZ4JfGlPOqplP0aHdfv7HL0B2Pg6nwiq60Qc6M2Hw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
}
|
||||
},
|
||||
"node_modules/queue-microtask": {
|
||||
"version": "1.2.3",
|
||||
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
|
||||
@ -8024,6 +8139,16 @@
|
||||
"node": ">=4"
|
||||
}
|
||||
},
|
||||
"node_modules/resolve-pkg-maps": {
|
||||
"version": "1.0.0",
|
||||
"resolved": "https://registry.npmjs.org/resolve-pkg-maps/-/resolve-pkg-maps-1.0.0.tgz",
|
||||
"integrity": "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"funding": {
|
||||
"url": "https://github.com/privatenumber/resolve-pkg-maps?sponsor=1"
|
||||
}
|
||||
},
|
||||
"node_modules/resolve.exports": {
|
||||
"version": "2.0.3",
|
||||
"resolved": "https://registry.npmjs.org/resolve.exports/-/resolve.exports-2.0.3.tgz",
|
||||
@ -9279,6 +9404,38 @@
|
||||
"node": ">=10"
|
||||
}
|
||||
},
|
||||
"node_modules/tsc-alias": {
|
||||
"version": "1.8.16",
|
||||
"resolved": "https://registry.npmjs.org/tsc-alias/-/tsc-alias-1.8.16.tgz",
|
||||
"integrity": "sha512-QjCyu55NFyRSBAl6+MTFwplpFcnm2Pq01rR/uxfqJoLMm6X3O14KEGtaSDZpJYaE1bJBGDjD0eSuiIWPe2T58g==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"chokidar": "^3.5.3",
|
||||
"commander": "^9.0.0",
|
||||
"get-tsconfig": "^4.10.0",
|
||||
"globby": "^11.0.4",
|
||||
"mylas": "^2.1.9",
|
||||
"normalize-path": "^3.0.0",
|
||||
"plimit-lit": "^1.2.6"
|
||||
},
|
||||
"bin": {
|
||||
"tsc-alias": "dist/bin/index.js"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=16.20.2"
|
||||
}
|
||||
},
|
||||
"node_modules/tsc-alias/node_modules/commander": {
|
||||
"version": "9.5.0",
|
||||
"resolved": "https://registry.npmjs.org/commander/-/commander-9.5.0.tgz",
|
||||
"integrity": "sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": "^12.20.0 || >=14"
|
||||
}
|
||||
},
|
||||
"node_modules/tsconfig": {
|
||||
"version": "7.0.0",
|
||||
"resolved": "https://registry.npmjs.org/tsconfig/-/tsconfig-7.0.0.tgz",
|
||||
|
||||
12
package.json
@ -7,20 +7,13 @@
    "start": "node dist/server.js",
    "dev": "npm run setup && nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
    "dev:no-setup": "nodemon --exec ts-node -r tsconfig-paths/register src/server.ts",
    "build": "tsc",
    "build": "tsc && tsc-alias",
    "build:watch": "tsc --watch",
    "start:prod": "NODE_ENV=production node dist/server.js",
    "test": "jest --coverage",
    "test:unit": "jest --testPathPattern=tests/unit",
    "test:integration": "jest --testPathPattern=tests/integration",
    "test:watch": "jest --watch",
    "start:prod": "node dist/server.js",
    "lint": "eslint src/**/*.ts",
    "lint:fix": "eslint src/**/*.ts --fix",
    "format": "prettier --write \"src/**/*.ts\"",
    "type-check": "tsc --noEmit",
    "db:migrate": "sequelize-cli db:migrate",
    "db:migrate:undo": "sequelize-cli db:migrate:undo",
    "db:seed": "sequelize-cli db:seed:all",
    "clean": "rm -rf dist",
    "setup": "ts-node -r tsconfig-paths/register src/scripts/auto-setup.ts",
    "migrate": "ts-node -r tsconfig-paths/register src/scripts/migrate.ts",
@ -84,6 +77,7 @@
    "ts-jest": "^29.2.5",
    "ts-node": "^10.9.2",
    "ts-node-dev": "^2.0.0",
    "tsc-alias": "^1.8.16",
    "tsconfig-paths": "^4.2.0",
    "typescript": "^5.7.2"
  },
@ -365,8 +365,11 @@ export class DashboardController {
|
||||
const userId = (req as any).user?.userId;
|
||||
const page = Number(req.query.page || 1);
|
||||
const limit = Number(req.query.limit || 50);
|
||||
const dateRange = req.query.dateRange as string | undefined;
|
||||
const startDate = req.query.startDate as string | undefined;
|
||||
const endDate = req.query.endDate as string | undefined;
|
||||
|
||||
const result = await this.dashboardService.getLifecycleReport(userId, page, limit);
|
||||
const result = await this.dashboardService.getLifecycleReport(userId, page, limit, dateRange, startDate, endDate);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
@ -396,6 +399,8 @@ export class DashboardController {
|
||||
const page = Number(req.query.page || 1);
|
||||
const limit = Number(req.query.limit || 50);
|
||||
const dateRange = req.query.dateRange as string | undefined;
|
||||
const startDate = req.query.startDate as string | undefined;
|
||||
const endDate = req.query.endDate as string | undefined;
|
||||
const filterUserId = req.query.filterUserId as string | undefined;
|
||||
const filterType = req.query.filterType as string | undefined;
|
||||
const filterCategory = req.query.filterCategory as string | undefined;
|
||||
@ -409,7 +414,9 @@ export class DashboardController {
|
||||
filterUserId,
|
||||
filterType,
|
||||
filterCategory,
|
||||
filterSeverity
|
||||
filterSeverity,
|
||||
startDate,
|
||||
endDate
|
||||
);
|
||||
|
||||
res.json({
|
||||
@ -432,8 +439,41 @@ export class DashboardController {
|
||||
}
|
||||
|
||||
/**
|
||||
* Get Workflow Aging Report
|
||||
* Get list of departments (metadata for filtering)
|
||||
* GET /api/v1/dashboard/metadata/departments
|
||||
*/
|
||||
async getDepartments(req: Request, res: Response): Promise<void> {
|
||||
try {
|
||||
const userId = (req as any).user?.userId;
|
||||
if (!userId) {
|
||||
res.status(401).json({
|
||||
success: false,
|
||||
message: 'Unauthorized',
|
||||
timestamp: new Date()
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const departments = await this.dashboardService.getDepartments(userId);
|
||||
|
||||
res.status(200).json({
|
||||
success: true,
|
||||
message: 'Departments retrieved successfully',
|
||||
data: {
|
||||
departments
|
||||
},
|
||||
timestamp: new Date()
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[Dashboard] Get Departments failed:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Internal server error',
|
||||
timestamp: new Date()
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
async getWorkflowAgingReport(req: Request, res: Response): Promise<void> {
|
||||
try {
|
||||
const userId = (req as any).user?.userId;
|
||||
@ -441,13 +481,17 @@ export class DashboardController {
|
||||
const page = Number(req.query.page || 1);
|
||||
const limit = Number(req.query.limit || 50);
|
||||
const dateRange = req.query.dateRange as string | undefined;
|
||||
const startDate = req.query.startDate as string | undefined;
|
||||
const endDate = req.query.endDate as string | undefined;
|
||||
|
||||
const result = await this.dashboardService.getWorkflowAgingReport(
|
||||
userId,
|
||||
threshold,
|
||||
page,
|
||||
limit,
|
||||
dateRange
|
||||
dateRange,
|
||||
startDate,
|
||||
endDate
|
||||
);
|
||||
|
||||
res.json({
|
||||
@ -468,5 +512,63 @@ export class DashboardController {
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get requests filtered by approver ID for detailed performance analysis
|
||||
*/
|
||||
async getRequestsByApprover(req: Request, res: Response): Promise<void> {
|
||||
try {
|
||||
const userId = (req as any).user?.userId;
|
||||
const approverId = req.query.approverId as string;
|
||||
const page = Number(req.query.page || 1);
|
||||
const limit = Number(req.query.limit || 50);
|
||||
const dateRange = req.query.dateRange as string | undefined;
|
||||
const startDate = req.query.startDate as string | undefined;
|
||||
const endDate = req.query.endDate as string | undefined;
|
||||
const status = req.query.status as string | undefined;
|
||||
const priority = req.query.priority as string | undefined;
|
||||
const slaCompliance = req.query.slaCompliance as string | undefined;
|
||||
const search = req.query.search as string | undefined;
|
||||
|
||||
if (!approverId) {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'Approver ID is required'
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await this.dashboardService.getRequestsByApprover(
|
||||
userId,
|
||||
approverId,
|
||||
page,
|
||||
limit,
|
||||
dateRange,
|
||||
startDate,
|
||||
endDate,
|
||||
status,
|
||||
priority,
|
||||
slaCompliance,
|
||||
search
|
||||
);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: result.requests,
|
||||
pagination: {
|
||||
currentPage: result.currentPage,
|
||||
totalPages: result.totalPages,
|
||||
totalRecords: result.totalRecords,
|
||||
limit: result.limit
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[Dashboard] Error fetching requests by approver:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch requests by approver'
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@ -2,9 +2,13 @@ import { Request, Response } from 'express';
|
||||
import { TatAlert } from '@models/TatAlert';
|
||||
import { ApprovalLevel } from '@models/ApprovalLevel';
|
||||
import { User } from '@models/User';
|
||||
import { WorkflowRequest } from '@models/WorkflowRequest';
|
||||
import logger from '@utils/logger';
|
||||
import { sequelize } from '@config/database';
|
||||
import { QueryTypes } from 'sequelize';
|
||||
import { activityService } from '@services/activity.service';
|
||||
import { getRequestMetadata } from '@utils/requestUtils';
|
||||
import type { AuthenticatedRequest } from '../types/express';
|
||||
|
||||
/**
|
||||
* Get TAT alerts for a specific request
|
||||
@ -155,6 +159,121 @@ export const getTatBreachReport = async (req: Request, res: Response) => {
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Update breach reason for a TAT alert
|
||||
*/
|
||||
export const updateBreachReason = async (req: Request, res: Response) => {
|
||||
try {
|
||||
const { levelId } = req.params;
|
||||
const { breachReason } = req.body;
|
||||
const userId = (req as AuthenticatedRequest).user?.userId;
|
||||
const requestMeta = getRequestMetadata(req);
|
||||
|
||||
if (!userId) {
|
||||
return res.status(401).json({
|
||||
success: false,
|
||||
error: 'Unauthorized'
|
||||
});
|
||||
}
|
||||
|
||||
if (!breachReason || typeof breachReason !== 'string' || breachReason.trim().length === 0) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
error: 'Breach reason is required'
|
||||
});
|
||||
}
|
||||
|
||||
// Get the approval level to verify permissions
|
||||
const level = await ApprovalLevel.findByPk(levelId);
|
||||
if (!level) {
|
||||
return res.status(404).json({
|
||||
success: false,
|
||||
error: 'Approval level not found'
|
||||
});
|
||||
}
|
||||
|
||||
// Get user to check role
|
||||
const user = await User.findByPk(userId);
|
||||
if (!user) {
|
||||
return res.status(404).json({
|
||||
success: false,
|
||||
error: 'User not found'
|
||||
});
|
||||
}
|
||||
|
||||
const userRole = (user as any).role;
|
||||
const approverId = (level as any).approverId;
|
||||
|
||||
// Check permissions: ADMIN, MANAGEMENT, or the approver
|
||||
const hasPermission =
|
||||
userRole === 'ADMIN' ||
|
||||
userRole === 'MANAGEMENT' ||
|
||||
approverId === userId;
|
||||
|
||||
if (!hasPermission) {
|
||||
return res.status(403).json({
|
||||
success: false,
|
||||
error: 'You do not have permission to update breach reason'
|
||||
});
|
||||
}
|
||||
|
||||
// Get user details for activity logging
|
||||
const userDisplayName = (user as any).displayName || (user as any).email || 'Unknown User';
|
||||
const isUpdate = !!(level as any).breachReason; // Check if this is an update or first time
|
||||
const levelNumber = (level as any).levelNumber;
|
||||
const approverName = (level as any).approverName || 'Unknown Approver';
|
||||
|
||||
// Update breach reason directly in approval_levels table
|
||||
await level.update({
|
||||
breachReason: breachReason.trim()
|
||||
});
|
||||
|
||||
// Reload to get updated data
|
||||
await level.reload();
|
||||
|
||||
// Log activity for the request
|
||||
const userRoleLabel = userRole === 'ADMIN' ? 'Admin' : userRole === 'MANAGEMENT' ? 'Management' : 'Approver';
|
||||
await activityService.log({
|
||||
requestId: level.requestId,
|
||||
type: 'comment', // Using comment type for breach reason entry
|
||||
user: {
|
||||
userId: userId,
|
||||
name: userDisplayName,
|
||||
email: (user as any).email
|
||||
},
|
||||
timestamp: new Date().toISOString(),
|
||||
action: isUpdate ? 'Updated TAT breach reason' : 'Added TAT breach reason',
|
||||
details: `${userDisplayName} (${userRoleLabel}) ${isUpdate ? 'updated' : 'added'} TAT breach reason for ${approverName} (Level ${levelNumber}): "${breachReason.trim()}"`,
|
||||
metadata: {
|
||||
levelId: level.levelId,
|
||||
levelNumber: levelNumber,
|
||||
approverName: approverName,
|
||||
breachReason: breachReason.trim(),
|
||||
updatedByRole: userRole
|
||||
},
|
||||
ipAddress: requestMeta.ipAddress,
|
||||
userAgent: requestMeta.userAgent
|
||||
});
|
||||
|
||||
logger.info(`[TAT Controller] Breach reason ${isUpdate ? 'updated' : 'added'} for level ${levelId} by user ${userId} (${userRole})`);
|
||||
|
||||
return res.json({
|
||||
success: true,
|
||||
message: `Breach reason ${isUpdate ? 'updated' : 'added'} successfully`,
|
||||
data: {
|
||||
levelId: level.levelId,
|
||||
breachReason: breachReason.trim()
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[TAT Controller] Error updating breach reason:', error);
|
||||
return res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to update breach reason'
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Get approver TAT performance
|
||||
*/
|
||||
|
||||
@ -0,0 +1,49 @@
import { QueryInterface, DataTypes } from 'sequelize';

/**
 * Migration: Add breach_reason column to approval_levels table
 * Purpose: Store TAT breach reason directly in approval_levels table
 * Date: 2025-11-18
 */

export async function up(queryInterface: QueryInterface): Promise<void> {
  // Check if table exists first
  const tables = await queryInterface.showAllTables();
  if (!tables.includes('approval_levels')) {
    // Table doesn't exist yet, skipping
    return;
  }

  // Get existing columns
  const tableDescription = await queryInterface.describeTable('approval_levels');

  // Add breach_reason column only if it doesn't exist
  if (!tableDescription.breach_reason) {
    await queryInterface.addColumn('approval_levels', 'breach_reason', {
      type: DataTypes.TEXT,
      allowNull: true,
      comment: 'Reason for TAT breach - can contain paragraph-length text'
    });
    console.log('✅ Added breach_reason column to approval_levels table');
  } else {
    console.log('ℹ️ breach_reason column already exists, skipping');
  }
}

export async function down(queryInterface: QueryInterface): Promise<void> {
  // Check if table exists
  const tables = await queryInterface.showAllTables();
  if (!tables.includes('approval_levels')) {
    return;
  }

  // Get existing columns
  const tableDescription = await queryInterface.describeTable('approval_levels');

  // Remove column only if it exists
  if (tableDescription.breach_reason) {
    await queryInterface.removeColumn('approval_levels', 'breach_reason');
    console.log('✅ Removed breach_reason column from approval_levels table');
  }
}
@ -20,6 +20,7 @@ interface ApprovalLevelAttributes {
  actionDate?: Date;
  comments?: string;
  rejectionReason?: string;
  breachReason?: string;
  isFinalApprover: boolean;
  elapsedHours: number;
  remainingHours: number;
@ -32,7 +33,7 @@ interface ApprovalLevelAttributes {
  updatedAt: Date;
}

interface ApprovalLevelCreationAttributes extends Optional<ApprovalLevelAttributes, 'levelId' | 'levelName' | 'levelStartTime' | 'levelEndTime' | 'actionDate' | 'comments' | 'rejectionReason' | 'tat50AlertSent' | 'tat75AlertSent' | 'tatBreached' | 'tatStartTime' | 'tatDays' | 'createdAt' | 'updatedAt'> {}
interface ApprovalLevelCreationAttributes extends Optional<ApprovalLevelAttributes, 'levelId' | 'levelName' | 'levelStartTime' | 'levelEndTime' | 'actionDate' | 'comments' | 'rejectionReason' | 'breachReason' | 'tat50AlertSent' | 'tat75AlertSent' | 'tatBreached' | 'tatStartTime' | 'tatDays' | 'createdAt' | 'updatedAt'> {}

class ApprovalLevel extends Model<ApprovalLevelAttributes, ApprovalLevelCreationAttributes> implements ApprovalLevelAttributes {
  public levelId!: string;
@ -50,6 +51,7 @@ class ApprovalLevel extends Model<ApprovalLevelAttributes, ApprovalLevelCreation
  public actionDate?: Date;
  public comments?: string;
  public rejectionReason?: string;
  public breachReason?: string;
  public isFinalApprover!: boolean;
  public elapsedHours!: number;
  public remainingHours!: number;
@ -152,6 +154,12 @@ ApprovalLevel.init(
      allowNull: true,
      field: 'rejection_reason'
    },
    breachReason: {
      type: DataTypes.TEXT,
      allowNull: true,
      field: 'breach_reason',
      comment: 'Reason for TAT breach - can contain paragraph-length text'
    },
    isFinalApprover: {
      type: DataTypes.BOOLEAN,
      defaultValue: false,
@ -56,7 +56,6 @@ export function initSocket(httpServer: any) {
      const userId = typeof data === 'string' ? data : data.userId;
      socket.join(`user:${userId}`);
      currentUserId = userId;
      console.log(`[Socket] User ${userId} joined personal notification room`);
    });

    socket.on('join:request', (data: { requestId: string; userId?: string }) => {
@ -132,7 +131,6 @@ export function emitToRequestRoom(requestId: string, event: string, payload: any
export function emitToUser(userId: string, event: string, payload: any) {
  if (!io) return;
  io.to(`user:${userId}`).emit(event, payload);
  console.log(`[Socket] Emitted '${event}' to user ${userId}`);
}
@ -108,5 +108,17 @@ router.get('/reports/workflow-aging',
  asyncHandler(dashboardController.getWorkflowAgingReport.bind(dashboardController))
);

// Get departments metadata (for filtering)
router.get('/metadata/departments',
  authenticateToken,
  asyncHandler(dashboardController.getDepartments.bind(dashboardController))
);

// Get requests filtered by approver ID (for detailed performance analysis)
router.get('/requests/by-approver',
  authenticateToken,
  asyncHandler(dashboardController.getRequestsByApprover.bind(dashboardController))
);

export default router;
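For reference, the two new dashboard routes can be exercised like this. The sketch assumes the `/api/v1` prefix from the frontend `.env` and bearer-token auth; the query-parameter names and the `pagination` shape follow `DashboardController.getRequestsByApprover` earlier in this diff, while the base URL and example IDs are placeholders.

```typescript
const API = 'http://localhost:5000/api/v1'; // assumption: local dev base URL

async function getJson(path: string, token: string) {
  const res = await fetch(`${API}${path}`, {
    headers: { Authorization: `Bearer ${token}` }, // assumption: JWT bearer auth
  });
  if (!res.ok) throw new Error(`Dashboard API error: ${res.status}`);
  return res.json();
}

async function demo(token: string) {
  // Departments metadata for the new filter dropdowns
  const departments = await getJson('/dashboard/metadata/departments', token);

  // Requests handled by one approver; approverId is required, the rest are optional filters
  const params = new URLSearchParams({
    approverId: 'some-approver-id', // hypothetical ID
    page: '1',
    limit: '50',
    startDate: '2025-11-01',
    endDate: '2025-11-18',
    slaCompliance: 'breached',
  });
  const byApprover = await getJson(`/dashboard/requests/by-approver?${params}`, token);
  // byApprover.pagination -> { currentPage, totalPages, totalRecords, limit }
  return { departments, byApprover };
}
```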
@ -5,7 +5,8 @@ import {
  getTatAlertsByLevel,
  getTatComplianceSummary,
  getTatBreachReport,
  getApproverTatPerformance
  getApproverTatPerformance,
  updateBreachReason
} from '@controllers/tat.controller';

const router = Router();
@ -49,5 +50,12 @@ router.get('/breaches', getTatBreachReport);
 */
router.get('/performance/:approverId', getApproverTatPerformance);

/**
 * @route PUT /api/tat/breach-reason/:levelId
 * @desc Update breach reason for a TAT alert
 * @access Private (ADMIN, MANAGEMENT, or approver)
 */
router.put('/breach-reason/:levelId', updateBreachReason);

export default router;
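A quick client-side sketch of calling the new endpoint (the base path follows the `@route` comment above; the token handling and helper name are assumptions):

```typescript
// Record or update the reason a TAT breach occurred for one approval level.
async function saveBreachReason(levelId: string, breachReason: string, token: string) {
  const res = await fetch(`/api/tat/breach-reason/${levelId}`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`, // assumption: JWT bearer auth
    },
    // The controller rejects empty/whitespace-only reasons with a 400.
    body: JSON.stringify({ breachReason }),
  });

  if (!res.ok) throw new Error(`Failed to save breach reason: ${res.status}`);
  // Expected shape per the controller: { success, message, data: { levelId, breachReason } }
  return res.json();
}
```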
@ -18,6 +18,7 @@ import * as m14 from '../migrations/20251105-add-skip-fields-to-approval-levels'
import * as m15 from '../migrations/2025110501-alter-tat-days-to-generated';
import * as m16 from '../migrations/20251111-create-notifications';
import * as m17 from '../migrations/20251111-create-conclusion-remarks';
import * as m18 from '../migrations/20251118-add-breach-reason-to-approval-levels';

interface Migration {
  name: string;
@ -50,6 +51,7 @@ const migrations: Migration[] = [
  { name: '2025110501-alter-tat-days-to-generated', module: m15 },
  { name: '20251111-create-notifications', module: m16 },
  { name: '20251111-create-conclusion-remarks', module: m17 },
  { name: '20251118-add-breach-reason-to-approval-levels', module: m18 },
];

/**
@ -226,9 +226,21 @@ export class ApprovalService {
|
||||
logger.error(`[Approval] Unhandled error in background AI generation:`, err);
|
||||
});
|
||||
|
||||
// Notify initiator about approval and pending conclusion step
|
||||
// Notify initiator and all participants (including spectators) about approval
|
||||
// Spectators are CC'd for transparency, similar to email CC
|
||||
if (wf) {
|
||||
await notificationService.sendToUsers([ (wf as any).initiatorId ], {
|
||||
const participants = await Participant.findAll({
|
||||
where: { requestId: level.requestId }
|
||||
});
|
||||
const targetUserIds = new Set<string>();
|
||||
targetUserIds.add((wf as any).initiatorId);
|
||||
for (const p of participants as any[]) {
|
||||
targetUserIds.add(p.userId); // Includes spectators
|
||||
}
|
||||
|
||||
// Send notification to initiator (with action required)
|
||||
const initiatorId = (wf as any).initiatorId;
|
||||
await notificationService.sendToUsers([initiatorId], {
|
||||
title: `Request Approved - Closure Pending`,
|
||||
body: `Your request "${(wf as any).title}" has been fully approved. Please review and finalize the conclusion remark to close the request.`,
|
||||
requestNumber: (wf as any).requestNumber,
|
||||
@ -239,7 +251,22 @@ export class ApprovalService {
|
||||
actionRequired: true
|
||||
});
|
||||
|
||||
logger.info(`[Approval] ✅ Final approval complete for ${level.requestId}. Initiator notified to finalize conclusion.`);
|
||||
// Send notification to all participants/spectators (for transparency, no action required)
|
||||
const participantUserIds = Array.from(targetUserIds).filter(id => id !== initiatorId);
|
||||
if (participantUserIds.length > 0) {
|
||||
await notificationService.sendToUsers(participantUserIds, {
|
||||
title: `Request Approved`,
|
||||
body: `Request "${(wf as any).title}" has been fully approved. The initiator will finalize the conclusion remark to close the request.`,
|
||||
requestNumber: (wf as any).requestNumber,
|
||||
requestId: level.requestId,
|
||||
url: `/request/${(wf as any).requestNumber}`,
|
||||
type: 'approval_pending_closure',
|
||||
priority: 'MEDIUM',
|
||||
actionRequired: false
|
||||
});
|
||||
}
|
||||
|
||||
logger.info(`[Approval] ✅ Final approval complete for ${level.requestId}. Initiator and ${participants.length} participant(s) notified.`);
|
||||
}
|
||||
} else {
|
||||
// Not final - move to next level
|
||||
|
||||
@ -32,6 +32,15 @@ export class DashboardService {
|
||||
return { start, end: actualEnd };
|
||||
}
|
||||
|
||||
// If custom is selected but dates are not provided, default to last 30 days
|
||||
if (dateRange === 'custom' && (!startDate || !endDate)) {
|
||||
const now = dayjs();
|
||||
return {
|
||||
start: now.subtract(30, 'day').startOf('day').toDate(),
|
||||
end: now.endOf('day').toDate()
|
||||
};
|
||||
}
|
||||
|
||||
const now = dayjs();
|
||||
|
||||
switch (dateRange) {
|
||||
@ -136,12 +145,13 @@ export class DashboardService {
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
// Get total, approved, and rejected requests created in date range
|
||||
// Get total, approved, rejected, and closed requests created in date range
|
||||
const result = await sequelize.query(`
|
||||
SELECT
|
||||
COUNT(*)::int AS total_requests,
|
||||
COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved_requests,
|
||||
COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected_requests
|
||||
COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected_requests,
|
||||
COUNT(CASE WHEN wf.status = 'CLOSED' THEN 1 END)::int AS closed_requests
|
||||
FROM workflow_requests wf
|
||||
${whereClauseForDateRange}
|
||||
`, {
|
||||
@ -181,6 +191,7 @@ export class DashboardService {
|
||||
openRequests: pending.open_requests || 0, // All pending requests regardless of creation date
|
||||
approvedRequests: stats.approved_requests || 0,
|
||||
rejectedRequests: stats.rejected_requests || 0,
|
||||
closedRequests: stats.closed_requests || 0,
|
||||
draftRequests: drafts.draft_count || 0,
|
||||
changeFromPrevious: {
|
||||
total: '+0',
|
||||
@ -203,10 +214,11 @@ export class DashboardService {
|
||||
|
||||
// For regular users: only their initiated requests
|
||||
// For admin: all requests
|
||||
// Include requests that were COMPLETED (closure_date or updated_at) within the date range
|
||||
// Include requests that were COMPLETED (APPROVED, REJECTED, or CLOSED) within the date range
|
||||
// CLOSED status represents approved requests that were finalized with a conclusion remark
|
||||
// This ensures we capture all requests that finished during the period, regardless of when they started
|
||||
let whereClause = `
|
||||
WHERE wf.status IN ('APPROVED', 'REJECTED')
|
||||
WHERE wf.status IN ('APPROVED', 'REJECTED', 'CLOSED')
|
||||
AND wf.is_draft = false
|
||||
AND wf.submission_date IS NOT NULL
|
||||
AND (
|
||||
@ -234,7 +246,6 @@ export class DashboardService {
|
||||
// Calculate cycle time using working hours for each request
|
||||
const { calculateElapsedWorkingHours } = await import('@utils/tatTimeUtils');
|
||||
const cycleTimes: number[] = [];
|
||||
let breachedCount = 0;
|
||||
|
||||
logger.info(`[Dashboard] Calculating cycle time for ${completedRequests.length} completed requests`);
|
||||
|
||||
@ -244,10 +255,12 @@ export class DashboardService {
|
||||
const completionDate = req.closure_date || req.updated_at;
|
||||
const priority = (req.priority || 'STANDARD').toLowerCase();
|
||||
|
||||
let elapsedHours: number | null = null;
|
||||
|
||||
if (submissionDate && completionDate) {
|
||||
try {
|
||||
// Calculate elapsed working hours (respects working hours, weekends, holidays)
|
||||
const elapsedHours = await calculateElapsedWorkingHours(
|
||||
elapsedHours = await calculateElapsedWorkingHours(
|
||||
submissionDate,
|
||||
completionDate,
|
||||
priority
|
||||
@ -261,39 +274,156 @@ export class DashboardService {
|
||||
logger.warn(`[Dashboard] Skipping request ${req.request_id} - missing dates (submission: ${submissionDate}, completion: ${completionDate})`);
|
||||
}
|
||||
|
||||
// Check for breaches
|
||||
const breachCheck = await sequelize.query(`
|
||||
SELECT COUNT(*)::int AS breach_count
|
||||
// Note: Breach checking is now done in the allRequestsBreached loop below
|
||||
// using the same calculateSLAStatus logic as the Requests screen
|
||||
// This ensures consistency between Dashboard and All Requests screen
|
||||
}
|
||||
|
||||
// Count ALL requests (pending, in-progress, approved, rejected, closed) that have currently breached TAT
|
||||
// Use the same logic as Requests screen: check currentLevelSLA status using calculateSLAStatus
|
||||
// This ensures delayedWorkflows matches what users see when filtering for "breached" in All Requests screen
|
||||
// For date range: completed requests (APPROVED/REJECTED/CLOSED) must be completed in date range
|
||||
// For pending/in-progress: include ALL pending/in-progress regardless of submission date (same as requestVolume stats)
|
||||
const allRequestsBreachedQuery = `
|
||||
SELECT DISTINCT
|
||||
wf.request_id,
|
||||
wf.status,
|
||||
wf.priority,
|
||||
wf.current_level,
|
||||
al.level_start_time AS current_level_start_time,
|
||||
al.tat_hours AS current_level_tat_hours,
|
||||
wf.submission_date,
|
||||
wf.total_tat_hours,
|
||||
wf.closure_date,
|
||||
wf.updated_at
|
||||
FROM workflow_requests wf
|
||||
LEFT JOIN approval_levels al ON al.request_id = wf.request_id
|
||||
AND al.level_number = wf.current_level
|
||||
AND (al.status = 'IN_PROGRESS' OR (wf.status IN ('APPROVED', 'REJECTED', 'CLOSED') AND al.status = 'APPROVED'))
|
||||
WHERE wf.is_draft = false
|
||||
AND wf.submission_date IS NOT NULL
|
||||
AND (
|
||||
-- Completed requests: must be completed in date range
|
||||
(wf.status IN ('APPROVED', 'REJECTED', 'CLOSED')
|
||||
AND (
|
||||
(wf.closure_date IS NOT NULL AND wf.closure_date BETWEEN :start AND :end)
|
||||
OR (wf.closure_date IS NULL AND wf.updated_at BETWEEN :start AND :end)
|
||||
))
|
||||
-- Pending/in-progress: include ALL regardless of submission date
|
||||
OR wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||
)
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
AND (
|
||||
EXISTS (
|
||||
SELECT 1
|
||||
FROM tat_alerts ta
|
||||
WHERE ta.request_id = :requestId
|
||||
INNER JOIN approval_levels al_breach ON ta.level_id = al_breach.level_id
|
||||
WHERE ta.request_id = wf.request_id
|
||||
AND ta.is_breached = true
|
||||
`, {
|
||||
replacements: { requestId: req.request_id },
|
||||
AND al_breach.level_number = wf.current_level
|
||||
)
|
||||
OR al.level_start_time IS NOT NULL
|
||||
OR wf.total_tat_hours > 0
|
||||
)
|
||||
`;
|
||||
|
||||
const allRequestsBreached = await sequelize.query(allRequestsBreachedQuery, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
if ((breachCheck[0] as any)?.breach_count > 0) {
|
||||
breachedCount++;
|
||||
// Use calculateSLAStatus to check if each request is breached (same as Requests screen logic)
|
||||
const { calculateSLAStatus } = await import('@utils/tatTimeUtils');
|
||||
let pendingBreachedCount = 0;
|
||||
|
||||
// Also need to recalculate breachedCount for completed requests using same logic as Requests screen
|
||||
// This ensures we catch any completed requests that breached but weren't detected by previous checks
|
||||
let recalculatedBreachedCount = 0;
|
||||
let recalculatedCompliantCount = 0;
|
||||
|
||||
for (const req of allRequestsBreached as any) {
|
||||
const isCompleted = req.status === 'APPROVED' || req.status === 'REJECTED' || req.status === 'CLOSED';
|
||||
|
||||
// Check current level SLA (same logic as Requests screen)
|
||||
let isBreached = false;
|
||||
|
||||
if (req.current_level_start_time && req.current_level_tat_hours > 0) {
|
||||
try {
|
||||
const priority = (req.priority || 'standard').toLowerCase();
|
||||
const levelEndDate = req.closure_date || null; // Use closure date if completed
|
||||
const slaData = await calculateSLAStatus(req.current_level_start_time, req.current_level_tat_hours, priority, levelEndDate);
|
||||
|
||||
// Mark as breached if percentageUsed >= 100 (same as Requests screen)
|
||||
if (slaData.percentageUsed >= 100) {
|
||||
isBreached = true;
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`[Dashboard] Error calculating SLA for request ${req.request_id}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
const totalCompleted = completedRequests.length;
|
||||
const compliantCount = totalCompleted - breachedCount;
|
||||
// Also check overall SLA if current level SLA check doesn't show breach
|
||||
if (!isBreached && req.submission_date && req.total_tat_hours > 0) {
|
||||
try {
|
||||
const priority = (req.priority || 'standard').toLowerCase();
|
||||
const overallEndDate = req.closure_date || null;
|
||||
const overallSLA = await calculateSLAStatus(req.submission_date, req.total_tat_hours, priority, overallEndDate);
|
||||
|
||||
if (overallSLA.percentageUsed >= 100) {
|
||||
isBreached = true;
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`[Dashboard] Error calculating overall SLA for request ${req.request_id}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
if (isBreached) {
|
||||
if (isCompleted) {
|
||||
recalculatedBreachedCount++;
|
||||
} else {
|
||||
pendingBreachedCount++;
|
||||
}
|
||||
} else if (isCompleted) {
|
||||
// Count as compliant if completed and not breached
|
||||
recalculatedCompliantCount++;
|
||||
}
|
||||
}
|
||||
|
||||
// Use recalculated counts which match Requests screen logic exactly
|
||||
// These counts use the same calculateSLAStatus logic as the Requests screen
|
||||
const finalBreachedCount = recalculatedBreachedCount;
|
||||
|
||||
// Total delayed workflows = completed breached + currently pending/in-progress breached
|
||||
const totalDelayedWorkflows = finalBreachedCount + pendingBreachedCount;
|
||||
|
||||
// Compliant workflows = all completed requests (APPROVED, REJECTED, CLOSED) that did NOT breach TAT
|
||||
// This includes:
|
||||
// - Approved requests that were approved within TAT
|
||||
// - Closed requests that were closed within TAT
|
||||
// - Rejected requests that were rejected within TAT (before TAT was exceeded)
|
||||
// Use recalculated compliant count from above which uses same logic as Requests screen
|
||||
const totalCompleted = recalculatedBreachedCount + recalculatedCompliantCount;
|
||||
const compliantCount = recalculatedCompliantCount;
|
||||
|
||||
// Compliance percentage = (compliant / total completed) * 100
|
||||
// This shows what percentage of completed requests (approved/closed/rejected) were completed within TAT
|
||||
const compliancePercent = totalCompleted > 0 ? Math.round((compliantCount / totalCompleted) * 100) : 0;
|
||||
|
||||
// Calculate average cycle time
|
||||
// Calculate average cycle time (rounded to 2 decimal places for accuracy)
|
||||
const sum = cycleTimes.reduce((sum, hours) => sum + hours, 0);
|
||||
const avgCycleTimeHours = cycleTimes.length > 0
|
||||
? Math.round((sum / cycleTimes.length) * 10) / 10
|
||||
? Math.round((sum / cycleTimes.length) * 100) / 100
|
||||
: 0;
|
||||
|
||||
logger.info(`[Dashboard] Cycle time calculation: ${cycleTimes.length} requests included, sum: ${sum.toFixed(2)}h, average: ${avgCycleTimeHours.toFixed(2)}h`);
|
||||
logger.info(`[Dashboard] Compliance calculation: ${totalCompleted} total completed (APPROVED/REJECTED/CLOSED), ${finalBreachedCount} breached, ${compliantCount} compliant`);
|
||||
logger.info(`[Dashboard] Breached requests (using Requests screen logic): ${finalBreachedCount} completed breached + ${pendingBreachedCount} pending/in-progress breached = ${totalDelayedWorkflows} total delayed`);
|
||||
|
||||
return {
|
||||
avgTATCompliance: compliancePercent,
|
||||
avgCycleTimeHours,
|
||||
avgCycleTimeDays: Math.round((avgCycleTimeHours / 8) * 10) / 10, // 8 working hours per day
|
||||
delayedWorkflows: breachedCount,
|
||||
delayedWorkflows: totalDelayedWorkflows, // Includes both completed and pending/in-progress breached requests
|
||||
totalCompleted,
|
||||
compliantWorkflows: compliantCount,
|
||||
changeFromPrevious: {
|
||||
@ -664,9 +794,10 @@ export class DashboardService {
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Get current pending counts for each approver (separate query for current pending requests)
|
||||
// Get current pending counts and calculate TAT compliance including pending requests that have breached
|
||||
const approverIds = approverMetrics.map((a: any) => a.approver_id);
|
||||
let pendingCounts: any[] = [];
|
||||
let pendingBreachData: any[] = [];
|
||||
|
||||
if (approverIds.length > 0) {
|
||||
// Find all pending/in-progress approval levels and get the first (current) level for each request
|
||||
@ -677,17 +808,28 @@ export class DashboardService {
|
||||
al.request_id,
|
||||
al.approver_id,
|
||||
al.level_id,
|
||||
al.level_number
|
||||
al.level_number,
|
||||
al.level_start_time,
|
||||
al.tat_hours,
|
||||
wf.priority
|
||||
FROM approval_levels al
|
||||
JOIN workflow_requests wf ON al.request_id = wf.request_id
|
||||
WHERE al.status IN ('PENDING', 'IN_PROGRESS')
|
||||
AND wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||
AND wf.is_draft = false
|
||||
AND al.level_start_time IS NOT NULL
|
||||
AND al.tat_hours > 0
|
||||
ORDER BY al.request_id, al.level_number ASC
|
||||
)
|
||||
SELECT
|
||||
approver_id,
|
||||
COUNT(DISTINCT level_id)::int AS pending_count
|
||||
COUNT(DISTINCT level_id)::int AS pending_count,
|
||||
json_agg(json_build_object(
|
||||
'level_id', level_id,
|
||||
'level_start_time', level_start_time,
|
||||
'tat_hours', tat_hours,
|
||||
'priority', priority
|
||||
)) AS pending_levels_data
|
||||
FROM pending_levels
|
||||
WHERE approver_id IN (:approverIds)
|
||||
GROUP BY approver_id
|
||||
@ -695,23 +837,90 @@ export class DashboardService {
|
||||
replacements: { approverIds },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Calculate SLA status for pending levels to determine breaches
|
||||
const { calculateSLAStatus } = await import('@utils/tatTimeUtils');
|
||||
pendingBreachData = await Promise.all(
|
||||
pendingCounts.map(async (pc: any) => {
|
||||
const levels = pc.pending_levels_data || [];
|
||||
let breachedCount = 0;
|
||||
let compliantCount = 0;
|
||||
|
||||
for (const level of levels) {
|
||||
if (level.level_start_time && level.tat_hours > 0) {
|
||||
try {
|
||||
const priority = (level.priority || 'standard').toLowerCase();
|
||||
const calculated = await calculateSLAStatus(
|
||||
level.level_start_time,
|
||||
level.tat_hours,
|
||||
priority,
|
||||
null // No end date for pending requests
|
||||
);
|
||||
|
||||
// Mark as breached if percentageUsed >= 100
|
||||
if (calculated.percentageUsed >= 100) {
|
||||
breachedCount++;
|
||||
} else {
|
||||
compliantCount++;
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`[Dashboard] Error calculating SLA for pending level ${level.level_id}:`, error);
|
||||
// Default to compliant if calculation fails
|
||||
compliantCount++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create a map for quick lookup of pending counts
|
||||
return {
|
||||
approver_id: pc.approver_id,
|
||||
pending_count: pc.pending_count || 0,
|
||||
pending_breached: breachedCount,
|
||||
pending_compliant: compliantCount
|
||||
};
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
// Create maps for quick lookup
|
||||
const pendingCountMap = new Map<string, number>();
|
||||
pendingCounts.forEach((pc: any) => {
|
||||
pendingCountMap.set(pc.approver_id, pc.pending_count || 0);
|
||||
const pendingBreachedMap = new Map<string, number>();
|
||||
const pendingCompliantMap = new Map<string, number>();
|
||||
|
||||
pendingBreachData.forEach((pb: any) => {
|
||||
pendingCountMap.set(pb.approver_id, pb.pending_count || 0);
|
||||
pendingBreachedMap.set(pb.approver_id, pb.pending_breached || 0);
|
||||
pendingCompliantMap.set(pb.approver_id, pb.pending_compliant || 0);
|
||||
});
|
||||
|
||||
return {
|
||||
performance: approverMetrics.map((a: any) => ({
|
||||
performance: approverMetrics.map((a: any) => {
|
||||
// Get pending breach data
|
||||
const pendingBreached = pendingBreachedMap.get(a.approver_id) || 0;
|
||||
const pendingCompliant = pendingCompliantMap.get(a.approver_id) || 0;
|
||||
|
||||
// Calculate overall TAT compliance including pending requests
|
||||
// Completed: within_tat_count (compliant) + breached_count (breached)
|
||||
// Pending: pending_compliant (compliant) + pending_breached (breached)
|
||||
const totalCompliant = a.within_tat_count + pendingCompliant;
|
||||
const totalBreached = a.breached_count + pendingBreached;
|
||||
const totalRequests = a.total_approved + pendingBreached + pendingCompliant;
|
||||
|
||||
// Calculate TAT compliance percentage including pending requests
|
||||
// Use Math.floor to ensure consistent rounding (matches detail screen logic)
|
||||
// This prevents 79.5% from rounding differently in different places
|
||||
const tatCompliancePercent = totalRequests > 0
|
||||
? Math.floor((totalCompliant / totalRequests) * 100)
|
||||
: (a.tat_compliance_percent || 0); // Fallback to original if no pending requests
|
||||
|
||||
return {
|
||||
approverId: a.approver_id,
|
||||
approverName: a.approver_name,
|
||||
totalApproved: a.total_approved,
|
||||
tatCompliancePercent: a.tat_compliance_percent,
|
||||
tatCompliancePercent,
|
||||
avgResponseHours: parseFloat(a.avg_response_hours || 0),
|
||||
pendingCount: pendingCountMap.get(a.approver_id) || 0
|
||||
})),
|
||||
};
|
||||
}),
|
||||
currentPage: page,
|
||||
totalPages,
|
||||
totalRecords,
|
||||
@ -870,6 +1079,7 @@ export class DashboardService {
COALESCE(u.department, 'Unknown') AS department,
al.approver_name AS current_approver_name,
al.approver_email AS current_approver_email,
al.approver_id AS current_approver_id,
(
SELECT COUNT(*)::int
FROM tat_alerts ta
@ -952,11 +1162,15 @@ export class DashboardService {
}
}

// Only include if current level has actually breached (TAT >= 100%)
// This filters out false positives where is_breached flag might be set incorrectly
// Check if elapsed hours >= allocated TAT hours to ensure actual breach
// (percentageUsed is capped at 100, so we check elapsed vs allocated directly)
if (currentLevelTatHours > 0 && currentLevelElapsedHours < currentLevelTatHours) {
// Trust the is_breached flag from tat_alerts table - if it's marked as breached, include it
// The tat_alerts.is_breached flag is set by the TAT monitoring system and should be authoritative
// Only filter out if we have a valid TAT calculation AND it's clearly not breached (elapsed < TAT)
// BUT if breach_count > 0 from the database, we trust that over the calculation to avoid timing issues
// This ensures consistency between Dashboard and All Requests screen
const hasBreachFlag = (req.breach_count || 0) > 0;
if (currentLevelTatHours > 0 && currentLevelElapsedHours < currentLevelTatHours && !hasBreachFlag) {
// Only skip if no breach flag in DB AND calculation shows not breached
// If hasBreachFlag is true, trust the database even if calculation hasn't caught up yet
return null; // Skip this request - not actually breached
}

@ -973,8 +1187,24 @@ export class DashboardService {
breachTime = currentLevelElapsedHours - currentLevelTatHours;
}

// Determine breach reason
// Get breach reason from approval_levels table
let breachReason = 'TAT Exceeded';
try {
const levelWithReason = await sequelize.query(`
SELECT al.breach_reason
FROM approval_levels al
WHERE al.request_id = :requestId
AND al.level_number = :currentLevel
LIMIT 1
`, {
replacements: { requestId: req.request_id, currentLevel: req.current_level },
type: QueryTypes.SELECT
});

if (levelWithReason && levelWithReason.length > 0 && (levelWithReason[0] as any).breach_reason) {
breachReason = (levelWithReason[0] as any).breach_reason;
} else {
// Fallback to default reason
if (req.breach_count > 0) {
if (priority === 'express') {
breachReason = 'Express Priority - TAT Exceeded';
@ -984,6 +1214,20 @@ export class DashboardService {
} else if (req.priority === 'EXPRESS') {
breachReason = 'Express Priority - High Risk';
}
}
} catch (error) {
logger.warn('[Dashboard] Error fetching breach reason from approval_levels, using default');
// Use default reason on error
if (req.breach_count > 0) {
if (priority === 'express') {
breachReason = 'Express Priority - TAT Exceeded';
} else {
breachReason = 'Standard TAT Breach';
}
} else if (req.priority === 'EXPRESS') {
breachReason = 'Express Priority - High Risk';
}
}
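The breach-reason logic above prefers whatever was recorded on the matching approval_levels row and only then falls back to a priority-based default; the same fallback is repeated in the catch branch. A hedged sketch of that decision order as a pure helper (the function and parameter names are illustrative, not code from the service):

```typescript
// Hypothetical helper mirroring the fallback order used in the hunk above:
// 1) stored approval_levels.breach_reason,
// 2) priority-based default when the request has recorded breaches,
// 3) "Express Priority - High Risk" for express requests without a recorded breach,
// 4) generic "TAT Exceeded" otherwise.
function resolveBreachReason(
  storedReason: string | null,
  breachCount: number,
  priority: 'express' | 'standard'
): string {
  if (storedReason && storedReason.trim() !== '') return storedReason;
  if (breachCount > 0) {
    return priority === 'express'
      ? 'Express Priority - TAT Exceeded'
      : 'Standard TAT Breach';
  }
  if (priority === 'express') return 'Express Priority - High Risk';
  return 'TAT Exceeded';
}
```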
return {
requestId: req.request_id,
@ -1000,6 +1244,8 @@ export class DashboardService {
isCritical: true, // Only true breaches reach here
department: req.department || 'Unknown',
approver: req.current_approver_name || req.current_approver_email || 'N/A',
approverId: req.current_approver_id || null,
approverEmail: req.current_approver_email || null,
breachTime: breachTime,
breachReason: breachReason
};
@ -1008,15 +1254,25 @@ export class DashboardService {
// Filter out null values (requests that didn't actually breach)
const filteredCritical = criticalWithSLA.filter(req => req !== null);

// Recalculate total records after filtering
const actualTotalRecords = filteredCritical.length;
const actualTotalPages = Math.ceil(actualTotalRecords / limit);
// Since we now trust breach_count from database (if > 0, we include it regardless of calculation),
// we should filter very few (if any) requests. The original database count should be accurate.
// Only adjust totalRecords if we filtered out requests from current page (for edge cases)
// In practice, with the new logic trusting breach_count, filtering should be minimal to none
let adjustedTotalRecords = totalRecords;
const filteredOutFromPage = criticalRequests.length - filteredCritical.length;
if (filteredOutFromPage > 0) {
// If we filtered out items from current page, estimate adjustment across all pages
// This is an approximation since we can't recalculate without fetching all pages
const filterRatio = filteredCritical.length / Math.max(1, criticalRequests.length);
adjustedTotalRecords = Math.max(filteredCritical.length, Math.round(totalRecords * filterRatio));
}
const adjustedTotalPages = Math.ceil(adjustedTotalRecords / limit);
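The adjustment above scales the database count by the current page's keep ratio when post-query filtering drops a few rows, instead of re-fetching every page. A small sketch of that estimate with invented numbers (the helper name and values below are illustrative):

```typescript
// Sketch of the ratio-based estimate from the hunk above.
// When a few rows on the current page are dropped after the query, the overall total
// is scaled by the page's keep ratio rather than recounting across all pages.
function estimateAdjustedTotal(
  dbTotalRecords: number,   // COUNT(*) reported by the database
  pageFetched: number,      // rows fetched for the current page
  pageKept: number          // rows that survived the post-query filter
): number {
  if (pageFetched - pageKept <= 0) return dbTotalRecords; // nothing was filtered out
  const keepRatio = pageKept / Math.max(1, pageFetched);
  return Math.max(pageKept, Math.round(dbTotalRecords * keepRatio));
}

// Example: DB reports 40 rows, the page fetched 20 but kept 18 => estimate 36
console.log(estimateAdjustedTotal(40, 20, 18)); // 36
```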
return {
criticalRequests: filteredCritical,
currentPage: page,
totalPages: actualTotalPages,
totalRecords: actualTotalRecords,
totalPages: adjustedTotalPages,
totalRecords: adjustedTotalRecords,
limit
};
}
@ -1180,6 +1436,55 @@ export class DashboardService {
}));
}

/**
* Get list of unique departments from users (metadata for filtering)
* Returns all departments that have at least one user, ordered alphabetically
*/
async getDepartments(userId: string): Promise<string[]> {
// Check if user is admin or management (has broader access)
const user = await User.findByPk(userId);
const isAdmin = user?.hasManagementAccess() || false;

// For regular users: only departments from their requests
// For admin/management: all departments in the system
let whereClause = '';
if (!isAdmin) {
// Get departments from requests initiated by this user
whereClause = `
WHERE u.department IS NOT NULL
AND u.department != ''
AND EXISTS (
SELECT 1 FROM workflow_requests wf
WHERE wf.initiator_id = u.user_id
)
`;
} else {
// Admin/Management: get all departments that have at least one user
whereClause = `
WHERE u.department IS NOT NULL
AND u.department != ''
`;
}

const departments = await sequelize.query(`
SELECT DISTINCT u.department
FROM users u
${whereClause}
ORDER BY u.department ASC
`, {
replacements: !isAdmin ? { userId } : {},
type: QueryTypes.SELECT
});

// Extract department names and filter out null/empty values
const deptList = (departments as any[])
.map((d: any) => d.department)
.filter((dept: string | null) => dept && dept.trim() !== '');

return [...new Set(deptList)]; // Remove duplicates and return
}

/**
* Get priority distribution statistics
*/
@ -1308,10 +1613,10 @@ export class DashboardService {
}
}

// Calculate averages per priority
// Calculate averages per priority (rounded to 2 decimal places for accuracy)
return Array.from(priorityMap.entries()).map(([priority, stats]) => {
const avgCycleTimeHours = stats.cycleTimes.length > 0
? Math.round((stats.cycleTimes.reduce((sum, hours) => sum + hours, 0) / stats.cycleTimes.length) * 10) / 10
? Math.round((stats.cycleTimes.reduce((sum, hours) => sum + hours, 0) / stats.cycleTimes.length) * 100) / 100
: 0;

return {
@ -1328,12 +1633,27 @@ export class DashboardService {
/**
* Get Request Lifecycle Report with full timeline and TAT compliance
*/
async getLifecycleReport(userId: string, page: number = 1, limit: number = 50) {
async getLifecycleReport(userId: string, page: number = 1, limit: number = 50, dateRange?: string, startDate?: string, endDate?: string) {
const user = await User.findByPk(userId);
const isAdmin = user?.hasManagementAccess() || false;

const offset = (page - 1) * limit;

// Parse date range if provided
let dateFilter = '';
const replacements: any = { userId, limit, offset };

if (dateRange) {
const dateFilterObj = this.parseDateRange(dateRange, startDate, endDate);
dateFilter = `
AND wf.submission_date IS NOT NULL
AND wf.submission_date >= :dateStart
AND wf.submission_date <= :dateEnd
`;
replacements.dateStart = dateFilterObj.start;
replacements.dateEnd = dateFilterObj.end;
}
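parseDateRange is called here (and in the other report methods below) with the new optional startDate/endDate arguments, but its body is not part of this diff. Purely as an assumption about its shape, a helper like the following could resolve either a named range or a custom window; it is not the project's actual implementation:

```typescript
import dayjs from 'dayjs';

// Hypothetical stand-in for DashboardService.parseDateRange - the real method is not
// shown in this diff. It only illustrates how a named range ('7d', '30d', '90d') or an
// explicit custom window might resolve to { start, end } Date bounds.
function parseDateRangeSketch(
  dateRange?: string,
  startDate?: string,
  endDate?: string
): { start: Date; end: Date } {
  if (dateRange === 'custom' && startDate && endDate) {
    return {
      start: dayjs(startDate).startOf('day').toDate(),
      end: dayjs(endDate).endOf('day').toDate()
    };
  }
  const days = dateRange === '7d' ? 7 : dateRange === '90d' ? 90 : 30; // default: last 30 days
  return {
    start: dayjs().subtract(days, 'day').startOf('day').toDate(),
    end: dayjs().endOf('day').toDate()
  };
}
```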
// For regular users: only their initiated requests or where they're participants
let whereClause = isAdmin ? '' : `
AND (
@ -1351,9 +1671,10 @@ export class DashboardService {
SELECT COUNT(*) as total
FROM workflow_requests wf
WHERE wf.is_draft = false
${dateFilter}
${whereClause}
`, {
replacements: { userId },
replacements,
type: QueryTypes.SELECT
});

@ -1390,11 +1711,12 @@ export class DashboardService {
LEFT JOIN approval_levels al ON al.request_id = wf.request_id
AND al.level_number = wf.current_level
WHERE wf.is_draft = false
${dateFilter}
${whereClause}
ORDER BY wf.updated_at DESC
LIMIT :limit OFFSET :offset
`, {
replacements: { userId, limit, offset },
replacements,
type: QueryTypes.SELECT
});

@ -1456,12 +1778,14 @@ export class DashboardService {
filterUserId?: string,
filterType?: string,
filterCategory?: string,
filterSeverity?: string
filterSeverity?: string,
startDate?: string,
endDate?: string
) {
const user = await User.findByPk(userId);
const isAdmin = user?.hasManagementAccess() || false;

const range = this.parseDateRange(dateRange);
const range = this.parseDateRange(dateRange, startDate, endDate);
const offset = (page - 1) * limit;

// For admins: no restrictions - can see ALL activities from ALL users (including login activities)
@ -1578,19 +1902,21 @@ export class DashboardService {

/**
* Get Workflow Aging Report with business days calculation
* Uses optimized server-side pagination with business days calculation
*/
async getWorkflowAgingReport(
userId: string,
threshold: number = 7,
page: number = 1,
limit: number = 50,
dateRange?: string
dateRange?: string,
startDate?: string,
endDate?: string
) {
const user = await User.findByPk(userId);
const isAdmin = user?.hasManagementAccess() || false;

const range = this.parseDateRange(dateRange);
const offset = (page - 1) * limit;
const range = this.parseDateRange(dateRange, startDate, endDate);

// For regular users: only their initiated requests or where they're participants
let whereClause = isAdmin ? '' : `
@ -1604,7 +1930,8 @@ export class DashboardService {
)
`;

// Get all active requests (not closed)
// Step 1: Get ALL active requests that might match (for accurate business days calculation)
// We need to calculate business days for all to filter correctly, but we'll optimize the calculation
const allRequests = await sequelize.query(`
SELECT
wf.request_id,
@ -1628,16 +1955,23 @@ export class DashboardService {
AND wf.submission_date IS NOT NULL
AND wf.submission_date BETWEEN :start AND :end
${whereClause}
ORDER BY wf.submission_date ASC
`, {
replacements: { userId, start: range.start, end: range.end },
type: QueryTypes.SELECT
});

// Calculate business days for each request and filter by threshold
// Step 2: Calculate business days for all requests and filter by threshold
// This is necessary for accuracy since business days depend on holidays and working hours config
const { calculateBusinessDays } = await import('@utils/tatTimeUtils');
const agingData = [];
const agingData: any[] = [];

for (const req of allRequests) {
// Process requests in parallel batches for better performance
const BATCH_SIZE = 50;
for (let i = 0; i < allRequests.length; i += BATCH_SIZE) {
const batch = allRequests.slice(i, i + BATCH_SIZE);
const batchResults = await Promise.all(
batch.map(async (req: any) => {
const priority = ((req as any).priority || 'STANDARD').toLowerCase();
const businessDays = await calculateBusinessDays(
(req as any).submission_date,
@ -1646,7 +1980,7 @@ export class DashboardService {
);

if (businessDays > threshold) {
agingData.push({
return {
requestId: (req as any).request_id,
requestNumber: (req as any).request_number,
title: (req as any).title,
@ -1660,15 +1994,23 @@ export class DashboardService {
totalLevels: (req as any).total_levels,
currentStageName: (req as any).current_stage_name || `Level ${(req as any).current_level}`,
currentApproverName: (req as any).current_approver_name
});
};
}
return null;
})
);

// Filter out null results and add to agingData
agingData.push(...batchResults.filter((r: any) => r !== null));
}
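The hunk above replaces the sequential per-request loop with batches of 50 processed via Promise.all, so the async business-day calculations overlap without an unbounded fan-out. The same pattern in generic form (the helper below is illustrative, not code from the repository):

```typescript
// Generic form of the batching pattern used above: process items in fixed-size batches
// so at most batchSize async calculations run concurrently, instead of either fully
// sequential processing or a single Promise.all over every row. Workers may return
// null to drop an item (mirroring the threshold filter in the aging report).
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  worker: (item: T) => Promise<R | null>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const batchResults = await Promise.all(batch.map(worker));
    results.push(...batchResults.filter((r): r is R => r !== null));
  }
  return results;
}
```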
// Sort by days open (descending) and paginate
// Step 3: Sort by days open (descending)
agingData.sort((a, b) => b.daysOpen - a.daysOpen);

// Step 4: Apply server-side pagination
const totalRecords = agingData.length;
const totalPages = Math.ceil(totalRecords / limit);
const offset = (page - 1) * limit;
const paginatedData = agingData.slice(offset, offset + limit);

return {
@ -1679,6 +2021,282 @@ export class DashboardService {
limit
};
}

/**
* Get requests filtered by approver ID with detailed filtering support
*/
async getRequestsByApprover(
userId: string,
approverId: string,
page: number = 1,
limit: number = 50,
dateRange?: string,
startDate?: string,
endDate?: string,
status?: string,
priority?: string,
slaCompliance?: string,
search?: string
) {
const user = await User.findByPk(userId);
const isAdmin = user?.hasManagementAccess() || false;

// Only admins can view other approvers' performance
if (!isAdmin) {
return {
requests: [],
currentPage: page,
totalPages: 0,
totalRecords: 0,
limit
};
}

const offset = (page - 1) * limit;

// Parse date range if provided
let dateFilter = '';
const replacements: any = { approverId, limit, offset };

if (dateRange) {
const dateFilterObj = this.parseDateRange(dateRange, startDate, endDate);
// Filter by submission_date OR approval action_date to include requests approved in date range
// This ensures we see requests where the approver acted during the date range, even if submitted earlier
dateFilter = `
AND (
(wf.submission_date IS NOT NULL AND wf.submission_date >= :dateStart AND wf.submission_date <= :dateEnd)
OR (al.action_date IS NOT NULL AND al.action_date >= :dateStart AND al.action_date <= :dateEnd)
)
`;
replacements.dateStart = dateFilterObj.start;
replacements.dateEnd = dateFilterObj.end;
}

// Status filter
let statusFilter = '';
if (status && status !== 'all') {
if (status === 'pending') {
statusFilter = `AND wf.status IN ('PENDING', 'IN_PROGRESS')`;
} else {
statusFilter = `AND wf.status = :statusFilter`;
replacements.statusFilter = status.toUpperCase();
}
}

// Priority filter
let priorityFilter = '';
if (priority && priority !== 'all') {
priorityFilter = `AND wf.priority = :priorityFilter`;
replacements.priorityFilter = priority.toUpperCase();
}

// Search filter
let searchFilter = '';
if (search && search.trim()) {
searchFilter = `
AND (
wf.request_number ILIKE :searchTerm
OR wf.title ILIKE :searchTerm
OR u.display_name ILIKE :searchTerm
OR u.email ILIKE :searchTerm
)
`;
replacements.searchTerm = `%${search.trim()}%`;
}

// SLA Compliance filter - get requests where this approver was involved
let slaFilter = '';
if (slaCompliance && slaCompliance !== 'all') {
if (slaCompliance === 'breached') {
slaFilter = `AND EXISTS (
SELECT 1 FROM tat_alerts ta
INNER JOIN approval_levels al ON ta.level_id = al.level_id
WHERE ta.request_id = wf.request_id
AND al.approver_id = :approverId
AND ta.is_breached = true
)`;
} else if (slaCompliance === 'compliant') {
// Compliant: completed requests that are not breached
slaFilter = `AND wf.status IN ('APPROVED', 'REJECTED', 'CLOSED')
AND NOT EXISTS (
SELECT 1 FROM tat_alerts ta
INNER JOIN approval_levels al ON ta.level_id = al.level_id
WHERE ta.request_id = wf.request_id
AND al.approver_id = :approverId
AND ta.is_breached = true
)`;
} else {
// on_track, approaching, critical - these will be calculated client-side
// For now, skip this filter as SLA status is calculated dynamically
// The client-side filter will handle these cases
}
}

// Get all requests where this approver has been involved (as approver in any approval level)
// Include ALL requests where approver is assigned, regardless of approval status (pending, approved, rejected)
// For count, we need to use the same date filter logic
const countResult = await sequelize.query(`
SELECT COUNT(DISTINCT wf.request_id) as total
FROM workflow_requests wf
INNER JOIN approval_levels al ON wf.request_id = al.request_id
WHERE al.approver_id = :approverId
AND wf.is_draft = false
${dateFilter}
${statusFilter}
${priorityFilter}
${slaFilter}
${searchFilter}
`, {
replacements,
type: QueryTypes.SELECT
});

const totalRecords = Number((countResult[0] as any).total);
const totalPages = Math.ceil(totalRecords / limit);
// Get requests with approver's level information - use DISTINCT ON for PostgreSQL
// Priority: Show approved/rejected levels first, then pending/in-progress
// This ensures we see the approver's actual actions, not just pending assignments
const requests = await sequelize.query(`
SELECT DISTINCT ON (wf.request_id)
wf.request_id,
wf.request_number,
wf.title,
wf.priority,
wf.status,
wf.submission_date,
wf.closure_date,
wf.current_level,
wf.total_levels,
wf.total_tat_hours,
wf.created_at,
wf.updated_at,
u.display_name AS initiator_name,
u.email AS initiator_email,
u.department AS initiator_department,
al.level_id,
al.level_number,
al.status AS approval_status,
al.action_date AS approval_action_date,
al.level_start_time,
al.tat_hours AS level_tat_hours,
al.elapsed_hours AS level_elapsed_hours,
(
SELECT COUNT(*)
FROM tat_alerts ta
WHERE ta.request_id = wf.request_id
AND ta.level_id = al.level_id
AND ta.is_breached = true
) AS is_breached
FROM workflow_requests wf
INNER JOIN approval_levels al ON wf.request_id = al.request_id
LEFT JOIN users u ON wf.initiator_id = u.user_id
WHERE al.approver_id = :approverId
AND wf.is_draft = false
${dateFilter}
${statusFilter}
${priorityFilter}
${slaFilter}
${searchFilter}
ORDER BY
wf.request_id,
CASE
WHEN al.status = 'APPROVED' THEN 1
WHEN al.status = 'REJECTED' THEN 2
WHEN al.status = 'IN_PROGRESS' THEN 3
WHEN al.status = 'PENDING' THEN 4
ELSE 5
END ASC,
al.level_number ASC
LIMIT :limit OFFSET :offset
`, {
replacements,
type: QueryTypes.SELECT
});

// Calculate SLA status for each request/level combination
// This ensures we detect breaches for ALL requests (pending, approved, rejected)
const { calculateSLAStatus } = await import('@utils/tatTimeUtils');
const processedRequests = await Promise.all(
requests.map(async (req: any) => {
let slaStatus = 'on_track';
let isBreached = false;

// Calculate SLA status for ALL levels (pending, in-progress, approved, rejected)
// This ensures we catch breaches even for pending requests
if (req.level_tat_hours && req.level_start_time) {
try {
const priority = (req.priority || 'standard').toLowerCase();
// For completed levels, use action/closure date; for pending, use current time
const levelEndDate = req.approval_action_date || req.closure_date || null;
const calculated = await calculateSLAStatus(
req.level_start_time,
req.level_tat_hours,
priority,
levelEndDate
);
slaStatus = calculated.status;

// Mark as breached if percentageUsed >= 100 (same logic as Requests screen)
// This catches pending requests that have already breached
if (calculated.percentageUsed >= 100) {
isBreached = true;
} else if (req.is_breached && req.is_breached > 0) {
// Also check tat_alerts table for historical breaches
isBreached = true;
}
} catch (error) {
logger.error(`[Dashboard] Error calculating SLA status for request ${req.request_id}:`, error);
// If calculation fails, check tat_alerts table
if (req.is_breached && req.is_breached > 0) {
isBreached = true;
slaStatus = 'breached';
} else {
slaStatus = 'on_track';
}
}
} else if (req.is_breached && req.is_breached > 0) {
// Fallback: if no TAT data but tat_alerts shows breach
isBreached = true;
slaStatus = 'breached';
}

return {
requestId: req.request_id,
requestNumber: req.request_number,
title: req.title,
priority: (req.priority || 'STANDARD').toLowerCase(),
status: (req.status || 'PENDING').toLowerCase(),
initiatorName: req.initiator_name || req.initiator_email || 'Unknown',
initiatorEmail: req.initiator_email,
initiatorDepartment: req.initiator_department,
submissionDate: req.submission_date,
closureDate: req.closure_date,
createdAt: req.created_at,
updatedAt: req.updated_at,
currentLevel: req.current_level,
totalLevels: req.total_levels,
levelId: req.level_id,
levelNumber: req.level_number,
approvalStatus: (req.approval_status || 'PENDING').toLowerCase(),
approvalActionDate: req.approval_action_date,
slaStatus,
levelTatHours: parseFloat(req.level_tat_hours || 0),
levelElapsedHours: parseFloat(req.level_elapsed_hours || 0),
isBreached: isBreached, // Use calculated breach status (includes pending requests that breached)
totalTatHours: parseFloat(req.total_tat_hours || 0)
};
})
);
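The mapping above decides isBreached from the live calculateSLAStatus result first (percentageUsed >= 100) and then from the tat_alerts count, with the alert count alone used when there is no TAT data or the calculation throws. The same decision, condensed into a hedged sketch (the types and function name are assumptions for illustration):

```typescript
// Condensed decision logic from the mapping above.
// `calculated` stands in for the calculateSLAStatus result; `tatAlertBreaches` is the
// COUNT(*) of breached tat_alerts rows selected as "is_breached" in the query.
function resolveLevelBreach(
  calculated: { status: string; percentageUsed: number } | null,
  tatAlertBreaches: number
): { slaStatus: string; isBreached: boolean } {
  if (calculated) {
    const isBreached = calculated.percentageUsed >= 100 || tatAlertBreaches > 0;
    return { slaStatus: calculated.status, isBreached };
  }
  // No TAT data (or the calculation failed): fall back to the tat_alerts history.
  return tatAlertBreaches > 0
    ? { slaStatus: 'breached', isBreached: true }
    : { slaStatus: 'on_track', isBreached: false };
}
```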
return {
requests: processedRequests,
currentPage: page,
totalPages,
totalRecords,
limit
};
}
}

export const dashboardService = new DashboardService();
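The controller and route wiring for the new methods is not part of this diff. As a rough usage sketch only, an Express handler could forward query parameters to getRequestsByApprover like this; the route path, import path, and query parameter names are assumptions, not the project's actual API:

```typescript
import { Router, Request, Response } from 'express';
import { dashboardService } from '../services/dashboard.service'; // assumed path

// Hypothetical route wiring for the new approver drill-down endpoint.
const router = Router();

router.get('/dashboard/approvers/:approverId/requests', async (req: Request, res: Response) => {
  const userId = req.user?.userId as string; // populated by the auth middleware per express.d.ts
  const { approverId } = req.params;
  const page = Number(req.query.page ?? 1);
  const limit = Number(req.query.limit ?? 50);

  const result = await dashboardService.getRequestsByApprover(
    userId,
    approverId,
    page,
    limit,
    req.query.dateRange as string | undefined,
    req.query.startDate as string | undefined,
    req.query.endDate as string | undefined,
    req.query.status as string | undefined,
    req.query.priority as string | undefined,
    req.query.slaCompliance as string | undefined,
    req.query.search as string | undefined
  );

  res.json({ success: true, data: result });
});

export default router;
```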
@ -118,8 +118,6 @@ export class UserService {
const oktaApiToken = process.env.OKTA_API_TOKEN;

if (!oktaDomain || !oktaApiToken) {
console.error('❌ Okta credentials not configured');
// Fallback to local DB search
return await this.searchUsersLocal(q, limit, excludeUserId);
}

@ -161,8 +159,6 @@ export class UserService {
isActive: true
}));
} catch (error: any) {
console.error('❌ Okta user search failed:', error.message);
// Fallback to local DB search
return await this.searchUsersLocal(q, limit, excludeUserId);
}
}

5 src/types/express.d.ts vendored
@ -1,6 +1,5 @@
import { JwtPayload } from 'jsonwebtoken';

export type UserRole = 'USER' | 'MANAGEMENT' | 'ADMIN';
import { UserRole } from './user.types';

declare global {
namespace Express {
@ -8,7 +7,7 @@ declare global {
user?: {
userId: string;
email: string;
employeeId?: string | null; // Optional - schema not finalized
employeeId?: string | null;
role?: UserRole;
};
cookies?: {

@ -38,19 +38,13 @@ async function loadWorkingHoursCache(): Promise<void> {
endDay: endDay
};
workingHoursCacheExpiry = dayjs().add(5, 'minute').toDate();

console.log(`[TAT Utils] ✅ Working hours loaded from admin config: ${hours.startHour}:00 - ${hours.endHour}:00 (Days: ${startDay}-${endDay})`);

} catch (error) {
console.error('[TAT] Error loading working hours:', error);
// Fallback to default values from TAT_CONFIG
workingHoursCache = {
startHour: TAT_CONFIG.WORK_START_HOUR,
endHour: TAT_CONFIG.WORK_END_HOUR,
startDay: TAT_CONFIG.WORK_START_DAY,
endDay: TAT_CONFIG.WORK_END_DAY
};
console.log(`[TAT Utils] ⚠️ Using fallback working hours from system config: ${TAT_CONFIG.WORK_START_HOUR}:00 - ${TAT_CONFIG.WORK_END_HOUR}:00`);
}
}

@ -174,7 +168,6 @@ export async function addWorkingHours(start: Date | string, hoursToAdd: number):
// If start time was outside working hours, reset to clean work start time (no minutes)
if (wasOutsideWorkingHours) {
current = current.minute(0).second(0).millisecond(0);
console.log(`[TAT Utils] Start time ${originalStart} was outside working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
}

// Split into whole hours and fractional part
@ -244,13 +237,9 @@ export async function addWorkingHoursExpress(start: Date | string, hoursToAdd: n
const originalStart = current.format('YYYY-MM-DD HH:mm:ss');
const currentHour = current.hour();
if (currentHour < config.startHour) {
// Before working hours - reset to clean work start
current = current.hour(config.startHour).minute(0).second(0).millisecond(0);
console.log(`[TAT Utils Express] Start time ${originalStart} was before working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
} else if (currentHour >= config.endHour) {
// After working hours - reset to clean start of next day
current = current.add(1, 'day').hour(config.startHour).minute(0).second(0).millisecond(0);
console.log(`[TAT Utils Express] Start time ${originalStart} was after working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
}

// Split into whole hours and fractional part
@ -381,7 +370,6 @@ export async function initializeHolidaysCache(): Promise<void> {
export async function clearWorkingHoursCache(): Promise<void> {
workingHoursCache = null;
workingHoursCacheExpiry = null;
console.log('[TAT Utils] Working hours cache cleared - reloading from database...');

// Immediately reload the cache with new values
await loadWorkingHoursCache();
@ -607,14 +595,7 @@ export async function calculateElapsedWorkingHours(
}
}

// Log if we advanced the start time for elapsed calculation
if (start.format('YYYY-MM-DD HH:mm:ss') !== originalStart) {
console.log(`[TAT Utils] Elapsed time calculation: Start ${originalStart} was outside working hours, advanced to ${start.format('YYYY-MM-DD HH:mm:ss')}`);
}

// If end time is before adjusted start time, return 0 (TAT hasn't started yet)
if (end.isBefore(start)) {
console.log(`[TAT Utils] Current time is before TAT start time - elapsed hours: 0`);
return 0;
}

@ -682,18 +663,6 @@ export async function calculateElapsedWorkingHours(

const hours = totalWorkingMinutes / 60;

// Warning log for unusually high values
if (hours > 16) { // More than 2 working days
console.warn('[TAT] High elapsed hours detected:', {
startDate: start.format('YYYY-MM-DD HH:mm'),
endDate: end.format('YYYY-MM-DD HH:mm'),
priority,
elapsedHours: hours,
workingHoursConfig: config,
calendarHours: end.diff(start, 'hour')
});
}

return hours;
}