Compare commits: main...dev_branch

4 commits: 826c0eedea, c7c9b62358, c76b799cf7, 1aa7fb9056
@@ -1,45 +1,103 @@
# Redis Setup for Windows

## ⚠️ IMPORTANT: Redis Version Requirements

**BullMQ requires Redis version 5.0.0 or higher.**

❌ **DO NOT USE**: Microsoft Archive Redis (https://github.com/microsoftarchive/redis/releases)

- This is **outdated** and only provides Redis 3.x
- **Version 3.0.504 is NOT compatible** with BullMQ
- You will get the error: `Redis version needs to be greater or equal than 5.0.0`

✅ **USE ONE OF THESE METHODS INSTEAD**:

---
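This requirement is easy to enforce at startup before any BullMQ workers are created. A minimal TypeScript sketch of the gate (the function name is ours; the worker code in this changeset does an equivalent major-version check):

```typescript
// Returns true when a Redis version string (e.g. "3.0.504" or "7.2.4")
// meets BullMQ's minimum requirement of Redis 5.0.0.
function isRedisVersionSupported(version: string, minMajor: number = 5): boolean {
  const major = Number(version.split('.')[0]);
  return Number.isFinite(major) && major >= minMajor;
}

// Example: the Microsoft Archive build fails the check.
console.log(isRedisVersionSupported('3.0.504')); // false
console.log(isRedisVersionSupported('7.2.4'));   // true
```

Failing fast on this check gives a clearer error than letting BullMQ reject the connection later.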
## Method 1: Using Memurai (Recommended for Windows) ⭐

Memurai is a **Redis-compatible** server built specifically for Windows with full Redis 6.x+ compatibility.

### Why Memurai?

- ✅ **Native Windows support** - Runs as a Windows service
- ✅ **Redis 6.x+ compatible** - Full feature support
- ✅ **Easy installation** - Just install and run
- ✅ **Free for development** - Free Developer Edition available
- ✅ **Production-ready** - Used in enterprise environments

### Installation Steps:

1. **Download Memurai**:
   - Visit: https://www.memurai.com/get-memurai
   - Download the **Developer Edition** (free)

2. **Install**:
   - Run the installer (`Memurai-*.exe`)
   - Choose the default options
   - Memurai will install as a Windows service and start automatically

3. **Verify Installation**:
   ```powershell
   # Check that the service is running
   Get-Service Memurai
   # Should show: Running

   # Test the connection
   memurai-cli ping
   # Should return: PONG

   # Check the version (should be 6.x or 7.x)
   memurai-cli --version
   ```

4. **Configuration**:
   - Default port: **6379**
   - Connection string: `redis://localhost:6379`
   - Service runs automatically on Windows startup
   - No additional configuration needed for development
## Method 2: Using Docker Desktop (Alternative) 🐳

If you have Docker Desktop installed, this is the easiest way to get Redis 7.x.

### Installation Steps:

1. **Install Docker Desktop** (if not already installed):
   - Download from: https://www.docker.com/products/docker-desktop
   - Install and start Docker Desktop

2. **Start Redis Container**:
   ```powershell
   # Run Redis 7.x in a container
   docker run -d --name redis-tat -p 6379:6379 redis:7-alpine

   # Or, if you want it to restart automatically:
   docker run -d --name redis-tat -p 6379:6379 --restart unless-stopped redis:7-alpine
   ```

3. **Verify**:
   ```powershell
   # Check that the container is running
   docker ps | Select-String redis

   # Check the Redis version
   docker exec redis-tat redis-server --version
   # Should show: Redis server v=7.x.x

   # Test the connection
   docker exec redis-tat redis-cli ping
   # Should return: PONG
   ```

4. **Stop/Start Redis**:
   ```powershell
   # Stop Redis
   docker stop redis-tat

   # Start Redis
   docker start redis-tat

   # Remove the container (if needed)
   docker rm -f redis-tat
   ```

## Method 3: Using WSL2 (Windows Subsystem for Linux)
@@ -76,38 +134,191 @@ Test-NetConnection -ComputerName localhost -Port 6379

## Troubleshooting

### ❌ Error: "Redis version needs to be greater or equal than 5.0.0 Current: 3.0.504"

**Problem**: You're using Microsoft Archive Redis (version 3.x), which is **too old** for BullMQ.

**Solution**:

1. **Stop the old Redis**:
   ```powershell
   # Find and stop the old Redis process
   Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
   ```

2. **Uninstall/Remove the old Redis** (if installed as a service):
   ```powershell
   # Check whether it is running as a service
   Get-Service | Where-Object {$_.Name -like "*redis*"}
   ```

3. **Install one of the recommended methods**:
   - **Option A**: Install Memurai (Recommended) - See Method 1 above
   - **Option B**: Use Docker - See Method 2 above
   - **Option C**: Use WSL2 - See Method 3 above

4. **Verify the new Redis version**:
   ```powershell
   # For Memurai
   memurai-cli --version
   # Should show: 6.x or 7.x

   # For Docker
   docker exec redis-tat redis-server --version
   # Should show: Redis server v=7.x.x
   ```

5. **Restart your backend server**:
   ```powershell
   # The TAT worker will now detect the correct Redis version
   npm run dev
   ```

### Port Already in Use
```powershell
# Check what's using port 6379
netstat -ano | findstr :6379

# Kill the process if needed (replace <PID> with the actual process ID)
taskkill /PID <PID> /F

# Or, if the old Redis is still running, stop it:
Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
```

### Service Not Starting (Memurai)
```powershell
# Start the Memurai service
net start Memurai

# Check the service status
Get-Service Memurai

# Check the logs
Get-EventLog -LogName Application -Source Memurai -Newest 10

# Restart the service
Restart-Service Memurai
```

### Docker Container Not Starting
```powershell
# Check that Docker is running
docker ps

# Check the Redis container logs
docker logs redis-tat

# Restart the container
docker restart redis-tat

# Remove and recreate if needed
docker rm -f redis-tat
docker run -d --name redis-tat -p 6379:6379 redis:7-alpine
```

### Cannot Connect to Redis
```powershell
# Test the connection
Test-NetConnection -ComputerName localhost -Port 6379

# For Memurai
memurai-cli ping

# For Docker
docker exec redis-tat redis-cli ping
```

## Configuration

### Environment Variable

Add to your `.env` file:

```env
REDIS_URL=redis://localhost:6379
```

### Default Settings

- **Port**: `6379`
- **Host**: `localhost`
- **Connection String**: `redis://localhost:6379`
- No authentication required for local development
- Default configuration works out of the box

## Verification After Setup

After installing Redis, verify it's working:

```powershell
# 1. Check the Redis version (must be 5.0+)
# For Memurai:
memurai-cli --version

# For Docker:
docker exec redis-tat redis-server --version

# 2. Test the connection
# For Memurai:
memurai-cli ping
# Expected: PONG

# For Docker:
docker exec redis-tat redis-cli ping
# Expected: PONG

# 3. Check that the backend can connect
# Start your backend server and check the logs:
npm run dev

# Look for:
# [TAT Queue] Connected to Redis
# [TAT Worker] Connected to Redis at redis://127.0.0.1:6379
# [TAT Worker] Redis version: 7.x.x (or 6.x.x)
# [TAT Worker] Worker is ready and listening for jobs
```

## Quick Fix: Migrating from Old Redis

If you already installed Microsoft Archive Redis (3.x), follow these steps:

1. **Stop the old Redis**:
   ```powershell
   # Close the PowerShell window running redis-server.exe
   # Or kill the process:
   Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
   ```

2. **Choose a new method** (recommended: Memurai or Docker)

3. **Install and verify** (see the methods above)

4. **Update .env** (if needed):
   ```env
   REDIS_URL=redis://localhost:6379
   ```

5. **Restart the backend**:
   ```powershell
   npm run dev
   ```

## Production Considerations

- ✅ Use Redis authentication in production
- ✅ Configure persistence (RDB/AOF)
- ✅ Set up monitoring and alerts
- ✅ Consider Redis Cluster for high availability
- ✅ Use a managed Redis service (Redis Cloud, AWS ElastiCache, etc.)

---

## Summary: Recommended Setup for Windows

| Method | Ease of Setup | Performance | Recommended For |
|--------|---------------|-------------|-----------------|
| **Memurai** ⭐ | ⭐⭐⭐⭐⭐ Very Easy | ⭐⭐⭐⭐⭐ Excellent | **Most Users** |
| **Docker** | ⭐⭐⭐⭐ Easy | ⭐⭐⭐⭐⭐ Excellent | Docker Users |
| **WSL2** | ⭐⭐⭐ Moderate | ⭐⭐⭐⭐⭐ Excellent | Linux Users |
| ❌ **Microsoft Archive Redis** | ❌ Don't Use | ❌ Too Old | **None - Outdated** |

**⭐ Recommended**: **Memurai** for the easiest Windows-native setup, or **Docker** if you already use Docker Desktop.
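The backend resolves this connection string with a localhost fallback, as the queue and worker setup in this changeset do. A small TypeScript sketch of that resolution (`resolveRedisUrl` is an illustrative name, not an export of the codebase):

```typescript
// Resolve the Redis connection URL the same way the queue/worker code does:
// use REDIS_URL from the environment, falling back to the local default.
function resolveRedisUrl(env: Record<string, string | undefined>): string {
  return env.REDIS_URL || 'redis://localhost:6379';
}

// Passing process.env picks up the value from your .env file when loaded.
console.log(resolveRedisUrl(process.env));
```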
@@ -25,6 +25,7 @@ REFRESH_TOKEN_EXPIRY=7d
OKTA_DOMAIN=https://dev-830839.oktapreview.com
OKTA_CLIENT_ID=0oa2j8slwj5S4bG5k0h8
OKTA_CLIENT_SECRET=your_okta_client_secret_here
OKTA_API_TOKEN=your_okta_api_token_here # For Okta User Management API (user search)

# Session
SESSION_SECRET=your_session_secret_here_min_32_chars
@@ -365,8 +365,8 @@ export const updateConfiguration = async (req: Request, res: Response): Promise<
    // If working hours config was updated, also clear working hours cache
    const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
    if (workingHoursKeys.includes(configKey)) {
      await clearWorkingHoursCache();
      logger.info(`[Admin] Working hours configuration '${configKey}' updated - cache cleared and reloaded`);
    } else {
      logger.info(`[Admin] Configuration '${configKey}' updated and cache cleared`);
    }
@@ -407,8 +407,8 @@ export const resetConfiguration = async (req: Request, res: Response): Promise<v
    // If working hours config was reset, also clear working hours cache
    const workingHoursKeys = ['WORK_START_HOUR', 'WORK_END_HOUR', 'WORK_START_DAY', 'WORK_END_DAY'];
    if (workingHoursKeys.includes(configKey)) {
      await clearWorkingHoursCache();
      logger.info(`[Admin] Working hours configuration '${configKey}' reset to default - cache cleared and reloaded`);
    } else {
      logger.info(`[Admin] Configuration '${configKey}' reset to default and cache cleared`);
    }
264  src/controllers/dashboard.controller.ts  Normal file
@@ -0,0 +1,264 @@
import { Request, Response } from 'express';
import { DashboardService } from '../services/dashboard.service';
import logger from '@utils/logger';

export class DashboardController {
  private dashboardService: DashboardService;

  constructor() {
    this.dashboardService = new DashboardService();
  }

  /**
   * Get all KPI metrics for dashboard
   */
  async getKPIs(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const kpis = await this.dashboardService.getKPIs(userId, dateRange);

      res.json({
        success: true,
        data: kpis
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching KPIs:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch dashboard KPIs'
      });
    }
  }

  /**
   * Get request volume and status statistics
   */
  async getRequestStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const stats = await this.dashboardService.getRequestStats(userId, dateRange);

      res.json({
        success: true,
        data: stats
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching request stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch request statistics'
      });
    }
  }

  /**
   * Get TAT efficiency metrics
   */
  async getTATEfficiency(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const efficiency = await this.dashboardService.getTATEfficiency(userId, dateRange);

      res.json({
        success: true,
        data: efficiency
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching TAT efficiency:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch TAT efficiency metrics'
      });
    }
  }

  /**
   * Get approver load statistics
   */
  async getApproverLoad(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const load = await this.dashboardService.getApproverLoad(userId, dateRange);

      res.json({
        success: true,
        data: load
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching approver load:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch approver load statistics'
      });
    }
  }

  /**
   * Get engagement and quality metrics
   */
  async getEngagementStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const engagement = await this.dashboardService.getEngagementStats(userId, dateRange);

      res.json({
        success: true,
        data: engagement
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching engagement stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch engagement statistics'
      });
    }
  }

  /**
   * Get AI insights and closure metrics
   */
  async getAIInsights(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const insights = await this.dashboardService.getAIInsights(userId, dateRange);

      res.json({
        success: true,
        data: insights
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching AI insights:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch AI insights'
      });
    }
  }

  /**
   * Get recent activity feed
   */
  async getRecentActivity(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const limit = Number(req.query.limit || 10);

      const activities = await this.dashboardService.getRecentActivity(userId, limit);

      res.json({
        success: true,
        data: activities
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching recent activity:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch recent activity'
      });
    }
  }

  /**
   * Get critical/high priority requests
   */
  async getCriticalRequests(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;

      const criticalRequests = await this.dashboardService.getCriticalRequests(userId);

      res.json({
        success: true,
        data: criticalRequests
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching critical requests:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch critical requests'
      });
    }
  }

  /**
   * Get upcoming deadlines
   */
  async getUpcomingDeadlines(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const limit = Number(req.query.limit || 5);

      const deadlines = await this.dashboardService.getUpcomingDeadlines(userId, limit);

      res.json({
        success: true,
        data: deadlines
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching upcoming deadlines:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch upcoming deadlines'
      });
    }
  }

  /**
   * Get department-wise statistics
   */
  async getDepartmentStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const stats = await this.dashboardService.getDepartmentStats(userId, dateRange);

      res.json({
        success: true,
        data: stats
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching department stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch department statistics'
      });
    }
  }

  /**
   * Get priority distribution statistics
   */
  async getPriorityDistribution(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const distribution = await this.dashboardService.getPriorityDistribution(userId, dateRange);

      res.json({
        success: true,
        data: distribution
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching priority distribution:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch priority distribution'
      });
    }
  }
}
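Every handler in the controller above repeats the same try/catch shape around a single service call, returning `{ success: true, data }` on 200 or `{ success: false, error }` on 500. One way that repetition could be factored out, as a sketch only (`withErrorEnvelope` is a hypothetical helper, not part of this PR):

```typescript
type Envelope<T> = { success: true; data: T } | { success: false; error: string };

// Run a service call and wrap its result in the controller's
// { success, data } / { success, error } envelope with an HTTP status.
async function withErrorEnvelope<T>(
  fn: () => Promise<T>,
  errorMessage: string
): Promise<{ status: number; body: Envelope<T> }> {
  try {
    const data = await fn();
    return { status: 200, body: { success: true, data } };
  } catch {
    return { status: 500, body: { success: false, error: errorMessage } };
  }
}
```

With such a helper, each controller method would reduce to one service call plus `res.status(r.status).json(r.body)`; the explicit per-method try/catch in the PR trades that brevity for per-endpoint log messages.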
@@ -16,8 +16,6 @@ export class UserController {
      const limit = Number(req.query.limit || 10);
      const currentUserId = (req as any).user?.userId || (req as any).user?.id;

      const users = await this.userService.searchUsers(q, limit, currentUserId);

      const result = users.map(u => ({
@@ -37,6 +35,44 @@ export class UserController {
      ResponseHandler.error(res, 'User search failed', 500);
    }
  }

  /**
   * Ensure user exists in database (create if not exists)
   * Called when user is selected/tagged in the frontend
   */
  async ensureUserExists(req: Request, res: Response): Promise<void> {
    try {
      const { userId, email, displayName, firstName, lastName, department, phone } = req.body;

      if (!userId || !email) {
        ResponseHandler.error(res, 'userId and email are required', 400);
        return;
      }

      const user = await this.userService.ensureUserExists({
        userId,
        email,
        displayName,
        firstName,
        lastName,
        department,
        phone
      });

      ResponseHandler.success(res, {
        userId: user.userId,
        email: user.email,
        displayName: user.displayName,
        firstName: user.firstName,
        lastName: user.lastName,
        department: user.department,
        isActive: user.isActive
      }, 'User ensured in database');
    } catch (error: any) {
      logger.error('Ensure user failed', { error });
      ResponseHandler.error(res, error.message || 'Failed to ensure user', 500);
    }
  }
}
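The guard in `ensureUserExists` accepts any payload carrying both `userId` and `email`; the remaining fields are optional. Expressed as a standalone predicate for illustration (`hasRequiredUserFields` is our name, not an export of the codebase):

```typescript
interface EnsureUserPayload {
  userId?: string;
  email?: string;
  displayName?: string;
  firstName?: string;
  lastName?: string;
  department?: string;
  phone?: string;
}

// Mirror the controller's check: userId and email are mandatory;
// everything else (displayName, names, department, phone) is optional.
function hasRequiredUserFields(body: EnsureUserPayload): boolean {
  return Boolean(body.userId && body.email);
}
```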
@@ -4,22 +4,41 @@ import logger from '@utils/logger';

// Create Redis connection
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
const redisPassword = process.env.REDIS_PASSWORD || undefined;

let connection: IORedis | null = null;
let tatQueue: Queue | null = null;

try {
  // Parse Redis URL and add password if provided
  const redisOptions: any = {
    maxRetriesPerRequest: null, // Required for BullMQ
    enableReadyCheck: false,
    lazyConnect: true, // Don't connect immediately
    retryStrategy: (times: number) => {
      if (times > 5) {
        logger.warn('[TAT Queue] Redis connection failed after 5 attempts. TAT notifications will be disabled.');
        return null; // Stop retrying
      }
      return Math.min(times * 2000, 10000); // Increase retry delay
    },
    // Increased timeouts for remote Redis server
    connectTimeout: 30000, // 30 seconds (for remote server)
    commandTimeout: 20000, // 20 seconds (for slow network)
    // Keepalive for long-running connections
    keepAlive: 30000,
    // Reconnect on error
    autoResubscribe: true,
    autoResendUnfulfilledCommands: true
  };

  // Add password if provided (either from env var or from URL)
  if (redisPassword) {
    redisOptions.password = redisPassword;
    logger.info('[TAT Queue] Using Redis with password authentication');
  }

  connection = new IORedis(redisUrl, redisOptions);

  // Handle connection events
  connection.on('connect', () => {
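The retry policy above backs off linearly at 2 s per attempt, caps the delay at 10 s, and gives up after 5 attempts. Isolated as a pure function for inspection (`retryDelay` is our name for it):

```typescript
// Same schedule as the queue's retryStrategy: linear backoff of
// times * 2000 ms, capped at 10 s, returning null to stop retrying
// once the attempt count exceeds the maximum.
function retryDelay(times: number, maxAttempts: number = 5): number | null {
  if (times > maxAttempts) return null; // stop retrying
  return Math.min(times * 2000, 10000);
}

console.log(retryDelay(1)); // 2000
console.log(retryDelay(6)); // null
```

Returning `null` from an ioredis `retryStrategy` is what tells the client to stop reconnecting, which is how the queue degrades gracefully when Redis is unreachable.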
@ -5,63 +5,176 @@ import logger from '@utils/logger';
|
||||
|
||||
// Create Redis connection for worker
|
||||
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
|
||||
const redisPassword = process.env.REDIS_PASSWORD || undefined;
|
||||
|
||||
let connection: IORedis | null = null;
|
||||
let tatWorker: Worker | null = null;
|
||||
|
||||
try {
|
||||
connection = new IORedis(redisUrl, {
|
||||
// Parse Redis connection options
|
||||
const redisOptions: any = {
|
||||
maxRetriesPerRequest: null,
|
||||
enableReadyCheck: false,
|
||||
lazyConnect: true,
|
||||
retryStrategy: (times) => {
|
||||
if (times > 3) {
|
||||
logger.warn('[TAT Worker] Redis connection failed. TAT worker will not start.');
|
||||
retryStrategy: (times: number) => {
|
||||
if (times > 5) {
|
||||
logger.warn('[TAT Worker] Redis connection failed after 5 retries. TAT worker will not start.');
|
||||
return null;
|
||||
}
|
||||
return Math.min(times * 1000, 3000);
|
||||
}
|
||||
logger.warn(`[TAT Worker] Redis connection retry attempt ${times}`);
|
||||
return Math.min(times * 2000, 10000); // Increase retry delay
|
||||
},
|
||||
// Increased timeouts for remote Redis server
|
||||
connectTimeout: 30000, // 30 seconds (for remote server)
|
||||
commandTimeout: 20000, // 20 seconds (for slow network)
|
||||
// Keepalive for long-running connections
|
||||
keepAlive: 30000,
|
||||
// Reconnect on error
|
||||
autoResubscribe: true,
|
||||
autoResendUnfulfilledCommands: true
|
||||
};
|
||||
|
||||
// Add password if provided (for authenticated Redis)
|
||||
if (redisPassword) {
|
||||
redisOptions.password = redisPassword;
|
||||
logger.info('[TAT Worker] Using Redis with password authentication');
|
||||
}
|
||||
|
||||
connection = new IORedis(redisUrl, redisOptions);
|
||||
|
||||
// Handle connection errors
|
||||
connection.on('error', (err) => {
|
||||
logger.error('[TAT Worker] Redis connection error:', {
|
||||
message: err.message,
|
||||
code: (err as any).code,
|
||||
errno: (err as any).errno,
|
||||
syscall: (err as any).syscall,
|
||||
address: (err as any).address,
|
||||
port: (err as any).port
|
||||
});
|
||||
});
|
||||
|
||||
connection.on('close', () => {
|
||||
logger.warn('[TAT Worker] Redis connection closed');
|
||||
});
|
||||
|
||||
connection.on('reconnecting', (delay: number) => {
|
||||
logger.info(`[TAT Worker] Redis reconnecting in ${delay}ms`);
|
||||
});
|
||||
|
||||
// Try to connect and create worker
|
||||
connection.connect().then(() => {
|
||||
logger.info('[TAT Worker] Connected to Redis');
|
||||
connection.connect().then(async () => {
|
||||
logger.info(`[TAT Worker] Connected to Redis at ${redisUrl}`);
|
||||
|
||||
// Create TAT Worker
|
||||
tatWorker = new Worker('tatQueue', handleTatJob, {
|
||||
connection: connection!,
|
||||
concurrency: 5, // Process up to 5 jobs concurrently
|
||||
limiter: {
|
||||
max: 10, // Maximum 10 jobs
|
||||
duration: 1000 // per second
|
||||
// Verify connection by pinging and check Redis version
|
||||
try {
|
||||
const pingResult = await connection!.ping();
|
||||
logger.info(`[TAT Worker] Redis PING successful: ${pingResult}`);
|
||||
|
||||
// Check Redis version
|
||||
const info = await connection!.info('server');
|
||||
const versionMatch = info.match(/redis_version:(.+)/);
|
||||
if (versionMatch) {
|
||||
const version = versionMatch[1].trim();
|
||||
logger.info(`[TAT Worker] Redis version: ${version}`);
|
||||
|
||||
// Parse version (e.g., "3.0.504" or "7.0.0")
|
||||
const versionParts = version.split('.').map(Number);
|
||||
const majorVersion = versionParts[0];
|
||||
|
||||
if (majorVersion < 5) {
|
||||
logger.error(`[TAT Worker] ❌ CRITICAL: Redis version ${version} is incompatible!`);
|
||||
logger.error(`[TAT Worker] BullMQ REQUIRES Redis 5.0.0 or higher. Current version: ${version}`);
|
||||
logger.error(`[TAT Worker] ⚠️ TAT Worker cannot start with this Redis version.`);
|
||||
logger.error(`[TAT Worker] 📖 Solution: Upgrade Redis (see docs/REDIS_SETUP_WINDOWS.md)`);
|
||||
logger.error(`[TAT Worker] 💡 Recommended: Install Memurai or use Docker Redis 7.x`);
|
||||
throw new Error(`Redis version ${version} is too old. BullMQ requires Redis 5.0.0+. Please upgrade Redis.`);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Event listeners
|
||||
tatWorker.on('ready', () => {
|
||||
logger.info('[TAT Worker] Worker is ready and listening for jobs');
|
||||
});
|
||||
|
||||
tatWorker.on('completed', (job) => {
|
||||
logger.info(`[TAT Worker] ✅ Job ${job.id} (${job.name}) completed for request ${job.data.requestId}`);
|
||||
});
|
||||
|
||||
tatWorker.on('failed', (job, err) => {
|
||||
if (job) {
|
||||
logger.error(`[TAT Worker] ❌ Job ${job.id} (${job.name}) failed for request ${job.data.requestId}:`, err);
|
||||
} else {
|
||||
logger.error('[TAT Worker] ❌ Job failed:', err);
|
||||
} catch (err: any) {
|
||||
logger.error('[TAT Worker] Redis PING or version check failed:', err);
|
||||
// If version check failed, don't create worker
|
||||
if (err && err.message && err.message.includes('Redis version')) {
|
||||
logger.warn('[TAT Worker] TAT notifications will be disabled until Redis is upgraded.');
|
||||
connection = null;
|
||||
tatWorker = null;
|
||||
return;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Create TAT Worker (only if version check passed)
|
||||
if (connection) {
|
||||
try {
|
||||
// BullMQ will check Redis version internally - wrap in try-catch
|
||||
tatWorker = new Worker('tatQueue', handleTatJob, {
|
||||
connection: connection!,
|
||||
concurrency: 5, // Process up to 5 jobs concurrently
|
||||
limiter: {
|
||||
max: 10, // Maximum 10 jobs
|
||||
duration: 1000 // per second
|
||||
}
|
||||
});
|
||||
} catch (workerError: any) {
|
||||
// Handle Redis version errors gracefully
|
||||
if (workerError && (
|
||||
(workerError.message && workerError.message.includes('Redis version')) ||
|
||||
(workerError.message && workerError.message.includes('5.0.0'))
|
||||
)) {
|
||||
logger.error(`[TAT Worker] ❌ ${workerError.message || 'Redis version incompatible'}`);
|
||||
logger.warn(`[TAT Worker] ⚠️ TAT notifications are DISABLED. Application will continue to work without TAT alerts.`);
|
||||
logger.info(`[TAT Worker] 💡 To enable TAT notifications, upgrade Redis to version 5.0+ (see docs/REDIS_SETUP_WINDOWS.md)`);
|
||||
|
||||
// Clean up connection
|
||||
try {
|
||||
await connection!.quit();
|
||||
} catch (quitError) {
|
||||
// Ignore quit errors
|
||||
}
|
||||
connection = null;
|
||||
tatWorker = null;
|
||||
return;
|
||||
}
|
||||
// Re-throw other errors
|
||||
logger.error('[TAT Worker] Unexpected error creating worker:', workerError);
|
||||
throw workerError;
|
||||
}
|
||||
}
|
||||
|
||||
  // Event listeners (only if worker was created successfully)
  if (tatWorker) {
    tatWorker.on('ready', () => {
      logger.info('[TAT Worker] Worker is ready and listening for jobs');
    });

    tatWorker.on('completed', (job) => {
      logger.info(`[TAT Worker] ✅ Job ${job.id} (${job.name}) completed for request ${job.data.requestId}`);
    });

    tatWorker.on('failed', (job, err) => {
      if (job) {
        logger.error(`[TAT Worker] ❌ Job ${job.id} (${job.name}) failed for request ${job.data.requestId}:`, err);
      } else {
        logger.error('[TAT Worker] ❌ Job failed:', err);
      }
    });

    tatWorker.on('error', (err) => {
      logger.error('[TAT Worker] Worker error:', {
        message: err.message,
        stack: err.stack,
        name: err.name,
        code: (err as any).code,
        errno: (err as any).errno,
        syscall: (err as any).syscall
      });
    });

    tatWorker.on('stalled', (jobId) => {
      logger.warn(`[TAT Worker] Job ${jobId} has stalled`);
    });

    logger.info('[TAT Worker] Worker initialized and listening for TAT jobs');
  }
}).catch((err) => {
  logger.warn('[TAT Worker] Could not connect to Redis. TAT worker will not start. TAT notifications are disabled.', err.message);
  connection = null;
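The fallback above fires when BullMQ rejects the Redis server as too old (it requires 5.0.0+, per the setup doc). The version gate itself can be sketched in isolation; `parseRedisVersion` and `isBullMQCompatible` are hypothetical helper names, not functions from this repo, and they parse the `redis_version` field that a Redis `INFO` reply exposes:

```typescript
// Hypothetical sketch: extract major/minor/patch from a Redis INFO reply
// and compare against BullMQ's documented minimum (Redis 5.0.0).
function parseRedisVersion(info: string): number[] {
  const match = info.match(/redis_version:(\d+)\.(\d+)\.(\d+)/);
  if (!match) return [0, 0, 0];
  return [Number(match[1]), Number(match[2]), Number(match[3])];
}

function isBullMQCompatible(info: string): boolean {
  const [major] = parseRedisVersion(info);
  return major >= 5; // BullMQ requires Redis 5.0.0+
}
```

Checking this up front (rather than waiting for the worker constructor to throw) is one way to log the "upgrade Redis" hint before any queue is created.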
82	src/routes/dashboard.routes.ts	Normal file

@ -0,0 +1,82 @@
import { Router } from 'express';
import type { Request, Response } from 'express';
import { DashboardController } from '../controllers/dashboard.controller';
import { authenticateToken } from '../middlewares/auth.middleware';
import { asyncHandler } from '../middlewares/errorHandler.middleware';

const router = Router();
const dashboardController = new DashboardController();

/**
 * Dashboard Routes
 * All routes require authentication
 */

// Get KPI summary (all KPI cards)
router.get('/kpis',
  authenticateToken,
  asyncHandler(dashboardController.getKPIs.bind(dashboardController))
);

// Get detailed request statistics
router.get('/stats/requests',
  authenticateToken,
  asyncHandler(dashboardController.getRequestStats.bind(dashboardController))
);

// Get TAT efficiency metrics
router.get('/stats/tat-efficiency',
  authenticateToken,
  asyncHandler(dashboardController.getTATEfficiency.bind(dashboardController))
);

// Get approver load statistics
router.get('/stats/approver-load',
  authenticateToken,
  asyncHandler(dashboardController.getApproverLoad.bind(dashboardController))
);

// Get engagement & quality metrics
router.get('/stats/engagement',
  authenticateToken,
  asyncHandler(dashboardController.getEngagementStats.bind(dashboardController))
);

// Get AI & closure insights
router.get('/stats/ai-insights',
  authenticateToken,
  asyncHandler(dashboardController.getAIInsights.bind(dashboardController))
);

// Get recent activity feed
router.get('/activity/recent',
  authenticateToken,
  asyncHandler(dashboardController.getRecentActivity.bind(dashboardController))
);

// Get high priority/critical requests
router.get('/requests/critical',
  authenticateToken,
  asyncHandler(dashboardController.getCriticalRequests.bind(dashboardController))
);

// Get upcoming deadlines
router.get('/deadlines/upcoming',
  authenticateToken,
  asyncHandler(dashboardController.getUpcomingDeadlines.bind(dashboardController))
);

// Get department-wise summary
router.get('/stats/by-department',
  authenticateToken,
  asyncHandler(dashboardController.getDepartmentStats.bind(dashboardController))
);

// Get priority distribution
router.get('/stats/priority-distribution',
  authenticateToken,
  asyncHandler(dashboardController.getPriorityDistribution.bind(dashboardController))
);

export default router;
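Every route above wraps its controller method in `asyncHandler`. The wrapper's implementation is not part of this diff; a minimal sketch of the usual pattern (assuming Express-style `(req, res, next)` handlers, with generic parameter types so the sketch stays self-contained) is:

```typescript
// Minimal sketch (assumed pattern, not this repo's implementation):
// forward a rejected promise from an async controller to `next`,
// so Express error middleware sees it instead of an unhandled rejection.
type Handler<Req, Res> = (req: Req, res: Res, next: (err?: unknown) => void) => unknown;

function asyncHandler<Req, Res>(fn: Handler<Req, Res>): Handler<Req, Res> {
  return (req, res, next) => {
    // Promise.resolve also covers controllers that return synchronously
    Promise.resolve(fn(req, res, next)).catch(next);
  };
}
```

Without such a wrapper, a rejection inside an `async` Express 4 handler bypasses the error-handling middleware entirely.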
@ -7,6 +7,7 @@ import tatRoutes from './tat.routes';
import adminRoutes from './admin.routes';
import debugRoutes from './debug.routes';
import configRoutes from './config.routes';
import dashboardRoutes from './dashboard.routes';

const router = Router();

@ -28,12 +29,11 @@ router.use('/documents', documentRoutes);
router.use('/tat', tatRoutes);
router.use('/admin', adminRoutes);
router.use('/debug', debugRoutes);
router.use('/dashboard', dashboardRoutes);

// TODO: Add other route modules as they are implemented
// router.use('/approvals', approvalRoutes);
// router.use('/documents', documentRoutes);
// router.use('/notifications', notificationRoutes);
// router.use('/participants', participantRoutes);
// router.use('/dashboard', dashboardRoutes);

export default router;
@ -9,6 +9,9 @@ const userController = new UserController();
// GET /api/v1/users/search?q=<email or name>
router.get('/search', authenticateToken, asyncHandler(userController.searchUsers.bind(userController)));

// POST /api/v1/users/ensure - Ensure user exists in DB (create if not exists)
router.post('/ensure', authenticateToken, asyncHandler(userController.ensureUserExists.bind(userController)));

export default router;
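The new `POST /ensure` route creates the user record only when it is missing. The controller itself is not in this diff; the core "find or create" step can be sketched as a pure function (hypothetical names, backed here by a `Map` instead of the real `users` table):

```typescript
// Hypothetical sketch of an idempotent "ensure exists" step.
interface UserRecord { email: string; displayName: string; }

function ensureUserExists(
  store: Map<string, UserRecord>,
  user: UserRecord
): { record: UserRecord; created: boolean } {
  const existing = store.get(user.email);
  // Idempotent: a second call with the same email returns the stored record
  if (existing) return { record: existing, created: false };
  store.set(user.email, user);
  return { record: user, created: true };
}
```

In the real service the same shape is what Sequelize's `findOrCreate` provides against the database.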
@ -4,7 +4,8 @@ import { Participant } from '@models/Participant';
import { TatAlert } from '@models/TatAlert';
import { ApprovalAction } from '../types/approval.types';
import { ApprovalStatus, WorkflowStatus } from '../types/common.types';
import { calculateElapsedHours, calculateTATPercentage } from '@utils/helpers';
import { calculateTATPercentage } from '@utils/helpers';
import { calculateElapsedWorkingHours } from '@utils/tatTimeUtils';
import logger from '@utils/logger';
import { Op } from 'sequelize';
import { notificationService } from './notification.service';

@ -17,8 +18,13 @@ export class ApprovalService {
    const level = await ApprovalLevel.findByPk(levelId);
    if (!level) return null;

    // Get workflow to determine priority for working hours calculation
    const wf = await WorkflowRequest.findByPk(level.requestId);
    const priority = ((wf as any)?.priority || 'standard').toString().toLowerCase();

    const now = new Date();
    const elapsedHours = calculateElapsedHours(level.levelStartTime || level.createdAt, now);
    // Calculate elapsed hours using working hours logic (matches frontend)
    const elapsedHours = await calculateElapsedWorkingHours(level.levelStartTime || level.createdAt, now, priority);
    const tatPercentage = calculateTATPercentage(elapsedHours, level.tatHours);

    const updateData = {

@ -60,10 +66,7 @@ export class ApprovalService {
      // Don't fail the approval if TAT alert update fails
    }

    // Load workflow for titles and initiator
    const wf = await WorkflowRequest.findByPk(level.requestId);

    // Handle approval - move to next level or close workflow
    // Handle approval - move to next level or close workflow (wf already loaded above)
    if (action.action === 'APPROVE') {
      if (level.isFinalApprover) {
        // Final approver - close workflow as APPROVED

@ -37,10 +37,10 @@ export async function getConfigValue(configKey: string, defaultValue: string = '
    const value = (result[0] as any).config_value;
    configCache.set(configKey, value);

    // Set cache expiry if not set
    if (!cacheExpiry) {
      cacheExpiry = new Date(Date.now() + CACHE_DURATION_MS);
    }
    // Always update cache expiry when loading from database
    cacheExpiry = new Date(Date.now() + CACHE_DURATION_MS);

    logger.info(`[ConfigReader] Loaded config '${configKey}' = '${value}' from database (cached for 5min)`);

    return value;
  }
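The configReader change above replaces "set the expiry once" with "refresh the expiry on every database load", so the cache window slides forward with each miss. That sliding-expiry behaviour can be sketched as follows (assumed standalone names mirroring `configCache`/`cacheExpiry`/`CACHE_DURATION_MS`, not the repo's exact module):

```typescript
// Minimal sketch of a sliding-expiry config cache (assumed names).
const CACHE_DURATION_MS = 5 * 60 * 1000;
const configCache = new Map<string, string>();
let cacheExpiry: Date | null = null;

function storeConfig(key: string, value: string, now: number = Date.now()): void {
  configCache.set(key, value);
  // Always refresh the expiry on load, so the window slides forward
  cacheExpiry = new Date(now + CACHE_DURATION_MS);
}

function isCacheValid(now: number = Date.now()): boolean {
  return cacheExpiry !== null && now < cacheExpiry.getTime();
}
```

With the old "only if not set" logic, the very first load fixed the expiry forever, so after five minutes every call hit the database even though values were being re-cached.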
741	src/services/dashboard.service.ts	Normal file

@ -0,0 +1,741 @@
import { WorkflowRequest } from '@models/WorkflowRequest';
|
||||
import { ApprovalLevel } from '@models/ApprovalLevel';
|
||||
import { Participant } from '@models/Participant';
|
||||
import { Activity } from '@models/Activity';
|
||||
import { WorkNote } from '@models/WorkNote';
|
||||
import { Document } from '@models/Document';
|
||||
import { TatAlert } from '@models/TatAlert';
|
||||
import { User } from '@models/User';
|
||||
import { Op, QueryTypes } from 'sequelize';
|
||||
import { sequelize } from '@config/database';
|
||||
import dayjs from 'dayjs';
|
||||
import logger from '@utils/logger';
|
||||
import { calculateSLAStatus } from '@utils/tatTimeUtils';
|
||||
|
||||
interface DateRangeFilter {
|
||||
start: Date;
|
||||
end: Date;
|
||||
}
|
||||
|
||||
export class DashboardService {
|
||||
/**
|
||||
* Parse date range string to Date objects
|
||||
*/
|
||||
private parseDateRange(dateRange?: string): DateRangeFilter {
|
||||
const now = dayjs();
|
||||
|
||||
switch (dateRange) {
|
||||
case 'today':
|
||||
return {
|
||||
start: now.startOf('day').toDate(),
|
||||
end: now.endOf('day').toDate()
|
||||
};
|
||||
case 'week':
|
||||
return {
|
||||
start: now.startOf('week').toDate(),
|
||||
end: now.endOf('week').toDate()
|
||||
};
|
||||
case 'month':
|
||||
return {
|
||||
start: now.startOf('month').toDate(),
|
||||
end: now.endOf('month').toDate()
|
||||
};
|
||||
case 'quarter':
|
||||
// Calculate quarter manually since dayjs doesn't support it by default
|
||||
const currentMonth = now.month();
|
||||
const quarterStartMonth = Math.floor(currentMonth / 3) * 3;
|
||||
return {
|
||||
start: now.month(quarterStartMonth).startOf('month').toDate(),
|
||||
end: now.month(quarterStartMonth + 2).endOf('month').toDate()
|
||||
};
|
||||
case 'year':
|
||||
return {
|
||||
start: now.startOf('year').toDate(),
|
||||
end: now.endOf('year').toDate()
|
||||
};
|
||||
default:
|
||||
// Default to last 30 days
|
||||
return {
|
||||
start: now.subtract(30, 'day').toDate(),
|
||||
end: now.toDate()
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all KPIs for dashboard
|
||||
*/
|
||||
async getKPIs(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Run all KPI queries in parallel for performance
|
||||
const [
|
||||
requestStats,
|
||||
tatEfficiency,
|
||||
approverLoad,
|
||||
engagement,
|
||||
aiInsights
|
||||
] = await Promise.all([
|
||||
this.getRequestStats(userId, dateRange),
|
||||
this.getTATEfficiency(userId, dateRange),
|
||||
this.getApproverLoad(userId, dateRange),
|
||||
this.getEngagementStats(userId, dateRange),
|
||||
this.getAIInsights(userId, dateRange)
|
||||
]);
|
||||
|
||||
return {
|
||||
requestVolume: requestStats,
|
||||
tatEfficiency,
|
||||
approverLoad,
|
||||
engagement,
|
||||
aiInsights,
|
||||
dateRange: {
|
||||
start: range.start,
|
||||
end: range.end,
|
||||
label: dateRange || 'last30days'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get request volume and status statistics
|
||||
*/
|
||||
async getRequestStats(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: show only requests they INITIATED (not participated in)
|
||||
// For admin: show all requests
|
||||
let whereClause = `
|
||||
WHERE wf.created_at BETWEEN :start AND :end
|
||||
AND wf.is_draft = false
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
const result = await sequelize.query(`
|
||||
SELECT
|
||||
COUNT(*)::int AS total_requests,
|
||||
COUNT(CASE WHEN wf.status = 'PENDING' OR wf.status = 'IN_PROGRESS' THEN 1 END)::int AS open_requests,
|
||||
COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved_requests,
|
||||
COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected_requests
|
||||
FROM workflow_requests wf
|
||||
${whereClause}
|
||||
`, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Get draft count separately
|
||||
const draftResult = await sequelize.query(`
|
||||
SELECT COUNT(*)::int AS draft_count
|
||||
FROM workflow_requests wf
|
||||
WHERE wf.is_draft = true
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`, {
|
||||
replacements: { userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
const stats = result[0] as any;
|
||||
const drafts = (draftResult[0] as any);
|
||||
|
||||
return {
|
||||
totalRequests: stats.total_requests || 0,
|
||||
openRequests: stats.open_requests || 0,
|
||||
approvedRequests: stats.approved_requests || 0,
|
||||
rejectedRequests: stats.rejected_requests || 0,
|
||||
draftRequests: drafts.draft_count || 0,
|
||||
changeFromPrevious: {
|
||||
total: '+0',
|
||||
open: '+0',
|
||||
approved: '+0',
|
||||
rejected: '+0'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get TAT efficiency metrics
|
||||
*/
|
||||
async getTATEfficiency(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: only their initiated requests
|
||||
// For admin: all requests
|
||||
let whereClause = `
|
||||
WHERE wf.created_at BETWEEN :start AND :end
|
||||
AND wf.status IN ('APPROVED', 'REJECTED')
|
||||
AND wf.is_draft = false
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
const result = await sequelize.query(`
|
||||
SELECT
|
||||
COUNT(*)::int AS total_completed,
|
||||
COUNT(CASE WHEN EXISTS (
|
||||
SELECT 1 FROM tat_alerts ta
|
||||
WHERE ta.request_id = wf.request_id
|
||||
AND ta.is_breached = true
|
||||
) THEN 1 END)::int AS breached_count,
|
||||
AVG(
|
||||
EXTRACT(EPOCH FROM (wf.updated_at - wf.submission_date)) / 3600
|
||||
)::numeric AS avg_cycle_time_hours
|
||||
FROM workflow_requests wf
|
||||
${whereClause}
|
||||
`, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
const stats = result[0] as any;
|
||||
const totalCompleted = stats.total_completed || 0;
|
||||
const breachedCount = stats.breached_count || 0;
|
||||
const compliantCount = totalCompleted - breachedCount;
|
||||
const compliancePercent = totalCompleted > 0 ? Math.round((compliantCount / totalCompleted) * 100) : 0;
|
||||
|
||||
return {
|
||||
avgTATCompliance: compliancePercent,
|
||||
avgCycleTimeHours: Math.round(parseFloat(stats.avg_cycle_time_hours || 0) * 10) / 10,
|
||||
avgCycleTimeDays: Math.round((parseFloat(stats.avg_cycle_time_hours || 0) / 24) * 10) / 10,
|
||||
delayedWorkflows: breachedCount,
|
||||
totalCompleted,
|
||||
compliantWorkflows: compliantCount,
|
||||
changeFromPrevious: {
|
||||
compliance: '+5.8%', // TODO: Calculate actual change
|
||||
cycleTime: '-0.5h'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get approver load statistics
|
||||
*/
|
||||
async getApproverLoad(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Get pending actions where user is the CURRENT active approver
|
||||
// This means: the request is at this user's level AND it's the current level
|
||||
const pendingResult = await sequelize.query(`
|
||||
SELECT COUNT(DISTINCT al.level_id)::int AS pending_count
|
||||
FROM approval_levels al
|
||||
JOIN workflow_requests wf ON al.request_id = wf.request_id
|
||||
WHERE al.approver_id = :userId
|
||||
AND al.status = 'IN_PROGRESS'
|
||||
AND wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||
AND wf.is_draft = false
|
||||
AND al.level_number = wf.current_level
|
||||
`, {
|
||||
replacements: { userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Get completed approvals in date range
|
||||
const completedResult = await sequelize.query(`
|
||||
SELECT
|
||||
COUNT(*)::int AS completed_today,
|
||||
COUNT(CASE WHEN al.action_date >= :weekStart THEN 1 END)::int AS completed_this_week
|
||||
FROM approval_levels al
|
||||
WHERE al.approver_id = :userId
|
||||
AND al.status IN ('APPROVED', 'REJECTED')
|
||||
AND al.action_date BETWEEN :start AND :end
|
||||
`, {
|
||||
replacements: {
|
||||
userId,
|
||||
start: range.start,
|
||||
end: range.end,
|
||||
weekStart: dayjs().startOf('week').toDate()
|
||||
},
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
const pending = (pendingResult[0] as any);
|
||||
const completed = (completedResult[0] as any);
|
||||
|
||||
return {
|
||||
pendingActions: pending.pending_count || 0,
|
||||
completedToday: completed.completed_today || 0,
|
||||
completedThisWeek: completed.completed_this_week || 0,
|
||||
changeFromPrevious: {
|
||||
pending: '+2',
|
||||
completed: '+15%'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get engagement and quality metrics
|
||||
*/
|
||||
async getEngagementStats(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// Get work notes count - uses created_at
|
||||
// For regular users: only from requests they initiated
|
||||
let workNotesWhereClause = `
|
||||
WHERE wn.created_at BETWEEN :start AND :end
|
||||
${!isAdmin ? `AND EXISTS (
|
||||
SELECT 1 FROM workflow_requests wf
|
||||
WHERE wf.request_id = wn.request_id
|
||||
AND wf.initiator_id = :userId
|
||||
AND wf.is_draft = false
|
||||
)` : ''}
|
||||
`;
|
||||
|
||||
const workNotesResult = await sequelize.query(`
|
||||
SELECT COUNT(*)::int AS work_notes_count
|
||||
FROM work_notes wn
|
||||
${workNotesWhereClause}
|
||||
`, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Get documents count - uses uploaded_at
|
||||
// For regular users: only from requests they initiated
|
||||
let documentsWhereClause = `
|
||||
WHERE d.uploaded_at BETWEEN :start AND :end
|
||||
${!isAdmin ? `AND EXISTS (
|
||||
SELECT 1 FROM workflow_requests wf
|
||||
WHERE wf.request_id = d.request_id
|
||||
AND wf.initiator_id = :userId
|
||||
AND wf.is_draft = false
|
||||
)` : ''}
|
||||
`;
|
||||
|
||||
const documentsResult = await sequelize.query(`
|
||||
SELECT COUNT(*)::int AS documents_count
|
||||
FROM documents d
|
||||
${documentsWhereClause}
|
||||
`, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
const workNotes = (workNotesResult[0] as any);
|
||||
const documents = (documentsResult[0] as any);
|
||||
|
||||
return {
|
||||
workNotesAdded: workNotes.work_notes_count || 0,
|
||||
attachmentsUploaded: documents.documents_count || 0,
|
||||
changeFromPrevious: {
|
||||
workNotes: '+25',
|
||||
attachments: '+8'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get AI insights and closure metrics
|
||||
*/
|
||||
async getAIInsights(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: only their initiated requests
|
||||
let whereClause = `
|
||||
WHERE wf.created_at BETWEEN :start AND :end
|
||||
AND wf.status = 'APPROVED'
|
||||
AND wf.conclusion_remark IS NOT NULL
|
||||
AND wf.is_draft = false
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
const result = await sequelize.query(`
|
||||
SELECT
|
||||
COUNT(*)::int AS total_with_conclusion,
|
||||
AVG(LENGTH(wf.conclusion_remark))::numeric AS avg_remark_length,
|
||||
COUNT(CASE WHEN wf.ai_generated_conclusion IS NOT NULL AND wf.ai_generated_conclusion != '' THEN 1 END)::int AS ai_generated_count,
|
||||
COUNT(CASE WHEN wf.ai_generated_conclusion IS NULL OR wf.ai_generated_conclusion = '' THEN 1 END)::int AS manual_count
|
||||
FROM workflow_requests wf
|
||||
${whereClause}
|
||||
`, {
|
||||
replacements: { start: range.start, end: range.end, userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
const stats = result[0] as any;
|
||||
const totalWithConclusion = stats.total_with_conclusion || 0;
|
||||
const aiCount = stats.ai_generated_count || 0;
|
||||
const aiAdoptionPercent = totalWithConclusion > 0 ? Math.round((aiCount / totalWithConclusion) * 100) : 0;
|
||||
|
||||
return {
|
||||
avgConclusionRemarkLength: Math.round(parseFloat(stats.avg_remark_length || 0)),
|
||||
aiSummaryAdoptionPercent: aiAdoptionPercent,
|
||||
totalWithConclusion,
|
||||
aiGeneratedCount: aiCount,
|
||||
manualCount: stats.manual_count || 0,
|
||||
changeFromPrevious: {
|
||||
adoption: '+12%',
|
||||
length: '+50 chars'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get recent activity feed
|
||||
*/
|
||||
async getRecentActivity(userId: string, limit: number = 10) {
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: only activities from their initiated requests OR where they're a participant
|
||||
let whereClause = isAdmin ? '' : `
|
||||
AND (
|
||||
wf.initiator_id = :userId
|
||||
OR EXISTS (
|
||||
SELECT 1 FROM participants p
|
||||
WHERE p.request_id = a.request_id
|
||||
AND p.user_id = :userId
|
||||
)
|
||||
)
|
||||
`;
|
||||
|
||||
const activities = await sequelize.query(`
|
||||
SELECT
|
||||
a.activity_id,
|
||||
a.request_id,
|
||||
a.activity_type AS type,
|
||||
a.activity_description,
|
||||
a.activity_category,
|
||||
a.user_id,
|
||||
a.user_name,
|
||||
a.created_at AS timestamp,
|
||||
wf.request_number,
|
||||
wf.title AS request_title,
|
||||
wf.priority
|
||||
FROM activities a
|
||||
JOIN workflow_requests wf ON a.request_id = wf.request_id
|
||||
WHERE a.created_at >= NOW() - INTERVAL '7 days'
|
||||
${whereClause}
|
||||
ORDER BY a.created_at DESC
|
||||
LIMIT :limit
|
||||
`, {
|
||||
replacements: { userId, limit },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
return activities.map((a: any) => ({
|
||||
activityId: a.activity_id,
|
||||
requestId: a.request_id,
|
||||
requestNumber: a.request_number,
|
||||
requestTitle: a.request_title,
|
||||
type: a.type,
|
||||
action: a.activity_description || a.type, // Use activity_description as action
|
||||
details: a.activity_category,
|
||||
userId: a.user_id,
|
||||
userName: a.user_name,
|
||||
timestamp: a.timestamp,
|
||||
priority: (a.priority || '').toLowerCase()
|
||||
}));
|
||||
}
|
||||
|
||||
/**
|
||||
* Get critical requests (breached TAT or approaching deadline)
|
||||
*/
|
||||
async getCriticalRequests(userId: string) {
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: show only their initiated requests OR where they are current approver
|
||||
let whereClause = `
|
||||
WHERE wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||
AND wf.is_draft = false
|
||||
${!isAdmin ? `AND (
|
||||
wf.initiator_id = :userId
|
||||
OR EXISTS (
|
||||
SELECT 1 FROM approval_levels al
|
||||
WHERE al.request_id = wf.request_id
|
||||
AND al.approver_id = :userId
|
||||
AND al.level_number = wf.current_level
|
||||
AND al.status = 'IN_PROGRESS'
|
||||
)
|
||||
)` : ''}
|
||||
`;
|
||||
|
||||
const criticalRequests = await sequelize.query(`
|
||||
SELECT
|
||||
wf.request_id,
|
||||
wf.request_number,
|
||||
wf.title,
|
||||
wf.priority,
|
||||
wf.status,
|
||||
wf.current_level,
|
||||
wf.total_levels,
|
||||
wf.submission_date,
|
||||
wf.total_tat_hours,
|
||||
(
|
||||
SELECT COUNT(*)::int
|
||||
FROM tat_alerts ta
|
||||
WHERE ta.request_id = wf.request_id
|
||||
AND ta.is_breached = true
|
||||
) AS breach_count,
|
||||
(
|
||||
SELECT al.tat_hours
|
||||
FROM approval_levels al
|
||||
WHERE al.request_id = wf.request_id
|
||||
AND al.level_number = wf.current_level
|
||||
LIMIT 1
|
||||
) AS current_level_tat_hours,
|
||||
(
|
||||
SELECT al.level_start_time
|
||||
FROM approval_levels al
|
||||
WHERE al.request_id = wf.request_id
|
||||
AND al.level_number = wf.current_level
|
||||
LIMIT 1
|
||||
) AS current_level_start_time
|
||||
FROM workflow_requests wf
|
||||
${whereClause}
|
||||
AND (
|
||||
-- Has TAT breaches
|
||||
EXISTS (
|
||||
SELECT 1 FROM tat_alerts ta
|
||||
WHERE ta.request_id = wf.request_id
|
||||
AND (ta.is_breached = true OR ta.threshold_percentage >= 75)
|
||||
)
|
||||
-- Or is express priority
|
||||
OR wf.priority = 'EXPRESS'
|
||||
)
|
||||
ORDER BY
|
||||
CASE WHEN wf.priority = 'EXPRESS' THEN 1 ELSE 2 END,
|
||||
breach_count DESC,
|
||||
wf.created_at ASC
|
||||
LIMIT 10
|
||||
`, {
|
||||
replacements: { userId },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Calculate working hours TAT for each critical request's current level
|
||||
const criticalWithSLA = await Promise.all(criticalRequests.map(async (req: any) => {
|
||||
const priority = (req.priority || 'standard').toLowerCase();
|
||||
const currentLevelTatHours = parseFloat(req.current_level_tat_hours) || 0;
|
||||
const currentLevelStartTime = req.current_level_start_time;
|
||||
|
||||
let currentLevelRemainingHours = currentLevelTatHours;
|
||||
|
||||
if (currentLevelStartTime && currentLevelTatHours > 0) {
|
||||
try {
|
||||
// Use working hours calculation for current level
|
||||
const slaData = await calculateSLAStatus(currentLevelStartTime, currentLevelTatHours, priority);
|
||||
currentLevelRemainingHours = slaData.remainingHours;
|
||||
} catch (error) {
|
||||
logger.error(`[Dashboard] Error calculating SLA for critical request ${req.request_id}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
requestId: req.request_id,
|
||||
requestNumber: req.request_number,
|
||||
title: req.title,
|
||||
priority,
|
||||
status: (req.status || '').toLowerCase(),
|
||||
currentLevel: req.current_level,
|
||||
totalLevels: req.total_levels,
|
||||
submissionDate: req.submission_date,
|
||||
totalTATHours: currentLevelRemainingHours, // Current level remaining hours
|
||||
breachCount: req.breach_count || 0,
|
||||
isCritical: req.breach_count > 0 || req.priority === 'EXPRESS'
|
||||
};
|
||||
}));
|
||||
|
||||
return criticalWithSLA;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get upcoming deadlines
|
||||
*/
|
||||
async getUpcomingDeadlines(userId: string, limit: number = 5) {
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: only show CURRENT LEVEL where they are the approver
|
||||
// For admins: show all current active levels
|
||||
let whereClause = `
|
||||
WHERE wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||
AND wf.is_draft = false
|
||||
AND al.status = 'IN_PROGRESS'
|
||||
AND al.level_number = wf.current_level
|
||||
${!isAdmin ? `AND al.approver_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
const deadlines = await sequelize.query(`
|
||||
SELECT
|
||||
al.level_id,
|
||||
al.request_id,
|
||||
al.level_number,
|
||||
al.approver_name,
|
||||
al.approver_email,
|
||||
al.tat_hours,
|
||||
al.level_start_time,
|
||||
wf.request_number,
|
||||
wf.title AS request_title,
|
||||
wf.priority,
|
||||
wf.current_level,
|
||||
wf.total_levels
|
||||
FROM approval_levels al
|
||||
JOIN workflow_requests wf ON al.request_id = wf.request_id
|
||||
${whereClause}
|
||||
ORDER BY al.level_start_time ASC
|
||||
LIMIT :limit
|
||||
`, {
|
||||
replacements: { userId, limit },
|
||||
type: QueryTypes.SELECT
|
||||
});
|
||||
|
||||
// Calculate working hours TAT for each deadline
|
||||
const deadlinesWithSLA = await Promise.all(deadlines.map(async (d: any) => {
|
||||
const priority = (d.priority || 'standard').toLowerCase();
|
||||
const tatHours = parseFloat(d.tat_hours) || 0;
|
||||
const levelStartTime = d.level_start_time;
|
||||
|
||||
let elapsedHours = 0;
|
||||
let remainingHours = tatHours;
|
||||
let tatPercentageUsed = 0;
|
||||
|
||||
if (levelStartTime && tatHours > 0) {
|
||||
try {
|
||||
// Use working hours calculation (same as RequestDetail screen)
|
||||
const slaData = await calculateSLAStatus(levelStartTime, tatHours, priority);
|
||||
elapsedHours = slaData.elapsedHours;
|
||||
remainingHours = slaData.remainingHours;
|
||||
tatPercentageUsed = slaData.percentageUsed;
|
||||
} catch (error) {
|
||||
logger.error(`[Dashboard] Error calculating SLA for level ${d.level_id}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
levelId: d.level_id,
|
||||
requestId: d.request_id,
|
||||
requestNumber: d.request_number,
|
||||
requestTitle: d.request_title,
|
||||
levelNumber: d.level_number,
|
||||
currentLevel: d.current_level,
|
||||
totalLevels: d.total_levels,
|
||||
approverName: d.approver_name,
|
||||
approverEmail: d.approver_email,
|
||||
tatHours,
|
||||
elapsedHours,
|
||||
remainingHours,
|
||||
tatPercentageUsed,
|
||||
levelStartTime,
|
||||
priority
|
||||
};
|
||||
}));
|
||||
|
||||
// Sort by TAT percentage used (descending) and return
|
||||
return deadlinesWithSLA.sort((a, b) => b.tatPercentageUsed - a.tatPercentageUsed);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get department-wise statistics
|
||||
*/
|
||||
async getDepartmentStats(userId: string, dateRange?: string) {
|
||||
const range = this.parseDateRange(dateRange);
|
||||
|
||||
// Check if user is admin
|
||||
const user = await User.findByPk(userId);
|
||||
const isAdmin = (user as any)?.isAdmin || false;
|
||||
|
||||
// For regular users: only their initiated requests
|
||||
let whereClause = `
|
||||
WHERE wf.created_at BETWEEN :start AND :end
|
||||
AND wf.is_draft = false
|
||||
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||
`;
|
||||
|
||||
const deptStats = await sequelize.query(`
|
||||
SELECT
|
||||
COALESCE(u.department, 'Unknown') AS department,
|
||||
COUNT(*)::int AS total_requests,
|
||||
        COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved,
        COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected,
        COUNT(CASE WHEN wf.status IN ('PENDING', 'IN_PROGRESS') THEN 1 END)::int AS in_progress
      FROM workflow_requests wf
      JOIN users u ON wf.initiator_id = u.user_id
      ${whereClause}
      GROUP BY u.department
      ORDER BY total_requests DESC
      LIMIT 10
    `, {
      replacements: { start: range.start, end: range.end, userId },
      type: QueryTypes.SELECT
    });

    return deptStats.map((d: any) => ({
      department: d.department,
      totalRequests: d.total_requests,
      approved: d.approved,
      rejected: d.rejected,
      inProgress: d.in_progress,
      approvalRate: d.total_requests > 0 ? Math.round((d.approved / d.total_requests) * 100) : 0
    }));
  }

  /**
   * Get priority distribution statistics
   */
  async getPriorityDistribution(userId: string, dateRange?: string) {
    const range = this.parseDateRange(dateRange);

    // Check if user is admin
    const user = await User.findByPk(userId);
    const isAdmin = (user as any)?.isAdmin || false;

    // For regular users: only their initiated requests
    let whereClause = `
      WHERE wf.created_at BETWEEN :start AND :end
        AND wf.is_draft = false
        ${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
    `;

    const priorityStats = await sequelize.query(`
      SELECT
        wf.priority,
        COUNT(*)::int AS total_count,
        AVG(
          EXTRACT(EPOCH FROM (wf.updated_at - wf.submission_date)) / 3600
        )::numeric AS avg_cycle_time_hours,
        COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved_count,
        COUNT(CASE WHEN EXISTS (
          SELECT 1 FROM tat_alerts ta
          WHERE ta.request_id = wf.request_id
            AND ta.is_breached = true
        ) THEN 1 END)::int AS breached_count
      FROM workflow_requests wf
      ${whereClause}
      GROUP BY wf.priority
    `, {
      replacements: { start: range.start, end: range.end, userId },
      type: QueryTypes.SELECT
    });

    return priorityStats.map((p: any) => ({
      priority: (p.priority || 'STANDARD').toLowerCase(),
      totalCount: p.total_count,
      avgCycleTimeHours: Math.round(parseFloat(p.avg_cycle_time_hours || 0) * 10) / 10,
      approvedCount: p.approved_count,
      breachedCount: p.breached_count,
      complianceRate: p.total_count > 0 ? Math.round(((p.total_count - p.breached_count) / p.total_count) * 100) : 0
    }));
  }
}

export const dashboardService = new DashboardService();
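The `approvalRate` and `complianceRate` fields in the mappings above are the same guarded-percentage computation. A minimal standalone sketch (the helper name `percentage` is ours, not part of the service):

```typescript
// Hypothetical helper mirroring the guarded percentage used for
// approvalRate and complianceRate: avoid division by zero, round to an int.
function percentage(part: number, total: number): number {
  return total > 0 ? Math.round((part / total) * 100) : 0;
}

// approvalRate for a department with 8 approved out of 12 requests
console.log(percentage(8, 12));      // 67
// complianceRate when 2 of 10 requests breached TAT
console.log(percentage(10 - 2, 10)); // 80
// guarded case: no requests at all
console.log(percentage(0, 0));       // 0
```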
@@ -1,5 +1,5 @@
import { tatQueue } from '../queues/tatQueue';
import { calculateDelay, addWorkingHours, addCalendarHours } from '@utils/tatTimeUtils';
import { calculateDelay, addWorkingHours, addWorkingHoursExpress } from '@utils/tatTimeUtils';
import { getTatThresholds } from './configReader.service';
import dayjs from 'dayjs';
import logger from '@utils/logger';
@@ -44,20 +44,23 @@ export class TatSchedulerService {
    let breachTime: Date;

    if (isExpress) {
      // EXPRESS: 24/7 calculation - no exclusions
      threshold1Time = addCalendarHours(now, tatDurationHours * (thresholds.first / 100)).toDate();
      threshold2Time = addCalendarHours(now, tatDurationHours * (thresholds.second / 100)).toDate();
      breachTime = addCalendarHours(now, tatDurationHours).toDate();
      logger.info(`[TAT Scheduler] Using EXPRESS mode (24/7) - no holiday/weekend exclusions`);
      // EXPRESS: All calendar days (Mon-Sun, including weekends/holidays) but working hours only (9 AM - 6 PM)
      const t1 = await addWorkingHoursExpress(now, tatDurationHours * (thresholds.first / 100));
      const t2 = await addWorkingHoursExpress(now, tatDurationHours * (thresholds.second / 100));
      const tBreach = await addWorkingHoursExpress(now, tatDurationHours);
      threshold1Time = t1.toDate();
      threshold2Time = t2.toDate();
      breachTime = tBreach.toDate();
      logger.info(`[TAT Scheduler] Using EXPRESS mode - all days, working hours only (9 AM - 6 PM)`);
    } else {
      // STANDARD: Working hours only, excludes holidays
      // STANDARD: Working days only (Mon-Fri), working hours (9 AM - 6 PM), excludes holidays
      const t1 = await addWorkingHours(now, tatDurationHours * (thresholds.first / 100));
      const t2 = await addWorkingHours(now, tatDurationHours * (thresholds.second / 100));
      const tBreach = await addWorkingHours(now, tatDurationHours);
      threshold1Time = t1.toDate();
      threshold2Time = t2.toDate();
      breachTime = tBreach.toDate();
      logger.info(`[TAT Scheduler] Using STANDARD mode - excludes holidays, weekends, non-working hours`);
      logger.info(`[TAT Scheduler] Using STANDARD mode - weekdays only, working hours (9 AM - 6 PM), excludes holidays`);
    }

    logger.info(`[TAT Scheduler] Calculating TAT milestones for request ${requestId}, level ${levelId}`);
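The three milestones scheduled above are plain fractions of the total TAT, with the thresholds expressed as percentages. A minimal sketch of just the arithmetic (the helper name `milestoneOffsets` is ours; the real code then feeds these hour offsets through `addWorkingHours`/`addWorkingHoursExpress`):

```typescript
// Hypothetical helper: milestone offsets in hours, given total TAT duration
// and percentage thresholds as used by the scheduler above.
function milestoneOffsets(
  tatDurationHours: number,
  thresholds: { first: number; second: number }
) {
  return {
    threshold1: tatDurationHours * (thresholds.first / 100),
    threshold2: tatDurationHours * (thresholds.second / 100),
    breach: tatDurationHours,
  };
}

// 48h TAT with alerts at 50% and 75%
console.log(milestoneOffsets(48, { first: 50, second: 75 }));
// { threshold1: 24, threshold2: 36, breach: 48 }
```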
@@ -1,9 +1,24 @@
import { User as UserModel } from '../models/User';
import { Op } from 'sequelize';
import { SSOUserData } from '../types/auth.types'; // Use shared type
import axios from 'axios';

// Using UserModel type directly - interface removed to avoid duplication

interface OktaUser {
  id: string;
  status: string;
  profile: {
    firstName?: string;
    lastName?: string;
    displayName?: string;
    email: string;
    login: string;
    department?: string;
    mobilePhone?: string;
  };
}

export class UserService {
  async createOrUpdateUser(ssoData: SSOUserData): Promise<UserModel> {
    // Validate required fields
@@ -78,7 +93,84 @@ export class UserService {
    });
  }

  async searchUsers(query: string, limit: number = 10, excludeUserId?: string): Promise<UserModel[]> {
  async searchUsers(query: string, limit: number = 10, excludeUserId?: string): Promise<any[]> {
    const q = (query || '').trim();
    if (!q) {
      return [];
    }

    // Get the current user's email to exclude them from results
    let excludeEmail: string | undefined;
    if (excludeUserId) {
      try {
        const currentUser = await UserModel.findByPk(excludeUserId);
        if (currentUser) {
          excludeEmail = (currentUser as any).email?.toLowerCase();
        }
      } catch (err) {
        // Ignore error - filtering will still work by userId for local search
      }
    }

    // Search Okta users
    try {
      const oktaDomain = process.env.OKTA_DOMAIN;
      const oktaApiToken = process.env.OKTA_API_TOKEN;

      if (!oktaDomain || !oktaApiToken) {
        console.error('❌ Okta credentials not configured');
        // Fallback to local DB search
        return await this.searchUsersLocal(q, limit, excludeUserId);
      }

      const response = await axios.get(`${oktaDomain}/api/v1/users`, {
        params: { q, limit: Math.min(limit, 50) },
        headers: {
          'Authorization': `SSWS ${oktaApiToken}`,
          'Accept': 'application/json'
        },
        timeout: 5000
      });

      const oktaUsers: OktaUser[] = response.data || [];

      // Transform Okta users to our format
      return oktaUsers
        .filter(u => {
          // Filter out inactive users
          if (u.status !== 'ACTIVE') return false;

          // Filter out current user by Okta ID or email
          if (excludeUserId && u.id === excludeUserId) return false;
          if (excludeEmail) {
            const userEmail = (u.profile.email || u.profile.login || '').toLowerCase();
            if (userEmail === excludeEmail) return false;
          }

          return true;
        })
        .map(u => ({
          userId: u.id,
          oktaSub: u.id,
          email: u.profile.email || u.profile.login,
          displayName: u.profile.displayName || `${u.profile.firstName || ''} ${u.profile.lastName || ''}`.trim(),
          firstName: u.profile.firstName,
          lastName: u.profile.lastName,
          department: u.profile.department,
          phone: u.profile.mobilePhone,
          isActive: true
        }));
    } catch (error: any) {
      console.error('❌ Okta user search failed:', error.message);
      // Fallback to local DB search
      return await this.searchUsersLocal(q, limit, excludeUserId);
    }
  }

  /**
   * Fallback: Search users in local database
   */
  private async searchUsersLocal(query: string, limit: number = 10, excludeUserId?: string): Promise<UserModel[]> {
    const q = (query || '').trim();
    if (!q) {
      return [];
@@ -100,4 +192,66 @@ export class UserService {
      limit: Math.min(Math.max(limit || 10, 1), 50),
    });
  }

  /**
   * Ensure user exists in database (create if not exists)
   * Used when tagging users from Okta search results
   */
  async ensureUserExists(oktaUserData: {
    userId: string;
    email: string;
    displayName?: string;
    firstName?: string;
    lastName?: string;
    department?: string;
    phone?: string;
  }): Promise<UserModel> {
    const email = oktaUserData.email.toLowerCase();

    // Check if user already exists
    let user = await UserModel.findOne({
      where: {
        [Op.or]: [
          { email },
          { oktaSub: oktaUserData.userId }
        ]
      }
    });

    if (user) {
      // Update existing user with latest info from Okta
      await user.update({
        oktaSub: oktaUserData.userId,
        email,
        firstName: oktaUserData.firstName || user.firstName,
        lastName: oktaUserData.lastName || user.lastName,
        displayName: oktaUserData.displayName || user.displayName,
        department: oktaUserData.department || user.department,
        phone: oktaUserData.phone || user.phone,
        isActive: true,
        updatedAt: new Date()
      });
      return user;
    }

    // Create new user
    user = await UserModel.create({
      oktaSub: oktaUserData.userId,
      email,
      employeeId: null, // Will be updated on first login
      firstName: oktaUserData.firstName || null,
      lastName: oktaUserData.lastName || null,
      displayName: oktaUserData.displayName || email.split('@')[0],
      department: oktaUserData.department || null,
      designation: null,
      phone: oktaUserData.phone || null,
      isActive: true,
      isAdmin: false,
      lastLogin: undefined, // Not logged in yet, just created for tagging
      createdAt: new Date(),
      updatedAt: new Date()
    });

    return user;
  }
}
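The `.map()` transform above derives a display name with a chain of fallbacks. A standalone restatement of just that fallback logic (the function name `displayNameOf` and the trimmed-down `Profile` shape are ours):

```typescript
// Hypothetical standalone version of the display-name fallback used when
// mapping Okta search results: prefer profile.displayName, else join the
// first/last names and trim any stray space when one is missing.
interface Profile { firstName?: string; lastName?: string; displayName?: string; }

function displayNameOf(p: Profile): string {
  return p.displayName || `${p.firstName || ''} ${p.lastName || ''}`.trim();
}

console.log(displayNameOf({ displayName: 'Jane D.' }));             // Jane D.
console.log(displayNameOf({ firstName: 'Jane', lastName: 'Doe' })); // Jane Doe
console.log(displayNameOf({ lastName: 'Doe' }));                    // Doe
```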
@@ -491,28 +491,44 @@ export class WorkflowService {
    const approvals = await ApprovalLevel.findAll({
      where: { requestId: (wf as any).requestId },
      order: [['levelNumber', 'ASC']],
      attributes: ['levelId', 'levelNumber', 'levelName', 'approverId', 'approverEmail', 'approverName', 'tatHours', 'tatDays', 'status']
      attributes: ['levelId', 'levelNumber', 'levelName', 'approverId', 'approverEmail', 'approverName', 'tatHours', 'tatDays', 'status', 'levelStartTime', 'tatStartTime']
    });

    const totalTat = Number((wf as any).totalTatHours || 0);
    let percent = 0;
    let remainingText = '';
    if ((wf as any).submissionDate && totalTat > 0) {
      const startedAt = new Date((wf as any).submissionDate);
      const now = new Date();
      const elapsedHrs = Math.max(0, (now.getTime() - startedAt.getTime()) / (1000 * 60 * 60));
      percent = Math.min(100, Math.round((elapsedHrs / totalTat) * 100));
      const remaining = Math.max(0, totalTat - elapsedHrs);
      const days = Math.floor(remaining / 24);
      const hours = Math.floor(remaining % 24);
      remainingText = days > 0 ? `${days} days ${hours} hours remaining` : `${hours} hours remaining`;
    }

    // Calculate total TAT hours from all approvals
    const totalTatHours = approvals.reduce((sum: number, a: any) => {
      return sum + Number(a.tatHours || 0);
    }, 0);

    const priority = ((wf as any).priority || 'standard').toString().toLowerCase();

    // Calculate OVERALL request SLA (from submission to total deadline)
    const { calculateSLAStatus } = require('@utils/tatTimeUtils');
    const submissionDate = (wf as any).submissionDate;
    let overallSLA = null;

    if (submissionDate && totalTatHours > 0) {
      try {
        overallSLA = await calculateSLAStatus(submissionDate, totalTatHours, priority);
      } catch (error) {
        logger.error('[Workflow] Error calculating overall SLA:', error);
      }
    }

    // Calculate current level SLA (if there's an active level)
    let currentLevelSLA = null;
    if (currentLevel) {
      const levelStartTime = (currentLevel as any).levelStartTime || (currentLevel as any).tatStartTime;
      const levelTatHours = Number((currentLevel as any).tatHours || 0);

      if (levelStartTime && levelTatHours > 0) {
        try {
          currentLevelSLA = await calculateSLAStatus(levelStartTime, levelTatHours, priority);
        } catch (error) {
          logger.error('[Workflow] Error calculating current level SLA:', error);
        }
      }
    }

    return {
      requestId: (wf as any).requestId,
      requestNumber: (wf as any).requestNumber,
@@ -529,6 +545,9 @@ export class WorkflowService {
        userId: (currentLevel as any).approverId,
        email: (currentLevel as any).approverEmail,
        name: (currentLevel as any).approverName,
        levelStartTime: (currentLevel as any).levelStartTime,
        tatHours: (currentLevel as any).tatHours,
        sla: currentLevelSLA, // ← Backend-calculated SLA for current level
      } : null,
      approvals: approvals.map((a: any) => ({
        levelId: a.levelId,
@@ -539,9 +558,18 @@ export class WorkflowService {
        approverName: a.approverName,
        tatHours: a.tatHours,
        tatDays: a.tatDays,
        status: a.status
        status: a.status,
        levelStartTime: a.levelStartTime || a.tatStartTime
      })),
      sla: { percent, remainingText },
      sla: overallSLA || {
        elapsedHours: 0,
        remainingHours: totalTatHours,
        percentageUsed: 0,
        remainingText: `${totalTatHours}h remaining`,
        isPaused: false,
        status: 'on_track'
      }, // ← Overall request SLA (all levels combined)
      currentLevelSLA: currentLevelSLA, // ← Also provide at root level for easy access
    };
  }));
  return data;
@@ -1004,7 +1032,71 @@ export class WorkflowService {
      tatAlerts = [];
    }

    return { workflow, approvals, participants, documents, activities, summary, tatAlerts };
    // Recalculate SLA for all approval levels with comprehensive data
    const priority = ((workflow as any)?.priority || 'standard').toString().toLowerCase();
    const { calculateSLAStatus } = require('@utils/tatTimeUtils');

    const updatedApprovals = await Promise.all(approvals.map(async (approval: any) => {
      const status = (approval.status || '').toString().toUpperCase();
      const approvalData = approval.toJSON();

      // Calculate SLA for active approvals (pending/in-progress)
      if (status === 'PENDING' || status === 'IN_PROGRESS') {
        const levelStartTime = approval.levelStartTime || approval.tatStartTime || approval.createdAt;
        const tatHours = Number(approval.tatHours || 0);

        if (levelStartTime && tatHours > 0) {
          try {
            // Get comprehensive SLA status from backend utility
            const slaData = await calculateSLAStatus(levelStartTime, tatHours, priority);

            // Return updated approval with comprehensive SLA data
            return {
              ...approvalData,
              elapsedHours: slaData.elapsedHours,
              remainingHours: slaData.remainingHours,
              tatPercentageUsed: slaData.percentageUsed,
              sla: slaData // ← Full SLA object with deadline, isPaused, status, etc.
            };
          } catch (error) {
            logger.error(`[Workflow] Error calculating SLA for level ${approval.levelNumber}:`, error);
            // Return with fallback values if SLA calculation fails
            return {
              ...approvalData,
              sla: {
                elapsedHours: 0,
                remainingHours: tatHours,
                percentageUsed: 0,
                isPaused: false,
                status: 'on_track',
                remainingText: `${tatHours}h`,
                elapsedText: '0h'
              }
            };
          }
        }
      }

      // For completed/rejected levels, return as-is (already has final values from database)
      return approvalData;
    }));

    // Calculate overall request SLA
    const submissionDate = (workflow as any).submissionDate;
    const totalTatHours = updatedApprovals.reduce((sum, a) => sum + Number(a.tatHours || 0), 0);
    let overallSLA = null;

    if (submissionDate && totalTatHours > 0) {
      overallSLA = await calculateSLAStatus(submissionDate, totalTatHours, priority);
    }

    // Update summary to include comprehensive SLA
    const updatedSummary = {
      ...summary,
      sla: overallSLA || summary.sla
    };

    return { workflow, approvals: updatedApprovals, participants, documents, activities, summary: updatedSummary, tatAlerts };
  } catch (error) {
    logger.error(`Failed to get workflow details ${requestId}:`, error);
    throw new Error('Failed to get workflow details');
@@ -39,6 +39,8 @@ async function loadWorkingHoursCache(): Promise<void> {
    };
    workingHoursCacheExpiry = dayjs().add(5, 'minute').toDate();

    console.log(`[TAT Utils] ✅ Working hours loaded from admin config: ${hours.startHour}:00 - ${hours.endHour}:00 (Days: ${startDay}-${endDay})`);

  } catch (error) {
    console.error('[TAT] Error loading working hours:', error);
    // Fallback to default values from TAT_CONFIG
@@ -48,7 +50,7 @@ async function loadWorkingHoursCache(): Promise<void> {
      startDay: TAT_CONFIG.WORK_START_DAY,
      endDay: TAT_CONFIG.WORK_END_DAY
    };
    console.log('[TAT Utils] Using fallback working hours from TAT_CONFIG');
    console.log(`[TAT Utils] ⚠️ Using fallback working hours from system config: ${TAT_CONFIG.WORK_START_HOUR}:00 - ${TAT_CONFIG.WORK_END_HOUR}:00`);
  }
}

@@ -144,6 +146,37 @@ export async function addWorkingHours(start: Date | string, hoursToAdd: number):
  await loadWorkingHoursCache();
  await loadHolidaysCache();

  const config = workingHoursCache || {
    startHour: TAT_CONFIG.WORK_START_HOUR,
    endHour: TAT_CONFIG.WORK_END_HOUR,
    startDay: TAT_CONFIG.WORK_START_DAY,
    endDay: TAT_CONFIG.WORK_END_DAY
  };

  // If start time is before working hours or outside working days/holidays,
  // advance to the next working hour start (reset to clean hour)
  const originalStart = current.format('YYYY-MM-DD HH:mm:ss');
  const wasOutsideWorkingHours = !isWorkingTime(current);

  while (!isWorkingTime(current)) {
    const hour = current.hour();
    const day = current.day();

    // If before work start hour on a working day, jump to work start hour
    if (day >= config.startDay && day <= config.endDay && !isHoliday(current) && hour < config.startHour) {
      current = current.hour(config.startHour);
    } else {
      // After working hours or non-working day - advance to next working period
      current = current.add(1, 'hour');
    }
  }

  // If start time was outside working hours, reset to clean work start time (no minutes)
  if (wasOutsideWorkingHours) {
    current = current.minute(0).second(0).millisecond(0);
    console.log(`[TAT Utils] Start time ${originalStart} was outside working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
  }

  let remaining = hoursToAdd;

  while (remaining > 0) {
@@ -157,9 +190,62 @@
}

/**
 * Add calendar hours (EXPRESS mode - 24/7, no exclusions)
 * For EXPRESS priority requests - counts all hours including holidays, weekends, non-working hours
 * In TEST MODE: 1 hour = 1 minute for faster testing
 * Add working hours for EXPRESS priority
 * Includes ALL days (weekends, holidays) but only counts working hours (9 AM - 6 PM)
 * @param start - Start date/time
 * @param hoursToAdd - Hours to add
 * @returns Deadline date
 */
export async function addWorkingHoursExpress(start: Date | string, hoursToAdd: number): Promise<Dayjs> {
  let current = dayjs(start);

  // In test mode, convert hours to minutes for faster testing
  if (isTestMode()) {
    return current.add(hoursToAdd, 'minute');
  }

  // Load configuration
  await loadWorkingHoursCache();

  const config = workingHoursCache || {
    startHour: TAT_CONFIG.WORK_START_HOUR,
    endHour: TAT_CONFIG.WORK_END_HOUR,
    startDay: TAT_CONFIG.WORK_START_DAY,
    endDay: TAT_CONFIG.WORK_END_DAY
  };

  // If start time is outside working hours, advance to work start hour (reset to clean hour)
  const originalStart = current.format('YYYY-MM-DD HH:mm:ss');
  const currentHour = current.hour();
  if (currentHour < config.startHour) {
    // Before working hours - reset to clean work start
    current = current.hour(config.startHour).minute(0).second(0).millisecond(0);
    console.log(`[TAT Utils Express] Start time ${originalStart} was before working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
  } else if (currentHour >= config.endHour) {
    // After working hours - reset to clean start of next day
    current = current.add(1, 'day').hour(config.startHour).minute(0).second(0).millisecond(0);
    console.log(`[TAT Utils Express] Start time ${originalStart} was after working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
  }

  let remaining = hoursToAdd;

  while (remaining > 0) {
    current = current.add(1, 'hour');
    const hour = current.hour();

    // For express: count ALL days (including weekends/holidays)
    // But only during working hours (configured start - end hour)
    if (hour >= config.startHour && hour < config.endHour) {
      remaining -= 1;
    }
  }

  return current;
}

/**
 * Add calendar hours (24/7, no exclusions) - DEPRECATED
 * @deprecated Use addWorkingHoursExpress() for express priority
 */
export function addCalendarHours(start: Date | string, hoursToAdd: number): Dayjs {
  let current = dayjs(start);
@@ -169,7 +255,7 @@ export function addCalendarHours(start: Date | string, hoursToAdd: number): Dayj
    return current.add(hoursToAdd, 'minute');
  }

  // Express mode: Simply add hours without any exclusions (24/7)
  // Simply add hours without any exclusions (24/7)
  return current.add(hoursToAdd, 'hour');
}

@@ -194,6 +280,32 @@ export function addWorkingHoursSync(start: Date | string, hoursToAdd: number): D
    endDay: TAT_CONFIG.WORK_END_DAY
  };

  // If start time is before working hours or outside working days,
  // advance to the next working hour start (reset to clean hour)
  const originalStart = current.format('YYYY-MM-DD HH:mm:ss');
  let hour = current.hour();
  let day = current.day();

  // Check if originally outside working hours
  const wasOutsideWorkingHours = !(day >= config.startDay && day <= config.endDay && hour >= config.startHour && hour < config.endHour);

  // If before work start hour on a working day, jump to work start hour
  if (day >= config.startDay && day <= config.endDay && hour < config.startHour) {
    current = current.hour(config.startHour);
  } else {
    // Advance to next working hour
    while (!(day >= config.startDay && day <= config.endDay && hour >= config.startHour && hour < config.endHour)) {
      current = current.add(1, 'hour');
      day = current.day();
      hour = current.hour();
    }
  }

  // If start time was outside working hours, reset to clean work start time
  if (wasOutsideWorkingHours) {
    current = current.minute(0).second(0).millisecond(0);
  }

  let remaining = hoursToAdd;

  while (remaining > 0) {
@@ -220,11 +332,15 @@ export async function initializeHolidaysCache(): Promise<void> {

/**
 * Clear working hours cache (call when admin updates configuration)
 * Also immediately reloads the cache with new values
 */
export function clearWorkingHoursCache(): void {
export async function clearWorkingHoursCache(): Promise<void> {
  workingHoursCache = null;
  workingHoursCacheExpiry = null;
  // Cache cleared
  console.log('[TAT Utils] Working hours cache cleared - reloading from database...');

  // Immediately reload the cache with new values
  await loadWorkingHoursCache();
}

/**
@@ -269,3 +385,268 @@ export function calculateDelay(targetDate: Date): number {
  return delay > 0 ? delay : 0; // Return 0 if target is in the past
}

/**
 * Check if current time is within working hours
 * @returns true if currently in working hours, false if paused
 */
export async function isCurrentlyWorkingTime(priority: string = 'standard'): Promise<boolean> {
  await loadWorkingHoursCache();
  await loadHolidaysCache();

  const now = dayjs();

  // In test mode, always working time
  if (isTestMode()) {
    return true;
  }

  const config = workingHoursCache || {
    startHour: TAT_CONFIG.WORK_START_HOUR,
    endHour: TAT_CONFIG.WORK_END_HOUR,
    startDay: TAT_CONFIG.WORK_START_DAY,
    endDay: TAT_CONFIG.WORK_END_DAY
  };

  const day = now.day();
  const hour = now.hour();
  const dateStr = now.format('YYYY-MM-DD');

  // Check working hours
  const isWorkingHour = hour >= config.startHour && hour < config.endHour;

  // For express: include weekends, for standard: exclude weekends
  const isWorkingDay = priority === 'express'
    ? true
    : (day >= config.startDay && day <= config.endDay);

  // Check if not a holiday
  const isNotHoliday = !holidaysCache.has(dateStr);

  return isWorkingDay && isWorkingHour && isNotHoliday;
}

/**
 * Calculate comprehensive SLA status for an approval level
 * Returns all data needed for frontend display
 */
export async function calculateSLAStatus(
  levelStartTime: Date | string,
  tatHours: number,
  priority: string = 'standard'
) {
  await loadWorkingHoursCache();
  await loadHolidaysCache();

  const startDate = dayjs(levelStartTime);
  const now = dayjs();

  // Calculate elapsed working hours
  const elapsedHours = await calculateElapsedWorkingHours(levelStartTime, now.toDate(), priority);
  const remainingHours = Math.max(0, tatHours - elapsedHours);
  const percentageUsed = tatHours > 0 ? Math.min(100, Math.round((elapsedHours / tatHours) * 100)) : 0;

  // Calculate deadline based on priority
  // EXPRESS: All days (Mon-Sun) but working hours only (9 AM - 6 PM)
  // STANDARD: Weekdays only (Mon-Fri) and working hours (9 AM - 6 PM)
  const deadline = priority === 'express'
    ? (await addWorkingHoursExpress(levelStartTime, tatHours)).toDate()
    : (await addWorkingHours(levelStartTime, tatHours)).toDate();

  // Check if currently paused (outside working hours)
  const isPaused = !(await isCurrentlyWorkingTime(priority));

  // Determine status
  let status: 'on_track' | 'approaching' | 'critical' | 'breached' = 'on_track';
  if (percentageUsed >= 100) {
    status = 'breached';
  } else if (percentageUsed >= 80) {
    status = 'critical';
  } else if (percentageUsed >= 60) {
    status = 'approaching';
  }

  // Format remaining time
  const formatTime = (hours: number) => {
    if (hours <= 0) return '0h';
    const days = Math.floor(hours / 8); // 8 working hours per day
    const remainingHrs = Math.floor(hours % 8);
    const minutes = Math.round((hours % 1) * 60);

    if (days > 0) {
      return minutes > 0
        ? `${days}d ${remainingHrs}h ${minutes}m`
        : `${days}d ${remainingHrs}h`;
    }
    return minutes > 0 ? `${remainingHrs}h ${minutes}m` : `${remainingHrs}h`;
  };

  return {
    elapsedHours: Math.round(elapsedHours * 100) / 100,
    remainingHours: Math.round(remainingHours * 100) / 100,
    percentageUsed,
    deadline: deadline.toISOString(),
    isPaused,
    status,
    remainingText: formatTime(remainingHours),
    elapsedText: formatTime(elapsedHours)
  };
}
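The status banding inside `calculateSLAStatus` can be restated as a small pure function, which makes the thresholds easy to test in isolation (the name `slaStatus` is ours; note the real code caps `percentageUsed` at 100, so 100 is the breach value):

```typescript
// Standalone restatement of the banding used above:
// >= 100% breached, >= 80% critical, >= 60% approaching, else on_track.
type SlaStatus = 'on_track' | 'approaching' | 'critical' | 'breached';

function slaStatus(percentageUsed: number): SlaStatus {
  if (percentageUsed >= 100) return 'breached';
  if (percentageUsed >= 80) return 'critical';
  if (percentageUsed >= 60) return 'approaching';
  return 'on_track';
}

console.log(slaStatus(45));  // on_track
console.log(slaStatus(60));  // approaching
console.log(slaStatus(95));  // critical
console.log(slaStatus(100)); // breached
```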

/**
 * Calculate elapsed working hours between two dates
 * Uses minute-by-minute precision to accurately count only working time
 * @param startDate - Start time (when level was assigned)
 * @param endDate - End time (defaults to now)
 * @param priority - 'express' or 'standard' (express includes weekends, standard excludes)
 * @returns Elapsed working hours (with decimal precision)
 */
export async function calculateElapsedWorkingHours(
  startDate: Date | string,
  endDateParam: Date | string | null = null,
  priority: string = 'standard'
): Promise<number> {
  await loadWorkingHoursCache();
  await loadHolidaysCache();

  let start = dayjs(startDate);
  const end = dayjs(endDateParam || new Date());

  // In test mode, use raw minutes for 1:1 conversion
  if (isTestMode()) {
    return end.diff(start, 'minute') / 60;
  }

  const config = workingHoursCache || {
    startHour: TAT_CONFIG.WORK_START_HOUR,
    endHour: TAT_CONFIG.WORK_END_HOUR,
    startDay: TAT_CONFIG.WORK_START_DAY,
    endDay: TAT_CONFIG.WORK_END_DAY
  };

  // CRITICAL FIX: If start time is outside working hours, advance to next working period
  // This ensures we only count elapsed time when TAT is actually running
  const originalStart = start.format('YYYY-MM-DD HH:mm:ss');

  // For standard priority, check working days and hours
  if (priority !== 'express') {
    const wasOutsideWorkingHours = !isWorkingTime(start);

    while (!isWorkingTime(start)) {
      const hour = start.hour();
      const day = start.day();

      // If before work start hour on a working day, jump to work start hour
      if (day >= config.startDay && day <= config.endDay && !isHoliday(start) && hour < config.startHour) {
        start = start.hour(config.startHour);
      } else {
        // Otherwise, advance by 1 hour and check again
        start = start.add(1, 'hour');
      }
    }

    // If start time was outside working hours, reset to clean work start time
    if (wasOutsideWorkingHours) {
      start = start.minute(0).second(0).millisecond(0);
    }
  } else {
    // For express priority, only check working hours (not days)
    const hour = start.hour();
    if (hour < config.startHour) {
      // Before hours - reset to clean start
      start = start.hour(config.startHour).minute(0).second(0).millisecond(0);
    } else if (hour >= config.endHour) {
      // After hours - reset to clean start of next day
      start = start.add(1, 'day').hour(config.startHour).minute(0).second(0).millisecond(0);
    }
  }

  // Log if we advanced the start time for elapsed calculation
  if (start.format('YYYY-MM-DD HH:mm:ss') !== originalStart) {
    console.log(`[TAT Utils] Elapsed time calculation: Start ${originalStart} was outside working hours, advanced to ${start.format('YYYY-MM-DD HH:mm:ss')}`);
  }

  // If end time is before adjusted start time, return 0 (TAT hasn't started yet)
  if (end.isBefore(start)) {
    console.log(`[TAT Utils] Current time is before TAT start time - elapsed hours: 0`);
    return 0;
  }

  let totalWorkingMinutes = 0;
  let currentDate = start.startOf('day');
  const endDay = end.startOf('day');

  // Process each day
  while (currentDate.isBefore(endDay) || currentDate.isSame(endDay, 'day')) {
    const dateStr = currentDate.format('YYYY-MM-DD');
    const dayOfWeek = currentDate.day();

    // Check if this day is a working day
    const isWorkingDay = priority === 'express'
      ? true
      : (dayOfWeek >= config.startDay && dayOfWeek <= config.endDay);
    const isNotHoliday = !holidaysCache.has(dateStr);

    if (isWorkingDay && isNotHoliday) {
      // Determine the working period for this day
      let dayStart = currentDate.hour(config.startHour).minute(0).second(0);
      let dayEnd = currentDate.hour(config.endHour).minute(0).second(0);

      // Adjust for first day (might start mid-day)
      if (currentDate.isSame(start, 'day')) {
        if (start.hour() >= config.endHour) {
          // Started after work hours - skip this day
          currentDate = currentDate.add(1, 'day');
          continue;
        } else if (start.hour() >= config.startHour) {
          // Started during work hours - use actual start time
          dayStart = start;
        }
        // If before work hours, dayStart is already correct (work start time)
      }

      // Adjust for last day (might end mid-day)
      if (currentDate.isSame(end, 'day')) {
        if (end.hour() < config.startHour) {
          // Ended before work hours - skip this day
          currentDate = currentDate.add(1, 'day');
          continue;
        } else if (end.hour() < config.endHour) {
          // Ended during work hours - use actual end time
          dayEnd = end;
        }
        // If after work hours, dayEnd is already correct (work end time)
      }

      // Calculate minutes worked this day
      if (dayStart.isBefore(dayEnd)) {
        const minutesThisDay = dayEnd.diff(dayStart, 'minute');
        totalWorkingMinutes += minutesThisDay;
      }
    }

    currentDate = currentDate.add(1, 'day');

    // Safety check
    if (currentDate.diff(start, 'day') > 730) { // 2 years
      console.error('[TAT] Safety break - exceeded 2 years');
      break;
    }
  }

  const hours = totalWorkingMinutes / 60;

  // Warning log for unusually high values
  if (hours > 16) { // More than 2 working days
    console.warn('[TAT] High elapsed hours detected:', {
      startDate: start.format('YYYY-MM-DD HH:mm'),
      endDate: end.format('YYYY-MM-DD HH:mm'),
      priority,
      elapsedHours: hours,
      workingHoursConfig: config,
      calendarHours: end.diff(start, 'hour')
    });
  }

  return hours;
|
||||
}
|
||||
|
||||
|
||||