Commit c7c9b62358 ("dashboard api created redis new url added")
Parent commit: c76b799cf7
docs/REDIS_SETUP_WINDOWS.md
@ -1,45 +1,103 @@

# Redis Setup for Windows

## ⚠️ IMPORTANT: Redis Version Requirements

**BullMQ requires Redis version 5.0.0 or higher.**

❌ **DO NOT USE**: Microsoft Archive Redis (https://github.com/microsoftarchive/redis/releases)
- This is **outdated** and only provides Redis 3.x
- **Version 3.0.504 is NOT compatible** with BullMQ
- You will get the error: `Redis version needs to be greater or equal than 5.0.0`
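If you are not sure which server you are currently running, the same version probe that the TAT worker performs later in this commit can be run standalone; a minimal sketch using ioredis (the client this backend already uses):

```typescript
import IORedis from 'ioredis';

// Standalone probe: print the server's redis_version and exit
const redis = new IORedis(process.env.REDIS_URL || 'redis://localhost:6379');

redis.info('server').then((info) => {
  const match = info.match(/redis_version:(.+)/);
  console.log('Redis version:', match ? match[1].trim() : 'unknown');
  return redis.quit();
});
```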
✅ **USE ONE OF THESE METHODS INSTEAD**:

---

## Method 1: Using Memurai (Recommended for Windows) ⭐

Memurai is a **Redis-compatible** server built specifically for Windows with full Redis 6.x+ compatibility.

### Why Memurai?
- ✅ **Native Windows support** - Runs as a Windows service
- ✅ **Redis 6.x+ compatible** - Full feature support
- ✅ **Easy installation** - Just install and run
- ✅ **Free for development** - Free tier available
- ✅ **Production-ready** - Used in enterprise environments

### Installation Steps:

1. **Download Memurai**:
   - Visit: https://www.memurai.com/get-memurai
   - Download the **Developer Edition** (free)

2. **Install**:
   - Run the installer (`Memurai-*.exe`)
   - Choose default options
   - Memurai will install as a Windows service and start automatically

3. **Verify Installation**:
   ```powershell
   # Check if service is running
   Get-Service Memurai
   # Should show: Running

   # Test connection
   memurai-cli ping
   # Should return: PONG

   # Check version (should be 6.x or 7.x)
   memurai-cli --version
   ```

4. **Configuration**:
   - Default port: **6379**
   - Connection string: `redis://localhost:6379`
   - Service runs automatically on Windows startup
   - No additional configuration needed for development

## Method 2: Using Docker Desktop (Alternative) 🐳

If you have Docker Desktop installed, this is the easiest way to get Redis 7.x.

### Installation Steps:

1. **Install Docker Desktop** (if not already installed):
   - Download from: https://www.docker.com/products/docker-desktop
   - Install and start Docker Desktop

2. **Start Redis Container**:
   ```powershell
   # Run Redis 7.x in a container
   docker run -d --name redis-tat -p 6379:6379 redis:7-alpine

   # Or, if you want it to restart automatically:
   docker run -d --name redis-tat -p 6379:6379 --restart unless-stopped redis:7-alpine
   ```

3. **Verify**:
   ```powershell
   # Check if the container is running
   docker ps | Select-String redis

   # Check Redis version
   docker exec redis-tat redis-server --version
   # Should show: Redis server v=7.x.x

   # Test connection
   docker exec redis-tat redis-cli ping
   # Should return: PONG
   ```

4. **Stop/Start Redis**:
   ```powershell
   # Stop Redis
   docker stop redis-tat

   # Start Redis
   docker start redis-tat

   # Remove the container (if needed)
   docker rm -f redis-tat
   ```

## Method 3: Using WSL2 (Windows Subsystem for Linux)
@ -76,38 +134,191 @@ Test-NetConnection -ComputerName localhost -Port 6379

## Troubleshooting

### ❌ Error: "Redis version needs to be greater or equal than 5.0.0 Current: 3.0.504"

**Problem**: You're using Microsoft Archive Redis (version 3.x), which is **too old** for BullMQ.

**Solution**:

1. **Stop the old Redis**:
   ```powershell
   # Find and stop the old Redis process
   Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
   ```

2. **Uninstall/Remove old Redis** (if installed as a service):
   ```powershell
   # Check if it is running as a service
   Get-Service | Where-Object {$_.Name -like "*redis*"}
   ```

3. **Install one of the recommended methods**:
   - **Option A**: Install Memurai (Recommended) - See Method 1 above
   - **Option B**: Use Docker - See Method 2 above
   - **Option C**: Use WSL2 - See Method 3 above

4. **Verify the new Redis version**:
   ```powershell
   # For Memurai
   memurai-cli --version
   # Should show: 6.x or 7.x

   # For Docker
   docker exec redis-tat redis-server --version
   # Should show: Redis server v=7.x.x
   ```

5. **Restart your backend server**:
   ```powershell
   # The TAT worker will now detect the correct Redis version
   npm run dev
   ```
### Port Already in Use
```powershell
# Check what's using port 6379
netstat -ano | findstr :6379

# Kill the process if needed (replace <PID> with the actual process ID)
taskkill /PID <PID> /F

# Or, if the old Redis is still running, stop it:
Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
```
### Service Not Starting (Memurai)
```powershell
# Start Memurai service
net start Memurai

# Check service status
Get-Service Memurai

# Check logs
Get-EventLog -LogName Application -Source Memurai -Newest 10

# Restart service
Restart-Service Memurai
```
### Docker Container Not Starting
```powershell
# Check Docker is running
docker ps

# Check Redis container logs
docker logs redis-tat

# Restart container
docker restart redis-tat

# Remove and recreate if needed
docker rm -f redis-tat
docker run -d --name redis-tat -p 6379:6379 redis:7-alpine
```
### Cannot Connect to Redis
```powershell
# Test connection
Test-NetConnection -ComputerName localhost -Port 6379

# For Memurai
memurai-cli ping

# For Docker
docker exec redis-tat redis-cli ping
```
## Configuration

### Environment Variable

Add to your `.env` file:
```env
REDIS_URL=redis://localhost:6379
```

### Default Settings
- **Port**: `6379`
- **Host**: `localhost`
- **Connection String**: `redis://localhost:6379`
- No authentication required for local development
- Default configuration works out of the box
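For reference, the backend consumes these variables roughly as shown below; a minimal sketch of the connection setup this commit adds to the TAT queue and worker, with the full option list abbreviated:

```typescript
import IORedis from 'ioredis';

// REDIS_URL (and the optional REDIS_PASSWORD) come from .env
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
const redisPassword = process.env.REDIS_PASSWORD || undefined;

const connection = new IORedis(redisUrl, {
  maxRetriesPerRequest: null,                          // required by BullMQ
  lazyConnect: true,                                   // connect explicitly below
  ...(redisPassword ? { password: redisPassword } : {}) // only when auth is configured
});

connection.connect().then(() => console.log('Redis reachable'));
```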
## Verification After Setup

After installing Redis, verify it's working:

```powershell
# 1. Check Redis version (must be 5.0+)
# For Memurai:
memurai-cli --version

# For Docker:
docker exec redis-tat redis-server --version

# 2. Test connection
# For Memurai:
memurai-cli ping
# Expected: PONG

# For Docker:
docker exec redis-tat redis-cli ping
# Expected: PONG

# 3. Check if backend can connect
# Start your backend server and check logs:
npm run dev

# Look for:
# [TAT Queue] Connected to Redis
# [TAT Worker] Connected to Redis at redis://127.0.0.1:6379
# [TAT Worker] Redis version: 7.x.x (or 6.x.x)
# [TAT Worker] Worker is ready and listening for jobs
```
## Quick Fix: Migrating from Old Redis

If you already installed Microsoft Archive Redis (3.x), follow these steps:

1. **Stop old Redis**:
   ```powershell
   # Close the PowerShell window running redis-server.exe
   # Or kill the process:
   Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
   ```

2. **Choose a new method** (recommended: Memurai or Docker)

3. **Install and verify** (see methods above)

4. **Update .env** (if needed):
   ```env
   REDIS_URL=redis://localhost:6379
   ```

5. **Restart backend**:
   ```powershell
   npm run dev
   ```
## Production Considerations

- ✅ Use Redis authentication in production
- ✅ Configure persistence (RDB/AOF)
- ✅ Set up monitoring and alerts
- ✅ Consider Redis Cluster for high availability
- ✅ Use a managed Redis service (Redis Cloud, AWS ElastiCache, etc.)

---

## Summary: Recommended Setup for Windows

| Method | Ease of Setup | Performance | Recommended For |
|--------|---------------|-------------|-----------------|
| **Memurai** ⭐ | ⭐⭐⭐⭐⭐ Very Easy | ⭐⭐⭐⭐⭐ Excellent | **Most Users** |
| **Docker** | ⭐⭐⭐⭐ Easy | ⭐⭐⭐⭐⭐ Excellent | Docker Users |
| **WSL2** | ⭐⭐⭐ Moderate | ⭐⭐⭐⭐⭐ Excellent | Linux Users |
| ❌ **Microsoft Archive Redis** | ❌ Don't Use | ❌ Too Old | **None - Outdated** |

**⭐ Recommended**: **Memurai** for the easiest Windows-native setup, or **Docker** if you already use Docker Desktop.
src/controllers/dashboard.controller.ts (new file, 264 lines)
@ -0,0 +1,264 @@

```typescript
import { Request, Response } from 'express';
import { DashboardService } from '../services/dashboard.service';
import logger from '@utils/logger';

export class DashboardController {
  private dashboardService: DashboardService;

  constructor() {
    this.dashboardService = new DashboardService();
  }

  /**
   * Get all KPI metrics for dashboard
   */
  async getKPIs(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const kpis = await this.dashboardService.getKPIs(userId, dateRange);

      res.json({
        success: true,
        data: kpis
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching KPIs:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch dashboard KPIs'
      });
    }
  }

  /**
   * Get request volume and status statistics
   */
  async getRequestStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const stats = await this.dashboardService.getRequestStats(userId, dateRange);

      res.json({
        success: true,
        data: stats
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching request stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch request statistics'
      });
    }
  }

  /**
   * Get TAT efficiency metrics
   */
  async getTATEfficiency(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const efficiency = await this.dashboardService.getTATEfficiency(userId, dateRange);

      res.json({
        success: true,
        data: efficiency
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching TAT efficiency:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch TAT efficiency metrics'
      });
    }
  }

  /**
   * Get approver load statistics
   */
  async getApproverLoad(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const load = await this.dashboardService.getApproverLoad(userId, dateRange);

      res.json({
        success: true,
        data: load
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching approver load:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch approver load statistics'
      });
    }
  }

  /**
   * Get engagement and quality metrics
   */
  async getEngagementStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const engagement = await this.dashboardService.getEngagementStats(userId, dateRange);

      res.json({
        success: true,
        data: engagement
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching engagement stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch engagement statistics'
      });
    }
  }

  /**
   * Get AI insights and closure metrics
   */
  async getAIInsights(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const insights = await this.dashboardService.getAIInsights(userId, dateRange);

      res.json({
        success: true,
        data: insights
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching AI insights:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch AI insights'
      });
    }
  }

  /**
   * Get recent activity feed
   */
  async getRecentActivity(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const limit = Number(req.query.limit || 10);

      const activities = await this.dashboardService.getRecentActivity(userId, limit);

      res.json({
        success: true,
        data: activities
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching recent activity:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch recent activity'
      });
    }
  }

  /**
   * Get critical/high priority requests
   */
  async getCriticalRequests(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;

      const criticalRequests = await this.dashboardService.getCriticalRequests(userId);

      res.json({
        success: true,
        data: criticalRequests
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching critical requests:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch critical requests'
      });
    }
  }

  /**
   * Get upcoming deadlines
   */
  async getUpcomingDeadlines(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const limit = Number(req.query.limit || 5);

      const deadlines = await this.dashboardService.getUpcomingDeadlines(userId, limit);

      res.json({
        success: true,
        data: deadlines
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching upcoming deadlines:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch upcoming deadlines'
      });
    }
  }

  /**
   * Get department-wise statistics
   */
  async getDepartmentStats(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const stats = await this.dashboardService.getDepartmentStats(userId, dateRange);

      res.json({
        success: true,
        data: stats
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching department stats:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch department statistics'
      });
    }
  }

  /**
   * Get priority distribution statistics
   */
  async getPriorityDistribution(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;

      const distribution = await this.dashboardService.getPriorityDistribution(userId, dateRange);

      res.json({
        success: true,
        data: distribution
      });
    } catch (error) {
      logger.error('[Dashboard] Error fetching priority distribution:', error);
      res.status(500).json({
        success: false,
        error: 'Failed to fetch priority distribution'
      });
    }
  }
}
```
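Every handler above follows the same try/catch envelope around a single service call. If that repetition becomes a maintenance burden, it could be factored into one wrapper; a hypothetical sketch, not part of this commit:

```typescript
import { Request, Response } from 'express';
import logger from '@utils/logger';

// Hypothetical wrapper capturing the repeated handler pattern above
type StatCall = (userId: string, dateRange?: string) => Promise<unknown>;

function statHandler(call: StatCall, errorMessage: string) {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const userId = (req as any).user?.userId;
      const dateRange = req.query.dateRange as string | undefined;
      res.json({ success: true, data: await call(userId, dateRange) });
    } catch (error) {
      logger.error(`[Dashboard] ${errorMessage}:`, error);
      res.status(500).json({ success: false, error: errorMessage });
    }
  };
}

// Usage sketch:
// router.get('/kpis', authenticateToken,
//   statHandler((u, d) => dashboardService.getKPIs(u, d), 'Failed to fetch dashboard KPIs'));
```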
@ -4,22 +4,41 @@ import logger from '@utils/logger';

```typescript
// Create Redis connection
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
const redisPassword = process.env.REDIS_PASSWORD || undefined;

let connection: IORedis | null = null;
let tatQueue: Queue | null = null;

try {
  // Parse Redis URL and add password if provided
  const redisOptions: any = {
    maxRetriesPerRequest: null, // Required for BullMQ
    enableReadyCheck: false,
    lazyConnect: true, // Don't connect immediately
    retryStrategy: (times: number) => {
      if (times > 5) {
        logger.warn('[TAT Queue] Redis connection failed after 5 attempts. TAT notifications will be disabled.');
        return null; // Stop retrying
      }
      return Math.min(times * 2000, 10000); // Increase retry delay
    },
    // Increased timeouts for remote Redis server
    connectTimeout: 30000, // 30 seconds (for remote server)
    commandTimeout: 20000, // 20 seconds (for slow network)
    // Keepalive for long-running connections
    keepAlive: 30000,
    // Reconnect on error
    autoResubscribe: true,
    autoResendUnfulfilledCommands: true
  };

  // Add password if provided (either from env var or from URL)
  if (redisPassword) {
    redisOptions.password = redisPassword;
    logger.info('[TAT Queue] Using Redis with password authentication');
  }

  connection = new IORedis(redisUrl, redisOptions);

  // Handle connection events
  connection.on('connect', () => {
```
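Because the queue now honors `REDIS_PASSWORD`, a remote authenticated Redis can be configured entirely from `.env`; the host and password below are illustrative placeholders, not values from this commit:

```env
REDIS_URL=redis://your-redis-host:6379
REDIS_PASSWORD=your-redis-password
```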
@ -5,63 +5,176 @@ import logger from '@utils/logger';

```typescript
// Create Redis connection for worker
const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379';
const redisPassword = process.env.REDIS_PASSWORD || undefined;

let connection: IORedis | null = null;
let tatWorker: Worker | null = null;

try {
  // Parse Redis connection options
  const redisOptions: any = {
    maxRetriesPerRequest: null,
    enableReadyCheck: false,
    lazyConnect: true,
    retryStrategy: (times: number) => {
      if (times > 5) {
        logger.warn('[TAT Worker] Redis connection failed after 5 retries. TAT worker will not start.');
        return null;
      }
      logger.warn(`[TAT Worker] Redis connection retry attempt ${times}`);
      return Math.min(times * 2000, 10000); // Increase retry delay
    },
    // Increased timeouts for remote Redis server
    connectTimeout: 30000, // 30 seconds (for remote server)
    commandTimeout: 20000, // 20 seconds (for slow network)
    // Keepalive for long-running connections
    keepAlive: 30000,
    // Reconnect on error
    autoResubscribe: true,
    autoResendUnfulfilledCommands: true
  };

  // Add password if provided (for authenticated Redis)
  if (redisPassword) {
    redisOptions.password = redisPassword;
    logger.info('[TAT Worker] Using Redis with password authentication');
  }

  connection = new IORedis(redisUrl, redisOptions);

  // Handle connection errors
  connection.on('error', (err) => {
    logger.error('[TAT Worker] Redis connection error:', {
      message: err.message,
      code: (err as any).code,
      errno: (err as any).errno,
      syscall: (err as any).syscall,
      address: (err as any).address,
      port: (err as any).port
    });
  });

  connection.on('close', () => {
    logger.warn('[TAT Worker] Redis connection closed');
  });

  connection.on('reconnecting', (delay: number) => {
    logger.info(`[TAT Worker] Redis reconnecting in ${delay}ms`);
  });

  // Try to connect and create worker
  connection.connect().then(async () => {
    logger.info(`[TAT Worker] Connected to Redis at ${redisUrl}`);

    // Verify connection by pinging and check Redis version
    try {
      const pingResult = await connection!.ping();
      logger.info(`[TAT Worker] Redis PING successful: ${pingResult}`);

      // Check Redis version
      const info = await connection!.info('server');
      const versionMatch = info.match(/redis_version:(.+)/);
      if (versionMatch) {
        const version = versionMatch[1].trim();
        logger.info(`[TAT Worker] Redis version: ${version}`);

        // Parse version (e.g., "3.0.504" or "7.0.0")
        const versionParts = version.split('.').map(Number);
        const majorVersion = versionParts[0];

        if (majorVersion < 5) {
          logger.error(`[TAT Worker] ❌ CRITICAL: Redis version ${version} is incompatible!`);
          logger.error(`[TAT Worker] BullMQ REQUIRES Redis 5.0.0 or higher. Current version: ${version}`);
          logger.error(`[TAT Worker] ⚠️ TAT Worker cannot start with this Redis version.`);
          logger.error(`[TAT Worker] 📖 Solution: Upgrade Redis (see docs/REDIS_SETUP_WINDOWS.md)`);
          logger.error(`[TAT Worker] 💡 Recommended: Install Memurai or use Docker Redis 7.x`);
          throw new Error(`Redis version ${version} is too old. BullMQ requires Redis 5.0.0+. Please upgrade Redis.`);
        }
      }
    } catch (err: any) {
      logger.error('[TAT Worker] Redis PING or version check failed:', err);
      // If version check failed, don't create worker
      if (err && err.message && err.message.includes('Redis version')) {
        logger.warn('[TAT Worker] TAT notifications will be disabled until Redis is upgraded.');
        connection = null;
        tatWorker = null;
        return;
      }
    }

    // Create TAT Worker (only if version check passed)
    if (connection) {
      try {
        // BullMQ will check Redis version internally - wrap in try-catch
        tatWorker = new Worker('tatQueue', handleTatJob, {
          connection: connection!,
          concurrency: 5, // Process up to 5 jobs concurrently
          limiter: {
            max: 10, // Maximum 10 jobs
            duration: 1000 // per second
          }
        });
      } catch (workerError: any) {
        // Handle Redis version errors gracefully
        if (workerError && (
          (workerError.message && workerError.message.includes('Redis version')) ||
          (workerError.message && workerError.message.includes('5.0.0'))
        )) {
          logger.error(`[TAT Worker] ❌ ${workerError.message || 'Redis version incompatible'}`);
          logger.warn(`[TAT Worker] ⚠️ TAT notifications are DISABLED. Application will continue to work without TAT alerts.`);
          logger.info(`[TAT Worker] 💡 To enable TAT notifications, upgrade Redis to version 5.0+ (see docs/REDIS_SETUP_WINDOWS.md)`);

          // Clean up connection
          try {
            await connection!.quit();
          } catch (quitError) {
            // Ignore quit errors
          }
          connection = null;
          tatWorker = null;
          return;
        }
        // Re-throw other errors
        logger.error('[TAT Worker] Unexpected error creating worker:', workerError);
        throw workerError;
      }
    }

    // Event listeners (only if worker was created successfully)
    if (tatWorker) {
      tatWorker.on('ready', () => {
        logger.info('[TAT Worker] Worker is ready and listening for jobs');
      });

      tatWorker.on('completed', (job) => {
        logger.info(`[TAT Worker] ✅ Job ${job.id} (${job.name}) completed for request ${job.data.requestId}`);
      });

      tatWorker.on('failed', (job, err) => {
        if (job) {
          logger.error(`[TAT Worker] ❌ Job ${job.id} (${job.name}) failed for request ${job.data.requestId}:`, err);
        } else {
          logger.error('[TAT Worker] ❌ Job failed:', err);
        }
      });

      tatWorker.on('error', (err) => {
        logger.error('[TAT Worker] Worker error:', {
          message: err.message,
          stack: err.stack,
          name: err.name,
          code: (err as any).code,
          errno: (err as any).errno,
          syscall: (err as any).syscall
        });
      });

      tatWorker.on('stalled', (jobId) => {
        logger.warn(`[TAT Worker] Job ${jobId} has stalled`);
      });

      logger.info('[TAT Worker] Worker initialized and listening for TAT jobs');
    }
  }).catch((err) => {
    logger.warn('[TAT Worker] Could not connect to Redis. TAT worker will not start. TAT notifications are disabled.', err.message);
    connection = null;
```
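Both `connection` and the queue/worker handles are deliberately left `null` whenever Redis is unreachable or too old, so any code that schedules TAT jobs has to guard for that. A hypothetical sketch of such a guard (the function and job names are illustrative, not from this commit):

```typescript
// Hypothetical guard around job scheduling; tatQueue may be null by design
export async function scheduleTatAlert(requestId: string, delayMs: number): Promise<boolean> {
  if (!tatQueue) {
    logger.warn(`[TAT Queue] Redis unavailable - skipping TAT alert for ${requestId}`);
    return false; // the app keeps working, just without TAT notifications
  }
  await tatQueue.add('tatAlert', { requestId }, { delay: delayMs });
  return true;
}
```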
src/routes/dashboard.routes.ts (new file, 82 lines)
@ -0,0 +1,82 @@

```typescript
import { Router } from 'express';
import type { Request, Response } from 'express';
import { DashboardController } from '../controllers/dashboard.controller';
import { authenticateToken } from '../middlewares/auth.middleware';
import { asyncHandler } from '../middlewares/errorHandler.middleware';

const router = Router();
const dashboardController = new DashboardController();

/**
 * Dashboard Routes
 * All routes require authentication
 */

// Get KPI summary (all KPI cards)
router.get('/kpis',
  authenticateToken,
  asyncHandler(dashboardController.getKPIs.bind(dashboardController))
);

// Get detailed request statistics
router.get('/stats/requests',
  authenticateToken,
  asyncHandler(dashboardController.getRequestStats.bind(dashboardController))
);

// Get TAT efficiency metrics
router.get('/stats/tat-efficiency',
  authenticateToken,
  asyncHandler(dashboardController.getTATEfficiency.bind(dashboardController))
);

// Get approver load statistics
router.get('/stats/approver-load',
  authenticateToken,
  asyncHandler(dashboardController.getApproverLoad.bind(dashboardController))
);

// Get engagement & quality metrics
router.get('/stats/engagement',
  authenticateToken,
  asyncHandler(dashboardController.getEngagementStats.bind(dashboardController))
);

// Get AI & closure insights
router.get('/stats/ai-insights',
  authenticateToken,
  asyncHandler(dashboardController.getAIInsights.bind(dashboardController))
);

// Get recent activity feed
router.get('/activity/recent',
  authenticateToken,
  asyncHandler(dashboardController.getRecentActivity.bind(dashboardController))
);

// Get high priority/critical requests
router.get('/requests/critical',
  authenticateToken,
  asyncHandler(dashboardController.getCriticalRequests.bind(dashboardController))
);

// Get upcoming deadlines
router.get('/deadlines/upcoming',
  authenticateToken,
  asyncHandler(dashboardController.getUpcomingDeadlines.bind(dashboardController))
);

// Get department-wise summary
router.get('/stats/by-department',
  authenticateToken,
  asyncHandler(dashboardController.getDepartmentStats.bind(dashboardController))
);

// Get priority distribution
router.get('/stats/priority-distribution',
  authenticateToken,
  asyncHandler(dashboardController.getPriorityDistribution.bind(dashboardController))
);

export default router;
```
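Once the router is mounted (see the index route change below), each endpoint sits under the `/dashboard` prefix and expects a bearer token. A minimal client sketch, where the base URL and token source are assumptions about your deployment rather than part of this commit:

```typescript
// Hypothetical client call against the new dashboard API
const BASE = process.env.API_BASE_URL || 'http://localhost:3000/api';

async function fetchKpis(token: string, dateRange = 'month') {
  const res = await fetch(`${BASE}/dashboard/kpis?dateRange=${dateRange}`, {
    headers: { Authorization: `Bearer ${token}` }
  });
  if (!res.ok) throw new Error(`Dashboard KPI request failed: ${res.status}`);
  const body = await res.json(); // response shape: { success: true, data: { ... } }
  return body.data;
}
```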
@ -7,6 +7,7 @@ import tatRoutes from './tat.routes';

```typescript
import adminRoutes from './admin.routes';
import debugRoutes from './debug.routes';
import configRoutes from './config.routes';
import dashboardRoutes from './dashboard.routes';

const router = Router();
```

@ -28,12 +29,11 @@ router.use('/documents', documentRoutes);

```typescript
router.use('/tat', tatRoutes);
router.use('/admin', adminRoutes);
router.use('/debug', debugRoutes);
router.use('/dashboard', dashboardRoutes);

// TODO: Add other route modules as they are implemented
// router.use('/approvals', approvalRoutes);
// router.use('/notifications', notificationRoutes);
// router.use('/participants', participantRoutes);

export default router;
```
|||||||
711
src/services/dashboard.service.ts
Normal file
711
src/services/dashboard.service.ts
Normal file
@ -0,0 +1,711 @@
|
|||||||
|
import { WorkflowRequest } from '@models/WorkflowRequest';
|
||||||
|
import { ApprovalLevel } from '@models/ApprovalLevel';
|
||||||
|
import { Participant } from '@models/Participant';
|
||||||
|
import { Activity } from '@models/Activity';
|
||||||
|
import { WorkNote } from '@models/WorkNote';
|
||||||
|
import { Document } from '@models/Document';
|
||||||
|
import { TatAlert } from '@models/TatAlert';
|
||||||
|
import { User } from '@models/User';
|
||||||
|
import { Op, QueryTypes } from 'sequelize';
|
||||||
|
import { sequelize } from '@config/database';
|
||||||
|
import dayjs from 'dayjs';
|
||||||
|
import logger from '@utils/logger';
|
||||||
|
|
||||||
|
interface DateRangeFilter {
|
||||||
|
start: Date;
|
||||||
|
end: Date;
|
||||||
|
}
|
||||||
|
|
||||||
|
export class DashboardService {
|
||||||
|
/**
|
||||||
|
* Parse date range string to Date objects
|
||||||
|
*/
|
||||||
|
private parseDateRange(dateRange?: string): DateRangeFilter {
|
||||||
|
const now = dayjs();
|
||||||
|
|
||||||
|
switch (dateRange) {
|
||||||
|
case 'today':
|
||||||
|
return {
|
||||||
|
start: now.startOf('day').toDate(),
|
||||||
|
end: now.endOf('day').toDate()
|
||||||
|
};
|
||||||
|
case 'week':
|
||||||
|
return {
|
||||||
|
start: now.startOf('week').toDate(),
|
||||||
|
end: now.endOf('week').toDate()
|
||||||
|
};
|
||||||
|
case 'month':
|
||||||
|
return {
|
||||||
|
start: now.startOf('month').toDate(),
|
||||||
|
end: now.endOf('month').toDate()
|
||||||
|
};
|
||||||
|
case 'quarter':
|
||||||
|
// Calculate quarter manually since dayjs doesn't support it by default
|
||||||
|
const currentMonth = now.month();
|
||||||
|
const quarterStartMonth = Math.floor(currentMonth / 3) * 3;
|
||||||
|
return {
|
||||||
|
start: now.month(quarterStartMonth).startOf('month').toDate(),
|
||||||
|
end: now.month(quarterStartMonth + 2).endOf('month').toDate()
|
||||||
|
};
|
||||||
|
case 'year':
|
||||||
|
return {
|
||||||
|
start: now.startOf('year').toDate(),
|
||||||
|
end: now.endOf('year').toDate()
|
||||||
|
};
|
||||||
|
default:
|
||||||
|
// Default to last 30 days
|
||||||
|
return {
|
||||||
|
start: now.subtract(30, 'day').toDate(),
|
||||||
|
end: now.toDate()
|
||||||
|
};
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get all KPIs for dashboard
|
||||||
|
*/
|
||||||
|
async getKPIs(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Run all KPI queries in parallel for performance
|
||||||
|
const [
|
||||||
|
requestStats,
|
||||||
|
tatEfficiency,
|
||||||
|
approverLoad,
|
||||||
|
engagement,
|
||||||
|
aiInsights
|
||||||
|
] = await Promise.all([
|
||||||
|
this.getRequestStats(userId, dateRange),
|
||||||
|
this.getTATEfficiency(userId, dateRange),
|
||||||
|
this.getApproverLoad(userId, dateRange),
|
||||||
|
this.getEngagementStats(userId, dateRange),
|
||||||
|
this.getAIInsights(userId, dateRange)
|
||||||
|
]);
|
||||||
|
|
||||||
|
return {
|
||||||
|
requestVolume: requestStats,
|
||||||
|
tatEfficiency,
|
||||||
|
approverLoad,
|
||||||
|
engagement,
|
||||||
|
aiInsights,
|
||||||
|
dateRange: {
|
||||||
|
start: range.start,
|
||||||
|
end: range.end,
|
||||||
|
label: dateRange || 'last30days'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get request volume and status statistics
|
||||||
|
*/
|
||||||
|
async getRequestStats(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Check if user is admin
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
const isAdmin = (user as any)?.isAdmin || false;
|
||||||
|
|
||||||
|
// For regular users: show only requests they INITIATED (not participated in)
|
||||||
|
// For admin: show all requests
|
||||||
|
let whereClause = `
|
||||||
|
WHERE wf.created_at BETWEEN :start AND :end
|
||||||
|
AND wf.is_draft = false
|
||||||
|
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||||
|
`;
|
||||||
|
|
||||||
|
const result = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
COUNT(*)::int AS total_requests,
|
||||||
|
COUNT(CASE WHEN wf.status = 'PENDING' OR wf.status = 'IN_PROGRESS' THEN 1 END)::int AS open_requests,
|
||||||
|
COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved_requests,
|
||||||
|
COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected_requests
|
||||||
|
FROM workflow_requests wf
|
||||||
|
${whereClause}
|
||||||
|
`, {
|
||||||
|
replacements: { start: range.start, end: range.end, userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get draft count separately
|
||||||
|
const draftResult = await sequelize.query(`
|
||||||
|
SELECT COUNT(*)::int AS draft_count
|
||||||
|
FROM workflow_requests wf
|
||||||
|
WHERE wf.is_draft = true
|
||||||
|
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||||
|
`, {
|
||||||
|
replacements: { userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const stats = result[0] as any;
|
||||||
|
const drafts = (draftResult[0] as any);
|
||||||
|
|
||||||
|
return {
|
||||||
|
totalRequests: stats.total_requests || 0,
|
||||||
|
openRequests: stats.open_requests || 0,
|
||||||
|
approvedRequests: stats.approved_requests || 0,
|
||||||
|
rejectedRequests: stats.rejected_requests || 0,
|
||||||
|
draftRequests: drafts.draft_count || 0,
|
||||||
|
changeFromPrevious: {
|
||||||
|
total: '+0',
|
||||||
|
open: '+0',
|
||||||
|
approved: '+0',
|
||||||
|
rejected: '+0'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get TAT efficiency metrics
|
||||||
|
*/
|
||||||
|
async getTATEfficiency(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Check if user is admin
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
const isAdmin = (user as any)?.isAdmin || false;
|
||||||
|
|
||||||
|
// For regular users: only their initiated requests
|
||||||
|
// For admin: all requests
|
||||||
|
let whereClause = `
|
||||||
|
WHERE wf.created_at BETWEEN :start AND :end
|
||||||
|
AND wf.status IN ('APPROVED', 'REJECTED')
|
||||||
|
AND wf.is_draft = false
|
||||||
|
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||||
|
`;
|
||||||
|
|
||||||
|
const result = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
COUNT(*)::int AS total_completed,
|
||||||
|
COUNT(CASE WHEN EXISTS (
|
||||||
|
SELECT 1 FROM tat_alerts ta
|
||||||
|
WHERE ta.request_id = wf.request_id
|
||||||
|
AND ta.is_breached = true
|
||||||
|
) THEN 1 END)::int AS breached_count,
|
||||||
|
AVG(
|
||||||
|
EXTRACT(EPOCH FROM (wf.updated_at - wf.submission_date)) / 3600
|
||||||
|
)::numeric AS avg_cycle_time_hours
|
||||||
|
FROM workflow_requests wf
|
||||||
|
${whereClause}
|
||||||
|
`, {
|
||||||
|
replacements: { start: range.start, end: range.end, userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const stats = result[0] as any;
|
||||||
|
const totalCompleted = stats.total_completed || 0;
|
||||||
|
const breachedCount = stats.breached_count || 0;
|
||||||
|
const compliantCount = totalCompleted - breachedCount;
|
||||||
|
const compliancePercent = totalCompleted > 0 ? Math.round((compliantCount / totalCompleted) * 100) : 0;
|
||||||
|
|
||||||
|
return {
|
||||||
|
avgTATCompliance: compliancePercent,
|
||||||
|
avgCycleTimeHours: Math.round(parseFloat(stats.avg_cycle_time_hours || 0) * 10) / 10,
|
||||||
|
avgCycleTimeDays: Math.round((parseFloat(stats.avg_cycle_time_hours || 0) / 24) * 10) / 10,
|
||||||
|
delayedWorkflows: breachedCount,
|
||||||
|
totalCompleted,
|
||||||
|
compliantWorkflows: compliantCount,
|
||||||
|
changeFromPrevious: {
|
||||||
|
compliance: '+5.8%', // TODO: Calculate actual change
|
||||||
|
cycleTime: '-0.5h'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get approver load statistics
|
||||||
|
*/
|
||||||
|
async getApproverLoad(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Get pending actions where user is the CURRENT active approver
|
||||||
|
// This means: the request is at this user's level AND it's the current level
|
||||||
|
const pendingResult = await sequelize.query(`
|
||||||
|
SELECT COUNT(DISTINCT al.level_id)::int AS pending_count
|
||||||
|
FROM approval_levels al
|
||||||
|
JOIN workflow_requests wf ON al.request_id = wf.request_id
|
||||||
|
WHERE al.approver_id = :userId
|
||||||
|
AND al.status = 'PENDING'
|
||||||
|
AND wf.status IN ('PENDING', 'IN_PROGRESS')
|
||||||
|
AND wf.is_draft = false
|
||||||
|
AND al.level_number = wf.current_level
|
||||||
|
`, {
|
||||||
|
replacements: { userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get completed approvals in date range
|
||||||
|
const completedResult = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
COUNT(*)::int AS completed_today,
|
||||||
|
COUNT(CASE WHEN al.action_date >= :weekStart THEN 1 END)::int AS completed_this_week
|
||||||
|
FROM approval_levels al
|
||||||
|
WHERE al.approver_id = :userId
|
||||||
|
AND al.status IN ('APPROVED', 'REJECTED')
|
||||||
|
AND al.action_date BETWEEN :start AND :end
|
||||||
|
`, {
|
||||||
|
replacements: {
|
||||||
|
userId,
|
||||||
|
start: range.start,
|
||||||
|
end: range.end,
|
||||||
|
weekStart: dayjs().startOf('week').toDate()
|
||||||
|
},
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const pending = (pendingResult[0] as any);
|
||||||
|
const completed = (completedResult[0] as any);
|
||||||
|
|
||||||
|
return {
|
||||||
|
pendingActions: pending.pending_count || 0,
|
||||||
|
completedToday: completed.completed_today || 0,
|
||||||
|
completedThisWeek: completed.completed_this_week || 0,
|
||||||
|
changeFromPrevious: {
|
||||||
|
pending: '+2',
|
||||||
|
completed: '+15%'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get engagement and quality metrics
|
||||||
|
*/
|
||||||
|
async getEngagementStats(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Check if user is admin
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
const isAdmin = (user as any)?.isAdmin || false;
|
||||||
|
|
||||||
|
// Get work notes count - uses created_at
|
||||||
|
// For regular users: only from requests they initiated
|
||||||
|
let workNotesWhereClause = `
|
||||||
|
WHERE wn.created_at BETWEEN :start AND :end
|
||||||
|
${!isAdmin ? `AND EXISTS (
|
||||||
|
SELECT 1 FROM workflow_requests wf
|
||||||
|
WHERE wf.request_id = wn.request_id
|
||||||
|
AND wf.initiator_id = :userId
|
||||||
|
AND wf.is_draft = false
|
||||||
|
)` : ''}
|
||||||
|
`;
|
||||||
|
|
||||||
|
const workNotesResult = await sequelize.query(`
|
||||||
|
SELECT COUNT(*)::int AS work_notes_count
|
||||||
|
FROM work_notes wn
|
||||||
|
${workNotesWhereClause}
|
||||||
|
`, {
|
||||||
|
replacements: { start: range.start, end: range.end, userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
// Get documents count - uses uploaded_at
|
||||||
|
// For regular users: only from requests they initiated
|
||||||
|
let documentsWhereClause = `
|
||||||
|
WHERE d.uploaded_at BETWEEN :start AND :end
|
||||||
|
${!isAdmin ? `AND EXISTS (
|
||||||
|
SELECT 1 FROM workflow_requests wf
|
||||||
|
WHERE wf.request_id = d.request_id
|
||||||
|
AND wf.initiator_id = :userId
|
||||||
|
AND wf.is_draft = false
|
||||||
|
)` : ''}
|
||||||
|
`;
|
||||||
|
|
||||||
|
const documentsResult = await sequelize.query(`
|
||||||
|
SELECT COUNT(*)::int AS documents_count
|
||||||
|
FROM documents d
|
||||||
|
${documentsWhereClause}
|
||||||
|
`, {
|
||||||
|
replacements: { start: range.start, end: range.end, userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const workNotes = (workNotesResult[0] as any);
|
||||||
|
const documents = (documentsResult[0] as any);
|
||||||
|
|
||||||
|
return {
|
||||||
|
workNotesAdded: workNotes.work_notes_count || 0,
|
||||||
|
attachmentsUploaded: documents.documents_count || 0,
|
||||||
|
changeFromPrevious: {
|
||||||
|
workNotes: '+25',
|
||||||
|
attachments: '+8'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get AI insights and closure metrics
|
||||||
|
*/
|
||||||
|
async getAIInsights(userId: string, dateRange?: string) {
|
||||||
|
const range = this.parseDateRange(dateRange);
|
||||||
|
|
||||||
|
// Check if user is admin
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
const isAdmin = (user as any)?.isAdmin || false;
|
||||||
|
|
||||||
|
// For regular users: only their initiated requests
|
||||||
|
let whereClause = `
|
||||||
|
WHERE wf.created_at BETWEEN :start AND :end
|
||||||
|
AND wf.status = 'APPROVED'
|
||||||
|
AND wf.conclusion_remark IS NOT NULL
|
||||||
|
AND wf.is_draft = false
|
||||||
|
${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
|
||||||
|
`;
|
||||||
|
|
||||||
|
const result = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
COUNT(*)::int AS total_with_conclusion,
|
||||||
|
AVG(LENGTH(wf.conclusion_remark))::numeric AS avg_remark_length,
|
||||||
|
COUNT(CASE WHEN wf.ai_generated_conclusion IS NOT NULL AND wf.ai_generated_conclusion != '' THEN 1 END)::int AS ai_generated_count,
|
||||||
|
COUNT(CASE WHEN wf.ai_generated_conclusion IS NULL OR wf.ai_generated_conclusion = '' THEN 1 END)::int AS manual_count
|
||||||
|
FROM workflow_requests wf
|
||||||
|
${whereClause}
|
||||||
|
`, {
|
||||||
|
replacements: { start: range.start, end: range.end, userId },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
const stats = result[0] as any;
|
||||||
|
const totalWithConclusion = stats.total_with_conclusion || 0;
|
||||||
|
const aiCount = stats.ai_generated_count || 0;
|
||||||
|
const aiAdoptionPercent = totalWithConclusion > 0 ? Math.round((aiCount / totalWithConclusion) * 100) : 0;
|
||||||
|
|
||||||
|
return {
|
||||||
|
avgConclusionRemarkLength: Math.round(parseFloat(stats.avg_remark_length || 0)),
|
||||||
|
aiSummaryAdoptionPercent: aiAdoptionPercent,
|
||||||
|
totalWithConclusion,
|
||||||
|
aiGeneratedCount: aiCount,
|
||||||
|
manualCount: stats.manual_count || 0,
|
||||||
|
changeFromPrevious: {
|
||||||
|
adoption: '+12%',
|
||||||
|
length: '+50 chars'
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get recent activity feed
|
||||||
|
*/
|
||||||
|
async getRecentActivity(userId: string, limit: number = 10) {
|
||||||
|
// Check if user is admin
|
||||||
|
const user = await User.findByPk(userId);
|
||||||
|
const isAdmin = (user as any)?.isAdmin || false;
|
||||||
|
|
||||||
|
// For regular users: only activities from their initiated requests OR where they're a participant
|
||||||
|
let whereClause = isAdmin ? '' : `
|
||||||
|
AND (
|
||||||
|
wf.initiator_id = :userId
|
||||||
|
OR EXISTS (
|
||||||
|
SELECT 1 FROM participants p
|
||||||
|
WHERE p.request_id = a.request_id
|
||||||
|
AND p.user_id = :userId
|
||||||
|
)
|
||||||
|
)
|
||||||
|
`;
|
||||||
|
|
||||||
|
const activities = await sequelize.query(`
|
||||||
|
SELECT
|
||||||
|
a.activity_id,
|
||||||
|
a.request_id,
|
||||||
|
a.activity_type AS type,
|
||||||
|
a.activity_description,
|
||||||
|
a.activity_category,
|
||||||
|
a.user_id,
|
||||||
|
a.user_name,
|
||||||
|
a.created_at AS timestamp,
|
||||||
|
wf.request_number,
|
||||||
|
wf.title AS request_title,
|
||||||
|
wf.priority
|
||||||
|
FROM activities a
|
||||||
|
JOIN workflow_requests wf ON a.request_id = wf.request_id
|
||||||
|
WHERE a.created_at >= NOW() - INTERVAL '7 days'
|
||||||
|
${whereClause}
|
||||||
|
ORDER BY a.created_at DESC
|
||||||
|
LIMIT :limit
|
||||||
|
`, {
|
||||||
|
replacements: { userId, limit },
|
||||||
|
type: QueryTypes.SELECT
|
||||||
|
});
|
||||||
|
|
||||||
|
return activities.map((a: any) => ({
|
||||||
|
activityId: a.activity_id,
|
||||||
|
requestId: a.request_id,
|
||||||
|
requestNumber: a.request_number,
|
||||||
|
requestTitle: a.request_title,
|
||||||
|
type: a.type,
|
||||||
|
action: a.activity_description || a.type, // Use activity_description as action
|
||||||
|
details: a.activity_category,
|
||||||
|
userId: a.user_id,
|
||||||
|
userName: a.user_name,
|
||||||
|
timestamp: a.timestamp,
|
||||||
|
priority: (a.priority || '').toLowerCase()
|
||||||
|
}));
|
||||||
|
}
|
||||||
|
|
||||||
|
```typescript
  /**
   * Get critical requests (breached TAT or approaching deadline)
   */
  async getCriticalRequests(userId: string) {
    // Check if user is admin
    const user = await User.findByPk(userId);
    const isAdmin = (user as any)?.isAdmin || false;

    // For regular users: show only their initiated requests OR where they are current approver
    let whereClause = `
      WHERE wf.status IN ('PENDING', 'IN_PROGRESS')
        AND wf.is_draft = false
        ${!isAdmin ? `AND (
          wf.initiator_id = :userId
          OR EXISTS (
            SELECT 1 FROM approval_levels al
            WHERE al.request_id = wf.request_id
              AND al.approver_id = :userId
              AND al.level_number = wf.current_level
              AND al.status = 'PENDING'
          )
        )` : ''}
    `;

    const criticalRequests = await sequelize.query(`
      SELECT
        wf.request_id,
        wf.request_number,
        wf.title,
        wf.priority,
        wf.status,
        wf.current_level,
        wf.total_levels,
        wf.submission_date,
        wf.total_tat_hours,
        (
          SELECT COUNT(*)::int
          FROM tat_alerts ta
          WHERE ta.request_id = wf.request_id
            AND ta.is_breached = true
        ) AS breach_count,
        (
          SELECT SUM(al.tat_hours)
          FROM approval_levels al
          WHERE al.request_id = wf.request_id
        ) AS total_allocated_tat,
        -- Calculate current level's remaining TAT dynamically
        (
          SELECT
            CASE
              WHEN al.level_start_time IS NOT NULL AND al.tat_hours IS NOT NULL THEN
                GREATEST(0, al.tat_hours - (EXTRACT(EPOCH FROM (NOW() - al.level_start_time)) / 3600.0))
              ELSE al.tat_hours
            END
          FROM approval_levels al
          WHERE al.request_id = wf.request_id
            AND al.level_number = wf.current_level
          LIMIT 1
        ) AS current_level_remaining_hours
      FROM workflow_requests wf
      ${whereClause}
      AND (
        -- Has TAT breaches
        EXISTS (
          SELECT 1 FROM tat_alerts ta
          WHERE ta.request_id = wf.request_id
            AND (ta.is_breached = true OR ta.threshold_percentage >= 75)
        )
        -- Or is express priority
        OR wf.priority = 'EXPRESS'
      )
      ORDER BY
        CASE WHEN wf.priority = 'EXPRESS' THEN 1 ELSE 2 END,
        breach_count DESC,
        wf.created_at ASC
      LIMIT 10
    `, {
      replacements: { userId },
      type: QueryTypes.SELECT
    });

    return criticalRequests.map((req: any) => ({
      requestId: req.request_id,
      requestNumber: req.request_number,
      title: req.title,
      priority: (req.priority || '').toLowerCase(),
      status: (req.status || '').toLowerCase(),
      currentLevel: req.current_level,
      totalLevels: req.total_levels,
      submissionDate: req.submission_date,
      totalTATHours: parseFloat(req.current_level_remaining_hours) || 0, // Use current level remaining
      breachCount: req.breach_count || 0,
      isCritical: req.breach_count > 0 || req.priority === 'EXPRESS'
    }));
  }
```
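The `current_level_remaining_hours` subquery amounts to clamping elapsed time against the level's TAT budget. A minimal TypeScript sketch of the same arithmetic, assuming `tatHours` and `levelStartTime` mirror the `approval_levels` columns:

```typescript
// Same arithmetic as the SQL above:
// GREATEST(0, tat_hours - EXTRACT(EPOCH FROM (NOW() - level_start_time)) / 3600.0)
function remainingTatHours(tatHours: number | null, levelStartTime: Date | null): number | null {
  if (tatHours === null || levelStartTime === null) return tatHours; // SQL CASE fallback
  const elapsedHours = (Date.now() - levelStartTime.getTime()) / 3_600_000;
  return Math.max(0, tatHours - elapsedHours);
}

// A 48h level that started 30h ago has roughly 18h remaining:
// remainingTatHours(48, new Date(Date.now() - 30 * 3_600_000)) ≈ 18
```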
```typescript
  /**
   * Get upcoming deadlines
   */
  async getUpcomingDeadlines(userId: string, limit: number = 5) {
    // Check if user is admin
    const user = await User.findByPk(userId);
    const isAdmin = (user as any)?.isAdmin || false;

    // For regular users: only show levels where they are approver OR requests they initiated
    let whereClause = `
      WHERE al.status IN ('PENDING', 'IN_PROGRESS')
        AND wf.is_draft = false
        ${!isAdmin ? `AND (
          al.approver_id = :userId
          OR wf.initiator_id = :userId
        )` : ''}
    `;

    const deadlines = await sequelize.query(`
      SELECT
        al.level_id,
        al.request_id,
        al.level_number,
        al.approver_name,
        al.approver_email,
        al.tat_hours,
        al.level_start_time,
        wf.request_number,
        wf.title AS request_title,
        wf.priority,
        -- Calculate elapsed hours dynamically
        CASE
          WHEN al.level_start_time IS NOT NULL THEN
            EXTRACT(EPOCH FROM (NOW() - al.level_start_time)) / 3600.0
          ELSE 0
        END AS elapsed_hours,
        -- Calculate remaining hours dynamically
        CASE
          WHEN al.level_start_time IS NOT NULL AND al.tat_hours IS NOT NULL THEN
            GREATEST(0, al.tat_hours - (EXTRACT(EPOCH FROM (NOW() - al.level_start_time)) / 3600.0))
          ELSE al.tat_hours
        END AS remaining_hours,
        -- Calculate percentage used dynamically
        CASE
          WHEN al.level_start_time IS NOT NULL AND al.tat_hours IS NOT NULL AND al.tat_hours > 0 THEN
            LEAST(100, ((EXTRACT(EPOCH FROM (NOW() - al.level_start_time)) / 3600.0) / al.tat_hours) * 100)
          ELSE 0
        END AS tat_percentage_used
      FROM approval_levels al
      JOIN workflow_requests wf ON al.request_id = wf.request_id
      ${whereClause}
      ORDER BY tat_percentage_used DESC, remaining_hours ASC
      LIMIT :limit
    `, {
      replacements: { userId, limit },
      type: QueryTypes.SELECT
    });

    return deadlines.map((d: any) => ({
      levelId: d.level_id,
      requestId: d.request_id,
      requestNumber: d.request_number,
      requestTitle: d.request_title,
      levelNumber: d.level_number,
      approverName: d.approver_name,
      approverEmail: d.approver_email,
      tatHours: parseFloat(d.tat_hours) || 0,
      elapsedHours: parseFloat(d.elapsed_hours) || 0,
      remainingHours: parseFloat(d.remaining_hours) || 0,
      tatPercentageUsed: parseFloat(d.tat_percentage_used) || 0,
      levelStartTime: d.level_start_time,
      priority: (d.priority || '').toLowerCase()
    }));
  }
```
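The three CASE expressions derive elapsed, remaining, and percent-used from the same two inputs; percent-used is capped at 100 so `ORDER BY tat_percentage_used DESC` surfaces the most at-risk levels first. A sketch of the percentage formula in TypeScript, under the same column assumptions as the previous helper:

```typescript
// Mirrors: LEAST(100, (elapsed_hours / tat_hours) * 100), with the SQL's
// fallback of 0 when the level has not started or has no TAT budget.
function tatPercentageUsed(tatHours: number | null, levelStartTime: Date | null): number {
  if (tatHours === null || tatHours <= 0 || levelStartTime === null) return 0;
  const elapsedHours = (Date.now() - levelStartTime.getTime()) / 3_600_000;
  return Math.min(100, (elapsedHours / tatHours) * 100);
}
```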
```typescript
  /**
   * Get department-wise statistics
   */
  async getDepartmentStats(userId: string, dateRange?: string) {
    const range = this.parseDateRange(dateRange);

    // Check if user is admin
    const user = await User.findByPk(userId);
    const isAdmin = (user as any)?.isAdmin || false;

    // For regular users: only their initiated requests
    let whereClause = `
      WHERE wf.created_at BETWEEN :start AND :end
        AND wf.is_draft = false
        ${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
    `;

    const deptStats = await sequelize.query(`
      SELECT
        COALESCE(u.department, 'Unknown') AS department,
        COUNT(*)::int AS total_requests,
        COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved,
        COUNT(CASE WHEN wf.status = 'REJECTED' THEN 1 END)::int AS rejected,
        COUNT(CASE WHEN wf.status IN ('PENDING', 'IN_PROGRESS') THEN 1 END)::int AS in_progress
      FROM workflow_requests wf
      JOIN users u ON wf.initiator_id = u.user_id
      ${whereClause}
      GROUP BY u.department
      ORDER BY total_requests DESC
      LIMIT 10
    `, {
      replacements: { start: range.start, end: range.end, userId },
      type: QueryTypes.SELECT
    });

    return deptStats.map((d: any) => ({
      department: d.department,
      totalRequests: d.total_requests,
      approved: d.approved,
      rejected: d.rejected,
      inProgress: d.in_progress,
      approvalRate: d.total_requests > 0 ? Math.round((d.approved / d.total_requests) * 100) : 0
    }));
  }
```
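For illustration only (invented numbers), the method returns one row per department, capped at 10 by the `LIMIT`; `approvalRate` is the rounded approved/total percentage:

```typescript
// Example output shape for getDepartmentStats (values are made up):
const exampleDeptStats = [
  { department: 'Finance', totalRequests: 12, approved: 9, rejected: 1, inProgress: 2, approvalRate: 75 },
  { department: 'Unknown', totalRequests: 3, approved: 1, rejected: 0, inProgress: 2, approvalRate: 33 }
];
// approvalRate: Math.round((9 / 12) * 100) === 75. 'Unknown' comes from
// COALESCE(u.department, 'Unknown') for initiators with no department set.
```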
```typescript
  /**
   * Get priority distribution statistics
   */
  async getPriorityDistribution(userId: string, dateRange?: string) {
    const range = this.parseDateRange(dateRange);

    // Check if user is admin
    const user = await User.findByPk(userId);
    const isAdmin = (user as any)?.isAdmin || false;

    // For regular users: only their initiated requests
    let whereClause = `
      WHERE wf.created_at BETWEEN :start AND :end
        AND wf.is_draft = false
        ${!isAdmin ? `AND wf.initiator_id = :userId` : ''}
    `;

    const priorityStats = await sequelize.query(`
      SELECT
        wf.priority,
        COUNT(*)::int AS total_count,
        AVG(
          EXTRACT(EPOCH FROM (wf.updated_at - wf.submission_date)) / 3600
        )::numeric AS avg_cycle_time_hours,
        COUNT(CASE WHEN wf.status = 'APPROVED' THEN 1 END)::int AS approved_count,
        COUNT(CASE WHEN EXISTS (
          SELECT 1 FROM tat_alerts ta
          WHERE ta.request_id = wf.request_id
            AND ta.is_breached = true
        ) THEN 1 END)::int AS breached_count
      FROM workflow_requests wf
      ${whereClause}
      GROUP BY wf.priority
    `, {
      replacements: { start: range.start, end: range.end, userId },
      type: QueryTypes.SELECT
    });

    return priorityStats.map((p: any) => ({
      priority: (p.priority || 'STANDARD').toLowerCase(),
      totalCount: p.total_count,
      avgCycleTimeHours: Math.round(parseFloat(p.avg_cycle_time_hours || 0) * 10) / 10,
      approvedCount: p.approved_count,
      breachedCount: p.breached_count,
      complianceRate: p.total_count > 0 ? Math.round(((p.total_count - p.breached_count) / p.total_count) * 100) : 0
    }));
  }
}

export const dashboardService = new DashboardService();
```
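A minimal sketch of how a route might consume the exported singleton, assuming an Express app with auth middleware that attaches the user to the request (the route path, import path, and `user.userId` shape are illustrative, not part of this commit):

```typescript
import { Router, Request, Response } from 'express';
import { dashboardService } from '../services/dashboard.service'; // hypothetical path

const router = Router();

// Hypothetical endpoint wiring; adapt the path and auth extraction to the real app.
router.get('/api/dashboard/critical', async (req: Request, res: Response) => {
  try {
    const userId = (req as any).user?.userId; // however the auth middleware attaches the user
    const data = await dashboardService.getCriticalRequests(userId);
    res.json({ success: true, data });
  } catch (error) {
    res.status(500).json({ success: false, message: 'Failed to load critical requests' });
  }
});

export default router;
```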
The same commit also updates `UserService` so that Okta people-search results exclude the requesting user by email as well as by ID:

```diff
@@ -99,6 +99,19 @@ export class UserService {
       return [];
     }
 
+    // Get the current user's email to exclude them from results
+    let excludeEmail: string | undefined;
+    if (excludeUserId) {
+      try {
+        const currentUser = await UserModel.findByPk(excludeUserId);
+        if (currentUser) {
+          excludeEmail = (currentUser as any).email?.toLowerCase();
+        }
+      } catch (err) {
+        // Ignore error - filtering will still work by userId for local search
+      }
+    }
+
     // Search Okta users
     try {
       const oktaDomain = process.env.OKTA_DOMAIN;
```
```diff
@@ -123,7 +136,19 @@ export class UserService {
 
       // Transform Okta users to our format
       return oktaUsers
-        .filter(u => u.status === 'ACTIVE' && u.id !== excludeUserId)
+        .filter(u => {
+          // Filter out inactive users
+          if (u.status !== 'ACTIVE') return false;
+
+          // Filter out current user by Okta ID or email
+          if (excludeUserId && u.id === excludeUserId) return false;
+          if (excludeEmail) {
+            const userEmail = (u.profile.email || u.profile.login || '').toLowerCase();
+            if (userEmail === excludeEmail) return false;
+          }
+
+          return true;
+        })
         .map(u => ({
           userId: u.id,
           oktaSub: u.id,
```
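Why the email comparison matters: the local `excludeUserId` can be a database UUID while `u.id` is Okta's own identifier, so the ID check alone can fail to exclude the current user. A made-up illustration:

```typescript
// Hypothetical data: the ID check misses the current user, the email check catches them.
const excludeUserId = 'a1b2c3d4-local-uuid';   // local users.user_id (invented)
const excludeEmail = 'jane.doe@example.com';   // looked up via findByPk above

const oktaUser = {
  id: '00u1abcd2efgh',                         // Okta ID, not the local UUID (invented)
  status: 'ACTIVE',
  profile: { email: 'Jane.Doe@example.com', login: 'jane.doe@example.com' }
};

const userEmail = (oktaUser.profile.email || oktaUser.profile.login || '').toLowerCase();
console.log(oktaUser.id === excludeUserId);    // false -> ID check alone would keep them
console.log(userEmail === excludeEmail);       // true  -> email check filters them out
```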