dashboard enhanced and pagination added

This commit is contained in:
laxmanhalaki 2025-11-12 11:16:14 +05:30
parent 56258205ea
commit cbca9d1b15
23 changed files with 621 additions and 4620 deletions

View File

@ -1,426 +0,0 @@
# Admin Panel - AI Provider Configuration
## Overview
Admins can configure AI providers **directly through the admin panel** without touching code or `.env` files. The system supports three AI providers with automatic failover.
---
## 🎯 Quick Start for Admins
### Step 1: Access Admin Panel
Navigate to the admin configurations page in your workflow system.
### Step 2: Configure AI Provider
Look for the **AI Configuration** section with these settings:
| Setting | Description | Example Value |
|---------|-------------|---------------|
| **AI Provider** | Choose your AI provider | `claude`, `openai`, or `gemini` |
| **Claude API Key** | API key from Anthropic | `sk-ant-xxxxxxxxxxxxx` |
| **OpenAI API Key** | API key from OpenAI | `sk-proj-xxxxxxxxxxxxx` |
| **Gemini API Key** | API key from Google | `AIzaxxxxxxxxxxxxxxx` |
| **Enable AI Features** | Turn AI on/off | `true` or `false` |
### Step 3: Get Your API Key
Choose ONE provider and get an API key:
#### Option A: Claude (Recommended)
1. Go to https://console.anthropic.com
2. Create account / Sign in
3. Generate API key
4. Copy key (starts with `sk-ant-`)
#### Option B: OpenAI
1. Go to https://platform.openai.com
2. Create account / Sign in
3. Navigate to API keys
4. Create new key
5. Copy key (starts with `sk-proj-` or `sk-`)
#### Option C: Gemini (Free Tier Available!)
1. Go to https://ai.google.dev
2. Sign in with Google account
3. Get API key
4. Copy key
### Step 4: Configure in Admin Panel
**Example: Setting up Claude**
1. Set **AI Provider** = `claude`
2. Set **Claude API Key** = `sk-ant-api03-xxxxxxxxxxxxx`
3. Leave other API keys empty (optional)
4. Set **Enable AI Features** = `true`
5. Click **Save Configuration**
**Done!** The system will automatically initialize Claude.
---
## 🔄 How It Works
### Automatic Initialization
When you save the configuration:
```
Admin saves config
System clears cache
AI Service reads new config from database
Initializes selected provider (Claude/OpenAI/Gemini)
✅ AI features active
```
**You'll see in server logs:**
```
info: [Admin] AI configuration 'AI_PROVIDER' updated
info: [AI Service] Reinitializing AI provider from updated configuration...
info: [AI Service] Preferred provider from config: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
info: [Admin] AI service reinitialized with Claude (Anthropic)
```
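A rough sketch of what that save hook could look like on the backend is below. The module paths, the cache object, and `reinitializeFromConfig` are assumptions for illustration, not the actual Re_Backend code.

```typescript
// Illustrative only: paths and service names are assumed, not the real implementation.
import { logger } from '../utils/logger';
import { configCache } from '../services/config-cache';
import { aiService } from '../services/ai.service';

const AI_CONFIG_KEYS = ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED'];

export async function onAdminConfigSaved(configKey: string): Promise<void> {
  if (!AI_CONFIG_KEYS.includes(configKey)) return; // only AI settings trigger a reinit

  configCache.clear();                             // next read goes to admin_configurations

  // Rebuild the provider client from the fresh configuration - no server restart needed.
  const providerName = await aiService.reinitializeFromConfig();
  logger.info(`[Admin] AI service reinitialized with ${providerName}`);
}
```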
### Automatic Failover
If your primary provider fails, the system automatically tries alternatives:
```sql
-- Example: Admin configured Claude, but API key is invalid
UPDATE admin_configurations
SET config_value = 'claude'
WHERE config_key = 'AI_PROVIDER';
UPDATE admin_configurations
SET config_value = 'sk-ant-INVALID'
WHERE config_key = 'CLAUDE_API_KEY';
```
**System Response:**
```
warn: [AI Service] Claude API key not configured.
warn: [AI Service] Preferred provider unavailable. Trying fallbacks...
info: [AI Service] ✅ OpenAI provider initialized
info: [AI Service] ✅ Using fallback provider: OpenAI (GPT-4)
```
**AI features still work!** (if OpenAI key is configured)
---
## 📋 Configuration Guide by Provider
### Claude (Anthropic) - Best for Production
**Pros:**
- ✅ High-quality, professional output
- ✅ Excellent instruction following
- ✅ Good for formal business documents
- ✅ Reliable and consistent
**Cons:**
- ⚠️ Paid service (no free tier)
- ⚠️ Requires account setup
**Configuration:**
```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-api03-xxxxxxxxxxxxx
```
**Cost:** ~$0.004 per conclusion generation
---
### OpenAI (GPT-4) - Industry Standard
**Pros:**
- ✅ Fast response times
- ✅ Well-documented
- ✅ Widely used and trusted
- ✅ Good performance
**Cons:**
- ⚠️ Paid service
- ⚠️ Higher cost than alternatives
**Configuration:**
```
AI_PROVIDER = openai
OPENAI_API_KEY = sk-proj-xxxxxxxxxxxxx
```
**Cost:** ~$0.005 per conclusion generation
---
### Gemini (Google) - Cost-Effective
**Pros:**
- ✅ **Free tier available!**
- ✅ Good performance
- ✅ Easy Google integration
- ✅ Generous rate limits
**Cons:**
- ⚠️ Slightly lower quality than Claude/GPT-4
- ⚠️ Rate limits on free tier
**Configuration:**
```
AI_PROVIDER = gemini
GEMINI_API_KEY = AIzaxxxxxxxxxxxxxxx
```
**Cost:** **FREE** (up to rate limits), then $0.0001 per generation
---
## 🔐 Security Best Practices
### 1. API Key Storage
- ✅ **Stored in database** (encrypted in production)
- ✅ **Marked as sensitive** (hidden in UI by default)
- ✅ **Never exposed** to frontend
- ✅ **Admin access only**
### 2. Key Rotation
- Rotate API keys every 3-6 months
- Update in admin panel
- System automatically reinitializes
### 3. Access Control
- Only **Super Admins** can update AI configurations
- Regular users cannot view API keys
- All changes are logged in audit trail
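A minimal sketch of the admin-only guard on these routes, assuming an Express backend and a `role` field on the authenticated user (both assumptions; the real middleware may differ):

```typescript
import type { Request, Response, NextFunction } from 'express';

// Hypothetical guard: only Super Admins may read or update AI configurations.
export function requireSuperAdmin(
  req: Request & { user?: { role?: string } },
  res: Response,
  next: NextFunction
) {
  if (req.user?.role !== 'SUPER_ADMIN') {
    return res.status(403).json({ success: false, message: 'Super Admin access required' });
  }
  next();
}

// Example wiring (route path assumed):
// router.put('/admin/configurations/:key', requireSuperAdmin, updateConfiguration);
```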
---
## 🧪 Testing AI Configuration
### Method 1: Check Status via API
```bash
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" \
http://localhost:5000/api/v1/ai/status
```
**Response:**
```json
{
"success": true,
"data": {
"available": true,
"provider": "Claude (Anthropic)",
"status": "active"
}
}
```
### Method 2: Check Server Logs
Look for initialization logs when server starts:
```
info: [AI Service] Preferred provider from config: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```
### Method 3: Test in Application
1. Create a workflow request
2. Complete all approvals
3. As initiator, click "Finalize & Close Request"
4. Click "Generate with AI"
5. Should see AI-generated conclusion
---
## 🔄 Switching Providers
### Example: Switching from Claude to Gemini
**Current Configuration:**
```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-xxxxxxxxxxxxx
```
**Steps to Switch:**
1. **Get Gemini API key** from https://ai.google.dev
2. **Open Admin Panel** → AI Configuration
3. **Update settings:**
- Set **AI Provider** = `gemini`
- Set **Gemini API Key** = `AIzaxxxxxxxxxxxxxxx`
4. **Click Save**
**Result:**
```
info: [Admin] AI configuration 'AI_PROVIDER' updated
info: [AI Service] Reinitializing...
info: [AI Service] Preferred provider from config: gemini
info: [AI Service] ✅ Gemini provider initialized
info: [AI Service] ✅ Active provider: Gemini (Google)
```
**Done!** System now uses Gemini. **No server restart needed!**
---
## 💡 Pro Tips
### 1. Multi-Provider Setup (Recommended)
Configure ALL three providers for maximum reliability:
```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY = sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY = AIzaxxxxxxxxxxxxxxx
AI_ENABLED = true
```
**Benefits:**
- If Claude is down → automatically uses OpenAI
- If OpenAI is down → automatically uses Gemini
- **Zero downtime** for AI features!
### 2. Cost Optimization
**Development/Testing:**
- Use `gemini` (free tier)
- Switch to paid provider only for production
**Production:**
- Use `claude` for best quality
- Or use `openai` for fastest responses
### 3. Monitor Usage
Check which provider is being used:
```sql
SELECT
ai_model_used,
COUNT(*) as usage_count,
AVG(ai_confidence_score) as avg_confidence
FROM conclusion_remarks
WHERE created_at > NOW() - INTERVAL '30 days'
GROUP BY ai_model_used;
```
---
## ⚠️ Troubleshooting
### Issue: "AI Service not configured"
**Check:**
1. Is `AI_ENABLED` set to `true`?
2. Is at least one API key configured?
3. Is the API key valid?
**Fix:**
- Open Admin Panel
- Verify AI Provider setting
- Re-enter API key
- Click Save
### Issue: "Failed to generate conclusion"
**Check:**
1. API key still valid (not expired/revoked)?
2. Provider service available (check status.anthropic.com, etc.)?
3. Sufficient API quota/credits?
**Fix:**
- Test API key manually (use provider's playground)
- Check account balance/quota
- Try switching to different provider
### Issue: Provider keeps failing
**Fallback Strategy:**
1. Configure multiple providers
2. System will auto-switch
3. Check logs to see which one succeeded
---
## 📊 Admin Panel UI
The admin configuration page should show:
```
┌─────────────────────────────────────────────┐
│ AI Configuration │
├─────────────────────────────────────────────┤
│ │
│ AI Provider: [claude ▼] │
│ Options: claude, openai, gemini │
│ │
│ Claude API Key: [••••••••••••••] [Show] │
│ Enter Claude API key from console.anthr... │
│ │
│ OpenAI API Key: [••••••••••••••] [Show] │
│ Enter OpenAI API key from platform.open... │
│ │
│ Gemini API Key: [••••••••••••••] [Show] │
│ Enter Gemini API key from ai.google.dev │
│ │
│ Enable AI Features: [✓] Enabled │
│ │
│ Current Status: ✅ Active (Claude) │
│ │
│ [Save Configuration] [Test AI] │
└─────────────────────────────────────────────┘
```
---
## 🎯 Summary
**Key Advantages:**
- ✅ **No code changes** - Configure through UI
- ✅ **No server restart** - Hot reload on save
- ✅ **Automatic failover** - Multiple providers
- ✅ **Vendor flexibility** - Switch anytime
- ✅ **Audit trail** - All changes logged
- ✅ **Secure storage** - API keys encrypted
**Admin Actions Required:**
1. Choose AI provider
2. Enter API key
3. Click Save
4. Done!
**User Impact:**
- Zero - users just click "Generate with AI"
- System handles provider selection automatically
- Professional conclusions generated seamlessly
---
## 📞 Support
**Provider Documentation:**
- Claude: https://docs.anthropic.com
- OpenAI: https://platform.openai.com/docs
- Gemini: https://ai.google.dev/docs
**For System Issues:**
- Check `/api/v1/ai/status` endpoint
- Review server logs for initialization
- Verify admin_configurations table entries

View File

@ -1,270 +0,0 @@
# Admin Configurable Settings - Complete Reference
## 📋 All 18 Settings Across 7 Categories
This document lists all admin-configurable settings as per the SRS document requirements.
All settings are **editable via the Settings page** (Admin users only) and stored in the `admin_configurations` table.
---
## 1. **TAT Settings** (6 Settings)
Settings that control Turnaround Time calculations and reminders.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Default TAT - Express | `DEFAULT_TAT_EXPRESS_HOURS` | Number | 24 | 1-168 | Default TAT hours for express priority (calendar days) |
| Default TAT - Standard | `DEFAULT_TAT_STANDARD_HOURS` | Number | 48 | 1-720 | Default TAT hours for standard priority (working days) |
| First Reminder Threshold | `TAT_REMINDER_THRESHOLD_1` | Number | 50 | 1-100 | Send gentle reminder at this % of TAT elapsed |
| Second Reminder Threshold | `TAT_REMINDER_THRESHOLD_2` | Number | 75 | 1-100 | Send escalation warning at this % of TAT elapsed |
| Work Start Hour | `WORK_START_HOUR` | Number | 9 | 0-23 | Hour when working day starts (24h format) |
| Work End Hour | `WORK_END_HOUR` | Number | 18 | 0-23 | Hour when working day ends (24h format) |
**UI Component:** Number input + Slider for thresholds
**Category Color:** Blue 🔵
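As a rough illustration of how the two reminder thresholds above might be applied (the actual scheduler logic lives in the TAT service and may differ):

```typescript
type Reminder = 'GENTLE_REMINDER' | 'ESCALATION_WARNING' | 'TAT_BREACH' | null;

// Decide which reminder (if any) applies for the current TAT usage.
// threshold1 / threshold2 map to TAT_REMINDER_THRESHOLD_1 / _2 (defaults 50 and 75).
export function reminderFor(
  elapsedHours: number,
  allocatedHours: number,
  threshold1 = 50,
  threshold2 = 75
): Reminder {
  const pct = (elapsedHours / allocatedHours) * 100;
  if (pct >= 100) return 'TAT_BREACH';
  if (pct >= threshold2) return 'ESCALATION_WARNING';
  if (pct >= threshold1) return 'GENTLE_REMINDER';
  return null;
}
```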
---
## 2. **Document Policy** (3 Settings)
Settings that control file uploads and document management.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Max File Size | `MAX_FILE_SIZE_MB` | Number | 10 | 1-100 | Maximum file upload size in MB |
| Allowed File Types | `ALLOWED_FILE_TYPES` | String | pdf,doc,docx... | - | Comma-separated list of allowed extensions |
| Document Retention Period | `DOCUMENT_RETENTION_DAYS` | Number | 365 | 30-3650 | Days to retain documents after closure |
**UI Component:** Number input + Text input
**Category Color:** Purple 🟣
---
## 3. **AI Configuration** (2 Settings)
Settings for AI-generated conclusion remarks.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Enable AI Remarks | `AI_REMARK_GENERATION_ENABLED` | Boolean | true | - | Toggle AI-generated conclusion remarks |
| Max Remark Characters | `AI_REMARK_MAX_CHARACTERS` | Number | 500 | 100-2000 | Maximum character limit for AI remarks |
**UI Component:** Toggle + Number input
**Category Color:** Pink 💗
---
## 4. **Notification Rules** (3 Settings)
Settings for notification channels and frequency.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Enable Email Notifications | `ENABLE_EMAIL_NOTIFICATIONS` | Boolean | true | - | Send email notifications for events |
| Enable Push Notifications | `ENABLE_PUSH_NOTIFICATIONS` | Boolean | true | - | Send browser push notifications |
| Notification Batch Delay | `NOTIFICATION_BATCH_DELAY_MS` | Number | 5000 | 1000-30000 | Delay (ms) before sending batched notifications |
**UI Component:** Toggle + Number input
**Category Color:** Amber 🟠
---
## 5. **Dashboard Layout** (4 Settings)
Settings to enable/disable KPI cards on dashboard per role.
| Setting | Key | Type | Default | Description |
|---------|-----|------|---------|-------------|
| Show Total Requests | `DASHBOARD_SHOW_TOTAL_REQUESTS` | Boolean | true | Display total requests KPI card |
| Show Open Requests | `DASHBOARD_SHOW_OPEN_REQUESTS` | Boolean | true | Display open requests KPI card |
| Show TAT Compliance | `DASHBOARD_SHOW_TAT_COMPLIANCE` | Boolean | true | Display TAT compliance KPI card |
| Show Pending Actions | `DASHBOARD_SHOW_PENDING_ACTIONS` | Boolean | true | Display pending actions KPI card |
**UI Component:** Toggle switches
**Category Color:** Teal 🟢
---
## 6. **Workflow Sharing Policy** (3 Settings)
Settings to control who can add spectators and share workflows.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Allow Add Spectator | `ALLOW_ADD_SPECTATOR` | Boolean | true | - | Enable users to add spectators |
| Max Spectators | `MAX_SPECTATORS_PER_REQUEST` | Number | 20 | 1-100 | Maximum spectators per workflow |
| Allow External Sharing | `ALLOW_EXTERNAL_SHARING` | Boolean | false | - | Allow sharing with external users |
**UI Component:** Toggle + Number input
**Category Color:** Emerald 💚
---
## 7. **Workflow Limits** (2 Settings)
System limits for workflow structure.
| Setting | Key | Type | Default | Range | Description |
|---------|-----|------|---------|-------|-------------|
| Max Approval Levels | `MAX_APPROVAL_LEVELS` | Number | 10 | 1-20 | Maximum approval levels per workflow |
| Max Participants | `MAX_PARTICIPANTS_PER_REQUEST` | Number | 50 | 2-200 | Maximum total participants per workflow |
**UI Component:** Number input
**Category Color:** Gray ⚪
---
## 📊 Total Settings Summary
| Category | Count | Editable | UI |
|----------|-------|----------|-----|
| TAT Settings | 6 | ✅ All | Number + Slider |
| Document Policy | 3 | ✅ All | Number + Text |
| AI Configuration | 2 | ✅ All | Toggle + Number |
| Notification Rules | 3 | ✅ All | Toggle + Number |
| Dashboard Layout | 4 | ✅ All | Toggle |
| Workflow Sharing | 3 | ✅ All | Toggle + Number |
| Workflow Limits | 2 | ✅ All | Number |
| **TOTAL** | **18** | **18/18** | **All Editable** |
---
## 🎯 SRS Document Compliance
### Required Config Areas (from SRS Section 7):
1. ✅ **TAT Settings** - Default TAT per priority, auto-reminder thresholds
2. ✅ **User Roles** - Covered via Workflow Limits (max participants, levels)
3. ✅ **Notification Rules** - Channels (email/push), frequency (batch delay)
4. ✅ **Document Policy** - Max upload size, allowed types, retention period
5. ✅ **Dashboard Layout** - Enable/disable KPI cards per role
6. ✅ **AI Configuration** - Toggle AI, set max characters
7. ✅ **Workflow Sharing Policy** - Control spectators, external sharing
**All 7 required areas are fully covered!** ✅
---
## 🔧 How to Edit Settings
### **Step 1: Access Settings** (Admin Only)
1. Login as Admin user
2. Navigate to **Settings** from sidebar
3. Click **"System Configuration"** tab
### **Step 2: Select Category**
Choose from 7 category tabs:
- TAT Settings
- Document Policy
- AI Configuration
- Notification Rules
- Dashboard Layout
- Workflow Sharing
- Workflow Limits
### **Step 3: Modify Values**
- **Number fields**: Enter numeric value within allowed range
- **Toggles**: Switch ON/OFF
- **Sliders**: Drag to set percentage
- **Text fields**: Enter comma-separated values
### **Step 4: Save Changes**
1. Click **"Save"** button for each modified setting
2. See success message confirmation
3. Some settings may show **"Requires Restart"** badge
### **Step 5: Reset if Needed**
- Click **"Reset to Default"** to revert any setting
- Confirmation dialog appears before reset
---
## 🚀 Initial Setup
### **First Time Setup:**
1. **Start backend** - Configurations auto-seed on first run:
```bash
cd Re_Backend
npm run dev
```
2. **Check logs** - Should see:
```
⚙️ System configurations initialized
✅ Default configurations seeded (18 settings across 7 categories)
```
3. **Login as Admin** and verify settings are editable
---
## 🗄️ Database Storage
**Table:** `admin_configurations`
**Key Columns:**
- `config_key` - Unique identifier
- `config_category` - Grouping (TAT_SETTINGS, DOCUMENT_POLICY, etc.)
- `config_value` - Current value
- `default_value` - Reset value
- `is_editable` - Whether admin can edit (all are `true`)
- `ui_component` - UI type (toggle, number, slider, text)
- `validation_rules` - JSON with min/max constraints
- `sort_order` - Display order within category
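Expressed as a TypeScript shape, a row in this table looks roughly like the following (an illustration of the columns above, not the actual model definition):

```typescript
// Illustrative row shape for admin_configurations; column types are assumptions.
interface AdminConfiguration {
  config_key: string;                  // unique identifier, e.g. 'MAX_FILE_SIZE_MB'
  config_category: string;             // e.g. 'TAT_SETTINGS', 'DOCUMENT_POLICY'
  config_value: string;                // current value (stored as text, parsed per type)
  default_value: string;               // value restored by "Reset to Default"
  is_editable: boolean;                // true for all 18 settings
  ui_component: 'toggle' | 'number' | 'slider' | 'text';
  validation_rules: { min?: number; max?: number } | null; // JSON constraints
  sort_order: number;                  // display order within the category
}
```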
---
## 🔄 How Settings Are Applied
### **Backend:**
```typescript
import { SYSTEM_CONFIG } from '@config/system.config';
const workStartHour = SYSTEM_CONFIG.WORKING_HOURS.START_HOUR;
// Value is loaded from admin_configurations table
```
### **Frontend:**
```typescript
import { configService } from '@/services/configService';
const config = await configService.getConfig();
const maxFileSize = config.upload.maxFileSizeMB;
// Fetched from backend API: GET /api/v1/config
```
---
## ✅ Benefits
- ✅ **No hardcoded values** - Everything configurable
- ✅ **Admin-friendly UI** - No technical knowledge needed
- ✅ **Validation built-in** - Prevents invalid values
- ✅ **Audit trail** - All changes logged with timestamps
- ✅ **Reset capability** - Can revert to defaults anytime
- ✅ **Real-time effect** - Most changes apply immediately
- ✅ **SRS compliant** - All 7 required areas covered
---
## 📝 Notes
- **User Role Management** is handled separately via user administration (not in this config)
- **Holiday Calendar** has its own dedicated management interface
- All settings have **validation rules** to prevent invalid configurations
- Settings marked **"Requires Restart"** need backend restart to take effect
- Non-admin users cannot see or edit system configurations
---
## 🎯 Result
Your system now has **complete admin configurability** as specified in the SRS document with:
📌 **18 editable settings**
📌 **7 configuration categories**
📌 **100% SRS compliance**
📌 **Admin-friendly UI**
📌 **Database-driven** (not hardcoded)

View File

@ -1,180 +0,0 @@
# AI Conclusion Remark Examples
## ✅ What Makes a Good Conclusion Remark?
A good conclusion remark should:
- **Be concise** (100-200 words)
- **Sound natural** (like a human wrote it, not AI)
- **State the facts** (what was requested, who approved, outcome)
- **Be practical** (suitable for archiving and future reference)
- **Avoid jargon** (no corporate buzzwords or overly formal language)
---
## ❌ BAD Example (Too Formal/Corporate)
```
## Workflow Completion Summary
Request REQ-2025-82736 "testing ai conclusion" has been successfully completed with EXPRESS priority status. The workflow proceeded efficiently through the approval process, achieving completion well within the established timeframes and meeting all required approval criteria.
### Key Highlights:
**Expedited Approval Process**: The request was processed with EXPRESS priority and completed successfully within the designated approval framework
**Efficient Level 1 Approval**: Test User11 provided prompt approval, completing the review in 0.0 hours against the allocated 1-hour TAT, demonstrating exceptional processing efficiency
**Document Compliance**: The approver confirmed that all submitted documentation met the required standards with the comment "Documents are fine i am approving it"
**Streamlined Execution**: The workflow proceeded without requiring additional work notes, discussions, or document revisions, indicating clear initial requirements and proper submission formatting
**Zero Delays**: No bottlenecks or escalations were encountered during the approval process, ensuring optimal workflow performance
The successful completion of this EXPRESS priority request demonstrates the effectiveness of Royal Enfield's approval mechanisms and the commitment of stakeholders to maintain operational efficiency. The workflow concluded with all necessary approvals obtained and compliance requirements satisfied.
```
**Problems:**
- Way too long and verbose
- Overly formal corporate language
- Sounds like AI/marketing material
- Uses buzzwords ("synergy", "streamlined execution", "optimal workflow performance")
- Not practical for quick reference
---
## ✅ GOOD Example (Natural & Practical)
```
Request for testing AI conclusion feature (REQ-2025-82736) was submitted with EXPRESS priority and approved by Test User11 at Level 1. The approver reviewed the submitted documents and confirmed everything was in order, with the comment "Documents are fine i am approving it."
The approval was completed quickly (within the 1-hour TAT), with no revisions or additional documentation required. Request is now closed and ready for implementation.
```
**Why This Works:**
- Concise and to the point (~80 words)
- Sounds like a human wrote it
- States the key facts clearly
- Easy to read and reference later
- Professional but not overly formal
- Mentions the outcome
---
## 💡 Example: Request with Multiple Approvers
### Bad (Too Formal):
```
The multi-level approval workflow demonstrated exceptional efficiency and stakeholder engagement across all hierarchical levels, with each approver providing valuable insights and maintaining adherence to established turnaround time parameters...
```
### Good (Natural):
```
This purchase request (REQ-2025-12345) was approved by all three levels: Rajesh (Department Head), Priya (Finance), and Amit (Director). Rajesh approved the budget allocation, Priya confirmed fund availability, and Amit gave final sign-off. Total processing time was 2.5 days. Purchase order can now be raised.
```
---
## 💡 Example: Request with Work Notes
### Bad (Too Formal):
```
Throughout the approval lifecycle, stakeholders engaged in comprehensive discussions via the work notes functionality, demonstrating collaborative problem-solving and thorough due diligence...
```
### Good (Natural):
```
Marketing campaign request (REQ-2025-23456) approved by Sarah after discussion about budget allocation. Initial request was for ₹50,000, but after work note clarification, it was revised to ₹45,000 to stay within quarterly limits. Campaign is approved to proceed with revised budget.
```
---
## 💡 Example: Rejected Request
### Bad (Too Formal):
```
Following comprehensive review and evaluation against established organizational criteria and resource allocation parameters, the request has been declined due to insufficiency in budgetary justification documentation...
```
### Good (Natural):
```
Equipment purchase request (REQ-2025-34567) was rejected by Finance (Priya). Reason: Budget already exhausted for Q4, and the equipment is not critical for current operations. Initiator can resubmit in Q1 next year with updated cost estimates and business justification.
```
---
## 📝 Template for Writing Good Conclusions
Use this structure:
1. **What was requested**: Brief description and request number
2. **Who approved/rejected**: Name and level/department
3. **Key decision or comment**: Any important feedback from approvers
4. **Outcome**: What happens next or status
**Example:**
```
[What] Request for new laptop (REQ-2025-45678)
[Who] Approved by IT Manager (Suresh) and Finance (Meera)
[Decision] Both approved, Meera confirmed budget is available
[Outcome] Procurement team can proceed with laptop order, estimated delivery in 2 weeks
```
---
## 🎯 Key Differences: AI-Generated vs Human-Written
| AI-Generated (Bad) | Human-Written (Good) |
|-------------------|---------------------|
| "Stakeholder engagement" | "Discussed with..." |
| "Achieved completion well within established timeframes" | "Completed on time" |
| "Demonstrating exceptional processing efficiency" | "Processed quickly" |
| "Optimal workflow performance" | "Everything went smoothly" |
| "The workflow concluded with all necessary approvals obtained" | "All approvals received, request is closed" |
---
## ✅ Updated AI Prompt
The AI service now uses an improved prompt that generates more realistic conclusions:
**Old Prompt:**
- Asked for "professional workflow management assistant"
- Requested "formal and factual" tone
- Asked for corporate language
**New Prompt:**
- Asks AI to "write like an employee documenting the outcome"
- Requests "natural and human-written" style
- Explicitly forbids "corporate jargon or buzzwords"
- Limits length to 100-200 words
- Focuses on practical, archival value
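For illustration, the instruction sent to the model might be assembled along these lines (the wording reflects the rules above; it is not the exact prompt used by the service):

```typescript
// Hypothetical prompt builder reflecting the new style rules; wording is illustrative.
function buildConclusionPrompt(requestSummary: string): string {
  return [
    'You are an employee documenting the outcome of a completed workflow request.',
    'Write a conclusion remark of 100-200 words.',
    'Sound natural and human-written; avoid corporate jargon and buzzwords.',
    'State what was requested, who approved or rejected it, key comments, and the outcome.',
    'The text will be archived for future reference, so keep it practical and factual.',
    '',
    `Request details:\n${requestSummary}`,
  ].join('\n');
}
```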
---
## 🔧 How It Works Now
When you click "Generate Conclusion":
1. **AI analyzes** the request, approvals, work notes, and documents
2. **AI generates** a concise, practical summary (100-200 words)
3. **You review** and can edit it if needed
4. **You finalize** to close the request
The conclusion is now:
- ✅ More realistic and natural
- ✅ Concise and to the point
- ✅ Professional but not stuffy
- ✅ Suitable for archiving
- ✅ Easy to read and reference
---
## 💬 Feedback
If the AI still generates overly formal conclusions, you can always:
1. **Edit it** directly in the text area
2. **Simplify** the language before finalizing
3. **Rewrite** key sections to sound more natural
The goal is a conclusion that **you would actually write yourself** if you were closing the request.

View File

@ -1,309 +0,0 @@
# AI Provider Configuration Guide
The Workflow Management System supports multiple AI providers for generating conclusion remarks. The system uses a **provider-agnostic architecture** with automatic fallback, making it easy to switch between providers.
## Supported Providers
| Provider | Environment Variable | Model Used | Installation |
|----------|---------------------|------------|--------------|
| **Claude (Anthropic)** | `CLAUDE_API_KEY` or `ANTHROPIC_API_KEY` | `claude-3-5-sonnet-20241022` | `npm install @anthropic-ai/sdk` |
| **OpenAI (GPT)** | `OPENAI_API_KEY` | `gpt-4o` | `npm install openai` |
| **Gemini (Google)** | `GEMINI_API_KEY` or `GOOGLE_AI_API_KEY` | `gemini-1.5-pro` | `npm install @google/generative-ai` |
---
## Quick Start
### Option 1: Claude (Recommended)
```bash
# Install package
npm install @anthropic-ai/sdk
# Set environment variable
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
```
### Option 2: OpenAI
```bash
# Install package
npm install openai
# Set environment variable
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```
### Option 3: Gemini
```bash
# Install package
npm install @google/generative-ai
# Set environment variable
AI_PROVIDER=gemini
GEMINI_API_KEY=xxxxxxxxxxxxx
```
---
## Configuration
### 1. Set Preferred Provider (Optional)
Add to your `.env` file:
```bash
# Preferred AI provider (claude, openai, or gemini)
# Default: claude
AI_PROVIDER=claude
```
### 2. Add API Key
Add the corresponding API key for your chosen provider:
```bash
# For Claude (Anthropic)
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
# For OpenAI
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
# For Gemini (Google)
GEMINI_API_KEY=xxxxxxxxxxxxx
```
---
## Automatic Fallback
The system has built-in intelligence to handle provider failures:
1. **Primary**: Tries the provider specified in `AI_PROVIDER`
2. **Fallback**: If primary fails, tries other available providers in order
3. **Graceful Degradation**: If no provider is available, shows error to user
**Example Startup Logs:**
```
info: [AI Service] Preferred provider: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```
**Example Fallback:**
```
info: [AI Service] Preferred provider: openai
warn: [AI Service] OpenAI API key not configured.
warn: [AI Service] Preferred provider unavailable. Trying fallbacks...
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Using fallback provider: Claude (Anthropic)
```
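A condensed sketch of that selection order (the provider map and method names are illustrative; see the provider interface later in this guide):

```typescript
// Illustrative fallback selection mirroring the behaviour described above.
type Provider = { isAvailable(): boolean; getProviderName(): string };

function pickProvider(preferred: string, providers: Record<string, Provider>): Provider | null {
  const primary = providers[preferred];
  if (primary?.isAvailable()) return primary;            // 1. primary: the AI_PROVIDER choice

  for (const candidate of Object.values(providers)) {    // 2. fallback: any other configured provider
    if (candidate.isAvailable()) return candidate;
  }
  return null;                                           // 3. graceful degradation: AI unavailable
}
```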
---
## Provider Comparison
### Claude (Anthropic)
- ✅ **Best for**: Professional, well-structured summaries
- ✅ **Strengths**: Excellent at following instructions, consistent output
- ✅ **Pricing**: Moderate (pay-per-token)
- ⚠️ **Requires**: API key from console.anthropic.com
### OpenAI (GPT-4)
- ✅ **Best for**: General-purpose text generation
- ✅ **Strengths**: Fast, widely adopted, good documentation
- ✅ **Pricing**: Moderate to high
- ⚠️ **Requires**: API key from platform.openai.com
### Gemini (Google)
- ✅ **Best for**: Cost-effective solution
- ✅ **Strengths**: Free tier available, good performance
- ✅ **Pricing**: Free tier + paid tiers
- ⚠️ **Requires**: API key from ai.google.dev
---
## Switching Providers
### Option A: Simple Switch (via .env)
Just change the `AI_PROVIDER` variable and restart the server:
```bash
# Old
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
# New
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```
```bash
# Restart backend
npm run dev
```
### Option B: Multi-Provider Setup (Automatic Failover)
Configure multiple API keys for automatic failover:
```bash
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY=xxxxxxxxxxxxx
```
If Claude fails, the system automatically tries OpenAI, then Gemini.
---
## Testing AI Generation
### 1. Check if AI is configured:
```bash
curl http://localhost:5000/api/v1/health
```
Look for logs:
```
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```
### 2. Test conclusion generation:
1. Create a workflow request
2. Complete all approvals (as final approver)
3. As initiator, click "Finalize & Close Request"
4. Click "Generate with AI"
5. Review AI-generated conclusion
6. Edit if needed
7. Finalize
---
## Troubleshooting
### Error: "AI Service not configured"
**Solution**: Add at least one API key to `.env`:
```bash
CLAUDE_API_KEY=your-key-here
# OR
OPENAI_API_KEY=your-key-here
# OR
GEMINI_API_KEY=your-key-here
```
### Error: "Cannot find module '@anthropic-ai/sdk'"
**Solution**: Install the required package:
```bash
npm install @anthropic-ai/sdk
```
### Provider not working
**Check logs** for initialization errors:
```bash
# Successful
info: [AI Service] ✅ Claude provider initialized
# Failed
error: [AI Service] Failed to initialize Claude: Invalid API key
```
**Verify API key**:
- Claude: Should start with `sk-ant-`
- OpenAI: Should start with `sk-proj-` or `sk-`
- Gemini: No specific prefix
---
## Cost Management
### Estimated Costs (per conclusion generation):
| Provider | Tokens | Cost (approx) |
|----------|--------|---------------|
| Claude Sonnet | ~500 input + ~300 output | $0.004 |
| GPT-4o | ~500 input + ~300 output | $0.005 |
| Gemini Pro | ~500 input + ~300 output | Free tier or $0.0001 |
**Tips to reduce costs:**
- Use Gemini for development/testing (free tier)
- Use Claude/OpenAI for production
- Monitor usage via provider dashboards
---
## Security Best Practices
1. **Never commit API keys** to version control
2. **Use environment variables** for all sensitive data
3. **Rotate keys regularly** (every 3-6 months)
4. **Set rate limits** on provider dashboards
5. **Monitor usage** to detect anomalies
---
## Adding a New Provider
To add a new AI provider (e.g., Cohere, Hugging Face):
1. **Create Provider Class**:
```typescript
// Logger import path is assumed; use the project's own logger module.
import { logger } from '../utils/logger';

class NewProvider implements AIProvider {
  private client: any = null;

  constructor() {
    const apiKey = process.env.NEW_PROVIDER_API_KEY;
    if (!apiKey) return;
    try {
      // Loaded lazily so a missing SDK does not crash the service.
      const SDK = require('new-provider-sdk');
      this.client = new SDK({ apiKey });
    } catch (error) {
      logger.error('Failed to initialize NewProvider:', error);
    }
  }

  async generateText(prompt: string): Promise<string> {
    // Call the provider's SDK here; the method name depends on the SDK you integrate.
    const response = await this.client.complete({ prompt });
    return response.text;
  }

  isAvailable(): boolean {
    return this.client !== null;
  }

  getProviderName(): string {
    return 'NewProvider';
  }
}
```
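For reference, the `AIProvider` contract implemented above would look roughly like this (a sketch inferred from the methods used in this guide, not a copy of the actual source):

```typescript
// Assumed shape of the AIProvider contract used by the AI service.
export interface AIProvider {
  /** Generate text (e.g. a conclusion remark) for the given prompt. */
  generateText(prompt: string): Promise<string>;
  /** True when the provider has a valid API key and an initialized client. */
  isAvailable(): boolean;
  /** Human-readable name, e.g. "Claude (Anthropic)". */
  getProviderName(): string;
}
```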
2. **Register in AIService**:
Add to constructor's switch statement and fallback array.
3. **Update Documentation**: Add to this README.
---
## Support
For issues with AI providers:
- **Claude**: https://docs.anthropic.com
- **OpenAI**: https://platform.openai.com/docs
- **Gemini**: https://ai.google.dev/docs
For system-specific issues, check application logs or contact the development team.

View File

@ -1,264 +0,0 @@
# 🔑 API Key Troubleshooting Guide
## ⚠️ Problem: Getting 404 Errors for ALL Claude Models
If you're getting 404 "model not found" errors for **multiple Claude models**, the issue is likely with your Anthropic API key, not the model versions.
---
## 🔍 Step 1: Verify Your API Key
### Check Your API Key Status
1. Go to: https://console.anthropic.com/
2. Log in to your account
3. Navigate to **Settings** → **API Keys**
4. Check:
- ✅ Is your API key active?
- ✅ Does it have an active billing method?
- ✅ Have you verified your email?
- ✅ Are there any usage limits or restrictions?
### API Key Tiers
Anthropic has different API access tiers:
| Tier | Access Level | Requirements |
|------|-------------|--------------|
| **Free Trial** | Limited models, low usage | Email verification |
| **Paid Tier 1** | All Claude 3 models | Add payment method, some usage |
| **Paid Tier 2+** | All models + higher limits | More usage history |
**If you just created your API key:**
- You might need to add a payment method
- You might need to make a small payment first
- Some models might not be available immediately
---
## 🎯 Step 2: Try the Most Basic Model (Claude 3 Haiku)
I've changed the default to **`claude-3-haiku-20240307`** - this should work with ANY valid API key.
### Restart Your Backend
**IMPORTANT:** You must restart the server for changes to take effect.
```bash
# Stop the current server (Ctrl+C)
# Then start again:
cd Re_Backend
npm run dev
```
### Check the Startup Logs
Look for this line:
```
[AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
```
### Test Again
Try generating a conclusion. You should see in logs:
```
[AI Service] Generating with Claude model: claude-3-haiku-20240307
```
---
## 🔧 Step 3: Check for Environment Variable Overrides
Your `.env` file might be overriding the default model.
### Check Your `.env` File
Open `Re_Backend/.env` and look for:
```bash
CLAUDE_MODEL=...
```
**If it exists:**
1. **Delete or comment it out** (add `#` at the start)
2. **Or change it to Haiku:**
```bash
CLAUDE_MODEL=claude-3-haiku-20240307
```
3. **Restart the server**
---
## 🐛 Step 4: Verify API Key is Loaded
Add this temporary check to see if your API key is being loaded:
### Option A: Check Logs on Startup
When you start the server, you should see:
```
[AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
```
If you DON'T see this:
- Your API key might be missing or invalid
- Check `.env` file has: `CLAUDE_API_KEY=sk-ant-api03-...`
### Option B: Test API Key Manually
Create a test file `Re_Backend/test-api-key.js`:
```javascript
const Anthropic = require('@anthropic-ai/sdk');
require('dotenv').config();
const apiKey = process.env.CLAUDE_API_KEY || process.env.ANTHROPIC_API_KEY;
console.log('API Key found:', apiKey ? 'YES' : 'NO');
console.log('API Key starts with:', apiKey ? apiKey.substring(0, 20) + '...' : 'N/A');
async function testKey() {
try {
const client = new Anthropic({ apiKey });
// Try the most basic model
const response = await client.messages.create({
model: 'claude-3-haiku-20240307',
max_tokens: 100,
messages: [{ role: 'user', content: 'Say hello' }]
});
console.log('✅ API Key works!');
console.log('Response:', response.content[0].text);
} catch (error) {
console.error('❌ API Key test failed:', error.message);
console.error('Error details:', error);
}
}
testKey();
```
Run it:
```bash
cd Re_Backend
node test-api-key.js
```
---
## 💡 Step 5: Alternative - Use OpenAI or Gemini
If your Anthropic API key has issues, you can switch to another provider:
### Option A: Use OpenAI
1. **Get OpenAI API key** from: https://platform.openai.com/api-keys
2. **Add to `.env`:**
```bash
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
```
3. **Install OpenAI SDK:**
```bash
cd Re_Backend
npm install openai
```
4. **Restart server**
### Option B: Use Google Gemini
1. **Get Gemini API key** from: https://makersuite.google.com/app/apikey
2. **Add to `.env`:**
```bash
AI_PROVIDER=gemini
GEMINI_API_KEY=...
```
3. **Install Gemini SDK:**
```bash
cd Re_Backend
npm install @google/generative-ai
```
4. **Restart server**
---
## 🎯 Quick Checklist
- [ ] My Anthropic API key is valid and active
- [ ] I have a payment method added (if required)
- [ ] My email is verified
- [ ] I've deleted/commented out `CLAUDE_MODEL` from `.env` (or set it to haiku)
- [ ] I've **restarted the backend server completely**
- [ ] I see the correct model in startup logs
- [ ] I've tested with the test script above
---
## 🆘 Still Not Working?
### Check Your API Key Format
Valid format: `sk-ant-api03-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
- Must start with `sk-ant-`
- Must be quite long (80+ characters)
- No spaces or line breaks
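If you want to sanity-check a key before saving it (for example in the admin panel), a quick heuristic based on the format rules above could be:

```typescript
// Heuristic shape check for an Anthropic API key, based on the format notes above.
export function looksLikeAnthropicKey(key: string): boolean {
  const trimmed = key.trim();
  return (
    trimmed.startsWith('sk-ant-') && // must start with sk-ant-
    trimmed.length >= 80 &&          // keys are quite long (80+ characters)
    !/\s/.test(trimmed)              // no spaces or line breaks
  );
}
```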
### Get a New API Key
1. Go to https://console.anthropic.com/settings/keys
2. Delete old key
3. Create new key
4. Add payment method if prompted
5. Update `.env` with new key
6. Restart server
### Contact Anthropic Support
If nothing works:
- Email: support@anthropic.com
- Check: https://status.anthropic.com/ (for service issues)
- Community: https://anthropic.com/community
---
## 🎯 Current System Default
The system now defaults to:
```
claude-3-haiku-20240307
```
This is the **most basic Claude model** that should work with **any valid API key**, even free tier.
If even Haiku doesn't work, there's a fundamental issue with your Anthropic API key or account status.
---
## ✅ Success Indicators
When everything is working correctly, you should see:
1. **On server startup:**
```
[AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
```
2. **When generating conclusion:**
```
[AI Service] Generating with Claude model: claude-3-haiku-20240307
```
3. **In response:**
```
[AI Service] ✅ Conclusion generated successfully
```
No 404 errors! ✅

View File

@ -1,134 +0,0 @@
# Claude Model Versions - Quick Reference
## ✅ Current Claude Model (November 2025)
### Claude 4 Models (Latest)
- **`claude-sonnet-4-20250514`** ← **DEFAULT & CURRENT**
- Latest Claude Sonnet 4 model
- Released: May 14, 2025
- Best for complex reasoning and conclusion generation
- **This is what your API key supports**
## ⚠️ Deprecated Models (Do NOT Use)
The following Claude 3 models are deprecated and no longer available:
- ❌ `claude-3-opus-20240229` - Deprecated
- ❌ `claude-3-sonnet-20240229` - Deprecated
- ❌ `claude-3-haiku-20240307` - Deprecated
- ❌ `claude-3-5-sonnet-20240620` - Deprecated
**These will return 404 errors.**
---
## 🎯 What Happened?
All Claude 3 and 3.5 models have been deprecated and replaced with Claude 4.
**Your API key is current and working perfectly** - it just needs the **current model version**.
---
## 🔧 How to Change the Model
### Option 1: Environment Variable (Recommended)
Add to your `.env` file:
```bash
# Use Claude Sonnet 4 (current default)
CLAUDE_MODEL=claude-sonnet-4-20250514
# This is the ONLY model that currently works
```
### Option 2: Admin Configuration (Future)
The model can also be configured via the admin panel under AI settings.
---
## 🐛 Troubleshooting 404 Errors
If you get a 404 error like:
```
model: claude-3-5-sonnet-20241029
{"type":"error","error":{"type":"not_found_error","message":"model: ..."}
```
**Solutions:**
1. **Check your `.env` file** for `CLAUDE_MODEL` variable
2. **Remove or update** any invalid model version
3. **Restart the backend** server after changing `.env`
4. **Check server logs** on startup to see which model is being used:
```
[AI Service] ✅ Claude provider initialized with model: claude-sonnet-4-20250514
```
---
## 📊 Current Default
The system now defaults to:
```
claude-sonnet-4-20250514
```
This is the **current Claude 4 model** (November 2025) and the only one that works with active API keys.
---
## 🔑 API Key Requirements
Make sure you have a valid Anthropic API key in your `.env`:
```bash
CLAUDE_API_KEY=sk-ant-api03-...
# OR
ANTHROPIC_API_KEY=sk-ant-api03-...
```
Get your API key from: https://console.anthropic.com/
---
## 📝 Model Selection Guide
| Use Case | Recommended Model | Notes |
|----------|------------------|-------|
| **All use cases** | **`claude-sonnet-4-20250514`** | **Only model currently available** |
| Older models | ❌ Deprecated | Will return 404 errors |
---
## 🎯 Quick Fix for 404 Errors
If you're getting 404 errors (model not found):
**Your `.env` is most likely pointing at a deprecated Claude 3 / 3.5 model.**
### Solution: Use the current default (Claude Sonnet 4)
1. **Remove or update** any `CLAUDE_MODEL` entry in `.env`:
```bash
# Either delete the line entirely, or set it to the current model
CLAUDE_MODEL=claude-sonnet-4-20250514
```
2. **Restart backend**:
```bash
npm run dev
```
3. **Check logs** for confirmation:
```
[AI Service] ✅ Claude provider initialized with model: claude-sonnet-4-20250514
```
4. **Test again** - Should work now! ✅

View File

@ -1,428 +0,0 @@
# Conclusion Remark Feature - Implementation Guide
## Overview
The **Conclusion Remark** feature allows the initiator to review and finalize a professional summary after all approvals are complete. The system uses **AI-powered generation** with support for multiple LLM providers.
---
## ✅ What's Implemented
### 1. **Database Layer**
- ✅ `conclusion_remarks` table created
- ✅ Stores AI-generated and final remarks
- ✅ Tracks edits, confidence scores, and KPIs
- ✅ One-to-one relationship with `workflow_requests`
### 2. **Backend Services**
- ✅ **Multi-provider AI service** (Claude, OpenAI, Gemini)
- ✅ Automatic fallback if primary provider fails
- ✅ Professional prompt engineering
- ✅ Key discussion points extraction
- ✅ Confidence scoring
### 3. **API Endpoints**
- ✅ `POST /api/v1/conclusions/:requestId/generate` - Generate AI remark
- ✅ `PUT /api/v1/conclusions/:requestId` - Update/edit remark
- ✅ `POST /api/v1/conclusions/:requestId/finalize` - Finalize & close
- ✅ `GET /api/v1/conclusions/:requestId` - Get conclusion
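A minimal example of calling two of these endpoints from the frontend (the request body and response field names are assumptions for illustration):

```typescript
// Illustrative client calls for the conclusion endpoints listed above.
const API_BASE = '/api/v1/conclusions';

export async function generateConclusion(requestId: string, token: string) {
  const res = await fetch(`${API_BASE}/${requestId}/generate`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json(); // expected: { success, data: { ai_generated_remark, ai_confidence_score, ... } }
}

export async function finalizeConclusion(requestId: string, finalRemark: string, token: string) {
  const res = await fetch(`${API_BASE}/${requestId}/finalize`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ final_remark: finalRemark }), // field name assumed
  });
  return res.json();
}
```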
### 4. **Frontend Components**
- ✅ `ConclusionRemarkModal` with 3-step wizard
- ✅ AI generation button with loading states
- ✅ Manual entry option
- ✅ Edit and preview functionality
- ✅ Closure banner in RequestDetail
### 5. **Workflow Integration**
- ✅ Final approver triggers notification to initiator
- ✅ Green banner appears for approved requests
- ✅ Status changes from APPROVED → CLOSED on finalization
- ✅ Activity logging for audit trail
---
## 🎯 User Flow
### Step 1: Final Approval
```
Final Approver → Clicks "Approve Request"
System → Marks request as APPROVED
System → Sends notification to Initiator:
"Request Approved - Closure Pending"
```
### Step 2: Initiator Reviews Request
```
Initiator → Opens request detail
System → Shows green closure banner:
"All approvals complete! Finalize conclusion to close."
Initiator → Clicks "Finalize & Close Request"
```
### Step 3: AI Generation
```
Modal Opens → 3 options:
1. Generate with AI (recommended)
2. Write Manually
3. Cancel
Initiator → Clicks "Generate with AI"
System → Analyzes:
- Approval flow & comments
- Work notes & discussions
- Uploaded documents
- Activity timeline
AI → Generates professional conclusion (150-300 words)
```
### Step 4: Review & Edit
```
AI Remark Displayed
Initiator → Reviews AI suggestion
Options:
- Accept as-is → Click "Preview & Continue"
- Edit remark → Modify text → Click "Preview & Continue"
- Regenerate → Click "Regenerate" for new version
```
### Step 5: Finalize
```
Preview Screen → Shows final remark + next steps
Initiator → Clicks "Finalize & Close Request"
System Actions:
✅ Save final remark to database
✅ Update request status to CLOSED
✅ Set closure_date timestamp
✅ Log activity "Request Closed"
✅ Notify all participants
✅ Move to Closed Requests
```
---
## 📊 Database Schema
```sql
CREATE TABLE conclusion_remarks (
conclusion_id UUID PRIMARY KEY,
request_id UUID UNIQUE REFERENCES workflow_requests(request_id),
-- AI Generation
ai_generated_remark TEXT,
ai_model_used VARCHAR(100), -- e.g., "Claude (Anthropic)"
ai_confidence_score DECIMAL(5,2), -- 0.00 to 1.00
-- Final Version
final_remark TEXT,
edited_by UUID REFERENCES users(user_id),
is_edited BOOLEAN DEFAULT false,
edit_count INTEGER DEFAULT 0,
-- Context Summaries (for KPIs)
approval_summary JSONB,
document_summary JSONB,
key_discussion_points TEXT[],
-- Timestamps
generated_at TIMESTAMP,
finalized_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
```
---
## 🔌 AI Provider Setup
### Environment Variables
```bash
# Choose provider (claude, openai, or gemini)
AI_PROVIDER=claude
# API Keys (configure at least one)
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY=xxxxxxxxxxxxx
```
### Provider Priority
1. **Primary**: Provider specified in `AI_PROVIDER`
2. **Fallback 1**: Claude (if available)
3. **Fallback 2**: OpenAI (if available)
4. **Fallback 3**: Gemini (if available)
### Installation
Install your chosen provider's SDK:
```bash
# For Claude
npm install @anthropic-ai/sdk
# For OpenAI
npm install openai
# For Gemini
npm install @google/generative-ai
```
---
## 📋 KPI Tracking
The `conclusion_remarks` table enables powerful analytics:
### 1. AI Adoption Rate
```sql
SELECT
COUNT(CASE WHEN ai_generated_remark IS NOT NULL THEN 1 END) as ai_generated,
COUNT(*) as total,
ROUND(COUNT(CASE WHEN ai_generated_remark IS NOT NULL THEN 1 END)::DECIMAL / COUNT(*) * 100, 2) as adoption_rate
FROM conclusion_remarks
WHERE finalized_at IS NOT NULL;
```
### 2. Edit Frequency
```sql
SELECT
COUNT(CASE WHEN is_edited = true THEN 1 END) as edited,
COUNT(*) as total,
AVG(edit_count) as avg_edits_per_conclusion
FROM conclusion_remarks;
```
### 3. Average Confidence Score
```sql
SELECT
AVG(ai_confidence_score) as avg_confidence,
MIN(ai_confidence_score) as min_confidence,
MAX(ai_confidence_score) as max_confidence
FROM conclusion_remarks
WHERE ai_generated_remark IS NOT NULL;
```
### 4. Conclusion Length Analysis
```sql
SELECT
AVG(LENGTH(final_remark)) as avg_length,
MAX(LENGTH(final_remark)) as max_length,
MIN(LENGTH(final_remark)) as min_length
FROM conclusion_remarks
WHERE final_remark IS NOT NULL;
```
### 5. Provider Usage
```sql
SELECT
ai_model_used,
COUNT(*) as usage_count,
AVG(ai_confidence_score) as avg_confidence
FROM conclusion_remarks
WHERE ai_model_used IS NOT NULL
GROUP BY ai_model_used;
```
---
## 🎨 Frontend UI
### Closure Banner (RequestDetail)
```
┌─────────────────────────────────────────────────┐
│ ✅ Request Approved - Closure Pending │
│ │
│ All approvals are complete! Please review and │
│ finalize the conclusion remark to officially │
│ close this request. │
│ │
│ [✅ Finalize & Close Request] │
└─────────────────────────────────────────────────┘
```
### Conclusion Modal - Step 1: Generate
```
┌─────────────────────────────────────────────────┐
│ 📄 Finalize Request Closure │
├─────────────────────────────────────────────────┤
│ │
│ ✨ AI-Powered Conclusion Generation │
│ │
│ Let AI analyze your request's approval flow, │
│ work notes, and documents to generate a │
│ professional conclusion remark. │
│ │
│ [✨ Generate with AI] [✏️ Write Manually] │
│ │
│ Powered by Claude AI • Analyzes approvals, │
│ work notes & documents │
└─────────────────────────────────────────────────┘
```
### Step 2: Edit
```
┌─────────────────────────────────────────────────┐
│ ✨ AI-Generated Conclusion [85% confidence]│
│ │
│ Key Highlights: │
│ • All 3 approval levels completed successfully │
│ • Request completed within TAT │
│ • 5 documents attached for reference │
│ │
│ Review & Edit Conclusion Remark: │
│ ┌─────────────────────────────────────────────┐ │
│ │ The request for new office location was │ │
│ │ thoroughly reviewed and approved by all │ │
│ │ stakeholders... │ │
│ └─────────────────────────────────────────────┘ │
│ 450 / 2000 │
│ │
│ [✨ Regenerate] [Cancel] [Preview & Continue] │
└─────────────────────────────────────────────────┘
```
### Step 3: Preview
```
┌─────────────────────────────────────────────────┐
│ ✅ Final Conclusion Remark [Edited by You] │
│ ┌─────────────────────────────────────────────┐ │
│ │ The request for new office location... │ │
│ └─────────────────────────────────────────────┘ │
│ │
│ What happens next?                              │
│ • Request status will change to "CLOSED" │
│ • All participants will be notified │
│ • Conclusion remark will be permanently saved │
│ • Request will move to Closed Requests │
│ │
│ [✏️ Edit Again] [✅ Finalize & Close Request] │
└─────────────────────────────────────────────────┘
```
---
## 🔄 Status Transition
```
DRAFT → PENDING → IN_PROGRESS → APPROVED → CLOSED
                                   ↑         ↑
                     (Final Approval)    (Conclusion)
```
**Key States:**
- `APPROVED`: All approvals complete, awaiting conclusion
- `CLOSED`: Conclusion finalized, request archived
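A small sketch of that rule in code (statuses as listed in this guide; the guard itself is illustrative):

```typescript
// Request statuses from the transition diagram above.
type RequestStatus = 'DRAFT' | 'PENDING' | 'IN_PROGRESS' | 'APPROVED' | 'CLOSED';

// A request can only be closed once it is APPROVED and a final remark has been provided.
export function canClose(status: RequestStatus, finalRemark: string | null): boolean {
  return status === 'APPROVED' && !!finalRemark && finalRemark.trim().length > 0;
}
```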
---
## 🧪 Testing
### 1. Setup AI Provider
```bash
# Option A: Claude (Recommended)
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
# Option B: OpenAI
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
# Option C: Gemini (Free tier)
AI_PROVIDER=gemini
GEMINI_API_KEY=xxxxxxxxxxxxx
```
### 2. Run Migration
```bash
cd Re_Backend
npm run migrate
```
### 3. Test Workflow
1. Create workflow request
2. Add approvers
3. Complete all approvals
4. As initiator, click "Finalize & Close"
5. Generate AI conclusion
6. Review, edit, preview
7. Finalize and close
### 4. Verify Database
```sql
-- Check conclusion was created
SELECT * FROM conclusion_remarks WHERE request_id = 'your-request-id';
-- Check request was closed
SELECT status, closure_date, conclusion_remark
FROM workflow_requests
WHERE request_id = 'your-request-id';
```
---
## 🎯 Benefits
### For Users
- ✅ Professional, well-structured conclusion remarks
- ✅ Saves time (AI does the heavy lifting)
- ✅ Consistent format across all requests
- ✅ Can edit/customize AI suggestions
- ✅ Complete control over final content
### For Business
- ✅ Better documentation quality
- ✅ Audit trail of all decisions
- ✅ KPI tracking (AI adoption, edit rates)
- ✅ Vendor flexibility (swap AI providers anytime)
- ✅ Cost optimization (use free tier for testing)
---
## 📝 Notes
- **Required**: At least one AI provider API key must be configured
- **Automatic**: System selects best available provider
- **Flexible**: Switch providers without code changes
- **Graceful**: Falls back to manual entry if AI unavailable
- **Secure**: API keys stored in environment variables only
- **Logged**: All AI generations tracked for audit
---
## 🆘 Support
**AI Provider Issues:**
- Claude: https://docs.anthropic.com
- OpenAI: https://platform.openai.com/docs
- Gemini: https://ai.google.dev/docs
**System Issues:**
Check logs for AI service initialization:
```bash
grep "AI Service" logs/combined.log
```
Expected output:
```
info: [AI Service] Preferred provider: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```

View File

@ -1,363 +0,0 @@
# Royal Enfield Workflow Management System - Configuration Guide
## 📋 Overview
All system configurations are centralized in `src/config/system.config.ts` and can be customized via environment variables.
## ⚙️ Configuration Structure
### 1. **Working Hours**
Controls when TAT tracking is active.
```env
WORK_START_HOUR=9 # 9 AM (default)
WORK_END_HOUR=18 # 6 PM (default)
TZ=Asia/Kolkata # Timezone
```
**Working Days:** Monday - Friday (hardcoded)
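For illustration, a simplified sketch of counting elapsed working hours under these settings (hour granularity only; holidays and `TAT_TEST_MODE` are ignored, and the production TAT engine may differ):

```typescript
// Count the hours between two timestamps that fall inside the working window (Mon-Fri).
// Simplified: walks hour by hour and ignores minutes, holidays and test mode.
export function elapsedWorkingHours(
  from: Date,
  to: Date,
  workStartHour = 9, // WORK_START_HOUR
  workEndHour = 18   // WORK_END_HOUR
): number {
  let hours = 0;
  const cursor = new Date(from);
  while (cursor < to) {
    const day = cursor.getDay();   // 0 = Sunday ... 6 = Saturday
    const hour = cursor.getHours();
    if (day >= 1 && day <= 5 && hour >= workStartHour && hour < workEndHour) {
      hours += 1;
    }
    cursor.setHours(cursor.getHours() + 1);
  }
  return hours;
}
```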
---
### 2. **TAT (Turnaround Time) Settings**
```env
TAT_TEST_MODE=false # Enable for testing (1 hour = 1 minute)
DEFAULT_EXPRESS_TAT=24 # Express priority default TAT (hours)
DEFAULT_STANDARD_TAT=72 # Standard priority default TAT (hours)
```
**TAT Thresholds** (hardcoded):
- 50% - Warning notification
- 75% - Critical notification
- 100% - Breach notification
---
### 3. **File Upload Limits**
```env
MAX_FILE_SIZE_MB=10 # Max file size per upload
MAX_FILES_PER_REQUEST=10 # Max files per request
ALLOWED_FILE_TYPES=pdf,doc,docx,xls,xlsx,ppt,pptx,jpg,jpeg,png,gif,txt
```
---
### 4. **Workflow Limits**
```env
MAX_APPROVAL_LEVELS=10 # Max approval stages
MAX_PARTICIPANTS_PER_REQUEST=50 # Max total participants
MAX_SPECTATORS=20 # Max spectators
```
---
### 5. **Work Notes Configuration**
```env
MAX_MESSAGE_LENGTH=2000 # Max characters per message
MAX_ATTACHMENTS_PER_NOTE=5 # Max files per work note
ENABLE_REACTIONS=true # Allow emoji reactions
ENABLE_MENTIONS=true # Allow @mentions
```
---
### 6. **Redis & Queue**
```env
REDIS_URL=redis://localhost:6379 # Redis connection string
QUEUE_CONCURRENCY=5 # Concurrent job processing
RATE_LIMIT_MAX=10 # Max requests per duration
RATE_LIMIT_DURATION=1000 # Rate limit window (ms)
```
---
### 7. **Security & Session**
```env
JWT_SECRET=your_secret_min_32_characters # JWT signing key
JWT_EXPIRY=8h # Token expiration
SESSION_TIMEOUT_MINUTES=480 # 8 hours
ENABLE_2FA=false # Two-factor authentication
```
---
### 8. **Notifications**
```env
ENABLE_EMAIL_NOTIFICATIONS=true # Email alerts
ENABLE_PUSH_NOTIFICATIONS=true # Browser push
NOTIFICATION_BATCH_DELAY=5000 # Batch delay (ms)
```
**Email SMTP** (if email enabled):
```env
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your_email@royalenfield.com
SMTP_PASSWORD=your_password
SMTP_FROM=noreply@royalenfield.com
```
---
### 9. **Feature Flags**
```env
ENABLE_AI_CONCLUSION=true # AI-generated conclusion remarks
ENABLE_TEMPLATES=false # Template-based workflows (future)
ENABLE_ANALYTICS=true # Dashboard analytics
ENABLE_EXPORT=true # Export to CSV/PDF
```
---
### 10. **Database**
```env
DB_HOST=localhost
DB_PORT=5432
DB_NAME=re_workflow
DB_USER=postgres
DB_PASSWORD=your_password
DB_SSL=false
```
---
### 11. **Storage**
```env
STORAGE_TYPE=local # Options: local, s3, gcs
STORAGE_PATH=./uploads # Local storage path
```
**For S3 (if using cloud storage):**
```env
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret
AWS_REGION=ap-south-1
AWS_S3_BUCKET=re-workflow-documents
```
---
## 🚀 Quick Setup
### Development Environment
1. Copy example configuration:
```bash
cp .env.example .env
```
2. Update critical values:
```env
DB_PASSWORD=your_local_postgres_password
JWT_SECRET=generate_random_32_char_string
REDIS_URL=redis://localhost:6379
```
3. Enable test mode for faster TAT testing:
```env
TAT_TEST_MODE=true # 1 hour = 1 minute
```
---
### Production Environment
1. Set environment to production:
```env
NODE_ENV=production
```
2. Configure secure secrets:
```env
JWT_SECRET=use_very_strong_secret_here
DB_PASSWORD=strong_database_password
```
3. Disable test mode:
```env
TAT_TEST_MODE=false
```
4. Enable SSL:
```env
DB_SSL=true
```
5. Configure email/push notifications with real credentials
---
## 📊 Configuration API
### GET `/api/v1/config`
Returns public (non-sensitive) configuration for frontend.
**Response:**
```json
{
"success": true,
"data": {
"appName": "Royal Enfield Workflow Management",
"appVersion": "1.2.0",
"workingHours": {
"START_HOUR": 9,
"END_HOUR": 18,
"START_DAY": 1,
"END_DAY": 5,
"TIMEZONE": "Asia/Kolkata"
},
"tat": {
"thresholds": {
"warning": 50,
"critical": 75,
"breach": 100
},
"testMode": false
},
"upload": {
"maxFileSizeMB": 10,
"allowedFileTypes": ["pdf", "doc", "docx", ...],
"maxFilesPerRequest": 10
},
"workflow": {
"maxApprovalLevels": 10,
"maxParticipants": 50,
"maxSpectators": 20
},
"workNotes": {
"maxMessageLength": 2000,
"maxAttachmentsPerNote": 5,
"enableReactions": true,
"enableMentions": true
},
"features": {
"ENABLE_AI_CONCLUSION": true,
"ENABLE_TEMPLATES": false,
"ENABLE_ANALYTICS": true,
"ENABLE_EXPORT": true
},
"ui": {
"DEFAULT_THEME": "light",
"DEFAULT_LANGUAGE": "en",
"DATE_FORMAT": "DD/MM/YYYY",
"TIME_FORMAT": "12h",
"CURRENCY": "INR",
"CURRENCY_SYMBOL": "₹"
}
}
}
```
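On the frontend, the payload above can be typed roughly as follows (a partial sketch derived from the example response, not the project's actual type definitions):

```typescript
// Partial type for the public configuration payload shown above.
interface PublicConfig {
  appName: string;
  appVersion: string;
  workingHours: { START_HOUR: number; END_HOUR: number; START_DAY: number; END_DAY: number; TIMEZONE: string };
  tat: { thresholds: { warning: number; critical: number; breach: number }; testMode: boolean };
  upload: { maxFileSizeMB: number; allowedFileTypes: string[]; maxFilesPerRequest: number };
  workflow: { maxApprovalLevels: number; maxParticipants: number; maxSpectators: number };
  workNotes: { maxMessageLength: number; maxAttachmentsPerNote: number; enableReactions: boolean; enableMentions: boolean };
  features: Record<string, boolean>;
  ui: { DEFAULT_THEME: string; DEFAULT_LANGUAGE: string; DATE_FORMAT: string; TIME_FORMAT: string; CURRENCY: string; CURRENCY_SYMBOL: string };
}
```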
---
## 🎯 Usage in Code
### Backend
```typescript
import { SYSTEM_CONFIG } from '@config/system.config';
// Access configuration
const maxLevels = SYSTEM_CONFIG.WORKFLOW.MAX_APPROVAL_LEVELS;
const workStart = SYSTEM_CONFIG.WORKING_HOURS.START_HOUR;
```
### Frontend
```typescript
import { configService } from '@/services/configService';
// Async usage
const config = await configService.getConfig();
const maxFileSize = config.upload.maxFileSizeMB;
// Helper functions
import { getWorkingHours, getTATThresholds } from '@/services/configService';
const workingHours = await getWorkingHours();
```
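The actual `configService` implementation is not shown here; a minimal sketch of what `getConfig()` typically does (fetch `/api/v1/config` once, then serve the cached copy) is below. The endpoint path matches the API above; the interface shape is abbreviated and illustrative.
```typescript
// Simplified sketch of a config service (not the production implementation)
export interface PublicConfig {
  appName: string;
  upload: { maxFileSizeMB: number; allowedFileTypes: string[]; maxFilesPerRequest: number };
  [key: string]: unknown; // remaining sections omitted for brevity
}

let cached: PublicConfig | null = null;

export async function getConfig(): Promise<PublicConfig> {
  if (cached) return cached; // reuse the first successful response
  const res = await fetch('/api/v1/config');
  if (!res.ok) throw new Error(`Config fetch failed: ${res.status}`);
  cached = (await res.json()).data as PublicConfig;
  return cached;
}
```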
---
## 🔐 Security Best Practices
1. **Never commit `.env`** with real credentials
2. **Use strong JWT secrets** (min 32 characters)
3. **Rotate secrets regularly** in production
4. **Use environment-specific configs** for dev/staging/prod
5. **Store secrets in secure vaults** (AWS Secrets Manager, Azure Key Vault)
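A cheap way to enforce the first two points is to fail fast at startup when a weak or missing secret slips through. A hedged sketch (function and file names are illustrative):
```typescript
// validateEnv.ts (illustrative) - call before the server starts listening
export function validateSecrets(): void {
  const secret = process.env.JWT_SECRET ?? '';
  if (secret.length < 32) {
    // Refuse to boot rather than run with a guessable signing key
    throw new Error('JWT_SECRET must be set and at least 32 characters long');
  }
  if (process.env.NODE_ENV === 'production' && process.env.DB_SSL !== 'true') {
    console.warn('DB_SSL is disabled in production; enable it unless the database is on a private network');
  }
}
```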
---
## 📝 Configuration Checklist
### Before Deployment
- [ ] Set `NODE_ENV=production`
- [ ] Configure database with SSL
- [ ] Set strong JWT secret
- [ ] Disable TAT test mode
- [ ] Configure email SMTP
- [ ] Set up Redis connection
- [ ] Configure file storage (local/S3/GCS)
- [ ] Test working hours match business hours
- [ ] Verify TAT thresholds are correct
- [ ] Enable/disable feature flags as needed
---
## 🛠️ Adding New Configuration
1. Add to `system.config.ts`:
```typescript
export const SYSTEM_CONFIG = {
// ...existing config
MY_NEW_SETTING: {
VALUE: process.env.MY_VALUE || 'default',
},
};
```
2. Add to `getPublicConfig()` if needed on frontend:
```typescript
export function getPublicConfig() {
return {
// ...existing
myNewSetting: SYSTEM_CONFIG.MY_NEW_SETTING,
};
}
```
3. Access in code:
```typescript
const value = SYSTEM_CONFIG.MY_NEW_SETTING.VALUE;
```
---
## 📚 Related Files
- `src/config/system.config.ts` - Central configuration
- `src/config/tat.config.ts` - TAT-specific (re-exports from system.config)
- `src/config/constants.ts` - Legacy constants (being migrated)
- `src/routes/config.routes.ts` - Configuration API endpoint
- Frontend: `src/services/configService.ts` - Configuration fetching service
---
## ✅ Benefits of Centralized Configuration
**Single Source of Truth** - All settings in one place
**Environment-based** - Different configs for dev/staging/prod
**Frontend Sync** - Frontend fetches config from backend
**No Hardcoding** - All values configurable via .env
**Type-Safe** - TypeScript interfaces ensure correctness
**Easy Updates** - Change .env without code changes

View File

@ -1,549 +0,0 @@
# KPI Reporting System - Complete Guide
## Overview
This document describes the complete KPI (Key Performance Indicator) reporting system for the Royal Enfield Workflow Management System, including database schema, views, and query examples.
---
## 📊 Database Schema
### 1. TAT Alerts Table (`tat_alerts`)
**Purpose**: Store all TAT notification records for display and KPI analysis
```sql
CREATE TABLE tat_alerts (
alert_id UUID PRIMARY KEY,
request_id UUID REFERENCES workflow_requests(request_id),
level_id UUID REFERENCES approval_levels(level_id),
approver_id UUID REFERENCES users(user_id),
alert_type ENUM('TAT_50', 'TAT_75', 'TAT_100'),
threshold_percentage INTEGER, -- 50, 75, or 100
tat_hours_allocated DECIMAL(10,2),
tat_hours_elapsed DECIMAL(10,2),
tat_hours_remaining DECIMAL(10,2),
level_start_time TIMESTAMP,
alert_sent_at TIMESTAMP DEFAULT NOW(),
expected_completion_time TIMESTAMP,
alert_message TEXT,
notification_sent BOOLEAN DEFAULT true,
notification_channels TEXT[], -- ['push', 'email', 'sms']
is_breached BOOLEAN DEFAULT false,
was_completed_on_time BOOLEAN, -- Set when level completed
completion_time TIMESTAMP,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMP DEFAULT NOW()
);
```
**Key Features**:
- ✅ Tracks every TAT notification sent (50%, 75%, 100%)
- ✅ Records timing information for KPI calculation
- ✅ Stores completion status for compliance reporting
- ✅ Metadata includes request title, approver name, priority
---
## 🎯 KPI Categories & Metrics
### Category 1: Request Volume & Status
| KPI Name | Description | SQL View | Primary Users |
|----------|-------------|----------|---------------|
| Total Requests Created | Count of all workflow requests | `vw_request_volume_summary` | All |
| Open Requests | Requests currently in progress with age | `vw_workflow_aging` | All |
| Approved Requests | Fully approved and closed | `vw_request_volume_summary` | All |
| Rejected Requests | Rejected at any stage | `vw_request_volume_summary` | All |
**Query Examples**:
```sql
-- Total requests created this month
SELECT COUNT(*) as total_requests
FROM vw_request_volume_summary
WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE);
-- Open requests with age
SELECT request_number, title, status, age_hours, status_category
FROM vw_request_volume_summary
WHERE status_category = 'IN_PROGRESS'
ORDER BY age_hours DESC;
-- Approved vs Rejected (last 30 days)
SELECT
status,
COUNT(*) as count,
ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (), 2) as percentage
FROM vw_request_volume_summary
WHERE closure_date >= CURRENT_DATE - INTERVAL '30 days'
AND status IN ('APPROVED', 'REJECTED')
GROUP BY status;
```
---
### Category 2: TAT Efficiency
| KPI Name | Description | SQL View | Primary Users |
|----------|-------------|----------|---------------|
| Average TAT Compliance % | % of workflows completed within TAT | `vw_tat_compliance` | All |
| Avg Approval Cycle Time | Average time from creation to closure | `vw_request_volume_summary` | All |
| Delayed Workflows | Requests currently breaching TAT | `vw_tat_compliance` | All |
**Query Examples**:
```sql
-- Overall TAT compliance rate
SELECT
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 /
NULLIF(COUNT(CASE WHEN completed_within_tat IS NOT NULL THEN 1 END), 0) as compliance_rate,
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) as on_time_count,
COUNT(CASE WHEN completed_within_tat = false THEN 1 END) as breached_count
FROM vw_tat_compliance;
-- Average cycle time by priority
SELECT
priority,
ROUND(AVG(cycle_time_hours), 2) as avg_hours,
ROUND(AVG(cycle_time_hours) / 24, 2) as avg_days,
COUNT(*) as total_requests
FROM vw_request_volume_summary
WHERE closure_date IS NOT NULL
GROUP BY priority;
-- Currently delayed workflows
SELECT
request_number,
approver_name,
level_number,
tat_status,
tat_percentage_used,
remaining_hours
FROM vw_tat_compliance
WHERE tat_status IN ('CRITICAL', 'BREACHED')
AND level_status IN ('PENDING', 'IN_PROGRESS')
ORDER BY tat_percentage_used DESC;
```
---
### Category 3: Approver Load
| KPI Name | Description | SQL View | Primary Users |
|----------|-------------|----------|---------------|
| Pending Actions (My Queue) | Requests awaiting user approval | `vw_approver_performance` | Approvers |
| Approvals Completed | Count of actions in timeframe | `vw_approver_performance` | Approvers |
**Query Examples**:
```sql
-- My pending queue (for specific approver)
SELECT
pending_count,
in_progress_count,
oldest_pending_hours
FROM vw_approver_performance
WHERE approver_id = 'USER_ID_HERE';
-- Approvals completed today
SELECT
approver_name,
COUNT(*) as approvals_today
FROM approval_levels
WHERE action_date >= CURRENT_DATE
AND status IN ('APPROVED', 'REJECTED')
GROUP BY approver_name
ORDER BY approvals_today DESC;
-- Approvals completed this week
SELECT
approver_name,
approved_count,
rejected_count,
(approved_count + rejected_count) as total_actions
FROM vw_approver_performance
ORDER BY total_actions DESC;
```
---
### Category 4: Engagement & Quality
| KPI Name | Description | SQL View | Primary Users |
|----------|-------------|----------|---------------|
| Comments/Work Notes Added | Collaboration activity | `vw_engagement_metrics` | All |
| Attachments Uploaded | Documents added | `vw_engagement_metrics` | All |
**Query Examples**:
```sql
-- Engagement metrics summary
SELECT
engagement_level,
COUNT(*) as requests_count,
AVG(work_notes_count) as avg_comments,
AVG(documents_count) as avg_documents
FROM vw_engagement_metrics
GROUP BY engagement_level;
-- Most active requests (by comments)
SELECT
request_number,
title,
work_notes_count,
documents_count,
spectators_count
FROM vw_engagement_metrics
ORDER BY work_notes_count DESC
LIMIT 10;
-- Document upload trends (last 7 days)
SELECT
DATE(uploaded_at) as date,
COUNT(*) as documents_uploaded
FROM documents
WHERE uploaded_at >= CURRENT_DATE - INTERVAL '7 days'
AND is_deleted = false
GROUP BY DATE(uploaded_at)
ORDER BY date DESC;
```
---
## 📈 Analytical Reports
### 1. Request Lifecycle Report
**Purpose**: End-to-end status with timeline, approvers, and TAT compliance
```sql
SELECT
w.request_number,
w.title,
w.status,
w.priority,
w.submission_date,
w.closure_date,
w.cycle_time_hours / 24 as cycle_days,
al.level_number,
al.approver_name,
al.status as level_status,
al.completed_within_tat,
al.elapsed_hours,
al.tat_hours as allocated_hours,
ta.threshold_percentage as last_alert_threshold,
ta.alert_sent_at as last_alert_time
FROM vw_request_volume_summary w
LEFT JOIN vw_tat_compliance al ON w.request_id = al.request_id
LEFT JOIN vw_tat_alerts_summary ta ON al.level_id = ta.level_id
WHERE w.request_number = 'REQ-YYYY-NNNNN'
ORDER BY al.level_number;
```
**Export**: Can be exported as CSV using `\copy` or application-level export
---
### 2. Approver Performance Report
**Purpose**: Track response time, pending count, TAT compliance by approver
```sql
SELECT
ap.approver_name,
ap.department,
ap.pending_count,
ap.approved_count,
ap.rejected_count,
ROUND(ap.avg_response_time_hours, 2) as avg_response_hours,
ROUND(ap.tat_compliance_percentage, 2) as compliance_percent,
ap.breaches_count,
ROUND(ap.oldest_pending_hours, 2) as oldest_pending_hours
FROM vw_approver_performance ap
WHERE ap.total_assignments > 0
ORDER BY ap.tat_compliance_percentage DESC;
```
**Visualization**: Bar chart or leaderboard
---
### 3. Department-wise Workflow Summary
**Purpose**: Compare requests by department
```sql
SELECT
department,
total_requests,
open_requests,
approved_requests,
rejected_requests,
ROUND(approved_requests * 100.0 / NULLIF(total_requests, 0), 2) as approval_rate,
ROUND(avg_cycle_time_hours / 24, 2) as avg_cycle_days,
express_priority_count,
standard_priority_count
FROM vw_department_summary
WHERE department IS NOT NULL
ORDER BY total_requests DESC;
```
**Visualization**: Pie chart or stacked bar chart
---
### 4. TAT Breach Report
**Purpose**: List all requests that breached TAT with reasons
```sql
SELECT
ta.request_number,
ta.request_title,
ta.priority,
ta.level_number,
u.display_name as approver_name,
ta.threshold_percentage,
ta.alert_sent_at,
ta.expected_completion_time,
ta.completion_time,
ta.was_completed_on_time,
CASE
WHEN ta.completion_time IS NULL THEN 'Still Pending'
WHEN ta.was_completed_on_time = false THEN 'Completed Late'
ELSE 'Completed On Time'
END as status,
ta.response_time_after_alert_hours
FROM vw_tat_alerts_summary ta
LEFT JOIN users u ON ta.approver_id = u.user_id
WHERE ta.is_breached = true
ORDER BY ta.alert_sent_at DESC;
```
**Visualization**: Table with filters
---
### 5. Priority Distribution Report
**Purpose**: Express vs Standard workflows and cycle times
```sql
SELECT
priority,
COUNT(*) as total_requests,
COUNT(CASE WHEN status_category = 'IN_PROGRESS' THEN 1 END) as open_requests,
COUNT(CASE WHEN status_category = 'COMPLETED' THEN 1 END) as completed_requests,
ROUND(AVG(CASE WHEN closure_date IS NOT NULL THEN cycle_time_hours END), 2) as avg_cycle_hours,
ROUND(AVG(CASE WHEN closure_date IS NOT NULL THEN cycle_time_hours / 24 END), 2) as avg_cycle_days
FROM vw_request_volume_summary
GROUP BY priority;
```
**Visualization**: Pie chart + KPI cards
---
### 6. Workflow Aging Report
**Purpose**: Workflows open beyond threshold
```sql
SELECT
request_number,
title,
age_days,
age_category,
current_approver,
current_level_age_hours,
current_level_tat_hours,
current_level_tat_used
FROM vw_workflow_aging
WHERE age_category IN ('AGING', 'CRITICAL')
ORDER BY age_days DESC;
```
**Visualization**: Table with age color-coding
---
### 7. Daily/Weekly Trends
**Purpose**: Track volume and performance trends
```sql
-- Daily KPIs for last 30 days
SELECT
date,
requests_created,
requests_submitted,
requests_closed,
requests_approved,
requests_rejected,
ROUND(avg_completion_time_hours, 2) as avg_completion_hours
FROM vw_daily_kpi_metrics
WHERE date >= CURRENT_DATE - INTERVAL '30 days'
ORDER BY date DESC;
-- Weekly aggregation
SELECT
DATE_TRUNC('week', date) as week_start,
SUM(requests_created) as weekly_created,
SUM(requests_closed) as weekly_closed,
ROUND(AVG(avg_completion_time_hours), 2) as avg_completion_hours
FROM vw_daily_kpi_metrics
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY DATE_TRUNC('week', date)
ORDER BY week_start DESC;
```
**Visualization**: Line chart or area chart
---
## 🔍 TAT Alerts - Display in UI
### Get TAT Alerts for a Request
```sql
-- For displaying in Request Detail screen (like the image shared)
SELECT
ta.alert_type,
ta.threshold_percentage,
ta.alert_sent_at,
ta.alert_message,
ta.tat_hours_elapsed,
ta.tat_hours_remaining,
ta.notification_sent,
CASE
WHEN ta.alert_type = 'TAT_50' THEN '⏳ 50% of TAT elapsed'
WHEN ta.alert_type = 'TAT_75' THEN '⚠️ 75% of TAT elapsed - Escalation warning'
WHEN ta.alert_type = 'TAT_100' THEN '⏰ TAT breached - Immediate action required'
END as alert_title
FROM tat_alerts ta
WHERE ta.request_id = 'REQUEST_ID_HERE'
AND ta.level_id = 'LEVEL_ID_HERE'
ORDER BY ta.created_at ASC;
```
### Display Format (like image):
```
Reminder 1
⏳ 50% of SLA breach reminder has been sent
Reminder sent by system automatically
Sent at: Oct 6 at 2:30 PM
```
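Mapping a `tat_alerts` row from the query above into that reminder text is straightforward; a hedged sketch (column names follow the table definition earlier in this document):
```typescript
// Sketch: format a tat_alerts row for the Request Detail screen
interface TatAlertRow {
  alert_type: 'TAT_50' | 'TAT_75' | 'TAT_100';
  alert_sent_at: Date;
}

const TITLES: Record<TatAlertRow['alert_type'], string> = {
  TAT_50: '⏳ 50% of TAT elapsed',
  TAT_75: '⚠️ 75% of TAT elapsed - Escalation warning',
  TAT_100: '⏰ TAT breached - Immediate action required',
};

export function formatReminder(alert: TatAlertRow, index: number): string {
  const sentAt = alert.alert_sent_at.toLocaleString('en-US', {
    month: 'short', day: 'numeric', hour: 'numeric', minute: '2-digit',
  });
  return [
    `Reminder ${index + 1}`,
    TITLES[alert.alert_type],
    'Reminder sent by system automatically',
    `Sent at: ${sentAt}`,
  ].join('\n');
}
```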
---
## 📊 KPI Dashboard Queries
### Executive Dashboard
```sql
-- Overall KPIs for dashboard cards
SELECT
(SELECT COUNT(*) FROM vw_request_volume_summary WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE)) as requests_this_month,
(SELECT COUNT(*) FROM vw_request_volume_summary WHERE status_category = 'IN_PROGRESS') as open_requests,
(SELECT ROUND(AVG(cycle_time_hours / 24), 2) FROM vw_request_volume_summary WHERE closure_date IS NOT NULL) as avg_cycle_days,
(SELECT ROUND(COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 / NULLIF(COUNT(*), 0), 2) FROM vw_tat_compliance WHERE completed_within_tat IS NOT NULL) as tat_compliance_percent;
```
---
## 🚀 API Endpoint Examples
### Example Service Method (TypeScript)
```typescript
// services/kpi.service.ts
// NOTE: import paths are illustrative; adjust them to match the project layout
import { QueryTypes } from 'sequelize';
import { sequelize } from '../config/database';
import { TatAlert, ApprovalLevel, User } from '../models';
export class KPIService {
/**
* Get Request Volume Summary
*/
async getRequestVolumeSummary(startDate: string, endDate: string) {
const query = `
SELECT
status_category,
COUNT(*) as count
FROM vw_request_volume_summary
WHERE created_at BETWEEN :startDate AND :endDate
GROUP BY status_category
`;
return await sequelize.query(query, {
replacements: { startDate, endDate },
type: QueryTypes.SELECT
});
}
/**
* Get TAT Compliance Rate
*/
async getTATComplianceRate(period: 'daily' | 'weekly' | 'monthly') {
// Map the reporting period to a valid PostgreSQL interval ('1 daily' would be rejected)
const interval = { daily: '1 day', weekly: '1 week', monthly: '1 month' }[period];
const query = `
SELECT
COUNT(CASE WHEN completed_within_tat = true THEN 1 END) * 100.0 /
NULLIF(COUNT(*), 0) as compliance_rate
FROM vw_tat_compliance
WHERE action_date >= NOW() - INTERVAL '${interval}'
`;
return await sequelize.query(query, { type: QueryTypes.SELECT });
}
/**
* Get TAT Alerts for Request
*/
async getTATAlertsForRequest(requestId: string) {
return await TatAlert.findAll({
where: { requestId },
order: [['alertSentAt', 'ASC']],
include: [
{ model: ApprovalLevel, as: 'level' },
{ model: User, as: 'approver' }
]
});
}
}
```
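These service methods are typically exposed through thin Express handlers; the route path and response shape below are assumptions, shown only to illustrate the wiring:
```typescript
// routes/kpi.routes.ts (illustrative wiring; path and middleware are assumptions)
import { Router, Request, Response } from 'express';
import { KPIService } from '../services/kpi.service';

const router = Router();
const kpiService = new KPIService();

router.get('/kpi/request-volume', async (req: Request, res: Response) => {
  try {
    const { startDate, endDate } = req.query as { startDate: string; endDate: string };
    const summary = await kpiService.getRequestVolumeSummary(startDate, endDate);
    res.json({ success: true, data: summary });
  } catch (error) {
    res.status(500).json({ success: false, error: 'Failed to fetch request volume summary' });
  }
});

export default router;
```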
---
## 📋 Maintenance & Performance
### Indexes
All views use indexed columns for optimal performance:
- `request_id`, `level_id`, `approver_id`
- `status`, `created_at`, `alert_sent_at`
- `is_deleted` (for soft deletes)
### Refresh Materialized Views (if needed)
If you convert views to materialized views for better performance:
```sql
-- Refresh all materialized views
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_request_volume_summary;
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_tat_compliance;
-- etc.
```
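If you do switch to materialized views, the refresh can be scheduled from the backend. The sketch below uses `node-cron`, which is an assumption; pg_cron or a BullMQ repeatable job would work just as well:
```typescript
// jobs/refreshKpiViews.ts (sketch; assumes node-cron and a shared Sequelize instance)
import cron from 'node-cron';
import { sequelize } from '../config/database'; // illustrative import path

// Refresh nightly at 01:00 so dashboards read fresh data each morning
cron.schedule('0 1 * * *', async () => {
  await sequelize.query('REFRESH MATERIALIZED VIEW CONCURRENTLY mv_request_volume_summary');
  await sequelize.query('REFRESH MATERIALIZED VIEW CONCURRENTLY mv_tat_compliance');
});
```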
---
## 📖 Related Documentation
- **TAT Notification System**: `TAT_NOTIFICATION_SYSTEM.md`
- **Database Structure**: `backend_structure.txt`
- **API Documentation**: `API_DOCUMENTATION.md`
---
**Last Updated**: November 4, 2025
**Version**: 1.0.0
**Maintained By**: Royal Enfield Workflow Team

View File

@ -1,324 +0,0 @@
# Redis Setup for Windows
## ⚠️ IMPORTANT: Redis Version Requirements
**BullMQ requires Redis version 5.0.0 or higher.**
**DO NOT USE**: Microsoft Archive Redis (https://github.com/microsoftarchive/redis/releases)
- This is **outdated** and only provides Redis 3.x
- **Version 3.0.504 is NOT compatible** with BullMQ
- You will get errors: `Redis version needs to be greater or equal than 5.0.0`
**USE ONE OF THESE METHODS INSTEAD**:
---
## Method 1: Using Memurai (Recommended for Windows) ⭐
Memurai is a **Redis-compatible** server built specifically for Windows with full Redis 6.x+ compatibility.
### Why Memurai?
- ✅ **Native Windows support** - Runs as a Windows service
- ✅ **Redis 6.x+ compatible** - Full feature support
- ✅ **Easy installation** - Just install and run
- ✅ **Free for development** - Free tier available
- ✅ **Production-ready** - Used in enterprise environments
### Installation Steps:
1. **Download Memurai**:
- Visit: https://www.memurai.com/get-memurai
- Download the **Developer Edition** (free)
2. **Install**:
- Run the installer (`Memurai-*.exe`)
- Choose default options
- Memurai will install as a Windows service and start automatically
3. **Verify Installation**:
```powershell
# Check if service is running
Get-Service Memurai
# Should show: Running
# Test connection
memurai-cli ping
# Should return: PONG
# Check version (should be 6.x or 7.x)
memurai-cli --version
```
4. **Configuration**:
- Default port: **6379**
- Connection string: `redis://localhost:6379`
- Service runs automatically on Windows startup
- No additional configuration needed for development
## Method 2: Using Docker Desktop (Alternative) 🐳
If you have Docker Desktop installed, this is the easiest method to get Redis 7.x.
### Installation Steps:
1. **Install Docker Desktop** (if not already installed):
- Download from: https://www.docker.com/products/docker-desktop
- Install and start Docker Desktop
2. **Start Redis Container**:
```powershell
# Run Redis 7.x in a container
docker run -d --name redis-tat -p 6379:6379 redis:7-alpine
# Or if you want it to restart automatically:
docker run -d --name redis-tat -p 6379:6379 --restart unless-stopped redis:7-alpine
```
3. **Verify**:
```powershell
# Check if container is running
docker ps | Select-String redis
# Check Redis version
docker exec redis-tat redis-server --version
# Should show: Redis server v=7.x.x
# Test connection
docker exec redis-tat redis-cli ping
# Should return: PONG
```
4. **Stop/Start Redis**:
```powershell
# Stop Redis
docker stop redis-tat
# Start Redis
docker start redis-tat
# Remove container (if needed)
docker rm -f redis-tat
```
## Method 3: Using WSL2 (Windows Subsystem for Linux)
1. **Enable WSL2**:
```powershell
wsl --install
```
2. **Install Redis in WSL**:
```bash
sudo apt update
sudo apt install redis-server
sudo service redis-server start
```
3. **Verify**:
```bash
redis-cli ping
# Should return: PONG
```
## Quick Test
After starting Redis, test the connection:
```powershell
# If you have redis-cli or memurai-cli
redis-cli ping
# Or use telnet
Test-NetConnection -ComputerName localhost -Port 6379
```
## Troubleshooting
### ❌ Error: "Redis version needs to be greater or equal than 5.0.0 Current: 3.0.504"
**Problem**: You're using Microsoft Archive Redis (version 3.x) which is **too old** for BullMQ.
**Solution**:
1. **Stop the old Redis**:
```powershell
# Find and stop the old Redis process
Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
```
2. **Uninstall/Remove old Redis** (if installed as service):
```powershell
# Check if running as service
Get-Service | Where-Object {$_.Name -like "*redis*"}
```
3. **Install one of the recommended methods**:
- **Option A**: Install Memurai (Recommended) - See Method 1 above
- **Option B**: Use Docker - See Method 2 above
- **Option C**: Use WSL2 - See Method 3 above
4. **Verify new Redis version**:
```powershell
# For Memurai
memurai-cli --version
# Should show: 6.x or 7.x
# For Docker
docker exec redis-tat redis-server --version
# Should show: Redis server v=7.x.x
```
5. **Restart your backend server**:
```powershell
# The TAT worker will now detect the correct Redis version
npm run dev
```
### Port Already in Use
```powershell
# Check what's using port 6379
netstat -ano | findstr :6379
# Kill the process if needed (replace <PID> with actual process ID)
taskkill /PID <PID> /F
# Or if using old Redis, stop it:
Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
```
### Service Not Starting (Memurai)
```powershell
# Start Memurai service
net start Memurai
# Check service status
Get-Service Memurai
# Check logs
Get-EventLog -LogName Application -Source Memurai -Newest 10
# Restart service
Restart-Service Memurai
```
### Docker Container Not Starting
```powershell
# Check Docker is running
docker ps
# Check Redis container logs
docker logs redis-tat
# Restart container
docker restart redis-tat
# Remove and recreate if needed
docker rm -f redis-tat
docker run -d --name redis-tat -p 6379:6379 redis:7-alpine
```
### Cannot Connect to Redis
```powershell
# Test connection
Test-NetConnection -ComputerName localhost -Port 6379
# For Memurai
memurai-cli ping
# For Docker
docker exec redis-tat redis-cli ping
```
## Configuration
### Environment Variable
Add to your `.env` file:
```env
REDIS_URL=redis://localhost:6379
```
### Default Settings
- **Port**: `6379`
- **Host**: `localhost`
- **Connection String**: `redis://localhost:6379`
- No authentication required for local development
- Default configuration works out of the box
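Whichever method you pick, the backend will connect to it through `ioredis` using this URL. A minimal sketch that also prints the server version (so you can confirm it is 5.0 or newer before BullMQ complains) could look like this; it assumes `ioredis` is installed and `REDIS_URL` is set as above:
```typescript
// redis-check.ts (sketch): verify the server the backend will actually talk to
import IORedis from 'ioredis';

async function checkRedis(): Promise<void> {
  const redis = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379');
  const info = await redis.info('server');
  const version = /redis_version:(\S+)/.exec(info)?.[1] ?? 'unknown';
  console.log(`Connected. Redis version: ${version}`); // BullMQ needs 5.0.0 or higher
  await redis.quit();
}

checkRedis().catch((err) => {
  console.error('Redis check failed:', err);
  process.exit(1);
});
```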
## Verification After Setup
After installing Redis, verify it's working:
```powershell
# 1. Check Redis version (must be 5.0+)
# For Memurai:
memurai-cli --version
# For Docker:
docker exec redis-tat redis-server --version
# 2. Test connection
# For Memurai:
memurai-cli ping
# Expected: PONG
# For Docker:
docker exec redis-tat redis-cli ping
# Expected: PONG
# 3. Check if backend can connect
# Start your backend server and check logs:
npm run dev
# Look for:
# [TAT Queue] Connected to Redis
# [TAT Worker] Connected to Redis at redis://127.0.0.1:6379
# [TAT Worker] Redis version: 7.x.x (or 6.x.x)
# [TAT Worker] Worker is ready and listening for jobs
```
## Quick Fix: Migrating from Old Redis
If you already installed Microsoft Archive Redis (3.x), follow these steps:
1. **Stop old Redis**:
```powershell
# Close the PowerShell window running redis-server.exe
# Or kill the process:
Get-Process redis-server -ErrorAction SilentlyContinue | Stop-Process -Force
```
2. **Choose a new method** (recommended: Memurai or Docker)
3. **Install and verify** (see methods above)
4. **Update .env** (if needed):
```env
REDIS_URL=redis://localhost:6379
```
5. **Restart backend**:
```powershell
npm run dev
```
## Production Considerations
- ✅ Use Redis authentication in production
- ✅ Configure persistence (RDB/AOF)
- ✅ Set up monitoring and alerts
- ✅ Consider Redis Cluster for high availability
- ✅ Use managed Redis service (Redis Cloud, AWS ElastiCache, etc.)
---
## Summary: Recommended Setup for Windows
| Method | Ease of Setup | Performance | Recommended For |
|--------|---------------|-------------|-----------------|
| **Memurai** ⭐ | ⭐⭐⭐⭐⭐ Very Easy | ⭐⭐⭐⭐⭐ Excellent | **Most Users** |
| **Docker** | ⭐⭐⭐⭐ Easy | ⭐⭐⭐⭐⭐ Excellent | Docker Users |
| **WSL2** | ⭐⭐⭐ Moderate | ⭐⭐⭐⭐⭐ Excellent | Linux Users |
| ❌ **Microsoft Archive Redis** | ❌ Don't Use | ❌ Too Old | **None - Outdated** |
**⭐ Recommended**: **Memurai** for easiest Windows-native setup, or **Docker** if you already use Docker Desktop.

View File

@ -1,387 +0,0 @@
# TAT (Turnaround Time) Notification System
## Overview
The TAT Notification System automatically tracks and notifies approvers about their approval deadlines at key milestones (50%, 75%, and 100% of allotted time). It uses a queue-based architecture with BullMQ and Redis to ensure reliable, scheduled notifications.
## Architecture
```
┌─────────────────┐
│    Workflow     │
│   Submission    │
└────────┬────────┘
         │
         ├──> Schedule TAT Jobs (50%, 75%, 100%)
         │
┌────────▼────────┐      ┌──────────────┐      ┌─────────────┐
│    TAT Queue    │─────>│  TAT Worker  │─────>│  Processor  │
│    (BullMQ)     │      │ (Background) │      │   Handler   │
└─────────────────┘      └──────────────┘      └──────┬──────┘
                                                      │
                                                      ├──> Send Notification
                                                      ├──> Update Database
                                                      └──> Log Activity
```
## Components
### 1. TAT Time Utilities (`tatTimeUtils.ts`)
Handles working hours calculations (Monday-Friday, 9 AM - 6 PM):
```typescript
// Calculate TAT milestones considering working hours
const { halfTime, seventyFive, full } = calculateTatMilestones(startDate, tatHours);
```
**Key Functions:**
- `addWorkingHours()`: Adds working hours to a start date, skipping weekends
- `calculateTatMilestones()`: Calculates 50%, 75%, and 100% time points
- `calculateDelay()`: Computes delay in milliseconds from now to target
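The real `tatTimeUtils.ts` is not reproduced here; the sketch below shows one way `addWorkingHours()` and `calculateTatMilestones()` can be built on the Monday-Friday, 9:00-18:00 window. The hour constants match the documented defaults, but the minute-by-minute stepping is a simplification, not the production algorithm:
```typescript
// Simplified sketch of the working-hours math (illustrative, not the actual implementation)
const WORK_START_HOUR = 9;  // 9:00 AM
const WORK_END_HOUR = 18;   // 6:00 PM

function isWorkingTime(d: Date): boolean {
  const day = d.getDay(); // 0 = Sunday, 6 = Saturday
  return day >= 1 && day <= 5 && d.getHours() >= WORK_START_HOUR && d.getHours() < WORK_END_HOUR;
}

// Advance `hours` of working time from `start`, skipping nights and weekends
export function addWorkingHours(start: Date, hours: number): Date {
  let remainingMinutes = Math.round(hours * 60);
  const cursor = new Date(start);
  while (remainingMinutes > 0) {
    cursor.setMinutes(cursor.getMinutes() + 1);
    if (isWorkingTime(cursor)) remainingMinutes -= 1;
  }
  return cursor;
}

export function calculateTatMilestones(start: Date, tatHours: number) {
  return {
    halfTime: addWorkingHours(start, tatHours * 0.5),
    seventyFive: addWorkingHours(start, tatHours * 0.75),
    full: addWorkingHours(start, tatHours),
  };
}
```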
### 2. TAT Queue (`tatQueue.ts`)
BullMQ queue configuration with Redis:
```typescript
export const tatQueue = new Queue('tatQueue', {
connection: IORedis,
defaultJobOptions: {
removeOnComplete: true,
removeOnFail: false,
attempts: 3,
backoff: { type: 'exponential', delay: 2000 }
}
});
```
### 3. TAT Processor (`tatProcessor.ts`)
Handles job execution when TAT milestones are reached:
```typescript
export async function handleTatJob(job: Job<TatJobData>) {
// Process tat50, tat75, or tatBreach
// - Send notification to approver
// - Update database flags
// - Log activity
}
```
**Job Types:**
- `tat50`: ⏳ 50% of TAT elapsed (gentle reminder)
- `tat75`: ⚠️ 75% of TAT elapsed (escalation warning)
- `tatBreach`: ⏰ 100% of TAT elapsed (breach notification)
### 4. TAT Worker (`tatWorker.ts`)
Background worker that processes jobs from the queue:
```typescript
export const tatWorker = new Worker('tatQueue', handleTatJob, {
connection,
concurrency: 5,
limiter: { max: 10, duration: 1000 }
});
```
**Features:**
- Concurrent job processing (up to 5 jobs)
- Rate limiting (10 jobs/second)
- Automatic retry on failure
- Graceful shutdown on SIGTERM/SIGINT
### 5. TAT Scheduler Service (`tatScheduler.service.ts`)
Service for scheduling and managing TAT jobs:
```typescript
// Schedule TAT jobs for an approval level
await tatSchedulerService.scheduleTatJobs(
requestId,
levelId,
approverId,
tatHours,
startTime
);
// Cancel TAT jobs when level is completed
await tatSchedulerService.cancelTatJobs(requestId, levelId);
```
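Under the hood, scheduling reduces to computing the three milestone timestamps and enqueuing delayed BullMQ jobs with deterministic job IDs, which is what makes cancellation possible later. A hedged sketch reusing the queue and utilities above (the real service adds logging, guards, and database updates):
```typescript
// Sketch of scheduleTatJobs; illustrative, not the production service
import { tatQueue } from './tatQueue';
import { calculateTatMilestones } from './tatTimeUtils';

export async function scheduleTatJobs(
  requestId: string,
  levelId: string,
  approverId: string,
  tatHours: number,
  startTime: Date
): Promise<void> {
  const { halfTime, seventyFive, full } = calculateTatMilestones(startTime, tatHours);
  const milestones = [
    { type: 'tat50', runAt: halfTime },
    { type: 'tat75', runAt: seventyFive },
    { type: 'tatBreach', runAt: full },
  ] as const;

  for (const { type, runAt } of milestones) {
    const delay = Math.max(runAt.getTime() - Date.now(), 0);
    await tatQueue.add(
      type,
      { requestId, levelId, approverId, tatHours },
      { delay, jobId: `${type}-${requestId}-${levelId}` } // deterministic ID = easy cancellation
    );
  }
}
```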
## Database Schema
### New Fields in `approval_levels` Table
```sql
ALTER TABLE approval_levels ADD COLUMN tat50_alert_sent BOOLEAN NOT NULL DEFAULT false;
ALTER TABLE approval_levels ADD COLUMN tat75_alert_sent BOOLEAN NOT NULL DEFAULT false;
ALTER TABLE approval_levels ADD COLUMN tat_breached BOOLEAN NOT NULL DEFAULT false;
ALTER TABLE approval_levels ADD COLUMN tat_start_time TIMESTAMP WITH TIME ZONE;
```
**Field Descriptions:**
- `tat50_alert_sent`: Tracks if 50% notification was sent
- `tat75_alert_sent`: Tracks if 75% notification was sent
- `tat_breached`: Tracks if TAT deadline was breached
- `tat_start_time`: Timestamp when TAT monitoring started
## Integration Points
### 1. Workflow Submission
When a workflow is submitted, TAT monitoring starts for the first approval level:
```typescript
// workflow.service.ts - submitWorkflow()
await current.update({
levelStartTime: now,
tatStartTime: now,
status: ApprovalStatus.IN_PROGRESS
});
await tatSchedulerService.scheduleTatJobs(
requestId,
levelId,
approverId,
tatHours,
now
);
```
### 2. Approval Flow
When a level is approved, TAT jobs are cancelled and new ones are scheduled for the next level:
```typescript
// approval.service.ts - approveLevel()
// Cancel current level TAT jobs
await tatSchedulerService.cancelTatJobs(requestId, levelId);
// Schedule TAT jobs for next level
await tatSchedulerService.scheduleTatJobs(
nextRequestId,
nextLevelId,
nextApproverId,
nextTatHours,
now
);
```
### 3. Rejection Flow
When a level is rejected, all pending TAT jobs are cancelled:
```typescript
// approval.service.ts - approveLevel()
await tatSchedulerService.cancelTatJobs(requestId, levelId);
```
## Notification Flow
### 50% TAT Alert (⏳)
**Message:** "50% of TAT elapsed for Request REQ-XXX: [Title]"
**Actions:**
- Send push notification to approver
- Update `tat50_alert_sent = true`
- Update `tat_percentage_used = 50`
- Log activity: "50% of TAT time has elapsed"
### 75% TAT Alert (⚠️)
**Message:** "75% of TAT elapsed for Request REQ-XXX: [Title]. Please take action soon."
**Actions:**
- Send push notification to approver
- Update `tat75_alert_sent = true`
- Update `tat_percentage_used = 75`
- Log activity: "75% of TAT time has elapsed - Escalation warning"
### 100% TAT Breach (⏰)
**Message:** "TAT breached for Request REQ-XXX: [Title]. Immediate action required!"
**Actions:**
- Send push notification to approver
- Update `tat_breached = true`
- Update `tat_percentage_used = 100`
- Log activity: "TAT deadline reached - Breach notification"
## Configuration
### Environment Variables
```bash
# Redis connection for TAT queue
REDIS_URL=redis://localhost:6379
# Optional: TAT monitoring settings
TAT_CHECK_INTERVAL_MINUTES=30
TAT_REMINDER_THRESHOLD_1=50
TAT_REMINDER_THRESHOLD_2=80
```
### Docker Compose
Redis service is automatically configured:
```yaml
redis:
image: redis:7-alpine
container_name: re_workflow_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
networks:
- re_workflow_network
restart: unless-stopped
```
## Working Hours Configuration
**Default Schedule:**
- Working Days: Monday - Friday
- Working Hours: 9:00 AM - 6:00 PM (9 hours/day)
- Timezone: Server timezone
**To Modify:**
Edit `WORK_START_HOUR` and `WORK_END_HOUR` in `tatTimeUtils.ts`
## Example Scenario
### Scenario: 48-hour TAT Approval
*(For simplicity this example counts elapsed clock time; with the working-hours calculation above, each milestone lands correspondingly later.)*
1. **Workflow Submitted**: Monday 10:00 AM
2. **50% Alert (24 hours)**: Tuesday 10:00 AM
- Notification sent to approver
- Database updated: `tat50_alert_sent = true`
3. **75% Alert (36 hours)**: Wednesday 10:00 AM
- Escalation warning sent
- Database updated: `tat75_alert_sent = true`
4. **100% Breach (48 hours)**: Thursday 10:00 AM
- Breach alert sent
- Database updated: `tat_breached = true`
## Error Handling
### Queue Job Failures
- **Automatic Retry**: Failed jobs retry up to 3 times with exponential backoff
- **Error Logging**: All failures logged to console and logs
- **Non-Blocking**: TAT failures don't block workflow approval process
### Redis Connection Failures
- **Graceful Degradation**: Application continues to work even if Redis is down
- **Reconnection**: Automatic reconnection attempts
- **Logging**: Connection status logged
## Monitoring & Debugging
### Check Queue Status
```bash
# View jobs in Redis
redis-cli
> KEYS bull:tatQueue:*
> LRANGE bull:tatQueue:delayed 0 -1
```
### View Worker Logs
```bash
# Check worker status in application logs
grep "TAT Worker" logs/app.log
grep "TAT Scheduler" logs/app.log
grep "TAT Processor" logs/app.log
```
### Database Queries
```sql
-- Check TAT status for all approval levels
SELECT
level_id,
request_id,
approver_name,
tat_hours,
tat_percentage_used,
tat50_alert_sent,
tat75_alert_sent,
tat_breached,
level_start_time,
tat_start_time
FROM approval_levels
WHERE status IN ('PENDING', 'IN_PROGRESS');
-- Find breached TATs
SELECT * FROM approval_levels WHERE tat_breached = true;
```
## Best Practices
1. **Always Schedule on Level Start**: Ensure `tatStartTime` is set when a level becomes active
2. **Always Cancel on Level Complete**: Cancel jobs when level is approved/rejected to avoid duplicate notifications
3. **Use Job IDs**: Unique job IDs (`tat50-{requestId}-{levelId}`) allow easy cancellation
4. **Monitor Queue Health**: Regularly check Redis and worker status
5. **Test with Short TATs**: Use short TAT durations in development for testing
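Practices 2 and 3 go together: because every job was enqueued with a deterministic ID, cancelling a level is just looking those IDs up and removing them. A hedged sketch (the real service may differ in naming and error handling):
```typescript
// Sketch of cancelTatJobs: remove any still-pending milestone jobs for a level
import { tatQueue } from './tatQueue';

export async function cancelTatJobs(requestId: string, levelId: string): Promise<void> {
  const jobIds = ['tat50', 'tat75', 'tatBreach'].map(
    (type) => `${type}-${requestId}-${levelId}`
  );
  for (const jobId of jobIds) {
    const job = await tatQueue.getJob(jobId); // undefined if already processed or removed
    if (job) {
      await job.remove();
    }
  }
}
```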
## Troubleshooting
### Notifications Not Sent
1. Check Redis connection: `redis-cli ping`
2. Verify worker is running: Check logs for "TAT Worker: Initialized"
3. Check job scheduling: Look for "TAT jobs scheduled" logs
4. Verify VAPID configuration for push notifications
### Duplicate Notifications
1. Ensure jobs are cancelled when level is completed
2. Check for duplicate job IDs in Redis
3. Verify `tat50_alert_sent` and `tat75_alert_sent` flags
### Jobs Not Executing
1. Check system time (jobs use timestamps)
2. Verify working hours calculation
3. Check job delays in Redis
4. Review worker concurrency and rate limits
## Future Enhancements
1. **Configurable Working Hours**: Allow per-organization working hours
2. **Holiday Calendar**: Skip public holidays in TAT calculations
3. **Escalation Rules**: Auto-escalate to manager on breach
4. **TAT Dashboard**: Real-time visualization of TAT statuses
5. **Email Notifications**: Add email alerts alongside push notifications
6. **SMS Notifications**: Critical breach alerts via SMS
## API Endpoints (Future)
Potential API endpoints for TAT management:
```
GET /api/tat/status/:requestId - Get TAT status for request
GET /api/tat/breaches - List all breached requests
POST /api/tat/extend/:levelId - Extend TAT for a level
GET /api/tat/analytics - TAT analytics and reports
```
## References
- [BullMQ Documentation](https://docs.bullmq.io/)
- [Redis Documentation](https://redis.io/documentation)
- [Day.js Documentation](https://day.js.org/)
- [Web Push Notifications](https://developer.mozilla.org/en-US/docs/Web/API/Push_API)
---
**Last Updated**: November 4, 2025
**Version**: 1.0.0
**Maintained By**: Royal Enfield Workflow Team

View File

@ -1,411 +0,0 @@
# TAT Notification Testing Guide
## Quick Setup for Testing
### Step 1: Setup Redis
**You MUST have Redis for TAT notifications to work.**
#### 🚀 Option A: Upstash (RECOMMENDED - No Installation!)
**Best choice for Windows development:**
1. Go to: https://console.upstash.com/
2. Sign up (free)
3. Create Database:
- Name: `redis-tat-dev`
- Type: Regional
- Region: Choose closest
4. Copy Redis URL (format: `rediss://default:...@host.upstash.io:6379`)
5. Add to `Re_Backend/.env`:
```bash
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_HOST.upstash.io:6379
```
**✅ Done!** No installation, works everywhere!
See detailed guide: `docs/UPSTASH_SETUP_GUIDE.md`
#### Option B: Docker (If you prefer local)
```bash
docker run -d --name redis-tat -p 6379:6379 redis:latest
```
Then in `.env`:
```bash
REDIS_URL=redis://localhost:6379
```
#### Option C: Linux Production
```bash
sudo apt install redis-server -y
sudo systemctl start redis-server
```
#### Verify Connection
- **Upstash**: Use Console CLI → `PING` → should return `PONG`
- **Local**: `Test-NetConnection localhost -Port 6379`
---
### Step 2: Enable Test Mode (Optional but Recommended)
For faster testing, enable test mode where **1 hour = 1 minute**:
1. **Edit your `.env` file**:
```bash
TAT_TEST_MODE=true
```
2. **Restart your backend**:
```bash
cd Re_Backend
npm run dev
```
3. **Verify test mode is enabled** - You should see:
```
⏰ TAT Configuration:
- Test Mode: ENABLED (1 hour = 1 minute)
- Working Hours: 9:00 - 18:00
- Working Days: Monday - Friday
- Redis: redis://localhost:6379
```
---
### Step 3: Create a Test Workflow
#### Production Mode (TAT_TEST_MODE=false)
- Create a request with **2 hours TAT**
- Notifications will come at:
- **1 hour** (50%)
- **1.5 hours** (75%)
- **2 hours** (100% breach)
#### Test Mode (TAT_TEST_MODE=true) ⚡ FASTER
- Create a request with **6 hours TAT** (becomes 6 minutes)
- Notifications will come at:
- **3 minutes** (50%)
- **4.5 minutes** (75%)
- **6 minutes** (100% breach)
---
### Step 4: Submit and Monitor
1. **Create and Submit Request** via your frontend or API
2. **Check Backend Logs** - You should see:
```
[TAT Scheduler] Calculating TAT milestones for request...
[TAT Scheduler] Start: 2025-11-04 12:00
[TAT Scheduler] 50%: 2025-11-04 12:03
[TAT Scheduler] 75%: 2025-11-04 12:04
[TAT Scheduler] 100%: 2025-11-04 12:06
[TAT Scheduler] Scheduled tat50 for level...
[TAT Scheduler] Scheduled tat75 for level...
[TAT Scheduler] Scheduled tatBreach for level...
[TAT Scheduler] ✅ TAT jobs scheduled for request...
```
3. **Wait for Notifications**
- Watch the logs
- Check push notifications
- Verify database updates
4. **Verify Notifications** - Look for:
```
[TAT Processor] Processing tat50 for request...
[TAT Processor] tat50 notification sent for request...
```
---
## Testing Scenarios
### Scenario 1: Normal Flow (Happy Path)
```
1. Create request with TAT = 6 hours (6 min in test mode)
2. Submit request
3. Wait for 50% notification (3 min)
4. Wait for 75% notification (4.5 min)
5. Wait for 100% breach (6 min)
```
**Expected Result:**
- ✅ 3 notifications sent
- ✅ Database flags updated
- ✅ Activity logs created
---
### Scenario 2: Early Approval
```
1. Create request with TAT = 6 hours
2. Submit request
3. Wait for 50% notification (3 min)
4. Approve immediately
5. Remaining notifications should be cancelled
```
**Expected Result:**
- ✅ 50% notification received
- ✅ 75% and 100% notifications cancelled
- ✅ TAT jobs for next level scheduled
---
### Scenario 3: Multi-Level Approval
```
1. Create request with 3 approval levels (2 hours each)
2. Submit request
3. Level 1: Wait for notifications, then approve
4. Level 2: Should schedule new TAT jobs
5. Level 2: Wait for notifications, then approve
6. Level 3: Should schedule new TAT jobs
```
**Expected Result:**
- ✅ Each level gets its own TAT monitoring
- ✅ Previous level jobs cancelled on approval
- ✅ New level jobs scheduled
---
### Scenario 4: Rejection
```
1. Create request with TAT = 6 hours
2. Submit request
3. Wait for 50% notification
4. Reject the request
5. All remaining notifications should be cancelled
```
**Expected Result:**
- ✅ TAT jobs cancelled
- ✅ No further notifications
---
## Verification Checklist
### Backend Logs ✅
```bash
# Should see these messages:
✓ [TAT Queue] Connected to Redis
✓ [TAT Worker] Initialized and listening
✓ [TAT Scheduler] TAT jobs scheduled
✓ [TAT Processor] Processing tat50
✓ [TAT Processor] tat50 notification sent
```
### Database Check ✅
```sql
-- Check approval level TAT status
SELECT
request_id,
level_number,
approver_name,
tat_hours,
tat_percentage_used,
tat50_alert_sent,
tat75_alert_sent,
tat_breached,
tat_start_time,
status
FROM approval_levels
WHERE request_id = '<YOUR_REQUEST_ID>';
```
**Expected Fields:**
- `tat_start_time`: Should be set when level starts
- `tat50_alert_sent`: true after 50% notification
- `tat75_alert_sent`: true after 75% notification
- `tat_breached`: true after 100% notification
- `tat_percentage_used`: 50, 75, or 100
### Activity Logs ✅
```sql
-- Check activity timeline
SELECT
activity_type,
activity_description,
user_name,
created_at
FROM activities
WHERE request_id = '<YOUR_REQUEST_ID>'
ORDER BY created_at DESC;
```
**Expected Entries:**
- "50% of TAT time has elapsed"
- "75% of TAT time has elapsed - Escalation warning"
- "TAT deadline reached - Breach notification"
### Redis Queue ✅
```bash
# Connect to Redis
redis-cli
# Check scheduled jobs
KEYS bull:tatQueue:*
LRANGE bull:tatQueue:delayed 0 -1
# Check job details
HGETALL bull:tatQueue:tat50-<REQUEST_ID>-<LEVEL_ID>
```
---
## Troubleshooting
### ❌ No Notifications Received
**Problem:** TAT jobs scheduled but no notifications
**Solutions:**
1. Check Redis is running:
```powershell
Test-NetConnection localhost -Port 6379
```
2. Check worker is running:
```bash
# Look for in backend logs:
[TAT Worker] Worker is ready and listening
```
3. Check job delays:
```bash
redis-cli
> LRANGE bull:tatQueue:delayed 0 -1
```
4. Verify VAPID keys for push notifications:
```bash
# In .env file:
VAPID_PUBLIC_KEY=...
VAPID_PRIVATE_KEY=...
```
---
### ❌ Jobs Not Executing
**Problem:** Jobs scheduled but never execute
**Solutions:**
1. Check system time is correct
2. Verify test mode settings
3. Check worker logs for errors
4. Restart worker:
```bash
# Restart backend server
npm run dev
```
---
### ❌ Duplicate Notifications
**Problem:** Receiving multiple notifications for same milestone
**Solutions:**
1. Check database flags are being set:
```sql
SELECT tat50_alert_sent, tat75_alert_sent FROM approval_levels;
```
2. Verify job cancellation on approval:
```bash
# Should see in logs:
[Approval] TAT jobs cancelled for level...
```
3. Check for duplicate job IDs in Redis
---
### ❌ Redis Connection Errors
**Problem:** `ECONNREFUSED` errors
**Solutions:**
1. **Start Redis** - See Step 1
2. Check Redis URL in `.env`:
```bash
REDIS_URL=redis://localhost:6379
```
3. Verify port 6379 is not blocked:
```powershell
Test-NetConnection localhost -Port 6379
```
---
## Testing Timeline Examples
### Test Mode Enabled (1 hour = 1 minute)
| TAT Hours | Real Time | 50% | 75% | 100% |
|-----------|-----------|-----|-----|------|
| 2 hours | 2 minutes | 1m | 1.5m| 2m |
| 6 hours | 6 minutes | 3m | 4.5m| 6m |
| 24 hours | 24 minutes| 12m | 18m | 24m |
| 48 hours | 48 minutes| 24m | 36m | 48m |
### Production Mode (Normal)
| TAT Hours | 50% | 75% | 100% |
|-----------|--------|--------|--------|
| 2 hours | 1h | 1.5h | 2h |
| 6 hours | 3h | 4.5h | 6h |
| 24 hours | 12h | 18h | 24h |
| 48 hours | 24h | 36h | 48h |
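Both tables follow from a single scaling rule: in test mode one TAT hour is treated as one minute. A small sketch of that rule, ignoring the working-hours calendar for clarity (the real utilities may implement it differently):
```typescript
// Sketch of test-mode scaling; real code also applies the working-hours calendar
const TEST_MODE = process.env.TAT_TEST_MODE === 'true';

export function milestoneDelaysMs(tatHours: number) {
  const hourMs = TEST_MODE ? 60 * 1000 : 60 * 60 * 1000; // 1 minute vs 1 real hour
  return {
    tat50: tatHours * 0.5 * hourMs,
    tat75: tatHours * 0.75 * hourMs,
    tatBreach: tatHours * hourMs,
  };
}

// Example: a 6-hour TAT in test mode yields 3, 4.5 and 6 minutes, matching the table above
```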
---
## Quick Test Commands
```powershell
# 1. Check Redis
Test-NetConnection localhost -Port 6379
# 2. Start Backend (with test mode)
cd Re_Backend
$env:TAT_TEST_MODE="true"
npm run dev
# 3. Monitor Logs (in another terminal)
cd Re_Backend
Get-Content -Path "logs/app.log" -Wait -Tail 50
# 4. Check Redis Jobs
redis-cli KEYS "bull:tatQueue:*"
# 5. Query Database
psql -U laxman -d re_workflow_db -c "SELECT * FROM approval_levels WHERE tat_start_time IS NOT NULL;"
```
---
## Support
If you encounter issues:
1. **Check Logs**: `Re_Backend/logs/`
2. **Enable Debug**: Set `LOG_LEVEL=debug` in `.env`
3. **Redis Status**: `redis-cli ping` should return `PONG`
4. **Worker Status**: Look for "TAT Worker: Initialized" in logs
5. **Database**: Verify TAT fields exist in `approval_levels` table
---
**Happy Testing!** 🎉
For more information, see:
- `TAT_NOTIFICATION_SYSTEM.md` - Full system documentation
- `INSTALL_REDIS.txt` - Redis installation guide
- `backend_structure.txt` - Database schema reference

View File

@ -1,381 +0,0 @@
# Upstash Redis Setup Guide
## Why Upstash?
**No Installation**: Works instantly on Windows, Mac, Linux
**100% Free Tier**: 10,000 commands/day (more than enough for dev)
**Production Ready**: Same service for dev and production
**Global CDN**: Fast from anywhere
**Zero Maintenance**: No Redis server to manage
---
## Step-by-Step Setup (3 minutes)
### 1. Create Upstash Account
1. Go to: https://console.upstash.com/
2. Sign up with GitHub, Google, or Email
3. Verify your email (if required)
### 2. Create Redis Database
1. **Click "Create Database"**
2. **Fill in details**:
- **Name**: `redis-tat-dev` (or any name you like)
- **Type**: Select "Regional"
- **Region**: Choose closest to you (e.g., US East, EU West)
- **TLS**: Keep enabled (recommended)
- **Eviction**: Choose "No Eviction"
3. **Click "Create"**
### 3. Copy Connection URL
After creation, you'll see your database dashboard:
1. **Find "REST API" section**
2. **Look for "Redis URL"** - it looks like:
```
rediss://default:AbCdEfGh1234567890XyZ@us1-mighty-shark-12345.upstash.io:6379
```
3. **Click the copy button** 📋
---
## Configure Your Application
### Edit `.env` File
Open `Re_Backend/.env` and add/update:
```bash
# Upstash Redis URL
REDIS_URL=rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379
# Enable test mode for faster testing
TAT_TEST_MODE=true
```
**Important**:
- Note the **double `s`** in `rediss://` (TLS enabled)
- Copy the entire URL including the password
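Nothing else in the backend has to change: BullMQ simply receives an `ioredis` connection built from that URL, and the `rediss://` scheme turns on TLS automatically. A minimal sketch of the wiring (variable names are illustrative):
```typescript
// Sketch: BullMQ queue backed by Upstash through the REDIS_URL above
import IORedis from 'ioredis';
import { Queue } from 'bullmq';

const connection = new IORedis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null, // required by BullMQ
});

export const tatQueue = new Queue('tatQueue', { connection });
```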
---
## Verify Connection
### Start Your Backend
```bash
cd Re_Backend
npm run dev
```
### Check Logs
You should see:
```
✅ [TAT Queue] Connected to Redis
✅ [TAT Worker] Initialized and listening
⏰ TAT Configuration:
- Test Mode: ENABLED (1 hour = 1 minute)
- Redis: rediss://***@upstash.io:6379
```
---
## Test Using Upstash Console
### Method 1: Web CLI (Easiest)
1. Go to your database in Upstash Console
2. Click the **"CLI"** tab
3. Type commands:
```redis
PING
# → PONG
KEYS *
# → Shows all keys (should see TAT jobs after submitting request)
INFO
# → Shows Redis server info
```
### Method 2: Redis CLI (Optional)
If you have `redis-cli` installed:
```bash
redis-cli -u "rediss://default:YOUR_PASSWORD@YOUR_URL.upstash.io:6379" ping
# → PONG
```
---
## Monitor Your TAT Jobs
### View Queued Jobs
In Upstash Console CLI:
```redis
# List all TAT jobs
KEYS bull:tatQueue:*
# See delayed jobs
LRANGE bull:tatQueue:delayed 0 -1
# Get specific job details
HGETALL bull:tatQueue:tat50-<REQUEST_ID>-<LEVEL_ID>
```
### Example Output
After submitting a request, you should see:
```redis
KEYS bull:tatQueue:*
# Returns:
# 1) "bull:tatQueue:id"
# 2) "bull:tatQueue:delayed"
# 3) "bull:tatQueue:tat50-abc123-xyz789"
# 4) "bull:tatQueue:tat75-abc123-xyz789"
# 5) "bull:tatQueue:tatBreach-abc123-xyz789"
```
---
## Upstash Features for Development
### 1. Data Browser
- View all keys and values
- Edit data directly
- Delete specific keys
### 2. CLI Tab
- Run Redis commands
- Test queries
- Debug issues
### 3. Metrics
- Monitor requests/sec
- Track data usage
- View connection count
### 4. Logs
- See all commands executed
- Debug connection issues
- Monitor performance
---
## Free Tier Limits
**Upstash Free Tier includes:**
- ✅ 10,000 commands per day
- ✅ 256 MB storage
- ✅ TLS/SSL encryption
- ✅ Global edge caching
- ✅ REST API access
**Perfect for:**
- ✅ Development
- ✅ Testing
- ✅ Small production apps (up to ~100 users)
---
## Production Considerations
### Upgrade When Needed
For production with high traffic:
- **Pro Plan**: $0.2 per 100K commands
- **Pay as you go**: No monthly fee
- **Auto-scaling**: Handles any load
### Security Best Practices
1. **Use TLS**: Always use `rediss://` (double s)
2. **Rotate Passwords**: Change regularly in production
3. **IP Restrictions**: Add allowed IPs in Upstash console
4. **Environment Variables**: Never commit REDIS_URL to Git
### Production Setup
```bash
# .env.production
REDIS_URL=rediss://default:PROD_PASSWORD@prod-region.upstash.io:6379
TAT_TEST_MODE=false # Use real hours in production
WORK_START_HOUR=9
WORK_END_HOUR=18
```
---
## Troubleshooting
### Connection Refused Error
**Problem**: `ECONNREFUSED` or timeout
**Solutions**:
1. **Check URL format**:
```bash
# Should be:
rediss://default:password@host.upstash.io:6379
# NOT:
redis://... (missing second 's' for TLS)
```
2. **Verify database is active**:
- Go to Upstash Console
- Check database status (should be green "Active")
3. **Test connection**:
- Use Upstash Console CLI tab
- Type `PING` - should return `PONG`
### Slow Response Times
**Problem**: High latency
**Solutions**:
1. **Choose closer region**:
- Delete database
- Create new one in region closer to you
2. **Use REST API** (alternative):
```bash
UPSTASH_REDIS_REST_URL=https://YOUR_URL.upstash.io
UPSTASH_REDIS_REST_TOKEN=YOUR_TOKEN
```
### Command Limit Exceeded
**Problem**: "Daily request limit exceeded"
**Solutions**:
1. **Check usage**:
- Go to Upstash Console → Metrics
- See command count
2. **Optimize**:
- Remove unnecessary Redis calls
- Batch operations where possible
3. **Upgrade** (if needed):
- Pro plan: $0.2 per 100K commands
- No monthly fee
---
## Comparison: Upstash vs Local Redis
| Feature | Upstash | Local Redis |
|---------|---------|-------------|
| **Setup Time** | 2 minutes | 10-30 minutes |
| **Installation** | None | Docker/Memurai |
| **Maintenance** | Zero | Manual updates |
| **Cost (Dev)** | Free | Free |
| **Works Offline** | No | Yes |
| **Production** | Same setup | Need migration |
| **Monitoring** | Built-in | Setup required |
| **Backup** | Automatic | Manual |
**Verdict**:
- ✅ **Upstash for most cases** (especially Windows dev)
- Local Redis only if you need offline development
---
## Migration from Local Redis
If you were using local Redis:
### 1. Export Data (Optional)
```bash
# From local Redis
redis-cli --rdb dump.rdb
# Import to Upstash (use Upstash REST API or CLI)
```
### 2. Update Configuration
```bash
# Old (.env)
REDIS_URL=redis://localhost:6379
# New (.env)
REDIS_URL=rediss://default:PASSWORD@host.upstash.io:6379
```
### 3. Restart Application
```bash
npm run dev
```
**That's it!** No code changes needed - BullMQ works identically.
---
## FAQs
### Q: Is Upstash free forever?
**A**: Yes, 10,000 commands/day free tier is permanent.
### Q: Can I use it in production?
**A**: Absolutely! Many companies use Upstash in production.
### Q: What if I exceed free tier?
**A**: You get notified. Either optimize or upgrade to pay-as-you-go.
### Q: Is my data secure?
**A**: Yes, TLS encryption by default, SOC 2 compliant.
### Q: Can I have multiple databases?
**A**: Yes, unlimited databases on free tier.
### Q: What about data persistence?
**A**: Full Redis persistence (RDB + AOF) with automatic backups.
---
## Resources
- **Upstash Docs**: https://docs.upstash.com/redis
- **Redis Commands**: https://redis.io/commands
- **BullMQ Docs**: https://docs.bullmq.io/
- **Our TAT System**: See `TAT_NOTIFICATION_SYSTEM.md`
---
## Next Steps
✅ Upstash setup complete? Now:
1. **Enable Test Mode**: Set `TAT_TEST_MODE=true` in `.env`
2. **Create Test Request**: Submit a 6-hour TAT request
3. **Watch Logs**: See notifications at 3min, 4.5min, 6min
4. **Check Upstash CLI**: Monitor jobs in real-time
---
**Setup Complete!** 🎉
Your TAT notification system is now powered by Upstash Redis!
---
**Last Updated**: November 4, 2025
**Contact**: Royal Enfield Workflow Team

View File

@ -145,15 +145,26 @@ export async function getPublicConfig() {
     ui: SYSTEM_CONFIG.UI
   };
-  // Try to get AI service status (gracefully handle if not available)
+  // Try to get AI service status and configuration (gracefully handle if not available)
   try {
     const { aiService } = require('../services/ai.service');
+    const { getConfigValue } = require('../services/configReader.service');
+    // Get AI configuration from admin settings
+    const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
+    const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
+    const maxRemarkLength = parseInt(await getConfigValue('AI_MAX_REMARK_LENGTH', '2000') || '2000', 10);
     return {
       ...baseConfig,
       ai: {
-        enabled: aiService.isAvailable(),
-        provider: aiService.getProviderName()
+        enabled: aiEnabled && aiService.isAvailable(),
+        provider: aiService.getProviderName(),
+        remarkGenerationEnabled: remarkGenerationEnabled && aiEnabled && aiService.isAvailable(),
+        maxRemarkLength: maxRemarkLength,
+        features: {
+          conclusionGeneration: remarkGenerationEnabled && aiEnabled && aiService.isAvailable()
+        }
       }
     };
   } catch (error) {
@ -162,7 +173,12 @@ export async function getPublicConfig() {
       ...baseConfig,
       ai: {
         enabled: false,
-        provider: 'None'
+        provider: 'None',
+        remarkGenerationEnabled: false,
+        maxRemarkLength: 2000,
+        features: {
+          conclusionGeneration: false
+        }
       }
     };
   }

View File

@ -36,6 +36,29 @@ export class ConclusionController {
         return res.status(400).json({ error: 'Conclusion can only be generated for approved requests' });
       }
+      // Check if AI features are enabled in admin config
+      const { getConfigValue } = await import('../services/configReader.service');
+      const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
+      const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
+      if (!aiEnabled) {
+        logger.warn(`[Conclusion] AI features disabled in admin config for request ${requestId}`);
+        return res.status(400).json({
+          error: 'AI features disabled',
+          message: 'AI features are currently disabled by administrator. Please write the conclusion manually.',
+          canContinueManually: true
+        });
+      }
+      if (!remarkGenerationEnabled) {
+        logger.warn(`[Conclusion] AI remark generation disabled in admin config for request ${requestId}`);
+        return res.status(400).json({
+          error: 'AI remark generation disabled',
+          message: 'AI-powered conclusion generation is currently disabled by administrator. Please write the conclusion manually.',
+          canContinueManually: true
+        });
+      }
       // Check if AI service is available
       if (!aiService.isAvailable()) {
         logger.warn(`[Conclusion] AI service unavailable for request ${requestId}`);

View File

@ -147,19 +147,80 @@ export class DashboardController {
} }
} }
/**
* Get AI Remark Utilization metrics with monthly trends
*/
async getAIRemarkUtilization(req: Request, res: Response): Promise<void> {
try {
const userId = (req as any).user?.userId;
const dateRange = req.query.dateRange as string | undefined;
const utilization = await this.dashboardService.getAIRemarkUtilization(userId, dateRange);
res.json({
success: true,
data: utilization
});
} catch (error) {
logger.error('[Dashboard] Error fetching AI remark utilization:', error);
res.status(500).json({
success: false,
error: 'Failed to fetch AI remark utilization'
});
}
}
/**
* Get Approver Performance metrics with pagination
*/
async getApproverPerformance(req: Request, res: Response): Promise<void> {
try {
const userId = (req as any).user?.userId;
const dateRange = req.query.dateRange as string | undefined;
const page = Number(req.query.page || 1);
const limit = Number(req.query.limit || 10);
const result = await this.dashboardService.getApproverPerformance(userId, dateRange, page, limit);
res.json({
success: true,
data: result.performance,
pagination: {
currentPage: result.currentPage,
totalPages: result.totalPages,
totalRecords: result.totalRecords,
limit: result.limit
}
});
} catch (error) {
logger.error('[Dashboard] Error fetching approver performance:', error);
res.status(500).json({
success: false,
error: 'Failed to fetch approver performance metrics'
});
}
}
/**
* Get recent activity feed
*/
async getRecentActivity(req: Request, res: Response): Promise<void> {
try {
const userId = (req as any).user?.userId;
const page = Number(req.query.page || 1);
const limit = Number(req.query.limit || 10);
const result = await this.dashboardService.getRecentActivity(userId, page, limit);
res.json({
success: true,
data: result.activities,
pagination: {
currentPage: result.currentPage,
totalPages: result.totalPages,
totalRecords: result.totalRecords,
limit: result.limit
}
});
} catch (error) {
logger.error('[Dashboard] Error fetching recent activity:', error);
@ -171,17 +232,25 @@ export class DashboardController {
}
/**
* Get critical/high priority requests with pagination
*/
async getCriticalRequests(req: Request, res: Response): Promise<void> {
try {
const userId = (req as any).user?.userId;
const page = Number(req.query.page || 1);
const limit = Number(req.query.limit || 10);
const result = await this.dashboardService.getCriticalRequests(userId, page, limit);
res.json({
success: true,
data: result.criticalRequests,
pagination: {
currentPage: result.currentPage,
totalPages: result.totalPages,
totalRecords: result.totalRecords,
limit: result.limit
}
});
} catch (error) {
logger.error('[Dashboard] Error fetching critical requests:', error);
@ -193,18 +262,25 @@ export class DashboardController {
}
/**
* Get upcoming deadlines with pagination
*/
async getUpcomingDeadlines(req: Request, res: Response): Promise<void> {
try {
const userId = (req as any).user?.userId;
const page = Number(req.query.page || 1);
const limit = Number(req.query.limit || 10);
const result = await this.dashboardService.getUpcomingDeadlines(userId, page, limit);
res.json({
success: true,
data: result.deadlines,
pagination: {
currentPage: result.currentPage,
totalPages: result.totalPages,
totalRecords: result.totalRecords,
limit: result.limit
}
});
} catch (error) {
logger.error('[Dashboard] Error fetching upcoming deadlines:', error);
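For reference, a minimal client-side sketch of how the paginated dashboard endpoints above can be consumed. The `data` plus `pagination` envelope and the `page`/`limit` query parameters come from this commit; the `/api/dashboard` prefix, token handling, and helper name are assumptions.

```typescript
// Illustrative only: route prefix and auth handling are assumptions,
// but the envelope shape mirrors the controller responses above.
interface Paginated<T> {
  success: boolean;
  data: T[];
  pagination: {
    currentPage: number;
    totalPages: number;
    totalRecords: number;
    limit: number;
  };
}

async function fetchRecentActivity(token: string, page = 1, limit = 10): Promise<Paginated<unknown>> {
  const res = await fetch(`/api/dashboard/activity/recent?page=${page}&limit=${limit}`, {
    headers: { Authorization: `Bearer ${token}` }
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json() as Promise<Paginated<unknown>>;
}
```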

@ -99,7 +99,7 @@ export async function up(queryInterface: QueryInterface): Promise<void> {
config_value: 'true',
value_type: 'BOOLEAN',
config_category: 'AI_CONFIGURATION',
description: 'Master toggle to enable/disable all AI-powered features in the system',
is_editable: true,
is_sensitive: false,
default_value: 'true',
@ -112,13 +112,55 @@ export async function up(queryInterface: QueryInterface): Promise<void> {
requires_restart: false,
created_at: now,
updated_at: now
},
{
config_id: uuidv4(),
config_key: 'AI_REMARK_GENERATION_ENABLED',
config_value: 'true',
value_type: 'BOOLEAN',
config_category: 'AI_CONFIGURATION',
description: 'Enable/disable AI-powered conclusion remark generation when requests are approved',
is_editable: true,
is_sensitive: false,
default_value: 'true',
display_name: 'Enable AI Remark Generation',
validation_rules: JSON.stringify({
type: 'boolean'
}),
ui_component: 'toggle',
sort_order: 105,
requires_restart: false,
created_at: now,
updated_at: now
},
{
config_id: uuidv4(),
config_key: 'AI_MAX_REMARK_LENGTH',
config_value: '2000',
value_type: 'INTEGER',
config_category: 'AI_CONFIGURATION',
description: 'Maximum character length for AI-generated conclusion remarks (used as context for AI prompt)',
is_editable: true,
is_sensitive: false,
default_value: '2000',
display_name: 'AI Max Remark Length',
validation_rules: JSON.stringify({
type: 'number',
min: 500,
max: 5000
}),
ui_component: 'number',
sort_order: 106,
requires_restart: false,
created_at: now,
updated_at: now
}
]);
}
export async function down(queryInterface: QueryInterface): Promise<void> {
await queryInterface.bulkDelete('admin_configurations', {
config_key: ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED', 'AI_REMARK_GENERATION_ENABLED', 'AI_MAX_REMARK_LENGTH']
} as any);
}
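As a sanity check on the new settings, here is a small sketch of how a consumer might read `AI_MAX_REMARK_LENGTH` and keep it within the 500-5000 range declared in `validation_rules` above. `getConfigValue` and the `./configReader.service` path are referenced elsewhere in this commit; the clamping helper itself is illustrative, not part of the codebase.

```typescript
import { getConfigValue } from './configReader.service';

// Sketch: resolve the configured max remark length, falling back to the
// default and clamping to the bounds declared in validation_rules.
export async function resolveMaxRemarkLength(): Promise<number> {
  const raw = await getConfigValue('AI_MAX_REMARK_LENGTH', '2000');
  const parsed = parseInt(raw || '2000', 10);
  if (Number.isNaN(parsed)) return 2000;          // fall back to the default value
  return Math.min(5000, Math.max(500, parsed));   // keep within validation_rules bounds
}
```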

@ -48,6 +48,18 @@ router.get('/stats/ai-insights',
asyncHandler(dashboardController.getAIInsights.bind(dashboardController))
);
// Get AI Remark Utilization with monthly trends
router.get('/stats/ai-remark-utilization',
authenticateToken,
asyncHandler(dashboardController.getAIRemarkUtilization.bind(dashboardController))
);
// Get Approver Performance metrics
router.get('/stats/approver-performance',
authenticateToken,
asyncHandler(dashboardController.getApproverPerformance.bind(dashboardController))
);
// Get recent activity feed
router.get('/activity/recent',
authenticateToken,

@ -355,13 +355,24 @@ class AIService {
}
try {
// Build context prompt with max length from config
const prompt = await this.buildConclusionPrompt(context);
logger.info(`[AI Service] Generating conclusion for request ${context.requestNumber} using ${this.providerName}...`);
// Use provider's generateText method
let remarkText = await this.provider.generateText(prompt);
// Get max length from config for validation
const { getConfigValue } = require('./configReader.service');
const maxLengthStr = await getConfigValue('AI_MAX_REMARK_LENGTH', '2000');
const maxLength = parseInt(maxLengthStr || '2000', 10);
// Validate and trim if exceeds max length
if (remarkText.length > maxLength) {
logger.warn(`[AI Service] Generated remark exceeds max length (${remarkText.length} > ${maxLength}), trimming...`);
remarkText = remarkText.substring(0, maxLength - 3) + '...'; // Trim with ellipsis
}
// Extract key points (look for bullet points or numbered items)
const keyPoints = this.extractKeyPoints(remarkText);
@ -369,7 +380,7 @@ class AIService {
// Calculate confidence based on response quality (simple heuristic)
const confidence = this.calculateConfidence(remarkText, context);
logger.info(`[AI Service] ✅ Generated conclusion (${remarkText.length}/${maxLength} chars, ${keyPoints.length} key points) via ${this.providerName}`);
return {
remark: remarkText,
@ -386,7 +397,7 @@ class AIService {
/**
* Build the prompt for Claude to generate a professional conclusion remark
*/
private async buildConclusionPrompt(context: any): Promise<string> {
const {
requestTitle,
requestDescription,
@ -398,6 +409,14 @@ class AIService {
activities
} = context;
// Get max remark length from admin configuration
const { getConfigValue } = require('./configReader.service');
const maxLengthStr = await getConfigValue('AI_MAX_REMARK_LENGTH', '2000');
const maxLength = parseInt(maxLengthStr || '2000', 10);
const targetWordCount = Math.floor(maxLength / 6); // Approximate words (avg 6 chars per word)
logger.info(`[AI Service] Using max remark length: ${maxLength} characters (≈${targetWordCount} words) from admin config`);
// Summarize approvals
const approvalSummary = approvalFlow
.filter((a: any) => a.status === 'APPROVED' || a.status === 'REJECTED')
@ -437,7 +456,7 @@ ${workNoteSummary || 'No work notes'}
${documentSummary || 'No documents'}
**YOUR TASK:**
Write a brief, professional conclusion (approximately ${targetWordCount} words, max ${maxLength} characters) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Notes the outcome and next steps (if applicable)
@ -447,6 +466,7 @@ Write a brief, professional conclusion (100-200 words) that:
**IMPORTANT:**
- Be concise and direct
- MUST stay within ${maxLength} characters limit
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
@ -454,7 +474,7 @@ Write a brief, professional conclusion (100-200 words) that:
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions
Write the conclusion now (remember: max ${maxLength} characters):`;
return prompt;
}
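A quick worked example of the sizing logic above, using the default of 2000 characters. The numbers follow the ~6 chars/word heuristic in `buildConclusionPrompt` and the ellipsis trim applied after generation; the snippet is purely illustrative.

```typescript
// Illustrative only: default AI_MAX_REMARK_LENGTH and the ~6 chars/word heuristic.
const maxLength = 2000;
const targetWordCount = Math.floor(maxLength / 6); // 333 words requested in the prompt

// The post-generation guard trims anything longer than maxLength and appends an ellipsis.
const enforceLimit = (text: string): string =>
  text.length > maxLength ? text.substring(0, maxLength - 3) + '...' : text;

console.log(targetWordCount);                       // 333
console.log(enforceLimit('x'.repeat(2500)).length); // 2000
```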

@ -90,7 +90,9 @@ export class ApprovalService {
details: `Request approved and finalized by ${level.approverName || level.approverEmail}. Awaiting conclusion remark from initiator.`
});
// Generate AI conclusion remark ASYNCHRONOUSLY (don't wait)
// This runs in the background without blocking the approval response
(async () => {
try {
const { aiService } = await import('./ai.service');
const { ConclusionRemark } = await import('@models/index');
@ -98,9 +100,14 @@ export class ApprovalService {
const { WorkNote } = await import('@models/WorkNote');
const { Document } = await import('@models/Document');
const { Activity } = await import('@models/Activity');
const { getConfigValue } = await import('./configReader.service');
// Check if AI features and remark generation are enabled in admin config
const aiEnabled = (await getConfigValue('AI_ENABLED', 'true'))?.toLowerCase() === 'true';
const remarkGenerationEnabled = (await getConfigValue('AI_REMARK_GENERATION_ENABLED', 'true'))?.toLowerCase() === 'true';
if (aiEnabled && remarkGenerationEnabled && aiService.isAvailable()) {
logger.info(`[Approval] 🔄 Starting background AI conclusion generation for ${level.requestId}...`);
// Gather context for AI generation
const approvalLevels = await ApprovalLevel.findAll({
@ -185,7 +192,7 @@ export class ApprovalService {
finalizedAt: null
} as any);
logger.info(`[Approval] ✅ Background AI conclusion completed for ${level.requestId}`);
// Log activity
activityService.log({
@ -197,12 +204,23 @@ export class ApprovalService {
details: 'AI-powered conclusion remark generated for review by initiator'
});
} else {
// Log why AI generation was skipped
if (!aiEnabled) {
logger.info(`[Approval] AI features disabled in admin config, skipping conclusion generation for ${level.requestId}`);
} else if (!remarkGenerationEnabled) {
logger.info(`[Approval] AI remark generation disabled in admin config, skipping for ${level.requestId}`);
} else if (!aiService.isAvailable()) {
logger.warn(`[Approval] AI service unavailable for ${level.requestId}, skipping conclusion generation`);
}
}
} catch (aiError) {
logger.error(`[Approval] Background AI generation failed for ${level.requestId}:`, aiError);
// Silent failure - initiator can write manually
}
})().catch(err => {
// Catch any unhandled promise rejections
logger.error(`[Approval] Unhandled error in background AI generation:`, err);
});
// Notify initiator about approval and pending conclusion step
if (wf) {
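The background block above follows a fire-and-forget pattern: an async IIFE with its own try/catch plus a trailing `.catch`, so the approval response returns immediately and no rejection escapes unhandled. A minimal, generic sketch of that pattern (the helper name and logger shape are assumptions):

```typescript
// Generic sketch of the fire-and-forget pattern used for AI conclusion generation.
type MinimalLogger = { error: (...args: unknown[]) => void };

function runInBackground(task: () => Promise<void>, logger: MinimalLogger): void {
  (async () => {
    try {
      await task(); // the long-running work (e.g. AI generation)
    } catch (err) {
      logger.error('Background task failed:', err); // expected failures, logged and swallowed
    }
  })().catch(err => {
    logger.error('Unhandled error in background task:', err); // safety net for anything the try/catch missed
  });
}
```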

@ -385,9 +385,153 @@ export class DashboardService {
}
/**
* Get AI Remark Utilization with monthly trends
*/
async getAIRemarkUtilization(userId: string, dateRange?: string) {
const range = this.parseDateRange(dateRange);
// Check if user is admin
const user = await User.findByPk(userId);
const isAdmin = (user as any)?.isAdmin || false;
// For regular users: only their initiated requests
const userFilter = !isAdmin ? `AND cr.edited_by = :userId` : '';
// Get overall metrics
const overallMetrics = await sequelize.query(`
SELECT
COUNT(*)::int AS total_usage,
COUNT(CASE WHEN cr.is_edited = true THEN 1 END)::int AS total_edits,
ROUND(
(COUNT(CASE WHEN cr.is_edited = true THEN 1 END)::numeric /
NULLIF(COUNT(*)::numeric, 0)) * 100, 0
)::int AS edit_rate
FROM conclusion_remarks cr
WHERE cr.generated_at BETWEEN :start AND :end
${userFilter}
`, {
replacements: { start: range.start, end: range.end, userId },
type: QueryTypes.SELECT
});
// Get monthly trends (last 7 months)
const monthlyTrends = await sequelize.query(`
SELECT
TO_CHAR(DATE_TRUNC('month', cr.generated_at), 'Mon') AS month,
EXTRACT(MONTH FROM cr.generated_at)::int AS month_num,
COUNT(*)::int AS ai_usage,
COUNT(CASE WHEN cr.is_edited = true THEN 1 END)::int AS manual_edits
FROM conclusion_remarks cr
WHERE cr.generated_at >= NOW() - INTERVAL '7 months'
${userFilter}
GROUP BY month, month_num
ORDER BY month_num ASC
`, {
replacements: { userId },
type: QueryTypes.SELECT
});
const stats = overallMetrics[0] as any;
return {
totalUsage: stats.total_usage || 0,
totalEdits: stats.total_edits || 0,
editRate: stats.edit_rate || 0,
monthlyTrends: monthlyTrends.map((m: any) => ({
month: m.month,
aiUsage: m.ai_usage,
manualEdits: m.manual_edits
}))
};
}
/**
* Get Approver Performance metrics with pagination
*/
async getApproverPerformance(userId: string, dateRange?: string, page: number = 1, limit: number = 10) {
const range = this.parseDateRange(dateRange);
// Check if user is admin
const user = await User.findByPk(userId);
const isAdmin = (user as any)?.isAdmin || false;
// For regular users: return empty (only admins should see this)
if (!isAdmin) {
return {
performance: [],
currentPage: page,
totalPages: 0,
totalRecords: 0,
limit
};
}
// Calculate offset
const offset = (page - 1) * limit;
// Get total count
const countResult = await sequelize.query(`
SELECT COUNT(DISTINCT al.approver_id) as total
FROM approval_levels al
WHERE al.action_date BETWEEN :start AND :end
AND al.status IN ('APPROVED', 'REJECTED')
HAVING COUNT(*) > 0
`, {
replacements: { start: range.start, end: range.end },
type: QueryTypes.SELECT
});
const totalRecords = Number((countResult[0] as any)?.total || 0);
const totalPages = Math.ceil(totalRecords / limit);
// Get approver performance metrics
const approverMetrics = await sequelize.query(`
SELECT
al.approver_id,
al.approver_name,
COUNT(*)::int AS total_approved,
ROUND(
AVG(
CASE
WHEN al.tat_breached = false THEN 100
ELSE 0
END
), 0
)::int AS tat_compliance_percent,
ROUND(AVG(al.elapsed_hours)::numeric, 1) AS avg_response_hours,
COUNT(CASE WHEN al.status = 'PENDING' THEN 1 END)::int AS pending_count
FROM approval_levels al
WHERE al.action_date BETWEEN :start AND :end
AND al.status IN ('APPROVED', 'REJECTED')
GROUP BY al.approver_id, al.approver_name
HAVING COUNT(*) > 0
ORDER BY total_approved DESC
LIMIT :limit OFFSET :offset
`, {
replacements: { start: range.start, end: range.end, limit, offset },
type: QueryTypes.SELECT
});
return {
performance: approverMetrics.map((a: any) => ({
approverId: a.approver_id,
approverName: a.approver_name,
totalApproved: a.total_approved,
tatCompliancePercent: a.tat_compliance_percent,
avgResponseHours: parseFloat(a.avg_response_hours || 0),
pendingCount: a.pending_count
})),
currentPage: page,
totalPages,
totalRecords,
limit
};
}
/**
* Get recent activity feed with pagination
*/
async getRecentActivity(userId: string, page: number = 1, limit: number = 10) {
// Check if user is admin
const user = await User.findByPk(userId);
const isAdmin = (user as any)?.isAdmin || false;
@ -404,6 +548,25 @@ export class DashboardService {
)
`;
// Calculate offset
const offset = (page - 1) * limit;
// Get total count
const countResult = await sequelize.query(`
SELECT COUNT(*) as total
FROM activities a
JOIN workflow_requests wf ON a.request_id = wf.request_id
WHERE a.created_at >= NOW() - INTERVAL '7 days'
${whereClause}
`, {
replacements: { userId },
type: QueryTypes.SELECT
});
const totalRecords = Number((countResult[0] as any).total);
const totalPages = Math.ceil(totalRecords / limit);
// Get paginated activities
const activities = await sequelize.query(`
SELECT
a.activity_id,
@ -422,31 +585,37 @@ export class DashboardService {
WHERE a.created_at >= NOW() - INTERVAL '7 days'
${whereClause}
ORDER BY a.created_at DESC
LIMIT :limit OFFSET :offset
`, {
replacements: { userId, limit, offset },
type: QueryTypes.SELECT
});
return {
activities: activities.map((a: any) => ({
activityId: a.activity_id,
requestId: a.request_id,
requestNumber: a.request_number,
requestTitle: a.request_title,
type: a.type,
action: a.activity_description || a.type,
details: a.activity_category,
userId: a.user_id,
userName: a.user_name,
timestamp: a.timestamp,
priority: (a.priority || '').toLowerCase()
})),
currentPage: page,
totalPages,
totalRecords,
limit
};
}
/**
* Get critical requests (breached TAT or approaching deadline) with pagination
*/
async getCriticalRequests(userId: string, page: number = 1, limit: number = 10) {
// Check if user is admin
const user = await User.findByPk(userId);
const isAdmin = (user as any)?.isAdmin || false;
@ -467,6 +636,36 @@ export class DashboardService {
)` : ''}
`;
const criticalCondition = `
AND (
-- Has TAT breaches
EXISTS (
SELECT 1 FROM tat_alerts ta
WHERE ta.request_id = wf.request_id
AND (ta.is_breached = true OR ta.threshold_percentage >= 75)
)
-- Or is express priority
OR wf.priority = 'EXPRESS'
)
`;
// Calculate offset
const offset = (page - 1) * limit;
// Get total count
const countResult = await sequelize.query(`
SELECT COUNT(*) as total
FROM workflow_requests wf
${whereClause}
${criticalCondition}
`, {
replacements: { userId },
type: QueryTypes.SELECT
});
const totalRecords = Number((countResult[0] as any).total);
const totalPages = Math.ceil(totalRecords / limit);
const criticalRequests = await sequelize.query(`
SELECT
wf.request_id,
@ -500,23 +699,14 @@ export class DashboardService {
) AS current_level_start_time
FROM workflow_requests wf
${whereClause}
${criticalCondition}
ORDER BY
CASE WHEN wf.priority = 'EXPRESS' THEN 1 ELSE 2 END,
breach_count DESC,
wf.created_at ASC
LIMIT :limit OFFSET :offset
`, {
replacements: { userId, limit, offset },
type: QueryTypes.SELECT
});
@ -548,18 +738,25 @@ export class DashboardService {
totalLevels: req.total_levels,
submissionDate: req.submission_date,
totalTATHours: currentLevelRemainingHours, // Current level remaining hours
originalTATHours: currentLevelTatHours, // Original TAT hours allocated for current level
breachCount: req.breach_count || 0,
isCritical: req.breach_count > 0 || req.priority === 'EXPRESS'
};
}));
return {
criticalRequests: criticalWithSLA,
currentPage: page,
totalPages,
totalRecords,
limit
};
}
/**
* Get upcoming deadlines with pagination
*/
async getUpcomingDeadlines(userId: string, page: number = 1, limit: number = 10) {
// Check if user is admin
const user = await User.findByPk(userId);
const isAdmin = (user as any)?.isAdmin || false;
@ -574,6 +771,23 @@ export class DashboardService {
${!isAdmin ? `AND al.approver_id = :userId` : ''}
`;
// Calculate offset
const offset = (page - 1) * limit;
// Get total count
const countResult = await sequelize.query(`
SELECT COUNT(*) as total
FROM approval_levels al
JOIN workflow_requests wf ON al.request_id = wf.request_id
${whereClause}
`, {
replacements: { userId },
type: QueryTypes.SELECT
});
const totalRecords = Number((countResult[0] as any).total);
const totalPages = Math.ceil(totalRecords / limit);
const deadlines = await sequelize.query(`
SELECT
al.level_id,
@ -592,9 +806,9 @@ export class DashboardService {
JOIN workflow_requests wf ON al.request_id = wf.request_id
${whereClause}
ORDER BY al.level_start_time ASC
LIMIT :limit OFFSET :offset
`, {
replacements: { userId, limit, offset },
type: QueryTypes.SELECT
});
@ -639,8 +853,16 @@ export class DashboardService {
};
}));
// Sort by TAT percentage used (descending)
const sortedDeadlines = deadlinesWithSLA.sort((a, b) => b.tatPercentageUsed - a.tatPercentageUsed);
return {
deadlines: sortedDeadlines,
currentPage: page,
totalPages,
totalRecords,
limit
};
}
/**
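All of the paginated service methods above share the same arithmetic: an `OFFSET` derived from the requested page, and a `totalPages` derived from the count query. Shown in isolation as a sketch (the helper is illustrative; the field names mirror the returned objects):

```typescript
// Sketch of the shared pagination arithmetic used by the dashboard service.
function paginationMeta(page: number, limit: number, totalRecords: number) {
  const offset = (page - 1) * limit;                  // rows skipped via LIMIT :limit OFFSET :offset
  const totalPages = Math.ceil(totalRecords / limit); // e.g. 23 records with limit 10 -> 3 pages
  return { currentPage: page, totalPages, totalRecords, limit, offset };
}

// paginationMeta(3, 10, 23) -> { currentPage: 3, totalPages: 3, totalRecords: 23, limit: 10, offset: 20 }
```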

@ -827,6 +827,7 @@ export class WorkflowService {
const initiator = await User.findByPk(initiatorId);
const initiatorName = (initiator as any)?.displayName || (initiator as any)?.email || 'User';
// Log creation activity
activityService.log({
requestId: (workflow as any).requestId,
type: 'created',
@ -835,14 +836,32 @@ export class WorkflowService {
action: 'Initial request submitted',
details: `Initial request submitted for ${workflowData.title} by ${initiatorName}`
});
// Send notification to INITIATOR confirming submission
await notificationService.sendToUsers([initiatorId], {
title: 'Request Submitted Successfully',
body: `Your request "${workflowData.title}" has been submitted and is now with the first approver.`,
requestNumber: requestNumber,
requestId: (workflow as any).requestId,
url: `/request/${requestNumber}`,
type: 'request_submitted',
priority: 'MEDIUM'
});
// Send notification to FIRST APPROVER for assignment
const firstLevel = await ApprovalLevel.findOne({ where: { requestId: (workflow as any).requestId, levelNumber: 1 } });
if (firstLevel) {
await notificationService.sendToUsers([(firstLevel as any).approverId], {
title: 'New Request Assigned',
body: `${workflowData.title}`,
requestNumber: requestNumber,
requestId: (workflow as any).requestId,
url: `/request/${requestNumber}`,
type: 'assignment',
priority: 'HIGH',
actionRequired: true
});
activityService.log({
requestId: (workflow as any).requestId,
type: 'assignment',
@ -852,6 +871,7 @@ export class WorkflowService {
details: `Request assigned to ${(firstLevel as any).approverName || (firstLevel as any).approverEmail || 'approver'} for review`
});
}
return workflow;
} catch (error) {
logger.error('Failed to create workflow:', error);
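For clarity, the shape of the notification payload now sent to both the initiator and the first approver. The field list comes from the two `sendToUsers` calls above; the interface name is hypothetical and only serves as documentation.

```typescript
// Hypothetical type describing the payload passed to notificationService.sendToUsers above.
interface WorkflowNotificationPayload {
  title: string;
  body: string;
  requestNumber: string;
  requestId: string;
  url: string;
  type: 'request_submitted' | 'assignment' | string;
  priority: 'MEDIUM' | 'HIGH' | string;
  actionRequired?: boolean; // set only on the approver assignment notification
}
```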

@ -1,22 +0,0 @@
// Test setup file
import { sequelize } from '../src/models';
beforeAll(async () => {
// Setup test database connection
await sequelize.authenticate();
});
afterAll(async () => {
// Close database connection
await sequelize.close();
});
beforeEach(async () => {
// Clean up test data before each test
// Add cleanup logic here
});
afterEach(async () => {
// Clean up test data after each test
// Add cleanup logic here
});