AI conclusion flow added; TAT triggering issue resolved; enhanced system config for AI setup
This commit is contained in:
parent 54ecae5b7b
commit 56258205ea

426 ADMIN_AI_CONFIGURATION.md (new file)
# Admin Panel - AI Provider Configuration

## Overview

Admins can configure AI providers **directly through the admin panel** without touching code or `.env` files. The system supports three AI providers with automatic failover.

---

## 🎯 Quick Start for Admins

### Step 1: Access Admin Panel

Navigate to the admin configurations page in your workflow system.

### Step 2: Configure AI Provider

Look for the **AI Configuration** section with these settings:

| Setting | Description | Example Value |
|---------|-------------|---------------|
| **AI Provider** | Choose your AI provider | `claude`, `openai`, or `gemini` |
| **Claude API Key** | API key from Anthropic | `sk-ant-xxxxxxxxxxxxx` |
| **OpenAI API Key** | API key from OpenAI | `sk-proj-xxxxxxxxxxxxx` |
| **Gemini API Key** | API key from Google | `AIzaxxxxxxxxxxxxxxx` |
| **Enable AI Features** | Turn AI on/off | `true` or `false` |

### Step 3: Get Your API Key

Choose ONE provider and get an API key:

#### Option A: Claude (Recommended)
1. Go to https://console.anthropic.com
2. Create an account / sign in
3. Generate an API key
4. Copy the key (starts with `sk-ant-`)

#### Option B: OpenAI
1. Go to https://platform.openai.com
2. Create an account / sign in
3. Navigate to API keys
4. Create a new key
5. Copy the key (starts with `sk-proj-` or `sk-`)

#### Option C: Gemini (Free Tier Available!)
1. Go to https://ai.google.dev
2. Sign in with a Google account
3. Get an API key
4. Copy the key

### Step 4: Configure in Admin Panel

**Example: Setting up Claude**

1. Set **AI Provider** = `claude`
2. Set **Claude API Key** = `sk-ant-api03-xxxxxxxxxxxxx`
3. Leave the other API keys empty (optional)
4. Set **Enable AI Features** = `true`
5. Click **Save Configuration**

✅ **Done!** The system will automatically initialize Claude.

---

## 🔄 How It Works

### Automatic Initialization

When you save the configuration:

```
Admin saves config
        ↓
System clears cache
        ↓
AI Service reads new config from database
        ↓
Initializes selected provider (Claude/OpenAI/Gemini)
        ↓
✅ AI features active
```

**You'll see in server logs:**

```
info: [Admin] AI configuration 'AI_PROVIDER' updated
info: [AI Service] Reinitializing AI provider from updated configuration...
info: [AI Service] Preferred provider from config: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
info: [Admin] AI service reinitialized with Claude (Anthropic)
```
### Automatic Failover

If your primary provider fails, the system automatically tries alternatives:

```sql
-- Example: Admin configured Claude, but the API key is invalid
UPDATE admin_configurations
SET config_value = 'claude'
WHERE config_key = 'AI_PROVIDER';

UPDATE admin_configurations
SET config_value = 'sk-ant-INVALID'
WHERE config_key = 'CLAUDE_API_KEY';
```

**System Response:**
```
warn: [AI Service] Claude API key not configured.
warn: [AI Service] Preferred provider unavailable. Trying fallbacks...
info: [AI Service] ✅ OpenAI provider initialized
info: [AI Service] ✅ Using fallback provider: OpenAI (GPT-4)
```

✅ **AI features still work!** (if an OpenAI key is configured)

---

## 📋 Configuration Guide by Provider

### Claude (Anthropic) - Best for Production

**Pros:**
- ✅ High-quality, professional output
- ✅ Excellent instruction following
- ✅ Good for formal business documents
- ✅ Reliable and consistent

**Cons:**
- ⚠️ Paid service (no free tier)
- ⚠️ Requires account setup

**Configuration:**
```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-api03-xxxxxxxxxxxxx
```

**Cost:** ~$0.004 per conclusion generation

---
### OpenAI (GPT-4) - Industry Standard

**Pros:**
- ✅ Fast response times
- ✅ Well-documented
- ✅ Widely used and trusted
- ✅ Good performance

**Cons:**
- ⚠️ Paid service
- ⚠️ Higher cost than alternatives

**Configuration:**
```
AI_PROVIDER = openai
OPENAI_API_KEY = sk-proj-xxxxxxxxxxxxx
```

**Cost:** ~$0.005 per conclusion generation

---

### Gemini (Google) - Cost-Effective

**Pros:**
- ✅ **Free tier available!**
- ✅ Good performance
- ✅ Easy Google integration
- ✅ Generous rate limits

**Cons:**
- ⚠️ Slightly lower quality than Claude/GPT-4
- ⚠️ Rate limits on free tier

**Configuration:**
```
AI_PROVIDER = gemini
GEMINI_API_KEY = AIzaxxxxxxxxxxxxxxx
```

**Cost:** **FREE** (up to rate limits), then $0.0001 per generation

---
## 🔐 Security Best Practices

### 1. API Key Storage
- ✅ **Stored in database** (encrypted in production)
- ✅ **Marked as sensitive** (hidden in UI by default)
- ✅ **Never exposed** to the frontend
- ✅ **Admin access only**
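The "hidden in UI by default" behavior can be approximated with a small masking helper. `maskApiKey` is a hypothetical name for illustration, not the actual admin-panel code:

```typescript
// Hypothetical display helper: never render the full key in the UI.
function maskApiKey(key: string): string {
  if (key.length <= 10) return "••••••••";           // too short to reveal anything
  // Keep the recognizable prefix and last 4 chars; mask the middle.
  return `${key.slice(0, 6)}••••••••${key.slice(-4)}`;
}

console.log(maskApiKey("sk-ant-api03-abcdef123456")); // "sk-ant••••••••3456"
```

The prefix is enough for an admin to recognize which key is stored, while the secret portion never leaves the server unmasked.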
### 2. Key Rotation
- Rotate API keys every 3-6 months
- Update in the admin panel
- The system automatically reinitializes

### 3. Access Control
- Only **Super Admins** can update AI configurations
- Regular users cannot view API keys
- All changes are logged in the audit trail

---
## 🧪 Testing AI Configuration

### Method 1: Check Status via API

```bash
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  http://localhost:5000/api/v1/ai/status
```

**Response:**
```json
{
  "success": true,
  "data": {
    "available": true,
    "provider": "Claude (Anthropic)",
    "status": "active"
  }
}
```

### Method 2: Check Server Logs

Look for initialization logs when the server starts:

```
info: [AI Service] Preferred provider from config: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```

### Method 3: Test in Application

1. Create a workflow request
2. Complete all approvals
3. As initiator, click "Finalize & Close Request"
4. Click "Generate with AI"
5. You should see an AI-generated conclusion

---
## 🔄 Switching Providers

### Example: Switching from Claude to Gemini

**Current Configuration:**
```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-xxxxxxxxxxxxx
```

**Steps to Switch:**

1. **Get a Gemini API key** from https://ai.google.dev
2. **Open Admin Panel** → AI Configuration
3. **Update settings:**
   - Set **AI Provider** = `gemini`
   - Set **Gemini API Key** = `AIzaxxxxxxxxxxxxxxx`
4. **Click Save**

**Result:**
```
info: [Admin] AI configuration 'AI_PROVIDER' updated
info: [AI Service] Reinitializing...
info: [AI Service] Preferred provider from config: gemini
info: [AI Service] ✅ Gemini provider initialized
info: [AI Service] ✅ Active provider: Gemini (Google)
```

✅ **Done!** The system now uses Gemini. **No server restart needed!**

---
## 💡 Pro Tips

### 1. Multi-Provider Setup (Recommended)

Configure ALL three providers for maximum reliability:

```
AI_PROVIDER = claude
CLAUDE_API_KEY = sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY = sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY = AIzaxxxxxxxxxxxxxxx
AI_ENABLED = true
```

**Benefits:**
- If Claude is down → automatically uses OpenAI
- If OpenAI is down → automatically uses Gemini
- **Zero downtime** for AI features!

### 2. Cost Optimization

**Development/Testing:**
- Use `gemini` (free tier)
- Switch to a paid provider only for production

**Production:**
- Use `claude` for best quality
- Or use `openai` for fastest responses

### 3. Monitor Usage

Check which provider is being used:

```sql
SELECT
  ai_model_used,
  COUNT(*) AS usage_count,
  AVG(ai_confidence_score) AS avg_confidence
FROM conclusion_remarks
WHERE created_at > NOW() - INTERVAL '30 days'
GROUP BY ai_model_used;
```

---
## ⚠️ Troubleshooting

### Issue: "AI Service not configured"

**Check:**
1. Is `AI_ENABLED` set to `true`?
2. Is at least one API key configured?
3. Is the API key valid?

**Fix:**
- Open the Admin Panel
- Verify the AI Provider setting
- Re-enter the API key
- Click Save

### Issue: "Failed to generate conclusion"

**Check:**
1. Is the API key still valid (not expired/revoked)?
2. Is the provider service available (check status.anthropic.com, etc.)?
3. Is there sufficient API quota/credit?

**Fix:**
- Test the API key manually (use the provider's playground)
- Check account balance/quota
- Try switching to a different provider

### Issue: Provider keeps failing

**Fallback Strategy:**
1. Configure multiple providers
2. The system will auto-switch
3. Check logs to see which one succeeded

---
## 📊 Admin Panel UI

The admin configuration page should show:

```
┌─────────────────────────────────────────────┐
│  AI Configuration                           │
├─────────────────────────────────────────────┤
│                                             │
│  AI Provider:       [claude ▼]              │
│    Options: claude, openai, gemini          │
│                                             │
│  Claude API Key:    [••••••••••••••] [Show] │
│    Enter Claude API key from console.anthr… │
│                                             │
│  OpenAI API Key:    [••••••••••••••] [Show] │
│    Enter OpenAI API key from platform.open… │
│                                             │
│  Gemini API Key:    [••••••••••••••] [Show] │
│    Enter Gemini API key from ai.google.dev  │
│                                             │
│  Enable AI Features: [✓] Enabled            │
│                                             │
│  Current Status: ✅ Active (Claude)         │
│                                             │
│  [Save Configuration]   [Test AI]           │
└─────────────────────────────────────────────┘
```

---
## 🎯 Summary

**Key Advantages:**
- ✅ **No code changes** - Configure through the UI
- ✅ **No server restart** - Hot reload on save
- ✅ **Automatic failover** - Multiple providers
- ✅ **Vendor flexibility** - Switch anytime
- ✅ **Audit trail** - All changes logged
- ✅ **Secure storage** - API keys encrypted

**Admin Actions Required:**
1. Choose an AI provider
2. Enter the API key
3. Click Save
4. Done!

**User Impact:**
- Zero - users just click "Generate with AI"
- The system handles provider selection automatically
- Professional conclusions generated seamlessly

---
## 📞 Support

**Provider Documentation:**
- Claude: https://docs.anthropic.com
- OpenAI: https://platform.openai.com/docs
- Gemini: https://ai.google.dev/docs

**For System Issues:**
- Check the `/api/v1/ai/status` endpoint
- Review server logs for initialization
- Verify `admin_configurations` table entries

180 AI_CONCLUSION_EXAMPLES.md (new file)
# AI Conclusion Remark Examples

## ✅ What Makes a Good Conclusion Remark?

A good conclusion remark should:
- **Be concise** (100-200 words)
- **Sound natural** (like a human wrote it, not AI)
- **State the facts** (what was requested, who approved, outcome)
- **Be practical** (suitable for archiving and future reference)
- **Avoid jargon** (no corporate buzzwords or overly formal language)

---

## ❌ BAD Example (Too Formal/Corporate)

```
## Workflow Completion Summary

Request REQ-2025-82736 "testing ai conclusion" has been successfully completed with EXPRESS priority status. The workflow proceeded efficiently through the approval process, achieving completion well within the established timeframes and meeting all required approval criteria.

### Key Highlights:

• **Expedited Approval Process**: The request was processed with EXPRESS priority and completed successfully within the designated approval framework

• **Efficient Level 1 Approval**: Test User11 provided prompt approval, completing the review in 0.0 hours against the allocated 1-hour TAT, demonstrating exceptional processing efficiency

• **Document Compliance**: The approver confirmed that all submitted documentation met the required standards with the comment "Documents are fine i am approving it"

• **Streamlined Execution**: The workflow proceeded without requiring additional work notes, discussions, or document revisions, indicating clear initial requirements and proper submission formatting

• **Zero Delays**: No bottlenecks or escalations were encountered during the approval process, ensuring optimal workflow performance

The successful completion of this EXPRESS priority request demonstrates the effectiveness of Royal Enfield's approval mechanisms and the commitment of stakeholders to maintain operational efficiency. The workflow concluded with all necessary approvals obtained and compliance requirements satisfied.
```

**Problems:**
- Way too long and verbose
- Overly formal corporate language
- Sounds like AI/marketing material
- Uses buzzwords ("streamlined execution", "optimal workflow performance")
- Not practical for quick reference

---
## ✅ GOOD Example (Natural & Practical)

```
Request for testing AI conclusion feature (REQ-2025-82736) was submitted with EXPRESS priority and approved by Test User11 at Level 1. The approver reviewed the submitted documents and confirmed everything was in order, with the comment "Documents are fine i am approving it."

The approval was completed quickly (within the 1-hour TAT), with no revisions or additional documentation required. Request is now closed and ready for implementation.
```

**Why This Works:**
- Concise and to the point (~80 words)
- Sounds like a human wrote it
- States the key facts clearly
- Easy to read and reference later
- Professional but not overly formal
- Mentions the outcome

---

## 💡 Example: Request with Multiple Approvers

### Bad (Too Formal):
```
The multi-level approval workflow demonstrated exceptional efficiency and stakeholder engagement across all hierarchical levels, with each approver providing valuable insights and maintaining adherence to established turnaround time parameters...
```

### Good (Natural):
```
This purchase request (REQ-2025-12345) was approved by all three levels: Rajesh (Department Head), Priya (Finance), and Amit (Director). Rajesh approved the budget allocation, Priya confirmed fund availability, and Amit gave final sign-off. Total processing time was 2.5 days. Purchase order can now be raised.
```

---

## 💡 Example: Request with Work Notes

### Bad (Too Formal):
```
Throughout the approval lifecycle, stakeholders engaged in comprehensive discussions via the work notes functionality, demonstrating collaborative problem-solving and thorough due diligence...
```

### Good (Natural):
```
Marketing campaign request (REQ-2025-23456) approved by Sarah after discussion about budget allocation. Initial request was for ₹50,000, but after work note clarification, it was revised to ₹45,000 to stay within quarterly limits. Campaign is approved to proceed with revised budget.
```

---
## 💡 Example: Rejected Request

### Bad (Too Formal):
```
Following comprehensive review and evaluation against established organizational criteria and resource allocation parameters, the request has been declined due to insufficiency in budgetary justification documentation...
```

### Good (Natural):
```
Equipment purchase request (REQ-2025-34567) was rejected by Finance (Priya). Reason: Budget already exhausted for Q4, and the equipment is not critical for current operations. Initiator can resubmit in Q1 next year with updated cost estimates and business justification.
```

---

## 📝 Template for Writing Good Conclusions

Use this structure:

1. **What was requested**: Brief description and request number
2. **Who approved/rejected**: Name and level/department
3. **Key decision or comment**: Any important feedback from approvers
4. **Outcome**: What happens next or status

**Example:**
```
[What] Request for new laptop (REQ-2025-45678)
[Who] Approved by IT Manager (Suresh) and Finance (Meera)
[Decision] Both approved, Meera confirmed budget is available
[Outcome] Procurement team can proceed with laptop order, estimated delivery in 2 weeks
```

---
## 🎯 Key Differences: AI-Generated vs Human-Written

| AI-Generated (Bad) | Human-Written (Good) |
|-------------------|---------------------|
| "Stakeholder engagement" | "Discussed with..." |
| "Achieved completion well within established timeframes" | "Completed on time" |
| "Demonstrating exceptional processing efficiency" | "Processed quickly" |
| "Optimal workflow performance" | "Everything went smoothly" |
| "The workflow concluded with all necessary approvals obtained" | "All approvals received, request is closed" |

---
## ✅ Updated AI Prompt

The AI service now uses an improved prompt that generates more realistic conclusions:

**Old Prompt:**
- Asked for a "professional workflow management assistant"
- Requested a "formal and factual" tone
- Asked for corporate language

**New Prompt:**
- Asks the AI to "write like an employee documenting the outcome"
- Requests a "natural and human-written" style
- Explicitly forbids "corporate jargon or buzzwords"
- Limits length to 100-200 words
- Focuses on practical, archival value
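As a rough sketch, constraints like these could be assembled into a prompt string as follows. Both the function name `buildConclusionPrompt` and the exact wording are illustrative, not the service's actual prompt:

```typescript
// Hypothetical prompt assembly reflecting the constraints listed above.
function buildConclusionPrompt(requestSummary: string): string {
  return [
    "Write like an employee documenting the outcome of this request.",
    "Keep it natural and human-written, 100-200 words.",
    "Do not use corporate jargon or buzzwords.",
    "Cover the facts useful for archiving: what was requested, who approved, the outcome.",
    "",
    "Request details:",
    requestSummary,
  ].join("\n");
}
```

Keeping the constraints as separate lines makes it easy to tune one rule (for example, the word limit) without rewriting the whole prompt.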
---

## 🔧 How It Works Now

When you click "Generate Conclusion":

1. **AI analyzes** the request, approvals, work notes, and documents
2. **AI generates** a concise, practical summary (100-200 words)
3. **You review** and can edit it if needed
4. **You finalize** to close the request

The conclusion is now:
- ✅ More realistic and natural
- ✅ Concise and to the point
- ✅ Professional but not stuffy
- ✅ Suitable for archiving
- ✅ Easy to read and reference

---

## 💬 Feedback

If the AI still generates overly formal conclusions, you can always:
1. **Edit it** directly in the text area
2. **Simplify** the language before finalizing
3. **Rewrite** key sections to sound more natural

The goal is a conclusion that **you would actually write yourself** if you were closing the request.

309 AI_PROVIDER_SETUP.md (new file)
# AI Provider Configuration Guide

The Workflow Management System supports multiple AI providers for generating conclusion remarks. The system uses a **provider-agnostic architecture** with automatic fallback, making it easy to switch between providers.

## Supported Providers

| Provider | Environment Variable | Model Used | Installation |
|----------|---------------------|------------|--------------|
| **Claude (Anthropic)** | `CLAUDE_API_KEY` or `ANTHROPIC_API_KEY` | `claude-3-5-sonnet-20241022` | `npm install @anthropic-ai/sdk` |
| **OpenAI (GPT)** | `OPENAI_API_KEY` | `gpt-4o` | `npm install openai` |
| **Gemini (Google)** | `GEMINI_API_KEY` or `GOOGLE_AI_API_KEY` | `gemini-1.5-pro` | `npm install @google/generative-ai` |

---

## Quick Start

### Option 1: Claude (Recommended)

```bash
# Install package
npm install @anthropic-ai/sdk

# Set environment variables (in .env)
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
```

### Option 2: OpenAI

```bash
# Install package
npm install openai

# Set environment variables (in .env)
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```

### Option 3: Gemini

```bash
# Install package
npm install @google/generative-ai

# Set environment variables (in .env)
AI_PROVIDER=gemini
GEMINI_API_KEY=xxxxxxxxxxxxx
```

---
## Configuration

### 1. Set Preferred Provider (Optional)

Add to your `.env` file:

```bash
# Preferred AI provider (claude, openai, or gemini)
# Default: claude
AI_PROVIDER=claude
```

### 2. Add API Key

Add the corresponding API key for your chosen provider:

```bash
# For Claude (Anthropic)
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx

# For OpenAI
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx

# For Gemini (Google)
GEMINI_API_KEY=xxxxxxxxxxxxx
```

---
## Automatic Fallback

The system has built-in intelligence to handle provider failures:

1. **Primary**: Tries the provider specified in `AI_PROVIDER`
2. **Fallback**: If the primary fails, tries other available providers in order
3. **Graceful Degradation**: If no provider is available, shows an error to the user

**Example Startup Logs:**

```
info: [AI Service] Preferred provider: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```

**Example Fallback:**

```
info: [AI Service] Preferred provider: openai
warn: [AI Service] OpenAI API key not configured.
warn: [AI Service] Preferred provider unavailable. Trying fallbacks...
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Using fallback provider: Claude (Anthropic)
```
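The three steps above can be sketched as a simple selection loop. The `AIProvider` interface matches the shape used later in this guide; the `selectProvider` function itself is an illustrative sketch, not the actual service code:

```typescript
interface AIProvider {
  isAvailable(): boolean;
  getProviderName(): string;
}

// Illustrative: pick the preferred provider, else the first available fallback.
function selectProvider(
  preferred: string,
  providers: Map<string, AIProvider>
): AIProvider | null {
  const primary = providers.get(preferred);
  if (primary?.isAvailable()) return primary;        // 1. primary
  for (const [name, provider] of providers) {
    if (name !== preferred && provider.isAvailable()) {
      return provider;                               // 2. fallback, in order
    }
  }
  return null;                                       // 3. graceful degradation
}
```

With `AI_PROVIDER=openai` but no OpenAI key configured, this loop would land on the Claude provider, matching the fallback log shown above.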
---

## Provider Comparison

### Claude (Anthropic)
- ✅ **Best for**: Professional, well-structured summaries
- ✅ **Strengths**: Excellent at following instructions, consistent output
- ✅ **Pricing**: Moderate (pay-per-token)
- ⚠️ **Requires**: API key from console.anthropic.com

### OpenAI (GPT-4)
- ✅ **Best for**: General-purpose text generation
- ✅ **Strengths**: Fast, widely adopted, good documentation
- ✅ **Pricing**: Moderate to high
- ⚠️ **Requires**: API key from platform.openai.com

### Gemini (Google)
- ✅ **Best for**: Cost-effective solution
- ✅ **Strengths**: Free tier available, good performance
- ✅ **Pricing**: Free tier + paid tiers
- ⚠️ **Requires**: API key from ai.google.dev

---
## Switching Providers

### Option A: Simple Switch (via .env)

Just change the `AI_PROVIDER` variable and restart the server:

```bash
# Old
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx

# New
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```

```bash
# Restart backend
npm run dev
```

### Option B: Multi-Provider Setup (Automatic Failover)

Configure multiple API keys for automatic failover:

```bash
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY=xxxxxxxxxxxxx
```

If Claude fails, the system automatically tries OpenAI, then Gemini.

---
## Testing AI Generation

### 1. Check if AI is configured:

```bash
curl http://localhost:5000/api/v1/health
```

Look for logs:
```
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```

### 2. Test conclusion generation:

1. Create a workflow request
2. Complete all approvals (as final approver)
3. As initiator, click "Finalize & Close Request"
4. Click "Generate with AI"
5. Review the AI-generated conclusion
6. Edit if needed
7. Finalize

---
## Troubleshooting

### Error: "AI Service not configured"

**Solution**: Add at least one API key to `.env`:
```bash
CLAUDE_API_KEY=your-key-here
# OR
OPENAI_API_KEY=your-key-here
# OR
GEMINI_API_KEY=your-key-here
```

### Error: "Cannot find module '@anthropic-ai/sdk'"

**Solution**: Install the required package:
```bash
npm install @anthropic-ai/sdk
```

### Provider not working

**Check logs** for initialization errors:
```
# Successful
info: [AI Service] ✅ Claude provider initialized

# Failed
error: [AI Service] Failed to initialize Claude: Invalid API key
```

**Verify API key**:
- Claude: Should start with `sk-ant-`
- OpenAI: Should start with `sk-proj-` or `sk-`
- Gemini: No specific prefix
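These prefix rules can be captured in a quick sanity check before a key is ever sent to a provider. `looksLikeValidKey` is a hypothetical helper, and a matching prefix does not guarantee the key is active:

```typescript
type Provider = "claude" | "openai" | "gemini";

// Hypothetical pre-flight check based on the documented key prefixes.
function looksLikeValidKey(provider: Provider, key: string): boolean {
  switch (provider) {
    case "claude":
      return key.startsWith("sk-ant-");
    case "openai":
      return key.startsWith("sk-proj-") || key.startsWith("sk-");
    case "gemini":
      return key.trim().length > 0;                  // no fixed prefix to check
  }
}
```

A check like this catches the common mistake of pasting one provider's key into another provider's field, which would otherwise only surface as an authentication error at generation time.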
---

## Cost Management

### Estimated Costs (per conclusion generation):

| Provider | Tokens | Cost (approx) |
|----------|--------|---------------|
| Claude Sonnet | ~500 input + ~300 output | $0.004 |
| GPT-4o | ~500 input + ~300 output | $0.005 |
| Gemini Pro | ~500 input + ~300 output | Free tier or $0.0001 |

**Tips to reduce costs:**
- Use Gemini for development/testing (free tier)
- Use Claude/OpenAI for production
- Monitor usage via provider dashboards

---

## Security Best Practices

1. **Never commit API keys** to version control
2. **Use environment variables** for all sensitive data
3. **Rotate keys regularly** (every 3-6 months)
4. **Set rate limits** on provider dashboards
5. **Monitor usage** to detect anomalies

---
## Adding a New Provider

To add a new AI provider (e.g., Cohere, Hugging Face):

1. **Create Provider Class**:

```typescript
class NewProvider implements AIProvider {
  private client: any = null;

  constructor() {
    const apiKey = process.env.NEW_PROVIDER_API_KEY;
    if (!apiKey) return;

    try {
      const SDK = require('new-provider-sdk');
      this.client = new SDK({ apiKey });
    } catch (error) {
      logger.error('Failed to initialize NewProvider:', error);
    }
  }

  async generateText(prompt: string): Promise<string> {
    // Implementation
  }

  isAvailable(): boolean {
    return this.client !== null;
  }

  getProviderName(): string {
    return 'NewProvider';
  }
}
```

2. **Register in AIService**:

Add to the constructor's switch statement and fallback array.

3. **Update Documentation**: Add to this README.
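Step 2 could look roughly like the following. The constructor shape and registry map are assumptions for illustration — match them to your actual `AIService` — and `StubProvider` simply stands in for real provider classes like the `NewProvider` from step 1:

```typescript
interface AIProvider {
  isAvailable(): boolean;
  getProviderName(): string;
}

// Stand-in for concrete providers (Claude, OpenAI, Gemini, NewProvider...).
class StubProvider implements AIProvider {
  constructor(private name: string, private ready: boolean) {}
  isAvailable(): boolean { return this.ready; }
  getProviderName(): string { return this.name; }
}

// Illustrative wiring: the registry's insertion order is the fallback order,
// and the preferred provider is simply tried first.
class AIService {
  readonly active: AIProvider | null;

  constructor(preferred: string, registry: Map<string, AIProvider>) {
    const fallbackOrder = [...registry.keys()];      // add "newprovider" here
    const names = [preferred, ...fallbackOrder.filter((n) => n !== preferred)];
    this.active =
      names.map((n) => registry.get(n)).find((p) => p?.isAvailable()) ?? null;
  }
}
```

Once the new provider is in the registry, it participates in fallback automatically with no further changes to the selection logic.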
---

## Support

For issues with AI providers:
- **Claude**: https://docs.anthropic.com
- **OpenAI**: https://platform.openai.com/docs
- **Gemini**: https://ai.google.dev/docs

For system-specific issues, check application logs or contact the development team.

264 API_KEY_TROUBLESHOOTING.md (new file)
# 🔑 API Key Troubleshooting Guide
|
||||
|
||||
## ⚠️ Problem: Getting 404 Errors for ALL Claude Models
|
||||
|
||||
If you're getting 404 "model not found" errors for **multiple Claude models**, the issue is likely with your Anthropic API key, not the model versions.
|
||||
|
||||
---
|
||||
|
||||
## 🔍 Step 1: Verify Your API Key
|
||||
|
||||
### Check Your API Key Status
|
||||
|
||||
1. Go to: https://console.anthropic.com/
|
||||
2. Log in to your account
|
||||
3. Navigate to **Settings** → **API Keys**
|
||||
4. Check:
|
||||
- ✅ Is your API key active?
|
||||
- ✅ Does it have an active billing method?
|
||||
- ✅ Have you verified your email?
|
||||
- ✅ Are there any usage limits or restrictions?
|
||||
|
||||
### API Key Tiers
|
||||
|
||||
Anthropic has different API access tiers:
|
||||
|
||||
| Tier | Access Level | Requirements |
|
||||
|------|-------------|--------------|
|
||||
| **Free Trial** | Limited models, low usage | Email verification |
|
||||
| **Paid Tier 1** | All Claude 3 models | Add payment method, some usage |
|
||||
| **Paid Tier 2+** | All models + higher limits | More usage history |
|
||||
|
||||
**If you just created your API key:**
|
||||
- You might need to add a payment method
|
||||
- You might need to make a small payment first
|
||||
- Some models might not be available immediately
|
||||
|
||||
---
|
||||
|
||||

## 🎯 Step 2: Try the Most Basic Model (Claude 3 Haiku)

I've changed the default to **`claude-3-haiku-20240307`** - this should work with ANY valid API key.

### Restart Your Backend

**IMPORTANT:** You must restart the server for changes to take effect.

```bash
# Stop the current server (Ctrl+C)
# Then start again:
cd Re_Backend
npm run dev
```

### Check the Startup Logs

Look for this line:
```
[AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
```

### Test Again

Try generating a conclusion. You should see in logs:
```
[AI Service] Generating with Claude model: claude-3-haiku-20240307
```

---

## 🔧 Step 3: Check for Environment Variable Overrides

Your `.env` file might be overriding the default model.

### Check Your `.env` File

Open `Re_Backend/.env` and look for:
```bash
CLAUDE_MODEL=...
```

**If it exists:**
1. **Delete or comment it out** (add `#` at the start)
2. **Or change it to Haiku:**
   ```bash
   CLAUDE_MODEL=claude-3-haiku-20240307
   ```
3. **Restart the server**

---

## 🐛 Step 4: Verify API Key is Loaded

Add this temporary check to see if your API key is being loaded:

### Option A: Check Logs on Startup

When you start the server, you should see:
```
[AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
```

If you DON'T see this:
- Your API key might be missing or invalid
- Check `.env` file has: `CLAUDE_API_KEY=sk-ant-api03-...`

### Option B: Test API Key Manually

Create a test file `Re_Backend/test-api-key.js`:

```javascript
const Anthropic = require('@anthropic-ai/sdk');
require('dotenv').config();

const apiKey = process.env.CLAUDE_API_KEY || process.env.ANTHROPIC_API_KEY;

console.log('API Key found:', apiKey ? 'YES' : 'NO');
console.log('API Key starts with:', apiKey ? apiKey.substring(0, 20) + '...' : 'N/A');

async function testKey() {
  try {
    const client = new Anthropic({ apiKey });

    // Try the most basic model
    const response = await client.messages.create({
      model: 'claude-3-haiku-20240307',
      max_tokens: 100,
      messages: [{ role: 'user', content: 'Say hello' }]
    });

    console.log('✅ API Key works!');
    console.log('Response:', response.content[0].text);
  } catch (error) {
    console.error('❌ API Key test failed:', error.message);
    console.error('Error details:', error);
  }
}

testKey();
```

Run it:
```bash
cd Re_Backend
node test-api-key.js
```

---

## 💡 Step 5: Alternative - Use OpenAI or Gemini

If your Anthropic API key has issues, you can switch to another provider:

### Option A: Use OpenAI

1. **Get OpenAI API key** from: https://platform.openai.com/api-keys

2. **Add to `.env`:**
   ```bash
   AI_PROVIDER=openai
   OPENAI_API_KEY=sk-...
   ```

3. **Install OpenAI SDK:**
   ```bash
   cd Re_Backend
   npm install openai
   ```

4. **Restart server**

### Option B: Use Google Gemini

1. **Get Gemini API key** from: https://makersuite.google.com/app/apikey

2. **Add to `.env`:**
   ```bash
   AI_PROVIDER=gemini
   GEMINI_API_KEY=...
   ```

3. **Install Gemini SDK:**
   ```bash
   cd Re_Backend
   npm install @google/generative-ai
   ```

4. **Restart server**

---

## 🎯 Quick Checklist

- [ ] My Anthropic API key is valid and active
- [ ] I have a payment method added (if required)
- [ ] My email is verified
- [ ] I've deleted/commented out `CLAUDE_MODEL` from `.env` (or set it to Haiku)
- [ ] I've **restarted the backend server completely**
- [ ] I see the correct model in startup logs
- [ ] I've tested with the test script above

---

## 🆘 Still Not Working?

### Check Your API Key Format

Valid format: `sk-ant-api03-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`

- Must start with `sk-ant-`
- Must be quite long (80+ characters)
- No spaces or line breaks
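
The format rules above can be expressed as a quick sanity check. This is a sketch only: the function name is hypothetical and the 80-character minimum is an assumption based on typical key lengths, not an official limit.

```javascript
// Hypothetical helper: checks the key-shape rules listed above.
// The 80-character minimum is an assumption, not an official limit.
function looksLikeAnthropicKey(key) {
  if (typeof key !== 'string') return false;
  const k = key.trim();
  // Must start with sk-ant-, be long, and contain no whitespace
  return k.startsWith('sk-ant-') && k.length >= 80 && !/\s/.test(k);
}
```

This only validates the shape of the key; a well-formed key can still be inactive or unpaid, which is what the steps above diagnose.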

### Get a New API Key

1. Go to https://console.anthropic.com/settings/keys
2. Delete old key
3. Create new key
4. Add payment method if prompted
5. Update `.env` with new key
6. Restart server

### Contact Anthropic Support

If nothing works:
- Email: support@anthropic.com
- Check: https://status.anthropic.com/ (for service issues)
- Community: https://anthropic.com/community

---

## 🎯 Current System Default

The system now defaults to:
```
claude-3-haiku-20240307
```

This is the **most basic Claude model** that should work with **any valid API key**, even free tier.

If even Haiku doesn't work, there's a fundamental issue with your Anthropic API key or account status.

---

## ✅ Success Indicators

When everything is working correctly, you should see:

1. **On server startup:**
   ```
   [AI Service] ✅ Claude provider initialized with model: claude-3-haiku-20240307
   ```

2. **When generating a conclusion:**
   ```
   [AI Service] Generating with Claude model: claude-3-haiku-20240307
   ```

3. **In the response:**
   ```
   [AI Service] ✅ Conclusion generated successfully
   ```

No 404 errors! ✅

134
CLAUDE_MODELS.md
Normal file
@ -0,0 +1,134 @@

# Claude Model Versions - Quick Reference

## ✅ Current Claude Model (November 2025)

### Claude 4 Models (Latest)
- **`claude-sonnet-4-20250514`** ← **DEFAULT & CURRENT**
  - Latest Claude Sonnet 4 model
  - Released: May 14, 2025
  - Best for complex reasoning and conclusion generation
  - **This is what your API key supports**

## ⚠️ Deprecated Models (Do NOT Use)

The following Claude 3 models are deprecated and no longer available:
- ❌ `claude-3-opus-20240229` - Deprecated
- ❌ `claude-3-sonnet-20240229` - Deprecated
- ❌ `claude-3-haiku-20240307` - Deprecated
- ❌ `claude-3-5-sonnet-20240620` - Deprecated

**These will return 404 errors.**

---

## 🎯 What Happened?

All Claude 3 and 3.5 models have been deprecated and replaced with Claude 4.

**Your API key is current and working perfectly** - it just needs the **current model version**.

---

## 🔧 How to Change the Model

### Option 1: Environment Variable (Recommended)
Add to your `.env` file:

```bash
# Use Claude Sonnet 4 (current default)
CLAUDE_MODEL=claude-sonnet-4-20250514

# This is the ONLY model that currently works
```

### Option 2: Admin Configuration (Future)
The model can also be configured via the admin panel under AI settings.

---

## 🐛 Troubleshooting 404 Errors

If you get a 404 error like:
```
model: claude-3-5-sonnet-20241029
{"type":"error","error":{"type":"not_found_error","message":"model: ..."}}
```

**Solutions:**

1. **Check your `.env` file** for the `CLAUDE_MODEL` variable
2. **Remove or update** any invalid model version
3. **Restart the backend** server after changing `.env`
4. **Check server logs** on startup to see which model is being used:
   ```
   [AI Service] ✅ Claude provider initialized with model: claude-sonnet-4-20250514
   ```

---

## 📊 Current Default

The system now defaults to:
```
claude-sonnet-4-20250514
```

This is the **current Claude 4 model** (November 2025) and the only one that works with active API keys.

---

## 🔑 API Key Requirements

Make sure you have a valid Anthropic API key in your `.env`:
```bash
CLAUDE_API_KEY=sk-ant-api03-...
# OR
ANTHROPIC_API_KEY=sk-ant-api03-...
```

Get your API key from: https://console.anthropic.com/

---

## 📝 Model Selection Guide

| Use Case | Recommended Model | Notes |
|----------|------------------|-------|
| **All use cases** | **`claude-sonnet-4-20250514`** | **Only model currently available** |
| Older models | ❌ Deprecated | Will return 404 errors |

---

## 🎯 Quick Fix for 404 Errors

If you're getting 404 errors (model not found):

**You are almost certainly pointing at a deprecated Claude 3 / 3.5 model.**

### Solution: Use Claude Sonnet 4 (the current default)

1. **Restart the backend** - the default is now Claude Sonnet 4:
   ```bash
   npm run dev
   ```

2. **Check logs** for confirmation:
   ```
   [AI Service] ✅ Claude provider initialized with model: claude-sonnet-4-20250514
   ```

3. **Test again** - Should work now! ✅

### Alternative: Pin the Model Explicitly

If the log still shows an old model, set it in your `.env`:
```bash
CLAUDE_MODEL=claude-sonnet-4-20250514
```

Then restart the backend.

428
CONCLUSION_FEATURE.md
Normal file
@ -0,0 +1,428 @@

# Conclusion Remark Feature - Implementation Guide

## Overview

The **Conclusion Remark** feature allows the initiator to review and finalize a professional summary after all approvals are complete. The system uses **AI-powered generation** with support for multiple LLM providers.

---

## ✅ What's Implemented

### 1. **Database Layer**
- ✅ `conclusion_remarks` table created
- ✅ Stores AI-generated and final remarks
- ✅ Tracks edits, confidence scores, and KPIs
- ✅ One-to-one relationship with `workflow_requests`

### 2. **Backend Services**
- ✅ **Multi-provider AI service** (Claude, OpenAI, Gemini)
- ✅ Automatic fallback if primary provider fails
- ✅ Professional prompt engineering
- ✅ Key discussion points extraction
- ✅ Confidence scoring

### 3. **API Endpoints**
- ✅ `POST /api/v1/conclusions/:requestId/generate` - Generate AI remark
- ✅ `PUT /api/v1/conclusions/:requestId` - Update/edit remark
- ✅ `POST /api/v1/conclusions/:requestId/finalize` - Finalize & close
- ✅ `GET /api/v1/conclusions/:requestId` - Get conclusion
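
For reference, a small client-side helper can build these paths. The paths come from the list above; the helper itself is illustrative and not taken from the codebase.

```javascript
// Builds the conclusion endpoint paths listed above.
// 'action' is 'generate', 'finalize', or undefined (plain GET/PUT resource path).
function conclusionPath(requestId, action) {
  const base = `/api/v1/conclusions/${encodeURIComponent(requestId)}`;
  return action ? `${base}/${action}` : base;
}
```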

### 4. **Frontend Components**
- ✅ `ConclusionRemarkModal` with 3-step wizard
- ✅ AI generation button with loading states
- ✅ Manual entry option
- ✅ Edit and preview functionality
- ✅ Closure banner in RequestDetail

### 5. **Workflow Integration**
- ✅ Final approver triggers notification to initiator
- ✅ Green banner appears for approved requests
- ✅ Status changes from APPROVED → CLOSED on finalization
- ✅ Activity logging for audit trail

---

## 🎯 User Flow

### Step 1: Final Approval
```
Final Approver → Clicks "Approve Request"
        ↓
System → Marks request as APPROVED
        ↓
System → Sends notification to Initiator:
         "Request Approved - Closure Pending"
```

### Step 2: Initiator Reviews Request
```
Initiator → Opens request detail
        ↓
System → Shows green closure banner:
         "All approvals complete! Finalize conclusion to close."
        ↓
Initiator → Clicks "Finalize & Close Request"
```

### Step 3: AI Generation
```
Modal Opens → 3 options:
  1. Generate with AI (recommended)
  2. Write Manually
  3. Cancel
        ↓
Initiator → Clicks "Generate with AI"
        ↓
System → Analyzes:
  - Approval flow & comments
  - Work notes & discussions
  - Uploaded documents
  - Activity timeline
        ↓
AI → Generates professional conclusion (150-300 words)
```

### Step 4: Review & Edit
```
AI Remark Displayed
        ↓
Initiator → Reviews AI suggestion
        ↓
Options:
  - Accept as-is → Click "Preview & Continue"
  - Edit remark → Modify text → Click "Preview & Continue"
  - Regenerate → Click "Regenerate" for new version
```

### Step 5: Finalize
```
Preview Screen → Shows final remark + next steps
        ↓
Initiator → Clicks "Finalize & Close Request"
        ↓
System Actions:
  ✅ Save final remark to database
  ✅ Update request status to CLOSED
  ✅ Set closure_date timestamp
  ✅ Log activity "Request Closed"
  ✅ Notify all participants
  ✅ Move to Closed Requests
```

---

## 📊 Database Schema

```sql
CREATE TABLE conclusion_remarks (
  conclusion_id UUID PRIMARY KEY,
  request_id UUID UNIQUE REFERENCES workflow_requests(request_id),

  -- AI Generation
  ai_generated_remark TEXT,
  ai_model_used VARCHAR(100),        -- e.g., "Claude (Anthropic)"
  ai_confidence_score DECIMAL(5,2),  -- 0.00 to 1.00

  -- Final Version
  final_remark TEXT,
  edited_by UUID REFERENCES users(user_id),
  is_edited BOOLEAN DEFAULT false,
  edit_count INTEGER DEFAULT 0,

  -- Context Summaries (for KPIs)
  approval_summary JSONB,
  document_summary JSONB,
  key_discussion_points TEXT[],

  -- Timestamps
  generated_at TIMESTAMP,
  finalized_at TIMESTAMP,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);
```
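
`ai_confidence_score` is stored on a 0.00–1.00 scale, while the UI renders it as a percentage label like "85% confidence". A minimal formatter sketch (the function name is illustrative, not from the codebase):

```javascript
// Illustrative only: converts the stored 0.00–1.00 score to a UI label.
function formatConfidence(score) {
  return `${Math.round(score * 100)}% confidence`;
}
```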

---

## 🔌 AI Provider Setup

### Environment Variables

```bash
# Choose provider (claude, openai, or gemini)
AI_PROVIDER=claude

# API Keys (configure at least one)
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
GEMINI_API_KEY=xxxxxxxxxxxxx
```

### Provider Priority

1. **Primary**: Provider specified in `AI_PROVIDER`
2. **Fallback 1**: Claude (if available)
3. **Fallback 2**: OpenAI (if available)
4. **Fallback 3**: Gemini (if available)
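
The priority order above can be sketched as a simple selection function. This is illustrative only; the real service presumably also verifies that the matching SDK and API key are actually configured.

```javascript
// Returns the first usable provider following the fallback order above.
// 'configured' lists providers that have a valid API key set.
function pickProvider(preferred, configured) {
  const order = [preferred, 'claude', 'openai', 'gemini'];
  return order.find((p) => configured.includes(p)) || null;
}
```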

### Installation

Install your chosen provider's SDK:

```bash
# For Claude
npm install @anthropic-ai/sdk

# For OpenAI
npm install openai

# For Gemini
npm install @google/generative-ai
```

---

## 📋 KPI Tracking

The `conclusion_remarks` table enables powerful analytics:

### 1. AI Adoption Rate
```sql
SELECT
  COUNT(CASE WHEN ai_generated_remark IS NOT NULL THEN 1 END) as ai_generated,
  COUNT(*) as total,
  ROUND(COUNT(CASE WHEN ai_generated_remark IS NOT NULL THEN 1 END)::DECIMAL / COUNT(*) * 100, 2) as adoption_rate
FROM conclusion_remarks
WHERE finalized_at IS NOT NULL;
```

### 2. Edit Frequency
```sql
SELECT
  COUNT(CASE WHEN is_edited = true THEN 1 END) as edited,
  COUNT(*) as total,
  AVG(edit_count) as avg_edits_per_conclusion
FROM conclusion_remarks;
```

### 3. Average Confidence Score
```sql
SELECT
  AVG(ai_confidence_score) as avg_confidence,
  MIN(ai_confidence_score) as min_confidence,
  MAX(ai_confidence_score) as max_confidence
FROM conclusion_remarks
WHERE ai_generated_remark IS NOT NULL;
```

### 4. Conclusion Length Analysis
```sql
SELECT
  AVG(LENGTH(final_remark)) as avg_length,
  MAX(LENGTH(final_remark)) as max_length,
  MIN(LENGTH(final_remark)) as min_length
FROM conclusion_remarks
WHERE final_remark IS NOT NULL;
```

### 5. Provider Usage
```sql
SELECT
  ai_model_used,
  COUNT(*) as usage_count,
  AVG(ai_confidence_score) as avg_confidence
FROM conclusion_remarks
WHERE ai_model_used IS NOT NULL
GROUP BY ai_model_used;
```

---

## 🎨 Frontend UI

### Closure Banner (RequestDetail)
```
┌─────────────────────────────────────────────────┐
│ ✅ Request Approved - Closure Pending           │
│                                                 │
│ All approvals are complete! Please review and   │
│ finalize the conclusion remark to officially    │
│ close this request.                             │
│                                                 │
│ [✅ Finalize & Close Request]                   │
└─────────────────────────────────────────────────┘
```

### Conclusion Modal - Step 1: Generate
```
┌─────────────────────────────────────────────────┐
│ 📄 Finalize Request Closure                     │
├─────────────────────────────────────────────────┤
│                                                 │
│ ✨ AI-Powered Conclusion Generation             │
│                                                 │
│ Let AI analyze your request's approval flow,    │
│ work notes, and documents to generate a         │
│ professional conclusion remark.                 │
│                                                 │
│ [✨ Generate with AI]  [✏️ Write Manually]      │
│                                                 │
│ Powered by Claude AI • Analyzes approvals,      │
│ work notes & documents                          │
└─────────────────────────────────────────────────┘
```

### Step 2: Edit
```
┌─────────────────────────────────────────────────┐
│ ✨ AI-Generated Conclusion    [85% confidence]  │
│                                                 │
│ Key Highlights:                                 │
│ • All 3 approval levels completed successfully  │
│ • Request completed within TAT                  │
│ • 5 documents attached for reference            │
│                                                 │
│ Review & Edit Conclusion Remark:                │
│ ┌─────────────────────────────────────────────┐ │
│ │ The request for new office location was     │ │
│ │ thoroughly reviewed and approved by all     │ │
│ │ stakeholders...                             │ │
│ └─────────────────────────────────────────────┘ │
│                                      450 / 2000 │
│                                                 │
│ [✨ Regenerate]  [Cancel]  [Preview & Continue] │
└─────────────────────────────────────────────────┘
```

### Step 3: Preview
```
┌─────────────────────────────────────────────────┐
│ ✅ Final Conclusion Remark      [Edited by You] │
│ ┌─────────────────────────────────────────────┐ │
│ │ The request for new office location...      │ │
│ └─────────────────────────────────────────────┘ │
│                                                 │
│ ℹ️ What happens next?                           │
│ • Request status will change to "CLOSED"        │
│ • All participants will be notified             │
│ • Conclusion remark will be permanently saved   │
│ • Request will move to Closed Requests          │
│                                                 │
│ [✏️ Edit Again]  [✅ Finalize & Close Request]  │
└─────────────────────────────────────────────────┘
```

---

## 🔄 Status Transition

```
DRAFT → PENDING → IN_PROGRESS → APPROVED → CLOSED
                                    ↑          ↑
                           (Final Approval) (Conclusion)
```

**Key States:**
- `APPROVED`: All approvals complete, awaiting conclusion
- `CLOSED`: Conclusion finalized, request archived
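
A minimal sketch of the transition rules in the diagram (state names are from the diagram; the guard function itself is illustrative, not the actual backend code):

```javascript
// Allowed forward transitions, one step at a time, per the diagram above.
const TRANSITIONS = {
  DRAFT: ['PENDING'],
  PENDING: ['IN_PROGRESS'],
  IN_PROGRESS: ['APPROVED'],
  APPROVED: ['CLOSED'],
  CLOSED: [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```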

---

## 🧪 Testing

### 1. Setup AI Provider

```bash
# Option A: Claude (Recommended)
AI_PROVIDER=claude
CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxx

# Option B: OpenAI
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx

# Option C: Gemini (Free tier)
AI_PROVIDER=gemini
GEMINI_API_KEY=xxxxxxxxxxxxx
```

### 2. Run Migration

```bash
cd Re_Backend
npm run migrate
```

### 3. Test Workflow

1. Create workflow request
2. Add approvers
3. Complete all approvals
4. As initiator, click "Finalize & Close"
5. Generate AI conclusion
6. Review, edit, preview
7. Finalize and close

### 4. Verify Database

```sql
-- Check conclusion was created
SELECT * FROM conclusion_remarks WHERE request_id = 'your-request-id';

-- Check request was closed
SELECT status, closure_date, conclusion_remark
FROM workflow_requests
WHERE request_id = 'your-request-id';
```

---

## 🎯 Benefits

### For Users
- ✅ Professional, well-structured conclusion remarks
- ✅ Saves time (AI does the heavy lifting)
- ✅ Consistent format across all requests
- ✅ Can edit/customize AI suggestions
- ✅ Complete control over final content

### For Business
- ✅ Better documentation quality
- ✅ Audit trail of all decisions
- ✅ KPI tracking (AI adoption, edit rates)
- ✅ Vendor flexibility (swap AI providers anytime)
- ✅ Cost optimization (use free tier for testing)

---

## 📝 Notes

- **Required**: At least one AI provider API key must be configured
- **Automatic**: System selects best available provider
- **Flexible**: Switch providers without code changes
- **Graceful**: Falls back to manual entry if AI unavailable
- **Secure**: API keys stored in environment variables only
- **Logged**: All AI generations tracked for audit

---

## 🆘 Support

**AI Provider Issues:**
- Claude: https://docs.anthropic.com
- OpenAI: https://platform.openai.com/docs
- Gemini: https://ai.google.dev/docs

**System Issues:**
Check logs for AI service initialization:
```bash
grep "AI Service" logs/combined.log
```

Expected output:
```
info: [AI Service] Preferred provider: claude
info: [AI Service] ✅ Claude provider initialized
info: [AI Service] ✅ Active provider: Claude (Anthropic)
```

222
docs/IN_APP_NOTIFICATIONS_SETUP.md
Normal file
@ -0,0 +1,222 @@

# In-App Notification System - Setup Guide

## 🎯 Overview

Complete real-time in-app notification system for the Royal Enfield Workflow Management System.

## ✅ Features Implemented

### Backend:
1. **Notification Model** (`models/Notification.ts`)
   - Stores all in-app notifications
   - Tracks read/unread status
   - Supports priority levels (LOW, MEDIUM, HIGH, URGENT)
   - Metadata for request context

2. **Notification Controller** (`controllers/notification.controller.ts`)
   - GET `/api/v1/notifications` - List user's notifications with pagination
   - GET `/api/v1/notifications/unread-count` - Get unread count
   - PATCH `/api/v1/notifications/:notificationId/read` - Mark as read
   - POST `/api/v1/notifications/mark-all-read` - Mark all as read
   - DELETE `/api/v1/notifications/:notificationId` - Delete notification

3. **Enhanced Notification Service** (`services/notification.service.ts`)
   - Saves notifications to database (for in-app display)
   - Emits real-time socket.io events
   - Sends push notifications (if subscribed)
   - All in one call: `notificationService.sendToUsers()`

4. **Socket.io Enhancement** (`realtime/socket.ts`)
   - Added `join:user` event for personal notification room
   - Added `emitToUser()` function for targeted notifications
   - Real-time delivery without page refresh
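
The `join:user` / `emitToUser()` pairing can be sketched as follows. The room-name format and function signatures are assumptions for illustration, not the actual `realtime/socket.ts` code.

```javascript
// Each socket joins a personal room when it sends the 'join:user' event;
// emitToUser then targets only sockets in that room.
function userRoom(userId) {
  return `user:${userId}`;
}

function emitToUser(io, userId, event, payload) {
  // socket.io's io.to(room).emit(...) delivers only to members of that room
  io.to(userRoom(userId)).emit(event, payload);
}
```

With this shape, delivering `notification:new` to one user is a single call, regardless of how many sockets (tabs, devices) that user has open.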

### Frontend:
1. **Notification API Service** (`services/notificationApi.ts`)
   - Complete API client for all notification endpoints

2. **PageLayout Integration** (`components/layout/PageLayout/PageLayout.tsx`)
   - Real-time notification bell with unread count badge
   - Dropdown showing latest 10 notifications
   - Click to mark as read and navigate to request
   - "Mark all as read" functionality
   - Auto-refreshes when new notifications arrive
   - Works even if browser push notifications disabled

3. **Data Freshness** (MyRequests, OpenRequests, ClosedRequests)
   - Fixed stale data after DB deletion
   - Always shows fresh data from API

## 📦 Database Setup

### Step 1: Run Migration

Execute this SQL in your PostgreSQL database:

```bash
psql -U postgres -d re_workflow_db -f migrations/create_notifications_table.sql
```

OR run manually in pgAdmin/SQL tool:

```sql
-- See: migrations/create_notifications_table.sql
```

### Step 2: Verify Table Created

```sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name = 'notifications';
```

## 🚀 How It Works

### 1. When an Event Occurs (e.g., Request Assigned):

**Backend:**
```typescript
await notificationService.sendToUsers(
  [approverId],
  {
    title: 'New request assigned',
    body: 'Marketing Campaign Approval - REQ-2025-12345',
    requestId: workflowId,
    requestNumber: 'REQ-2025-12345',
    url: `/request/REQ-2025-12345`,
    type: 'assignment',
    priority: 'HIGH',
    actionRequired: true
  }
);
```

This automatically:
- ✅ Saves notification to `notifications` table
- ✅ Emits `notification:new` socket event to user
- ✅ Sends browser push notification (if enabled)

### 2. Frontend Receives Notification:

**PageLayout** automatically:
- ✅ Receives socket event in real-time
- ✅ Updates notification count badge
- ✅ Adds to notification dropdown
- ✅ Shows blue dot for unread
- ✅ User clicks → marks as read → navigates to request

## 📌 Notification Events (Major)

Based on your requirement, here are the key events that trigger notifications:

| Event | Type | Sent To | Priority |
|-------|------|---------|----------|
| Request Created | `created` | Initiator | MEDIUM |
| Request Assigned | `assignment` | Approver | HIGH |
| Approval Given | `approved` | Initiator | HIGH |
| Request Rejected | `rejected` | Initiator | URGENT |
| TAT Alert (50%) | `tat_alert` | Approver | MEDIUM |
| TAT Alert (75%) | `tat_alert` | Approver | HIGH |
| TAT Breached | `tat_breach` | Approver + Initiator | URGENT |
| Work Note Mention | `mention` | Tagged Users | MEDIUM |
| New Comment | `comment` | Participants | LOW |
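
The table collapses into a lookup plus a special case for TAT alerts, whose priority escalates with elapsed time. This is an illustrative sketch of the mapping, not the service's actual code.

```javascript
// Default priority per notification type, from the table above.
const EVENT_PRIORITY = {
  created: 'MEDIUM',
  assignment: 'HIGH',
  approved: 'HIGH',
  rejected: 'URGENT',
  tat_breach: 'URGENT',
  mention: 'MEDIUM',
  comment: 'LOW',
};

// TAT alerts escalate: 50% elapsed → MEDIUM, 75%+ elapsed → HIGH.
function tatAlertPriority(elapsedPct) {
  return elapsedPct >= 75 ? 'HIGH' : 'MEDIUM';
}
```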

## 🔧 Configuration

### Backend (.env):
```env
# Already configured - no changes needed
VAPID_PUBLIC_KEY=your_vapid_public_key
VAPID_PRIVATE_KEY=your_vapid_private_key
```

### Frontend (.env):
```env
# Already configured
VITE_API_BASE_URL=http://localhost:5000/api/v1
```

## ✅ Testing

### 1. Test Basic Notification:
```bash
# Create a workflow and assign to an approver
# Check approver's notification bell - should show count
```

### 2. Test Real-Time Delivery:
```bash
# Have 2 users logged in (different browsers)
# User A creates request, assigns to User B
# User B should see notification appear immediately (no refresh needed)
```

### 3. Test TAT Notifications:
```bash
# Create request with 1-hour TAT
# Wait for threshold notifications (50%, 75%, 100%)
# Approver should receive in-app notifications
```
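
The threshold logic this test exercises can be sketched as below. This is an assumption about how thresholds are evaluated, not the scheduler's actual code.

```javascript
// Returns which TAT thresholds (percent of allowed time) have been crossed.
function crossedThresholds(elapsedMs, tatMs, thresholds = [50, 75, 100]) {
  const pct = (elapsedMs / tatMs) * 100;
  return thresholds.filter((t) => pct >= t);
}
```

For a 1-hour TAT, 30 minutes elapsed crosses only the 50% mark; the full hour crosses all three.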

### 4. Test Work Note Mentions:
```bash
# Add work note with @mention
# Tagged user should receive notification
```

## 🎨 UI Features

- **Unread Badge**: Shows count (1-9, or "9+" for 10+)
- **Blue Dot**: Indicates unread notifications
- **Blue Background**: Highlights unread items
- **Time Ago**: "5 minutes ago", "2 hours ago", etc.
- **Click to Navigate**: Automatically opens the related request
- **Mark All Read**: Single click to clear all unread
- **Scrollable**: Shows latest 10, with "View all" link
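
The badge rule above (counts 1-9, then "9+") reduces to a tiny helper (illustrative; the component may implement it inline):

```javascript
// Badge text for the unread counter: empty, "1"–"9", or "9+".
function badgeLabel(unread) {
  if (unread <= 0) return '';
  return unread > 9 ? '9+' : String(unread);
}
```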

## 📱 Fallback for Disabled Push Notifications

Even if a user denies browser push notifications:
- ✅ In-app notifications ALWAYS work
- ✅ Notifications saved to database
- ✅ Real-time delivery via socket.io
- ✅ No permission required
- ✅ Works on all browsers

## 🔍 Debug Endpoints

```bash
# Get notifications for current user
GET /api/v1/notifications?page=1&limit=10

# Get only unread
GET /api/v1/notifications?unreadOnly=true

# Get unread count
GET /api/v1/notifications/unread-count
```
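
A helper sketch for building the query strings above (hypothetical; the frontend's `notificationApi.ts` may construct URLs differently):

```javascript
// Builds the list-endpoint URL with pagination and the unreadOnly filter.
function notificationsQuery({ page = 1, limit = 10, unreadOnly = false } = {}) {
  const params = new URLSearchParams({ page: String(page), limit: String(limit) });
  if (unreadOnly) params.set('unreadOnly', 'true');
  return `/api/v1/notifications?${params.toString()}`;
}
```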

## 🎉 Benefits

1. **No Browser Permission Needed** - Always works, unlike push notifications
2. **Real-Time Updates** - Instant delivery via socket.io
3. **Persistent** - Saved in database, available after login
4. **Actionable** - Click to navigate to related request
5. **User-Friendly** - Clean UI integrated into header
6. **Complete Tracking** - Know what was sent via which channel

## 🔥 Next Steps (Optional)

1. **Email Integration**: Send email for URGENT priority notifications
2. **SMS Integration**: Critical alerts via SMS
3. **Notification Preferences**: Let users choose which events to receive
4. **Notification History Page**: Full-page view with filters
5. **Sound Alerts**: Play sound when new notification arrives
6. **Desktop Notifications**: Browser native notifications (if permitted)

---

**✅ In-App Notifications are now fully operational!**

Users will receive instant notifications for all major workflow events, even without browser push permissions enabled.

131
package-lock.json
generated
@ -8,7 +8,9 @@
    "name": "re-workflow-backend",
    "version": "1.0.0",
    "dependencies": {
      "@anthropic-ai/sdk": "^0.68.0",
      "@google-cloud/storage": "^7.14.0",
      "@google/generative-ai": "^0.24.1",
      "@types/uuid": "^8.3.4",
      "axios": "^1.7.9",
      "bcryptjs": "^2.4.3",
@ -25,6 +27,7 @@
      "morgan": "^1.10.0",
      "multer": "^1.4.5-lts.1",
      "node-cron": "^3.0.3",
      "openai": "^6.8.1",
      "passport": "^0.7.0",
      "passport-jwt": "^4.0.1",
      "pg": "^8.13.1",
@ -69,6 +72,26 @@
        "npm": ">=10.0.0"
      }
    },
    "node_modules/@anthropic-ai/sdk": {
      "version": "0.68.0",
      "resolved": "https://registry.npmjs.org/@anthropic-ai/sdk/-/sdk-0.68.0.tgz",
      "integrity": "sha512-SMYAmbbiprG8k1EjEPMTwaTqssDT7Ae+jxcR5kWXiqTlbwMR2AthXtscEVWOHkRfyAV5+y3PFYTJRNa3OJWIEw==",
      "license": "MIT",
      "dependencies": {
        "json-schema-to-ts": "^3.1.1"
      },
      "bin": {
        "anthropic-ai-sdk": "bin/cli"
      },
      "peerDependencies": {
        "zod": "^3.25.0 || ^4.0.0"
      },
      "peerDependenciesMeta": {
        "zod": {
          "optional": true
        }
      }
    },
    "node_modules/@babel/code-frame": {
      "version": "7.27.1",
      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.27.1.tgz",
@ -530,6 +553,15 @@
        "@babel/core": "^7.0.0-0"
      }
    },
    "node_modules/@babel/runtime": {
      "version": "7.28.4",
      "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.4.tgz",
      "integrity": "sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==",
      "license": "MIT",
      "engines": {
        "node": ">=6.9.0"
      }
    },
    "node_modules/@babel/template": {
      "version": "7.27.2",
      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.27.2.tgz",
@ -875,6 +907,15 @@
        "node": ">=14"
      }
    },
    "node_modules/@google/generative-ai": {
      "version": "0.24.1",
      "resolved": "https://registry.npmjs.org/@google/generative-ai/-/generative-ai-0.24.1.tgz",
      "integrity": "sha512-MqO+MLfM6kjxcKoy0p1wRzG3b4ZZXtPI+z2IE26UogS2Cm/XHO+7gGRBh6gcJsOiIVoH93UwKvW4HdgiOZCy9Q==",
      "license": "Apache-2.0",
      "engines": {
        "node": ">=18.0.0"
      }
    },
    "node_modules/@humanfs/core": {
      "version": "0.19.1",
      "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
|
||||
@ -3975,6 +4016,27 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/engine.io/node_modules/ws": {
|
||||
"version": "8.17.1",
|
||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
||||
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=10.0.0"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"bufferutil": "^4.0.1",
|
||||
"utf-8-validate": ">=5.0.2"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"bufferutil": {
|
||||
"optional": true
|
||||
},
|
||||
"utf-8-validate": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/error-ex": {
|
||||
"version": "1.3.4",
|
||||
"resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.4.tgz",
|
||||
@ -6264,6 +6326,19 @@
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/json-schema-to-ts": {
|
||||
"version": "3.1.1",
|
||||
"resolved": "https://registry.npmjs.org/json-schema-to-ts/-/json-schema-to-ts-3.1.1.tgz",
|
||||
"integrity": "sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@babel/runtime": "^7.18.3",
|
||||
"ts-algebra": "^2.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=16"
|
||||
}
|
||||
},
|
||||
"node_modules/json-schema-traverse": {
|
||||
"version": "0.4.1",
|
||||
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
|
||||
@ -7148,6 +7223,27 @@
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/openai": {
|
||||
"version": "6.8.1",
|
||||
"resolved": "https://registry.npmjs.org/openai/-/openai-6.8.1.tgz",
|
||||
"integrity": "sha512-ACifslrVgf+maMz9vqwMP4+v9qvx5Yzssydizks8n+YUJ6YwUoxj51sKRQ8HYMfR6wgKLSIlaI108ZwCk+8yig==",
|
||||
"license": "Apache-2.0",
|
||||
"bin": {
|
||||
"openai": "bin/cli"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"ws": "^8.18.0",
|
||||
"zod": "^3.25 || ^4.0"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"ws": {
|
||||
"optional": true
|
||||
},
|
||||
"zod": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/optionator": {
|
||||
"version": "0.9.4",
|
||||
"resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
|
||||
@ -8443,6 +8539,27 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/socket.io-adapter/node_modules/ws": {
|
||||
"version": "8.17.1",
|
||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
||||
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=10.0.0"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"bufferutil": "^4.0.1",
|
||||
"utf-8-validate": ">=5.0.2"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"bufferutil": {
|
||||
"optional": true
|
||||
},
|
||||
"utf-8-validate": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/socket.io-parser": {
|
||||
"version": "4.2.4",
|
||||
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.4.tgz",
|
||||
@ -8972,6 +9089,12 @@
|
||||
"node": ">= 14.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/ts-algebra": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/ts-algebra/-/ts-algebra-2.0.0.tgz",
|
||||
"integrity": "sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/ts-api-utils": {
|
||||
"version": "2.1.0",
|
||||
"resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.1.0.tgz",
|
||||
@ -9627,10 +9750,12 @@
|
||||
}
|
||||
},
|
||||
"node_modules/ws": {
|
||||
"version": "8.17.1",
|
||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
|
||||
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
|
||||
"version": "8.18.3",
|
||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.18.3.tgz",
|
||||
"integrity": "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==",
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">=10.0.0"
|
||||
},
|
||||
|
||||
```diff
@@ -25,7 +25,9 @@
     "seed:config": "ts-node -r tsconfig-paths/register src/scripts/seed-admin-config.ts"
   },
   "dependencies": {
+    "@anthropic-ai/sdk": "^0.68.0",
     "@google-cloud/storage": "^7.14.0",
+    "@google/generative-ai": "^0.24.1",
     "@types/uuid": "^8.3.4",
     "axios": "^1.7.9",
     "bcryptjs": "^2.4.3",
@@ -42,6 +44,7 @@
     "morgan": "^1.10.0",
     "multer": "^1.4.5-lts.1",
     "node-cron": "^3.0.3",
+    "openai": "^6.8.1",
     "passport": "^0.7.0",
     "passport-jwt": "^4.0.1",
     "pg": "^8.13.1",
```
```diff
@@ -111,8 +111,9 @@ export const SYSTEM_CONFIG = {
  * Get configuration for frontend consumption
  * Returns only non-sensitive configuration values
  */
-export function getPublicConfig() {
-  return {
+export async function getPublicConfig() {
+  // Get base configuration
+  const baseConfig = {
     appName: SYSTEM_CONFIG.APP_NAME,
     appVersion: SYSTEM_CONFIG.APP_VERSION,
     workingHours: SYSTEM_CONFIG.WORKING_HOURS,
@@ -141,8 +142,30 @@ export function getPublicConfig() {
       enableMentions: SYSTEM_CONFIG.WORK_NOTES.ENABLE_MENTIONS,
     },
     features: SYSTEM_CONFIG.FEATURES,
-    ui: SYSTEM_CONFIG.UI,
+    ui: SYSTEM_CONFIG.UI
   };
+
+  // Try to get AI service status (gracefully handle if not available)
+  try {
+    const { aiService } = require('../services/ai.service');
+
+    return {
+      ...baseConfig,
+      ai: {
+        enabled: aiService.isAvailable(),
+        provider: aiService.getProviderName()
+      }
+    };
+  } catch (error) {
+    // AI service not available - return config without AI info
+    return {
+      ...baseConfig,
+      ai: {
+        enabled: false,
+        provider: 'None'
+      }
+    };
+  }
 }

 /**
```
```diff
@@ -367,6 +367,18 @@ export const updateConfiguration = async (req: Request, res: Response): Promise<
     if (workingHoursKeys.includes(configKey)) {
       await clearWorkingHoursCache();
       logger.info(`[Admin] Working hours configuration '${configKey}' updated - cache cleared and reloaded`);
     }

+    // If AI config was updated, reinitialize AI service
+    const aiConfigKeys = ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED'];
+    if (aiConfigKeys.includes(configKey)) {
+      try {
+        const { aiService } = require('../services/ai.service');
+        await aiService.reinitialize();
+        logger.info(`[Admin] AI configuration '${configKey}' updated - AI service reinitialized with ${aiService.getProviderName()}`);
+      } catch (error) {
+        logger.error(`[Admin] Failed to reinitialize AI service:`, error);
+      }
+    } else {
       logger.info(`[Admin] Configuration '${configKey}' updated and cache cleared`);
+    }
```
src/controllers/conclusion.controller.ts (new file, 374 lines)

```typescript
import { Request, Response } from 'express';
import { WorkflowRequest, ApprovalLevel, WorkNote, Document, Activity, ConclusionRemark } from '@models/index';
import { aiService } from '@services/ai.service';
import { activityService } from '@services/activity.service';
import logger from '@utils/logger';

export class ConclusionController {
  /**
   * Generate AI conclusion remark for a request
   * POST /api/v1/conclusions/:requestId/generate
   */
  async generateConclusion(req: Request, res: Response) {
    try {
      const { requestId } = req.params;
      const userId = (req as any).user?.userId;

      // Fetch request with all related data
      const request = await WorkflowRequest.findOne({
        where: { requestId },
        include: [
          { association: 'initiator', attributes: ['userId', 'displayName', 'email'] }
        ]
      });

      if (!request) {
        return res.status(404).json({ error: 'Request not found' });
      }

      // Check if user is the initiator
      if ((request as any).initiatorId !== userId) {
        return res.status(403).json({ error: 'Only the initiator can generate conclusion remarks' });
      }

      // Check if request is approved
      if ((request as any).status !== 'APPROVED') {
        return res.status(400).json({ error: 'Conclusion can only be generated for approved requests' });
      }

      // Check if AI service is available
      if (!aiService.isAvailable()) {
        logger.warn(`[Conclusion] AI service unavailable for request ${requestId}`);
        return res.status(503).json({
          error: 'AI service not available',
          message: 'AI features are currently unavailable. Please configure an AI provider (Claude, OpenAI, or Gemini) in the admin panel, or write the conclusion manually.',
          canContinueManually: true
        });
      }

      // Gather context for AI generation
      const approvalLevels = await ApprovalLevel.findAll({
        where: { requestId },
        order: [['levelNumber', 'ASC']]
      });

      const workNotes = await WorkNote.findAll({
        where: { requestId },
        order: [['createdAt', 'ASC']],
        limit: 20 // Last 20 work notes
      });

      const documents = await Document.findAll({
        where: { requestId },
        order: [['uploadedAt', 'DESC']]
      });

      const activities = await Activity.findAll({
        where: { requestId },
        order: [['createdAt', 'ASC']],
        limit: 50 // Last 50 activities
      });

      // Build context object
      const context = {
        requestTitle: (request as any).title,
        requestDescription: (request as any).description,
        requestNumber: (request as any).requestNumber,
        priority: (request as any).priority,
        approvalFlow: approvalLevels.map((level: any) => ({
          levelNumber: level.levelNumber,
          approverName: level.approverName,
          status: level.status,
          comments: level.comments,
          actionDate: level.actionDate,
          tatHours: Number(level.tatHours || 0),
          elapsedHours: Number(level.elapsedHours || 0)
        })),
        workNotes: workNotes.map((note: any) => ({
          userName: note.userName,
          message: note.message,
          createdAt: note.createdAt
        })),
        documents: documents.map((doc: any) => ({
          fileName: doc.originalFileName || doc.fileName,
          uploadedBy: doc.uploadedBy,
          uploadedAt: doc.uploadedAt
        })),
        activities: activities.map((activity: any) => ({
          type: activity.activityType,
          action: activity.activityDescription,
          details: activity.activityDescription,
          timestamp: activity.createdAt
        }))
      };

      logger.info(`[Conclusion] Generating AI remark for request ${requestId}...`);

      // Generate AI conclusion
      const aiResult = await aiService.generateConclusionRemark(context);

      // Check if conclusion already exists
      let conclusionInstance = await ConclusionRemark.findOne({ where: { requestId } });

      const conclusionData = {
        aiGeneratedRemark: aiResult.remark,
        aiModelUsed: aiResult.provider,
        aiConfidenceScore: aiResult.confidence,
        approvalSummary: {
          totalLevels: approvalLevels.length,
          approvedLevels: approvalLevels.filter((l: any) => l.status === 'APPROVED').length,
          averageTatUsage: approvalLevels.reduce((sum: number, l: any) =>
            sum + Number(l.tatPercentageUsed || 0), 0) / (approvalLevels.length || 1)
        },
        documentSummary: {
          totalDocuments: documents.length,
          documentNames: documents.map((d: any) => d.originalFileName || d.fileName)
        },
        keyDiscussionPoints: aiResult.keyPoints,
        generatedAt: new Date()
      };

      if (conclusionInstance) {
        // Update existing conclusion (allow regeneration)
        await conclusionInstance.update(conclusionData as any);
        logger.info(`[Conclusion] ✅ AI conclusion regenerated for request ${requestId}`);
      } else {
        // Create new conclusion
        conclusionInstance = await ConclusionRemark.create({
          requestId,
          ...conclusionData,
          finalRemark: null,
          editedBy: null,
          isEdited: false,
          editCount: 0,
          finalizedAt: null
        } as any);
        logger.info(`[Conclusion] ✅ AI conclusion generated for request ${requestId}`);
      }

      // Log activity
      await activityService.log({
        requestId,
        type: 'ai_conclusion_generated',
        user: { userId, name: (request as any).initiator?.displayName || 'Initiator' },
        timestamp: new Date().toISOString(),
        action: 'AI Conclusion Generated',
        details: 'AI-powered conclusion remark generated for review'
      });

      return res.status(200).json({
        message: 'Conclusion generated successfully',
        data: {
          conclusionId: (conclusionInstance as any).conclusionId,
          aiGeneratedRemark: aiResult.remark,
          keyDiscussionPoints: aiResult.keyPoints,
          confidence: aiResult.confidence,
          provider: aiResult.provider,
          generatedAt: new Date()
        }
      });
    } catch (error: any) {
      logger.error('[Conclusion] Error generating conclusion:', error);

      // Provide helpful error messages
      const isConfigError = error.message?.includes('not configured') ||
        error.message?.includes('not available') ||
        error.message?.includes('not initialized');

      return res.status(isConfigError ? 503 : 500).json({
        error: isConfigError ? 'AI service not configured' : 'Failed to generate conclusion',
        message: error.message || 'An unexpected error occurred',
        canContinueManually: true // User can still write manual conclusion
      });
    }
  }

  /**
   * Update conclusion remark (edit by initiator)
   * PUT /api/v1/conclusions/:requestId
   */
  async updateConclusion(req: Request, res: Response) {
    try {
      const { requestId } = req.params;
      const { finalRemark } = req.body;
      const userId = (req as any).user?.userId;

      if (!finalRemark || typeof finalRemark !== 'string') {
        return res.status(400).json({ error: 'Final remark is required' });
      }

      // Fetch request
      const request = await WorkflowRequest.findOne({ where: { requestId } });
      if (!request) {
        return res.status(404).json({ error: 'Request not found' });
      }

      // Check if user is the initiator
      if ((request as any).initiatorId !== userId) {
        return res.status(403).json({ error: 'Only the initiator can update conclusion remarks' });
      }

      // Find conclusion
      const conclusion = await ConclusionRemark.findOne({ where: { requestId } });
      if (!conclusion) {
        return res.status(404).json({ error: 'Conclusion not found. Generate it first.' });
      }

      // Update conclusion
      const wasEdited = (conclusion as any).aiGeneratedRemark !== finalRemark;

      await conclusion.update({
        finalRemark: finalRemark,
        editedBy: userId,
        isEdited: wasEdited,
        editCount: wasEdited ? (conclusion as any).editCount + 1 : (conclusion as any).editCount
      } as any);

      logger.info(`[Conclusion] Updated conclusion for request ${requestId} (edited: ${wasEdited})`);

      return res.status(200).json({
        message: 'Conclusion updated successfully',
        data: conclusion
      });
    } catch (error: any) {
      logger.error('[Conclusion] Error updating conclusion:', error);
      return res.status(500).json({ error: 'Failed to update conclusion' });
    }
  }

  /**
   * Finalize conclusion and close request
   * POST /api/v1/conclusions/:requestId/finalize
   */
  async finalizeConclusion(req: Request, res: Response) {
    try {
      const { requestId } = req.params;
      const { finalRemark } = req.body;
      const userId = (req as any).user?.userId;

      if (!finalRemark || typeof finalRemark !== 'string') {
        return res.status(400).json({ error: 'Final remark is required' });
      }

      // Fetch request
      const request = await WorkflowRequest.findOne({
        where: { requestId },
        include: [
          { association: 'initiator', attributes: ['userId', 'displayName', 'email'] }
        ]
      });

      if (!request) {
        return res.status(404).json({ error: 'Request not found' });
      }

      // Check if user is the initiator
      if ((request as any).initiatorId !== userId) {
        return res.status(403).json({ error: 'Only the initiator can finalize conclusion remarks' });
      }

      // Check if request is approved
      if ((request as any).status !== 'APPROVED') {
        return res.status(400).json({ error: 'Only approved requests can be closed' });
      }

      // Find or create conclusion
      let conclusion = await ConclusionRemark.findOne({ where: { requestId } });

      if (!conclusion) {
        // Create if doesn't exist (manual conclusion without AI)
        conclusion = await ConclusionRemark.create({
          requestId,
          aiGeneratedRemark: null,
          aiModelUsed: null,
          aiConfidenceScore: null,
          finalRemark: finalRemark,
          editedBy: userId,
          isEdited: false,
          editCount: 0,
          approvalSummary: {},
          documentSummary: {},
          keyDiscussionPoints: [],
          generatedAt: null,
          finalizedAt: new Date()
        } as any);
      } else {
        // Update existing conclusion
        const wasEdited = (conclusion as any).aiGeneratedRemark !== finalRemark;

        await conclusion.update({
          finalRemark: finalRemark,
          editedBy: userId,
          isEdited: wasEdited,
          editCount: wasEdited ? (conclusion as any).editCount + 1 : (conclusion as any).editCount,
          finalizedAt: new Date()
        } as any);
      }

      // Update request status to CLOSED
      await request.update({
        status: 'CLOSED',
        conclusionRemark: finalRemark,
        closureDate: new Date()
      } as any);

      logger.info(`[Conclusion] ✅ Request ${requestId} finalized and closed`);

      // Log activity
      await activityService.log({
        requestId,
        type: 'closed',
        user: { userId, name: (request as any).initiator?.displayName || 'Initiator' },
        timestamp: new Date().toISOString(),
        action: 'Request Closed',
        details: `Request closed with conclusion remark by ${(request as any).initiator?.displayName}`
      });

      return res.status(200).json({
        message: 'Request finalized and closed successfully',
        data: {
          conclusionId: (conclusion as any).conclusionId,
          requestNumber: (request as any).requestNumber,
          status: 'CLOSED',
          finalRemark: finalRemark,
          finalizedAt: (conclusion as any).finalizedAt
        }
      });
    } catch (error: any) {
      logger.error('[Conclusion] Error finalizing conclusion:', error);
      return res.status(500).json({ error: 'Failed to finalize conclusion' });
    }
  }

  /**
   * Get conclusion for a request
   * GET /api/v1/conclusions/:requestId
   */
  async getConclusion(req: Request, res: Response) {
    try {
      const { requestId } = req.params;

      const conclusion = await ConclusionRemark.findOne({
        where: { requestId },
        include: [
          { association: 'editor', attributes: ['userId', 'displayName', 'email'] }
        ]
      });

      if (!conclusion) {
        return res.status(404).json({ error: 'Conclusion not found' });
      }

      return res.status(200).json({
        message: 'Conclusion retrieved successfully',
        data: conclusion
      });
    } catch (error: any) {
      logger.error('[Conclusion] Error getting conclusion:', error);
      return res.status(500).json({ error: 'Failed to get conclusion' });
    }
  }
}

export const conclusionController = new ConclusionController();
```
src/controllers/notification.controller.ts (new file, 176 lines)

```typescript
import { Request, Response } from 'express';
import { Notification } from '@models/Notification';
import { Op } from 'sequelize';
import logger from '@utils/logger';

export class NotificationController {
  /**
   * Get user's notifications with pagination
   */
  async getUserNotifications(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const { page = 1, limit = 20, unreadOnly = false } = req.query;

      if (!userId) {
        res.status(401).json({ success: false, message: 'Unauthorized' });
        return;
      }

      const where: any = { userId };
      if (unreadOnly === 'true') {
        where.isRead = false;
      }

      const offset = (Number(page) - 1) * Number(limit);

      const { rows, count } = await Notification.findAndCountAll({
        where,
        order: [['createdAt', 'DESC']],
        limit: Number(limit),
        offset
      });

      res.json({
        success: true,
        data: {
          notifications: rows,
          pagination: {
            page: Number(page),
            limit: Number(limit),
            total: count,
            totalPages: Math.ceil(count / Number(limit))
          },
          unreadCount: unreadOnly === 'true' ? count : await Notification.count({ where: { userId, isRead: false } })
        }
      });
    } catch (error: any) {
      logger.error('[Notification Controller] Error fetching notifications:', error);
      res.status(500).json({ success: false, message: error.message });
    }
  }

  /**
   * Get unread notification count
   */
  async getUnreadCount(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;

      if (!userId) {
        res.status(401).json({ success: false, message: 'Unauthorized' });
        return;
      }

      const count = await Notification.count({
        where: { userId, isRead: false }
      });

      res.json({
        success: true,
        data: { unreadCount: count }
      });
    } catch (error: any) {
      logger.error('[Notification Controller] Error fetching unread count:', error);
      res.status(500).json({ success: false, message: error.message });
    }
  }

  /**
   * Mark notification as read
   */
  async markAsRead(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const { notificationId } = req.params;

      if (!userId) {
        res.status(401).json({ success: false, message: 'Unauthorized' });
        return;
      }

      const notification = await Notification.findOne({
        where: { notificationId, userId }
      });

      if (!notification) {
        res.status(404).json({ success: false, message: 'Notification not found' });
        return;
      }

      await notification.update({
        isRead: true,
        readAt: new Date()
      });

      res.json({
        success: true,
        message: 'Notification marked as read',
        data: { notification }
      });
    } catch (error: any) {
      logger.error('[Notification Controller] Error marking notification as read:', error);
      res.status(500).json({ success: false, message: error.message });
    }
  }

  /**
   * Mark all notifications as read
   */
  async markAllAsRead(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;

      if (!userId) {
        res.status(401).json({ success: false, message: 'Unauthorized' });
        return;
      }

      await Notification.update(
        { isRead: true, readAt: new Date() },
        { where: { userId, isRead: false } }
      );

      res.json({
        success: true,
        message: 'All notifications marked as read'
      });
    } catch (error: any) {
      logger.error('[Notification Controller] Error marking all as read:', error);
      res.status(500).json({ success: false, message: error.message });
    }
  }

  /**
   * Delete notification
   */
  async deleteNotification(req: Request, res: Response): Promise<void> {
    try {
      const userId = (req as any).user?.userId;
      const { notificationId } = req.params;

      if (!userId) {
        res.status(401).json({ success: false, message: 'Unauthorized' });
        return;
      }

      const deleted = await Notification.destroy({
        where: { notificationId, userId }
      });

      if (deleted === 0) {
        res.status(404).json({ success: false, message: 'Notification not found' });
        return;
      }

      res.json({
        success: true,
        message: 'Notification deleted'
      });
    } catch (error: any) {
      logger.error('[Notification Controller] Error deleting notification:', error);
      res.status(500).json({ success: false, message: error.message });
    }
  }
}
```
124
src/migrations/20251111-add-ai-provider-configs.ts
Normal file
124
src/migrations/20251111-add-ai-provider-configs.ts
Normal file
@ -0,0 +1,124 @@
|
||||
import { QueryInterface, DataTypes } from 'sequelize';
|
||||
import { v4 as uuidv4 } from 'uuid';
|
||||
|
||||
/**
|
||||
* Migration to add AI provider configurations to admin_configurations
|
||||
* Allows admins to configure AI provider and API keys through the UI
|
||||
*/
|
||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||
const now = new Date();
|
||||
|
||||
await queryInterface.bulkInsert('admin_configurations', [
|
||||
{
|
||||
config_id: uuidv4(),
|
||||
config_key: 'AI_PROVIDER',
|
||||
config_value: 'claude',
|
||||
value_type: 'STRING',
|
||||
config_category: 'AI_CONFIGURATION',
|
||||
description: 'Active AI provider for conclusion generation (claude, openai, or gemini)',
|
||||
is_editable: true,
|
||||
is_sensitive: false,
|
||||
default_value: 'claude',
|
||||
display_name: 'AI Provider',
|
||||
validation_rules: JSON.stringify({
|
||||
enum: ['claude', 'openai', 'gemini'],
|
||||
required: true
|
||||
}),
|
||||
ui_component: 'select',
|
||||
options: JSON.stringify(['claude', 'openai', 'gemini']),
|
||||
sort_order: 100,
|
||||
requires_restart: false,
|
||||
created_at: now,
|
||||
updated_at: now
|
||||
},
|
||||
{
|
||||
config_id: uuidv4(),
|
||||
config_key: 'CLAUDE_API_KEY',
|
||||
config_value: '',
|
||||
value_type: 'STRING',
|
||||
config_category: 'AI_CONFIGURATION',
|
||||
description: 'API key for Claude (Anthropic) - Get from console.anthropic.com',
|
||||
is_editable: true,
|
||||
is_sensitive: true,
|
||||
default_value: '',
|
||||
display_name: 'Claude API Key',
|
||||
validation_rules: JSON.stringify({
|
||||
pattern: '^sk-ant-',
|
||||
minLength: 40
|
||||
}),
|
||||
ui_component: 'input',
|
||||
sort_order: 101,
|
||||
requires_restart: false,
|
||||
created_at: now,
|
||||
updated_at: now
|
||||
},
|
||||
{
|
||||
config_id: uuidv4(),
|
||||
config_key: 'OPENAI_API_KEY',
|
||||
config_value: '',
|
||||
value_type: 'STRING',
|
||||
config_category: 'AI_CONFIGURATION',
|
||||
description: 'API key for OpenAI (GPT-4) - Get from platform.openai.com',
|
||||
is_editable: true,
|
||||
is_sensitive: true,
|
||||
default_value: '',
|
||||
display_name: 'OpenAI API Key',
|
||||
validation_rules: JSON.stringify({
|
||||
pattern: '^sk-',
|
||||
minLength: 40
|
||||
}),
|
||||
ui_component: 'input',
|
||||
sort_order: 102,
|
||||
requires_restart: false,
|
||||
created_at: now,
|
||||
updated_at: now
|
||||
},
|
||||
{
|
||||
config_id: uuidv4(),
|
||||
config_key: 'GEMINI_API_KEY',
|
||||
config_value: '',
|
||||
value_type: 'STRING',
|
||||
config_category: 'AI_CONFIGURATION',
|
||||
description: 'API key for Gemini (Google) - Get from ai.google.dev',
|
||||
is_editable: true,
|
||||
is_sensitive: true,
|
||||
default_value: '',
|
||||
display_name: 'Gemini API Key',
|
||||
validation_rules: JSON.stringify({
|
||||
minLength: 20
|
||||
}),
|
||||
ui_component: 'input',
|
||||
sort_order: 103,
|
||||
requires_restart: false,
|
||||
created_at: now,
|
||||
updated_at: now
|
||||
},
|
||||
{
|
||||
config_id: uuidv4(),
|
||||
config_key: 'AI_ENABLED',
|
||||
config_value: 'true',
|
||||
value_type: 'BOOLEAN',
|
||||
config_category: 'AI_CONFIGURATION',
|
||||
description: 'Enable/disable AI-powered conclusion generation feature',
|
||||
is_editable: true,
|
||||
is_sensitive: false,
|
||||
default_value: 'true',
|
||||
display_name: 'Enable AI Features',
|
||||
validation_rules: JSON.stringify({
|
||||
type: 'boolean'
|
||||
}),
|
||||
ui_component: 'toggle',
|
||||
sort_order: 104,
|
||||
requires_restart: false,
|
||||
created_at: now,
|
||||
updated_at: now
|
||||
}
|
||||
]);
|
||||
}
|
||||
|
||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||
await queryInterface.bulkDelete('admin_configurations', {
|
||||
config_key: ['AI_PROVIDER', 'CLAUDE_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY', 'AI_ENABLED']
|
||||
} as any);
|
||||
}
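
The seeded rows store `validation_rules` as a JSON string (e.g. `pattern: '^sk-'`, `minLength: 40` for the OpenAI key). As a minimal sketch of how an admin panel could apply those rules before saving a config value — `validateConfigValue` is a hypothetical helper, not part of this commit:

```javascript
// Apply a seeded validation_rules JSON string to a candidate config value.
// Hypothetical helper for illustration; not exported by the codebase.
function validateConfigValue(value, validationRulesJson) {
  const rules = JSON.parse(validationRulesJson || '{}');
  if (rules.pattern && !new RegExp(rules.pattern).test(value)) {
    return { valid: false, reason: `must match pattern ${rules.pattern}` };
  }
  if (rules.minLength && value.length < rules.minLength) {
    return { valid: false, reason: `must be at least ${rules.minLength} characters` };
  }
  return { valid: true };
}

// Same shape as the OPENAI_API_KEY row seeded above
const openAiRules = JSON.stringify({ pattern: '^sk-', minLength: 40 });
```

This keeps the validation data-driven: the admin panel only needs to parse the stored JSON rather than hard-code per-provider checks.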
|
||||
|
||||
109 src/migrations/20251111-create-conclusion-remarks.ts (Normal file)
@@ -0,0 +1,109 @@
|
||||
import { QueryInterface, DataTypes } from 'sequelize';
|
||||
|
||||
/**
|
||||
* Migration to create conclusion_remarks table
|
||||
* Stores AI-generated and finalized conclusion remarks for workflow requests
|
||||
*/
|
||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||
await queryInterface.createTable('conclusion_remarks', {
|
||||
conclusion_id: {
|
||||
type: DataTypes.UUID,
|
||||
defaultValue: DataTypes.UUIDV4,
|
||||
primaryKey: true,
|
||||
allowNull: false
|
||||
},
|
||||
request_id: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: false,
|
||||
references: {
|
||||
model: 'workflow_requests',
|
||||
key: 'request_id'
|
||||
},
|
||||
onUpdate: 'CASCADE',
|
||||
onDelete: 'CASCADE',
|
||||
unique: true // One conclusion per request
|
||||
},
|
||||
ai_generated_remark: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: true
|
||||
},
|
||||
ai_model_used: {
|
||||
type: DataTypes.STRING(100),
|
||||
allowNull: true
|
||||
},
|
||||
ai_confidence_score: {
|
||||
type: DataTypes.DECIMAL(5, 2),
|
||||
allowNull: true
|
||||
},
|
||||
final_remark: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: true
|
||||
},
|
||||
edited_by: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: true,
|
||||
references: {
|
||||
model: 'users',
|
||||
key: 'user_id'
|
||||
},
|
||||
onUpdate: 'CASCADE',
|
||||
onDelete: 'SET NULL'
|
||||
},
|
||||
is_edited: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
allowNull: false,
|
||||
defaultValue: false
|
||||
},
|
||||
edit_count: {
|
||||
type: DataTypes.INTEGER,
|
||||
allowNull: false,
|
||||
defaultValue: 0
|
||||
},
|
||||
approval_summary: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true
|
||||
},
|
||||
document_summary: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true
|
||||
},
|
||||
key_discussion_points: {
|
||||
type: DataTypes.ARRAY(DataTypes.TEXT),
|
||||
allowNull: false,
|
||||
defaultValue: []
|
||||
},
|
||||
generated_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true
|
||||
},
|
||||
finalized_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true
|
||||
},
|
||||
created_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW
|
||||
},
|
||||
updated_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW
|
||||
}
|
||||
});
|
||||
|
||||
// Add index on request_id for faster lookups
|
||||
await queryInterface.addIndex('conclusion_remarks', ['request_id'], {
|
||||
name: 'idx_conclusion_remarks_request_id'
|
||||
});
|
||||
|
||||
// Add index on finalized_at for KPI queries
|
||||
await queryInterface.addIndex('conclusion_remarks', ['finalized_at'], {
|
||||
name: 'idx_conclusion_remarks_finalized_at'
|
||||
});
|
||||
}
|
||||
|
||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||
await queryInterface.dropTable('conclusion_remarks');
|
||||
}
|
||||
|
||||
137 src/migrations/20251111-create-notifications.ts (Normal file)
@@ -0,0 +1,137 @@
|
||||
import { QueryInterface, DataTypes } from 'sequelize';
|
||||
|
||||
export async function up(queryInterface: QueryInterface): Promise<void> {
|
||||
// Create priority enum type
|
||||
await queryInterface.sequelize.query(`
|
||||
DO $$ BEGIN
|
||||
CREATE TYPE notification_priority_enum AS ENUM ('LOW', 'MEDIUM', 'HIGH', 'URGENT');
|
||||
EXCEPTION
|
||||
WHEN duplicate_object THEN null;
|
||||
END $$;
|
||||
`);
|
||||
|
||||
// Create notifications table
|
||||
await queryInterface.createTable('notifications', {
|
||||
notification_id: {
|
||||
type: DataTypes.UUID,
|
||||
defaultValue: DataTypes.UUIDV4,
|
||||
primaryKey: true
|
||||
},
|
||||
user_id: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: false,
|
||||
references: {
|
||||
model: 'users',
|
||||
key: 'user_id'
|
||||
},
|
||||
onUpdate: 'CASCADE',
|
||||
onDelete: 'CASCADE'
|
||||
},
|
||||
request_id: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: true,
|
||||
references: {
|
||||
model: 'workflow_requests',
|
||||
key: 'request_id'
|
||||
},
|
||||
onUpdate: 'CASCADE',
|
||||
onDelete: 'SET NULL'
|
||||
},
|
||||
notification_type: {
|
||||
type: DataTypes.STRING(50),
|
||||
allowNull: false
|
||||
},
|
||||
title: {
|
||||
type: DataTypes.STRING(255),
|
||||
allowNull: false
|
||||
},
|
||||
message: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: false
|
||||
},
|
||||
is_read: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
allowNull: false
|
||||
},
|
||||
priority: {
|
||||
type: 'notification_priority_enum',
|
||||
defaultValue: 'MEDIUM',
|
||||
allowNull: false
|
||||
},
|
||||
action_url: {
|
||||
type: DataTypes.STRING(500),
|
||||
allowNull: true
|
||||
},
|
||||
action_required: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
allowNull: false
|
||||
},
|
||||
metadata: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true,
|
||||
defaultValue: {}
|
||||
},
|
||||
sent_via: {
|
||||
type: DataTypes.ARRAY(DataTypes.STRING),
|
||||
defaultValue: [],
|
||||
allowNull: false
|
||||
},
|
||||
email_sent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
allowNull: false
|
||||
},
|
||||
sms_sent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
allowNull: false
|
||||
},
|
||||
push_sent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
allowNull: false
|
||||
},
|
||||
read_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true
|
||||
},
|
||||
expires_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true
|
||||
},
|
||||
created_at: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW
|
||||
}
|
||||
});
|
||||
|
||||
// Create indexes for better query performance
|
||||
await queryInterface.addIndex('notifications', ['user_id'], {
|
||||
name: 'idx_notifications_user_id'
|
||||
});
|
||||
|
||||
await queryInterface.addIndex('notifications', ['user_id', 'is_read'], {
|
||||
name: 'idx_notifications_user_unread'
|
||||
});
|
||||
|
||||
await queryInterface.addIndex('notifications', ['request_id'], {
|
||||
name: 'idx_notifications_request_id'
|
||||
});
|
||||
|
||||
await queryInterface.addIndex('notifications', ['created_at'], {
|
||||
name: 'idx_notifications_created_at'
|
||||
});
|
||||
|
||||
await queryInterface.addIndex('notifications', ['notification_type'], {
|
||||
name: 'idx_notifications_type'
|
||||
});
|
||||
}
|
||||
|
||||
export async function down(queryInterface: QueryInterface): Promise<void> {
|
||||
await queryInterface.dropTable('notifications');
|
||||
await queryInterface.sequelize.query('DROP TYPE IF EXISTS notification_priority_enum;');
|
||||
}
|
||||
|
||||
152 src/models/ConclusionRemark.ts (Normal file)
@@ -0,0 +1,152 @@
|
||||
import { DataTypes, Model, Optional } from 'sequelize';
|
||||
import { sequelize } from '../config/database';
|
||||
|
||||
interface ConclusionRemarkAttributes {
|
||||
conclusionId: string;
|
||||
requestId: string;
|
||||
aiGeneratedRemark: string | null;
|
||||
aiModelUsed: string | null;
|
||||
aiConfidenceScore: number | null;
|
||||
finalRemark: string | null;
|
||||
editedBy: string | null;
|
||||
isEdited: boolean;
|
||||
editCount: number;
|
||||
approvalSummary: any;
|
||||
documentSummary: any;
|
||||
keyDiscussionPoints: string[];
|
||||
generatedAt: Date | null;
|
||||
finalizedAt: Date | null;
|
||||
createdAt?: Date;
|
||||
updatedAt?: Date;
|
||||
}
|
||||
|
||||
interface ConclusionRemarkCreationAttributes
|
||||
extends Optional<ConclusionRemarkAttributes, 'conclusionId' | 'aiGeneratedRemark' | 'aiModelUsed' | 'aiConfidenceScore' | 'finalRemark' | 'editedBy' | 'isEdited' | 'editCount' | 'approvalSummary' | 'documentSummary' | 'keyDiscussionPoints' | 'generatedAt' | 'finalizedAt'> {}
|
||||
|
||||
class ConclusionRemark extends Model<ConclusionRemarkAttributes, ConclusionRemarkCreationAttributes>
|
||||
implements ConclusionRemarkAttributes {
|
||||
public conclusionId!: string;
|
||||
public requestId!: string;
|
||||
public aiGeneratedRemark!: string | null;
|
||||
public aiModelUsed!: string | null;
|
||||
public aiConfidenceScore!: number | null;
|
||||
public finalRemark!: string | null;
|
||||
public editedBy!: string | null;
|
||||
public isEdited!: boolean;
|
||||
public editCount!: number;
|
||||
public approvalSummary!: any;
|
||||
public documentSummary!: any;
|
||||
public keyDiscussionPoints!: string[];
|
||||
public generatedAt!: Date | null;
|
||||
public finalizedAt!: Date | null;
|
||||
public readonly createdAt!: Date;
|
||||
public readonly updatedAt!: Date;
|
||||
}
|
||||
|
||||
ConclusionRemark.init(
|
||||
{
|
||||
conclusionId: {
|
||||
type: DataTypes.UUID,
|
||||
defaultValue: DataTypes.UUIDV4,
|
||||
primaryKey: true,
|
||||
field: 'conclusion_id'
|
||||
},
|
||||
requestId: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: false,
|
||||
field: 'request_id',
|
||||
references: {
|
||||
model: 'workflow_requests',
|
||||
key: 'request_id'
|
||||
}
|
||||
},
|
||||
aiGeneratedRemark: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: true,
|
||||
field: 'ai_generated_remark'
|
||||
},
|
||||
aiModelUsed: {
|
||||
type: DataTypes.STRING(100),
|
||||
allowNull: true,
|
||||
field: 'ai_model_used'
|
||||
},
|
||||
aiConfidenceScore: {
|
||||
type: DataTypes.DECIMAL(5, 2),
|
||||
allowNull: true,
|
||||
field: 'ai_confidence_score'
|
||||
},
|
||||
finalRemark: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: true,
|
||||
field: 'final_remark'
|
||||
},
|
||||
editedBy: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: true,
|
||||
field: 'edited_by',
|
||||
references: {
|
||||
model: 'users',
|
||||
key: 'user_id'
|
||||
}
|
||||
},
|
||||
isEdited: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
allowNull: false,
|
||||
defaultValue: false,
|
||||
field: 'is_edited'
|
||||
},
|
||||
editCount: {
|
||||
type: DataTypes.INTEGER,
|
||||
allowNull: false,
|
||||
defaultValue: 0,
|
||||
field: 'edit_count'
|
||||
},
|
||||
approvalSummary: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true,
|
||||
field: 'approval_summary'
|
||||
},
|
||||
documentSummary: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true,
|
||||
field: 'document_summary'
|
||||
},
|
||||
keyDiscussionPoints: {
|
||||
type: DataTypes.ARRAY(DataTypes.TEXT),
|
||||
allowNull: false,
|
||||
defaultValue: [],
|
||||
field: 'key_discussion_points'
|
||||
},
|
||||
generatedAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true,
|
||||
field: 'generated_at'
|
||||
},
|
||||
finalizedAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true,
|
||||
field: 'finalized_at'
|
||||
},
|
||||
createdAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW,
|
||||
field: 'created_at'
|
||||
},
|
||||
updatedAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW,
|
||||
field: 'updated_at'
|
||||
}
|
||||
},
|
||||
{
|
||||
sequelize,
|
||||
tableName: 'conclusion_remarks',
|
||||
timestamps: true,
|
||||
underscored: true
|
||||
}
|
||||
);
|
||||
|
||||
export default ConclusionRemark;
|
||||
|
||||
156 src/models/Notification.ts (Normal file)
@@ -0,0 +1,156 @@
|
||||
import { DataTypes, Model, Optional } from 'sequelize';
|
||||
import { sequelize } from '../config/database';
|
||||
|
||||
interface NotificationAttributes {
|
||||
notificationId: string;
|
||||
userId: string;
|
||||
requestId?: string;
|
||||
notificationType: string;
|
||||
title: string;
|
||||
message: string;
|
||||
isRead: boolean;
|
||||
priority: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
|
||||
actionUrl?: string;
|
||||
actionRequired: boolean;
|
||||
metadata?: any;
|
||||
sentVia: string[];
|
||||
emailSent: boolean;
|
||||
smsSent: boolean;
|
||||
pushSent: boolean;
|
||||
readAt?: Date;
|
||||
expiresAt?: Date;
|
||||
createdAt: Date;
|
||||
}
|
||||
|
||||
interface NotificationCreationAttributes extends Optional<NotificationAttributes, 'notificationId' | 'isRead' | 'priority' | 'actionRequired' | 'sentVia' | 'emailSent' | 'smsSent' | 'pushSent' | 'createdAt'> {}
|
||||
|
||||
class Notification extends Model<NotificationAttributes, NotificationCreationAttributes> implements NotificationAttributes {
|
||||
public notificationId!: string;
|
||||
public userId!: string;
|
||||
public requestId?: string;
|
||||
public notificationType!: string;
|
||||
public title!: string;
|
||||
public message!: string;
|
||||
public isRead!: boolean;
|
||||
public priority!: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
|
||||
public actionUrl?: string;
|
||||
public actionRequired!: boolean;
|
||||
public metadata?: any;
|
||||
public sentVia!: string[];
|
||||
public emailSent!: boolean;
|
||||
public smsSent!: boolean;
|
||||
public pushSent!: boolean;
|
||||
public readAt?: Date;
|
||||
public expiresAt?: Date;
|
||||
public readonly createdAt!: Date;
|
||||
}
|
||||
|
||||
Notification.init(
|
||||
{
|
||||
notificationId: {
|
||||
type: DataTypes.UUID,
|
||||
defaultValue: DataTypes.UUIDV4,
|
||||
primaryKey: true,
|
||||
field: 'notification_id'
|
||||
},
|
||||
userId: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: false,
|
||||
field: 'user_id',
|
||||
references: {
|
||||
model: 'users',
|
||||
key: 'user_id'
|
||||
}
|
||||
},
|
||||
requestId: {
|
||||
type: DataTypes.UUID,
|
||||
allowNull: true,
|
||||
field: 'request_id',
|
||||
references: {
|
||||
model: 'workflow_requests',
|
||||
key: 'request_id'
|
||||
}
|
||||
},
|
||||
notificationType: {
|
||||
type: DataTypes.STRING(50),
|
||||
allowNull: false,
|
||||
field: 'notification_type'
|
||||
},
|
||||
title: {
|
||||
type: DataTypes.STRING(255),
|
||||
allowNull: false
|
||||
},
|
||||
message: {
|
||||
type: DataTypes.TEXT,
|
||||
allowNull: false
|
||||
},
|
||||
isRead: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
field: 'is_read'
|
||||
},
|
||||
priority: {
|
||||
type: DataTypes.ENUM('LOW', 'MEDIUM', 'HIGH', 'URGENT'),
|
||||
defaultValue: 'MEDIUM'
|
||||
},
|
||||
actionUrl: {
|
||||
type: DataTypes.STRING(500),
|
||||
allowNull: true,
|
||||
field: 'action_url'
|
||||
},
|
||||
actionRequired: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
field: 'action_required'
|
||||
},
|
||||
metadata: {
|
||||
type: DataTypes.JSONB,
|
||||
allowNull: true
|
||||
},
|
||||
sentVia: {
|
||||
type: DataTypes.ARRAY(DataTypes.STRING),
|
||||
defaultValue: [],
|
||||
field: 'sent_via'
|
||||
},
|
||||
emailSent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
field: 'email_sent'
|
||||
},
|
||||
smsSent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
field: 'sms_sent'
|
||||
},
|
||||
pushSent: {
|
||||
type: DataTypes.BOOLEAN,
|
||||
defaultValue: false,
|
||||
field: 'push_sent'
|
||||
},
|
||||
readAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true,
|
||||
field: 'read_at'
|
||||
},
|
||||
expiresAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: true,
|
||||
field: 'expires_at'
|
||||
},
|
||||
createdAt: {
|
||||
type: DataTypes.DATE,
|
||||
allowNull: false,
|
||||
defaultValue: DataTypes.NOW,
|
||||
field: 'created_at'
|
||||
}
|
||||
},
|
||||
{
|
||||
sequelize,
|
||||
tableName: 'notifications',
|
||||
timestamps: false,
|
||||
underscored: true
|
||||
}
|
||||
);
|
||||
|
||||
export { Notification };
|
||||
|
||||
@@ -12,6 +12,8 @@ import { WorkNote } from './WorkNote';
|
||||
import { WorkNoteAttachment } from './WorkNoteAttachment';
|
||||
import { TatAlert } from './TatAlert';
|
||||
import { Holiday } from './Holiday';
|
||||
import { Notification } from './Notification';
|
||||
import ConclusionRemark from './ConclusionRemark';
|
||||
|
||||
// Define associations
|
||||
const defineAssociations = () => {
|
||||
@@ -59,6 +61,23 @@ const defineAssociations = () => {
|
||||
sourceKey: 'requestId'
|
||||
});
|
||||
|
||||
WorkflowRequest.hasOne(ConclusionRemark, {
|
||||
as: 'conclusion',
|
||||
foreignKey: 'requestId',
|
||||
sourceKey: 'requestId'
|
||||
});
|
||||
|
||||
ConclusionRemark.belongsTo(WorkflowRequest, {
|
||||
foreignKey: 'requestId',
|
||||
targetKey: 'requestId'
|
||||
});
|
||||
|
||||
ConclusionRemark.belongsTo(User, {
|
||||
as: 'editor',
|
||||
foreignKey: 'editedBy',
|
||||
targetKey: 'userId'
|
||||
});
|
||||
|
||||
// Note: belongsTo associations are defined in individual model files to avoid duplicate alias conflicts
|
||||
// Only hasMany associations from WorkflowRequest are defined here since they're one-way
|
||||
};
|
||||
@@ -79,7 +98,9 @@ export {
|
||||
WorkNote,
|
||||
WorkNoteAttachment,
|
||||
TatAlert,
|
||||
Holiday
|
||||
Holiday,
|
||||
Notification,
|
||||
ConclusionRemark
|
||||
};
|
||||
|
||||
// Export default sequelize instance
|
||||
|
||||
@@ -1,33 +0,0 @@
|
||||
import { tatQueue } from './tatQueue';
import logger from '@utils/logger';

async function promoteDelayedJobs() {
  if (!tatQueue) return;

  try {
    const delayedJobs = await tatQueue.getJobs(['delayed']);
    const now = Date.now();
    let promotedCount = 0;

    for (const job of delayedJobs) {
      const readyTime = job.timestamp + (job.opts.delay || 0);
      const secondsUntil = Math.round((readyTime - now) / 1000);

      // Promote if ready within 15 seconds
      if (secondsUntil <= 15) {
        await job.promote();
        promotedCount++;
      }
    }

    if (promotedCount > 0) {
      logger.info(`[TAT] Promoted ${promotedCount} jobs`);
    }
  } catch (error) {
    logger.error('[TAT] Promoter error:', error);
  }
}

// Check every 3 seconds
setInterval(promoteDelayedJobs, 3000);
logger.info('[TAT] Delayed job promoter started');
|
||||
@@ -31,11 +31,12 @@ let sharedConnection: IORedis | null = null;
|
||||
// Create a SINGLE shared connection for both Queue and Worker
|
||||
export const getSharedRedisConnection = (): IORedis => {
|
||||
if (!sharedConnection) {
|
||||
logger.info(`[Redis] Creating shared connection to ${redisUrl}`);
|
||||
logger.info(`[Redis] Connecting to ${redisUrl}`);
|
||||
|
||||
sharedConnection = new IORedis(redisUrl, redisOptions);
|
||||
|
||||
sharedConnection.on('connect', () => {
|
||||
logger.info(`[Redis] ✅ Connected to ${redisUrl}`);
|
||||
logger.info(`[Redis] ✅ Connected successfully`);
|
||||
});
|
||||
|
||||
sharedConnection.on('error', (err) => {
|
||||
|
||||
@@ -24,7 +24,6 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
||||
logger.info(`[TAT Processor] Processing ${type} (${threshold}%) for request ${requestId}`);
|
||||
|
||||
try {
|
||||
|
||||
// Get approval level and workflow details
|
||||
const approvalLevel = await ApprovalLevel.findOne({
|
||||
where: { levelId }
|
||||
@@ -69,43 +68,58 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
||||
|
||||
switch (type) {
|
||||
case 'threshold1':
|
||||
emoji = '⏳';
|
||||
emoji = '';
|
||||
alertType = TatAlertType.TAT_50; // Keep enum for backwards compatibility
|
||||
thresholdPercentage = threshold;
|
||||
message = `${emoji} ${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}`;
|
||||
message = `${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}`;
|
||||
activityDetails = `${threshold}% of TAT time has elapsed`;
|
||||
|
||||
// Update TAT status in database
|
||||
// Update TAT status in database with comprehensive tracking
|
||||
await ApprovalLevel.update(
|
||||
{ tatPercentageUsed: threshold, tat50AlertSent: true },
|
||||
{
|
||||
tatPercentageUsed: threshold,
|
||||
tat50AlertSent: true,
|
||||
elapsedHours: elapsedHours,
|
||||
remainingHours: remainingHours
|
||||
},
|
||||
{ where: { levelId } }
|
||||
);
|
||||
break;
|
||||
|
||||
case 'threshold2':
|
||||
emoji = '⚠️';
|
||||
emoji = '';
|
||||
alertType = TatAlertType.TAT_75; // Keep enum for backwards compatibility
|
||||
thresholdPercentage = threshold;
|
||||
message = `${emoji} ${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}. Please take action soon.`;
|
||||
message = `${threshold}% of TAT elapsed for Request ${requestNumber}: ${title}. Please take action soon.`;
|
||||
activityDetails = `${threshold}% of TAT time has elapsed - Escalation warning`;
|
||||
|
||||
// Update TAT status in database
|
||||
// Update TAT status in database with comprehensive tracking
|
||||
await ApprovalLevel.update(
|
||||
{ tatPercentageUsed: threshold, tat75AlertSent: true },
|
||||
{
|
||||
tatPercentageUsed: threshold,
|
||||
tat75AlertSent: true,
|
||||
elapsedHours: elapsedHours,
|
||||
remainingHours: remainingHours
|
||||
},
|
||||
{ where: { levelId } }
|
||||
);
|
||||
break;
|
||||
|
||||
case 'breach':
|
||||
emoji = '⏰';
|
||||
emoji = '';
|
||||
alertType = TatAlertType.TAT_100;
|
||||
thresholdPercentage = 100;
|
||||
message = `${emoji} TAT breached for Request ${requestNumber}: ${title}. Immediate action required!`;
|
||||
message = `TAT breached for Request ${requestNumber}: ${title}. Immediate action required!`;
|
||||
activityDetails = 'TAT deadline reached - Breach notification';
|
||||
|
||||
// Update TAT status in database
|
||||
// Update TAT status in database with comprehensive tracking
|
||||
await ApprovalLevel.update(
|
||||
{ tatPercentageUsed: 100, tatBreached: true },
|
||||
{
|
||||
tatPercentageUsed: 100,
|
||||
tatBreached: true,
|
||||
elapsedHours: elapsedHours,
|
||||
remainingHours: 0 // No time remaining after breach
|
||||
},
|
||||
{ where: { levelId } }
|
||||
);
|
||||
break;
|
||||
@@ -146,6 +160,12 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
||||
logger.error(`[TAT Processor] ❌ Alert creation failed for ${type}: ${alertError.message}`);
|
||||
}
|
||||
|
||||
// Determine notification priority based on TAT threshold
|
||||
const notificationPriority =
|
||||
type === 'breach' ? 'URGENT' :
|
||||
type === 'threshold2' ? 'HIGH' :
|
||||
'MEDIUM';
|
||||
|
||||
// Send notification to approver
|
||||
await notificationService.sendToUsers([approverId], {
|
||||
title: type === 'breach' ? 'TAT Breach Alert' : 'TAT Reminder',
|
||||
@@ -153,9 +173,29 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
||||
requestId,
|
||||
requestNumber,
|
||||
url: `/request/${requestNumber}`,
|
||||
type: type
|
||||
type: type,
|
||||
priority: notificationPriority,
|
||||
actionRequired: type === 'breach' || type === 'threshold2' // Require action for critical alerts
|
||||
});
|
||||
|
||||
// If breached, also notify the initiator (workflow creator)
|
||||
if (type === 'breach') {
|
||||
const initiatorId = (workflow as any).initiatorId;
|
||||
if (initiatorId && initiatorId !== approverId) {
|
||||
await notificationService.sendToUsers([initiatorId], {
|
||||
title: 'TAT Breach - Request Delayed',
|
||||
body: `Your request ${requestNumber}: "${title}" has exceeded its TAT. The approver has been notified.`,
|
||||
requestId,
|
||||
requestNumber,
|
||||
url: `/request/${requestNumber}`,
|
||||
type: 'tat_breach_initiator',
|
||||
priority: 'HIGH',
|
||||
actionRequired: false
|
||||
});
|
||||
logger.info(`[TAT Processor] Breach notification sent to initiator ${initiatorId}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Log activity (skip if it fails - don't break the TAT notification)
|
||||
try {
|
||||
await activityService.log({
|
||||
@@ -199,7 +239,7 @@ export async function handleTatJob(job: Job<TatJobData>) {
|
||||
// Don't fail the job if socket emission fails
|
||||
}
|
||||
|
||||
logger.info(`[TAT Processor] ${type} notification sent for request ${requestId}`);
|
||||
logger.info(`[TAT Processor] ✅ ${type} notification sent for request ${requestId}`);
|
||||
} catch (error) {
|
||||
logger.error(`[TAT Processor] Failed to process ${type} job:`, error);
|
||||
throw error; // Re-throw to trigger retry
|
||||
|
||||
@@ -18,7 +18,12 @@ try {
|
||||
}
|
||||
}
|
||||
});
|
||||
logger.info('[TAT Queue] Queue initialized');
|
||||
|
||||
tatQueue.on('error', (error) => {
|
||||
logger.error('[TAT Queue] Queue error:', error);
|
||||
});
|
||||
|
||||
logger.info('[TAT Queue] ✅ Queue initialized');
|
||||
} catch (error) {
|
||||
logger.error('[TAT Queue] Failed to initialize:', error);
|
||||
tatQueue = null;
|
||||
|
||||
@@ -16,15 +16,13 @@ try {
|
||||
}
|
||||
});
|
||||
|
||||
logger.info('[TAT Worker] Worker initialized');
|
||||
|
||||
if (tatWorker) {
|
||||
tatWorker.on('ready', () => {
|
||||
logger.info('[TAT Worker] Ready and listening');
|
||||
logger.info('[TAT Worker] ✅ Ready and listening for TAT jobs');
|
||||
});
|
||||
|
||||
tatWorker.on('active', (job) => {
|
||||
logger.info(`[TAT Worker] Processing: ${job.name}`);
|
||||
logger.info(`[TAT Worker] Processing: ${job.name} for request ${job.data.requestId}`);
|
||||
});
|
||||
|
||||
tatWorker.on('completed', (job) => {
|
||||
|
||||
@@ -29,6 +29,14 @@ export function initSocket(httpServer: any) {
|
||||
let currentRequestId: string | null = null;
|
||||
let currentUserId: string | null = null;
|
||||
|
||||
// Join user's personal notification room
|
||||
socket.on('join:user', (data: { userId: string }) => {
|
||||
const userId = typeof data === 'string' ? data : data.userId;
|
||||
socket.join(`user:${userId}`);
|
||||
currentUserId = userId;
|
||||
console.log(`[Socket] User ${userId} joined personal notification room`);
|
||||
});
|
||||
|
||||
socket.on('join:request', (data: { requestId: string; userId?: string }) => {
|
||||
const requestId = typeof data === 'string' ? data : data.requestId;
|
||||
const userId = typeof data === 'object' ? data.userId : null;
|
||||
@@ -99,4 +107,10 @@ export function emitToRequestRoom(requestId: string, event: string, payload: any
|
||||
io.to(`request:${requestId}`).emit(event, payload);
|
||||
}
|
||||
|
||||
export function emitToUser(userId: string, event: string, payload: any) {
|
||||
if (!io) return;
|
||||
io.to(`user:${userId}`).emit(event, payload);
|
||||
console.log(`[Socket] Emitted '${event}' to user ${userId}`);
|
||||
}
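
`emitToUser` only works because the `join:user` handler and the emitter agree on the same room-name convention. A sketch of that convention — `userRoom`/`requestRoom` are hypothetical helper names; the code above builds these strings inline:

```javascript
// Room-name convention shared by the socket join handlers and emitters above:
// each user has a personal room, each request has a shared room.
function userRoom(userId) {
  return `user:${userId}`;
}

function requestRoom(requestId) {
  return `request:${requestId}`;
}
```

Any client that emits `join:user` with its own `userId` will then receive everything sent through `emitToUser` for that id.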
|
||||
|
||||
|
||||
|
||||
75 src/routes/ai.routes.ts (Normal file)
@@ -0,0 +1,75 @@
|
||||
import { Router, Request, Response } from 'express';
import { aiService } from '@services/ai.service';
import { authenticateToken } from '../middlewares/auth.middleware';
import logger from '@utils/logger';

const router = Router();

/**
 * @route GET /api/v1/ai/status
 * @desc Get AI service status
 * @access Private (Admin only)
 */
router.get('/status', authenticateToken, async (req: Request, res: Response) => {
  try {
    const isAvailable = aiService.isAvailable();
    const provider = aiService.getProviderName();

    res.json({
      success: true,
      data: {
        available: isAvailable,
        provider: provider,
        status: isAvailable ? 'active' : 'unavailable'
      }
    });
  } catch (error: any) {
    logger.error('[AI Routes] Error getting status:', error);
    res.status(500).json({
      success: false,
      error: 'Failed to get AI status'
    });
  }
});

/**
 * @route POST /api/v1/ai/reinitialize
 * @desc Reinitialize AI service (after config change)
 * @access Private (Admin only)
 */
router.post('/reinitialize', authenticateToken, async (req: Request, res: Response): Promise<void> => {
  try {
    // Check if user is admin
    const isAdmin = (req as any).user?.isAdmin;
    if (!isAdmin) {
      res.status(403).json({
        success: false,
        error: 'Only admins can reinitialize AI service'
      });
      return;
    }

    await aiService.reinitialize();

    const isAvailable = aiService.isAvailable();
    const provider = aiService.getProviderName();

    res.json({
      success: true,
      message: 'AI service reinitialized successfully',
      data: {
        available: isAvailable,
        provider: provider
      }
    });
  } catch (error: any) {
    logger.error('[AI Routes] Error reinitializing:', error);
    res.status(500).json({
      success: false,
      error: 'Failed to reinitialize AI service'
    });
  }
});

export default router;
|
||||
|
||||
47 src/routes/conclusion.routes.ts (Normal file)
@@ -0,0 +1,47 @@
|
||||
import { Router } from 'express';
import { conclusionController } from '@controllers/conclusion.controller';
import { authenticateToken } from '../middlewares/auth.middleware';

const router = Router();

// All routes require authentication
router.use(authenticateToken);

/**
 * @route POST /api/v1/conclusions/:requestId/generate
 * @desc Generate AI-powered conclusion remark
 * @access Private (Initiator only)
 */
router.post('/:requestId/generate', (req, res) =>
  conclusionController.generateConclusion(req, res)
);

/**
 * @route PUT /api/v1/conclusions/:requestId
 * @desc Update conclusion remark (edit by initiator)
 * @access Private (Initiator only)
 */
router.put('/:requestId', (req, res) =>
  conclusionController.updateConclusion(req, res)
);

/**
 * @route POST /api/v1/conclusions/:requestId/finalize
 * @desc Finalize conclusion and close request
 * @access Private (Initiator only)
 */
router.post('/:requestId/finalize', (req, res) =>
  conclusionController.finalizeConclusion(req, res)
);

/**
 * @route GET /api/v1/conclusions/:requestId
 * @desc Get conclusion for a request
 * @access Private
 */
router.get('/:requestId', (req, res) =>
  conclusionController.getConclusion(req, res)
);

export default router;
|
||||
|
||||
@@ -11,7 +11,7 @@ const router = Router();
|
||||
*/
|
||||
router.get('/',
|
||||
asyncHandler(async (req: Request, res: Response): Promise<void> => {
|
||||
const config = getPublicConfig();
|
||||
const config = await getPublicConfig();
|
||||
res.json({
|
||||
success: true,
|
||||
data: config
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import { Router, Request, Response } from 'express';
|
||||
import { tatQueue } from '../queues/tatQueue';
|
||||
import { tatWorker } from '../queues/tatWorker';
|
||||
import { TatAlert } from '@models/TatAlert';
|
||||
import { ApprovalLevel } from '@models/ApprovalLevel';
|
||||
import dayjs from 'dayjs';
|
||||
@@ -234,4 +235,122 @@ router.post('/tat-calculate', async (req: Request, res: Response): Promise<void>
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* Debug endpoint to check queue and worker status
|
||||
*/
|
||||
router.get('/queue-status', async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
if (!tatQueue || !tatWorker) {
|
||||
res.json({
|
||||
error: 'Queue or Worker not available',
|
||||
queueAvailable: !!tatQueue,
|
||||
workerAvailable: !!tatWorker
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Get job counts
|
||||
const [waiting, delayed, active, completed, failed] = await Promise.all([
|
||||
tatQueue.getJobCounts('waiting'),
|
||||
tatQueue.getJobCounts('delayed'),
|
||||
tatQueue.getJobCounts('active'),
|
||||
tatQueue.getJobCounts('completed'),
|
||||
tatQueue.getJobCounts('failed')
|
||||
]);
|
||||
|
||||
// Get all jobs in various states
|
||||
const waitingJobs = await tatQueue.getJobs(['waiting'], 0, 10);
|
||||
const delayedJobs = await tatQueue.getJobs(['delayed'], 0, 10);
|
||||
const activeJobs = await tatQueue.getJobs(['active'], 0, 10);
|
||||
|
||||
res.json({
|
||||
timestamp: new Date().toISOString(),
|
||||
queue: {
|
||||
name: tatQueue.name,
|
||||
available: true
|
||||
},
|
||||
worker: {
|
||||
available: true,
|
||||
running: tatWorker.isRunning(),
|
||||
paused: tatWorker.isPaused(),
|
||||
closing: tatWorker.closing,
|
||||
concurrency: tatWorker.opts.concurrency,
|
||||
autorun: tatWorker.opts.autorun
|
||||
},
|
||||
jobCounts: {
|
||||
waiting: waiting.waiting,
|
||||
delayed: delayed.delayed,
|
||||
active: active.active,
|
||||
completed: completed.completed,
|
||||
failed: failed.failed
|
||||
},
|
||||
recentJobs: {
|
||||
waiting: waitingJobs.map(j => ({ id: j.id, name: j.name, data: j.data })),
|
||||
delayed: delayedJobs.map(j => ({
|
||||
id: j.id,
|
||||
name: j.name,
|
||||
data: j.data,
|
||||
delay: j.opts.delay,
|
||||
timestamp: j.timestamp,
|
||||
scheduledFor: new Date(j.timestamp + (j.opts.delay || 0)).toISOString()
|
||||
})),
|
||||
active: activeJobs.map(j => ({ id: j.id, name: j.name, data: j.data }))
|
||||
}
|
||||
});
|
||||
|
||||
} catch (error: any) {
|
||||
logger.error('[Debug] Error checking queue status:', error);
|
||||
res.status(500).json({ error: error.message, stack: error.stack });
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* Debug endpoint to manually trigger a test TAT job (immediate execution)
|
||||
*/
|
||||
router.post('/trigger-test-tat', async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
if (!tatQueue) {
|
||||
res.json({
|
||||
error: 'TAT queue not available (Redis not connected)'
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const { requestId, levelId, approverId } = req.body;
|
||||
|
||||
// Add a test job with 5 second delay
|
||||
const job = await tatQueue.add(
|
||||
'test-threshold1',
|
||||
{
|
||||
type: 'threshold1',
|
||||
threshold: 50,
|
||||
requestId: requestId || 'test-request-123',
|
||||
levelId: levelId || 'test-level-456',
|
||||
approverId: approverId || 'test-approver-789'
|
||||
},
|
||||
{
|
||||
delay: 5000, // 5 seconds
|
||||
jobId: `test-tat-${Date.now()}`,
|
||||
removeOnComplete: false, // Keep for debugging
|
||||
removeOnFail: false
|
||||
}
|
||||
);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
message: 'Test TAT job created (will fire in 5 seconds)',
|
||||
job: {
|
||||
id: job.id,
|
||||
name: job.name,
|
||||
data: job.data,
|
||||
delay: 5000
|
||||
}
|
||||
});
|
||||
|
||||
} catch (error: any) {
|
||||
logger.error('[Debug] Error triggering test TAT:', error);
|
||||
res.status(500).json({ error: error.message, stack: error.stack });
|
||||
}
|
||||
});
|
||||
|
||||
export default router;
|
||||
|
||||
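The `scheduledFor` field in the `/queue-status` response is derived purely from BullMQ job metadata: the time the job was added (`timestamp`) plus its `opts.delay`. A small sketch of that calculation, using plain objects in place of real BullMQ jobs:

```typescript
// Subset of the BullMQ job fields the debug endpoint reads.
interface JobLike {
  timestamp: number;        // when the job was enqueued (ms since epoch)
  opts: { delay?: number }; // optional delay before it becomes runnable
}

// Mirrors the endpoint's computation: enqueue time plus delay.
function scheduledFor(job: JobLike): string {
  return new Date(job.timestamp + (job.opts.delay || 0)).toISOString();
}

const job: JobLike = {
  timestamp: Date.parse('2025-11-11T10:00:00.000Z'),
  opts: { delay: 5000 }
};
console.log(scheduledFor(job)); // "2025-11-11T10:00:05.000Z"
```

A job with no `delay` is scheduled at its enqueue time, which is why the endpoint defaults the delay to zero.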
@@ -8,6 +8,9 @@ import adminRoutes from './admin.routes';
import debugRoutes from './debug.routes';
import configRoutes from './config.routes';
import dashboardRoutes from './dashboard.routes';
+import notificationRoutes from './notification.routes';
+import conclusionRoutes from './conclusion.routes';
+import aiRoutes from './ai.routes';

const router = Router();

@@ -30,10 +33,12 @@ router.use('/tat', tatRoutes);
router.use('/admin', adminRoutes);
router.use('/debug', debugRoutes);
router.use('/dashboard', dashboardRoutes);
+router.use('/notifications', notificationRoutes);
+router.use('/conclusions', conclusionRoutes);
+router.use('/ai', aiRoutes);

// TODO: Add other route modules as they are implemented
// router.use('/approvals', approvalRoutes);
-// router.use('/notifications', notificationRoutes);
// router.use('/participants', participantRoutes);

export default router;

46
src/routes/notification.routes.ts
Normal file
@@ -0,0 +1,46 @@
import { Router } from 'express';
import { NotificationController } from '../controllers/notification.controller';
import { authenticateToken } from '../middlewares/auth.middleware';
import { asyncHandler } from '../middlewares/errorHandler.middleware';

const router = Router();
const notificationController = new NotificationController();

/**
 * Notification Routes
 * All routes require authentication
 */

// Get user's notifications (with pagination)
// Query params: page, limit, unreadOnly
router.get('/',
  authenticateToken,
  asyncHandler(notificationController.getUserNotifications.bind(notificationController))
);

// Get unread count
router.get('/unread-count',
  authenticateToken,
  asyncHandler(notificationController.getUnreadCount.bind(notificationController))
);

// Mark notification as read
router.patch('/:notificationId/read',
  authenticateToken,
  asyncHandler(notificationController.markAsRead.bind(notificationController))
);

// Mark all as read
router.post('/mark-all-read',
  authenticateToken,
  asyncHandler(notificationController.markAllAsRead.bind(notificationController))
);

// Delete notification
router.delete('/:notificationId',
  authenticateToken,
  asyncHandler(notificationController.deleteNotification.bind(notificationController))
);

export default router;

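Every route above wraps its controller method in `asyncHandler` so that a rejected promise reaches Express's error middleware instead of hanging the request. The wrapper itself is not part of this diff; a common minimal implementation (an assumption about the project's middleware, not its actual code) looks like this:

```typescript
type Handler = (req: any, res: any, next: (err?: any) => void) => any;

// Forwards any synchronous throw or async rejection to next(), where the
// central error-handling middleware can format the response.
function asyncHandler(fn: Handler): Handler {
  return (req, res, next) => {
    Promise.resolve(fn(req, res, next)).catch(next);
  };
}

// Tiny demonstration with stub req/res: the rejection reaches next().
const failing = asyncHandler(async () => { throw new Error('boom'); });
let captured = '';
failing({}, {}, (err: any) => { captured = err.message; });
```

The `.bind(notificationController)` in each route keeps `this` pointing at the controller instance when Express invokes the unbound method reference.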
@@ -16,6 +16,9 @@ import * as m12 from '../migrations/20251104-create-holidays';
import * as m13 from '../migrations/20251104-create-admin-config';
import * as m14 from '../migrations/20251105-add-skip-fields-to-approval-levels';
import * as m15 from '../migrations/2025110501-alter-tat-days-to-generated';
+import * as m16 from '../migrations/20251111-create-notifications';
+import * as m17 from '../migrations/20251111-create-conclusion-remarks';
+import * as m18 from '../migrations/20251111-add-ai-provider-configs';

interface Migration {
  name: string;

@@ -46,6 +49,9 @@ const migrations: Migration[] = [
  { name: '20251104-create-admin-config', module: m13 },
  { name: '20251105-add-skip-fields-to-approval-levels', module: m14 },
  { name: '2025110501-alter-tat-days-to-generated', module: m15 },
+  { name: '20251111-create-notifications', module: m16 },
+  { name: '20251111-create-conclusion-remarks', module: m17 },
+  { name: '20251111-add-ai-provider-configs', module: m18 },
];

/**

@@ -2,7 +2,6 @@ import app from './app';
import http from 'http';
import { initSocket } from './realtime/socket';
import './queues/tatWorker'; // Initialize TAT worker
import './queues/delayedJobPromoter'; // Initialize delayed job promoter (workaround for BullMQ + remote Redis)
import { logTatConfig } from './config/tat.config';
import { logSystemConfig } from './config/system.config';
import { initializeHolidaysCache } from './utils/tatTimeUtils';

@@ -2,7 +2,7 @@ import logger from '@utils/logger';

export type ActivityEntry = {
  requestId: string;
-  type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning';
+  type: 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change' | 'comment' | 'reminder' | 'document_added' | 'sla_warning' | 'ai_conclusion_generated' | 'closed';
  user?: { userId: string; name?: string; email?: string };
  timestamp: string;
  action: string;

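Widening the `type` union to include `'ai_conclusion_generated'` and `'closed'` lets the compiler police every site that consumes activity types. A small sketch of how the union constrains callers (the union members mirror the diff; the label map is illustrative):

```typescript
type ActivityType =
  | 'created' | 'assignment' | 'approval' | 'rejection' | 'status_change'
  | 'comment' | 'reminder' | 'document_added' | 'sla_warning'
  | 'ai_conclusion_generated' | 'closed';

// Record<ActivityType, string> is exhaustive: adding a union member
// without a label here becomes a missing-property compile error.
const labels: Record<ActivityType, string> = {
  created: 'Created',
  assignment: 'Assigned',
  approval: 'Approved',
  rejection: 'Rejected',
  status_change: 'Status Changed',
  comment: 'Commented',
  reminder: 'Reminder Sent',
  document_added: 'Document Added',
  sla_warning: 'SLA Warning',
  ai_conclusion_generated: 'AI Conclusion Generated',
  closed: 'Closed',
};

console.log(labels['ai_conclusion_generated']); // "AI Conclusion Generated"
```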
533
src/services/ai.service.ts
Normal file
@@ -0,0 +1,533 @@
import logger from '@utils/logger';
import { getAIProviderConfig } from './configReader.service';

// Provider-specific interfaces
interface AIProvider {
  generateText(prompt: string): Promise<string>;
  isAvailable(): boolean;
  getProviderName(): string;
}

// Claude Provider
class ClaudeProvider implements AIProvider {
  private client: any = null;
  private model: string;

  constructor(apiKey?: string) {
    // Allow model override via environment variable
    // Current models (November 2025):
    // - claude-sonnet-4-20250514 (default - latest Claude Sonnet 4)
    // - Use env variable CLAUDE_MODEL to override if needed
    this.model = process.env.CLAUDE_MODEL || 'claude-sonnet-4-20250514';

    try {
      // Priority: 1. Provided key, 2. Environment variable
      const key = apiKey || process.env.CLAUDE_API_KEY || process.env.ANTHROPIC_API_KEY;

      if (!key || key.trim() === '') {
        return; // Silently skip if no key available
      }

      // Dynamic require to avoid a hard dependency on the SDK
      const Anthropic = require('@anthropic-ai/sdk');
      this.client = new Anthropic({ apiKey: key });
      logger.info(`[AI Service] ✅ Claude provider initialized with model: ${this.model}`);
    } catch (error: any) {
      // Handle missing package gracefully
      if (error.code === 'MODULE_NOT_FOUND') {
        logger.warn('[AI Service] Claude SDK not installed. Run: npm install @anthropic-ai/sdk');
      } else {
        logger.error('[AI Service] Failed to initialize Claude:', error.message);
      }
    }
  }

  async generateText(prompt: string): Promise<string> {
    if (!this.client) throw new Error('Claude client not initialized');

    logger.info(`[AI Service] Generating with Claude model: ${this.model}`);

    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: 2048, // Increased for longer conclusions
      temperature: 0.3,
      messages: [{ role: 'user', content: prompt }]
    });

    const content = response.content[0];
    return content.type === 'text' ? content.text : '';
  }

  isAvailable(): boolean {
    return this.client !== null;
  }

  getProviderName(): string {
    return 'Claude (Anthropic)';
  }
}

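Each provider constructor distinguishes "SDK not installed" (`MODULE_NOT_FOUND` from the dynamic `require`) from any other initialization failure, so the log message can tell the admin exactly what to do. That branching can be isolated into a small pure helper (a sketch of the logic the catch blocks inline, not code from the service):

```typescript
// Classify a provider-constructor failure the way the catch blocks above do:
// a missing SDK gets an actionable "npm install" hint, anything else is
// treated as a genuine initialization error.
function classifyInitError(err: { code?: string; message?: string }): 'missing-sdk' | 'init-failed' {
  return err.code === 'MODULE_NOT_FOUND' ? 'missing-sdk' : 'init-failed';
}

console.log(classifyInitError({ code: 'MODULE_NOT_FOUND' })); // "missing-sdk"
console.log(classifyInitError({ message: 'invalid API key' })); // "init-failed"
```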
// OpenAI Provider
class OpenAIProvider implements AIProvider {
  private client: any = null;
  private model: string = 'gpt-4o';

  constructor(apiKey?: string) {
    try {
      // Priority: 1. Provided key, 2. Environment variable
      const key = apiKey || process.env.OPENAI_API_KEY;

      if (!key || key.trim() === '') {
        return; // Silently skip if no key available
      }

      const OpenAI = require('openai');
      this.client = new OpenAI({ apiKey: key });
      logger.info('[AI Service] ✅ OpenAI provider initialized');
    } catch (error: any) {
      // Handle missing package gracefully
      if (error.code === 'MODULE_NOT_FOUND') {
        logger.warn('[AI Service] OpenAI SDK not installed. Run: npm install openai');
      } else {
        logger.error('[AI Service] Failed to initialize OpenAI:', error.message);
      }
    }
  }

  async generateText(prompt: string): Promise<string> {
    if (!this.client) throw new Error('OpenAI client not initialized');

    const response = await this.client.chat.completions.create({
      model: this.model,
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 1024,
      temperature: 0.3
    });

    return response.choices[0]?.message?.content || '';
  }

  isAvailable(): boolean {
    return this.client !== null;
  }

  getProviderName(): string {
    return 'OpenAI (GPT-4)';
  }
}

// Gemini Provider (Google)
class GeminiProvider implements AIProvider {
  private client: any = null;
  private model: string = 'gemini-1.5-pro';

  constructor(apiKey?: string) {
    try {
      // Priority: 1. Provided key, 2. Environment variable
      const key = apiKey || process.env.GEMINI_API_KEY || process.env.GOOGLE_AI_API_KEY;

      if (!key || key.trim() === '') {
        return; // Silently skip if no key available
      }

      const { GoogleGenerativeAI } = require('@google/generative-ai');
      this.client = new GoogleGenerativeAI(key);
      logger.info('[AI Service] ✅ Gemini provider initialized');
    } catch (error: any) {
      // Handle missing package gracefully
      if (error.code === 'MODULE_NOT_FOUND') {
        logger.warn('[AI Service] Gemini SDK not installed. Run: npm install @google/generative-ai');
      } else {
        logger.error('[AI Service] Failed to initialize Gemini:', error.message);
      }
    }
  }

  async generateText(prompt: string): Promise<string> {
    if (!this.client) throw new Error('Gemini client not initialized');

    const model = this.client.getGenerativeModel({ model: this.model });
    const result = await model.generateContent(prompt);
    const response = await result.response;
    return response.text();
  }

  isAvailable(): boolean {
    return this.client !== null;
  }

  getProviderName(): string {
    return 'Gemini (Google)';
  }
}

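All three classes satisfy the same `AIProvider` interface, and that is what makes the failover in `AIService` possible: try the preferred provider, then walk a fallback list until one reports itself available. A self-contained sketch of that selection logic with stub providers:

```typescript
interface Provider {
  isAvailable(): boolean;
  getProviderName(): string;
}

// Stub factory standing in for ClaudeProvider/OpenAIProvider/GeminiProvider.
function stub(name: string, available: boolean): Provider {
  return { isAvailable: () => available, getProviderName: () => name };
}

// First available provider wins, mirroring tryProvider() plus the
// fallback loop in AIService.initialize().
function pickProvider(preferred: Provider, fallbacks: Provider[]): string | null {
  for (const p of [preferred, ...fallbacks]) {
    if (p.isAvailable()) return p.getProviderName();
  }
  return null; // no provider configured: AI features stay disabled
}

console.log(pickProvider(stub('Claude', false), [stub('OpenAI', true), stub('Gemini', true)])); // "OpenAI"
console.log(pickProvider(stub('Claude', false), [])); // null
```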
class AIService {
  private provider: AIProvider | null = null;
  private providerName: string = 'None';
  private isInitialized: boolean = false;

  constructor() {
    // Initialization happens asynchronously
    this.initialize();
  }

  /**
   * Initialize AI provider from database configuration
   */
  async initialize(): Promise<void> {
    try {
      // Read AI configuration from database (with env fallback)
      const config = await getAIProviderConfig();

      if (!config.enabled) {
        logger.warn('[AI Service] AI features disabled in admin configuration');
        return;
      }

      const preferredProvider = config.provider.toLowerCase();
      logger.info(`[AI Service] Preferred provider from config: ${preferredProvider}`);

      // Try to initialize the preferred provider first
      let initialized = false;

      switch (preferredProvider) {
        case 'openai':
        case 'gpt':
          initialized = this.tryProvider(new OpenAIProvider(config.openaiKey));
          break;
        case 'gemini':
        case 'google':
          initialized = this.tryProvider(new GeminiProvider(config.geminiKey));
          break;
        case 'claude':
        case 'anthropic':
        default:
          initialized = this.tryProvider(new ClaudeProvider(config.claudeKey));
          break;
      }

      // Fallback: Try other providers if preferred one failed
      if (!initialized) {
        logger.warn('[AI Service] Preferred provider unavailable. Trying fallbacks...');

        const fallbackProviders = [
          new ClaudeProvider(config.claudeKey),
          new OpenAIProvider(config.openaiKey),
          new GeminiProvider(config.geminiKey)
        ];

        for (const provider of fallbackProviders) {
          if (this.tryProvider(provider)) {
            logger.info(`[AI Service] ✅ Using fallback provider: ${this.providerName}`);
            break;
          }
        }
      }

      if (!this.provider) {
        logger.warn('[AI Service] ⚠️ No AI provider available. AI features will be disabled.');
        logger.warn('[AI Service] To enable AI: Configure API keys in admin panel or set environment variables.');
        logger.warn('[AI Service] Supported providers: Claude (CLAUDE_API_KEY), OpenAI (OPENAI_API_KEY), Gemini (GEMINI_API_KEY)');
      }

      this.isInitialized = true;
    } catch (error) {
      logger.error('[AI Service] Failed to initialize from config:', error);
      // Fallback to environment variables
      try {
        this.initializeFromEnv();
      } catch (envError) {
        logger.error('[AI Service] Environment fallback also failed:', envError);
        this.isInitialized = true; // Mark as initialized even if failed
      }
    }
  }

  /**
   * Fallback initialization from environment variables
   */
  private initializeFromEnv(): void {
    try {
      const preferredProvider = (process.env.AI_PROVIDER || 'claude').toLowerCase();

      logger.info(`[AI Service] Using environment variable configuration`);

      switch (preferredProvider) {
        case 'openai':
        case 'gpt':
          this.tryProvider(new OpenAIProvider());
          break;
        case 'gemini':
        case 'google':
          this.tryProvider(new GeminiProvider());
          break;
        case 'claude':
        case 'anthropic':
        default:
          this.tryProvider(new ClaudeProvider());
          break;
      }

      if (!this.provider) {
        logger.warn('[AI Service] ⚠️ No provider available from environment variables either.');
      }

      this.isInitialized = true;
    } catch (error) {
      logger.error('[AI Service] Environment initialization failed:', error);
      this.isInitialized = true; // Still mark as initialized to prevent infinite loops
    }
  }

  /**
   * Reinitialize AI provider (call after admin updates config)
   */
  async reinitialize(): Promise<void> {
    logger.info('[AI Service] Reinitializing AI provider from updated configuration...');
    this.provider = null;
    this.providerName = 'None';
    this.isInitialized = false;
    await this.initialize();
  }

  private tryProvider(provider: AIProvider): boolean {
    if (provider.isAvailable()) {
      this.provider = provider;
      this.providerName = provider.getProviderName();
      logger.info(`[AI Service] ✅ Active provider: ${this.providerName}`);
      return true;
    }
    return false;
  }

  /**
   * Get current AI provider name
   */
  getProviderName(): string {
    return this.providerName;
  }

  /**
   * Generate conclusion remark for a workflow request
   * @param context - All relevant data for generating the conclusion
   * @returns AI-generated conclusion remark
   */
  async generateConclusionRemark(context: {
    requestTitle: string;
    requestDescription: string;
    requestNumber: string;
    priority: string;
    approvalFlow: Array<{
      levelNumber: number;
      approverName: string;
      status: string;
      comments?: string;
      actionDate?: string;
      tatHours?: number;
      elapsedHours?: number;
    }>;
    workNotes: Array<{
      userName: string;
      message: string;
      createdAt: string;
    }>;
    documents: Array<{
      fileName: string;
      uploadedBy: string;
      uploadedAt: string;
    }>;
    activities: Array<{
      type: string;
      action: string;
      details: string;
      timestamp: string;
    }>;
  }): Promise<{ remark: string; confidence: number; keyPoints: string[]; provider: string }> {
    // Ensure initialization is complete
    if (!this.isInitialized) {
      logger.warn('[AI Service] Not yet initialized, attempting initialization...');
      await this.initialize();
    }

    if (!this.provider) {
      logger.error('[AI Service] No AI provider available');
      throw new Error('AI features are currently unavailable. Please configure an AI provider (Claude, OpenAI, or Gemini) in the admin panel, or write the conclusion manually.');
    }

    try {
      // Build context prompt
      const prompt = this.buildConclusionPrompt(context);

      logger.info(`[AI Service] Generating conclusion for request ${context.requestNumber} using ${this.providerName}...`);

      // Use provider's generateText method
      const remarkText = await this.provider.generateText(prompt);

      // Extract key points (look for bullet points or numbered items)
      const keyPoints = this.extractKeyPoints(remarkText);

      // Calculate confidence based on response quality (simple heuristic)
      const confidence = this.calculateConfidence(remarkText, context);

      logger.info(`[AI Service] ✅ Generated conclusion (${remarkText.length} chars, ${keyPoints.length} key points) via ${this.providerName}`);

      return {
        remark: remarkText,
        confidence: confidence,
        keyPoints: keyPoints,
        provider: this.providerName
      };
    } catch (error: any) {
      logger.error('[AI Service] Failed to generate conclusion:', error);
      throw new Error(`AI generation failed (${this.providerName}): ${error.message}`);
    }
  }

  /**
   * Build the prompt for the active AI provider to generate a professional conclusion remark
   */
  private buildConclusionPrompt(context: any): string {
    const {
      requestTitle,
      requestDescription,
      requestNumber,
      priority,
      approvalFlow,
      workNotes,
      documents,
      activities
    } = context;

    // Summarize approvals
    const approvalSummary = approvalFlow
      .filter((a: any) => a.status === 'APPROVED' || a.status === 'REJECTED')
      .map((a: any) => {
        const tatInfo = a.elapsedHours && a.tatHours
          ? ` (completed in ${a.elapsedHours.toFixed(1)}h of ${a.tatHours}h TAT)`
          : '';
        return `- Level ${a.levelNumber}: ${a.approverName} ${a.status}${tatInfo}${a.comments ? `\n  Comment: "${a.comments}"` : ''}`;
      })
      .join('\n');

    // Summarize work notes (limit to important ones)
    const workNoteSummary = workNotes
      .slice(-10) // Last 10 work notes
      .map((wn: any) => `- ${wn.userName}: "${wn.message.substring(0, 150)}${wn.message.length > 150 ? '...' : ''}"`)
      .join('\n');

    // Summarize documents
    const documentSummary = documents
      .map((d: any) => `- ${d.fileName} (by ${d.uploadedBy})`)
      .join('\n');

    const prompt = `You are writing a closure summary for a workflow request at Royal Enfield. Write a practical, realistic conclusion that an employee would write when closing a request.

**Request:**
${requestNumber} - ${requestTitle}
Description: ${requestDescription}
Priority: ${priority}

**What Happened:**
${approvalSummary || 'No approvals recorded'}

**Discussions (if any):**
${workNoteSummary || 'No work notes'}

**Documents:**
${documentSummary || 'No documents'}

**YOUR TASK:**
Write a brief, professional conclusion (100-200 words) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Notes the outcome and next steps (if applicable)
- Uses clear, factual language without time-specific references
- Is suitable for permanent archiving and future reference
- Sounds natural and human-written (not AI-generated)

**IMPORTANT:**
- Be concise and direct
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
- Write like a professional documenting a completed process
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions

Write the conclusion now:`;

    return prompt;
  }

  /**
   * Extract key points from the AI-generated remark
   */
  private extractKeyPoints(remark: string): string[] {
    const keyPoints: string[] = [];

    // Look for bullet points (-, •, *) or numbered items (1., 2., etc.)
    const lines = remark.split('\n');

    for (const line of lines) {
      const trimmed = line.trim();

      // Match bullet points
      if (trimmed.match(/^[-•*]\s+(.+)$/)) {
        const point = trimmed.replace(/^[-•*]\s+/, '');
        if (point.length > 10) { // Ignore very short lines
          keyPoints.push(point);
        }
      }

      // Match numbered items
      if (trimmed.match(/^\d+\.\s+(.+)$/)) {
        const point = trimmed.replace(/^\d+\.\s+/, '');
        if (point.length > 10) {
          keyPoints.push(point);
        }
      }
    }

    // If no bullet points found, extract first few sentences
    if (keyPoints.length === 0) {
      const sentences = remark.split(/[.!?]+/).filter(s => s.trim().length > 20);
      keyPoints.push(...sentences.slice(0, 3).map(s => s.trim()));
    }

    return keyPoints.slice(0, 5); // Max 5 key points
  }

  /**
   * Calculate confidence score based on response quality
   */
  private calculateConfidence(remark: string, context: any): number {
    let score = 0.6; // Base score (slightly higher for new prompt)

    // Check if remark has good length (100-400 chars - more realistic)
    if (remark.length >= 100 && remark.length <= 400) {
      score += 0.2;
    }

    // Check if remark mentions key elements
    if (remark.toLowerCase().includes('approv')) {
      score += 0.1;
    }

    // Check if remark is not too generic
    if (remark.length > 80 && !remark.toLowerCase().includes('lorem ipsum')) {
      score += 0.1;
    }

    return Math.min(1.0, score);
  }

  /**
   * Check if AI service is available
   */
  isAvailable(): boolean {
    return this.provider !== null;
  }
}

export const aiService = new AIService();

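`extractKeyPoints` first harvests bullet and numbered lines, then falls back to leading sentences. The bullet-matching half can be exercised standalone (same regexes as the service, trimmed to the essentials):

```typescript
// Pull bullet ("-", "•", "*") and numbered ("1.") items out of AI output,
// ignoring very short fragments, as the service does; cap at five points.
function extractKeyPoints(remark: string): string[] {
  const points: string[] = [];
  for (const line of remark.split('\n')) {
    const t = line.trim();
    const m = t.match(/^(?:[-•*]|\d+\.)\s+(.+)$/);
    if (m && m[1].length > 10) points.push(m[1]);
  }
  return points.slice(0, 5);
}

const sample = [
  '- Approved by all three levels',
  '- short',                         // under 10 chars: dropped
  '1. Budget document attached',
  'Closing note.'                    // plain prose: not a bullet
].join('\n');

console.log(extractKeyPoints(sample)); // ["Approved by all three levels", "Budget document attached"]
```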
@@ -79,22 +79,145 @@ export class ApprovalService {
        { where: { requestId: level.requestId } }
      );
      logger.info(`Final approver approved. Workflow ${level.requestId} closed as APPROVED`);
-      // Notify initiator
-      if (wf) {
-        await notificationService.sendToUsers([ (wf as any).initiatorId ], {
-          title: `Approved: ${(wf as any).requestNumber}`,
-          body: `${(wf as any).title}`,
-          requestNumber: (wf as any).requestNumber,
-          url: `/request/${(wf as any).requestNumber}`
-        });

      // Log final approval activity first (so it's included in AI context)
      activityService.log({
        requestId: level.requestId,
        type: 'approval',
        user: { userId: level.approverId, name: level.approverName },
        timestamp: new Date().toISOString(),
        action: 'Approved',
-        details: `Request approved and finalized by ${level.approverName || level.approverEmail}`
+        details: `Request approved and finalized by ${level.approverName || level.approverEmail}. Awaiting conclusion remark from initiator.`
      });

      // Generate AI conclusion remark
      try {
        const { aiService } = await import('./ai.service');
        const { ConclusionRemark } = await import('@models/index');
        const { ApprovalLevel } = await import('@models/ApprovalLevel');
        const { WorkNote } = await import('@models/WorkNote');
        const { Document } = await import('@models/Document');
        const { Activity } = await import('@models/Activity');

        if (aiService.isAvailable()) {
          logger.info(`[Approval] Generating AI conclusion for ${level.requestId}...`);

          // Gather context for AI generation
          const approvalLevels = await ApprovalLevel.findAll({
            where: { requestId: level.requestId },
            order: [['levelNumber', 'ASC']]
          });

          const workNotes = await WorkNote.findAll({
            where: { requestId: level.requestId },
            order: [['createdAt', 'ASC']],
            limit: 20
          });

          const documents = await Document.findAll({
            where: { requestId: level.requestId },
            order: [['uploadedAt', 'DESC']]
          });

          const activities = await Activity.findAll({
            where: { requestId: level.requestId },
            order: [['createdAt', 'ASC']],
            limit: 50
          });

          // Build context object
          const context = {
            requestTitle: (wf as any).title,
            requestDescription: (wf as any).description,
            requestNumber: (wf as any).requestNumber,
            priority: (wf as any).priority,
            approvalFlow: approvalLevels.map((l: any) => ({
              levelNumber: l.levelNumber,
              approverName: l.approverName,
              status: l.status,
              comments: l.comments,
              actionDate: l.actionDate,
              tatHours: Number(l.tatHours || 0),
              elapsedHours: Number(l.elapsedHours || 0)
            })),
            workNotes: workNotes.map((note: any) => ({
              userName: note.userName,
              message: note.message,
              createdAt: note.createdAt
            })),
            documents: documents.map((doc: any) => ({
              fileName: doc.originalFileName || doc.fileName,
              uploadedBy: doc.uploadedBy,
              uploadedAt: doc.uploadedAt
            })),
            activities: activities.map((activity: any) => ({
              type: activity.activityType,
              action: activity.activityDescription,
              details: activity.activityDescription,
              timestamp: activity.createdAt
            }))
          };

          const aiResult = await aiService.generateConclusionRemark(context);

          // Save to database
          await ConclusionRemark.create({
            requestId: level.requestId,
            aiGeneratedRemark: aiResult.remark,
            aiModelUsed: aiResult.provider,
            aiConfidenceScore: aiResult.confidence,
            finalRemark: null,
            editedBy: null,
            isEdited: false,
            editCount: 0,
            approvalSummary: {
              totalLevels: approvalLevels.length,
              approvedLevels: approvalLevels.filter((l: any) => l.status === 'APPROVED').length,
              averageTatUsage: approvalLevels.reduce((sum: number, l: any) =>
                sum + Number(l.tatPercentageUsed || 0), 0) / (approvalLevels.length || 1)
            },
            documentSummary: {
              totalDocuments: documents.length,
              documentNames: documents.map((d: any) => d.originalFileName || d.fileName)
            },
            keyDiscussionPoints: aiResult.keyPoints,
            generatedAt: new Date(),
            finalizedAt: null
          } as any);

          logger.info(`[Approval] ✅ AI conclusion generated for ${level.requestId}`);

          // Log activity
          activityService.log({
            requestId: level.requestId,
            type: 'ai_conclusion_generated',
            user: { userId: 'system', name: 'System' },
            timestamp: new Date().toISOString(),
            action: 'AI Conclusion Generated',
            details: 'AI-powered conclusion remark generated for review by initiator'
          });
        } else {
          logger.warn(`[Approval] AI service unavailable for ${level.requestId}, skipping conclusion generation`);
        }
      } catch (aiError) {
        logger.error(`[Approval] Failed to generate AI conclusion:`, aiError);
        // Don't fail the approval if AI generation fails - initiator can write manually
      }

      // Notify initiator about approval and pending conclusion step
      if (wf) {
        await notificationService.sendToUsers([ (wf as any).initiatorId ], {
          title: `Request Approved - Closure Pending`,
          body: `Your request "${(wf as any).title}" has been fully approved. Please review and finalize the conclusion remark to close the request.`,
          requestNumber: (wf as any).requestNumber,
          requestId: level.requestId,
          url: `/request/${(wf as any).requestNumber}`,
          type: 'approval_pending_closure',
          priority: 'HIGH',
          actionRequired: true
        });

        logger.info(`[Approval] ✅ Final approval complete for ${level.requestId}. Initiator notified to finalize conclusion.`);
      }
    } else {
      // Not final - move to next level

@@ -119,3 +119,22 @@ export async function preloadConfigurations(): Promise<void> {
   }
 }

+/**
+ * Get AI provider configurations
+ */
+export async function getAIProviderConfig(): Promise<{
+  provider: string;
+  claudeKey: string;
+  openaiKey: string;
+  geminiKey: string;
+  enabled: boolean;
+}> {
+  const provider = await getConfigValue('AI_PROVIDER', 'claude');
+  const claudeKey = await getConfigValue('CLAUDE_API_KEY', '');
+  const openaiKey = await getConfigValue('OPENAI_API_KEY', '');
+  const geminiKey = await getConfigValue('GEMINI_API_KEY', '');
+  const enabled = await getConfigBoolean('AI_ENABLED', true);
+
+  return { provider, claudeKey, openaiKey, geminiKey, enabled };
+}
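A caller of `getAIProviderConfig` could implement the failover described in the admin docs by preferring the configured provider and falling back to whichever key is present. The helper below is an illustrative sketch, not part of the committed code; the config shape mirrors the function's return type:

```typescript
interface AIProviderConfig {
  provider: string;
  claudeKey: string;
  openaiKey: string;
  geminiKey: string;
  enabled: boolean;
}

// Pick the first usable provider: the configured one if its key is set,
// otherwise any provider that has a key, otherwise none.
function resolveProvider(cfg: AIProviderConfig): string | null {
  if (!cfg.enabled) return null;
  const keys: Record<string, string> = {
    claude: cfg.claudeKey,
    openai: cfg.openaiKey,
    gemini: cfg.geminiKey,
  };
  if (keys[cfg.provider]) return cfg.provider;
  for (const [name, key] of Object.entries(keys)) {
    if (key) return name;
  }
  return null;
}
```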
@@ -1,9 +1,22 @@
 import webpush from 'web-push';
 import logger from '@utils/logger';
 import { Subscription } from '@models/Subscription';
+import { Notification } from '@models/Notification';

 type PushSubscription = any; // Web Push protocol JSON

+interface NotificationPayload {
+  title: string;
+  body: string;
+  requestId?: string;
+  requestNumber?: string;
+  url?: string;
+  type?: string;
+  priority?: 'LOW' | 'MEDIUM' | 'HIGH' | 'URGENT';
+  actionRequired?: boolean;
+  metadata?: any;
+}
+
 class NotificationService {
   private userIdToSubscriptions: Map<string, PushSubscription[]> = new Map();
@@ -44,25 +57,78 @@ class NotificationService {
     logger.info(`Subscription stored for user ${userId}. Total: ${list.length}`);
   }

-  async sendToUsers(userIds: string[], payload: any) {
+  /**
+   * Send notification to users - saves to DB and sends via push/socket
+   */
+  async sendToUsers(userIds: string[], payload: NotificationPayload) {
     const message = JSON.stringify(payload);
-    for (const uid of userIds) {
-      let subs = this.userIdToSubscriptions.get(uid) || [];
+    const sentVia: string[] = ['IN_APP']; // Always save to DB for in-app display
+
+    for (const userId of userIds) {
+      try {
+        // 1. Save notification to database for in-app display
+        const notification = await Notification.create({
+          userId,
+          requestId: payload.requestId,
+          notificationType: payload.type || 'general',
+          title: payload.title,
+          message: payload.body,
+          isRead: false,
+          priority: payload.priority || 'MEDIUM',
+          actionUrl: payload.url,
+          actionRequired: payload.actionRequired || false,
+          metadata: {
+            requestNumber: payload.requestNumber,
+            ...payload.metadata
+          },
+          sentVia,
+          emailSent: false,
+          smsSent: false,
+          pushSent: false
+        } as any);
+
+        logger.info(`[Notification] Created in-app notification for user ${userId}: ${payload.title}`);
+
+        // 2. Emit real-time socket event for immediate delivery
+        try {
+          const { emitToUser } = require('../realtime/socket');
+          if (emitToUser) {
+            emitToUser(userId, 'notification:new', {
+              notification: notification.toJSON(),
+              ...payload
+            });
+            logger.info(`[Notification] Emitted socket event to user ${userId}`);
+          }
+        } catch (socketError) {
+          logger.warn(`[Notification] Socket emit failed (not critical):`, socketError);
+        }
+
+        // 3. Send push notification (if user has subscriptions)
+        let subs = this.userIdToSubscriptions.get(userId) || [];
+        // Load from DB if memory empty
         if (subs.length === 0) {
           try {
-            const rows = await Subscription.findAll({ where: { userId: uid } });
+            const rows = await Subscription.findAll({ where: { userId } });
             subs = rows.map((r: any) => ({ endpoint: r.endpoint, keys: { p256dh: r.p256dh, auth: r.auth } }));
           } catch {}
         }

         if (subs.length > 0) {
           for (const sub of subs) {
             try {
               await webpush.sendNotification(sub, message);
+              await notification.update({ pushSent: true });
+              logger.info(`[Notification] Push sent to user ${userId}`);
             } catch (err) {
-              logger.error(`Failed to send push to ${uid}:`, err);
+              logger.error(`Failed to send push to ${userId}:`, err);
             }
           }
         }
+      } catch (error) {
+        logger.error(`[Notification] Failed to create notification for user ${userId}:`, error);
+        // Continue to next user even if one fails
+      }
     }
   }
 }
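The rewritten `sendToUsers` follows a persist-first, best-effort fan-out pattern: the database write is the one hard requirement, and each transport (socket, push) is attempted independently so one failure never blocks the others. A minimal restatement of that pattern (channel names and function shapes are illustrative, not the service's actual API):

```typescript
type Channel = 'IN_APP' | 'SOCKET' | 'PUSH';

// Persist first, then try each transport in isolation; a throwing
// transport is swallowed (the real service logs it) and the rest
// still run. Returns the channels that actually succeeded.
async function fanOut(
  persist: () => Promise<void>,
  transports: Record<Exclude<Channel, 'IN_APP'>, () => Promise<void>>
): Promise<Channel[]> {
  const delivered: Channel[] = [];
  await persist(); // the only step allowed to fail the whole call
  delivered.push('IN_APP');
  for (const [name, send] of Object.entries(transports)) {
    try {
      await send();
      delivered.push(name as Channel);
    } catch {
      // best-effort: log-and-continue in the real service
    }
  }
  return delivered;
}
```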
@@ -70,3 +136,4 @@ export const notificationService = new NotificationService();

 notificationService.configure();
@@ -134,7 +134,10 @@ export class TatSchedulerService {
       {
         delay: spacedDelay,
         jobId: jobId,
-        removeOnComplete: true,
+        removeOnComplete: {
+          age: 3600, // Keep for 1 hour for debugging
+          count: 1000
+        },
         removeOnFail: false
       }
     );
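For context on the change above: BullMQ's `removeOnComplete` accepts either a boolean (`true` deletes the job record as soon as it completes) or a keep-jobs object, where `age` is a maximum age in seconds and `count` a maximum number of retained jobs. The object form keeps recently completed TAT jobs around for inspection. A standalone fragment of the same option shape:

```typescript
// Keep completed jobs up to 1 hour or the last 1000, whichever prunes
// first; keep failed jobs indefinitely for debugging.
const jobOptions = {
  removeOnComplete: { age: 3600, count: 1000 },
  removeOnFail: false,
};
```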
@@ -460,7 +460,7 @@ export class WorkflowService {
       limit,
       order: [['createdAt', 'DESC']],
       include: [
-        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
+        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
       ],
     });
     const data = await this.enrichForCards(rows);
@@ -499,6 +499,9 @@ export class WorkflowService {
       return sum + Number(a.tatHours || 0);
     }, 0);

+    // Calculate approved levels count
+    const approvedLevelsCount = approvals.filter((a: any) => a.status === 'APPROVED').length;
+
     const priority = ((wf as any).priority || 'standard').toString().toLowerCase();

     // Calculate OVERALL request SLA (from submission to total deadline)
@@ -537,7 +540,11 @@ export class WorkflowService {
       status: (wf as any).status,
       priority: (wf as any).priority,
       submittedAt: (wf as any).submissionDate,
+      createdAt: (wf as any).createdAt,
+      closureDate: (wf as any).closureDate,
+      conclusionRemark: (wf as any).conclusionRemark,
       initiator: (wf as any).initiator,
+      department: (wf as any).initiator?.department,
       totalLevels: (wf as any).totalLevels,
       totalTatHours: totalTatHours,
       currentLevel: currentLevel ? (currentLevel as any).levelNumber : null,
@@ -561,6 +568,18 @@ export class WorkflowService {
         status: a.status,
         levelStartTime: a.levelStartTime || a.tatStartTime
       })),
+      summary: {
+        approvedLevels: approvedLevelsCount,
+        totalLevels: (wf as any).totalLevels,
+        sla: overallSLA || {
+          elapsedHours: 0,
+          remainingHours: totalTatHours,
+          percentageUsed: 0,
+          remainingText: `${totalTatHours}h remaining`,
+          isPaused: false,
+          status: 'on_track'
+        }
+      },
       sla: overallSLA || {
         elapsedHours: 0,
         remainingHours: totalTatHours,
@@ -583,7 +602,7 @@ export class WorkflowService {
       limit,
       order: [['createdAt', 'DESC']],
       include: [
-        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
+        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
       ],
     });
     const data = await this.enrichForCards(rows);
@@ -632,16 +651,36 @@ export class WorkflowService {
     // Combine both sets of request IDs (unique)
     const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));

+    // Also include APPROVED requests where the user is the initiator (awaiting closure)
+    const approvedAsInitiator = await WorkflowRequest.findAll({
+      where: {
+        initiatorId: userId,
+        status: { [Op.in]: [WorkflowStatus.APPROVED as any, 'APPROVED'] as any },
+      },
+      attributes: ['requestId'],
+    });
+    const approvedInitiatorRequestIds = approvedAsInitiator.map((r: any) => r.requestId);
+
+    // Combine all request IDs (approver, spectator, and approved as initiator)
+    const allOpenRequestIds = Array.from(new Set([...allRequestIds, ...approvedInitiatorRequestIds]));
+
     const { rows, count } = await WorkflowRequest.findAndCountAll({
       where: {
-        requestId: { [Op.in]: allRequestIds.length ? allRequestIds : ['00000000-0000-0000-0000-000000000000'] },
-        status: { [Op.in]: [WorkflowStatus.PENDING as any, (WorkflowStatus as any).IN_PROGRESS ?? 'IN_PROGRESS'] as any },
+        requestId: { [Op.in]: allOpenRequestIds.length ? allOpenRequestIds : ['00000000-0000-0000-0000-000000000000'] },
+        status: { [Op.in]: [
+          WorkflowStatus.PENDING as any,
+          (WorkflowStatus as any).IN_PROGRESS ?? 'IN_PROGRESS',
+          WorkflowStatus.APPROVED as any, // Include APPROVED for initiators awaiting closure
+          'PENDING',
+          'IN_PROGRESS',
+          'APPROVED'
+        ] as any },
       },
       offset,
       limit,
       order: [['createdAt', 'DESC']],
       include: [
-        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
+        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
       ],
     });
     const data = await this.enrichForCards(rows);
@@ -679,22 +718,46 @@ export class WorkflowService {
     // Combine both sets of request IDs (unique)
     const allRequestIds = Array.from(new Set([...approverRequestIds, ...spectatorRequestIds]));

-    // Fetch closed/rejected requests
-    const { rows, count } = await WorkflowRequest.findAndCountAll({
-      where: {
-        requestId: { [Op.in]: allRequestIds.length ? allRequestIds : ['00000000-0000-0000-0000-000000000000'] },
+    // Build query conditions
+    const whereConditions: any[] = [];
+
+    // 1. Requests where user was approver/spectator (show APPROVED, REJECTED, CLOSED)
+    if (allRequestIds.length > 0) {
+      whereConditions.push({
+        requestId: { [Op.in]: allRequestIds },
         status: { [Op.in]: [
           WorkflowStatus.APPROVED as any,
           WorkflowStatus.REJECTED as any,
           (WorkflowStatus as any).CLOSED ?? 'CLOSED',
           'APPROVED',
-          'REJECTED'
-        ] as any },
+          'REJECTED',
+          'CLOSED'
+        ] as any }
+      });
+    }
+
+    // 2. Requests where user is initiator (show ONLY REJECTED or CLOSED, NOT APPROVED)
+    // APPROVED means initiator still needs to finalize conclusion
+    whereConditions.push({
+      initiatorId: userId,
+      status: { [Op.in]: [
+        WorkflowStatus.REJECTED as any,
+        (WorkflowStatus as any).CLOSED ?? 'CLOSED',
+        'REJECTED',
+        'CLOSED'
+      ] as any }
+    });
+
+    // Fetch closed/rejected/approved requests (including finalized ones)
+    const { rows, count } = await WorkflowRequest.findAndCountAll({
+      where: {
+        [Op.or]: whereConditions
       },
       offset,
       limit,
       order: [['createdAt', 'DESC']],
       include: [
-        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName'] },
+        { association: 'initiator', required: false, attributes: ['userId', 'email', 'displayName', 'department', 'designation'] },
       ],
     });
     const data = await this.enrichForCards(rows);
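The `[Op.or]` conditions in this query encode a visibility rule that is easier to see as a pure predicate: participants (approvers/spectators) see any finished request, while initiators only see requests whose conclusion they have already finalized, since an APPROVED request still awaits their closure step. A sketch under those assumptions (names illustrative):

```typescript
// Should this request appear in the user's "closed" tab?
function visibleInClosedTab(
  status: string,
  isParticipant: boolean, // user was an approver or spectator
  isInitiator: boolean
): boolean {
  if (isParticipant && ['APPROVED', 'REJECTED', 'CLOSED'].includes(status)) return true;
  // For initiators, APPROVED is deliberately excluded: the request is
  // not "closed" until they finalize the conclusion remark.
  if (isInitiator && ['REJECTED', 'CLOSED'].includes(status)) return true;
  return false;
}
```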
@@ -177,8 +177,13 @@ export async function addWorkingHours(start: Date | string, hoursToAdd: number):
     console.log(`[TAT Utils] Start time ${originalStart} was outside working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
   }

-  let remaining = hoursToAdd;
+  // Split into whole hours and fractional part
+  const wholeHours = Math.floor(hoursToAdd);
+  const fractionalHours = hoursToAdd - wholeHours;
+
+  let remaining = wholeHours;
+
+  // Add whole hours
   while (remaining > 0) {
     current = current.add(1, 'hour');
     if (isWorkingTime(current)) {
@@ -186,6 +191,27 @@ export async function addWorkingHours(start: Date | string, hoursToAdd: number):
     }
   }

+  // Add fractional part (convert to minutes)
+  if (fractionalHours > 0) {
+    const minutesToAdd = Math.round(fractionalHours * 60);
+    current = current.add(minutesToAdd, 'minute');
+
+    // Check if fractional addition pushed us outside working time
+    if (!isWorkingTime(current)) {
+      // Advance to next working period
+      while (!isWorkingTime(current)) {
+        current = current.add(1, 'hour');
+        const hour = current.hour();
+        const day = current.day();
+
+        // If before work start hour on a working day, jump to work start hour
+        if (day >= config.startDay && day <= config.endDay && !isHoliday(current) && hour < config.startHour) {
+          current = current.hour(config.startHour).minute(0).second(0).millisecond(0);
+        }
+      }
+    }
+  }
+
   return current;
 }
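The fix above works because fractional TAT values (e.g. 2.5 hours) were previously fed into an hour-granular loop, where the 0.5 was silently dropped and the deadline job never fired at the intended time. The split technique can be sketched with plain `Date` and a hard-coded 09:00 to 18:00, Monday to Friday window; the real service reads these bounds from configuration and also advances out-of-hours start times, which this sketch omits:

```typescript
const START_HOUR = 9;
const END_HOUR = 18;

// Fixed-window stand-in for the service's configurable check.
function isWorkingTime(d: Date): boolean {
  const day = d.getDay(); // 1..5 = Mon..Fri
  return day >= 1 && day <= 5 && d.getHours() >= START_HOUR && d.getHours() < END_HOUR;
}

function addWorkingHours(start: Date, hoursToAdd: number): Date {
  let current = new Date(start);
  const wholeHours = Math.floor(hoursToAdd);
  const fractionalMinutes = Math.round((hoursToAdd - wholeHours) * 60);

  // Whole hours: step forward, counting only hours inside the window.
  let remaining = wholeHours;
  while (remaining > 0) {
    current = new Date(current.getTime() + 3600_000);
    if (isWorkingTime(current)) remaining--;
  }

  // Fractional part: add as minutes, then push forward hour by hour
  // if the result escaped the working window.
  if (fractionalMinutes > 0) {
    current = new Date(current.getTime() + fractionalMinutes * 60_000);
    while (!isWorkingTime(current)) {
      current = new Date(current.getTime() + 3600_000);
    }
  }
  return current;
}
```

For example, 2.5 working hours from a Monday 10:00 start lands at 12:30 the same day, rather than 12:00 as the old whole-hour loop would give.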
@@ -227,8 +253,13 @@ export async function addWorkingHoursExpress(start: Date | string, hoursToAdd: n
     console.log(`[TAT Utils Express] Start time ${originalStart} was after working hours, advanced to ${current.format('YYYY-MM-DD HH:mm:ss')}`);
   }

-  let remaining = hoursToAdd;
+  // Split into whole hours and fractional part
+  const wholeHours = Math.floor(hoursToAdd);
+  const fractionalHours = hoursToAdd - wholeHours;
+
+  let remaining = wholeHours;
+
+  // Add whole hours
   while (remaining > 0) {
     current = current.add(1, 'hour');
     const hour = current.hour();
@@ -240,6 +271,19 @@ export async function addWorkingHoursExpress(start: Date | string, hoursToAdd: n
     }
   }

+  // Add fractional part (convert to minutes)
+  if (fractionalHours > 0) {
+    const minutesToAdd = Math.round(fractionalHours * 60);
+    current = current.add(minutesToAdd, 'minute');
+
+    // Check if fractional addition pushed us past working hours
+    if (current.hour() >= config.endHour) {
+      // Overflow to next day's working hours
+      const excessMinutes = (current.hour() - config.endHour) * 60 + current.minute();
+      current = current.add(1, 'day').hour(config.startHour).minute(excessMinutes).second(0).millisecond(0);
+    }
+  }
+
   return current;
 }
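The express variant handles fractional overflow differently from the standard one: instead of stepping hour by hour, it computes how many minutes spilled past the end of the working day and resumes that many minutes after the next day's start. That arithmetic in isolation (weekend and holiday handling omitted, as in the hunk above):

```typescript
// Minutes past endHour carry over to startHour of the next day.
// E.g. landing at 18:20 with an 18:00 cutoff resumes at 09:20.
function spillOver(
  hour: number,
  minute: number,
  endHour: number,
  startHour: number
): { hour: number; minute: number } {
  const excessMinutes = (hour - endHour) * 60 + minute;
  return {
    hour: startHour + Math.floor(excessMinutes / 60),
    minute: excessMinutes % 60,
  };
}
```

Note that the committed code passes `excessMinutes` straight to dayjs's `.minute()`, relying on dayjs to bubble values over 59 into the hour; the helper above does that normalization explicitly.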