AI Conclusion Remark Generation Documentation
Table of Contents
- Overview
- Architecture
- Configuration
- API Usage
- Implementation Details
- Prompt Engineering
- Error Handling
- Best Practices
- Troubleshooting
- Database Schema
- Examples
- Version History
- Support
Overview
The AI Conclusion Remark Generation feature automatically generates professional, context-aware conclusion remarks for workflow requests that have been approved or rejected. This feature uses AI providers (Claude, OpenAI, or Gemini) to analyze the entire request lifecycle and create a comprehensive summary suitable for permanent archiving.
Key Features
- Multi-Provider Support: Supports Claude (Anthropic), OpenAI (GPT-4o), and Google Gemini
- Context-Aware: Analyzes approval flow, work notes, documents, and activities
- Configurable: Admin-configurable max length, provider selection, and enable/disable
- Automatic Generation: Can be triggered automatically when a request is approved/rejected
- Manual Generation: Users can regenerate conclusions on demand
- Editable: Generated remarks can be edited before finalization
Use Cases
- Automatic Generation: When the final approver approves/rejects a request, an AI conclusion is generated in the background
- Manual Generation: Initiator can click "Generate AI Conclusion" button to create or regenerate a conclusion
- Finalization: Initiator reviews, edits (if needed), and finalizes the conclusion to close the request
Architecture
Component Diagram
┌─────────────────────────────────────────────────────────────┐
│ Frontend (React) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ useConclusionRemark Hook │ │
│ │ - handleGenerateConclusion() │ │
│ │ - handleFinalizeConclusion() │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ conclusionApi Service │ │
│ │ - generateConclusion(requestId) │ │
│ │ - finalizeConclusion(requestId, remark) │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
│ HTTP API
▼
┌─────────────────────────────────────────────────────────────┐
│ Backend (Node.js/Express) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ ConclusionController │ │
│ │ - generateConclusion() │ │
│ │ - finalizeConclusion() │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ AIService │ │
│ │ - generateConclusionRemark(context) │ │
│ │ - buildConclusionPrompt(context) │ │
│ │ - extractKeyPoints(remark) │ │
│ │ - calculateConfidence(remark, context) │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ AI Providers (Claude/OpenAI/Gemini) │ │
│ │ - ClaudeProvider │ │
│ │ - OpenAIProvider │ │
│ │ - GeminiProvider │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Database (PostgreSQL) │ │
│ │ - conclusion_remarks table │ │
│ │ - workflow_requests table │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Data Flow
1. Request Approval/Rejection → ApprovalService.approveLevel()
   - Automatically triggers AI generation in the background
   - Saves to the conclusion_remarks table
2. Manual Generation → ConclusionController.generateConclusion()
   - User clicks "Generate AI Conclusion"
   - Fetches request context
   - Calls AIService.generateConclusionRemark()
   - Returns the generated remark
3. Finalization → ConclusionController.finalizeConclusion()
   - User reviews and edits (optional)
   - Submits the final remark
   - Updates request status to CLOSED
   - Saves finalRemark to the database
Configuration
Environment Variables
# AI Provider Selection (claude, openai, gemini)
AI_PROVIDER=claude
# Claude Configuration
CLAUDE_API_KEY=your_claude_api_key
CLAUDE_MODEL=claude-sonnet-4-20250514
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o
# Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash-lite
Admin Configuration (Database)
The system reads configuration from the system_config table. Key settings:
| Config Key | Default | Description |
|---|---|---|
| AI_ENABLED | true | Enable/disable all AI features |
| AI_REMARK_GENERATION_ENABLED | true | Enable/disable conclusion generation |
| AI_PROVIDER | claude | Preferred AI provider (claude, openai, gemini) |
| AI_MAX_REMARK_LENGTH | 2000 | Maximum characters for generated remarks |
| CLAUDE_API_KEY | - | Claude API key (if using Claude) |
| CLAUDE_MODEL | claude-sonnet-4-20250514 | Claude model name |
| OPENAI_API_KEY | - | OpenAI API key (if using OpenAI) |
| OPENAI_MODEL | gpt-4o | OpenAI model name |
| GEMINI_API_KEY | - | Gemini API key (if using Gemini) |
| GEMINI_MODEL | gemini-2.0-flash-lite | Gemini model name |
Provider Priority
- Preferred Provider: Set via the AI_PROVIDER config
- Fallback Chain: If the preferred provider fails, the system tries: Claude → OpenAI → Gemini
- Environment Fallback: If database config fails, uses environment variables
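The fallback chain can be sketched as follows. This is an illustrative sketch, not the actual AIService implementation; the Provider shape and the generateWithFallback helper are assumptions made for the example.

```typescript
// Hypothetical sketch of the provider fallback chain (not the real AIService).
interface Provider {
  name: string;
  generate(prompt: string): Promise<string>;
}

async function generateWithFallback(
  providers: Provider[], // ordered: preferred provider first, then fallbacks
  prompt: string,
): Promise<{ text: string; provider: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return { text: await p.generate(prompt), provider: p.name };
    } catch (err) {
      // Record the failure and move on to the next provider in the chain
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All AI providers failed: ${errors.join('; ')}`);
}
```

If every configured provider fails, the caller surfaces the combined error and the user can fall back to writing the conclusion manually.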
API Usage
Generate AI Conclusion
Endpoint: POST /api/v1/conclusions/:requestId/generate
Authentication: Required (JWT token)
Authorization: Only the request initiator can generate conclusions
Request:
POST /api/v1/conclusions/REQ-2025-00123/generate
Authorization: Bearer <token>
Response (Success - 200):
{
"success": true,
"data": {
"conclusionId": "concl-123",
"aiGeneratedRemark": "This request for [title] was approved through [levels]...",
"keyDiscussionPoints": [
"Approved by John Doe at Level 1",
"TAT compliance: 85%",
"3 documents attached"
],
"confidence": 0.85,
"generatedAt": "2025-01-15T10:30:00Z",
"provider": "Claude (Anthropic)"
}
}
Response (Error - 403):
{
"success": false,
"error": "Only the initiator can generate conclusion remarks"
}
Response (Error - 400):
{
"success": false,
"error": "Conclusion can only be generated for approved or rejected requests"
}
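From a client, the generate call can be sketched with fetch. The token argument and the error mapping below are illustrative assumptions, not prescribed by the API.

```typescript
// Illustrative client-side call to the generate endpoint.
// Token handling and error mapping are assumptions for the example.
async function generateConclusion(requestId: string, token: string) {
  const res = await fetch(`/api/v1/conclusions/${requestId}/generate`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
  });
  const body = await res.json();
  if (!res.ok || !body.success) {
    // 403: caller is not the initiator; 400: request not yet approved/rejected
    throw new Error(body.error ?? `HTTP ${res.status}`);
  }
  return body.data; // { conclusionId, aiGeneratedRemark, keyDiscussionPoints, ... }
}
```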
Finalize Conclusion
Endpoint: POST /api/v1/conclusions/:requestId/finalize
Authentication: Required (JWT token)
Authorization: Only the request initiator can finalize
Request:
POST /api/v1/conclusions/REQ-2025-00123/finalize
Authorization: Bearer <token>
Content-Type: application/json
{
"finalRemark": "This request was approved through all levels. The implementation will begin next week."
}
Response (Success - 200):
{
"success": true,
"data": {
"conclusionId": "concl-123",
"finalRemark": "This request was approved through all levels...",
"finalizedAt": "2025-01-15T10:35:00Z",
"requestStatus": "CLOSED"
}
}
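Finalization from a client sends a JSON body in addition to the auth header. A hedged sketch (token handling is illustrative):

```typescript
// Illustrative client-side call to the finalize endpoint.
async function finalizeConclusion(requestId: string, finalRemark: string, token: string) {
  const res = await fetch(`/api/v1/conclusions/${requestId}/finalize`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ finalRemark }),
  });
  const body = await res.json();
  if (!res.ok || !body.success) throw new Error(body.error ?? `HTTP ${res.status}`);
  return body.data; // requestStatus should now be "CLOSED"
}
```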
Get Existing Conclusion
Endpoint: GET /api/v1/conclusions/:requestId
Response:
{
"success": true,
"data": {
"conclusionId": "concl-123",
"requestId": "REQ-2025-00123",
"aiGeneratedRemark": "Generated text...",
"finalRemark": "Finalized text...",
"isEdited": true,
"editCount": 2,
"aiModelUsed": "Claude (Anthropic)",
"aiConfidenceScore": 0.85,
"keyDiscussionPoints": ["Point 1", "Point 2"],
"generatedAt": "2025-01-15T10:30:00Z",
"finalizedAt": "2025-01-15T10:35:00Z"
}
}
Implementation Details
Context Data Structure
The generateConclusionRemark() method accepts a context object with the following structure:
interface ConclusionContext {
requestTitle: string;
requestDescription: string;
requestNumber: string;
priority: string;
approvalFlow: Array<{
levelNumber: number;
approverName: string;
status: 'APPROVED' | 'REJECTED' | 'PENDING' | 'IN_PROGRESS';
comments?: string;
actionDate?: string;
tatHours?: number;
elapsedHours?: number;
tatPercentageUsed?: number;
}>;
workNotes: Array<{
userName: string;
message: string;
createdAt: string;
}>;
documents: Array<{
fileName: string;
uploadedBy: string;
uploadedAt: string;
}>;
activities: Array<{
type: string;
action: string;
details: string;
timestamp: string;
}>;
rejectionReason?: string;
rejectedBy?: string;
}
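For illustration, a minimal object satisfying the ConclusionContext shape might look like this (all values are invented for the example):

```typescript
// Illustrative ConclusionContext value; the data is made up for the example.
const exampleContext = {
  requestTitle: 'Purchase 50 laptops for IT department',
  requestDescription: 'Hardware refresh for the IT team',
  requestNumber: 'REQ-2025-00123',
  priority: 'STANDARD',
  approvalFlow: [
    {
      levelNumber: 1,
      approverName: 'John Doe',
      status: 'APPROVED',
      comments: 'Within budget',
      actionDate: '2025-01-14T09:00:00Z',
      tatHours: 24,
      elapsedHours: 20,
      tatPercentageUsed: 83,
    },
  ],
  workNotes: [
    { userName: 'Jane Smith', message: 'Vendor quotes attached', createdAt: '2025-01-13T12:00:00Z' },
  ],
  documents: [
    { fileName: 'vendor-quote.pdf', uploadedBy: 'Jane Smith', uploadedAt: '2025-01-13T11:58:00Z' },
  ],
  activities: [
    { type: 'APPROVAL', action: 'APPROVED', details: 'Level 1 approved', timestamp: '2025-01-14T09:00:00Z' },
  ],
};
```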
Generation Process
1. Context Collection:
   - Fetches request details from workflow_requests
   - Fetches approval levels from approval_levels
   - Fetches work notes from work_notes
   - Fetches documents from documents
   - Fetches activities from activities
2. Prompt Building:
   - Constructs a detailed prompt with all context
   - Includes TAT risk information (ON_TRACK, AT_RISK, CRITICAL, BREACHED)
   - Includes rejection context if applicable
   - Sets target word count based on AI_MAX_REMARK_LENGTH
3. AI Generation:
   - Sends the prompt to the selected AI provider
   - Receives generated text
   - Validates length (trims if it exceeds the max)
   - Extracts key points
   - Calculates a confidence score
4. Storage:
   - Saves to the conclusion_remarks table
   - Links to workflow_requests via requestId
   - Stores metadata (provider, confidence, key points)
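The length-validation step can be sketched like this. The word-boundary trimming heuristic is an assumption for the example; the production code may trim differently.

```typescript
// Hypothetical sketch of the length-validation step: trim on a word
// boundary so a truncated remark does not end mid-word.
function enforceMaxLength(remark: string, maxLength: number): string {
  if (remark.length <= maxLength) return remark;
  const cut = remark.slice(0, maxLength);
  const lastSpace = cut.lastIndexOf(' ');
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut).trimEnd();
}
```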
Automatic Generation
When a request is approved/rejected, ApprovalService.approveLevel() automatically generates a conclusion in the background:
// In ApprovalService.approveLevel()
if (isFinalApproval) {
  // Fire-and-forget background task - doesn't block the approval response.
  // The try/catch prevents an unhandled promise rejection if generation fails.
  (async () => {
    try {
      const context = { /* ... */ };
      const aiResult = await aiService.generateConclusionRemark(context);
      await ConclusionRemark.create({ /* ...aiResult fields... */ });
    } catch (err) {
      console.error('Background conclusion generation failed:', err);
    }
  })();
}
Prompt Engineering
Prompt Structure
The prompt is designed to generate professional, archival-quality conclusions:
You are writing a closure summary for a workflow request at Royal Enfield.
Write a practical, realistic conclusion that an employee would write when closing a request.
**Request:**
[Request Number] - [Title]
Description: [Description]
Priority: [Priority]
**What Happened:**
[Approval Summary with TAT info]
[Rejection Context if applicable]
**Discussions (if any):**
[Work Notes Summary]
**Documents:**
[Document List]
**YOUR TASK:**
Write a brief, professional conclusion (approximately X words, max Y characters) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Mentions if any approval levels were AT_RISK, CRITICAL, or BREACHED
- Notes the outcome and next steps (if applicable)
- Uses clear, factual language without time-specific references
- Is suitable for permanent archiving and future reference
- Sounds natural and human-written (not AI-generated)
**IMPORTANT:**
- Be concise and direct
- MUST stay within [maxLength] characters limit
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
- Write like a professional documenting a completed process
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions
Key Prompt Features
- TAT Risk Integration: Includes TAT percentage usage and risk status for each approval level
- Rejection Handling: Different instructions for rejected vs approved requests
- Length Control: Dynamically sets target word count based on config
- Tone Guidelines: Emphasizes natural, professional, archival-quality writing
- Context Awareness: Includes all relevant data (approvals, notes, documents, activities)
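The dynamic length control can be sketched as follows. The 6-characters-per-word estimate and the exact instruction wording are assumptions for illustration, not the production values.

```typescript
// Hypothetical sketch of the length-control portion of prompt building.
// The chars-per-word estimate (6) is an assumption for illustration.
function targetWordCount(maxLength: number): number {
  return Math.floor(maxLength / 6);
}

function lengthInstruction(maxLength: number): string {
  return `Write a brief, professional conclusion (approximately ${targetWordCount(maxLength)} words, max ${maxLength} characters).`;
}
```

With the default AI_MAX_REMARK_LENGTH of 2000, this estimate yields a target of roughly 333 words.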
Provider-Specific Settings
| Provider | Model | Max Tokens | Temperature | Notes |
|---|---|---|---|---|
| Claude | claude-sonnet-4-20250514 | 2048 | 0.3 | Best for longer, detailed conclusions |
| OpenAI | gpt-4o | 1024 | 0.3 | Balanced performance |
| Gemini | gemini-2.0-flash-lite | - | 0.3 | Fast and cost-effective |
Error Handling
Common Errors
1. No AI Provider Available
   - Error: AI features are currently unavailable. Please configure an AI provider...
   - Solution: Configure API keys in the admin panel or environment variables
2. Provider API Error
   - Error: AI generation failed (Claude): API rate limit exceeded
   - Solution: Check API key validity, rate limits, and provider status
3. Request Not Found
   - Error: Request not found
   - Solution: Verify the requestId is correct and the request exists
4. Unauthorized Access
   - Error: Only the initiator can generate conclusion remarks
   - Solution: Ensure the user is the request initiator
5. Invalid Request Status
   - Error: Conclusion can only be generated for approved or rejected requests
   - Solution: The request must be in APPROVED or REJECTED status
Error Recovery
- Automatic Fallback: If preferred provider fails, system tries fallback providers
- Graceful Degradation: If AI generation fails, user can write conclusion manually
- Retry Logic: Manual regeneration is always available
- Logging: All errors are logged with context for debugging
Best Practices
For Developers
- Error Handling: Always wrap AI calls in try-catch blocks
- Async Operations: Use background tasks for automatic generation (don't block approval)
- Validation: Validate context data before sending to AI
- Logging: Log all AI operations for debugging and monitoring
- Configuration: Use database config for flexibility (not hardcoded values)
For Administrators
- API Key Management: Store API keys securely in database or environment variables
- Provider Selection: Choose provider based on:
- Claude: Best quality, higher cost
- OpenAI: Balanced quality/cost
- Gemini: Fast, cost-effective
- Length Configuration: Set AI_MAX_REMARK_LENGTH based on your archival needs
- Monitoring: Monitor AI usage and costs through provider dashboards
- Testing: Test with sample requests before enabling in production
For Users
- Review Before Finalizing: Always review AI-generated conclusions
- Edit if Needed: Don't hesitate to edit the generated text
- Regenerate: If not satisfied, regenerate with updated context
- Finalize Promptly: Finalize conclusions soon after generation for accuracy
Troubleshooting
Issue: AI Generation Not Working
Symptoms: Error message "AI features are currently unavailable"
Diagnosis:
- Check the AI_ENABLED config value
- Check the AI_REMARK_GENERATION_ENABLED config value
- Verify API keys are configured
- Check provider initialization logs
Solution:
# Check logs
tail -f logs/app.log | grep "AI Service"
# Verify config
SELECT * FROM system_config WHERE config_key LIKE 'AI_%';
Issue: Generated Text Too Long/Short
Symptoms: Generated remarks exceed or are much shorter than expected
Solution:
- Adjust AI_MAX_REMARK_LENGTH in the admin config
- Check the prompt's target word count calculation
- Verify provider max_tokens setting
Issue: Poor Quality Conclusions
Symptoms: Generated text is generic or inaccurate
Solution:
- Verify context data is complete (approvals, notes, documents)
- Check prompt includes all relevant information
- Try different provider (Claude generally produces better quality)
- Adjust temperature if needed (lower = more focused)
Issue: Slow Generation
Symptoms: AI generation takes too long
Solution:
- Check provider API status
- Verify network connectivity
- Consider using faster provider (Gemini)
- Check for rate limiting
Issue: Provider Not Initializing
Symptoms: Provider shows as "None" in logs
Diagnosis:
- Check API key is valid
- Verify SDK package is installed
- Check environment variables
Solution:
# Install missing SDK
npm install @anthropic-ai/sdk # For Claude
npm install openai # For OpenAI
npm install @google/generative-ai # For Gemini
# Verify API key
echo $CLAUDE_API_KEY # Should show key
Database Schema
conclusion_remarks Table
CREATE TABLE conclusion_remarks (
conclusion_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
request_id VARCHAR(50) NOT NULL UNIQUE,
ai_generated_remark TEXT,
ai_model_used VARCHAR(100),
ai_confidence_score DECIMAL(3,2),
final_remark TEXT,
edited_by UUID,
is_edited BOOLEAN DEFAULT false,
edit_count INTEGER DEFAULT 0,
approval_summary JSONB,
document_summary JSONB,
key_discussion_points TEXT[],
generated_at TIMESTAMP,
finalized_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
FOREIGN KEY (request_id) REFERENCES workflow_requests(request_id),
FOREIGN KEY (edited_by) REFERENCES users(user_id)
);
Key Fields
- ai_generated_remark: Original AI-generated text
- final_remark: User-edited/finalized text
- ai_confidence_score: Quality score (0.0 - 1.0)
- key_discussion_points: Extracted key points array
- approval_summary: JSON with approval statistics
- document_summary: JSON with document information
Examples
Example 1: Approved Request Conclusion
Context:
- Request: "Purchase 50 laptops for IT department"
- Priority: STANDARD
- 3 approval levels, all approved
- TAT: 100%, 85%, 90% usage
- 2 documents attached
Generated Conclusion:
This request for the purchase of 50 laptops for the IT department was approved
through all three approval levels. The request was reviewed and approved by
John Doe at Level 1, Jane Smith at Level 2, and Bob Johnson at Level 3. All
approval levels completed within their respective TAT windows, with Level 1
using 100% of allocated time. The purchase order has been generated and
forwarded to the procurement team for processing. Implementation is expected
to begin within the next two weeks.
Example 2: Rejected Request Conclusion
Context:
- Request: "Implement new HR policy"
- Priority: EXPRESS
- Rejected at Level 2 by Jane Smith
- Reason: "Budget constraints"
Generated Conclusion:
This request for implementing a new HR policy was reviewed through two approval
levels but was ultimately rejected. The request was approved by John Doe at
Level 1, but rejected by Jane Smith at Level 2 due to budget constraints.
The rejection was communicated to the initiator, and alternative approaches
are being considered. The request documentation has been archived for future
reference.
Version History
- v1.0.0 (2025-01-15): Initial implementation
- Multi-provider support (Claude, OpenAI, Gemini)
- Automatic and manual generation
- TAT risk integration
- Key points extraction
- Confidence scoring
Support
For issues or questions:
- Check logs: logs/app.log
- Review the admin configuration panel
- Contact development team
- Refer to provider documentation
Last Updated: January 2025
Maintained By: Royal Enfield Development Team