# AI Conclusion Remark Generation Documentation

## Table of Contents

1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Configuration](#configuration)
4. [API Usage](#api-usage)
5. [Implementation Details](#implementation-details)
6. [Prompt Engineering](#prompt-engineering)
7. [Error Handling](#error-handling)
8. [Best Practices](#best-practices)
9. [Troubleshooting](#troubleshooting)
10. [Database Schema](#database-schema)
11. [Examples](#examples)
12. [Version History](#version-history)
13. [Support](#support)

---

## Overview

The AI Conclusion Remark Generation feature automatically generates professional, context-aware conclusion remarks for workflow requests that have been approved or rejected. It uses an AI provider (Claude, OpenAI, or Gemini) to analyze the entire request lifecycle and produce a comprehensive summary suitable for permanent archiving.

### Key Features

- **Multi-Provider Support**: Works with Claude (Anthropic), OpenAI (GPT-4o), and Google Gemini
- **Context-Aware**: Analyzes the approval flow, work notes, documents, and activities
- **Configurable**: Admin-configurable maximum length, provider selection, and enable/disable flags
- **Automatic Generation**: Triggered automatically when a request is approved or rejected
- **Manual Generation**: Users can regenerate conclusions on demand
- **Editable**: Generated remarks can be edited before finalization

### Use Cases

1. **Automatic Generation**: When the final approver approves or rejects a request, an AI conclusion is generated in the background
2. **Manual Generation**: The initiator can click the "Generate AI Conclusion" button to create or regenerate a conclusion
3. **Finalization**: The initiator reviews, edits (if needed), and finalizes the conclusion to close the request

---

## Architecture

### Component Diagram

```
┌──────────────────────────────────────────────────┐
│                 Frontend (React)                 │
│  ┌────────────────────────────────────────────┐  │
│  │ useConclusionRemark Hook                   │  │
│  │ - handleGenerateConclusion()               │  │
│  │ - handleFinalizeConclusion()               │  │
│  └────────────────────────────────────────────┘  │
│                        │                         │
│                        ▼                         │
│  ┌────────────────────────────────────────────┐  │
│  │ conclusionApi Service                      │  │
│  │ - generateConclusion(requestId)            │  │
│  │ - finalizeConclusion(requestId, remark)    │  │
│  └────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────┘
                         │
                         │ HTTP API
                         ▼
┌──────────────────────────────────────────────────┐
│            Backend (Node.js/Express)             │
│  ┌────────────────────────────────────────────┐  │
│  │ ConclusionController                       │  │
│  │ - generateConclusion()                     │  │
│  │ - finalizeConclusion()                     │  │
│  └────────────────────────────────────────────┘  │
│                        │                         │
│                        ▼                         │
│  ┌────────────────────────────────────────────┐  │
│  │ AIService                                  │  │
│  │ - generateConclusionRemark(context)        │  │
│  │ - buildConclusionPrompt(context)           │  │
│  │ - extractKeyPoints(remark)                 │  │
│  │ - calculateConfidence(remark, context)     │  │
│  └────────────────────────────────────────────┘  │
│                        │                         │
│                        ▼                         │
│  ┌────────────────────────────────────────────┐  │
│  │ AI Providers (Claude/OpenAI/Gemini)        │  │
│  │ - ClaudeProvider                           │  │
│  │ - OpenAIProvider                           │  │
│  │ - GeminiProvider                           │  │
│  └────────────────────────────────────────────┘  │
│                        │                         │
│                        ▼                         │
│  ┌────────────────────────────────────────────┐  │
│  │ Database (PostgreSQL)                      │  │
│  │ - conclusion_remarks table                 │  │
│  │ - workflow_requests table                  │  │
│  └────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────┘
```

### Data Flow

1. **Request Approval/Rejection** → `ApprovalService.approveLevel()`
   - Automatically triggers AI generation in the background
   - Saves the result to the `conclusion_remarks` table

2. **Manual Generation** → `ConclusionController.generateConclusion()`
   - User clicks "Generate AI Conclusion"
   - Fetches the request context
   - Calls `AIService.generateConclusionRemark()`
   - Returns the generated remark

3. **Finalization** → `ConclusionController.finalizeConclusion()`
   - User reviews and edits (optional)
   - Submits the final remark
   - Updates the request status to `CLOSED`
   - Saves `finalRemark` to the database

The frontend calls behind steps 2 and 3 are sketched below.
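
The `conclusionApi` service named in the component diagram is a thin wrapper around the endpoints documented under [API Usage](#api-usage). The sketch below is illustrative rather than the actual implementation: the base path constant, the bearer-token retrieval, and the `ApiEnvelope` helper type are assumptions, while the endpoint paths and the `{ success, data, error }` response shape come from this document.

```typescript
// Illustrative sketch of the conclusionApi service named in the component diagram.
// The endpoint paths and the { success, data, error } envelope come from the API Usage
// section below; the base path and token handling are assumptions about the frontend.

interface ApiEnvelope<T> {
  success: boolean;
  data?: T;
  error?: string;
}

const BASE_URL = '/api/v1/conclusions';

async function call<T>(path: string, method: 'GET' | 'POST', body?: unknown): Promise<T> {
  const token = localStorage.getItem('accessToken'); // assumption: where the JWT lives
  const res = await fetch(`${BASE_URL}${path}`, {
    method,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  const envelope = (await res.json()) as ApiEnvelope<T>;
  if (!res.ok || !envelope.success || envelope.data === undefined) {
    // Surfaces the backend error messages documented under "API Usage"
    throw new Error(envelope.error ?? `Request failed with status ${res.status}`);
  }
  return envelope.data;
}

export const conclusionApi = {
  // POST /api/v1/conclusions/:requestId/generate
  generateConclusion: (requestId: string) => call(`/${requestId}/generate`, 'POST'),

  // POST /api/v1/conclusions/:requestId/finalize
  finalizeConclusion: (requestId: string, finalRemark: string) =>
    call(`/${requestId}/finalize`, 'POST', { finalRemark }),

  // GET /api/v1/conclusions/:requestId
  getConclusion: (requestId: string) => call(`/${requestId}`, 'GET'),
};
```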

---

## Configuration

### Environment Variables

```bash
# AI Provider Selection (claude, openai, gemini)
AI_PROVIDER=claude

# Claude Configuration
CLAUDE_API_KEY=your_claude_api_key
CLAUDE_MODEL=claude-sonnet-4-20250514

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o

# Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash-lite
```

### Admin Configuration (Database)

The system reads configuration from the `system_config` table. Key settings:

| Config Key | Default | Description |
|------------|---------|-------------|
| `AI_ENABLED` | `true` | Enable/disable all AI features |
| `AI_REMARK_GENERATION_ENABLED` | `true` | Enable/disable conclusion generation |
| `AI_PROVIDER` | `claude` | Preferred AI provider (claude, openai, gemini) |
| `AI_MAX_REMARK_LENGTH` | `2000` | Maximum characters for generated remarks |
| `CLAUDE_API_KEY` | - | Claude API key (if using Claude) |
| `CLAUDE_MODEL` | `claude-sonnet-4-20250514` | Claude model name |
| `OPENAI_API_KEY` | - | OpenAI API key (if using OpenAI) |
| `OPENAI_MODEL` | `gpt-4o` | OpenAI model name |
| `GEMINI_API_KEY` | - | Gemini API key (if using Gemini) |
| `GEMINI_MODEL` | `gemini-2.0-flash-lite` | Gemini model name |

### Provider Priority

1. **Preferred Provider**: Set via the `AI_PROVIDER` config
2. **Fallback Chain**: If the preferred provider fails, the system tries Claude → OpenAI → Gemini
3. **Environment Fallback**: If database config cannot be read, environment variables are used

A sketch of this selection logic follows.
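
The snippet below is a minimal sketch of that fallback behaviour, assuming a common `AIProvider` interface. The real `AIService` differs in detail; the helper shapes here (`AIProvider`, `FALLBACK_ORDER`, `generateWithFallback`) are illustrative only.

```typescript
// Minimal sketch of the provider-priority logic described above.
// Provider names and config keys mirror this document; the helper shapes are assumptions.

type ProviderName = 'claude' | 'openai' | 'gemini';

interface AIProvider {
  name: ProviderName;
  isConfigured(): boolean;                    // e.g. an API key is present
  generate(prompt: string): Promise<string>;
}

const FALLBACK_ORDER: ProviderName[] = ['claude', 'openai', 'gemini'];

async function generateWithFallback(
  providers: Record<ProviderName, AIProvider>,
  preferred: ProviderName, // from AI_PROVIDER (database config, else environment)
  prompt: string
): Promise<string> {
  // Try the preferred provider first, then the remaining ones in fallback order.
  const order = [preferred, ...FALLBACK_ORDER.filter((p) => p !== preferred)];

  let lastError: unknown;
  for (const name of order) {
    const provider = providers[name];
    if (!provider?.isConfigured()) continue; // skip providers without API keys
    try {
      return await provider.generate(prompt);
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next provider
    }
  }
  throw new Error(`AI generation failed for all providers: ${String(lastError)}`);
}
```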

---

## API Usage

### Generate AI Conclusion

**Endpoint**: `POST /api/v1/conclusions/:requestId/generate`

**Authentication**: Required (JWT token)

**Authorization**: Only the request initiator can generate conclusions

**Request**:

```http
POST /api/v1/conclusions/REQ-2025-00123/generate
Authorization: Bearer <token>
```

**Response** (Success - 200):

```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "aiGeneratedRemark": "This request for [title] was approved through [levels]...",
    "keyDiscussionPoints": [
      "Approved by John Doe at Level 1",
      "TAT compliance: 85%",
      "3 documents attached"
    ],
    "confidence": 0.85,
    "generatedAt": "2025-01-15T10:30:00Z",
    "provider": "Claude (Anthropic)"
  }
}
```

**Response** (Error - 403):

```json
{
  "success": false,
  "error": "Only the initiator can generate conclusion remarks"
}
```

**Response** (Error - 400):

```json
{
  "success": false,
  "error": "Conclusion can only be generated for approved or rejected requests"
}
```

### Finalize Conclusion

**Endpoint**: `POST /api/v1/conclusions/:requestId/finalize`

**Authentication**: Required (JWT token)

**Authorization**: Only the request initiator can finalize

**Request**:

```http
POST /api/v1/conclusions/REQ-2025-00123/finalize
Authorization: Bearer <token>
Content-Type: application/json

{
  "finalRemark": "This request was approved through all levels. The implementation will begin next week."
}
```

**Response** (Success - 200):

```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "finalRemark": "This request was approved through all levels...",
    "finalizedAt": "2025-01-15T10:35:00Z",
    "requestStatus": "CLOSED"
  }
}
```

### Get Existing Conclusion

**Endpoint**: `GET /api/v1/conclusions/:requestId`

**Response**:

```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "requestId": "REQ-2025-00123",
    "aiGeneratedRemark": "Generated text...",
    "finalRemark": "Finalized text...",
    "isEdited": true,
    "editCount": 2,
    "aiModelUsed": "Claude (Anthropic)",
    "aiConfidenceScore": 0.85,
    "keyDiscussionPoints": ["Point 1", "Point 2"],
    "generatedAt": "2025-01-15T10:30:00Z",
    "finalizedAt": "2025-01-15T10:35:00Z"
  }
}
```

---

## Implementation Details

### Context Data Structure

The `generateConclusionRemark()` method accepts a context object with the following structure:

```typescript
interface ConclusionContext {
  requestTitle: string;
  requestDescription: string;
  requestNumber: string;
  priority: string;
  approvalFlow: Array<{
    levelNumber: number;
    approverName: string;
    status: 'APPROVED' | 'REJECTED' | 'PENDING' | 'IN_PROGRESS';
    comments?: string;
    actionDate?: string;
    tatHours?: number;
    elapsedHours?: number;
    tatPercentageUsed?: number;
  }>;
  workNotes: Array<{
    userName: string;
    message: string;
    createdAt: string;
  }>;
  documents: Array<{
    fileName: string;
    uploadedBy: string;
    uploadedAt: string;
  }>;
  activities: Array<{
    type: string;
    action: string;
    details: string;
    timestamp: string;
  }>;
  rejectionReason?: string;
  rejectedBy?: string;
}
```
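
For reference, a populated context matching this interface might look like the following. The example is purely illustrative; all values are made up to show the shape.

```typescript
// Illustrative example only -- every value is hypothetical and exists to show the shape.
const exampleContext: ConclusionContext = {
  requestTitle: 'Purchase 50 laptops for IT department',
  requestDescription: 'Hardware refresh for the IT support team.',
  requestNumber: 'REQ-2025-00123',
  priority: 'STANDARD',
  approvalFlow: [
    {
      levelNumber: 1,
      approverName: 'John Doe',
      status: 'APPROVED',
      comments: 'Within budget, approved.',
      actionDate: '2025-01-14T09:00:00Z',
      tatHours: 24,
      elapsedHours: 24,
      tatPercentageUsed: 100,
    },
  ],
  workNotes: [
    { userName: 'Jane Smith', message: 'Please attach the vendor quote.', createdAt: '2025-01-13T12:00:00Z' },
  ],
  documents: [
    { fileName: 'vendor-quote.pdf', uploadedBy: 'Initiator', uploadedAt: '2025-01-13T13:00:00Z' },
  ],
  activities: [
    { type: 'APPROVAL', action: 'APPROVED', details: 'Level 1 approved', timestamp: '2025-01-14T09:00:00Z' },
  ],
};
```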

### Generation Process

1. **Context Collection**:
   - Fetches request details from `workflow_requests`
   - Fetches approval levels from `approval_levels`
   - Fetches work notes from `work_notes`
   - Fetches documents from `documents`
   - Fetches activities from `activities`

2. **Prompt Building**:
   - Constructs a detailed prompt with all context
   - Includes TAT risk information (ON_TRACK, AT_RISK, CRITICAL, BREACHED)
   - Includes rejection context if applicable
   - Sets the target word count based on `AI_MAX_REMARK_LENGTH`

3. **AI Generation**:
   - Sends the prompt to the selected AI provider
   - Receives the generated text
   - Validates the length (trims if it exceeds the maximum)
   - Extracts key points
   - Calculates a confidence score

4. **Storage**:
   - Saves to the `conclusion_remarks` table
   - Links to `workflow_requests` via `requestId`
   - Stores metadata (provider, confidence, key points)

The sketch below ties these four steps together.
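
This is a simplified sketch of how the four steps fit together, not the actual service code. `generateConclusionRemark()` is the `AIService` method named in the architecture diagram; the surrounding dependency interface (`fetchContext`, `saveRemark`) stands in for the real repository layer and is an assumption.

```typescript
// Sketch of the four-step pipeline described above, written against small interfaces
// so the flow is visible; the real AIService and data-access layer differ.

interface GenerationResult {
  remark: string;
  keyPoints: string[];
  confidence: number;
  provider: string;
}

interface ConclusionDeps {
  fetchContext(requestId: string): Promise<ConclusionContext>;                  // step 1
  generateConclusionRemark(ctx: ConclusionContext): Promise<GenerationResult>;  // steps 2-3
  saveRemark(requestId: string, result: GenerationResult): Promise<void>;       // step 4
}

async function generateConclusion(
  deps: ConclusionDeps,
  requestId: string
): Promise<GenerationResult> {
  // 1. Context collection: workflow_requests, approval_levels, work_notes,
  //    documents and activities rolled into one ConclusionContext object.
  const context = await deps.fetchContext(requestId);

  // 2 + 3. Prompt building and AI generation: provider selection, length trimming,
  //    key-point extraction and confidence scoring happen inside the AI service.
  const result = await deps.generateConclusionRemark(context);

  // 4. Storage: persist the remark and its metadata against the request.
  await deps.saveRemark(requestId, result);

  return result;
}
```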

### Automatic Generation

When a request is approved or rejected, `ApprovalService.approveLevel()` generates a conclusion in the background. Failures in this background task are logged and never block the approval response (the `try/catch` and `logger` shown here are illustrative):

```typescript
// In ApprovalService.approveLevel()
if (isFinalApproval) {
  // Background task - doesn't block the approval response
  (async () => {
    try {
      const context = { /* ... */ };
      const aiResult = await aiService.generateConclusionRemark(context);
      await ConclusionRemark.create({ /* ... */ });
    } catch (error) {
      // Generation failures are logged; the approval itself has already succeeded
      logger.error('Background conclusion generation failed', error);
    }
  })();
}
```

---

## Prompt Engineering

### Prompt Structure

The prompt is designed to generate professional, archival-quality conclusions:

```
You are writing a closure summary for a workflow request at Royal Enfield.
Write a practical, realistic conclusion that an employee would write when closing a request.

**Request:**
[Request Number] - [Title]
Description: [Description]
Priority: [Priority]

**What Happened:**
[Approval Summary with TAT info]
[Rejection Context if applicable]

**Discussions (if any):**
[Work Notes Summary]

**Documents:**
[Document List]

**YOUR TASK:**
Write a brief, professional conclusion (approximately X words, max Y characters) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Mentions if any approval levels were AT_RISK, CRITICAL, or BREACHED
- Notes the outcome and next steps (if applicable)
- Uses clear, factual language without time-specific references
- Is suitable for permanent archiving and future reference
- Sounds natural and human-written (not AI-generated)

**IMPORTANT:**
- Be concise and direct
- MUST stay within [maxLength] characters limit
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
- Write like a professional documenting a completed process
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions
```

### Key Prompt Features

1. **TAT Risk Integration**: Includes TAT percentage usage and risk status for each approval level
2. **Rejection Handling**: Different instructions for rejected vs. approved requests
3. **Length Control**: Dynamically sets the target word count based on config (sketched below)
4. **Tone Guidelines**: Emphasizes natural, professional, archival-quality writing
5. **Context Awareness**: Includes all relevant data (approvals, notes, documents, activities)
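
As an illustration of the length-control step, the "approximately X words, max Y characters" placeholders in the prompt can be derived from `AI_MAX_REMARK_LENGTH`. The 6-characters-per-word heuristic below is an assumption, not necessarily the exact formula used by `buildConclusionPrompt()`.

```typescript
// Illustrative only: deriving the prompt's word and character targets from config.
// The average-word-length heuristic is an assumption.

function lengthTargets(maxRemarkLength: number): { targetWords: number; maxChars: number } {
  const AVG_CHARS_PER_WORD = 6; // rough English average, including the trailing space
  return {
    targetWords: Math.floor(maxRemarkLength / AVG_CHARS_PER_WORD),
    maxChars: maxRemarkLength,
  };
}

// Example: AI_MAX_REMARK_LENGTH = 2000  ->  { targetWords: 333, maxChars: 2000 }
```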

### Provider-Specific Settings

| Provider | Model | Max Tokens | Temperature | Notes |
|----------|-------|------------|-------------|-------|
| Claude | claude-sonnet-4-20250514 | 2048 | 0.3 | Best for longer, detailed conclusions |
| OpenAI | gpt-4o | 1024 | 0.3 | Balanced performance |
| Gemini | gemini-2.0-flash-lite | - | 0.3 | Fast and cost-effective |
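
To show how these settings map onto an actual provider call, here is a minimal sketch using the `@anthropic-ai/sdk` messages API with the Claude row of the table. The real `ClaudeProvider` differs; the OpenAI and Gemini providers follow the same pattern with their own SDKs (`openai`, `@google/generative-ai`).

```typescript
import Anthropic from '@anthropic-ai/sdk';

// Sketch of a Claude call using the settings from the table above. The real provider
// classes differ; this only shows where model, max_tokens and temperature plug in.
async function generateWithClaude(prompt: string): Promise<string> {
  const client = new Anthropic({ apiKey: process.env.CLAUDE_API_KEY });

  const response = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    temperature: 0.3,
    messages: [{ role: 'user', content: prompt }],
  });

  // The response carries an array of content blocks; take the text of the first one.
  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}
```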

---

## Error Handling

### Common Errors

1. **No AI Provider Available**

   ```
   Error: AI features are currently unavailable. Please configure an AI provider...
   ```

   **Solution**: Configure API keys in admin panel or environment variables

2. **Provider API Error**

   ```
   Error: AI generation failed (Claude): API rate limit exceeded
   ```

   **Solution**: Check API key validity, rate limits, and provider status

3. **Request Not Found**

   ```
   Error: Request not found
   ```

   **Solution**: Verify requestId is correct and request exists

4. **Unauthorized Access**

   ```
   Error: Only the initiator can generate conclusion remarks
   ```

   **Solution**: Ensure user is the request initiator

5. **Invalid Request Status**

   ```
   Error: Conclusion can only be generated for approved or rejected requests
   ```

   **Solution**: Request must be in APPROVED or REJECTED status

### Error Recovery

- **Automatic Fallback**: If the preferred provider fails, the system tries the fallback providers
- **Graceful Degradation**: If AI generation fails, the user can write the conclusion manually
- **Retry Logic**: Manual regeneration is always available
- **Logging**: All errors are logged with context for debugging

---

## Best Practices

### For Developers

1. **Error Handling**: Always wrap AI calls in try-catch blocks (see the sketch below)
2. **Async Operations**: Use background tasks for automatic generation (don't block the approval)
3. **Validation**: Validate context data before sending it to the AI
4. **Logging**: Log all AI operations for debugging and monitoring
5. **Configuration**: Use database config for flexibility (not hardcoded values)
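
A minimal sketch of the wrapping pattern behind points 1 and 4, combined with the graceful degradation described under Error Recovery. The `logger` parameter stands in for whatever logging facility the codebase actually uses.

```typescript
// Illustrative pattern: wrap the AI call, log the failure, and degrade gracefully
// (return null so the initiator can write the remark manually).

async function tryGenerateRemark(
  generate: () => Promise<string>,
  logger: { error: (msg: string, err: unknown) => void }
): Promise<string | null> {
  try {
    return await generate();
  } catch (err) {
    logger.error('AI conclusion generation failed; falling back to manual remark', err);
    return null; // caller treats null as "no AI draft available"
  }
}
```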

### For Administrators

1. **API Key Management**: Store API keys securely in database or environment variables
2. **Provider Selection**: Choose provider based on:
   - **Claude**: Best quality, higher cost
   - **OpenAI**: Balanced quality/cost
   - **Gemini**: Fast, cost-effective
3. **Length Configuration**: Set `AI_MAX_REMARK_LENGTH` based on your archival needs
4. **Monitoring**: Monitor AI usage and costs through provider dashboards
5. **Testing**: Test with sample requests before enabling in production

### For Users

1. **Review Before Finalizing**: Always review AI-generated conclusions
2. **Edit if Needed**: Don't hesitate to edit the generated text
3. **Regenerate**: If not satisfied, regenerate with updated context
4. **Finalize Promptly**: Finalize conclusions soon after generation for accuracy

---
## Troubleshooting
|
|
|
|
### Issue: AI Generation Not Working
|
|
|
|
**Symptoms**: Error message "AI features are currently unavailable"
|
|
|
|
**Diagnosis**:
|
|
1. Check `AI_ENABLED` config value
|
|
2. Check `AI_REMARK_GENERATION_ENABLED` config value
|
|
3. Verify API keys are configured
|
|
4. Check provider initialization logs
|
|
|
|
**Solution**:
|
|
```bash
|
|
# Check logs
|
|
tail -f logs/app.log | grep "AI Service"
|
|
|
|
# Verify config
|
|
SELECT * FROM system_config WHERE config_key LIKE 'AI_%';
|
|
```
|
|
|
|

### Issue: Generated Text Too Long/Short

**Symptoms**: Generated remarks exceed or are much shorter than expected

**Solution**:

1. Adjust `AI_MAX_REMARK_LENGTH` in admin config
2. Check prompt target word count calculation
3. Verify provider max_tokens setting

### Issue: Poor Quality Conclusions

**Symptoms**: Generated text is generic or inaccurate

**Solution**:

1. Verify context data is complete (approvals, notes, documents)
2. Check prompt includes all relevant information
3. Try different provider (Claude generally produces better quality)
4. Adjust temperature if needed (lower = more focused)

### Issue: Slow Generation

**Symptoms**: AI generation takes too long

**Solution**:

1. Check provider API status
2. Verify network connectivity
3. Consider using faster provider (Gemini)
4. Check for rate limiting

### Issue: Provider Not Initializing

**Symptoms**: Provider shows as "None" in logs

**Diagnosis**:

1. Check API key is valid
2. Verify SDK package is installed
3. Check environment variables

**Solution**:

```bash
# Install missing SDK
npm install @anthropic-ai/sdk        # For Claude
npm install openai                   # For OpenAI
npm install @google/generative-ai    # For Gemini

# Verify API key
echo $CLAUDE_API_KEY  # Should show key
```

---

## Database Schema

### conclusion_remarks Table

```sql
CREATE TABLE conclusion_remarks (
    conclusion_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    request_id VARCHAR(50) NOT NULL UNIQUE,
    ai_generated_remark TEXT,
    ai_model_used VARCHAR(100),
    ai_confidence_score DECIMAL(3,2),
    final_remark TEXT,
    edited_by UUID,
    is_edited BOOLEAN DEFAULT false,
    edit_count INTEGER DEFAULT 0,
    approval_summary JSONB,
    document_summary JSONB,
    key_discussion_points TEXT[],
    generated_at TIMESTAMP,
    finalized_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    FOREIGN KEY (request_id) REFERENCES workflow_requests(request_id),
    FOREIGN KEY (edited_by) REFERENCES users(user_id)
);
```

### Key Fields

- `ai_generated_remark`: Original AI-generated text
- `final_remark`: User-edited/finalized text
- `ai_confidence_score`: Quality score (0.0 - 1.0)
- `key_discussion_points`: Extracted key points array
- `approval_summary`: JSON with approval statistics
- `document_summary`: JSON with document information

A read query against this table is sketched below.
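
For illustration, reading a finalized conclusion with `node-postgres` might look like the sketch below. The connection settings are assumed to come from the standard `PG*` environment variables; the column names match the schema above.

```typescript
import { Pool } from 'pg';

// Illustrative read against conclusion_remarks using node-postgres.
const pool = new Pool(); // assumption: connection configured via PG* environment variables

async function getFinalConclusion(requestId: string) {
  const { rows } = await pool.query(
    `SELECT final_remark, ai_model_used, ai_confidence_score, key_discussion_points, finalized_at
       FROM conclusion_remarks
      WHERE request_id = $1`,
    [requestId]
  );
  return rows[0] ?? null; // request_id is UNIQUE, so at most one row
}
```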

---

## Examples

### Example 1: Approved Request Conclusion

**Context**:

- Request: "Purchase 50 laptops for IT department"
- Priority: STANDARD
- 3 approval levels, all approved
- TAT: 100%, 85%, 90% usage
- 2 documents attached

**Generated Conclusion**:

```
This request for the purchase of 50 laptops for the IT department was approved
through all three approval levels. The request was reviewed and approved by
John Doe at Level 1, Jane Smith at Level 2, and Bob Johnson at Level 3. All
approval levels completed within their respective TAT windows, with Level 1
using 100% of allocated time. The purchase order has been generated and
forwarded to the procurement team for processing. Implementation is expected
to begin within the next two weeks.
```

### Example 2: Rejected Request Conclusion

**Context**:

- Request: "Implement new HR policy"
- Priority: EXPRESS
- Rejected at Level 2 by Jane Smith
- Reason: "Budget constraints"

**Generated Conclusion**:

```
This request for implementing a new HR policy was reviewed through two approval
levels but was ultimately rejected. The request was approved by John Doe at
Level 1, but rejected by Jane Smith at Level 2 due to budget constraints.
The rejection was communicated to the initiator, and alternative approaches
are being considered. The request documentation has been archived for future
reference.
```

---

## Version History

- **v1.0.0** (2025-01-15): Initial implementation
  - Multi-provider support (Claude, OpenAI, Gemini)
  - Automatic and manual generation
  - TAT risk integration
  - Key points extraction
  - Confidence scoring

---

## Support

For issues or questions:

1. Check logs: `logs/app.log`
2. Review admin configuration panel
3. Contact development team
4. Refer to provider documentation:
   - [Claude API Docs](https://docs.anthropic.com)
   - [OpenAI API Docs](https://platform.openai.com/docs)
   - [Gemini API Docs](https://ai.google.dev/docs)

---

**Last Updated**: January 2025

**Maintained By**: Royal Enfield Development Team