# AI Conclusion Remark Generation Documentation

## Table of Contents

1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Configuration](#configuration)
4. [API Usage](#api-usage)
5. [Implementation Details](#implementation-details)
6. [Prompt Engineering](#prompt-engineering)
7. [Error Handling](#error-handling)
8. [Best Practices](#best-practices)
9. [Troubleshooting](#troubleshooting)
10. [Database Schema](#database-schema)
11. [Examples](#examples)
12. [Version History](#version-history)
13. [Support](#support)

---

## Overview

The AI Conclusion Remark Generation feature automatically generates professional, context-aware conclusion remarks for workflow requests that have been approved or rejected. This feature uses **Google Cloud Vertex AI Gemini** to analyze the entire request lifecycle and create a comprehensive summary suitable for permanent archiving.

### Key Features

- **Vertex AI Integration**: Uses Google Cloud Vertex AI Gemini with service account authentication
- **Context-Aware**: Analyzes the approval flow, work notes, documents, and activities
- **Configurable**: Admin-configurable maximum length, model selection, and enable/disable flags
- **Automatic Generation**: Can be triggered automatically when a request is approved or rejected
- **Manual Generation**: Users can regenerate conclusions on demand
- **Editable**: Generated remarks can be edited before finalization
- **Enterprise Security**: Uses the same service account credentials as Google Cloud Storage

### Use Cases

1. **Automatic Generation**: When the final approver approves or rejects a request, an AI conclusion is generated in the background
2. **Manual Generation**: The initiator can click the "Generate AI Conclusion" button to create or regenerate a conclusion
3. **Finalization**: The initiator reviews, edits (if needed), and finalizes the conclusion to close the request

---

## Architecture

### Component Diagram

```
┌─────────────────────────────────────────────────────────────┐
│                      Frontend (React)                       │
│  ┌──────────────────────────────────────────────────────┐   │
│  │              useConclusionRemark Hook                │   │
│  │  - handleGenerateConclusion()                        │   │
│  │  - handleFinalizeConclusion()                        │   │
│  └──────────────────────────────────────────────────────┘   │
│                          │                                  │
│                          ▼                                  │
│  ┌──────────────────────────────────────────────────────┐   │
│  │                conclusionApi Service                 │   │
│  │  - generateConclusion(requestId)                     │   │
│  │  - finalizeConclusion(requestId, remark)             │   │
│  └──────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
                           │
                           │ HTTP API
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                 Backend (Node.js/Express)                   │
│  ┌──────────────────────────────────────────────────────┐   │
│  │               ConclusionController                   │   │
│  │  - generateConclusion()                              │   │
│  │  - finalizeConclusion()                              │   │
│  └──────────────────────────────────────────────────────┘   │
│                          │                                  │
│                          ▼                                  │
│  ┌──────────────────────────────────────────────────────┐   │
│  │                     AIService                        │   │
│  │  - generateConclusionRemark(context)                 │   │
│  │  - buildConclusionPrompt(context)                    │   │
│  │  - extractKeyPoints(remark)                          │   │
│  │  - calculateConfidence(remark, context)              │   │
│  └──────────────────────────────────────────────────────┘   │
│                          │                                  │
│                          ▼                                  │
│  ┌──────────────────────────────────────────────────────┐   │
│  │           Vertex AI Gemini (Google Cloud)            │   │
│  │  - VertexAI Client                                   │   │
│  │  - Service Account Authentication                    │   │
│  │  - Gemini Models (gemini-2.5-flash, etc.)            │   │
│  └──────────────────────────────────────────────────────┘   │
│                          │                                  │
│                          ▼                                  │
│  ┌──────────────────────────────────────────────────────┐   │
│  │               Database (PostgreSQL)                  │   │
│  │  - conclusion_remarks table                          │   │
│  │  - workflow_requests table                           │   │
│  └──────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```
### Data Flow

1. **Request Approval/Rejection** → `ApprovalService.approveLevel()`
   - Automatically triggers AI generation in the background
   - Saves to the `conclusion_remarks` table

2. **Manual Generation** → `ConclusionController.generateConclusion()`
   - User clicks "Generate AI Conclusion"
   - Fetches the request context
   - Calls `AIService.generateConclusionRemark()`
   - Returns the generated remark

3. **Finalization** → `ConclusionController.finalizeConclusion()`
   - User reviews and edits (optional)
   - Submits the final remark
   - Updates the request status to `CLOSED`
   - Saves `finalRemark` to the database

---

## Configuration

### Environment Variables

```bash
# Google Cloud Configuration (required - same as GCS)
GCP_PROJECT_ID=re-platform-workflow-dealer
GCP_KEY_FILE=./credentials/re-platform-workflow-dealer-3d5738fcc1f9.json

# Vertex AI Configuration (optional - defaults provided)
VERTEX_AI_MODEL=gemini-2.5-flash
VERTEX_AI_LOCATION=asia-south1
AI_ENABLED=true
```

**Note**: The service account key file is the same one used for Google Cloud Storage, ensuring consistent authentication across services.
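As an illustration of how these variables might be read with the documented defaults applied, here is a minimal sketch. The variable names match the tables in this document; the helper name and validation behavior are assumptions, not the actual loader.

```typescript
// Hypothetical config loader; the real application code may differ.
interface VertexAIConfig {
  projectId: string;
  keyFile: string;
  model: string;
  location: string;
  aiEnabled: boolean;
}

function loadVertexAIConfig(env: Record<string, string | undefined>): VertexAIConfig {
  // GCP_PROJECT_ID and GCP_KEY_FILE are required (same as GCS)
  if (!env.GCP_PROJECT_ID || !env.GCP_KEY_FILE) {
    throw new Error('GCP_PROJECT_ID and GCP_KEY_FILE are required');
  }
  return {
    projectId: env.GCP_PROJECT_ID,
    keyFile: env.GCP_KEY_FILE,
    model: env.VERTEX_AI_MODEL ?? 'gemini-2.5-flash',   // optional, defaulted
    location: env.VERTEX_AI_LOCATION ?? 'asia-south1',  // optional, defaulted
    aiEnabled: (env.AI_ENABLED ?? 'true') === 'true',
  };
}
```

Optional variables fall back to the defaults listed above, so a deployment only needs to set the two GCP variables to get started.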
### Admin Configuration (Database)

The system reads configuration from the `system_config` table. Key settings:

| Config Key | Default | Description |
|------------|---------|-------------|
| `AI_ENABLED` | `true` | Enable/disable all AI features |
| `AI_REMARK_GENERATION_ENABLED` | `true` | Enable/disable conclusion generation |
| `AI_MAX_REMARK_LENGTH` | `2000` | Maximum characters for generated remarks |
| `VERTEX_AI_MODEL` | `gemini-2.5-flash` | Vertex AI Gemini model name |

### Available Models

| Model Name | Description | Use Case |
|------------|-------------|----------|
| `gemini-2.5-flash` | Latest fast model (default) | General purpose, quick responses |
| `gemini-1.5-flash` | Previous fast model | General purpose |
| `gemini-1.5-pro` | Advanced model | Complex tasks, better quality |
| `gemini-1.5-pro-latest` | Latest Pro version | Best quality, complex reasoning |

### Supported Regions

| Region Code | Location | Availability |
|-------------|----------|--------------|
| `us-central1` | Iowa, USA | ✅ |
| `us-east1` | South Carolina, USA | ✅ |
| `us-west1` | Oregon, USA | ✅ |
| `europe-west1` | Belgium | ✅ |
| `asia-south1` | Mumbai, India | ✅ (Current default) |

**Note**: Model and region are configured via environment variables, not database config.

---

## API Usage

### Generate AI Conclusion

**Endpoint**: `POST /api/v1/conclusions/:requestId/generate`

**Authentication**: Required (JWT token)

**Authorization**: Only the request initiator can generate conclusions

**Request**:
```http
POST /api/v1/conclusions/REQ-2025-00123/generate
Authorization: Bearer <token>
```

**Response** (Success - 200):
```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "aiGeneratedRemark": "This request for [title] was approved through [levels]...",
    "keyDiscussionPoints": [
      "Approved by John Doe at Level 1",
      "TAT compliance: 85%",
      "3 documents attached"
    ],
    "confidence": 0.85,
    "generatedAt": "2025-01-15T10:30:00Z",
    "provider": "Vertex AI (Gemini)"
  }
}
```

**Response** (Error - 403):
```json
{
  "success": false,
  "error": "Only the initiator can generate conclusion remarks"
}
```

**Response** (Error - 400):
```json
{
  "success": false,
  "error": "Conclusion can only be generated for approved or rejected requests"
}
```
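From a client, the generate call is a plain authenticated POST. The sketch below assembles the request pieces; the endpoint path matches the documentation, while the helper name and base URL are illustrative assumptions (the actual frontend uses the `conclusionApi` service).

```typescript
// Illustrative helper; not the actual conclusionApi implementation.
function buildGenerateRequest(baseUrl: string, requestId: string, token: string) {
  return {
    // Path matches POST /api/v1/conclusions/:requestId/generate
    url: `${baseUrl}/api/v1/conclusions/${encodeURIComponent(requestId)}/generate`,
    method: 'POST' as const,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Usage with fetch (any HTTP client works):
// const req = buildGenerateRequest('https://api.example.com', 'REQ-2025-00123', token);
// const res = await fetch(req.url, { method: req.method, headers: req.headers });
```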
### Finalize Conclusion

**Endpoint**: `POST /api/v1/conclusions/:requestId/finalize`

**Authentication**: Required (JWT token)

**Authorization**: Only the request initiator can finalize

**Request**:
```http
POST /api/v1/conclusions/REQ-2025-00123/finalize
Authorization: Bearer <token>
Content-Type: application/json

{
  "finalRemark": "This request was approved through all levels. The implementation will begin next week."
}
```

**Response** (Success - 200):
```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "finalRemark": "This request was approved through all levels...",
    "finalizedAt": "2025-01-15T10:35:00Z",
    "requestStatus": "CLOSED"
  }
}
```
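Before submitting, a client might validate the edited remark against the configured limit (`AI_MAX_REMARK_LENGTH`, default 2000). This check is a sketch under that assumption; the server-side validation rules are not specified here.

```typescript
// Hypothetical client-side check before calling the finalize endpoint.
// Returns a list of problems; an empty array means the remark is acceptable.
function validateFinalRemark(remark: string, maxLength = 2000): string[] {
  const errors: string[] = [];
  const trimmed = remark.trim();
  if (trimmed.length === 0) {
    errors.push('finalRemark must not be empty');
  }
  if (trimmed.length > maxLength) {
    errors.push(`finalRemark exceeds ${maxLength} characters (${trimmed.length})`);
  }
  return errors;
}
```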
### Get Existing Conclusion

**Endpoint**: `GET /api/v1/conclusions/:requestId`

**Response**:
```json
{
  "success": true,
  "data": {
    "conclusionId": "concl-123",
    "requestId": "REQ-2025-00123",
    "aiGeneratedRemark": "Generated text...",
    "finalRemark": "Finalized text...",
    "isEdited": true,
    "editCount": 2,
    "aiModelUsed": "Vertex AI (Gemini)",
    "aiConfidenceScore": 0.85,
    "keyDiscussionPoints": ["Point 1", "Point 2"],
    "generatedAt": "2025-01-15T10:30:00Z",
    "finalizedAt": "2025-01-15T10:35:00Z"
  }
}
```

---

## Implementation Details

### Context Data Structure

The `generateConclusionRemark()` method accepts a context object with the following structure:

```typescript
interface ConclusionContext {
  requestTitle: string;
  requestDescription: string;
  requestNumber: string;
  priority: string;
  approvalFlow: Array<{
    levelNumber: number;
    approverName: string;
    status: 'APPROVED' | 'REJECTED' | 'PENDING' | 'IN_PROGRESS';
    comments?: string;
    actionDate?: string;
    tatHours?: number;
    elapsedHours?: number;
    tatPercentageUsed?: number;
  }>;
  workNotes: Array<{
    userName: string;
    message: string;
    createdAt: string;
  }>;
  documents: Array<{
    fileName: string;
    uploadedBy: string;
    uploadedAt: string;
  }>;
  activities: Array<{
    type: string;
    action: string;
    details: string;
    timestamp: string;
  }>;
  rejectionReason?: string;
  rejectedBy?: string;
}
```
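For illustration, a minimal object satisfying this shape might look as follows (all values are invented; the explicit type annotation is omitted so the snippet stands alone):

```typescript
// Example context object (invented data) matching ConclusionContext.
const sampleContext = {
  requestTitle: 'Purchase 50 laptops for IT department',
  requestDescription: 'Hardware refresh for the IT team',
  requestNumber: 'REQ-2025-00123',
  priority: 'STANDARD',
  approvalFlow: [
    {
      levelNumber: 1,
      approverName: 'John Doe',
      status: 'APPROVED' as const,
      comments: 'Budget confirmed',
      tatHours: 24,
      elapsedHours: 20,
      tatPercentageUsed: 83,
    },
  ],
  workNotes: [
    { userName: 'Jane Smith', message: 'Specs attached', createdAt: '2025-01-14T09:00:00Z' },
  ],
  documents: [
    { fileName: 'quote.pdf', uploadedBy: 'John Doe', uploadedAt: '2025-01-14T08:00:00Z' },
  ],
  activities: [
    { type: 'APPROVAL', action: 'APPROVED', details: 'Level 1 approved', timestamp: '2025-01-14T10:00:00Z' },
  ],
};
```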
### Generation Process

1. **Context Collection**:
   - Fetches request details from `workflow_requests`
   - Fetches approval levels from `approval_levels`
   - Fetches work notes from `work_notes`
   - Fetches documents from `documents`
   - Fetches activities from `activities`

2. **Prompt Building**:
   - Constructs a detailed prompt with all context
   - Includes TAT risk information (ON_TRACK, AT_RISK, CRITICAL, BREACHED)
   - Includes rejection context if applicable
   - Sets the target word count based on `AI_MAX_REMARK_LENGTH`

3. **AI Generation**:
   - Sends the prompt to Vertex AI Gemini
   - Receives the generated text (up to 4096 tokens)
   - Preserves the full AI response (no truncation)
   - Extracts key points
   - Calculates a confidence score

4. **Storage**:
   - Saves to the `conclusion_remarks` table
   - Links to `workflow_requests` via `requestId`
   - Stores metadata (provider, confidence, key points)
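The actual `extractKeyPoints()` and `calculateConfidence()` implementations are not reproduced in this document. Purely as a sketch of the kind of heuristics such methods might use (everything below is an assumption, not the production logic):

```typescript
// Hypothetical heuristics; the real AIService methods may differ substantially.
function extractKeyPoints(context: {
  approvalFlow: Array<{ levelNumber: number; approverName: string; status: string }>;
  documents: unknown[];
}): string[] {
  // Turn each decided approval level into a human-readable point.
  const points = context.approvalFlow
    .filter((l) => l.status === 'APPROVED' || l.status === 'REJECTED')
    .map((l) => `${l.status === 'APPROVED' ? 'Approved' : 'Rejected'} by ${l.approverName} at Level ${l.levelNumber}`);
  if (context.documents.length > 0) {
    points.push(`${context.documents.length} documents attached`);
  }
  return points;
}

function calculateConfidence(remark: string, maxLength: number): number {
  // Score in [0, 1]: longer remarks score higher; exceeding the limit halves the score.
  if (remark.length === 0) return 0;
  const lengthScore = Math.min(remark.length / 500, 1);
  const withinLimit = remark.length <= maxLength ? 1 : 0.5;
  return Math.round(lengthScore * withinLimit * 100) / 100;
}
```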
### Automatic Generation

When a request is approved or rejected, `ApprovalService.approveLevel()` automatically generates a conclusion in the background:

```typescript
// In ApprovalService.approveLevel()
if (isFinalApproval) {
  // Background task - doesn't block the approval response
  (async () => {
    try {
      const context = { /* ... */ };
      const aiResult = await aiService.generateConclusionRemark(context);
      await ConclusionRemark.create({ /* ... */ });
    } catch (error) {
      // A failed generation must not surface as an approval error;
      // log it and let the user generate the conclusion manually.
      logger.error('Background conclusion generation failed', error);
    }
  })();
}
```

The `try/catch` keeps a failed generation from affecting the approval itself; the error is logged as described under Error Handling, and the initiator can still trigger manual generation.

---

## Prompt Engineering

### Prompt Structure

The prompt is designed to generate professional, archival-quality conclusions:

```
You are writing a closure summary for a workflow request at Royal Enfield.
Write a practical, realistic conclusion that an employee would write when closing a request.

**Request:**
[Request Number] - [Title]
Description: [Description]
Priority: [Priority]

**What Happened:**
[Approval Summary with TAT info]
[Rejection Context if applicable]

**Discussions (if any):**
[Work Notes Summary]

**Documents:**
[Document List]

**YOUR TASK:**
Write a brief, professional conclusion (approximately X words, max Y characters) that:
- Summarizes what was requested and the final decision
- Mentions who approved it and any key comments
- Mentions if any approval levels were AT_RISK, CRITICAL, or BREACHED
- Notes the outcome and next steps (if applicable)
- Uses clear, factual language without time-specific references
- Is suitable for permanent archiving and future reference
- Sounds natural and human-written (not AI-generated)

**IMPORTANT:**
- Be concise and direct
- MUST stay within [maxLength] characters limit
- No time-specific words like "today", "now", "currently", "recently"
- No corporate jargon or buzzwords
- No emojis or excessive formatting
- Write like a professional documenting a completed process
- Focus on facts: what was requested, who approved, what was decided
- Use past tense for completed actions
```
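A template like the one above can be filled in mechanically. The following is a deliberately simplified sketch of such a builder; the production `buildConclusionPrompt()` also incorporates TAT risk, work notes, documents, and rejection context, and the words-per-character heuristic here is an assumption.

```typescript
// Simplified prompt-builder sketch; not the production implementation.
function buildConclusionPrompt(
  ctx: { requestNumber: string; requestTitle: string; requestDescription: string; priority: string },
  maxLength: number,
): string {
  // Rough heuristic: ~6 characters per word (assumption).
  const targetWords = Math.round(maxLength / 6);
  return [
    'You are writing a closure summary for a workflow request at Royal Enfield.',
    '',
    '**Request:**',
    `${ctx.requestNumber} - ${ctx.requestTitle}`,
    `Description: ${ctx.requestDescription}`,
    `Priority: ${ctx.priority}`,
    '',
    '**YOUR TASK:**',
    `Write a brief, professional conclusion (approximately ${targetWords} words, max ${maxLength} characters).`,
  ].join('\n');
}
```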
### Key Prompt Features

1. **TAT Risk Integration**: Includes TAT percentage usage and risk status for each approval level
2. **Rejection Handling**: Different instructions for rejected vs approved requests
3. **Length Control**: Dynamically sets the target word count based on config
4. **Tone Guidelines**: Emphasizes natural, professional, archival-quality writing
5. **Context Awareness**: Includes all relevant data (approvals, notes, documents, activities)

### Vertex AI Settings

| Setting | Value | Description |
|---------|-------|-------------|
| Model | `gemini-2.5-flash` (default) | Fast, efficient model for conclusion generation |
| Max Output Tokens | `4096` | Maximum tokens in a response (technical limit) |
| Character Limit | `2000` (configurable) | Actual limit enforced via the prompt (`AI_MAX_REMARK_LENGTH`) |
| Temperature | `0.3` | Lower temperature for more focused, consistent output |
| Location | `asia-south1` (default) | Google Cloud region for API calls |
| Authentication | Service Account | Uses the same credentials as Google Cloud Storage |

**Note on Token vs Character Limits:**

- **4096 tokens** is the technical maximum Vertex AI can generate
- **2000 characters** (default) is the actual limit enforced by the prompt
- Token-to-character conversion: ~1 token ≈ 3-4 characters
- With HTML tags: 4096 tokens ≈ 12,000-16,000 characters (including tags)
- The AI is instructed to stay within the character limit, not the token limit
- The token limit provides headroom, but the character limit is what matters for storage
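The headroom figures above are simple arithmetic on the ~3-4 characters-per-token rule of thumb:

```typescript
// Rough token-to-character conversion used in the note above (~3-4 chars/token).
function estimateCharRange(maxTokens: number): { min: number; max: number } {
  return { min: maxTokens * 3, max: maxTokens * 4 };
}

// estimateCharRange(4096) yields { min: 12288, max: 16384 }, i.e. roughly the
// 12,000-16,000 characters quoted above - well beyond the 2000-character limit
// the prompt actually enforces.
```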
---

## Error Handling

### Common Errors

1. **No AI Provider Available**
   ```
   Error: AI features are currently unavailable. Please verify Vertex AI configuration and service account credentials.
   ```
   **Solution**:
   - Verify the service account key file exists at the path specified in `GCP_KEY_FILE`
   - Ensure the Vertex AI API is enabled in the Google Cloud Console
   - Check the service account has the `Vertex AI User` role (`roles/aiplatform.user`)

2. **Vertex AI API Error**
   ```
   Error: AI generation failed (Vertex AI): Model was not found or your project does not have access
   ```
   **Solution**:
   - Verify the model name is correct (e.g., `gemini-2.5-flash`)
   - Ensure the model is available in the selected region
   - Check the Vertex AI API is enabled in the Google Cloud Console

3. **Request Not Found**
   ```
   Error: Request not found
   ```
   **Solution**: Verify the requestId is correct and the request exists

4. **Unauthorized Access**
   ```
   Error: Only the initiator can generate conclusion remarks
   ```
   **Solution**: Ensure the user is the request initiator

5. **Invalid Request Status**
   ```
   Error: Conclusion can only be generated for approved or rejected requests
   ```
   **Solution**: The request must be in APPROVED or REJECTED status

### Error Recovery

- **Graceful Degradation**: If AI generation fails, the user can write the conclusion manually
- **Retry Logic**: Manual regeneration is always available
- **Logging**: All errors are logged with context for debugging
- **Token Limit Handling**: If a response hits the token limit, the full response is preserved (no truncation)

---

## Best Practices

### For Developers

1. **Error Handling**: Always wrap AI calls in try-catch blocks
2. **Async Operations**: Use background tasks for automatic generation (don't block the approval)
3. **Validation**: Validate context data before sending it to the AI
4. **Logging**: Log all AI operations for debugging and monitoring
5. **Configuration**: Use database config for flexibility (not hardcoded values)

### For Administrators

1. **Service Account Setup**:
   - Ensure the service account key file exists and is accessible
   - Verify the service account has the `Vertex AI User` role
   - Use the same credentials as Google Cloud Storage for consistency
2. **Model Selection**: Choose a model based on your needs:
   - **gemini-2.5-flash**: Fast, cost-effective (default, recommended)
   - **gemini-1.5-pro**: Better quality for complex requests
3. **Length Configuration**: Set `AI_MAX_REMARK_LENGTH` based on your archival needs
4. **Monitoring**: Monitor AI usage and costs through the Google Cloud Console
5. **Testing**: Test with sample requests before enabling in production
6. **Region Selection**: Choose the region closest to your deployment for lower latency

### For Users

1. **Review Before Finalizing**: Always review AI-generated conclusions
2. **Edit if Needed**: Don't hesitate to edit the generated text
3. **Regenerate**: If not satisfied, regenerate with updated context
4. **Finalize Promptly**: Finalize conclusions soon after generation for accuracy

---

## Troubleshooting

### Issue: AI Generation Not Working

**Symptoms**: Error message "AI features are currently unavailable"

**Diagnosis**:

1. Check the `AI_ENABLED` config value
2. Check the `AI_REMARK_GENERATION_ENABLED` config value
3. Verify the service account key file exists and is accessible
4. Check the Vertex AI API is enabled in the Google Cloud Console
5. Verify the service account has the `Vertex AI User` role
6. Check the provider initialization logs

**Solution**:

```bash
# Check logs
tail -f logs/app.log | grep "AI Service"

# Verify config (SQL - run against the application database, e.g. via psql):
#   SELECT * FROM system_config WHERE config_key LIKE 'AI_%';

# Verify the service account key file
ls -la credentials/re-platform-workflow-dealer-3d5738fcc1f9.json

# Check environment variables
echo $GCP_PROJECT_ID
echo $GCP_KEY_FILE
echo $VERTEX_AI_MODEL
```

### Issue: Generated Text Too Long/Short

**Symptoms**: Generated remarks exceed or are much shorter than expected

**Solution**:

1. Adjust `AI_MAX_REMARK_LENGTH` in the admin config
2. Check the prompt's target word count calculation
3. Note: Vertex AI max output tokens is 4096 (the system handles this automatically)
4. The AI is instructed to stay within the character limit, but the full response is preserved

### Issue: Poor Quality Conclusions

**Symptoms**: Generated text is generic or inaccurate

**Solution**:

1. Verify the context data is complete (approvals, notes, documents)
2. Check the prompt includes all relevant information
3. Try a different model (e.g., `gemini-1.5-pro` for better quality)
4. Temperature is set to 0.3 for focused output (it can be adjusted in code if needed)
### Issue: Slow Generation

**Symptoms**: AI generation takes too long

**Solution**:

1. Check the Vertex AI API status in the Google Cloud Console
2. Verify network connectivity
3. Consider using the `gemini-2.5-flash` model (fastest option)
4. Check for rate limiting in the Google Cloud Console
5. Verify the region selection (a closer region means lower latency)
### Issue: Vertex AI Not Initializing

**Symptoms**: Provider shows as "None" or initialization fails in the logs

**Diagnosis**:

1. Check the service account key file exists and is valid
2. Verify the `@google-cloud/vertexai` package is installed
3. Check the environment variables (`GCP_PROJECT_ID`, `GCP_KEY_FILE`)
4. Verify the Vertex AI API is enabled in the Google Cloud Console
5. Check the service account permissions

**Solution**:

```bash
# Install the missing SDK
npm install @google-cloud/vertexai

# Verify the service account key file
ls -la credentials/re-platform-workflow-dealer-3d5738fcc1f9.json

# Verify environment variables
echo $GCP_PROJECT_ID
echo $GCP_KEY_FILE
echo $VERTEX_AI_MODEL
echo $VERTEX_AI_LOCATION

# Check the Google Cloud Console:
# 1. Go to APIs & Services > Library
# 2. Search for "Vertex AI API"
# 3. Ensure it's enabled
# 4. Verify the service account has the "Vertex AI User" role
```

---

## Database Schema

### conclusion_remarks Table

```sql
CREATE TABLE conclusion_remarks (
  conclusion_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  request_id VARCHAR(50) NOT NULL UNIQUE,
  ai_generated_remark TEXT,
  ai_model_used VARCHAR(100),
  ai_confidence_score DECIMAL(3,2),
  final_remark TEXT,
  edited_by UUID,
  is_edited BOOLEAN DEFAULT false,
  edit_count INTEGER DEFAULT 0,
  approval_summary JSONB,
  document_summary JSONB,
  key_discussion_points TEXT[],
  generated_at TIMESTAMP,
  finalized_at TIMESTAMP,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW(),
  FOREIGN KEY (request_id) REFERENCES workflow_requests(request_id),
  FOREIGN KEY (edited_by) REFERENCES users(user_id)
);
```

### Key Fields

- `ai_generated_remark`: Original AI-generated text
- `final_remark`: User-edited/finalized text
- `ai_confidence_score`: Quality score (0.0 - 1.0)
- `key_discussion_points`: Extracted key points array
- `approval_summary`: JSON with approval statistics
- `document_summary`: JSON with document information

---

## Examples

### Example 1: Approved Request Conclusion

**Context**:

- Request: "Purchase 50 laptops for IT department"
- Priority: STANDARD
- 3 approval levels, all approved
- TAT: 100%, 85%, 90% usage
- 2 documents attached

**Generated Conclusion**:

```
This request for the purchase of 50 laptops for the IT department was approved
through all three approval levels. The request was reviewed and approved by
John Doe at Level 1, Jane Smith at Level 2, and Bob Johnson at Level 3. All
approval levels completed within their respective TAT windows, with Level 1
using 100% of allocated time. The purchase order has been generated and
forwarded to the procurement team for processing. Implementation is expected
to begin within the next two weeks.
```

### Example 2: Rejected Request Conclusion

**Context**:

- Request: "Implement new HR policy"
- Priority: EXPRESS
- Rejected at Level 2 by Jane Smith
- Reason: "Budget constraints"

**Generated Conclusion**:

```
This request for implementing a new HR policy was reviewed through two approval
levels but was ultimately rejected. The request was approved by John Doe at
Level 1, but rejected by Jane Smith at Level 2 due to budget constraints.
The rejection was communicated to the initiator, and alternative approaches
are being considered. The request documentation has been archived for future
reference.
```

---

## Version History

- **v2.0.0**: Vertex AI Migration
  - Migrated to Google Cloud Vertex AI Gemini
  - Service account authentication (same as GCS)
  - Removed multi-provider support
  - Increased max output tokens to 4096
  - Full response preservation (no truncation)
  - HTML format support for the rich text editor

---

## Support

For issues or questions:

1. Check the logs: `logs/app.log`
2. Review the admin configuration panel
3. Contact the development team
4. Refer to the Vertex AI documentation:
   - [Vertex AI Documentation](https://cloud.google.com/vertex-ai/docs)
   - [Gemini Models](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini)
   - [Vertex AI Setup Guide](../VERTEX_AI_INTEGRATION.md)

---

**Maintained By**: Royal Enfield Development Team

---

## Related Documentation

- [Vertex AI Integration Guide](./VERTEX_AI_INTEGRATION.md) - Detailed setup and migration information