backend changes
This commit is contained in:
parent 4f139af3c7
commit 7eb2ab1dc2

106 DATABASE_MIGRATION_FIX.md Normal file
@ -0,0 +1,106 @@
# Database Migration Issues - SOLVED

## Problem Summary
You were seeing unwanted and duplicate tables being created when starting the server. This was caused by multiple migration sources creating the same tables and by conflicting migration execution.

## Root Causes Identified

### 1. **Multiple Migration Sources**
- The PostgreSQL init script (`databases/scripts/init.sql`) creates the `dev_pipeline` database
- The shared schemas file (`databases/scripts/schemas.sql`) creates the core tables
- Individual service migrations create their own tables
- Template-manager was also applying the shared schemas, causing duplicates

### 2. **Migration Execution Order Issues**
- Services were running migrations in parallel
- No dependency management between shared schemas and service-specific tables
- `DROP TABLE` statements in development mode were causing data loss

### 3. **Table Conflicts**
- `users` table created by both `schemas.sql` and the `user-auth` migration
- `user_projects` table created by both sources
- Function conflicts (`update_updated_at_column()` created multiple times)
- Extension conflicts (`uuid-ossp` created multiple times)

## Solutions Implemented

### 1. **Fixed Migration Order**
- Created a separate `shared-schemas` service for the core database tables
- Updated the migration script to run in the correct order:
  1. `shared-schemas` (core tables first)
  2. `user-auth` (user-specific tables)
  3. `template-manager` (template-specific tables)

### 2. **Made Migrations Production-Safe**
- Replaced `DROP TABLE IF EXISTS` with `CREATE TABLE IF NOT EXISTS` (a minimal sketch of the pattern follows below)
- Prevents data loss on server restarts
- Safe for production environments
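As a minimal sketch of the production-safe pattern (the table and columns here are illustrative, not copied from the actual migrations), an idempotent migration guards every statement so it can be re-run against an existing database:

```sql
-- Old, destructive pattern: wipes data on every migration run
-- DROP TABLE IF EXISTS users CASCADE;

-- New, idempotent pattern: safe to re-run against an existing database
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email VARCHAR(320) NOT NULL UNIQUE,   -- illustrative columns only
    created_at TIMESTAMP DEFAULT NOW()
);

-- Later schema changes are guarded the same way
ALTER TABLE users ADD COLUMN IF NOT EXISTS updated_at TIMESTAMP DEFAULT NOW();
```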
### 3. **Eliminated Duplicate Table Creation**
- Removed the shared schema application from template-manager
- Each service now creates only its own tables
- Proper dependency management between services

### 4. **Created Database Cleanup Script**
- `scripts/cleanup-database.sh` removes unwanted/duplicate tables
- Can be run to clean up an existing database

## How to Use

### Clean Up Existing Database
```bash
cd /home/tech4biz/Desktop/Projectsnew/CODENUK1/codenuk-backend-live
./scripts/cleanup-database.sh
```

### Start Server with Fixed Migrations
```bash
docker-compose up --build
```

The migrations will now run in the correct order:
1. Shared schemas (projects, tech_stack_decisions, etc.)
2. User authentication tables
3. Template management tables

## Files Modified

1. **`services/template-manager/src/migrations/migrate.js`**
   - Removed shared schema application
   - Now only handles template-specific tables

2. **`services/user-auth/src/migrations/001_user_auth_schema.sql`**
   - Replaced `DROP TABLE` with `CREATE TABLE IF NOT EXISTS`
   - Made the migration production-safe

3. **`services/template-manager/src/migrations/001_initial_schema.sql`**
   - Replaced `DROP TABLE` with `CREATE TABLE IF NOT EXISTS`
   - Made the migration production-safe

4. **`scripts/migrate-all.sh`**
   - Added the shared-schemas service
   - Enforced the proper migration order

5. **`docker-compose.yml`**
   - Removed the `APPLY_SCHEMAS_SQL` environment variable

6. **Created new files:**
   - `services/shared-schemas/` - dedicated service for shared schemas
   - `scripts/cleanup-database.sh` - database cleanup script

## Expected Results

After these changes:
- ✅ No duplicate tables will be created
- ✅ No unwanted tables visible in pgAdmin
- ✅ Proper migration order
- ✅ Production-safe migrations
- ✅ Clean database schema

## Verification

To verify the fix worked (a query sketch follows after this list):
1. Run the cleanup script
2. Start the server
3. Check pgAdmin - you should only see the intended tables
4. No duplicate or unwanted tables should appear
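As a quick check without pgAdmin (a minimal sketch; the connection shown in the comment is an assumption, adjust it to your environment), you can list the remaining public tables directly from psql:

```sql
-- Run inside psql against the dev_pipeline database, e.g.:
--   PGPASSWORD=... psql -h localhost -p 5432 -U pipeline_admin -d dev_pipeline
-- Lists every table in the public schema so duplicates or leftovers stand out
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;

-- Confirm the trigger helper exists exactly once
SELECT COUNT(*) AS definitions
FROM pg_proc
WHERE proname = 'update_updated_at_column';
```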
@ -12,8 +12,8 @@
|
||||
// ========================================
|
||||
// LOCAL DEVELOPMENT URLS
|
||||
// ========================================
|
||||
const FRONTEND_URL = 'http://192.168.1.13:3001';
|
||||
const BACKEND_URL = 'http://192.168.1.13:8000';
|
||||
const FRONTEND_URL = 'http://localhost:3001';
|
||||
const BACKEND_URL = 'http://localhost:8000';
|
||||
|
||||
// ========================================
|
||||
// CORS CONFIGURATION (Auto-generated)
|
||||
|
||||
@ -95,7 +95,6 @@ services:
|
||||
- POSTGRES_DB=dev_pipeline
|
||||
- POSTGRES_USER=pipeline_admin
|
||||
- POSTGRES_PASSWORD=secure_pipeline_2024
|
||||
- APPLY_SCHEMAS_SQL=true
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=redis_secure_2024
|
||||
@ -234,7 +233,7 @@ services:
|
||||
- NODE_ENV=development
|
||||
- PORT=8000
|
||||
- HOST=0.0.0.0
|
||||
- CORS_ORIGINS=http://192.168.1.13:3001
|
||||
- CORS_ORIGINS=http://localhost:3001
|
||||
- CORS_METHODS=GET,POST,PUT,DELETE,PATCH,OPTIONS # Add this line
|
||||
- CORS_CREDENTIALS=true # Add this line
|
||||
# Database connections
|
||||
@ -269,6 +268,8 @@ services:
|
||||
- DASHBOARD_URL=http://dashboard:8008
|
||||
- SELF_IMPROVING_GENERATOR_URL=http://self-improving-generator:8007
|
||||
- AI_MOCKUP_URL=http://ai-mockup-service:8021
|
||||
- UNISON_URL=http://unison:8010
|
||||
- TEMPLATE_MANAGER_AI_URL=http://template-manager:8013
|
||||
volumes:
|
||||
- api_gateway_logs:/app/logs # Add persistent volume for logs
|
||||
user: "node" # Run as node user instead of root
|
||||
@ -340,6 +341,7 @@ services:
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=redis_secure_2024
|
||||
- CLAUDE_API_KEY=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
|
||||
networks:
|
||||
- pipeline_network
|
||||
depends_on:
|
||||
@ -492,7 +494,7 @@ services:
|
||||
ports:
|
||||
- "8011:8011"
|
||||
environment:
|
||||
- FRONTEND_URL=http://192.168.1.13:3001
|
||||
- FRONTEND_URL=http://localhost:3001
|
||||
- PORT=8011
|
||||
- HOST=0.0.0.0
|
||||
- NODE_ENV=development
|
||||
@ -556,6 +558,11 @@ services:
|
||||
- NODE_ENV=development
|
||||
- JWT_ACCESS_SECRET=access-secret-key-2024-tech4biz-secure_pipeline_2024
|
||||
- CLAUDE_API_KEY=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
|
||||
- TEMPLATE_MANAGER_AI_URL=http://127.0.0.1:8013
|
||||
- NEO4J_URI=bolt://neo4j:7687
|
||||
- NEO4J_USERNAME=neo4j
|
||||
- NEO4J_PASSWORD=password
|
||||
- PYTHONUNBUFFERED=1
|
||||
networks:
|
||||
- pipeline_network
|
||||
depends_on:
|
||||
@ -573,6 +580,25 @@ services:
|
||||
start_period: 40s
|
||||
restart: unless-stopped
|
||||
|
||||
unison:
|
||||
build: ./services/unison
|
||||
container_name: pipeline_unison
|
||||
environment:
|
||||
- PORT=8010
|
||||
- HOST=0.0.0.0
|
||||
- TECH_STACK_SELECTOR_URL=http://tech-stack-selector:8002
|
||||
- TEMPLATE_MANAGER_URL=http://template-manager:8009
|
||||
- TEMPLATE_MANAGER_AI_URL=http://template-manager:8013
|
||||
- CLAUDE_API_KEY=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
|
||||
- LOG_LEVEL=info
|
||||
networks:
|
||||
- pipeline_network
|
||||
depends_on:
|
||||
tech-stack-selector:
|
||||
condition: service_started
|
||||
template-manager:
|
||||
condition: service_started
|
||||
|
||||
# AI Mockup / Wireframe Generation Service
|
||||
ai-mockup-service:
|
||||
build: ./services/ai-mockup-service
|
||||
@ -615,6 +641,7 @@ services:
|
||||
environment:
|
||||
- PORT=8012
|
||||
- HOST=0.0.0.0
|
||||
- FRONTEND_URL=http://localhost:3001
|
||||
- POSTGRES_HOST=postgres
|
||||
- POSTGRES_PORT=5432
|
||||
- POSTGRES_DB=dev_pipeline
|
||||
@ -624,11 +651,13 @@ services:
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=redis_secure_2024
|
||||
- NODE_ENV=development
|
||||
- GITHUB_REDIRECT_URI=*
|
||||
- ATTACHED_REPOS_DIR=/tmp/attached-repos
|
||||
- GITHUB_CLIENT_ID=Ov23liQgF14aogXVZNCR
|
||||
- GITHUB_CLIENT_SECRET=8bf82a29154fdccb837bc150539a2226d00b5da5
|
||||
- GITHUB_REDIRECT_URI=http://localhost:8012/api/github/auth/github/callback
|
||||
- ATTACHED_REPOS_DIR=/app/git-repos
|
||||
- SESSION_SECRET=git-integration-secret-key-2024
|
||||
volumes:
|
||||
- git_repos_data:/tmp/attached-repos
|
||||
- /home/tech4biz/Desktop/Projectsnew/CODENUK1/git-repos:/app/git-repos
|
||||
networks:
|
||||
- pipeline_network
|
||||
depends_on:
|
||||
@ -639,7 +668,7 @@ services:
|
||||
migrations:
|
||||
condition: service_completed_successfully
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-f", "http://localhost:8012/health"]
|
||||
test: ["CMD", "node", "-e", "require('http').get('http://127.0.0.1:8012/health', (res) => process.exit(res.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
|
||||
103 scripts/cleanup-database.sh Normal file
@ -0,0 +1,103 @@
#!/usr/bin/env bash

set -euo pipefail

# ========================================
# DATABASE CLEANUP SCRIPT
# ========================================

# Database connection parameters
DB_HOST=${POSTGRES_HOST:-postgres}
DB_PORT=${POSTGRES_PORT:-5432}
DB_NAME=${POSTGRES_DB:-dev_pipeline}
DB_USER=${POSTGRES_USER:-pipeline_admin}
DB_PASSWORD=${POSTGRES_PASSWORD:-secure_pipeline_2024}

# Log function with timestamp
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "🧹 Starting database cleanup..."

# Connect to PostgreSQL and clean up unwanted tables
PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" << 'EOF'
-- List all tables before cleanup
\echo '📋 Tables before cleanup:'
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' ORDER BY table_name;

-- Drop unwanted/duplicate tables that might have been created
\echo '🗑️ Dropping unwanted tables...'

-- Drop tables that might be duplicates or unwanted
DROP TABLE IF EXISTS user_api_keys CASCADE;
DROP TABLE IF EXISTS role_scope CASCADE;
DROP TABLE IF EXISTS scope CASCADE;
DROP TABLE IF EXISTS service_health CASCADE;
DROP TABLE IF EXISTS settings CASCADE;
DROP TABLE IF EXISTS shared_credentials CASCADE;
DROP TABLE IF EXISTS shared_workflow CASCADE;
DROP TABLE IF EXISTS stack_recommendations CASCADE;
DROP TABLE IF EXISTS system_architectures CASCADE;
DROP TABLE IF EXISTS tag_entity CASCADE;
DROP TABLE IF EXISTS tech_pricing CASCADE;
DROP TABLE IF EXISTS tech_stack_decisions CASCADE;
DROP TABLE IF EXISTS template_features CASCADE;
DROP TABLE IF EXISTS templates CASCADE;
DROP TABLE IF EXISTS test_case_execution CASCADE;
DROP TABLE IF EXISTS test_results CASCADE;
DROP TABLE IF EXISTS test_run CASCADE;
DROP TABLE IF EXISTS testing_technologies CASCADE;
DROP TABLE IF EXISTS tools CASCADE;
DROP TABLE IF EXISTS "user" CASCADE;  -- quoted because USER is a reserved word in PostgreSQL
DROP TABLE IF EXISTS user_feature_preferences CASCADE;
DROP TABLE IF EXISTS user_preferences CASCADE;
DROP TABLE IF EXISTS user_projects CASCADE;
DROP TABLE IF EXISTS user_sessions CASCADE;
DROP TABLE IF EXISTS users CASCADE;
DROP TABLE IF EXISTS variables CASCADE;
DROP TABLE IF EXISTS webhook_entity CASCADE;
DROP TABLE IF EXISTS wireframe_elements CASCADE;
DROP TABLE IF EXISTS wireframe_versions CASCADE;
DROP TABLE IF EXISTS wireframes CASCADE;
DROP TABLE IF EXISTS workflow_entity CASCADE;
DROP TABLE IF EXISTS workflow_history CASCADE;
DROP TABLE IF EXISTS workflow_statistics CASCADE;
DROP TABLE IF EXISTS workflows_tags CASCADE;

-- Drop any duplicate functions
DROP FUNCTION IF EXISTS update_updated_at_column() CASCADE;

-- Clean up any orphaned sequences
DO $$
DECLARE
    seq_record RECORD;
BEGIN
    FOR seq_record IN
        SELECT sequence_name
        FROM information_schema.sequences
        WHERE sequence_schema = 'public'
        AND sequence_name NOT IN (
            SELECT column_default
            FROM information_schema.columns
            WHERE table_schema = 'public'
            AND column_default LIKE 'nextval%'
        )
    LOOP
        EXECUTE 'DROP SEQUENCE IF EXISTS ' || seq_record.sequence_name || ' CASCADE';
    END LOOP;
END $$;

-- List tables after cleanup
\echo '📋 Tables after cleanup:'
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' ORDER BY table_name;

\echo '✅ Database cleanup completed!'
EOF

if [ $? -eq 0 ]; then
    log "✅ Database cleanup completed successfully"
else
    log "❌ Database cleanup failed"
    exit 1
fi

@ -11,6 +11,7 @@ ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
|
||||
# Default services list (can be overridden by CLI args)
|
||||
default_services=(
|
||||
"shared-schemas"
|
||||
"user-auth"
|
||||
"template-manager"
|
||||
)
|
||||
@ -70,10 +71,19 @@ for service in "${services[@]}"; do
|
||||
log "➡️ ${service}: installing dependencies"
|
||||
log "========================================"
|
||||
|
||||
if ! (cd "${SERVICE_DIR}" && npm ci --no-audit --no-fund --prefer-offline); then
|
||||
log "ERROR: Failed to install dependencies for ${service}"
|
||||
failed_services+=("${service}")
|
||||
continue
|
||||
# Check if package-lock.json exists, use appropriate install command
|
||||
if [ -f "${SERVICE_DIR}/package-lock.json" ]; then
|
||||
if ! (cd "${SERVICE_DIR}" && npm ci --no-audit --no-fund --prefer-offline); then
|
||||
log "ERROR: Failed to install dependencies for ${service}"
|
||||
failed_services+=("${service}")
|
||||
continue
|
||||
fi
|
||||
else
|
||||
if ! (cd "${SERVICE_DIR}" && npm install --no-audit --no-fund); then
|
||||
log "ERROR: Failed to install dependencies for ${service}"
|
||||
failed_services+=("${service}")
|
||||
continue
|
||||
fi
|
||||
fi
|
||||
|
||||
log "========================================"
|
||||
|
||||
@ -28,10 +28,10 @@ RABBITMQ_USER=pipeline_admin
|
||||
RABBITMQ_PASSWORD=secure_rabbitmq_password
|
||||
|
||||
# CORS
|
||||
FRONTEND_URL=http://192.168.1.13:3001
|
||||
FRONTEND_URL=http://localhost:3001
|
||||
|
||||
# CORS Configuration
|
||||
CORS_ORIGIN=http://192.168.1.13:3001
|
||||
CORS_ORIGIN=http://localhost:3001
|
||||
CORS_METHODS=GET,POST,PUT,DELETE,PATCH,OPTIONS
|
||||
CORS_CREDENTIALS=true
|
||||
@ -12,6 +12,9 @@ const corsMiddleware = cors({
|
||||
'Authorization',
|
||||
'X-Requested-With',
|
||||
'Origin',
|
||||
// Custom user context headers used by frontend
|
||||
'X-User-Id',
|
||||
'x-user-id',
|
||||
'X-Gateway-Request-ID',
|
||||
'X-Gateway-Timestamp',
|
||||
'X-Forwarded-By',
|
||||
|
||||
@ -34,6 +34,24 @@ app.use((req, res, next) => {
|
||||
res.setHeader('Access-Control-Allow-Origin', origin);
|
||||
res.setHeader('Vary', 'Origin');
|
||||
res.setHeader('Access-Control-Allow-Credentials', 'true');
|
||||
res.setHeader('Access-Control-Allow-Headers', [
|
||||
'Content-Type',
|
||||
'Authorization',
|
||||
'X-Requested-With',
|
||||
'Origin',
|
||||
'X-User-Id',
|
||||
'x-user-id',
|
||||
'X-Gateway-Request-ID',
|
||||
'X-Gateway-Timestamp',
|
||||
'X-Forwarded-By',
|
||||
'X-Forwarded-For',
|
||||
'X-Forwarded-Proto',
|
||||
'X-Forwarded-Host',
|
||||
'X-Session-Token',
|
||||
'X-Platform',
|
||||
'X-App-Version'
|
||||
].join(', '));
|
||||
res.setHeader('Access-Control-Allow-Methods', (process.env.CORS_METHODS || 'GET,POST,PUT,DELETE,OPTIONS'));
|
||||
next();
|
||||
});
|
||||
const server = http.createServer(app);
|
||||
@ -55,7 +73,8 @@ global.io = io;
|
||||
// Service targets configuration
|
||||
const serviceTargets = {
|
||||
USER_AUTH_URL: process.env.USER_AUTH_URL || 'http://localhost:8011',
|
||||
TEMPLATE_MANAGER_URL: process.env.TEMPLATE_MANAGER_URL || 'http://192.168.1.13:8009',
|
||||
TEMPLATE_MANAGER_URL: process.env.TEMPLATE_MANAGER_URL || 'http://localhost:8009',
|
||||
TEMPLATE_MANAGER_AI_URL: process.env.TEMPLATE_MANAGER_AI_URL || 'http://localhost:8013',
|
||||
GIT_INTEGRATION_URL: process.env.GIT_INTEGRATION_URL || 'http://localhost:8012',
|
||||
REQUIREMENT_PROCESSOR_URL: process.env.REQUIREMENT_PROCESSOR_URL || 'http://requirement-processor:8001',
|
||||
TECH_STACK_SELECTOR_URL: process.env.TECH_STACK_SELECTOR_URL || 'http://localhost:8002',
|
||||
@ -66,6 +85,7 @@ const serviceTargets = {
|
||||
DASHBOARD_URL: process.env.DASHBOARD_URL || 'http://localhost:8008',
|
||||
SELF_IMPROVING_GENERATOR_URL: process.env.SELF_IMPROVING_GENERATOR_URL || 'http://localhost:8007',
|
||||
AI_MOCKUP_URL: process.env.AI_MOCKUP_URL || 'http://localhost:8021',
|
||||
UNISON_URL: process.env.UNISON_URL || 'http://localhost:8010',
|
||||
};
|
||||
|
||||
// Log service targets for debugging
|
||||
@ -509,6 +529,82 @@ app.use('/api/ai/analyze-feature',
|
||||
}
|
||||
);
|
||||
|
||||
// Template Manager AI - expose AI recommendations through the gateway
|
||||
console.log('🔧 Registering /api/ai/tech-stack proxy route...');
|
||||
app.use('/api/ai/tech-stack',
|
||||
createServiceLimiter(300),
|
||||
// Public (reads); Unison handles auth if needed
|
||||
(req, res, next) => next(),
|
||||
(req, res, next) => {
|
||||
const aiUrl = serviceTargets.TEMPLATE_MANAGER_AI_URL;
|
||||
// Map gateway paths to AI service:
|
||||
// POST /api/ai/tech-stack/recommendations -> POST /ai/recommendations
|
||||
// POST /api/ai/tech-stack/recommendations/formatted -> POST /ai/recommendations/formatted
|
||||
// GET /api/ai/tech-stack/extract-keywords/:id -> GET /extract-keywords/:id
|
||||
// POST /api/ai/tech-stack/extract-keywords/:id -> POST /extract-keywords/:id
|
||||
// POST /api/ai/tech-stack/auto-workflow/:id -> POST /auto-workflow/:id
|
||||
let rewrittenPath = req.originalUrl
|
||||
.replace(/^\/api\/ai\/tech-stack\/recommendations\/formatted/, '/ai/recommendations/formatted')
|
||||
.replace(/^\/api\/ai\/tech-stack\/recommendations/, '/ai/recommendations')
|
||||
.replace(/^\/api\/ai\/tech-stack\/extract-keywords\//, '/extract-keywords/')
|
||||
.replace(/^\/api\/ai\/tech-stack\/auto-workflow\//, '/auto-workflow/')
|
||||
.replace(/^\/api\/ai\/tech-stack\/?$/, '/');
|
||||
|
||||
const targetUrl = `${aiUrl}${rewrittenPath.replace(/^\/api\/ai\/tech-stack/, '')}`;
|
||||
console.log(`🔥 [TEMPLATE-AI PROXY] ${req.method} ${req.originalUrl} → ${targetUrl}`);
|
||||
|
||||
res.setTimeout(30000, () => {
|
||||
console.error('❌ [TEMPLATE-AI PROXY] Response timeout');
|
||||
if (!res.headersSent) {
|
||||
res.status(504).json({ error: 'Gateway timeout', service: 'template-manager-ai' });
|
||||
}
|
||||
});
|
||||
|
||||
const options = {
|
||||
method: req.method,
|
||||
url: targetUrl,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'API-Gateway/1.0',
|
||||
'Connection': 'keep-alive',
|
||||
'Authorization': req.headers.authorization,
|
||||
'X-User-ID': req.user?.id || req.user?.userId,
|
||||
'X-User-Role': req.user?.role,
|
||||
},
|
||||
timeout: 25000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0
|
||||
};
|
||||
|
||||
if (req.method === 'POST' || req.method === 'PUT' || req.method === 'PATCH') {
|
||||
options.data = req.body || {};
|
||||
console.log(`📦 [TEMPLATE-AI PROXY] Request body:`, JSON.stringify(req.body));
|
||||
}
|
||||
|
||||
axios(options)
|
||||
.then(response => {
|
||||
console.log(`✅ [TEMPLATE-AI PROXY] Response: ${response.status} for ${req.method} ${req.originalUrl}`);
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
console.error(`❌ [TEMPLATE-AI PROXY ERROR]:`, error.message);
|
||||
if (!res.headersSent) {
|
||||
if (error.response) {
|
||||
res.status(error.response.status).json(error.response.data);
|
||||
} else {
|
||||
res.status(502).json({
|
||||
error: 'Template Manager AI unavailable',
|
||||
message: error.code || error.message,
|
||||
service: 'template-manager-ai'
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Requirement Processor Service - General routes (MUST come after specific routes)
|
||||
app.use('/api/requirements',
|
||||
createServiceLimiter(300),
|
||||
@ -641,6 +737,244 @@ app.use('/api/self-improving',
|
||||
serviceRouter.createServiceProxy(serviceTargets.SELF_IMPROVING_GENERATOR_URL, 'self-improving-generator')
|
||||
);
|
||||
|
||||
// Unison (Unified Recommendations) Service
|
||||
console.log('🔧 Registering /api/unison proxy route...');
|
||||
app.use('/api/unison',
|
||||
createServiceLimiter(200),
|
||||
// Allow unauthenticated access for unified recommendations
|
||||
(req, res, next) => next(),
|
||||
(req, res, next) => {
|
||||
const unisonUrl = serviceTargets.UNISON_URL;
|
||||
// Forward to same path on Unison (e.g., /api/unison/recommendations/unified)
|
||||
const rewrittenPath = (req.originalUrl || '').replace(/^\/api\/unison/, '/api');
|
||||
const targetUrl = `${unisonUrl}${rewrittenPath}`;
|
||||
console.log(`🔥 [UNISON PROXY] ${req.method} ${req.originalUrl} → ${targetUrl}`);
|
||||
|
||||
res.setTimeout(30000, () => {
|
||||
console.error('❌ [UNISON PROXY] Response timeout');
|
||||
if (!res.headersSent) {
|
||||
res.status(504).json({ error: 'Gateway timeout', service: 'unison' });
|
||||
}
|
||||
});
|
||||
|
||||
const options = {
|
||||
method: req.method,
|
||||
url: targetUrl,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'API-Gateway/1.0',
|
||||
'Connection': 'keep-alive',
|
||||
'Authorization': req.headers.authorization,
|
||||
'X-User-ID': req.user?.id || req.user?.userId,
|
||||
'X-User-Role': req.user?.role,
|
||||
},
|
||||
timeout: 25000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0
|
||||
};
|
||||
|
||||
if (req.method === 'POST' || req.method === 'PUT' || req.method === 'PATCH') {
|
||||
options.data = req.body || {};
|
||||
console.log(`📦 [UNISON PROXY] Request body:`, JSON.stringify(req.body));
|
||||
}
|
||||
|
||||
axios(options)
|
||||
.then(response => {
|
||||
console.log(`✅ [UNISON PROXY] Response: ${response.status} for ${req.method} ${req.originalUrl}`);
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
console.error(`❌ [UNISON PROXY ERROR]:`, error.message);
|
||||
if (!res.headersSent) {
|
||||
if (error.response) {
|
||||
res.status(error.response.status).json(error.response.data);
|
||||
} else {
|
||||
res.status(502).json({
|
||||
error: 'Unison service unavailable',
|
||||
message: error.code || error.message,
|
||||
service: 'unison'
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Unified recommendations shortcut
|
||||
console.log('🔧 Registering /api/recommendations proxy route (shortcut to Unison)...');
|
||||
app.use('/api/recommendations',
|
||||
createServiceLimiter(200),
|
||||
(req, res, next) => next(),
|
||||
(req, res, next) => {
|
||||
const unisonUrl = serviceTargets.UNISON_URL;
|
||||
// Keep path under /api/recommendations/* when forwarding to Unison
|
||||
const targetUrl = `${unisonUrl}${req.originalUrl}`;
|
||||
console.log(`🔥 [UNIFIED PROXY] ${req.method} ${req.originalUrl} → ${targetUrl}`);
|
||||
|
||||
res.setTimeout(30000, () => {
|
||||
console.error('❌ [UNIFIED PROXY] Response timeout');
|
||||
if (!res.headersSent) {
|
||||
res.status(504).json({ error: 'Gateway timeout', service: 'unison' });
|
||||
}
|
||||
});
|
||||
|
||||
const options = {
|
||||
method: req.method,
|
||||
url: targetUrl,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'API-Gateway/1.0',
|
||||
'Connection': 'keep-alive',
|
||||
'Authorization': req.headers.authorization,
|
||||
'X-User-ID': req.user?.id || req.user?.userId,
|
||||
'X-User-Role': req.user?.role,
|
||||
},
|
||||
timeout: 25000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0
|
||||
};
|
||||
|
||||
if (req.method === 'POST' || req.method === 'PUT' || req.method === 'PATCH') {
|
||||
options.data = req.body || {};
|
||||
console.log(`📦 [UNIFIED PROXY] Request body:`, JSON.stringify(req.body));
|
||||
}
|
||||
|
||||
axios(options)
|
||||
.then(response => {
|
||||
console.log(`✅ [UNIFIED PROXY] Response: ${response.status} for ${req.method} ${req.originalUrl}`);
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
console.error(`❌ [UNIFIED PROXY ERROR]:`, error.message);
|
||||
if (!res.headersSent) {
|
||||
if (error.response) {
|
||||
res.status(error.response.status).json(error.response.data);
|
||||
} else {
|
||||
res.status(502).json({
|
||||
error: 'Unison service unavailable',
|
||||
message: error.code || error.message,
|
||||
service: 'unison'
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Convenience alias: POST /api/recommendations -> POST /api/recommendations/unified
|
||||
console.log('🔧 Registering /api/recommendations (root) alias to unified...');
|
||||
app.post('/api/recommendations',
|
||||
createServiceLimiter(200),
|
||||
(req, res, next) => {
|
||||
const unisonUrl = serviceTargets.UNISON_URL;
|
||||
const targetUrl = `${unisonUrl}/api/recommendations/unified`;
|
||||
console.log(`🔥 [UNIFIED ROOT ALIAS] ${req.method} ${req.originalUrl} → ${targetUrl}`);
|
||||
|
||||
const options = {
|
||||
method: 'POST',
|
||||
url: targetUrl,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'API-Gateway/1.0',
|
||||
'Connection': 'keep-alive',
|
||||
'Authorization': req.headers.authorization,
|
||||
'X-User-ID': req.user?.id || req.user?.userId,
|
||||
'X-User-Role': req.user?.role,
|
||||
},
|
||||
timeout: 25000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0,
|
||||
data: req.body || {}
|
||||
};
|
||||
|
||||
axios(options)
|
||||
.then(response => {
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
if (!res.headersSent) {
|
||||
if (error.response) {
|
||||
res.status(error.response.status).json(error.response.data);
|
||||
} else {
|
||||
res.status(502).json({
|
||||
error: 'Unison service unavailable',
|
||||
message: error.code || error.message,
|
||||
service: 'unison'
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Backward-compatible alias: /ai/recommendations -> Unison /api/recommendations
|
||||
console.log('🔧 Registering /ai/recommendations alias to Unison...');
|
||||
app.use('/ai/recommendations',
|
||||
createServiceLimiter(200),
|
||||
(req, res, next) => next(),
|
||||
(req, res, next) => {
|
||||
const unisonUrl = serviceTargets.UNISON_URL;
|
||||
const targetUrl = `${unisonUrl}/api/recommendations${req.originalUrl.replace(/^\/ai\/recommendations/, '')}`;
|
||||
console.log(`🔥 [AI→UNIFIED PROXY] ${req.method} ${req.originalUrl} → ${targetUrl}`);
|
||||
|
||||
res.setTimeout(30000, () => {
|
||||
console.error('❌ [AI→UNIFIED PROXY] Response timeout');
|
||||
if (!res.headersSent) {
|
||||
res.status(504).json({ error: 'Gateway timeout', service: 'unison' });
|
||||
}
|
||||
});
|
||||
|
||||
const options = {
|
||||
method: req.method,
|
||||
url: targetUrl,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'API-Gateway/1.0',
|
||||
'Connection': 'keep-alive',
|
||||
'Authorization': req.headers.authorization,
|
||||
'X-User-ID': req.user?.id || req.user?.userId,
|
||||
'X-User-Role': req.user?.role,
|
||||
},
|
||||
timeout: 25000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0
|
||||
};
|
||||
|
||||
if (req.method === 'POST' || req.method === 'PUT' || req.method === 'PATCH') {
|
||||
options.data = req.body || {};
|
||||
console.log(`📦 [AI→UNIFIED PROXY] Request body:`, JSON.stringify(req.body));
|
||||
}
|
||||
|
||||
axios(options)
|
||||
.then(response => {
|
||||
console.log(`✅ [AI→UNIFIED PROXY] Response: ${response.status} for ${req.method} ${req.originalUrl}`);
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
console.error(`❌ [AI→UNIFIED PROXY ERROR]:`, error.message);
|
||||
if (!res.headersSent) {
|
||||
if (error.response) {
|
||||
res.status(error.response.status).json(error.response.data);
|
||||
} else {
|
||||
res.status(502).json({
|
||||
error: 'Unison service unavailable',
|
||||
message: error.code || error.message,
|
||||
service: 'unison'
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Features (Template Manager) - expose /api/features via gateway
|
||||
console.log('🔧 Registering /api/features proxy route...');
|
||||
app.use('/api/features',
|
||||
@ -714,7 +1048,7 @@ app.use('/api/github',
|
||||
(req, res, next) => {
|
||||
// Allow unauthenticated access for read-only requests and specific public endpoints
|
||||
if (req.method === 'GET') {
|
||||
return next();
|
||||
return authMiddleware.verifyTokenOptional(req, res, () => authMiddleware.forwardUserContext(req, res, next));
|
||||
}
|
||||
// Allowlist certain POST endpoints that must be public to initiate flows
|
||||
const url = req.originalUrl || '';
|
||||
@ -722,10 +1056,11 @@ app.use('/api/github',
|
||||
url.startsWith('/api/github/test-access') ||
|
||||
url.startsWith('/api/github/auth/github') ||
|
||||
url.startsWith('/api/github/auth/github/callback') ||
|
||||
url.startsWith('/api/github/auth/github/status')
|
||||
url.startsWith('/api/github/auth/github/status') ||
|
||||
url.startsWith('/api/github/attach-repository')
|
||||
);
|
||||
if (isPublicGithubEndpoint) {
|
||||
return next();
|
||||
return authMiddleware.verifyTokenOptional(req, res, () => authMiddleware.forwardUserContext(req, res, next));
|
||||
}
|
||||
return authMiddleware.verifyToken(req, res, () => authMiddleware.forwardUserContext(req, res, next));
|
||||
},
|
||||
@ -733,8 +1068,8 @@ app.use('/api/github',
|
||||
const gitServiceUrl = serviceTargets.GIT_INTEGRATION_URL;
|
||||
console.log(`🔥 [GIT PROXY] ${req.method} ${req.originalUrl} → ${gitServiceUrl}${req.originalUrl}`);
|
||||
|
||||
// Set response timeout to prevent hanging
|
||||
res.setTimeout(15000, () => {
|
||||
// Set response timeout to prevent hanging (increased for repository operations)
|
||||
res.setTimeout(60000, () => {
|
||||
console.error('❌ [GIT PROXY] Response timeout');
|
||||
if (!res.headersSent) {
|
||||
res.status(504).json({ error: 'Gateway timeout', service: 'git-integration' });
|
||||
@ -753,7 +1088,7 @@ app.use('/api/github',
|
||||
'X-User-Role': req.user?.role,
|
||||
'Authorization': req.headers.authorization
|
||||
},
|
||||
timeout: 8000,
|
||||
timeout: 45000,
|
||||
validateStatus: () => true,
|
||||
maxRedirects: 0
|
||||
};
|
||||
@ -767,6 +1102,13 @@ app.use('/api/github',
|
||||
axios(options)
|
||||
.then(response => {
|
||||
console.log(`✅ [GIT PROXY] Response: ${response.status} for ${req.method} ${req.originalUrl}`);
|
||||
// Forward redirects so browser follows OAuth location
|
||||
if (response.status >= 300 && response.status < 400 && response.headers?.location) {
|
||||
const location = response.headers.location;
|
||||
console.log(`↪️ [GIT PROXY] Forwarding redirect to ${location}`);
|
||||
if (!res.headersSent) return res.redirect(response.status, location);
|
||||
return;
|
||||
}
|
||||
if (!res.headersSent) {
|
||||
res.status(response.status).json(response.data);
|
||||
}
|
||||
@ -930,7 +1272,9 @@ app.get('/', (req, res) => {
|
||||
deploy: '/api/deploy',
|
||||
dashboard: '/api/dashboard',
|
||||
self_improving: '/api/self-improving',
|
||||
mockup: '/api/mockup'
|
||||
mockup: '/api/mockup',
|
||||
unison: '/api/unison',
|
||||
unified: '/api/recommendations'
|
||||
},
|
||||
websocket: {
|
||||
endpoint: '/socket.io/',
|
||||
|
||||
@ -8,6 +8,9 @@ COPY package*.json ./
|
||||
# Install dependencies
|
||||
RUN npm install
|
||||
|
||||
# Install git and tools required for healthchecks and HTTPS clones
|
||||
RUN apk add --no-cache git curl ca-certificates openssh-client && update-ca-certificates
|
||||
|
||||
# Copy source code
|
||||
COPY . .
|
||||
|
||||
@ -15,8 +18,10 @@ COPY . .
|
||||
RUN addgroup -g 1001 -S nodejs
|
||||
RUN adduser -S git-integration -u 1001
|
||||
|
||||
# Change ownership
|
||||
# Create git-repos directory and set proper permissions
|
||||
RUN mkdir -p /app/git-repos
|
||||
RUN chown -R git-integration:nodejs /app
|
||||
RUN chmod -R 755 /app/git-repos
|
||||
USER git-integration
|
||||
|
||||
# Expose port
|
||||
|
||||
13 services/git-integration/package-lock.json generated
@ -15,6 +15,7 @@
|
||||
"express-session": "^1.18.2",
|
||||
"helmet": "^7.1.0",
|
||||
"morgan": "^1.10.0",
|
||||
"parse-github-url": "^1.0.3",
|
||||
"pg": "^8.11.3",
|
||||
"uuid": "^9.0.1"
|
||||
},
|
||||
@ -1135,6 +1136,18 @@
|
||||
"wrappy": "1"
|
||||
}
|
||||
},
|
||||
"node_modules/parse-github-url": {
|
||||
"version": "1.0.3",
|
||||
"resolved": "https://registry.npmjs.org/parse-github-url/-/parse-github-url-1.0.3.tgz",
|
||||
"integrity": "sha512-tfalY5/4SqGaV/GIGzWyHnFjlpTPTNpENR9Ea2lLldSJ8EWXMsvacWucqY3m3I4YPtas15IxTLQVQ5NSYXPrww==",
|
||||
"license": "MIT",
|
||||
"bin": {
|
||||
"parse-github-url": "cli.js"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 0.10"
|
||||
}
|
||||
},
|
||||
"node_modules/parseurl": {
|
||||
"version": "1.3.3",
|
||||
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
|
||||
|
||||
@ -16,6 +16,7 @@
|
||||
"express-session": "^1.18.2",
|
||||
"helmet": "^7.1.0",
|
||||
"morgan": "^1.10.0",
|
||||
"parse-github-url": "^1.0.3",
|
||||
"pg": "^8.11.3",
|
||||
"uuid": "^9.0.1"
|
||||
},
|
||||
|
||||
@ -11,6 +11,8 @@ const database = require('./config/database');
|
||||
// Import routes
|
||||
const githubRoutes = require('./routes/github-integration.routes');
|
||||
const githubOAuthRoutes = require('./routes/github-oauth');
|
||||
const webhookRoutes = require('./routes/webhook.routes');
|
||||
const vcsRoutes = require('./routes/vcs.routes');
|
||||
|
||||
const app = express();
|
||||
const PORT = process.env.PORT || 8012;
|
||||
@ -37,6 +39,8 @@ app.use(session({
|
||||
// Routes
|
||||
app.use('/api/github', githubRoutes);
|
||||
app.use('/api/github', githubOAuthRoutes);
|
||||
app.use('/api/github', webhookRoutes);
|
||||
app.use('/api/vcs', vcsRoutes);
|
||||
|
||||
// Health check endpoint
|
||||
app.get('/health', (req, res) => {
|
||||
@ -57,7 +61,9 @@ app.get('/', (req, res) => {
|
||||
endpoints: {
|
||||
health: '/health',
|
||||
github: '/api/github',
|
||||
oauth: '/api/github/auth'
|
||||
oauth: '/api/github/auth',
|
||||
webhook: '/api/github/webhook',
|
||||
vcs: '/api/vcs/:provider'
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
@ -0,0 +1,35 @@
|
||||
-- Create table if it does not exist (compatible with existing schemas)
|
||||
CREATE TABLE IF NOT EXISTS webhook_events (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
event_type VARCHAR(100) NOT NULL,
|
||||
processing_status VARCHAR(50) DEFAULT 'pending',
|
||||
created_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Bring table to desired schema (idempotent)
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS action VARCHAR(100);
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS repository_full_name VARCHAR(255);
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS delivery_id VARCHAR(100);
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS metadata JSONB;
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS received_at TIMESTAMP DEFAULT NOW();
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS processed_at TIMESTAMP;
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS error_message TEXT;
|
||||
ALTER TABLE webhook_events ADD COLUMN IF NOT EXISTS updated_at TIMESTAMP DEFAULT NOW();
|
||||
|
||||
-- Create indexes (safe if columns now exist)
|
||||
CREATE INDEX IF NOT EXISTS idx_webhook_events_event_type ON webhook_events(event_type);
|
||||
CREATE INDEX IF NOT EXISTS idx_webhook_events_repository ON webhook_events(repository_full_name);
|
||||
CREATE INDEX IF NOT EXISTS idx_webhook_events_received_at ON webhook_events(received_at);
|
||||
CREATE INDEX IF NOT EXISTS idx_webhook_events_delivery_id ON webhook_events(delivery_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_webhook_events_status ON webhook_events(processing_status);
|
||||
|
||||
-- Add trigger to update timestamp
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'update_webhook_events_updated_at'
|
||||
) THEN
|
||||
CREATE TRIGGER update_webhook_events_updated_at BEFORE UPDATE ON webhook_events
|
||||
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||
END IF;
|
||||
END $$;
|
||||
@ -0,0 +1,52 @@
|
||||
-- Migration 005: GitHub webhook tracking and commit SHA history
|
||||
|
||||
-- Create a durable table for raw webhook deliveries (compat with existing code expectations)
|
||||
CREATE TABLE IF NOT EXISTS github_webhooks (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
delivery_id VARCHAR(120),
|
||||
event_type VARCHAR(100) NOT NULL,
|
||||
action VARCHAR(100),
|
||||
owner_name VARCHAR(120),
|
||||
repository_name VARCHAR(200),
|
||||
repository_id UUID REFERENCES github_repositories(id) ON DELETE SET NULL,
|
||||
ref VARCHAR(255),
|
||||
before_sha VARCHAR(64),
|
||||
after_sha VARCHAR(64),
|
||||
commit_count INTEGER,
|
||||
payload JSONB NOT NULL,
|
||||
processed_at TIMESTAMP,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_github_webhooks_delivery_id ON github_webhooks(delivery_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_github_webhooks_repo ON github_webhooks(owner_name, repository_name);
|
||||
CREATE INDEX IF NOT EXISTS idx_github_webhooks_event_type ON github_webhooks(event_type);
|
||||
|
||||
-- Track commit SHA transitions per repository to detect changes over time
|
||||
CREATE TABLE IF NOT EXISTS repository_commit_events (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
repository_id UUID REFERENCES github_repositories(id) ON DELETE CASCADE,
|
||||
ref VARCHAR(255),
|
||||
before_sha VARCHAR(64),
|
||||
after_sha VARCHAR(64),
|
||||
commit_count INTEGER DEFAULT 0,
|
||||
received_at TIMESTAMP DEFAULT NOW(),
|
||||
created_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_repo_commit_events_repo ON repository_commit_events(repository_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_repo_commit_events_sha ON repository_commit_events(after_sha);
|
||||
|
||||
-- Safe trigger creation
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'update_github_webhooks_updated_at'
|
||||
) THEN
|
||||
CREATE TRIGGER update_github_webhooks_updated_at BEFORE UPDATE ON github_webhooks
|
||||
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
|
||||
END IF;
|
||||
END $$;
|
||||
|
||||
|
||||
@ -0,0 +1,31 @@
|
||||
-- Migration 006: Store commit messages and per-file changes from push webhooks
|
||||
|
||||
-- Per-commit details linked to an attached repository
|
||||
CREATE TABLE IF NOT EXISTS repository_commit_details (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
repository_id UUID REFERENCES github_repositories(id) ON DELETE CASCADE,
|
||||
commit_sha VARCHAR(64) NOT NULL,
|
||||
author_name VARCHAR(200),
|
||||
author_email VARCHAR(320),
|
||||
message TEXT,
|
||||
url TEXT,
|
||||
committed_at TIMESTAMP DEFAULT NOW(),
|
||||
created_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS uq_repo_commit_sha ON repository_commit_details(repository_id, commit_sha);
|
||||
CREATE INDEX IF NOT EXISTS idx_repo_commit_created_at ON repository_commit_details(created_at);
|
||||
|
||||
-- Per-file changes for each commit
|
||||
CREATE TABLE IF NOT EXISTS repository_commit_files (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
commit_id UUID REFERENCES repository_commit_details(id) ON DELETE CASCADE,
|
||||
change_type VARCHAR(20) NOT NULL, -- added | modified | removed
|
||||
file_path TEXT NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_commit_files_commit_id ON repository_commit_files(commit_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_commit_files_path ON repository_commit_files(file_path);
|
||||
|
||||
|
||||
@ -0,0 +1,6 @@
|
||||
-- 007_add_last_synced_commit.sql
|
||||
ALTER TABLE github_repositories
|
||||
ADD COLUMN IF NOT EXISTS last_synced_commit_sha VARCHAR(64),
|
||||
ADD COLUMN IF NOT EXISTS last_synced_at TIMESTAMP WITH TIME ZONE;
|
||||
|
||||
|
||||
@ -0,0 +1,36 @@
|
||||
-- 008_provider_token_tables.sql
|
||||
|
||||
CREATE TABLE IF NOT EXISTS gitlab_user_tokens (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
access_token TEXT NOT NULL,
|
||||
gitlab_username TEXT,
|
||||
gitlab_user_id TEXT,
|
||||
scopes JSONB,
|
||||
expires_at TIMESTAMP NULL,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS bitbucket_user_tokens (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
access_token TEXT NOT NULL,
|
||||
bitbucket_username TEXT,
|
||||
bitbucket_user_id TEXT,
|
||||
scopes JSONB,
|
||||
expires_at TIMESTAMP NULL,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS gitea_user_tokens (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
access_token TEXT NOT NULL,
|
||||
gitea_username TEXT,
|
||||
gitea_user_id TEXT,
|
||||
scopes JSONB,
|
||||
expires_at TIMESTAMP NULL,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
|
||||
@ -0,0 +1,68 @@
|
||||
-- 009_provider_webhook_tables.sql
|
||||
|
||||
-- GitLab webhooks table
|
||||
CREATE TABLE IF NOT EXISTS gitlab_webhooks (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
delivery_id TEXT,
|
||||
event_type TEXT NOT NULL,
|
||||
action TEXT,
|
||||
owner_name TEXT NOT NULL,
|
||||
repository_name TEXT NOT NULL,
|
||||
repository_id UUID REFERENCES github_repositories(id),
|
||||
ref TEXT,
|
||||
before_sha TEXT,
|
||||
after_sha TEXT,
|
||||
commit_count INTEGER DEFAULT 0,
|
||||
payload JSONB,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Bitbucket webhooks table
|
||||
CREATE TABLE IF NOT EXISTS bitbucket_webhooks (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
delivery_id TEXT,
|
||||
event_type TEXT NOT NULL,
|
||||
action TEXT,
|
||||
owner_name TEXT NOT NULL,
|
||||
repository_name TEXT NOT NULL,
|
||||
repository_id UUID REFERENCES github_repositories(id),
|
||||
ref TEXT,
|
||||
before_sha TEXT,
|
||||
after_sha TEXT,
|
||||
commit_count INTEGER DEFAULT 0,
|
||||
payload JSONB,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Gitea webhooks table
|
||||
CREATE TABLE IF NOT EXISTS gitea_webhooks (
|
||||
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
|
||||
delivery_id TEXT,
|
||||
event_type TEXT NOT NULL,
|
||||
action TEXT,
|
||||
owner_name TEXT NOT NULL,
|
||||
repository_name TEXT NOT NULL,
|
||||
repository_id UUID REFERENCES github_repositories(id),
|
||||
ref TEXT,
|
||||
before_sha TEXT,
|
||||
after_sha TEXT,
|
||||
commit_count INTEGER DEFAULT 0,
|
||||
payload JSONB,
|
||||
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
|
||||
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Indexes for better performance
|
||||
CREATE INDEX IF NOT EXISTS idx_gitlab_webhooks_repository_id ON gitlab_webhooks(repository_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_gitlab_webhooks_created_at ON gitlab_webhooks(created_at);
|
||||
CREATE INDEX IF NOT EXISTS idx_gitlab_webhooks_event_type ON gitlab_webhooks(event_type);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_bitbucket_webhooks_repository_id ON bitbucket_webhooks(repository_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_bitbucket_webhooks_created_at ON bitbucket_webhooks(created_at);
|
||||
CREATE INDEX IF NOT EXISTS idx_bitbucket_webhooks_event_type ON bitbucket_webhooks(event_type);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_gitea_webhooks_repository_id ON gitea_webhooks(repository_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_gitea_webhooks_created_at ON gitea_webhooks(created_at);
|
||||
CREATE INDEX IF NOT EXISTS idx_gitea_webhooks_event_type ON gitea_webhooks(event_type);
|
||||
@ -0,0 +1,18 @@
|
||||
-- Migration 010: Remove template_id columns and related indexes
|
||||
-- This migration removes template_id references from git-integration tables
|
||||
|
||||
-- Drop indexes that reference template_id first
|
||||
DROP INDEX IF EXISTS idx_github_repos_template_id;
|
||||
DROP INDEX IF EXISTS idx_github_repos_template_user;
|
||||
DROP INDEX IF EXISTS idx_feature_mappings_template_user;
|
||||
|
||||
-- Remove template_id column from github_repositories table
|
||||
ALTER TABLE IF EXISTS github_repositories
|
||||
DROP COLUMN IF EXISTS template_id;
|
||||
|
||||
-- Remove template_id column from feature_codebase_mappings table
|
||||
ALTER TABLE IF EXISTS feature_codebase_mappings
|
||||
DROP COLUMN IF EXISTS template_id;
|
||||
|
||||
-- Note: This migration removes the template_id foreign key relationships
|
||||
-- The tables will now rely on user_id for ownership tracking
|
||||
@ -0,0 +1,32 @@
|
||||
-- Migration 011: Support multiple GitHub accounts per user
|
||||
-- This allows each user to authenticate with multiple GitHub accounts
|
||||
|
||||
-- Add user_id column to github_user_tokens
|
||||
ALTER TABLE github_user_tokens
|
||||
ADD COLUMN IF NOT EXISTS user_id UUID REFERENCES users(id) ON DELETE CASCADE;
|
||||
|
||||
-- Create indexes for faster lookups
|
||||
CREATE INDEX IF NOT EXISTS idx_github_user_tokens_user_id ON github_user_tokens(user_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_github_user_tokens_user_github ON github_user_tokens(user_id, github_username);
|
||||
|
||||
-- Remove the old unique constraint on github_username (if it exists)
|
||||
-- Allow multiple tokens per user, but one token per GitHub account per user
|
||||
DROP INDEX IF EXISTS idx_github_user_tokens_github_username;
|
||||
|
||||
-- Create new unique constraint: one token per GitHub account per user
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS idx_github_user_tokens_unique_user_github
|
||||
ON github_user_tokens(user_id, github_username)
|
||||
WHERE user_id IS NOT NULL;
|
||||
|
||||
-- Add a column to track if this is the primary/default GitHub account for the user
|
||||
ALTER TABLE github_user_tokens
|
||||
ADD COLUMN IF NOT EXISTS is_primary BOOLEAN DEFAULT FALSE;
|
||||
|
||||
-- Create index for primary account lookups
|
||||
CREATE INDEX IF NOT EXISTS idx_github_user_tokens_primary ON github_user_tokens(user_id, is_primary);
|
||||
|
||||
-- Note:
|
||||
-- - Each user can have multiple GitHub accounts
|
||||
-- - Each GitHub account can only be linked once per user
|
||||
-- - One account per user can be marked as primary
|
||||
-- - Repository access will be checked against all user's GitHub accounts
|
||||
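The notes above describe the intended access pattern; as an illustrative query (not part of the migration, and assuming the `github_username` and `created_at` columns from the earlier token migrations), fetching every GitHub token linked to a user with the primary account first would look roughly like:

```sql
-- Illustrative lookup only; the user id below is a placeholder
SELECT github_username, is_primary, created_at
FROM github_user_tokens
WHERE user_id = '00000000-0000-0000-0000-000000000000'
ORDER BY is_primary DESC, created_at ASC;
```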
@ -0,0 +1,10 @@
|
||||
-- Migration 012: Track which user attached/downloaded a repository
|
||||
|
||||
-- Add user_id to github_repositories to associate records with the initiating user
|
||||
ALTER TABLE github_repositories
|
||||
ADD COLUMN IF NOT EXISTS user_id UUID REFERENCES users(id) ON DELETE SET NULL;
|
||||
|
||||
-- Helpful index for filtering user-owned repositories
|
||||
CREATE INDEX IF NOT EXISTS idx_github_repositories_user_id ON github_repositories(user_id);
|
||||
|
||||
|
||||
@ -15,45 +15,45 @@ const fileStorageService = new FileStorageService();
|
||||
// Attach GitHub repository to template
|
||||
router.post('/attach-repository', async (req, res) => {
|
||||
try {
|
||||
const { template_id, repository_url, branch_name } = req.body;
|
||||
const { repository_url, branch_name } = req.body;
|
||||
const userId = req.headers['x-user-id'] || req.query.user_id || req.body.user_id || (req.user && (req.user.id || req.user.userId));
|
||||
|
||||
// Validate input
|
||||
if (!template_id || !repository_url) {
|
||||
if (!repository_url) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
message: 'Template ID and repository URL are required'
|
||||
});
|
||||
}
|
||||
|
||||
// Check if template exists
|
||||
const templateQuery = 'SELECT * FROM templates WHERE id = $1 AND is_active = true';
|
||||
const templateResult = await database.query(templateQuery, [template_id]);
|
||||
|
||||
if (templateResult.rows.length === 0) {
|
||||
return res.status(404).json({
|
||||
success: false,
|
||||
message: 'Template not found'
|
||||
message: 'Repository URL is required'
|
||||
});
|
||||
}
|
||||
|
||||
// Parse GitHub URL
|
||||
const { owner, repo, branch } = githubService.parseGitHubUrl(repository_url);
|
||||
|
||||
// Check repository access
|
||||
const accessCheck = await githubService.checkRepositoryAccess(owner, repo);
|
||||
// Check repository access with user-specific tokens
|
||||
const accessCheck = await githubService.checkRepositoryAccessWithUser(owner, repo, userId);
|
||||
|
||||
if (!accessCheck.hasAccess) {
|
||||
if (accessCheck.requiresAuth) {
|
||||
// Check if we have OAuth token
|
||||
const tokenRecord = await oauthService.getToken();
|
||||
if (!tokenRecord) {
|
||||
return res.status(401).json({
|
||||
success: false,
|
||||
message: 'GitHub authentication required for this repository',
|
||||
requires_auth: true,
|
||||
auth_url: `/api/github/auth/github`
|
||||
});
|
||||
}
|
||||
if (accessCheck.requiresAuth || accessCheck.authError) {
|
||||
// Generate an auth URL that encodes the current user and returns absolute via gateway
|
||||
const state = Math.random().toString(36).substring(7);
|
||||
const userIdForAuth = userId || null;
|
||||
const rawAuthUrl = oauthService.getAuthUrl(state, userIdForAuth);
|
||||
|
||||
// Prefer returning a gateway URL so frontend doesn't need to know service ports
|
||||
const gatewayBase = process.env.API_GATEWAY_PUBLIC_URL || 'http://localhost:8000';
|
||||
const serviceRelative = '/api/github/auth/github';
|
||||
// redirect=1 makes the endpoint issue a 302 directly to GitHub so UI doesn't land on JSON
|
||||
const serviceAuthUrl = `${gatewayBase}${serviceRelative}?redirect=1&state=${encodeURIComponent(state)}${userIdForAuth ? `&user_id=${encodeURIComponent(userIdForAuth)}` : ''}`;
|
||||
|
||||
return res.status(401).json({
|
||||
success: false,
|
||||
message: accessCheck.error || 'GitHub authentication required for this repository',
|
||||
requires_auth: true,
|
||||
// Return both, frontend can pick the gateway URL
|
||||
auth_url: serviceAuthUrl,
|
||||
service_auth_url: rawAuthUrl,
|
||||
auth_error: accessCheck.authError || false
|
||||
});
|
||||
}
|
||||
|
||||
return res.status(404).json({
|
||||
@ -65,39 +65,66 @@ router.post('/attach-repository', async (req, res) => {
|
||||
// Get repository information from GitHub
|
||||
const repositoryData = await githubService.fetchRepositoryMetadata(owner, repo);
|
||||
|
||||
// Analyze the codebase
|
||||
const codebaseAnalysis = await githubService.analyzeCodebase(owner, repo, branch || branch_name);
|
||||
// Use the actual default branch from repository metadata if the requested branch doesn't exist
|
||||
let actualBranch = branch || branch_name || repositoryData.default_branch || 'main';
|
||||
|
||||
// Validate that the requested branch exists, fallback to default if not
|
||||
try {
|
||||
if (branch || branch_name) {
|
||||
const octokit = await githubService.getAuthenticatedOctokit();
|
||||
await octokit.git.getRef({
|
||||
owner,
|
||||
repo,
|
||||
ref: `heads/${actualBranch}`
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
if (error.status === 404) {
|
||||
console.warn(`Branch ${actualBranch} not found, using default branch: ${repositoryData.default_branch}`);
|
||||
actualBranch = repositoryData.default_branch || 'main';
|
||||
} else {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Store everything in PostgreSQL
|
||||
// Analyze the codebase
|
||||
const codebaseAnalysis = await githubService.analyzeCodebase(owner, repo, actualBranch);
|
||||
|
||||
// Store everything in PostgreSQL (without template_id)
|
||||
const insertQuery = `
|
||||
INSERT INTO github_repositories (
|
||||
template_id, repository_url, repository_name, owner_name,
|
||||
repository_url, repository_name, owner_name,
|
||||
branch_name, is_public, metadata, codebase_analysis, sync_status,
|
||||
requires_auth
|
||||
requires_auth, user_id
|
||||
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
|
||||
RETURNING *
|
||||
`;
|
||||
|
||||
const insertValues = [
|
||||
template_id,
|
||||
repository_url,
|
||||
repo,
|
||||
owner,
|
||||
branch || branch_name || 'main',
|
||||
actualBranch,
|
||||
repositoryData.visibility === 'public',
|
||||
JSON.stringify(repositoryData),
|
||||
JSON.stringify(codebaseAnalysis),
|
||||
'synced',
|
||||
accessCheck.requiresAuth
|
||||
accessCheck.requiresAuth,
|
||||
userId || null
|
||||
];
|
||||
|
||||
const insertResult = await database.query(insertQuery, insertValues);
|
||||
const repositoryRecord = insertResult.rows[0];
|
||||
|
||||
// Download repository with file storage
|
||||
console.log('Downloading repository with storage...');
|
||||
const downloadResult = await githubService.downloadRepositoryWithStorage(
|
||||
owner, repo, branch || branch_name || 'main', repositoryRecord.id
|
||||
// Attempt to auto-create webhook on the attached repository using OAuth token
|
||||
const publicBaseUrl = process.env.PUBLIC_BASE_URL || null; // e.g., your ngrok URL https://xxx.ngrok-free.app
|
||||
const callbackUrl = publicBaseUrl ? `${publicBaseUrl}/api/github/webhook` : null;
|
||||
const webhookResult = await githubService.ensureRepositoryWebhook(owner, repo, callbackUrl);
|
||||
|
||||
// Sync with fallback: try git first, then API
|
||||
console.log('Syncing repository (git first, API fallback)...');
|
||||
const downloadResult = await githubService.syncRepositoryWithFallback(
|
||||
owner, repo, actualBranch, repositoryRecord.id
|
||||
);
|
||||
|
||||
if (!downloadResult.success) {
|
||||
@ -105,33 +132,6 @@ router.post('/attach-repository', async (req, res) => {
|
||||
console.warn('Repository download failed:', downloadResult.error);
|
||||
}
|
||||
|
||||
// Create feature-codebase mappings
|
||||
const featureQuery = 'SELECT id FROM template_features WHERE template_id = $1';
|
||||
const featureResult = await database.query(featureQuery, [template_id]);
|
||||
|
||||
if (featureResult.rows.length > 0) {
|
||||
const mappingValues = [];
|
||||
const mappingParams = [];
|
||||
let paramIndex = 1;
|
||||
|
||||
for (const feature of featureResult.rows) {
|
||||
mappingValues.push(`(uuid_generate_v4(), $${paramIndex++}, $${paramIndex++}, $${paramIndex++}, $${paramIndex++})`);
|
||||
mappingParams.push(
|
||||
feature.id,
|
||||
repositoryRecord.id,
|
||||
'[]', // Empty paths for now
|
||||
'{}' // Empty snippets for now
|
||||
);
|
||||
}
|
||||
|
||||
const mappingQuery = `
|
||||
INSERT INTO feature_codebase_mappings (id, feature_id, repository_id, code_paths, code_snippets)
|
||||
VALUES ${mappingValues.join(', ')}
|
||||
`;
|
||||
|
||||
await database.query(mappingQuery, mappingParams);
|
||||
}
|
||||
|
||||
// Get storage information
|
||||
const storageInfo = await githubService.getRepositoryStorage(repositoryRecord.id);
|
||||
|
||||
@ -140,7 +140,6 @@ router.post('/attach-repository', async (req, res) => {
|
||||
message: 'Repository attached successfully',
|
||||
data: {
|
||||
repository_id: repositoryRecord.id,
|
||||
template_id: repositoryRecord.template_id,
|
||||
repository_name: repositoryRecord.repository_name,
|
||||
owner_name: repositoryRecord.owner_name,
|
||||
branch_name: repositoryRecord.branch_name,
|
||||
@ -161,6 +160,55 @@ router.post('/attach-repository', async (req, res) => {
|
||||
});
|
||||
}
|
||||
});
|
||||
// Get repository diff between two SHAs (unified patch)
|
||||
router.get('/repository/:id/diff', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const { from, to, path: dirPath } = req.query;
|
||||
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const record = repoResult.rows[0];
|
||||
const { owner, repo } = githubService.parseGitHubUrl(record.repository_url);
|
||||
// Always use stored branch_name to avoid mismatches like master/main
|
||||
const targetBranch = record.branch_name || 'main';
|
||||
const patch = await githubService.getRepositoryDiff(owner, repo, targetBranch, from || record.last_synced_commit_sha, to || 'HEAD');
|
||||
res.json({ success: true, data: { patch, from: from || record.last_synced_commit_sha, to: to || 'HEAD' } });
|
||||
} catch (error) {
|
||||
console.error('Error getting diff:', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to get diff' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get list of changed files since a SHA
|
||||
router.get('/repository/:id/changes', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const { since } = req.query;
|
||||
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const record = repoResult.rows[0];
|
||||
const { owner, repo, branch } = githubService.parseGitHubUrl(record.repository_url);
|
||||
|
||||
const sinceSha = since || record.last_synced_commit_sha;
|
||||
if (!sinceSha) {
|
||||
return res.status(400).json({ success: false, message: 'since SHA is required or must be available as last_synced_commit_sha' });
|
||||
}
|
||||
|
||||
const changes = await githubService.getRepositoryChangesSince(owner, repo, branch || record.branch_name, sinceSha);
|
||||
res.json({ success: true, data: { since: sinceSha, changes } });
|
||||
} catch (error) {
|
||||
console.error('Error getting changes:', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to get changes' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get repository information for a template
|
||||
router.get('/template/:id/repository', async (req, res) => {
|
||||
@ -481,8 +529,8 @@ router.post('/repository/:id/sync', async (req, res) => {
|
||||
// Clean up existing storage
|
||||
await githubService.cleanupRepositoryStorage(id);
|
||||
|
||||
// Re-download with storage
|
||||
const downloadResult = await githubService.downloadRepositoryWithStorage(
|
||||
// Re-sync with fallback (git first, API fallback)
|
||||
const downloadResult = await githubService.syncRepositoryWithFallback(
|
||||
owner, repo, branch || repository.branch_name, id
|
||||
);
|
||||
|
||||
@ -558,4 +606,4 @@ router.delete('/repository/:id', async (req, res) => {
|
||||
}
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
module.exports = router;
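For reference, a minimal sketch of driving the new diff and changes endpoints from a Node script, assuming this router is mounted at `/api/github` (as the webhook callback URL above suggests) and the service listens on port 8012; the repository id and SHA below are placeholders, not values from this commit.

```javascript
// Hypothetical smoke test for the diff/changes endpoints added above.
// BASE_URL, REPO_ID and the `since` SHA are placeholders.
const BASE_URL = process.env.GIT_INTEGRATION_URL || 'http://localhost:8012';
const REPO_ID = '00000000-0000-0000-0000-000000000000';

async function main() {
  // Unified patch between the last synced commit and HEAD
  const diffRes = await fetch(`${BASE_URL}/api/github/repository/${REPO_ID}/diff`);
  const diff = await diffRes.json();
  console.log(diff.data && diff.data.patch ? diff.data.patch.slice(0, 200) : diff);

  // Changed files since a given SHA (falls back to last_synced_commit_sha)
  const changesRes = await fetch(`${BASE_URL}/api/github/repository/${REPO_ID}/changes?since=abc123`);
  console.log(await changesRes.json());
}

main().catch(console.error);
```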
|
||||
@ -5,11 +5,31 @@ const GitHubOAuthService = require('../services/github-oauth');
|
||||
|
||||
const oauthService = new GitHubOAuthService();
|
||||
|
||||
// Initiate GitHub OAuth flow
|
||||
// Initiate GitHub OAuth flow (user_id is required; see the check below). If redirect=1, respond with a 302 to GitHub.
|
||||
router.get('/auth/github', async (req, res) => {
|
||||
try {
|
||||
const state = Math.random().toString(36).substring(7);
|
||||
const authUrl = oauthService.getAuthUrl(state);
|
||||
const userId =
|
||||
req.query.user_id ||
|
||||
(req.body && req.body.user_id) ||
|
||||
req.headers['x-user-id'] ||
|
||||
(req.cookies && (req.cookies.user_id || req.cookies.uid)) ||
|
||||
(req.session && req.session.user && (req.session.user.id || req.session.user.userId)) ||
|
||||
(req.user && (req.user.id || req.user.userId));
|
||||
|
||||
if (!userId) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
message: 'user_id is required to initiate GitHub authentication'
|
||||
});
|
||||
}
|
||||
console.log('[GitHub OAuth] /auth/github resolved user_id =', userId || null);
|
||||
const authUrl = oauthService.getAuthUrl(state, userId || null);
|
||||
|
||||
const shouldRedirect = ['1', 'true', 'yes'].includes(String(req.query.redirect || '').toLowerCase());
|
||||
if (shouldRedirect) {
|
||||
return res.redirect(302, authUrl);
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
@ -32,6 +52,24 @@ router.get('/auth/github', async (req, res) => {
|
||||
router.get('/auth/github/callback', async (req, res) => {
|
||||
try {
|
||||
const { code, state } = req.query;
|
||||
// user_id may arrive as query param or embedded in the state
|
||||
let user_id =
|
||||
req.query.user_id ||
|
||||
(req.body && req.body.user_id) ||
|
||||
req.headers['x-user-id'] ||
|
||||
(req.cookies && (req.cookies.user_id || req.cookies.uid)) ||
|
||||
(req.session && req.session.user && (req.session.user.id || req.session.user.userId)) ||
|
||||
(req.user && (req.user.id || req.user.userId));
|
||||
if (!user_id && typeof state === 'string' && state.includes('|uid=')) {
|
||||
try { user_id = state.split('|uid=')[1]; } catch {}
|
||||
}
|
||||
|
||||
if (!user_id) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
message: 'user_id is required to complete GitHub authentication'
|
||||
});
|
||||
}
|
||||
|
||||
if (!code) {
|
||||
return res.status(400).json({
|
||||
@ -46,8 +84,9 @@ router.get('/auth/github/callback', async (req, res) => {
|
||||
// Get user info from GitHub
|
||||
const githubUser = await oauthService.getUserInfo(accessToken);
|
||||
|
||||
// Store token
|
||||
const tokenRecord = await oauthService.storeToken(accessToken, githubUser);
|
||||
// Store token with user context (if provided)
|
||||
console.log('[GitHub OAuth] callback about to store token for user_id =', user_id || null);
|
||||
const tokenRecord = await oauthService.storeToken(accessToken, githubUser, user_id || null);
|
||||
|
||||
// Redirect back to frontend if configured
|
||||
const frontendUrl = process.env.FRONTEND_URL || 'http://localhost:3000';
|
||||
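The callback above recovers `user_id` from a `|uid=` suffix on the OAuth `state`. The matching `getAuthUrl` change is not shown in this hunk, so the following is only a sketch of an encoding consistent with that parsing; the scope string and authorize URL parameters are assumptions, not the service's actual code.

```javascript
// Sketch only (not the actual GitHubOAuthService implementation): embed the
// user id in the OAuth state so the callback's state.split('|uid=')[1] works.
function buildGitHubAuthUrl(clientId, redirectUri, state, userId = null) {
  const fullState = userId ? `${state}|uid=${userId}` : state;
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: 'repo read:user',   // assumed scopes for illustration
    state: fullState
  });
  return `https://github.com/login/oauth/authorize?${params.toString()}`;
}
```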
@ -79,27 +118,12 @@ router.get('/auth/github/callback', async (req, res) => {
|
||||
// Get GitHub connection status
|
||||
router.get('/auth/github/status', async (req, res) => {
|
||||
try {
|
||||
const tokenRecord = await oauthService.getToken();
|
||||
const authStatus = await oauthService.getAuthStatus();
|
||||
|
||||
if (tokenRecord) {
|
||||
res.json({
|
||||
success: true,
|
||||
data: {
|
||||
connected: true,
|
||||
github_username: tokenRecord.github_username,
|
||||
github_user_id: tokenRecord.github_user_id,
|
||||
connected_at: tokenRecord.created_at,
|
||||
scopes: tokenRecord.scopes
|
||||
}
|
||||
});
|
||||
} else {
|
||||
res.json({
|
||||
success: true,
|
||||
data: {
|
||||
connected: false
|
||||
}
|
||||
});
|
||||
}
|
||||
res.json({
|
||||
success: true,
|
||||
data: authStatus
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
console.error('Error checking GitHub status:', error);
|
||||
|
||||
511
services/git-integration/src/routes/vcs.routes.js
Normal file
511
services/git-integration/src/routes/vcs.routes.js
Normal file
@ -0,0 +1,511 @@
|
||||
// routes/vcs.routes.js
|
||||
const express = require('express');
|
||||
const router = express.Router({ mergeParams: true });
|
||||
const providerRegistry = require('../services/provider-registry');
|
||||
const database = require('../config/database');
|
||||
const FileStorageService = require('../services/file-storage.service');
|
||||
|
||||
const fileStorageService = new FileStorageService();
|
||||
const GitLabOAuthService = require('../services/gitlab-oauth');
|
||||
const BitbucketOAuthService = require('../services/bitbucket-oauth');
|
||||
const GiteaOAuthService = require('../services/gitea-oauth');
|
||||
const VcsWebhookService = require('../services/vcs-webhook.service');
|
||||
|
||||
const vcsWebhookService = new VcsWebhookService();
|
||||
|
||||
function getProvider(req) {
|
||||
const providerKey = (req.params.provider || '').toLowerCase();
|
||||
return providerRegistry.resolve(providerKey);
|
||||
}
|
||||
|
||||
function getOAuthService(providerKey) {
|
||||
if (providerKey === 'gitlab') return new GitLabOAuthService();
|
||||
if (providerKey === 'bitbucket') return new BitbucketOAuthService();
|
||||
if (providerKey === 'gitea') return new GiteaOAuthService();
|
||||
return null;
|
||||
}
|
||||
|
||||
function extractEventType(providerKey, payload) {
|
||||
switch (providerKey) {
|
||||
case 'gitlab':
|
||||
return payload.object_kind || (payload.ref ? 'push' : 'unknown');
|
||||
case 'bitbucket':
|
||||
return (payload.push && 'push') || (payload.pullrequest && 'pull_request') || 'unknown';
|
||||
case 'gitea':
|
||||
return payload.action || (payload.ref ? 'push' : 'unknown');
|
||||
default:
|
||||
return 'unknown';
|
||||
}
|
||||
}
|
||||
|
||||
// Attach repository (provider-agnostic)
|
||||
router.post('/:provider/attach-repository', async (req, res) => {
|
||||
try {
|
||||
const provider = getProvider(req);
|
||||
const { template_id, repository_url, branch_name } = req.body;
|
||||
const userId = req.headers['x-user-id'] || req.query.user_id || req.body.user_id || (req.user && (req.user.id || req.user.userId));
|
||||
|
||||
if (!template_id || !repository_url) {
|
||||
return res.status(400).json({ success: false, message: 'Template ID and repository URL are required' });
|
||||
}
|
||||
|
||||
const templateResult = await database.query('SELECT * FROM templates WHERE id = $1 AND is_active = true', [template_id]);
|
||||
if (templateResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Template not found' });
|
||||
}
|
||||
|
||||
const { owner, repo, branch } = provider.parseRepoUrl(repository_url);
|
||||
const accessCheck = await provider.checkRepositoryAccess(owner, repo);
|
||||
|
||||
if (!accessCheck.hasAccess) {
|
||||
if (accessCheck.requiresAuth) {
|
||||
// Check if we have OAuth token for this provider
|
||||
const providerKey = (req.params.provider || '').toLowerCase();
|
||||
const oauthService = getOAuthService(providerKey);
|
||||
if (oauthService) {
|
||||
const tokenRecord = await oauthService.getToken();
|
||||
if (!tokenRecord) {
|
||||
return res.status(401).json({
|
||||
success: false,
|
||||
message: `${providerKey.charAt(0).toUpperCase() + providerKey.slice(1)} authentication required for this repository`,
|
||||
requires_auth: true,
|
||||
auth_url: `/api/vcs/${providerKey}/auth/start`
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return res.status(404).json({ success: false, message: accessCheck.error || 'Repository not accessible' });
|
||||
}
|
||||
|
||||
const repositoryData = await provider.fetchRepositoryMetadata(owner, repo);
|
||||
let actualBranch = branch || branch_name || repositoryData.default_branch || 'main';
|
||||
|
||||
try {
|
||||
// No-op for non-GitHub providers if not supported; adapters can throw if needed
|
||||
} catch (_) {}
|
||||
|
||||
// Preliminary analysis (may be refined after full sync)
|
||||
let codebaseAnalysis = await provider.analyzeCodebase(owner, repo, actualBranch);
|
||||
|
||||
// For backward-compatibility, insert into github_repositories for now
|
||||
const insertQuery = `
|
||||
INSERT INTO github_repositories (
|
||||
template_id, repository_url, repository_name, owner_name,
|
||||
branch_name, is_public, metadata, codebase_analysis, sync_status,
|
||||
requires_auth, user_id
|
||||
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
|
||||
RETURNING *
|
||||
`;
|
||||
const insertValues = [
|
||||
template_id,
|
||||
repository_url,
|
||||
repo,
|
||||
owner,
|
||||
actualBranch,
|
||||
repositoryData.visibility === 'public',
|
||||
JSON.stringify(repositoryData),
|
||||
JSON.stringify(codebaseAnalysis),
|
||||
'synced',
|
||||
accessCheck.requiresAuth,
|
||||
userId || null
|
||||
];
|
||||
const insertResult = await database.query(insertQuery, insertValues);
|
||||
const repositoryRecord = insertResult.rows[0];
|
||||
|
||||
const publicBaseUrl = process.env.PUBLIC_BASE_URL || null;
|
||||
const callbackUrl = publicBaseUrl ? `${publicBaseUrl}/api/vcs/${req.params.provider}/webhook` : null;
|
||||
try { await provider.ensureRepositoryWebhook(owner, repo, callbackUrl); } catch (_) {}
|
||||
|
||||
const downloadResult = await provider.syncRepositoryWithFallback(owner, repo, actualBranch, repositoryRecord.id);
|
||||
|
||||
// Recompute analysis from indexed storage for accurate counts
|
||||
try {
|
||||
const aggQuery = `
|
||||
SELECT
|
||||
COALESCE(SUM(rf.file_size_bytes), 0) AS total_size,
|
||||
COALESCE(COUNT(rf.id), 0) AS total_files,
|
||||
COALESCE((SELECT COUNT(1) FROM repository_directories rd WHERE rd.storage_id = rs.id), 0) AS total_directories
|
||||
FROM repository_storage rs
|
||||
LEFT JOIN repository_files rf ON rs.id = rf.storage_id
|
||||
WHERE rs.repository_id = $1
|
||||
GROUP BY rs.id
|
||||
LIMIT 1
|
||||
`;
|
||||
const aggRes = await database.query(aggQuery, [repositoryRecord.id]);
|
||||
if (aggRes.rows.length > 0) {
|
||||
const agg = aggRes.rows[0];
|
||||
codebaseAnalysis = {
|
||||
total_files: Number(agg.total_files) || 0,
|
||||
total_size: Number(agg.total_size) || 0,
|
||||
directories: [],
|
||||
branch: actualBranch
|
||||
};
|
||||
// Persist refined analysis
|
||||
await database.query('UPDATE github_repositories SET codebase_analysis = $1, updated_at = NOW() WHERE id = $2', [JSON.stringify(codebaseAnalysis), repositoryRecord.id]);
|
||||
}
|
||||
} catch (_) {}
|
||||
|
||||
// Create empty feature mappings like existing flow
|
||||
const featureResult = await database.query('SELECT id FROM template_features WHERE template_id = $1', [template_id]);
|
||||
if (featureResult.rows.length > 0) {
|
||||
const mappingValues = [];
|
||||
const params = [];
|
||||
let i = 1;
|
||||
for (const feature of featureResult.rows) {
|
||||
mappingValues.push(`(uuid_generate_v4(), $${i++}, $${i++}, $${i++}, $${i++})`);
|
||||
params.push(feature.id, repositoryRecord.id, '[]', '{}');
|
||||
}
|
||||
await database.query(
|
||||
`INSERT INTO feature_codebase_mappings (id, feature_id, repository_id, code_paths, code_snippets) VALUES ${mappingValues.join(', ')}`,
|
||||
params
|
||||
);
|
||||
}
|
||||
|
||||
const storageInfo = await (async () => {
|
||||
const q = `
|
||||
SELECT rs.*, COUNT(DISTINCT rd.id) AS directories_count, COUNT(rf.id) AS files_count
|
||||
FROM repository_storage rs
|
||||
LEFT JOIN repository_directories rd ON rs.id = rd.storage_id
|
||||
LEFT JOIN repository_files rf ON rs.id = rf.storage_id
|
||||
WHERE rs.repository_id = $1
|
||||
GROUP BY rs.id
|
||||
`;
|
||||
const r = await database.query(q, [repositoryRecord.id]);
|
||||
return r.rows[0] || null;
|
||||
})();
|
||||
|
||||
res.status(201).json({
|
||||
success: true,
|
||||
message: 'Repository attached successfully',
|
||||
data: {
|
||||
repository_id: repositoryRecord.id,
|
||||
template_id: repositoryRecord.template_id,
|
||||
repository_name: repositoryRecord.repository_name,
|
||||
owner_name: repositoryRecord.owner_name,
|
||||
branch_name: repositoryRecord.branch_name,
|
||||
is_public: repositoryRecord.is_public,
|
||||
requires_auth: repositoryRecord.requires_auth,
|
||||
metadata: repositoryData,
|
||||
codebase_analysis: codebaseAnalysis,
|
||||
storage_info: storageInfo,
|
||||
download_result: downloadResult
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Error attaching repository (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to attach repository' });
|
||||
}
|
||||
});
|
||||
|
||||
// Generic webhook endpoint with provider-specific verification and push handling
|
||||
router.post('/:provider/webhook', async (req, res) => {
|
||||
try {
|
||||
const providerKey = (req.params.provider || '').toLowerCase();
|
||||
const payload = req.body || {};
|
||||
|
||||
if (providerKey === 'github') {
|
||||
return res.status(400).json({ success: false, message: 'Use /api/github/webhook for GitHub' });
|
||||
}
|
||||
|
||||
// Signature verification
|
||||
const rawBody = JSON.stringify(payload);
|
||||
const verifySignature = () => {
|
||||
try {
|
||||
if (providerKey === 'gitlab') {
|
||||
const token = req.headers['x-gitlab-token'];
|
||||
const secret = process.env.GITLAB_WEBHOOK_SECRET;
|
||||
if (!secret) return true; // if not set, skip
|
||||
return token && token === secret;
|
||||
}
|
||||
if (providerKey === 'gitea') {
|
||||
const crypto = require('crypto');
|
||||
const provided = req.headers['x-gitea-signature'];
|
||||
const secret = process.env.GITEA_WEBHOOK_SECRET;
|
||||
if (!secret) return true;
|
||||
if (!provided) return false;
|
||||
const expected = crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
|
||||
return crypto.timingSafeEqual(Buffer.from(expected, 'hex'), Buffer.from(provided, 'hex'));
|
||||
}
|
||||
if (providerKey === 'bitbucket') {
|
||||
// Bitbucket Cloud webhooks typically have no shared secret by default
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
} catch (_) {
|
||||
return false;
|
||||
}
|
||||
};
|
||||
|
||||
if (!verifySignature()) {
|
||||
return res.status(401).json({ success: false, message: 'Invalid webhook signature' });
|
||||
}
|
||||
|
||||
// Process webhook event using comprehensive service
|
||||
const eventType = extractEventType(providerKey, payload);
|
||||
await vcsWebhookService.processWebhookEvent(providerKey, eventType, payload);
|
||||
|
||||
return res.status(200).json({ success: true, message: 'Webhook processed', provider: providerKey, event_type: eventType });
|
||||
} catch (error) {
|
||||
console.error('Error in VCS webhook:', error);
|
||||
res.status(500).json({ success: false, message: 'Failed to process webhook' });
|
||||
}
|
||||
});
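One caveat with the verification above: for Gitea the HMAC is computed over `JSON.stringify(req.body)`, which only matches the provider's signature if the re-serialized JSON is byte-identical to the payload that was sent. A more robust pattern is to capture the raw request body when it is parsed and verify against those exact bytes; a hedged sketch follows (the middleware wiring is an assumption, not part of this commit).

```javascript
// Sketch: capture the raw body at parse time, then verify the Gitea HMAC
// against those exact bytes. Assumes it runs wherever express.json() is configured.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json({
  verify: (req, _res, buf) => { req.rawBody = buf; } // keep the original bytes
}));

function verifyGiteaSignature(req, secret) {
  const provided = req.headers['x-gitea-signature'];
  if (!secret) return true;              // no secret configured: skip, as in the route above
  if (!provided || !req.rawBody) return false;
  const expected = crypto.createHmac('sha256', secret).update(req.rawBody).digest('hex');
  const a = Buffer.from(expected, 'hex');
  const b = Buffer.from(provided, 'hex');
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```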
|
||||
|
||||
module.exports = router;
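Note that `module.exports = router` appears before the routes that follow; because the exported value is the router object itself (not a snapshot) and the rest of this module still runs at require time, those later registrations remain visible to consumers. The same pattern in miniature:

```javascript
// Registrations after the export still count, since the exported value
// is the router instance itself.
const express = require('express');
const router = express.Router();
module.exports = router;
router.get('/late', (_req, res) => res.json({ ok: true })); // still registered
```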
|
||||
// Additional provider-agnostic routes mirroring GitHub endpoints
|
||||
|
||||
// Get repository diff between two SHAs (unified patch)
|
||||
router.get('/:provider/repository/:id/diff', async (req, res) => {
|
||||
try {
|
||||
const provider = getProvider(req);
|
||||
const { id } = req.params;
|
||||
const { from, to } = req.query;
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const record = repoResult.rows[0];
|
||||
const { owner, repo } = provider.parseRepoUrl(record.repository_url);
|
||||
// Use stored branch_name to avoid master/main mismatch
|
||||
const targetBranch = record.branch_name || 'main';
|
||||
const patch = await provider.getRepositoryDiff(owner, repo, targetBranch, from || record.last_synced_commit_sha, to || 'HEAD');
|
||||
res.json({ success: true, data: { patch, from: from || record.last_synced_commit_sha, to: to || 'HEAD' } });
|
||||
} catch (error) {
|
||||
console.error('Error getting diff (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to get diff' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get list of changed files since a SHA
|
||||
router.get('/:provider/repository/:id/changes', async (req, res) => {
|
||||
try {
|
||||
const provider = getProvider(req);
|
||||
const { id } = req.params;
|
||||
const { since } = req.query;
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const record = repoResult.rows[0];
|
||||
const { owner, repo } = provider.parseRepoUrl(record.repository_url);
|
||||
const sinceSha = since || record.last_synced_commit_sha;
|
||||
if (!sinceSha) {
|
||||
return res.status(400).json({ success: false, message: 'since SHA is required or must be available as last_synced_commit_sha' });
|
||||
}
|
||||
const changes = await provider.getRepositoryChangesSince(owner, repo, record.branch_name, sinceSha);
|
||||
res.json({ success: true, data: { since: sinceSha, changes } });
|
||||
} catch (error) {
|
||||
console.error('Error getting changes (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to get changes' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get repository information for a template (latest)
|
||||
router.get('/:provider/template/:id/repository', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const query = `
|
||||
SELECT gr.*, rs.local_path, rs.storage_status, rs.total_files_count,
|
||||
rs.total_directories_count, rs.total_size_bytes, rs.download_completed_at
|
||||
FROM github_repositories gr
|
||||
LEFT JOIN repository_storage rs ON gr.id = rs.repository_id
|
||||
WHERE gr.template_id = $1
|
||||
ORDER BY gr.created_at DESC
|
||||
LIMIT 1
|
||||
`;
|
||||
const result = await database.query(query, [id]);
|
||||
if (result.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'No repository found for this template' });
|
||||
}
|
||||
const repository = result.rows[0];
|
||||
const parseMaybe = (v) => {
|
||||
if (v == null) return {};
|
||||
if (typeof v === 'string') { try { return JSON.parse(v); } catch { return {}; } }
|
||||
return v;
|
||||
};
|
||||
res.json({ success: true, data: { ...repository, metadata: parseMaybe(repository.metadata), codebase_analysis: parseMaybe(repository.codebase_analysis) } });
|
||||
} catch (error) {
|
||||
console.error('Error fetching repository (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to fetch repository' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get repository file structure
|
||||
router.get('/:provider/repository/:id/structure', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const { path: directoryPath } = req.query;
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const structure = await fileStorageService.getRepositoryStructure(id, directoryPath);
|
||||
res.json({ success: true, data: { repository_id: id, directory_path: directoryPath || '', structure } });
|
||||
} catch (error) {
|
||||
console.error('Error fetching repository structure (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to fetch repository structure' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get files in a directory
|
||||
router.get('/:provider/repository/:id/files', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const { directory_path = '' } = req.query;
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const files = await fileStorageService.getDirectoryFiles(id, directory_path);
|
||||
res.json({ success: true, data: { repository_id: id, directory_path, files } });
|
||||
} catch (error) {
|
||||
console.error('Error fetching directory files (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to fetch directory files' });
|
||||
}
|
||||
});
|
||||
|
||||
// Get file content
|
||||
router.get('/:provider/repository/:id/file-content', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const { file_path } = req.query;
|
||||
if (!file_path) {
|
||||
return res.status(400).json({ success: false, message: 'File path is required' });
|
||||
}
|
||||
const query = `
|
||||
SELECT rf.*, rfc.content_text, rfc.content_preview, rfc.language_detected,
|
||||
rfc.line_count, rfc.char_count
|
||||
FROM repository_files rf
|
||||
LEFT JOIN repository_file_contents rfc ON rf.id = rfc.file_id
|
||||
WHERE rf.repository_id = $1 AND rf.relative_path = $2
|
||||
`;
|
||||
const result = await database.query(query, [id, file_path]);
|
||||
if (result.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'File not found' });
|
||||
}
|
||||
const file = result.rows[0];
|
||||
res.json({ success: true, data: { file_info: { id: file.id, filename: file.filename, file_extension: file.file_extension, relative_path: file.relative_path, file_size_bytes: file.file_size_bytes, mime_type: file.mime_type, is_binary: file.is_binary, language_detected: file.language_detected, line_count: file.line_count, char_count: file.char_count }, content: file.is_binary ? null : file.content_text, preview: file.content_preview } });
|
||||
} catch (error) {
|
||||
console.error('Error fetching file content (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to fetch file content' });
|
||||
}
|
||||
});
|
||||
|
||||
// List all repositories for a template
|
||||
router.get('/:provider/template/:id/repositories', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const query = `
|
||||
SELECT gr.*, rs.local_path, rs.storage_status, rs.total_files_count,
|
||||
rs.total_directories_count, rs.total_size_bytes, rs.download_completed_at
|
||||
FROM github_repositories gr
|
||||
LEFT JOIN repository_storage rs ON gr.id = rs.repository_id
|
||||
WHERE gr.template_id = $1
|
||||
ORDER BY gr.created_at DESC
|
||||
`;
|
||||
const result = await database.query(query, [id]);
|
||||
const repositories = result.rows.map(repo => ({ ...repo, metadata: JSON.parse(repo.metadata || '{}'), codebase_analysis: JSON.parse(repo.codebase_analysis || '{}') }));
|
||||
res.json({ success: true, data: repositories });
|
||||
} catch (error) {
|
||||
console.error('Error fetching repositories (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to fetch repositories' });
|
||||
}
|
||||
});
|
||||
|
||||
// Re-sync repository (git-based)
|
||||
router.post('/:provider/repository/:id/sync', async (req, res) => {
|
||||
try {
|
||||
const provider = getProvider(req);
|
||||
const { id } = req.params;
|
||||
const repoQuery = 'SELECT * FROM github_repositories WHERE id = $1';
|
||||
const repoResult = await database.query(repoQuery, [id]);
|
||||
if (repoResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const repository = repoResult.rows[0];
|
||||
const { owner, repo, branch } = provider.parseRepoUrl(repository.repository_url);
|
||||
await provider.cleanupRepositoryStorage(id);
|
||||
const downloadResult = await provider.syncRepositoryWithFallback(owner, repo, branch || repository.branch_name, id);
|
||||
await database.query('UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2', [downloadResult.success ? 'synced' : 'error', id]);
|
||||
res.json({ success: downloadResult.success, message: downloadResult.success ? 'Repository synced successfully' : 'Failed to sync repository', data: downloadResult });
|
||||
} catch (error) {
|
||||
console.error('Error syncing repository (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to sync repository' });
|
||||
}
|
||||
});
|
||||
|
||||
// Remove repository
|
||||
router.delete('/:provider/repository/:id', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const getResult = await database.query('SELECT * FROM github_repositories WHERE id = $1', [id]);
|
||||
if (getResult.rows.length === 0) {
|
||||
return res.status(404).json({ success: false, message: 'Repository not found' });
|
||||
}
|
||||
const repository = getResult.rows[0];
|
||||
await fileStorageService.cleanupRepositoryStorage(id);
|
||||
await database.query('DELETE FROM feature_codebase_mappings WHERE repository_id = $1', [id]);
|
||||
await database.query('DELETE FROM github_repositories WHERE id = $1', [id]);
|
||||
res.json({ success: true, message: 'Repository removed successfully', data: { removed_repository: repository.repository_name, template_id: repository.template_id } });
|
||||
} catch (error) {
|
||||
console.error('Error removing repository (vcs):', error);
|
||||
res.status(500).json({ success: false, message: error.message || 'Failed to remove repository' });
|
||||
}
|
||||
});
|
||||
|
||||
// OAuth placeholders (start/callback) per provider for future implementation
|
||||
router.get('/:provider/auth/start', (req, res) => {
|
||||
try {
|
||||
const providerKey = (req.params.provider || '').toLowerCase();
|
||||
const oauth = getOAuthService(providerKey);
|
||||
if (!oauth) return res.status(400).json({ success: false, message: 'Unsupported provider or OAuth not available' });
|
||||
const state = req.query.state || Math.random().toString(36).slice(2);
|
||||
const url = oauth.getAuthUrl(state);
|
||||
res.json({ success: true, auth_url: url, provider: providerKey, state });
|
||||
} catch (e) {
|
||||
res.status(500).json({ success: false, message: e.message || 'Failed to start OAuth' });
|
||||
}
|
||||
});
|
||||
|
||||
router.get('/:provider/auth/callback', (req, res) => {
|
||||
(async () => {
|
||||
try {
|
||||
const providerKey = (req.params.provider || '').toLowerCase();
|
||||
const code = req.query.code;
|
||||
const error = req.query.error;
|
||||
const errorDescription = req.query.error_description;
|
||||
const oauth = getOAuthService(providerKey);
|
||||
if (!oauth) return res.status(400).json({ success: false, message: 'Unsupported provider or OAuth not available' });
|
||||
if (!code) {
|
||||
// Surface upstream provider error details if present
|
||||
if (error || errorDescription) {
|
||||
return res.status(400).json({ success: false, message: 'OAuth error from provider', provider: providerKey, error: error || 'unknown_error', error_description: errorDescription || null, query: req.query });
|
||||
}
|
||||
return res.status(400).json({ success: false, message: 'Missing code' });
|
||||
}
|
||||
const accessToken = await oauth.exchangeCodeForToken(code);
|
||||
const user = await oauth.getUserInfo(accessToken);
|
||||
const userId =
|
||||
req.query.user_id ||
|
||||
(req.body && req.body.user_id) ||
|
||||
req.headers['x-user-id'] ||
|
||||
(req.cookies && (req.cookies.user_id || req.cookies.uid)) ||
|
||||
(req.session && req.session.user && (req.session.user.id || req.session.user.userId)) ||
|
||||
(req.user && (req.user.id || req.user.userId));
|
||||
if (providerKey === 'github' && !userId) {
|
||||
return res.status(400).json({ success: false, message: 'user_id is required to complete GitHub authentication' });
|
||||
}
|
||||
console.log('[VCS OAuth] callback provider=%s resolved user_id = %s', providerKey, userId || null);
|
||||
const tokenRecord = await oauth.storeToken(accessToken, user, userId || null);
|
||||
res.json({ success: true, provider: providerKey, user, token: { id: tokenRecord.id || null } });
|
||||
} catch (e) {
|
||||
res.status(500).json({ success: false, message: e.message || 'OAuth callback failed' });
|
||||
}
|
||||
})();
|
||||
});
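A short sketch of driving the provider-agnostic OAuth endpoints above from a client, assuming the service runs on port 8012 (placeholder) and the router is mounted at `/api/vcs` as the `auth_url` returned by the attach route indicates.

```javascript
// Hypothetical client flow for /api/vcs/:provider/auth/* (provider: gitlab | bitbucket | gitea).
const BASE_URL = process.env.GIT_INTEGRATION_URL || 'http://localhost:8012';

async function startGitLabAuth() {
  const res = await fetch(`${BASE_URL}/api/vcs/gitlab/auth/start`);
  const { auth_url, state } = await res.json();
  // The browser is sent to auth_url; the provider then calls
  // /api/vcs/gitlab/auth/callback?code=...&state=..., which stores the token.
  console.log('Open in browser:', auth_url, 'state:', state);
}

startGitLabAuth().catch(console.error);
```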
|
||||
|
||||
114
services/git-integration/src/routes/webhook.routes.js
Normal file
114
services/git-integration/src/routes/webhook.routes.js
Normal file
@ -0,0 +1,114 @@
|
||||
// routes/webhook.routes.js
|
||||
const express = require('express');
|
||||
const router = express.Router();
|
||||
const WebhookService = require('../services/webhook.service');
|
||||
|
||||
const webhookService = new WebhookService();
|
||||
|
||||
// GitHub webhook endpoint
|
||||
router.post('/webhook', async (req, res) => {
|
||||
try {
|
||||
const signature = req.headers['x-hub-signature-256'];
|
||||
const eventType = req.headers['x-github-event'];
|
||||
const deliveryId = req.headers['x-github-delivery'];
|
||||
const userAgent = req.headers['user-agent'];
|
||||
|
||||
console.log('🔔 GitHub webhook received:');
|
||||
console.log(`- Event Type: ${eventType}`);
|
||||
console.log(`- Delivery ID: ${deliveryId}`);
|
||||
console.log(`- User Agent: ${userAgent}`);
|
||||
console.log(`- Signature: ${signature ? 'Present' : 'Missing'}`);
|
||||
console.log(`- Payload Size: ${JSON.stringify(req.body).length} bytes`);
|
||||
console.log(`- Timestamp: ${new Date().toISOString()}`);
|
||||
|
||||
// Verify webhook signature if secret is configured
|
||||
if (process.env.GITHUB_WEBHOOK_SECRET) {
|
||||
const rawBody = JSON.stringify(req.body);
|
||||
const isValidSignature = webhookService.verifySignature(rawBody, signature);
|
||||
|
||||
if (!isValidSignature) {
|
||||
console.warn('Invalid webhook signature - potential security issue');
|
||||
return res.status(401).json({
|
||||
success: false,
|
||||
message: 'Invalid webhook signature'
|
||||
});
|
||||
}
|
||||
} else {
|
||||
console.warn('GitHub webhook secret not configured - skipping signature verification');
|
||||
}
|
||||
|
||||
// Attach delivery_id into payload for downstream persistence convenience
|
||||
const payloadWithDelivery = { ...req.body, delivery_id: deliveryId };
|
||||
|
||||
// Process the webhook event
|
||||
if (eventType) {
|
||||
await webhookService.processWebhookEvent(eventType, payloadWithDelivery);
|
||||
}
|
||||
|
||||
// Log the webhook event
|
||||
await webhookService.logWebhookEvent(
|
||||
eventType || 'unknown',
|
||||
req.body.action || 'unknown',
|
||||
req.body.repository?.full_name || 'unknown',
|
||||
{
|
||||
delivery_id: deliveryId,
|
||||
event_type: eventType,
|
||||
action: req.body.action,
|
||||
repository: req.body.repository?.full_name,
|
||||
sender: req.body.sender?.login
|
||||
},
|
||||
deliveryId,
|
||||
payloadWithDelivery
|
||||
);
|
||||
|
||||
res.status(200).json({
|
||||
success: true,
|
||||
message: 'Webhook processed successfully',
|
||||
event_type: eventType,
|
||||
delivery_id: deliveryId
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
console.error('Error processing webhook:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Failed to process webhook',
|
||||
error: process.env.NODE_ENV === 'development' ? error.message : 'Internal server error'
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
// Get recent webhook events (for debugging)
|
||||
router.get('/webhook/events', async (req, res) => {
|
||||
try {
|
||||
const limit = parseInt(req.query.limit) || 50;
|
||||
const events = await webhookService.getRecentWebhookEvents(limit);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: {
|
||||
events,
|
||||
total: events.length,
|
||||
limit
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Error fetching webhook events:', error);
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
message: 'Failed to fetch webhook events'
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
// Webhook health check
|
||||
router.get('/webhook/health', (req, res) => {
|
||||
res.json({
|
||||
success: true,
|
||||
message: 'Webhook service is healthy',
|
||||
timestamp: new Date().toISOString(),
|
||||
webhook_secret_configured: !!process.env.GITHUB_WEBHOOK_SECRET
|
||||
});
|
||||
});
|
||||
|
||||
module.exports = router;
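The `WebhookService.verifySignature` implementation is not included in this diff. For GitHub, the standard check compares the `X-Hub-Signature-256` header against an HMAC-SHA256 of the raw payload prefixed with `sha256=`; the sketch below shows that scheme, not necessarily the service's actual code.

```javascript
// Standard GitHub webhook signature check (sketch, not the actual WebhookService code).
const crypto = require('crypto');

function verifyGitHubSignature(rawBody, signatureHeader, secret) {
  if (!secret || !signatureHeader) return false;
  const expected = 'sha256=' +
    crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```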
|
||||
64
services/git-integration/src/services/bitbucket-oauth.js
Normal file
64
services/git-integration/src/services/bitbucket-oauth.js
Normal file
@ -0,0 +1,64 @@
|
||||
// services/bitbucket-oauth.js
|
||||
const database = require('../config/database');
|
||||
|
||||
class BitbucketOAuthService {
|
||||
constructor() {
|
||||
this.clientId = process.env.BITBUCKET_CLIENT_ID;
|
||||
this.clientSecret = process.env.BITBUCKET_CLIENT_SECRET;
|
||||
this.redirectUri = process.env.BITBUCKET_REDIRECT_URI || 'http://localhost:8012/api/vcs/bitbucket/auth/callback';
|
||||
}
|
||||
|
||||
getAuthUrl(state) {
|
||||
if (!this.clientId) throw new Error('Bitbucket OAuth not configured');
|
||||
const params = new URLSearchParams({
|
||||
client_id: this.clientId,
|
||||
response_type: 'code',
|
||||
state,
|
||||
// Bitbucket Cloud uses 'repository' for read access; 'repository:write' for write
|
||||
scope: 'repository account',
|
||||
redirect_uri: this.redirectUri
|
||||
});
|
||||
return `https://bitbucket.org/site/oauth2/authorize?${params.toString()}`;
|
||||
}
|
||||
|
||||
async exchangeCodeForToken(code) {
|
||||
const resp = await fetch('https://bitbucket.org/site/oauth2/access_token', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/x-www-form-urlencoded', Authorization: `Basic ${Buffer.from(`${this.clientId}:${this.clientSecret}`).toString('base64')}` },
|
||||
body: new URLSearchParams({ grant_type: 'authorization_code', code, redirect_uri: this.redirectUri })
|
||||
});
|
||||
let data = null;
|
||||
try { data = await resp.json(); } catch (_) { data = null; }
|
||||
if (!resp.ok) {
|
||||
const detail = data?.error_description || data?.error || (await resp.text().catch(() => '')) || 'unknown_error';
|
||||
throw new Error(`Bitbucket token exchange failed: ${detail}`);
|
||||
}
|
||||
return data.access_token;
|
||||
}
|
||||
|
||||
async getUserInfo(accessToken) {
|
||||
const resp = await fetch('https://api.bitbucket.org/2.0/user', { headers: { Authorization: `Bearer ${accessToken}` } });
|
||||
if (!resp.ok) throw new Error('Failed to fetch Bitbucket user');
|
||||
return await resp.json();
|
||||
}
|
||||
|
||||
async storeToken(accessToken, user) {
|
||||
const result = await database.query(
|
||||
`INSERT INTO bitbucket_user_tokens (access_token, bitbucket_username, bitbucket_user_id, scopes, expires_at)
|
||||
VALUES ($1, $2, $3, $4, $5)
|
||||
ON CONFLICT (id) DO UPDATE SET access_token = EXCLUDED.access_token, bitbucket_username = EXCLUDED.bitbucket_username, bitbucket_user_id = EXCLUDED.bitbucket_user_id, scopes = EXCLUDED.scopes, expires_at = EXCLUDED.expires_at, updated_at = NOW()
|
||||
RETURNING *`,
|
||||
[accessToken, user.username || user.display_name, user.uuid || null, JSON.stringify(['repository:read','account']), null]
|
||||
);
|
||||
return result.rows[0];
|
||||
}
|
||||
|
||||
async getToken() {
|
||||
const r = await database.query('SELECT * FROM bitbucket_user_tokens ORDER BY created_at DESC LIMIT 1');
|
||||
return r.rows[0];
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = BitbucketOAuthService;
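For context, a minimal sketch of exercising this service end to end once Bitbucket redirects back with a `code`; the code value is a placeholder and the environment is assumed to have BITBUCKET_CLIENT_ID/SECRET configured.

```javascript
// Sketch: exchanging a Bitbucket OAuth code and persisting the token.
const BitbucketOAuthService = require('./bitbucket-oauth');

async function completeBitbucketAuth(code) {
  const oauth = new BitbucketOAuthService();
  const accessToken = await oauth.exchangeCodeForToken(code);
  const user = await oauth.getUserInfo(accessToken);
  const tokenRecord = await oauth.storeToken(accessToken, user);
  return { username: tokenRecord.bitbucket_username, tokenId: tokenRecord.id };
}
```

One design note, hedged: Bitbucket Cloud access tokens are typically short-lived and the token response also carries a refresh token, which this service does not currently persist; long-lived access would need that handled separately.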
|
||||
|
||||
|
||||
@ -61,6 +61,16 @@ class FileStorageService {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Skip any .git directory anywhere in the tree
|
||||
const normalizedRel = currentPath.replace(/\\/g, '/');
|
||||
if (
|
||||
normalizedRel === '.git' ||
|
||||
normalizedRel.startsWith('.git/') ||
|
||||
normalizedRel.includes('/.git/')
|
||||
) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Insert directory record
|
||||
const dirName = currentPath === '' ? '.' : path.basename(currentPath);
|
||||
const relativePath = currentPath === '' ? '' : currentPath;
|
||||
@ -91,6 +101,11 @@ class FileStorageService {
|
||||
const itemRelativePath = currentPath ? path.join(currentPath, item) : item;
|
||||
const itemStats = fs.statSync(itemPath);
|
||||
|
||||
// Skip .git directory and its contents
|
||||
if (item === '.git' || itemRelativePath.replace(/\\/g, '/').includes('/.git/')) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (itemStats.isDirectory()) {
|
||||
// Recursively process subdirectory
|
||||
const subDir = await this.processDirectoryStructure(
|
||||
|
||||
143
services/git-integration/src/services/git-repo.service.js
Normal file
143
services/git-integration/src/services/git-repo.service.js
Normal file
@ -0,0 +1,143 @@
|
||||
// services/git-repo.service.js
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const { exec, execFile } = require('child_process');
|
||||
|
||||
class GitRepoService {
|
||||
constructor() {
|
||||
this.baseDir = process.env.ATTACHED_REPOS_DIR || '/tmp/attached-repos';
|
||||
}
|
||||
|
||||
getLocalRepoPath(owner, repo, branch) {
|
||||
return path.join(this.baseDir, `${owner}__${repo}__${branch}`);
|
||||
}
|
||||
|
||||
async ensureDirectory(dirPath) {
|
||||
if (!fs.existsSync(dirPath)) {
|
||||
fs.mkdirSync(dirPath, { recursive: true });
|
||||
}
|
||||
}
|
||||
|
||||
runGitCommand(cwd, command) {
|
||||
return new Promise((resolve, reject) => {
|
||||
try {
|
||||
if (!fs.existsSync(cwd)) {
|
||||
return reject(new Error(`Working directory not found: ${cwd}`));
|
||||
}
|
||||
} catch (_) {
|
||||
return reject(new Error(`Invalid working directory: ${cwd}`));
|
||||
}
|
||||
// Make git non-interactive to avoid terminal credential prompts
|
||||
const env = { ...process.env, GIT_TERMINAL_PROMPT: '0' };
|
||||
exec(command, { cwd, maxBuffer: 1024 * 1024 * 64, env }, (error, stdout, stderr) => {
|
||||
if (error) {
|
||||
const details = [`cmd: ${command}`, `cwd: ${cwd}`, stderr ? `stderr: ${stderr}` : ''].filter(Boolean).join('\n');
|
||||
return reject(new Error((stderr && stderr.trim()) || `${error.message}\n${details}`));
|
||||
}
|
||||
resolve(stdout.trim());
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
runGit(cwd, args) {
|
||||
return new Promise((resolve, reject) => {
|
||||
try {
|
||||
if (!fs.existsSync(cwd)) {
|
||||
return reject(new Error(`Working directory not found: ${cwd}`));
|
||||
}
|
||||
} catch (_) {
|
||||
return reject(new Error(`Invalid working directory: ${cwd}`));
|
||||
}
|
||||
const env = { ...process.env, GIT_TERMINAL_PROMPT: '0' };
|
||||
execFile('git', args, { cwd, maxBuffer: 1024 * 1024 * 64, env }, (error, stdout, stderr) => {
|
||||
if (error) {
|
||||
const details = [`git ${args.join(' ')}`, `cwd: ${cwd}`, stderr ? `stderr: ${stderr}` : ''].filter(Boolean).join('\n');
|
||||
return reject(new Error((stderr && stderr.trim()) || `${error.message}\n${details}`));
|
||||
}
|
||||
resolve(stdout.trim());
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
async cloneIfMissing(owner, repo, branch) {
|
||||
const repoPath = this.getLocalRepoPath(owner, repo, branch);
|
||||
await this.ensureDirectory(path.dirname(repoPath));
|
||||
if (!fs.existsSync(repoPath) || !fs.existsSync(path.join(repoPath, '.git'))) {
|
||||
const cloneUrl = `https://github.com/${owner}/${repo}.git`;
|
||||
await this.runGit(path.dirname(repoPath), ['clone', '--depth', '1', '-b', branch, cloneUrl, path.basename(repoPath)]);
|
||||
}
|
||||
return repoPath;
|
||||
}
|
||||
|
||||
async cloneIfMissingWithHost(owner, repo, branch, host) {
|
||||
const repoPath = this.getLocalRepoPath(owner, repo, branch);
|
||||
await this.ensureDirectory(path.dirname(repoPath));
|
||||
if (!fs.existsSync(repoPath) || !fs.existsSync(path.join(repoPath, '.git'))) {
|
||||
const normalizedHost = (host || 'github.com').replace(/^https?:\/\//, '').replace(/\/$/, '');
|
||||
const cloneUrl = `https://${normalizedHost}/${owner}/${repo}.git`;
|
||||
await this.runGit(path.dirname(repoPath), ['clone', '--depth', '1', '-b', branch, cloneUrl, path.basename(repoPath)]);
|
||||
}
|
||||
return repoPath;
|
||||
}
|
||||
|
||||
async getHeadSha(repoPath) {
|
||||
try {
|
||||
const sha = await this.runGit(repoPath, ['rev-parse', 'HEAD']);
|
||||
return sha;
|
||||
} catch (_) {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
async fetchAndFastForward(repoPath, branch) {
|
||||
const beforeSha = await this.getHeadSha(repoPath);
|
||||
await this.runGit(repoPath, ['fetch', '--all', '--prune']);
|
||||
await this.runGit(repoPath, ['checkout', branch]);
|
||||
await this.runGit(repoPath, ['pull', '--ff-only', 'origin', branch]);
|
||||
const afterSha = await this.getHeadSha(repoPath);
|
||||
return { beforeSha, afterSha };
|
||||
}
|
||||
|
||||
async cloneIfMissingWithAuth(owner, repo, branch, host, token, tokenType = 'oauth2') {
|
||||
const repoPath = this.getLocalRepoPath(owner, repo, branch);
|
||||
await this.ensureDirectory(path.dirname(repoPath));
|
||||
if (!fs.existsSync(repoPath) || !fs.existsSync(path.join(repoPath, '.git'))) {
|
||||
const normalizedHost = (host || 'github.com').replace(/^https?:\/\//, '').replace(/\/$/, '');
|
||||
let cloneUrl = `https://${normalizedHost}/${owner}/${repo}.git`;
|
||||
if (token) {
|
||||
if (tokenType === 'oauth2') {
|
||||
// Many providers accept oauth2:<token>@host
|
||||
cloneUrl = `https://oauth2:${token}@${normalizedHost}/${owner}/${repo}.git`;
|
||||
} else if (tokenType === 'bearer') {
|
||||
// Use extraheader auth pattern
|
||||
await this.runGit(path.dirname(repoPath), ['-c', `http.extraheader=Authorization: Bearer ${token}`, 'clone', '--depth', '1', '-b', branch, cloneUrl, path.basename(repoPath)]);
|
||||
return repoPath;
|
||||
}
|
||||
}
|
||||
await this.runGit(path.dirname(repoPath), ['clone', '--depth', '1', '-b', branch, cloneUrl, path.basename(repoPath)]);
|
||||
}
|
||||
return repoPath;
|
||||
}
|
||||
|
||||
async getDiff(repoPath, fromSha, toSha, options = { patch: true }) {
|
||||
const range = fromSha && toSha ? `${fromSha}..${toSha}` : toSha ? `${toSha}^..${toSha}` : '';
|
||||
const mode = options.patch ? '--patch' : '--name-status';
|
||||
const args = ['diff', mode];
|
||||
if (range) args.push(range);
|
||||
const output = await this.runGit(repoPath, args);
|
||||
return output;
|
||||
}
|
||||
|
||||
async getChangedFilesSince(repoPath, sinceSha) {
|
||||
const output = await this.runGit(repoPath, ['diff', '--name-status', `${sinceSha}..HEAD`]);
|
||||
const lines = output.split('\n').filter(Boolean);
|
||||
return lines.map(line => {
|
||||
const [status, filePath] = line.split(/\s+/, 2);
|
||||
return { status, filePath };
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GitRepoService;
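A brief sketch of how the sync routes above can use this service for a git-based sync, mirroring `syncRepositoryWithGit`; owner, repo, and branch are placeholders.

```javascript
// Sketch: clone-or-update flow built on GitRepoService.
const GitRepoService = require('./git-repo.service');

async function gitSync(owner, repo, branch) {
  const git = new GitRepoService();
  const repoPath = await git.cloneIfMissing(owner, repo, branch);      // shallow clone on first run
  const { beforeSha, afterSha } = await git.fetchAndFastForward(repoPath, branch);
  const changes = beforeSha && beforeSha !== afterSha
    ? await git.getChangedFilesSince(repoPath, beforeSha)              // [{ status, filePath }, ...]
    : [];
  return { repoPath, beforeSha, afterSha, changes };
}
```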
|
||||
|
||||
|
||||
77
services/git-integration/src/services/gitea-oauth.js
Normal file
77
services/git-integration/src/services/gitea-oauth.js
Normal file
@ -0,0 +1,77 @@
|
||||
// services/gitea-oauth.js
|
||||
const database = require('../config/database');
|
||||
|
||||
class GiteaOAuthService {
|
||||
constructor() {
|
||||
this.clientId = process.env.GITEA_CLIENT_ID;
|
||||
this.clientSecret = process.env.GITEA_CLIENT_SECRET;
|
||||
this.baseUrl = (process.env.GITEA_BASE_URL || 'https://gitea.com').replace(/\/$/, '');
|
||||
this.redirectUri = process.env.GITEA_REDIRECT_URI || 'http://localhost:8012/api/vcs/gitea/auth/callback';
|
||||
}
|
||||
|
||||
getAuthUrl(state) {
|
||||
if (!this.clientId) throw new Error('Gitea OAuth not configured');
|
||||
const authUrl = `${this.baseUrl}/login/oauth/authorize`;
|
||||
const params = new URLSearchParams({
|
||||
client_id: this.clientId,
|
||||
redirect_uri: this.redirectUri,
|
||||
response_type: 'code',
|
||||
// Request both user and repository read scopes
|
||||
scope: 'read:user read:repository',
|
||||
state
|
||||
});
|
||||
return `${authUrl}?${params.toString()}`;
|
||||
}
|
||||
|
||||
async exchangeCodeForToken(code) {
|
||||
const tokenUrl = `${this.baseUrl}/login/oauth/access_token`;
|
||||
const resp = await fetch(tokenUrl, {
|
||||
method: 'POST',
|
||||
headers: { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' },
|
||||
body: new URLSearchParams({
|
||||
client_id: this.clientId,
|
||||
client_secret: this.clientSecret,
|
||||
code,
|
||||
grant_type: 'authorization_code',
|
||||
redirect_uri: this.redirectUri
|
||||
})
|
||||
});
|
||||
let data = null;
|
||||
try { data = await resp.json(); } catch (_) { data = null; }
|
||||
if (!resp.ok || data?.error) {
|
||||
const detail = data?.error_description || data?.error || (await resp.text().catch(() => '')) || 'unknown_error';
|
||||
throw new Error(`Gitea token exchange failed: ${detail}`);
|
||||
}
|
||||
return data.access_token;
|
||||
}
|
||||
|
||||
async getUserInfo(accessToken) {
|
||||
const resp = await fetch(`${this.baseUrl}/api/v1/user`, { headers: { Authorization: `Bearer ${accessToken}` } });
|
||||
if (!resp.ok) {
|
||||
let txt = '';
|
||||
try { txt = await resp.text(); } catch (_) {}
|
||||
throw new Error(`Failed to fetch Gitea user (status ${resp.status}): ${txt}`);
|
||||
}
|
||||
return await resp.json();
|
||||
}
|
||||
|
||||
async storeToken(accessToken, user) {
|
||||
const result = await database.query(
|
||||
`INSERT INTO gitea_user_tokens (access_token, gitea_username, gitea_user_id, scopes, expires_at)
|
||||
VALUES ($1, $2, $3, $4, $5)
|
||||
ON CONFLICT (id) DO UPDATE SET access_token = EXCLUDED.access_token, gitea_username = EXCLUDED.gitea_username, gitea_user_id = EXCLUDED.gitea_user_id, scopes = EXCLUDED.scopes, expires_at = EXCLUDED.expires_at, updated_at = NOW()
|
||||
RETURNING *`,
|
||||
[accessToken, user.login, user.id, JSON.stringify(['read:repository']), null]
|
||||
);
|
||||
return result.rows[0];
|
||||
}
|
||||
|
||||
async getToken() {
|
||||
const r = await database.query('SELECT * FROM gitea_user_tokens ORDER BY created_at DESC LIMIT 1');
|
||||
return r.rows[0];
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GiteaOAuthService;
|
||||
|
||||
|
||||
@ -3,13 +3,16 @@ const { Octokit } = require('@octokit/rest');
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const { exec } = require('child_process');
|
||||
const parseGitHubUrl = require('parse-github-url');
|
||||
const GitHubOAuthService = require('./github-oauth');
|
||||
const FileStorageService = require('./file-storage.service');
|
||||
const GitRepoService = require('./git-repo.service');
|
||||
|
||||
class GitHubIntegrationService {
|
||||
constructor() {
|
||||
this.oauthService = new GitHubOAuthService();
|
||||
this.fileStorageService = new FileStorageService();
|
||||
this.gitRepoService = new GitRepoService();
|
||||
|
||||
// Default unauthenticated instance
|
||||
this.octokit = new Octokit({
|
||||
@ -22,19 +25,86 @@ class GitHubIntegrationService {
|
||||
return await this.oauthService.getAuthenticatedOctokit();
|
||||
}
|
||||
|
||||
// Extract owner, repo, and branch from GitHub URL
|
||||
// Extract owner, repo, and branch from GitHub URL using parse-github-url library
|
||||
parseGitHubUrl(url) {
|
||||
const regex = /github\.com\/([^\/]+)\/([^\/]+)(?:\/tree\/([^\/]+))?/;
|
||||
const match = url.match(regex);
|
||||
if (!url || typeof url !== 'string') {
|
||||
throw new Error('URL must be a non-empty string');
|
||||
}
|
||||
|
||||
// Normalize the URL first
|
||||
let normalizedUrl = url.trim();
|
||||
|
||||
if (!match) {
|
||||
throw new Error('Invalid GitHub repository URL');
|
||||
// Handle URLs without protocol
|
||||
if (!normalizedUrl.startsWith('http://') && !normalizedUrl.startsWith('https://') && !normalizedUrl.startsWith('git@')) {
|
||||
normalizedUrl = 'https://' + normalizedUrl;
|
||||
}
|
||||
|
||||
// Handle SSH format: git@github.com:owner/repo.git
|
||||
if (normalizedUrl.startsWith('git@github.com:')) {
|
||||
normalizedUrl = normalizedUrl.replace('git@github.com:', 'https://github.com/');
|
||||
}
|
||||
|
||||
// Handle git+https format: git+https://github.com/owner/repo.git
|
||||
if (normalizedUrl.startsWith('git+https://') || normalizedUrl.startsWith('git+http://')) {
|
||||
normalizedUrl = normalizedUrl.replace('git+', '');
|
||||
}
|
||||
|
||||
// Validate that it's a GitHub URL before parsing
|
||||
if (!normalizedUrl.includes('github.com')) {
|
||||
throw new Error(`Invalid GitHub repository URL: ${url}`);
|
||||
}
|
||||
|
||||
// Clean URL by removing query parameters and fragments for parsing
|
||||
const cleanUrl = normalizedUrl.split('?')[0].split('#')[0];
|
||||
|
||||
// Use the parse-github-url library to parse the URL
|
||||
const parsed = parseGitHubUrl(cleanUrl);
|
||||
|
||||
if (!parsed || !parsed.owner || !parsed.name) {
|
||||
throw new Error(`Invalid GitHub repository URL: ${url}`);
|
||||
}
|
||||
|
||||
// Additional validation: reject URLs with invalid paths
|
||||
const urlWithoutQuery = normalizedUrl.split('?')[0].split('#')[0];
|
||||
const pathAfterRepo = urlWithoutQuery.split(/github\.com\/[^\/]+\/[^\/]+/)[1];
|
||||
if (pathAfterRepo && pathAfterRepo.length > 0) {
|
||||
const validPaths = ['/tree/', '/blob/', '/commit/', '/pull/', '/issue', '/archive/', '/releases', '/actions', '/projects', '/wiki', '/settings', '/security', '/insights', '/pulse', '/graphs', '/network', '/compare'];
|
||||
const hasValidPath = validPaths.some(path => pathAfterRepo.startsWith(path));
|
||||
if (!hasValidPath) {
|
||||
throw new Error(`Invalid GitHub repository URL: ${url}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Extract branch information
|
||||
let branch = parsed.branch;
|
||||
|
||||
// Handle special cases for branch extraction
|
||||
if (branch) {
|
||||
// For archive URLs, remove .zip or .tar.gz extensions
|
||||
branch = branch.replace(/\.(zip|tar\.gz|tar)$/, '');
|
||||
|
||||
// For blob URLs, the branch might be followed by a path, take only the first part
|
||||
branch = branch.split('/')[0];
|
||||
|
||||
// For commit/PR/issue URLs, don't treat the ID as a branch
|
||||
if (normalizedUrl.includes('/commit/') || normalizedUrl.includes('/pull/') || normalizedUrl.includes('/issue')) {
|
||||
branch = 'main'; // Default to main for these cases
|
||||
}
|
||||
}
|
||||
|
||||
// Validate owner and repo names (GitHub naming rules)
|
||||
if (!/^[a-zA-Z0-9]([a-zA-Z0-9\-\.]*[a-zA-Z0-9])?$/.test(parsed.owner)) {
|
||||
throw new Error(`Invalid GitHub owner name: ${parsed.owner}`);
|
||||
}
|
||||
|
||||
if (!/^[a-zA-Z0-9]([a-zA-Z0-9\-\._]*[a-zA-Z0-9])?$/.test(parsed.name)) {
|
||||
throw new Error(`Invalid GitHub repository name: ${parsed.name}`);
|
||||
}
|
||||
|
||||
return {
|
||||
owner: match[1],
|
||||
repo: match[2].replace('.git', ''),
|
||||
branch: match[3] || 'main'
|
||||
owner: parsed.owner,
|
||||
repo: parsed.name,
|
||||
branch: branch || 'main'
|
||||
};
|
||||
}
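Given the normalization above, a few illustrative inputs and the expected shape of the result; owner and repo names are placeholders, and constructing the service is assumed to succeed in this environment.

```javascript
// Illustrative calls against the rewritten parseGitHubUrl (not a test suite from this commit).
const service = new GitHubIntegrationService();

// Branch taken from the /tree/ segment:
service.parseGitHubUrl('https://github.com/acme/widgets/tree/develop');
// => { owner: 'acme', repo: 'widgets', branch: 'develop' }

// SSH form is normalized to HTTPS and the .git suffix is dropped:
service.parseGitHubUrl('git@github.com:acme/widgets.git');
// => owner 'acme', repo 'widgets'; branch falls back per the logic above

// Anything that is not a github.com repository URL throws:
// service.parseGitHubUrl('https://example.com/acme/widgets') -> Error
```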
|
||||
|
||||
@ -60,6 +130,77 @@ class GitHubIntegrationService {
|
||||
error: 'Repository not found or requires authentication'
|
||||
};
|
||||
}
|
||||
|
||||
// Handle authentication errors
|
||||
if (error.status === 401 || error.message.includes('token has expired') || error.message.includes('authenticate with GitHub')) {
|
||||
return {
|
||||
exists: null,
|
||||
isPrivate: null,
|
||||
hasAccess: false,
|
||||
requiresAuth: true,
|
||||
error: 'GitHub authentication required or token expired',
|
||||
authError: true
|
||||
};
|
||||
}
|
||||
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Check repository access with user-specific tokens
|
||||
async checkRepositoryAccessWithUser(owner, repo, userId) {
|
||||
try {
|
||||
// First try to find a token that can access this repository
|
||||
const token = await this.oauthService.findTokenForRepository(userId, owner, repo);
|
||||
|
||||
if (token) {
|
||||
// We found a token that can access this repository
|
||||
const octokit = new Octokit({ auth: token.access_token });
|
||||
const { data } = await octokit.repos.get({ owner, repo });
|
||||
|
||||
return {
|
||||
exists: true,
|
||||
isPrivate: data.private,
|
||||
hasAccess: true,
|
||||
requiresAuth: data.private,
|
||||
github_username: token.github_username,
|
||||
token_id: token.id
|
||||
};
|
||||
}
|
||||
|
||||
// No token found that can access this repository
|
||||
return {
|
||||
exists: null,
|
||||
isPrivate: null,
|
||||
hasAccess: false,
|
||||
requiresAuth: true,
|
||||
error: 'Repository not found or requires authentication',
|
||||
authError: false
|
||||
};
|
||||
|
||||
} catch (error) {
|
||||
if (error.status === 404) {
|
||||
return {
|
||||
exists: false,
|
||||
isPrivate: null,
|
||||
hasAccess: false,
|
||||
requiresAuth: true,
|
||||
error: 'Repository not found or requires authentication'
|
||||
};
|
||||
}
|
||||
|
||||
// Handle authentication errors
|
||||
if (error.status === 401 || error.message.includes('token has expired') || error.message.includes('authenticate with GitHub')) {
|
||||
return {
|
||||
exists: null,
|
||||
isPrivate: null,
|
||||
hasAccess: false,
|
||||
requiresAuth: true,
|
||||
error: 'GitHub authentication required or token expired',
|
||||
authError: true
|
||||
};
|
||||
}
|
||||
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
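A minimal sketch of how a caller might branch on the result of checkRepositoryAccessWithUser. The route path, the x-user-id header, and the response shape are illustrative assumptions, not part of this commit.

```js
// Sketch: branching on checkRepositoryAccessWithUser from an Express-style route.
const express = require('express');
const GitHubIntegrationService = require('./github-integration.service');

const router = express.Router();
const githubService = new GitHubIntegrationService();

router.get('/repos/:owner/:repo/access', async (req, res) => {
  const { owner, repo } = req.params;
  const userId = req.headers['x-user-id']; // assumption: the gateway forwards the user id

  const access = await githubService.checkRepositoryAccessWithUser(owner, repo, userId);

  if (access.hasAccess) {
    return res.json({ ok: true, is_private: access.isPrivate, via: access.github_username });
  }
  if (access.requiresAuth) {
    // Caller should complete the GitHub OAuth flow first
    return res.status(401).json({ ok: false, requires_auth: true, auth_error: !!access.authError });
  }
  return res.status(404).json({ ok: false, error: access.error });
});

module.exports = router;
```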
@ -195,15 +336,253 @@ class GitHubIntegrationService {
|
||||
return result.rows[0] || null;
|
||||
}
|
||||
|
||||
// Ensure a GitHub webhook exists for the repository (uses OAuth token)
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
try {
|
||||
const secret = process.env.GITHUB_WEBHOOK_SECRET;
|
||||
if (!callbackUrl) {
|
||||
console.warn('Webhook callbackUrl not provided; skipping webhook creation');
|
||||
return { created: false, reason: 'missing_callback_url' };
|
||||
}
|
||||
|
||||
const octokit = await this.getAuthenticatedOctokit();
|
||||
|
||||
// List existing hooks to avoid duplicates
|
||||
const { data: hooks } = await octokit.request('GET /repos/{owner}/{repo}/hooks', {
|
||||
owner,
|
||||
repo
|
||||
});
|
||||
|
||||
const existing = hooks.find(h => h.config && h.config.url === callbackUrl);
|
||||
if (existing) {
|
||||
// Optionally ensure events include push
|
||||
if (!existing.events || !existing.events.includes('push')) {
|
||||
try {
|
||||
await octokit.request('PATCH /repos/{owner}/{repo}/hooks/{hook_id}', {
|
||||
owner,
|
||||
repo,
|
||||
hook_id: existing.id,
|
||||
events: Array.from(new Set([...(existing.events || []), 'push']))
|
||||
});
|
||||
} catch (e) {
|
||||
console.warn('Failed to update existing webhook events:', e.message);
|
||||
}
|
||||
}
|
||||
return { created: false, reason: 'exists', hook_id: existing.id };
|
||||
}
|
||||
|
||||
// Create new webhook
|
||||
const createResp = await octokit.request('POST /repos/{owner}/{repo}/hooks', {
|
||||
owner,
|
||||
repo,
|
||||
config: {
|
||||
url: callbackUrl,
|
||||
content_type: 'json',
|
||||
secret: secret || undefined,
|
||||
insecure_ssl: '0'
|
||||
},
|
||||
events: ['push'],
|
||||
active: true
|
||||
});
|
||||
|
||||
return { created: true, hook_id: createResp.data.id };
|
||||
} catch (error) {
|
||||
// Common cases: insufficient permissions, private repo without correct scope
|
||||
console.warn('ensureRepositoryWebhook failed:', error.status, error.message);
|
||||
return { created: false, error: error.message };
|
||||
}
|
||||
}
|
||||
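Since ensureRepositoryWebhook registers the hook with GITHUB_WEBHOOK_SECRET, the receiving endpoint should verify GitHub's X-Hub-Signature-256 header against the raw request body. The receiving route itself is not in this excerpt, so this is a sketch of the verification step only.

```js
// Sketch: verifying the X-Hub-Signature-256 header on incoming deliveries,
// using the same GITHUB_WEBHOOK_SECRET that ensureRepositoryWebhook registers.
const crypto = require('crypto');

function verifyGitHubSignature(rawBody, signatureHeader) {
  const secret = process.env.GITHUB_WEBHOOK_SECRET;
  if (!secret || !signatureHeader) return false;

  const expected = 'sha256=' +
    crypto.createHmac('sha256', secret).update(rawBody).digest('hex');

  // timingSafeEqual throws on length mismatch, so guard first
  if (expected.length !== signatureHeader.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signatureHeader));
}

module.exports = { verifyGitHubSignature };
```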
|
||||
// Git-based: clone or update local repo and re-index into DB
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
const database = require('../config/database');
|
||||
const localPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
let storageRecord = null;
|
||||
|
||||
try {
|
||||
await this.gitRepoService.ensureDirectory(path.dirname(localPath));
|
||||
|
||||
// Initialize storage record as downloading
|
||||
storageRecord = await this.fileStorageService.initializeRepositoryStorage(
|
||||
repositoryId,
|
||||
localPath
|
||||
);
|
||||
|
||||
// Clone if missing (prefer authenticated HTTPS with OAuth token), otherwise fetch & fast-forward
|
||||
let repoPath = null;
|
||||
try {
|
||||
const tokenRecord = await this.oauthService.getToken();
|
||||
if (tokenRecord?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(
|
||||
owner,
|
||||
repo,
|
||||
branch,
|
||||
'github.com',
|
||||
tokenRecord.access_token,
|
||||
'oauth2'
|
||||
);
|
||||
}
|
||||
} catch (_) {}
|
||||
if (!repoPath) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissing(owner, repo, branch);
|
||||
}
|
||||
const beforeSha = await this.gitRepoService.getHeadSha(repoPath);
|
||||
const { afterSha } = await this.gitRepoService.fetchAndFastForward(repoPath, branch);
|
||||
|
||||
// Index filesystem into DB
|
||||
await this.fileStorageService.processDirectoryStructure(
|
||||
storageRecord.id,
|
||||
repositoryId,
|
||||
repoPath
|
||||
);
|
||||
|
||||
const finalStorage = await this.fileStorageService.completeRepositoryStorage(storageRecord.id);
|
||||
|
||||
// Persist last synced commit
|
||||
try {
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_commit_sha = $1, last_synced_at = NOW(), updated_at = NOW() WHERE id = $2',
|
||||
[afterSha || beforeSha || null, repositoryId]
|
||||
);
|
||||
} catch (_) {}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
targetDir: repoPath,
|
||||
beforeSha,
|
||||
afterSha: afterSha || beforeSha,
|
||||
storage: finalStorage
|
||||
};
|
||||
} catch (error) {
|
||||
if (storageRecord) {
|
||||
await this.fileStorageService.markStorageFailed(storageRecord.id, error.message);
|
||||
}
|
||||
return { success: false, error: error.message };
|
||||
}
|
||||
}
|
||||
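A short sketch of calling the git-based sync directly for a repository that is already attached. The repository id is a placeholder for the github_repositories row id.

```js
// Sketch: triggering a git-based re-sync for an already-attached repository.
const GitHubIntegrationService = require('./github-integration.service');

async function resyncRepository(repositoryId) {
  const svc = new GitHubIntegrationService();
  const result = await svc.syncRepositoryWithGit('octocat', 'hello-world', 'main', repositoryId);

  if (result.success) {
    console.log(`Indexed ${result.targetDir}: ${result.beforeSha} -> ${result.afterSha}`);
  } else {
    console.error('Git sync failed:', result.error);
  }
  return result;
}
```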
|
||||
// Git-based: get unified diff between two SHAs in local repo
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
// Ensure local repo exists and is up to date; handle main/master mismatch gracefully
|
||||
const preferredBranch = branch || 'main';
|
||||
const alternateBranch = preferredBranch === 'main' ? 'master' : 'main';
|
||||
|
||||
let repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, preferredBranch);
|
||||
try {
|
||||
// Try to ensure repo exists for the preferred branch
|
||||
try {
|
||||
const tokenRecord = await this.oauthService.getToken().catch(() => null);
|
||||
if (tokenRecord?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, preferredBranch, 'github.com', tokenRecord.access_token, 'oauth2');
|
||||
} else {
|
||||
repoPath = await this.gitRepoService.cloneIfMissing(owner, repo, preferredBranch);
|
||||
}
|
||||
} catch (cloneErr) {
|
||||
// If the branch doesn't exist (e.g., refs/heads not found), try the alternate branch
|
||||
try {
|
||||
const tokenRecordAlt = await this.oauthService.getToken().catch(() => null);
|
||||
repoPath = tokenRecordAlt?.access_token
|
||||
? await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, alternateBranch, 'github.com', tokenRecordAlt.access_token, 'oauth2')
|
||||
: await this.gitRepoService.cloneIfMissing(owner, repo, alternateBranch);
|
||||
} catch (_) {
|
||||
// Fall through; we'll try to use any existing local copy next
|
||||
}
|
||||
}
|
||||
|
||||
// If a local repo exists for alternate branch, prefer that to avoid failures
|
||||
const fs = require('fs');
|
||||
const altPath = this.gitRepoService.getLocalRepoPath(owner, repo, alternateBranch);
|
||||
if ((!fs.existsSync(repoPath) || !fs.existsSync(require('path').join(repoPath, '.git'))) && fs.existsSync(altPath)) {
|
||||
repoPath = altPath;
|
||||
}
|
||||
|
||||
// Update and checkout target ref if possible
|
||||
try {
|
||||
await this.gitRepoService.fetchAndFastForward(repoPath, preferredBranch);
|
||||
} catch (_) {
|
||||
// If checkout fails for preferred branch, attempt alternate
|
||||
try { await this.gitRepoService.fetchAndFastForward(repoPath, alternateBranch); } catch (_) {}
|
||||
}
|
||||
|
||||
const patch = await this.gitRepoService.getDiff(repoPath, fromSha || null, toSha || 'HEAD', { patch: true });
|
||||
return patch;
|
||||
} catch (error) {
|
||||
// Surface a clearer error including both attempted paths
|
||||
const attempted = [this.gitRepoService.getLocalRepoPath(owner, repo, preferredBranch), this.gitRepoService.getLocalRepoPath(owner, repo, alternateBranch)].join(' | ');
|
||||
throw new Error(`${error.message} (attempted paths: ${attempted})`);
|
||||
}
|
||||
}
|
||||
|
||||
// Git-based: list changed files since a SHA
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
const preferredBranch = branch || 'main';
|
||||
const alternateBranch = preferredBranch === 'main' ? 'master' : 'main';
|
||||
|
||||
let repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, preferredBranch);
|
||||
try {
|
||||
// Ensure repo exists similarly to diff flow
|
||||
try {
|
||||
const tokenRecord = await this.oauthService.getToken().catch(() => null);
|
||||
if (tokenRecord?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, preferredBranch, 'github.com', tokenRecord.access_token, 'oauth2');
|
||||
} else {
|
||||
repoPath = await this.gitRepoService.cloneIfMissing(owner, repo, preferredBranch);
|
||||
}
|
||||
} catch (_) {
|
||||
try {
|
||||
const tokenRecordAlt = await this.oauthService.getToken().catch(() => null);
|
||||
repoPath = tokenRecordAlt?.access_token
|
||||
? await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, alternateBranch, 'github.com', tokenRecordAlt.access_token, 'oauth2')
|
||||
: await this.gitRepoService.cloneIfMissing(owner, repo, alternateBranch);
|
||||
} catch (_) {}
|
||||
}
|
||||
|
||||
const fs = require('fs');
|
||||
const altPath = this.gitRepoService.getLocalRepoPath(owner, repo, alternateBranch);
|
||||
if ((!fs.existsSync(repoPath) || !fs.existsSync(require('path').join(repoPath, '.git'))) && fs.existsSync(altPath)) {
|
||||
repoPath = altPath;
|
||||
}
|
||||
|
||||
try {
|
||||
await this.gitRepoService.fetchAndFastForward(repoPath, preferredBranch);
|
||||
} catch (_) {
|
||||
try { await this.gitRepoService.fetchAndFastForward(repoPath, alternateBranch); } catch (_) {}
|
||||
}
|
||||
|
||||
const files = await this.gitRepoService.getChangedFilesSince(repoPath, sinceSha);
|
||||
return files;
|
||||
} catch (error) {
|
||||
const attempted = [this.gitRepoService.getLocalRepoPath(owner, repo, preferredBranch), this.gitRepoService.getLocalRepoPath(owner, repo, alternateBranch)].join(' | ');
|
||||
throw new Error(`${error.message} (attempted paths: ${attempted})`);
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up repository storage
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
return await this.fileStorageService.cleanupRepositoryStorage(repositoryId);
|
||||
}
|
||||
|
||||
// Try git-based sync first, fall back to GitHub API download on failure
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
// First attempt: full git clone/fetch and index
|
||||
const gitResult = await this.syncRepositoryWithGit(owner, repo, branch, repositoryId);
|
||||
if (gitResult && gitResult.success) {
|
||||
return { method: 'git', ...gitResult };
|
||||
}
|
||||
|
||||
// Fallback: API-based download and storage
|
||||
const apiResult = await this.downloadRepositoryWithStorage(owner, repo, branch, repositoryId);
|
||||
if (apiResult && apiResult.success) {
|
||||
return { method: 'api', ...apiResult, git_error: gitResult?.error };
|
||||
}
|
||||
|
||||
return { success: false, error: apiResult?.error || gitResult?.error || 'Unknown sync failure' };
|
||||
}
|
||||
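Callers can inspect the returned `method` field to see whether the git path or the API fallback produced the indexed copy. A minimal sketch, with `svc` and `repositoryId` as placeholders for the service instance and the DB row id:

```js
// Sketch: detect when the API fallback was used instead of git.
async function attachAndIndex(svc, repositoryId) {
  const result = await svc.syncRepositoryWithFallback('octocat', 'hello-world', 'main', repositoryId);
  if (!result.success) throw new Error(result.error);
  if (result.method === 'api' && result.git_error) {
    console.warn('Fell back to API download; git sync failed with:', result.git_error);
  }
  return result;
}
```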
|
||||
// Download repository files locally and store in database
|
||||
async downloadRepositoryWithStorage(owner, repo, branch, repositoryId) {
|
||||
const targetDir = path.join(
process.env.ATTACHED_REPOS_DIR,
`${owner}__${repo}__${branch}`
);
|
||||
|
||||
@ -320,7 +699,7 @@ class GitHubIntegrationService {
|
||||
// Legacy method - download repository files locally (backwards compatibility)
|
||||
async downloadRepository(owner, repo, branch) {
|
||||
const targetDir = path.join(
process.env.ATTACHED_REPOS_DIR,
`${owner}__${repo}__${branch}`
);
|
||||
|
||||
@ -399,4 +778,4 @@ class GitHubIntegrationService {
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GitHubIntegrationService;
|
||||
@ -14,16 +14,26 @@ class GitHubOAuthService {
|
||||
}
|
||||
|
||||
// Generate GitHub OAuth URL
|
||||
getAuthUrl(state, userId = null) {
|
||||
if (!this.clientId) {
|
||||
throw new Error('GitHub OAuth not configured');
|
||||
}
|
||||
|
||||
// If a userId is provided, append it to the redirect_uri so the callback can link token to that user
|
||||
let redirectUri = this.redirectUri;
|
||||
if (userId) {
|
||||
const hasQuery = redirectUri.includes('?');
|
||||
redirectUri = `${redirectUri}${hasQuery ? '&' : '?'}user_id=${encodeURIComponent(userId)}`;
|
||||
}
|
||||
|
||||
// Also embed userId into the OAuth state for fallback extraction in callback
|
||||
const stateWithUser = userId ? `${state}|uid=${userId}` : state;
|
||||
|
||||
const params = new URLSearchParams({
|
||||
client_id: this.clientId,
|
||||
redirect_uri: redirectUri,
scope: 'repo,user:email',
state: stateWithUser,
|
||||
allow_signup: 'false'
|
||||
});
|
||||
|
||||
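Because getAuthUrl embeds the user id both as a user_id query parameter on the redirect_uri and as a `|uid=` suffix inside the OAuth state, the callback side can recover it from either place. The callback route is not part of this excerpt, so the helper below is a sketch of that extraction only.

```js
// Sketch: recovering the user id on the OAuth callback from either the
// redirect_uri query param or the `|uid=` suffix embedded in `state`.
function extractUserId(query) {
  if (query.user_id) return query.user_id;

  const state = query.state || '';
  const marker = state.indexOf('|uid=');
  return marker >= 0 ? state.slice(marker + '|uid='.length) : null;
}

// extractUserId({ state: 'abc123|uid=42' })         -> '42'
// extractUserId({ user_id: '42', state: 'abc123' }) -> '42'
```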
@ -61,48 +71,116 @@ class GitHubOAuthService {
|
||||
return user;
|
||||
}
|
||||
|
||||
// Store GitHub token with user ID
async storeToken(accessToken, githubUser, userId = null) {
|
||||
const query = `
|
||||
INSERT INTO github_user_tokens (access_token, github_username, github_user_id, scopes, expires_at, user_id, is_primary)
VALUES ($1, $2, $3, $4, $5, $6, $7)
ON CONFLICT (user_id, github_username)
|
||||
DO UPDATE SET
|
||||
access_token = $1,
|
||||
github_username = $2,
|
||||
github_user_id = $3,
|
||||
scopes = $4,
|
||||
expires_at = $5,
|
||||
is_primary = $7,
|
||||
updated_at = NOW()
|
||||
RETURNING *
|
||||
`;
|
||||
|
||||
// If this is the first GitHub account for the user, make it primary
|
||||
const isPrimary = userId ? await this.isFirstGitHubAccountForUser(userId) : false;
|
||||
|
||||
const result = await database.query(query, [
|
||||
accessToken,
|
||||
githubUser.login,
|
||||
githubUser.id,
|
||||
JSON.stringify(['repo', 'user:email']),
|
||||
null,
userId,
isPrimary
|
||||
]);
|
||||
|
||||
return result.rows[0];
|
||||
}
|
||||
|
||||
// Check if this is the first GitHub account for a user
|
||||
async isFirstGitHubAccountForUser(userId) {
|
||||
const result = await database.query(
|
||||
'SELECT COUNT(*) as count FROM github_user_tokens WHERE user_id = $1',
|
||||
[userId]
|
||||
);
|
||||
return parseInt(result.rows[0].count) === 0;
|
||||
}
|
||||
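The ON CONFLICT (user_id, github_username) upsert in storeToken only works if a matching unique index or constraint exists on github_user_tokens. That migration is not part of this commit, so the sketch below (index name included) is an assumption about what it would look like, written in the same database.query style used elsewhere in this service.

```js
// Sketch: unique index assumed by the storeToken upsert above.
const database = require('../config/database');

async function ensureGithubTokenUniqueIndex() {
  await database.query(`
    CREATE UNIQUE INDEX IF NOT EXISTS idx_github_user_tokens_user_account
    ON github_user_tokens (user_id, github_username)
  `);
}

module.exports = { ensureGithubTokenUniqueIndex };
```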
|
||||
// Get stored token (legacy method - gets any token)
|
||||
async getToken() {
|
||||
const query = 'SELECT * FROM github_user_tokens ORDER BY created_at DESC LIMIT 1';
|
||||
const result = await database.query(query);
|
||||
return result.rows[0];
|
||||
}
|
||||
|
||||
// Get all tokens for a specific user
|
||||
async getUserTokens(userId) {
|
||||
const query = 'SELECT * FROM github_user_tokens WHERE user_id = $1 ORDER BY is_primary DESC, created_at DESC';
|
||||
const result = await database.query(query, [userId]);
|
||||
return result.rows;
|
||||
}
|
||||
|
||||
// Get primary token for a user
|
||||
async getUserPrimaryToken(userId) {
|
||||
const query = 'SELECT * FROM github_user_tokens WHERE user_id = $1 AND is_primary = true LIMIT 1';
|
||||
const result = await database.query(query, [userId]);
|
||||
return result.rows[0] || null;
|
||||
}
|
||||
|
||||
// Find the right token for accessing a specific repository
|
||||
async findTokenForRepository(userId, owner, repo) {
|
||||
const tokens = await this.getUserTokens(userId);
|
||||
|
||||
for (const token of tokens) {
|
||||
try {
|
||||
const octokit = new Octokit({ auth: token.access_token });
|
||||
// Try to access the repository with this token
|
||||
await octokit.repos.get({ owner, repo });
|
||||
console.log(`✅ Found token for ${owner}/${repo}: ${token.github_username}`);
|
||||
return token;
|
||||
} catch (error) {
|
||||
console.log(`❌ Token ${token.github_username} cannot access ${owner}/${repo}: ${error.message}`);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
return null; // No token found that can access this repository
|
||||
}
|
||||
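A sketch of using findTokenForRepository to pick the right stored token before making further API calls. The require path for the OAuth service and the @octokit/rest import are assumptions; listBranches is a standard Octokit call.

```js
// Sketch: resolve a usable token, then call the GitHub API with it.
const { Octokit } = require('@octokit/rest');
const GitHubOAuthService = require('./github-oauth');

async function listBranches(userId, owner, repo) {
  const oauth = new GitHubOAuthService();
  const token = await oauth.findTokenForRepository(userId, owner, repo);
  if (!token) throw new Error('No linked GitHub account can access this repository');

  const octokit = new Octokit({ auth: token.access_token });
  const { data } = await octokit.repos.listBranches({ owner, repo });
  return data.map(b => b.name);
}
```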
|
||||
// Validate if a token is still valid
|
||||
async validateToken(accessToken) {
|
||||
try {
|
||||
const octokit = new Octokit({ auth: accessToken });
|
||||
await octokit.users.getAuthenticated();
|
||||
return true;
|
||||
} catch (error) {
|
||||
if (error.status === 401) {
|
||||
return false;
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Create authenticated Octokit instance
|
||||
async getAuthenticatedOctokit() {
|
||||
const tokenRecord = await this.getToken();
|
||||
|
||||
if (!tokenRecord) {
|
||||
throw new Error('No GitHub token found. Please authenticate with GitHub first.');
|
||||
}
|
||||
|
||||
// Validate token before using it
|
||||
const isValid = await this.validateToken(tokenRecord.access_token);
|
||||
if (!isValid) {
|
||||
console.warn('GitHub token is invalid or expired, removing from database');
|
||||
await this.removeInvalidToken(tokenRecord.id);
|
||||
throw new Error('GitHub token has expired. Please re-authenticate with GitHub.');
|
||||
}
|
||||
|
||||
return new Octokit({
|
||||
@ -125,6 +203,51 @@ class GitHubOAuthService {
|
||||
}
|
||||
}
|
||||
|
||||
// Remove invalid token from database
|
||||
async removeInvalidToken(tokenId) {
|
||||
try {
|
||||
await database.query('DELETE FROM github_user_tokens WHERE id = $1', [tokenId]);
|
||||
} catch (error) {
|
||||
console.error('Error removing invalid token:', error);
|
||||
}
|
||||
}
|
||||
|
||||
// Check authentication status
|
||||
async getAuthStatus() {
|
||||
const tokenRecord = await this.getToken();
|
||||
|
||||
if (!tokenRecord) {
|
||||
return {
|
||||
connected: false,
|
||||
requires_auth: true,
|
||||
auth_url: this.getAuthUrl(Math.random().toString(36).substring(7))
|
||||
};
|
||||
}
|
||||
|
||||
// Validate token by making a test API call
|
||||
try {
|
||||
const octokit = new Octokit({ auth: tokenRecord.access_token });
|
||||
await octokit.users.getAuthenticated();
|
||||
|
||||
return {
|
||||
connected: true,
|
||||
github_username: tokenRecord.github_username,
|
||||
github_user_id: tokenRecord.github_user_id,
|
||||
scopes: tokenRecord.scopes,
|
||||
created_at: tokenRecord.created_at
|
||||
};
|
||||
} catch (error) {
|
||||
console.warn('GitHub token validation failed:', error.message);
|
||||
// Remove invalid token
|
||||
await this.removeInvalidToken(tokenRecord.id);
|
||||
return {
|
||||
connected: false,
|
||||
requires_auth: true,
|
||||
auth_url: this.getAuthUrl(Math.random().toString(36).substring(7))
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Revoke token
|
||||
async revokeToken() {
|
||||
const tokenRecord = await this.getToken();
|
||||
|
||||
70
services/git-integration/src/services/gitlab-oauth.js
Normal file
@ -0,0 +1,70 @@
|
||||
// services/gitlab-oauth.js
|
||||
const database = require('../config/database');
|
||||
|
||||
class GitLabOAuthService {
|
||||
constructor() {
|
||||
this.clientId = process.env.GITLAB_CLIENT_ID;
|
||||
this.clientSecret = process.env.GITLAB_CLIENT_SECRET;
|
||||
this.baseUrl = (process.env.GITLAB_BASE_URL || 'https://gitlab.com').replace(/\/$/, '');
|
||||
this.redirectUri = process.env.GITLAB_REDIRECT_URI || 'http://localhost:8012/api/vcs/gitlab/auth/callback';
|
||||
}
|
||||
|
||||
getAuthUrl(state) {
|
||||
if (!this.clientId) throw new Error('GitLab OAuth not configured');
|
||||
const authUrl = `${this.baseUrl}/oauth/authorize`;
|
||||
const params = new URLSearchParams({
|
||||
client_id: this.clientId,
|
||||
redirect_uri: this.redirectUri,
|
||||
response_type: 'code',
|
||||
scope: 'read_api api read_user',
|
||||
state
|
||||
});
|
||||
return `${authUrl}?${params.toString()}`;
|
||||
}
|
||||
|
||||
async exchangeCodeForToken(code) {
|
||||
const tokenUrl = `${this.baseUrl}/oauth/token`;
|
||||
const resp = await fetch(tokenUrl, {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({
|
||||
client_id: this.clientId,
|
||||
client_secret: this.clientSecret,
|
||||
code,
|
||||
grant_type: 'authorization_code',
|
||||
redirect_uri: this.redirectUri
|
||||
})
|
||||
});
|
||||
const data = await resp.json();
|
||||
if (!resp.ok || data.error) throw new Error(data.error_description || 'GitLab token exchange failed');
|
||||
return data.access_token;
|
||||
}
|
||||
|
||||
async getUserInfo(accessToken) {
|
||||
const resp = await fetch(`${this.baseUrl}/api/v4/user`, {
|
||||
headers: { Authorization: `Bearer ${accessToken}` }
|
||||
});
|
||||
if (!resp.ok) throw new Error('Failed to fetch GitLab user');
|
||||
return await resp.json();
|
||||
}
|
||||
|
||||
async storeToken(accessToken, user) {
|
||||
const result = await database.query(
|
||||
`INSERT INTO gitlab_user_tokens (access_token, gitlab_username, gitlab_user_id, scopes, expires_at)
|
||||
VALUES ($1, $2, $3, $4, $5)
|
||||
ON CONFLICT (id) DO UPDATE SET access_token = EXCLUDED.access_token, gitlab_username = EXCLUDED.gitlab_username, gitlab_user_id = EXCLUDED.gitlab_user_id, scopes = EXCLUDED.scopes, expires_at = EXCLUDED.expires_at, updated_at = NOW()
|
||||
RETURNING *`,
|
||||
[accessToken, user.username, user.id, JSON.stringify(['read_api','api','read_user']), null]
|
||||
);
|
||||
return result.rows[0];
|
||||
}
|
||||
|
||||
async getToken() {
|
||||
const r = await database.query('SELECT * FROM gitlab_user_tokens ORDER BY created_at DESC LIMIT 1');
|
||||
return r.rows[0];
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GitLabOAuthService;
|
||||
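A minimal sketch of wiring the GitLab OAuth round-trip with the service above. GITLAB_CLIENT_ID, GITLAB_CLIENT_SECRET and GITLAB_REDIRECT_URI must be set; the route split between the two steps is illustrative.

```js
// Sketch: GitLab OAuth flow using GitLabOAuthService.
const GitLabOAuthService = require('./gitlab-oauth');

const gitlabOAuth = new GitLabOAuthService();

// Step 1: send the user to GitLab
// const url = gitlabOAuth.getAuthUrl(someRandomState);

// Step 2: on the callback, exchange the code and persist the token
async function handleGitLabCallback(code) {
  const accessToken = await gitlabOAuth.exchangeCodeForToken(code);
  const user = await gitlabOAuth.getUserInfo(accessToken);
  return gitlabOAuth.storeToken(accessToken, user); // row in gitlab_user_tokens
}

module.exports = { handleGitLabCallback };
```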
|
||||
|
||||
84
services/git-integration/src/services/provider-registry.js
Normal file
@ -0,0 +1,84 @@
|
||||
// services/provider-registry.js
|
||||
// Simple provider registry/factory to resolve adapters by provider key.
|
||||
|
||||
const GithubIntegrationService = require('./github-integration.service');
|
||||
const GitlabAdapter = require('./providers/gitlab.adapter');
|
||||
const BitbucketAdapter = require('./providers/bitbucket.adapter');
|
||||
const GiteaAdapter = require('./providers/gitea.adapter');
|
||||
|
||||
class GithubAdapter {
|
||||
constructor() {
|
||||
this.impl = new GithubIntegrationService();
|
||||
}
|
||||
|
||||
parseRepoUrl(url) {
|
||||
return this.impl.parseGitHubUrl(url);
|
||||
}
|
||||
|
||||
async checkRepositoryAccess(owner, repo) {
|
||||
return await this.impl.checkRepositoryAccess(owner, repo);
|
||||
}
|
||||
|
||||
async fetchRepositoryMetadata(owner, repo) {
|
||||
return await this.impl.fetchRepositoryMetadata(owner, repo);
|
||||
}
|
||||
|
||||
async analyzeCodebase(owner, repo, branch) {
|
||||
return await this.impl.analyzeCodebase(owner, repo, branch);
|
||||
}
|
||||
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
return await this.impl.ensureRepositoryWebhook(owner, repo, callbackUrl);
|
||||
}
|
||||
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
return await this.impl.syncRepositoryWithGit(owner, repo, branch, repositoryId);
|
||||
}
|
||||
|
||||
async downloadRepositoryWithStorage(owner, repo, branch, repositoryId) {
|
||||
return await this.impl.downloadRepositoryWithStorage(owner, repo, branch, repositoryId);
|
||||
}
|
||||
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
return await this.impl.syncRepositoryWithFallback(owner, repo, branch, repositoryId);
|
||||
}
|
||||
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
return await this.impl.getRepositoryDiff(owner, repo, branch, fromSha, toSha);
|
||||
}
|
||||
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
return await this.impl.getRepositoryChangesSince(owner, repo, branch, sinceSha);
|
||||
}
|
||||
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
return await this.impl.cleanupRepositoryStorage(repositoryId);
|
||||
}
|
||||
}
|
||||
|
||||
class ProviderRegistry {
|
||||
constructor() {
|
||||
this.providers = new Map();
|
||||
// Register GitHub by default
|
||||
this.providers.set('github', () => new GithubAdapter());
|
||||
this.providers.set('gitlab', () => new GitlabAdapter());
|
||||
this.providers.set('bitbucket', () => new BitbucketAdapter());
|
||||
this.providers.set('gitea', () => new GiteaAdapter());
|
||||
}
|
||||
|
||||
register(providerKey, factoryFn) {
|
||||
this.providers.set(providerKey, factoryFn);
|
||||
}
|
||||
|
||||
resolve(providerKey) {
|
||||
const factory = this.providers.get((providerKey || '').toLowerCase());
|
||||
if (!factory) {
|
||||
throw new Error(`Unsupported provider: ${providerKey}`);
|
||||
}
|
||||
return factory();
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = new ProviderRegistry();
|
||||
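Usage sketch for the registry: it is exported as a singleton, so every require() sees the same provider map, and unknown keys throw an error that callers can map to a 400 response.

```js
// Sketch: resolving an adapter by provider key.
const providerRegistry = require('./provider-registry');

const adapter = providerRegistry.resolve('gitlab'); // keys are lower-cased before lookup
const { owner, repo, branch } =
  adapter.parseRepoUrl('https://gitlab.com/group/project/-/tree/develop');
// -> owner: 'group', repo: 'project', branch: 'develop'

// Unknown keys throw:
// providerRegistry.resolve('svn') -> Error: Unsupported provider: svn
```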
|
||||
|
||||
@ -0,0 +1,166 @@
|
||||
// services/providers/bitbucket.adapter.js
|
||||
const VcsProviderInterface = require('../vcs-provider.interface');
|
||||
const FileStorageService = require('../file-storage.service');
|
||||
const GitRepoService = require('../git-repo.service');
|
||||
const BitbucketOAuthService = require('../bitbucket-oauth');
|
||||
|
||||
class BitbucketAdapter extends VcsProviderInterface {
|
||||
constructor() {
|
||||
super();
|
||||
this.fileStorageService = new FileStorageService();
|
||||
this.gitRepoService = new GitRepoService();
|
||||
this.host = process.env.BITBUCKET_BASE_URL || 'bitbucket.org';
|
||||
this.oauth = new BitbucketOAuthService();
|
||||
}
|
||||
|
||||
parseRepoUrl(url) {
|
||||
if (!url || typeof url !== 'string') throw new Error('URL must be a non-empty string');
|
||||
let normalized = url.trim();
|
||||
if (!normalized.startsWith('http')) normalized = 'https://' + normalized;
|
||||
const host = normalized.replace(/^https?:\/\//, '').split('/')[0];
|
||||
if (!host.includes('bitbucket')) throw new Error(`Invalid Bitbucket repository URL: ${url}`);
|
||||
const parts = normalized.split(host)[1].replace(/^\//, '').split('#')[0].split('?')[0].split('/');
|
||||
const owner = parts[0];
|
||||
const repo = (parts[1] || '').replace(/\.git$/, '');
|
||||
if (!owner || !repo) throw new Error(`Invalid Bitbucket repository URL: ${url}`);
|
||||
let branch = 'main';
|
||||
// Bitbucket uses /branch/ sometimes in URLs
|
||||
const branchIdx = parts.findIndex(p => p === 'branch');
|
||||
if (branchIdx >= 0 && parts[branchIdx + 1]) branch = parts[branchIdx + 1];
|
||||
return { owner, repo, branch };
|
||||
}
|
||||
|
||||
async checkRepositoryAccess(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
|
||||
try {
|
||||
// Always try with authentication first (like GitHub behavior)
|
||||
if (token?.access_token) {
|
||||
const resp = await fetch(`https://api.bitbucket.org/2.0/repositories/${owner}/${repo}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.status === 200) {
|
||||
const d = await resp.json();
|
||||
const isPrivate = !!d.is_private;
|
||||
return { exists: true, isPrivate, hasAccess: true, requiresAuth: isPrivate };
|
||||
}
|
||||
}
|
||||
|
||||
// No token or token failed: try without authentication
|
||||
const resp = await fetch(`https://api.bitbucket.org/2.0/repositories/${owner}/${repo}`);
|
||||
if (resp.status === 200) {
|
||||
const d = await resp.json();
|
||||
const isPrivate = !!d.is_private;
|
||||
return { exists: true, isPrivate, hasAccess: true, requiresAuth: false };
|
||||
}
|
||||
if (resp.status === 404 || resp.status === 403) {
|
||||
// Repository exists but requires authentication (like GitHub behavior)
|
||||
return { exists: resp.status !== 404 ? true : false, isPrivate: true, hasAccess: false, requiresAuth: true };
|
||||
}
|
||||
} catch (error) {
|
||||
// If any error occurs, assume repository requires authentication
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
async fetchRepositoryMetadata(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
if (token?.access_token) {
|
||||
try {
|
||||
const resp = await fetch(`https://api.bitbucket.org/2.0/repositories/${owner}/${repo}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.ok) {
|
||||
const d = await resp.json();
|
||||
// Bitbucket default branch is in mainbranch.name
|
||||
return { full_name: d.full_name, visibility: d.is_private ? 'private' : 'public', default_branch: d.mainbranch?.name || 'main', updated_at: d.updated_on };
|
||||
}
|
||||
} catch (_) {}
|
||||
}
|
||||
return { full_name: `${owner}/${repo}`, visibility: 'public', default_branch: 'main', updated_at: new Date().toISOString() };
|
||||
}
|
||||
|
||||
async analyzeCodebase(owner, repo, branch) {
|
||||
return { total_files: 0, total_size: 0, directories: [], branch };
|
||||
}
|
||||
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
try {
|
||||
if (!callbackUrl) return { created: false, reason: 'missing_callback_url' };
|
||||
const token = await this.oauth.getToken();
|
||||
if (!token?.access_token) return { created: false, reason: 'missing_token' };
|
||||
// Bitbucket Cloud webhooks don't support shared secret directly; create basic push webhook
|
||||
const resp = await fetch(`https://api.bitbucket.org/2.0/repositories/${owner}/${repo}/hooks`, {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token.access_token}` },
|
||||
body: JSON.stringify({ description: 'CodeNuk Git Integration', url: callbackUrl, active: true, events: ['repo:push'] })
|
||||
});
|
||||
if (resp.ok) { const d = await resp.json(); return { created: true, hook_id: d.uuid || d.id }; }
|
||||
return { created: false, reason: `status_${resp.status}` };
|
||||
} catch (e) {
|
||||
return { created: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
const database = require('../../config/database');
|
||||
let storageRecord = null;
|
||||
try {
|
||||
const token = await this.oauth.getToken();
|
||||
let repoPath = null;
|
||||
if (token?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, branch, this.host, token.access_token, 'bearer');
|
||||
} else {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithHost(owner, repo, branch, this.host);
|
||||
}
|
||||
storageRecord = await this.fileStorageService.initializeRepositoryStorage(repositoryId, repoPath);
|
||||
await this.fileStorageService.processDirectoryStructure(storageRecord.id, repositoryId, repoPath);
|
||||
const finalStorage = await this.fileStorageService.completeRepositoryStorage(storageRecord.id);
|
||||
|
||||
// Get the current HEAD commit SHA and update the repository record
|
||||
try {
|
||||
const headSha = await this.gitRepoService.getHeadSha(repoPath);
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), last_synced_commit_sha = $1, updated_at = NOW() WHERE id = $2',
|
||||
[headSha, repositoryId]
|
||||
);
|
||||
} catch (e) {
|
||||
// If we can't get the SHA, still update the sync time
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), updated_at = NOW() WHERE id = $1',
|
||||
[repositoryId]
|
||||
);
|
||||
}
|
||||
return { success: true, method: 'git', targetDir: repoPath, storage: finalStorage };
|
||||
} catch (e) {
|
||||
if (storageRecord) await this.fileStorageService.markStorageFailed(storageRecord.id, e.message);
|
||||
return { success: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async downloadRepositoryWithStorage() {
|
||||
return { success: false, error: 'api_download_not_implemented' };
|
||||
}
|
||||
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
const git = await this.syncRepositoryWithGit(owner, repo, branch, repositoryId);
|
||||
if (git.success) return git;
|
||||
return { success: false, error: git.error };
|
||||
}
|
||||
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getDiff(repoPath, fromSha || null, toSha || 'HEAD', { patch: true });
|
||||
}
|
||||
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getChangedFilesSince(repoPath, sinceSha);
|
||||
}
|
||||
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
return await this.fileStorageService.cleanupRepositoryStorage(repositoryId);
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = BitbucketAdapter;
|
||||
|
||||
|
||||
166
services/git-integration/src/services/providers/gitea.adapter.js
Normal file
@ -0,0 +1,166 @@
|
||||
// services/providers/gitea.adapter.js
|
||||
const VcsProviderInterface = require('../vcs-provider.interface');
|
||||
const FileStorageService = require('../file-storage.service');
|
||||
const GitRepoService = require('../git-repo.service');
|
||||
const GiteaOAuthService = require('../gitea-oauth');
|
||||
|
||||
class GiteaAdapter extends VcsProviderInterface {
|
||||
constructor() {
|
||||
super();
|
||||
this.fileStorageService = new FileStorageService();
|
||||
this.gitRepoService = new GitRepoService();
|
||||
this.host = process.env.GITEA_BASE_URL || 'gitea.com';
|
||||
this.oauth = new GiteaOAuthService();
|
||||
}
|
||||
|
||||
parseRepoUrl(url) {
|
||||
if (!url || typeof url !== 'string') throw new Error('URL must be a non-empty string');
|
||||
let normalized = url.trim();
|
||||
if (!normalized.startsWith('http')) normalized = 'https://' + normalized;
|
||||
const host = normalized.replace(/^https?:\/\//, '').split('/')[0];
|
||||
// Gitea can be self-hosted; accept any host when explicitly using /api/vcs/gitea
|
||||
const parts = normalized.split(host)[1].replace(/^\//, '').split('#')[0].split('?')[0].split('/');
|
||||
const owner = parts[0];
|
||||
const repo = (parts[1] || '').replace(/\.git$/, '');
|
||||
if (!owner || !repo) throw new Error(`Invalid Gitea repository URL: ${url}`);
|
||||
let branch = 'main';
|
||||
const treeIdx = parts.findIndex(p => p === 'tree');
|
||||
if (treeIdx >= 0 && parts[treeIdx + 1]) branch = parts[treeIdx + 1];
|
||||
return { owner, repo, branch };
|
||||
}
|
||||
|
||||
async checkRepositoryAccess(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
const base = (process.env.GITEA_BASE_URL || 'https://gitea.com').replace(/\/$/, '');
|
||||
|
||||
try {
|
||||
// Always try with authentication first (like GitHub behavior)
|
||||
if (token?.access_token) {
|
||||
const resp = await fetch(`${base}/api/v1/repos/${owner}/${repo}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.status === 200) {
|
||||
const d = await resp.json();
|
||||
const isPrivate = !!d.private;
|
||||
return { exists: true, isPrivate, hasAccess: true, requiresAuth: isPrivate };
|
||||
}
|
||||
}
|
||||
|
||||
// No token or token failed: try without authentication
|
||||
const resp = await fetch(`${base}/api/v1/repos/${owner}/${repo}`);
|
||||
if (resp.status === 200) {
|
||||
const d = await resp.json();
|
||||
return { exists: true, isPrivate: !!d.private, hasAccess: true, requiresAuth: false };
|
||||
}
|
||||
if (resp.status === 404 || resp.status === 403) {
|
||||
// Repository exists but requires authentication (like GitHub behavior)
|
||||
return { exists: resp.status !== 404 ? true : false, isPrivate: true, hasAccess: false, requiresAuth: true };
|
||||
}
|
||||
} catch (error) {
|
||||
// If any error occurs, assume repository requires authentication
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
async fetchRepositoryMetadata(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
const base = (process.env.GITEA_BASE_URL || 'https://gitea.com').replace(/\/$/, '');
|
||||
if (token?.access_token) {
|
||||
try {
|
||||
const resp = await fetch(`${base}/api/v1/repos/${owner}/${repo}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.ok) {
|
||||
const d = await resp.json();
|
||||
return { full_name: d.full_name || `${owner}/${repo}`, visibility: d.private ? 'private' : 'public', default_branch: d.default_branch || 'main', updated_at: d.updated_at };
|
||||
}
|
||||
} catch (_) {}
|
||||
}
|
||||
return { full_name: `${owner}/${repo}`, visibility: 'public', default_branch: 'main', updated_at: new Date().toISOString() };
|
||||
}
|
||||
|
||||
async analyzeCodebase(owner, repo, branch) {
|
||||
return { total_files: 0, total_size: 0, directories: [], branch };
|
||||
}
|
||||
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
try {
|
||||
if (!callbackUrl) return { created: false, reason: 'missing_callback_url' };
|
||||
const token = await this.oauth.getToken();
|
||||
if (!token?.access_token) return { created: false, reason: 'missing_token' };
|
||||
const base = (process.env.GITEA_BASE_URL || 'https://gitea.com').replace(/\/$/, '');
|
||||
const secret = process.env.GITEA_WEBHOOK_SECRET || '';
|
||||
const resp = await fetch(`${base}/api/v1/repos/${owner}/${repo}/hooks`, {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token.access_token}` },
|
||||
body: JSON.stringify({ type: 'gitea', config: { url: callbackUrl, content_type: 'json', secret: secret || undefined }, events: ['push'], active: true })
|
||||
});
|
||||
if (resp.ok) { const d = await resp.json(); return { created: true, hook_id: d.id }; }
|
||||
return { created: false, reason: `status_${resp.status}` };
|
||||
} catch (e) {
|
||||
return { created: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
const database = require('../../config/database');
|
||||
let storageRecord = null;
|
||||
try {
|
||||
const token = await this.oauth.getToken();
|
||||
let repoPath = null;
|
||||
if (token?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, branch, this.host, token.access_token, 'oauth2');
|
||||
} else {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithHost(owner, repo, branch, this.host);
|
||||
}
|
||||
storageRecord = await this.fileStorageService.initializeRepositoryStorage(repositoryId, repoPath);
|
||||
await this.fileStorageService.processDirectoryStructure(storageRecord.id, repositoryId, repoPath);
|
||||
const finalStorage = await this.fileStorageService.completeRepositoryStorage(storageRecord.id);
|
||||
|
||||
// Get the current HEAD commit SHA and update the repository record
|
||||
try {
|
||||
const headSha = await this.gitRepoService.getHeadSha(repoPath);
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), last_synced_commit_sha = $1, updated_at = NOW() WHERE id = $2',
|
||||
[headSha, repositoryId]
|
||||
);
|
||||
} catch (e) {
|
||||
// If we can't get the SHA, still update the sync time
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), updated_at = NOW() WHERE id = $1',
|
||||
[repositoryId]
|
||||
);
|
||||
}
|
||||
return { success: true, method: 'git', targetDir: repoPath, storage: finalStorage };
|
||||
} catch (e) {
|
||||
if (storageRecord) await this.fileStorageService.markStorageFailed(storageRecord.id, e.message);
|
||||
return { success: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async downloadRepositoryWithStorage() {
|
||||
return { success: false, error: 'api_download_not_implemented' };
|
||||
}
|
||||
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
const git = await this.syncRepositoryWithGit(owner, repo, branch, repositoryId);
|
||||
if (git.success) return git;
|
||||
return { success: false, error: git.error };
|
||||
}
|
||||
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getDiff(repoPath, fromSha || null, toSha || 'HEAD', { patch: true });
|
||||
}
|
||||
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getChangedFilesSince(repoPath, sinceSha);
|
||||
}
|
||||
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
return await this.fileStorageService.cleanupRepositoryStorage(repositoryId);
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GiteaAdapter;
|
||||
|
||||
|
||||
@ -0,0 +1,177 @@
|
||||
// services/providers/gitlab.adapter.js
|
||||
const VcsProviderInterface = require('../vcs-provider.interface');
|
||||
const FileStorageService = require('../file-storage.service');
|
||||
const GitRepoService = require('../git-repo.service');
|
||||
const GitLabOAuthService = require('../gitlab-oauth');
|
||||
|
||||
class GitlabAdapter extends VcsProviderInterface {
|
||||
constructor() {
|
||||
super();
|
||||
this.fileStorageService = new FileStorageService();
|
||||
this.gitRepoService = new GitRepoService();
|
||||
this.host = process.env.GITLAB_BASE_URL || 'gitlab.com';
|
||||
this.oauth = new GitLabOAuthService();
|
||||
}
|
||||
|
||||
parseRepoUrl(url) {
|
||||
if (!url || typeof url !== 'string') throw new Error('URL must be a non-empty string');
|
||||
let normalized = url.trim();
|
||||
if (!normalized.startsWith('http')) normalized = 'https://' + normalized;
|
||||
const host = normalized.replace(/^https?:\/\//, '').split('/')[0];
|
||||
if (!host.includes('gitlab')) throw new Error(`Invalid GitLab repository URL: ${url}`);
|
||||
const parts = normalized.split(host)[1].replace(/^\//, '').split('#')[0].split('?')[0].split('/');
|
||||
const owner = parts[0];
|
||||
const repo = (parts[1] || '').replace(/\.git$/, '');
|
||||
if (!owner || !repo) throw new Error(`Invalid GitLab repository URL: ${url}`);
|
||||
let branch = 'main';
|
||||
const treeIdx = parts.findIndex(p => p === 'tree');
|
||||
if (treeIdx >= 0 && parts[treeIdx + 1]) branch = parts[treeIdx + 1];
|
||||
return { owner, repo, branch };
|
||||
}
|
||||
|
||||
async checkRepositoryAccess(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
const base = (process.env.GITLAB_BASE_URL || 'https://gitlab.com').replace(/\/$/, '');
|
||||
|
||||
try {
|
||||
// Always try with authentication first (like GitHub behavior)
|
||||
if (token?.access_token) {
|
||||
const resp = await fetch(`${base}/api/v4/projects/${encodeURIComponent(`${owner}/${repo}`)}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.status === 200) {
|
||||
const data = await resp.json();
|
||||
return { exists: true, isPrivate: data.visibility !== 'public', hasAccess: true, requiresAuth: data.visibility !== 'public' };
|
||||
}
|
||||
}
|
||||
|
||||
// No token or token failed: try without authentication
|
||||
const resp = await fetch(`${base}/api/v4/projects/${encodeURIComponent(`${owner}/${repo}`)}`);
|
||||
if (resp.status === 200) {
|
||||
const data = await resp.json();
|
||||
return { exists: true, isPrivate: data.visibility !== 'public', hasAccess: true, requiresAuth: false };
|
||||
}
|
||||
if (resp.status === 404 || resp.status === 403) {
|
||||
// Repository exists but requires authentication (like GitHub behavior)
|
||||
return { exists: resp.status !== 404 ? true : false, isPrivate: true, hasAccess: false, requiresAuth: true };
|
||||
}
|
||||
} catch (error) {
|
||||
// If any error occurs, assume repository requires authentication
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
return { exists: false, isPrivate: null, hasAccess: false, requiresAuth: true, error: 'Repository not found or requires authentication' };
|
||||
}
|
||||
|
||||
async fetchRepositoryMetadata(owner, repo) {
|
||||
const token = await this.oauth.getToken();
|
||||
const base = (process.env.GITLAB_BASE_URL || 'https://gitlab.com').replace(/\/$/, '');
|
||||
if (token?.access_token) {
|
||||
try {
|
||||
const resp = await fetch(`${base}/api/v4/projects/${encodeURIComponent(`${owner}/${repo}`)}`, { headers: { Authorization: `Bearer ${token.access_token}` } });
|
||||
if (resp.ok) {
|
||||
const d = await resp.json();
|
||||
return { full_name: d.path_with_namespace, visibility: d.visibility === 'public' ? 'public' : 'private', default_branch: d.default_branch || 'main', updated_at: d.last_activity_at };
|
||||
}
|
||||
} catch (_) {}
|
||||
}
|
||||
return { full_name: `${owner}/${repo}`, visibility: 'public', default_branch: 'main', updated_at: new Date().toISOString() };
|
||||
}
|
||||
|
||||
async analyzeCodebase(owner, repo, branch) {
|
||||
// Not using API; actual analysis happens after sync in storage
|
||||
return { total_files: 0, total_size: 0, directories: [], branch };
|
||||
}
|
||||
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
try {
|
||||
if (!callbackUrl) return { created: false, reason: 'missing_callback_url' };
|
||||
const token = await this.oauth.getToken();
|
||||
if (!token?.access_token) return { created: false, reason: 'missing_token' };
|
||||
const base = (process.env.GITLAB_BASE_URL || 'https://gitlab.com').replace(/\/$/, '');
|
||||
const secret = process.env.GITLAB_WEBHOOK_SECRET || '';
|
||||
const resp = await fetch(`${base}/api/v4/projects/${encodeURIComponent(`${owner}/${repo}`)}/hooks`, {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token.access_token}` },
|
||||
body: JSON.stringify({ url: callbackUrl, push_events: true, token: secret || undefined, enable_ssl_verification: true })
|
||||
});
|
||||
if (resp.ok) { const data = await resp.json(); return { created: true, hook_id: data.id }; }
|
||||
return { created: false, reason: `status_${resp.status}` };
|
||||
} catch (e) {
|
||||
return { created: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
const database = require('../../config/database');
|
||||
let storageRecord = null;
|
||||
try {
|
||||
const token = await this.oauth.getToken();
|
||||
let repoPath = null;
|
||||
|
||||
// Always try with authentication first for GitLab, even for public repos
|
||||
// because GitLab often requires auth for git operations
|
||||
if (token?.access_token) {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithAuth(owner, repo, branch, this.host, token.access_token, 'oauth2');
|
||||
} else {
|
||||
// If no token, try without auth first, but if it fails, require authentication
|
||||
try {
|
||||
repoPath = await this.gitRepoService.cloneIfMissingWithHost(owner, repo, branch, this.host);
|
||||
} catch (cloneError) {
|
||||
// If clone fails without auth, this means the repo requires authentication
|
||||
throw new Error(`GitLab repository requires authentication: ${cloneError.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
storageRecord = await this.fileStorageService.initializeRepositoryStorage(repositoryId, repoPath);
|
||||
await this.fileStorageService.processDirectoryStructure(storageRecord.id, repositoryId, repoPath);
|
||||
const finalStorage = await this.fileStorageService.completeRepositoryStorage(storageRecord.id);
|
||||
|
||||
// Get the current HEAD commit SHA and update the repository record
|
||||
try {
|
||||
const headSha = await this.gitRepoService.getHeadSha(repoPath);
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), last_synced_commit_sha = $1, updated_at = NOW() WHERE id = $2',
|
||||
[headSha, repositoryId]
|
||||
);
|
||||
} catch (e) {
|
||||
// If we can't get the SHA, still update the sync time
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), updated_at = NOW() WHERE id = $1',
|
||||
[repositoryId]
|
||||
);
|
||||
}
|
||||
return { success: true, method: 'git', targetDir: repoPath, storage: finalStorage };
|
||||
} catch (e) {
|
||||
if (storageRecord) await this.fileStorageService.markStorageFailed(storageRecord.id, e.message);
|
||||
return { success: false, error: e.message };
|
||||
}
|
||||
}
|
||||
|
||||
async downloadRepositoryWithStorage(owner, repo, branch, repositoryId) {
|
||||
// Not implemented for GitLab without API token; fallback to git
|
||||
return { success: false, error: 'api_download_not_implemented' };
|
||||
}
|
||||
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
const git = await this.syncRepositoryWithGit(owner, repo, branch, repositoryId);
|
||||
if (git.success) return git;
|
||||
return { success: false, error: git.error };
|
||||
}
|
||||
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getDiff(repoPath, fromSha || null, toSha || 'HEAD', { patch: true });
|
||||
}
|
||||
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
const repoPath = this.gitRepoService.getLocalRepoPath(owner, repo, branch);
|
||||
return await this.gitRepoService.getChangedFilesSince(repoPath, sinceSha);
|
||||
}
|
||||
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
return await this.fileStorageService.cleanupRepositoryStorage(repositoryId);
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = GitlabAdapter;
|
||||
|
||||
|
||||
@ -0,0 +1,62 @@
|
||||
// services/vcs-provider.interface.js
|
||||
// Provider-agnostic interface (shape) for VCS adapters.
|
||||
|
||||
class VcsProviderInterface {
|
||||
// Parse a repository URL and return { owner, repo, branch }
|
||||
parseRepoUrl(url) {
|
||||
throw new Error('parseRepoUrl not implemented');
|
||||
}
|
||||
|
||||
// Access check for repository
|
||||
async checkRepositoryAccess(owner, repo) {
|
||||
throw new Error('checkRepositoryAccess not implemented');
|
||||
}
|
||||
|
||||
// Fetch repository metadata
|
||||
async fetchRepositoryMetadata(owner, repo) {
|
||||
throw new Error('fetchRepositoryMetadata not implemented');
|
||||
}
|
||||
|
||||
// Analyze codebase (lightweight tree analysis)
|
||||
async analyzeCodebase(owner, repo, branch) {
|
||||
throw new Error('analyzeCodebase not implemented');
|
||||
}
|
||||
|
||||
// Ensure a webhook exists for this repository
|
||||
async ensureRepositoryWebhook(owner, repo, callbackUrl) {
|
||||
throw new Error('ensureRepositoryWebhook not implemented');
|
||||
}
|
||||
|
||||
// Sync using git; index to storage/DB via file storage service
|
||||
async syncRepositoryWithGit(owner, repo, branch, repositoryId) {
|
||||
throw new Error('syncRepositoryWithGit not implemented');
|
||||
}
|
||||
|
||||
// Fallback: API download + storage
|
||||
async downloadRepositoryWithStorage(owner, repo, branch, repositoryId) {
|
||||
throw new Error('downloadRepositoryWithStorage not implemented');
|
||||
}
|
||||
|
||||
// Try git first then API
|
||||
async syncRepositoryWithFallback(owner, repo, branch, repositoryId) {
|
||||
throw new Error('syncRepositoryWithFallback not implemented');
|
||||
}
|
||||
|
||||
// Get diff and change lists
|
||||
async getRepositoryDiff(owner, repo, branch, fromSha, toSha) {
|
||||
throw new Error('getRepositoryDiff not implemented');
|
||||
}
|
||||
|
||||
async getRepositoryChangesSince(owner, repo, branch, sinceSha) {
|
||||
throw new Error('getRepositoryChangesSince not implemented');
|
||||
}
|
||||
|
||||
// Cleanup local storage/DB artifacts
|
||||
async cleanupRepositoryStorage(repositoryId) {
|
||||
throw new Error('cleanupRepositoryStorage not implemented');
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = VcsProviderInterface;
|
||||
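New providers plug in by extending this interface and registering a factory with the registry. The "azure-devops" key and adapter class below are hypothetical, shown only to illustrate the extension point.

```js
// Sketch: registering a custom adapter (provider key and class are hypothetical).
const VcsProviderInterface = require('./vcs-provider.interface');
const providerRegistry = require('./provider-registry');

class AzureDevOpsAdapter extends VcsProviderInterface {
  parseRepoUrl(url) {
    // minimal placeholder parse; a real adapter would validate host and path
    const [owner, repo = ''] = new URL(url).pathname.replace(/^\//, '').split('/');
    return { owner, repo: repo.replace(/\.git$/, ''), branch: 'main' };
  }
  // ...the remaining interface methods would be implemented the same way;
  // unimplemented ones keep the base-class "not implemented" behavior.
}

providerRegistry.register('azure-devops', () => new AzureDevOpsAdapter());
```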
|
||||
|
||||
456
services/git-integration/src/services/vcs-webhook.service.js
Normal file
@ -0,0 +1,456 @@
|
||||
// services/vcs-webhook.service.js
|
||||
const database = require('../config/database');
|
||||
const providerRegistry = require('./provider-registry');
|
||||
|
||||
class VcsWebhookService {
|
||||
constructor() {
|
||||
this._schemaChecked = false;
|
||||
this._webhookEventColumns = new Map();
|
||||
}
|
||||
|
||||
// Process webhook events for any VCS provider
|
||||
async processWebhookEvent(providerKey, eventType, payload) {
|
||||
console.log(`Processing ${providerKey} webhook event: ${eventType}`);
|
||||
|
||||
try {
|
||||
switch (eventType) {
|
||||
case 'push':
|
||||
await this.handlePushEvent(providerKey, payload);
|
||||
break;
|
||||
case 'pull_request':
|
||||
case 'merge_request':
|
||||
await this.handlePullRequestEvent(providerKey, payload);
|
||||
break;
|
||||
case 'repository':
|
||||
await this.handleRepositoryEvent(providerKey, payload);
|
||||
break;
|
||||
case 'ping':
|
||||
await this.handlePingEvent(providerKey, payload);
|
||||
break;
|
||||
default:
|
||||
console.log(`Unhandled webhook event type: ${eventType} for provider: ${providerKey}`);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`Error processing ${providerKey} webhook event ${eventType}:`, error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
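A sketch of a generic endpoint delegating to processWebhookEvent. The route path and header-to-event mapping are illustrative; each provider names its event header differently (GitHub: x-github-event, GitLab: x-gitlab-event, Gitea: x-gitea-event), and GitLab event names such as "Push Hook" would need normalizing before they match the switch above.

```js
// Sketch: generic webhook endpoint delegating to VcsWebhookService.
const express = require('express');
const VcsWebhookService = require('./vcs-webhook.service');

const router = express.Router();
const webhookService = new VcsWebhookService();

router.post('/api/vcs/:provider/webhook', express.json(), async (req, res) => {
  const provider = req.params.provider;
  const eventType =
    req.headers['x-github-event'] ||
    req.headers['x-gitlab-event'] ||
    req.headers['x-gitea-event'] ||
    'push';

  try {
    await webhookService.processWebhookEvent(provider, eventType, req.body);
    res.status(202).json({ received: true });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```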
|
||||
// Handle push events for any provider
|
||||
async handlePushEvent(providerKey, payload) {
|
||||
const { repository, project, ref, commits, pusher, user } = payload;
|
||||
|
||||
// Build a provider-normalized repo object for extraction
|
||||
let repoForExtraction = repository || {};
|
||||
if (providerKey === 'gitlab') {
|
||||
// GitLab push payload includes repository (limited fields) and project (full namespace)
|
||||
// Prefer project.path_with_namespace when available
|
||||
repoForExtraction = {
|
||||
path_with_namespace: project?.path_with_namespace || repository?.path_with_namespace,
|
||||
full_name: project?.path_with_namespace || repository?.name,
|
||||
default_branch: project?.default_branch || repository?.default_branch
|
||||
};
|
||||
}
|
||||
|
||||
// Extract provider-specific data
|
||||
const repoData = this.extractRepositoryData(providerKey, repoForExtraction);
|
||||
const commitData = this.extractCommitData(providerKey, commits);
|
||||
const branchFromRef = this.extractBranchFromRef(providerKey, ref, repoForExtraction);
|
||||
|
||||
console.log(`Push event received for ${repoData.full_name} on ${branchFromRef}`);
|
||||
console.log(`Pusher: ${pusher?.name || user?.name || 'Unknown'}, Commits: ${commitData.length}`);
|
||||
|
||||
// Persist raw webhook and commit SHAs
|
||||
try {
|
||||
// Find repository_id in our DB if attached
|
||||
const repoLookup = await database.query(
|
||||
'SELECT id FROM github_repositories WHERE owner_name = $1 AND repository_name = $2 ORDER BY created_at DESC LIMIT 1',
|
||||
[repoData.owner, repoData.name]
|
||||
);
|
||||
const repoId = repoLookup.rows[0]?.id || null;
|
||||
|
||||
// Insert into provider-specific webhooks table
|
||||
await this.insertWebhookEvent(providerKey, {
|
||||
delivery_id: payload.delivery_id || payload.object_attributes?.id || null,
|
||||
event_type: 'push',
|
||||
action: null,
|
||||
owner_name: repoData.owner,
|
||||
repository_name: repoData.name,
|
||||
repository_id: repoId,
|
||||
ref: ref,
|
||||
before_sha: payload.before || null,
|
||||
after_sha: payload.after || null,
|
||||
commit_count: commitData.length,
|
||||
payload: JSON.stringify(payload)
|
||||
});
|
||||
|
||||
if (repoId) {
|
||||
// Insert into repository_commit_events
|
||||
await database.query(
|
||||
`INSERT INTO repository_commit_events (repository_id, ref, before_sha, after_sha, commit_count)
|
||||
VALUES ($1, $2, $3, $4, $5)`,
|
||||
[repoId, ref, payload.before || null, payload.after || null, commitData.length]
|
||||
);
|
||||
|
||||
// Persist per-commit details and file paths
|
||||
if (commitData.length > 0) {
|
||||
for (const commit of commitData) {
|
||||
try {
|
||||
const commitInsert = await database.query(
|
||||
`INSERT INTO repository_commit_details (repository_id, commit_sha, author_name, author_email, message, url)
|
||||
VALUES ($1, $2, $3, $4, $5, $6)
|
||||
ON CONFLICT (repository_id, commit_sha) DO UPDATE SET
|
||||
author_name = EXCLUDED.author_name,
|
||||
author_email = EXCLUDED.author_email,
|
||||
message = EXCLUDED.message,
|
||||
url = EXCLUDED.url
|
||||
RETURNING id`,
|
||||
[
|
||||
repoId,
|
||||
commit.id,
|
||||
commit.author?.name || null,
|
||||
commit.author?.email || null,
|
||||
commit.message || null,
|
||||
commit.url || null
|
||||
]
|
||||
);
|
||||
|
||||
const commitId = commitInsert.rows[0].id;
|
||||
|
||||
// Insert file changes
|
||||
const addFiles = (paths = [], changeType) => paths.forEach(async (p) => {
|
||||
try {
|
||||
await database.query(
|
||||
`INSERT INTO repository_commit_files (commit_id, change_type, file_path)
|
||||
VALUES ($1, $2, $3)`,
|
||||
[commitId, changeType, p]
|
||||
);
|
||||
} catch (_) {}
|
||||
});
|
||||
|
||||
addFiles(commit.added || [], 'added');
|
||||
addFiles(commit.modified || [], 'modified');
|
||||
addFiles(commit.removed || [], 'removed');
|
||||
} catch (commitErr) {
|
||||
console.warn('Failed to persist commit details:', commitErr.message);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Kick off background re-sync
|
||||
setImmediate(async () => {
|
||||
try {
|
||||
const provider = providerRegistry.resolve(providerKey);
|
||||
|
||||
// Mark syncing
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['syncing', repoId]
|
||||
);
|
||||
|
||||
// Clean existing storage then git-sync and re-index
|
||||
await provider.cleanupRepositoryStorage(repoId);
|
||||
const downloadResult = await provider.syncRepositoryWithFallback(
|
||||
repoData.owner,
|
||||
repoData.name,
|
||||
branchFromRef,
|
||||
repoId
|
||||
);
|
||||
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, last_synced_at = NOW(), updated_at = NOW() WHERE id = $2',
|
||||
[downloadResult.success ? 'synced' : 'error', repoId]
|
||||
);
|
||||
} catch (syncErr) {
|
||||
console.warn('Auto-sync failed:', syncErr.message);
|
||||
try {
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['error', repoId]
|
||||
);
|
||||
} catch (_) {}
|
||||
}
|
||||
});
|
||||
}
|
||||
} catch (e) {
|
||||
console.warn('Failed to persist push webhook details:', e.message);
|
||||
}
|
||||
}
|
||||
|
||||
// Extract repository data based on provider
|
||||
extractRepositoryData(providerKey, repository) {
|
||||
switch (providerKey) {
|
||||
case 'github':
|
||||
return {
|
||||
owner: repository.owner.login,
|
||||
name: repository.name,
|
||||
full_name: repository.full_name
|
||||
};
|
||||
case 'gitlab':
|
||||
{
|
||||
const ns = repository?.path_with_namespace || repository?.full_name || '';
|
||||
const parts = typeof ns === 'string' ? ns.split('/') : [];
|
||||
return {
|
||||
owner: parts[0] || null,
|
||||
name: parts[1] || repository?.name || null,
|
||||
full_name: ns || [parts[0], parts[1]].filter(Boolean).join('/')
|
||||
};
|
||||
}
|
||||
case 'bitbucket':
|
||||
return {
|
||||
owner: repository.full_name.split('/')[0],
|
||||
name: repository.full_name.split('/')[1],
|
||||
full_name: repository.full_name
|
||||
};
|
||||
case 'gitea':
|
||||
return {
|
||||
owner: repository.full_name.split('/')[0],
|
||||
name: repository.full_name.split('/')[1],
|
||||
full_name: repository.full_name
|
||||
};
|
||||
default:
|
||||
return { owner: 'unknown', name: 'unknown', full_name: 'unknown/unknown' };
|
||||
}
|
||||
}
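To make the provider differences concrete, here is a small sketch with abbreviated, made-up payload fragments (field values are illustrative, and it assumes `VcsWebhookService` is required into scope) showing what `extractRepositoryData` normalizes them to.

``` js
const service = new VcsWebhookService(); // assumes the class is exported/required here

// Hypothetical fragments for the same repository on two providers.
const githubRepo = {
  name: 'demo-app',
  full_name: 'acme/demo-app',
  owner: { login: 'acme' }
};
const gitlabPush = {
  // GitLab push payloads carry the full namespace on `project`, which
  // handlePushEvent copies onto the object passed in here.
  path_with_namespace: 'acme/demo-app',
  full_name: 'acme/demo-app',
  default_branch: 'main'
};

console.log(service.extractRepositoryData('github', githubRepo));
// -> { owner: 'acme', name: 'demo-app', full_name: 'acme/demo-app' }
console.log(service.extractRepositoryData('gitlab', gitlabPush));
// -> { owner: 'acme', name: 'demo-app', full_name: 'acme/demo-app' }
```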
|
||||
|
||||
// Extract commit data based on provider
|
||||
extractCommitData(providerKey, commits) {
|
||||
if (!Array.isArray(commits)) return [];
|
||||
|
||||
switch (providerKey) {
|
||||
case 'github':
|
||||
return commits.map(commit => ({
|
||||
id: commit.id,
|
||||
author: commit.author,
|
||||
message: commit.message,
|
||||
url: commit.url,
|
||||
added: commit.added || [],
|
||||
modified: commit.modified || [],
|
||||
removed: commit.removed || []
|
||||
}));
|
||||
case 'gitlab':
|
||||
return commits.map(commit => ({
|
||||
id: commit.id,
|
||||
author: {
|
||||
name: commit.author?.name,
|
||||
email: commit.author?.email
|
||||
},
|
||||
message: commit.message,
|
||||
url: commit.url,
|
||||
added: commit.added || [],
|
||||
modified: commit.modified || [],
|
||||
removed: commit.removed || []
|
||||
}));
|
||||
case 'bitbucket':
|
||||
return commits.map(commit => ({
|
||||
id: commit.hash,
|
||||
author: {
|
||||
name: commit.author?.user?.display_name,
|
||||
email: commit.author?.user?.email_address
|
||||
},
|
||||
message: commit.message,
|
||||
url: commit.links?.html?.href,
|
||||
added: commit.added || [],
|
||||
modified: commit.modified || [],
|
||||
removed: commit.removed || []
|
||||
}));
|
||||
case 'gitea':
|
||||
return commits.map(commit => ({
|
||||
id: commit.id,
|
||||
author: {
|
||||
name: commit.author?.name,
|
||||
email: commit.author?.email
|
||||
},
|
||||
message: commit.message,
|
||||
url: commit.url,
|
||||
added: commit.added || [],
|
||||
modified: commit.modified || [],
|
||||
removed: commit.removed || []
|
||||
}));
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}
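One caveat worth noting: Bitbucket Cloud's push webhook nests commits under `push.changes[].commits` rather than a top-level `commits` array, and those commit objects do not carry `added`/`modified`/`removed` file lists (file changes come from the separate diffstat API). If Bitbucket deliveries were routed through `handlePushEvent` unchanged, this extractor would likely receive `undefined`. A hedged pre-normalization sketch (the adapter itself is an assumption, not part of this commit):

``` js
// Hypothetical adapter: flatten a Bitbucket Cloud push payload into the
// { ref, commits, before, after } shape handlePushEvent destructures.
function normalizeBitbucketPush(payload) {
  const change = payload.push?.changes?.[0] || {};
  return {
    ...payload,
    ref: change.new?.name ? `refs/heads/${change.new.name}` : null,
    commits: change.commits || [],
    before: change.old?.target?.hash || null,
    after: change.new?.target?.hash || null
  };
}
```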
|
||||
|
||||
// Extract branch from ref based on provider
|
||||
extractBranchFromRef(providerKey, ref, repository) {
|
||||
if (!ref) return repository?.default_branch || 'main';
|
||||
|
||||
switch (providerKey) {
|
||||
case 'github':
|
||||
case 'gitlab':
|
||||
case 'gitea':
|
||||
return ref.startsWith('refs/heads/') ? ref.replace('refs/heads/', '') : ref;
|
||||
case 'bitbucket':
|
||||
return ref.startsWith('refs/heads/') ? ref.replace('refs/heads/', '') : ref;
|
||||
default:
|
||||
return 'main';
|
||||
}
|
||||
}
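For reference, the ref-to-branch mapping above behaves as follows; the values are illustrative and the snippet assumes the class is exported from this module.

``` js
const assert = require('node:assert');
const svc = new VcsWebhookService(); // assumption: class is in scope

assert.equal(svc.extractBranchFromRef('github', 'refs/heads/feature/login', {}), 'feature/login');
assert.equal(svc.extractBranchFromRef('bitbucket', 'main', {}), 'main'); // already a bare branch name
assert.equal(svc.extractBranchFromRef('gitea', null, { default_branch: 'develop' }), 'develop');
assert.equal(svc.extractBranchFromRef('unknown-provider', 'refs/heads/main', {}), 'main'); // default case
```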
|
||||
|
||||
// Insert webhook event into provider-specific table
|
||||
async insertWebhookEvent(providerKey, eventData) {
|
||||
const tableName = `${providerKey}_webhooks`;
|
||||
const query = `
|
||||
INSERT INTO ${tableName} (delivery_id, event_type, action, owner_name, repository_name, repository_id, ref, before_sha, after_sha, commit_count, payload)
|
||||
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
|
||||
`;
|
||||
|
||||
await database.query(query, [
|
||||
eventData.delivery_id,
|
||||
eventData.event_type,
|
||||
eventData.action,
|
||||
eventData.owner_name,
|
||||
eventData.repository_name,
|
||||
eventData.repository_id,
|
||||
eventData.ref,
|
||||
eventData.before_sha,
|
||||
eventData.after_sha,
|
||||
eventData.commit_count,
|
||||
eventData.payload
|
||||
]);
|
||||
}
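Because the table name is interpolated into the SQL string (the placeholders only cover values), this relies on `providerKey` being one of a small known set. A defensive sketch, assuming the four provider tables follow the `<provider>_webhooks` convention this code already uses (only `github_webhooks` is confirmed elsewhere in this commit):

``` js
// Illustrative guard: only allow table names we know about before interpolating them into SQL.
const WEBHOOK_TABLES = {
  github: 'github_webhooks',
  gitlab: 'gitlab_webhooks',
  bitbucket: 'bitbucket_webhooks',
  gitea: 'gitea_webhooks'
};

function webhookTableFor(providerKey) {
  const table = WEBHOOK_TABLES[providerKey];
  if (!table) {
    throw new Error(`Unknown provider: ${providerKey}`);
  }
  return table;
}
```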
|
||||
|
||||
// Handle pull/merge request events
|
||||
async handlePullRequestEvent(providerKey, payload) {
|
||||
const { action, pull_request, merge_request } = payload;
|
||||
const pr = pull_request || merge_request;
|
||||
const repository = payload.repository;
|
||||
|
||||
console.log(`Pull/Merge request ${action} for ${repository?.full_name || repository?.path_with_namespace}: #${pr?.number || pr?.iid}`);
|
||||
|
||||
// Log PR events for potential future integration
|
||||
await this.logWebhookEvent(providerKey, 'pull_request', action, repository?.full_name || repository?.path_with_namespace, {
|
||||
pr_number: pr?.number || pr?.iid,
|
||||
pr_title: pr?.title,
|
||||
pr_state: pr?.state,
|
||||
pr_url: pr?.html_url || pr?.web_url
|
||||
});
|
||||
}
|
||||
|
||||
// Handle repository events
|
||||
async handleRepositoryEvent(providerKey, payload) {
|
||||
const { action, repository } = payload;
|
||||
|
||||
console.log(`Repository ${action} event for ${repository?.full_name || repository?.path_with_namespace}`);
|
||||
|
||||
if (action === 'deleted') {
|
||||
// Handle repository deletion
|
||||
const repoData = this.extractRepositoryData(providerKey, repository);
|
||||
const query = `
|
||||
SELECT gr.id, gr.template_id
|
||||
FROM github_repositories gr
|
||||
WHERE gr.owner_name = $1 AND gr.repository_name = $2
|
||||
`;
|
||||
|
||||
const result = await database.query(query, [repoData.owner, repoData.name]);
|
||||
|
||||
if (result.rows.length > 0) {
|
||||
console.log(`Repository ${repoData.full_name} was deleted, marking as inactive`);
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['deleted', result.rows[0].id]
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Handle ping events
|
||||
async handlePingEvent(providerKey, payload) {
|
||||
console.log(`${providerKey} webhook ping received - webhook is working correctly`);
|
||||
console.log(`Repository: ${payload.repository?.full_name || payload.repository?.path_with_namespace || 'Unknown'}`);
|
||||
}
|
||||
|
||||
// Log webhook events for debugging and analytics
|
||||
async logWebhookEvent(providerKey, eventType, action, repositoryFullName, metadata = {}, deliveryId = null, fullPayload = null) {
|
||||
try {
|
||||
await this._ensureWebhookEventsSchemaCached();
|
||||
|
||||
// Build a flexible INSERT based on existing columns
|
||||
const columns = [];
|
||||
const placeholders = [];
|
||||
const values = [];
|
||||
let i = 1;
|
||||
|
||||
columns.push('event_type');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(eventType);
|
||||
|
||||
if (this._webhookEventColumns.has('action')) {
|
||||
columns.push('action');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(action || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('repository_full_name')) {
|
||||
columns.push('repository_full_name');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(repositoryFullName || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('delivery_id')) {
|
||||
columns.push('delivery_id');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(deliveryId || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('metadata')) {
|
||||
columns.push('metadata');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(JSON.stringify({ ...metadata, provider: providerKey }));
|
||||
}
|
||||
if (this._webhookEventColumns.has('event_payload')) {
|
||||
columns.push('event_payload');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(JSON.stringify(fullPayload || {}));
|
||||
}
|
||||
if (this._webhookEventColumns.has('received_at')) {
|
||||
columns.push('received_at');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(new Date());
|
||||
}
|
||||
if (this._webhookEventColumns.has('processing_status')) {
|
||||
columns.push('processing_status');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push('pending');
|
||||
}
|
||||
|
||||
const query = `INSERT INTO webhook_events (${columns.join(', ')}) VALUES (${placeholders.join(', ')})`;
|
||||
await database.query(query, values);
|
||||
} catch (error) {
|
||||
console.warn('Failed to log webhook event:', error.message);
|
||||
}
|
||||
}
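For illustration, if the `webhook_events` table had only `event_type`, `action`, `repository_full_name`, and `received_at` columns (a hypothetical column set), the builder above would assemble an INSERT like this:

``` js
// What the dynamic builder produces for a hypothetical 4-column webhook_events table:
const query = 'INSERT INTO webhook_events (event_type, action, repository_full_name, received_at) ' +
              'VALUES ($1, $2, $3, $4)';
const values = ['pull_request', 'opened', 'acme/demo-app', new Date()];
// database.query(query, values) inserts one row; columns the table lacks are simply skipped.
```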
|
||||
|
||||
async _ensureWebhookEventsSchemaCached() {
|
||||
if (this._schemaChecked) return;
|
||||
try {
|
||||
const result = await database.query(
|
||||
"SELECT column_name, is_nullable FROM information_schema.columns WHERE table_schema='public' AND table_name='webhook_events'"
|
||||
);
|
||||
for (const row of result.rows) {
|
||||
this._webhookEventColumns.set(row.column_name, row.is_nullable);
|
||||
}
|
||||
} catch (e) {
|
||||
console.warn('Could not introspect webhook_events schema:', e.message);
|
||||
} finally {
|
||||
this._schemaChecked = true;
|
||||
}
|
||||
}
|
||||
|
||||
// Get recent webhook events
|
||||
async getRecentWebhookEvents(limit = 50) {
|
||||
try {
|
||||
const query = `
|
||||
SELECT * FROM webhook_events
|
||||
ORDER BY received_at DESC
|
||||
LIMIT $1
|
||||
`;
|
||||
|
||||
const result = await database.query(query, [limit]);
|
||||
return result.rows;
|
||||
} catch (error) {
|
||||
console.error('Failed to get webhook events:', error.message);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = VcsWebhookService;
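A route could expose the recent-event log for debugging. The path and wiring below are assumptions for illustration, not part of this commit.

``` js
// Hypothetical Express wiring for the read-only webhook event log.
const express = require('express');
const VcsWebhookService = require('./vcs-webhook.service'); // path assumed

const router = express.Router();
const vcsWebhookService = new VcsWebhookService();

router.get('/webhooks/events', async (req, res) => {
  const limit = Math.min(parseInt(req.query.limit, 10) || 50, 200);
  const events = await vcsWebhookService.getRecentWebhookEvents(limit);
  res.json({ count: events.length, events });
});

module.exports = router;
```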
|
||||
361
services/git-integration/src/services/webhook.service.js
Normal file
@ -0,0 +1,361 @@
|
||||
// services/webhook.service.js
|
||||
const crypto = require('crypto');
|
||||
const database = require('../config/database');
|
||||
const GitHubIntegrationService = require('./github-integration.service');
|
||||
|
||||
class WebhookService {
|
||||
constructor() {
|
||||
this.webhookSecret = process.env.GITHUB_WEBHOOK_SECRET || 'default-webhook-secret';
|
||||
this._schemaChecked = false;
|
||||
this._webhookEventColumns = new Map();
|
||||
this.githubService = new GitHubIntegrationService();
|
||||
}
|
||||
|
||||
// Verify GitHub webhook signature
|
||||
verifySignature(payload, signature) {
|
||||
if (!signature) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const expectedSignature = crypto
|
||||
.createHmac('sha256', this.webhookSecret)
|
||||
.update(payload)
|
||||
.digest('hex');
|
||||
|
||||
const providedSignature = signature.replace('sha256=', '');
|
||||
|
||||
return crypto.timingSafeEqual(
|
||||
Buffer.from(expectedSignature, 'hex'),
|
||||
Buffer.from(providedSignature, 'hex')
|
||||
);
|
||||
}
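The check above mirrors GitHub's `X-Hub-Signature-256` scheme: GitHub sends `sha256=` followed by the hex HMAC of the raw request body. A small self-contained sketch of generating and verifying such a signature (secret and payload are made up, and it assumes `GITHUB_WEBHOOK_SECRET` is unset so the default applies). Note that `crypto.timingSafeEqual` throws when the two buffers differ in length, so callers may want to guard against malformed signature headers.

``` js
const crypto = require('crypto');

const secret = 'default-webhook-secret'; // assumption: GITHUB_WEBHOOK_SECRET not set
const rawBody = JSON.stringify({ zen: 'Keep it logically awesome.' }); // illustrative payload

// This is how GitHub computes the X-Hub-Signature-256 header for a delivery.
const header = 'sha256=' + crypto.createHmac('sha256', secret).update(rawBody).digest('hex');

const service = new WebhookService(); // assumes the class is in scope
console.log(service.verifySignature(rawBody, header));                       // -> true
console.log(service.verifySignature(rawBody, 'sha256=' + '0'.repeat(64)));   // -> false
```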
|
||||
|
||||
// Process GitHub webhook events
|
||||
async processWebhookEvent(eventType, payload) {
|
||||
console.log(`Processing GitHub webhook event: ${eventType}`);
|
||||
|
||||
try {
|
||||
switch (eventType) {
|
||||
case 'push':
|
||||
await this.handlePushEvent(payload);
|
||||
break;
|
||||
case 'pull_request':
|
||||
await this.handlePullRequestEvent(payload);
|
||||
break;
|
||||
case 'repository':
|
||||
await this.handleRepositoryEvent(payload);
|
||||
break;
|
||||
case 'ping':
|
||||
await this.handlePingEvent(payload);
|
||||
break;
|
||||
default:
|
||||
console.log(`Unhandled webhook event type: ${eventType}`);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`Error processing webhook event ${eventType}:`, error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Handle push events
|
||||
async handlePushEvent(payload) {
|
||||
const { repository, ref, commits, pusher } = payload;
|
||||
|
||||
console.log(`Push event received for ${repository.full_name} on ${ref}`);
|
||||
console.log(`Pusher: ${pusher.name}, Commits: ${commits.length}`);
|
||||
|
||||
// Persist raw webhook and commit SHAs
|
||||
try {
|
||||
const repoOwner = repository.owner.login;
|
||||
const repoName = repository.name;
|
||||
const branchFromRef = (ref || '').startsWith('refs/heads/') ? ref.replace('refs/heads/', '') : (repository.default_branch || 'main');
|
||||
|
||||
// Find repository_id in our DB if attached
|
||||
const repoLookup = await database.query(
|
||||
'SELECT id FROM github_repositories WHERE owner_name = $1 AND repository_name = $2 ORDER BY created_at DESC LIMIT 1',
|
||||
[repoOwner, repoName]
|
||||
);
|
||||
const repoId = repoLookup.rows[0]?.id || null;
|
||||
|
||||
// Insert into durable github_webhooks table
|
||||
await database.query(
|
||||
`INSERT INTO github_webhooks (delivery_id, event_type, action, owner_name, repository_name, repository_id, ref, before_sha, after_sha, commit_count, payload)
|
||||
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)`,
|
||||
[
|
||||
payload.delivery_id || null, // may be null if not provided by route; route passes header separately
|
||||
'push',
|
||||
null,
|
||||
repoOwner,
|
||||
repoName,
|
||||
repoId,
|
||||
ref,
|
||||
payload.before || null,
|
||||
payload.after || null,
|
||||
Array.isArray(commits) ? commits.length : 0,
|
||||
JSON.stringify(payload)
|
||||
]
|
||||
);
|
||||
|
||||
if (repoId) {
|
||||
await database.query(
|
||||
`INSERT INTO repository_commit_events (repository_id, ref, before_sha, after_sha, commit_count)
|
||||
VALUES ($1, $2, $3, $4, $5)`,
|
||||
[repoId, ref, payload.before || null, payload.after || null, Array.isArray(commits) ? commits.length : 0]
|
||||
);
|
||||
|
||||
// Persist per-commit details and file paths (added/modified/removed)
|
||||
if (Array.isArray(commits) && commits.length > 0) {
|
||||
for (const commit of commits) {
|
||||
try {
|
||||
const commitInsert = await database.query(
|
||||
`INSERT INTO repository_commit_details (repository_id, commit_sha, author_name, author_email, message, url)
|
||||
VALUES ($1, $2, $3, $4, $5, $6)
|
||||
ON CONFLICT (repository_id, commit_sha) DO UPDATE SET
|
||||
author_name = EXCLUDED.author_name,
|
||||
author_email = EXCLUDED.author_email,
|
||||
message = EXCLUDED.message,
|
||||
url = EXCLUDED.url
|
||||
RETURNING id`,
|
||||
[
|
||||
repoId,
|
||||
commit.id,
|
||||
commit.author?.name || null,
|
||||
commit.author?.email || null,
|
||||
commit.message || null,
|
||||
commit.url || null
|
||||
]
|
||||
);
|
||||
|
||||
const commitId = commitInsert.rows[0].id;
|
||||
|
||||
const addFiles = (paths = [], changeType) => paths.forEach(async (p) => {
|
||||
try {
|
||||
await database.query(
|
||||
`INSERT INTO repository_commit_files (commit_id, change_type, file_path)
|
||||
VALUES ($1, $2, $3)`,
|
||||
[commitId, changeType, p]
|
||||
);
|
||||
} catch (_) {}
|
||||
});
|
||||
|
||||
addFiles(commit.added || [], 'added');
|
||||
addFiles(commit.modified || [], 'modified');
|
||||
addFiles(commit.removed || [], 'removed');
|
||||
} catch (commitErr) {
|
||||
console.warn('Failed to persist commit details:', commitErr.message);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Kick off background re-sync to refresh local files and DB (git-based)
|
||||
setImmediate(async () => {
|
||||
try {
|
||||
// Mark syncing
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['syncing', repoId]
|
||||
);
|
||||
|
||||
// Clean existing storage then git-sync and re-index
|
||||
await this.githubService.cleanupRepositoryStorage(repoId);
|
||||
const downloadResult = await this.githubService.syncRepositoryWithFallback(
|
||||
repoOwner,
|
||||
repoName,
|
||||
branchFromRef,
|
||||
repoId
|
||||
);
|
||||
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, last_synced_at = NOW(), updated_at = NOW() WHERE id = $2',
|
||||
[downloadResult.success ? 'synced' : 'error', repoId]
|
||||
);
|
||||
} catch (syncErr) {
|
||||
console.warn('Auto-sync failed:', syncErr.message);
|
||||
try {
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['error', repoId]
|
||||
);
|
||||
} catch (_) {}
|
||||
}
|
||||
});
|
||||
}
|
||||
} catch (e) {
|
||||
console.warn('Failed to persist push webhook details:', e.message);
|
||||
}
|
||||
|
||||
// Find repositories in our database that match this GitHub repository
|
||||
const query = `
|
||||
SELECT gr.*, rs.storage_status, rs.local_path
|
||||
FROM github_repositories gr
|
||||
LEFT JOIN repository_storage rs ON gr.id = rs.repository_id
|
||||
WHERE gr.owner_name = $1 AND gr.repository_name = $2
|
||||
`;
|
||||
|
||||
const result = await database.query(query, [repository.owner.login, repository.name]);
|
||||
|
||||
if (result.rows.length > 0) {
|
||||
console.log(`Found ${result.rows.length} matching repositories in database`);
|
||||
|
||||
// Update last synced timestamp
|
||||
for (const repo of result.rows) {
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET last_synced_at = NOW(), updated_at = NOW() WHERE id = $1',
|
||||
[repo.id]
|
||||
);
|
||||
|
||||
// If repository is synced, we could trigger a re-sync here
|
||||
if (repo.storage_status === 'completed') {
|
||||
console.log(`Repository ${repo.repository_name} is synced, could trigger re-sync`);
|
||||
// You could add logic here to trigger a background sync
|
||||
}
|
||||
}
|
||||
} else {
|
||||
console.log(`No matching repositories found for ${repository.full_name}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Handle pull request events
|
||||
async handlePullRequestEvent(payload) {
|
||||
const { action, pull_request, repository } = payload;
|
||||
|
||||
console.log(`Pull request ${action} for ${repository.full_name}: #${pull_request.number}`);
|
||||
console.log(`PR Title: ${pull_request.title}`);
|
||||
console.log(`PR State: ${pull_request.state}`);
|
||||
|
||||
// Log PR events for potential future integration
|
||||
await this.logWebhookEvent('pull_request', action, repository.full_name, {
|
||||
pr_number: pull_request.number,
|
||||
pr_title: pull_request.title,
|
||||
pr_state: pull_request.state,
|
||||
pr_url: pull_request.html_url
|
||||
});
|
||||
}
|
||||
|
||||
// Handle repository events
|
||||
async handleRepositoryEvent(payload) {
|
||||
const { action, repository } = payload;
|
||||
|
||||
console.log(`Repository ${action} event for ${repository.full_name}`);
|
||||
|
||||
if (action === 'deleted') {
|
||||
// Handle repository deletion
|
||||
const query = `
|
||||
SELECT gr.id, gr.template_id
|
||||
FROM github_repositories gr
|
||||
WHERE gr.owner_name = $1 AND gr.repository_name = $2
|
||||
`;
|
||||
|
||||
const result = await database.query(query, [repository.owner.login, repository.name]);
|
||||
|
||||
if (result.rows.length > 0) {
|
||||
console.log(`Repository ${repository.full_name} was deleted, marking as inactive`);
|
||||
await database.query(
|
||||
'UPDATE github_repositories SET sync_status = $1, updated_at = NOW() WHERE id = $2',
|
||||
['deleted', result.rows[0].id]
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Handle ping events (GitHub webhook test)
|
||||
async handlePingEvent(payload) {
|
||||
console.log('GitHub webhook ping received - webhook is working correctly');
|
||||
console.log(`Repository: ${payload.repository?.full_name || 'Unknown'}`);
|
||||
console.log(`Zen: ${payload.zen || 'No zen message'}`);
|
||||
}
|
||||
|
||||
// Log webhook events for debugging and analytics
|
||||
async _ensureWebhookEventsSchemaCached() {
|
||||
if (this._schemaChecked) return;
|
||||
try {
|
||||
const result = await database.query(
|
||||
"SELECT column_name, is_nullable FROM information_schema.columns WHERE table_schema='public' AND table_name='webhook_events'"
|
||||
);
|
||||
for (const row of result.rows) {
|
||||
this._webhookEventColumns.set(row.column_name, row.is_nullable);
|
||||
}
|
||||
} catch (e) {
|
||||
// If schema check fails, proceed with best-effort insert
|
||||
console.warn('Could not introspect webhook_events schema:', e.message);
|
||||
} finally {
|
||||
this._schemaChecked = true;
|
||||
}
|
||||
}
|
||||
|
||||
async logWebhookEvent(eventType, action, repositoryFullName, metadata = {}, deliveryId = null, fullPayload = null) {
|
||||
try {
|
||||
await this._ensureWebhookEventsSchemaCached();
|
||||
|
||||
// Build a flexible INSERT based on existing columns
|
||||
const columns = [];
|
||||
const placeholders = [];
|
||||
const values = [];
|
||||
let i = 1;
|
||||
|
||||
columns.push('event_type');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(eventType);
|
||||
|
||||
if (this._webhookEventColumns.has('action')) {
|
||||
columns.push('action');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(action || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('repository_full_name')) {
|
||||
columns.push('repository_full_name');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(repositoryFullName || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('delivery_id')) {
|
||||
columns.push('delivery_id');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(deliveryId || null);
|
||||
}
|
||||
if (this._webhookEventColumns.has('metadata')) {
|
||||
columns.push('metadata');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(JSON.stringify(metadata || {}));
|
||||
}
|
||||
if (this._webhookEventColumns.has('event_payload')) {
|
||||
columns.push('event_payload');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(JSON.stringify(fullPayload || {}));
|
||||
}
|
||||
if (this._webhookEventColumns.has('received_at')) {
|
||||
columns.push('received_at');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push(new Date());
|
||||
}
|
||||
if (this._webhookEventColumns.has('processing_status')) {
|
||||
columns.push('processing_status');
|
||||
placeholders.push(`$${i++}`);
|
||||
values.push('pending');
|
||||
}
|
||||
|
||||
const query = `INSERT INTO webhook_events (${columns.join(', ')}) VALUES (${placeholders.join(', ')})`;
|
||||
await database.query(query, values);
|
||||
} catch (error) {
|
||||
console.warn('Failed to log webhook event:', error.message);
|
||||
}
|
||||
}
|
||||
|
||||
// Get recent webhook events
|
||||
async getRecentWebhookEvents(limit = 50) {
|
||||
try {
|
||||
const query = `
|
||||
SELECT * FROM webhook_events
|
||||
ORDER BY received_at DESC
|
||||
LIMIT $1
|
||||
`;
|
||||
|
||||
const result = await database.query(query, [limit]);
|
||||
return result.rows;
|
||||
} catch (error) {
|
||||
console.error('Failed to get webhook events:', error.message);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = WebhookService;
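Signature verification only works when it runs over the exact raw request body, so the route feeding this service typically captures the unparsed payload. A hedged wiring sketch follows; the middleware choice is an assumption, while the path matches the URL used by test-webhook.js below.

``` js
const express = require('express');
const WebhookService = require('./webhook.service'); // path assumed relative to a routes file

const router = express.Router();
const webhookService = new WebhookService();

// express.raw keeps the body as a Buffer so the HMAC is computed over exactly what GitHub signed.
router.post('/api/github/webhook', express.raw({ type: 'application/json' }), async (req, res) => {
  const signature = req.headers['x-hub-signature-256'];
  if (!webhookService.verifySignature(req.body, signature)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  const eventType = req.headers['x-github-event'];
  await webhookService.processWebhookEvent(eventType, JSON.parse(req.body.toString('utf8')));
  res.status(200).json({ received: true });
});

module.exports = router;
```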
|
||||
70
services/git-integration/test-webhook.js
Normal file
@ -0,0 +1,70 @@
|
||||
// test-webhook.js - Simple test script for webhook endpoint
|
||||
const fetch = require('node-fetch');
|
||||
|
||||
const WEBHOOK_URL = 'http://localhost:8012/api/github/webhook';
|
||||
|
||||
// Test webhook with a sample GitHub push event
|
||||
const testPayload = {
|
||||
ref: 'refs/heads/main',
|
||||
before: 'abc123',
|
||||
after: 'def456',
|
||||
repository: {
|
||||
id: 123456,
|
||||
name: 'test-repo',
|
||||
full_name: 'testuser/test-repo',
|
||||
owner: {
|
||||
login: 'testuser',
|
||||
id: 789
|
||||
}
|
||||
},
|
||||
pusher: {
|
||||
name: 'testuser',
|
||||
email: 'test@example.com'
|
||||
},
|
||||
commits: [
|
||||
{
|
||||
id: 'def456',
|
||||
message: 'Test commit',
|
||||
author: {
|
||||
name: 'Test User',
|
||||
email: 'test@example.com'
|
||||
}
|
||||
}
|
||||
]
|
||||
};
|
||||
|
||||
async function testWebhook() {
|
||||
try {
|
||||
console.log('🧪 Testing webhook endpoint...');
|
||||
console.log(`📡 Sending POST request to: ${WEBHOOK_URL}`);
|
||||
|
||||
const response = await fetch(WEBHOOK_URL, {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'X-GitHub-Event': 'push',
|
||||
'X-GitHub-Delivery': 'test-delivery-123'
|
||||
},
|
||||
body: JSON.stringify(testPayload)
|
||||
});
|
||||
|
||||
const result = await response.json();
|
||||
|
||||
console.log('📊 Response Status:', response.status);
|
||||
console.log('📋 Response Body:', JSON.stringify(result, null, 2));
|
||||
|
||||
if (response.ok) {
|
||||
console.log('✅ Webhook test successful!');
|
||||
} else {
|
||||
console.log('❌ Webhook test failed!');
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
console.error('❌ Error testing webhook:', error.message);
|
||||
}
|
||||
}
|
||||
|
||||
// Run the test
|
||||
testWebhook();
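If the endpoint enforces signature verification, the unsigned request above will be rejected. A small addition, assuming the server runs with the default `GITHUB_WEBHOOK_SECRET` shown in webhook.service.js:

``` js
// Optional: sign the test payload the same way GitHub would.
const crypto = require('crypto');

const secret = process.env.GITHUB_WEBHOOK_SECRET || 'default-webhook-secret';
const body = JSON.stringify(testPayload);
const signature = 'sha256=' + crypto.createHmac('sha256', secret).update(body).digest('hex');

// Then send `body` as-is and add the header
//   'X-Hub-Signature-256': signature
// so the signed bytes match the bytes actually posted.
```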
|
||||
|
||||
|
||||
@ -1,37 +1,36 @@
|
||||
FROM python:3.12-slim
|
||||
|
||||
# Install system dependencies needed for building Python packages
|
||||
RUN apt-get update && apt-get install -y \
|
||||
curl \
|
||||
gcc \
|
||||
g++ \
|
||||
make \
|
||||
libc6-dev \
|
||||
libffi-dev \
|
||||
libssl-dev \
|
||||
build-essential \
|
||||
pkg-config \
|
||||
python3-dev \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
# Use official Python runtime as a parent image
|
||||
FROM python:3.9-slim
|
||||
|
||||
# Set the working directory in the container
|
||||
WORKDIR /app
|
||||
|
||||
# Upgrade pip and install build tools
|
||||
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
|
||||
# Set environment variables
|
||||
ENV PYTHONDONTWRITEBYTECODE 1
|
||||
ENV PYTHONUNBUFFERED 1
|
||||
|
||||
# Copy requirements and install Python dependencies
|
||||
# Install system dependencies including PostgreSQL client and netcat
|
||||
RUN apt-get update && apt-get install -y --no-install-recommends \
|
||||
build-essential \
|
||||
libpq-dev \
|
||||
postgresql-client \
|
||||
curl \
|
||||
netcat-openbsd \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Install Python dependencies
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
# Copy the current directory contents into the container at /app
|
||||
COPY . .
|
||||
|
||||
# Expose port
|
||||
# Copy and set up startup scripts
|
||||
COPY start.sh /app/start.sh
|
||||
COPY docker-start.sh /app/docker-start.sh
|
||||
RUN chmod +x /app/start.sh /app/docker-start.sh
|
||||
|
||||
# Expose the port the app runs on
|
||||
EXPOSE 8002
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
|
||||
CMD curl -f http://localhost:8002/health || exit 1
|
||||
|
||||
# Start the application
|
||||
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8002", "--reload"]
|
||||
# Run Docker-optimized startup script
|
||||
CMD ["/app/docker-start.sh"]
|
||||
120
services/tech-stack-selector/Neo4j_From_Postgres.cql
Normal file
@ -0,0 +1,120 @@
|
||||
// =====================================================
|
||||
// NEO4J SCHEMA FROM POSTGRESQL DATA
|
||||
// Price-focused migration from existing PostgreSQL database
|
||||
// =====================================================
|
||||
|
||||
// Clear existing data
|
||||
MATCH (n) DETACH DELETE n;
|
||||
|
||||
// =====================================================
|
||||
// CREATE CONSTRAINTS AND INDEXES
|
||||
// =====================================================
|
||||
|
||||
// Create uniqueness constraints
|
||||
CREATE CONSTRAINT price_tier_name_unique IF NOT EXISTS FOR (p:PriceTier) REQUIRE p.tier_name IS UNIQUE;
|
||||
CREATE CONSTRAINT technology_name_unique IF NOT EXISTS FOR (t:Technology) REQUIRE t.name IS UNIQUE;
|
||||
CREATE CONSTRAINT tool_name_unique IF NOT EXISTS FOR (tool:Tool) REQUIRE tool.name IS UNIQUE;
|
||||
CREATE CONSTRAINT stack_name_unique IF NOT EXISTS FOR (s:TechStack) REQUIRE s.name IS UNIQUE;
|
||||
|
||||
// Create indexes for performance
|
||||
CREATE INDEX price_tier_range_idx IF NOT EXISTS FOR (p:PriceTier) ON (p.min_price_usd, p.max_price_usd);
|
||||
CREATE INDEX tech_category_idx IF NOT EXISTS FOR (t:Technology) ON (t.category);
|
||||
CREATE INDEX tech_cost_idx IF NOT EXISTS FOR (t:Technology) ON (t.monthly_cost_usd);
|
||||
CREATE INDEX tool_category_idx IF NOT EXISTS FOR (tool:Tool) ON (tool.category);
|
||||
CREATE INDEX tool_cost_idx IF NOT EXISTS FOR (tool:Tool) ON (tool.monthly_cost_usd);
|
||||
|
||||
// =====================================================
|
||||
// PRICE TIER NODES (from PostgreSQL price_tiers table)
|
||||
// =====================================================
|
||||
|
||||
// These will be populated from PostgreSQL data
|
||||
// Structure matches PostgreSQL price_tiers table:
|
||||
// - id, tier_name, min_price_usd, max_price_usd, target_audience, typical_project_scale, description
|
||||
|
||||
// =====================================================
|
||||
// TECHNOLOGY NODES (from PostgreSQL technology tables)
|
||||
// =====================================================
|
||||
|
||||
// These will be populated from PostgreSQL data
|
||||
// Categories: frontend_technologies, backend_technologies, database_technologies,
|
||||
// cloud_technologies, testing_technologies, mobile_technologies,
|
||||
// devops_technologies, ai_ml_technologies
|
||||
|
||||
// =====================================================
|
||||
// TOOL NODES (from PostgreSQL tools table)
|
||||
// =====================================================
|
||||
|
||||
// These will be populated from PostgreSQL data
|
||||
// Structure matches PostgreSQL tools table with pricing:
|
||||
// - id, name, category, description, monthly_cost_usd, setup_cost_usd,
|
||||
// price_tier_id, total_cost_of_ownership_score, price_performance_ratio
|
||||
|
||||
// =====================================================
|
||||
// TECH STACK NODES (will be generated from combinations)
|
||||
// =====================================================
|
||||
|
||||
// These will be dynamically created based on:
|
||||
// - Price tier constraints
|
||||
// - Technology compatibility
|
||||
// - Budget optimization
|
||||
// - Domain requirements
|
||||
|
||||
// =====================================================
|
||||
// RELATIONSHIP TYPES
|
||||
// =====================================================
|
||||
|
||||
// Price-based relationships
|
||||
// - [:BELONGS_TO_TIER] - Technology/Tool belongs to price tier
|
||||
// - [:WITHIN_BUDGET] - Technology/Tool fits within budget range
|
||||
// - [:COST_OPTIMIZED] - Optimal cost-performance ratio
|
||||
|
||||
// Technology relationships
|
||||
// - [:COMPATIBLE_WITH] - Technology compatibility
|
||||
// - [:USES_FRONTEND] - Stack uses frontend technology
|
||||
// - [:USES_BACKEND] - Stack uses backend technology
|
||||
// - [:USES_DATABASE] - Stack uses database technology
|
||||
// - [:USES_CLOUD] - Stack uses cloud technology
|
||||
// - [:USES_TESTING] - Stack uses testing technology
|
||||
// - [:USES_MOBILE] - Stack uses mobile technology
|
||||
// - [:USES_DEVOPS] - Stack uses devops technology
|
||||
// - [:USES_AI_ML] - Stack uses AI/ML technology
|
||||
|
||||
// Tool relationships
|
||||
// - [:RECOMMENDED_FOR] - Tool recommended for domain/use case
|
||||
// - [:INTEGRATES_WITH] - Tool integrates with technology
|
||||
// - [:SUITABLE_FOR] - Tool suitable for price tier
|
||||
|
||||
// =====================================================
|
||||
// PRICE-BASED QUERIES (examples)
|
||||
// =====================================================
|
||||
|
||||
// Query 1: Find technologies within budget
|
||||
// MATCH (t:Technology)-[:BELONGS_TO_TIER]->(p:PriceTier)
|
||||
// WHERE $budget >= p.min_price_usd AND $budget <= p.max_price_usd
|
||||
// RETURN t, p ORDER BY t.total_cost_of_ownership_score DESC
|
||||
|
||||
// Query 2: Find optimal tech stack for budget
|
||||
// MATCH (frontend:Technology {category: "frontend"})-[:BELONGS_TO_TIER]->(p1:PriceTier)
|
||||
// MATCH (backend:Technology {category: "backend"})-[:BELONGS_TO_TIER]->(p2:PriceTier)
|
||||
// MATCH (database:Technology {category: "database"})-[:BELONGS_TO_TIER]->(p3:PriceTier)
|
||||
// MATCH (cloud:Technology {category: "cloud"})-[:BELONGS_TO_TIER]->(p4:PriceTier)
|
||||
// WHERE (frontend.monthly_cost_usd + backend.monthly_cost_usd +
|
||||
// database.monthly_cost_usd + cloud.monthly_cost_usd) <= $budget
|
||||
// RETURN frontend, backend, database, cloud,
|
||||
// (frontend.monthly_cost_usd + backend.monthly_cost_usd +
|
||||
// database.monthly_cost_usd + cloud.monthly_cost_usd) as total_cost
|
||||
// ORDER BY total_cost ASC,
|
||||
// (frontend.total_cost_of_ownership_score + backend.total_cost_of_ownership_score +
|
||||
// database.total_cost_of_ownership_score + cloud.total_cost_of_ownership_score) DESC
|
||||
|
||||
// Query 3: Find tools for specific price tier
|
||||
// MATCH (tool:Tool)-[:BELONGS_TO_TIER]->(p:PriceTier {tier_name: $tier_name})
|
||||
// RETURN tool ORDER BY tool.price_performance_ratio DESC
|
||||
|
||||
// =====================================================
|
||||
// COMPLETION STATUS
|
||||
// =====================================================
|
||||
|
||||
RETURN "✅ Neo4j Schema Ready for PostgreSQL Migration!" as status,
|
||||
"🎯 Focus: Price-based relationships from existing PostgreSQL data" as focus,
|
||||
"📊 Ready for data migration and relationship creation" as ready_state;
|
||||
@ -502,4 +502,16 @@ if domain == 'gaming':
|
||||
**Last Updated**: July 3, 2025
|
||||
**Version**: 4.0.0
|
||||
**Maintainer**: AI Development Pipeline Team
|
||||
**Status**: Production Ready ✅
|
||||
**Status**: Production Ready ✅
|
||||
|
||||
# Normal startup (auto-detects if migration needed)
|
||||
./start_migrated.sh
|
||||
|
||||
# Force re-migration (useful when you add new data)
|
||||
./start_migrated.sh --force-migration
|
||||
|
||||
# Show help
|
||||
./start_migrated.sh --help
|
||||
|
||||
|
||||
healthcare, finance, gaming, education, media, iot, social, elearning, realestate, travel, manufacturing, ecommerce, saas
|
||||
@ -0,0 +1,189 @@
|
||||
# Tech Stack Selector -- Postgres + Neo4j Knowledge Graph
|
||||
|
||||
This project provides a **price-focused technology stack selector**.\
|
||||
It uses a **Postgres relational database** for storing technologies and
|
||||
pricing, and builds a **Neo4j knowledge graph** to support advanced
|
||||
queries like:
|
||||
|
||||
> *"Show me all backend, frontend, and cloud technologies that fit a
|
||||
> \$10-\$50 budget."*
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 📌 1. Database Schema (Postgres)
|
||||
|
||||
The schema is designed to ensure **data integrity** and to support
**price-tier-driven recommendations**.
|
||||
|
||||
### Core Tables
|
||||
|
||||
- **`price_tiers`** -- Foundation table for price categories (tiers
|
||||
like *Free*, *Low*, *Medium*, *Enterprise*).
|
||||
- **Category-Specific Tables** -- Each technology domain has its own
|
||||
table:
|
||||
- `frontend_technologies`
|
||||
- `backend_technologies`
|
||||
- `cloud_technologies`
|
||||
- `database_technologies`
|
||||
- `testing_technologies`
|
||||
- `mobile_technologies`
|
||||
- `devops_technologies`
|
||||
- `ai_ml_technologies`
|
||||
- **`tools`** -- Central table for business/productivity tools with:
|
||||
- `name`, `category`, `description`
|
||||
- `primary_use_cases`
|
||||
- `popularity_score`
|
||||
- Pricing fields: `monthly_cost_usd`, `setup_cost_usd`,
|
||||
`license_cost_usd`, `training_cost_usd`,
|
||||
`total_cost_of_ownership_score`
|
||||
- Foreign key to `price_tiers`
|
||||
|
||||
All category tables reference `price_tiers(id)` ensuring **referential
|
||||
integrity**.
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🧱 2. Migration Files
|
||||
|
||||
Your migrations are structured as follows:
|
||||
|
||||
1. **`001_schema.sql`** -- Creates all tables, constraints, indexes.
|
||||
2. **`002_tools_migration.sql`** -- Adds `tools` table and full-text
|
||||
search indexes.
|
||||
3. **`003_tools_pricing_migration.sql`** -- Adds cost-related fields to
|
||||
`tools` and links to `price_tiers`.
|
||||
|
||||
Run them in order:
|
||||
|
||||
``` bash
|
||||
psql -U <user> -d <database> -f sql/001_schema.sql
|
||||
psql -U <user> -d <database> -f sql/002_tools_migration.sql
|
||||
psql -U <user> -d <database> -f sql/003_tools_pricing_migration.sql
|
||||
```
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🕸️ 3. Neo4j Knowledge Graph Design
|
||||
|
||||
We map relational data into a graph for semantic querying.
|
||||
|
||||
### Node Types
|
||||
|
||||
- **Technology** → `{name, category, description, popularity_score}`
|
||||
- **Category** → `{name}`
|
||||
- **PriceTier** → `{tier_name, min_price, max_price}`
|
||||
|
||||
### Relationships
|
||||
|
||||
- `(Technology)-[:BELONGS_TO]->(Category)`
|
||||
- `(Technology)-[:HAS_PRICE_TIER]->(PriceTier)`
|
||||
|
||||
Example graph:
|
||||
|
||||
(:Technology {name:"NodeJS"})-[:BELONGS_TO]->(:Category {name:"Backend"})
|
||||
(:Technology {name:"NodeJS"})-[:HAS_PRICE_TIER]->(:PriceTier {tier_name:"Medium"})
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🔄 4. ETL (Extract → Transform → Load)
|
||||
|
||||
Use a Python ETL script to pull from Postgres and load into Neo4j.
|
||||
|
||||
### Example Script
|
||||
|
||||
``` python
|
||||
from neo4j import GraphDatabase
|
||||
import psycopg2
|
||||
|
||||
pg_conn = psycopg2.connect(host="localhost", database="techstack", user="user", password="pass")
|
||||
pg_cur = pg_conn.cursor()
|
||||
|
||||
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
|
||||
|
||||
def insert_data(tx, tech_name, category, price_tier):
|
||||
tx.run("""
|
||||
MERGE (c:Category {name: $category})
|
||||
MERGE (t:Technology {name: $tech})
|
||||
ON CREATE SET t.category = $category
|
||||
MERGE (p:PriceTier {tier_name: $price_tier})
|
||||
MERGE (t)-[:BELONGS_TO]->(c)
|
||||
MERGE (t)-[:HAS_PRICE_TIER]->(p)
|
||||
""", tech=tech_name, category=category, price_tier=price_tier)
|
||||
|
||||
pg_cur.execute("SELECT name, category, tier_name FROM tools JOIN price_tiers ON price_tiers.id = tools.price_tier_id")
|
||||
rows = pg_cur.fetchall()
|
||||
|
||||
with driver.session() as session:
|
||||
for name, category, tier in rows:
|
||||
session.write_transaction(insert_data, name, category, tier)
|
||||
|
||||
pg_conn.close()
|
||||
driver.close()
|
||||
```
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🔍 5. Querying the Knowledge Graph
|
||||
|
||||
### Find technologies in a price range:
|
||||
|
||||
``` cypher
|
||||
MATCH (t:Technology)-[:HAS_PRICE_TIER]->(p:PriceTier)
|
||||
WHERE p.min_price >= 10 AND p.max_price <= 50
|
||||
RETURN t.name, p.tier_name
|
||||
ORDER BY p.min_price ASC
|
||||
```
|
||||
|
||||
### Find technologies for a specific domain:
|
||||
|
||||
``` cypher
|
||||
MATCH (t:Technology)-[:BELONGS_TO]->(c:Category)
|
||||
WHERE c.name = "Backend"
|
||||
RETURN t.name, t.popularity_score
|
||||
ORDER BY t.popularity_score DESC
|
||||
```
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🗂️ 6. Suggested Project Structure
|
||||
|
||||
techstack-selector/
|
||||
├── sql/
|
||||
│ ├── 001_schema.sql
|
||||
│ ├── 002_tools_migration.sql
|
||||
│ └── 003_tools_pricing_migration.sql
|
||||
├── etl/
|
||||
│ └── postgres_to_neo4j.py
|
||||
├── api/
|
||||
│ └── app.py (Flask/FastAPI server for exposing queries)
|
||||
├── docs/
|
||||
│ └── README.md
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## 🚀 7. API Layer (Optional)
|
||||
|
||||
You can wrap Neo4j queries inside a REST/GraphQL API.
|
||||
|
||||
Example response:
|
||||
|
||||
``` json
|
||||
{
|
||||
"price_range": [10, 50],
|
||||
"technologies": [
|
||||
{"name": "NodeJS", "category": "Backend", "tier": "Medium"},
|
||||
{"name": "React", "category": "Frontend", "tier": "Medium"}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
------------------------------------------------------------------------
|
||||
|
||||
## ✅ Summary
|
||||
|
||||
This README covers:

- Postgres schema with pricing and foreign keys
- Migration execution steps
- Neo4j graph model
- Python ETL script
- Example Cypher queries
- Suggested folder structure
|
||||
|
||||
This setup enables **price-driven technology recommendations** with a
|
||||
clear path for building APIs and AI-powered analytics.
|
||||
7769
services/tech-stack-selector/db/001_schema.sql
Normal file
File diff suppressed because it is too large
162
services/tech-stack-selector/db/002_tools_migration.sql
Normal file
@ -0,0 +1,162 @@
|
||||
-- =====================================================
|
||||
-- Tools Table Migration
|
||||
-- Business/Productivity Tools for Domain-Based Recommendations
|
||||
-- =====================================================
|
||||
|
||||
-- Create tools table
|
||||
CREATE TABLE tools (
|
||||
id SERIAL PRIMARY KEY,
|
||||
name VARCHAR(255) NOT NULL,
|
||||
category VARCHAR(100) NOT NULL,
|
||||
description TEXT,
|
||||
primary_use_cases TEXT,
|
||||
popularity_score INT CHECK (popularity_score >= 1 AND popularity_score <= 100),
|
||||
created_at TIMESTAMP DEFAULT now()
|
||||
);
|
||||
|
||||
-- Create indexes for better performance
|
||||
CREATE INDEX idx_tools_category ON tools(category);
|
||||
CREATE INDEX idx_tools_popularity ON tools(popularity_score);
|
||||
CREATE INDEX idx_tools_name_search ON tools USING gin(to_tsvector('english', name));
|
||||
|
||||
-- =====================================================
|
||||
-- SEED DATA - BUSINESS/PRODUCTIVITY TOOLS
|
||||
-- =====================================================
|
||||
|
||||
INSERT INTO tools (name, category, description, primary_use_cases, popularity_score) VALUES
|
||||
|
||||
-- E-commerce Tools
|
||||
('Shopify', 'e-commerce', 'Complete e-commerce platform for online stores with built-in payment processing, inventory management, and marketing tools', 'Online store creation, product management, order processing, payment handling, inventory tracking, customer management, marketing automation', 95),
|
||||
('WooCommerce', 'e-commerce', 'WordPress plugin that transforms any WordPress site into a fully functional e-commerce store', 'WordPress e-commerce, product catalog, payment processing, order management, inventory control, customer accounts', 90),
|
||||
('Magento', 'e-commerce', 'Enterprise-grade e-commerce platform with advanced customization and scalability features', 'Large-scale e-commerce, B2B commerce, multi-store management, advanced catalog management, enterprise integrations', 85),
|
||||
('BigCommerce', 'e-commerce', 'SaaS e-commerce platform with built-in features for growing online businesses', 'Online store setup, payment processing, SEO optimization, multi-channel selling, inventory management', 80),
|
||||
('Squarespace Commerce', 'e-commerce', 'Website builder with integrated e-commerce capabilities for small to medium businesses', 'Website creation with e-commerce, product showcase, payment processing, inventory management, customer management', 75),
|
||||
('PrestaShop', 'e-commerce', 'Open-source e-commerce platform with extensive customization options', 'Custom e-commerce solutions, multi-language stores, advanced product management, payment gateway integration', 70),
|
||||
|
||||
-- CRM Tools
|
||||
('HubSpot CRM', 'crm', 'Free CRM platform with sales, marketing, and customer service tools for growing businesses', 'Lead management, contact tracking, sales pipeline management, email marketing, customer support, analytics', 95),
|
||||
('Salesforce CRM', 'crm', 'Enterprise-grade CRM platform with extensive customization and integration capabilities', 'Enterprise sales management, customer relationship management, marketing automation, analytics, custom applications', 98),
|
||||
('Zoho CRM', 'crm', 'Comprehensive CRM solution with sales, marketing, and customer support features', 'Lead and contact management, sales automation, email marketing, customer support, analytics, mobile access', 85),
|
||||
('Pipedrive', 'crm', 'Sales-focused CRM with visual pipeline management and automation features', 'Sales pipeline management, deal tracking, contact management, email integration, sales reporting', 80),
|
||||
('Freshworks CRM', 'crm', 'Modern CRM platform with AI-powered insights and automation capabilities', 'Lead management, contact tracking, sales automation, email marketing, customer support, AI insights', 75),
|
||||
('Monday.com CRM', 'crm', 'Visual CRM platform with customizable workflows and team collaboration features', 'Sales pipeline management, contact tracking, team collaboration, project management, automation', 70),
|
||||
|
||||
-- Analytics Tools
|
||||
('Google Analytics', 'analytics', 'Web analytics service that tracks and reports website traffic and user behavior', 'Website traffic analysis, user behavior tracking, conversion tracking, audience insights, performance monitoring', 98),
|
||||
('Mixpanel', 'analytics', 'Advanced analytics platform focused on user behavior and product analytics', 'User behavior analysis, funnel analysis, cohort analysis, A/B testing, product analytics, retention tracking', 85),
|
||||
('Amplitude', 'analytics', 'Product analytics platform for understanding user behavior and driving growth', 'User journey analysis, behavioral analytics, cohort analysis, retention analysis, feature adoption tracking', 80),
|
||||
('Hotjar', 'analytics', 'User behavior analytics tool with heatmaps, session recordings, and feedback collection', 'Heatmap analysis, session recordings, user feedback, conversion optimization, user experience analysis', 75),
|
||||
('Tableau', 'analytics', 'Business intelligence and data visualization platform for advanced analytics', 'Data visualization, business intelligence, advanced analytics, reporting, data exploration, dashboard creation', 90),
|
||||
('Power BI', 'analytics', 'Microsoft business analytics service for data visualization and business intelligence', 'Data visualization, business intelligence, reporting, dashboard creation, data modeling, advanced analytics', 85),
|
||||
|
||||
-- Payment Processing
|
||||
('Stripe', 'payments', 'Online payment processing platform for internet businesses with developer-friendly APIs', 'Online payments, subscription billing, marketplace payments, international payments, fraud prevention, API integration', 95),
|
||||
('PayPal', 'payments', 'Global payment platform supporting online payments, money transfers, and business solutions', 'Online payments, money transfers, business payments, international transactions, mobile payments, invoicing', 90),
|
||||
('Razorpay', 'payments', 'Payment gateway solution designed for Indian businesses with local payment methods', 'Indian payment processing, UPI payments, card payments, subscription billing, payment links, business banking', 85),
|
||||
('Square', 'payments', 'Payment processing platform with point-of-sale and online payment solutions', 'Point-of-sale payments, online payments, invoicing, business management, payment analytics, mobile payments', 80),
|
||||
('Adyen', 'payments', 'Global payment platform for enterprise businesses with advanced fraud prevention', 'Enterprise payments, global payment processing, fraud prevention, payment optimization, unified commerce', 75),
|
||||
('Braintree', 'payments', 'PayPal-owned payment platform with advanced features for online and mobile payments', 'Online payments, mobile payments, marketplace payments, subscription billing, fraud protection, global payments', 70),
|
||||
|
||||
-- Communication Tools
|
||||
('Slack', 'communication', 'Business communication platform with channels, direct messaging, and app integrations', 'Team communication, project collaboration, file sharing, app integrations, video calls, workflow automation', 95),
|
||||
('Microsoft Teams', 'communication', 'Collaboration platform with chat, video meetings, and Microsoft 365 integration', 'Team communication, video conferencing, file collaboration, Microsoft 365 integration, project management', 90),
|
||||
('Discord', 'communication', 'Voice, video, and text communication platform popular with gaming and tech communities', 'Community building, voice/video calls, text chat, server management, bot integration, streaming', 85),
|
||||
('Zoom', 'communication', 'Video conferencing platform with meeting, webinar, and collaboration features', 'Video meetings, webinars, screen sharing, recording, virtual events, team collaboration', 90),
|
||||
('Telegram', 'communication', 'Cloud-based messaging platform with group chats, channels, and bot support', 'Messaging, group chats, channels, file sharing, bot integration, voice/video calls, cloud storage', 80),
|
||||
('WhatsApp Business', 'communication', 'Business messaging platform for customer communication and marketing', 'Customer communication, business messaging, marketing campaigns, catalog sharing, payment integration', 75),
|
||||
|
||||
-- Project Management
|
||||
('Trello', 'project-management', 'Visual project management tool using boards, lists, and cards for task organization', 'Task management, project tracking, team collaboration, workflow visualization, deadline management, progress tracking', 85),
|
||||
('Jira', 'project-management', 'Agile project management tool designed for software development teams', 'Agile project management, issue tracking, sprint planning, bug tracking, release management, team collaboration', 90),
|
||||
('Asana', 'project-management', 'Work management platform for teams to organize, track, and manage their work', 'Task management, project planning, team collaboration, workflow automation, progress tracking, deadline management', 85),
|
||||
('Monday.com', 'project-management', 'Work operating system with customizable workflows and visual project management', 'Project management, team collaboration, workflow automation, resource management, time tracking, reporting', 80),
|
||||
('Notion', 'project-management', 'All-in-one workspace combining notes, docs, wikis, and project management', 'Note-taking, documentation, project management, team collaboration, knowledge management, task tracking', 85),
|
||||
('Basecamp', 'project-management', 'Project management and team communication platform with simple, organized interface', 'Project management, team communication, file sharing, scheduling, progress tracking, client collaboration', 75),
|
||||
|
||||
-- Marketing Tools
|
||||
('Mailchimp', 'marketing', 'Email marketing and automation platform with audience management and analytics', 'Email marketing, marketing automation, audience segmentation, campaign management, analytics, landing pages', 90),
|
||||
('Klaviyo', 'marketing', 'E-commerce marketing automation platform with advanced segmentation and personalization', 'E-commerce marketing, email automation, SMS marketing, customer segmentation, personalization, analytics', 85),
|
||||
('SEMrush', 'marketing', 'Digital marketing toolkit with SEO, PPC, content, and social media marketing tools', 'SEO analysis, keyword research, competitor analysis, PPC management, content marketing, social media management', 80),
|
||||
('HubSpot Marketing', 'marketing', 'Inbound marketing platform with lead generation, email marketing, and analytics', 'Lead generation, email marketing, marketing automation, landing pages, analytics, CRM integration', 85),
|
||||
('Hootsuite', 'marketing', 'Social media management platform for scheduling, monitoring, and analytics', 'Social media scheduling, content management, social listening, analytics, team collaboration, brand monitoring', 80),
|
||||
('Canva', 'marketing', 'Graphic design platform with templates and tools for creating marketing materials', 'Graphic design, social media graphics, presentations, marketing materials, brand assets, team collaboration', 90),
|
||||
|
||||
-- Design & Content Creation
|
||||
('Figma', 'design', 'Collaborative interface design tool with real-time editing and prototyping features', 'UI/UX design, prototyping, design systems, team collaboration, design handoff, component libraries', 95),
|
||||
('Adobe Creative Suite', 'design', 'Comprehensive suite of creative tools for design, photography, and video production', 'Graphic design, photo editing, video production, web design, illustration, animation, print design', 90),
|
||||
('Sketch', 'design', 'Digital design toolkit for creating user interfaces and user experiences', 'UI design, prototyping, design systems, vector graphics, collaboration, design handoff', 85),
|
||||
('InVision', 'design', 'Digital product design platform with prototyping and collaboration features', 'Prototyping, design collaboration, user testing, design handoff, design systems, workflow management', 80),
|
||||
('Adobe XD', 'design', 'User experience design tool with prototyping and collaboration capabilities', 'UX design, prototyping, design systems, collaboration, user testing, design handoff', 85),
|
||||
('Framer', 'design', 'Interactive design tool for creating high-fidelity prototypes and animations', 'Interactive prototyping, animation design, responsive design, user testing, design handoff', 75),
|
||||
|
||||
-- Development & DevOps
|
||||
('GitHub', 'development', 'Code hosting platform with version control, collaboration, and project management features', 'Code hosting, version control, collaboration, project management, CI/CD, code review, issue tracking', 95),
|
||||
('GitLab', 'development', 'DevOps platform with Git repository management, CI/CD, and project management', 'Version control, CI/CD, project management, code review, issue tracking, DevOps automation', 85),
|
||||
('Bitbucket', 'development', 'Git repository management solution with built-in CI/CD and collaboration tools', 'Version control, code collaboration, CI/CD, project management, code review, issue tracking', 80),
|
||||
('Jira Software', 'development', 'Agile project management tool specifically designed for software development teams', 'Agile project management, sprint planning, issue tracking, release management, team collaboration', 90),
|
||||
('Confluence', 'development', 'Team collaboration and documentation platform for knowledge sharing and project documentation', 'Documentation, knowledge management, team collaboration, project documentation, meeting notes, wikis', 85),
|
||||
('Jenkins', 'development', 'Open-source automation server for building, testing, and deploying software', 'CI/CD automation, build automation, testing automation, deployment automation, pipeline management', 80),
|
||||
|
||||
-- Customer Support
|
||||
('Zendesk', 'customer-support', 'Customer service platform with ticketing, knowledge base, and communication tools', 'Customer support, ticket management, knowledge base, live chat, customer communication, analytics', 90),
|
||||
('Intercom', 'customer-support', 'Customer messaging platform with support, engagement, and marketing features', 'Customer support, live chat, messaging, customer engagement, marketing automation, analytics', 85),
|
||||
('Freshdesk', 'customer-support', 'Cloud-based customer support software with ticketing and communication features', 'Customer support, ticket management, knowledge base, live chat, customer communication, automation', 80),
|
||||
('Help Scout', 'customer-support', 'Customer service platform focused on team collaboration and customer satisfaction', 'Customer support, ticket management, team collaboration, customer communication, knowledge base, analytics', 75),
|
||||
('LiveChat', 'customer-support', 'Live chat software for customer support and sales with automation features', 'Live chat, customer support, sales chat, chat automation, visitor tracking, analytics', 70),
|
||||
('Crisp', 'customer-support', 'Customer messaging platform with live chat, email, and social media integration', 'Live chat, customer support, email integration, social media integration, visitor tracking, analytics', 65),
|
||||
|
||||
-- Business Intelligence & Reporting
|
||||
('Google Data Studio', 'business-intelligence', 'Free data visualization and reporting tool that integrates with Google services', 'Data visualization, reporting, dashboard creation, Google Analytics integration, data exploration', 80),
|
||||
('Looker', 'business-intelligence', 'Business intelligence platform with data modeling and visualization capabilities', 'Business intelligence, data modeling, visualization, reporting, analytics, data exploration', 85),
|
||||
('Qlik Sense', 'business-intelligence', 'Self-service data visualization and business intelligence platform', 'Data visualization, business intelligence, self-service analytics, reporting, data exploration', 80),
|
||||
('Sisense', 'business-intelligence', 'Business intelligence platform with embedded analytics and data visualization', 'Business intelligence, embedded analytics, data visualization, reporting, data modeling', 75),
|
||||
('Domo', 'business-intelligence', 'Cloud-based business intelligence platform with real-time data visualization', 'Business intelligence, real-time analytics, data visualization, reporting, dashboard creation', 70),
|
||||
('Metabase', 'business-intelligence', 'Open-source business intelligence tool with easy-to-use interface for data exploration', 'Business intelligence, data exploration, reporting, dashboard creation, SQL queries, data visualization', 75),
|
||||
|
||||
-- Accounting & Finance
|
||||
('QuickBooks', 'accounting', 'Accounting software for small and medium businesses with invoicing and expense tracking', 'Accounting, invoicing, expense tracking, financial reporting, tax preparation, payroll management', 90),
|
||||
('Xero', 'accounting', 'Cloud-based accounting software for small businesses with bank reconciliation and reporting', 'Accounting, bank reconciliation, invoicing, expense tracking, financial reporting, inventory management', 85),
|
||||
('FreshBooks', 'accounting', 'Cloud-based accounting software designed for small businesses and freelancers', 'Accounting, invoicing, expense tracking, time tracking, project management, financial reporting', 80),
|
||||
('Wave', 'accounting', 'Free accounting software for small businesses with invoicing and receipt scanning', 'Accounting, invoicing, expense tracking, receipt scanning, financial reporting, tax preparation', 75),
|
||||
('Sage', 'accounting', 'Business management software with accounting, payroll, and HR features', 'Accounting, payroll management, HR management, financial reporting, inventory management, business intelligence', 80),
|
||||
('Zoho Books', 'accounting', 'Online accounting software with invoicing, expense tracking, and financial reporting', 'Accounting, invoicing, expense tracking, financial reporting, inventory management, project management', 75);
|
||||
|
||||
-- =====================================================
-- VERIFICATION QUERIES
-- =====================================================

-- Verify data insertion
SELECT
    category,
    COUNT(*) as tool_count,
    AVG(popularity_score) as avg_popularity
FROM tools
GROUP BY category
ORDER BY tool_count DESC;

-- Example query: Get tools by category
SELECT name, description, popularity_score
FROM tools
WHERE category = 'e-commerce'
ORDER BY popularity_score DESC;

-- Example query: Search for tools by use case
SELECT name, category, primary_use_cases
FROM tools
WHERE primary_use_cases ILIKE '%payment%'
ORDER BY popularity_score DESC;
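-- Illustrative example (not part of the original migration): a recommendation-style
-- lookup that returns the top 3 tools per category by popularity, using only the
-- columns defined above.
SELECT name, category, popularity_score
FROM (
    SELECT name, category, popularity_score,
           ROW_NUMBER() OVER (PARTITION BY category ORDER BY popularity_score DESC) AS category_rank
    FROM tools
) ranked
WHERE category_rank <= 3
ORDER BY category, category_rank;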
-- =====================================================
-- MIGRATION COMPLETED
-- =====================================================

-- Display completion message
DO $$
BEGIN
    RAISE NOTICE 'Tools table migration completed successfully!';
    RAISE NOTICE 'Created tools table with % categories and % total tools',
        (SELECT COUNT(DISTINCT category) FROM tools),
        (SELECT COUNT(*) FROM tools);
    RAISE NOTICE 'Ready for domain-based tool recommendations';
END $$;
788
services/tech-stack-selector/db/003_tools_pricing_migration.sql
Normal file
788
services/tech-stack-selector/db/003_tools_pricing_migration.sql
Normal file
@ -0,0 +1,788 @@
-- =====================================================
-- Tools Pricing Migration
-- Add pricing fields and data to tools table
-- =====================================================

-- Add pricing fields to tools table
ALTER TABLE tools ADD COLUMN IF NOT EXISTS price_tier_id INTEGER REFERENCES price_tiers(id);
ALTER TABLE tools ADD COLUMN IF NOT EXISTS monthly_cost_usd DECIMAL(10,2) DEFAULT 0;
ALTER TABLE tools ADD COLUMN IF NOT EXISTS setup_cost_usd DECIMAL(10,2) DEFAULT 0;
ALTER TABLE tools ADD COLUMN IF NOT EXISTS license_cost_usd DECIMAL(10,2) DEFAULT 0;
ALTER TABLE tools ADD COLUMN IF NOT EXISTS training_cost_usd DECIMAL(10,2) DEFAULT 0;
ALTER TABLE tools ADD COLUMN IF NOT EXISTS total_cost_of_ownership_score INTEGER CHECK (total_cost_of_ownership_score >= 1 AND total_cost_of_ownership_score <= 100);
ALTER TABLE tools ADD COLUMN IF NOT EXISTS price_performance_ratio INTEGER CHECK (price_performance_ratio >= 1 AND price_performance_ratio <= 100);

-- Create indexes for better performance
CREATE INDEX IF NOT EXISTS idx_tools_price_tier ON tools(price_tier_id);
CREATE INDEX IF NOT EXISTS idx_tools_monthly_cost ON tools(monthly_cost_usd);
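-- Illustrative example (kept commented out because the pricing data is only populated
-- further down in this file): once the UPDATEs below have run, a budget-constrained
-- lookup can combine these columns with price_tiers, e.g. customer-support tools that
-- cost at most $50/month. The idx_tools_monthly_cost index supports the cost filter.
-- SELECT t.name, pt.tier_name, t.monthly_cost_usd, t.price_performance_ratio
-- FROM tools t
-- LEFT JOIN price_tiers pt ON pt.id = t.price_tier_id
-- WHERE t.category = 'customer-support'
--   AND t.monthly_cost_usd <= 50
-- ORDER BY t.price_performance_ratio DESC NULLS LAST;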
|
||||
-- =====================================================
|
||||
-- UPDATE TOOLS WITH PRICING DATA
|
||||
-- =====================================================
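-- Each UPDATE below follows the same pattern: resolve the tier id from price_tiers by
-- tier_name, then set the per-tool cost fields and the two 1-100 scoring columns.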
|
||||
|
||||
-- E-commerce Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 29.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Shopify';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 100.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'WooCommerce';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Growth Stage'),
|
||||
monthly_cost_usd = 200.00,
|
||||
setup_cost_usd = 2000.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 500.00,
|
||||
total_cost_of_ownership_score = 75,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Magento';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 39.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'BigCommerce';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 18.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Squarespace Commerce';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 300.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 92,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'PrestaShop';
|
||||
|
||||
-- CRM Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 98,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'HubSpot CRM';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Scale-Up'),
|
||||
monthly_cost_usd = 150.00,
|
||||
setup_cost_usd = 1000.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 800.00,
|
||||
total_cost_of_ownership_score = 80,
|
||||
price_performance_ratio = 75
|
||||
WHERE name = 'Salesforce CRM';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 20.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Zoho CRM';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Pipedrive';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 29.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 82
|
||||
WHERE name = 'Freshworks CRM';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 25.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 87,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Monday.com CRM';
|
||||
|
||||
-- Analytics Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 98,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Google Analytics';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 25.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Mixpanel';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 20.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Amplitude';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Hotjar';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Growth Stage'),
|
||||
monthly_cost_usd = 70.00,
|
||||
setup_cost_usd = 500.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 400.00,
|
||||
total_cost_of_ownership_score = 80,
|
||||
price_performance_ratio = 75
|
||||
WHERE name = 'Tableau';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 10.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 92,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Power BI';
|
||||
|
||||
-- Payment Processing Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Stripe';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'PayPal';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Razorpay';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Square';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Growth Stage'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 300.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Adyen';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 87,
|
||||
price_performance_ratio = 82
|
||||
WHERE name = 'Braintree';
|
||||
|
||||
-- Communication Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 8.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 92,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Slack';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 6.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Microsoft Teams';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Discord';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Zoom';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 25.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Telegram';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 10.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'WhatsApp Business';
|
||||
|
||||
-- Project Management Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 6.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Trello';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 8.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Jira';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 11.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 87,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Asana';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 10.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Monday.com';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 8.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Notion';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 82
|
||||
WHERE name = 'Basecamp';
|
||||
|
||||
-- Marketing Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Mailchimp';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 20.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Klaviyo';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 120.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 300.00,
|
||||
total_cost_of_ownership_score = 75,
|
||||
price_performance_ratio = 70
|
||||
WHERE name = 'SEMrush';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 50.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 80,
|
||||
price_performance_ratio = 75
|
||||
WHERE name = 'HubSpot Marketing';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 49.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Hootsuite';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Canva';
|
||||
|
||||
-- Design & Content Creation Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 12.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Figma';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 53.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 400.00,
|
||||
total_cost_of_ownership_score = 80,
|
||||
price_performance_ratio = 75
|
||||
WHERE name = 'Adobe Creative Suite';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 9.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Sketch';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 8.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 87,
|
||||
price_performance_ratio = 82
|
||||
WHERE name = 'InVision';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 10.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Adobe XD';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 20.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Framer';
|
||||
|
||||
-- Development & DevOps Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 98,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'GitHub';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'GitLab';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Bitbucket';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 8.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Jira Software';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 6.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Confluence';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 92,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Jenkins';
|
||||
|
||||
-- Customer Support Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 19.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Zendesk';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 39.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Intercom';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'Freshdesk';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 20.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Help Scout';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 16.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 87,
|
||||
price_performance_ratio = 82
|
||||
WHERE name = 'LiveChat';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 25.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Crisp';
|
||||
|
||||
-- Business Intelligence & Reporting Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Google Data Studio';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Growth Stage'),
|
||||
monthly_cost_usd = 90.00,
|
||||
setup_cost_usd = 500.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 400.00,
|
||||
total_cost_of_ownership_score = 80,
|
||||
price_performance_ratio = 75
|
||||
WHERE name = 'Looker';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Qlik Sense';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Growth Stage'),
|
||||
monthly_cost_usd = 83.00,
|
||||
setup_cost_usd = 1000.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 500.00,
|
||||
total_cost_of_ownership_score = 75,
|
||||
price_performance_ratio = 70
|
||||
WHERE name = 'Sisense';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 25.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Domo';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 95,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Metabase';
|
||||
|
||||
-- Accounting & Finance Tools Pricing
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 200.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'QuickBooks';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 13.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 92,
|
||||
price_performance_ratio = 90
|
||||
WHERE name = 'Xero';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 100.00,
|
||||
total_cost_of_ownership_score = 90,
|
||||
price_performance_ratio = 88
|
||||
WHERE name = 'FreshBooks';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Micro Budget'),
|
||||
monthly_cost_usd = 0.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 50.00,
|
||||
total_cost_of_ownership_score = 98,
|
||||
price_performance_ratio = 95
|
||||
WHERE name = 'Wave';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Small Business'),
|
||||
monthly_cost_usd = 25.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 300.00,
|
||||
total_cost_of_ownership_score = 85,
|
||||
price_performance_ratio = 80
|
||||
WHERE name = 'Sage';
|
||||
|
||||
UPDATE tools SET
|
||||
price_tier_id = (SELECT id FROM price_tiers WHERE tier_name = 'Startup Budget'),
|
||||
monthly_cost_usd = 15.00,
|
||||
setup_cost_usd = 0.00,
|
||||
license_cost_usd = 0.00,
|
||||
training_cost_usd = 150.00,
|
||||
total_cost_of_ownership_score = 88,
|
||||
price_performance_ratio = 85
|
||||
WHERE name = 'Zoho Books';
|
||||
|
||||
-- =====================================================
-- VERIFICATION QUERIES
-- =====================================================

-- Verify tools pricing data
SELECT
    t.name,
    t.category,
    pt.tier_name,
    t.monthly_cost_usd,
    t.setup_cost_usd,
    t.total_cost_of_ownership_score,
    t.price_performance_ratio
FROM tools t
LEFT JOIN price_tiers pt ON t.price_tier_id = pt.id
ORDER BY t.monthly_cost_usd DESC, t.name;

-- Summary by price tier
SELECT
    pt.tier_name,
    COUNT(t.id) as tool_count,
    AVG(t.monthly_cost_usd) as avg_monthly_cost,
    AVG(t.total_cost_of_ownership_score) as avg_tco_score
FROM price_tiers pt
LEFT JOIN tools t ON pt.id = t.price_tier_id
GROUP BY pt.id, pt.tier_name
ORDER BY pt.min_price_usd;

-- =====================================================
-- MIGRATION COMPLETED
-- =====================================================

-- Migration completed successfully
-- Tools are now connected to price tiers and can be included in budget calculations
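-- Illustrative example (not part of the migration): estimated first-year cost for a
-- hypothetical shortlist of tools, combining the pricing fields added above
-- (12 months of subscription plus the one-time setup, license, and training costs).
SELECT
    SUM(monthly_cost_usd * 12 + setup_cost_usd + license_cost_usd + training_cost_usd) AS estimated_first_year_cost_usd
FROM tools
WHERE name IN ('Shopify', 'Stripe', 'Google Analytics', 'Slack', 'GitHub');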
|
||||
305
services/tech-stack-selector/docker-start.sh
Normal file
305
services/tech-stack-selector/docker-start.sh
Normal file
@ -0,0 +1,305 @@
|
||||
#!/bin/bash
|
||||
|
||||
# ================================================================================================
|
||||
# ENHANCED TECH STACK SELECTOR - DOCKER STARTUP SCRIPT
|
||||
# Optimized for Docker environment with proper service discovery
|
||||
# ================================================================================================
|
||||
|
||||
set -e
|
||||
|
||||
# Parse command line arguments
|
||||
FORCE_MIGRATION=false
|
||||
if [ "$1" = "--force-migration" ] || [ "$1" = "-f" ]; then
|
||||
FORCE_MIGRATION=true
|
||||
echo "🔄 Force migration mode enabled"
|
||||
elif [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
|
||||
echo "Usage: $0 [OPTIONS]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --force-migration, -f Force re-run all migrations"
|
||||
echo " --help, -h Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 # Normal startup with auto-migration detection"
|
||||
echo " $0 --force-migration # Force re-run all migrations"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "="*60
|
||||
echo "🚀 ENHANCED TECH STACK SELECTOR v15.0 - DOCKER VERSION"
|
||||
echo "="*60
|
||||
echo "✅ PostgreSQL data migrated to Neo4j"
|
||||
echo "✅ Price-based relationships"
|
||||
echo "✅ Real data from PostgreSQL"
|
||||
echo "✅ Comprehensive pricing analysis"
|
||||
echo "✅ Docker-optimized startup"
|
||||
echo "="*60
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Function to print colored output
|
||||
print_status() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
}
|
||||
|
||||
print_info() {
|
||||
echo -e "${BLUE}ℹ️ $1${NC}"
|
||||
}
|
||||
|
||||
# Get environment variables with defaults
|
||||
POSTGRES_HOST=${POSTGRES_HOST:-postgres}
|
||||
POSTGRES_PORT=${POSTGRES_PORT:-5432}
|
||||
POSTGRES_USER=${POSTGRES_USER:-pipeline_admin}
|
||||
POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-secure_pipeline_2024}
|
||||
POSTGRES_DB=${POSTGRES_DB:-dev_pipeline}
|
||||
NEO4J_URI=${NEO4J_URI:-bolt://neo4j:7687}
|
||||
NEO4J_USER=${NEO4J_USER:-neo4j}
|
||||
NEO4J_PASSWORD=${NEO4J_PASSWORD:-password}
|
||||
CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
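# These defaults assume the docker-compose service names (postgres, neo4j); override them
# through the environment when running the script outside Docker.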
|
||||
|
||||
print_status "Environment variables loaded"
|
||||
print_info "PostgreSQL: ${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
|
||||
print_info "Neo4j: ${NEO4J_URI}"
|
||||
|
||||
# Function to wait for service to be ready
|
||||
wait_for_service() {
|
||||
local service_name=$1
|
||||
local host=$2
|
||||
local port=$3
|
||||
local max_attempts=30
|
||||
local attempt=1
|
||||
|
||||
print_info "Waiting for ${service_name} to be ready..."
|
||||
|
||||
while [ $attempt -le $max_attempts ]; do
|
||||
if nc -z $host $port 2>/dev/null; then
|
||||
print_status "${service_name} is ready!"
|
||||
return 0
|
||||
fi
|
||||
|
||||
print_info "Attempt ${attempt}/${max_attempts}: ${service_name} not ready yet, waiting 2 seconds..."
|
||||
sleep 2
|
||||
attempt=$((attempt + 1))
|
||||
done
|
||||
|
||||
print_error "${service_name} failed to become ready after ${max_attempts} attempts"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Wait for PostgreSQL
|
||||
if ! wait_for_service "PostgreSQL" $POSTGRES_HOST $POSTGRES_PORT; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Wait for Neo4j
|
||||
if ! wait_for_service "Neo4j" neo4j 7687; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Function to check if database needs migration
|
||||
check_database_migration() {
|
||||
print_info "Checking if database needs migration..."
|
||||
|
||||
# Check if price_tiers table exists and has data
|
||||
if ! python3 -c "
|
||||
import psycopg2
|
||||
import os
|
||||
try:
|
||||
conn = psycopg2.connect(
|
||||
host=os.getenv('POSTGRES_HOST', 'postgres'),
|
||||
port=int(os.getenv('POSTGRES_PORT', '5432')),
|
||||
user=os.getenv('POSTGRES_USER', 'pipeline_admin'),
|
||||
password=os.getenv('POSTGRES_PASSWORD', 'secure_pipeline_2024'),
|
||||
database=os.getenv('POSTGRES_DB', 'dev_pipeline')
|
||||
)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if price_tiers table exists
|
||||
cursor.execute(\"\"\"
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'public'
|
||||
AND table_name = 'price_tiers'
|
||||
);
|
||||
\"\"\")
|
||||
table_exists = cursor.fetchone()[0]
|
||||
|
||||
if not table_exists:
|
||||
print('price_tiers table does not exist - migration needed')
|
||||
exit(1)
|
||||
|
||||
# Check if price_tiers has data
|
||||
cursor.execute('SELECT COUNT(*) FROM price_tiers;')
|
||||
count = cursor.fetchone()[0]
|
||||
|
||||
if count == 0:
|
||||
print('price_tiers table is empty - migration needed')
|
||||
exit(1)
|
||||
|
||||
# Check if stack_recommendations has sufficient data
|
||||
cursor.execute('SELECT COUNT(*) FROM stack_recommendations;')
|
||||
rec_count = cursor.fetchone()[0]
|
||||
|
||||
if rec_count < 20: # Reduced threshold for Docker environment
|
||||
print(f'stack_recommendations has only {rec_count} records - migration needed')
|
||||
exit(1)
|
||||
|
||||
print('Database appears to be fully migrated')
|
||||
cursor.close()
|
||||
conn.close()
|
||||
|
||||
except Exception as e:
|
||||
print(f'Error checking database: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
return 1 # Migration needed
|
||||
else
|
||||
return 0 # Migration not needed
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run PostgreSQL migrations
|
||||
run_postgres_migrations() {
|
||||
print_info "Running PostgreSQL migrations..."
|
||||
|
||||
# Migration files in order
|
||||
migration_files=(
|
||||
"db/001_schema.sql"
|
||||
"db/002_tools_migration.sql"
|
||||
"db/003_tools_pricing_migration.sql"
|
||||
)
|
||||
|
||||
# Set PGPASSWORD to avoid password prompts
|
||||
export PGPASSWORD="$POSTGRES_PASSWORD"
|
||||
|
||||
for migration_file in "${migration_files[@]}"; do
|
||||
if [ ! -f "$migration_file" ]; then
|
||||
print_error "Migration file not found: $migration_file"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_info "Running migration: $migration_file"
|
||||
|
||||
# Run migration with error handling
|
||||
if psql -h "$POSTGRES_HOST" -p "$POSTGRES_PORT" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -v ON_ERROR_STOP=1 -q -f "$migration_file"; then
|
||||
print_status "Migration completed: $migration_file"
|
||||
else
|
||||
print_error "Migration failed: $migration_file"
|
||||
print_info "Check the error logs above for details"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
# Unset password
|
||||
unset PGPASSWORD
|
||||
|
||||
print_status "All PostgreSQL migrations completed successfully"
|
||||
}
|
||||
|
||||
# Check if migration is needed and run if necessary
|
||||
if [ "$FORCE_MIGRATION" = true ]; then
|
||||
print_warning "Force migration enabled - running migrations..."
|
||||
run_postgres_migrations
|
||||
|
||||
# Verify migration was successful
|
||||
print_info "Verifying migration..."
|
||||
if check_database_migration; then
|
||||
print_status "Migration verification successful"
|
||||
else
|
||||
print_error "Migration verification failed"
|
||||
exit 1
|
||||
fi
|
||||
elif check_database_migration; then
|
||||
print_status "Database is already migrated"
|
||||
else
|
||||
print_warning "Database needs migration - running migrations..."
|
||||
run_postgres_migrations
|
||||
|
||||
# Verify migration was successful
|
||||
print_info "Verifying migration..."
|
||||
if check_database_migration; then
|
||||
print_status "Migration verification successful"
|
||||
else
|
||||
print_error "Migration verification failed"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check if Neo4j migration has been run
|
||||
print_info "Checking if Neo4j migration has been completed..."
|
||||
if ! python3 -c "
|
||||
from neo4j import GraphDatabase
|
||||
import os
|
||||
try:
|
||||
driver = GraphDatabase.driver(
|
||||
os.getenv('NEO4J_URI', 'bolt://neo4j:7687'),
|
||||
auth=(os.getenv('NEO4J_USER', 'neo4j'), os.getenv('NEO4J_PASSWORD', 'password'))
|
||||
)
|
||||
with driver.session() as session:
|
||||
result = session.run('MATCH (p:PriceTier) RETURN count(p) as count')
|
||||
price_tiers = result.single()['count']
|
||||
if price_tiers == 0:
|
||||
print('No data found in Neo4j - migration needed')
|
||||
exit(1)
|
||||
else:
|
||||
print(f'Found {price_tiers} price tiers - migration appears complete')
|
||||
driver.close()
|
||||
except Exception as e:
|
||||
print(f'Error checking migration status: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
print_warning "No data found in Neo4j - running migration..."
|
||||
|
||||
# Run migration
|
||||
if python3 migrate_postgres_to_neo4j.py; then
|
||||
print_status "Migration completed successfully"
|
||||
else
|
||||
print_error "Migration failed"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
print_status "Migration appears to be complete"
|
||||
fi
|
||||
|
||||
# Set environment variables for the application
|
||||
export NEO4J_URI="$NEO4J_URI"
|
||||
export NEO4J_USER="$NEO4J_USER"
|
||||
export NEO4J_PASSWORD="$NEO4J_PASSWORD"
|
||||
export POSTGRES_HOST="$POSTGRES_HOST"
|
||||
export POSTGRES_PORT="$POSTGRES_PORT"
|
||||
export POSTGRES_USER="$POSTGRES_USER"
|
||||
export POSTGRES_PASSWORD="$POSTGRES_PASSWORD"
|
||||
export POSTGRES_DB="$POSTGRES_DB"
|
||||
export CLAUDE_API_KEY="$CLAUDE_API_KEY"
|
||||
|
||||
print_status "Environment variables set"
|
||||
|
||||
# Create logs directory if it doesn't exist
|
||||
mkdir -p logs
|
||||
|
||||
# Start the migrated application
|
||||
print_info "Starting Enhanced Tech Stack Selector (Docker Version)..."
|
||||
print_info "Server will be available at: http://localhost:8002"
|
||||
print_info "API documentation: http://localhost:8002/docs"
|
||||
print_info "Health check: http://localhost:8002/health"
|
||||
print_info "Diagnostics: http://localhost:8002/api/diagnostics"
|
||||
print_info ""
|
||||
print_info "Press Ctrl+C to stop the server"
|
||||
print_info ""
|
||||
|
||||
# Start the application
|
||||
cd src
|
||||
python3 main_migrated.py
|
||||
232
services/tech-stack-selector/migrate_postgres_to_neo4j.py
Normal file
232
services/tech-stack-selector/migrate_postgres_to_neo4j.py
Normal file
@ -0,0 +1,232 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
PostgreSQL to Neo4j Migration Script
|
||||
Migrates existing PostgreSQL data to Neo4j with proper price-based relationships
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import subprocess
|
||||
from loguru import logger
|
||||
|
||||
def run_migration():
|
||||
"""Run the complete migration process"""
|
||||
|
||||
logger.info("="*60)
|
||||
logger.info("🚀 POSTGRESQL TO NEO4J MIGRATION")
|
||||
logger.info("="*60)
|
||||
logger.info("✅ Using existing PostgreSQL data")
|
||||
logger.info("✅ Creating price-based relationships")
|
||||
logger.info("✅ Migrating to Neo4j knowledge graph")
|
||||
logger.info("="*60)
|
||||
|
||||
# Get environment variables with defaults
|
||||
postgres_host = os.getenv("POSTGRES_HOST", "postgres")
|
||||
postgres_port = int(os.getenv("POSTGRES_PORT", "5432"))
|
||||
postgres_user = os.getenv("POSTGRES_USER", "pipeline_admin")
|
||||
postgres_password = os.getenv("POSTGRES_PASSWORD", "secure_pipeline_2024")
|
||||
postgres_db = os.getenv("POSTGRES_DB", "dev_pipeline")
|
||||
neo4j_uri = os.getenv("NEO4J_URI", "bolt://neo4j:7687")
|
||||
neo4j_user = os.getenv("NEO4J_USER", "neo4j")
|
||||
neo4j_password = os.getenv("NEO4J_PASSWORD", "password")
|
||||
|
||||
# Check if PostgreSQL is running
|
||||
logger.info("🔍 Checking PostgreSQL connection...")
|
||||
try:
|
||||
import psycopg2
|
||||
conn = psycopg2.connect(
|
||||
host=postgres_host,
|
||||
port=postgres_port,
|
||||
user=postgres_user,
|
||||
password=postgres_password,
|
||||
database=postgres_db
|
||||
)
|
||||
conn.close()
|
||||
logger.info("✅ PostgreSQL is running and accessible")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ PostgreSQL connection failed: {e}")
|
||||
logger.error("Please ensure PostgreSQL is running and the database is set up")
|
||||
return False
|
||||
|
||||
# Check if Neo4j is running
|
||||
logger.info("🔍 Checking Neo4j connection...")
|
||||
try:
|
||||
from neo4j import GraphDatabase
|
||||
driver = GraphDatabase.driver(neo4j_uri, auth=(neo4j_user, neo4j_password))
|
||||
driver.verify_connectivity()
|
||||
driver.close()
|
||||
logger.info("✅ Neo4j is running and accessible")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Neo4j connection failed: {e}")
|
||||
logger.error("Please ensure Neo4j is running")
|
||||
return False
|
||||
|
||||
# Set up Neo4j schema
|
||||
logger.info("🔧 Setting up Neo4j schema...")
|
||||
try:
|
||||
from neo4j import GraphDatabase
|
||||
driver = GraphDatabase.driver(neo4j_uri, auth=(neo4j_user, neo4j_password))
|
||||
|
||||
with driver.session() as session:
|
||||
# Read and execute the schema file
|
||||
with open("Neo4j_From_Postgres.cql", 'r') as f:
|
||||
cql_content = f.read()
|
||||
|
||||
# Split by semicolon and execute each statement
|
||||
statements = [stmt.strip() for stmt in cql_content.split(';') if stmt.strip()]
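# Note: this is a naive split; it assumes the .cql schema file contains no semicolons
# inside string literals or comments.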
|
||||
|
||||
for i, statement in enumerate(statements):
|
||||
if statement and not statement.startswith('//'):
|
||||
try:
|
||||
session.run(statement)
|
||||
logger.info(f"✅ Executed schema statement {i+1}/{len(statements)}")
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Schema statement {i+1} failed: {e}")
|
||||
continue
|
||||
|
||||
driver.close()
|
||||
logger.info("✅ Neo4j schema setup completed")
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Neo4j schema setup failed: {e}")
|
||||
return False
|
||||
|
||||
# Run the migration
|
||||
logger.info("🔄 Running PostgreSQL to Neo4j migration...")
|
||||
try:
|
||||
# Add src to path
|
||||
sys.path.append('src')
|
||||
|
||||
from postgres_to_neo4j_migration import PostgresToNeo4jMigration
|
||||
|
||||
# Configuration
|
||||
postgres_config = {
|
||||
"host": postgres_host,
|
||||
"port": postgres_port,
|
||||
"user": postgres_user,
|
||||
"password": postgres_password,
|
||||
"database": postgres_db
|
||||
}
|
||||
|
||||
neo4j_config = {
|
||||
"uri": neo4j_uri,
|
||||
"user": neo4j_user,
|
||||
"password": neo4j_password
|
||||
}
|
||||
|
||||
# Run migration
|
||||
migration = PostgresToNeo4jMigration(postgres_config, neo4j_config)
|
||||
success = migration.run_full_migration()
|
||||
|
||||
if success:
|
||||
logger.info("✅ Migration completed successfully!")
|
||||
return True
|
||||
else:
|
||||
logger.error("❌ Migration failed!")
|
||||
return False
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Migration failed: {e}")
|
||||
return False
|
||||
|
||||
def test_migrated_data():
|
||||
"""Test the migrated data"""
|
||||
logger.info("🧪 Testing migrated data...")
|
||||
|
||||
try:
|
||||
from neo4j import GraphDatabase
|
||||
|
||||
# neo4j_uri, neo4j_user and neo4j_password are locals of run_migration(), so re-read the
# connection settings from the environment here (same defaults as above)
driver = GraphDatabase.driver(os.getenv("NEO4J_URI", "bolt://neo4j:7687"), auth=(os.getenv("NEO4J_USER", "neo4j"), os.getenv("NEO4J_PASSWORD", "password")))
|
||||
|
||||
with driver.session() as session:
|
||||
# Test price tiers
|
||||
result = session.run("MATCH (p:PriceTier) RETURN count(p) as count")
|
||||
price_tiers_count = result.single()["count"]
|
||||
logger.info(f"✅ Price tiers: {price_tiers_count}")
|
||||
|
||||
# Test technologies
|
||||
result = session.run("MATCH (t:Technology) RETURN count(t) as count")
|
||||
technologies_count = result.single()["count"]
|
||||
logger.info(f"✅ Technologies: {technologies_count}")
|
||||
|
||||
# Test tools
|
||||
result = session.run("MATCH (tool:Tool) RETURN count(tool) as count")
|
||||
tools_count = result.single()["count"]
|
||||
logger.info(f"✅ Tools: {tools_count}")
|
||||
|
||||
# Test tech stacks
|
||||
result = session.run("MATCH (s:TechStack) RETURN count(s) as count")
|
||||
stacks_count = result.single()["count"]
|
||||
logger.info(f"✅ Tech stacks: {stacks_count}")
|
||||
|
||||
# Test relationships
|
||||
result = session.run("MATCH ()-[r]->() RETURN count(r) as count")
|
||||
relationships_count = result.single()["count"]
|
||||
logger.info(f"✅ Relationships: {relationships_count}")
|
||||
|
||||
# Test complete stacks
|
||||
result = session.run("""
|
||||
MATCH (s:TechStack)
|
||||
WHERE exists((s)-[:BELONGS_TO_TIER]->())
|
||||
AND exists((s)-[:USES_FRONTEND]->())
|
||||
AND exists((s)-[:USES_BACKEND]->())
|
||||
AND exists((s)-[:USES_DATABASE]->())
|
||||
AND exists((s)-[:USES_CLOUD]->())
|
||||
RETURN count(s) as count
|
||||
""")
|
||||
complete_stacks_count = result.single()["count"]
|
||||
logger.info(f"✅ Complete stacks: {complete_stacks_count}")
|
||||
|
||||
driver.close()
|
||||
logger.info("✅ Data validation completed successfully!")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Data validation failed: {e}")
|
||||
return False
|
||||
|
||||
def start_migrated_service():
|
||||
"""Start the migrated service"""
|
||||
logger.info("🚀 Starting migrated service...")
|
||||
|
||||
try:
|
||||
# Make sure the child process gets the connection settings from the environment;
# the values read in run_migration() are local to that function, so fall back to
# the same defaults here instead of referencing them directly
os.environ.setdefault("NEO4J_URI", "bolt://neo4j:7687")
os.environ.setdefault("NEO4J_USER", "neo4j")
os.environ.setdefault("NEO4J_PASSWORD", "password")
os.environ.setdefault("POSTGRES_HOST", "postgres")
os.environ.setdefault("POSTGRES_PORT", "5432")
os.environ.setdefault("POSTGRES_USER", "pipeline_admin")
os.environ.setdefault("POSTGRES_PASSWORD", "secure_pipeline_2024")
os.environ.setdefault("POSTGRES_DB", "dev_pipeline")
|
||||
os.environ["CLAUDE_API_KEY"] = "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"
|
||||
|
||||
# Start the service
|
||||
subprocess.run([
|
||||
sys.executable, "src/main_migrated.py"
|
||||
])
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to start migrated service: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Run migration
|
||||
if run_migration():
|
||||
logger.info("✅ Migration completed successfully!")
|
||||
|
||||
# Test migrated data
|
||||
if test_migrated_data():
|
||||
logger.info("✅ Data validation passed!")
|
||||
|
||||
# Ask user if they want to start the service
|
||||
response = input("\n🚀 Start the migrated service? (y/n): ")
|
||||
if response.lower() in ['y', 'yes']:
|
||||
start_migrated_service()
|
||||
else:
|
||||
logger.info("✅ Migration completed. You can start the service later with:")
|
||||
logger.info(" python src/main_migrated.py")
|
||||
else:
|
||||
logger.error("❌ Data validation failed!")
|
||||
sys.exit(1)
|
||||
else:
|
||||
logger.error("❌ Migration failed!")
|
||||
sys.exit(1)
|
||||
1337
services/tech-stack-selector/postman_collection.json
Normal file
1337
services/tech-stack-selector/postman_collection.json
Normal file
File diff suppressed because it is too large
@ -38,3 +38,5 @@ yarl>=1.9.0
six>=1.16.0
pytz>=2023.3
greenlet>=3.0.0
psycopg2-binary==2.9.9
neo4j>=5.0.0
@ -1,944 +0,0 @@
|
||||
|
||||
|
||||
# import os
|
||||
# import sys
|
||||
# import json
|
||||
# from datetime import datetime
|
||||
# from typing import Dict, Any, Optional, List
|
||||
# from pydantic import BaseModel
|
||||
# from fastapi import FastAPI, HTTPException, Request
|
||||
# from fastapi.middleware.cors import CORSMiddleware
|
||||
# from loguru import logger
|
||||
|
||||
# # AI integration
|
||||
# try:
|
||||
# import anthropic
|
||||
# CLAUDE_AVAILABLE = True
|
||||
# except ImportError:
|
||||
# CLAUDE_AVAILABLE = False
|
||||
|
||||
# # Configure logging
|
||||
# logger.remove()
|
||||
# logger.add(sys.stdout, level="INFO", format="{time} | {level} | {message}")
|
||||
|
||||
# # API Key
|
||||
# CLAUDE_API_KEY = "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"
|
||||
|
||||
# if not os.getenv("CLAUDE_API_KEY") and CLAUDE_API_KEY:
|
||||
# os.environ["CLAUDE_API_KEY"] = CLAUDE_API_KEY
|
||||
|
||||
# # ================================================================================================
|
||||
# # ENHANCED TECH STACK SELECTOR - WITH FUNCTIONAL REQUIREMENTS DISPLAY
|
||||
# # ================================================================================================
|
||||
|
||||
# class EnhancedTechStackSelector:
|
||||
# """Enhanced selector that handles business context + functional requirements"""
|
||||
|
||||
# def __init__(self):
|
||||
# self.claude_client = anthropic.Anthropic(api_key=CLAUDE_API_KEY) if CLAUDE_AVAILABLE else None
|
||||
# logger.info("Enhanced Tech Stack Selector initialized")
|
||||
|
||||
# # ================================================================================================
|
||||
# # FASTAPI APPLICATION
|
||||
# # ================================================================================================
|
||||
|
||||
# app = FastAPI(
|
||||
# title="Enhanced Tech Stack Selector",
|
||||
# description="Enhanced tech stack recommendations with functional requirements display",
|
||||
# version="11.0.0"
|
||||
# )
|
||||
|
||||
# app.add_middleware(
|
||||
# CORSMiddleware,
|
||||
# allow_origins=["*"],
|
||||
# allow_credentials=True,
|
||||
# allow_methods=["*"],
|
||||
# allow_headers=["*"],
|
||||
# )
|
||||
|
||||
# # Initialize enhanced selector
|
||||
# enhanced_selector = EnhancedTechStackSelector()
|
||||
|
||||
# @app.get("/health")
|
||||
# async def health_check():
|
||||
# """Health check"""
|
||||
# return {
|
||||
# "status": "healthy",
|
||||
# "service": "enhanced-tech-stack-selector",
|
||||
# "version": "11.0.0",
|
||||
# "approach": "functional_requirements_aware_recommendations"
|
||||
# }
|
||||
|
||||
# @app.post("/api/v1/select")
|
||||
# async def select_enhanced_tech_stack(request: Request):
|
||||
# """ENHANCED VERSION - Shows functional requirements + tech recommendations for architecture-designer"""
|
||||
# try:
|
||||
# request_data = await request.json()
|
||||
|
||||
# # Log exactly what we receive
|
||||
# logger.info("=== RECEIVED ENHANCED DATA START ===")
|
||||
# logger.info(json.dumps(request_data, indent=2, default=str))
|
||||
# logger.info("=== RECEIVED ENHANCED DATA END ===")
|
||||
|
||||
# # Extract enhanced data components
|
||||
# extracted_data = extract_enhanced_data(request_data)
|
||||
|
||||
# if not extracted_data["features"] and not extracted_data["feature_name"]:
|
||||
# logger.error("❌ NO FEATURES OR FEATURE DATA FOUND")
|
||||
# return {
|
||||
# "error": "No features or feature data found in request",
|
||||
# "received_data_keys": list(request_data.keys()) if isinstance(request_data, dict) else "not_dict",
|
||||
# "extraction_attempted": "enhanced_data_extraction"
|
||||
# }
|
||||
|
||||
# # Build comprehensive context for Claude
|
||||
# context = build_comprehensive_context(extracted_data)
|
||||
|
||||
# # Generate enhanced tech stack recommendations
|
||||
# recommendations = await generate_enhanced_recommendations(context)
|
||||
|
||||
# # NEW: Build complete response with functional requirements for architecture-designer
|
||||
# complete_response = {
|
||||
# "success": True,
|
||||
# "enhanced_analysis": True,
|
||||
|
||||
# # PROJECT CONTEXT - For Web Dashboard Display
|
||||
# "project_context": {
|
||||
# "project_name": extracted_data["project_name"],
|
||||
# "project_type": extracted_data["project_type"],
|
||||
# "features_analyzed": len(extracted_data["features"]),
|
||||
# "business_questions_answered": len(extracted_data["business_answers"]),
|
||||
# "complexity": extracted_data["complexity"]
|
||||
# },
|
||||
|
||||
# # FUNCTIONAL REQUIREMENTS - For Web Dashboard Display & Architecture Designer
|
||||
# "functional_requirements": {
|
||||
# "feature_name": extracted_data["feature_name"],
|
||||
# "description": extracted_data["description"],
|
||||
# "technical_requirements": extracted_data["requirements"],
|
||||
# "business_logic_rules": extracted_data["logic_rules"],
|
||||
# "complexity_level": extracted_data["complexity"],
|
||||
# "all_features": extracted_data["features"],
|
||||
# "business_context": {
|
||||
# "questions": extracted_data["business_questions"],
|
||||
# "answers": extracted_data["business_answers"]
|
||||
# }
|
||||
# },
|
||||
|
||||
# # TECHNOLOGY RECOMMENDATIONS - Claude Generated
|
||||
# "claude_recommendations": recommendations,
|
||||
|
||||
# # COMPLETE DATA FOR ARCHITECTURE DESIGNER
|
||||
# "architecture_designer_input": {
|
||||
# "project_data": {
|
||||
# "project_name": extracted_data["project_name"],
|
||||
# "project_type": extracted_data["project_type"],
|
||||
# "complexity": extracted_data["complexity"]
|
||||
# },
|
||||
# "functional_specifications": {
|
||||
# "primary_feature": {
|
||||
# "name": extracted_data["feature_name"],
|
||||
# "description": extracted_data["description"],
|
||||
# "requirements": extracted_data["requirements"],
|
||||
# "logic_rules": extracted_data["logic_rules"]
|
||||
# },
|
||||
# "all_features": extracted_data["features"],
|
||||
# "business_context": extracted_data["business_answers"]
|
||||
# },
|
||||
# "technology_stack": recommendations,
|
||||
# "business_requirements": context["business_context"]
|
||||
# },
|
||||
|
||||
# "analysis_timestamp": datetime.utcnow().isoformat(),
|
||||
# "ready_for_architecture_design": True
|
||||
# }
|
||||
|
||||
# logger.info(f"✅ Enhanced tech stack analysis completed with functional requirements")
|
||||
# logger.info(f" Feature: {extracted_data['feature_name']}")
|
||||
# logger.info(f" Requirements: {len(extracted_data['requirements'])}")
|
||||
# logger.info(f" Logic Rules: {len(extracted_data['logic_rules'])}")
|
||||
# logger.info(f" Business Answers: {len(extracted_data['business_answers'])}")
|
||||
|
||||
# return complete_response
|
||||
|
||||
# except Exception as e:
|
||||
# logger.error(f"💥 ERROR in enhanced tech stack selection: {e}")
|
||||
# return {
|
||||
# "error": str(e),
|
||||
# "debug": "Check service logs for detailed error information"
|
||||
# }
|
||||
|
||||
# def extract_enhanced_data(request_data: Dict) -> Dict:
|
||||
# """Extract enhanced data from web dashboard request"""
|
||||
# extracted = {
|
||||
# "project_name": "Unknown Project",
|
||||
# "project_type": "unknown",
|
||||
# "feature_name": "",
|
||||
# "description": "",
|
||||
# "requirements": [],
|
||||
# "complexity": "medium",
|
||||
# "logic_rules": [],
|
||||
# "business_questions": [],
|
||||
# "business_answers": [],
|
||||
# "features": [],
|
||||
# "all_features": []
|
||||
# }
|
||||
|
||||
# logger.info("🔍 Extracting enhanced data with functional requirements...")
|
||||
|
||||
# # Path 1: Direct enhanced data format from web dashboard
|
||||
# if isinstance(request_data, dict):
|
||||
# # Extract main feature data
|
||||
# extracted["feature_name"] = request_data.get("featureName", "")
|
||||
# extracted["description"] = request_data.get("description", "")
|
||||
# extracted["requirements"] = request_data.get("requirements", [])
|
||||
# extracted["complexity"] = request_data.get("complexity", "medium")
|
||||
# extracted["logic_rules"] = request_data.get("logicRules", [])
|
||||
# extracted["business_questions"] = request_data.get("businessQuestions", [])
|
||||
# extracted["business_answers"] = request_data.get("businessAnswers", [])
|
||||
# extracted["project_name"] = request_data.get("projectName", "Unknown Project")
|
||||
# extracted["project_type"] = request_data.get("projectType", "unknown")
|
||||
# extracted["all_features"] = request_data.get("allFeatures", [])
|
||||
|
||||
# # If we have business answers in object format, convert to list
|
||||
# if isinstance(extracted["business_answers"], dict):
|
||||
# ba_list = []
|
||||
# for key, value in extracted["business_answers"].items():
|
||||
# if isinstance(value, str) and value.strip():
|
||||
# question_idx = int(key) if key.isdigit() else 0
|
||||
# if question_idx < len(extracted["business_questions"]):
|
||||
# ba_list.append({
|
||||
# "question": extracted["business_questions"][question_idx],
|
||||
# "answer": value.strip()
|
||||
# })
|
||||
# extracted["business_answers"] = ba_list
|
||||
|
||||
# # Extract features list
|
||||
# if extracted["feature_name"]:
|
||||
# extracted["features"] = [extracted["feature_name"]]
|
||||
|
||||
# # Add all features if available
|
||||
# if extracted["all_features"]:
|
||||
# feature_names = []
|
||||
# for feature in extracted["all_features"]:
|
||||
# if isinstance(feature, dict):
|
||||
# feature_names.append(feature.get("name", feature.get("featureName", "")))
|
||||
# else:
|
||||
# feature_names.append(str(feature))
|
||||
# extracted["features"].extend([f for f in feature_names if f])
|
||||
|
||||
# logger.info(f"✅ Extracted enhanced data with functional requirements:")
|
||||
# logger.info(f" Project: {extracted['project_name']} ({extracted['project_type']})")
|
||||
# logger.info(f" Main feature: {extracted['feature_name']}")
|
||||
# logger.info(f" Requirements: {len(extracted['requirements'])}")
|
||||
# logger.info(f" Logic Rules: {len(extracted['logic_rules'])}")
|
||||
# logger.info(f" Complexity: {extracted['complexity']}")
|
||||
# logger.info(f" Business answers: {len(extracted['business_answers'])}")
|
||||
# logger.info(f" Total features: {len(extracted['features'])}")
|
||||
|
||||
# return extracted
|
||||
|
||||
# def build_comprehensive_context(extracted_data: Dict) -> Dict:
|
||||
# """Build comprehensive context for Claude analysis"""
|
||||
|
||||
# # Combine all requirements and business insights
|
||||
# functional_requirements = []
|
||||
# if extracted_data["feature_name"]:
|
||||
# functional_requirements.append(f"Core Feature: {extracted_data['feature_name']}")
|
||||
|
||||
# if extracted_data["requirements"]:
|
||||
# functional_requirements.extend([f"• {req}" for req in extracted_data["requirements"]])
|
||||
|
||||
# if extracted_data["features"]:
|
||||
# for feature in extracted_data["features"]:
|
||||
# if feature and feature != extracted_data["feature_name"]:
|
||||
# functional_requirements.append(f"• {feature}")
|
||||
|
||||
# # Business context from answers
|
||||
# business_context = {}
|
||||
# if extracted_data["business_answers"]:
|
||||
# for answer_data in extracted_data["business_answers"]:
|
||||
# if isinstance(answer_data, dict):
|
||||
# question = answer_data.get("question", "")
|
||||
# answer = answer_data.get("answer", "")
|
||||
# if question and answer:
|
||||
# # Categorize business answers
|
||||
# if any(keyword in question.lower() for keyword in ["user", "scale", "concurrent"]):
|
||||
# business_context["scale_requirements"] = business_context.get("scale_requirements", [])
|
||||
# business_context["scale_requirements"].append(f"{question}: {answer}")
|
||||
# elif any(keyword in question.lower() for keyword in ["compliance", "security", "encryption"]):
|
||||
# business_context["security_requirements"] = business_context.get("security_requirements", [])
|
||||
# business_context["security_requirements"].append(f"{question}: {answer}")
|
||||
# elif any(keyword in question.lower() for keyword in ["budget", "timeline", "timeline"]):
|
||||
# business_context["project_constraints"] = business_context.get("project_constraints", [])
|
||||
# business_context["project_constraints"].append(f"{question}: {answer}")
|
||||
# else:
|
||||
# business_context["other_requirements"] = business_context.get("other_requirements", [])
|
||||
# business_context["other_requirements"].append(f"{question}: {answer}")
|
||||
|
||||
# return {
|
||||
# "project_name": extracted_data["project_name"],
|
||||
# "project_type": extracted_data["project_type"],
|
||||
# "complexity": extracted_data["complexity"],
|
||||
# "functional_requirements": functional_requirements,
|
||||
# "business_context": business_context,
|
||||
# "logic_rules": extracted_data["logic_rules"]
|
||||
# }
|
||||
|
||||
# async def generate_enhanced_recommendations(context: Dict) -> Dict:
|
||||
# """Generate enhanced tech stack recommendations using Claude with business context"""
|
||||
|
||||
# if not enhanced_selector.claude_client:
|
||||
# logger.error("❌ Claude client not available")
|
||||
# return {
|
||||
# "error": "Claude AI not available",
|
||||
# "fallback": "Basic recommendations would go here"
|
||||
# }
|
||||
|
||||
# # Build comprehensive prompt with business context
|
||||
# functional_reqs_text = "\n".join(context["functional_requirements"])
|
||||
|
||||
# business_context_text = ""
|
||||
# for category, requirements in context["business_context"].items():
|
||||
# business_context_text += f"\n{category.replace('_', ' ').title()}:\n"
|
||||
# business_context_text += "\n".join([f" - {req}" for req in requirements]) + "\n"
|
||||
|
||||
# logic_rules_text = "\n".join([f" - {rule}" for rule in context["logic_rules"]])
|
||||
|
||||
# prompt = f"""You are a senior software architect. Analyze this comprehensive project context and recommend the optimal technology stack.
|
||||
|
||||
# PROJECT CONTEXT:
|
||||
# - Name: {context["project_name"]}
|
||||
# - Type: {context["project_type"]}
|
||||
# - Complexity: {context["complexity"]}
|
||||
|
||||
# FUNCTIONAL REQUIREMENTS:
|
||||
# {functional_reqs_text}
|
||||
|
||||
# BUSINESS CONTEXT & CONSTRAINTS:
|
||||
# {business_context_text}
|
||||
|
||||
# BUSINESS LOGIC RULES:
|
||||
# {logic_rules_text}
|
||||
|
||||
# Based on this comprehensive analysis, provide detailed technology recommendations as a JSON object:
|
||||
|
||||
# {{
|
||||
# "technology_recommendations": {{
|
||||
# "frontend": {{
|
||||
# "framework": "recommended framework",
|
||||
# "libraries": ["lib1", "lib2", "lib3"],
|
||||
# "reasoning": "detailed reasoning based on requirements and business context"
|
||||
# }},
|
||||
# "backend": {{
|
||||
# "framework": "recommended backend framework",
|
||||
# "language": "programming language",
|
||||
# "libraries": ["lib1", "lib2", "lib3"],
|
||||
# "reasoning": "detailed reasoning based on complexity and business needs"
|
||||
# }},
|
||||
# "database": {{
|
||||
# "primary": "primary database choice",
|
||||
# "secondary": ["cache", "search", "analytics"],
|
||||
# "reasoning": "database choice based on data requirements and scale"
|
||||
# }},
|
||||
# "infrastructure": {{
|
||||
# "cloud_provider": "recommended cloud provider",
|
||||
# "orchestration": "container/orchestration choice",
|
||||
# "services": ["service1", "service2", "service3"],
|
||||
# "reasoning": "infrastructure reasoning based on scale and budget"
|
||||
# }},
|
||||
# "security": {{
|
||||
# "authentication": "auth strategy",
|
||||
# "authorization": "authorization approach",
|
||||
# "data_protection": "data protection measures",
|
||||
# "compliance": "compliance approach",
|
||||
# "reasoning": "security reasoning based on business context"
|
||||
# }},
|
||||
# "third_party_services": {{
|
||||
# "communication": "communication services",
|
||||
# "monitoring": "monitoring solution",
|
||||
# "payment": "payment processing",
|
||||
# "other_services": ["service1", "service2"],
|
||||
# "reasoning": "third-party service reasoning"
|
||||
# }}
|
||||
# }},
|
||||
# "implementation_strategy": {{
|
||||
# "architecture_pattern": "recommended architecture pattern",
|
||||
# "development_phases": ["phase1", "phase2", "phase3"],
|
||||
# "deployment_strategy": "deployment approach",
|
||||
# "scalability_approach": "scalability strategy",
|
||||
# "timeline_estimate": "development timeline estimate"
|
||||
# }},
|
||||
# "business_alignment": {{
|
||||
# "addresses_scale_requirements": "how recommendations address scale needs",
|
||||
# "addresses_security_requirements": "how recommendations address security needs",
|
||||
# "addresses_budget_constraints": "how recommendations fit budget",
|
||||
# "addresses_timeline_constraints": "how recommendations fit timeline",
|
||||
# "compliance_considerations": "compliance alignment"
|
||||
# }}
|
||||
# }}
|
||||
|
||||
# CRITICAL: Return ONLY valid JSON, no additional text. Base all recommendations on the provided functional requirements and business context."""
|
||||
|
||||
# try:
|
||||
# logger.info("📞 Calling Claude for enhanced recommendations with functional requirements...")
|
||||
# message = enhanced_selector.claude_client.messages.create(
|
||||
# model="claude-3-5-sonnet-20241022",
|
||||
# max_tokens=8000,
|
||||
# temperature=0.1,
|
||||
# messages=[{"role": "user", "content": prompt}]
|
||||
# )
|
||||
|
||||
# claude_response = message.content[0].text.strip()
|
||||
# logger.info("✅ Received Claude response for enhanced recommendations")
|
||||
|
||||
# # Parse JSON response
|
||||
# try:
|
||||
# recommendations = json.loads(claude_response)
|
||||
# logger.info("✅ Successfully parsed enhanced recommendations JSON")
|
||||
# return recommendations
|
||||
# except json.JSONDecodeError as e:
|
||||
# logger.error(f"❌ JSON parse error: {e}")
|
||||
# return {
|
||||
# "parse_error": str(e),
|
||||
# "raw_response": claude_response[:1000] + "..." if len(claude_response) > 1000 else claude_response
|
||||
# }
|
||||
|
||||
# except Exception as e:
|
||||
# logger.error(f"❌ Claude API error: {e}")
|
||||
# return {
|
||||
# "error": str(e),
|
||||
# "fallback": "Enhanced recommendations generation failed"
|
||||
# }
|
||||
|
||||
# if __name__ == "__main__":
|
||||
# import uvicorn
|
||||
|
||||
# logger.info("="*60)
|
||||
# logger.info("🚀 ENHANCED TECH STACK SELECTOR v11.0 - FUNCTIONAL REQUIREMENTS AWARE")
|
||||
# logger.info("="*60)
|
||||
# logger.info("✅ Enhanced data extraction from web dashboard")
|
||||
# logger.info("✅ Functional requirements display")
|
||||
# logger.info("✅ Business context analysis")
|
||||
# logger.info("✅ Complete data for architecture-designer")
|
||||
# logger.info("✅ Comprehensive Claude recommendations")
|
||||
# logger.info("="*60)
|
||||
|
||||
# uvicorn.run("main:app", host="0.0.0.0", port=8002, log_level="info")
|
||||
|
||||
|
||||
|
||||
# ENHANCED TECH STACK SELECTOR - SHOWS FUNCTIONAL REQUIREMENTS + TECH RECOMMENDATIONS
|
||||
# Now includes requirement-processor data in output for architecture-designer
|
||||
# ENHANCED: Added tagged rules support while preserving ALL working functionality
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any, Optional, List
|
||||
from pydantic import BaseModel
|
||||
from fastapi import FastAPI, HTTPException, Request
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from loguru import logger
|
||||
|
||||
# AI integration
|
||||
try:
|
||||
import anthropic
|
||||
CLAUDE_AVAILABLE = True
|
||||
except ImportError:
|
||||
CLAUDE_AVAILABLE = False
|
||||
|
||||
# Configure logging
|
||||
logger.remove()
|
||||
logger.add(sys.stdout, level="INFO", format="{time} | {level} | {message}")
|
||||
|
||||
# API Key
|
||||
CLAUDE_API_KEY = "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"
|
||||
|
||||
if not os.getenv("CLAUDE_API_KEY") and CLAUDE_API_KEY:
|
||||
os.environ["CLAUDE_API_KEY"] = CLAUDE_API_KEY
|
||||
|
||||
# ================================================================================================
|
||||
# ENHANCED TECH STACK SELECTOR - WITH FUNCTIONAL REQUIREMENTS DISPLAY
|
||||
# ================================================================================================
|
||||
|
||||
class EnhancedTechStackSelector:
|
||||
"""Enhanced selector that handles business context + functional requirements"""
|
||||
|
||||
def __init__(self):
|
||||
self.claude_client = anthropic.Anthropic(api_key=CLAUDE_API_KEY) if CLAUDE_AVAILABLE else None
|
||||
logger.info("Enhanced Tech Stack Selector initialized")
|
||||
|
||||
# ================================================================================================
|
||||
# FASTAPI APPLICATION
|
||||
# ================================================================================================
|
||||
|
||||
app = FastAPI(
|
||||
title="Enhanced Tech Stack Selector",
|
||||
description="Enhanced tech stack recommendations with functional requirements display",
|
||||
version="11.1.0"
|
||||
)
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Initialize enhanced selector
|
||||
enhanced_selector = EnhancedTechStackSelector()
|
||||
|
||||
@app.get("/health")
|
||||
async def health_check():
|
||||
"""Health check"""
|
||||
return {
|
||||
"status": "healthy",
|
||||
"service": "enhanced-tech-stack-selector",
|
||||
"version": "11.1.0",
|
||||
"approach": "functional_requirements_aware_recommendations",
|
||||
"new_features": ["tagged_rules_support"]
|
||||
}
|
||||
|
||||
@app.post("/api/v1/select")
|
||||
async def select_enhanced_tech_stack(request: Request):
|
||||
"""ENHANCED VERSION - Shows functional requirements + tech recommendations for architecture-designer"""
|
||||
try:
|
||||
request_data = await request.json()
|
||||
|
||||
# Log exactly what we receive
|
||||
logger.info("=== RECEIVED ENHANCED DATA START ===")
|
||||
logger.info(json.dumps(request_data, indent=2, default=str))
|
||||
logger.info("=== RECEIVED ENHANCED DATA END ===")
|
||||
|
||||
# Extract enhanced data components - ENHANCED with tagged rules
|
||||
extracted_data = extract_enhanced_data(request_data)
|
||||
|
||||
if not extracted_data["features"] and not extracted_data["feature_name"]:
|
||||
logger.error("❌ NO FEATURES OR FEATURE DATA FOUND")
|
||||
return {
|
||||
"error": "No features or feature data found in request",
|
||||
"received_data_keys": list(request_data.keys()) if isinstance(request_data, dict) else "not_dict",
|
||||
"extraction_attempted": "enhanced_data_extraction"
|
||||
}
|
||||
|
||||
# Build comprehensive context for Claude - ENHANCED with tagged rules
|
||||
context = build_comprehensive_context(extracted_data)
|
||||
|
||||
# Generate enhanced tech stack recommendations - SAME working logic
|
||||
recommendations = await generate_enhanced_recommendations(context)
|
||||
|
||||
# NEW: Build complete response with functional requirements for architecture-designer - ENHANCED
|
||||
complete_response = {
|
||||
"success": True,
|
||||
"enhanced_analysis": True,
|
||||
|
||||
# PROJECT CONTEXT - For Web Dashboard Display
|
||||
"project_context": {
|
||||
"project_name": extracted_data["project_name"],
|
||||
"project_type": extracted_data["project_type"],
|
||||
"features_analyzed": len(extracted_data["features"]),
|
||||
"business_questions_answered": len(extracted_data["business_answers"]),
|
||||
"complexity": extracted_data["complexity"],
|
||||
# NEW: Tagged rules info
|
||||
"detailed_requirements_count": len(extracted_data.get("detailed_requirements", [])),
|
||||
"total_tagged_rules": extracted_data.get("total_tagged_rules", 0)
|
||||
},
|
||||
|
||||
# FUNCTIONAL REQUIREMENTS - For Web Dashboard Display & Architecture Designer - ENHANCED
|
||||
"functional_requirements": {
|
||||
"feature_name": extracted_data["feature_name"],
|
||||
"description": extracted_data["description"],
|
||||
"technical_requirements": extracted_data["requirements"],
|
||||
"business_logic_rules": extracted_data["logic_rules"],
|
||||
"complexity_level": extracted_data["complexity"],
|
||||
"all_features": extracted_data["features"],
|
||||
# NEW: Tagged rules data
|
||||
"detailed_requirements": extracted_data.get("detailed_requirements", []),
|
||||
"tagged_rules": extracted_data.get("tagged_rules", []),
|
||||
"business_context": {
|
||||
"questions": extracted_data["business_questions"],
|
||||
"answers": extracted_data["business_answers"]
|
||||
}
|
||||
},
|
||||
|
||||
# TECHNOLOGY RECOMMENDATIONS - Claude Generated - SAME working logic
|
||||
"claude_recommendations": recommendations,
|
||||
|
||||
# COMPLETE DATA FOR ARCHITECTURE DESIGNER - ENHANCED
|
||||
"architecture_designer_input": {
|
||||
"project_data": {
|
||||
"project_name": extracted_data["project_name"],
|
||||
"project_type": extracted_data["project_type"],
|
||||
"complexity": extracted_data["complexity"]
|
||||
},
|
||||
"functional_specifications": {
|
||||
"primary_feature": {
|
||||
"name": extracted_data["feature_name"],
|
||||
"description": extracted_data["description"],
|
||||
"requirements": extracted_data["requirements"],
|
||||
"logic_rules": extracted_data["logic_rules"]
|
||||
},
|
||||
"all_features": extracted_data["features"],
|
||||
# NEW: Tagged rules for architecture designer
|
||||
"detailed_requirements": extracted_data.get("detailed_requirements", []),
|
||||
"tagged_rules": extracted_data.get("tagged_rules", []),
|
||||
"business_context": extracted_data["business_answers"]
|
||||
},
|
||||
"technology_stack": recommendations,
|
||||
"business_requirements": context["business_context"]
|
||||
},
|
||||
|
||||
"analysis_timestamp": datetime.utcnow().isoformat(),
|
||||
"ready_for_architecture_design": True
|
||||
}
|
||||
|
||||
logger.info(f"✅ Enhanced tech stack analysis completed with functional requirements")
|
||||
logger.info(f" Feature: {extracted_data['feature_name']}")
|
||||
logger.info(f" Requirements: {len(extracted_data['requirements'])}")
|
||||
logger.info(f" Logic Rules: {len(extracted_data['logic_rules'])}")
|
||||
logger.info(f" Business Answers: {len(extracted_data['business_answers'])}")
|
||||
# NEW: Tagged rules logging
|
||||
logger.info(f" Detailed Requirements: {len(extracted_data.get('detailed_requirements', []))}")
|
||||
logger.info(f" Tagged Rules: {extracted_data.get('total_tagged_rules', 0)}")
|
||||
|
||||
return complete_response
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"💥 ERROR in enhanced tech stack selection: {e}")
|
||||
return {
|
||||
"error": str(e),
|
||||
"debug": "Check service logs for detailed error information"
|
||||
}
|
||||
|
||||
def extract_enhanced_data(request_data: Dict) -> Dict:
|
||||
"""Extract enhanced data from web dashboard request - ENHANCED with tagged rules support"""
|
||||
extracted = {
|
||||
"project_name": "Unknown Project",
|
||||
"project_type": "unknown",
|
||||
"feature_name": "",
|
||||
"description": "",
|
||||
"requirements": [],
|
||||
"complexity": "medium",
|
||||
"logic_rules": [],
|
||||
"business_questions": [],
|
||||
"business_answers": [],
|
||||
"features": [],
|
||||
"all_features": [],
|
||||
# NEW: Tagged rules support
|
||||
"detailed_requirements": [],
|
||||
"tagged_rules": [],
|
||||
"total_tagged_rules": 0
|
||||
}
|
||||
|
||||
logger.info("🔍 Extracting enhanced data with functional requirements and tagged rules...")
|
||||
|
||||
# Path 1: Direct enhanced data format from web dashboard - SAME working logic
|
||||
if isinstance(request_data, dict):
|
||||
# Extract main feature data - SAME as before
|
||||
extracted["feature_name"] = request_data.get("featureName", "")
|
||||
extracted["description"] = request_data.get("description", "")
|
||||
extracted["requirements"] = request_data.get("requirements", [])
|
||||
extracted["complexity"] = request_data.get("complexity", "medium")
|
||||
extracted["logic_rules"] = request_data.get("logicRules", [])
|
||||
extracted["business_questions"] = request_data.get("businessQuestions", [])
|
||||
extracted["business_answers"] = request_data.get("businessAnswers", [])
|
||||
extracted["project_name"] = request_data.get("projectName", "Unknown Project")
|
||||
extracted["project_type"] = request_data.get("projectType", "unknown")
|
||||
extracted["all_features"] = request_data.get("allFeatures", [])
|
||||
|
||||
# If we have business answers in object format, convert to list - SAME as before
|
||||
if isinstance(extracted["business_answers"], dict):
|
||||
ba_list = []
|
||||
for key, value in extracted["business_answers"].items():
|
||||
if isinstance(value, str) and value.strip():
|
||||
question_idx = int(key) if key.isdigit() else 0
|
||||
if question_idx < len(extracted["business_questions"]):
|
||||
ba_list.append({
|
||||
"question": extracted["business_questions"][question_idx],
|
||||
"answer": value.strip()
|
||||
})
|
||||
extracted["business_answers"] = ba_list
|
||||
|
||||
# Extract features list - SAME as before
|
||||
if extracted["feature_name"]:
|
||||
extracted["features"] = [extracted["feature_name"]]
|
||||
|
||||
# Add all features if available - ENHANCED with tagged rules extraction
|
||||
if extracted["all_features"]:
|
||||
feature_names = []
|
||||
for feature in extracted["all_features"]:
|
||||
if isinstance(feature, dict):
|
||||
feature_name = feature.get("name", feature.get("featureName", ""))
|
||||
feature_names.append(feature_name)
|
||||
|
||||
# NEW: Extract tagged rules from requirementAnalysis
|
||||
requirement_analysis = feature.get("requirementAnalysis", [])
|
||||
if requirement_analysis:
|
||||
logger.info(f"📋 Found tagged rules for feature: {feature_name}")
|
||||
|
||||
for req_analysis in requirement_analysis:
|
||||
requirement_name = req_analysis.get("requirement", "Unknown Requirement")
|
||||
requirement_rules = req_analysis.get("logicRules", [])
|
||||
|
||||
# Create detailed requirement entry
|
||||
detailed_req = {
|
||||
"feature_name": feature_name,
|
||||
"requirement_name": requirement_name,
|
||||
"description": feature.get("description", ""),
|
||||
"complexity": req_analysis.get("complexity", "medium"),
|
||||
"rules": requirement_rules
|
||||
}
|
||||
extracted["detailed_requirements"].append(detailed_req)
|
||||
|
||||
# Add tagged rules
|
||||
for rule_idx, rule in enumerate(requirement_rules):
|
||||
if rule and rule.strip():
|
||||
tagged_rule = {
|
||||
"rule_id": f"R{rule_idx + 1}",
|
||||
"rule_text": rule.strip(),
|
||||
"feature_name": feature_name,
|
||||
"requirement_name": requirement_name
|
||||
}
|
||||
extracted["tagged_rules"].append(tagged_rule)
|
||||
extracted["total_tagged_rules"] += 1
|
||||
|
||||
# Fallback: Add regular logic rules to main logic_rules if no tagged rules
|
||||
elif feature.get("logicRules"):
|
||||
regular_rules = feature.get("logicRules", [])
|
||||
extracted["logic_rules"].extend(regular_rules)
|
||||
|
||||
else:
|
||||
feature_names.append(str(feature))
|
||||
|
||||
extracted["features"].extend([f for f in feature_names if f])
|
||||
|
||||
logger.info(f"✅ Extracted enhanced data with functional requirements and tagged rules:")
|
||||
logger.info(f" Project: {extracted['project_name']} ({extracted['project_type']})")
|
||||
logger.info(f" Main feature: {extracted['feature_name']}")
|
||||
logger.info(f" Requirements: {len(extracted['requirements'])}")
|
||||
logger.info(f" Logic Rules: {len(extracted['logic_rules'])}")
|
||||
logger.info(f" Complexity: {extracted['complexity']}")
|
||||
logger.info(f" Business answers: {len(extracted['business_answers'])}")
|
||||
logger.info(f" Total features: {len(extracted['features'])}")
|
||||
# NEW: Tagged rules logging
|
||||
logger.info(f" Detailed Requirements: {len(extracted['detailed_requirements'])}")
|
||||
logger.info(f" Tagged Rules: {extracted['total_tagged_rules']}")
|
||||
|
||||
return extracted
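For reference, a request body in the shape this extractor expects. The field names mirror the `.get()` calls above; the concrete values and the nested `requirementAnalysis` entries are illustrative:

```python
example_request = {
    "projectName": "Demo Project",
    "projectType": "web_app",
    "featureName": "User Onboarding",
    "description": "Sign-up and verification flow",
    "complexity": "medium",
    "requirements": ["Email/password registration", "Email verification"],
    "logicRules": ["Lock account after 5 failed logins"],
    "businessQuestions": ["How many concurrent users do you expect?"],
    "businessAnswers": {"0": "Around 10k concurrent users"},
    "allFeatures": [
        {
            "name": "User Onboarding",
            "description": "Sign-up and verification flow",
            "requirementAnalysis": [
                {
                    "requirement": "Email verification",
                    "complexity": "medium",
                    "logicRules": ["Verification links expire after 24 hours"],
                }
            ],
        }
    ],
}
```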
|
||||
|
||||
def build_comprehensive_context(extracted_data: Dict) -> Dict:
|
||||
"""Build comprehensive context for Claude analysis - ENHANCED with tagged rules"""
|
||||
|
||||
# Combine all requirements and business insights - SAME working logic
|
||||
functional_requirements = []
|
||||
if extracted_data["feature_name"]:
|
||||
functional_requirements.append(f"Core Feature: {extracted_data['feature_name']}")
|
||||
|
||||
if extracted_data["requirements"]:
|
||||
functional_requirements.extend([f"• {req}" for req in extracted_data["requirements"]])
|
||||
|
||||
if extracted_data["features"]:
|
||||
for feature in extracted_data["features"]:
|
||||
if feature and feature != extracted_data["feature_name"]:
|
||||
functional_requirements.append(f"• {feature}")
|
||||
|
||||
# NEW: Add detailed requirements with tagged rules to functional requirements
|
||||
detailed_requirements_text = []
|
||||
for detailed_req in extracted_data.get("detailed_requirements", []):
|
||||
req_text = f"📋 {detailed_req['feature_name']} → {detailed_req['requirement_name']}:"
|
||||
for rule in detailed_req["rules"]:
|
||||
req_text += f"\n - {rule}"
|
||||
detailed_requirements_text.append(req_text)
|
||||
|
||||
if detailed_requirements_text:
|
||||
functional_requirements.extend(detailed_requirements_text)
|
||||
|
||||
# Business context from answers - SAME working logic
|
||||
business_context = {}
|
||||
if extracted_data["business_answers"]:
|
||||
for answer_data in extracted_data["business_answers"]:
|
||||
if isinstance(answer_data, dict):
|
||||
question = answer_data.get("question", "")
|
||||
answer = answer_data.get("answer", "")
|
||||
if question and answer:
|
||||
# Categorize business answers - SAME logic
|
||||
if any(keyword in question.lower() for keyword in ["user", "scale", "concurrent"]):
|
||||
business_context["scale_requirements"] = business_context.get("scale_requirements", [])
|
||||
business_context["scale_requirements"].append(f"{question}: {answer}")
|
||||
elif any(keyword in question.lower() for keyword in ["compliance", "security", "encryption"]):
|
||||
business_context["security_requirements"] = business_context.get("security_requirements", [])
|
||||
business_context["security_requirements"].append(f"{question}: {answer}")
|
||||
elif any(keyword in question.lower() for keyword in ["budget", "timeline"]):
|
||||
business_context["project_constraints"] = business_context.get("project_constraints", [])
|
||||
business_context["project_constraints"].append(f"{question}: {answer}")
|
||||
else:
|
||||
business_context["other_requirements"] = business_context.get("other_requirements", [])
|
||||
business_context["other_requirements"].append(f"{question}: {answer}")
|
||||
|
||||
return {
|
||||
"project_name": extracted_data["project_name"],
|
||||
"project_type": extracted_data["project_type"],
|
||||
"complexity": extracted_data["complexity"],
|
||||
"functional_requirements": functional_requirements,
|
||||
"business_context": business_context,
|
||||
"logic_rules": extracted_data["logic_rules"],
|
||||
# NEW: Include tagged rules data
|
||||
"detailed_requirements": extracted_data.get("detailed_requirements", []),
|
||||
"tagged_rules": extracted_data.get("tagged_rules", [])
|
||||
}
|
||||
|
||||
async def generate_enhanced_recommendations(context: Dict) -> Dict:
|
||||
"""Generate enhanced tech stack recommendations using Claude with business context - SAME working logic + tagged rules"""
|
||||
|
||||
if not enhanced_selector.claude_client:
|
||||
logger.error("❌ Claude client not available")
|
||||
return {
|
||||
"error": "Claude AI not available",
|
||||
"fallback": "Basic recommendations would go here"
|
||||
}
|
||||
|
||||
# Build comprehensive prompt with business context - SAME working logic
|
||||
functional_reqs_text = "\n".join(context["functional_requirements"])
|
||||
|
||||
business_context_text = ""
|
||||
for category, requirements in context["business_context"].items():
|
||||
business_context_text += f"\n{category.replace('_', ' ').title()}:\n"
|
||||
business_context_text += "\n".join([f" - {req}" for req in requirements]) + "\n"
|
||||
|
||||
logic_rules_text = "\n".join([f" - {rule}" for rule in context["logic_rules"]])
|
||||
|
||||
# NEW: Add tagged rules info to prompt (only if tagged rules exist)
|
||||
tagged_rules_text = ""
|
||||
if context.get("tagged_rules"):
|
||||
tagged_rules_text = f"\n\nDETAILED TAGGED RULES:\n"
|
||||
for tagged_rule in context["tagged_rules"][:10]: # Limit to first 10 for prompt size
|
||||
tagged_rules_text += f" {tagged_rule['rule_id']}: {tagged_rule['rule_text']} (Feature: {tagged_rule['feature_name']})\n"
|
||||
if len(context["tagged_rules"]) > 10:
|
||||
tagged_rules_text += f" ... and {len(context['tagged_rules']) - 10} more tagged rules\n"
|
||||
|
||||
# SAME working prompt structure with minimal enhancement
|
||||
prompt = f"""You are a senior software architect. Analyze this comprehensive project context and recommend the optimal technology stack.
|
||||
|
||||
PROJECT CONTEXT:
|
||||
- Name: {context["project_name"]}
|
||||
- Type: {context["project_type"]}
|
||||
- Complexity: {context["complexity"]}
|
||||
|
||||
FUNCTIONAL REQUIREMENTS:
|
||||
{functional_reqs_text}
|
||||
|
||||
BUSINESS CONTEXT & CONSTRAINTS:
|
||||
{business_context_text}
|
||||
|
||||
BUSINESS LOGIC RULES:
|
||||
{logic_rules_text}
|
||||
{tagged_rules_text}
|
||||
|
||||
Based on this comprehensive analysis, provide detailed technology recommendations as a JSON object:
|
||||
|
||||
{{
|
||||
"technology_recommendations": {{
|
||||
"frontend": {{
|
||||
"framework": "recommended framework",
|
||||
"libraries": ["lib1", "lib2", "lib3"],
|
||||
"reasoning": "detailed reasoning based on requirements and business context"
|
||||
}},
|
||||
"backend": {{
|
||||
"framework": "recommended backend framework",
|
||||
"language": "programming language",
|
||||
"libraries": ["lib1", "lib2", "lib3"],
|
||||
"reasoning": "detailed reasoning based on complexity and business needs"
|
||||
}},
|
||||
"database": {{
|
||||
"primary": "primary database choice",
|
||||
"secondary": ["cache", "search", "analytics"],
|
||||
"reasoning": "database choice based on data requirements and scale"
|
||||
}},
|
||||
"infrastructure": {{
|
||||
"cloud_provider": "recommended cloud provider",
|
||||
"orchestration": "container/orchestration choice",
|
||||
"services": ["service1", "service2", "service3"],
|
||||
"reasoning": "infrastructure reasoning based on scale and budget"
|
||||
}},
|
||||
"security": {{
|
||||
"authentication": "auth strategy",
|
||||
"authorization": "authorization approach",
|
||||
"data_protection": "data protection measures",
|
||||
"compliance": "compliance approach",
|
||||
"reasoning": "security reasoning based on business context"
|
||||
}},
|
||||
"third_party_services": {{
|
||||
"communication": "communication services",
|
||||
"monitoring": "monitoring solution",
|
||||
"payment": "payment processing",
|
||||
"other_services": ["service1", "service2"],
|
||||
"reasoning": "third-party service reasoning"
|
||||
}}
|
||||
}},
|
||||
"implementation_strategy": {{
|
||||
"architecture_pattern": "recommended architecture pattern",
|
||||
"development_phases": ["phase1", "phase2", "phase3"],
|
||||
"deployment_strategy": "deployment approach",
|
||||
"scalability_approach": "scalability strategy",
|
||||
"timeline_estimate": "development timeline estimate"
|
||||
}},
|
||||
"business_alignment": {{
|
||||
"addresses_scale_requirements": "how recommendations address scale needs",
|
||||
"addresses_security_requirements": "how recommendations address security needs",
|
||||
"addresses_budget_constraints": "how recommendations fit budget",
|
||||
"addresses_timeline_constraints": "how recommendations fit timeline",
|
||||
"compliance_considerations": "compliance alignment"
|
||||
}}
|
||||
}}
|
||||
|
||||
CRITICAL: Return ONLY valid JSON, no additional text. Base all recommendations on the provided functional requirements and business context."""
|
||||
|
||||
try:
|
||||
logger.info("📞 Calling Claude for enhanced recommendations with functional requirements and tagged rules...")
|
||||
message = enhanced_selector.claude_client.messages.create(
|
||||
model="claude-3-5-sonnet-20241022",
|
||||
max_tokens=8000,
|
||||
temperature=0.1,
|
||||
messages=[{"role": "user", "content": prompt}]
|
||||
)
|
||||
|
||||
claude_response = message.content[0].text.strip()
|
||||
logger.info("✅ Received Claude response for enhanced recommendations")
|
||||
|
||||
# Parse JSON response - SAME working logic
|
||||
try:
|
||||
recommendations = json.loads(claude_response)
|
||||
logger.info("✅ Successfully parsed enhanced recommendations JSON")
|
||||
return recommendations
|
||||
except json.JSONDecodeError as e:
|
||||
logger.error(f"❌ JSON parse error: {e}")
|
||||
return {
|
||||
"parse_error": str(e),
|
||||
"raw_response": claude_response[:1000] + "..." if len(claude_response) > 1000 else claude_response
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Claude API error: {e}")
|
||||
return {
|
||||
"error": str(e),
|
||||
"fallback": "Enhanced recommendations generation failed"
|
||||
}
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
|
||||
logger.info("="*60)
|
||||
logger.info("🚀 ENHANCED TECH STACK SELECTOR v11.1 - FUNCTIONAL REQUIREMENTS + TAGGED RULES")
|
||||
logger.info("="*60)
|
||||
logger.info("✅ Enhanced data extraction from web dashboard")
|
||||
logger.info("✅ Functional requirements display")
|
||||
logger.info("✅ Business context analysis")
|
||||
logger.info("✅ NEW: Tagged rules support")
|
||||
logger.info("✅ Complete data for architecture-designer")
|
||||
logger.info("✅ Comprehensive Claude recommendations")
|
||||
logger.info("="*60)
|
||||
|
||||
uvicorn.run("main:app", host="0.0.0.0", port=8002, log_level="info")
|
||||
1030
services/tech-stack-selector/src/main_migrated.py
Normal file
File diff suppressed because it is too large
722
services/tech-stack-selector/src/postgres_to_neo4j_migration.py
Normal file
@@ -0,0 +1,722 @@
|
||||
# ================================================================================================
|
||||
# POSTGRESQL TO NEO4J MIGRATION SERVICE
|
||||
# Migrates existing PostgreSQL data to Neo4j with price-based relationships
|
||||
# ================================================================================================
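A minimal usage sketch for the class defined below. The Neo4j keys (`uri`, `user`, `password`) match `connect_neo4j()`; the PostgreSQL keys are assumptions based on standard `psycopg2.connect` kwargs, and the import path and values are illustrative:

```python
from postgres_to_neo4j_migration import PostgresToNeo4jMigration  # assumed import path

postgres_config = {          # standard psycopg2 kwargs; values are placeholders
    "host": "localhost",
    "port": 5432,
    "dbname": "dev_pipeline",
    "user": "pipeline_admin",
    "password": "secret",
}
neo4j_config = {"uri": "bolt://localhost:7687", "user": "neo4j", "password": "secret"}

migration = PostgresToNeo4jMigration(postgres_config, neo4j_config)
try:
    if migration.connect_postgres() and migration.connect_neo4j():
        migration.migrate_price_tiers()
        migration.migrate_technologies()
        migration.migrate_tech_pricing()
        migration.migrate_price_based_stacks()
        migration.create_price_relationships()
finally:
    migration.close_connections()
```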
|
||||
|
||||
import os
|
||||
import sys
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any, Optional, List, Tuple
|
||||
from neo4j import GraphDatabase
|
||||
import psycopg2
|
||||
from psycopg2.extras import RealDictCursor
|
||||
from loguru import logger
|
||||
|
||||
class PostgresToNeo4jMigration:
|
||||
def __init__(self,
|
||||
postgres_config: Dict[str, Any],
|
||||
neo4j_config: Dict[str, Any]):
|
||||
"""
|
||||
Initialize migration service with PostgreSQL and Neo4j configurations
|
||||
"""
|
||||
self.postgres_config = postgres_config
|
||||
self.neo4j_config = neo4j_config
|
||||
self.postgres_conn = None
|
||||
self.neo4j_driver = None
|
||||
|
||||
def connect_postgres(self):
|
||||
"""Connect to PostgreSQL database"""
|
||||
try:
|
||||
self.postgres_conn = psycopg2.connect(**self.postgres_config)
|
||||
logger.info("✅ Connected to PostgreSQL successfully")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"❌ PostgreSQL connection failed: {e}")
|
||||
return False
|
||||
|
||||
def connect_neo4j(self):
|
||||
"""Connect to Neo4j database"""
|
||||
try:
|
||||
self.neo4j_driver = GraphDatabase.driver(
|
||||
self.neo4j_config["uri"],
|
||||
auth=(self.neo4j_config["user"], self.neo4j_config["password"])
|
||||
)
|
||||
self.neo4j_driver.verify_connectivity()
|
||||
logger.info("✅ Connected to Neo4j successfully")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Neo4j connection failed: {e}")
|
||||
return False
|
||||
|
||||
def close_connections(self):
|
||||
"""Close all database connections"""
|
||||
if self.postgres_conn:
|
||||
self.postgres_conn.close()
|
||||
if self.neo4j_driver:
|
||||
self.neo4j_driver.close()
|
||||
|
||||
def run_postgres_query(self, query: str, params: Optional[Dict] = None):
|
||||
"""Execute PostgreSQL query and return results"""
|
||||
with self.postgres_conn.cursor(cursor_factory=RealDictCursor) as cursor:
|
||||
cursor.execute(query, params or {})
|
||||
return cursor.fetchall()
|
||||
|
||||
def run_neo4j_query(self, query: str, params: Optional[Dict] = None):
|
||||
"""Execute Neo4j query"""
|
||||
with self.neo4j_driver.session() as session:
|
||||
result = session.run(query, params or {})
|
||||
return [record.data() for record in result]
|
||||
|
||||
def migrate_price_tiers(self):
|
||||
"""Migrate price tiers from PostgreSQL to Neo4j"""
|
||||
logger.info("🔄 Migrating price tiers...")
|
||||
|
||||
# Get price tiers from PostgreSQL
|
||||
price_tiers = self.run_postgres_query("""
|
||||
SELECT id, tier_name, min_price_usd, max_price_usd,
|
||||
target_audience, typical_project_scale, description
|
||||
FROM price_tiers
|
||||
ORDER BY min_price_usd
|
||||
""")
|
||||
|
||||
# Create price tier nodes in Neo4j
|
||||
for tier in price_tiers:
|
||||
# Convert decimal values to float
|
||||
tier_data = dict(tier)
|
||||
tier_data['min_price_usd'] = float(tier_data['min_price_usd'])
|
||||
tier_data['max_price_usd'] = float(tier_data['max_price_usd'])
|
||||
|
||||
query = """
|
||||
CREATE (p:PriceTier {
|
||||
id: $id,
|
||||
tier_name: $tier_name,
|
||||
min_price_usd: $min_price_usd,
|
||||
max_price_usd: $max_price_usd,
|
||||
target_audience: $target_audience,
|
||||
typical_project_scale: $typical_project_scale,
|
||||
description: $description,
|
||||
migrated_at: datetime()
|
||||
})
|
||||
"""
|
||||
self.run_neo4j_query(query, tier_data)
|
||||
|
||||
logger.info(f"✅ Migrated {len(price_tiers)} price tiers")
|
||||
return len(price_tiers)
|
||||
|
||||
def migrate_technologies(self):
|
||||
"""Migrate all technology categories from PostgreSQL to Neo4j"""
|
||||
logger.info("🔄 Migrating technologies...")
|
||||
|
||||
technology_tables = [
|
||||
("frontend_technologies", "frontend"),
|
||||
("backend_technologies", "backend"),
|
||||
("database_technologies", "database"),
|
||||
("cloud_technologies", "cloud"),
|
||||
("testing_technologies", "testing"),
|
||||
("mobile_technologies", "mobile"),
|
||||
("devops_technologies", "devops"),
|
||||
("ai_ml_technologies", "ai_ml")
|
||||
]
|
||||
|
||||
total_technologies = 0
|
||||
|
||||
for table_name, category in technology_tables:
|
||||
logger.info(f" 📊 Migrating {category} technologies...")
|
||||
|
||||
# Get technologies from PostgreSQL
|
||||
technologies = self.run_postgres_query(f"""
|
||||
SELECT * FROM {table_name}
|
||||
ORDER BY name
|
||||
""")
|
||||
|
||||
# Create technology nodes in Neo4j
|
||||
for tech in technologies:
|
||||
# Convert PostgreSQL row to Neo4j properties
|
||||
properties = dict(tech)
|
||||
properties['category'] = category
|
||||
properties['migrated_at'] = datetime.now().isoformat()
|
||||
|
||||
# Convert decimal values to float
|
||||
for key, value in properties.items():
|
||||
if hasattr(value, '__class__') and 'Decimal' in str(value.__class__):
|
||||
properties[key] = float(value)
|
||||
|
||||
# Create the node (use MERGE to handle duplicates)
|
||||
query = f"""
|
||||
MERGE (t:Technology {{name: $name}})
|
||||
SET t += {{
|
||||
{', '.join([f'{k}: ${k}' for k in properties.keys() if k != 'name'])}
|
||||
}}
|
||||
SET t:{category.title()}
|
||||
"""
|
||||
self.run_neo4j_query(query, properties)
|
||||
|
||||
logger.info(f" ✅ Migrated {len(technologies)} {category} technologies")
|
||||
total_technologies += len(technologies)
|
||||
|
||||
logger.info(f"✅ Total technologies migrated: {total_technologies}")
|
||||
return total_technologies
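The MERGE/SET statement above is assembled from the row's column names at runtime. A small illustration of the Cypher it produces for a hypothetical frontend row — the column names other than `name` and `category` are made up for the example:

```python
# Mirrors the f-string pattern used in migrate_technologies() above.
properties = {"name": "React", "maturity_score": 9.0, "category": "frontend"}
query = f"""
MERGE (t:Technology {{name: $name}})
SET t += {{
    {', '.join(f'{k}: ${k}' for k in properties if k != 'name')}
}}
SET t:{properties['category'].title()}
"""
print(query)
# MERGE (t:Technology {name: $name})
# SET t += {
#     maturity_score: $maturity_score, category: $category
# }
# SET t:Frontend
```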
|
||||
|
||||
def migrate_tech_pricing(self):
|
||||
"""Migrate technology pricing data"""
|
||||
logger.info("🔄 Migrating technology pricing...")
|
||||
|
||||
# Get tech pricing from PostgreSQL
|
||||
pricing_data = self.run_postgres_query("""
|
||||
SELECT tp.*, pt.tier_name as price_tier_name
|
||||
FROM tech_pricing tp
|
||||
JOIN price_tiers pt ON tp.price_tier_id = pt.id
|
||||
ORDER BY tp.tech_name
|
||||
""")
|
||||
|
||||
# Update technologies with pricing data
|
||||
for pricing in pricing_data:
|
||||
# Convert decimal values to float
|
||||
pricing_dict = dict(pricing)
|
||||
for key, value in pricing_dict.items():
|
||||
if hasattr(value, '__class__') and 'Decimal' in str(value.__class__):
|
||||
pricing_dict[key] = float(value)
|
||||
|
||||
# Update technology with pricing
|
||||
query = """
|
||||
MATCH (t:Technology {name: $tech_name})
|
||||
SET t.monthly_cost_usd = $monthly_operational_cost_usd,
|
||||
t.setup_cost_usd = $development_cost_usd,
|
||||
t.license_cost_usd = $license_cost_usd,
|
||||
t.training_cost_usd = $training_cost_usd,
|
||||
t.total_cost_of_ownership_score = $total_cost_of_ownership_score,
|
||||
t.price_performance_ratio = $price_performance_ratio,
|
||||
t.price_tier_name = $price_tier_name,
|
||||
t.min_cpu_cores = $min_cpu_cores,
|
||||
t.min_ram_gb = $min_ram_gb,
|
||||
t.min_storage_gb = $min_storage_gb
|
||||
"""
|
||||
self.run_neo4j_query(query, pricing_dict)
|
||||
|
||||
logger.info(f"✅ Updated {len(pricing_data)} technologies with pricing data")
|
||||
return len(pricing_data)
|
||||
|
||||
def migrate_price_based_stacks(self):
|
||||
"""Migrate complete tech stacks from price_based_stacks table"""
|
||||
logger.info("🔄 Migrating price-based tech stacks...")
|
||||
|
||||
# Get price-based stacks from PostgreSQL
|
||||
stacks = self.run_postgres_query("""
|
||||
SELECT pbs.*, pt.tier_name as price_tier_name
|
||||
FROM price_based_stacks pbs
|
||||
JOIN price_tiers pt ON pbs.price_tier_id = pt.id
|
||||
ORDER BY pbs.total_monthly_cost_usd
|
||||
""")
|
||||
|
||||
# Create tech stack nodes in Neo4j
|
||||
for stack in stacks:
|
||||
# Convert decimal values to float
|
||||
stack_dict = dict(stack)
|
||||
for key, value in stack_dict.items():
|
||||
if hasattr(value, '__class__') and 'Decimal' in str(value.__class__):
|
||||
stack_dict[key] = float(value)
|
||||
|
||||
# Create the tech stack node
|
||||
query = """
|
||||
CREATE (s:TechStack {
|
||||
name: $stack_name,
|
||||
monthly_cost: $total_monthly_cost_usd,
|
||||
setup_cost: $total_setup_cost_usd,
|
||||
team_size_range: $team_size_range,
|
||||
development_time_months: $development_time_months,
|
||||
satisfaction_score: $user_satisfaction_score,
|
||||
success_rate: $success_rate_percentage,
|
||||
price_tier: $price_tier_name,
|
||||
maintenance_complexity: $maintenance_complexity,
|
||||
scalability_ceiling: $scalability_ceiling,
|
||||
recommended_domains: $recommended_domains,
|
||||
description: $description,
|
||||
pros: $pros,
|
||||
cons: $cons,
|
||||
frontend_tech: $frontend_tech,
|
||||
backend_tech: $backend_tech,
|
||||
database_tech: $database_tech,
|
||||
cloud_tech: $cloud_tech,
|
||||
testing_tech: $testing_tech,
|
||||
mobile_tech: $mobile_tech,
|
||||
devops_tech: $devops_tech,
|
||||
ai_ml_tech: $ai_ml_tech,
|
||||
migrated_at: datetime()
|
||||
})
|
||||
"""
|
||||
self.run_neo4j_query(query, stack_dict)
|
||||
|
||||
logger.info(f"✅ Migrated {len(stacks)} price-based tech stacks")
|
||||
return len(stacks)
|
||||
|
||||
def migrate_stack_recommendations(self):
|
||||
"""Migrate domain-specific stack recommendations"""
|
||||
logger.info("🔄 Migrating stack recommendations...")
|
||||
|
||||
# Get stack recommendations from PostgreSQL
|
||||
# Handle case where price_tier_id might be NULL
|
||||
recommendations = self.run_postgres_query("""
|
||||
SELECT sr.*,
|
||||
COALESCE(pt.tier_name, 'Not Specified') as price_tier_name,
|
||||
pbs.stack_name,
|
||||
pbs.price_tier_id as stack_price_tier_id
|
||||
FROM stack_recommendations sr
|
||||
LEFT JOIN price_tiers pt ON sr.price_tier_id = pt.id
|
||||
JOIN price_based_stacks pbs ON sr.recommended_stack_id = pbs.id
|
||||
ORDER BY sr.business_domain, sr.confidence_score DESC
|
||||
""")
|
||||
|
||||
# Create domain nodes and recommendations
|
||||
for rec in recommendations:
|
||||
# Convert arrays to lists
|
||||
rec_dict = dict(rec)
|
||||
for key, value in rec_dict.items():
|
||||
if hasattr(value, '__class__') and 'list' in str(value.__class__):
|
||||
rec_dict[key] = list(value)
|
||||
|
||||
# Create domain node
|
||||
domain_query = """
|
||||
MERGE (d:Domain {name: $business_domain})
|
||||
SET d.project_scale = $project_scale,
|
||||
d.team_experience_level = $team_experience_level
|
||||
"""
|
||||
self.run_neo4j_query(domain_query, rec_dict)
|
||||
|
||||
# Get the actual price tier for the stack
|
||||
stack_tier_query = """
|
||||
MATCH (s:TechStack {name: $stack_name})-[:BELONGS_TO_TIER]->(pt:PriceTier)
|
||||
RETURN pt.tier_name as actual_tier_name
|
||||
"""
|
||||
tier_result = self.run_neo4j_query(stack_tier_query, {"stack_name": rec_dict["stack_name"]})
|
||||
actual_tier = tier_result[0]["actual_tier_name"] if tier_result else rec_dict["price_tier_name"]
|
||||
|
||||
# Create recommendation relationship
|
||||
rec_query = """
|
||||
MATCH (d:Domain {name: $business_domain})
|
||||
MATCH (s:TechStack {name: $stack_name})
|
||||
CREATE (d)-[:RECOMMENDS {
|
||||
confidence_score: $confidence_score,
|
||||
recommendation_reasons: $recommendation_reasons,
|
||||
potential_risks: $potential_risks,
|
||||
alternative_stacks: $alternative_stacks,
|
||||
price_tier: $actual_tier
|
||||
}]->(s)
|
||||
"""
|
||||
rec_dict["actual_tier"] = actual_tier
|
||||
self.run_neo4j_query(rec_query, rec_dict)
|
||||
|
||||
logger.info(f"✅ Migrated {len(recommendations)} stack recommendations")
|
||||
return len(recommendations)
|
||||
|
||||
def migrate_tools(self):
|
||||
"""Migrate tools with pricing from PostgreSQL to Neo4j"""
|
||||
logger.info("🔄 Migrating tools with pricing...")
|
||||
|
||||
# Get tools with pricing from PostgreSQL
|
||||
tools = self.run_postgres_query("""
|
||||
SELECT t.*, pt.tier_name as price_tier_name
|
||||
FROM tools t
|
||||
LEFT JOIN price_tiers pt ON t.price_tier_id = pt.id
|
||||
ORDER BY t.name
|
||||
""")
|
||||
|
||||
# Create tool nodes in Neo4j
|
||||
for tool in tools:
|
||||
properties = dict(tool)
|
||||
properties['migrated_at'] = datetime.now().isoformat()
|
||||
|
||||
# Convert decimal values to float
|
||||
for key, value in properties.items():
|
||||
if hasattr(value, '__class__') and 'Decimal' in str(value.__class__):
|
||||
properties[key] = float(value)
|
||||
|
||||
# Create the tool node (use MERGE to handle duplicates)
|
||||
query = f"""
|
||||
MERGE (tool:Tool {{name: $name}})
|
||||
SET tool += {{
|
||||
{', '.join([f'{k}: ${k}' for k in properties.keys() if k != 'name'])}
|
||||
}}
|
||||
"""
|
||||
self.run_neo4j_query(query, properties)
|
||||
|
||||
logger.info(f"✅ Migrated {len(tools)} tools")
|
||||
return len(tools)
|
||||
|
||||
def create_price_relationships(self):
|
||||
"""Create price-based relationships between technologies/tools and price tiers"""
|
||||
logger.info("🔗 Creating price-based relationships...")
|
||||
|
||||
# Create relationships for technologies
|
||||
technology_categories = ["frontend", "backend", "database", "cloud", "testing", "mobile", "devops", "ai_ml"]
|
||||
|
||||
for category in technology_categories:
|
||||
logger.info(f" 📊 Creating price relationships for {category} technologies...")
|
||||
|
||||
# Get technologies and their price tiers
|
||||
query = f"""
|
||||
MATCH (t:Technology {{category: '{category}'}})
|
||||
MATCH (p:PriceTier)
|
||||
WHERE t.monthly_cost_usd >= p.min_price_usd
|
||||
AND t.monthly_cost_usd <= p.max_price_usd
|
||||
CREATE (t)-[:BELONGS_TO_TIER {{
|
||||
fit_score: CASE
|
||||
WHEN t.monthly_cost_usd = 0.0 THEN 100.0
|
||||
ELSE 100.0 - ((t.monthly_cost_usd - p.min_price_usd) / (p.max_price_usd - p.min_price_usd) * 20.0)
|
||||
END,
|
||||
cost_efficiency: t.total_cost_of_ownership_score,
|
||||
price_performance: t.price_performance_ratio
|
||||
}}]->(p)
|
||||
RETURN count(*) as relationships_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(query)
|
||||
if result:
|
||||
logger.info(f" ✅ Created {result[0]['relationships_created']} price relationships for {category}")
|
||||
|
||||
# Create relationships for tools
|
||||
logger.info(" 📊 Creating price relationships for tools...")
|
||||
query = """
|
||||
MATCH (tool:Tool)
|
||||
MATCH (p:PriceTier)
|
||||
WHERE tool.monthly_cost_usd >= p.min_price_usd
|
||||
AND tool.monthly_cost_usd <= p.max_price_usd
|
||||
CREATE (tool)-[:BELONGS_TO_TIER {
|
||||
fit_score: CASE
|
||||
WHEN tool.monthly_cost_usd = 0.0 THEN 100.0
|
||||
ELSE 100.0 - ((tool.monthly_cost_usd - p.min_price_usd) / (p.max_price_usd - p.min_price_usd) * 20.0)
|
||||
END,
|
||||
cost_efficiency: tool.total_cost_of_ownership_score,
|
||||
price_performance: tool.price_performance_ratio
|
||||
}]->(p)
|
||||
RETURN count(*) as relationships_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(query)
|
||||
if result:
|
||||
logger.info(f" ✅ Created {result[0]['relationships_created']} price relationships for tools")
|
||||
|
||||
def create_technology_compatibility_relationships(self):
|
||||
"""Create compatibility relationships between technologies"""
|
||||
logger.info("🔗 Creating technology compatibility relationships...")
|
||||
|
||||
query = """
|
||||
MATCH (t1:Technology), (t2:Technology)
|
||||
WHERE t1.name <> t2.name
|
||||
AND (
|
||||
// Same category, different technologies
|
||||
(t1.category = t2.category AND t1.name <> t2.name) OR
|
||||
// Frontend-Backend compatibility
|
||||
(t1.category = "frontend" AND t2.category = "backend") OR
|
||||
(t1.category = "backend" AND t2.category = "frontend") OR
|
||||
// Backend-Database compatibility
|
||||
(t1.category = "backend" AND t2.category = "database") OR
|
||||
(t1.category = "database" AND t2.category = "backend") OR
|
||||
// Cloud compatibility with all
|
||||
(t1.category = "cloud" AND t2.category IN ["frontend", "backend", "database"]) OR
|
||||
(t2.category = "cloud" AND t1.category IN ["frontend", "backend", "database"])
|
||||
)
|
||||
MERGE (t1)-[r:COMPATIBLE_WITH {
|
||||
compatibility_score: CASE
|
||||
WHEN t1.category = t2.category THEN 0.8
|
||||
WHEN (t1.category = "frontend" AND t2.category = "backend") THEN 0.9
|
||||
WHEN (t1.category = "backend" AND t2.category = "database") THEN 0.9
|
||||
WHEN (t1.category = "cloud" AND t2.category IN ["frontend", "backend", "database"]) THEN 0.85
|
||||
ELSE 0.7
|
||||
END,
|
||||
integration_effort: CASE
|
||||
WHEN t1.category = t2.category THEN "Low"
|
||||
WHEN (t1.category = "frontend" AND t2.category = "backend") THEN "Medium"
|
||||
WHEN (t1.category = "backend" AND t2.category = "database") THEN "Low"
|
||||
WHEN (t1.category = "cloud" AND t2.category IN ["frontend", "backend", "database"]) THEN "Low"
|
||||
ELSE "High"
|
||||
END,
|
||||
reason: "Auto-generated compatibility relationship",
|
||||
created_at: datetime()
|
||||
}]->(t2)
|
||||
RETURN count(r) as relationships_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(query)
|
||||
if result:
|
||||
logger.info(f"✅ Created {result[0]['relationships_created']} compatibility relationships")
|
||||
|
||||
def create_tech_stack_relationships(self):
|
||||
"""Create relationships between tech stacks and their technologies"""
|
||||
logger.info("🔗 Creating tech stack relationships...")
|
||||
|
||||
# Create relationships for each technology type separately
|
||||
tech_relationships = [
|
||||
("frontend_tech", "USES_FRONTEND", "frontend"),
|
||||
("backend_tech", "USES_BACKEND", "backend"),
|
||||
("database_tech", "USES_DATABASE", "database"),
|
||||
("cloud_tech", "USES_CLOUD", "cloud"),
|
||||
("testing_tech", "USES_TESTING", "testing"),
|
||||
("mobile_tech", "USES_MOBILE", "mobile"),
|
||||
("devops_tech", "USES_DEVOPS", "devops"),
|
||||
("ai_ml_tech", "USES_AI_ML", "ai_ml")
|
||||
]
|
||||
|
||||
total_relationships = 0
|
||||
|
||||
for tech_field, relationship_type, category in tech_relationships:
|
||||
# For testing technologies, also check frontend category since some testing tools are categorized as frontend
|
||||
if category == "testing":
|
||||
query = f"""
|
||||
MATCH (s:TechStack)
|
||||
WHERE s.{tech_field} IS NOT NULL
|
||||
MATCH (t:Technology {{name: s.{tech_field}}})
|
||||
WHERE t.category = '{category}' OR (t.category = 'frontend' AND s.{tech_field} IN ['Jest', 'Cypress', 'Playwright', 'Selenium', 'Vitest', 'Testing Library'])
|
||||
MERGE (s)-[:{relationship_type} {{role: '{category}', importance: 'critical'}}]->(t)
|
||||
RETURN count(s) as relationships_created
|
||||
"""
|
||||
else:
|
||||
query = f"""
|
||||
MATCH (s:TechStack)
|
||||
WHERE s.{tech_field} IS NOT NULL
|
||||
MATCH (t:Technology {{name: s.{tech_field}, category: '{category}'}})
|
||||
MERGE (s)-[:{relationship_type} {{role: '{category}', importance: 'critical'}}]->(t)
|
||||
RETURN count(s) as relationships_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(query)
|
||||
if result:
|
||||
count = result[0]['relationships_created']
|
||||
total_relationships += count
|
||||
logger.info(f" ✅ Created {count} {relationship_type} relationships")
|
||||
|
||||
logger.info(f"✅ Created {total_relationships} total tech stack relationships")
|
||||
|
||||
# Create price tier relationships for tech stacks
|
||||
price_tier_query = """
|
||||
MATCH (s:TechStack)
|
||||
MATCH (p:PriceTier {tier_name: s.price_tier})
|
||||
MERGE (s)-[:BELONGS_TO_TIER {fit_score: 100.0}]->(p)
|
||||
RETURN count(s) as relationships_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(price_tier_query)
|
||||
if result:
|
||||
logger.info(f"✅ Created price tier relationships for {result[0]['relationships_created']} tech stacks")
|
||||
|
||||
def create_optimal_tech_stacks(self, max_stacks_per_tier: int = 5):
|
||||
"""Create optimal tech stacks based on price tiers and compatibility"""
|
||||
logger.info("🏗️ Creating optimal tech stacks...")
|
||||
|
||||
# Get price tiers
|
||||
price_tiers = self.run_neo4j_query("MATCH (p:PriceTier) RETURN p ORDER BY p.min_price_usd")
|
||||
|
||||
total_stacks = 0
|
||||
|
||||
for tier in price_tiers:
|
||||
tier_name = tier['p']['tier_name']
|
||||
min_price = tier['p']['min_price_usd']
|
||||
max_price = tier['p']['max_price_usd']
|
||||
|
||||
logger.info(f" 📊 Creating stacks for {tier_name} (${min_price}-${max_price})...")
|
||||
|
||||
# Find optimal combinations within this price tier
|
||||
query = """
|
||||
MATCH (frontend:Technology {category: "frontend"})-[:BELONGS_TO_TIER]->(p:PriceTier {tier_name: $tier_name})
|
||||
MATCH (backend:Technology {category: "backend"})-[:BELONGS_TO_TIER]->(p)
|
||||
MATCH (database:Technology {category: "database"})-[:BELONGS_TO_TIER]->(p)
|
||||
MATCH (cloud:Technology {category: "cloud"})-[:BELONGS_TO_TIER]->(p)
|
||||
|
||||
WITH frontend, backend, database, cloud, p,
|
||||
(frontend.monthly_cost_usd + backend.monthly_cost_usd +
|
||||
database.monthly_cost_usd + cloud.monthly_cost_usd) as total_cost,
|
||||
(frontend.total_cost_of_ownership_score + backend.total_cost_of_ownership_score +
|
||||
database.total_cost_of_ownership_score + cloud.total_cost_of_ownership_score) as total_score
|
||||
|
||||
WHERE total_cost >= p.min_price_usd AND total_cost <= p.max_price_usd
|
||||
|
||||
WITH frontend, backend, database, cloud, total_cost, total_score,
|
||||
(total_score / 4.0) as avg_score,
|
||||
(100.0 - ((total_cost - p.min_price_usd) / (p.max_price_usd - p.min_price_usd) * 20.0)) as budget_efficiency
|
||||
|
||||
ORDER BY avg_score DESC, budget_efficiency DESC, total_cost ASC
|
||||
LIMIT $max_stacks
|
||||
|
||||
CREATE (s:TechStack {
|
||||
name: "Optimal " + $tier_name + " Stack - $" + toString(round(total_cost)) + "/month",
|
||||
monthly_cost: total_cost,
|
||||
setup_cost: total_cost * 0.5,
|
||||
team_size_range: CASE
|
||||
WHEN $tier_name = "Micro Budget" THEN "1-2"
|
||||
WHEN $tier_name = "Startup Budget" THEN "2-4"
|
||||
WHEN $tier_name = "Small Business" THEN "3-6"
|
||||
WHEN $tier_name = "Growth Stage" THEN "5-10"
|
||||
ELSE "8-15"
|
||||
END,
|
||||
development_time_months: CASE
|
||||
WHEN $tier_name = "Micro Budget" THEN 1
|
||||
WHEN $tier_name = "Startup Budget" THEN 2
|
||||
WHEN $tier_name = "Small Business" THEN 3
|
||||
WHEN $tier_name = "Growth Stage" THEN 4
|
||||
ELSE 6
|
||||
END,
|
||||
satisfaction_score: toInteger(avg_score),
|
||||
success_rate: toInteger(avg_score * 0.9),
|
||||
price_tier: $tier_name,
|
||||
budget_efficiency: budget_efficiency,
|
||||
created_at: datetime()
|
||||
})
|
||||
|
||||
CREATE (s)-[:BELONGS_TO_TIER {fit_score: budget_efficiency}]->(p)
|
||||
CREATE (s)-[:USES_FRONTEND {role: "frontend", importance: "critical"}]->(frontend)
|
||||
CREATE (s)-[:USES_BACKEND {role: "backend", importance: "critical"}]->(backend)
|
||||
CREATE (s)-[:USES_DATABASE {role: "database", importance: "critical"}]->(database)
|
||||
CREATE (s)-[:USES_CLOUD {role: "cloud", importance: "critical"}]->(cloud)
|
||||
|
||||
RETURN count(s) as stacks_created
|
||||
"""
|
||||
|
||||
result = self.run_neo4j_query(query, {
|
||||
"tier_name": tier_name,
|
||||
"max_stacks": max_stacks_per_tier
|
||||
})
|
||||
|
||||
if result and result[0]['stacks_created'] > 0:
|
||||
stacks_created = result[0]['stacks_created']
|
||||
logger.info(f" ✅ Created {stacks_created} optimal stacks for {tier_name}")
|
||||
total_stacks += stacks_created
|
||||
|
||||
logger.info(f"✅ Total tech stacks created: {total_stacks}")
|
||||
return total_stacks
|
||||
|
||||
def validate_migration(self):
|
||||
"""Validate the migration results"""
|
||||
logger.info("🔍 Validating migration...")
|
||||
|
||||
# Count nodes
|
||||
node_counts = self.run_neo4j_query("""
|
||||
MATCH (n)
|
||||
RETURN labels(n)[0] as label, count(n) as count
|
||||
ORDER BY count DESC
|
||||
""")
|
||||
|
||||
logger.info("📊 Node counts:")
|
||||
for item in node_counts:
|
||||
logger.info(f" {item['label']}: {item['count']}")
|
||||
|
||||
# Count relationships
|
||||
rel_counts = self.run_neo4j_query("""
|
||||
MATCH ()-[r]->()
|
||||
RETURN type(r) as type, count(r) as count
|
||||
ORDER BY count DESC
|
||||
""")
|
||||
|
||||
logger.info("🔗 Relationship counts:")
|
||||
for item in rel_counts:
|
||||
logger.info(f" {item['type']}: {item['count']}")
|
||||
|
||||
# Validate tech stacks
|
||||
stack_validation = self.run_neo4j_query("""
|
||||
MATCH (s:TechStack)
|
||||
RETURN s.name,
|
||||
exists((s)-[:BELONGS_TO_TIER]->()) as has_price_tier,
|
||||
exists((s)-[:USES_FRONTEND]->()) as has_frontend,
|
||||
exists((s)-[:USES_BACKEND]->()) as has_backend,
|
||||
exists((s)-[:USES_DATABASE]->()) as has_database,
|
||||
exists((s)-[:USES_CLOUD]->()) as has_cloud
|
||||
""")
|
||||
|
||||
complete_stacks = [s for s in stack_validation if all([
|
||||
s['has_price_tier'], s['has_frontend'], s['has_backend'],
|
||||
s['has_database'], s['has_cloud']
|
||||
])]
|
||||
|
||||
logger.info(f"✅ Complete tech stacks: {len(complete_stacks)}/{len(stack_validation)}")
|
||||
|
||||
return {
|
||||
"node_counts": node_counts,
|
||||
"relationship_counts": rel_counts,
|
||||
"complete_stacks": len(complete_stacks),
|
||||
"total_stacks": len(stack_validation)
|
||||
}
|
||||
|
||||
def run_full_migration(self):
|
||||
"""Run the complete migration process"""
|
||||
logger.info("🚀 Starting PostgreSQL to Neo4j migration...")
|
||||
|
||||
try:
|
||||
# Connect to databases
|
||||
if not self.connect_postgres():
|
||||
return False
|
||||
if not self.connect_neo4j():
|
||||
return False
|
||||
|
||||
# Clear Neo4j
|
||||
logger.info("🧹 Clearing Neo4j database...")
|
||||
self.run_neo4j_query("MATCH (n) DETACH DELETE n")
|
||||
|
||||
# Run migrations
|
||||
price_tiers_count = self.migrate_price_tiers()
|
||||
technologies_count = self.migrate_technologies()
|
||||
tech_pricing_count = self.migrate_tech_pricing()
|
||||
price_based_stacks_count = self.migrate_price_based_stacks()
|
||||
stack_recommendations_count = self.migrate_stack_recommendations()
|
||||
tools_count = self.migrate_tools()
|
||||
|
||||
# Create relationships
|
||||
self.create_price_relationships()
|
||||
self.create_technology_compatibility_relationships()
|
||||
self.create_tech_stack_relationships()
|
||||
|
||||
# Create optimal tech stacks (only if no existing stacks)
|
||||
if price_based_stacks_count == 0:
|
||||
stacks_count = self.create_optimal_tech_stacks()
|
||||
else:
|
||||
stacks_count = price_based_stacks_count
|
||||
|
||||
# Validate migration
|
||||
validation = self.validate_migration()
|
||||
|
||||
logger.info("🎉 Migration completed successfully!")
|
||||
logger.info(f"📊 Summary:")
|
||||
logger.info(f" Price tiers: {price_tiers_count}")
|
||||
logger.info(f" Technologies: {technologies_count}")
|
||||
logger.info(f" Tech pricing: {tech_pricing_count}")
|
||||
logger.info(f" Price-based stacks: {price_based_stacks_count}")
|
||||
logger.info(f" Stack recommendations: {stack_recommendations_count}")
|
||||
logger.info(f" Tools: {tools_count}")
|
||||
logger.info(f" Total tech stacks: {stacks_count}")
|
||||
logger.info(f" Complete stacks: {validation['complete_stacks']}/{validation['total_stacks']}")
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Migration failed: {e}")
|
||||
return False
|
||||
finally:
|
||||
self.close_connections()
|
||||
|
||||
# ================================================================================================
|
||||
# MAIN EXECUTION
|
||||
# ================================================================================================
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Configuration
|
||||
postgres_config = {
|
||||
"host": os.getenv("POSTGRES_HOST", "localhost"),
|
||||
"port": int(os.getenv("POSTGRES_PORT", "5432")),
|
||||
"user": os.getenv("POSTGRES_USER", "pipeline_admin"),
|
||||
"password": os.getenv("POSTGRES_PASSWORD", "secure_pipeline_2024"),
|
||||
"database": os.getenv("POSTGRES_DB", "dev_pipeline")
|
||||
}
|
||||
|
||||
neo4j_config = {
|
||||
"uri": os.getenv("NEO4J_URI", "bolt://localhost:7687"),
|
||||
"user": os.getenv("NEO4J_USER", "neo4j"),
|
||||
"password": os.getenv("NEO4J_PASSWORD", "password")
|
||||
}
|
||||
|
||||
# Run migration
|
||||
migration = PostgresToNeo4jMigration(postgres_config, neo4j_config)
|
||||
success = migration.run_full_migration()
|
||||
|
||||
if success:
|
||||
logger.info("✅ Migration completed successfully!")
|
||||
sys.exit(0)
|
||||
else:
|
||||
logger.error("❌ Migration failed!")
|
||||
sys.exit(1)
|
||||
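For reference, the `fit_score` written onto `BELONGS_TO_TIER` relationships above (and the `budget_efficiency` computed when building optimal stacks) maps a monthly cost onto an 80-100 scale within its price tier. A minimal Python sketch of that mapping; the helper name is hypothetical and not part of the migration script:

```python
def tier_fit_score(monthly_cost: float, min_price: float, max_price: float) -> float:
    """Mirrors the Cypher CASE expression: free items score 100.0; paid items
    scale linearly from 100.0 at min_price down to 80.0 at max_price."""
    if monthly_cost == 0.0:
        return 100.0
    return 100.0 - ((monthly_cost - min_price) / (max_price - min_price) * 20.0)

# Example: a $25/month technology in a $10-$100 tier
print(tier_fit_score(25.0, 10.0, 100.0))  # about 96.67
```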
431
services/tech-stack-selector/start.sh
Normal file
@ -0,0 +1,431 @@
|
||||
#!/bin/bash
|
||||
|
||||
# ================================================================================================
|
||||
# ENHANCED TECH STACK SELECTOR - MIGRATED VERSION STARTUP SCRIPT
|
||||
# Uses PostgreSQL data migrated to Neo4j with proper price-based relationships
|
||||
# ================================================================================================
|
||||
|
||||
set -e
|
||||
|
||||
# Parse command line arguments
|
||||
FORCE_MIGRATION=false
|
||||
if [ "$1" = "--force-migration" ] || [ "$1" = "-f" ]; then
|
||||
FORCE_MIGRATION=true
|
||||
echo "🔄 Force migration mode enabled"
|
||||
elif [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
|
||||
echo "Usage: $0 [OPTIONS]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --force-migration, -f Force re-run all migrations"
|
||||
echo " --help, -h Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 # Normal startup with auto-migration detection"
|
||||
echo " $0 --force-migration # Force re-run all migrations"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "="*60
|
||||
echo "🚀 ENHANCED TECH STACK SELECTOR v15.0 - MIGRATED VERSION"
|
||||
echo "="*60
|
||||
echo "✅ PostgreSQL data migrated to Neo4j"
|
||||
echo "✅ Price-based relationships"
|
||||
echo "✅ Real data from PostgreSQL"
|
||||
echo "✅ Comprehensive pricing analysis"
|
||||
echo "="*60
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Function to print colored output
|
||||
print_status() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
}
|
||||
|
||||
print_info() {
|
||||
echo -e "${BLUE}ℹ️ $1${NC}"
|
||||
}
|
||||
|
||||
# Check if Python is available
|
||||
if ! command -v python3 &> /dev/null; then
|
||||
print_error "Python3 is not installed or not in PATH"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "Python3 found: $(python3 --version)"
|
||||
|
||||
# Check if pip is available
|
||||
if ! command -v pip3 &> /dev/null; then
|
||||
print_error "pip3 is not installed or not in PATH"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "pip3 found: $(pip3 --version)"
|
||||
|
||||
# Check if psql is available
|
||||
if ! command -v psql &> /dev/null; then
|
||||
print_error "psql is not installed or not in PATH"
|
||||
print_info "Please install PostgreSQL client tools:"
|
||||
print_info " Ubuntu/Debian: sudo apt-get install postgresql-client"
|
||||
print_info " CentOS/RHEL: sudo yum install postgresql"
|
||||
print_info " macOS: brew install postgresql"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "psql found: $(psql --version)"
|
||||
|
||||
# Check if createdb is available
|
||||
if ! command -v createdb &> /dev/null; then
|
||||
print_error "createdb is not installed or not in PATH"
|
||||
print_info "Please install PostgreSQL client tools:"
|
||||
print_info " Ubuntu/Debian: sudo apt-get install postgresql-client"
|
||||
print_info " CentOS/RHEL: sudo yum install postgresql"
|
||||
print_info " macOS: brew install postgresql"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "createdb found: $(createdb --version)"
|
||||
|
||||
# Install/upgrade required packages
|
||||
print_info "Installing/upgrading required packages..."
|
||||
pip3 install --upgrade fastapi uvicorn neo4j psycopg2-binary anthropic loguru pydantic
|
||||
|
||||
# Function to create database if it doesn't exist
|
||||
create_database_if_not_exists() {
|
||||
print_info "Checking if database 'dev_pipeline' exists..."
|
||||
|
||||
# Try to connect to the specific database
|
||||
if python3 -c "
|
||||
import psycopg2
|
||||
try:
|
||||
conn = psycopg2.connect(
|
||||
host='localhost',
|
||||
port=5432,
|
||||
user='pipeline_admin',
|
||||
password='secure_pipeline_2024',
|
||||
database='dev_pipeline'
|
||||
)
|
||||
conn.close()
|
||||
print('Database dev_pipeline exists')
|
||||
except Exception as e:
|
||||
print(f'Database dev_pipeline does not exist: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
print_status "Database 'dev_pipeline' exists"
|
||||
return 0
|
||||
else
|
||||
print_warning "Database 'dev_pipeline' does not exist - creating it..."
|
||||
|
||||
# Try to create the database
|
||||
if createdb -h localhost -p 5432 -U pipeline_admin dev_pipeline 2>/dev/null; then
|
||||
print_status "Database 'dev_pipeline' created successfully"
|
||||
return 0
|
||||
else
|
||||
print_error "Failed to create database 'dev_pipeline'"
|
||||
print_info "Please create the database manually:"
|
||||
print_info " createdb -h localhost -p 5432 -U pipeline_admin dev_pipeline"
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if PostgreSQL is running
|
||||
print_info "Checking PostgreSQL connection..."
|
||||
if ! python3 -c "
|
||||
import psycopg2
|
||||
try:
|
||||
conn = psycopg2.connect(
|
||||
host='localhost',
|
||||
port=5432,
|
||||
user='pipeline_admin',
|
||||
password='secure_pipeline_2024',
|
||||
database='postgres'
|
||||
)
|
||||
conn.close()
|
||||
print('PostgreSQL connection successful')
|
||||
except Exception as e:
|
||||
print(f'PostgreSQL connection failed: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
print_error "PostgreSQL is not running or not accessible"
|
||||
print_info "Please ensure PostgreSQL is running and accessible"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "PostgreSQL is running and accessible"
|
||||
|
||||
# Create database if it doesn't exist
|
||||
if ! create_database_if_not_exists; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Function to check if database needs migration
|
||||
check_database_migration() {
|
||||
print_info "Checking if database needs migration..."
|
||||
|
||||
# Check if price_tiers table exists and has data
|
||||
if ! python3 -c "
|
||||
import psycopg2
|
||||
try:
|
||||
conn = psycopg2.connect(
|
||||
host='localhost',
|
||||
port=5432,
|
||||
user='pipeline_admin',
|
||||
password='secure_pipeline_2024',
|
||||
database='dev_pipeline'
|
||||
)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if price_tiers table exists
|
||||
cursor.execute(\"\"\"
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'public'
|
||||
AND table_name = 'price_tiers'
|
||||
);
|
||||
\"\"\")
|
||||
table_exists = cursor.fetchone()[0]
|
||||
|
||||
if not table_exists:
|
||||
print('price_tiers table does not exist - migration needed')
|
||||
exit(1)
|
||||
|
||||
# Check if price_tiers has data
|
||||
cursor.execute('SELECT COUNT(*) FROM price_tiers;')
|
||||
count = cursor.fetchone()[0]
|
||||
|
||||
if count == 0:
|
||||
print('price_tiers table is empty - migration needed')
|
||||
exit(1)
|
||||
|
||||
# Check if stack_recommendations has sufficient data (expects at least 50 records across all domains)
|
||||
cursor.execute('SELECT COUNT(*) FROM stack_recommendations;')
|
||||
rec_count = cursor.fetchone()[0]
|
||||
|
||||
if rec_count < 50: # Expect at least 50 domain recommendations
|
||||
print(f'stack_recommendations has only {rec_count} records - migration needed for additional domains')
|
||||
exit(1)
|
||||
|
||||
# Check for specific new domains
|
||||
cursor.execute(\"\"\"
|
||||
SELECT COUNT(DISTINCT business_domain) FROM stack_recommendations
|
||||
WHERE business_domain IN ('healthcare', 'finance', 'gaming', 'education', 'media', 'iot', 'social', 'elearning', 'realestate', 'travel', 'manufacturing', 'ecommerce', 'saas')
|
||||
\"\"\")
|
||||
new_domains_count = cursor.fetchone()[0]
|
||||
|
||||
if new_domains_count < 12: # Expect at least 12 domains
|
||||
print(f'Only {new_domains_count} domains found - migration needed for additional domains')
|
||||
exit(1)
|
||||
|
||||
print('Database appears to be fully migrated with all domains')
|
||||
cursor.close()
|
||||
conn.close()
|
||||
|
||||
except Exception as e:
|
||||
print(f'Error checking database: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
return 1 # Migration needed
|
||||
else
|
||||
return 0 # Migration not needed
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run PostgreSQL migrations
|
||||
run_postgres_migrations() {
|
||||
print_info "Running PostgreSQL migrations..."
|
||||
|
||||
# Migration files in order
|
||||
migration_files=(
|
||||
"db/001_schema.sql"
|
||||
"db/002_tools_migration.sql"
|
||||
"db/003_tools_pricing_migration.sql"
|
||||
)
|
||||
|
||||
# Set PGPASSWORD to avoid password prompts
|
||||
export PGPASSWORD="secure_pipeline_2024"
|
||||
|
||||
for migration_file in "${migration_files[@]}"; do
|
||||
if [ ! -f "$migration_file" ]; then
|
||||
print_error "Migration file not found: $migration_file"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_info "Running migration: $migration_file"
|
||||
|
||||
# Run migration with error handling
|
||||
if psql -h localhost -p 5432 -U pipeline_admin -d dev_pipeline -f "$migration_file" -q 2>/dev/null; then
|
||||
print_status "Migration completed: $migration_file"
|
||||
else
|
||||
print_error "Migration failed: $migration_file"
|
||||
print_info "Check the error logs above for details"
|
||||
print_info "You may need to run the migration manually:"
|
||||
print_info " psql -h localhost -p 5432 -U pipeline_admin -d dev_pipeline -f $migration_file"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
# Unset password
|
||||
unset PGPASSWORD
|
||||
|
||||
print_status "All PostgreSQL migrations completed successfully"
|
||||
}
|
||||
|
||||
# Check if migration is needed and run if necessary
|
||||
if [ "$FORCE_MIGRATION" = true ]; then
|
||||
print_warning "Force migration enabled - running migrations..."
|
||||
run_postgres_migrations
|
||||
|
||||
# Verify migration was successful
|
||||
print_info "Verifying migration..."
|
||||
if check_database_migration; then
|
||||
print_status "Migration verification successful"
|
||||
else
|
||||
print_error "Migration verification failed"
|
||||
exit 1
|
||||
fi
|
||||
elif check_database_migration; then
|
||||
print_status "Database is already migrated"
|
||||
else
|
||||
print_warning "Database needs migration - running migrations..."
|
||||
run_postgres_migrations
|
||||
|
||||
# Verify migration was successful
|
||||
print_info "Verifying migration..."
|
||||
if check_database_migration; then
|
||||
print_status "Migration verification successful"
|
||||
else
|
||||
print_error "Migration verification failed"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# Show migration summary
|
||||
print_info "Migration Summary:"
|
||||
python3 -c "
|
||||
import psycopg2
|
||||
try:
|
||||
conn = psycopg2.connect(
|
||||
host='localhost',
|
||||
port=5432,
|
||||
user='pipeline_admin',
|
||||
password='secure_pipeline_2024',
|
||||
database='dev_pipeline'
|
||||
)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Get table counts
|
||||
tables = ['price_tiers', 'frontend_technologies', 'backend_technologies', 'database_technologies',
|
||||
'cloud_technologies', 'testing_technologies', 'mobile_technologies', 'devops_technologies',
|
||||
'ai_ml_technologies', 'tools', 'price_based_stacks', 'stack_recommendations']
|
||||
|
||||
print('📊 Database Statistics:')
|
||||
for table in tables:
|
||||
try:
|
||||
cursor.execute(f'SELECT COUNT(*) FROM {table};')
|
||||
count = cursor.fetchone()[0]
|
||||
print(f' {table}: {count} records')
|
||||
except Exception as e:
|
||||
print(f' {table}: Error - {e}')
|
||||
|
||||
cursor.close()
|
||||
conn.close()
|
||||
except Exception as e:
|
||||
print(f'Error getting migration summary: {e}')
|
||||
" 2>/dev/null
|
||||
|
||||
# Check if Neo4j is running
|
||||
print_info "Checking Neo4j connection..."
|
||||
if ! python3 -c "
|
||||
from neo4j import GraphDatabase
|
||||
try:
|
||||
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))
|
||||
driver.verify_connectivity()
|
||||
print('Neo4j connection successful')
|
||||
driver.close()
|
||||
except Exception as e:
|
||||
print(f'Neo4j connection failed: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
print_error "Neo4j is not running or not accessible"
|
||||
print_info "Please start Neo4j first:"
|
||||
print_info " docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:latest"
|
||||
print_info " Wait for Neo4j to start (check http://localhost:7474)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_status "Neo4j is running and accessible"
|
||||
|
||||
# Check if migration has been run
|
||||
print_info "Checking if migration has been completed..."
|
||||
if ! python3 -c "
|
||||
from neo4j import GraphDatabase
|
||||
try:
|
||||
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))
|
||||
with driver.session() as session:
|
||||
result = session.run('MATCH (p:PriceTier) RETURN count(p) as count')
|
||||
price_tiers = result.single()['count']
|
||||
if price_tiers == 0:
|
||||
print('No data found in Neo4j - migration needed')
|
||||
exit(1)
|
||||
else:
|
||||
print(f'Found {price_tiers} price tiers - migration appears complete')
|
||||
driver.close()
|
||||
except Exception as e:
|
||||
print(f'Error checking migration status: {e}')
|
||||
exit(1)
|
||||
" 2>/dev/null; then
|
||||
print_warning "No data found in Neo4j - running migration..."
|
||||
|
||||
# Run migration
|
||||
if python3 migrate_postgres_to_neo4j.py; then
|
||||
print_status "Migration completed successfully"
|
||||
else
|
||||
print_error "Migration failed"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
print_status "Migration appears to be complete"
|
||||
fi
|
||||
|
||||
# Set environment variables
|
||||
export NEO4J_URI="bolt://localhost:7687"
|
||||
export NEO4J_USER="neo4j"
|
||||
export NEO4J_PASSWORD="password"
|
||||
export POSTGRES_HOST="localhost"
|
||||
export POSTGRES_PORT="5432"
|
||||
export POSTGRES_USER="pipeline_admin"
|
||||
export POSTGRES_PASSWORD="secure_pipeline_2024"
|
||||
export POSTGRES_DB="dev_pipeline"
|
||||
export CLAUDE_API_KEY="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"
|
||||
|
||||
print_status "Environment variables set"
|
||||
|
||||
# Create logs directory if it doesn't exist
|
||||
mkdir -p logs
|
||||
|
||||
# Start the migrated application
|
||||
print_info "Starting Enhanced Tech Stack Selector (Migrated Version)..."
|
||||
print_info "Server will be available at: http://localhost:8002"
|
||||
print_info "API documentation: http://localhost:8002/docs"
|
||||
print_info "Health check: http://localhost:8002/health"
|
||||
print_info "Diagnostics: http://localhost:8002/api/diagnostics"
|
||||
print_info ""
|
||||
print_info "Press Ctrl+C to stop the server"
|
||||
print_info ""
|
||||
|
||||
# Start the application
|
||||
cd src
|
||||
python3 main_migrated.py
|
||||
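Once start.sh hands off to `main_migrated.py`, the URLs it prints can be used to confirm the service came up. A small sketch, assuming the port 8002 and `/health` path shown above; the shape of the health response is not specified here, so only the status code is inspected:

```python
import requests

resp = requests.get("http://localhost:8002/health", timeout=5)
print(resp.status_code)  # expect 200 once the service has started
print(resp.text)         # body left uninterpreted; its fields are service-defined
```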
90
services/tech-stack-selector/test_domains.py
Normal file
@ -0,0 +1,90 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script to verify domain recommendations are working properly
|
||||
"""
|
||||
|
||||
import requests
|
||||
import json
|
||||
|
||||
def test_domain_recommendations():
|
||||
"""Test recommendations for different domains"""
|
||||
|
||||
base_url = "http://localhost:8002"
|
||||
|
||||
# Test domains
|
||||
test_domains = [
|
||||
"saas",
|
||||
"SaaS", # Test case sensitivity
|
||||
"ecommerce",
|
||||
"E-commerce", # Test case sensitivity and hyphen
|
||||
"healthcare",
|
||||
"finance",
|
||||
"gaming",
|
||||
"education",
|
||||
"media",
|
||||
"iot",
|
||||
"social",
|
||||
"elearning",
|
||||
"realestate",
|
||||
"travel",
|
||||
"manufacturing",
|
||||
"personal",
|
||||
"startup",
|
||||
"enterprise"
|
||||
]
|
||||
|
||||
print("🧪 Testing Domain Recommendations")
|
||||
print("=" * 50)
|
||||
|
||||
for domain in test_domains:
|
||||
print(f"\n🔍 Testing domain: '{domain}'")
|
||||
|
||||
# Test recommendation endpoint
|
||||
payload = {
|
||||
"domain": domain,
|
||||
"budget": 900.0
|
||||
}
|
||||
|
||||
try:
|
||||
response = requests.post(f"{base_url}/recommend/best", json=payload, timeout=10)
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
recommendations = data.get('recommendations', [])
|
||||
|
||||
print(f" ✅ Status: {response.status_code}")
|
||||
print(f" 📝 Response: {recommendations}")
|
||||
print(f" 📊 Recommendations: {len(recommendations)}")
|
||||
|
||||
if recommendations:
|
||||
print(f" 🏆 Top recommendation: {recommendations[0]['stack_name']}")
|
||||
print(f" 💰 Cost: ${recommendations[0]['monthly_cost']}")
|
||||
print(f" 🎯 Domains: {recommendations[0].get('recommended_domains', 'N/A')}")
|
||||
else:
|
||||
print(" ⚠️ No recommendations found")
|
||||
else:
|
||||
print(f" ❌ Error: {response.status_code}")
|
||||
print(f" 📝 Response: {response.text}")
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
print(f" ❌ Request failed: {e}")
|
||||
except Exception as e:
|
||||
print(f" ❌ Unexpected error: {e}")
|
||||
|
||||
# Test available domains endpoint
|
||||
print(f"\n🌐 Testing available domains endpoint")
|
||||
try:
|
||||
response = requests.get(f"{base_url}/api/domains", timeout=10)
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
domains = data.get('domains', [])
|
||||
print(f" ✅ Available domains: {len(domains)}")
|
||||
for domain in domains:
|
||||
print(f" - {domain['domain_name']} ({domain['project_scale']}, {domain['team_experience_level']})")
|
||||
else:
|
||||
print(f" ❌ Error: {response.status_code}")
|
||||
except Exception as e:
|
||||
print(f" ❌ Error: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_domain_recommendations()
|
||||
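The test above assumes a specific response shape from `POST /recommend/best`: a top-level `recommendations` list whose items expose at least `stack_name` and `monthly_cost`, plus an optional `recommended_domains`. A minimal illustration of that assumed shape (all values invented):

```python
assumed_response = {
    "recommendations": [
        {
            "stack_name": "Example Startup Stack",   # invented value
            "monthly_cost": 450.0,                   # invented value
            "recommended_domains": ["saas", "startup"],
        }
    ]
}

# These are exactly the fields the test reads:
top = assumed_response["recommendations"][0]
print(top["stack_name"], top["monthly_cost"], top.get("recommended_domains", "N/A"))
```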
100
services/tech-stack-selector/test_migration.py
Normal file
@ -0,0 +1,100 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script to verify PostgreSQL migration is working properly
|
||||
"""
|
||||
|
||||
import psycopg2
|
||||
import sys
|
||||
|
||||
def test_database_migration():
|
||||
"""Test if the database migration was successful"""
|
||||
|
||||
try:
|
||||
# Connect to PostgreSQL
|
||||
conn = psycopg2.connect(
|
||||
host='localhost',
|
||||
port=5432,
|
||||
user='pipeline_admin',
|
||||
password='secure_pipeline_2024',
|
||||
database='dev_pipeline'
|
||||
)
|
||||
cursor = conn.cursor()
|
||||
|
||||
print("🧪 Testing PostgreSQL Migration")
|
||||
print("=" * 40)
|
||||
|
||||
# Test tables exist
|
||||
tables_to_check = [
|
||||
'price_tiers',
|
||||
'frontend_technologies',
|
||||
'backend_technologies',
|
||||
'database_technologies',
|
||||
'cloud_technologies',
|
||||
'testing_technologies',
|
||||
'mobile_technologies',
|
||||
'devops_technologies',
|
||||
'ai_ml_technologies',
|
||||
'tools',
|
||||
'price_based_stacks',
|
||||
'stack_recommendations'
|
||||
]
|
||||
|
||||
print("📋 Checking table existence:")
|
||||
for table in tables_to_check:
|
||||
cursor.execute(f"""
|
||||
SELECT EXISTS (
|
||||
SELECT FROM information_schema.tables
|
||||
WHERE table_schema = 'public'
|
||||
AND table_name = '{table}'
|
||||
);
|
||||
""")
|
||||
exists = cursor.fetchone()[0]
|
||||
status = "✅" if exists else "❌"
|
||||
print(f" {status} {table}")
|
||||
|
||||
print("\n📊 Checking data counts:")
|
||||
for table in tables_to_check:
|
||||
try:
|
||||
cursor.execute(f'SELECT COUNT(*) FROM {table};')
|
||||
count = cursor.fetchone()[0]
|
||||
print(f" {table}: {count} records")
|
||||
except Exception as e:
|
||||
print(f" {table}: Error - {e}")
|
||||
|
||||
# Test specific data
|
||||
print("\n🔍 Testing specific data:")
|
||||
|
||||
# Test price tiers
|
||||
cursor.execute("SELECT tier_name, min_price_usd, max_price_usd FROM price_tiers ORDER BY min_price_usd;")
|
||||
price_tiers = cursor.fetchall()
|
||||
print(f" Price tiers: {len(price_tiers)}")
|
||||
for tier in price_tiers:
|
||||
print(f" - {tier[0]}: ${tier[1]} - ${tier[2]}")
|
||||
|
||||
# Test stack recommendations
|
||||
cursor.execute("SELECT business_domain, COUNT(*) FROM stack_recommendations GROUP BY business_domain;")
|
||||
domains = cursor.fetchall()
|
||||
print(f" Domain recommendations: {len(domains)}")
|
||||
for domain in domains:
|
||||
print(f" - {domain[0]}: {domain[1]} recommendations")
|
||||
|
||||
# Test tools
|
||||
cursor.execute("SELECT category, COUNT(*) FROM tools GROUP BY category;")
|
||||
tool_categories = cursor.fetchall()
|
||||
print(f" Tool categories: {len(tool_categories)}")
|
||||
for category in tool_categories:
|
||||
print(f" - {category[0]}: {category[1]} tools")
|
||||
|
||||
cursor.close()
|
||||
conn.close()
|
||||
|
||||
print("\n✅ Database migration test completed successfully!")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n❌ Database migration test failed: {e}")
|
||||
return False
|
||||
|
||||
if __name__ == "__main__":
|
||||
success = test_database_migration()
|
||||
sys.exit(0 if success else 1)
|
||||
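test_migration.py interpolates table names directly into its SQL strings, which is harmless for a fixed, hard-coded list but worth keeping out of habit. A parameterized variant of the existence check, as a sketch that assumes the same connection and schema as the test:

```python
def table_exists(cursor, table_name: str) -> bool:
    """Same information_schema check as in the test, with the table name bound as a parameter."""
    cursor.execute(
        """
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = 'public' AND table_name = %s
        );
        """,
        (table_name,),
    )
    return cursor.fetchone()[0]
```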
@ -3,7 +3,7 @@ FROM node:18-alpine
|
||||
WORKDIR /app
|
||||
|
||||
# Install curl for health checks
|
||||
RUN apk add --no-cache curl
|
||||
RUN apk add --no-cache curl python3 py3-pip py3-virtualenv
|
||||
|
||||
# Ensure shared pipeline schema can be applied automatically when missing
|
||||
ENV APPLY_SCHEMAS_SQL=true
|
||||
@ -17,6 +17,15 @@ RUN npm install
|
||||
# Copy source code
|
||||
COPY . .
|
||||
|
||||
# Setup Python venv and install AI dependencies if present
|
||||
RUN if [ -f "/app/ai/requirements.txt" ]; then \
|
||||
python3 -m venv /opt/venv && \
|
||||
/opt/venv/bin/pip install --no-cache-dir -r /app/ai/requirements.txt; \
|
||||
fi
|
||||
|
||||
# Ensure venv binaries are on PATH
|
||||
ENV PATH="/opt/venv/bin:${PATH}"
|
||||
|
||||
# Create non-root user
|
||||
RUN addgroup -g 1001 -S nodejs
|
||||
RUN adduser -S template-manager -u 1001
|
||||
@ -26,11 +35,11 @@ RUN chown -R template-manager:nodejs /app
|
||||
USER template-manager
|
||||
|
||||
# Expose port
|
||||
EXPOSE 8009
|
||||
EXPOSE 8009 8013
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
|
||||
CMD curl -f http://localhost:8009/health || exit 1
|
||||
CMD curl -f http://localhost:8009/health || curl -f http://localhost:8013/health || exit 1
|
||||
|
||||
# Start the application
|
||||
CMD ["npm", "start"]
|
||||
CMD ["/bin/sh", "/app/start.sh"]
|
||||
@ -1,121 +0,0 @@
|
||||
require('dotenv').config();
|
||||
const database = require('./src/config/database');
|
||||
|
||||
const SAMPLE_TEMPLATES = [
|
||||
{
|
||||
type: 'blog_platform',
|
||||
title: 'Blog Platform',
|
||||
description: 'Modern blog with content management, comments, and SEO',
|
||||
icon: '📝',
|
||||
category: 'Content',
|
||||
gradient: 'from-purple-50 to-purple-100',
|
||||
border: 'border-purple-200',
|
||||
text: 'text-purple-900',
|
||||
subtext: 'text-purple-700'
|
||||
},
|
||||
{
|
||||
type: 'task_manager',
|
||||
title: 'Task Manager',
|
||||
description: 'Project and task management with team collaboration',
|
||||
icon: '✅',
|
||||
category: 'Productivity',
|
||||
gradient: 'from-green-50 to-green-100',
|
||||
border: 'border-green-200',
|
||||
text: 'text-green-900',
|
||||
subtext: 'text-green-700'
|
||||
},
|
||||
{
|
||||
type: 'analytics_dashboard',
|
||||
title: 'Analytics Dashboard',
|
||||
description: 'Data visualization and business intelligence platform',
|
||||
icon: '📊',
|
||||
category: 'Business',
|
||||
gradient: 'from-blue-50 to-blue-100',
|
||||
border: 'border-blue-200',
|
||||
text: 'text-blue-900',
|
||||
subtext: 'text-blue-700'
|
||||
},
|
||||
{
|
||||
type: 'social_network',
|
||||
title: 'Social Network',
|
||||
description: 'Connect with friends, share content, and build communities',
|
||||
icon: '🌐',
|
||||
category: 'Social',
|
||||
gradient: 'from-pink-50 to-pink-100',
|
||||
border: 'border-pink-200',
|
||||
text: 'text-pink-900',
|
||||
subtext: 'text-pink-700'
|
||||
},
|
||||
{
|
||||
type: 'learning_platform',
|
||||
title: 'Learning Platform',
|
||||
description: 'Online courses, quizzes, and educational content',
|
||||
icon: '🎓',
|
||||
category: 'Education',
|
||||
gradient: 'from-yellow-50 to-yellow-100',
|
||||
border: 'border-yellow-200',
|
||||
text: 'text-yellow-900',
|
||||
subtext: 'text-yellow-700'
|
||||
}
|
||||
];
|
||||
|
||||
async function addSampleTemplates() {
|
||||
const client = await database.connect();
|
||||
|
||||
try {
|
||||
await client.query('BEGIN');
|
||||
|
||||
console.log('🚀 Adding sample templates...');
|
||||
|
||||
for (const template of SAMPLE_TEMPLATES) {
|
||||
const query = `
|
||||
INSERT INTO templates (
|
||||
id, type, title, description, icon, category,
|
||||
gradient, border, text, subtext, is_active, created_at, updated_at
|
||||
) VALUES (
|
||||
gen_random_uuid(), $1, $2, $3, $4, $5, $6, $7, $8, $9, true, NOW(), NOW()
|
||||
)
|
||||
`;
|
||||
|
||||
const values = [
|
||||
template.type,
|
||||
template.title,
|
||||
template.description,
|
||||
template.icon,
|
||||
template.category,
|
||||
template.gradient,
|
||||
template.border,
|
||||
template.text,
|
||||
template.subtext
|
||||
];
|
||||
|
||||
await client.query(query, values);
|
||||
console.log(`✅ Added template: ${template.title}`);
|
||||
}
|
||||
|
||||
await client.query('COMMIT');
|
||||
console.log('🎉 Sample templates added successfully!');
|
||||
|
||||
} catch (error) {
|
||||
await client.query('ROLLBACK');
|
||||
console.error('❌ Error adding sample templates:', error.message);
|
||||
throw error;
|
||||
} finally {
|
||||
client.release();
|
||||
}
|
||||
}
|
||||
|
||||
// Run if called directly
|
||||
if (require.main === module) {
|
||||
addSampleTemplates()
|
||||
.then(() => {
|
||||
console.log('🎉 Process completed!');
|
||||
process.exit(0);
|
||||
})
|
||||
.catch((error) => {
|
||||
console.error('💥 Process failed:', error.message);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
|
||||
module.exports = { addSampleTemplates };
|
||||
12
services/template-manager/ai/requirements.txt
Normal file
@ -0,0 +1,12 @@
# Python dependencies for AI features
asyncpg==0.30.0
anthropic>=0.34.0
loguru==0.7.2
requests==2.31.0
python-dotenv==1.0.0
neo4j==5.15.0
fastapi==0.104.1
uvicorn==0.24.0
pydantic==2.11.9
httpx>=0.25.0
2031
services/template-manager/ai/tech_stack_service.py
Normal file
File diff suppressed because it is too large
552
services/template-manager/package-lock.json
generated
@ -8,6 +8,7 @@
|
||||
"name": "template-manager",
|
||||
"version": "1.0.0",
|
||||
"dependencies": {
|
||||
"@anthropic-ai/sdk": "^0.24.3",
|
||||
"axios": "^1.12.2",
|
||||
"cors": "^2.8.5",
|
||||
"dotenv": "^16.0.3",
|
||||
@ -16,10 +17,12 @@
|
||||
"joi": "^17.7.0",
|
||||
"jsonwebtoken": "^9.0.2",
|
||||
"morgan": "^1.10.0",
|
||||
"neo4j-driver": "^5.15.0",
|
||||
"pg": "^8.8.0",
|
||||
"redis": "^4.6.0",
|
||||
"socket.io": "^4.8.1",
|
||||
"uuid": "^9.0.0"
|
||||
"uuid": "^9.0.0",
|
||||
"winston": "^3.11.0"
|
||||
},
|
||||
"devDependencies": {
|
||||
"nodemon": "^2.0.22"
|
||||
@ -28,6 +31,57 @@
|
||||
"node": ">=18.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk": {
|
||||
"version": "0.24.3",
|
||||
"resolved": "https://registry.npmjs.org/@anthropic-ai/sdk/-/sdk-0.24.3.tgz",
|
||||
"integrity": "sha512-916wJXO6T6k8R6BAAcLhLPv/pnLGy7YSEBZXZ1XTFbLcTZE8oTy3oDW9WJf9KKZwMvVcePIfoTSvzXHRcGxkQQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@types/node": "^18.11.18",
|
||||
"@types/node-fetch": "^2.6.4",
|
||||
"abort-controller": "^3.0.0",
|
||||
"agentkeepalive": "^4.2.1",
|
||||
"form-data-encoder": "1.7.2",
|
||||
"formdata-node": "^4.3.2",
|
||||
"node-fetch": "^2.6.7",
|
||||
"web-streams-polyfill": "^3.2.1"
|
||||
}
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk/node_modules/@types/node": {
|
||||
"version": "18.19.127",
|
||||
"resolved": "https://registry.npmjs.org/@types/node/-/node-18.19.127.tgz",
|
||||
"integrity": "sha512-gSjxjrnKXML/yo0BO099uPixMqfpJU0TKYjpfLU7TrtA2WWDki412Np/RSTPRil1saKBhvVVKzVx/p/6p94nVA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"undici-types": "~5.26.4"
|
||||
}
|
||||
},
|
||||
"node_modules/@anthropic-ai/sdk/node_modules/undici-types": {
|
||||
"version": "5.26.5",
|
||||
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
|
||||
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/@colors/colors": {
|
||||
"version": "1.6.0",
|
||||
"resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.6.0.tgz",
|
||||
"integrity": "sha512-Ir+AOibqzrIsL6ajt3Rz3LskB7OiMVHqltZmspbW/TJuTVuyOMirVqAkjfY6JISiLHgyNqicAC8AyHHGzNd/dA==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=0.1.90"
|
||||
}
|
||||
},
|
||||
"node_modules/@dabh/diagnostics": {
|
||||
"version": "2.0.3",
|
||||
"resolved": "https://registry.npmjs.org/@dabh/diagnostics/-/diagnostics-2.0.3.tgz",
|
||||
"integrity": "sha512-hrlQOIi7hAfzsMqlGSFyVucrx38O+j6wiGOf//H2ecvIEqYN4ADBSS2iLMh5UFyDunCNniUIPk/q3riFv45xRA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"colorspace": "1.1.x",
|
||||
"enabled": "2.0.x",
|
||||
"kuler": "^2.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/@hapi/hoek": {
|
||||
"version": "9.3.0",
|
||||
"resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-9.3.0.tgz",
|
||||
@ -147,6 +201,34 @@
|
||||
"undici-types": "~7.10.0"
|
||||
}
|
||||
},
|
||||
"node_modules/@types/node-fetch": {
|
||||
"version": "2.6.13",
|
||||
"resolved": "https://registry.npmjs.org/@types/node-fetch/-/node-fetch-2.6.13.tgz",
|
||||
"integrity": "sha512-QGpRVpzSaUs30JBSGPjOg4Uveu384erbHBoT1zeONvyCfwQxIkUshLAOqN/k9EjGviPRmWTTe6aH2qySWKTVSw==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@types/node": "*",
|
||||
"form-data": "^4.0.4"
|
||||
}
|
||||
},
|
||||
"node_modules/@types/triple-beam": {
|
||||
"version": "1.3.5",
|
||||
"resolved": "https://registry.npmjs.org/@types/triple-beam/-/triple-beam-1.3.5.tgz",
|
||||
"integrity": "sha512-6WaYesThRMCl19iryMYP7/x2OVgCtbIVflDGFpWnb9irXI3UjYE4AzmYuiUKY1AJstGijoY+MgUszMgRxIYTYw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/abort-controller": {
|
||||
"version": "3.0.0",
|
||||
"resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz",
|
||||
"integrity": "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"event-target-shim": "^5.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=6.5"
|
||||
}
|
||||
},
|
||||
"node_modules/accepts": {
|
||||
"version": "1.3.8",
|
||||
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
|
||||
@ -160,6 +242,18 @@
|
||||
"node": ">= 0.6"
|
||||
}
|
||||
},
|
||||
"node_modules/agentkeepalive": {
|
||||
"version": "4.6.0",
|
||||
"resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-4.6.0.tgz",
|
||||
"integrity": "sha512-kja8j7PjmncONqaTsB8fQ+wE2mSU2DJ9D4XKoJ5PFWIdRMa6SLSN1ff4mOr4jCbfRSsxR4keIiySJU0N9T5hIQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"humanize-ms": "^1.2.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 8.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/anymatch": {
|
||||
"version": "3.1.3",
|
||||
"resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.3.tgz",
|
||||
@ -180,6 +274,12 @@
|
||||
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/async": {
|
||||
"version": "3.2.6",
|
||||
"resolved": "https://registry.npmjs.org/async/-/async-3.2.6.tgz",
|
||||
"integrity": "sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/asynckit": {
|
||||
"version": "0.4.0",
|
||||
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
|
||||
@ -204,6 +304,26 @@
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/base64-js": {
|
||||
"version": "1.5.1",
|
||||
"resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz",
|
||||
"integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==",
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/feross"
|
||||
},
|
||||
{
|
||||
"type": "patreon",
|
||||
"url": "https://www.patreon.com/feross"
|
||||
},
|
||||
{
|
||||
"type": "consulting",
|
||||
"url": "https://feross.org/support"
|
||||
}
|
||||
],
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/base64id": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/base64id/-/base64id-2.0.0.tgz",
|
||||
@ -292,6 +412,30 @@
|
||||
"node": ">=8"
|
||||
}
|
||||
},
|
||||
"node_modules/buffer": {
|
||||
"version": "6.0.3",
|
||||
"resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz",
|
||||
"integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==",
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/feross"
|
||||
},
|
||||
{
|
||||
"type": "patreon",
|
||||
"url": "https://www.patreon.com/feross"
|
||||
},
|
||||
{
|
||||
"type": "consulting",
|
||||
"url": "https://feross.org/support"
|
||||
}
|
||||
],
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"base64-js": "^1.3.1",
|
||||
"ieee754": "^1.2.1"
|
||||
}
|
||||
},
|
||||
"node_modules/buffer-equal-constant-time": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz",
|
||||
@ -370,6 +514,51 @@
|
||||
"node": ">=0.10.0"
|
||||
}
|
||||
},
|
||||
"node_modules/color": {
|
||||
"version": "3.2.1",
|
||||
"resolved": "https://registry.npmjs.org/color/-/color-3.2.1.tgz",
|
||||
"integrity": "sha512-aBl7dZI9ENN6fUGC7mWpMTPNHmWUSNan9tuWN6ahh5ZLNk9baLJOnSMlrQkHcrfFgz2/RigjUVAjdx36VcemKA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"color-convert": "^1.9.3",
|
||||
"color-string": "^1.6.0"
|
||||
}
|
||||
},
|
||||
"node_modules/color-convert": {
|
||||
"version": "1.9.3",
|
||||
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz",
|
||||
"integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"color-name": "1.1.3"
|
||||
}
|
||||
},
|
||||
"node_modules/color-name": {
|
||||
"version": "1.1.3",
|
||||
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz",
|
||||
"integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/color-string": {
|
||||
"version": "1.9.1",
|
||||
"resolved": "https://registry.npmjs.org/color-string/-/color-string-1.9.1.tgz",
|
||||
"integrity": "sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"color-name": "^1.0.0",
|
||||
"simple-swizzle": "^0.2.2"
|
||||
}
|
||||
},
|
||||
"node_modules/colorspace": {
|
||||
"version": "1.1.4",
|
||||
"resolved": "https://registry.npmjs.org/colorspace/-/colorspace-1.1.4.tgz",
|
||||
"integrity": "sha512-BgvKJiuVu1igBUF2kEjRCZXol6wiiGbY5ipL/oVPwm0BL9sIpMIzM8IK7vwuxIIzOXMV3Ey5w+vxhm0rR/TN8w==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"color": "^3.1.3",
|
||||
"text-hex": "1.0.x"
|
||||
}
|
||||
},
|
||||
"node_modules/combined-stream": {
|
||||
"version": "1.0.8",
|
||||
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
|
||||
@ -516,6 +705,12 @@
|
||||
"integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/enabled": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/enabled/-/enabled-2.0.0.tgz",
|
||||
"integrity": "sha512-AKrN98kuwOzMIdAizXGI86UFBoo26CL21UM763y1h/GMSJ4/OHU9k2YlsmBpyScFo/wbLzWQJBMCW4+IO3/+OQ==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/encodeurl": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz",
|
||||
@ -646,6 +841,15 @@
|
||||
"node": ">= 0.6"
|
||||
}
|
||||
},
|
||||
"node_modules/event-target-shim": {
|
||||
"version": "5.0.1",
|
||||
"resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz",
|
||||
"integrity": "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=6"
|
||||
}
|
||||
},
|
||||
"node_modules/express": {
|
||||
"version": "4.21.2",
|
||||
"resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz",
|
||||
@ -692,6 +896,12 @@
|
||||
"url": "https://opencollective.com/express"
|
||||
}
|
||||
},
|
||||
"node_modules/fecha": {
|
||||
"version": "4.2.3",
|
||||
"resolved": "https://registry.npmjs.org/fecha/-/fecha-4.2.3.tgz",
|
||||
"integrity": "sha512-OP2IUU6HeYKJi3i0z4A19kHMQoLVs4Hc+DPqqxI2h/DPZHTm/vjsfC6P0b4jCMy14XizLBqvndQ+UilD7707Jw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/fill-range": {
|
||||
"version": "7.1.1",
|
||||
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
|
||||
@ -723,6 +933,12 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/fn.name": {
|
||||
"version": "1.1.0",
|
||||
"resolved": "https://registry.npmjs.org/fn.name/-/fn.name-1.1.0.tgz",
|
||||
"integrity": "sha512-GRnmB5gPyJpAhTQdSZTSp9uaPSvl09KoYcMQtsB9rQoOmzs9dH6ffeccH+Z+cv6P68Hu5bC6JjRh4Ah/mHSNRw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/follow-redirects": {
|
||||
"version": "1.15.11",
|
||||
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
|
||||
@ -759,6 +975,34 @@
|
||||
"node": ">= 6"
|
||||
}
|
||||
},
|
||||
"node_modules/form-data-encoder": {
|
||||
"version": "1.7.2",
|
||||
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.7.2.tgz",
|
||||
"integrity": "sha512-qfqtYan3rxrnCk1VYaA4H+Ms9xdpPqvLZa6xmMgFvhO32x7/3J/ExcTd6qpxM0vH2GdMI+poehyBZvqfMTto8A==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/formdata-node": {
|
||||
"version": "4.4.1",
|
||||
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.4.1.tgz",
|
||||
"integrity": "sha512-0iirZp3uVDjVGt9p49aTaqjk84TrglENEDuqfdlZQ1roC9CWlPk6Avf8EEnZNcAqPonwkG35x4n3ww/1THYAeQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"node-domexception": "1.0.0",
|
||||
"web-streams-polyfill": "4.0.0-beta.3"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 12.20"
|
||||
}
|
||||
},
|
||||
"node_modules/formdata-node/node_modules/web-streams-polyfill": {
|
||||
"version": "4.0.0-beta.3",
|
||||
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-4.0.0-beta.3.tgz",
|
||||
"integrity": "sha512-QW95TCTaHmsYfHDybGMwO5IJIM93I/6vTRk+daHTWFPhwh+C8Cg7j7XyKrwrj8Ib6vYXe0ocYNrmzY4xAAN6ug==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 14"
|
||||
}
|
||||
},
|
||||
"node_modules/forwarded": {
|
||||
"version": "0.2.0",
|
||||
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
|
||||
@ -946,6 +1190,15 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/humanize-ms": {
|
||||
"version": "1.2.1",
|
||||
"resolved": "https://registry.npmjs.org/humanize-ms/-/humanize-ms-1.2.1.tgz",
|
||||
"integrity": "sha512-Fl70vYtsAFb/C06PTS9dZBo7ihau+Tu/DNCk/OyHhea07S+aeMWpFFkUaXRa8fI+ScZbEI8dfSxwY7gxZ9SAVQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"ms": "^2.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/iconv-lite": {
|
||||
"version": "0.4.24",
|
||||
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
|
||||
@ -958,6 +1211,26 @@
|
||||
"node": ">=0.10.0"
|
||||
}
|
||||
},
|
||||
"node_modules/ieee754": {
|
||||
"version": "1.2.1",
|
||||
"resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz",
|
||||
"integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==",
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/feross"
|
||||
},
|
||||
{
|
||||
"type": "patreon",
|
||||
"url": "https://www.patreon.com/feross"
|
||||
},
|
||||
{
|
||||
"type": "consulting",
|
||||
"url": "https://feross.org/support"
|
||||
}
|
||||
],
|
||||
"license": "BSD-3-Clause"
|
||||
},
|
||||
"node_modules/ignore-by-default": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/ignore-by-default/-/ignore-by-default-1.0.1.tgz",
|
||||
@ -980,6 +1253,12 @@
|
||||
"node": ">= 0.10"
|
||||
}
|
||||
},
|
||||
"node_modules/is-arrayish": {
|
||||
"version": "0.3.4",
|
||||
"resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.3.4.tgz",
|
||||
"integrity": "sha512-m6UrgzFVUYawGBh1dUsWR5M2Clqic9RVXC/9f8ceNlv2IcO9j9J/z8UoCLPqtsPBFNzEpfR3xftohbfqDx8EQA==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/is-binary-path": {
|
||||
"version": "2.1.0",
|
||||
"resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
|
||||
@ -1026,6 +1305,18 @@
|
||||
"node": ">=0.12.0"
|
||||
}
|
||||
},
|
||||
"node_modules/is-stream": {
|
||||
"version": "2.0.1",
|
||||
"resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.1.tgz",
|
||||
"integrity": "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=8"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/joi": {
|
||||
"version": "17.13.3",
|
||||
"resolved": "https://registry.npmjs.org/joi/-/joi-17.13.3.tgz",
|
||||
@ -1100,6 +1391,12 @@
|
||||
"safe-buffer": "^5.0.1"
|
||||
}
|
||||
},
|
||||
"node_modules/kuler": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/kuler/-/kuler-2.0.0.tgz",
|
||||
"integrity": "sha512-Xq9nH7KlWZmXAtodXDDRE7vs6DU1gTU8zYDHDiWLSip45Egwq3plLHzPn27NgvzL2r1LMPC1vdqh98sQxtqj4A==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/lodash.includes": {
|
||||
"version": "4.3.0",
|
||||
"resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz",
|
||||
@ -1142,6 +1439,29 @@
|
||||
"integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/logform": {
|
||||
"version": "2.7.0",
|
||||
"resolved": "https://registry.npmjs.org/logform/-/logform-2.7.0.tgz",
|
||||
"integrity": "sha512-TFYA4jnP7PVbmlBIfhlSe+WKxs9dklXMTEGcBCIvLhE/Tn3H6Gk1norupVW7m5Cnd4bLcr08AytbyV/xj7f/kQ==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@colors/colors": "1.6.0",
|
||||
"@types/triple-beam": "^1.3.2",
|
||||
"fecha": "^4.2.0",
|
||||
"ms": "^2.1.1",
|
||||
"safe-stable-stringify": "^2.3.1",
|
||||
"triple-beam": "^1.3.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/logform/node_modules/ms": {
|
||||
"version": "2.1.3",
|
||||
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
|
||||
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/math-intrinsics": {
|
||||
"version": "1.1.0",
|
||||
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
|
||||
@ -1267,6 +1587,74 @@
|
||||
"node": ">= 0.6"
|
||||
}
|
||||
},
|
||||
"node_modules/neo4j-driver": {
|
||||
"version": "5.28.2",
|
||||
"resolved": "https://registry.npmjs.org/neo4j-driver/-/neo4j-driver-5.28.2.tgz",
|
||||
"integrity": "sha512-nix4Canllf7Tl4FZL9sskhkKYoCp40fg7VsknSRTRgbm1JaE2F1Ej/c2nqlM06nqh3WrkI0ww3taVB+lem7w7w==",
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"neo4j-driver-bolt-connection": "5.28.2",
|
||||
"neo4j-driver-core": "5.28.2",
|
||||
"rxjs": "^7.8.2"
|
||||
}
|
||||
},
|
||||
"node_modules/neo4j-driver-bolt-connection": {
|
||||
"version": "5.28.2",
|
||||
"resolved": "https://registry.npmjs.org/neo4j-driver-bolt-connection/-/neo4j-driver-bolt-connection-5.28.2.tgz",
|
||||
"integrity": "sha512-dEX06iNPEo9iyCb0NssxJeA3REN+H+U/Y0MdAjJBEoil4tGz5PxBNZL6/+noQnu2pBJT5wICepakXCrN3etboA==",
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"buffer": "^6.0.3",
|
||||
"neo4j-driver-core": "5.28.2",
|
||||
"string_decoder": "^1.3.0"
|
||||
}
|
||||
},
|
||||
"node_modules/neo4j-driver-core": {
|
||||
"version": "5.28.2",
|
||||
"resolved": "https://registry.npmjs.org/neo4j-driver-core/-/neo4j-driver-core-5.28.2.tgz",
|
||||
"integrity": "sha512-fBMk4Ox379oOz4FcfdS6ZOxsTEypjkcAelNm9LcWQZ981xCdOnGMzlWL+qXECvL0qUwRfmZxoqbDlJzuzFrdvw==",
|
||||
"license": "Apache-2.0"
|
||||
},
|
||||
"node_modules/node-domexception": {
|
||||
"version": "1.0.0",
|
||||
"resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
|
||||
"integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
|
||||
"deprecated": "Use your platform's native DOMException instead",
|
||||
"funding": [
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/jimmywarting"
|
||||
},
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://paypal.me/jimmywarting"
|
||||
}
|
||||
],
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=10.5.0"
|
||||
}
|
||||
},
|
||||
"node_modules/node-fetch": {
|
||||
"version": "2.7.0",
|
||||
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
|
||||
"integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"whatwg-url": "^5.0.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": "4.x || >=6.0.0"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"encoding": "^0.1.0"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"encoding": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/nodemon": {
|
||||
"version": "2.0.22",
|
||||
"resolved": "https://registry.npmjs.org/nodemon/-/nodemon-2.0.22.tgz",
|
||||
@ -1365,6 +1753,15 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/one-time": {
|
||||
"version": "1.0.0",
|
||||
"resolved": "https://registry.npmjs.org/one-time/-/one-time-1.0.0.tgz",
|
||||
"integrity": "sha512-5DXOiRKwuSEcQ/l0kGCF6Q3jcADFv5tSmRaJck/OqkVFcOzutB134KRSfF0xDrL39MNnqxbHBbUUcjZIhTgb2g==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"fn.name": "1.x.x"
|
||||
}
|
||||
},
|
||||
"node_modules/parseurl": {
|
||||
"version": "1.3.3",
|
||||
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
|
||||
@ -1586,6 +1983,20 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/readable-stream": {
|
||||
"version": "3.6.2",
|
||||
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz",
|
||||
"integrity": "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"inherits": "^2.0.3",
|
||||
"string_decoder": "^1.1.1",
|
||||
"util-deprecate": "^1.0.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 6"
|
||||
}
|
||||
},
|
||||
"node_modules/readdirp": {
|
||||
"version": "3.6.0",
|
||||
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz",
|
||||
@ -1616,6 +2027,15 @@
|
||||
"@redis/time-series": "1.1.0"
|
||||
}
|
||||
},
|
||||
"node_modules/rxjs": {
|
||||
"version": "7.8.2",
|
||||
"resolved": "https://registry.npmjs.org/rxjs/-/rxjs-7.8.2.tgz",
|
||||
"integrity": "sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==",
|
||||
"license": "Apache-2.0",
|
||||
"dependencies": {
|
||||
"tslib": "^2.1.0"
|
||||
}
|
||||
},
|
||||
"node_modules/safe-buffer": {
|
||||
"version": "5.2.1",
|
||||
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
|
||||
@ -1636,6 +2056,15 @@
|
||||
],
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/safe-stable-stringify": {
|
||||
"version": "2.5.0",
|
||||
"resolved": "https://registry.npmjs.org/safe-stable-stringify/-/safe-stable-stringify-2.5.0.tgz",
|
||||
"integrity": "sha512-b3rppTKm9T+PsVCBEOUR46GWI7fdOs00VKZ1+9c1EWDaDMvjQc6tUwuFyIprgGgTcWoVHSKrU8H31ZHA2e0RHA==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=10"
|
||||
}
|
||||
},
|
||||
"node_modules/safer-buffer": {
|
||||
"version": "2.1.2",
|
||||
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
|
||||
@ -1784,6 +2213,15 @@
|
||||
"url": "https://github.com/sponsors/ljharb"
|
||||
}
|
||||
},
|
||||
"node_modules/simple-swizzle": {
|
||||
"version": "0.2.4",
|
||||
"resolved": "https://registry.npmjs.org/simple-swizzle/-/simple-swizzle-0.2.4.tgz",
|
||||
"integrity": "sha512-nAu1WFPQSMNr2Zn9PGSZK9AGn4t/y97lEm+MXTtUDwfP0ksAIX4nO+6ruD9Jwut4C49SB1Ws+fbXsm/yScWOHw==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"is-arrayish": "^0.3.1"
|
||||
}
|
||||
},
|
||||
"node_modules/simple-update-notifier": {
|
||||
"version": "1.1.0",
|
||||
"resolved": "https://registry.npmjs.org/simple-update-notifier/-/simple-update-notifier-1.1.0.tgz",
|
||||
@ -1926,6 +2364,15 @@
|
||||
"node": ">= 10.x"
|
||||
}
|
||||
},
|
||||
"node_modules/stack-trace": {
|
||||
"version": "0.0.10",
|
||||
"resolved": "https://registry.npmjs.org/stack-trace/-/stack-trace-0.0.10.tgz",
|
||||
"integrity": "sha512-KGzahc7puUKkzyMt+IqAep+TVNbKP+k2Lmwhub39m1AsTSkaDutx56aDCo+HLDzf/D26BIHTJWNiTG1KAJiQCg==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": "*"
|
||||
}
|
||||
},
|
||||
"node_modules/statuses": {
|
||||
"version": "2.0.1",
|
||||
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
|
||||
@ -1935,6 +2382,15 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/string_decoder": {
|
||||
"version": "1.3.0",
|
||||
"resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz",
|
||||
"integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"safe-buffer": "~5.2.0"
|
||||
}
|
||||
},
|
||||
"node_modules/supports-color": {
|
||||
"version": "5.5.0",
|
||||
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
|
||||
@ -1948,6 +2404,12 @@
|
||||
"node": ">=4"
|
||||
}
|
||||
},
|
||||
"node_modules/text-hex": {
|
||||
"version": "1.0.0",
|
||||
"resolved": "https://registry.npmjs.org/text-hex/-/text-hex-1.0.0.tgz",
|
||||
"integrity": "sha512-uuVGNWzgJ4yhRaNSiubPY7OjISw4sw4E5Uv0wbjp+OzcbmVU/rsT8ujgcXJhn9ypzsgr5vlzpPqP+MBBKcGvbg==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/to-regex-range": {
|
||||
"version": "5.0.1",
|
||||
"resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
|
||||
@ -1980,6 +2442,27 @@
|
||||
"nodetouch": "bin/nodetouch.js"
|
||||
}
|
||||
},
|
||||
"node_modules/tr46": {
|
||||
"version": "0.0.3",
|
||||
"resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
|
||||
"integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/triple-beam": {
|
||||
"version": "1.4.1",
|
||||
"resolved": "https://registry.npmjs.org/triple-beam/-/triple-beam-1.4.1.tgz",
|
||||
"integrity": "sha512-aZbgViZrg1QNcG+LULa7nhZpJTZSLm/mXnHXnbAbjmN5aSa0y7V+wvv6+4WaBtpISJzThKy+PIPxc1Nq1EJ9mg==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 14.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/tslib": {
|
||||
"version": "2.8.1",
|
||||
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
|
||||
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
|
||||
"license": "0BSD"
|
||||
},
|
||||
"node_modules/type-is": {
|
||||
"version": "1.6.18",
|
||||
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz",
|
||||
@ -2015,6 +2498,12 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/util-deprecate": {
|
||||
"version": "1.0.2",
|
||||
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
|
||||
"integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/utils-merge": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
|
||||
@ -2046,6 +2535,67 @@
|
||||
"node": ">= 0.8"
|
||||
}
|
||||
},
|
||||
"node_modules/web-streams-polyfill": {
|
||||
"version": "3.3.3",
|
||||
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
|
||||
"integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">= 8"
|
||||
}
|
||||
},
|
||||
"node_modules/webidl-conversions": {
|
||||
"version": "3.0.1",
|
||||
"resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
|
||||
"integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==",
|
||||
"license": "BSD-2-Clause"
|
||||
},
|
||||
"node_modules/whatwg-url": {
|
||||
"version": "5.0.0",
|
||||
"resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
|
||||
"integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"tr46": "~0.0.3",
|
||||
"webidl-conversions": "^3.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/winston": {
|
||||
"version": "3.17.0",
|
||||
"resolved": "https://registry.npmjs.org/winston/-/winston-3.17.0.tgz",
|
||||
"integrity": "sha512-DLiFIXYC5fMPxaRg832S6F5mJYvePtmO5G9v9IgUFPhXm9/GkXarH/TUrBAVzhTCzAj9anE/+GjrgXp/54nOgw==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"@colors/colors": "^1.6.0",
|
||||
"@dabh/diagnostics": "^2.0.2",
|
||||
"async": "^3.2.3",
|
||||
"is-stream": "^2.0.0",
|
||||
"logform": "^2.7.0",
|
||||
"one-time": "^1.0.0",
|
||||
"readable-stream": "^3.4.0",
|
||||
"safe-stable-stringify": "^2.3.1",
|
||||
"stack-trace": "0.0.x",
|
||||
"triple-beam": "^1.3.0",
|
||||
"winston-transport": "^4.9.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/winston-transport": {
|
||||
"version": "4.9.0",
|
||||
"resolved": "https://registry.npmjs.org/winston-transport/-/winston-transport-4.9.0.tgz",
|
||||
"integrity": "sha512-8drMJ4rkgaPo1Me4zD/3WLfI/zPdA9o2IipKODunnGDcuqbHwjsbB79ylv04LCGGzU0xQ6vTznOMpQGaLhhm6A==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"logform": "^2.7.0",
|
||||
"readable-stream": "^3.6.2",
|
||||
"triple-beam": "^1.3.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">= 12.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/ws": {
|
||||
"version": "8.17.1",
|
||||
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
@ -5,6 +5,8 @@ const axios = require('axios');
 const app = express();
 const PORT = process.env.PORT || 8009;
 
+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
+
 // Claude API configuration
 const CLAUDE_API_KEY = process.env.CLAUDE_API_KEY || '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';
 const CLAUDE_AVAILABLE = !!CLAUDE_API_KEY;
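For reference, a minimal sketch (not the committed code) of the same Claude configuration reading the key only from the environment, so no literal key needs to appear in source:

```js
// Sketch only: the key comes from the environment, never from a literal in source.
const CLAUDE_API_KEY = process.env.CLAUDE_API_KEY || '';
const CLAUDE_AVAILABLE = CLAUDE_API_KEY.length > 0;

if (!CLAUDE_AVAILABLE) {
  console.warn('CLAUDE_API_KEY is not set; Claude-backed features are disabled');
}
```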
@ -1,11 +1,7 @@
 -- Template Manager Database Schema
 -- Self-learning template and feature management system
 
--- Drop tables if they exist (for development)
-DROP TABLE IF EXISTS feature_usage CASCADE;
-DROP TABLE IF EXISTS custom_features CASCADE;
-DROP TABLE IF EXISTS template_features CASCADE;
-DROP TABLE IF EXISTS templates CASCADE;
+-- Create tables only if they don't exist (production-safe)
 
 -- Enable UUID extension (only if we have permission)
 DO $$
@ -20,7 +16,7 @@ BEGIN
 END $$;
 
 -- Templates table
-CREATE TABLE templates (
+CREATE TABLE IF NOT EXISTS templates (
     id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
     type VARCHAR(100) NOT NULL UNIQUE,
     title VARCHAR(200) NOT NULL,
@ -37,7 +33,7 @@ CREATE TABLE templates (
 );
 
 -- Template features table
-CREATE TABLE template_features (
+CREATE TABLE IF NOT EXISTS template_features (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    template_id UUID REFERENCES templates(id) ON DELETE CASCADE,
    feature_id VARCHAR(100) NOT NULL,
@ -56,7 +52,7 @@ CREATE TABLE template_features (
 );
 
 -- Feature usage tracking
-CREATE TABLE feature_usage (
+CREATE TABLE IF NOT EXISTS feature_usage (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    template_id UUID REFERENCES templates(id) ON DELETE CASCADE,
    feature_id UUID REFERENCES template_features(id) ON DELETE CASCADE,
@ -66,7 +62,7 @@ CREATE TABLE feature_usage (
 );
 
 -- User-added custom features
-CREATE TABLE custom_features (
+CREATE TABLE IF NOT EXISTS custom_features (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    template_id UUID REFERENCES templates(id) ON DELETE CASCADE,
    name VARCHAR(200) NOT NULL,
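Because the schema above now relies on CREATE TABLE IF NOT EXISTS, re-applying the migration should leave existing data in place. A rough sanity-check sketch using the `pg` driver (the connection string, file path, and the double-run check are assumptions, not part of the commit):

```js
const fs = require('fs');
const { Pool } = require('pg');

async function applyTwice() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  // Path assumed from the migration layout shown in this commit.
  const sql = fs.readFileSync(
    'services/template-manager/src/migrations/001_initial_schema.sql',
    'utf8'
  );
  await pool.query(sql); // first run creates the tables
  await pool.query(sql); // second run should succeed without dropping data,
                         // assuming the rest of the file is equally idempotent
  await pool.end();
}

applyTwice().catch((err) => {
  console.error('Migration re-run failed:', err.message);
  process.exit(1);
});
```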
services/template-manager/src/migrations/009_ai_features.sql (new file, 479 lines)
@ -0,0 +1,479 @@
|
||||
-- =====================================================
|
||||
-- 009_ai_features.sql
|
||||
-- AI-related schema for Template Manager: keywords, recommendations, queue, triggers
|
||||
-- Safe for existing monorepo by using IF EXISTS/OR REPLACE and drop-if-exists for triggers
|
||||
-- =====================================================
|
||||
|
||||
-- =====================================================
|
||||
-- 1. CORE TABLES
|
||||
-- NOTE: templates and custom_templates are already managed by existing migrations.
|
||||
-- This migration intentionally does NOT create or modify those core tables.
|
||||
|
||||
-- =====================================================
|
||||
-- 2. AI FEATURES TABLES
|
||||
-- =====================================================
|
||||
|
||||
CREATE TABLE IF NOT EXISTS tech_stack_recommendations (
|
||||
id SERIAL PRIMARY KEY,
|
||||
template_id UUID NOT NULL,
|
||||
stack_name VARCHAR(255) NOT NULL,
|
||||
monthly_cost DECIMAL(10,2) NOT NULL,
|
||||
setup_cost DECIMAL(10,2) NOT NULL,
|
||||
team_size VARCHAR(50) NOT NULL,
|
||||
development_time INTEGER NOT NULL,
|
||||
satisfaction INTEGER NOT NULL CHECK (satisfaction >= 0 AND satisfaction <= 100),
|
||||
success_rate INTEGER NOT NULL CHECK (success_rate >= 0 AND success_rate <= 100),
|
||||
frontend VARCHAR(255) NOT NULL,
|
||||
backend VARCHAR(255) NOT NULL,
|
||||
database VARCHAR(255) NOT NULL,
|
||||
cloud VARCHAR(255) NOT NULL,
|
||||
testing VARCHAR(255) NOT NULL,
|
||||
mobile VARCHAR(255) NOT NULL,
|
||||
devops VARCHAR(255) NOT NULL,
|
||||
ai_ml VARCHAR(255) NOT NULL,
|
||||
recommended_tool VARCHAR(255) NOT NULL,
|
||||
recommendation_score DECIMAL(5,2) NOT NULL CHECK (recommendation_score >= 0 AND recommendation_score <= 100),
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS extracted_keywords (
|
||||
id SERIAL PRIMARY KEY,
|
||||
template_id UUID NOT NULL,
|
||||
template_source VARCHAR(20) NOT NULL CHECK (template_source IN ('templates', 'custom_templates')),
|
||||
keywords_json JSONB NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
UNIQUE(template_id, template_source)
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS migration_queue (
|
||||
id SERIAL PRIMARY KEY,
|
||||
template_id UUID NOT NULL,
|
||||
migration_type VARCHAR(50) NOT NULL,
|
||||
status VARCHAR(20) DEFAULT 'pending' CHECK (status IN ('pending', 'processing', 'completed', 'failed')),
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
processed_at TIMESTAMP,
|
||||
error_message TEXT,
|
||||
UNIQUE(template_id, migration_type)
|
||||
);
|
||||
|
||||
-- =====================================================
|
||||
-- 3. INDEXES (idempotent)
|
||||
-- =====================================================
|
||||
|
||||
-- (No new indexes on templates/custom_templates here)
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_tech_stack_recommendations_template_id ON tech_stack_recommendations(template_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_tech_stack_recommendations_score ON tech_stack_recommendations(recommendation_score);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_extracted_keywords_template_id ON extracted_keywords(template_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_extracted_keywords_template_source ON extracted_keywords(template_source);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_migration_queue_status ON migration_queue(status);
|
||||
CREATE INDEX IF NOT EXISTS idx_migration_queue_template_id ON migration_queue(template_id);
|
||||
|
||||
-- =====================================================
|
||||
-- 4. FUNCTIONS (OR REPLACE)
|
||||
-- =====================================================
|
||||
|
||||
CREATE OR REPLACE FUNCTION update_updated_at_column()
|
||||
RETURNS TRIGGER AS $$
|
||||
BEGIN
|
||||
NEW.updated_at = NOW();
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
CREATE OR REPLACE FUNCTION extract_keywords_for_template()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
keywords_list TEXT[];
|
||||
title_keywords TEXT[];
|
||||
desc_keywords TEXT[];
|
||||
final_keywords TEXT[];
|
||||
word TEXT;
|
||||
clean_word TEXT;
|
||||
BEGIN
|
||||
IF NEW.type IN ('_system', '_migration', '_test', '_auto_tech_stack_migration', '_extracted_keywords_fix', '_migration_test', '_automation_fix', '_migration_queue_fix', '_workflow_fix', '_sql_ambiguity_fix', '_consolidated_schema') THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
IF EXISTS (SELECT 1 FROM extracted_keywords WHERE template_id = NEW.id AND template_source = 'templates') THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
keywords_list := ARRAY[]::TEXT[];
|
||||
|
||||
IF NEW.title IS NOT NULL AND LENGTH(TRIM(NEW.title)) > 0 THEN
|
||||
title_keywords := string_to_array(LOWER(REGEXP_REPLACE(NEW.title, '[^a-zA-Z0-9\s]', ' ', 'g')), ' ');
|
||||
FOREACH word IN ARRAY title_keywords LOOP
|
||||
clean_word := TRIM(word);
|
||||
IF LENGTH(clean_word) > 2 AND clean_word NOT IN ('the','and','for','are','but','not','you','all','can','had','her','was','one','our','out','day','get','has','him','his','how','its','may','new','now','old','see','two','way','who','boy','did','man','men','put','say','she','too','use') THEN
|
||||
keywords_list := array_append(keywords_list, clean_word);
|
||||
END IF;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF NEW.description IS NOT NULL AND LENGTH(TRIM(NEW.description)) > 0 THEN
|
||||
desc_keywords := string_to_array(LOWER(REGEXP_REPLACE(NEW.description, '[^a-zA-Z0-9\s]', ' ', 'g')), ' ');
|
||||
FOREACH word IN ARRAY desc_keywords LOOP
|
||||
clean_word := TRIM(word);
|
||||
IF LENGTH(clean_word) > 2 AND clean_word NOT IN ('the','and','for','are','but','not','you','all','can','had','her','was','one','our','out','day','get','has','him','his','how','its','may','new','now','old','see','two','way','who','boy','did','man','men','put','say','she','too','use') THEN
|
||||
keywords_list := array_append(keywords_list, clean_word);
|
||||
END IF;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF NEW.category IS NOT NULL THEN
|
||||
keywords_list := array_append(keywords_list, LOWER(REGEXP_REPLACE(NEW.category, '[^a-zA-Z0-9]', '_', 'g')));
|
||||
END IF;
|
||||
|
||||
IF NEW.type IS NOT NULL THEN
|
||||
keywords_list := array_append(keywords_list, LOWER(REGEXP_REPLACE(NEW.type, '[^a-zA-Z0-9]', '_', 'g')));
|
||||
END IF;
|
||||
|
||||
SELECT ARRAY(
|
||||
SELECT DISTINCT unnest(keywords_list)
|
||||
ORDER BY 1
|
||||
LIMIT 15
|
||||
) INTO final_keywords;
|
||||
|
||||
WHILE array_length(final_keywords, 1) < 8 LOOP
|
||||
final_keywords := array_append(final_keywords, 'business_enterprise');
|
||||
END LOOP;
|
||||
|
||||
INSERT INTO extracted_keywords (template_id, template_source, keywords_json)
|
||||
VALUES (NEW.id, 'templates', to_jsonb(final_keywords));
|
||||
|
||||
RETURN NEW;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
CREATE OR REPLACE FUNCTION extract_keywords_for_custom_template()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
keywords_list TEXT[];
|
||||
title_keywords TEXT[];
|
||||
desc_keywords TEXT[];
|
||||
final_keywords TEXT[];
|
||||
word TEXT;
|
||||
clean_word TEXT;
|
||||
BEGIN
|
||||
IF EXISTS (SELECT 1 FROM extracted_keywords WHERE template_id = NEW.id AND template_source = 'custom_templates') THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
keywords_list := ARRAY[]::TEXT[];
|
||||
|
||||
IF NEW.title IS NOT NULL AND LENGTH(TRIM(NEW.title)) > 0 THEN
|
||||
title_keywords := string_to_array(LOWER(REGEXP_REPLACE(NEW.title, '[^a-zA-Z0-9\s]', ' ', 'g')), ' ');
|
||||
FOREACH word IN ARRAY title_keywords LOOP
|
||||
clean_word := TRIM(word);
|
||||
IF LENGTH(clean_word) > 2 AND clean_word NOT IN ('the','and','for','are','but','not','you','all','can','had','her','was','one','our','out','day','get','has','him','his','how','its','may','new','now','old','see','two','way','who','boy','did','man','men','put','say','she','too','use') THEN
|
||||
keywords_list := array_append(keywords_list, clean_word);
|
||||
END IF;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF NEW.description IS NOT NULL AND LENGTH(TRIM(NEW.description)) > 0 THEN
|
||||
desc_keywords := string_to_array(LOWER(REGEXP_REPLACE(NEW.description, '[^a-zA-Z0-9\s]', ' ', 'g')), ' ');
|
||||
FOREACH word IN ARRAY desc_keywords LOOP
|
||||
clean_word := TRIM(word);
|
||||
IF LENGTH(clean_word) > 2 AND clean_word NOT IN ('the','and','for','are','but','not','you','all','can','had','her','was','one','our','out','day','get','has','him','his','how','its','may','new','now','old','see','two','way','who','boy','did','man','men','put','say','she','too','use') THEN
|
||||
keywords_list := array_append(keywords_list, clean_word);
|
||||
END IF;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF NEW.category IS NOT NULL THEN
|
||||
keywords_list := array_append(keywords_list, LOWER(REGEXP_REPLACE(NEW.category, '[^a-zA-Z0-9]', '_', 'g')));
|
||||
END IF;
|
||||
|
||||
IF NEW.type IS NOT NULL THEN
|
||||
keywords_list := array_append(keywords_list, LOWER(REGEXP_REPLACE(NEW.type, '[^a-zA-Z0-9]', '_', 'g')));
|
||||
END IF;
|
||||
|
||||
SELECT ARRAY(
|
||||
SELECT DISTINCT unnest(keywords_list)
|
||||
ORDER BY 1
|
||||
LIMIT 15
|
||||
) INTO final_keywords;
|
||||
|
||||
WHILE array_length(final_keywords, 1) < 8 LOOP
|
||||
final_keywords := array_append(final_keywords, 'business_enterprise');
|
||||
END LOOP;
|
||||
|
||||
INSERT INTO extracted_keywords (template_id, template_source, keywords_json)
|
||||
VALUES (NEW.id, 'custom_templates', to_jsonb(final_keywords));
|
||||
|
||||
RETURN NEW;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
CREATE OR REPLACE FUNCTION generate_tech_stack_recommendation()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
keywords_json_data JSONB;
|
||||
keywords_list TEXT[];
|
||||
stack_name TEXT;
|
||||
monthly_cost DECIMAL(10,2);
|
||||
setup_cost DECIMAL(10,2);
|
||||
team_size TEXT;
|
||||
development_time INTEGER;
|
||||
satisfaction INTEGER;
|
||||
success_rate INTEGER;
|
||||
frontend TEXT;
|
||||
backend TEXT;
|
||||
database_tech TEXT;
|
||||
cloud TEXT;
|
||||
testing TEXT;
|
||||
mobile TEXT;
|
||||
devops TEXT;
|
||||
ai_ml TEXT;
|
||||
recommended_tool TEXT;
|
||||
recommendation_score DECIMAL(5,2);
|
||||
BEGIN
|
||||
IF NEW.type IN ('_system', '_migration', '_test', '_auto_tech_stack_migration', '_extracted_keywords_fix', '_migration_test', '_automation_fix', '_migration_queue_fix', '_workflow_fix', '_sql_ambiguity_fix', '_consolidated_schema') THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
IF EXISTS (SELECT 1 FROM tech_stack_recommendations WHERE template_id = NEW.id) THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
SELECT ek.keywords_json INTO keywords_json_data
|
||||
FROM extracted_keywords ek
|
||||
WHERE ek.template_id = NEW.id AND ek.template_source = 'templates'
|
||||
ORDER BY ek.created_at DESC LIMIT 1;
|
||||
|
||||
IF keywords_json_data IS NULL THEN
|
||||
INSERT INTO tech_stack_recommendations (
|
||||
template_id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
) VALUES (
|
||||
NEW.id, NEW.title || ' Tech Stack', 100.0, 2000.0, '3-5',
|
||||
6, 85, 90, 'React.js', 'Node.js',
|
||||
'PostgreSQL', 'AWS', 'Jest', 'React Native', 'Docker', 'TensorFlow', 'Custom Tool',
|
||||
85.0
|
||||
);
|
||||
|
||||
INSERT INTO migration_queue (template_id, migration_type, status, created_at)
|
||||
VALUES (NEW.id, 'tech_stack_recommendation', 'pending', NOW())
|
||||
ON CONFLICT (template_id, migration_type) DO UPDATE SET
|
||||
status = 'pending', created_at = NOW(), processed_at = NULL, error_message = NULL;
|
||||
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
SELECT ARRAY(SELECT jsonb_array_elements_text(keywords_json_data)) INTO keywords_list;
|
||||
|
||||
stack_name := NEW.title || ' AI-Recommended Tech Stack';
|
||||
|
||||
CASE NEW.category
|
||||
WHEN 'Healthcare' THEN
|
||||
monthly_cost := 200.0; setup_cost := 5000.0; team_size := '6-8'; development_time := 10;
|
||||
satisfaction := 92; success_rate := 90; frontend := 'React.js'; backend := 'Java Spring Boot';
|
||||
database_tech := 'MongoDB'; cloud := 'AWS'; testing := 'JUnit'; mobile := 'Flutter'; devops := 'Jenkins';
|
||||
ai_ml := 'TensorFlow'; recommended_tool := 'Salesforce Health Cloud'; recommendation_score := 94.0;
|
||||
WHEN 'E-commerce' THEN
|
||||
monthly_cost := 150.0; setup_cost := 3000.0; team_size := '4-6'; development_time := 8;
|
||||
satisfaction := 88; success_rate := 92; frontend := 'Next.js'; backend := 'Node.js';
|
||||
database_tech := 'MongoDB'; cloud := 'AWS'; testing := 'Jest'; mobile := 'React Native'; devops := 'Docker';
|
||||
ai_ml := 'TensorFlow'; recommended_tool := 'Shopify'; recommendation_score := 90.0;
|
||||
ELSE
|
||||
monthly_cost := 100.0; setup_cost := 2000.0; team_size := '3-5'; development_time := 6;
|
||||
satisfaction := 85; success_rate := 90; frontend := 'React.js'; backend := 'Node.js';
|
||||
database_tech := 'PostgreSQL'; cloud := 'AWS'; testing := 'Jest'; mobile := 'React Native'; devops := 'Docker';
|
||||
ai_ml := 'TensorFlow'; recommended_tool := 'Custom Tool'; recommendation_score := 85.0;
|
||||
END CASE;
|
||||
|
||||
INSERT INTO tech_stack_recommendations (
|
||||
template_id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
) VALUES (
|
||||
NEW.id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database_tech, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
);
|
||||
|
||||
INSERT INTO migration_queue (template_id, migration_type, status, created_at)
|
||||
VALUES (NEW.id, 'tech_stack_recommendation', 'pending', NOW())
|
||||
ON CONFLICT (template_id, migration_type) DO UPDATE SET
|
||||
status = 'pending', created_at = NOW(), processed_at = NULL, error_message = NULL;
|
||||
|
||||
RETURN NEW;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
CREATE OR REPLACE FUNCTION generate_tech_stack_recommendation_custom()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
keywords_json_data JSONB;
|
||||
keywords_list TEXT[];
|
||||
stack_name TEXT;
|
||||
monthly_cost DECIMAL(10,2);
|
||||
setup_cost DECIMAL(10,2);
|
||||
team_size TEXT;
|
||||
development_time INTEGER;
|
||||
satisfaction INTEGER;
|
||||
success_rate INTEGER;
|
||||
frontend TEXT;
|
||||
backend TEXT;
|
||||
database_tech TEXT;
|
||||
cloud TEXT;
|
||||
testing TEXT;
|
||||
mobile TEXT;
|
||||
devops TEXT;
|
||||
ai_ml TEXT;
|
||||
recommended_tool TEXT;
|
||||
recommendation_score DECIMAL(5,2);
|
||||
BEGIN
|
||||
IF EXISTS (SELECT 1 FROM tech_stack_recommendations WHERE template_id = NEW.id) THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
SELECT ek.keywords_json INTO keywords_json_data
|
||||
FROM extracted_keywords ek
|
||||
WHERE ek.template_id = NEW.id AND ek.template_source = 'custom_templates'
|
||||
ORDER BY ek.created_at DESC LIMIT 1;
|
||||
|
||||
IF keywords_json_data IS NULL THEN
|
||||
INSERT INTO tech_stack_recommendations (
|
||||
template_id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
) VALUES (
|
||||
NEW.id, NEW.title || ' Custom Tech Stack', 180.0, 3500.0, '5-7',
|
||||
9, 88, 92, 'Vue.js', 'Python Django',
|
||||
'MongoDB', 'Google Cloud', 'Cypress', 'Flutter', 'Kubernetes', 'PyTorch', 'Custom Business Tool',
|
||||
90.0
|
||||
);
|
||||
|
||||
INSERT INTO migration_queue (template_id, migration_type, status, created_at)
|
||||
VALUES (NEW.id, 'tech_stack_recommendation', 'pending', NOW())
|
||||
ON CONFLICT (template_id, migration_type) DO UPDATE SET
|
||||
status = 'pending', created_at = NOW(), processed_at = NULL, error_message = NULL;
|
||||
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
SELECT ARRAY(SELECT jsonb_array_elements_text(keywords_json_data)) INTO keywords_list;
|
||||
|
||||
stack_name := NEW.title || ' Custom AI-Recommended Tech Stack';
|
||||
|
||||
CASE NEW.category
|
||||
WHEN 'Healthcare' THEN
|
||||
monthly_cost := 250.0; setup_cost := 6000.0; team_size := '7-9'; development_time := 12;
|
||||
satisfaction := 94; success_rate := 92; frontend := 'React.js'; backend := 'Java Spring Boot';
|
||||
database_tech := 'MongoDB'; cloud := 'AWS'; testing := 'JUnit'; mobile := 'Flutter'; devops := 'Jenkins';
|
||||
ai_ml := 'TensorFlow'; recommended_tool := 'Custom Healthcare Tool'; recommendation_score := 95.0;
|
||||
WHEN 'E-commerce' THEN
|
||||
monthly_cost := 200.0; setup_cost := 4000.0; team_size := '5-7'; development_time := 10;
|
||||
satisfaction := 90; success_rate := 94; frontend := 'Next.js'; backend := 'Node.js';
|
||||
database_tech := 'MongoDB'; cloud := 'AWS'; testing := 'Jest'; mobile := 'React Native'; devops := 'Docker';
|
||||
ai_ml := 'TensorFlow'; recommended_tool := 'Custom E-commerce Tool'; recommendation_score := 92.0;
|
||||
ELSE
|
||||
monthly_cost := 180.0; setup_cost := 3500.0; team_size := '5-7'; development_time := 9;
|
||||
satisfaction := 88; success_rate := 92; frontend := 'Vue.js'; backend := 'Python Django';
|
||||
database_tech := 'MongoDB'; cloud := 'Google Cloud'; testing := 'Cypress'; mobile := 'Flutter'; devops := 'Kubernetes';
|
||||
ai_ml := 'PyTorch'; recommended_tool := 'Custom Business Tool'; recommendation_score := 90.0;
|
||||
END CASE;
|
||||
|
||||
INSERT INTO tech_stack_recommendations (
|
||||
template_id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
) VALUES (
|
||||
NEW.id, stack_name, monthly_cost, setup_cost, team_size,
|
||||
development_time, satisfaction, success_rate, frontend, backend,
|
||||
database_tech, cloud, testing, mobile, devops, ai_ml, recommended_tool,
|
||||
recommendation_score
|
||||
);
|
||||
|
||||
INSERT INTO migration_queue (template_id, migration_type, status, created_at)
|
||||
VALUES (NEW.id, 'tech_stack_recommendation', 'pending', NOW())
|
||||
ON CONFLICT (template_id, migration_type) DO UPDATE SET
|
||||
status = 'pending', created_at = NOW(), processed_at = NULL, error_message = NULL;
|
||||
|
||||
RETURN NEW;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- =====================================================
|
||||
-- 5. TRIGGERS (conditionally create AI-related triggers only)
|
||||
-- =====================================================
|
||||
|
||||
-- Keyword extraction triggers (create if not exists)
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'auto_extract_keywords'
|
||||
) THEN
|
||||
CREATE TRIGGER auto_extract_keywords
|
||||
AFTER INSERT ON templates
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION extract_keywords_for_template();
|
||||
END IF;
|
||||
END $$;
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'auto_extract_keywords_custom'
|
||||
) THEN
|
||||
CREATE TRIGGER auto_extract_keywords_custom
|
||||
AFTER INSERT ON custom_templates
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION extract_keywords_for_custom_template();
|
||||
END IF;
|
||||
END $$;
|
||||
|
||||
-- AI recommendation triggers (create if not exists)
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'auto_generate_tech_stack_recommendation'
|
||||
) THEN
|
||||
CREATE TRIGGER auto_generate_tech_stack_recommendation
|
||||
AFTER INSERT ON templates
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION generate_tech_stack_recommendation();
|
||||
END IF;
|
||||
END $$;
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
IF NOT EXISTS (
|
||||
SELECT 1 FROM pg_trigger WHERE tgname = 'auto_generate_tech_stack_recommendation_custom'
|
||||
) THEN
|
||||
CREATE TRIGGER auto_generate_tech_stack_recommendation_custom
|
||||
AFTER INSERT ON custom_templates
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION generate_tech_stack_recommendation_custom();
|
||||
END IF;
|
||||
END $$;
|
||||
|
||||
-- Success marker (idempotent)
|
||||
DO $$ BEGIN
|
||||
INSERT INTO templates (type, title, description, category)
|
||||
VALUES ('_consolidated_schema', 'Consolidated Schema', 'AI features added via 009_ai_features', 'System')
|
||||
ON CONFLICT (type) DO NOTHING;
|
||||
END $$;
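New rows in `templates` fire the `auto_extract_keywords` and `auto_generate_tech_stack_recommendation` triggers defined above, which populate `extracted_keywords` and `tech_stack_recommendations`. A rough smoke-test sketch with the `pg` driver (connection settings and the sample values are assumptions):

```js
const { Pool } = require('pg');

async function checkTriggers() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });

  // Insert a throwaway template; the AFTER INSERT triggers populate the AI tables.
  const { rows } = await pool.query(
    `INSERT INTO templates (type, title, description, category)
     VALUES ($1, $2, $3, $4) RETURNING id`,
    ['trigger_smoke_test', 'Trigger Smoke Test', 'Simple online store template', 'E-commerce']
  );
  const templateId = rows[0].id;

  const keywords = await pool.query(
    `SELECT keywords_json FROM extracted_keywords
     WHERE template_id = $1 AND template_source = 'templates'`,
    [templateId]
  );
  const recommendation = await pool.query(
    `SELECT stack_name, recommendation_score FROM tech_stack_recommendations
     WHERE template_id = $1`,
    [templateId]
  );

  console.log('keywords:', keywords.rows[0]?.keywords_json);
  console.log('recommendation:', recommendation.rows[0]);
  await pool.end();
}

checkTriggers().catch((err) => console.error(err));
```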
|
||||
|
||||
|
||||
@ -32,35 +32,8 @@ async function runMigrations() {
   console.log('🚀 Starting template-manager database migrations...');
 
   try {
-    // Optionally bootstrap shared pipeline schema if requested and missing
-    const applySchemas = String(process.env.APPLY_SCHEMAS_SQL || '').toLowerCase() === 'true';
-    if (applySchemas) {
-      try {
-        const probe = await database.query("SELECT to_regclass('public.projects') AS tbl");
-        const hasProjects = !!(probe.rows && probe.rows[0] && probe.rows[0].tbl);
-        if (!hasProjects) {
-          const schemasPath = path.join(__dirname, '../../../../databases/scripts/schemas.sql');
-          if (fs.existsSync(schemasPath)) {
-            console.log('📦 Applying shared pipeline schemas.sql (projects, tech_stack_decisions, etc.)...');
-            let schemasSQL = fs.readFileSync(schemasPath, 'utf8');
-            // Remove psql meta-commands like \c dev_pipeline that the driver cannot execute
-            schemasSQL = schemasSQL
-              .split('\n')
-              .filter(line => !/^\s*\\/.test(line))
-              .join('\n');
-            await database.query(schemasSQL);
-            console.log('✅ schemas.sql applied');
-          } else {
-            console.log('⚠️ schemas.sql not found at expected path, skipping');
-          }
-        } else {
-          console.log('⏭️ Shared pipeline schema already present (projects exists), skipping schemas.sql');
-        }
-      } catch (e) {
-        console.error('❌ Failed applying schemas.sql:', e.message);
-        throw e;
-      }
-    }
+    // Skip shared pipeline schema - it should be handled by the main migration service
+    console.log('⏭️ Skipping shared pipeline schema - handled by main migration service');
 
     // Create migrations tracking table first
     await createMigrationsTable();
services/template-manager/start.sh (new file, 16 lines)
@ -0,0 +1,16 @@
#!/usr/bin/env sh
set -e

# Start Python AI service in background on 8013
if [ -f "/app/ai/tech_stack_service.py" ]; then
  echo "Starting Template Manager AI (FastAPI) on 8013..."
  python3 /app/ai/tech_stack_service.py &
else
  echo "AI service not found at /app/ai/tech_stack_service.py; skipping AI startup"
fi

# Start Node Template Manager on 8009 (foreground)
echo "Starting Template Manager (Node) on 8009..."
npm start
services/unison/.gitignore (new file, vendored, 126 lines)
@ -0,0 +1,126 @@
|
||||
# Dependencies
|
||||
node_modules/
|
||||
npm-debug.log*
|
||||
yarn-debug.log*
|
||||
yarn-error.log*
|
||||
|
||||
# Environment variables
|
||||
.env
|
||||
.env.local
|
||||
.env.development.local
|
||||
.env.test.local
|
||||
.env.production.local
|
||||
|
||||
# Logs
|
||||
logs/
|
||||
*.log
|
||||
npm-debug.log*
|
||||
yarn-debug.log*
|
||||
yarn-error.log*
|
||||
lerna-debug.log*
|
||||
|
||||
# Runtime data
|
||||
pids
|
||||
*.pid
|
||||
*.seed
|
||||
*.pid.lock
|
||||
|
||||
# Coverage directory used by tools like istanbul
|
||||
coverage/
|
||||
*.lcov
|
||||
|
||||
# nyc test coverage
|
||||
.nyc_output
|
||||
|
||||
# Grunt intermediate storage
|
||||
.grunt
|
||||
|
||||
# Bower dependency directory
|
||||
bower_components
|
||||
|
||||
# node-waf configuration
|
||||
.lock-wscript
|
||||
|
||||
# Compiled binary addons
|
||||
build/Release
|
||||
|
||||
# Dependency directories
|
||||
node_modules/
|
||||
jspm_packages/
|
||||
|
||||
# TypeScript v1 declaration files
|
||||
typings/
|
||||
|
||||
# TypeScript cache
|
||||
*.tsbuildinfo
|
||||
|
||||
# Optional npm cache directory
|
||||
.npm
|
||||
|
||||
# Optional eslint cache
|
||||
.eslintcache
|
||||
|
||||
# Microbundle cache
|
||||
.rpt2_cache/
|
||||
.rts2_cache_cjs/
|
||||
.rts2_cache_es/
|
||||
.rts2_cache_umd/
|
||||
|
||||
# Optional REPL history
|
||||
.node_repl_history
|
||||
|
||||
# Output of 'npm pack'
|
||||
*.tgz
|
||||
|
||||
# Yarn Integrity file
|
||||
.yarn-integrity
|
||||
|
||||
# dotenv environment variables file
|
||||
.env
|
||||
.env.test
|
||||
|
||||
# parcel-bundler cache
|
||||
.cache
|
||||
.parcel-cache
|
||||
|
||||
# Next.js build output
|
||||
.next
|
||||
|
||||
# Nuxt.js build / generate output
|
||||
.nuxt
|
||||
dist
|
||||
|
||||
# Gatsby files
|
||||
.cache/
|
||||
public
|
||||
|
||||
# Storybook build outputs
|
||||
.out
|
||||
.storybook-out
|
||||
|
||||
# Temporary folders
|
||||
tmp/
|
||||
temp/
|
||||
|
||||
# Editor directories and files
|
||||
.vscode/
|
||||
.idea/
|
||||
*.swp
|
||||
*.swo
|
||||
*~
|
||||
|
||||
# OS generated files
|
||||
.DS_Store
|
||||
.DS_Store?
|
||||
._*
|
||||
.Spotlight-V100
|
||||
.Trashes
|
||||
ehthumbs.db
|
||||
Thumbs.db
|
||||
|
||||
# Docker
|
||||
.dockerignore
|
||||
|
||||
# Test files
|
||||
test-results/
|
||||
coverage/
|
||||
services/unison/Dockerfile (new file, 52 lines)
@ -0,0 +1,52 @@
|
||||
FROM node:18-alpine
|
||||
|
||||
# Set working directory
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apk add --no-cache \
|
||||
curl \
|
||||
bash \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
|
||||
# Create non-root user
|
||||
RUN addgroup -g 1001 -S nodejs && \
|
||||
adduser -S unison -u 1001 -G nodejs
|
||||
|
||||
# Copy package files
|
||||
COPY package*.json ./
|
||||
|
||||
# Install dependencies
|
||||
RUN npm ci --only=production && \
|
||||
npm cache clean --force
|
||||
|
||||
# Copy source code
|
||||
COPY src/ ./src/
|
||||
|
||||
# Copy environment configuration
|
||||
COPY config.env ./
|
||||
|
||||
# Create logs directory
|
||||
RUN mkdir -p logs && \
|
||||
chown -R unison:nodejs logs
|
||||
|
||||
# Change ownership of app directory
|
||||
RUN chown -R unison:nodejs /app
|
||||
|
||||
# Switch to non-root user
|
||||
USER unison
|
||||
|
||||
# Expose port
|
||||
EXPOSE 8010
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
|
||||
CMD curl -f http://localhost:8010/health || exit 1
|
||||
|
||||
# Set environment variables
|
||||
ENV NODE_ENV=production
|
||||
ENV PORT=8010
|
||||
ENV HOST=0.0.0.0
|
||||
|
||||
# Start the application
|
||||
CMD ["node", "src/app.js"]
|
||||
services/unison/ENDPOINT_ANALYSIS.md (new file, 199 lines)
@ -0,0 +1,199 @@
|
||||
# Unison Service - Endpoint Analysis Report
|
||||
|
||||
## 📊 Service Overview
|
||||
- **Service Name**: Unison - Unified Tech Stack Recommendation Service
|
||||
- **Version**: 1.0.0
|
||||
- **Port**: 8014 (external) → 8010 (internal)
|
||||
- **Status**: ✅ OPERATIONAL
|
||||
- **Base URL**: `http://localhost:8014`
|
||||
|
||||
## 🔗 Complete Endpoint Inventory
|
||||
|
||||
### 1. **Root Endpoint**
|
||||
- **URL**: `GET /`
|
||||
- **Purpose**: Service information and available endpoints
|
||||
- **Status**: ✅ WORKING
|
||||
- **Response**: Service metadata, version, available endpoints, external service URLs
|
||||
|
||||
### 2. **Health Endpoints**
|
||||
|
||||
#### 2.1 Basic Health Check
|
||||
- **URL**: `GET /health`
|
||||
- **Purpose**: Service health status with external service checks
|
||||
- **Status**: ✅ WORKING
|
||||
- **Features**:
|
||||
- Service uptime and memory usage
|
||||
- External service health checks (tech-stack-selector, template-manager)
|
||||
- Response time monitoring
|
||||
- Feature availability status
|
||||
|
||||
#### 2.2 Detailed Health Check
|
||||
- **URL**: `GET /health/detailed`
|
||||
- **Purpose**: Comprehensive system information
|
||||
- **Status**: ✅ WORKING
|
||||
- **Features**:
|
||||
- Node.js version and platform info
|
||||
- Detailed memory and CPU usage
|
||||
- Process information (PID)
|
||||
- Configuration details
|
||||
|
||||
### 3. **Recommendation Endpoints**
|
||||
|
||||
#### 3.1 Unified Recommendations (Main Endpoint)
|
||||
- **URL**: `POST /api/recommendations/unified`
|
||||
- **Purpose**: Get unified tech stack recommendations combining both services
|
||||
- **Status**: ✅ WORKING
|
||||
- **Request Body**:
|
||||
```json
|
||||
{
|
||||
"domain": "string",
|
||||
"budget": "number",
|
||||
"preferredTechnologies": ["string"],
|
||||
"templateId": "string (optional)",
|
||||
"includeSimilar": "boolean (optional)",
|
||||
"includeKeywords": "boolean (optional)",
|
||||
"forceRefresh": "boolean (optional)"
|
||||
}
|
||||
```
|
||||
- **Features**:
|
||||
- Combines recommendations from tech-stack-selector and template-manager
|
||||
- Uses Claude AI for unified recommendations
|
||||
- Falls back to a single service when the others are unavailable
|
||||
- Comprehensive error handling
|
||||
|
||||
#### 3.2 Tech Stack Only
|
||||
- **URL**: `GET /api/recommendations/tech-stack`
|
||||
- **Purpose**: Get recommendations from tech-stack-selector only
|
||||
- **Status**: ✅ WORKING
|
||||
- **Query Parameters**:
|
||||
- `domain` (optional): Domain for recommendations
|
||||
- `budget` (optional): Budget constraint
|
||||
- `preferredTechnologies` (optional): Comma-separated list
|
||||
|
||||
#### 3.3 Template Only
|
||||
- **URL**: `GET /api/recommendations/template/:templateId`
|
||||
- **Purpose**: Get recommendations from template-manager only
|
||||
- **Status**: ✅ WORKING
|
||||
- **Path Parameters**:
|
||||
- `templateId`: UUID of the template
|
||||
- **Query Parameters**:
|
||||
- `force_refresh` (optional): Force refresh recommendations
|
||||
|
||||
#### 3.4 Schema Information
|
||||
- **URL**: `GET /api/recommendations/schemas`
|
||||
- **Purpose**: Get available validation schemas
|
||||
- **Status**: ✅ WORKING
|
||||
- **Response**: Available schemas and their definitions
|
||||
|
||||
### 4. **Error Handling**
|
||||
|
||||
#### 4.1 404 Handler
|
||||
- **URL**: `*` (catch-all)
|
||||
- **Purpose**: Handle non-existent routes
|
||||
- **Status**: ✅ WORKING
|
||||
- **Response**: Error message with available endpoints list
|
||||
|
||||
## 🧪 Endpoint Testing Results
|
||||
|
||||
| Endpoint | Method | Status | Response Time | Notes |
|
||||
|----------|--------|--------|---------------|-------|
|
||||
| `/` | GET | ✅ | ~5ms | Service info returned correctly |
|
||||
| `/health` | GET | ✅ | ~12ms | All external services healthy |
|
||||
| `/health/detailed` | GET | ✅ | ~5ms | Detailed system info available |
|
||||
| `/api/recommendations/tech-stack` | GET | ✅ | ~50ms | 10 recommendations returned |
|
||||
| `/api/recommendations/schemas` | GET | ✅ | ~10ms | 3 schemas available |
|
||||
| `/api/recommendations/unified` | POST | ✅ | ~11ms | Working with fallback |
|
||||
| `/api/recommendations/template/:id` | GET | ✅ | ~15ms | Template service responding |
|
||||
| `/nonexistent` | GET | ✅ | ~5ms | 404 handler working |
|
||||
|
||||
## 🔧 Service Dependencies
|
||||
|
||||
### External Services Status
|
||||
- **Tech Stack Selector**: ✅ HEALTHY (http://pipeline_tech_stack_selector:8002)
|
||||
- **Template Manager**: ✅ HEALTHY (http://pipeline_template_manager:8009)
|
||||
- **Claude AI**: ✅ CONFIGURED (API key present)
|
||||
|
||||
### Internal Services
|
||||
- **Schema Validator**: ✅ WORKING (3 schemas available)
|
||||
- **Logger**: ✅ WORKING (Winston-based logging)
|
||||
- **Error Handler**: ✅ WORKING (Comprehensive error handling)
|
||||
|
||||
## 📈 Performance Metrics
|
||||
|
||||
### Response Times
|
||||
- **Average Response Time**: ~15ms
|
||||
- **Health Check**: ~12ms
|
||||
- **Tech Stack Recommendations**: ~50ms
|
||||
- **Unified Recommendations**: ~11ms
|
||||
|
||||
### Memory Usage
|
||||
- **Used Memory**: 16 MB
|
||||
- **Total Memory**: 18 MB
|
||||
- **External Memory**: 3 MB
|
||||
|
||||
### Uptime
|
||||
- **Current Uptime**: 222+ seconds
|
||||
- **Service Status**: Stable
|
||||
|
||||
## 🛡️ Security Features
|
||||
|
||||
### Middleware Stack
|
||||
1. **Helmet**: Security headers
|
||||
2. **CORS**: Cross-origin resource sharing
|
||||
3. **Rate Limiting**: 100 requests per 15 minutes
|
||||
4. **Request Validation**: Input validation
|
||||
5. **Compression**: Response compression
|
||||
|
||||
### Rate Limiting
|
||||
- **Window**: 15 minutes (900,000ms)
|
||||
- **Max Requests**: 100 per IP
|
||||
- **Headers**: Standard rate limit headers included (see the sketch below)
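A minimal sketch of how this middleware stack and the 15-minute / 100-request limit might be wired up in Express (package names helmet, cors, compression, and express-rate-limit are assumed from the list above; the actual wiring in the service may differ):

```js
const express = require('express');
const helmet = require('helmet');
const cors = require('cors');
const compression = require('compression');
const rateLimit = require('express-rate-limit');

const app = express();

app.use(helmet());           // security headers
app.use(cors());             // cross-origin resource sharing
app.use(compression());      // response compression
app.use(express.json());     // request body parsing for validation
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,  // 15-minute window
  max: 100,                  // 100 requests per IP per window
  standardHeaders: true,     // expose standard RateLimit-* headers
  legacyHeaders: false,
}));
```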
|
||||
|
||||
## 📝 Request/Response Examples
|
||||
|
||||
### Unified Recommendation Request
|
||||
```bash
|
||||
curl -X POST http://localhost:8014/api/recommendations/unified \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"domain": "e-commerce",
|
||||
"budget": 1000.0,
|
||||
"preferredTechnologies": ["React", "Node.js", "PostgreSQL"]
|
||||
}'
|
||||
```
|
||||
|
||||
### Health Check Request
|
||||
```bash
|
||||
curl http://localhost:8014/health
|
||||
```
|
||||
|
||||
### Tech Stack Only Request
|
||||
```bash
|
||||
curl "http://localhost:8014/api/recommendations/tech-stack?domain=web%20development&budget=500"
|
||||
```
|
||||
|
||||
## ✅ Summary
|
||||
|
||||
**All endpoints are working properly!** The Unison service is fully operational with:
|
||||
|
||||
- ✅ 8 endpoints tested and working
|
||||
- ✅ All external dependencies healthy
|
||||
- ✅ Comprehensive error handling
|
||||
- ✅ Proper validation and security
|
||||
- ✅ Fast response times
|
||||
- ✅ Detailed logging and monitoring
|
||||
|
||||
The service successfully provides unified tech stack recommendations by combining data from multiple sources and using Claude AI for intelligent unification.
|
||||
|
||||
## 🚀 Next Steps
|
||||
|
||||
1. **Monitor Performance**: Track response times and memory usage
|
||||
2. **Add Metrics**: Consider adding Prometheus metrics
|
||||
3. **Load Testing**: Test under high load conditions
|
||||
4. **Documentation**: Update API documentation with examples
|
||||
5. **Monitoring**: Set up alerts for service health
|
||||
|
||||
---
|
||||
*Generated on: 2025-09-22T05:01:45.120Z*
|
||||
*Service Version: 1.0.0*
|
||||
*Status: OPERATIONAL*
|
||||
services/unison/README.md (new file, 408 lines)
@ -0,0 +1,408 @@
|
||||
# Unison - Unified Tech Stack Recommendation Service
|
||||
|
||||
Unison is a production-ready Node.js service that combines recommendations from both the `tech-stack-selector` and `template-manager` services, then uses Claude AI to generate a single, optimized tech stack recommendation that balances cost, domain requirements, and template-feature compatibility.
|
||||
|
||||
## 🚀 Features
|
||||
|
||||
- **Unified Recommendations**: Combines recommendations from both tech-stack-selector and template-manager services
|
||||
- **Claude AI Integration**: Uses Claude AI to analyze and optimize recommendations
|
||||
- **Robust Error Handling**: Graceful fallbacks when services are unavailable
|
||||
- **Schema Validation**: Strict JSON schema validation using Ajv
|
||||
- **Production Ready**: Comprehensive logging, health checks, and monitoring
|
||||
- **Rate Limiting**: Built-in rate limiting to prevent abuse
|
||||
- **Docker Support**: Fully containerized with Docker and Docker Compose
|
||||
|
||||
## 📋 Prerequisites
|
||||
|
||||
- Node.js 18+
|
||||
- Docker and Docker Compose
|
||||
- Access to tech-stack-selector service (port 8002)
|
||||
- Access to template-manager service (ports 8009, 8013)
|
||||
- Claude API key (optional, service works with fallbacks)
|
||||
|
||||
## 🏗️ Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Client App │───▶│ Unison Service │───▶│ Claude AI API │
|
||||
└─────────────────┘ │ (Port 8010) │ └─────────────────┘
|
||||
└─────────┬────────┘
|
||||
│
|
||||
┌────────────┼────────────┐
|
||||
│ │ │
|
||||
┌───────▼──────┐ ┌───▼────┐ ┌────▼──────┐
|
||||
│ Tech Stack │ │Template│ │Template │
|
||||
│ Selector │ │Manager │ │Manager AI │
|
||||
│ (Port 8002) │ │(8009) │ │(Port 8013)│
|
||||
└──────────────┘ └────────┘ └───────────┘
|
||||
```
|
||||
|
||||
## 🛠️ Installation
|
||||
|
||||
### Using Docker Compose (Recommended)
|
||||
|
||||
The Unison service is already integrated into the main `docker-compose.yml` file. To start it:
|
||||
|
||||
```bash
|
||||
# Start all services including Unison
|
||||
docker-compose up -d unison
|
||||
|
||||
# Or start the entire stack
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### Manual Installation
|
||||
|
||||
1. **Clone and navigate to the service directory:**
|
||||
```bash
|
||||
cd services/unison
|
||||
```
|
||||
|
||||
2. **Install dependencies:**
|
||||
```bash
|
||||
npm install
|
||||
```
|
||||
|
||||
3. **Set up environment variables:**
|
||||
```bash
|
||||
# The config.env file is already configured with all necessary variables
|
||||
# You can modify it if needed for your specific setup
|
||||
cp config.env .env # Optional: create a .env file from config.env
|
||||
```
|
||||
|
||||
4. **Start the service:**
|
||||
```bash
|
||||
npm start
|
||||
# Or for development
|
||||
npm run dev
|
||||
```
|
||||
|
||||
## ⚙️ Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
The service uses a `config.env` file for environment variables. This file is already configured with all necessary variables for the Unison service and integrates with your existing infrastructure.
|
||||
|
||||
**Key Configuration Sections:**
|
||||
- **Service Configuration**: Port, host, environment settings
|
||||
- **External Service URLs**: Tech stack selector and template manager endpoints
|
||||
- **Claude AI Configuration**: API key (model and token settings use defaults)
|
||||
- **Database Configuration**: PostgreSQL, Neo4j, Redis, MongoDB settings
|
||||
- **Security & Authentication**: JWT secrets and API keys
|
||||
- **Email Configuration**: SMTP settings for notifications
|
||||
- **CORS Configuration**: Cross-origin resource sharing settings
|
||||
|
||||
| Variable | Default | Description |
|
||||
|----------|---------|-------------|
|
||||
| `NODE_ENV` | `production` | Environment mode |
|
||||
| `PORT` | `8010` | Service port |
|
||||
| `HOST` | `0.0.0.0` | Service host |
|
||||
| `TECH_STACK_SELECTOR_URL` | `http://pipeline_tech_stack_selector:8002` | Tech stack selector service URL |
|
||||
| `TEMPLATE_MANAGER_URL` | `http://pipeline_template_manager:8009` | Template manager service URL |
|
||||
| `TEMPLATE_MANAGER_AI_URL` | `http://pipeline_template_manager:8013` | Template manager AI service URL |
|
||||
| `CLAUDE_API_KEY` | `${CLAUDE_API_KEY}` | Claude API key (from environment) |
|
||||
| `CLAUDE_MODEL` | `claude-3-sonnet-20240229` | Claude model to use |
|
||||
| `CLAUDE_MAX_TOKENS` | `4000` | Maximum tokens for Claude |
|
||||
| `RATE_LIMIT_WINDOW_MS` | `900000` | Rate limit window (15 minutes) |
|
||||
| `RATE_LIMIT_MAX_REQUESTS` | `100` | Max requests per window |
|
||||
| `LOG_LEVEL` | `info` | Logging level |
|
||||
| `REQUEST_TIMEOUT` | `30000` | Request timeout in ms |
|
||||
| `HEALTH_CHECK_TIMEOUT` | `5000` | Health check timeout in ms |
|
||||
|
||||
## 📡 API Endpoints
|
||||
|
||||
### Base URL
|
||||
```
|
||||
http://localhost:8010
|
||||
```
|
||||
|
||||
### Endpoints
|
||||
|
||||
#### 1. **POST** `/api/recommendations/unified`
|
||||
Get unified tech stack recommendation combining both services.
|
||||
|
||||
**Request Body:**
|
||||
```json
|
||||
{
|
||||
"domain": "web development",
|
||||
"budget": 500.0,
|
||||
"preferredTechnologies": ["React", "Node.js", "PostgreSQL"],
|
||||
"templateId": "uuid-string",
|
||||
"includeSimilar": true,
|
||||
"includeKeywords": true,
|
||||
"forceRefresh": false
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"data": {
|
||||
"stack_name": "Game Development Stack",
|
||||
"monthly_cost": 199,
|
||||
"setup_cost": 1200,
|
||||
"team_size": "3-5",
|
||||
"development_time": 5,
|
||||
"satisfaction": 92,
|
||||
"success_rate": 85,
|
||||
"frontend": "Unity",
|
||||
"backend": "Node.js",
|
||||
"database": "MongoDB",
|
||||
"cloud": "AWS GameLift",
|
||||
"testing": "Unity Test Framework",
|
||||
"mobile": "Unity Mobile",
|
||||
"devops": "Jenkins",
|
||||
"ai_ml": "ML.NET",
|
||||
"recommended_tool": "Discord",
|
||||
"recommendation_score": 94.5,
|
||||
"message": "AI recommendations retrieved successfully"
|
||||
},
|
||||
"source": "unified",
|
||||
"message": "Unified recommendation generated successfully",
|
||||
"processingTime": 1250,
|
||||
"services": {
|
||||
"techStackSelector": "available",
|
||||
"templateManager": "available",
|
||||
"claudeAI": "available"
|
||||
},
|
||||
"claudeModel": "claude-3-sonnet-20240229"
|
||||
}
|
||||
```
|
||||
|
||||
#### 2. **GET** `/api/recommendations/tech-stack`
|
||||
Get recommendations from tech-stack-selector only.
|
||||
|
||||
**Query Parameters:**
|
||||
- `domain` (optional): Domain for recommendations
|
||||
- `budget` (optional): Budget constraint
|
||||
- `preferredTechnologies` (optional): Comma-separated list of preferred technologies
|
||||
|
||||
#### 3. **GET** `/api/recommendations/template/:templateId`
|
||||
Get recommendations from template-manager only.
|
||||
|
||||
**Query Parameters:**
|
||||
- `force_refresh` (optional): Force refresh recommendations
|
||||
|
||||
#### 4. **GET** `/api/recommendations/schemas`
|
||||
Get available validation schemas.
|
||||
|
||||
#### 5. **GET** `/health`
|
||||
Health check endpoint.
|
||||
|
||||
#### 6. **GET** `/`
|
||||
Service information and available endpoints.
|
||||
|
||||
## 🔧 Usage Examples
|
||||
|
||||
### Basic Unified Recommendation
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8010/api/recommendations/unified \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"domain": "e-commerce",
|
||||
"budget": 1000.0,
|
||||
"preferredTechnologies": ["Vue.js", "Django", "Redis"]
|
||||
}'
|
||||
```
|
||||
|
||||
### With Template ID
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8010/api/recommendations/unified \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"domain": "startup",
|
||||
"budget": 100.0,
|
||||
"templateId": "123e4567-e89b-12d3-a456-426614174000",
|
||||
"includeSimilar": true,
|
||||
"forceRefresh": true
|
||||
}'
|
||||
```
|
||||
|
||||
### Tech Stack Only
|
||||
|
||||
```bash
|
||||
curl "http://localhost:8010/api/recommendations/tech-stack?domain=web%20development&budget=500"
|
||||
```
|
||||
|
||||
### Template Only
|
||||
|
||||
```bash
|
||||
curl "http://localhost:8010/api/recommendations/template/123e4567-e89b-12d3-a456-426614174000?force_refresh=true"
|
||||
```
|
||||
|
||||
## 🏥 Health Monitoring
|
||||
|
||||
### Health Check
|
||||
```bash
|
||||
curl http://localhost:8010/health
|
||||
```
|
||||
|
||||
### Detailed Health Check
|
||||
```bash
|
||||
curl http://localhost:8010/health/detailed
|
||||
```
|
||||
|
||||
## 📊 Response Schema
|
||||
|
||||
The unified recommendation follows a strict JSON schema:
|
||||
|
||||
```json
|
||||
{
|
||||
"stack_name": "string (descriptive name)",
|
||||
"monthly_cost": "number (0-10000)",
|
||||
"setup_cost": "number (0-50000)",
|
||||
"team_size": "string (e.g., '1-2', '3-5')",
|
||||
"development_time": "number (1-52 weeks)",
|
||||
"satisfaction": "number (0-100)",
|
||||
"success_rate": "number (0-100)",
|
||||
"frontend": "string (frontend technology)",
|
||||
"backend": "string (backend technology)",
|
||||
"database": "string (database technology)",
|
||||
"cloud": "string (cloud platform)",
|
||||
"testing": "string (testing framework)",
|
||||
"mobile": "string (mobile technology)",
|
||||
"devops": "string (devops tool)",
|
||||
"ai_ml": "string (AI/ML technology)",
|
||||
"recommended_tool": "string (primary tool)",
|
||||
"recommendation_score": "number (0-100)",
|
||||
"message": "string (explanation)"
|
||||
}
|
||||
```
|
||||
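
Responses are validated against this schema before being returned; per the service's dependencies this is done with Ajv in `src/utils/schemaValidator.js`. The sketch below is a trimmed-down illustration of such a check and covers only a handful of the fields listed above; the real schema is stricter:

```javascript
// Illustrative Ajv check for a unified recommendation (partial schema only)
const Ajv = require('ajv');
const ajv = new Ajv({ allErrors: true });

const unifiedSchema = {
  type: 'object',
  required: ['stack_name', 'monthly_cost', 'team_size', 'recommendation_score'],
  properties: {
    stack_name: { type: 'string', minLength: 1 },
    monthly_cost: { type: 'number', minimum: 0, maximum: 10000 },
    setup_cost: { type: 'number', minimum: 0, maximum: 50000 },
    team_size: { type: 'string', pattern: '^\\d+-\\d+$' },
    recommendation_score: { type: 'number', minimum: 0, maximum: 100 }
  }
};

const validateUnified = ajv.compile(unifiedSchema);

function checkRecommendation(data) {
  const valid = validateUnified(data);
  return { valid, errors: valid ? [] : validateUnified.errors };
}
```

A failed check of this kind is what triggers the tech-stack-selector fallback described under Error Handling below.
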
|
||||
## 🔄 Service Dependencies
|
||||
|
||||
Unison depends on the following services:
|
||||
|
||||
1. **tech-stack-selector** (port 8002)
|
||||
- Provides budget and domain-based recommendations
|
||||
- Must be healthy for full functionality
|
||||
|
||||
2. **template-manager** (ports 8009, 8013)
|
||||
- Provides template-based recommendations
|
||||
- AI service on port 8013 for Claude integration
|
||||
- Must be healthy for full functionality
|
||||
|
||||
3. **Claude AI** (external)
|
||||
- Optional but recommended for unified recommendations
|
||||
- Falls back to tech-stack-selector if unavailable
|
||||
|
||||
## 🚨 Error Handling
|
||||
|
||||
The service includes comprehensive error handling:
|
||||
|
||||
- **Service Unavailable**: Falls back to available services
|
||||
- **Invalid Requests**: Returns detailed validation errors
|
||||
- **Claude AI Errors**: Falls back to tech-stack-selector
|
||||
- **Schema Validation**: Ensures response format compliance
|
||||
- **Rate Limiting**: Prevents abuse with configurable limits
|
||||
|
||||
## 📝 Logging
|
||||
|
||||
Logs are written to:
|
||||
- Console (development)
|
||||
- `logs/combined.log` (all logs)
|
||||
- `logs/error.log` (error logs only)
|
||||
|
||||
Log levels: `error`, `warn`, `info`, `debug`
|
||||
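
For reference, a minimal Winston setup matching the behaviour described above could look like the sketch below. The file names are taken from this section; the actual configuration lives in `src/utils/logger.js` and may differ in detail:

```javascript
// Sketch of a Winston logger: console in development, plus combined.log and error.log
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/combined.log' })
  ]
});

// Console output only outside production
if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({ format: winston.format.simple() }));
}

module.exports = logger;
```
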
|
||||
## 🧪 Testing
|
||||
|
||||
```bash
|
||||
# Run tests
|
||||
npm test
|
||||
|
||||
# Run with coverage
|
||||
npm run test:coverage
|
||||
|
||||
# Lint code
|
||||
npm run lint
|
||||
```
|
||||
|
||||
## 🐳 Docker
|
||||
|
||||
### Build Image
|
||||
```bash
|
||||
docker build -t unison .
|
||||
```
|
||||
|
||||
### Run Container
|
||||
```bash
|
||||
docker run -p 8010:8010 \
|
||||
-e CLAUDE_API_KEY=your_key_here \
|
||||
-e TECH_STACK_SELECTOR_URL=http://tech-stack-selector:8002 \
|
||||
-e TEMPLATE_MANAGER_URL=http://template-manager:8009 \
|
||||
unison
|
||||
```
|
||||
|
||||
## 🔧 Development
|
||||
|
||||
### Project Structure
|
||||
```
|
||||
services/unison/
|
||||
├── src/
|
||||
│ ├── app.js # Main application
|
||||
│ ├── middleware/ # Express middleware
|
||||
│ ├── routes/ # API routes
|
||||
│ ├── services/ # External service integrations
|
||||
│ └── utils/ # Utility functions
|
||||
├── logs/ # Log files
|
||||
├── Dockerfile # Docker configuration
|
||||
├── package.json # Dependencies
|
||||
├── start.sh # Startup script
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
### Adding New Features
|
||||
|
||||
1. **New API Endpoints**: Add to `src/routes/` (see the sketch after this list)
|
||||
2. **External Services**: Add to `src/services/`
|
||||
3. **Middleware**: Add to `src/middleware/`
|
||||
4. **Validation**: Update schemas in `src/utils/schemaValidator.js`
|
||||
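
As an illustration of step 1, a new route module would follow the same shape as the existing ones. The route path and handler below are hypothetical and only show the wiring, not a real endpoint of the service:

```javascript
// Hypothetical new route module, e.g. src/routes/example.js
const express = require('express');
const logger = require('../utils/logger');

const router = express.Router();

// GET /ping - placeholder endpoint used only for illustration
router.get('/ping', (req, res) => {
  logger.info({ message: 'Example endpoint called', path: req.path });
  res.json({ success: true, data: { message: 'pong' } });
});

module.exports = router;
```

The module would then be mounted in `src/app.js` with `app.use('/api/example', require('./routes/example'))`, alongside the existing recommendation and health routes.
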
|
||||
## 📈 Monitoring
|
||||
|
||||
### Metrics to Monitor
|
||||
- Response times
|
||||
- Error rates
|
||||
- Service availability
|
||||
- Claude AI usage
|
||||
- Rate limit hits
|
||||
|
||||
### Health Indicators
|
||||
- All external services healthy
|
||||
- Claude AI available
|
||||
- Response time < 5 seconds
|
||||
- Error rate < 1%
|
||||
|
||||
## 🤝 Contributing
|
||||
|
||||
1. Fork the repository
|
||||
2. Create a feature branch
|
||||
3. Make your changes
|
||||
4. Add tests
|
||||
5. Submit a pull request
|
||||
|
||||
## 📄 License
|
||||
|
||||
MIT License - see LICENSE file for details.
|
||||
|
||||
## 🆘 Support
|
||||
|
||||
For issues and questions:
|
||||
1. Check the logs in the `logs/` directory
|
||||
2. Verify external services are running
|
||||
3. Check environment variables
|
||||
4. Review the health endpoint
|
||||
|
||||
## 🔄 Changelog
|
||||
|
||||
### v1.0.0
|
||||
- Initial release
|
||||
- Unified recommendation service
|
||||
- Claude AI integration
|
||||
- Comprehensive error handling
|
||||
- Docker support
|
||||
- Production-ready logging and monitoring
|
||||
376
services/unison/UNISON_WORKFLOW.md
Normal file
@ -0,0 +1,376 @@
|
||||
# Unison Service - Complete Workflow Analysis
|
||||
|
||||
## 🏗️ Architecture Overview
|
||||
|
||||
The Unison service acts as a **unified orchestration layer** that combines recommendations from multiple sources and uses Claude AI to generate optimized tech stack recommendations.
|
||||
|
||||
## 🔄 Complete Workflow Diagram
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────────────┐
|
||||
│ UNISON SERVICE WORKFLOW │
|
||||
└─────────────────────────────────────────────────────────────────────────────────┘
|
||||
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Client App │───▶│ Unison Service │───▶│ Claude AI API │
|
||||
│ │ │ (Port 8014) │ │ │
|
||||
└─────────────────┘ └─────────┬────────┘ └─────────────────┘
|
||||
│
|
||||
┌────────────┼────────────┐
|
||||
│ │ │
|
||||
┌───────▼──────┐ ┌───▼────┐ ┌────▼──────┐
|
||||
│ Tech Stack │ │Template│ │Template │
|
||||
│ Selector │ │Manager │ │Manager AI │
|
||||
│ (Port 8002) │ │(8009) │ │(Port 8013)│
|
||||
└──────────────┘ └────────┘ └───────────┘
|
||||
```
|
||||
|
||||
## 📋 Detailed Workflow Steps
|
||||
|
||||
### 1. **Request Reception & Validation**
|
||||
```
|
||||
Client Request → Unison Service → Middleware Stack
|
||||
```
|
||||
|
||||
**Components:**
|
||||
- **Express Server** (Port 8014)
|
||||
- **Security Middleware** (Helmet, CORS)
|
||||
- **Rate Limiting** (100 req/15min per IP)
|
||||
- **Request Validation** (Joi schema validation)
|
||||
- **Body Parsing** (JSON, URL-encoded)
|
||||
|
||||
**Validation Rules:**
|
||||
- Domain: 1-100 characters
|
||||
- Budget: Positive number
|
||||
- Preferred Technologies: Array of strings (1-50 chars each)
|
||||
- Template ID: Valid UUID format
|
||||
- Boolean flags: includeSimilar, includeKeywords, forceRefresh
|
||||
|
||||
### 2. **Route Processing**
|
||||
```
|
||||
POST /api/recommendations/unified → Unified Recommendation Handler
|
||||
GET /api/recommendations/tech-stack → Tech Stack Only Handler
|
||||
GET /api/recommendations/template/:id → Template Only Handler
|
||||
GET /health → Health Check Handler
|
||||
```
|
||||
|
||||
### 3. **Unified Recommendation Workflow** (Main Flow)
|
||||
|
||||
#### 3.1 **Input Validation**
|
||||
```javascript
|
||||
// Validate tech stack request parameters
|
||||
const techStackRequest = { domain, budget, preferredTechnologies };
|
||||
const techStackValidation = schemaValidator.validateTechStackRequest(techStackRequest);
|
||||
|
||||
// Validate template request if templateId provided
|
||||
if (templateId) {
|
||||
const templateRequest = { templateId, includeSimilar, includeKeywords, forceRefresh };
|
||||
const templateValidation = schemaValidator.validateTemplateRequest(templateRequest);
|
||||
}
|
||||
```
|
||||
|
||||
#### 3.2 **Parallel Service Calls**
|
||||
```javascript
|
||||
// Always fetch from tech-stack-selector
|
||||
const techStackPromise = techStackService.getRecommendations({
|
||||
domain, budget, preferredTechnologies
|
||||
}).catch(error => ({ success: false, error: error.message, source: 'tech-stack-selector' }));
|
||||
|
||||
// Fetch from template-manager if templateId provided
|
||||
const templatePromise = templateId ?
|
||||
templateService.getAIRecommendations(templateId, { forceRefresh })
|
||||
.catch(error => ({ success: false, error: error.message, source: 'template-manager' })) :
|
||||
Promise.resolve({ success: false, error: 'No template ID provided', source: 'template-manager' });
|
||||
|
||||
// Execute both calls in parallel
|
||||
const [techStackResult, templateResult] = await Promise.all([techStackPromise, templatePromise]);
|
||||
```
|
||||
|
||||
#### 3.3 **Service Integration Details**
|
||||
|
||||
**Tech Stack Selector Integration:**
|
||||
- **Endpoint**: `POST /recommend/best` (a client call sketch follows this list)
|
||||
- **Data Source**: PostgreSQL + Neo4j (migrated data)
|
||||
- **Features**: Price-based relationships, Claude AI recommendations
|
||||
- **Response**: Array of tech stack recommendations with costs, team sizes, etc.
|
||||
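
`src/services/techStackService.js` itself is not reproduced in this document. Based on the endpoint, URL, and timeout described here, a minimal sketch of the call it makes might look like the following; the exact payload and response handling are assumptions:

```javascript
// Assumed sketch of the tech-stack-selector client (the real service module may differ)
const axios = require('axios');

const BASE_URL = process.env.TECH_STACK_SELECTOR_URL || 'http://pipeline_tech_stack_selector:8002';
const TIMEOUT = parseInt(process.env.REQUEST_TIMEOUT, 10) || 30000;

async function getRecommendations({ domain, budget, preferredTechnologies }) {
  const response = await axios.post(
    `${BASE_URL}/recommend/best`,
    { domain, budget, preferredTechnologies },
    { timeout: TIMEOUT }
  );
  return { success: true, data: response.data, source: 'tech-stack-selector' };
}

module.exports = { getRecommendations };
```
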
|
||||
**Template Manager Integration:**
|
||||
- **Endpoint**: `GET /api/templates/{id}/ai-recommendations`
|
||||
- **Data Source**: Template database with AI analysis
|
||||
- **Features**: Template-based recommendations, feature learning
|
||||
- **Response**: Template-specific tech stack recommendations
|
||||
|
||||
#### 3.4 **Decision Logic & Fallback Strategy**
|
||||
|
||||
```javascript
|
||||
// Check if we have at least one successful recommendation
|
||||
if (!techStackResult.success && !templateResult.success) {
|
||||
return res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch recommendations from both services'
|
||||
});
|
||||
}
|
||||
|
||||
// If only one service succeeded, return its result
|
||||
if (!techStackResult.success || !templateResult.success) {
|
||||
const successfulResult = techStackResult.success ? techStackResult : templateResult;
|
||||
return res.json({
|
||||
success: true,
|
||||
data: successfulResult.data,
|
||||
source: successfulResult.source,
|
||||
message: 'Single service recommendation (other service unavailable)'
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
#### 3.5 **Claude AI Unification** (When Both Services Succeed)
|
||||
|
||||
**Claude AI Integration:**
|
||||
- **Model**: claude-3-sonnet-20240229
|
||||
- **Max Tokens**: 4000
|
||||
- **Timeout**: 30 seconds
|
||||
- **API**: Anthropic Claude API (a request sketch follows this list)
|
||||
|
||||
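Taken together, these parameters translate into a single call to the Anthropic Messages API. The sketch below is illustrative only; the real request is built in `src/services/claudeService.js`, which is not reproduced here, and its response handling is shown further down:

```javascript
// Assumed sketch of the Claude request behind the unification step
const axios = require('axios');

async function callClaude(prompt) {
  const response = await axios.post(
    'https://api.anthropic.com/v1/messages',
    {
      model: process.env.CLAUDE_MODEL || 'claude-3-sonnet-20240229',
      max_tokens: parseInt(process.env.CLAUDE_MAX_TOKENS, 10) || 4000,
      messages: [{ role: 'user', content: prompt }]
    },
    {
      headers: {
        'x-api-key': process.env.CLAUDE_API_KEY,
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json'
      },
      timeout: parseInt(process.env.REQUEST_TIMEOUT, 10) || 30000
    }
  );
  // The first content block carries the model's text answer
  return response.data.content[0].text;
}
```
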
**Prompt Engineering:**
|
||||
```javascript
|
||||
const prompt = `You are an expert tech stack architect. I need you to analyze two different tech stack recommendations and create a single, optimized recommendation that balances cost, domain requirements, and template-feature compatibility.
|
||||
|
||||
## Original Request Parameters:
|
||||
- Domain: ${requestParams.domain}
|
||||
- Budget: $${requestParams.budget}
|
||||
- Preferred Technologies: ${requestParams.preferredTechnologies?.join(', ')}
|
||||
- Template ID: ${requestParams.templateId}
|
||||
|
||||
## Tech Stack Selector Recommendation:
|
||||
${JSON.stringify(techStackRecommendation.data, null, 2)}
|
||||
|
||||
## Template Manager Recommendation:
|
||||
${JSON.stringify(templateRecommendation.data, null, 2)}
|
||||
|
||||
## Your Task:
|
||||
Analyze both recommendations and create a single, optimized tech stack recommendation that:
|
||||
1. Balances cost-effectiveness with the budget constraint
|
||||
2. Matches the domain requirements
|
||||
3. Incorporates the best features from the template recommendation
|
||||
4. Considers the preferred technologies when possible
|
||||
5. Provides realistic team size, development time, and success metrics
|
||||
|
||||
## Required Output Format:
|
||||
[Detailed JSON schema requirements...]`;
|
||||
```
|
||||
|
||||
**Response Processing:**
|
||||
```javascript
|
||||
// Parse Claude's response
|
||||
const claudeResponse = response.data.content[0].text;
|
||||
const unifiedRecommendation = this.parseClaudeResponse(claudeResponse);
|
||||
|
||||
// Validate the unified recommendation
|
||||
const validation = schemaValidator.validateUnifiedRecommendation(unifiedRecommendation);
|
||||
if (!validation.valid) {
|
||||
// Fallback to tech-stack-selector recommendation
|
||||
return res.json({
|
||||
success: true,
|
||||
data: techStackResult.data,
|
||||
source: 'tech-stack-selector (fallback)',
|
||||
message: 'Claude generated invalid recommendation, using tech-stack-selector as fallback'
|
||||
});
|
||||
}
|
||||
```
|
||||
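
`parseClaudeResponse` is referenced above but not included in this document. A plausible minimal implementation, given here purely as an assumption, strips any Markdown code fences and parses the first JSON object found in the text:

```javascript
// Assumed sketch of parseClaudeResponse (the real implementation may differ)
function parseClaudeResponse(text) {
  // Remove Markdown code fences in case Claude wrapped its answer in them
  const cleaned = text.replace(/`{3}(?:json)?/gi, '').trim();

  // Parse the first {...} block in the remaining text
  const start = cleaned.indexOf('{');
  const end = cleaned.lastIndexOf('}');
  if (start === -1 || end <= start) {
    throw new Error('No JSON object found in Claude response');
  }
  return JSON.parse(cleaned.slice(start, end + 1));
}
```
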
|
||||
### 4. **Response Generation & Validation**
|
||||
|
||||
**Schema Validation:**
|
||||
- **Unified Recommendation Schema**: 18 required fields with strict validation
|
||||
- **Numeric Ranges**: Monthly cost (0-10000), Setup cost (0-50000), etc.
|
||||
- **String Constraints**: Team size pattern, length limits
|
||||
- **Required Fields**: stack_name, monthly_cost, setup_cost, team_size, development_time, satisfaction, success_rate, frontend, backend, database, cloud, testing, mobile, devops, ai_ml, recommended_tool, recommendation_score, message
|
||||
|
||||
**Response Format:**
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"data": {
|
||||
"stack_name": "Optimized E-commerce Stack",
|
||||
"monthly_cost": 150,
|
||||
"setup_cost": 2000,
|
||||
"team_size": "3-5",
|
||||
"development_time": 8,
|
||||
"satisfaction": 92,
|
||||
"success_rate": 88,
|
||||
"frontend": "React",
|
||||
"backend": "Node.js",
|
||||
"database": "PostgreSQL",
|
||||
"cloud": "AWS",
|
||||
"testing": "Jest",
|
||||
"mobile": "React Native",
|
||||
"devops": "Docker",
|
||||
"ai_ml": "TensorFlow",
|
||||
"recommended_tool": "Vercel",
|
||||
"recommendation_score": 94,
|
||||
"message": "Balanced solution combining cost-effectiveness with modern tech stack"
|
||||
},
|
||||
"source": "unified",
|
||||
"message": "Unified recommendation generated successfully",
|
||||
"processingTime": 1250,
|
||||
"services": {
|
||||
"techStackSelector": "available",
|
||||
"templateManager": "available",
|
||||
"claudeAI": "available"
|
||||
},
|
||||
"claudeModel": "claude-3-sonnet-20240229"
|
||||
}
|
||||
```
|
||||
|
||||
### 5. **Error Handling & Logging**
|
||||
|
||||
**Error Types:**
|
||||
- **Validation Errors**: Invalid input parameters
|
||||
- **Service Errors**: External service failures
|
||||
- **Claude AI Errors**: API failures or invalid responses
|
||||
- **Schema Validation Errors**: Invalid output format
|
||||
- **Network Errors**: Timeout or connection issues
|
||||
|
||||
**Logging Strategy:**
|
||||
- **Winston Logger**: Structured JSON logging
|
||||
- **Log Levels**: error, warn, info, debug
|
||||
- **Log Files**: error.log, combined.log
|
||||
- **Console Logging**: Development mode
|
||||
- **Request Tracking**: Unique request IDs (see the sketch after this list)
|
||||
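
The request-tracking middleware itself is not shown in this document. The sketch below is an assumption of how it could be wired in, using the `uuid` package already listed in the service's dependencies:

```javascript
// Assumed sketch of request-ID tracking middleware (not part of the documented source)
const { v4: uuidv4 } = require('uuid');
const logger = require('../utils/logger');

function requestId(req, res, next) {
  // Reuse an upstream ID when one is supplied, otherwise mint a fresh one
  req.requestId = req.headers['x-request-id'] || uuidv4();
  res.setHeader('X-Request-Id', req.requestId);
  logger.info({ message: 'Incoming request', requestId: req.requestId, method: req.method, path: req.path });
  next();
}

module.exports = requestId;
```
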
|
||||
**Fallback Mechanisms:**
|
||||
1. **Single Service Fallback**: If one service fails, use the other
|
||||
2. **Claude AI Fallback**: If Claude fails, use tech-stack-selector
|
||||
3. **Schema Validation Fallback**: If Claude output is invalid, use tech-stack-selector
|
||||
4. **Graceful Degradation**: Always return some recommendation
|
||||
|
||||
### 6. **Health Monitoring**
|
||||
|
||||
**Health Check Endpoints:**
|
||||
- **Basic Health**: `/health` - Service status with external service checks
|
||||
- **Detailed Health**: `/health/detailed` - Comprehensive system information
|
||||
|
||||
**External Service Monitoring:**
|
||||
- **Tech Stack Selector**: `http://pipeline_tech_stack_selector:8002/health`
|
||||
- **Template Manager**: `http://pipeline_template_manager:8009/health`
|
||||
- **Response Time Tracking**: Individual service response times
|
||||
- **Status Aggregation**: Overall service health status
|
||||
|
||||
## 🔧 Service Dependencies
|
||||
|
||||
### External Services
|
||||
1. **Tech Stack Selector** (Port 8002)
|
||||
- **Purpose**: Budget and domain-based recommendations
|
||||
- **Data Source**: PostgreSQL + Neo4j
|
||||
- **Features**: Price analysis, Claude AI integration
|
||||
- **Health Check**: `/health`
|
||||
|
||||
2. **Template Manager** (Port 8009)
|
||||
- **Purpose**: Template-based recommendations
|
||||
- **Data Source**: Template database
|
||||
- **Features**: Feature learning, usage tracking
|
||||
- **Health Check**: `/health`
|
||||
|
||||
3. **Template Manager AI** (Port 8013)
|
||||
- **Purpose**: AI-powered template analysis
|
||||
- **Features**: Claude AI integration for templates
|
||||
- **Health Check**: `/health`
|
||||
|
||||
4. **Claude AI** (External API)
|
||||
- **Purpose**: Intelligent recommendation unification
|
||||
- **Model**: claude-3-sonnet-20240229
|
||||
- **Features**: Natural language processing, optimization
|
||||
|
||||
### Internal Components
|
||||
1. **Schema Validator**: JSON schema validation using Ajv
|
||||
2. **Logger**: Winston-based structured logging
|
||||
3. **Error Handler**: Comprehensive error handling
|
||||
4. **Request Validator**: Joi-based input validation
|
||||
5. **Health Check Middleware**: External service monitoring
|
||||
|
||||
## 📊 Performance Characteristics
|
||||
|
||||
### Response Times
|
||||
- **Health Check**: ~12ms
|
||||
- **Tech Stack Only**: ~50ms
|
||||
- **Template Only**: ~15ms
|
||||
- **Unified Recommendation**: ~11ms (with fallback)
|
||||
- **Claude AI Unification**: ~2-5 seconds
|
||||
|
||||
### Memory Usage
|
||||
- **Base Memory**: ~16MB
|
||||
- **Peak Memory**: ~18MB
|
||||
- **External Memory**: ~3MB
|
||||
|
||||
### Throughput
|
||||
- **Rate Limit**: 100 requests per 15 minutes per IP
|
||||
- **Concurrent Requests**: Handled by Express.js
|
||||
- **Timeout**: 30 seconds per external service call
|
||||
|
||||
## 🛡️ Security & Reliability
|
||||
|
||||
### Security Features
|
||||
- **Helmet**: Security headers
|
||||
- **CORS**: Cross-origin resource sharing
|
||||
- **Rate Limiting**: Abuse prevention
|
||||
- **Input Validation**: XSS and injection prevention
|
||||
- **Error Sanitization**: No sensitive data in error messages
|
||||
|
||||
### Reliability Features
|
||||
- **Graceful Fallbacks**: Multiple fallback strategies
|
||||
- **Circuit Breaker Pattern**: Service failure handling
|
||||
- **Timeout Management**: Prevents hanging requests
|
||||
- **Health Monitoring**: Proactive service monitoring
|
||||
- **Structured Logging**: Comprehensive debugging
|
||||
|
||||
## 🚀 Deployment & Scaling
|
||||
|
||||
### Docker Configuration
|
||||
- **Base Image**: Node.js 18 Alpine
|
||||
- **Port Mapping**: 8014:8010
|
||||
- **Health Check**: Built-in health check endpoint
|
||||
- **Logging**: JSON file logging with rotation
|
||||
|
||||
### Environment Variables
|
||||
- **Service URLs**: External service endpoints
|
||||
- **Claude API Key**: AI integration
|
||||
- **Database URLs**: Connection strings
|
||||
- **Security Keys**: JWT secrets, API keys
|
||||
- **Performance Tuning**: Timeouts, limits
|
||||
|
||||
## 📈 Monitoring & Observability
|
||||
|
||||
### Metrics Tracked
|
||||
- **Response Times**: Per endpoint and service
|
||||
- **Error Rates**: By error type and service
|
||||
- **Service Availability**: External service health
|
||||
- **Memory Usage**: Heap and external memory
|
||||
- **Request Volume**: Rate limiting metrics
|
||||
|
||||
### Logging Strategy
|
||||
- **Structured Logs**: JSON format for easy parsing
|
||||
- **Log Levels**: Appropriate level for each event
|
||||
- **Request Tracing**: Unique identifiers for requests
|
||||
- **Error Context**: Detailed error information
|
||||
- **Performance Metrics**: Response time tracking
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Summary
|
||||
|
||||
The Unison service implements a **sophisticated orchestration workflow** that:
|
||||
|
||||
1. **Validates** incoming requests with strict schema validation
|
||||
2. **Orchestrates** parallel calls to multiple recommendation services
|
||||
3. **Unifies** recommendations using Claude AI for intelligent optimization
|
||||
4. **Validates** outputs with comprehensive schema validation
|
||||
5. **Provides** multiple fallback strategies for reliability
|
||||
6. **Monitors** health and performance continuously
|
||||
7. **Logs** everything for debugging and analysis
|
||||
|
||||
This creates a **robust, intelligent, and reliable** system that can provide high-quality tech stack recommendations even when individual services fail, while maintaining excellent performance and security standards.
|
||||
|
||||
---
|
||||
*Generated on: 2025-09-22T05:01:45.120Z*
|
||||
*Service Version: 1.0.0*
|
||||
*Status: OPERATIONAL*
|
||||
499
services/unison/WORKFLOW_DIAGRAM.md
Normal file
@ -0,0 +1,499 @@
|
||||
# Unison Service - Visual Workflow Diagram
|
||||
|
||||
## 🏗️ Complete System Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────────────┐
|
||||
│ UNISON SERVICE ARCHITECTURE │
|
||||
└─────────────────────────────────────────────────────────────────────────────────┘
|
||||
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Client App │───▶│ Unison Service │───▶│ Claude AI API │
|
||||
│ │ │ (Port 8014) │ │ │
|
||||
└─────────────────┘ └─────────┬────────┘ └─────────────────┘
|
||||
│
|
||||
┌────────────┼────────────┐
|
||||
│ │ │
|
||||
┌───────▼──────┐ ┌───▼────┐ ┌────▼──────┐
|
||||
│ Tech Stack │ │Template│ │Template │
|
||||
│ Selector │ │Manager │ │Manager AI │
|
||||
│ (Port 8002) │ │(8009) │ │(Port 8013)│
|
||||
└──────────────┘ └────────┘ └───────────┘
|
||||
```
|
||||
|
||||
## 🔄 Detailed Workflow Flow
|
||||
|
||||
### 1. Request Processing Pipeline
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Client Request │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Express Server │
|
||||
│ (Port 8014) │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Security Stack │
|
||||
│ • Helmet │
|
||||
│ • CORS │
|
||||
│ • Rate Limiting │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Request Parser │
|
||||
│ • JSON Parser │
|
||||
│ • URL Encoded │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Validation │
|
||||
│ • Joi Schema │
|
||||
│ • Input Check │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Route Handler │
|
||||
│ • Unified │
|
||||
│ • Tech Stack │
|
||||
│ • Template │
|
||||
└─────────┬───────┘
|
||||
```
|
||||
|
||||
### 2. Unified Recommendation Workflow
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ POST /unified │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Input Validation│
|
||||
│ • Domain │
|
||||
│ • Budget │
|
||||
│ • Technologies │
|
||||
│ • Template ID │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Parallel Calls │
|
||||
│ ┌─────────────┐ │
|
||||
│ │Tech Stack │ │
|
||||
│ │Selector │ │
|
||||
│ └─────────────┘ │
|
||||
│ ┌─────────────┐ │
|
||||
│ │Template │ │
|
||||
│ │Manager │ │
|
||||
│ └─────────────┘ │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Decision Logic │
|
||||
│ • Both Success │
|
||||
│ • One Success │
|
||||
│ • Both Failed │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Claude AI │
|
||||
│ Unification │
|
||||
│ (if both OK) │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Schema │
|
||||
│ Validation │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Response │
|
||||
│ Generation │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Client Response │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### 3. Service Integration Details
|
||||
|
||||
#### Tech Stack Selector Integration
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Unison Service │
|
||||
└─────────┬───────┘
|
||||
│ POST /recommend/best
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Tech Stack │
|
||||
│ Selector │
|
||||
│ (Port 8002) │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Data Sources │
|
||||
│ • PostgreSQL │
|
||||
│ • Neo4j │
|
||||
│ • Price Data │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Claude AI │
|
||||
│ Analysis │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Recommendations │
|
||||
│ • Cost Analysis │
|
||||
│ • Team Sizes │
|
||||
│ • Tech Stacks │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
#### Template Manager Integration
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Unison Service │
|
||||
└─────────┬───────┘
|
||||
│ GET /api/templates/{id}/ai-recommendations
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Template │
|
||||
│ Manager │
|
||||
│ (Port 8009) │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Template │
|
||||
│ Database │
|
||||
│ • Features │
|
||||
│ • Usage Data │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Template AI │
|
||||
│ Service │
|
||||
│ (Port 8013) │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ AI Analysis │
|
||||
│ • Feature Match │
|
||||
│ • Optimization │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### 4. Claude AI Unification Process
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Tech Stack │
|
||||
│ Recommendation │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Claude AI │
|
||||
│ Analysis │
|
||||
│ • Cost Balance │
|
||||
│ • Domain Match │
|
||||
│ • Tech Merge │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Template │
|
||||
│ Recommendation │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Unified │
|
||||
│ Recommendation │
|
||||
│ • Optimized │
|
||||
│ • Balanced │
|
||||
│ • Validated │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### 5. Error Handling & Fallback Strategy
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Service Call │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Success? │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
┌─────┴─────┐
|
||||
│ │
|
||||
▼ ▼
|
||||
┌─────────┐ ┌─────────┐
|
||||
│ Success │ │ Failure │
|
||||
└────┬────┘ └────┬────┘
|
||||
│ │
|
||||
▼ ▼
|
||||
┌─────────┐ ┌─────────┐
|
||||
│ Process │ │ Log │
|
||||
│ Result │ │ Error │
|
||||
└────┬────┘ └────┬────┘
|
||||
│ │
|
||||
▼ ▼
|
||||
┌─────────┐ ┌─────────┐
|
||||
│ Return │ │ Fallback│
|
||||
│ Data │ │ Strategy│
|
||||
└─────────┘ └─────────┘
|
||||
```
|
||||
|
||||
### 6. Health Monitoring Flow
|
||||
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Health Check │
|
||||
│ Request │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Check Internal │
|
||||
│ • Memory │
|
||||
│ • CPU │
|
||||
│ • Uptime │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Check External │
|
||||
│ Services │
|
||||
│ • Tech Stack │
|
||||
│ • Template │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Aggregate │
|
||||
│ Health Status │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Return Health │
|
||||
│ Response │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
## 🔧 Data Flow Architecture
|
||||
|
||||
### Request Data Flow
|
||||
```
|
||||
Client Request
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Input Validation│
|
||||
│ • Joi Schema │
|
||||
│ • Type Check │
|
||||
│ • Range Check │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Service Calls │
|
||||
│ • Parallel │
|
||||
│ • Async │
|
||||
│ • Timeout │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Data Processing │
|
||||
│ • Merge │
|
||||
│ • Optimize │
|
||||
│ • Validate │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Response │
|
||||
│ • JSON Format │
|
||||
│ • Error Handling│
|
||||
│ • Logging │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### Response Data Flow
|
||||
```
|
||||
Service Response
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Schema │
|
||||
│ Validation │
|
||||
│ • Ajv Validator │
|
||||
│ • Field Check │
|
||||
│ • Type Check │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Error Handling │
|
||||
│ • Validation │
|
||||
│ • Service │
|
||||
│ • Network │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Response │
|
||||
│ Formatting │
|
||||
│ • JSON │
|
||||
│ • Metadata │
|
||||
│ • Status Codes │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Client Response │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
## 📊 Performance Flow
|
||||
|
||||
### Response Time Breakdown
|
||||
```
|
||||
Total Request Time: ~50ms
|
||||
│
|
||||
├── Input Validation: ~2ms
|
||||
├── Service Calls: ~30ms
|
||||
│ ├── Tech Stack: ~15ms
|
||||
│ └── Template: ~15ms
|
||||
├── Claude AI: ~2-5s (if used)
|
||||
├── Schema Validation: ~3ms
|
||||
└── Response Formatting: ~1ms
|
||||
```
|
||||
|
||||
### Memory Usage Flow
|
||||
```
|
||||
Memory Allocation
|
||||
│
|
||||
├── Base Service: ~16MB
|
||||
├── Request Processing: ~2MB
|
||||
├── External Calls: ~1MB
|
||||
└── Response Generation: ~1MB
|
||||
```
|
||||
|
||||
## 🛡️ Security Flow
|
||||
|
||||
### Security Pipeline
|
||||
```
|
||||
Incoming Request
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Helmet │
|
||||
│ • Security │
|
||||
│ Headers │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ CORS │
|
||||
│ • Origin Check │
|
||||
│ • Method Check │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Rate Limiting │
|
||||
│ • IP Tracking │
|
||||
│ • Request Count │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Input │
|
||||
│ Validation │
|
||||
│ • XSS Prevent │
|
||||
│ • Injection │
|
||||
└─────────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Processed │
|
||||
│ Request │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
## 🚀 Deployment Flow
|
||||
|
||||
### Docker Deployment
|
||||
```
|
||||
Docker Build
|
||||
│
|
||||
├── Node.js 18 Alpine
|
||||
├── Dependencies Install
|
||||
├── Source Code Copy
|
||||
├── Permissions Set
|
||||
└── Health Check Config
|
||||
│
|
||||
▼
|
||||
Docker Run
|
||||
│
|
||||
├── Port Mapping: 8014:8010
|
||||
├── Environment Variables
|
||||
├── Volume Mounts
|
||||
└── Network Configuration
|
||||
│
|
||||
▼
|
||||
Service Running
|
||||
│
|
||||
├── Health Checks
|
||||
├── Log Monitoring
|
||||
├── Error Tracking
|
||||
└── Performance Metrics
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Key Workflow Characteristics
|
||||
|
||||
1. **Asynchronous Processing**: Parallel service calls for performance
|
||||
2. **Fault Tolerance**: Multiple fallback strategies
|
||||
3. **Data Validation**: Strict input/output validation
|
||||
4. **AI Integration**: Intelligent recommendation unification
|
||||
5. **Comprehensive Logging**: Full request/response tracking
|
||||
6. **Health Monitoring**: Proactive service monitoring
|
||||
7. **Security First**: Multiple security layers
|
||||
8. **Performance Optimized**: Fast response times
|
||||
9. **Scalable Architecture**: Containerized deployment
|
||||
10. **Observable System**: Detailed metrics and logging
|
||||
|
||||
This workflow ensures that the Unison service provides **reliable, intelligent, and high-performance** tech stack recommendations while maintaining excellent security and observability standards.
|
||||
|
||||
---
|
||||
*Generated on: 2025-09-22T05:01:45.120Z*
|
||||
*Service Version: 1.0.0*
|
||||
*Status: OPERATIONAL*
|
||||
126
services/unison/config.env
Normal file
@ -0,0 +1,126 @@
|
||||
# Unison Service Environment Configuration
|
||||
# This file contains environment variables for the Unison service
|
||||
|
||||
# =====================================
|
||||
# Service Configuration
|
||||
# =====================================
|
||||
NODE_ENV=development
|
||||
PORT=8010
|
||||
HOST=0.0.0.0
|
||||
ENVIRONMENT=development
|
||||
|
||||
# =====================================
|
||||
# External Service URLs
|
||||
# =====================================
|
||||
TECH_STACK_SELECTOR_URL=http://pipeline_tech_stack_selector:8002
|
||||
TEMPLATE_MANAGER_URL=http://pipeline_template_manager:8009
|
||||
TEMPLATE_MANAGER_AI_URL=http://pipeline_template_manager:8013
|
||||
|
||||
# Service Health Check URLs
|
||||
TECH_STACK_SELECTOR_HEALTH_URL=http://pipeline_tech_stack_selector:8002/health
|
||||
TEMPLATE_MANAGER_HEALTH_URL=http://pipeline_template_manager:8009/health
|
||||
|
||||
# =====================================
|
||||
# Claude AI Configuration
|
||||
# =====================================
|
||||
CLAUDE_API_KEY=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
|
||||
|
||||
# =====================================
|
||||
# Database Configuration
|
||||
# =====================================
|
||||
POSTGRES_HOST=postgres
|
||||
POSTGRES_PORT=5432
|
||||
POSTGRES_DB=dev_pipeline
|
||||
POSTGRES_USER=pipeline_admin
|
||||
POSTGRES_PASSWORD=secure_pipeline_2024
|
||||
DATABASE_URL=postgresql://pipeline_admin:secure_pipeline_2024@postgres:5432/dev_pipeline
|
||||
|
||||
# Neo4j Configuration
|
||||
NEO4J_URI=bolt://neo4j:7687
|
||||
NEO4J_USER=neo4j
|
||||
NEO4J_USERNAME=neo4j
|
||||
NEO4J_PASSWORD=password
|
||||
|
||||
# Redis Configuration
|
||||
REDIS_HOST=redis
|
||||
REDIS_PORT=6379
|
||||
REDIS_PASSWORD=redis_secure_2024
|
||||
|
||||
# MongoDB Configuration
|
||||
MONGODB_HOST=mongodb
|
||||
MONGODB_PORT=27017
|
||||
MONGO_INITDB_ROOT_USERNAME=pipeline_admin
|
||||
MONGO_INITDB_ROOT_PASSWORD=mongo_secure_2024
|
||||
MONGODB_PASSWORD=mongo_secure_2024
|
||||
|
||||
# =====================================
|
||||
# Message Queue Configuration
|
||||
# =====================================
|
||||
RABBITMQ_HOST=rabbitmq
|
||||
RABBITMQ_PORT=5672
|
||||
RABBITMQ_DEFAULT_USER=pipeline_admin
|
||||
RABBITMQ_DEFAULT_PASS=rabbit_secure_2024
|
||||
RABBITMQ_PASSWORD=rabbit_secure_2024
|
||||
|
||||
# =====================================
|
||||
# Security & Authentication
|
||||
# =====================================
|
||||
JWT_SECRET=ultra_secure_jwt_secret_2024
|
||||
JWT_ACCESS_SECRET=access-secret-key-2024-tech4biz-secure_pipeline_2024
|
||||
JWT_REFRESH_SECRET=refresh-secret-key-2024-tech4biz-secure_pipeline_2024
|
||||
API_KEY_HEADER=X-API-Key
|
||||
|
||||
# =====================================
|
||||
# Email Configuration
|
||||
# =====================================
|
||||
SMTP_HOST=smtp.gmail.com
|
||||
SMTP_PORT=587
|
||||
SMTP_SECURE=false
|
||||
SMTP_USER=frontendtechbiz@gmail.com
|
||||
SMTP_PASS=oidhhjeasgzbqptq
|
||||
SMTP_FROM=frontendtechbiz@gmail.com
|
||||
GMAIL_USER=frontendtechbiz@gmail.com
|
||||
GMAIL_APP_PASSWORD=oidhhjeasgzbqptq
|
||||
|
||||
# =====================================
|
||||
# CORS Configuration
|
||||
# =====================================
|
||||
CORS_ORIGIN=*
|
||||
CORS_METHODS=GET,POST,PUT,DELETE,PATCH,OPTIONS
|
||||
CORS_CREDENTIALS=true
|
||||
|
||||
# =====================================
|
||||
# Service Configuration
|
||||
# =====================================
|
||||
# Rate Limiting
|
||||
RATE_LIMIT_WINDOW_MS=900000
|
||||
RATE_LIMIT_MAX_REQUESTS=100
|
||||
|
||||
# Logging
|
||||
LOG_LEVEL=info
|
||||
LOG_FILE=logs/unison.log
|
||||
|
||||
# Request Timeouts (in milliseconds)
|
||||
REQUEST_TIMEOUT=30000
|
||||
HEALTH_CHECK_TIMEOUT=5000
|
||||
|
||||
# =====================================
|
||||
# External Service Integration
|
||||
# =====================================
|
||||
# n8n Configuration
|
||||
N8N_BASIC_AUTH_USER=admin
|
||||
N8N_BASIC_AUTH_PASSWORD=admin_n8n_2024
|
||||
N8N_ENCRYPTION_KEY=very_secure_encryption_key_2024
|
||||
|
||||
# Jenkins Configuration
|
||||
JENKINS_ADMIN_ID=admin
|
||||
JENKINS_ADMIN_PASSWORD=jenkins_secure_2024
|
||||
|
||||
# Gitea Configuration
|
||||
GITEA_ADMIN_USER=admin
|
||||
GITEA_ADMIN_PASSWORD=gitea_secure_2024
|
||||
|
||||
# Monitoring
|
||||
GRAFANA_ADMIN_USER=admin
|
||||
GRAFANA_ADMIN_PASSWORD=grafana_secure_2024
|
||||
|
||||
6686
services/unison/package-lock.json
generated
Normal file
File diff suppressed because it is too large
48
services/unison/package.json
Normal file
@ -0,0 +1,48 @@
|
||||
{
|
||||
"name": "unison",
|
||||
"version": "1.0.0",
|
||||
"description": "Unison - Unified Tech Stack Recommendation Service",
|
||||
"main": "src/app.js",
|
||||
"scripts": {
|
||||
"start": "node src/app.js",
|
||||
"dev": "nodemon src/app.js",
|
||||
"test": "jest",
|
||||
"lint": "eslint src/",
|
||||
"docker:build": "docker build -t unison .",
|
||||
"docker:run": "docker run -p 8010:8010 unison"
|
||||
},
|
||||
"dependencies": {
|
||||
"express": "^4.18.2",
|
||||
"cors": "^2.8.5",
|
||||
"helmet": "^7.1.0",
|
||||
"morgan": "^1.10.0",
|
||||
"dotenv": "^16.3.1",
|
||||
"axios": "^1.6.0",
|
||||
"joi": "^17.11.0",
|
||||
"ajv": "^8.12.0",
|
||||
"ajv-formats": "^2.1.1",
|
||||
"uuid": "^9.0.1",
|
||||
"winston": "^3.11.0",
|
||||
"compression": "^1.7.4",
|
||||
"express-rate-limit": "^7.1.5",
|
||||
"pg": "^8.11.3"
|
||||
},
|
||||
"devDependencies": {
|
||||
"nodemon": "^3.0.2",
|
||||
"jest": "^29.7.0",
|
||||
"supertest": "^6.3.3",
|
||||
"eslint": "^8.55.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0.0"
|
||||
},
|
||||
"keywords": [
|
||||
"tech-stack",
|
||||
"recommendations",
|
||||
"ai",
|
||||
"claude",
|
||||
"unified"
|
||||
],
|
||||
"author": "CODENUK Team",
|
||||
"license": "MIT"
|
||||
}
|
||||
51
services/unison/setup-env.sh
Normal file
@ -0,0 +1,51 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Setup script for Unison service environment variables
|
||||
|
||||
echo "Setting up Unison service environment variables..."
|
||||
|
||||
# Check if config.env exists
|
||||
if [ ! -f "config.env" ]; then
|
||||
echo "❌ config.env file not found!"
|
||||
echo "Please ensure config.env exists in the current directory."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ Found config.env file"
|
||||
|
||||
# Check if .env already exists
|
||||
if [ -f ".env" ]; then
|
||||
echo "⚠️ .env file already exists!"
|
||||
read -p "Do you want to overwrite it? (y/N): " -n 1 -r
|
||||
echo
|
||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||
echo "❌ Setup cancelled."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# Copy config.env to .env
|
||||
cp config.env .env
|
||||
echo "✅ Created .env file from config.env"
|
||||
|
||||
# Check if running in Docker
|
||||
if [ -f "/.dockerenv" ]; then
|
||||
echo "🐳 Running in Docker container - using config.env directly"
|
||||
echo "✅ Environment variables are loaded from config.env"
|
||||
else
|
||||
echo "🖥️ Running locally - .env file created"
|
||||
echo "📝 You can edit .env file if you need to override any settings"
|
||||
fi
|
||||
|
||||
echo "🎉 Environment setup complete!"
|
||||
echo "📋 Configuration includes:"
|
||||
echo " - Service URLs for tech-stack-selector and template-manager"
|
||||
echo " - Claude AI API key and configuration"
|
||||
echo " - Database connections (PostgreSQL, Neo4j, Redis, MongoDB)"
|
||||
echo " - Security and authentication settings"
|
||||
echo " - Email configuration"
|
||||
echo " - CORS settings"
|
||||
echo ""
|
||||
echo "🚀 Next steps:"
|
||||
echo " 1. Run: npm start"
|
||||
echo " 2. Or with Docker: docker-compose up -d unison"
|
||||
140
services/unison/src/app.js
Normal file
@ -0,0 +1,140 @@
|
||||
const express = require('express');
|
||||
const cors = require('cors');
|
||||
const helmet = require('helmet');
|
||||
const morgan = require('morgan');
|
||||
const compression = require('compression');
|
||||
const rateLimit = require('express-rate-limit');
|
||||
require('dotenv').config({ path: './config.env' });
|
||||
|
||||
const logger = require('./utils/logger');
|
||||
const errorHandler = require('./middleware/errorHandler');
|
||||
const requestValidator = require('./middleware/requestValidator');
|
||||
const healthCheck = require('./middleware/healthCheck');
|
||||
|
||||
// Import routes
|
||||
const recommendationRoutes = require('./routes/recommendations');
|
||||
const healthRoutes = require('./routes/health');
|
||||
|
||||
const app = express();
|
||||
const PORT = process.env.PORT || 8010;
|
||||
const HOST = process.env.HOST || '0.0.0.0';
|
||||
|
||||
// Security middleware
|
||||
app.use(helmet({
|
||||
contentSecurityPolicy: {
|
||||
directives: {
|
||||
defaultSrc: ["'self'"],
|
||||
styleSrc: ["'self'", "'unsafe-inline'"],
|
||||
scriptSrc: ["'self'"],
|
||||
imgSrc: ["'self'", "data:", "https:"],
|
||||
},
|
||||
},
|
||||
}));
|
||||
|
||||
// CORS configuration
|
||||
app.use(cors({
|
||||
origin: process.env.CORS_ORIGIN || '*',
|
||||
credentials: process.env.CORS_CREDENTIALS === 'true',
|
||||
methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
|
||||
allowedHeaders: ['Content-Type', 'Authorization', 'X-API-Key']
|
||||
}));
|
||||
|
||||
// Compression middleware
|
||||
app.use(compression());
|
||||
|
||||
// Logging middleware
|
||||
app.use(morgan('combined', {
|
||||
stream: {
|
||||
write: (message) => logger.info(message.trim())
|
||||
}
|
||||
}));
|
||||
|
||||
// Rate limiting
|
||||
const limiter = rateLimit({
|
||||
windowMs: parseInt(process.env.RATE_LIMIT_WINDOW_MS) || 15 * 60 * 1000, // 15 minutes
|
||||
max: parseInt(process.env.RATE_LIMIT_MAX_REQUESTS) || 100, // limit each IP to 100 requests per windowMs
|
||||
message: {
|
||||
error: 'Too many requests from this IP, please try again later.',
|
||||
retryAfter: Math.ceil((parseInt(process.env.RATE_LIMIT_WINDOW_MS) || 15 * 60 * 1000) / 1000)
|
||||
},
|
||||
standardHeaders: true,
|
||||
legacyHeaders: false,
|
||||
});
|
||||
|
||||
app.use(limiter);
|
||||
|
||||
// Body parsing middleware
|
||||
app.use(express.json({ limit: '10mb' }));
|
||||
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
|
||||
|
||||
// Request validation middleware
|
||||
app.use(requestValidator);
|
||||
|
||||
// Health check middleware
|
||||
app.use(healthCheck);
|
||||
|
||||
// Routes
|
||||
app.use('/api/recommendations', recommendationRoutes);
|
||||
app.use('/health', healthRoutes);
|
||||
|
||||
// Root endpoint
|
||||
app.get('/', (req, res) => {
|
||||
res.json({
|
||||
message: 'Unison - Unified Tech Stack Recommendation Service',
|
||||
version: '1.0.0',
|
||||
status: 'operational',
|
||||
timestamp: new Date().toISOString(),
|
||||
endpoints: {
|
||||
health: '/health',
|
||||
recommendations: '/api/recommendations',
|
||||
unified: '/api/recommendations/unified'
|
||||
},
|
||||
services: {
|
||||
techStackSelector: process.env.TECH_STACK_SELECTOR_URL || 'http://pipeline_tech_stack_selector:8002',
|
||||
templateManager: process.env.TEMPLATE_MANAGER_URL || 'http://pipeline_template_manager:8009'
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
// 404 handler
|
||||
app.use('*', (req, res) => {
|
||||
res.status(404).json({
|
||||
error: 'Not Found',
|
||||
message: `Route ${req.originalUrl} not found`,
|
||||
availableEndpoints: [
|
||||
'GET /',
|
||||
'GET /health',
|
||||
'POST /api/recommendations/unified'
|
||||
]
|
||||
});
|
||||
});
|
||||
|
||||
// Error handling middleware (must be last)
|
||||
app.use(errorHandler);
|
||||
|
||||
// Start server
|
||||
const server = app.listen(PORT, HOST, () => {
|
||||
logger.info(`🚀 Unison service started on ${HOST}:${PORT}`);
|
||||
logger.info(`📊 Environment: ${process.env.NODE_ENV || 'development'}`);
|
||||
logger.info(`🔗 Tech Stack Selector: ${process.env.TECH_STACK_SELECTOR_URL || 'http://pipeline_tech_stack_selector:8002'}`);
|
||||
logger.info(`🔗 Template Manager: ${process.env.TEMPLATE_MANAGER_URL || 'http://pipeline_template_manager:8009'}`);
|
||||
});
|
||||
|
||||
// Graceful shutdown
|
||||
process.on('SIGTERM', () => {
|
||||
logger.info('SIGTERM received, shutting down gracefully');
|
||||
server.close(() => {
|
||||
logger.info('Process terminated');
|
||||
process.exit(0);
|
||||
});
|
||||
});
|
||||
|
||||
process.on('SIGINT', () => {
|
||||
logger.info('SIGINT received, shutting down gracefully');
|
||||
server.close(() => {
|
||||
logger.info('Process terminated');
|
||||
process.exit(0);
|
||||
});
|
||||
});
|
||||
|
||||
module.exports = app;
|
||||
72
services/unison/src/middleware/errorHandler.js
Normal file
@ -0,0 +1,72 @@
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
const errorHandler = (err, req, res, next) => {
|
||||
let error = { ...err };
|
||||
error.message = err.message;
|
||||
|
||||
// Log error
|
||||
logger.error({
|
||||
message: err.message,
|
||||
stack: err.stack,
|
||||
url: req.originalUrl,
|
||||
method: req.method,
|
||||
ip: req.ip,
|
||||
userAgent: req.get('User-Agent')
|
||||
});
|
||||
|
||||
// Mongoose bad ObjectId
|
||||
if (err.name === 'CastError') {
|
||||
const message = 'Resource not found';
|
||||
error = { message, statusCode: 404 };
|
||||
}
|
||||
|
||||
// Mongoose duplicate key
|
||||
if (err.code === 11000) {
|
||||
const message = 'Duplicate field value entered';
|
||||
error = { message, statusCode: 400 };
|
||||
}
|
||||
|
||||
// Mongoose validation error
|
||||
if (err.name === 'ValidationError') {
|
||||
const message = Object.values(err.errors).map(val => val.message).join(', ');
|
||||
error = { message, statusCode: 400 };
|
||||
}
|
||||
|
||||
// JWT errors
|
||||
if (err.name === 'JsonWebTokenError') {
|
||||
const message = 'Invalid token';
|
||||
error = { message, statusCode: 401 };
|
||||
}
|
||||
|
||||
if (err.name === 'TokenExpiredError') {
|
||||
const message = 'Token expired';
|
||||
error = { message, statusCode: 401 };
|
||||
}
|
||||
|
||||
// Axios errors
|
||||
if (err.isAxiosError) {
|
||||
const message = `External service error: ${err.response?.data?.message || err.message}`;
|
||||
const statusCode = err.response?.status || 500;
|
||||
error = { message, statusCode };
|
||||
}
|
||||
|
||||
// Joi validation errors
|
||||
if (err.isJoi) {
|
||||
const message = err.details.map(detail => detail.message).join(', ');
|
||||
error = { message, statusCode: 400 };
|
||||
}
|
||||
|
||||
// AJV validation errors (err.errors is an array of Ajv error objects; the
// Array.isArray guard keeps Mongoose-style ValidationErrors, whose errors
// property is an object, from being reformatted a second time here)
if (err.name === 'ValidationError' && Array.isArray(err.errors)) {
|
||||
const message = err.errors.map(e => `${e.instancePath || 'root'}: ${e.message}`).join(', ');
|
||||
error = { message, statusCode: 400 };
|
||||
}
|
||||
|
||||
res.status(error.statusCode || 500).json({
|
||||
success: false,
|
||||
error: error.message || 'Server Error',
|
||||
...(process.env.NODE_ENV === 'development' && { stack: err.stack })
|
||||
});
|
||||
};
|
||||
|
||||
module.exports = errorHandler;
|
||||
60
services/unison/src/middleware/healthCheck.js
Normal file
@ -0,0 +1,60 @@
|
||||
const axios = require('axios');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
// Health check middleware
|
||||
const healthCheck = async (req, res, next) => {
|
||||
// Skip health check for actual health endpoint
|
||||
if (req.path === '/health') {
|
||||
return next();
|
||||
}
|
||||
|
||||
// Check external services health
|
||||
const externalServices = {
|
||||
techStackSelector: process.env.TECH_STACK_SELECTOR_HEALTH_URL || 'http://tech-stack-selector:8002/health',
|
||||
templateManager: process.env.TEMPLATE_MANAGER_HEALTH_URL || 'http://template-manager:8009/health'
|
||||
};
|
||||
|
||||
const healthStatus = {
|
||||
unison: 'healthy',
|
||||
externalServices: {},
|
||||
timestamp: new Date().toISOString()
|
||||
};
|
||||
|
||||
// Check each external service
|
||||
for (const [serviceName, url] of Object.entries(externalServices)) {
|
||||
try {
|
||||
const response = await axios.get(url, {
|
||||
timeout: parseInt(process.env.HEALTH_CHECK_TIMEOUT) || 5000,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-HealthCheck/1.0'
|
||||
}
|
||||
});
|
||||
|
||||
healthStatus.externalServices[serviceName] = {
|
||||
status: 'healthy',
|
||||
responseTime: response.headers['x-response-time'] || 'unknown',
|
||||
lastChecked: new Date().toISOString()
|
||||
};
|
||||
} catch (error) {
|
||||
logger.warn({
|
||||
message: `External service ${serviceName} health check failed`,
|
||||
service: serviceName,
|
||||
url: url,
|
||||
error: error.message
|
||||
});
|
||||
|
||||
healthStatus.externalServices[serviceName] = {
|
||||
status: 'unhealthy',
|
||||
error: error.message,
|
||||
lastChecked: new Date().toISOString()
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Store health status in request for use in routes
|
||||
req.healthStatus = healthStatus;
|
||||
|
||||
next();
|
||||
};
|
||||
|
||||
module.exports = healthCheck;
|
||||
45
services/unison/src/middleware/requestValidator.js
Normal file
@ -0,0 +1,45 @@
|
||||
const Joi = require('joi');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
// Request validation middleware
|
||||
const requestValidator = (req, res, next) => {
|
||||
// Skip validation for health checks and root endpoint
|
||||
if (req.path === '/health' || req.path === '/') {
|
||||
return next();
|
||||
}
|
||||
|
||||
// Validate request body for POST/PUT requests
|
||||
if (['POST', 'PUT', 'PATCH'].includes(req.method) && req.body) {
|
||||
// Basic validation for unified recommendation request - simplified
|
||||
if (req.path.includes('/unified')) {
|
||||
const schema = Joi.object({
|
||||
domain: Joi.string().min(1).max(100).optional(),
|
||||
budget: Joi.number().positive().optional(),
|
||||
preferredTechnologies: Joi.array().items(Joi.string().min(1).max(50)).optional(),
|
||||
templateId: Joi.string().uuid().optional(),
|
||||
includeSimilar: Joi.boolean().optional(),
|
||||
includeKeywords: Joi.boolean().optional(),
|
||||
forceRefresh: Joi.boolean().optional()
|
||||
});
|
||||
|
||||
const { error } = schema.validate(req.body);
|
||||
if (error) {
|
||||
logger.warn({
|
||||
message: 'Request validation failed',
|
||||
error: error.details[0].message,
|
||||
body: req.body,
|
||||
path: req.path
|
||||
});
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
error: 'Invalid request data',
|
||||
details: error.details[0].message
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
next();
|
||||
};
|
||||
|
||||
module.exports = requestValidator;
|
||||
160
services/unison/src/routes/health.js
Normal file
@ -0,0 +1,160 @@
|
||||
const express = require('express');
|
||||
const axios = require('axios');
|
||||
const DatabaseService = require('../services/databaseService');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
// Create database service instance
|
||||
const databaseService = new DatabaseService();
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
// Health check endpoint
|
||||
router.get('/', async (req, res) => {
|
||||
try {
|
||||
const startTime = Date.now();
|
||||
|
||||
// Check external services
|
||||
const externalServices = {
|
||||
techStackSelector: process.env.TECH_STACK_SELECTOR_HEALTH_URL || 'http://tech-stack-selector:8002/health',
|
||||
templateManager: process.env.TEMPLATE_MANAGER_HEALTH_URL || 'http://template-manager:8009/health'
|
||||
};
|
||||
|
||||
const healthChecks = {};
|
||||
let allHealthy = true;
|
||||
|
||||
// Check database health
|
||||
const databaseHealthy = await databaseService.isHealthy();
|
||||
if (!databaseHealthy) {
|
||||
allHealthy = false;
|
||||
}
|
||||
|
||||
// Check each external service
|
||||
for (const [serviceName, url] of Object.entries(externalServices)) {
|
||||
try {
|
||||
const serviceStartTime = Date.now();
|
||||
const response = await axios.get(url, {
|
||||
timeout: parseInt(process.env.HEALTH_CHECK_TIMEOUT) || 5000,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-HealthCheck/1.0'
|
||||
}
|
||||
});
|
||||
|
||||
const responseTime = Date.now() - serviceStartTime;
|
||||
|
||||
healthChecks[serviceName] = {
|
||||
status: 'healthy',
|
||||
responseTime: `${responseTime}ms`,
|
||||
statusCode: response.status,
|
||||
lastChecked: new Date().toISOString(),
|
||||
data: response.data
|
||||
};
|
||||
} catch (error) {
|
||||
allHealthy = false;
|
||||
healthChecks[serviceName] = {
|
||||
status: 'unhealthy',
|
||||
error: error.message,
|
||||
statusCode: error.response?.status || 'timeout',
|
||||
lastChecked: new Date().toISOString()
|
||||
};
|
||||
|
||||
logger.warn({
|
||||
message: `External service ${serviceName} health check failed`,
|
||||
service: serviceName,
|
||||
url: url,
|
||||
error: error.message,
|
||||
statusCode: error.response?.status
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const totalResponseTime = Date.now() - startTime;
|
||||
const overallStatus = allHealthy ? 'healthy' : 'degraded';
|
||||
|
||||
const healthResponse = {
|
||||
status: overallStatus,
|
||||
service: 'unison',
|
||||
version: '1.0.0',
|
||||
timestamp: new Date().toISOString(),
|
||||
uptime: process.uptime(),
|
||||
responseTime: `${totalResponseTime}ms`,
|
||||
environment: process.env.NODE_ENV || 'development',
|
||||
memory: {
|
||||
used: Math.round(process.memoryUsage().heapUsed / 1024 / 1024) + ' MB',
|
||||
total: Math.round(process.memoryUsage().heapTotal / 1024 / 1024) + ' MB',
|
||||
external: Math.round(process.memoryUsage().external / 1024 / 1024) + ' MB'
|
||||
},
|
||||
externalServices: healthChecks,
|
||||
database: {
|
||||
status: databaseHealthy ? 'healthy' : 'unhealthy',
|
||||
type: 'PostgreSQL'
|
||||
},
|
||||
features: {
|
||||
unifiedRecommendations: true,
|
||||
techStackSelector: healthChecks.techStackSelector?.status === 'healthy',
|
||||
templateManager: healthChecks.templateManager?.status === 'healthy',
|
||||
claudeAI: !!process.env.CLAUDE_API_KEY,
|
||||
databaseStorage: databaseHealthy
|
||||
}
|
||||
};
|
||||
|
||||
const statusCode = allHealthy ? 200 : 503;
|
||||
res.status(statusCode).json(healthResponse);
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Health check failed',
|
||||
error: error.message,
|
||||
stack: error.stack
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
status: 'unhealthy',
|
||||
service: 'unison',
|
||||
error: 'Health check failed',
|
||||
message: error.message,
|
||||
timestamp: new Date().toISOString()
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
// Detailed health check with more information
|
||||
router.get('/detailed', async (req, res) => {
|
||||
try {
|
||||
const detailedHealth = {
|
||||
status: 'healthy',
|
||||
service: 'unison',
|
||||
version: '1.0.0',
|
||||
timestamp: new Date().toISOString(),
|
||||
uptime: process.uptime(),
|
||||
environment: process.env.NODE_ENV || 'development',
|
||||
nodeVersion: process.version,
|
||||
platform: process.platform,
|
||||
architecture: process.arch,
|
||||
memory: process.memoryUsage(),
|
||||
cpu: process.cpuUsage(),
|
||||
pid: process.pid,
|
||||
config: {
|
||||
port: process.env.PORT || 8010,
|
||||
host: process.env.HOST || '0.0.0.0',
|
||||
techStackSelectorUrl: process.env.TECH_STACK_SELECTOR_URL,
|
||||
templateManagerUrl: process.env.TEMPLATE_MANAGER_URL,
|
||||
claudeApiKey: process.env.CLAUDE_API_KEY ? 'configured' : 'not configured'
|
||||
}
|
||||
};
|
||||
|
||||
res.json(detailedHealth);
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Detailed health check failed',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
status: 'unhealthy',
|
||||
error: 'Detailed health check failed',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
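For quick verification of the health routes above, a client can probe the gateway directly. The sketch below is illustrative only: the port (8010) matches the default shown in the config block, but the actual host and port depend on the deployment.

```javascript
const axios = require('axios');

// Probe the aggregate health endpoint; a 503 still returns the JSON body,
// so validateStatus keeps axios from throwing on a degraded response.
async function checkUnisonHealth(baseUrl = 'http://localhost:8010') {
  const { status, data } = await axios.get(`${baseUrl}/health`, {
    timeout: 5000,
    validateStatus: () => true
  });

  if (status === 200) {
    console.log(`unison healthy, uptime ${Math.round(data.uptime)}s`);
  } else {
    console.warn(`unison degraded (${status}):`, data.externalServices);
  }
  return data;
}

checkUnisonHealth().catch(err => console.error('health probe failed:', err.message));
```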
601 services/unison/src/routes/recommendations.js (new file)
@@ -0,0 +1,601 @@
|
||||
const express = require('express');
|
||||
const techStackService = require('../services/techStackService');
|
||||
const templateService = require('../services/templateService');
|
||||
const claudeService = require('../services/claudeService');
|
||||
const DatabaseService = require('../services/databaseService');
|
||||
const schemaValidator = require('../utils/schemaValidator');
|
||||
const logger = require('../utils/logger');
|
||||
const { v4: uuidv4 } = require('uuid');
|
||||
|
||||
// Create database service instance
|
||||
const databaseService = new DatabaseService();
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
/**
|
||||
* POST /api/recommendations/unified
|
||||
* Get unified tech stack recommendation combining both services
|
||||
*/
|
||||
router.post('/unified', async (req, res) => {
|
||||
try {
|
||||
const startTime = Date.now();
|
||||
|
||||
// Extract request parameters with defaults
|
||||
const {
|
||||
domain = 'general',
|
||||
budget = 5000,
|
||||
preferredTechnologies = [],
|
||||
templateId,
|
||||
includeSimilar = false,
|
||||
includeKeywords = false,
|
||||
forceRefresh = false
|
||||
} = req.body;
|
||||
|
||||
logger.info({
|
||||
message: 'Processing unified recommendation request',
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies,
|
||||
templateId,
|
||||
includeSimilar,
|
||||
includeKeywords,
|
||||
forceRefresh
|
||||
});
|
||||
|
||||
|
||||
// Use default values if not provided
|
||||
const techStackRequest = { domain, budget, preferredTechnologies };
|
||||
const techStackValidation = schemaValidator.validateTechStackRequest(techStackRequest);
|
||||
if (!techStackValidation.valid) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
error: 'Invalid tech stack request parameters',
|
||||
details: techStackValidation.errors
|
||||
});
|
||||
}
|
||||
|
||||
// If templateId is provided, validate it
|
||||
if (templateId) {
|
||||
const templateRequest = { templateId, includeSimilar, includeKeywords, forceRefresh };
|
||||
const templateValidation = schemaValidator.validateTemplateRequest(templateRequest);
|
||||
if (!templateValidation.valid) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
error: 'Invalid template request parameters',
|
||||
details: templateValidation.errors
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Fetch recommendations from services
|
||||
const promises = [];
|
||||
|
||||
// Always fetch from tech-stack-selector (domain + budget based)
|
||||
promises.push(
|
||||
techStackService.getRecommendations({
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies
|
||||
}).catch(error => {
|
||||
logger.error({
|
||||
message: 'Failed to fetch from tech-stack-selector',
|
||||
error: error.message
|
||||
});
|
||||
return { success: false, error: error.message, source: 'tech-stack-selector' };
|
||||
})
|
||||
);
|
||||
|
||||
// Fetch from template-manager if templateId is provided
|
||||
if (templateId) {
|
||||
promises.push(
|
||||
templateService.getAIRecommendations(templateId, { forceRefresh })
|
||||
.catch(error => {
|
||||
logger.error({
|
||||
message: 'Failed to fetch from template-manager',
|
||||
error: error.message,
|
||||
templateId
|
||||
});
|
||||
return { success: false, error: error.message, source: 'template-manager' };
|
||||
})
|
||||
);
|
||||
} else {
|
||||
// If no templateId, provide a default template recommendation
|
||||
promises.push(Promise.resolve({
|
||||
success: true,
|
||||
data: {
|
||||
stack_name: 'Default General Purpose Stack',
|
||||
monthly_cost: 100,
|
||||
setup_cost: 2000,
|
||||
team_size: '2-3',
|
||||
development_time: 4,
|
||||
satisfaction: 85,
|
||||
success_rate: 80,
|
||||
frontend: 'React',
|
||||
backend: 'Node.js',
|
||||
database: 'PostgreSQL',
|
||||
cloud: 'AWS',
|
||||
testing: 'Jest',
|
||||
mobile: 'React Native',
|
||||
devops: 'Docker',
|
||||
ai_ml: 'Not specified',
|
||||
recommendation_score: 85.0
|
||||
},
|
||||
source: 'template-manager-default'
|
||||
}));
|
||||
}
|
||||
|
||||
const [techStackResult, templateResult] = await Promise.all(promises);
|
||||
|
||||
// Check if we have at least one successful recommendation
|
||||
if (!techStackResult.success && !templateResult.success) {
|
||||
return res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch recommendations from both services',
|
||||
details: {
|
||||
techStackError: techStackResult.error,
|
||||
templateError: templateResult.error
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Both services must succeed for unified recommendations
|
||||
if (!techStackResult.success || !templateResult.success) {
|
||||
return res.status(500).json({
|
||||
success: false,
|
||||
error: 'Both services are required for unified recommendations',
|
||||
message: 'Both tech-stack-selector and template-manager must be available for unified recommendations',
|
||||
processingTime: Date.now() - startTime,
|
||||
services: {
|
||||
techStackSelector: techStackResult.success ? 'available' : 'unavailable',
|
||||
templateManager: templateResult.success ? 'available' : 'unavailable'
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Both services returned recommendations - use Claude to unify them
|
||||
if (!claudeService.isAvailable()) {
|
||||
return res.status(500).json({
|
||||
success: false,
|
||||
error: 'Claude AI service is required for unified recommendations',
|
||||
message: 'Claude AI is not available. Unified recommendations require Claude AI to process both tech-stack and template recommendations.',
|
||||
processingTime: Date.now() - startTime,
|
||||
services: {
|
||||
techStackSelector: 'available',
|
||||
templateManager: 'available',
|
||||
claudeAI: 'unavailable'
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Generate unified recommendation using Claude
|
||||
const claudeResult = await claudeService.generateUnifiedRecommendation(
|
||||
techStackResult,
|
||||
templateResult,
|
||||
{ domain, budget, preferredTechnologies, templateId }
|
||||
);
|
||||
|
||||
// Log Claude AI response for debugging
|
||||
logger.info({
|
||||
message: 'Claude AI response received',
|
||||
claudeResponse: claudeResult.data,
|
||||
claudeModel: claudeResult.claudeModel
|
||||
});
|
||||
|
||||
// Validate the unified recommendation
|
||||
const validation = schemaValidator.validateUnifiedRecommendation(claudeResult.data);
|
||||
if (!validation.valid) {
|
||||
logger.warn({
|
||||
message: 'Claude generated invalid recommendation, using tech-stack-selector as fallback',
|
||||
validationErrors: validation.errors,
|
||||
claudeResponse: claudeResult.data
|
||||
});
|
||||
|
||||
return res.json({
|
||||
success: true,
|
||||
data: techStackResult.data,
|
||||
source: 'tech-stack-selector (fallback)',
|
||||
message: 'Claude generated invalid recommendation, using tech-stack-selector as fallback',
|
||||
processingTime: Date.now() - startTime,
|
||||
services: {
|
||||
techStackSelector: 'available',
|
||||
templateManager: 'available',
|
||||
claudeAI: 'invalid_output'
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
logger.info({
|
||||
message: 'Successfully generated unified recommendation',
|
||||
stackName: claudeResult.data.stack_name,
|
||||
recommendationScore: claudeResult.data.recommendation_score,
|
||||
processingTime: Date.now() - startTime
|
||||
});
|
||||
|
||||
// Store recommendation in database
|
||||
const requestId = uuidv4();
|
||||
const processingTime = Date.now() - startTime;
|
||||
|
||||
// Use a default template ID if none provided (represents "no template" case)
|
||||
const templateIdForStorage = templateId || '00000000-0000-0000-0000-000000000000';
|
||||
|
||||
try {
|
||||
const storageResult = await databaseService.storeRecommendation({
|
||||
requestId,
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies,
|
||||
templateId: templateIdForStorage,
|
||||
stackName: claudeResult.data.stack_name,
|
||||
monthlyCost: claudeResult.data.monthly_cost,
|
||||
setupCost: claudeResult.data.setup_cost,
|
||||
teamSize: claudeResult.data.team_size,
|
||||
developmentTime: claudeResult.data.development_time,
|
||||
satisfaction: claudeResult.data.satisfaction,
|
||||
successRate: claudeResult.data.success_rate,
|
||||
frontend: claudeResult.data.frontend,
|
||||
backend: claudeResult.data.backend,
|
||||
database: claudeResult.data.database,
|
||||
cloud: claudeResult.data.cloud,
|
||||
testing: claudeResult.data.testing,
|
||||
mobile: claudeResult.data.mobile,
|
||||
devops: claudeResult.data.devops,
|
||||
aiMl: claudeResult.data.ai_ml,
|
||||
recommendedTool: claudeResult.data.recommended_tool,
|
||||
recommendationScore: claudeResult.data.recommendation_score,
|
||||
message: claudeResult.data.message,
|
||||
claudeModel: claudeResult.claudeModel,
|
||||
processingTime
|
||||
});
|
||||
|
||||
if (storageResult.success) {
|
||||
logger.info(`Recommendation stored in database with ID: ${storageResult.id}`);
|
||||
} else {
|
||||
logger.warn(`Failed to store recommendation in database: ${storageResult.error}`);
|
||||
}
|
||||
} catch (storageError) {
|
||||
logger.error('Error storing recommendation in database:', storageError);
|
||||
// Don't fail the request if storage fails
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: claudeResult.data,
|
||||
source: 'unified',
|
||||
message: 'Unified recommendation generated successfully',
|
||||
processingTime,
|
||||
services: {
|
||||
techStackSelector: 'available',
|
||||
templateManager: 'available',
|
||||
claudeAI: 'available'
|
||||
},
|
||||
claudeModel: claudeResult.claudeModel,
|
||||
requestId, // Include request ID for tracking
|
||||
templateId: templateId || null // Show original templateId (null if not provided)
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error processing unified recommendation request',
|
||||
error: error.message,
|
||||
stack: error.stack,
|
||||
body: req.body
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Internal server error',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/tech-stack
|
||||
* Get recommendations from tech-stack-selector only
|
||||
*/
|
||||
router.get('/tech-stack', async (req, res) => {
|
||||
try {
|
||||
const { domain, budget, preferredTechnologies } = req.query;
|
||||
|
||||
// Convert string parameters to appropriate types
|
||||
const params = {
|
||||
domain: domain || undefined,
|
||||
budget: budget ? parseFloat(budget) : undefined,
|
||||
preferredTechnologies: preferredTechnologies ? preferredTechnologies.split(',') : undefined
|
||||
};
|
||||
|
||||
// Remove undefined values
|
||||
Object.keys(params).forEach(key => {
|
||||
if (params[key] === undefined) {
|
||||
delete params[key];
|
||||
}
|
||||
});
|
||||
|
||||
const result = await techStackService.getRecommendations(params);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: result.data,
|
||||
source: 'tech-stack-selector',
|
||||
message: 'Tech stack recommendations retrieved successfully'
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching tech stack recommendations',
|
||||
error: error.message,
|
||||
query: req.query
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch tech stack recommendations',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/template/:templateId
|
||||
* Get recommendations from template-manager only
|
||||
*/
|
||||
router.get('/template/:templateId', async (req, res) => {
|
||||
try {
|
||||
const { templateId } = req.params;
|
||||
const { force_refresh } = req.query;
|
||||
|
||||
// Validate UUID format
|
||||
const uuidRegex = /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
|
||||
if (!uuidRegex.test(templateId)) {
|
||||
return res.status(400).json({
|
||||
success: false,
|
||||
error: 'Invalid template ID format',
|
||||
message: 'Template ID must be a valid UUID format',
|
||||
providedId: templateId
|
||||
});
|
||||
}
|
||||
|
||||
const result = await templateService.getAIRecommendations(templateId, {
|
||||
forceRefresh: force_refresh === 'true'
|
||||
});
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: result.data,
|
||||
source: 'template-manager',
|
||||
message: 'Template recommendations retrieved successfully'
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching template recommendations',
|
||||
error: error.message,
|
||||
templateId: req.params.templateId
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch template recommendations',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/endpoints
|
||||
* Get available API endpoints
|
||||
*/
|
||||
router.get('/endpoints', (req, res) => {
|
||||
res.json({
|
||||
success: true,
|
||||
data: {
|
||||
endpoints: [
|
||||
{
|
||||
method: 'POST',
|
||||
path: '/api/recommendations/unified',
|
||||
description: 'Get unified tech stack recommendation combining both services',
|
||||
parameters: {
|
||||
domain: 'string (optional, default: "general")',
|
||||
budget: 'number (optional, default: 5000)',
|
||||
preferredTechnologies: 'array (optional, default: [])',
|
||||
templateId: 'string (optional, UUID format)',
|
||||
includeSimilar: 'boolean (optional, default: false)',
|
||||
includeKeywords: 'boolean (optional, default: false)',
|
||||
forceRefresh: 'boolean (optional, default: false)'
|
||||
}
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/tech-stack',
|
||||
description: 'Get recommendations from tech-stack-selector only',
|
||||
parameters: {
|
||||
domain: 'string (required)',
|
||||
budget: 'number (required)',
|
||||
preferredTechnologies: 'array (optional)'
|
||||
}
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/template/:templateId',
|
||||
description: 'Get recommendations from template-manager only',
|
||||
parameters: {
|
||||
templateId: 'string (required, UUID format)',
|
||||
force_refresh: 'boolean (optional, query parameter)'
|
||||
}
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/stored',
|
||||
description: 'Get stored recommendations from database',
|
||||
parameters: {
|
||||
limit: 'number (optional, default: 20)',
|
||||
domain: 'string (optional, filter by domain)',
|
||||
templateId: 'string (optional, filter by template ID)'
|
||||
}
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/stored/:id',
|
||||
description: 'Get specific stored recommendation by ID',
|
||||
parameters: {
|
||||
id: 'string (required, UUID format)'
|
||||
}
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/stats',
|
||||
description: 'Get recommendation statistics',
|
||||
parameters: 'none'
|
||||
},
|
||||
{
|
||||
method: 'GET',
|
||||
path: '/api/recommendations/schemas',
|
||||
description: 'Get available validation schemas',
|
||||
parameters: 'none'
|
||||
}
|
||||
]
|
||||
},
|
||||
message: 'Available API endpoints'
|
||||
});
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/schemas
|
||||
* Get available validation schemas
|
||||
*/
|
||||
router.get('/schemas', (req, res) => {
|
||||
try {
|
||||
const schemas = schemaValidator.getAvailableSchemas();
|
||||
const schemaDefinitions = {};
|
||||
|
||||
schemas.forEach(schemaName => {
|
||||
schemaDefinitions[schemaName] = schemaValidator.getSchema(schemaName);
|
||||
});
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: {
|
||||
availableSchemas: schemas,
|
||||
schemas: schemaDefinitions
|
||||
},
|
||||
message: 'Available schemas retrieved successfully'
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching schemas',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch schemas',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/stored
|
||||
* Get stored recommendations with optional filtering
|
||||
*/
|
||||
router.get('/stored', async (req, res) => {
|
||||
try {
|
||||
const { domain, templateId, limit = 20 } = req.query;
|
||||
|
||||
let recommendations;
|
||||
if (domain) {
|
||||
recommendations = await databaseService.getRecommendationsByDomain(domain, parseInt(limit));
|
||||
} else if (templateId) {
|
||||
recommendations = await databaseService.getRecommendationsByTemplateId(templateId, parseInt(limit));
|
||||
} else {
|
||||
recommendations = await databaseService.getRecentRecommendations(parseInt(limit));
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: recommendations,
|
||||
count: recommendations.length,
|
||||
filters: { domain, templateId, limit: parseInt(limit) }
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching stored recommendations',
|
||||
error: error.message,
|
||||
query: req.query
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch stored recommendations',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/stored/:id
|
||||
* Get a specific stored recommendation by ID
|
||||
*/
|
||||
router.get('/stored/:id', async (req, res) => {
|
||||
try {
|
||||
const { id } = req.params;
|
||||
const recommendation = await databaseService.getRecommendationById(id);
|
||||
|
||||
if (!recommendation) {
|
||||
return res.status(404).json({
|
||||
success: false,
|
||||
error: 'Recommendation not found',
|
||||
message: `No recommendation found with ID: ${id}`
|
||||
});
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: recommendation
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching recommendation by ID',
|
||||
error: error.message,
|
||||
id: req.params.id
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch recommendation',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
/**
|
||||
* GET /api/recommendations/stats
|
||||
* Get statistics about stored recommendations
|
||||
*/
|
||||
router.get('/stats', async (req, res) => {
|
||||
try {
|
||||
const stats = await databaseService.getRecommendationStats();
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
data: stats
|
||||
});
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Error fetching recommendation stats',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: 'Failed to fetch recommendation statistics',
|
||||
message: error.message
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
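A minimal client-side sketch of calling the unified endpoint defined in this file. The gateway URL is an assumption and the payload values are placeholders; every field is optional, with domain defaulting to 'general' and budget to 5000.

```javascript
const axios = require('axios');

async function requestUnifiedStack() {
  const { data } = await axios.post('http://localhost:8010/api/recommendations/unified', {
    domain: 'ecommerce',
    budget: 8000,
    preferredTechnologies: ['React', 'PostgreSQL'],
    // templateId: '<template UUID>',  // optional; triggers the template-manager branch
    forceRefresh: false
  });

  // data.data holds the Claude-unified stack; requestId is returned for tracking
  console.log(data.data.stack_name, data.data.recommendation_score, data.requestId);
}

requestUnifiedStack().catch(err => console.error(err.response?.data || err.message));
```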
248 services/unison/src/services/claudeService.js (new file)
@@ -0,0 +1,248 @@
|
||||
const axios = require('axios');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
class ClaudeService {
|
||||
constructor() {
|
||||
this.apiKey = process.env.CLAUDE_API_KEY;
|
||||
this.model = process.env.CLAUDE_MODEL || 'claude-3-5-sonnet-20241022';
|
||||
this.maxTokens = parseInt(process.env.CLAUDE_MAX_TOKENS) || 4000;
|
||||
this.timeout = parseInt(process.env.REQUEST_TIMEOUT) || 30000;
|
||||
|
||||
if (!this.apiKey) {
|
||||
logger.warn('Claude API key not configured. Claude integration will be disabled.');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate unified recommendation using Claude AI
|
||||
* @param {Object} techStackRecommendation - Recommendation from tech-stack-selector
|
||||
* @param {Object} templateRecommendation - Recommendation from template-manager
|
||||
* @param {Object} requestParams - Original request parameters
|
||||
* @returns {Promise<Object>} Unified recommendation
|
||||
*/
|
||||
async generateUnifiedRecommendation(techStackRecommendation, templateRecommendation, requestParams) {
|
||||
if (!this.apiKey) {
|
||||
throw new Error('Claude API key not configured');
|
||||
}
|
||||
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Generating unified recommendation using Claude AI',
|
||||
techStackSource: techStackRecommendation.source,
|
||||
templateSource: templateRecommendation.source
|
||||
});
|
||||
|
||||
const prompt = this.buildPrompt(techStackRecommendation, templateRecommendation, requestParams);
|
||||
|
||||
const response = await axios.post(
|
||||
'https://api.anthropic.com/v1/messages',
|
||||
{
|
||||
model: this.model,
|
||||
max_tokens: this.maxTokens,
|
||||
messages: [
|
||||
{
|
||||
role: 'user',
|
||||
content: prompt
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'x-api-key': this.apiKey,
|
||||
'anthropic-version': '2023-06-01',
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
const claudeResponse = response.data.content[0].text;
|
||||
|
||||
// Parse Claude's response
|
||||
const unifiedRecommendation = this.parseClaudeResponse(claudeResponse);
|
||||
|
||||
logger.info({
|
||||
message: 'Successfully generated unified recommendation using Claude AI',
|
||||
stackName: unifiedRecommendation.stack_name,
|
||||
recommendationScore: unifiedRecommendation.recommendation_score
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: unifiedRecommendation,
|
||||
source: 'claude-ai',
|
||||
claudeModel: this.model
|
||||
};
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to generate unified recommendation using Claude AI',
|
||||
error: error.message,
|
||||
techStackSource: techStackRecommendation.source,
|
||||
templateSource: templateRecommendation.source
|
||||
});
|
||||
|
||||
throw new Error(`Claude AI service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Build the prompt for Claude AI
|
||||
* @param {Object} techStackRecommendation - Recommendation from tech-stack-selector
|
||||
* @param {Object} templateRecommendation - Recommendation from template-manager
|
||||
* @param {Object} requestParams - Original request parameters
|
||||
* @returns {string} Formatted prompt
|
||||
*/
|
||||
buildPrompt(techStackRecommendation, templateRecommendation, requestParams) {
|
||||
return `You are an expert tech stack architect. I need you to analyze two different tech stack recommendations and create a single, optimized recommendation that balances cost, domain requirements, and template-feature compatibility.
|
||||
|
||||
## Original Request Parameters:
|
||||
- Domain: ${requestParams.domain || 'Not specified'}
|
||||
- Budget: $${requestParams.budget || 'Not specified'}
|
||||
- Preferred Technologies: ${requestParams.preferredTechnologies ? requestParams.preferredTechnologies.join(', ') : 'Not specified'}
|
||||
- Template ID: ${requestParams.templateId || 'Not specified'}
|
||||
|
||||
## Tech Stack Selector Recommendation:
|
||||
${JSON.stringify(techStackRecommendation.data, null, 2)}
|
||||
|
||||
## Template Manager Recommendation:
|
||||
${JSON.stringify(templateRecommendation.data, null, 2)}
|
||||
|
||||
## Your Task:
|
||||
Analyze both recommendations and create a single, optimized tech stack recommendation that:
|
||||
1. Balances cost-effectiveness with the budget constraint
|
||||
2. Matches the domain requirements
|
||||
3. Incorporates the best features from the template recommendation
|
||||
4. Considers the preferred technologies when possible
|
||||
5. Provides realistic team size, development time, and success metrics
|
||||
|
||||
## Required Output Format:
|
||||
You MUST respond with ONLY a valid JSON object that matches this EXACT schema. Do NOT include any other text or formatting:
|
||||
|
||||
{
|
||||
"stack_name": "string (descriptive name for the tech stack)",
|
||||
"monthly_cost": number (monthly operational cost in USD),
|
||||
"setup_cost": number (one-time setup cost in USD),
|
||||
"team_size": "string (e.g., '1-2', '3-5', '6-10')",
|
||||
"development_time": number (weeks to complete, 1-52),
|
||||
"satisfaction": number (0-100, user satisfaction score),
|
||||
"success_rate": number (0-100, project success rate),
|
||||
"frontend": "string (specific frontend technology like 'React.js', 'Vue.js', 'Angular')",
|
||||
"backend": "string (specific backend technology like 'Node.js', 'Django', 'Spring Boot')",
|
||||
"database": "string (specific database like 'PostgreSQL', 'MongoDB', 'MySQL')",
|
||||
"cloud": "string (specific cloud platform like 'AWS', 'DigitalOcean', 'Azure')",
|
||||
"testing": "string (specific testing framework like 'Jest', 'pytest', 'Cypress')",
|
||||
"mobile": "string (mobile technology like 'React Native', 'Flutter', 'Ionic' or 'None')",
|
||||
"devops": "string (devops tool like 'Docker', 'GitHub Actions', 'Jenkins')",
|
||||
"ai_ml": "string (AI/ML technology like 'TensorFlow', 'scikit-learn', 'PyTorch' or 'None')",
|
||||
"recommended_tool": "string (primary recommended tool like 'Stripe', 'Firebase', 'Vercel')",
|
||||
"recommendation_score": number (0-100, overall recommendation score),
|
||||
"message": "string (brief explanation of the recommendation, max 500 characters)"
|
||||
}
|
||||
|
||||
## Important Notes:
|
||||
- The JSON must be valid and complete
|
||||
- All numeric values should be realistic
|
||||
- The recommendation should be practical and implementable
|
||||
- Consider the budget constraints carefully
|
||||
- Balance between cost and quality
|
||||
- Include reasoning in the message field
|
||||
|
||||
Respond with ONLY the JSON object, no additional text or formatting.`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse Claude's response and validate it
|
||||
* @param {string} claudeResponse - Raw response from Claude
|
||||
* @returns {Object} Parsed and validated recommendation
|
||||
*/
|
||||
parseClaudeResponse(claudeResponse) {
|
||||
try {
|
||||
// Extract JSON from the response (in case there's extra text)
|
||||
const jsonMatch = claudeResponse.match(/\{[\s\S]*\}/);
|
||||
if (!jsonMatch) {
|
||||
throw new Error('No JSON found in Claude response');
|
||||
}
|
||||
|
||||
const parsedResponse = JSON.parse(jsonMatch[0]);
|
||||
|
||||
// Validate required fields
|
||||
const requiredFields = [
|
||||
'stack_name', 'monthly_cost', 'setup_cost', 'team_size', 'development_time',
|
||||
'satisfaction', 'success_rate', 'frontend', 'backend', 'database', 'cloud',
|
||||
'testing', 'mobile', 'devops', 'ai_ml', 'recommended_tool', 'recommendation_score', 'message'
|
||||
];
|
||||
|
||||
const missingFields = requiredFields.filter(field => !(field in parsedResponse));
|
||||
if (missingFields.length > 0) {
|
||||
throw new Error(`Missing required fields: ${missingFields.join(', ')}`);
|
||||
}
|
||||
|
||||
// Validate numeric ranges
|
||||
const numericValidations = {
|
||||
monthly_cost: { min: 0, max: 10000 },
|
||||
setup_cost: { min: 0, max: 50000 },
|
||||
development_time: { min: 1, max: 52 },
|
||||
satisfaction: { min: 0, max: 100 },
|
||||
success_rate: { min: 0, max: 100 },
|
||||
recommendation_score: { min: 0, max: 100 }
|
||||
};
|
||||
|
||||
for (const [field, range] of Object.entries(numericValidations)) {
|
||||
const value = parsedResponse[field];
|
||||
if (typeof value !== 'number' || value < range.min || value > range.max) {
|
||||
throw new Error(`Invalid ${field}: ${value}. Must be a number between ${range.min} and ${range.max}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Validate string fields
|
||||
const stringFields = ['stack_name', 'team_size', 'frontend', 'backend', 'database', 'cloud', 'testing', 'mobile', 'devops', 'ai_ml', 'recommended_tool', 'message'];
|
||||
for (const field of stringFields) {
|
||||
if (typeof parsedResponse[field] !== 'string' || parsedResponse[field].trim().length === 0) {
|
||||
throw new Error(`Invalid ${field}: must be a non-empty string`);
|
||||
}
|
||||
}
|
||||
|
||||
logger.info({
|
||||
message: 'Successfully parsed and validated Claude response',
|
||||
stackName: parsedResponse.stack_name,
|
||||
recommendationScore: parsedResponse.recommendation_score
|
||||
});
|
||||
|
||||
return parsedResponse;
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to parse Claude response',
|
||||
error: error.message,
|
||||
claudeResponse: claudeResponse.substring(0, 500) + '...'
|
||||
});
|
||||
|
||||
throw new Error(`Failed to parse Claude response: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if Claude service is available
|
||||
* @returns {boolean} Service availability
|
||||
*/
|
||||
isAvailable() {
|
||||
return !!this.apiKey;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get service configuration
|
||||
* @returns {Object} Service configuration
|
||||
*/
|
||||
getConfig() {
|
||||
return {
|
||||
available: this.isAvailable(),
|
||||
model: this.model,
|
||||
maxTokens: this.maxTokens,
|
||||
timeout: this.timeout
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = new ClaudeService();
|
||||
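A usage sketch for the singleton exported above, assuming CLAUDE_API_KEY is set. The two inline recommendation objects only mirror the shapes the upstream services hand back; in the real flow they come from techStackService and templateService.

```javascript
const claudeService = require('./claudeService');

async function demo() {
  if (!claudeService.isAvailable()) {
    console.warn('Claude not configured:', claudeService.getConfig());
    return;
  }

  // Mocked upstream results for illustration only
  const result = await claudeService.generateUnifiedRecommendation(
    { source: 'tech-stack-selector', data: { recommendations: [] } },
    { source: 'template-manager-ai', data: { stack_name: 'Default General Purpose Stack' } },
    { domain: 'general', budget: 5000, preferredTechnologies: [] }
  );

  console.log(result.data.stack_name, result.data.recommendation_score);
}

demo().catch(err => console.error(err.message));
```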
271 services/unison/src/services/databaseService.js (new file)
@@ -0,0 +1,271 @@
|
||||
const { Pool } = require('pg');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
class DatabaseService {
|
||||
constructor() {
|
||||
this.pool = new Pool({
|
||||
host: process.env.POSTGRES_HOST || 'postgres',
|
||||
port: process.env.POSTGRES_PORT || 5432,
|
||||
database: process.env.POSTGRES_DB || 'dev_pipeline',
|
||||
user: process.env.POSTGRES_USER || 'pipeline_admin',
|
||||
password: process.env.POSTGRES_PASSWORD || 'secure_pipeline_2024',
|
||||
max: 20,
|
||||
idleTimeoutMillis: 30000,
|
||||
connectionTimeoutMillis: 2000,
|
||||
});
|
||||
|
||||
this.initializeDatabase();
|
||||
}
|
||||
|
||||
async initializeDatabase() {
|
||||
try {
|
||||
// Wait a bit for database to be ready
|
||||
await new Promise(resolve => setTimeout(resolve, 2000));
|
||||
await this.createRecommendationsTable();
|
||||
logger.info('Database service initialized successfully');
|
||||
} catch (error) {
|
||||
logger.error('Failed to initialize database service:', error);
|
||||
// Don't throw error, just log it
|
||||
}
|
||||
}
|
||||
|
||||
async createRecommendationsTable() {
|
||||
const createTableQuery = `
|
||||
CREATE TABLE IF NOT EXISTS claude_recommendations (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
request_id VARCHAR(255) UNIQUE NOT NULL,
|
||||
domain VARCHAR(100) NOT NULL,
|
||||
budget DECIMAL(10,2) NOT NULL,
|
||||
preferred_technologies TEXT[],
|
||||
template_id UUID,
|
||||
stack_name VARCHAR(255) NOT NULL,
|
||||
monthly_cost DECIMAL(10,2) NOT NULL,
|
||||
setup_cost DECIMAL(10,2) NOT NULL,
|
||||
team_size VARCHAR(50) NOT NULL,
|
||||
development_time INTEGER NOT NULL,
|
||||
satisfaction INTEGER NOT NULL CHECK (satisfaction >= 0 AND satisfaction <= 100),
|
||||
success_rate INTEGER NOT NULL CHECK (success_rate >= 0 AND success_rate <= 100),
|
||||
frontend VARCHAR(100) NOT NULL,
|
||||
backend VARCHAR(100) NOT NULL,
|
||||
database VARCHAR(100) NOT NULL,
|
||||
cloud VARCHAR(100) NOT NULL,
|
||||
testing VARCHAR(100) NOT NULL,
|
||||
mobile VARCHAR(100),
|
||||
devops VARCHAR(100) NOT NULL,
|
||||
ai_ml VARCHAR(100),
|
||||
recommended_tool VARCHAR(100) NOT NULL,
|
||||
recommendation_score DECIMAL(5,2) NOT NULL CHECK (recommendation_score >= 0 AND recommendation_score <= 100),
|
||||
message TEXT NOT NULL,
|
||||
claude_model VARCHAR(100) NOT NULL,
|
||||
processing_time INTEGER NOT NULL,
|
||||
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
|
||||
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
`;
|
||||
|
||||
const createIndexQuery = `
|
||||
CREATE INDEX IF NOT EXISTS idx_claude_recommendations_domain ON claude_recommendations(domain);
|
||||
CREATE INDEX IF NOT EXISTS idx_claude_recommendations_budget ON claude_recommendations(budget);
|
||||
CREATE INDEX IF NOT EXISTS idx_claude_recommendations_template_id ON claude_recommendations(template_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_claude_recommendations_created_at ON claude_recommendations(created_at);
|
||||
`;
|
||||
|
||||
try {
|
||||
await this.pool.query(createTableQuery);
|
||||
await this.pool.query(createIndexQuery);
|
||||
logger.info('Claude recommendations table created successfully');
|
||||
} catch (error) {
|
||||
logger.error('Error creating recommendations table:', error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
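  // For manual inspection, the table created above can be queried directly, e.g.:
  //   SELECT stack_name, domain, budget, recommendation_score, created_at
  //   FROM claude_recommendations
  //   ORDER BY created_at DESC
  //   LIMIT 5;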
|
||||
async storeRecommendation(recommendationData) {
|
||||
const {
|
||||
requestId,
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies,
|
||||
templateId,
|
||||
stackName,
|
||||
monthlyCost,
|
||||
setupCost,
|
||||
teamSize,
|
||||
developmentTime,
|
||||
satisfaction,
|
||||
successRate,
|
||||
frontend,
|
||||
backend,
|
||||
database,
|
||||
cloud,
|
||||
testing,
|
||||
mobile,
|
||||
devops,
|
||||
aiMl,
|
||||
recommendedTool,
|
||||
recommendationScore,
|
||||
message,
|
||||
claudeModel,
|
||||
processingTime
|
||||
} = recommendationData;
|
||||
|
||||
const insertQuery = `
|
||||
INSERT INTO claude_recommendations (
|
||||
request_id, domain, budget, preferred_technologies, template_id,
|
||||
stack_name, monthly_cost, setup_cost, team_size, development_time,
|
||||
satisfaction, success_rate, frontend, backend, database, cloud,
|
||||
testing, mobile, devops, ai_ml, recommended_tool, recommendation_score,
|
||||
message, claude_model, processing_time
|
||||
) VALUES (
|
||||
$1, $2, $3, $4, $5, $6, $7, $8, $9, $10,
|
||||
$11, $12, $13, $14, $15, $16, $17, $18, $19, $20,
|
||||
$21, $22, $23, $24, $25
|
||||
)
|
||||
RETURNING id, created_at;
|
||||
`;
|
||||
|
||||
const values = [
|
||||
requestId,
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies || [],
|
||||
templateId,
|
||||
stackName,
|
||||
monthlyCost,
|
||||
setupCost,
|
||||
teamSize,
|
||||
developmentTime,
|
||||
satisfaction,
|
||||
successRate,
|
||||
frontend,
|
||||
backend,
|
||||
database,
|
||||
cloud,
|
||||
testing,
|
||||
mobile || null,
|
||||
devops,
|
||||
aiMl || null,
|
||||
recommendedTool,
|
||||
recommendationScore,
|
||||
message,
|
||||
claudeModel,
|
||||
processingTime
|
||||
];
|
||||
|
||||
try {
|
||||
const result = await this.pool.query(insertQuery, values);
|
||||
logger.info(`Recommendation stored successfully with ID: ${result.rows[0].id}`);
|
||||
return {
|
||||
success: true,
|
||||
id: result.rows[0].id,
|
||||
createdAt: result.rows[0].created_at
|
||||
};
|
||||
} catch (error) {
|
||||
logger.error('Error storing recommendation:', error);
|
||||
return {
|
||||
success: false,
|
||||
error: error.message
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
async getRecommendationById(id) {
|
||||
const query = 'SELECT * FROM claude_recommendations WHERE id = $1';
|
||||
try {
|
||||
const result = await this.pool.query(query, [id]);
|
||||
return result.rows[0] || null;
|
||||
} catch (error) {
|
||||
logger.error('Error fetching recommendation by ID:', error);
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
async getRecommendationsByDomain(domain, limit = 10) {
|
||||
const query = `
|
||||
SELECT * FROM claude_recommendations
|
||||
WHERE domain = $1
|
||||
ORDER BY created_at DESC
|
||||
LIMIT $2
|
||||
`;
|
||||
try {
|
||||
const result = await this.pool.query(query, [domain, limit]);
|
||||
return result.rows;
|
||||
} catch (error) {
|
||||
logger.error('Error fetching recommendations by domain:', error);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
async getRecommendationsByTemplateId(templateId, limit = 10) {
|
||||
const query = `
|
||||
SELECT * FROM claude_recommendations
|
||||
WHERE template_id = $1
|
||||
ORDER BY created_at DESC
|
||||
LIMIT $2
|
||||
`;
|
||||
try {
|
||||
const result = await this.pool.query(query, [templateId, limit]);
|
||||
return result.rows;
|
||||
} catch (error) {
|
||||
logger.error('Error fetching recommendations by template ID:', error);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
async getRecentRecommendations(limit = 20) {
|
||||
const query = `
|
||||
SELECT * FROM claude_recommendations
|
||||
ORDER BY created_at DESC
|
||||
LIMIT $1
|
||||
`;
|
||||
try {
|
||||
const result = await this.pool.query(query, [limit]);
|
||||
return result.rows;
|
||||
} catch (error) {
|
||||
logger.error('Error fetching recent recommendations:', error);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
async getRecommendationStats() {
|
||||
const query = `
|
||||
SELECT
|
||||
COUNT(*) as total_recommendations,
|
||||
COUNT(DISTINCT domain) as unique_domains,
|
||||
COUNT(DISTINCT template_id) as unique_templates,
|
||||
AVG(recommendation_score) as avg_score,
|
||||
AVG(processing_time) as avg_processing_time,
|
||||
MIN(created_at) as first_recommendation,
|
||||
MAX(created_at) as last_recommendation
|
||||
FROM claude_recommendations
|
||||
`;
|
||||
try {
|
||||
const result = await this.pool.query(query);
|
||||
return result.rows[0];
|
||||
} catch (error) {
|
||||
logger.error('Error fetching recommendation stats:', error);
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
async isHealthy() {
|
||||
try {
|
||||
await this.pool.query('SELECT 1');
|
||||
return true;
|
||||
} catch (error) {
|
||||
logger.error('Database health check failed:', error);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
async close() {
|
||||
try {
|
||||
await this.pool.end();
|
||||
logger.info('Database connection pool closed');
|
||||
} catch (error) {
|
||||
logger.error('Error closing database connection:', error);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = DatabaseService;
|
||||
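A minimal sketch of using the class above outside the route layer; connection settings still come from the same POSTGRES_* environment variables, so nothing is hard-coded here.

```javascript
const DatabaseService = require('./databaseService');

async function printStats() {
  const db = new DatabaseService();

  if (!(await db.isHealthy())) {
    console.warn('database unreachable, skipping stats');
  } else {
    const stats = await db.getRecommendationStats();
    console.log(`stored: ${stats.total_recommendations}, avg score: ${stats.avg_score}`);
  }

  await db.close();
}

printStats().catch(err => console.error(err.message));
```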
210 services/unison/src/services/techStackService.js (new file)
@@ -0,0 +1,210 @@
|
||||
const axios = require('axios');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
class TechStackService {
|
||||
constructor() {
|
||||
this.baseURL = process.env.TECH_STACK_SELECTOR_URL || 'http://pipeline_tech_stack_selector:8002';
|
||||
this.timeout = parseInt(process.env.REQUEST_TIMEOUT) || 30000;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get tech stack recommendations from tech-stack-selector service
|
||||
* @param {Object} params - Request parameters
|
||||
* @param {string} params.domain - Domain for recommendations
|
||||
* @param {number} params.budget - Budget constraint
|
||||
* @param {Array<string>} params.preferredTechnologies - Preferred technologies
|
||||
* @returns {Promise<Object>} Recommendations from tech-stack-selector
|
||||
*/
|
||||
async getRecommendations({ domain, budget, preferredTechnologies }) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Fetching recommendations from tech-stack-selector',
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies
|
||||
});
|
||||
|
||||
const requestData = {
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies
|
||||
};
|
||||
|
||||
// Remove undefined values
|
||||
Object.keys(requestData).forEach(key => {
|
||||
if (requestData[key] === undefined) {
|
||||
delete requestData[key];
|
||||
}
|
||||
});
|
||||
|
||||
const response = await axios.post(
|
||||
`${this.baseURL}/recommend/best`,
|
||||
requestData,
|
||||
{
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched recommendations from tech-stack-selector',
|
||||
count: response.data.count,
|
||||
budget: response.data.budget,
|
||||
domain: response.data.domain
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data,
|
||||
source: 'tech-stack-selector'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Tech-stack-selector returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch recommendations from tech-stack-selector',
|
||||
error: error.message,
|
||||
domain,
|
||||
budget,
|
||||
preferredTechnologies
|
||||
});
|
||||
|
||||
throw new Error(`Tech-stack-selector service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get price tiers from tech-stack-selector service
|
||||
* @returns {Promise<Object>} Price tiers data
|
||||
*/
|
||||
async getPriceTiers() {
|
||||
try {
|
||||
logger.info('Fetching price tiers from tech-stack-selector');
|
||||
|
||||
const response = await axios.get(
|
||||
`${this.baseURL}/api/price-tiers`,
|
||||
{
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched price tiers from tech-stack-selector',
|
||||
count: response.data.count
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data,
|
||||
source: 'tech-stack-selector'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Tech-stack-selector returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch price tiers from tech-stack-selector',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
throw new Error(`Tech-stack-selector service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get technologies by tier from tech-stack-selector service
|
||||
* @param {string} tierName - Name of the price tier
|
||||
* @returns {Promise<Object>} Technologies for the tier
|
||||
*/
|
||||
async getTechnologiesByTier(tierName) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Fetching technologies by tier from tech-stack-selector',
|
||||
tierName
|
||||
});
|
||||
|
||||
const response = await axios.get(
|
||||
`${this.baseURL}/api/technologies/${encodeURIComponent(tierName)}`,
|
||||
{
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched technologies by tier from tech-stack-selector',
|
||||
tierName,
|
||||
count: response.data.count
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data,
|
||||
source: 'tech-stack-selector'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Tech-stack-selector returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch technologies by tier from tech-stack-selector',
|
||||
error: error.message,
|
||||
tierName
|
||||
});
|
||||
|
||||
throw new Error(`Tech-stack-selector service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check health of tech-stack-selector service
|
||||
* @returns {Promise<Object>} Health status
|
||||
*/
|
||||
async checkHealth() {
|
||||
try {
|
||||
const response = await axios.get(
|
||||
`${this.baseURL}/health`,
|
||||
{
|
||||
timeout: parseInt(process.env.HEALTH_CHECK_TIMEOUT) || 5000,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-HealthCheck/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
return {
|
||||
status: 'healthy',
|
||||
data: response.data,
|
||||
responseTime: response.headers['x-response-time'] || 'unknown'
|
||||
};
|
||||
|
||||
} catch (error) {
|
||||
logger.warn({
|
||||
message: 'Tech-stack-selector health check failed',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
return {
|
||||
status: 'unhealthy',
|
||||
error: error.message
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = new TechStackService();
|
||||
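An illustrative call against the exported singleton; the domain, budget, and technology values are placeholders, not values from this commit.

```javascript
const techStackService = require('./techStackService');

async function showBudgetOptions() {
  try {
    const tiers = await techStackService.getPriceTiers();
    console.log('price tiers available:', tiers.data.count);

    const recs = await techStackService.getRecommendations({
      domain: 'saas',
      budget: 3000,
      preferredTechnologies: ['Node.js']
    });
    console.log('recommendations from', recs.source);
  } catch (err) {
    // Every method throws on upstream failure, so a single catch covers both calls
    console.error('tech-stack-selector unavailable:', err.message);
  }
}

showBudgetOptions();
```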
307 services/unison/src/services/templateService.js (new file)
@@ -0,0 +1,307 @@
|
||||
const axios = require('axios');
|
||||
const logger = require('../utils/logger');
|
||||
|
||||
class TemplateService {
|
||||
constructor() {
|
||||
this.baseURL = process.env.TEMPLATE_MANAGER_URL || 'http://pipeline_template_manager:8009';
|
||||
this.aiURL = process.env.TEMPLATE_MANAGER_AI_URL || 'http://pipeline_template_manager:8013';
|
||||
this.timeout = parseInt(process.env.REQUEST_TIMEOUT) || 30000;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get template by ID from template-manager service
|
||||
* @param {string} templateId - Template ID
|
||||
* @returns {Promise<Object>} Template data
|
||||
*/
|
||||
async getTemplate(templateId) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Fetching template from template-manager',
|
||||
templateId
|
||||
});
|
||||
|
||||
const response = await axios.get(
|
||||
`${this.baseURL}/api/templates/${templateId}`,
|
||||
{
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched template from template-manager',
|
||||
templateId,
|
||||
templateName: response.data.data?.name || 'Unknown'
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data.data,
|
||||
source: 'template-manager'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Template-manager returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch template from template-manager',
|
||||
error: error.message,
|
||||
templateId
|
||||
});
|
||||
|
||||
throw new Error(`Template-manager service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get AI recommendations for a template
|
||||
* @param {string} templateId - Template ID
|
||||
* @param {Object} options - Request options
|
||||
* @param {boolean} options.forceRefresh - Force refresh recommendations
|
||||
* @returns {Promise<Object>} AI recommendations
|
||||
*/
|
||||
async getAIRecommendations(templateId, options = {}) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Fetching AI recommendations from template-manager',
|
||||
templateId,
|
||||
options
|
||||
});
|
||||
|
||||
const requestData = {
|
||||
template_id: templateId
|
||||
};
|
||||
|
||||
if (options.forceRefresh) {
|
||||
requestData.force_refresh = true;
|
||||
}
|
||||
|
||||
const url = `${this.aiURL}/ai/recommendations`;
|
||||
|
||||
const response = await axios.post(url, requestData, {
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0',
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
});
|
||||
|
||||
// AI service returns data directly (not wrapped in success object)
|
||||
if (response.data && response.data.stack_name) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched AI recommendations from template-manager',
|
||||
templateId,
|
||||
stackName: response.data.stack_name || 'Unknown'
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data,
|
||||
source: 'template-manager-ai'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Template-manager AI returned invalid data: ${JSON.stringify(response.data)}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch AI recommendations from template-manager',
|
||||
error: error.message,
|
||||
templateId,
|
||||
options
|
||||
});
|
||||
|
||||
throw new Error(`Template-manager AI service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Select template with additional options
|
||||
* @param {string} templateId - Template ID
|
||||
* @param {Object} options - Selection options
|
||||
* @param {boolean} options.includeSimilar - Include similar templates
|
||||
* @param {boolean} options.includeKeywords - Include keywords
|
||||
* @returns {Promise<Object>} Template selection data
|
||||
*/
|
||||
async selectTemplate(templateId, options = {}) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Selecting template from template-manager',
|
||||
templateId,
|
||||
options
|
||||
});
|
||||
|
||||
const queryParams = new URLSearchParams();
|
||||
if (options.includeSimilar) {
|
||||
queryParams.append('include_similar', 'true');
|
||||
}
|
||||
if (options.includeKeywords) {
|
||||
queryParams.append('include_keywords', 'true');
|
||||
}
|
||||
|
||||
const url = `${this.baseURL}/api/templates/${templateId}/select${queryParams.toString() ? '?' + queryParams.toString() : ''}`;
|
||||
|
||||
const response = await axios.get(url, {
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
});
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully selected template from template-manager',
|
||||
templateId
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data.data,
|
||||
source: 'template-manager'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Template-manager returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to select template from template-manager',
|
||||
error: error.message,
|
||||
templateId,
|
||||
options
|
||||
});
|
||||
|
||||
throw new Error(`Template-manager service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all templates from template-manager service
|
||||
* @param {Object} options - Query options
|
||||
* @returns {Promise<Object>} Templates list
|
||||
*/
|
||||
async getTemplates(options = {}) {
|
||||
try {
|
||||
logger.info({
|
||||
message: 'Fetching templates from template-manager',
|
||||
options
|
||||
});
|
||||
|
||||
const queryParams = new URLSearchParams();
|
||||
Object.keys(options).forEach(key => {
|
||||
if (options[key] !== undefined) {
|
||||
queryParams.append(key, options[key]);
|
||||
}
|
||||
});
|
||||
|
||||
const url = `${this.baseURL}/api/templates${queryParams.toString() ? '?' + queryParams.toString() : ''}`;
|
||||
|
||||
const response = await axios.get(url, {
|
||||
timeout: this.timeout,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-Service/1.0'
|
||||
}
|
||||
});
|
||||
|
||||
if (response.data.success) {
|
||||
logger.info({
|
||||
message: 'Successfully fetched templates from template-manager',
|
||||
count: response.data.data?.length || 0
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
data: response.data.data,
|
||||
source: 'template-manager'
|
||||
};
|
||||
} else {
|
||||
throw new Error(`Template-manager returned error: ${response.data.error || 'Unknown error'}`);
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
logger.error({
|
||||
message: 'Failed to fetch templates from template-manager',
|
||||
error: error.message,
|
||||
options
|
||||
});
|
||||
|
||||
throw new Error(`Template-manager service error: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check health of template-manager service
|
||||
* @returns {Promise<Object>} Health status
|
||||
*/
|
||||
async checkHealth() {
|
||||
try {
|
||||
const response = await axios.get(
|
||||
`${this.baseURL}/health`,
|
||||
{
|
||||
timeout: parseInt(process.env.HEALTH_CHECK_TIMEOUT) || 5000,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-HealthCheck/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
return {
|
||||
status: 'healthy',
|
||||
data: response.data,
|
||||
responseTime: response.headers['x-response-time'] || 'unknown'
|
||||
};
|
||||
|
||||
} catch (error) {
|
||||
logger.warn({
|
||||
message: 'Template-manager health check failed',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
return {
|
||||
status: 'unhealthy',
|
||||
error: error.message
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check health of template-manager AI service
|
||||
* @returns {Promise<Object>} Health status
|
||||
*/
|
||||
async checkAIHealth() {
|
||||
try {
|
||||
const response = await axios.get(
|
||||
`${this.aiURL}/health`,
|
||||
{
|
||||
timeout: parseInt(process.env.HEALTH_CHECK_TIMEOUT) || 5000,
|
||||
headers: {
|
||||
'User-Agent': 'Unison-HealthCheck/1.0'
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
return {
|
||||
status: 'healthy',
|
||||
data: response.data,
|
||||
responseTime: response.headers['x-response-time'] || 'unknown'
|
||||
};
|
||||
|
||||
} catch (error) {
|
||||
logger.warn({
|
||||
message: 'Template-manager AI health check failed',
|
||||
error: error.message
|
||||
});
|
||||
|
||||
return {
|
||||
status: 'unhealthy',
|
||||
error: error.message
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = new TemplateService();
|
||||
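A hedged usage sketch for the service above; the template UUID is a placeholder, not a record that exists in the database.

```javascript
const templateService = require('./templateService');

async function previewTemplate(templateId) {
  const template = await templateService.getTemplate(templateId);
  console.log('template:', template.data?.name);

  const ai = await templateService.getAIRecommendations(templateId, { forceRefresh: true });
  console.log('suggested stack:', ai.data.stack_name);
}

// Placeholder UUID for illustration only
previewTemplate('11111111-1111-1111-1111-111111111111')
  .catch(err => console.error(err.message));
```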
63 services/unison/src/utils/logger.js (new file)
@@ -0,0 +1,63 @@
|
||||
const winston = require('winston');
|
||||
const path = require('path');
|
||||
|
||||
// Create logs directory if it doesn't exist
|
||||
const fs = require('fs');
|
||||
const logDir = path.join(__dirname, '../../logs');
|
||||
if (!fs.existsSync(logDir)) {
|
||||
fs.mkdirSync(logDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Define log format
|
||||
const logFormat = winston.format.combine(
|
||||
winston.format.timestamp({
|
||||
format: 'YYYY-MM-DD HH:mm:ss'
|
||||
}),
|
||||
winston.format.errors({ stack: true }),
|
||||
winston.format.json(),
|
||||
winston.format.prettyPrint()
|
||||
);
|
||||
|
||||
// Create logger instance
|
||||
const logger = winston.createLogger({
|
||||
level: process.env.LOG_LEVEL || 'info',
|
||||
format: logFormat,
|
||||
defaultMeta: { service: 'unison' },
|
||||
transports: [
|
||||
// Write only 'error'-level logs (the most severe) to error.log
|
||||
new winston.transports.File({
|
||||
filename: path.join(logDir, 'error.log'),
|
||||
level: 'error',
|
||||
maxsize: 5242880, // 5MB
|
||||
maxFiles: 5,
|
||||
}),
|
||||
// Write everything the logger emits (default level 'info' and more severe) to combined.log
|
||||
new winston.transports.File({
|
||||
filename: path.join(logDir, 'combined.log'),
|
||||
maxsize: 5242880, // 5MB
|
||||
maxFiles: 5,
|
||||
}),
|
||||
],
|
||||
});
|
||||
|
||||
// If we're not in production, log to the console as well
|
||||
if (process.env.NODE_ENV !== 'production') {
|
||||
logger.add(new winston.transports.Console({
|
||||
format: winston.format.combine(
|
||||
winston.format.colorize(),
|
||||
winston.format.simple(),
|
||||
winston.format.printf(({ timestamp, level, message, ...meta }) => {
|
||||
return `${timestamp} [${level}]: ${message} ${Object.keys(meta).length ? JSON.stringify(meta, null, 2) : ''}`;
|
||||
})
|
||||
)
|
||||
}));
|
||||
}
|
||||
|
||||
// Create a stream object for Morgan HTTP logging
|
||||
logger.stream = {
|
||||
write: (message) => {
|
||||
logger.info(message.trim());
|
||||
}
|
||||
};
|
||||
|
||||
module.exports = logger;
|
||||
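A sketch of wiring the logger into an Express app through the Morgan stream helper it exposes; the morgan dependency, the ping route, and the require path (relative to src/) are assumptions for illustration.

```javascript
const express = require('express');
const morgan = require('morgan');
const logger = require('./utils/logger'); // path relative to src/

const app = express();

// Pipe HTTP access logs through the winston stream so they land in combined.log
app.use(morgan('combined', { stream: logger.stream }));

app.get('/ping', (req, res) => {
  logger.info({ message: 'ping received', path: req.path });
  res.json({ ok: true });
});

app.listen(process.env.PORT || 8010, () => logger.info('unison listening'));
```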
308 services/unison/src/utils/schemaValidator.js (new file)
@@ -0,0 +1,308 @@
|
||||
const Ajv = require('ajv');
const addFormats = require('ajv-formats');
const logger = require('./logger');

class SchemaValidator {
  constructor() {
    this.ajv = new Ajv({
      allErrors: true,
      verbose: true,
      strict: false
    });
    addFormats(this.ajv);

    // Define schemas
    this.schemas = {
      unifiedRecommendation: this.getUnifiedRecommendationSchema(),
      techStackRequest: this.getTechStackRequestSchema(),
      templateRequest: this.getTemplateRequestSchema()
    };

    // Compile schemas
    this.compiledSchemas = {};
    for (const [name, schema] of Object.entries(this.schemas)) {
      try {
        this.compiledSchemas[name] = this.ajv.compile(schema);
        logger.info(`Schema '${name}' compiled successfully`);
      } catch (error) {
        logger.error(`Failed to compile schema '${name}': ${error.message}`);
      }
    }
  }

  /**
   * Get the unified recommendation schema
   * @returns {Object} JSON schema for unified recommendations
   */
  getUnifiedRecommendationSchema() {
    return {
      type: 'object',
      required: [
        'stack_name', 'monthly_cost', 'setup_cost', 'team_size', 'development_time',
        'satisfaction', 'success_rate', 'frontend', 'backend', 'database', 'cloud',
        'testing', 'devops', 'recommended_tool', 'recommendation_score', 'message'
      ],
      properties: {
        stack_name: { type: 'string', minLength: 1, maxLength: 100, description: 'Descriptive name for the tech stack' },
        monthly_cost: { type: 'number', minimum: 0, maximum: 10000, description: 'Monthly operational cost in USD' },
        setup_cost: { type: 'number', minimum: 0, maximum: 50000, description: 'One-time setup cost in USD' },
        team_size: { type: 'string', pattern: '^[0-9]+-[0-9]+$', description: 'Team size range (e.g., "1-2", "3-5")' },
        development_time: { type: 'number', minimum: 1, maximum: 52, description: 'Development time in weeks' },
        satisfaction: { type: 'number', minimum: 0, maximum: 100, description: 'User satisfaction score (0-100)' },
        success_rate: { type: 'number', minimum: 0, maximum: 100, description: 'Project success rate (0-100)' },
        frontend: { type: 'string', minLength: 1, maxLength: 50, description: 'Frontend technology' },
        backend: { type: 'string', minLength: 1, maxLength: 50, description: 'Backend technology' },
        database: { type: 'string', minLength: 1, maxLength: 50, description: 'Database technology' },
        cloud: { type: 'string', minLength: 1, maxLength: 50, description: 'Cloud platform' },
        testing: { type: 'string', minLength: 1, maxLength: 50, description: 'Testing framework' },
        mobile: { type: 'string', minLength: 0, maxLength: 50, description: 'Mobile technology' },
        devops: { type: 'string', minLength: 1, maxLength: 50, description: 'DevOps tool' },
        ai_ml: { type: 'string', minLength: 0, maxLength: 50, description: 'AI/ML technology' },
        recommended_tool: { type: 'string', minLength: 1, maxLength: 50, description: 'Primary recommended tool' },
        recommendation_score: { type: 'number', minimum: 0, maximum: 100, description: 'Overall recommendation score (0-100)' },
        message: { type: 'string', minLength: 1, maxLength: 500, description: 'Brief explanation of the recommendation' }
      },
      additionalProperties: false
    };
  }

  /**
   * Get the tech stack request schema
   * @returns {Object} JSON schema for tech stack requests
   */
  getTechStackRequestSchema() {
    return {
      type: 'object',
      properties: {
        domain: { type: 'string', minLength: 1, maxLength: 100, description: 'Domain for recommendations' },
        budget: { type: 'number', minimum: 0, maximum: 100000, description: 'Budget constraint in USD' },
        preferredTechnologies: {
          type: 'array',
          items: { type: 'string', minLength: 1, maxLength: 50 },
          maxItems: 10,
          description: 'Preferred technologies'
        }
      },
      additionalProperties: false
    };
  }

  /**
   * Get the template request schema
   * @returns {Object} JSON schema for template requests
   */
  getTemplateRequestSchema() {
    return {
      type: 'object',
      properties: {
        templateId: { type: 'string', format: 'uuid', description: 'Template ID' },
        includeSimilar: { type: 'boolean', description: 'Include similar templates' },
        includeKeywords: { type: 'boolean', description: 'Include keywords' },
        forceRefresh: { type: 'boolean', description: 'Force refresh recommendations' }
      },
      additionalProperties: false
    };
  }

  /**
   * Validate data against a schema
   * @param {string} schemaName - Name of the schema
   * @param {Object} data - Data to validate
   * @returns {Object} Validation result
   */
  validate(schemaName, data) {
    if (!this.compiledSchemas[schemaName]) {
      return {
        valid: false,
        errors: [`Schema '${schemaName}' not found`]
      };
    }

    const valid = this.compiledSchemas[schemaName](data);

    if (valid) {
      return {
        valid: true,
        errors: []
      };
    } else {
      const errors = this.compiledSchemas[schemaName].errors.map(error => {
        const path = error.instancePath || 'root';
        return `${path}: ${error.message}`;
      });

      logger.warn({
        message: `Schema validation failed for '${schemaName}'`,
        errors,
        data: JSON.stringify(data, null, 2)
      });

      return {
        valid: false,
        errors
      };
    }
  }

  /**
   * Validate unified recommendation
   * @param {Object} recommendation - Recommendation to validate
   * @returns {Object} Validation result
   */
  validateUnifiedRecommendation(recommendation) {
    return this.validate('unifiedRecommendation', recommendation);
  }

  /**
   * Validate tech stack request
   * @param {Object} request - Request to validate
   * @returns {Object} Validation result
   */
  validateTechStackRequest(request) {
    return this.validate('techStackRequest', request);
  }

  /**
   * Validate template request
   * @param {Object} request - Request to validate
   * @returns {Object} Validation result
   */
  validateTemplateRequest(request) {
    return this.validate('templateRequest', request);
  }

  /**
   * Get all available schemas
   * @returns {Array<string>} List of schema names
   */
  getAvailableSchemas() {
    return Object.keys(this.schemas);
  }

  /**
   * Get schema definition
   * @param {string} schemaName - Name of the schema
   * @returns {Object|null} Schema definition
   */
  getSchema(schemaName) {
    return this.schemas[schemaName] || null;
  }
}

module.exports = new SchemaValidator();
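Because the module exports a singleton, every caller shares the same compiled schemas and gets back the `{ valid, errors }` contract built in `validate()`. A minimal usage sketch under that assumption (the calling code shown here is not part of this commit):

```js
// Hypothetical caller validating an incoming tech stack request.
const schemaValidator = require('./utils/schemaValidator');

const request = {
  domain: 'healthcare',
  budget: 10000,
  preferredTechnologies: ['React', 'Node.js']
};

const result = schemaValidator.validateTechStackRequest(request);
if (!result.valid) {
  // errors is an array of "<instancePath>: <message>" strings from validate()
  console.error('Invalid tech stack request:', result.errors);
} else {
  console.log('Request accepted. Schemas loaded:', schemaValidator.getAvailableSchemas());
}
```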
212
services/unison/start.sh
Normal file
@ -0,0 +1,212 @@
#!/bin/bash

# Unison Service Startup Script
# This script handles the startup of the Unison service with proper error handling and logging

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging function
log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] ✓${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] ⚠${NC} $1"
}

log_error() {
    echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ✗${NC} $1"
}

# Configuration
SERVICE_NAME="Unison"
SERVICE_PORT=${PORT:-8010}
SERVICE_HOST=${HOST:-0.0.0.0}
NODE_ENV=${NODE_ENV:-development}
LOG_LEVEL=${LOG_LEVEL:-info}

# External service URLs (set by docker-compose.yml)
TECH_STACK_SELECTOR_URL=${TECH_STACK_SELECTOR_URL:-http://pipeline_tech_stack_selector:8002}
TEMPLATE_MANAGER_URL=${TEMPLATE_MANAGER_URL:-http://pipeline_template_manager:8009}
TEMPLATE_MANAGER_AI_URL=${TEMPLATE_MANAGER_AI_URL:-http://pipeline_template_manager:8013}

# Health check URLs (set by docker-compose.yml)
TECH_STACK_SELECTOR_HEALTH_URL=${TECH_STACK_SELECTOR_HEALTH_URL:-http://pipeline_tech_stack_selector:8002/health}
TEMPLATE_MANAGER_HEALTH_URL=${TEMPLATE_MANAGER_HEALTH_URL:-http://pipeline_template_manager:8009/health}

# Timeouts
REQUEST_TIMEOUT=${REQUEST_TIMEOUT:-30000}
HEALTH_CHECK_TIMEOUT=${HEALTH_CHECK_TIMEOUT:-5000}

# Create logs directory
mkdir -p logs

# Load environment variables from config.env if it exists
if [ -f "config.env" ]; then
    echo "Loading environment variables from config.env..."
    export $(cat config.env | grep -v '^#' | xargs)
fi

# Function to check if a service is healthy
check_service_health() {
    local service_name=$1
    local health_url=$2
    local timeout=${3:-5000}

    log "Checking health of $service_name at $health_url..."

    if curl -f -s --max-time $((timeout / 1000)) "$health_url" > /dev/null 2>&1; then
        log_success "$service_name is healthy"
        return 0
    else
        log_warning "$service_name is not responding"
        return 1
    fi
}

# Function to wait for external services
wait_for_services() {
    log "Waiting for external services to be available..."

    local max_attempts=30
    local attempt=1

    while [ $attempt -le $max_attempts ]; do
        log "Attempt $attempt/$max_attempts: Checking external services..."

        local tech_stack_healthy=false
        local template_manager_healthy=false

        if check_service_health "Tech Stack Selector" "$TECH_STACK_SELECTOR_HEALTH_URL" "$HEALTH_CHECK_TIMEOUT"; then
            tech_stack_healthy=true
        fi

        if check_service_health "Template Manager" "$TEMPLATE_MANAGER_HEALTH_URL" "$HEALTH_CHECK_TIMEOUT"; then
            template_manager_healthy=true
        fi

        if [ "$tech_stack_healthy" = true ] && [ "$template_manager_healthy" = true ]; then
            log_success "All external services are healthy"
            return 0
        fi

        log_warning "Some services are not ready yet. Waiting 10 seconds..."
        sleep 10
        attempt=$((attempt + 1))
    done

    log_error "Timeout waiting for external services after $max_attempts attempts"
    log_warning "Starting service anyway - it will handle service unavailability gracefully"
    return 1
}

# Function to validate environment
validate_environment() {
    log "Validating environment configuration..."

    # Check Node.js version
    if ! command -v node &> /dev/null; then
        log_error "Node.js is not installed"
        exit 1
    fi

    local node_version=$(node --version)
    log_success "Node.js version: $node_version"

    # Check if package.json exists
    if [ ! -f "package.json" ]; then
        log_error "package.json not found"
        exit 1
    fi

    # Check if node_modules exists
    if [ ! -d "node_modules" ]; then
        log_warning "node_modules not found. Installing dependencies..."
        npm install
    fi

    # Check if source directory exists
    if [ ! -d "src" ]; then
        log_error "Source directory 'src' not found"
        exit 1
    fi

    # Check if main app file exists
    if [ ! -f "src/app.js" ]; then
        log_error "Main application file 'src/app.js' not found"
        exit 1
    fi

    log_success "Environment validation passed"
}

# Function to start the service
start_service() {
    log "Starting $SERVICE_NAME service..."

    # Set environment variables
    export NODE_ENV
    export PORT=$SERVICE_PORT
    export HOST=$SERVICE_HOST
    export LOG_LEVEL
    export TECH_STACK_SELECTOR_URL
    export TEMPLATE_MANAGER_URL
    export TEMPLATE_MANAGER_AI_URL
    export TECH_STACK_SELECTOR_HEALTH_URL
    export TEMPLATE_MANAGER_HEALTH_URL
    export REQUEST_TIMEOUT
    export HEALTH_CHECK_TIMEOUT

    # Log configuration
    log "Configuration:"
    log "  Service: $SERVICE_NAME"
    log "  Port: $SERVICE_PORT"
    log "  Host: $SERVICE_HOST"
    log "  Environment: $NODE_ENV"
    log "  Log Level: $LOG_LEVEL"
    log "  Tech Stack Selector: $TECH_STACK_SELECTOR_URL"
    log "  Template Manager: $TEMPLATE_MANAGER_URL"
    log "  Template Manager AI: $TEMPLATE_MANAGER_AI_URL"

    # Start the service
    log "Starting Node.js application..."
    exec node src/app.js
}

# Function to handle graceful shutdown
cleanup() {
    log "Received shutdown signal. Cleaning up..."
    log_success "$SERVICE_NAME service stopped gracefully"
    exit 0
}

# Set up signal handlers
trap cleanup SIGTERM SIGINT

# Main execution
main() {
    log "Starting $SERVICE_NAME service initialization..."

    # Validate environment
    validate_environment

    # Wait for external services (non-blocking)
    wait_for_services || true

    # Start the service
    start_service
}

# Run main function
main "$@"
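The script checks the dependent services but deliberately starts anyway when they are down, so the service itself has to tolerate unavailable upstreams at runtime. A sketch of the same "check but don't block" idea expressed in Node, for use inside the service; `axios`, the function name, and its placement are assumptions for illustration, only the environment variable names come from the script above:

```js
// Hypothetical runtime probe mirroring check_service_health in start.sh.
const axios = require('axios');

async function externalServicesHealthy() {
  const targets = {
    techStackSelector: process.env.TECH_STACK_SELECTOR_HEALTH_URL,
    templateManager: process.env.TEMPLATE_MANAGER_HEALTH_URL
  };
  const timeout = Number(process.env.HEALTH_CHECK_TIMEOUT || 5000);

  const results = {};
  for (const [name, url] of Object.entries(targets)) {
    try {
      await axios.get(url, { timeout });
      results[name] = 'healthy';
    } catch (error) {
      // Mirror the script: record the failure and keep going instead of aborting.
      results[name] = 'unhealthy';
    }
  }
  return results;
}
```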
647
services/unison/unison_api.json
Normal file
@ -0,0 +1,647 @@
{
|
||||
"info": {
|
||||
"name": "Unison - Unified Tech Stack Recommendation Service",
|
||||
"_postman_id": "unison-api-complete-2025",
|
||||
"description": "Complete API collection for Unison service - unified tech stack and template recommendations",
|
||||
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
|
||||
},
|
||||
"variable": [
|
||||
{
|
||||
"key": "baseUrl",
|
||||
"value": "http://localhost:8010",
|
||||
"type": "string",
|
||||
"description": "Base URL for Unison service"
|
||||
},
|
||||
{
|
||||
"key": "templateId",
|
||||
"value": "123e4567-e89b-12d3-a456-426614174000",
|
||||
"type": "string",
|
||||
"description": "Sample template ID for testing"
|
||||
},
|
||||
{
|
||||
"key": "recommendationId",
|
||||
"value": "",
|
||||
"type": "string",
|
||||
"description": "Store recommendation ID from unified request"
|
||||
}
|
||||
],
|
||||
"item": [
|
||||
{
|
||||
"name": "Service Health & Info",
|
||||
"item": [
|
||||
{
|
||||
"name": "Root Endpoint - Service Info",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/",
|
||||
"host": ["{{baseUrl}}"]
|
||||
}
|
||||
},
|
||||
"event": [
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"exec": [
|
||||
"pm.test(\"Status code is 200\", function () {",
|
||||
" pm.response.to.have.status(200);",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Response has service info\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" pm.expect(jsonData).to.have.property('message');",
|
||||
" pm.expect(jsonData).to.have.property('version');",
|
||||
" pm.expect(jsonData).to.have.property('status');",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Health Check",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/health",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["health"]
|
||||
}
|
||||
},
|
||||
"event": [
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"exec": [
|
||||
"pm.test(\"Health check responds\", function () {",
|
||||
" pm.response.to.have.status.oneOf([200, 503]);",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Has health status\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" pm.expect(jsonData).to.have.property('status');",
|
||||
" pm.expect(jsonData).to.have.property('service', 'unison');",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Detailed Health Check",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/health/detailed",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["health", "detailed"]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Unified Recommendations",
|
||||
"item": [
|
||||
{
|
||||
"name": "Unified - Domain Only (Basic)",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"domain\": \"healthcare\",\n \"budget\": 10000\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
},
|
||||
"event": [
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"exec": [
|
||||
"pm.test(\"Status code is 200\", function () {",
|
||||
" pm.response.to.have.status(200);",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Response has recommendation data\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" pm.expect(jsonData).to.have.property('success', true);",
|
||||
" pm.expect(jsonData).to.have.property('data');",
|
||||
" pm.expect(jsonData.data).to.have.property('stack_name');",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Save request ID\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" if (jsonData.requestId) {",
|
||||
" pm.collectionVariables.set('recommendationId', jsonData.requestId);",
|
||||
" }",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Unified - With Template ID",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"domain\": \"ecommerce\",\n \"budget\": 15000,\n \"templateId\": \"{{templateId}}\",\n \"preferredTechnologies\": [\"React\", \"Node.js\"],\n \"includeSimilar\": true,\n \"includeKeywords\": true,\n \"forceRefresh\": false\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Unified - Full Parameters",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"domain\": \"fintech\",\n \"budget\": 25000,\n \"templateId\": \"{{templateId}}\",\n \"preferredTechnologies\": [\"React\", \"Python\", \"PostgreSQL\"],\n \"includeSimilar\": true,\n \"includeKeywords\": true,\n \"forceRefresh\": true\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Unified - Minimal Request",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Individual Service Recommendations",
|
||||
"item": [
|
||||
{
|
||||
"name": "Tech Stack Only - Basic",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/tech-stack?domain=healthcare&budget=10000",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "tech-stack"],
|
||||
"query": [
|
||||
{
|
||||
"key": "domain",
|
||||
"value": "healthcare"
|
||||
},
|
||||
{
|
||||
"key": "budget",
|
||||
"value": "10000"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Tech Stack Only - With Preferences",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/tech-stack?domain=ecommerce&budget=15000&preferredTechnologies=React,Node.js,PostgreSQL",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "tech-stack"],
|
||||
"query": [
|
||||
{
|
||||
"key": "domain",
|
||||
"value": "ecommerce"
|
||||
},
|
||||
{
|
||||
"key": "budget",
|
||||
"value": "15000"
|
||||
},
|
||||
{
|
||||
"key": "preferredTechnologies",
|
||||
"value": "React,Node.js,PostgreSQL"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Template Only - Basic",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/template/{{templateId}}",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "template", "{{templateId}}"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Template Only - Force Refresh",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/template/{{templateId}}?force_refresh=true",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "template", "{{templateId}}"],
|
||||
"query": [
|
||||
{
|
||||
"key": "force_refresh",
|
||||
"value": "true"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Stored Recommendations",
|
||||
"item": [
|
||||
{
|
||||
"name": "Get Recent Recommendations",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/stored?limit=10",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "stored"],
|
||||
"query": [
|
||||
{
|
||||
"key": "limit",
|
||||
"value": "10"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Get Recommendations by Domain",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/stored?domain=healthcare&limit=5",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "stored"],
|
||||
"query": [
|
||||
{
|
||||
"key": "domain",
|
||||
"value": "healthcare"
|
||||
},
|
||||
{
|
||||
"key": "limit",
|
||||
"value": "5"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Get Recommendations by Template ID",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/stored?templateId={{templateId}}&limit=5",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "stored"],
|
||||
"query": [
|
||||
{
|
||||
"key": "templateId",
|
||||
"value": "{{templateId}}"
|
||||
},
|
||||
{
|
||||
"key": "limit",
|
||||
"value": "5"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Get Specific Recommendation by ID",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/stored/{{recommendationId}}",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "stored", "{{recommendationId}}"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Get Recommendation Statistics",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/stats",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "stats"]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Schemas & Validation",
|
||||
"item": [
|
||||
{
|
||||
"name": "Get Available Schemas",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/schemas",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "schemas"]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Error Testing",
|
||||
"item": [
|
||||
{
|
||||
"name": "Invalid Template ID",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/template/invalid-uuid",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "template", "invalid-uuid"]
|
||||
}
|
||||
},
|
||||
"event": [
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"exec": [
|
||||
"pm.test(\"Should return error for invalid UUID\", function () {",
|
||||
" pm.response.to.have.status.oneOf([400, 500]);",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Error response has success false\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" pm.expect(jsonData).to.have.property('success', false);",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Invalid Unified Request",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"budget\": \"invalid-budget\",\n \"preferredTechnologies\": \"not-an-array\"\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "404 Test - Invalid Route",
|
||||
"request": {
|
||||
"method": "GET",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/nonexistent-endpoint",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "nonexistent-endpoint"]
|
||||
}
|
||||
},
|
||||
"event": [
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"exec": [
|
||||
"pm.test(\"Status code is 404\", function () {",
|
||||
" pm.response.to.have.status(404);",
|
||||
"});",
|
||||
"",
|
||||
"pm.test(\"Has error message\", function () {",
|
||||
" const jsonData = pm.response.json();",
|
||||
" pm.expect(jsonData).to.have.property('error');",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "Load Testing Scenarios",
|
||||
"item": [
|
||||
{
|
||||
"name": "Multiple Domains Test",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"domain\": \"{{$randomArrayElement(['healthcare', 'ecommerce', 'fintech', 'education', 'gaming'])}}\",\n \"budget\": {{$randomInt}}\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Concurrent Request Simulation",
|
||||
"request": {
|
||||
"method": "POST",
|
||||
"header": [
|
||||
{
|
||||
"key": "Content-Type",
|
||||
"value": "application/json"
|
||||
}
|
||||
],
|
||||
"body": {
|
||||
"mode": "raw",
|
||||
"raw": "{\n \"domain\": \"stress-test\",\n \"budget\": 5000,\n \"preferredTechnologies\": [\"React\", \"Node.js\"]\n}"
|
||||
},
|
||||
"url": {
|
||||
"raw": "{{baseUrl}}/api/recommendations/unified",
|
||||
"host": ["{{baseUrl}}"],
|
||||
"path": ["api", "recommendations", "unified"]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"event": [
|
||||
{
|
||||
"listen": "prerequest",
|
||||
"script": {
|
||||
"type": "text/javascript",
|
||||
"exec": [
|
||||
"// Log request details",
|
||||
"console.log('Making request to:', pm.request.url.toString());"
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"listen": "test",
|
||||
"script": {
|
||||
"type": "text/javascript",
|
||||
"exec": [
|
||||
"// Global test - log response time",
|
||||
"const responseTime = pm.response.responseTime;",
|
||||
"console.log('Response time:', responseTime + 'ms');",
|
||||
"",
|
||||
"// Global test - check for valid JSON",
|
||||
"pm.test('Response is valid JSON', function () {",
|
||||
" pm.response.to.be.json;",
|
||||
"});"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
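The collection can be exercised outside the Postman UI as well. A minimal sketch for running it in CI, assuming the `newman` package is installed (`npm install newman`); the collection's own `baseUrl` variable (`http://localhost:8010`) is used unchanged:

```js
// Hypothetical CI runner for the collection above.
const newman = require('newman');

newman.run(
  {
    collection: require('./unison_api.json'),
    reporters: 'cli'
  },
  (err, summary) => {
    if (err) {
      throw err;
    }
    // Exit non-zero if any request assertion failed.
    process.exit(summary.run.failures.length > 0 ? 1 : 0);
  }
);
```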
services/user-auth/src/migrations/001_user_auth_schema.sql
@ -1,17 +1,12 @@
 -- User Authentication Database Schema
 -- JWT-based authentication with user preferences for template features
 
--- Drop tables if they exist (for development)
-DROP TABLE IF EXISTS user_feature_preferences CASCADE;
-DROP TABLE IF EXISTS user_sessions CASCADE;
-DROP TABLE IF EXISTS refresh_tokens CASCADE;
-DROP TABLE IF EXISTS users CASCADE;
-DROP TABLE IF EXISTS user_projects CASCADE;
+-- Create tables only if they don't exist (production-safe)
 
 -- Users table - Core user accounts
-CREATE TABLE users (
+CREATE TABLE IF NOT EXISTS users (
     id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
     username VARCHAR(50) NOT NULL UNIQUE,
     email VARCHAR(255) NOT NULL UNIQUE,
@ -27,7 +22,7 @@ CREATE TABLE users (
 );
 
 -- Refresh tokens table
-CREATE TABLE refresh_tokens (
+CREATE TABLE IF NOT EXISTS refresh_tokens (
     id UUID PRIMARY KEY,
     user_id UUID REFERENCES users(id) ON DELETE CASCADE,
     token_hash VARCHAR(255) NOT NULL,
@ -38,7 +33,7 @@ CREATE TABLE refresh_tokens (
 );
 
 -- User sessions table
-CREATE TABLE user_sessions (
+CREATE TABLE IF NOT EXISTS user_sessions (
     id UUID PRIMARY KEY,
     user_id UUID REFERENCES users(id) ON DELETE CASCADE,
     session_token VARCHAR(255) UNIQUE,
@ -51,7 +46,7 @@ CREATE TABLE user_sessions (
     expires_at TIMESTAMP DEFAULT NOW() + INTERVAL '30 days'
 );
 -- User feature preferences table
-CREATE TABLE user_feature_preferences (
+CREATE TABLE IF NOT EXISTS user_feature_preferences (
     id UUID PRIMARY KEY,
     user_id UUID REFERENCES users(id) ON DELETE CASCADE,
     template_type VARCHAR(100) NOT NULL,
@ -63,7 +58,7 @@ CREATE TABLE user_feature_preferences (
     UNIQUE(user_id, template_type, feature_id, preference_type)
 );
 -- User projects table
-CREATE TABLE user_projects (
+CREATE TABLE IF NOT EXISTS user_projects (
     id UUID PRIMARY KEY,
     user_id UUID REFERENCES users(id) ON DELETE CASCADE,
     project_name VARCHAR(200) NOT NULL,