Initial commit

This commit is contained in:
laxman 2025-11-03 13:22:29 +05:30
commit 0acd66df38
40 changed files with 9013 additions and 0 deletions

564
.cursor/rules/rule1.mdc Normal file

@@ -0,0 +1,564 @@
---
alwaysApply: true
---
# Property Image Tagging REST API - Cursor Rules
## Project Context
This is a production-ready REST API for automatic property image tagging using Claude AI (Anthropic). Built with Node.js, Express, MySQL, and Clean Architecture principles.
## Technology Stack
- Node.js 18+
- Express.js 4.x
- MySQL 8.0+ with mysql2/promise
- Anthropic Claude API (claude-sonnet-4-20250514)
- Sharp for image processing
- Winston for logging
- Joi for validation
- Multer for file uploads
## Architecture: Clean Architecture (Strict)
### Layer Separation (NEVER violate)
1. **Domain Layer** (`src/domain/`)
   - Pure business entities and interfaces
   - NO external dependencies (no Express, no database, no AI SDK)
   - Only depends on: Node.js built-ins
   - Entities: ImageTag, TaggingResult
   - Interfaces: IImageTaggingService, IImageRepository
2. **Application Layer** (`src/application/`)
   - Business logic and use cases
   - Depends on: Domain layer only
   - NO infrastructure dependencies
   - Use Cases: TagImageUseCase, TagBase64ImageUseCase
   - DTOs: TagImageRequestDto, TagImageResponseDto
3. **Infrastructure Layer** (`src/infrastructure/`)
   - External services implementation
   - Implements domain interfaces
   - Database, AI provider, configuration
   - Depends on: Domain interfaces, external SDKs
4. **Presentation Layer** (`src/presentation/`)
   - HTTP controllers, routes, middleware
   - Depends on: Application use cases
   - Controllers: ImageTaggingController
   - Middleware: apiKeyAuth, errorHandler, requestId
   - Routes: imageRoutes
5. **Shared Layer** (`src/shared/`)
   - Common utilities used across layers
   - Errors, logger, formatters, generators
### Dependency Flow (CRITICAL)
```
Presentation → Application → Domain ← Infrastructure
          (Shared is used by all layers)
```
## Code Style & Conventions
### General Rules
- Use ES6+ syntax (async/await, arrow functions, destructuring)
- Use `const` by default, `let` only when reassignment needed
- Never use `var`
- Use meaningful variable names (no single letters except loop counters)
- Maximum function length: 50 lines
- Maximum file length: 300 lines
- Use JSDoc comments for all public methods
### Naming Conventions
- Files: PascalCase for classes (e.g., `ImageTag.js`), camelCase for utilities (e.g., `logger.js`)
- Classes: PascalCase (e.g., `TagImageUseCase`)
- Functions/Methods: camelCase (e.g., `generateTags`)
- Constants: UPPER_SNAKE_CASE (e.g., `MAX_FILE_SIZE`)
- Private methods: prefix with underscore (e.g., `_validateInput`)
- Interfaces: prefix with I (e.g., `IImageRepository`)
### Error Handling (MANDATORY)
- Every async function MUST have try-catch
- Use custom AppError classes (ValidationError, AIServiceError, NotFoundError)
- Never expose internal errors to clients
- Always log errors with context
- Return appropriate HTTP status codes:
  - 200: Success
  - 400: Validation error
  - 401: Missing authentication
  - 403: Invalid authentication
  - 404: Not found
  - 500: Internal error
  - 503: External service error
### Database Operations
- ALWAYS use connection pooling (max 20 connections)
- ALWAYS use parameterized queries (prevent SQL injection)
- ALWAYS use transactions for multi-table operations
- ALWAYS release connections in finally block
- NEVER return raw database errors to clients
- Use prepared statements via mysql2/promise
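A minimal sketch of these rules with `mysql2/promise` (table columns and the function name are illustrative, not the final repository API):
```javascript
const mysql = require('mysql2/promise');

// One shared pool, capped at 20 connections as required above
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 20
});

// Multi-table write: parameterized queries, an explicit transaction,
// and the connection released in finally no matter what happens.
async function saveImageWithTags(image, tagging) {
  const connection = await pool.getConnection();
  try {
    await connection.beginTransaction();
    await connection.query(
      'INSERT INTO images (id, file_name, image_hash) VALUES (?, ?, ?)',
      [image.id, image.fileName, image.hash]
    );
    await connection.query(
      'INSERT INTO tagging_results (id, image_id, tags) VALUES (?, ?, ?)',
      [tagging.id, image.id, JSON.stringify(tagging.tags)]
    );
    await connection.commit();
  } catch (error) {
    await connection.rollback();
    throw error; // map to a safe AppError before it reaches the client
  } finally {
    connection.release();
  }
}
```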
### Security Requirements (CRITICAL)
- NEVER store plain text API keys (always SHA256 hash)
- NEVER trust file extensions (use magic number validation with file-type)
- ALWAYS validate file size (<50MB) and dimensions (<15000px)
- ALWAYS sanitize user inputs with Joi
- ALWAYS use helmet middleware for security headers
- NEVER log sensitive data (API keys, passwords)
- Use environment variables for all secrets
### Image Processing Pipeline
1. Validate file exists and mime type
2. Verify actual file type with magic numbers (file-type package)
3. Calculate SHA256 hash of buffer
4. Check database for duplicate hash
5. If duplicate → return cached result immediately (FREE)
6. If new → optimize image:
   - Resize to max 2048px if larger (save Claude API costs)
   - Convert HEIC/TIFF/BMP to JPEG
   - Use Sharp library for all operations
7. Convert to base64
8. Send to Claude API with retry logic
9. Parse and validate JSON response
10. Save to database with hash
11. Return formatted response
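A condensed sketch of steps 2-7 above (the function name, repository wiring, and the blanket JPEG re-encode are illustrative simplifications; the real pipeline only converts HEIC/TIFF/BMP):
```javascript
const crypto = require('crypto');
const sharp = require('sharp');

async function prepareImage(buffer, imageRepository) {
  // Step 2: trust magic numbers, not extensions (file-type v19 is ESM-only, hence dynamic import)
  const { fileTypeFromBuffer } = await import('file-type');
  const detected = await fileTypeFromBuffer(buffer);
  if (!detected || !detected.mime.startsWith('image/')) {
    throw new Error('Unsupported or corrupted file');
  }

  // Steps 3-5: SHA256 hash, then duplicate lookup (a cached hit costs nothing)
  const imageHash = crypto.createHash('sha256').update(buffer).digest('hex');
  const cached = await imageRepository.findByImageHash(imageHash);
  if (cached) return { cached, imageHash };

  // Step 6: cap the longest side at 2048px; this sketch re-encodes everything to JPEG
  const optimized = await sharp(buffer)
    .resize(2048, 2048, { fit: 'inside', withoutEnlargement: true })
    .jpeg({ quality: 85 })
    .toBuffer();

  // Step 7: base64 for the Claude API request
  return { base64: optimized.toString('base64'), mediaType: 'image/jpeg', imageHash };
}
```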
### Claude AI Integration
- Model: claude-sonnet-4-20250514
- ALWAYS use async-retry (3 retries, exponential backoff)
- ALWAYS validate JSON response before parsing
- Return 503 (not 500) for AI service failures
- Prompt must request 20-30 tags in specific categories
- Parse response: {tags: [{category, value, confidence}], summary}
### API Key Authentication
- Extract from X-API-Key header OR Authorization: Bearer
- Validate format: `key_(test|live)_[64 hex chars]`
- Hash with SHA256 before database lookup
- Check: key exists, is_active=true, not expired, not revoked
- Attach to request: `req.apiKey = {id, name, environment}`
- Support SKIP_AUTH=true ONLY in development
- NO rate limiting, NO usage tracking (intentionally simple)
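A minimal sketch of this flow (the `findByKeyHash` repository method and the inline error responses are assumptions for illustration; the full middleware should route failures through `next(error)` and `ResponseFormatter`):
```javascript
const crypto = require('crypto');

const KEY_FORMAT = /^key_(test|live)_[a-f0-9]{64}$/;

function extractApiKey(req) {
  // X-API-Key takes precedence; Authorization: Bearer is the fallback
  if (req.headers['x-api-key']) return req.headers['x-api-key'];
  const auth = req.headers.authorization || '';
  return auth.startsWith('Bearer ') ? auth.slice(7) : null;
}

function apiKeyAuth(apiKeyRepository, logger) {
  return async (req, res, next) => {
    // SKIP_AUTH is honoured only in development
    if (process.env.SKIP_AUTH === 'true' && process.env.NODE_ENV === 'development') {
      return next();
    }
    const apiKey = extractApiKey(req);
    if (!apiKey) {
      return res.status(401).json({ success: false, message: 'API key required. Include X-API-Key header.' });
    }
    if (!KEY_FORMAT.test(apiKey)) {
      return res.status(403).json({ success: false, message: 'Invalid API key format.' });
    }
    // Only the SHA256 hash is ever compared against the database
    const keyHash = crypto.createHash('sha256').update(apiKey).digest('hex');
    const record = await apiKeyRepository.findByKeyHash(keyHash); // assumed repository method
    if (!record || !record.is_active) {
      return res.status(403).json({ success: false, message: 'Invalid or revoked API key.' });
    }
    req.apiKey = { id: record.id, name: record.name, environment: record.environment };
    logger.info('API key authenticated', { keyId: record.id });
    next();
  };
}
```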
### Response Format (ALWAYS use ResponseFormatter)
```javascript
// Success
{
  "success": true,
  "message": "✅ New image tagged successfully",
  "data": {...},
  "timestamp": "2025-10-31T10:30:00.000Z"
}

// Error
{
  "success": false,
  "message": "API key required. Include X-API-Key header.",
  "timestamp": "2025-10-31T10:30:00.000Z"
}
```
### Logging (Winston)
- Log levels: error, warn, info, debug
- ALWAYS log: authentication events, errors, API calls
- NEVER log: API keys, passwords, sensitive data
- Use daily rotating file transport
- Separate error.log and combined.log
- Include context: `logger.info('Message', {key: 'value'})`
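A minimal logger sketch along these lines (file names follow the `logs/` layout used elsewhere in this project):
```javascript
const winston = require('winston');
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.DailyRotateFile({
      filename: 'logs/error-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      level: 'error'
    }),
    new winston.transports.DailyRotateFile({
      filename: 'logs/combined-%DATE%.log',
      datePattern: 'YYYY-MM-DD'
    })
  ]
});

// Console output only outside production
if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({ format: winston.format.simple() }));
}

// Context travels as structured metadata; never log keys or passwords
logger.info('Image tagged', { imageId: 'abc-123', durationMs: 1840 });

module.exports = logger;
```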
### Validation Rules (Joi)
- Validate ALL user inputs
- Fail fast with clear error messages
- Example:
```javascript
const schema = Joi.object({
  base64Image: Joi.string().base64().required(),
  mediaType: Joi.string().valid('image/jpeg', 'image/png').required(),
  fileName: Joi.string().max(255).optional()
});
```
## Implementation Checklist
### When creating entities (Domain):
- [ ] No external dependencies
- [ ] Validate inputs in constructor
- [ ] Immutable properties (use getters)
- [ ] Business logic methods only
- [ ] toJSON() method for serialization
### When creating use cases (Application):
- [ ] Single responsibility
- [ ] Depends only on domain interfaces
- [ ] Constructor injection for dependencies
- [ ] Comprehensive error handling
- [ ] Return DTOs, not entities
### When creating repositories (Infrastructure):
- [ ] Implements domain interface
- [ ] Connection pooling
- [ ] Parameterized queries
- [ ] Transaction support
- [ ] Release connections in finally
### When creating controllers (Presentation):
- [ ] Thin layer (delegate to use cases)
- [ ] Validate inputs
- [ ] Handle errors gracefully
- [ ] Return formatted responses
- [ ] Log important events
### When creating middleware:
- [ ] Next() for success
- [ ] Next(error) for errors
- [ ] Attach data to req object
- [ ] Don't mutate req.body directly
## File Templates
### Entity Template:
```javascript
// ValidationError comes from the shared errors module; adjust the path to your layout
const { ValidationError } = require('../../shared/errors/AppError');

class EntityName {
  constructor(data) {
    this._validateInput(data);
    this.property1 = data.property1;
    this.property2 = data.property2;
  }

  _validateInput(data) {
    if (!data.property1) {
      throw new ValidationError('Property1 is required');
    }
  }

  businessMethod() {
    // Pure business logic
  }

  toJSON() {
    return {
      property1: this.property1,
      property2: this.property2
    };
  }
}

module.exports = EntityName;
```
### Use Case Template:
```javascript
// Placeholder imports: swap in the concrete DTO and the shared error module for your use case
const ResponseDto = require('../dtos/ResponseDto');
const { ValidationError } = require('../../shared/errors/AppError');

class UseCaseName {
  constructor(dependency1, dependency2, logger) {
    this.dependency1 = dependency1;
    this.dependency2 = dependency2;
    this.logger = logger;
  }

  async execute(input) {
    try {
      // 1. Validate input
      this._validateInput(input);
      // 2. Business logic
      const result = await this.dependency1.doSomething(input);
      // 3. Return DTO
      return ResponseDto.fromEntity(result);
    } catch (error) {
      this.logger.error('Use case error:', error);
      throw error;
    }
  }

  _validateInput(input) {
    if (!input) throw new ValidationError('Input required');
  }
}

module.exports = UseCaseName;
```
### Repository Template:
```javascript
// Placeholder import: replace with the concrete domain entity this repository maps to
const Entity = require('../../domain/entities/Entity');

class RepositoryName {
  constructor(pool, logger) {
    this.pool = pool;
    this.logger = logger;
  }

  async methodName(param) {
    const connection = await this.pool.getConnection();
    try {
      const [rows] = await connection.query(
        'SELECT * FROM table WHERE column = ?',
        [param]
      );
      return this._mapToEntity(rows[0]);
    } catch (error) {
      this.logger.error('Repository error:', error);
      throw new Error('Database operation failed');
    } finally {
      connection.release();
    }
  }

  _mapToEntity(row) {
    if (!row) return null;
    return new Entity(row);
  }
}

module.exports = RepositoryName;
```
### Controller Template:
```javascript
// ResponseFormatter lives in the shared utilities; adjust the path to your layout
const ResponseFormatter = require('../../shared/utils/responseFormatter');

class ControllerName {
  constructor(useCase, logger) {
    this.useCase = useCase;
    this.logger = logger;
  }

  async handleRequest(req, res, next) {
    try {
      // 1. Extract and validate input
      const input = this._extractInput(req);
      // 2. Execute use case
      const result = await this.useCase.execute(input);
      // 3. Format and return response
      res.status(200).json(
        ResponseFormatter.success(result, 'Success message')
      );
    } catch (error) {
      next(error);
    }
  }

  _extractInput(req) {
    return {
      property1: req.body.property1,
      property2: req.params.property2
    };
  }
}

module.exports = ControllerName;
```
## Critical Don'ts (NEVER DO)
❌ Don't put business logic in controllers
❌ Don't put database queries in use cases
❌ Don't skip input validation
❌ Don't trust file extensions
❌ Don't store plain text secrets
❌ Don't expose internal errors to clients
❌ Don't skip image hash calculation (needed for duplicate detection)
❌ Don't use `var` keyword
❌ Don't return database errors directly
❌ Don't skip connection.release() in finally
❌ Don't hardcode values (use env vars)
❌ Don't skip error handling in async functions
❌ Don't violate layer boundaries
## Critical Do's (ALWAYS DO)
✅ Use dependency injection everywhere
✅ Hash API keys before storage (SHA256)
✅ Validate with magic numbers (file-type package)
✅ Calculate SHA256 hash of images (duplicate detection)
✅ Use transactions for multi-table operations
✅ Optimize images before sending to Claude (max 2048px)
✅ Implement retry logic for Claude API (async-retry)
✅ Release database connections in finally
✅ Return helpful error messages to users
✅ Log errors with context
✅ Use prepared statements for SQL
✅ Use environment variables for config
✅ Follow Clean Architecture strictly
## Testing Requirements
### Unit Tests (for each class):
- Test happy path
- Test error cases
- Mock all dependencies
- Test edge cases
- Aim for 80%+ coverage
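A sketch of such a unit test, assuming Jest is added as a dev dependency (the constructor argument order and paths are illustrative):
```javascript
const TagImageUseCase = require('../../src/application/useCases/TagImageUseCase');

describe('TagImageUseCase', () => {
  it('returns the cached result for a duplicate image without calling the AI service', async () => {
    // Every dependency is mocked; only the use case logic is under test
    const imageRepository = {
      findByImageHash: jest.fn().mockResolvedValue({ imageId: 'abc', tags: [], summary: 'cached' }),
      save: jest.fn()
    };
    const taggingService = { generateTags: jest.fn() };
    const logger = { info: jest.fn(), error: jest.fn() };

    const useCase = new TagImageUseCase(imageRepository, taggingService, logger);
    const result = await useCase.execute({ fileBuffer: Buffer.from('fake'), mimeType: 'image/jpeg' });

    expect(result.isDuplicate).toBe(true);
    expect(taggingService.generateTags).not.toHaveBeenCalled();
  });
});
```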
### Integration Tests:
- Test full API endpoints
- Test with real database (test DB)
- Test duplicate detection
- Test authentication
- Test error scenarios
## Performance Requirements
- API response time: <5 seconds (including Claude API)
- Duplicate detection: <50ms
- Database query optimization: use EXPLAIN
- Image optimization: resize large images
- Connection pooling: max 20 connections
- Memory: monitor with process.memoryUsage()
## Documentation Requirements
### JSDoc for public methods:
```javascript
/**
 * Generate tags for an image
 * @param {string} base64Image - Base64 encoded image
 * @param {string} mediaType - MIME type (e.g., 'image/jpeg')
 * @returns {Promise<TaggingResult>} Tagging result with tags and summary
 * @throws {ValidationError} If inputs are invalid
 * @throws {AIServiceError} If Claude API fails
 */
async generateTags(base64Image, mediaType) {
  // Implementation
}
```
## Environment Variables Reference
Required:
- ANTHROPIC_API_KEY - Claude API key
- DB_HOST, DB_USER, DB_PASSWORD - MySQL credentials
Optional:
- NODE_ENV (default: development)
- PORT (default: 3000)
- SKIP_AUTH (default: false)
- LOG_LEVEL (default: info)
## When Implementing New Features
1. Start with Domain layer (entities, interfaces)
2. Add Application layer (use cases, DTOs)
3. Implement Infrastructure (repositories, providers)
4. Add Presentation (controllers, routes, middleware)
5. Update tests
6. Update documentation
7. Test end-to-end
## Common Patterns
### Dependency Container Pattern:
```javascript
class DependencyContainer {
  constructor() {
    this._services = new Map();
    this._initialize();
  }

  _initialize() {
    // Register all services
    this._services.set('serviceName', new Service());
  }

  get(serviceName) {
    return this._services.get(serviceName);
  }
}
```
### Repository Pattern:
```javascript
// Interface (Domain)
class IRepository {
  async findById(id) { throw new Error('Not implemented'); }
  async save(entity) { throw new Error('Not implemented'); }
}

// Implementation (Infrastructure)
class MySQLRepository extends IRepository {
  async findById(id) { /* MySQL implementation */ }
  async save(entity) { /* MySQL implementation */ }
}
```
## API Endpoints Structure
```
GET /api/images/health - Health check (public)
POST /api/images/tag - Tag uploaded image (auth required)
POST /api/images/tag-base64 - Tag base64 image (auth required)
GET /api/images/search - Search by tag (auth required)
GET /api/images/stats - Get statistics (auth required)
```
## Success Criteria
Implementation is complete when:
- ✅ All layers properly separated
- ✅ Clean Architecture followed strictly
- ✅ Duplicate detection working (same image = cached result)
- ✅ All image formats supported
- ✅ Authentication working correctly
- ✅ Error handling comprehensive
- ✅ Database queries optimized with indexes
- ✅ Logging properly configured
- ✅ API responds within 5 seconds
- ✅ Tests passing
- ✅ Documentation complete
## Quick Reference: Layer Responsibilities
**Domain**: What the business does (entities, business rules)
**Application**: How the business operates (use cases, orchestration)
**Infrastructure**: Tools the business uses (database, AI, external services)
**Presentation**: How the business communicates (HTTP, controllers, routes)
**Shared**: Common tools for everyone (logger, errors, utilities)
---
Remember: Clean Architecture is about making the code testable, maintainable, and independent of frameworks. The business logic should work even if you change Express to Fastify, or MySQL to PostgreSQL.
---
## 🎯 How to Use This File
1. **Create `.cursorrules` file** in your project root (same level as `package.json`)
2. **Copy-paste** the entire content above into it
3. **Open your project in Cursor**
4. Cursor will automatically read and follow these rules
5. When you ask Cursor to implement features, it will follow these guidelines
---
## 📋 Complete File Checklist for Cursor
Now you have **6 essential files** to give Cursor:
| File | Purpose |
|------|---------|
| `.cursorrules` | **Rules and guidelines for Cursor to follow** |
| `CURSOR_PROMPT.md` | Complete implementation instructions |
| `package.json` | Dependencies |
| `.env.example` | Configuration template |
| `001_initial_schema.sql` | Database schema |
| `.gitignore` | Files to ignore |
---
## 🚀 Quick Start with Cursor
1. **Create project folder**: `mkdir property-image-tagger && cd property-image-tagger`
2. **Create the 6 files above** (copy-paste each content)
3. **Open in Cursor**: `cursor .`
4. **Tell Cursor**:
```
Read CURSOR_PROMPT.md and .cursorrules, then implement the complete
Property Image Tagging REST API following Clean Architecture principles.
Start with the Domain layer and work your way up to Presentation.
```

10
.gitignore vendored Normal file

@@ -0,0 +1,10 @@
node_modules/
.env
logs/
*.log
.DS_Store
.vscode/
.idea/
coverage/
dist/

61
001_initial_schema.sql Normal file

@@ -0,0 +1,61 @@
CREATE DATABASE IF NOT EXISTS property_tagging
CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
USE property_tagging;
CREATE TABLE images (
id CHAR(36) PRIMARY KEY,
file_name VARCHAR(255) NOT NULL,
original_name VARCHAR(255),
file_size INT UNSIGNED,
mime_type VARCHAR(50),
width INT UNSIGNED,
height INT UNSIGNED,
image_hash VARCHAR(64) UNIQUE NOT NULL,
s3_key VARCHAR(500),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_image_hash (image_hash),
INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
CREATE TABLE tagging_results (
id CHAR(36) PRIMARY KEY,
image_id CHAR(36) NOT NULL,
tags JSON NOT NULL,
summary TEXT,
total_tags INT UNSIGNED,
model_version VARCHAR(50) DEFAULT 'claude-sonnet-4-20250514',
processing_time_ms INT UNSIGNED,
was_duplicate BOOLEAN DEFAULT FALSE,
tagged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (image_id) REFERENCES images(id) ON DELETE CASCADE,
INDEX idx_image_id (image_id),
INDEX idx_tagged_at (tagged_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
CREATE TABLE api_keys (
id CHAR(36) PRIMARY KEY,
key_prefix VARCHAR(20) NOT NULL,
key_hash VARCHAR(255) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
is_active BOOLEAN DEFAULT TRUE,
environment ENUM('development', 'staging', 'production') DEFAULT 'development',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
expires_at TIMESTAMP NULL,
revoked_at TIMESTAMP NULL,
revoked_reason TEXT,
INDEX idx_key_hash (key_hash),
INDEX idx_is_active (is_active)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO api_keys (id, key_prefix, key_hash, name, environment)
VALUES (
UUID(),
'key_test_',
SHA2('key_test_1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef', 256),
'Development Test Key',
'development'
);

570
CURSOR_PROMPT.md Normal file

@@ -0,0 +1,570 @@
# Property Image Tagging REST API - Cursor AI Implementation Guide
## Project Overview
Build a production-ready REST API for automatic property image tagging using Claude AI (Anthropic). The system uses Node.js, Express, MySQL, and Clean Architecture patterns.
## Core Requirements
### 1. Technology Stack
- **Runtime**: Node.js 18+
- **Framework**: Express.js
- **Database**: MySQL 8.0+
- **AI Provider**: Anthropic Claude API (Claude Sonnet 4, model `claude-sonnet-4-20250514`)
- **Architecture**: Clean Architecture (Domain, Application, Infrastructure, Presentation layers)
- **Image Processing**: Sharp library for format conversion and optimization
- **Authentication**: Simple API key system (no rate limiting, no usage tracking)
### 2. Key Features
- Upload property images and get 20-30 AI-generated tags
- Duplicate detection using SHA256 image hashing (cache results to avoid re-tagging)
- Support multiple image formats (JPEG, PNG, WebP, HEIC, TIFF, BMP, GIF)
- Search images by tags using MySQL JSON queries
- Simple API key authentication (SHA256 hashed)
- RESTful API with Clean Architecture
### 3. Architecture Principles
- **Clean Architecture**: Strict separation of Domain, Application, Infrastructure, Presentation
- **Dependency Injection**: Use container pattern for service management
- **Repository Pattern**: Abstract database operations
- **Use Case Pattern**: Each business operation is a separate use case class
- **DTO Pattern**: Data Transfer Objects for API boundaries
## Project Structure
Create this exact structure:
```
src/
├── domain/
│ ├── entities/
│ │ ├── ImageTag.js
│ │ └── TaggingResult.js
│ └── interfaces/
│ ├── IImageTaggingService.js
│ └── IImageRepository.js
├── application/
│ ├── dtos/
│ │ ├── TagImageRequestDto.js
│ │ └── TagImageResponseDto.js
│ └── useCases/
│ ├── TagImageUseCase.js
│ └── TagBase64ImageUseCase.js
├── infrastructure/
│ ├── ai/
│ │ └── ClaudeAIProvider.js
│ ├── repositories/
│ │ ├── MySQLImageRepository.js
│ │ └── ApiKeyRepository.js
│ └── config/
│ ├── dependencyContainer.js
│ └── corsConfig.js
├── presentation/
│ ├── controllers/
│ │ └── ImageTaggingController.js
│ ├── middleware/
│ │ ├── errorHandler.js
│ │ ├── apiKeyAuth.js
│ │ └── requestId.js
│ ├── routes/
│ │ └── imageRoutes.js
│ └── validators/
│ └── imageValidator.js
├── shared/
│ ├── errors/
│ │ └── AppError.js
│ └── utils/
│ ├── logger.js
│ ├── responseFormatter.js
│ └── apiKeyGenerator.js
└── server.js
scripts/
└── manage-api-keys.js
migrations/
└── 001_initial_schema.sql
```
## Database Schema
### Table: images
```sql
CREATE TABLE images (
id CHAR(36) PRIMARY KEY,
file_name VARCHAR(255) NOT NULL,
original_name VARCHAR(255),
file_size INT UNSIGNED,
mime_type VARCHAR(50),
width INT UNSIGNED,
height INT UNSIGNED,
image_hash VARCHAR(64) UNIQUE NOT NULL,
s3_key VARCHAR(500),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_image_hash (image_hash)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: tagging_results
```sql
CREATE TABLE tagging_results (
id CHAR(36) PRIMARY KEY,
image_id CHAR(36) NOT NULL,
tags JSON NOT NULL,
summary TEXT,
total_tags INT UNSIGNED,
model_version VARCHAR(50) DEFAULT 'claude-sonnet-4-20250514',
processing_time_ms INT UNSIGNED,
was_duplicate BOOLEAN DEFAULT FALSE,
tagged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (image_id) REFERENCES images(id) ON DELETE CASCADE,
INDEX idx_image_id (image_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: api_keys
```sql
CREATE TABLE api_keys (
id CHAR(36) PRIMARY KEY,
key_prefix VARCHAR(20) NOT NULL,
key_hash VARCHAR(255) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
is_active BOOLEAN DEFAULT TRUE,
environment ENUM('development', 'staging', 'production') DEFAULT 'development',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
expires_at TIMESTAMP NULL,
revoked_at TIMESTAMP NULL,
revoked_reason TEXT,
INDEX idx_key_hash (key_hash),
INDEX idx_is_active (is_active)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
## Implementation Requirements
### Domain Layer
**ImageTag Entity:**
- Properties: category, value, confidence
- Method: isHighConfidence() - returns true if confidence >= 0.8
- Method: toJSON() - serialize to plain object
- Validate category and value are not empty
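A minimal sketch of this entity (dependency-free; the shared `ValidationError` could replace the plain `Error` if preferred):
```javascript
class ImageTag {
  constructor({ category, value, confidence }) {
    if (!category || !value) {
      throw new Error('ImageTag requires a non-empty category and value');
    }
    this.category = category;
    this.value = value;
    this.confidence = confidence;
  }

  // High confidence means the model scored the tag at 0.8 or above
  isHighConfidence() {
    return this.confidence >= 0.8;
  }

  toJSON() {
    return { category: this.category, value: this.value, confidence: this.confidence };
  }
}

module.exports = ImageTag;
```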
**TaggingResult Entity:**
- Properties: imageId, tags array, summary, createdAt
- Methods:
  - getTagsByCategory(category)
  - getHighConfidenceTags()
  - getTotalTags()
- Validate tags array is not empty
**Interfaces:**
- IImageTaggingService: generateTags(base64Image, mediaType)
- IImageRepository: save(taggingResult, imageBuffer), findById(imageId), findByImageHash(hash)
### Application Layer
**TagImageUseCase:**
1. Validate input (fileBuffer, mimeType required)
2. Calculate SHA256 hash of image buffer
3. Check database for existing hash
4. If duplicate → return cached result with isDuplicate: true, costSavings message
5. If new → convert to base64, call AI service
6. Create TaggingResult entity
7. Save to database with image buffer (for hash)
8. Return TagImageResponseDto
**TagImageRequestDto:**
- fileBuffer (Buffer)
- mimeType (string)
- fileName (string)
- Method: toBase64() - convert buffer to base64 string
**TagImageResponseDto:**
- imageId, tags, summary, totalTags, isDuplicate, cachedResult, processedAt
- Static method: fromTaggingResult(taggingResult)
### Infrastructure Layer
**ClaudeAIProvider:**
- Use @anthropic-ai/sdk npm package
- Model: claude-sonnet-4-20250514
- Wrap API call with async-retry (3 retries, exponential backoff)
- Send this exact prompt:
```
Analyze this property image and generate 20-30 descriptive tags categorized as follows:
Tag Categories:
1. View: (e.g., Burj Khalifa view, ocean view, downtown skyline, marina view)
2. Furnishing: (e.g., fully furnished, unfurnished, modern, contemporary, luxury)
3. Kitchen: (e.g., with appliances, open kitchen, modular, closed kitchen)
4. Flooring: (e.g., wooden, marble, tile, carpet, laminate, porcelain)
5. Room Type: (e.g., bedroom, living room, bathroom, kitchen, balcony)
6. Style: (e.g., modern, traditional, scandinavian, industrial)
7. Features: (e.g., high ceiling, floor-to-ceiling windows, built-in wardrobes)
8. Condition: (e.g., newly renovated, well-maintained, ready to move)
9. Lighting: (e.g., natural light, ambient lighting, LED lighting)
10. Color Scheme: (e.g., neutral tones, warm colors, monochrome)
Return ONLY a JSON object in this exact format:
{
  "tags": [
    {"category": "View", "value": "marina view", "confidence": 0.95},
    {"category": "Furnishing", "value": "fully furnished", "confidence": 0.90}
  ],
  "summary": "Brief one-sentence description"
}
Include 20-30 tags total.
```
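A minimal sketch of the provider with retries and JSON validation (the prompt constant abbreviates the full prompt above; mapping failures to `AIServiceError` is omitted for brevity):
```javascript
const Anthropic = require('@anthropic-ai/sdk');
const retry = require('async-retry');

// Abbreviated here; use the full prompt specified above
const TAGGING_PROMPT = 'Analyze this property image and generate 20-30 descriptive tags... Return ONLY a JSON object.';

class ClaudeAIProvider {
  constructor(apiKey, logger) {
    this.client = new Anthropic({ apiKey });
    this.logger = logger;
  }

  async generateTags(base64Image, mediaType) {
    return retry(
      async () => {
        const response = await this.client.messages.create({
          model: 'claude-sonnet-4-20250514',
          max_tokens: 2048,
          messages: [{
            role: 'user',
            content: [
              { type: 'image', source: { type: 'base64', media_type: mediaType, data: base64Image } },
              { type: 'text', text: TAGGING_PROMPT }
            ]
          }]
        });
        // Validate the JSON before trusting it; a malformed reply triggers a retry
        const parsed = JSON.parse(response.content[0].text);
        if (!Array.isArray(parsed.tags) || typeof parsed.summary !== 'string') {
          throw new Error('Claude returned an unexpected payload shape');
        }
        return parsed;
      },
      {
        retries: 3,
        factor: 2, // exponential backoff
        onRetry: (err) => this.logger.warn('Retrying Claude call', { error: err.message })
      }
    );
  }
}

module.exports = ClaudeAIProvider;
```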
**MySQLImageRepository:**
- Use mysql2/promise with connection pool (max 20 connections)
- _calculateImageHash(buffer) - SHA256 hash using crypto
- findByImageHash(hash) - returns existing TaggingResult if found
- save(taggingResult, imageBuffer) - stores both image and tags in transaction
  - Calculate hash
  - Check for duplicate first
  - If duplicate, return cached result
  - Insert into images table
  - Insert into tagging_results table
  - Commit transaction
- findById(imageId) - returns TaggingResult
- findByTagValue(value) - use JSON_SEARCH for MySQL
- Support JSON queries using JSON_CONTAINS and JSON_SEARCH
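A sketch of the tag search against the JSON column (exact SQL is illustrative; `JSON_SEARCH` returns a path only when the value exists at `$[*].value`):
```javascript
// Find images whose tags JSON array contains a given value, e.g. "marina view"
async function findByTagValue(pool, tagValue) {
  const [rows] = await pool.query(
    `SELECT i.id, i.file_name, tr.tags, tr.summary
       FROM tagging_results tr
       JOIN images i ON i.id = tr.image_id
      WHERE JSON_SEARCH(tr.tags, 'one', ?, NULL, '$[*].value') IS NOT NULL`,
    [tagValue]
  );
  return rows;
}
```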
**ApiKeyRepository:**
- validateKey(apiKey) - hash key, check database, return key data if valid
- createKey(data) - generate secure key with ApiKeyGenerator, hash it, store
- revokeKey(keyId, reason) - set is_active=false
- getAllKeys() - list all keys
### Presentation Layer
**ImageValidator:**
- validateUpload(file) - async function
  - Check file exists
  - Check mime type in allowed list
  - Use file-type npm package to verify actual type (magic number)
  - Use Sharp to validate image and check dimensions (max 15000px)
  - Max file size 50MB
- convertToClaudeSupportedFormat(buffer, mimeType)
  - If already JPEG/PNG/WebP/GIF → optimize only
  - If HEIC/TIFF/BMP → convert to JPEG
  - Use Sharp for conversion
- optimizeForAI(buffer) - resize to max 2048px if larger
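A sketch of the size/dimension checks and format normalization (limits mirror the rules above; the magic-number check with `file-type` runs before this, as listed):
```javascript
const sharp = require('sharp');

const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50MB
const MAX_DIMENSION = 15000;            // pixels
const CLAUDE_NATIVE = ['image/jpeg', 'image/png', 'image/webp', 'image/gif'];

async function validateUpload(file) {
  if (!file || !file.buffer) throw new Error('No file uploaded');
  if (file.size > MAX_FILE_SIZE) throw new Error('File size exceeds maximum allowed size');

  // Sharp reads dimensions from the header without decoding the whole image
  const { width, height } = await sharp(file.buffer).metadata();
  if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
    throw new Error('Image dimensions exceed the 15000px limit');
  }
}

async function convertToClaudeSupportedFormat(buffer, mimeType) {
  // HEIC/TIFF/BMP are normalized to JPEG; Claude-native formats pass through
  if (CLAUDE_NATIVE.includes(mimeType)) return { buffer, mimeType };
  const converted = await sharp(buffer).jpeg({ quality: 90 }).toBuffer();
  return { buffer: converted, mimeType: 'image/jpeg' };
}
```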
**ApiKeyAuthMiddleware:**
- authenticate() middleware function
  - Skip if SKIP_AUTH=true in development
  - Extract key from X-API-Key header or Authorization: Bearer
  - Validate key format: key_(test|live)_[64 hex chars]
  - Call ApiKeyRepository.validateKey()
  - If invalid → 401/403 with clear error
  - If valid → attach req.apiKey = {id, name, environment}
  - Log authentication
**ImageTaggingController:**
- tagUploadedImage(req, res, next)
  - Validate upload
  - Convert format if needed
  - Create TagImageRequestDto
  - Execute TagImageUseCase
  - Return formatted response
- tagBase64Image(req, res, next)
  - Validate base64 input with Joi
  - Execute TagBase64ImageUseCase
  - Return formatted response
- searchByTag(req, res, next)
  - Get tag value from query
  - Call repository.findByTagValue()
  - Return results
- getStats(req, res, next)
  - Get statistics from repository
  - Return stats
- getHealth(req, res)
  - Check database connection
  - Check memory usage
  - Return health status
**Routes:**
- GET /api/images/health - public
- POST /api/images/tag - requires auth, multipart upload
- POST /api/images/tag-base64 - requires auth, JSON body
- GET /api/images/search - requires auth
- GET /api/images/stats - requires auth
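A sketch of the route factory wiring these endpoints (the multer field name `image` matches the Postman collection; treating `authenticate` as a `(req, res, next)` method is an assumption):
```javascript
const express = require('express');
const multer = require('multer');

function createImageRoutes(imageController, authMiddleware) {
  const router = express.Router();
  // Keep uploads in memory so the buffer can be hashed and sent to Claude
  const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 50 * 1024 * 1024 } });
  const auth = (req, res, next) => authMiddleware.authenticate(req, res, next);

  router.get('/health', (req, res) => imageController.getHealth(req, res));
  router.post('/tag', auth, upload.single('image'), (req, res, next) => imageController.tagUploadedImage(req, res, next));
  router.post('/tag-base64', auth, (req, res, next) => imageController.tagBase64Image(req, res, next));
  router.get('/search', auth, (req, res, next) => imageController.searchByTag(req, res, next));
  router.get('/stats', auth, (req, res, next) => imageController.getStats(req, res, next));

  return router;
}

module.exports = createImageRoutes;
```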
### Shared Utilities
**AppError Classes:**
```javascript
class AppError extends Error {
  constructor(message, statusCode = 500, isOperational = true) {
    super(message);
    this.statusCode = statusCode;
    this.isOperational = isOperational;
    Error.captureStackTrace(this, this.constructor);
  }
}

class ValidationError extends AppError {
  constructor(message) { super(message, 400); }
}

class NotFoundError extends AppError {
  constructor(message) { super(message, 404); }
}

class AIServiceError extends AppError {
  constructor(message) { super(message, 503); }
}

module.exports = { AppError, ValidationError, NotFoundError, AIServiceError };
```
**Logger (Winston):**
- Daily rotating file transport
- JSON formatting
- Separate error.log and combined.log
- Console output in development
- Methods: info(message, meta), error(message, error), warn(message, meta), debug(message, meta)
**ApiKeyGenerator:**
- generate(prefix) - create secure 64-char hex key with prefix
- hash(apiKey) - SHA256 hash
- mask(apiKey) - show only first 13 and last 4 chars
- isValidFormat(apiKey) - validate pattern
- extractPrefix(apiKey) - get prefix part
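A minimal sketch of these helpers with Node's built-in `crypto` module:
```javascript
const crypto = require('crypto');

const ApiKeyGenerator = {
  // e.g. generate('key_test_') -> 'key_test_' + 64 hex characters
  generate(prefix) {
    return `${prefix}${crypto.randomBytes(32).toString('hex')}`;
  },

  // Only this hash is ever stored in the api_keys table
  hash(apiKey) {
    return crypto.createHash('sha256').update(apiKey).digest('hex');
  },

  // Safe to log: first 13 and last 4 characters only
  mask(apiKey) {
    return `${apiKey.slice(0, 13)}...${apiKey.slice(-4)}`;
  },

  isValidFormat(apiKey) {
    return /^key_(test|live)_[a-f0-9]{64}$/.test(apiKey);
  },

  extractPrefix(apiKey) {
    const match = apiKey.match(/^key_(test|live)_/);
    return match ? match[0] : null;
  }
};

module.exports = ApiKeyGenerator;
```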
**ResponseFormatter:**
- success(data, message) - {success: true, message, data, timestamp}
- error(error, message) - {success: false, message, error, timestamp}
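A minimal sketch:
```javascript
class ResponseFormatter {
  static success(data, message = 'Success') {
    return { success: true, message, data, timestamp: new Date().toISOString() };
  }

  static error(error, message = 'An error occurred') {
    return {
      success: false,
      message,
      // expose only a safe identifier, never stack traces or raw database errors
      error: error instanceof Error ? error.name : error,
      timestamp: new Date().toISOString()
    };
  }
}

module.exports = ResponseFormatter;
```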
### Server Setup (src/server.js)
```javascript
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const compression = require('compression');
const morgan = require('morgan');
// Import middleware and routes
const corsConfig = require('./infrastructure/config/corsConfig');
const container = require('./infrastructure/config/dependencyContainer');
const createImageRoutes = require('./presentation/routes/imageRoutes');
const ImageTaggingController = require('./presentation/controllers/ImageTaggingController');
const ApiKeyAuthMiddleware = require('./presentation/middleware/apiKeyAuth');
const errorHandler = require('./presentation/middleware/errorHandler');
const requestIdMiddleware = require('./presentation/middleware/requestId');
const logger = require('./shared/utils/logger');
const app = express();
const PORT = process.env.PORT || 3000;
// Middleware
app.use(helmet());
app.use(compression());
app.use(cors(corsConfig));
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
app.use(requestIdMiddleware);
app.use(morgan('combined'));
// Root
app.get('/', (req, res) => {
  res.json({
    service: 'Property Image Tagging API',
    version: '1.0.0',
    authentication: 'Simple API Key'
  });
});
// Dependency injection
const tagImageUseCase = container.get('tagImageUseCase');
const tagBase64ImageUseCase = container.get('tagBase64ImageUseCase');
const imageRepository = container.get('imageRepository');
const apiKeyRepository = container.get('apiKeyRepository');
const imageController = new ImageTaggingController(
  tagImageUseCase,
  tagBase64ImageUseCase,
  imageRepository,
  logger
);
const authMiddleware = new ApiKeyAuthMiddleware(apiKeyRepository, logger);
// Routes
const imageRoutes = createImageRoutes(imageController, authMiddleware);
app.use('/api/images', imageRoutes);
// Error handler (last)
app.use(errorHandler);
// Graceful shutdown
const gracefulShutdown = async (signal) => {
  logger.info(`${signal} received, shutting down`);
  const pool = container.get('pool');
  await pool.end();
  process.exit(0);
};
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
// Start
if (process.env.NODE_ENV !== 'test') {
  app.listen(PORT, () => {
    logger.info(`Server running on port ${PORT}`);
  });
}
module.exports = app;
```
### CLI Tool (scripts/manage-api-keys.js)
Create command-line tool with these commands:
- create [name] [environment] [description] - Generate new API key
- list - Display all keys
- revoke <key-id> [reason] - Deactivate key
- activate <key-id> - Reactivate key
Use ApiKeyRepository methods. Show plain text key ONLY on creation.
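A minimal sketch of the command dispatch (the `createKey` return shape and container wiring are assumptions):
```javascript
#!/usr/bin/env node
// scripts/manage-api-keys.js - command dispatch sketch
const [, , command, ...args] = process.argv;

async function main(apiKeyRepository) {
  switch (command) {
    case 'create': {
      const [name = 'Unnamed key', environment = 'development', description = ''] = args;
      const { plainKey, record } = await apiKeyRepository.createKey({ name, environment, description });
      // The plain text key is printed exactly once, at creation time
      console.log(`Created ${record.id}\nAPI key (save it now): ${plainKey}`);
      break;
    }
    case 'list':
      console.table(await apiKeyRepository.getAllKeys());
      break;
    case 'revoke':
      await apiKeyRepository.revokeKey(args[0], args[1] || 'No reason given');
      console.log(`Revoked ${args[0]}`);
      break;
    default:
      console.log('Usage: manage-api-keys <create|list|revoke|activate> [...args]');
  }
}

// Wire up the repository (e.g. from the dependency container), then:
// main(container.get('apiKeyRepository')).then(() => process.exit(0));
```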
## Critical Requirements
### MUST DO:
✅ Use connection pooling for MySQL (max 20)
✅ Use transactions for saving image + tags
✅ Implement retry logic for Claude API (3 retries)
✅ Validate JSON from Claude before parsing
✅ Hash API keys with SHA256 (never store plain text)
✅ Use prepared statements for SQL
✅ Optimize images before sending to Claude (max 2048px)
✅ Calculate image hash for duplicate detection
✅ Return helpful error messages
✅ Use async-retry package for Claude API calls
✅ Use file-type package for magic number validation
✅ Use Sharp for image processing
✅ Use Joi for input validation
✅ Use Winston for logging
### MUST NOT DO:
❌ Store plain text API keys
❌ Trust file extensions alone
❌ Skip image hash calculation
❌ Return database errors to client
❌ Skip input validation
❌ Hardcode values (use env vars)
## Expected API Response Examples
### Successful New Image Tagging:
```json
{
  "success": true,
  "message": "✅ New image tagged successfully",
  "data": {
    "imageId": "abc-123-def",
    "tags": [
      {"category": "View", "value": "Burj Khalifa view", "confidence": 0.98},
      {"category": "Furnishing", "value": "fully furnished", "confidence": 0.95}
    ],
    "summary": "Modern luxury apartment with Burj Khalifa view",
    "totalTags": 27,
    "isDuplicate": false,
    "processedAt": "2025-10-31T10:30:00.000Z"
  }
}
```
### Duplicate Image Detection:
```json
{
  "success": true,
  "message": "✅ Duplicate detected - returned cached tags (no cost)",
  "data": {
    "imageId": "abc-123-def",
    "tags": [...],
    "isDuplicate": true,
    "cachedResult": true,
    "costSavings": "This request was FREE - used cached result"
  }
}
```
### Error Response:
```json
{
  "success": false,
  "message": "API key required. Include X-API-Key header.",
  "timestamp": "2025-10-31T10:30:00.000Z"
}
```
## Package Dependencies
```json
{
  "dependencies": {
    "@anthropic-ai/sdk": "^0.32.1",
    "async-retry": "^1.3.3",
    "compression": "^1.7.4",
    "cors": "^2.8.5",
    "dotenv": "^16.4.5",
    "express": "^4.21.2",
    "file-type": "^19.5.0",
    "helmet": "^8.0.0",
    "joi": "^17.13.3",
    "morgan": "^1.10.0",
    "multer": "^1.4.5-lts.1",
    "mysql2": "^3.11.5",
    "sharp": "^0.33.5",
    "uuid": "^11.0.3",
    "winston": "^3.17.0",
    "winston-daily-rotate-file": "^5.0.0"
  },
  "devDependencies": {
    "nodemon": "^3.1.9"
  }
}
```
## Environment Variables
Create .env file:
```
NODE_ENV=development
PORT=3000
ANTHROPIC_API_KEY=your_key_here
DB_HOST=localhost
DB_PORT=3306
DB_NAME=property_tagging
DB_USER=root
DB_PASSWORD=your_password
SKIP_AUTH=true
ALLOWED_ORIGIN=*
LOG_LEVEL=info
```
## Testing Checklist
After implementation:
- [ ] Upload JPEG → verify tags returned
- [ ] Upload same image twice → verify duplicate detected
- [ ] Upload HEIC → verify converted and tagged
- [ ] Upload without API key → verify 401
- [ ] Upload with invalid key → verify 403
- [ ] Upload corrupted file → verify 400
- [ ] Search by tag → verify results
- [ ] Check database has proper indexes
- [ ] Verify Claude API retry works
- [ ] Test graceful shutdown
## Success Metrics
- API responds within 5 seconds
- Duplicate detection works (same image = cached)
- All image formats supported
- Authentication blocks invalid keys
- Errors handled gracefully
- Database queries optimized
- Logs contain useful info
Start implementation from Domain layer → Application → Infrastructure → Presentation. Follow Clean Architecture strictly. Implement comprehensive error handling at every layer.

532
Property_Image_Tagging_API.postman_collection.json Normal file

@@ -0,0 +1,532 @@
{
"info": {
"_postman_id": "property-image-tagger-api",
"name": "Property Image Tagging API",
"description": "Complete API collection for Property Image Tagging REST API using Claude AI",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "Public Endpoints",
"item": [
{
"name": "Root - Service Info",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/",
"host": [
"{{baseUrl}}"
],
"path": [
""
]
},
"description": "Get service information"
},
"response": []
},
{
"name": "Health Check",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{baseUrl}}/api/images/health",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"health"
]
},
"description": "Check API and database health status"
},
"response": []
}
],
"description": "Endpoints that don't require authentication"
},
{
"name": "Image Tagging",
"item": [
{
"name": "Tag Uploaded Image",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response has success flag\", function () {",
" var jsonData = pm.response.json();",
" pm.expect(jsonData).to.have.property('success');",
"});",
"",
"pm.test(\"Response contains tags\", function () {",
" var jsonData = pm.response.json();",
" if (jsonData.success && jsonData.data) {",
" pm.expect(jsonData.data).to.have.property('tags');",
" pm.expect(jsonData.data.tags).to.be.an('array');",
" }",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "POST",
"header": [
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"body": {
"mode": "formdata",
"formdata": [
{
"key": "image",
"type": "file",
"src": [],
"description": "Upload an image file (JPEG, PNG, WebP, HEIC, TIFF, BMP, GIF)"
}
]
},
"url": {
"raw": "{{baseUrl}}/api/images/tag",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag"
]
},
"description": "Tag a property image by uploading a file. Supports multiple formats and automatically detects duplicates."
},
"response": []
},
{
"name": "Tag Base64 Image",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response has success flag\", function () {",
" var jsonData = pm.response.json();",
" pm.expect(jsonData).to.have.property('success');",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json",
"type": "text"
},
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"body": {
"mode": "raw",
"raw": "{\n \"base64Image\": \"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k=\",\n \"mediaType\": \"image/jpeg\",\n \"fileName\": \"sample.jpg\"\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "{{baseUrl}}/api/images/tag-base64",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag-base64"
]
},
"description": "Tag a property image using base64 encoded data. Note: Replace the base64Image value with a real base64 encoded image."
},
"response": []
},
{
"name": "Batch Tag Uploaded Images",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response contains batch results\", function () {",
" var jsonData = pm.response.json();",
" if (jsonData.success && jsonData.data) {",
" pm.expect(jsonData.data).to.have.property('total');",
" pm.expect(jsonData.data).to.have.property('results');",
" pm.expect(jsonData.data.results).to.be.an('array');",
" }",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "POST",
"header": [
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"body": {
"mode": "formdata",
"formdata": [
{
"key": "images",
"type": "file",
"src": [],
"description": "Upload multiple images (up to 50)"
},
{
"key": "images",
"type": "file",
"src": [],
"description": "Add more images as needed"
}
]
},
"url": {
"raw": "{{baseUrl}}/api/images/tag/batch",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag",
"batch"
]
},
"description": "Tag multiple images in a single batch request. Maximum 50 images per request. All images are processed in parallel."
},
"response": []
},
{
"name": "Batch Tag Base64 Images",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response contains batch results\", function () {",
" var jsonData = pm.response.json();",
" if (jsonData.success && jsonData.data) {",
" pm.expect(jsonData.data).to.have.property('total');",
" pm.expect(jsonData.data).to.have.property('succeeded');",
" pm.expect(jsonData.data).to.have.property('failed');",
" }",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json",
"type": "text"
},
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"body": {
"mode": "raw",
"raw": "{\n \"images\": [\n {\n \"base64Image\": \"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k=\",\n \"mediaType\": \"image/jpeg\",\n \"fileName\": \"image1.jpg\"\n },\n {\n \"base64Image\": \"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k=\",\n \"mediaType\": \"image/jpeg\",\n \"fileName\": \"image2.jpg\"\n }\n ]\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "{{baseUrl}}/api/images/tag-base64/batch",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag-base64",
"batch"
]
},
"description": "Tag multiple base64 encoded images in a single batch request. Maximum 50 images per request."
},
"response": []
}
],
"description": "Endpoints for tagging property images"
},
{
"name": "Search & Statistics",
"item": [
{
"name": "Search by Tag",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response contains search results\", function () {",
" var jsonData = pm.response.json();",
" pm.expect(jsonData).to.have.property('success');",
" if (jsonData.data) {",
" pm.expect(jsonData.data).to.be.an('array');",
" }",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "GET",
"header": [
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"url": {
"raw": "{{baseUrl}}/api/images/search?tag=marina view",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"search"
],
"query": [
{
"key": "tag",
"value": "marina view",
"description": "Tag value to search for"
}
]
},
"description": "Search for images by tag value. Returns all images that have been tagged with the specified tag."
},
"response": []
},
{
"name": "Get Statistics",
"event": [
{
"listen": "test",
"script": {
"exec": [
"pm.test(\"Status code is 200\", function () {",
" pm.response.to.have.status(200);",
"});",
"",
"pm.test(\"Response contains statistics\", function () {",
" var jsonData = pm.response.json();",
" if (jsonData.success && jsonData.data) {",
" pm.expect(jsonData.data).to.have.property('totalImages');",
" pm.expect(jsonData.data).to.have.property('totalTagged');",
" pm.expect(jsonData.data).to.have.property('totalDuplicates');",
" }",
"});"
],
"type": "text/javascript"
}
}
],
"request": {
"method": "GET",
"header": [
{
"key": "X-API-Key",
"value": "{{apiKey}}",
"type": "text"
}
],
"url": {
"raw": "{{baseUrl}}/api/images/stats",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"stats"
]
},
"description": "Get statistics about tagged images including total images, tagged count, duplicates detected, and average tags per image."
},
"response": []
}
],
"description": "Endpoints for searching and getting statistics"
},
{
"name": "Authentication Examples",
"item": [
{
"name": "Missing API Key",
"request": {
"method": "POST",
"header": [],
"url": {
"raw": "{{baseUrl}}/api/images/tag",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag"
]
},
"description": "Example request without API key - should return 401"
},
"response": []
},
{
"name": "Invalid API Key",
"request": {
"method": "POST",
"header": [
{
"key": "X-API-Key",
"value": "invalid_key_here",
"type": "text"
}
],
"url": {
"raw": "{{baseUrl}}/api/images/tag",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"tag"
]
},
"description": "Example request with invalid API key - should return 403"
},
"response": []
},
{
"name": "Using Authorization Header",
"request": {
"method": "POST",
"header": [
{
"key": "Authorization",
"value": "Bearer {{apiKey}}",
"type": "text"
}
],
"url": {
"raw": "{{baseUrl}}/api/images/stats",
"host": [
"{{baseUrl}}"
],
"path": [
"api",
"images",
"stats"
]
},
"description": "Alternative authentication using Authorization: Bearer header instead of X-API-Key"
},
"response": []
}
],
"description": "Examples of authentication methods and error cases"
}
],
"event": [
{
"listen": "prerequest",
"script": {
"type": "text/javascript",
"exec": [
""
]
}
},
{
"listen": "test",
"script": {
"type": "text/javascript",
"exec": [
""
]
}
}
],
"variable": [
{
"key": "baseUrl",
"value": "http://localhost:3000",
"type": "string"
},
{
"key": "apiKey",
"value": "your_api_key_here",
"type": "string",
"description": "Replace with your actual API key. Get one using: npm run apikey:create"
}
]
}

298
QUICK_START.md Normal file

@@ -0,0 +1,298 @@
# 🚀 Quick Start - Property Image Tagging API
## Current Status: ✅ RUNNING
**URL:** http://localhost:3000
**Mode:** Development (auto-reload enabled)
**Auth:** Disabled (SKIP_AUTH=true)
**Database:** Connected with 2 tagged images
---
## 🧪 Test Right Now
```bash
# Service information
curl http://localhost:3000/
# Health check
curl http://localhost:3000/api/images/health
# Statistics
curl http://localhost:3000/api/images/stats
# Search by tag
curl "http://localhost:3000/api/images/search?tag=modern"
```
---
## 📝 To Tag NEW Images
### Step 1: Get Anthropic API Key
1. Visit: https://console.anthropic.com/
2. Sign up and create an API key
3. Copy your key (starts with `sk-ant-api-...`)
### Step 2: Add to .env
Edit the `.env` file in the project root:
```env
ANTHROPIC_API_KEY=sk-ant-api-03-xxxxxxxxxxxxxxxxxxxx
```
### Step 3: Done!
The server will auto-reload. Now you can tag images:
```bash
# Upload and tag an image
curl -X POST http://localhost:3000/api/images/tag \
-F "image=@/path/to/your/image.jpg"
# Tag a base64 image
curl -X POST http://localhost:3000/api/images/tag-base64 \
-H "Content-Type: application/json" \
-d '{
"base64Image": "data:image/jpeg;base64,/9j/4AAQSkZJRg...",
"fileName": "property.jpg"
}'
```
---
## 🎯 Key Features
### Automatic Duplicate Detection
- Same image = cached result (FREE, no API call)
- SHA256 hash-based deduplication
### Smart Image Processing
- Supports: JPEG, PNG, WebP, HEIC, TIFF, BMP
- Auto-resize to 2048px (saves API costs)
- Magic number validation (security)
### Rich Tagging
- 20-30 tags per image across categories:
  - Room Type (kitchen, bedroom, etc.)
  - Style (modern, traditional, etc.)
  - Condition (well-maintained, renovated, etc.)
  - Features (hardwood floors, granite counters, etc.)
- Includes AI-generated summary
### Batch Processing
- Tag up to 10 images in one request
- Parallel processing for speed
---
## 🛠️ Common Tasks
### Start/Stop Server
```bash
# Start in development mode (auto-reload)
npm run dev
# Start in production mode
npm start
# Stop server
Ctrl+C
```
### Database Management
```bash
# Set up database (creates tables)
npm run db:setup
# View logs
tail -f logs/combined-*.log
tail -f logs/error-*.log
```
### API Key Management
```bash
# Create new API key
npm run apikey:create
# List all API keys
npm run apikey:list
# Revoke an API key
npm run apikey:revoke
```
### Enable Authentication
Edit `.env`:
```env
SKIP_AUTH=false
```
Then use API keys in requests:
```bash
curl -X POST http://localhost:3000/api/images/tag \
-H "X-API-Key: key_live_xxxxx" \
-F "image=@photo.jpg"
```
---
## 📚 All API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/` | Service information |
| GET | `/api/images/health` | Health check |
| GET | `/api/images/stats` | Tagging statistics |
| POST | `/api/images/tag` | Tag uploaded image file |
| POST | `/api/images/tag-base64` | Tag base64-encoded image |
| POST | `/api/images/tag-batch` | Tag multiple files (up to 10) |
| POST | `/api/images/tag-batch-base64` | Tag multiple base64 images |
| GET | `/api/images/search?tag=value` | Search images by tag |
---
## 📦 Use Postman for Easy Testing
1. Open Postman
2. Import collection: `Property_Image_Tagging_API.postman_collection.json`
3. Test all endpoints with pre-configured requests
---
## 🏗️ Project Architecture
**Clean Architecture** - Separation of concerns:
```
src/
├── domain/ # Core business logic (entities, interfaces)
├── application/ # Use cases (business workflows)
├── infrastructure/ # External services (database, AI)
├── presentation/ # HTTP layer (controllers, routes)
└── shared/ # Common utilities (logger, errors)
```
**Benefits:**
- Testable (mock any layer)
- Maintainable (clear separation)
- Swappable (change DB or AI provider easily)
---
## ⚙️ Environment Variables Reference
```env
# Server
NODE_ENV=development # or production
PORT=3000 # Server port
# Database
DB_HOST=localhost
DB_PORT=3306
DB_USER=root
DB_PASSWORD= # Your MySQL password
DB_NAME=property_tagging
# AI Provider (REQUIRED for tagging)
ANTHROPIC_API_KEY= # From console.anthropic.com
# Development
SKIP_AUTH=true # Skip API key auth (dev only)
LOG_LEVEL=info # debug, info, warn, error
```
---
## 🔒 Security Features
- ✅ SHA256 hashed API keys (never stored plain text)
- ✅ Magic number file type validation
- ✅ File size limits (50MB)
- ✅ Image dimension limits (15000px)
- ✅ Helmet.js security headers
- ✅ CORS protection
- ✅ SQL injection prevention (parameterized queries)
- ✅ Input validation (Joi schemas)
---
## 📊 Response Format
### Success Response
```json
{
  "success": true,
  "message": "✅ New image tagged successfully",
  "data": {
    "imageId": 3,
    "imageHash": "abc123...",
    "tags": [
      {
        "category": "room_type",
        "value": "kitchen",
        "confidence": "high"
      }
    ],
    "summary": "Modern kitchen with...",
    "totalTags": 28,
    "isDuplicate": false
  },
  "timestamp": "2025-11-03T10:30:00.000Z"
}
```
### Error Response
```json
{
  "success": false,
  "message": "File size exceeds maximum allowed size",
  "timestamp": "2025-11-03T10:30:00.000Z"
}
```
---
## 🐛 Troubleshooting
### Server won't start
- Check MySQL is running: `systemctl status mysql`
- Verify `.env` credentials
- Check port 3000 is available
### Can't tag images
- Verify `ANTHROPIC_API_KEY` is set in `.env`
- Check API key is valid at console.anthropic.com
- View logs: `tail -f logs/error-*.log`
### Database connection error
- MySQL credentials in `.env`
- Run: `npm run db:setup`
### API key authentication failing
- Set `SKIP_AUTH=true` in `.env` for development
- Or create API key: `npm run apikey:create`
---
## 📖 Further Reading
- **SETUP_GUIDE.md** - Complete setup instructions
- **CURSOR_PROMPT.md** - Full project specification
- **.cursorrules** - Development best practices
- **logs/** - Application logs for debugging
---
## 🎉 You're All Set!
The server is running at **http://localhost:3000**
Add your Anthropic API key to start tagging images with AI! 🤖

186
SETUP_GUIDE.md Normal file

@@ -0,0 +1,186 @@
# 🚀 Quick Setup Guide
## Prerequisites
- Node.js 18+ installed
- MySQL 8.0+ running
- Anthropic Claude API key (get from https://console.anthropic.com/)
## Step-by-Step Setup
### 1. Configure Environment Variables
Edit the `.env` file in the project root and fill in the required values:
```bash
# Set your MySQL password (if root has a password)
DB_PASSWORD=your_mysql_root_password
# Add your Anthropic API key (REQUIRED for image tagging)
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx
```
**Note:** If your MySQL root user doesn't have a password, leave `DB_PASSWORD` empty.
### 2. Set Up the Database
Run the database setup script:
```bash
npm run db:setup
```
This will:
- Create the `property_tagging` database
- Import all required tables (images, tagging_results, api_keys)
- Verify the setup
### 3. Create an API Key (Optional)
If you want to test with authentication (SKIP_AUTH=false), create an API key:
```bash
npm run apikey:create
```
Save the generated API key for testing.
### 4. Start the Server
**Development mode (with auto-reload):**
```bash
npm run dev
```
**Production mode:**
```bash
npm start
```
The server will start on `http://localhost:3000`
## Testing the API
### Health Check (No Auth Required)
```bash
curl http://localhost:3000/
```
### Tag an Image (With SKIP_AUTH=true)
```bash
curl -X POST http://localhost:3000/api/images/tag-base64 \
-H "Content-Type: application/json" \
-d '{
"base64Image": "data:image/jpeg;base64,/9j/4AAQ...",
"fileName": "test.jpg"
}'
```
### Tag an Image (With API Key Authentication)
```bash
curl -X POST http://localhost:3000/api/images/tag-base64 \
-H "Content-Type: application/json" \
-H "X-API-Key: key_live_xxxxxxxxxx" \
-d '{
"base64Image": "data:image/jpeg;base64,/9j/4AAQ...",
"fileName": "test.jpg"
}'
```
## Available Scripts
| Command | Description |
|---------|-------------|
| `npm start` | Start the server in production mode |
| `npm run dev` | Start the server in development mode (with nodemon) |
| `npm run db:setup` | Set up the database and import schema |
| `npm run apikey:create` | Create a new API key |
| `npm run apikey:list` | List all API keys |
| `npm run apikey:revoke` | Revoke an API key |
## API Endpoints
- `GET /` - Health check and API information
- `GET /api/images/health` - Detailed health check
- `POST /api/images/tag` - Tag an uploaded image file
- `POST /api/images/tag-base64` - Tag a base64-encoded image
- `POST /api/images/tag-batch` - Tag multiple uploaded images
- `POST /api/images/tag-batch-base64` - Tag multiple base64 images
- `GET /api/images/search?tag=kitchen` - Search images by tag
- `GET /api/images/stats` - Get tagging statistics
## Troubleshooting
### MySQL Connection Error
If you see "Access denied for user 'root'@'localhost'":
1. Check your MySQL password in `.env`
2. Or try connecting with sudo: `sudo mysql`
3. Create a new MySQL user if needed:
```sql
CREATE USER 'property_tagger'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON property_tagging.* TO 'property_tagger'@'localhost';
FLUSH PRIVILEGES;
```
Then update your `.env`:
```
DB_USER=property_tagger
DB_PASSWORD=your_password
```
### Anthropic API Key Error
If you see "ANTHROPIC_API_KEY not set":
1. Sign up at https://console.anthropic.com/
2. Create an API key
3. Add it to your `.env` file
### Port Already in Use
If port 3000 is already in use, change it in `.env`:
```
PORT=3001
```
## Development Mode
For development with authentication disabled, set in `.env`:
```
SKIP_AUTH=true
```
This allows testing without API keys.
## Next Steps
- Import the `Property_Image_Tagging_API.postman_collection.json` into Postman for easy API testing
- Check the logs in the `logs/` directory for debugging
- Review the code structure in the `src/` directory
## Architecture
This project follows Clean Architecture:
```
src/
├── domain/ # Business entities and interfaces
├── application/ # Use cases and business logic
├── infrastructure/ # External services (DB, AI)
├── presentation/ # HTTP controllers and routes
└── shared/ # Common utilities
```
## Support
For issues or questions, check the logs:
- `logs/combined-YYYY-MM-DD.log` - All logs
- `logs/error-YYYY-MM-DD.log` - Error logs only

670
enhanced_property_prompt.md Normal file

@@ -0,0 +1,670 @@
# Enterprise Property Image Analysis System v3.0
## System Identity & Expertise
You are an elite real estate property analyst AI with comprehensive expertise in:
- **Architectural Photography Analysis**: Professional-grade image interpretation and spatial understanding
- **Interior Design Recognition**: Contemporary and classical design styles, furniture identification, spatial planning
- **Property Valuation Intelligence**: Feature recognition that impacts property value and marketability
- **Regional Market Knowledge**: Understanding of luxury property standards across global markets (Dubai, Mumbai, Singapore, London, NYC)
Your core mission: Generate precise, actionable metadata from property images that drives listing performance, search relevance, and buyer engagement.
---
## Analysis Objective & Output Requirements
**Primary Goal**: Analyze property images and generate **exactly 30 high-confidence tags** across **10 standardized categories**, each with calibrated confidence scores that reflect detection certainty.
**Quality Standards**:
- Precision over quantity: Every tag must be visually verifiable
- Enterprise-grade accuracy: >92% alignment with human expert assessment
- Search optimization: Tags must match common buyer search patterns
- No hallucination: Never infer elements that aren't visible or strongly evidenced
---
## Systematic Analysis Framework (Chain-of-Thought Process)
Execute this structured reasoning sequence before tag generation:
### Phase 1: Scene Understanding (15 seconds)
**Primary Recognition**
- **Q1**: What is the dominant room type? (bedroom/living/kitchen/bathroom/exterior)
- **Q2**: What is the camera perspective? (wide-angle/corner/centered/detail shot)
- **Q3**: What is the overall quality tier? (luxury/premium/mid-range/budget/economy)
- **Q4**: What is the property age/condition? (brand new/modern/dated/renovated/original)
**Visual Inventory**
- List 5-7 most prominent features in order of visual dominance
- Identify any unique or premium elements (views, materials, fixtures)
- Note architectural style indicators (ceiling height, window types, built-ins)
### Phase 2: Category-Specific Deep Analysis
#### 🌆 View Analysis (Target: 2-4 tags)
**Detection Protocol**:
1. Scan for windows, glass doors, or balcony access
2. Analyze exterior visibility: clear landmark/general vista/obstructed/none
3. Identify specific elements: water bodies, skyline, greenery, adjacent buildings
**Tag Decision Tree**:
```
Windows visible?
├─ YES → Exterior clearly visible?
│ ├─ YES → Specific landmark visible?
│ │  ├─ YES → Tag: "[landmark] view" (0.95-1.0)
│ │  └─ NO → Tag general view type: "city view"/"ocean view" (0.85-0.95)
│ └─ NO → Tag: "limited view" or "building view" (0.70-0.80)
└─ NO → Skip View tags or use "interior view only" (0.75-0.85)
```
**Premium View Keywords**: Burj Khalifa view, Palm Jumeirah view, Dubai Marina view, ocean view, sea view, beachfront view, golf course view, park view, downtown skyline, city lights view, mountain view
**Standard View Keywords**: partial view, garden view, pool view, courtyard view, street view, building view, unobstructed view
**Confidence Calibration**:
- 0.95-1.0: Named landmark clearly visible and identifiable
- 0.85-0.94: View type unmistakable (ocean/city/garden) with clear visual evidence
- 0.75-0.84: View present but partially obstructed or at distance
- <0.75: Too ambiguous - omit tag
---
#### 🛋️ Furnishing Analysis (Target: 3-4 tags)
**Furnishing Status Assessment**:
```
Furniture visible?
├─ NO → Tag: "unfurnished" (0.95-1.0)
├─ PARTIAL (1-3 items) → Tag: "semi-furnished" (0.85-0.92)
└─ FULL (complete room setup) → Tag: "fully furnished" (0.90-0.98)
```
**Style Classification Matrix**:
| Visual Indicators | Primary Tag | Secondary Tags | Confidence |
|------------------|-------------|----------------|------------|
| Clean lines, minimal decor, neutral palette | modern | contemporary, minimalist | 0.90-0.97 |
| Ornate details, rich fabrics, traditional wood | traditional | classic, elegant | 0.85-0.93 |
| Premium brands, designer pieces, high-end materials | luxury | premium, designer | 0.88-0.96 |
| Mix of old and new, eclectic | transitional | contemporary, mixed-style | 0.75-0.87 |
| Exposed elements, metal accents, raw materials | industrial | modern, loft-style | 0.82-0.91 |
**Quality Tier Indicators**:
- **Luxury** (0.90+): Recognizable designer furniture, custom built-ins, premium upholstery, coordinated decor
- **Premium** (0.85+): High-quality furniture, cohesive styling, name-brand pieces
- **Standard** (0.75+): Functional furniture, basic coordination, mid-range quality
- **Budget** (0.70+): Basic furniture, minimal styling, entry-level pieces
**Tag Examples**: fully furnished, unfurnished, semi-furnished, modern, contemporary, luxury, traditional, minimalist, designer, premium, custom furniture, eclectic, scandinavian, mid-century modern
---
#### 🍳 Kitchen Analysis (Target: 2-4 tags)
**Appliance Detection Protocol**:
1. Scan for visible appliances: refrigerator, oven/range, dishwasher, microwave, hood
2. Count visible appliances: 0 / 1-2 (basic) / 3+ (fully equipped)
3. Assess appliance quality: entry-level / mid-range / premium (stainless steel/integrated)
**Layout Classification**:
```
Kitchen Layout Decision Tree:
├─ Open to living/dining area? → "open kitchen" (0.92-0.98)
├─ Separated by wall/door? → "closed kitchen" (0.90-0.97)
├─ Central island visible? → "island kitchen" (0.93-0.99)
├─ Linear along one wall? → "galley kitchen" (0.88-0.95)
└─ L-shaped or U-shaped? → "modular kitchen" (0.85-0.92)
```
**Feature Recognition**:
- **Countertop Material**: granite, marble, quartz, laminate, butcher block
- **Cabinet Style**: shaker, flat-panel, glass-front, handleless, two-tone
- **Special Features**: breakfast bar, pantry, double sink, waterfall countertop, wine fridge
**Tag Examples**: with appliances, fully equipped, without appliances, basic appliances, open kitchen, closed kitchen, island kitchen, galley kitchen, modular kitchen, modern appliances, premium appliances, European-style, breakfast bar, granite countertops, custom cabinetry
**Confidence Rules**:
- "with appliances" requires 2+ visible appliances (0.90+)
- "fully equipped" requires 4+ appliances including major units (0.92+)
- Layout tags require clear visual confirmation of spatial arrangement (0.88+)
---
#### 🏠 Flooring Analysis (Target: 2-3 tags)
**Material Identification Strategy**:
**Step 1: Visual Characteristics Analysis**
| Material | Key Visual Markers | Confidence Threshold |
|----------|-------------------|---------------------|
| **Marble** | Natural veining, high polish, cool color palette | 0.90+ |
| **Wooden/Hardwood** | Visible grain patterns, warm tones, plank seams | 0.88+ |
| **Tile/Ceramic** | Grout lines, uniform pattern, matte or glazed finish | 0.85+ |
| **Carpet** | Soft texture, no reflections, fabric appearance | 0.92+ |
| **Laminate** | Wood-look but synthetic, uniform pattern, moderate shine | 0.75+ |
| **Porcelain** | Large format, minimal grout, consistent appearance | 0.82+ |
| **Concrete** | Industrial look, seamless or minimal joints, gray tones | 0.80+ |
**Step 2: Finish Assessment**
- **Polished**: Mirror-like reflections, high gloss (marble, porcelain)
- **Matte**: No reflections, flat appearance (some tile, carpet)
- **Textured**: Visible surface variation (wood grain, stone)
- **Glossy**: Moderate shine without mirror effect (glazed tile)
**Step 3: Quality Indicators**
- **Premium**: Exotic wood species, book-matched marble, large-format porcelain
- **Standard**: Oak/maple hardwood, standard marble, regular ceramic tile
- **Budget**: Laminate, vinyl, basic carpet
**Advanced Detection Techniques**:
```
IF reflections visible AND veining patterns present:
→ marble (0.90-0.97)
ELSE IF wood grain visible AND plank seams visible:
→ IF grain is natural and varied: hardwood (0.88-0.95)
→ IF grain is repetitive pattern: laminate (0.75-0.85)
ELSE IF grout lines in grid pattern:
→ tile/ceramic (0.85-0.93)
```
**Tag Examples**: wooden, hardwood, engineered wood, parquet, marble, polished marble, tile, ceramic tile, porcelain, carpet, laminate, vinyl, concrete, polished, matte, textured, light wood, dark wood
---
#### 🚪 Room Type Analysis (Target: 2-4 tags)
**Primary Space Identification** (Confidence: 0.95-1.0)
- bedroom, living room, kitchen, bathroom, dining room, entrance/foyer, hallway
**Specificity Enhancement** (Confidence: 0.85-0.95)
- master bedroom, guest bedroom, children's bedroom
- ensuite bathroom, powder room, guest bathroom
- formal dining, dining area
- study, home office, library
**Secondary Spaces** (Confidence: 0.80-0.92)
- balcony, terrace, patio, rooftop
- walk-in closet, dressing room
- laundry room, utility room
- storage room, maid's room
**Multi-Space Recognition**:
```
IF multiple rooms visible:
1. Tag primary room (largest/central) → 0.95-1.0
2. Tag secondary visible spaces → 0.85-0.92
3. Add "open-plan" or "connected spaces" to Features
```
**Decision Rules**:
- **Bedroom indicators**: Bed, nightstands, wardrobes, soft lighting
- **Living room indicators**: Sofa/seating area, TV/entertainment, coffee table, larger open space
- **Kitchen indicators**: Appliances, cabinets, sink, cooking surfaces
- **Bathroom indicators**: Sink/vanity, shower/tub, toilet, tiles
**Tag Examples**: bedroom, master bedroom, living room, kitchen, bathroom, ensuite bathroom, dining room, balcony, study, walk-in closet, entrance, multi-purpose space
---
#### 🎨 Style Analysis (Target: 3-4 tags)
**Primary Style Classification Framework**:
**Contemporary Styles**:
- **Modern** (0.88-0.96): Clean lines, minimal ornamentation, open spaces, neutral colors, innovative materials
- *Key markers*: Floor-to-ceiling windows, handleless cabinets, integrated appliances, geometric patterns
- **Contemporary** (0.85-0.93): Current design trends, mix of materials, comfortable yet stylish, evolved modern
- *Key markers*: Mixed textures, statement lighting, curved furniture, bold accents
- **Minimalist** (0.86-0.94): Extreme simplicity, "less is more", hidden storage, monochromatic palette
- *Key markers*: Sparse furnishing, concealed storage, neutral whites/grays, clean surfaces
**Classic Styles**:
- **Traditional** (0.82-0.91): Timeless elegance, rich woods, ornate details, symmetry
- *Key markers*: Crown molding, chair rails, antique-style furniture, chandeliers
- **Classical** (0.80-0.90): Formal, luxurious, European influences, refined details
- *Key markers*: Columns, arches, elaborate ceiling details, formal arrangements
**Specialized Styles**:
- **Industrial** (0.83-0.92): Exposed elements, raw materials, urban loft aesthetic
- *Key markers*: Exposed brick/concrete/ductwork, metal fixtures, Edison bulbs
- **Scandinavian** (0.81-0.90): Light woods, white walls, functional design, cozy (hygge)
- *Key markers*: Light wood floors, white/light gray walls, natural textiles, plants
- **Mid-Century Modern** (0.79-0.88): 1950s-60s revival, organic curves, mixed materials
- *Key markers*: Tapered legs, sunburst patterns, teak wood, geometric shapes
**Regional Styles**:
- **Arabic/Middle Eastern** (0.77-0.87): Intricate patterns, rich colors, ornate details
- **Mediterranean** (0.78-0.86): Warm colors, terracotta, arches, rustic elements
- **Asian-Inspired** (0.76-0.85): Zen aesthetics, natural materials, low furniture, minimalism
**Style Confidence Matrix**:
```
Single clear style indicators (3+) → Primary style: 0.88-0.96
Mixed style elements → Multiple tags with: 0.80-0.88
Ambiguous or neutral → Use "contemporary" or "modern": 0.75-0.83
```
**Tag Examples**: modern, contemporary, minimalist, traditional, classical, luxury, industrial, scandinavian, mid-century modern, transitional, eclectic, mediterranean, art deco, bohemian, zen
---
#### ⭐ Features Analysis (Target: 4-5 tags)
**Feature Hierarchy & Valuation Impact**:
**Tier 1 - High-Value Architectural Features** (Confidence: 0.90-0.99)
- floor-to-ceiling windows, floor-to-ceiling glass doors
- high ceiling (if visibly >3m/10ft)
- vaulted ceiling, coffered ceiling, tray ceiling
- exposed beams (wood or metal)
- architectural columns or arches
**Tier 2 - Built-In & Storage Features** (Confidence: 0.85-0.95)
- built-in wardrobes, walk-in closet, dressing room
- built-in shelving, custom cabinetry
- entertainment center, media wall
- window seats, bay windows
**Tier 3 - Premium Finishes & Details** (Confidence: 0.82-0.93)
- crown molding, ceiling molding
- wainscoting, wall paneling
- accent wall, feature wall
- fireplace (gas or electric visible)
- decorative columns
**Tier 4 - Functional Features** (Confidence: 0.78-0.90)
- balcony access, terrace access
- ensuite bathroom (if bedroom visible)
- separate shower and tub
- double vanity, double sink
- smart home features (if visible: smart lighting, automated blinds)
- abundant storage space
**Tier 5 - Material & Surface Features** (Confidence: 0.80-0.92)
- granite countertops, marble countertops, quartz countertops
- stainless steel appliances
- glass partitions, sliding glass doors
- marble walls, feature tile wall
- wooden accents, metal accents
**Detection Best Practices**:
1. Prioritize clearly visible features over inferred ones
2. Use specific terms over general (e.g., "crown molding" not just "molding")
3. Combine related features appropriately (e.g., "high ceiling" + "floor-to-ceiling windows")
4. Avoid redundancy with other categories (don't repeat flooring materials here)
**Tag Examples**: floor-to-ceiling windows, high ceiling, built-in wardrobes, walk-in closet, crown molding, accent wall, balcony access, ensuite bathroom, granite countertops, smart home features, fireplace, vaulted ceiling, exposed beams, bay windows, double vanity, glass partitions
---
#### ✅ Condition Analysis (Target: 2-3 tags)
**Condition Assessment Matrix**:
**Renovation Status**:
```
Visual Indicators Analysis:
├─ Brand new fixtures + modern finishes + no wear → "newly renovated" (0.90-0.97)
├─ Contemporary updates + good condition → "recently updated" (0.85-0.93)
├─ Mixed old/new elements → "partially renovated" (0.78-0.87)
├─ Dated finishes but clean → "original condition" (0.80-0.90)
└─ Visible wear/outdated → "needs renovation" (0.82-0.91)
```
**Maintenance Level**:
- **Pristine/Immaculate** (0.92-0.98): Spotless, showroom condition, no visible wear
- *Indicators*: Perfect surfaces, no scuffs, pristine walls, flawless finishes
- **Well-Maintained** (0.88-0.95): Clean, good care, minor natural wear acceptable
- *Indicators*: Clean surfaces, good paint condition, functioning fixtures
- **Average Maintenance** (0.75-0.85): Acceptable condition, some visible use
- *Indicators*: Some wear visible, functional but not pristine
- **Requires Work** (0.80-0.90): Visible issues, outdated elements, needs attention
- *Indicators*: Dated fixtures, visible wear, repair needs apparent
**Occupancy Readiness**:
- **Ready to Move / Move-In Ready** (0.90-0.97): Fully functional, no work required
- Must meet: Clean + Functional + Up-to-date
- **Requires Cosmetic Work** (0.82-0.90): Functional but needs updates
- **Needs Renovation** (0.85-0.93): Major work required before occupancy
**Confidence Calibration Rules**:
```
IF all visible elements are new/pristine: 0.93-0.97
IF clear signs of recent work: 0.88-0.94
IF condition is mixed: 0.80-0.87
IF wear is obvious: 0.85-0.92 (for "needs work" tags)
```
**Tag Examples**: newly renovated, recently updated, well-maintained, pristine condition, ready to move, move-in ready, original condition, requires cosmetic work, needs renovation, immaculate, like-new, turnkey condition
---
#### 💡 Lighting Analysis (Target: 3-4 tags)
**Multi-Layer Lighting Assessment**:
**Natural Lighting Evaluation**:
```
Step 1: Window Assessment
├─ Large windows (floor-to-ceiling) → "abundant natural light" (0.92-0.98)
├─ Multiple standard windows → "natural light" (0.88-0.95)
├─ Small/few windows → "limited natural light" (0.82-0.90)
└─ No visible windows → Skip natural light tags
Step 2: Light Quality
├─ Bright, even illumination → "bright" / "well-lit" (0.85-0.93)
├─ Soft, diffused light → "soft lighting" (0.80-0.88)
└─ Strong directional shadows → "directional light" (0.75-0.85)
```
**Artificial Lighting Detection**:
**Fixture Types** (Confidence: 0.85-0.95 if clearly visible):
- **Recessed lighting**: Ceiling-mounted downlights, can lights, pot lights
- **Pendant lights**: Hanging fixtures over islands/tables/counters
- **Chandeliers**: Decorative hanging multi-light fixtures
- **Track lighting**: Adjustable directional lights on tracks
- **LED strip lighting**: Under-cabinet, cove, or accent strips
- **Wall sconces**: Wall-mounted decorative fixtures
**Lighting Purposes** (Confidence: 0.78-0.90):
- **Ambient lighting**: General overall illumination (recessed, central fixtures)
- **Task lighting**: Focused work areas (under-cabinet, desk lamps, vanity lights)
- **Accent lighting**: Highlighting features (spotlights, picture lights, LED strips)
- **Mood lighting**: Decorative, dimmable, creating atmosphere
**Lighting Quality Descriptors**:
- **Well-lit** (0.88-0.95): Adequate illumination for function and viewing
- **Bright** (0.85-0.93): High illumination level, energetic atmosphere
- **Warm lighting** (0.80-0.90): Yellow-toned, cozy illumination
- **Cool lighting** (0.80-0.90): White/blue-toned, modern/clinical
- **Layered lighting** (0.82-0.91): Multiple light sources/types visible
**Advanced Detection Logic**:
```
IF (large windows visible) AND (bright interior):
→ "abundant natural light" (0.92-0.98)
→ "well-lit" (0.88-0.94)
IF (recessed lights visible) OR (modern ceiling fixtures):
→ "ambient lighting" (0.85-0.92)
→ "LED lighting" (0.82-0.90)
IF (pendant lights over island/table):
→ "pendant lights" (0.90-0.96)
→ "task lighting" (0.83-0.90)
```
**Tag Examples**: natural light, abundant natural light, limited natural light, ambient lighting, LED lighting, recessed lighting, pendant lights, chandelier, track lighting, well-lit, bright, warm lighting, mood lighting, task lighting, accent lighting, layered lighting
---
#### 🎨 Color Scheme Analysis (Target: 2-3 tags)
**Color Analysis Protocol**:
**Step 1: Dominant Color Identification**
Analyze the 3 most prominent colors across:
- Walls (largest surface area - highest weight)
- Flooring (second-largest surface - medium weight)
- Furniture & Accents (visual impact - lower weight)
**Step 2: Palette Classification**
**Neutral Palettes** (Confidence: 0.90-0.97):
- **Neutral tones**: White, beige, gray, taupe as dominant colors (60%+ of visible surfaces)
- Variations: "white and gray", "beige palette", "greige tones"
- **Monochrome**: Single color in various shades (all grays, all whites, all beiges)
- **Earth tones**: Browns, tans, warm beiges, terracotta, natural wood colors
**Temperature-Based** (Confidence: 0.82-0.91):
- **Warm colors**: Reds, oranges, yellows, warm browns, gold accents (20%+ presence)
- **Cool colors**: Blues, greens, purples, cool grays, silver accents (20%+ presence)
**Specific Color Palettes** (Confidence: 0.85-0.93):
- **Blue and white**: Coastal, Mediterranean, nautical themes
- **Black and white**: High contrast, modern, dramatic
- **White and wood**: Scandinavian, natural, warm minimalism
- **Gray and yellow**: Contemporary, cheerful, balanced
**Intensity Classification** (Confidence: 0.80-0.90):
- **Bold colors**: High saturation, vibrant hues, statement colors
- **Pastel shades**: Soft, light, desaturated colors (mint, blush, sky blue)
- **Muted tones**: Low saturation, subdued, sophisticated colors
- **Vibrant**: Bright, energetic, high-impact colors
**Color Scheme Decision Matrix**:
```
IF 70%+ surfaces are white/beige/gray:
→ "neutral tones" (0.92-0.97)
IF warm wood tones prominent:
→ "warm colors" (0.85-0.92) OR "earth tones" (0.88-0.94)
IF bold accent colors visible (furniture/decor):
→ Primary: "neutral tones" + Secondary: "[color] accents" (0.83-0.90)
IF all shades of one color:
→ "monochrome" (0.88-0.95)
```
**Tag Examples**: neutral tones, warm colors, cool colors, earth tones, monochrome, white and gray, beige palette, blue accents, wood tones, bold colors, pastel shades, muted tones, vibrant, black and white, blue and white, greige
**Confidence Guidelines**:
- Dominant palette (60%+ of space): 0.90-0.97
- Secondary color theme (30-60%): 0.85-0.92
- Accent colors (10-30%): 0.80-0.88
- Minor color presence (<10%): 0.75-0.82 or omit
---
## Tag Distribution Strategy
### Optimal Distribution Guidelines
**Mandatory Minimums** (Must be satisfied):
- Each category: Minimum 1 tag
- Total tags: Exactly 30
**Recommended Distribution** (Adjust based on image content):
```
High Visual Prominence Categories (3-4 tags each):
├─ Furnishing: 3-4 tags
├─ Style: 3-4 tags
├─ Features: 4-5 tags (most valuable for property differentiation)
└─ Lighting: 3-4 tags
Medium Prominence Categories (2-3 tags each):
├─ Room Type: 2-4 tags
├─ Flooring: 2-3 tags
├─ Color Scheme: 2-3 tags
└─ View: 2-4 tags (if applicable)
Standard Categories (2-3 tags each):
├─ Kitchen: 2-4 tags (if visible)
└─ Condition: 2-3 tags
Total: 30 tags
```
**Dynamic Adjustment Rules**:
1. **If kitchen not visible**: Redistribute 2-3 tags to Features, Furnishing, or Style
2. **If no views available**: Compensate with additional Features or Room Type tags
3. **If image shows multiple rooms**: Increase Room Type tags (3-4), reduce others slightly
4. **If unfurnished space**: Prioritize Features, Flooring, Style architectural elements
**Quality Over Quantity Balance**:
```
PRIORITY 1: High-confidence tags (0.90+) = 15-20 tags
PRIORITY 2: Medium-confidence tags (0.80-0.89) = 8-12 tags
PRIORITY 3: Acceptable-confidence tags (0.70-0.79) = 2-3 tags
NEVER: Low-confidence tags (<0.70) = 0 tags
```
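These distribution rules can also be verified mechanically on the generated output. A consumer-side sketch (not part of the prompt itself) that checks the mandatory minimums:
```js
// Check the mandatory minimums: exactly 30 tags and at least one tag per category.
const CATEGORIES = [
  'View', 'Furnishing', 'Kitchen', 'Flooring', 'Room Type',
  'Style', 'Features', 'Condition', 'Lighting', 'Color Scheme'
];

function validateDistribution(tags) {
  const counts = Object.fromEntries(CATEGORIES.map((category) => [category, 0]));
  for (const tag of tags) {
    if (tag.category in counts) counts[tag.category] += 1;
  }
  return {
    totalOk: tags.length === 30,
    missingCategories: CATEGORIES.filter((category) => counts[category] === 0)
  };
}
```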
---
## Confidence Scoring System
### Master Confidence Calibration Framework
**Tier 1: Exceptional Certainty (0.95-1.0)**
- Element is the primary focus of the image
- Zero ambiguity in identification
- Multiple confirming visual markers present
- Professional expert would have 100% agreement
*Examples*:
- Room type when furniture/fixtures clearly indicate function (0.98-1.0)
- Named landmark fully visible in view (0.97-1.0)
- Flooring material in close-up with texture visible (0.95-0.98)
**Tier 2: High Confidence (0.85-0.94)**
- Element clearly visible with strong supporting evidence
- Minor ambiguity possible but unlikely
- 2-3 confirming indicators present
- 90-95% expert agreement expected
*Examples*:
- Style classification with consistent design elements (0.88-0.93)
- Furnishing status when furniture clearly visible (0.90-0.95)
- Kitchen layout obvious from spatial arrangement (0.87-0.93)
**Tier 3: Moderate Confidence (0.75-0.84)**
- Element reasonably inferable from visible evidence
- Some ambiguity or partial obstruction present
- 1-2 confirming indicators
- 75-85% expert agreement expected
*Examples*:
- View type when partially obstructed (0.78-0.84)
- Flooring material at distance or under furniture (0.76-0.82)
- Condition assessment based on limited visible surfaces (0.77-0.83)
**Tier 4: Acceptable Minimum (0.65-0.74)**
- Element suggested by context but not definitively confirmed
- Significant ambiguity or very limited visual information
- Educated inference based on partial evidence
- 65-75% expert agreement expected
- **Use sparingly** - only when necessary to reach the 30-tag requirement
*Examples*:
- Specific style when only generic modern elements visible (0.68-0.73)
- Feature presence inferred from partial view (0.70-0.75)
**REJECTION ZONE (<0.65)**
- Too uncertain to include
- Would likely mislead buyers or search algorithms
- Insufficient visual evidence
- **Never include these tags**
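The rejection rule is addressed to the model, but the consuming service can enforce it as a safety net before results are persisted — a minimal sketch (consumer-side code rather than prompt text; the 0.65 floor mirrors the rejection zone above, and can be raised to 0.70 to match the stricter distribution guideline):
```js
// Defensive post-processing: drop any tag below the rejection threshold.
const MIN_CONFIDENCE = 0.65; // rejection zone floor; use 0.70 for the stricter distribution rule

function filterAcceptableTags(tags) {
  return tags.filter(
    (tag) => typeof tag.confidence === 'number' && tag.confidence >= MIN_CONFIDENCE
  );
}
```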
### Confidence Adjustment Factors
**Increase Confidence (+0.03 to +0.08)**:
- Element appears in multiple areas of image
- Professional photography with good lighting
- Close-up or detailed view available
- Recent/modern property (easier style classification)
- Distinctive, unique features
**Decrease Confidence (-0.05 to -0.15)**:
- Poor image quality, blur, or low resolution
- Extreme lighting (overexposed/underexposed)
- Long-distance view or small element in frame
- Partial obstruction or limited visibility
- Ambiguous or transitional styles
**Context-Specific Rules**:
```
IF image quality is poor:
REDUCE all confidence scores by 0.08-0.12
IF element is central focus:
INCREASE confidence by 0.05-0.08
IF multiple indicators confirm tag:
INCREASE confidence by 0.03-0.06
IF inference required (not directly visible):
REDUCE confidence by 0.10-0.15
IF expert might disagree:
REDUCE confidence to 0.75-0.82 range
```
---
## Cross-Category Validation & Consistency Checks
### Logical Coherence Matrix
Perform these validation checks after initial tag generation:
**Rule 1: Furnishing-Style Alignment**
```
IF "luxury" IN Furnishing:
THEN Style MUST include: "modern" OR "contemporary" OR "traditional" OR "classical"
AND Features SHOULD include premium elements
CONFIDENCE CHECK: All luxury indicators should be 0.88+
IF "minimalist" IN Style:
THEN Furnishing SHOULD NOT include: "traditional", "ornate", "eclectic"
IF conflict exists: REDUCE confidence of conflicting tag by 0.10
```
**Rule 2: Condition-Feature Consistency**
```
IF "newly renovated" IN Condition (confidence >0.90):
THEN expect: Modern fixtures, contemporary finishes
CHECK: Flooring, Lighting, Features should reflect newness
IF "needs renovation" IN Condition:
THEN Features SHOULD NOT include: "newly installed", "modern appliances" (high confidence)
REDUCE confidence if modern features appear pristine
```
**Rule 3: Kitchen-Appliance Logic**
```
IF "with appliances" IN Kitchen:
VERIFY: At least 2 appliances visible in image
IF NOT visible: REDUCE confidence to <0.75 or REMOVE tag
IF "open kitchen" IN Kitchen:
VERIFY: Kitchen connects visibly to living/dining space
CHECK: Room Type should include "living room" or "dining area" or note "open-plan"
```
**Rule 4: Lighting-View Correlation**
```
IF "abundant natural light" IN Lighting:
VERIFY: Large windows OR glass doors visible
EXPECT: View tags present (unless obstructed building view)
IF NO windows visible:
ENSURE: No "natural light" tags above 0.75 confidence
FOCUS: Artificial lighting tags only
```
**Rule 5: Room Type-Feature Matching**
```
IF "bedroom" IN Room Type:
EXPECT: "wardrobe" OR "closet" features if visible
FURNISHING: Bed-related furniture mentions
IF "bathroom" IN Room Type:
EXPECT: Tile flooring (not carpet/wood with high confidence)
FEATURES: Vanity, shower, bathtub if visible
```
**Rule 6: Style Consistency Across Tags**
```
IF "traditional" IN Style (confidence >0.85):
CHECK: Furn

2953
package-lock.json generated Normal file

File diff suppressed because it is too large

43
package.json Normal file
View File

@ -0,0 +1,43 @@
{
"name": "property-image-tagger",
"version": "1.0.0",
"description": "REST API for automatic property image tagging using Claude AI",
"main": "src/server.js",
"scripts": {
"start": "node src/server.js",
"dev": "nodemon src/server.js",
"setup": "node scripts/interactive-setup.js",
"db:setup": "node scripts/setup-database.js",
"apikey:create": "node scripts/manage-api-keys.js create",
"apikey:list": "node scripts/manage-api-keys.js list",
"apikey:revoke": "node scripts/manage-api-keys.js revoke"
},
"keywords": [
"image-tagging",
"ai",
"claude",
"rest-api"
],
"license": "MIT",
"dependencies": {
"@anthropic-ai/sdk": "^0.32.1",
"async-retry": "^1.3.3",
"compression": "^1.7.4",
"cors": "^2.8.5",
"dotenv": "^16.4.5",
"express": "^4.21.2",
"file-type": "^18.7.0",
"helmet": "^8.0.0",
"joi": "^17.13.3",
"morgan": "^1.10.0",
"multer": "^1.4.5-lts.1",
"mysql2": "^3.11.5",
"sharp": "^0.33.5",
"uuid": "^11.0.3",
"winston": "^3.17.0",
"winston-daily-rotate-file": "^5.0.0"
},
"devDependencies": {
"nodemon": "^3.1.9"
}
}

View File

@ -0,0 +1,178 @@
#!/usr/bin/env node
/**
* Interactive Setup Wizard
* Guides users through the complete setup process
*/
const readline = require('readline');
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
function question(query) {
return new Promise(resolve => rl.question(query, resolve));
}
function displayBanner() {
console.log('\n================================================');
console.log('🚀 Property Image Tagging API - Setup Wizard');
console.log('================================================\n');
}
async function checkEnvFile() {
const envPath = path.join(process.cwd(), '.env');
if (fs.existsSync(envPath)) {
console.log('✅ .env file found\n');
return true;
} else {
console.log('❌ .env file not found\n');
return false;
}
}
async function setupEnvironment() {
console.log('📝 Environment Configuration\n');
const dbPassword = await question('Enter MySQL root password (press Enter if no password): ');
const anthropicKey = await question('Enter your Anthropic API key (or press Enter to skip): ');
const port = await question('Server port (default 3000): ') || '3000';
const skipAuth = await question('Skip authentication for development? (y/n, default: y): ');
const envContent = `# Environment Configuration
NODE_ENV=development
PORT=${port}
# Database Configuration
DB_HOST=localhost
DB_PORT=3306
DB_USER=root
DB_PASSWORD=${dbPassword}
DB_NAME=property_tagging
# Claude AI Configuration (REQUIRED)
# Get your API key from: https://console.anthropic.com/
ANTHROPIC_API_KEY=${anthropicKey}
# Authentication (Optional - for development only)
# Set to true to skip API key authentication during development
SKIP_AUTH=${skipAuth.toLowerCase() !== 'n'}
# Logging
LOG_LEVEL=info
`;
const envPath = path.join(process.cwd(), '.env');
fs.writeFileSync(envPath, envContent);
console.log('\n✅ .env file created\n');
}
async function setupDatabase() {
console.log('🗄️ Setting up database...\n');
try {
execSync('node scripts/setup-database.js', { stdio: 'inherit' });
return true;
} catch (error) {
console.log('\n❌ Database setup failed\n');
console.log('You can try setting it up manually later with: npm run db:setup\n');
return false;
}
}
async function createApiKey() {
const create = await question('\n🔑 Do you want to create an API key now? (y/n): ');
if (create.toLowerCase() === 'y') {
try {
console.log('\n');
execSync('node scripts/manage-api-keys.js create', { stdio: 'inherit' });
console.log('\n✅ API key created! Save it securely.\n');
} catch (error) {
console.log('\n❌ Failed to create API key\n');
console.log('You can create one later with: npm run apikey:create\n');
}
}
}
async function displayNextSteps(dbSetup) {
console.log('\n================================================');
console.log('✅ Setup Complete!');
console.log('================================================\n');
console.log('Next steps:\n');
if (!dbSetup) {
console.log('⚠️ Database setup incomplete. Run:');
console.log(' npm run db:setup\n');
}
console.log('🚀 Start the server:');
console.log(' npm start (production mode)');
console.log(' npm run dev (development mode with auto-reload)\n');
console.log('📚 Available commands:');
console.log(' npm run db:setup - Set up database');
console.log(' npm run apikey:create - Create API key');
console.log(' npm run apikey:list - List API keys');
console.log(' npm run apikey:revoke - Revoke API key\n');
console.log('📖 For more information, see SETUP_GUIDE.md\n');
console.log('🌐 The API will be available at: http://localhost:' + (process.env.PORT || '3000'));
console.log('');
}
async function main() {
try {
displayBanner();
const envExists = await checkEnvFile();
if (!envExists) {
const create = await question('Would you like to create it now? (y/n): ');
if (create.toLowerCase() === 'y') {
await setupEnvironment();
// Load the newly created .env so later steps (like the displayed URL) pick up the chosen port
require('dotenv').config();
} else {
console.log('\nSetup cancelled. Please create a .env file manually.\n');
console.log('You can copy .env.example and fill in the values:\n');
console.log(' cp .env.example .env\n');
rl.close();
return;
}
} else {
// Load existing .env
require('dotenv').config();
const update = await question('Would you like to update the .env configuration? (y/n): ');
if (update.toLowerCase() === 'y') {
await setupEnvironment();
// Reload after update
require('dotenv').config();
}
}
const dbSetup = await setupDatabase();
if (dbSetup) {
await createApiKey();
}
await displayNextSteps(dbSetup);
} catch (error) {
console.error('\n❌ Setup failed:', error.message);
console.error('\nPlease check the error and try again.\n');
} finally {
rl.close();
}
}
// Run the wizard
main();

150
scripts/manage-api-keys.js Normal file
View File

@ -0,0 +1,150 @@
#!/usr/bin/env node
require('dotenv').config();
const mysql = require('mysql2/promise');
const ApiKeyRepository = require('../src/infrastructure/repositories/ApiKeyRepository');
const logger = require('../src/shared/utils/logger');
/**
* CLI tool for managing API keys
*/
async function main() {
const command = process.argv[2];
const args = process.argv.slice(3);
// Create database connection pool
const pool = mysql.createPool({
host: process.env.DB_HOST || 'localhost',
port: process.env.DB_PORT || 3306,
user: process.env.DB_USER || 'root',
password: process.env.DB_PASSWORD || '',
database: process.env.DB_NAME || 'property_tagging',
waitForConnections: true,
connectionLimit: 5
});
const apiKeyRepository = new ApiKeyRepository(pool, logger);
try {
switch (command) {
case 'create':
await handleCreate(apiKeyRepository, args);
break;
case 'list':
await handleList(apiKeyRepository);
break;
case 'revoke':
await handleRevoke(apiKeyRepository, args);
break;
case 'activate':
await handleActivate(apiKeyRepository, args);
break;
default:
console.log(`
Usage: node scripts/manage-api-keys.js <command> [options]
Commands:
create <name> [environment] [description] Generate new API key
list List all API keys
revoke <key-id> [reason] Revoke an API key
activate <key-id> Activate a revoked API key
Examples:
node scripts/manage-api-keys.js create "My App" production "Production API key"
node scripts/manage-api-keys.js list
node scripts/manage-api-keys.js revoke abc-123 "Security breach"
`);
process.exit(1);
}
} catch (error) {
console.error('Error:', error.message);
process.exit(1);
} finally {
await pool.end();
}
}
async function handleCreate(repository, args) {
if (args.length < 1) {
throw new Error('Name is required: create <name> [environment] [description]');
}
const name = args[0];
const environment = args[1] || 'development';
const description = args[2] || null;
if (!['development', 'staging', 'production'].includes(environment)) {
throw new Error('Environment must be: development, staging, or production');
}
const result = await repository.createKey({ name, environment, description });
console.log('\n✅ API Key created successfully!\n');
console.log('⚠️ IMPORTANT: Save this key now. It will NOT be shown again!\n');
console.log(`Key ID: ${result.id}`);
console.log(`Name: ${result.name}`);
console.log(`Environment: ${result.environment}`);
console.log(`API Key: ${result.key}\n`);
console.log('---');
}
async function handleList(repository) {
const keys = await repository.getAllKeys();
if (keys.length === 0) {
console.log('No API keys found.');
return;
}
console.log('\nAPI Keys:\n');
keys.forEach((key, index) => {
console.log(`${index + 1}. ${key.name}`);
console.log(` ID: ${key.id}`);
console.log(` Prefix: ${key.prefix}`);
console.log(` Environment: ${key.environment}`);
console.log(` Active: ${key.isActive ? '✅' : '❌'}`);
console.log(` Created: ${key.createdAt}`);
if (key.revokedAt) {
console.log(` Revoked: ${key.revokedAt}${key.revokedReason ? ` (${key.revokedReason})` : ''}`);
}
if (key.expiresAt) {
console.log(` Expires: ${key.expiresAt}`);
}
console.log('');
});
}
async function handleRevoke(repository, args) {
if (args.length < 1) {
throw new Error('Key ID is required: revoke <key-id> [reason]');
}
const keyId = args[0];
const reason = args[1] || null;
await repository.revokeKey(keyId, reason);
console.log(`✅ API key ${keyId} has been revoked.`);
}
async function handleActivate(repository, args) {
if (args.length < 1) {
throw new Error('Key ID is required: activate <key-id>');
}
const keyId = args[0];
await repository.activateKey(keyId);
console.log(`✅ API key ${keyId} has been activated.`);
}
// Run if called directly
if (require.main === module) {
main().catch(error => {
console.error('Fatal error:', error);
process.exit(1);
});
}
module.exports = { main };

104
scripts/setup-database.js Normal file
View File

@ -0,0 +1,104 @@
#!/usr/bin/env node
/**
* Database Setup Script
* Creates the database and imports the schema
*/
require('dotenv').config();
const mysql = require('mysql2/promise');
const fs = require('fs');
const path = require('path');
async function setupDatabase() {
console.log('================================================');
console.log('🗄️ Database Setup');
console.log('================================================\n');
const config = {
host: process.env.DB_HOST || 'localhost',
port: process.env.DB_PORT || 3306,
user: process.env.DB_USER || 'root',
password: process.env.DB_PASSWORD || '',
};
const dbName = process.env.DB_NAME || 'property_tagging';
let connection;
try {
// Connect to MySQL (without selecting a database)
console.log(`📡 Connecting to MySQL at ${config.host}:${config.port}...`);
connection = await mysql.createConnection(config);
console.log('✅ Connected to MySQL\n');
// Create database if it doesn't exist
console.log(`📦 Creating database '${dbName}' if it doesn't exist...`);
await connection.query(`CREATE DATABASE IF NOT EXISTS \`${dbName}\``);
console.log('✅ Database created/verified\n');
// Switch to the database
await connection.query(`USE \`${dbName}\``);
// Check if tables already exist
const [tables] = await connection.query('SHOW TABLES');
if (tables.length > 0) {
console.log(' Tables already exist:');
tables.forEach(table => {
console.log(` - ${Object.values(table)[0]}`);
});
console.log('\n✅ Database is already set up!\n');
} else {
// Import schema
console.log('📄 Importing schema from 001_initial_schema.sql...');
const schemaPath = path.join(__dirname, '..', '001_initial_schema.sql');
if (!fs.existsSync(schemaPath)) {
throw new Error(`Schema file not found: ${schemaPath}`);
}
const schema = fs.readFileSync(schemaPath, 'utf8');
// Split by semicolon and execute each statement
const statements = schema
.split(';')
.map(s => s.trim())
.filter(s => s.length > 0 && !s.startsWith('--'));
for (const statement of statements) {
await connection.query(statement);
}
console.log('✅ Schema imported successfully\n');
// Verify tables were created
const [newTables] = await connection.query('SHOW TABLES');
console.log('📊 Created tables:');
newTables.forEach(table => {
console.log(` - ${Object.values(table)[0]}`);
});
console.log('');
}
console.log('================================================');
console.log('✅ Database setup complete!');
console.log('================================================\n');
} catch (error) {
console.error('❌ Database setup failed:');
console.error(error.message);
console.error('\nPlease check:');
console.error(' 1. MySQL is running: systemctl status mysql');
console.error(' 2. Credentials in .env file are correct');
console.error(' 3. MySQL user has CREATE DATABASE privileges\n');
process.exit(1);
} finally {
if (connection) {
await connection.end();
}
}
}
// Run setup
setupDatabase();

76
setup.sh Normal file
View File

@ -0,0 +1,76 @@
#!/bin/bash
echo "================================================"
echo "🚀 Property Image Tagging API - Quick Setup"
echo "================================================"
echo ""
# Check if .env exists
if [ ! -f .env ]; then
echo "❌ .env file not found!"
echo ""
echo "Please create a .env file by copying .env.example:"
echo " cp .env.example .env"
echo ""
echo "Then edit .env and add:"
echo " 1. Your MySQL password (DB_PASSWORD)"
echo " 2. Your Anthropic API key (ANTHROPIC_API_KEY)"
echo ""
echo "Get your Anthropic API key from: https://console.anthropic.com/"
echo ""
exit 1
fi
echo "✅ .env file found"
echo ""
# Load environment variables
export $(grep -v '^#' .env | xargs)
# Check if database exists
echo "🔍 Checking database connection..."
mysql -h"${DB_HOST}" -u"${DB_USER}" -p"${DB_PASSWORD}" -e "SHOW DATABASES LIKE '${DB_NAME}';" 2>/dev/null | grep -q "${DB_NAME}"
if [ $? -ne 0 ]; then
echo "❌ Database '${DB_NAME}' not found!"
echo ""
echo "Creating database and tables..."
# Create database
mysql -h"${DB_HOST}" -u"${DB_USER}" -p"${DB_PASSWORD}" -e "CREATE DATABASE IF NOT EXISTS ${DB_NAME};" 2>/dev/null
if [ $? -ne 0 ]; then
echo "❌ Failed to create database. Please check your MySQL credentials in .env"
exit 1
fi
# Import schema
mysql -h"${DB_HOST}" -u"${DB_USER}" -p"${DB_PASSWORD}" "${DB_NAME}" < 001_initial_schema.sql 2>/dev/null
if [ $? -eq 0 ]; then
echo "✅ Database created and schema imported successfully!"
else
echo "❌ Failed to import schema. Please check 001_initial_schema.sql"
exit 1
fi
else
echo "✅ Database '${DB_NAME}' exists"
fi
echo ""
echo "🔑 Creating a test API key..."
npm run apikey:create
echo ""
echo "================================================"
echo "✅ Setup Complete!"
echo "================================================"
echo ""
echo "You can now:"
echo " • Start the server: npm start"
echo " • Development mode: npm run dev"
echo " • List API keys: npm run apikey:list"
echo ""
echo "The API will be available at: http://localhost:${PORT:-3000}"
echo ""

View File

@ -0,0 +1,29 @@
/**
* Data Transfer Object for tag image request
*/
class TagImageRequestDto {
/**
* @param {Object} data - Request data
* @param {Buffer} data.fileBuffer - Image file buffer
* @param {string} data.mimeType - MIME type
* @param {string} [data.fileName] - Original file name
*/
constructor(data) {
this.fileBuffer = data.fileBuffer;
this.mimeType = data.mimeType;
this.fileName = data.fileName || 'unknown';
}
/**
* Convert buffer to base64 string
* @returns {string} Base64 encoded image
*/
toBase64() {
return this.fileBuffer.toString('base64');
}
}
module.exports = TagImageRequestDto;

View File

@ -0,0 +1,50 @@
const TaggingResult = require('../../domain/entities/TaggingResult');
/**
* Data Transfer Object for tag image response
*/
class TagImageResponseDto {
/**
* @param {Object} data - Response data
* @param {string} data.imageId - Image ID
* @param {Array} data.tags - Tags array
* @param {string} data.summary - Summary
* @param {number} data.totalTags - Total tags count
* @param {boolean} data.isDuplicate - Whether result was cached
* @param {Date} data.processedAt - Processing timestamp
* @param {string} [data.costSavings] - Cost savings message for duplicates
*/
constructor(data) {
this.imageId = data.imageId;
this.tags = data.tags;
this.summary = data.summary;
this.totalTags = data.totalTags;
this.isDuplicate = data.isDuplicate || false;
this.processedAt = data.processedAt || new Date();
this.costSavings = data.costSavings;
}
/**
* Create DTO from TaggingResult entity
* @param {TaggingResult} taggingResult - Tagging result entity
* @param {boolean} isDuplicate - Whether result was cached
* @returns {TagImageResponseDto}
*/
static fromTaggingResult(taggingResult, isDuplicate = false) {
const json = taggingResult.toJSON();
return new TagImageResponseDto({
imageId: json.imageId,
tags: json.tags,
summary: json.summary,
totalTags: taggingResult.getTotalTags(),
isDuplicate,
processedAt: new Date(json.createdAt),
costSavings: isDuplicate ? 'This request was FREE - used cached result' : undefined
});
}
}
module.exports = TagImageResponseDto;

View File

@ -0,0 +1,108 @@
const { ValidationError } = require('../../shared/errors/AppError');
const TaggingResult = require('../../domain/entities/TaggingResult');
const TagImageResponseDto = require('../dtos/TagImageResponseDto');
const crypto = require('crypto');
/**
* Use case for tagging base64 encoded images
*/
class TagBase64ImageUseCase {
/**
* @param {IImageRepository} imageRepository - Image repository
* @param {IImageTaggingService} aiService - AI tagging service
* @param {Object} logger - Logger instance
*/
constructor(imageRepository, aiService, logger) {
this.imageRepository = imageRepository;
this.aiService = aiService;
this.logger = logger;
}
/**
* Execute the use case
* @param {string} base64Image - Base64 encoded image
* @param {string} mediaType - MIME type
* @param {string} [fileName] - Optional file name
* @returns {Promise<TagImageResponseDto>}
*/
async execute(base64Image, mediaType, fileName = 'unknown') {
try {
this._validateInput(base64Image, mediaType);
// Convert base64 to buffer
const imageBuffer = Buffer.from(base64Image, 'base64');
// Calculate SHA256 hash
const imageHash = this._calculateImageHash(imageBuffer);
// Check for duplicate
const existingResult = await this.imageRepository.findByImageHash(imageHash);
if (existingResult) {
this.logger.info('Duplicate image detected (base64)', { hash: imageHash });
return TagImageResponseDto.fromTaggingResult(existingResult, true);
}
// Call AI service
const startTime = Date.now();
const aiResult = await this.aiService.generateTags(base64Image, mediaType);
const processingTime = Date.now() - startTime;
// Create TaggingResult entity
const taggingResult = new TaggingResult({
imageId: this._generateId(),
tags: aiResult.tags,
summary: aiResult.summary || '',
createdAt: new Date()
});
// Save to repository
const savedResult = await this.imageRepository.save(taggingResult, imageBuffer);
this.logger.info('Base64 image tagged successfully', {
imageId: savedResult.imageId,
totalTags: savedResult.getTotalTags(),
processingTime
});
return TagImageResponseDto.fromTaggingResult(savedResult, false);
} catch (error) {
this.logger.error('TagBase64ImageUseCase error', error);
throw error;
}
}
/**
* Validate input
* @private
*/
_validateInput(base64Image, mediaType) {
if (!base64Image || typeof base64Image !== 'string' || base64Image.trim() === '') {
throw new ValidationError('Base64 image is required');
}
if (!mediaType || typeof mediaType !== 'string') {
throw new ValidationError('Media type is required');
}
}
/**
* Calculate SHA256 hash of image buffer
* @private
*/
_calculateImageHash(buffer) {
return crypto.createHash('sha256').update(buffer).digest('hex');
}
/**
* Generate unique ID (UUID v4)
* @private
*/
_generateId() {
const { v4: uuidv4 } = require('uuid');
return uuidv4();
}
}
module.exports = TagBase64ImageUseCase;

View File

@ -0,0 +1,99 @@
const { ValidationError } = require('../../shared/errors/AppError');
const TagImageResponseDto = require('../dtos/TagImageResponseDto');
/**
* Use case for tagging multiple base64 images in batch
*/
class TagBatchBase64ImagesUseCase {
/**
* @param {TagBase64ImageUseCase} tagBase64ImageUseCase - Single base64 image tagging use case
* @param {Object} logger - Logger instance
*/
constructor(tagBase64ImageUseCase, logger) {
this.tagBase64ImageUseCase = tagBase64ImageUseCase;
this.logger = logger;
}
/**
* Execute batch tagging for base64 images
* @param {Array<Object>} images - Array of {base64Image, mediaType, fileName}
* @returns {Promise<Array>} Array of response objects
*/
async execute(images) {
try {
this._validateInput(images);
this.logger.info('Starting batch base64 image tagging', { count: images.length });
// Process all images in parallel
const results = await Promise.all(
images.map(async (image, index) => {
try {
const result = await this.tagBase64ImageUseCase.execute(
image.base64Image,
image.mediaType,
image.fileName
);
return {
success: true,
index,
data: result
};
} catch (error) {
this.logger.error(`Failed to tag base64 image at index ${index}`, error);
return {
success: false,
index,
error: error.message || 'Failed to tag image',
data: null
};
}
})
);
const successCount = results.filter(r => r.success).length;
const failureCount = results.filter(r => !r.success).length;
this.logger.info('Batch base64 tagging completed', {
total: images.length,
success: successCount,
failures: failureCount
});
return results;
} catch (error) {
this.logger.error('TagBatchBase64ImagesUseCase error', error);
throw error;
}
}
/**
* Validate input
* @private
*/
_validateInput(images) {
if (!Array.isArray(images)) {
throw new ValidationError('Request must be an array of images');
}
if (images.length === 0) {
throw new ValidationError('At least one image is required');
}
if (images.length > 50) {
throw new ValidationError('Maximum 50 images allowed per batch request');
}
images.forEach((image, index) => {
if (!image.base64Image || typeof image.base64Image !== 'string') {
throw new ValidationError(`Image at index ${index}: base64Image is required`);
}
if (!image.mediaType || typeof image.mediaType !== 'string') {
throw new ValidationError(`Image at index ${index}: mediaType is required`);
}
});
}
}
module.exports = TagBatchBase64ImagesUseCase;

View File

@ -0,0 +1,87 @@
const { ValidationError } = require('../../shared/errors/AppError');
const TagImageRequestDto = require('../dtos/TagImageRequestDto');
const TagImageResponseDto = require('../dtos/TagImageResponseDto');
/**
* Use case for tagging multiple images in batch
*/
class TagBatchImagesUseCase {
/**
* @param {TagImageUseCase} tagImageUseCase - Single image tagging use case
* @param {Object} logger - Logger instance
*/
constructor(tagImageUseCase, logger) {
this.tagImageUseCase = tagImageUseCase;
this.logger = logger;
}
/**
* Execute batch tagging
* @param {Array<TagImageRequestDto>} requestDtos - Array of request DTOs
* @returns {Promise<Array<TagImageResponseDto>>} Array of response DTOs
*/
async execute(requestDtos) {
try {
this._validateInput(requestDtos);
this.logger.info('Starting batch image tagging', { count: requestDtos.length });
// Process all images in parallel (or sequentially if preferred)
const results = await Promise.all(
requestDtos.map(async (requestDto, index) => {
try {
const result = await this.tagImageUseCase.execute(requestDto);
return {
success: true,
index,
data: result
};
} catch (error) {
this.logger.error(`Failed to tag image at index ${index}`, error);
return {
success: false,
index,
error: error.message || 'Failed to tag image',
data: null
};
}
})
);
const successCount = results.filter(r => r.success).length;
const failureCount = results.filter(r => !r.success).length;
this.logger.info('Batch tagging completed', {
total: requestDtos.length,
success: successCount,
failures: failureCount
});
return results;
} catch (error) {
this.logger.error('TagBatchImagesUseCase error', error);
throw error;
}
}
/**
* Validate input
* @private
*/
_validateInput(requestDtos) {
if (!Array.isArray(requestDtos)) {
throw new ValidationError('Request must be an array of images');
}
if (requestDtos.length === 0) {
throw new ValidationError('At least one image is required');
}
if (requestDtos.length > 50) {
throw new ValidationError('Maximum 50 images allowed per batch request');
}
}
}
module.exports = TagBatchImagesUseCase;

View File

@ -0,0 +1,109 @@
const { ValidationError } = require('../../shared/errors/AppError');
const TaggingResult = require('../../domain/entities/TaggingResult');
const TagImageResponseDto = require('../dtos/TagImageResponseDto');
const crypto = require('crypto');
/**
* Use case for tagging uploaded images
*/
class TagImageUseCase {
/**
* @param {IImageRepository} imageRepository - Image repository
* @param {IImageTaggingService} aiService - AI tagging service
* @param {Object} logger - Logger instance
*/
constructor(imageRepository, aiService, logger) {
this.imageRepository = imageRepository;
this.aiService = aiService;
this.logger = logger;
}
/**
* Execute the use case
* @param {TagImageRequestDto} requestDto - Request DTO
* @returns {Promise<TagImageResponseDto>}
*/
async execute(requestDto) {
try {
this._validateInput(requestDto);
// Calculate SHA256 hash of image buffer
const imageHash = this._calculateImageHash(requestDto.fileBuffer);
// Check for duplicate
const existingResult = await this.imageRepository.findByImageHash(imageHash);
if (existingResult) {
this.logger.info('Duplicate image detected', { hash: imageHash });
return TagImageResponseDto.fromTaggingResult(existingResult, true);
}
// Convert to base64
const base64Image = requestDto.toBase64();
// Call AI service
const startTime = Date.now();
const aiResult = await this.aiService.generateTags(base64Image, requestDto.mimeType);
const processingTime = Date.now() - startTime;
// Create TaggingResult entity
const taggingResult = new TaggingResult({
imageId: this._generateId(),
tags: aiResult.tags,
summary: aiResult.summary || '',
createdAt: new Date()
});
// Save to repository
const savedResult = await this.imageRepository.save(taggingResult, requestDto.fileBuffer);
this.logger.info('Image tagged successfully', {
imageId: savedResult.imageId,
totalTags: savedResult.getTotalTags(),
processingTime
});
return TagImageResponseDto.fromTaggingResult(savedResult, false);
} catch (error) {
this.logger.error('TagImageUseCase error', error);
throw error;
}
}
/**
* Validate input
* @private
*/
_validateInput(requestDto) {
if (!requestDto || !requestDto.fileBuffer) {
throw new ValidationError('File buffer is required');
}
if (!Buffer.isBuffer(requestDto.fileBuffer)) {
throw new ValidationError('File buffer must be a Buffer instance');
}
if (!requestDto.mimeType || typeof requestDto.mimeType !== 'string') {
throw new ValidationError('MIME type is required');
}
}
/**
* Calculate SHA256 hash of image buffer
* @private
*/
_calculateImageHash(buffer) {
return crypto.createHash('sha256').update(buffer).digest('hex');
}
/**
* Generate unique ID (UUID v4)
* @private
*/
_generateId() {
const { v4: uuidv4 } = require('uuid');
return uuidv4();
}
}
module.exports = TagImageUseCase;

View File

@ -0,0 +1,60 @@
const { ValidationError } = require('../../shared/errors/AppError');
/**
* ImageTag entity - represents a single tag for an image
*/
class ImageTag {
/**
* @param {Object} data - Tag data
* @param {string} data.category - Tag category (e.g., "View", "Furnishing")
* @param {string} data.value - Tag value (e.g., "marina view", "fully furnished")
* @param {number} data.confidence - Confidence score (0-1)
*/
constructor(data) {
this._validateInput(data);
this.category = data.category;
this.value = data.value;
this.confidence = data.confidence;
}
/**
* Validate input data
* @private
*/
_validateInput(data) {
if (!data.category || typeof data.category !== 'string' || data.category.trim() === '') {
throw new ValidationError('Category is required and must be a non-empty string');
}
if (!data.value || typeof data.value !== 'string' || data.value.trim() === '') {
throw new ValidationError('Value is required and must be a non-empty string');
}
if (typeof data.confidence !== 'number' || data.confidence < 0 || data.confidence > 1) {
throw new ValidationError('Confidence must be a number between 0 and 1');
}
}
/**
* Check if tag has high confidence (>= 0.8)
* @returns {boolean}
*/
isHighConfidence() {
return this.confidence >= 0.8;
}
/**
* Serialize to JSON
* @returns {Object}
*/
toJSON() {
return {
category: this.category,
value: this.value,
confidence: this.confidence
};
}
}
module.exports = ImageTag;

View File

@ -0,0 +1,78 @@
const { ValidationError } = require('../../shared/errors/AppError');
const ImageTag = require('./ImageTag');
/**
* TaggingResult entity - represents the complete tagging result for an image
*/
class TaggingResult {
/**
* @param {Object} data - Tagging result data
* @param {string} data.imageId - Image ID
* @param {Array<Object>} data.tags - Array of tag objects
* @param {string} data.summary - Summary description
* @param {Date} [data.createdAt] - Creation timestamp
*/
constructor(data) {
this._validateInput(data);
this.imageId = data.imageId;
this.tags = data.tags.map(tag => new ImageTag(tag));
this.summary = data.summary || '';
this.createdAt = data.createdAt || new Date();
}
/**
* Validate input data
* @private
*/
_validateInput(data) {
if (!data.imageId || typeof data.imageId !== 'string') {
throw new ValidationError('ImageId is required and must be a string');
}
if (!Array.isArray(data.tags) || data.tags.length === 0) {
throw new ValidationError('Tags array is required and must not be empty');
}
}
/**
* Get tags filtered by category
* @param {string} category - Category to filter by
* @returns {Array<ImageTag>}
*/
getTagsByCategory(category) {
return this.tags.filter(tag => tag.category === category);
}
/**
* Get only high confidence tags (>= 0.8)
* @returns {Array<ImageTag>}
*/
getHighConfidenceTags() {
return this.tags.filter(tag => tag.isHighConfidence());
}
/**
* Get total number of tags
* @returns {number}
*/
getTotalTags() {
return this.tags.length;
}
/**
* Serialize to JSON
* @returns {Object}
*/
toJSON() {
return {
imageId: this.imageId,
tags: this.tags.map(tag => tag.toJSON()),
summary: this.summary,
createdAt: this.createdAt.toISOString()
};
}
}
module.exports = TaggingResult;

View File

@ -0,0 +1,55 @@
/**
* Interface for image repository
* This is an abstract interface that must be implemented by infrastructure layer
*/
class IImageRepository {
/**
* Save tagging result and image data
* @param {TaggingResult} taggingResult - Tagging result entity
* @param {Buffer} imageBuffer - Image buffer
* @returns {Promise<TaggingResult>}
*/
async save(taggingResult, imageBuffer) {
throw new Error('save() must be implemented');
}
/**
* Find tagging result by image ID
* @param {string} imageId - Image ID
* @returns {Promise<TaggingResult|null>}
*/
async findById(imageId) {
throw new Error('findById() must be implemented');
}
/**
* Find tagging result by image hash (for duplicate detection)
* @param {string} hash - SHA256 hash of image
* @returns {Promise<TaggingResult|null>}
*/
async findByImageHash(hash) {
throw new Error('findByImageHash() must be implemented');
}
/**
* Search images by tag value
* @param {string} tagValue - Tag value to search for
* @returns {Promise<Array<TaggingResult>>}
*/
async findByTagValue(tagValue) {
throw new Error('findByTagValue() must be implemented');
}
/**
* Get statistics about tagged images
* @returns {Promise<Object>} Statistics object
*/
async getStats() {
throw new Error('getStats() must be implemented');
}
}
module.exports = IImageRepository;

View File

@ -0,0 +1,22 @@
/**
* Interface for image tagging service
* This is an abstract interface that must be implemented by infrastructure layer
*/
class IImageTaggingService {
/**
* Generate tags for an image
* @param {string} base64Image - Base64 encoded image
* @param {string} mediaType - MIME type (e.g., 'image/jpeg')
* @returns {Promise<Object>} Tagging result with tags and summary
* @throws {ValidationError} If inputs are invalid
* @throws {AIServiceError} If AI service fails
*/
async generateTags(base64Image, mediaType) {
throw new Error('generateTags() must be implemented');
}
}
module.exports = IImageTaggingService;

213
src/infrastructure/ai/ClaudeAIProvider.js Normal file
View File

@ -0,0 +1,213 @@
const Anthropic = require('@anthropic-ai/sdk');
const retry = require('async-retry');
const IImageTaggingService = require('../../domain/interfaces/IImageTaggingService');
const { AIServiceError, ValidationError } = require('../../shared/errors/AppError');
/**
* Claude AI Provider - implements IImageTaggingService
*/
class ClaudeAIProvider extends IImageTaggingService {
/**
* @param {string} apiKey - Anthropic API key
* @param {Object} logger - Logger instance
*/
constructor(apiKey, logger) {
super();
if (!apiKey) {
throw new ValidationError('Anthropic API key is required');
}
this.client = new Anthropic({ apiKey });
this.logger = logger;
this.model = 'claude-sonnet-4-20250514';
}
/**
* Generate tags for an image using Claude AI
* @param {string} base64Image - Base64 encoded image
* @param {string} mediaType - MIME type (e.g., 'image/jpeg')
* @returns {Promise<Object>} Tagging result with tags and summary
* @throws {ValidationError} If inputs are invalid
* @throws {AIServiceError} If Claude API fails
*/
async generateTags(base64Image, mediaType) {
try {
this._validateInput(base64Image, mediaType);
const prompt = this._buildPrompt();
const response = await retry(
async () => {
const message = await this.client.messages.create({
model: this.model,
max_tokens: 4096,
messages: [
{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'base64',
media_type: mediaType,
data: base64Image
}
},
{
type: 'text',
text: prompt
}
]
}
]
});
if (!message || !message.content || message.content.length === 0) {
throw new AIServiceError('Empty response from Claude API');
}
// Extract text from response
const textContent = message.content.find(block => block.type === 'text');
if (!textContent || !textContent.text) {
throw new AIServiceError('No text content in Claude API response');
}
return textContent.text;
},
{
retries: 3,
factor: 2,
minTimeout: 1000,
maxTimeout: 10000,
onRetry: (error, attempt) => {
this.logger.warn(`Claude API retry attempt ${attempt}`, { error: error.message });
}
}
);
// Parse and validate JSON response
const result = this._parseResponse(response);
this.logger.info('Claude API tags generated', {
totalTags: result.tags.length,
hasSummary: !!result.summary
});
return result;
} catch (error) {
if (error instanceof AIServiceError || error instanceof ValidationError) {
throw error;
}
this.logger.error('Claude API error', error);
throw new AIServiceError(`Failed to generate tags: ${error.message}`);
}
}
/**
* Build the prompt for Claude AI
* @private
*/
_buildPrompt() {
return `You are an expert real estate property analyst AI with specialized training in architectural photography, interior design, and property valuation. Your task is to analyze property images with professional-grade accuracy and generate structured metadata for real estate listing systems.
Core Objective
Analyze the provided property image and generate 30 precise, descriptive tags across 10 predefined categories with confidence scores, following enterprise data quality standards.
Tag Categories:
1. View: (e.g., Burj Khalifa view, ocean view, downtown skyline, marina view, etc.)
2. Furnishing: (e.g., fully furnished, unfurnished, modern, contemporary, luxury)
3. Kitchen: (e.g., with appliances, open kitchen, modular, closed kitchen)
4. Flooring: (e.g., wooden, marble, tile, carpet, laminate, porcelain)
5. Room Type: (e.g., bedroom, living room, bathroom, kitchen, balcony)
6. Style: (e.g., modern, traditional, scandinavian, industrial)
7. Features: (e.g., high ceiling, floor-to-ceiling windows, built-in wardrobes)
8. Condition: (e.g., newly renovated, well-maintained, ready to move)
9. Lighting: (e.g., natural light, ambient lighting, LED lighting)
10. Color Scheme: (e.g., neutral tones, warm colors, monochrome)
Return ONLY a JSON object in this exact format:
{
"tags": [
{"category": "View", "value": "marina view", "confidence": 0.95},
{"category": "Furnishing", "value": "fully furnished", "confidence": 0.90}
],
"summary": "Brief one-sentence description"
}
`;
}
/**
* Parse and validate JSON response from Claude
* @private
*/
_parseResponse(responseText) {
try {
// Try to extract JSON from response (may have markdown code blocks)
let jsonText = responseText.trim();
// Remove markdown code blocks if present
const jsonMatch = jsonText.match(/```(?:json)?\s*(\{[\s\S]*\})\s*```/);
if (jsonMatch) {
jsonText = jsonMatch[1];
} else {
// Try to find JSON object directly
const jsonObjectMatch = jsonText.match(/\{[\s\S]*\}/);
if (jsonObjectMatch) {
jsonText = jsonObjectMatch[0];
}
}
const parsed = JSON.parse(jsonText);
// Validate structure
if (!parsed.tags || !Array.isArray(parsed.tags)) {
throw new AIServiceError('Invalid response: tags array is required');
}
if (parsed.tags.length < 20) {
this.logger.warn('Claude returned fewer tags than expected', { count: parsed.tags.length });
}
// Validate each tag
parsed.tags.forEach((tag, index) => {
if (!tag.category || !tag.value || typeof tag.confidence !== 'number') {
throw new AIServiceError(`Invalid tag at index ${index}: missing required fields`);
}
});
return {
tags: parsed.tags,
summary: parsed.summary || ''
};
} catch (error) {
if (error instanceof AIServiceError) {
throw error;
}
this.logger.error('Failed to parse Claude response', { response: responseText, error: error.message });
throw new AIServiceError(`Failed to parse Claude API response: ${error.message}`);
}
}
/**
* Validate input
* @private
*/
_validateInput(base64Image, mediaType) {
if (!base64Image || typeof base64Image !== 'string' || base64Image.trim() === '') {
throw new ValidationError('Base64 image is required');
}
if (!mediaType || typeof mediaType !== 'string') {
throw new ValidationError('Media type is required');
}
const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'image/gif'];
if (!allowedTypes.includes(mediaType.toLowerCase())) {
throw new ValidationError(`Unsupported media type: ${mediaType}`);
}
}
}
module.exports = ClaudeAIProvider;
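A minimal standalone usage sketch (the sample image path is a placeholder; it assumes ANTHROPIC_API_KEY is set in the environment):

```javascript
// Standalone sketch: './samples/living-room.jpg' is a placeholder path.
const fs = require('fs');
const ClaudeAIProvider = require('./src/infrastructure/ai/ClaudeAIProvider');
const logger = require('./src/shared/utils/logger');

async function main() {
  const provider = new ClaudeAIProvider(process.env.ANTHROPIC_API_KEY, logger);
  const base64Image = fs.readFileSync('./samples/living-room.jpg').toString('base64');

  const { tags, summary } = await provider.generateTags(base64Image, 'image/jpeg');
  logger.info('Tags generated', { totalTags: tags.length, summary });
}

main().catch((err) => {
  logger.error('Tagging failed', err);
  process.exit(1);
});
```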

14
src/infrastructure/config/corsConfig.js Normal file
View File

@ -0,0 +1,14 @@
/**
* CORS configuration
* Note: browsers reject credentialed requests when the allowed origin is the
* wildcard '*', so set ALLOWED_ORIGIN explicitly whenever credentials are needed.
*/
const corsConfig = {
origin: process.env.ALLOWED_ORIGIN || '*',
methods: ['GET', 'POST', 'OPTIONS'],
allowedHeaders: ['Content-Type', 'Authorization', 'X-API-Key'],
credentials: true
};
module.exports = corsConfig;

94
src/infrastructure/config/dependencyContainer.js Normal file
View File

@ -0,0 +1,94 @@
const mysql = require('mysql2/promise');
const ClaudeAIProvider = require('../ai/ClaudeAIProvider');
const MySQLImageRepository = require('../repositories/MySQLImageRepository');
const ApiKeyRepository = require('../repositories/ApiKeyRepository');
const TagImageUseCase = require('../../application/useCases/TagImageUseCase');
const TagBase64ImageUseCase = require('../../application/useCases/TagBase64ImageUseCase');
const TagBatchImagesUseCase = require('../../application/useCases/TagBatchImagesUseCase');
const TagBatchBase64ImagesUseCase = require('../../application/useCases/TagBatchBase64ImagesUseCase');
const logger = require('../../shared/utils/logger');
/**
* Dependency Injection Container
*/
class DependencyContainer {
constructor() {
this._services = new Map();
this._initialize();
}
/**
* Initialize all services
* @private
*/
_initialize() {
// Database connection pool
const pool = mysql.createPool({
host: process.env.DB_HOST || 'localhost',
port: process.env.DB_PORT || 3306,
user: process.env.DB_USER || 'root',
password: process.env.DB_PASSWORD || '',
database: process.env.DB_NAME || 'property_tagging',
waitForConnections: true,
connectionLimit: 20,
queueLimit: 0,
enableKeepAlive: true,
keepAliveInitialDelay: 0
});
this._services.set('pool', pool);
// Repositories
const imageRepository = new MySQLImageRepository(pool, logger);
const apiKeyRepository = new ApiKeyRepository(pool, logger);
this._services.set('imageRepository', imageRepository);
this._services.set('apiKeyRepository', apiKeyRepository);
// AI Provider
if (!process.env.ANTHROPIC_API_KEY) {
logger.warn('ANTHROPIC_API_KEY is not set. ClaudeAIProvider will throw a ValidationError during initialization until it is configured.');
}
const aiProvider = new ClaudeAIProvider(process.env.ANTHROPIC_API_KEY, logger);
this._services.set('aiProvider', aiProvider);
// Use Cases
const tagImageUseCase = new TagImageUseCase(imageRepository, aiProvider, logger);
const tagBase64ImageUseCase = new TagBase64ImageUseCase(imageRepository, aiProvider, logger);
const tagBatchImagesUseCase = new TagBatchImagesUseCase(tagImageUseCase, logger);
const tagBatchBase64ImagesUseCase = new TagBatchBase64ImagesUseCase(tagBase64ImageUseCase, logger);
this._services.set('tagImageUseCase', tagImageUseCase);
this._services.set('tagBase64ImageUseCase', tagBase64ImageUseCase);
this._services.set('tagBatchImagesUseCase', tagBatchImagesUseCase);
this._services.set('tagBatchBase64ImagesUseCase', tagBatchBase64ImagesUseCase);
}
/**
* Get service by name
* @param {string} serviceName - Service name
* @returns {*} Service instance
*/
get(serviceName) {
const service = this._services.get(serviceName);
if (!service) {
throw new Error(`Service '${serviceName}' not found`);
}
return service;
}
/**
* Close all connections (graceful shutdown)
*/
async close() {
const pool = this._services.get('pool');
if (pool) {
await pool.end();
logger.info('Database connection pool closed');
}
}
}
// Singleton instance
const container = new DependencyContainer();
module.exports = container;

223
src/infrastructure/repositories/ApiKeyRepository.js Normal file
View File

@ -0,0 +1,223 @@
const ApiKeyGenerator = require('../../shared/utils/apiKeyGenerator');
const { AuthenticationError, AuthorizationError, ValidationError } = require('../../shared/errors/AppError');
const { v4: uuidv4 } = require('uuid');
/**
* Repository for API key management
*/
class ApiKeyRepository {
/**
* @param {Object} pool - MySQL connection pool
* @param {Object} logger - Logger instance
*/
constructor(pool, logger) {
this.pool = pool;
this.logger = logger;
}
/**
* Validate API key
* @param {string} apiKey - API key to validate
* @returns {Promise<Object>} Key data if the key is valid
* @throws {AuthenticationError} If no API key is provided
* @throws {AuthorizationError} If the key is malformed, unknown, inactive, revoked, or expired
*/
async validateKey(apiKey) {
const connection = await this.pool.getConnection();
try {
if (!apiKey || typeof apiKey !== 'string') {
throw new AuthenticationError('API key is required');
}
// Validate format
if (!ApiKeyGenerator.isValidFormat(apiKey)) {
throw new AuthorizationError('Invalid API key format');
}
// Hash the key
const keyHash = ApiKeyGenerator.hash(apiKey);
// Query database
const [rows] = await connection.query(
`SELECT id, key_prefix, name, environment, is_active, expires_at, revoked_at
FROM api_keys
WHERE key_hash = ?`,
[keyHash]
);
if (rows.length === 0) {
throw new AuthorizationError('Invalid API key');
}
const keyData = rows[0];
// Check if active
if (!keyData.is_active) {
throw new AuthorizationError('API key is not active');
}
// Check if revoked
if (keyData.revoked_at) {
throw new AuthorizationError('API key has been revoked');
}
// Check if expired
if (keyData.expires_at && new Date(keyData.expires_at) < new Date()) {
throw new AuthorizationError('API key has expired');
}
return {
id: keyData.id,
name: keyData.name,
environment: keyData.environment
};
} catch (error) {
if (error instanceof AuthenticationError || error instanceof AuthorizationError) {
throw error;
}
this.logger.error('Failed to validate API key', error);
throw new AuthorizationError('Failed to validate API key');
} finally {
connection.release();
}
}
/**
* Create new API key
* @param {Object} data - Key data
* @param {string} data.name - Key name
* @param {string} [data.description] - Key description
* @param {string} [data.environment] - Environment (development, staging, production)
* @returns {Promise<Object>} Created key with plain text (ONLY shown on creation)
*/
async createKey(data) {
const connection = await this.pool.getConnection();
try {
const environment = data.environment || 'development';
const prefix = environment === 'production' ? 'key_live_' : 'key_test_';
// Generate key
const plainTextKey = ApiKeyGenerator.generate(prefix);
const keyHash = ApiKeyGenerator.hash(plainTextKey);
const keyId = uuidv4();
await connection.query(
`INSERT INTO api_keys (id, key_prefix, key_hash, name, description, environment)
VALUES (?, ?, ?, ?, ?, ?)`,
[
keyId,
prefix,
keyHash,
data.name,
data.description || null,
environment
]
);
this.logger.info('API key created', { keyId, name: data.name, environment });
return {
id: keyId,
key: plainTextKey, // ONLY shown on creation
name: data.name,
environment,
createdAt: new Date()
};
} catch (error) {
this.logger.error('Failed to create API key', error);
throw new Error('Failed to create API key');
} finally {
connection.release();
}
}
/**
* Revoke API key
* @param {string} keyId - Key ID
* @param {string} [reason] - Revocation reason
*/
async revokeKey(keyId, reason = null) {
const connection = await this.pool.getConnection();
try {
await connection.query(
`UPDATE api_keys
SET is_active = FALSE, revoked_at = NOW(), revoked_reason = ?
WHERE id = ?`,
[reason, keyId]
);
this.logger.info('API key revoked', { keyId, reason });
} catch (error) {
this.logger.error('Failed to revoke API key', error);
throw new Error('Failed to revoke API key');
} finally {
connection.release();
}
}
/**
* Activate API key
* @param {string} keyId - Key ID
*/
async activateKey(keyId) {
const connection = await this.pool.getConnection();
try {
await connection.query(
`UPDATE api_keys
SET is_active = TRUE, revoked_at = NULL, revoked_reason = NULL
WHERE id = ?`,
[keyId]
);
this.logger.info('API key activated', { keyId });
} catch (error) {
this.logger.error('Failed to activate API key', error);
throw new Error('Failed to activate API key');
} finally {
connection.release();
}
}
/**
* Get all API keys
* @returns {Promise<Array>} List of API keys (without plain text)
*/
async getAllKeys() {
const connection = await this.pool.getConnection();
try {
const [rows] = await connection.query(
`SELECT id, key_prefix, name, description, environment, is_active,
created_at, expires_at, revoked_at, revoked_reason
FROM api_keys
ORDER BY created_at DESC`
);
return rows.map(row => ({
id: row.id,
prefix: row.key_prefix,
name: row.name,
description: row.description,
environment: row.environment,
isActive: row.is_active,
createdAt: row.created_at,
expiresAt: row.expires_at,
revokedAt: row.revoked_at,
revokedReason: row.revoked_reason
}));
} catch (error) {
this.logger.error('Failed to get API keys', error);
throw new Error('Failed to get API keys');
} finally {
connection.release();
}
}
}
module.exports = ApiKeyRepository;
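A hypothetical one-off provisioning script (the script name and key metadata are placeholders) that resolves this repository from the dependency container above:

```javascript
// Hypothetical scripts/createApiKey.js: provisions a key and prints it once.
require('dotenv').config();
const container = require('./src/infrastructure/config/dependencyContainer');

(async () => {
  const apiKeyRepository = container.get('apiKeyRepository');

  const created = await apiKeyRepository.createKey({
    name: 'Local development key',
    description: 'Created from a provisioning script',
    environment: 'development'
  });

  // The plain-text key is only returned here; only its SHA256 hash is stored.
  console.log(`API key (shown once): ${created.key}`);

  await container.close();
})();
```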

297
src/infrastructure/repositories/MySQLImageRepository.js Normal file
View File

@ -0,0 +1,297 @@
const IImageRepository = require('../../domain/interfaces/IImageRepository');
const TaggingResult = require('../../domain/entities/TaggingResult');
const ImageTag = require('../../domain/entities/ImageTag');
const { NotFoundError } = require('../../shared/errors/AppError');
const crypto = require('crypto');
const { v4: uuidv4 } = require('uuid');
/**
* MySQL implementation of IImageRepository
*/
class MySQLImageRepository extends IImageRepository {
/**
* @param {Object} pool - MySQL connection pool
* @param {Object} logger - Logger instance
*/
constructor(pool, logger) {
super();
this.pool = pool;
this.logger = logger;
}
/**
* Save tagging result and image data
* @param {TaggingResult} taggingResult - Tagging result entity
* @param {Buffer} imageBuffer - Image buffer
* @returns {Promise<TaggingResult>} Saved result, or the previously stored result when the image hash already exists (duplicate)
*/
async save(taggingResult, imageBuffer) {
const connection = await this.pool.getConnection();
try {
await connection.beginTransaction();
// Calculate image hash
const imageHash = this._calculateImageHash(imageBuffer);
// Check for duplicate before saving
const existing = await this.findByImageHash(imageHash);
if (existing) {
await connection.rollback();
return existing;
}
// Get image metadata (dimensions, size, etc.)
const metadata = await this._getImageMetadata(imageBuffer);
// Insert image record
const imageId = taggingResult.imageId || uuidv4();
await connection.query(
`INSERT INTO images (id, file_name, original_name, file_size, mime_type, width, height, image_hash)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
[
imageId,
`${imageId}.jpg`, // Store as .jpg (optimized format)
metadata.originalName || 'unknown',
imageBuffer.length,
metadata.mimeType || 'image/jpeg',
metadata.width || null,
metadata.height || null,
imageHash
]
);
// Insert tagging result
const resultId = uuidv4();
await connection.query(
`INSERT INTO tagging_results (id, image_id, tags, summary, total_tags, model_version, was_duplicate)
VALUES (?, ?, ?, ?, ?, ?, ?)`,
[
resultId,
imageId,
JSON.stringify(taggingResult.tags.map(tag => tag.toJSON())),
taggingResult.summary,
taggingResult.getTotalTags(),
'claude-sonnet-4-20250514',
false
]
);
await connection.commit();
this.logger.info('Image and tags saved', { imageId, resultId });
return taggingResult;
} catch (error) {
await connection.rollback();
this.logger.error('Failed to save image and tags', error);
throw new Error('Database operation failed');
} finally {
connection.release();
}
}
/**
* Find tagging result by image ID
* @param {string} imageId - Image ID
* @returns {Promise<TaggingResult|null>}
*/
async findById(imageId) {
const connection = await this.pool.getConnection();
try {
const [rows] = await connection.query(
`SELECT tr.*, i.mime_type, i.width, i.height
FROM tagging_results tr
JOIN images i ON tr.image_id = i.id
WHERE tr.image_id = ?
ORDER BY tr.tagged_at DESC
LIMIT 1`,
[imageId]
);
if (rows.length === 0) {
return null;
}
return this._mapRowToTaggingResult(rows[0]);
} catch (error) {
this.logger.error('Failed to find image by ID', error);
throw new Error('Database operation failed');
} finally {
connection.release();
}
}
/**
* Find tagging result by image hash
* @param {string} hash - SHA256 hash of image
* @returns {Promise<TaggingResult|null>}
*/
async findByImageHash(hash) {
const connection = await this.pool.getConnection();
try {
const [rows] = await connection.query(
`SELECT tr.*, i.mime_type, i.width, i.height
FROM tagging_results tr
JOIN images i ON tr.image_id = i.id
WHERE i.image_hash = ?
ORDER BY tr.tagged_at DESC
LIMIT 1`,
[hash]
);
if (rows.length === 0) {
return null;
}
return this._mapRowToTaggingResult(rows[0]);
} catch (error) {
this.logger.error('Failed to find image by hash', error);
throw new Error('Database operation failed');
} finally {
connection.release();
}
}
/**
* Search images by tag value
* @param {string} tagValue - Tag value to search for
* @returns {Promise<Array<TaggingResult>>}
*/
async findByTagValue(tagValue) {
const connection = await this.pool.getConnection();
try {
const [rows] = await connection.query(
`SELECT DISTINCT tr.*, i.mime_type, i.width, i.height
FROM tagging_results tr
JOIN images i ON tr.image_id = i.id
WHERE JSON_SEARCH(tr.tags, 'one', ?) IS NOT NULL
ORDER BY tr.tagged_at DESC`,
[tagValue]
);
return rows.map(row => this._mapRowToTaggingResult(row));
} catch (error) {
this.logger.error('Failed to search by tag value', error);
throw new Error('Database operation failed');
} finally {
connection.release();
}
}
/**
* Get statistics about tagged images
* @returns {Promise<Object>} Statistics object
*/
async getStats() {
const connection = await this.pool.getConnection();
try {
const [totalImages] = await connection.query(
'SELECT COUNT(*) as count FROM images'
);
const [totalTagged] = await connection.query(
'SELECT COUNT(*) as count FROM tagging_results'
);
const [totalDuplicates] = await connection.query(
'SELECT COUNT(*) as count FROM tagging_results WHERE was_duplicate = TRUE'
);
const [avgTags] = await connection.query(
'SELECT AVG(total_tags) as avg FROM tagging_results'
);
return {
totalImages: totalImages[0].count,
totalTagged: totalTagged[0].count,
totalDuplicates: totalDuplicates[0].count,
averageTagsPerImage: avgTags[0].avg ? parseFloat(avgTags[0].avg).toFixed(2) : 0
};
} catch (error) {
this.logger.error('Failed to get statistics', error);
throw new Error('Database operation failed');
} finally {
connection.release();
}
}
/**
* Check database connection health
* @returns {Promise<boolean>} True if connected
*/
async checkHealth() {
const connection = await this.pool.getConnection();
try {
await connection.ping();
return true;
} catch (error) {
this.logger.error('Database health check failed', error);
return false;
} finally {
connection.release();
}
}
/**
* Map database row to TaggingResult entity
* @private
*/
_mapRowToTaggingResult(row) {
try {
const tags = typeof row.tags === 'string' ? JSON.parse(row.tags) : row.tags;
return new TaggingResult({
imageId: row.image_id,
tags: tags,
summary: row.summary || '',
createdAt: row.tagged_at || new Date()
});
} catch (error) {
this.logger.error('Failed to map row to TaggingResult', error);
throw new Error('Failed to parse tagging result');
}
}
/**
* Calculate SHA256 hash of image buffer
* @private
*/
_calculateImageHash(buffer) {
return crypto.createHash('sha256').update(buffer).digest('hex');
}
/**
* Get image metadata using Sharp (if available)
* @private
*/
async _getImageMetadata(imageBuffer) {
try {
const sharp = require('sharp');
const metadata = await sharp(imageBuffer).metadata();
return {
width: metadata.width,
height: metadata.height,
mimeType: `image/${metadata.format}`,
originalName: 'unknown'
};
} catch (error) {
// If Sharp fails, return minimal metadata
this.logger.warn('Failed to get image metadata', error);
return {
width: null,
height: null,
mimeType: 'image/jpeg',
originalName: 'unknown'
};
}
}
}
module.exports = MySQLImageRepository;

314
src/presentation/controllers/ImageTaggingController.js Normal file
View File

@ -0,0 +1,314 @@
const TagImageRequestDto = require('../../application/dtos/TagImageRequestDto');
const ImageValidator = require('../validators/imageValidator');
const ResponseFormatter = require('../../shared/utils/responseFormatter');
const { ValidationError } = require('../../shared/errors/AppError');
const Joi = require('joi');
/**
* Image Tagging Controller
*/
class ImageTaggingController {
/**
* @param {TagImageUseCase} tagImageUseCase - Tag image use case
* @param {TagBase64ImageUseCase} tagBase64ImageUseCase - Tag base64 image use case
* @param {TagBatchImagesUseCase} tagBatchImagesUseCase - Tag batch images use case
* @param {TagBatchBase64ImagesUseCase} tagBatchBase64ImagesUseCase - Tag batch base64 images use case
* @param {IImageRepository} imageRepository - Image repository
* @param {Object} logger - Logger instance
*/
constructor(tagImageUseCase, tagBase64ImageUseCase, tagBatchImagesUseCase, tagBatchBase64ImagesUseCase, imageRepository, logger) {
this.tagImageUseCase = tagImageUseCase;
this.tagBase64ImageUseCase = tagBase64ImageUseCase;
this.tagBatchImagesUseCase = tagBatchImagesUseCase;
this.tagBatchBase64ImagesUseCase = tagBatchBase64ImagesUseCase;
this.imageRepository = imageRepository;
this.logger = logger;
}
/**
* Tag uploaded image
*/
async tagUploadedImage(req, res, next) {
try {
// Validate upload
if (!req.file) {
throw new ValidationError('Image file is required');
}
await ImageValidator.validateUpload(req.file);
// Convert format if needed
let imageBuffer = req.file.buffer;
imageBuffer = await ImageValidator.convertToClaudeSupportedFormat(
imageBuffer,
req.file.mimetype
);
// Optimize for AI
imageBuffer = await ImageValidator.optimizeForAI(imageBuffer);
// Create request DTO
const requestDto = new TagImageRequestDto({
fileBuffer: imageBuffer,
mimeType: 'image/jpeg', // Always JPEG after optimization
fileName: req.file.originalname || 'unknown'
});
// Execute use case
const result = await this.tagImageUseCase.execute(requestDto);
// Format response
const message = result.isDuplicate
? '✅ Duplicate detected - returned cached tags (no cost)'
: '✅ New image tagged successfully';
res.status(200).json(
ResponseFormatter.success(result, message)
);
} catch (error) {
next(error);
}
}
/**
* Tag base64 image
*/
async tagBase64Image(req, res, next) {
try {
// Validate input
const schema = Joi.object({
base64Image: Joi.string().base64().required(),
mediaType: Joi.string().valid('image/jpeg', 'image/png', 'image/webp', 'image/gif').required(),
fileName: Joi.string().max(255).optional()
});
const { error: validationError, value } = schema.validate(req.body);
if (validationError) {
throw new ValidationError(validationError.details[0].message);
}
// Execute use case
const result = await this.tagBase64ImageUseCase.execute(
value.base64Image,
value.mediaType,
value.fileName
);
// Format response
const message = result.isDuplicate
? '✅ Duplicate detected - returned cached tags (no cost)'
: '✅ New image tagged successfully';
res.status(200).json(
ResponseFormatter.success(result, message)
);
} catch (error) {
next(error);
}
}
/**
* Search images by tag
*/
async searchByTag(req, res, next) {
try {
const tagValue = req.query.tag;
if (!tagValue || typeof tagValue !== 'string') {
throw new ValidationError('Tag query parameter is required');
}
const results = await this.imageRepository.findByTagValue(tagValue);
res.status(200).json(
ResponseFormatter.success(results, `Found ${results.length} image(s) with tag "${tagValue}"`)
);
} catch (error) {
next(error);
}
}
/**
* Get statistics
*/
async getStats(req, res, next) {
try {
const stats = await this.imageRepository.getStats();
res.status(200).json(
ResponseFormatter.success(stats, 'Statistics retrieved successfully')
);
} catch (error) {
next(error);
}
}
/**
* Tag multiple uploaded images (batch)
*/
async tagBatchUploadedImages(req, res, next) {
try {
if (!req.files || req.files.length === 0) {
throw new ValidationError('At least one image file is required');
}
if (req.files.length > 50) {
throw new ValidationError('Maximum 50 images allowed per batch request');
}
// Process all files
const requestDtos = [];
for (const file of req.files) {
try {
await ImageValidator.validateUpload(file);
// Convert format if needed
let imageBuffer = file.buffer;
imageBuffer = await ImageValidator.convertToClaudeSupportedFormat(
imageBuffer,
file.mimetype
);
// Optimize for AI
imageBuffer = await ImageValidator.optimizeForAI(imageBuffer);
// Create request DTO
const requestDto = new TagImageRequestDto({
fileBuffer: imageBuffer,
mimeType: 'image/jpeg',
fileName: file.originalname || 'unknown'
});
requestDtos.push(requestDto);
} catch (error) {
this.logger.warn('Skipping invalid file in batch', {
fileName: file.originalname,
error: error.message
});
}
}
if (requestDtos.length === 0) {
throw new ValidationError('No valid images found in batch');
}
// Execute batch use case
const results = await this.tagBatchImagesUseCase.execute(requestDtos);
// Format response
const successCount = results.filter(r => r.success).length;
const failureCount = results.filter(r => !r.success).length;
const message = `Batch processing completed: ${successCount} succeeded, ${failureCount} failed`;
res.status(200).json(
ResponseFormatter.success({
total: results.length,
succeeded: successCount,
failed: failureCount,
results: results.map(r => ({
success: r.success,
index: r.index,
data: r.data,
error: r.error || null
}))
}, message)
);
} catch (error) {
next(error);
}
}
/**
* Tag multiple base64 images (batch)
*/
async tagBatchBase64Images(req, res, next) {
try {
// Validate input
const schema = Joi.object({
images: Joi.array().items(
Joi.object({
base64Image: Joi.string().base64().required(),
mediaType: Joi.string().valid('image/jpeg', 'image/png', 'image/webp', 'image/gif').required(),
fileName: Joi.string().max(255).optional()
})
).min(1).max(50).required()
});
const { error: validationError, value } = schema.validate(req.body);
if (validationError) {
throw new ValidationError(validationError.details[0].message);
}
// Execute batch use case
const results = await this.tagBatchBase64ImagesUseCase.execute(value.images);
// Format response
const successCount = results.filter(r => r.success).length;
const failureCount = results.filter(r => !r.success).length;
const message = `Batch processing completed: ${successCount} succeeded, ${failureCount} failed`;
res.status(200).json(
ResponseFormatter.success({
total: results.length,
succeeded: successCount,
failed: failureCount,
results: results.map(r => ({
success: r.success,
index: r.index,
data: r.data,
error: r.error || null
}))
}, message)
);
} catch (error) {
next(error);
}
}
/**
* Health check
*/
async getHealth(req, res) {
try {
// Check database connection
let dbStatus = 'unknown';
try {
const isHealthy = await this.imageRepository.checkHealth();
dbStatus = isHealthy ? 'connected' : 'disconnected';
} catch (error) {
dbStatus = 'disconnected';
}
// Check memory usage
const memoryUsage = process.memoryUsage();
const memoryMB = {
rss: Math.round(memoryUsage.rss / 1024 / 1024),
heapTotal: Math.round(memoryUsage.heapTotal / 1024 / 1024),
heapUsed: Math.round(memoryUsage.heapUsed / 1024 / 1024)
};
const health = {
status: dbStatus === 'connected' ? 'healthy' : 'unhealthy',
database: dbStatus,
memory: memoryMB,
uptime: process.uptime(),
timestamp: new Date().toISOString()
};
const statusCode = health.status === 'healthy' ? 200 : 503;
res.status(statusCode).json(health);
} catch (error) {
res.status(503).json({
status: 'unhealthy',
error: error.message,
timestamp: new Date().toISOString()
});
}
}
}
module.exports = ImageTaggingController;

79
src/presentation/middleware/apiKeyAuth.js Normal file
View File

@ -0,0 +1,79 @@
const { AuthenticationError, AuthorizationError } = require('../../shared/errors/AppError');
/**
* API Key Authentication Middleware
*/
class ApiKeyAuthMiddleware {
/**
* @param {ApiKeyRepository} apiKeyRepository - API key repository
* @param {Object} logger - Logger instance
*/
constructor(apiKeyRepository, logger) {
this.apiKeyRepository = apiKeyRepository;
this.logger = logger;
}
/**
* Authentication middleware
*/
authenticate() {
return async (req, res, next) => {
try {
// Skip auth if SKIP_AUTH=true (development only)
if (process.env.SKIP_AUTH === 'true' && process.env.NODE_ENV !== 'production') {
this.logger.warn('Authentication skipped (SKIP_AUTH=true)', { path: req.path });
req.apiKey = { id: 'skip', name: 'Development Skip', environment: 'development' };
return next();
}
// Extract API key from header
const apiKey = this._extractApiKey(req);
if (!apiKey) {
throw new AuthenticationError('API key required. Include X-API-Key header or Authorization: Bearer token.');
}
// Validate key
const keyData = await this.apiKeyRepository.validateKey(apiKey);
// Attach key data to request
req.apiKey = keyData;
this.logger.info('API key authenticated', {
keyId: keyData.id,
name: keyData.name,
environment: keyData.environment,
path: req.path
});
next();
} catch (error) {
next(error);
}
};
}
/**
* Extract API key from request headers
* @private
*/
_extractApiKey(req) {
// Try X-API-Key header first
if (req.headers['x-api-key']) {
return req.headers['x-api-key'];
}
// Try Authorization: Bearer token
const authHeader = req.headers.authorization;
if (authHeader && authHeader.startsWith('Bearer ')) {
return authHeader.substring(7);
}
return null;
}
}
module.exports = ApiKeyAuthMiddleware;

38
src/presentation/middleware/errorHandler.js Normal file
View File

@ -0,0 +1,38 @@
const ResponseFormatter = require('../../shared/utils/responseFormatter');
const logger = require('../../shared/utils/logger');
const { AppError } = require('../../shared/errors/AppError');
/**
* Global error handler middleware
*/
const errorHandler = (err, req, res, next) => {
// Log error
logger.error('Request error', {
method: req.method,
path: req.path,
error: err,
requestId: req.id
});
// Handle known operational errors
if (err instanceof AppError && err.isOperational) {
return res.status(err.statusCode).json(
ResponseFormatter.error(err, err.message)
);
}
// Handle unknown errors
const statusCode = err.statusCode || 500;
const message = process.env.NODE_ENV === 'production'
? 'Internal server error'
: err.message;
res.status(statusCode).json(
ResponseFormatter.error(err, message)
);
};
module.exports = errorHandler;

15
src/presentation/middleware/requestId.js Normal file
View File

@ -0,0 +1,15 @@
const { v4: uuidv4 } = require('uuid');
/**
* Request ID middleware - adds unique ID to each request
*/
const requestIdMiddleware = (req, res, next) => {
req.id = req.headers['x-request-id'] || uuidv4();
res.setHeader('X-Request-ID', req.id);
next();
};
module.exports = requestIdMiddleware;

72
src/presentation/routes/imageRoutes.js Normal file
View File

@ -0,0 +1,72 @@
const express = require('express');
const multer = require('multer');
/**
* Create image routes
* @param {ImageTaggingController} controller - Image tagging controller
* @param {ApiKeyAuthMiddleware} authMiddleware - API key auth middleware
* @returns {express.Router}
*/
const createImageRoutes = (controller, authMiddleware) => {
const router = express.Router();
// Configure multer for file uploads
const upload = multer({
storage: multer.memoryStorage(),
limits: {
fileSize: 50 * 1024 * 1024 // 50MB
}
});
// Health check (public)
router.get('/health', (req, res) => controller.getHealth(req, res));
// Tag uploaded image (auth required)
router.post(
'/tag',
authMiddleware.authenticate(),
upload.single('image'),
(req, res, next) => controller.tagUploadedImage(req, res, next)
);
// Tag base64 image (auth required)
router.post(
'/tag-base64',
authMiddleware.authenticate(),
(req, res, next) => controller.tagBase64Image(req, res, next)
);
// Tag multiple uploaded images (batch, auth required)
router.post(
'/tag/batch',
authMiddleware.authenticate(),
upload.array('images', 50),
(req, res, next) => controller.tagBatchUploadedImages(req, res, next)
);
// Tag multiple base64 images (batch, auth required)
router.post(
'/tag-base64/batch',
authMiddleware.authenticate(),
(req, res, next) => controller.tagBatchBase64Images(req, res, next)
);
// Search by tag (auth required)
router.get(
'/search',
authMiddleware.authenticate(),
(req, res, next) => controller.searchByTag(req, res, next)
);
// Get statistics (auth required)
router.get(
'/stats',
authMiddleware.authenticate(),
(req, res, next) => controller.getStats(req, res, next)
);
return router;
};
module.exports = createImageRoutes;
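A client-side sketch against these routes (assumes a server running locally on port 3000, a valid key in an API_KEY environment variable, and a sample file path; Node 18+ provides the global fetch):

```javascript
// Illustrative client call: the sample path and API_KEY env var are placeholders.
const fs = require('fs');

async function tagImage() {
  const base64Image = fs.readFileSync('./samples/bedroom.jpg').toString('base64');

  const response = await fetch('http://localhost:3000/api/images/tag-base64', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.API_KEY
    },
    body: JSON.stringify({
      base64Image,
      mediaType: 'image/jpeg',
      fileName: 'bedroom.jpg'
    })
  });

  const body = await response.json();
  console.log(body.message, body.data);
}

tagImage().catch(console.error);
```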

135
src/presentation/validators/imageValidator.js Normal file
View File

@ -0,0 +1,135 @@
const { fileTypeFromBuffer } = require('file-type');
const sharp = require('sharp');
const { ValidationError } = require('../../shared/errors/AppError');
const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50MB
const MAX_DIMENSION = 15000; // pixels
const ALLOWED_MIME_TYPES = [
'image/jpeg',
'image/png',
'image/webp',
'image/gif',
'image/heic',
'image/tiff',
'image/bmp'
];
/**
* Image validator utility
*/
class ImageValidator {
/**
* Validate uploaded file
* @param {Object} file - Multer file object
* @returns {Promise<void>}
* @throws {ValidationError} If validation fails
*/
static async validateUpload(file) {
if (!file || !file.buffer) {
throw new ValidationError('File is required');
}
// Check file size
if (file.buffer.length > MAX_FILE_SIZE) {
throw new ValidationError(`File size exceeds maximum of ${MAX_FILE_SIZE / 1024 / 1024}MB`);
}
// Validate actual file type using magic numbers
const fileType = await fileTypeFromBuffer(file.buffer);
if (!fileType) {
throw new ValidationError('Unable to determine file type');
}
// Check if MIME type is allowed
const allowedMimeTypes = [
...ALLOWED_MIME_TYPES,
'image/jpg', // Accept jpg as well
'image/heif', // HEIC variant
'image/x-tiff' // TIFF variant
];
const mimeType = fileType.mime.toLowerCase();
if (!allowedMimeTypes.includes(mimeType) && !allowedMimeTypes.some(allowed => mimeType.includes(allowed.split('/')[1]))) {
throw new ValidationError(`Unsupported file type: ${fileType.mime}. Allowed types: ${ALLOWED_MIME_TYPES.join(', ')}`);
}
// Validate and check dimensions
try {
const metadata = await sharp(file.buffer).metadata();
if (metadata.width > MAX_DIMENSION || metadata.height > MAX_DIMENSION) {
throw new ValidationError(`Image dimensions exceed maximum of ${MAX_DIMENSION}x${MAX_DIMENSION} pixels`);
}
} catch (error) {
if (error instanceof ValidationError) {
throw error;
}
throw new ValidationError('Invalid image file');
}
}
/**
* Convert image to Claude-supported format (JPEG/PNG/WebP/GIF)
* @param {Buffer} buffer - Image buffer
* @param {string} mimeType - Original MIME type
* @returns {Promise<Buffer>} Converted image buffer
*/
static async convertToClaudeSupportedFormat(buffer, mimeType) {
const lowerMime = mimeType.toLowerCase();
// If already in a Claude-supported format, return the buffer unchanged
if (['image/jpeg', 'image/jpg', 'image/png', 'image/webp', 'image/gif'].includes(lowerMime)) {
return buffer;
}
// Convert HEIC/TIFF/BMP to JPEG
try {
const converted = await sharp(buffer)
.jpeg({ quality: 90 })
.toBuffer();
return converted;
} catch (error) {
throw new ValidationError(`Failed to convert image: ${error.message}`);
}
}
/**
* Optimize image for AI processing (resize if needed)
* @param {Buffer} buffer - Image buffer
* @returns {Promise<Buffer>} Optimized image buffer
*/
static async optimizeForAI(buffer) {
try {
const image = sharp(buffer);
const metadata = await image.metadata();
// If image is larger than 2048px, resize it
if (metadata.width > 2048 || metadata.height > 2048) {
const optimized = await image
.resize(2048, 2048, {
fit: 'inside',
withoutEnlargement: true
})
.jpeg({ quality: 85 })
.toBuffer();
return optimized;
}
// If already small enough, just ensure it's JPEG
if (metadata.format !== 'jpeg') {
return await image
.jpeg({ quality: 85 })
.toBuffer();
}
return buffer;
} catch (error) {
throw new ValidationError(`Failed to optimize image: ${error.message}`);
}
}
}
module.exports = ImageValidator;
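A sketch of the same validate → convert → optimize pipeline the controller runs, applied to a local file (the TIFF sample path is a placeholder; validateUpload only needs a multer-style object with a buffer):

```javascript
// Illustrative pipeline: prepare an arbitrary local image for tagging.
const fs = require('fs');
const ImageValidator = require('./src/presentation/validators/imageValidator');

async function prepareForTagging(filePath, mimeType) {
  const buffer = fs.readFileSync(filePath);

  await ImageValidator.validateUpload({ buffer });                               // size, type, dimensions
  let prepared = await ImageValidator.convertToClaudeSupportedFormat(buffer, mimeType);
  prepared = await ImageValidator.optimizeForAI(prepared);                       // JPEG, max 2048px

  return prepared;
}

prepareForTagging('./samples/floor-plan.tiff', 'image/tiff')
  .then((buf) => console.log(`Prepared ${buf.length} bytes`))
  .catch(console.error);
```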

83
src/server.js Normal file
View File

@ -0,0 +1,83 @@
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const compression = require('compression');
const morgan = require('morgan');
// Import middleware and routes
const corsConfig = require('./infrastructure/config/corsConfig');
const container = require('./infrastructure/config/dependencyContainer');
const createImageRoutes = require('./presentation/routes/imageRoutes');
const ImageTaggingController = require('./presentation/controllers/ImageTaggingController');
const ApiKeyAuthMiddleware = require('./presentation/middleware/apiKeyAuth');
const errorHandler = require('./presentation/middleware/errorHandler');
const requestIdMiddleware = require('./presentation/middleware/requestId');
const logger = require('./shared/utils/logger');
const app = express();
const PORT = process.env.PORT || 3000;
// Middleware
app.use(helmet());
app.use(compression());
app.use(cors(corsConfig));
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
app.use(requestIdMiddleware);
app.use(morgan('combined'));
// Root
app.get('/', (req, res) => {
res.json({
service: 'Property Image Tagging API',
version: '1.0.0',
authentication: 'Simple API Key'
});
});
// Dependency injection
const tagImageUseCase = container.get('tagImageUseCase');
const tagBase64ImageUseCase = container.get('tagBase64ImageUseCase');
const tagBatchImagesUseCase = container.get('tagBatchImagesUseCase');
const tagBatchBase64ImagesUseCase = container.get('tagBatchBase64ImagesUseCase');
const imageRepository = container.get('imageRepository');
const apiKeyRepository = container.get('apiKeyRepository');
const imageController = new ImageTaggingController(
tagImageUseCase,
tagBase64ImageUseCase,
tagBatchImagesUseCase,
tagBatchBase64ImagesUseCase,
imageRepository,
logger
);
const authMiddleware = new ApiKeyAuthMiddleware(apiKeyRepository, logger);
// Routes
const imageRoutes = createImageRoutes(imageController, authMiddleware);
app.use('/api/images', imageRoutes);
// Error handler (last)
app.use(errorHandler);
// Graceful shutdown
const gracefulShutdown = async (signal) => {
logger.info(`${signal} received, shutting down`);
await container.close();
process.exit(0);
};
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
// Start
if (process.env.NODE_ENV !== 'test') {
app.listen(PORT, () => {
logger.info(`Server running on port ${PORT}`);
});
}
module.exports = app;
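Because the app is exported without binding to a port under NODE_ENV=test, a smoke test is possible; the sketch below assumes jest and supertest as dev dependencies (not shown in this commit) and that the environment variables the container needs (database settings, ANTHROPIC_API_KEY) are available:

```javascript
// Hypothetical tests/health.test.js (requires jest + supertest, NODE_ENV=test).
const request = require('supertest');
const app = require('../src/server');

describe('GET /api/images/health', () => {
  it('reports service and database status', async () => {
    const res = await request(app).get('/api/images/health');

    expect([200, 503]).toContain(res.status);
    expect(res.body).toHaveProperty('status');
    expect(res.body).toHaveProperty('database');
  });
});
```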

80
src/shared/errors/AppError.js Normal file
View File

@ -0,0 +1,80 @@
/**
* Base application error class
* @class AppError
* @extends Error
*/
class AppError extends Error {
/**
* @param {string} message - Error message
* @param {number} statusCode - HTTP status code
* @param {boolean} isOperational - Whether error is operational (expected) or programming error
*/
constructor(message, statusCode = 500, isOperational = true) {
super(message);
this.statusCode = statusCode;
this.isOperational = isOperational;
Error.captureStackTrace(this, this.constructor);
}
}
/**
* Validation error (400)
*/
class ValidationError extends AppError {
constructor(message) {
super(message, 400);
this.name = 'ValidationError';
}
}
/**
* AI service error (503)
*/
class AIServiceError extends AppError {
constructor(message) {
super(message, 503);
this.name = 'AIServiceError';
}
}
/**
* Not found error (404)
*/
class NotFoundError extends AppError {
constructor(message = 'Resource not found') {
super(message, 404);
this.name = 'NotFoundError';
}
}
/**
* Authentication error (401)
*/
class AuthenticationError extends AppError {
constructor(message = 'Authentication required') {
super(message, 401);
this.name = 'AuthenticationError';
}
}
/**
* Authorization error (403)
*/
class AuthorizationError extends AppError {
constructor(message = 'Invalid authentication credentials') {
super(message, 403);
this.name = 'AuthorizationError';
}
}
module.exports = {
AppError,
ValidationError,
AIServiceError,
NotFoundError,
AuthenticationError,
AuthorizationError
};
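A short sketch of how these classes are meant to be thrown from application code and translated by the error handler (the helper functions are illustrative):

```javascript
// Illustrative helpers: operational errors carry their HTTP status downstream.
const { ValidationError, NotFoundError } = require('./src/shared/errors/AppError');

function assertImageId(imageId) {
  if (!imageId) {
    throw new ValidationError('imageId is required'); // errorHandler responds with 400
  }
}

async function loadTaggingResult(imageRepository, imageId) {
  assertImageId(imageId);
  const result = await imageRepository.findById(imageId);
  if (!result) {
    throw new NotFoundError(`No tagging result for image ${imageId}`); // responds with 404
  }
  return result;
}

module.exports = { assertImageId, loadTaggingResult };
```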

92
src/shared/utils/apiKeyGenerator.js Normal file
View File

@ -0,0 +1,92 @@
const crypto = require('crypto');
const { ValidationError } = require('../errors/AppError');
/**
* API Key Generator utility
*/
class ApiKeyGenerator {
/**
* Generate a secure API key
* @param {string} prefix - Key prefix (e.g., 'key_test_' or 'key_live_')
* @returns {string} Full API key
*/
static generate(prefix = 'key_test_') {
if (!prefix || typeof prefix !== 'string') {
throw new ValidationError('Prefix must be a non-empty string');
}
// Generate 64 hex characters (256 bits)
const randomBytes = crypto.randomBytes(32);
const keySuffix = randomBytes.toString('hex');
return `${prefix}${keySuffix}`;
}
/**
* Hash API key with SHA256
* @param {string} apiKey - API key to hash
* @returns {string} SHA256 hash
*/
static hash(apiKey) {
if (!apiKey || typeof apiKey !== 'string') {
throw new ValidationError('API key must be a non-empty string');
}
return crypto.createHash('sha256').update(apiKey).digest('hex');
}
/**
* Mask API key for display (show only first 13 and last 4 chars)
* @param {string} apiKey - API key to mask
* @returns {string} Masked key
*/
static mask(apiKey) {
if (!apiKey || typeof apiKey !== 'string') {
return '***';
}
if (apiKey.length <= 17) {
return '***';
}
const start = apiKey.substring(0, 13);
const end = apiKey.substring(apiKey.length - 4);
const middle = '*'.repeat(Math.max(0, apiKey.length - 17));
return `${start}${middle}${end}`;
}
/**
* Validate API key format
* @param {string} apiKey - API key to validate
* @returns {boolean}
*/
static isValidFormat(apiKey) {
if (!apiKey || typeof apiKey !== 'string') {
return false;
}
// Format: key_(test|live)_[64 hex chars]
const pattern = /^key_(test|live)_[a-f0-9]{64}$/;
return pattern.test(apiKey);
}
/**
* Extract prefix from API key
* @param {string} apiKey - API key
* @returns {string|null} Prefix or null if invalid
*/
static extractPrefix(apiKey) {
if (!this.isValidFormat(apiKey)) {
return null;
}
const match = apiKey.match(/^(key_(test|live)_)/);
return match ? match[1] : null;
}
}
module.exports = ApiKeyGenerator;
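A quick usage sketch of the intended key lifecycle: generate once, persist only the hash, mask for display:

```javascript
// Sketch: the plain key is shown once; only the SHA256 hash is persisted.
const ApiKeyGenerator = require('./src/shared/utils/apiKeyGenerator');

const apiKey = ApiKeyGenerator.generate('key_test_');
const keyHash = ApiKeyGenerator.hash(apiKey);  // store in api_keys.key_hash
const masked = ApiKeyGenerator.mask(apiKey);   // first 13 chars + '*' padding + last 4

console.log({ masked, keyHash, valid: ApiKeyGenerator.isValidFormat(apiKey) });
```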

126
src/shared/utils/logger.js Normal file
View File

@ -0,0 +1,126 @@
const winston = require('winston');
const DailyRotateFile = require('winston-daily-rotate-file');
const path = require('path');
const fs = require('fs');
const logDir = path.join(process.cwd(), 'logs');
// Create logs directory if it doesn't exist
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true });
}
// Define log format
const logFormat = winston.format.combine(
winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
winston.format.errors({ stack: true }),
winston.format.json()
);
// Console format for development
const consoleFormat = winston.format.combine(
winston.format.colorize(),
winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
winston.format.printf(({ timestamp, level, message, ...meta }) => {
const metaString = Object.keys(meta).length ? JSON.stringify(meta, null, 2) : '';
return `${timestamp} [${level}]: ${message} ${metaString}`;
})
);
// Daily rotate file transport for errors
const errorFileTransport = new DailyRotateFile({
filename: path.join(logDir, 'error-%DATE%.log'),
datePattern: 'YYYY-MM-DD',
level: 'error',
format: logFormat,
maxSize: '20m',
maxFiles: '14d'
});
// Daily rotate file transport for all logs
const combinedFileTransport = new DailyRotateFile({
filename: path.join(logDir, 'combined-%DATE%.log'),
datePattern: 'YYYY-MM-DD',
format: logFormat,
maxSize: '20m',
maxFiles: '14d'
});
// Create logger instance
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: logFormat,
defaultMeta: { service: 'property-image-tagger' },
transports: [
errorFileTransport,
combinedFileTransport
],
exceptionHandlers: [
new winston.transports.File({ filename: path.join(logDir, 'exceptions.log') })
],
rejectionHandlers: [
new winston.transports.File({ filename: path.join(logDir, 'rejections.log') })
]
});
// Add console transport in development
if (process.env.NODE_ENV !== 'production') {
logger.add(new winston.transports.Console({
format: consoleFormat
}));
}
/**
* Log info message
* @param {string} message - Log message
* @param {Object} [meta] - Additional metadata
*/
const info = (message, meta = {}) => {
logger.info(message, meta);
};
/**
* Log error message
* @param {string} message - Log message
* @param {Error|Object} error - Error object or metadata
*/
const error = (message, error = {}) => {
if (error instanceof Error) {
logger.error(message, {
message: error.message,
stack: error.stack,
...error
});
} else {
logger.error(message, error);
}
};
/**
* Log warning message
* @param {string} message - Log message
* @param {Object} [meta] - Additional metadata
*/
const warn = (message, meta = {}) => {
logger.warn(message, meta);
};
/**
* Log debug message
* @param {string} message - Log message
* @param {Object} [meta] - Additional metadata
*/
const debug = (message, meta = {}) => {
logger.debug(message, meta);
};
module.exports = {
info,
error,
warn,
debug,
logger
};
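A usage sketch of the exported helpers, including how Error objects are logged with their stack:

```javascript
// Sketch: structured metadata for info/warn, Error instances for error().
const logger = require('./src/shared/utils/logger');

logger.info('Image tagged', { imageId: 'img-123', totalTags: 30 });
logger.warn('Claude returned fewer tags than expected', { count: 18 });

try {
  JSON.parse('not json');
} catch (err) {
  logger.error('Failed to parse payload', err); // message and stack are captured
}
```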

46
src/shared/utils/responseFormatter.js Normal file
View File

@ -0,0 +1,46 @@
/**
* Format API responses consistently
*/
class ResponseFormatter {
/**
* Format successful response
* @param {*} data - Response data
* @param {string} message - Success message
* @returns {Object}
*/
static success(data, message = 'Success') {
return {
success: true,
message,
data,
timestamp: new Date().toISOString()
};
}
/**
* Format error response
* @param {Error} error - Error object
* @param {string} message - Error message (defaults to error.message)
* @returns {Object}
*/
static error(error, message = null) {
const errorMessage = message || error.message || 'An error occurred';
const response = {
success: false,
message: errorMessage,
timestamp: new Date().toISOString()
};
// Only include error details in development
if (process.env.NODE_ENV === 'development' && error.stack) {
response.stack = error.stack;
}
return response;
}
}
module.exports = ResponseFormatter;
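A short sketch of the two response envelopes the formatter produces (sample payloads are illustrative):

```javascript
// Sketch of both envelope shapes produced by the formatter.
const ResponseFormatter = require('./src/shared/utils/responseFormatter');

const ok = ResponseFormatter.success({ imageId: 'img-123', totalTags: 30 }, 'Image tagged');
// { success: true, message: 'Image tagged', data: { ... }, timestamp: '...' }

const fail = ResponseFormatter.error(new Error('boom'), 'Internal server error');
// { success: false, message: 'Internal server error', timestamp: '...' }
// (stack is included only when NODE_ENV === 'development')
```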