Building a $0.50/Month Market Intelligence Platform on AWS
How I built a professional-grade financial analysis system that tracks market trends, for less than the monthly cost of a cup of coffee
The Challenge
I wanted to build a market intelligence platform that could:
- Track 214 stocks across all major sectors
- Generate market commentary twice daily
- Send professional HTML emails to subscribers
- Provide real-time technical analysis via web interface
- Run Monte Carlo simulations for price projections
- Cost less than a Netflix subscription
Traditional approaches would cost $200-500/month. Using AWS serverless architecture and Kiro (AWS AI assistant), I built it for $0.50.
Architecture Overview
The Serverless Stack
- Data Layer: DynamoDB (5 tables, 13,500+ records) holding price history, technical features, projections, cache, and signals
- Compute Layer: AWS Lambda (10+ functions) for data ingestion, feature engineering, AI analysis, and email delivery
- AI Layer: Amazon Bedrock (Claude 3 Haiku), which generates institutional-quality market commentary
- Orchestration: EventBridge, running the pipeline twice daily at 7am (pre-market) and 5pm (post-market)
- Frontend: S3 + CloudFront serving a static website with interactive charts
- Messaging: Amazon SES for HTML emails with market analysis and trade signals
The Daily Pipeline
Morning Flow (7:00-8:00 AM ET)
7:00 AM - Data Ingestion
Fetch 214 tickers from Yahoo Finance and store OHLCV data in DynamoDB. Takes about 60 seconds and costs roughly $0.0001 per run.
7:10 AM - Feature Engineering
Calculate 20+ technical indicators: RSI, moving averages, volatility, returns, Bollinger Bands, MACD, ATR. Process 124 tickers in 30 seconds.
7:15 AM - AI Analysis
Claude 3 Haiku generates structured JSON (market overview, insights, levels) and professional prose letter (400+ words). Uses only observable price data - no speculation.
8:00 AM - Email Delivery
Send HTML emails to subscribers with AI commentary, technical levels, trade signals, and unusual activity.
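The email template itself isn't shown, but delivery reduces to one SES call per subscriber. A sketch of how the Lambda might assemble the request (field names follow boto3's SES `send_email` API; the addresses and helper name are placeholders):

```python
def build_ses_request(sender, recipient, subject, html_body):
    """Assemble kwargs for SES send_email (shape per boto3's SES client).

    The actual send is then a single call:
        boto3.client("ses").send_email(**build_ses_request(...))
    """
    return {
        "Source": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": subject, "Charset": "UTF-8"},
            "Body": {"Html": {"Data": html_body, "Charset": "UTF-8"}},
        },
    }
```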
Evening Flow (5:00-5:20 PM ET)
- 5:00 PM - Refresh data with market close prices
- 5:10 PM - Recalculate features
- 5:15 PM - Generate updated analysis
- 5:20 PM - Run Monte Carlo simulations (90-day projections)
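The simulation model isn't specified in the post; a simple bootstrap that resamples historical daily returns is one plausible sketch (function and parameter names are mine, not the production code):

```python
import random

def monte_carlo_paths(price, daily_returns, horizon=90, n_paths=1000, seed=42):
    """Project `horizon` trading days ahead by resampling observed
    daily returns with replacement. Returns the simulated
    end-of-horizon prices, one per path.
    """
    rng = random.Random(seed)  # seeded for reproducible projections
    finals = []
    for _ in range(n_paths):
        p = price
        for _ in range(horizon):
            p *= 1.0 + rng.choice(daily_returns)
        finals.append(p)
    return finals
```

Sorting the result and reading off the 5th/50th/95th percentiles gives the kind of 90-day projection band the emails report.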
Signal Generation Logic
Multi-Factor Scoring System
Signals are generated by combining multiple technical indicators into a composite score:
Momentum Signals (40% weight)
- RSI: Oversold (<30) = bullish, Overbought (>70) = bearish
- MACD: Crossover above signal line = bullish, below = bearish
- Price vs MA20: Above = bullish momentum, below = bearish
Trend Signals (30% weight)
- MA20 vs MA50: Golden cross = bullish, death cross = bearish
- Price position: Above MA50 = uptrend, below = downtrend
- Trend strength: Measured by distance from moving averages
Volatility Signals (20% weight)
- Bollinger Bands: Price at lower band = oversold, upper band = overbought
- ATR expansion: High volatility = caution, low volatility = potential breakout
- Volatility percentile: Compare current to 90-day range
Volume Confirmation (10% weight)
- Volume vs 20-day average: Above = conviction, below = weak signal
- Price-volume divergence: Rising price + falling volume = warning
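The weighting above translates directly into a weighted sum. The per-factor scoring shown here is a simplified illustration (names and point values are mine), not the production logic:

```python
WEIGHTS = {"momentum": 0.40, "trend": 0.30, "volatility": 0.20, "volume": 0.10}

def composite_score(factor_scores):
    """Combine per-factor scores (each on a 0-100 scale) into the
    weighted composite described above."""
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS)

def momentum_score(rsi, macd_above_signal, price_above_ma20):
    """One possible momentum sub-score: RSI contributes contrarian
    points (oversold = bullish), and the two binary checks add the rest."""
    if rsi < 30:
        score = 50.0   # oversold -> bullish
    elif rsi > 70:
        score = 0.0    # overbought -> bearish
    else:
        score = 25.0   # neutral zone
    score += 25.0 if macd_above_signal else 0.0
    score += 25.0 if price_above_ma20 else 0.0
    return score
```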
Signal Classification
Composite scores are translated into actionable signals:
- Strong Buy: Score > 70, multiple indicators aligned, volume confirmation
- Buy: Score 50-70, positive momentum, trend support
- Hold: Score 30-50, mixed signals, wait for clarity
- Sell: Score 10-30, negative momentum, trend breakdown
- Strong Sell: Score < 10, multiple bearish indicators, high conviction
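The mapping from score to label is a straight threshold ladder. Boundary values (a score of exactly 50, say) aren't pinned down above, so this sketch assigns them to the higher bucket:

```python
def classify(score):
    """Map a 0-100 composite score to the signal buckets above."""
    if score > 70:
        return "Strong Buy"
    if score >= 50:
        return "Buy"
    if score >= 30:
        return "Hold"
    if score >= 10:
        return "Sell"
    return "Strong Sell"
```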
Signal Validation
Before sending signals, the system validates:
- Data Quality: Minimum 90 days of price history required
- Liquidity Filter: Only stocks with avg volume > 500K shares
- Volatility Check: Flag extreme moves (>3 standard deviations)
- Correlation Analysis: Compare to sector ETF for context
Signals are tracked in DynamoDB with timestamps, allowing performance measurement and backtesting.
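Those gates translate into a few boolean checks per ticker. Parameter names are illustrative, and the correlation-to-sector-ETF step is contextual rather than a hard filter, so it is left out here:

```python
def validate_signal(history_days, avg_daily_volume, move_sigma):
    """Pre-send validation gates, using the thresholds listed above.

    `move_sigma` is the latest price move expressed in standard
    deviations of recent returns.
    """
    checks = {
        "data_quality": history_days >= 90,        # need 90+ days of history
        "liquidity": avg_daily_volume > 500_000,   # avg volume > 500K shares
        "extreme_move": abs(move_sigma) <= 3.0,    # flag > 3-sigma moves
    }
    return all(checks.values()), checks
```

Returning the per-check breakdown alongside the pass/fail verdict makes it easy to log why a signal was suppressed.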
Key Technical Decisions
1. Why Serverless?
Cost Efficiency
- Lambda: Billed per millisecond of execution (1 ms granularity)
- DynamoDB: Pay per read/write
- No idle server costs
Auto-Scaling
- 1 user or 10,000 users - same code
- No capacity planning
- No server management
Reliability
- AWS manages infrastructure
- Built-in redundancy
- 99.95%+ uptime SLAs across the managed services used
2. Why DynamoDB Over RDS?
Performance: Single-digit millisecond latency, no connection pooling issues, scales automatically
Cost: On-demand pricing at $0.09/month for my workload vs RDS t3.micro at $15/month minimum
Simplicity: No database maintenance, backups, or version upgrades
3. Why Claude 3 Haiku?
Quality: Generates institutional-grade analysis, follows complex prompts precisely, understands financial terminology
Cost: $0.25/1M input tokens, $1.25/1M output tokens. My usage: $0.14/month
Speed: Responds in 2-3 seconds, fast enough for real-time API
The AI Prompt Engineering
Challenge: Data-Only Analysis
I constrained Claude to use ONLY observable price data:
```
CRITICAL CONSTRAINT: Use ONLY the data provided.
Do not reference any external information, news,
earnings, analyst actions, or events not directly
observable in this price data.
```
Prompt Refinement
Iterated through 5 versions to achieve:
- Precise language (basis points, not percentages)
- Sector rotation analysis
- Correlation observations (commodities vs equities)
- Technical level identification
- Forward-looking bias assessment
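Putting the constraint and the data into a Bedrock request looks roughly like this. The model ID is Claude 3 Haiku's public Bedrock identifier; the prompt is abbreviated, and the actual `invoke_model` call needs AWS credentials, so it is shown as a comment:

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_analysis_request(market_snapshot, max_tokens=1500):
    """Build an invoke_model body in the Anthropic Messages API schema.

    `market_snapshot` is a JSON-serializable dict of the day's price
    data; the production prompt is longer than this excerpt.
    """
    prompt = (
        "CRITICAL CONSTRAINT: Use ONLY the data provided. Do not reference "
        "any external information, news, earnings, analyst actions, or "
        "events not directly observable in this price data.\n\n"
        f"Price data:\n{json.dumps(market_snapshot)}"
    )
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# The actual call would be roughly:
# bedrock = boto3.client("bedrock-runtime")
# resp = bedrock.invoke_model(modelId=MODEL_ID,
#                             body=build_analysis_request(snapshot))
```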
Cost Breakdown
I built this for myself, but here's what it would cost at scale:
| Service | Monthly Cost | Notes |
|---|---|---|
| Lambda | $0.00 | Within free tier (400K GB-seconds) |
| DynamoDB | $0.09 | 137 MB storage + 165K operations |
| Bedrock | $0.14 | 60 analysis runs/month |
| SES | $0.20 | 3,000 emails (first 1K free) |
| S3 | $0.01 | 25 MB storage |
| CloudFront | $0.00 | Within free tier (1 TB) |
| CloudWatch | $0.06 | Logs and monitoring |
| TOTAL | $0.50 | $0.005 per user |
Scaling Economics
- 1,000 users: $2.50/month ($0.0025/user)
- 10,000 users: $20/month ($0.002/user)
Key insight: Most costs are fixed (data processing). Only email scales linearly.
Lessons Learned
1. Serverless Isn't Always Cheaper
For always-on, high-traffic applications, EC2 can be cheaper. But for scheduled jobs (cron-like), bursty traffic, and low-medium volume APIs, serverless wins on both cost and simplicity.
2. DynamoDB Requires Different Thinking
Coming from SQL: No JOINs (denormalize data), design for access patterns not normalization, use composite keys (ticker + date), leverage GSIs sparingly.
But once you adapt: Blazing fast queries, no connection limits, predictable performance.
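A concrete sketch of the composite-key pattern (the `pk`/`sk` attribute names are illustrative): partitioning on ticker and sorting on ISO date makes "last 90 days for AAPL" a single Query, and denormalizing indicator values onto the same item removes any need for JOINs.

```python
def price_key(ticker, date):
    """Composite primary key: partition on ticker, sort on ISO date."""
    return {"pk": ticker, "sk": date}

def price_item(ticker, date, attributes):
    """A denormalized item: everything a query needs lives on the row."""
    item = price_key(ticker, date)
    item.update(attributes)  # OHLCV plus precomputed indicators
    return item

# A range query with boto3 would then look roughly like:
# table.query(KeyConditionExpression=
#     Key("pk").eq("AAPL") & Key("sk").between("2024-01-01", "2024-03-31"))
```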
3. AI Prompt Engineering > Model Selection
I spent more time refining prompts than choosing models. Claude 3 Haiku (cheapest) produces better results with a good prompt than GPT-4 with a mediocre one.
Prompt iteration:
- v1: Generic "analyze the market" → vague output
- v2: Structured sections → better but robotic
- v3: Data-only constraint → accurate but dry
- v4: Institutional voice → professional but verbose
- v5: Flowing prose + precise metrics → perfect
4. Cache Aggressively
Added 2-hour TTL cache in DynamoDB:
- Reduced Lambda invocations by 80%
- Improved API response time (50ms vs 2s)
- Cost: $0.01/month
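DynamoDB TTL works off an epoch-seconds attribute on each item; a sketch of what the cache items might look like (attribute names are mine):

```python
import time

CACHE_TTL_SECONDS = 2 * 60 * 60  # the post's 2-hour TTL

def cache_item(key, payload, now=None):
    """Build a DynamoDB cache item. DynamoDB's TTL feature deletes
    items once the epoch timestamp in the configured attribute
    (here `expires_at`) has passed."""
    now = int(now if now is not None else time.time())
    return {"pk": f"cache#{key}", "payload": payload,
            "expires_at": now + CACHE_TTL_SECONDS}

def is_fresh(item, now=None):
    """TTL deletion is lazy (it can lag the expiry time), so reads
    should still check the timestamp themselves."""
    now = int(now if now is not None else time.time())
    return item["expires_at"] > now
```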
5. EventBridge > Cron
EventBridge advantages: Visual workflow in AWS Console, built-in retry logic, dead letter queues, CloudWatch integration, no server to run cron on.
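The two daily runs map to two schedule expressions. EventBridge cron is evaluated in UTC, so these assume Eastern Daylight Time (shift by an hour in winter); the rule names are illustrative:

```python
# EventBridge cron field order: minute hour day-of-month month day-of-week year
MORNING_RULE = "cron(0 11 ? * MON-FRI *)"   # 7:00 AM ET pre-market run
EVENING_RULE = "cron(0 21 ? * MON-FRI *)"   # 5:00 PM ET post-market run

# Wiring a rule to the ingestion Lambda with boto3 would look roughly like:
# events = boto3.client("events")
# events.put_rule(Name="morning-pipeline", ScheduleExpression=MORNING_RULE)
# events.put_targets(Rule="morning-pipeline",
#                    Targets=[{"Id": "ingest", "Arn": ingest_lambda_arn}])
```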
Performance Metrics
Latency
- Data fetch: 60s (214 tickers from Yahoo Finance)
- Feature calculation: 30s (124 tickers, 20+ indicators)
- AI analysis: 3s (Claude 3 Haiku)
- Email delivery: 2s per email
- API response: 50ms (cached), 2s (uncached)
Reliability
- Uptime: 99.9% (AWS SLA)
- Failed Lambda invocations: <0.1%
- Email delivery rate: 99.5%
Conclusion
Building a professional-grade market intelligence platform doesn't require expensive servers, complex Kubernetes clusters, dedicated DevOps teams, or $10K/month infrastructure budgets.
It requires:
- Smart architecture choices
- Leveraging managed services
- Understanding your access patterns
- Aggressive caching
- Good prompt engineering
Final stats:
- 214 tickers tracked
- 13,500+ historical records
- 20+ technical indicators
- Market analysis twice daily
- Professional HTML emails
- Interactive web interface
- $0.50/month (scales to 100 users at this cost)
I built this for my own investing research, but the architecture scales effortlessly. The serverless revolution is real.
Tech Stack Summary
Backend: AWS Lambda (Python 3.11), DynamoDB (NoSQL), EventBridge (Orchestration), Bedrock (AI), SES (Email)
Frontend: S3 (Static hosting), CloudFront (CDN), Vanilla JavaScript (No framework)
Data Sources: Yahoo Finance (Free)
Development Tool: Built with Kiro (AWS AI assistant)
Total Lines of Code: ~2,500
Development Time: 1 weekend
Monthly Cost: $0.50 (my usage, scales to 100 users)
Kiro accelerated development throughout, and the economics are remarkable: professional-grade infrastructure for less than a coffee per month.
What's Next: Data Sources
The current system uses Yahoo Finance for price data. Here are additional data sources planned for integration:
Earnings Calendar
- Upcoming earnings dates for tracked tickers
- Alert system 1-2 days before earnings
- Historical earnings surprise data
- Post-earnings price movement analysis
Options Flow Data
- Unusual options activity detection
- Large block trades (whales)
- Put/call ratio by ticker
- Implied volatility changes
Insider Trading
- SEC Form 4 filings (insider buys/sells)
- Cluster detection (multiple insiders buying)
- Historical correlation with price moves
Short Interest
- Days to cover ratio
- Short interest % of float
- Short squeeze potential scoring
- Trend analysis (increasing/decreasing)
Analyst Ratings
- Consensus ratings (buy/hold/sell)
- Price target aggregation
- Upgrade/downgrade alerts
- Analyst accuracy tracking
All data sources will be integrated using the same serverless architecture, keeping costs minimal while adding significant analytical depth.