✅ Sentry Integration Complete & Verified¶
Date: November 1, 2025 Status: 🟢 Active & Configured DSN: Configured for project o4510292155957248
What's Been Set Up¶
1. Configuration ✅¶
- ✅ Sentry DSN added to
.env - ✅ Config updated in
app/config.py - ✅
sentry-sdk[fastapi]installed - ✅ Added to
requirements.txt
2. Integration ✅¶
- ✅ Auto-initializes in
app/main.pyon server start - ✅ FastAPI + Starlette integrations active
- ✅ Privacy-first configuration (no PII)
- ✅ 10% sampling for performance monitoring
- ✅ Environment detection (development/production)
3. Helper Utilities ✅¶
Created app/utils/sentry_helpers.py:
- set_user_context() - Track users (hashed)
- set_conversation_context() - Track conversations
- capture_exception_with_context() - Manual error capture
- add_breadcrumb() - Debugging trail
- capture_message() - Important events
4. Testing ✅¶
- ✅ Test script created:
test_sentry.py - ✅ Test error sent successfully
- ✅ Verified Sentry receives errors
How to Use¶
Automatic Error Tracking (Already Working!)¶
All unhandled exceptions are automatically captured:
@app.post("/orchestrator/message")
async def handle_message(request: OrchestratorRequest):
# Any error here is automatically sent to Sentry
response = message_handler.handle_message(request)
return response
Manual Error Capture with Context¶
from app.utils.sentry_helpers import (
set_user_context,
set_conversation_context,
add_breadcrumb,
capture_exception_with_context
)
def handle_message(self, request):
# Set user context (hashed for privacy)
set_user_context(user_phone=request.sender)
# Set conversation context
set_conversation_context(
chat_guid=request.chat_guid,
mode=request.mode,
persona_id="sage"
)
# Add breadcrumbs for debugging
add_breadcrumb(
message="Starting LLM request",
category="llm",
data={"model": "gpt-5-mini", "temperature": 0.7}
)
try:
response = self.persona_engine.generate_response(...)
except Exception as e:
# Capture with additional context
capture_exception_with_context(
exception=e,
context={
"user_message": request.text,
"persona": "sage",
"memory_count": len(memories)
}
)
raise # Re-raise to let FastAPI handle it
Quick Test¶
Want to see Sentry in action?
# Run the test script
python test_sentry.py
# Or start the server and trigger a test error
./run.sh
# Then visit (add this endpoint to test):
curl http://localhost:8000/sentry-test
Check your Sentry dashboard at: https://sentry.io/organizations/your-org/issues/
You should see the error with: - Full stack trace - User context (hashed phone) - Conversation context - Breadcrumbs showing steps before error - Custom metadata
Sentry Dashboard Access¶
Project: archety-backend Organization ID: o4510292155957248 Dashboard: https://sentry.io/
Key Features to Use:¶
- Issues - See all errors grouped by type
- Performance - Monitor endpoint response times
- Releases - Track errors by deployment version
- Alerts - Get notified when errors spike
Recommended Next Steps¶
1. Set Up Alerts (5 minutes)¶
In Sentry dashboard: - Go to Alerts → Create Alert - Set rule: "When error count > 10 in 1 hour" - Add notification: Email or Slack - Name: "High Error Rate - Archety"
2. Configure Release Tracking¶
When deploying:
# Tag your deployment
export SENTRY_RELEASE="archety-backend@$(git rev-parse --short HEAD)"
# Or use date-based versions
export SENTRY_RELEASE="archety-backend@2025-11-01"
3. Set Environment Variable for Production¶
In your production .env:
ENVIRONMENT=production
SENTRY_DSN=https://961e97f54573cc3c886dd677ac8e4c60@o4510292155957248.ingest.us.sentry.io/4510292227391488
4. Add Breadcrumbs to Critical Operations¶
Good places to add breadcrumbs:
- Before LLM calls (add_breadcrumb("Starting GPT-4 call", category="llm"))
- Before memory searches (add_breadcrumb("Searching mem0", category="memory"))
- Before workflow execution (add_breadcrumb("Executing workflow", category="workflow"))
- Before database queries (add_breadcrumb("Querying conversations", category="db"))
Performance Monitoring¶
Sentry will track: - ⚡ Endpoint response times - 🔍 Slowest transactions - 💾 Database query performance - 🤖 LLM call latency
View in: Performance tab in Sentry dashboard
Privacy & Security ✅¶
- No PII sent -
send_default_pii=False - User IDs hashed - Phone numbers are SHA256 hashed
- Conversation IDs only - No message content in errors
- 10% sampling - Only captures 10% of transactions for performance
Troubleshooting¶
Not seeing errors in Sentry?¶
-
Check DSN is loaded:
-
Run test script:
-
Check server logs:
Too many errors?¶
- Increase error grouping
- Add
before_sendhook to filter - Lower sampling rate
Support & Documentation¶
- Sentry Docs: https://docs.sentry.io/platforms/python/guides/fastapi/
- FastAPI Integration: https://docs.sentry.io/platforms/python/integrations/fastapi/
- Performance Monitoring: https://docs.sentry.io/product/performance/
- Your Sentry Project: https://sentry.io/organizations/your-org/projects/archety-backend/
✅ Sentry is live and monitoring your production errors! 🎉
Every error that occurs will be captured with full context, stack traces, and user information (hashed for privacy). You'll be notified immediately if errors spike, and you can debug issues faster with breadcrumbs and custom context.
Happy debugging! 🐛 → 🎯