Keywords.AI Tracing Debug Report

Date: November 16, 2025
Status: ✅ Root cause identified

Problem Summary

Keywords.AI tracing is working correctly, but bare OpenAI SDK calls are being filtered out. The tracing system only captures spans that are explicitly wrapped with @workflow or @task decorators.

Root Cause

From the debug logs:

[KeywordsAI Debug] Filtering out auto-instrumentation span: openai.chat
(no TRACELOOP_SPAN_KIND or entityPath)

Keywords.AI's tracing library automatically filters out bare OpenAI SDK calls unless they are:

  1. Wrapped in a @workflow or @task decorator, OR
  2. Used through the Keywords.AI gateway (a different integration method)

Current Integration Status

✅ What's Working

  • Keywords.AI tracing library is installed (keywordsai-tracing==0.0.45)
  • API key is configured correctly
  • Telemetry initialization is successful
  • Spans ARE being sent to Keywords.AI API (POST /api/v1/traces)
  • Decorated workflows/tasks appear in dashboard

❌ What's Not Working

  • Bare OpenAI SDK calls in app/utils/llm_client.py are filtered out
  • Only manually decorated functions are tracked
  • Most LLM calls in the app are NOT captured

Why This Happens

Keywords.AI tracing uses a selective filtering strategy:

  • Auto-instrumentation spans (like openai.chat) are filtered out by default
  • Only user-decorated spans (with @workflow or @task) are exported
  • This prevents span spam, but it also means bare LLM calls aren't tracked
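The rule the debug log describes can be sketched as a predicate over span attributes. This is a hedged sketch of the behavior, not the library's actual code; the attribute keys traceloop.span.kind and traceloop.entity.path are assumptions inferred from the "no TRACELOOP_SPAN_KIND or entityPath" log text:

```python
def should_export(span_attributes: dict) -> bool:
    """Keep a span only if it carries a decorator-provided marker.

    Mirrors the debug log: spans with no TRACELOOP_SPAN_KIND or
    entityPath attribute are dropped before export.
    """
    return (
        "traceloop.span.kind" in span_attributes
        or "traceloop.entity.path" in span_attributes
    )

# A bare openai.chat span has neither marker and is dropped;
# a @task-decorated span carries the span-kind marker and is kept.
print(should_export({"gen_ai.system": "openai"}))      # → False
print(should_export({"traceloop.span.kind": "task"}))  # → True
```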

Solutions

Option 1: Verify the Decorator Import

Current code (app/utils/llm_client.py):

@task(name="llm_generate_response")  # Already has decorator!
def generate_response(self, system_prompt: str, user_message: str, ...):
    # This SHOULD be captured

Issue: The @task decorator is already applied, but the OpenAI call inside might still be filtered.

Solution: Ensure the decorator ultimately resolves to keywordsai_tracing.decorators.task. A local wrapper module is fine only if it re-exports the real decorator rather than defining a no-op:

from keywordsai_tracing.decorators import task  # or a wrapper that re-exports this

@task(name="llm_generate_response")
def generate_response(self, ...):
    # Now it will be captured
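The report references both a direct import and a wrapper in app.utils.tracing; the two are compatible if the wrapper simply re-exports the real decorator. A hedged sketch of what app/utils/tracing.py could contain (the fallback behavior is an assumption, not the current file's contents):

```python
# Sketch of app/utils/tracing.py: re-export the real decorators so that
# `from app.utils.tracing import task` still attaches Keywords.AI spans.
try:
    from keywordsai_tracing.decorators import task, workflow
except ImportError:
    # Graceful no-op fallback so the app still runs when the
    # tracing library isn't installed (e.g. local dev, tests).
    from functools import wraps

    def _noop(name=None):
        def deco(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                return fn(*args, **kwargs)
            return wrapper
        return deco

    task = workflow = _noop
```

With this in place, decorated functions are traced when the library is present and behave identically (minus tracing) when it is not.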

Option 2: Use Keywords.AI Gateway (Alternative)

Instead of tracing, route all requests through Keywords.AI's proxy:

from openai import OpenAI

client = OpenAI(
    api_key=settings.openai_api_key,
    base_url="https://api.keywordsai.co/api/",
    default_headers={
        "Authorization": f"Bearer {settings.keywordsai_api_key}",
    }
)

Pros:

  • Guaranteed to track every request
  • No filtering issues
  • Built-in request/response logging

Cons:

  • Requires adding the OpenAI API key to the Keywords.AI dashboard
  • All requests go through the Keywords.AI proxy (added latency?)
  • Different integration model
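Since the gateway changes only the client constructor arguments, the switch can be isolated behind a small helper, which makes it easy to test with a single request and roll back. A hedged sketch (the base_url and Authorization header are taken from the snippet above; the settings attribute names are assumptions):

```python
def make_client_kwargs(use_gateway: bool, openai_key: str, keywordsai_key: str) -> dict:
    """Build OpenAI client kwargs for direct or gateway routing."""
    kwargs = {"api_key": openai_key}
    if use_gateway:
        # Route through the Keywords.AI proxy instead of api.openai.com.
        kwargs["base_url"] = "https://api.keywordsai.co/api/"
        kwargs["default_headers"] = {"Authorization": f"Bearer {keywordsai_key}"}
    return kwargs

# Usage (settings names are hypothetical):
# client = OpenAI(**make_client_kwargs(True, settings.openai_api_key,
#                                      settings.keywordsai_api_key))
```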

Option 3: Disable Filtering (If Possible)

Check if Keywords.AI has a configuration option to capture all OpenAI spans, not just decorated ones.
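One way to check is to locate the filtering code in the installed package by searching for the exact log text from the debug output, then read that file for a configuration flag. A sketch of that search (this only locates the code; whether a disable flag exists is unknown):

```python
import importlib.util
import pathlib

def find_marker(package: str, marker: str) -> list[pathlib.Path]:
    """Return source files in an installed package that contain `marker`."""
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        return []  # package not installed (or a namespace package)
    pkg_dir = pathlib.Path(spec.origin).parent
    return [p for p in pkg_dir.rglob("*.py")
            if marker in p.read_text(errors="ignore")]

# Locate the filter by the exact log message, then inspect it for a flag:
for path in find_marker("keywordsai_tracing",
                        "Filtering out auto-instrumentation span"):
    print(path)
```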

Verification Test

Run this to verify the fix:

python test_keywords_ai_debug.py

Expected output:

  • ✅ All tests pass
  • ✅ OpenAI call appears in dashboard (check after 60 seconds)
  • ✅ Workflow/task decorators working

Next Steps

  1. Verify your LLM client is using the right decorator
     • Check the import statement in app/utils/llm_client.py
     • Ensure it's from app.utils.tracing import task (which wraps keywordsai_tracing.decorators.task)
  2. Check if other LLM calls need decorators
     • call_gpt5() method
     • call_gpt5_mini() method
     • classify_intent() method
  3. Verify in dashboard
     • Wait 60 seconds after the test
     • Check: https://platform.keywordsai.co/monitoring/logs
     • Look for app_name='archety' or 'archety-debug'
  4. Consider the gateway approach if tracing continues to have issues
     • Add the OpenAI API key to the Keywords.AI dashboard: Settings → Providers
     • Update app/utils/llm_client.py to use the gateway base URL
     • Test with a single request first
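Decorating the remaining methods from step 2 is mechanical. A hedged sketch (the method names come from this report; the signatures, span names, and the no-op fallback are assumptions for illustration):

```python
from functools import wraps

try:
    from keywordsai_tracing.decorators import task
except ImportError:
    def task(name=None):  # no-op stand-in so the sketch runs without the library
        def deco(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                return fn(*args, **kwargs)
            return wrapper
        return deco

class LLMClient:
    @task(name="llm_call_gpt5")
    def call_gpt5(self, prompt: str) -> str:
        return "gpt5-stub"       # real OpenAI call goes here

    @task(name="llm_call_gpt5_mini")
    def call_gpt5_mini(self, prompt: str) -> str:
        return "gpt5-mini-stub"  # real OpenAI call goes here

    @task(name="llm_classify_intent")
    def classify_intent(self, message: str) -> str:
        return "intent-stub"     # real OpenAI call goes here
```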

Tracing Architecture

Your App (archety)
    ├─ app/utils/llm_client.py
    │   ├─ @task generate_response() ← Should be captured ✅
    │   ├─ @task classify_intent() ← Should be captured ✅
    │   └─ call_gpt5() ← NOT captured (no decorator) ❌
    ├─ app/utils/tracing.py
    │   └─ initialize_tracing() ← Wraps keywordsai_tracing
    └─ keywordsai_tracing (library)
        ├─ Instruments.OPENAI ← Auto-instruments OpenAI SDK
        ├─ Filtering logic ← Filters bare calls!
        └─ Exporter ← Sends to api.keywordsai.co/api/v1/traces

Files to Check

  1. app/utils/llm_client.py - Add @task to all LLM methods
  2. app/orchestrator/two_stage_handler.py - Check if LLM calls are decorated
  3. app/persona/engine.py - Check prompt building logic
  4. app/main.py - Tracing initialization (already correct ✅)

Debugging Commands

# Check if tracing is loaded
python -c "from app.utils.tracing import is_tracing_enabled; print(is_tracing_enabled())"

# Run diagnostic
python test_keywords_ai_debug.py

# Check Railway env vars
railway variables --service archety-backend-prod | grep KEYWORDS

# Test with decorator
python -c "
from keywordsai_tracing.decorators import task
from openai import OpenAI
import os

@task(name='test_openai')
def test():
    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
    response = client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[{'role': 'user', 'content': 'hi'}],
        max_tokens=5
    )
    print(response.choices[0].message.content)

test()
"
Dashboard Links

  • Traces/Logs: https://platform.keywordsai.co/monitoring/logs
  • Settings: https://platform.keywordsai.co/settings
  • API Keys: https://platform.keywordsai.co/settings/api-keys
  • Providers: https://platform.keywordsai.co/settings/providers (for gateway approach)

Conclusion

Root Cause: OpenAI calls are filtered out unless wrapped with a @task decorator
Solution: Ensure all LLM calls in app/utils/llm_client.py use @task from app.utils.tracing
Status: Already partially implemented; needs verification
Alternative: Use the Keywords.AI gateway instead of tracing