17 Commits

Author SHA1 Message Date
Bhavya U
945c0c61d9 Move docs/prompts.md to .github/instructions/model-prompts.instructions.md (#4942)
- Converted standalone doc into a scoped instructions file (applyTo: src/extension/prompts/node/agent/**)
- Fixed outdated method names (resolvePrompt -> resolveSystemPrompt, PromptConstructor -> SystemPrompt)
- Added resolver interface table, DI examples, resolution order docs
- Restored concrete examples for common model misbehaviors
- Auto-loads when editing agent prompt files
2026-04-02 18:38:27 +00:00
Zhichao Li
601b3c97f6 fix: address review feedback on OTel agent activity metrics (#4801)
* fix: address review feedback on OTel agent activity metrics

* fix: guard recordEditAcceptance for accept/reject only, fix doc wording
2026-03-29 00:28:06 +00:00
Zhichao Li
05da8fb689 feat: add OTel events and metrics for agentic edit quality signals (#4794)
* docs: add OTel backfill plan for agentic change metrics

* docs: add Claude Code OTel parity analysis with feasibility + line estimates

* docs: expand plan to cover all agentic surfaces (inline chat, CLI, cloud, NES)

* docs: remove NES and Claude Code comparison, keep plan lean

* docs: add 3-pillar signal type mapping (metrics/events/traces)

* docs: re-audit signals — counters when easy, events only with useful attrs

* feat: add OTel event emitters for agentic edit quality metrics

* feat: add OTel counters and histograms for agentic edit quality metrics

* feat: wire OTel events/metrics into userActions.ts for all agentic user actions

* feat: wire OTel survival events into apply_patch, replace_string, and code_mapper tools

* feat: wire OTel counters for agent summarization and edit response metrics

* fix: resolve TypeScript errors — thread IOTelService through intent class hierarchy

* docs: update sprint plan with completion notes

* style: fix import ordering from editor auto-sort

* feat: wire OTel counters for cloud session invoke, PR ready, and CLI PR creation

* docs: consolidate OTel edit quality metrics into agent_monitoring.md

* docs: rename Edit Quality to Agent Activity & Outcome

* docs: align Edit Quality references to Agent Activity naming

* refactor: adopt Harald's type-safe metrics API (EditSource/EditOutcome, 2 survival histograms)
2026-03-28 17:13:22 +00:00
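The final refactor above adopts a type-safe metrics API. A minimal sketch of that idea, assuming hypothetical names (`EditSource`, `EditOutcome`, and `AgentActivityMetrics` are illustrative, not the real signatures): string-union types constrain which attribute values a caller can record, so typos in metric attributes become compile errors.

```typescript
// Hypothetical sketch of a type-safe metrics API: union types restrict
// the allowed attribute values at compile time.
type EditSource = 'apply_patch' | 'replace_string' | 'code_mapper';
type EditOutcome = 'accepted' | 'rejected';

// Stand-in for an OTel histogram instrument.
interface Histogram {
	record(value: number, attrs: Record<string, string>): void;
}

class AgentActivityMetrics {
	constructor(private readonly survival: Histogram) { }

	// Callers can only pass known source/outcome values.
	recordEditSurvival(source: EditSource, outcome: EditOutcome, ratio: number): void {
		this.survival.record(ratio, { source, outcome });
	}
}
```

A call like `recordEditSurvival('apply_patch', 'accepted', 0.9)` compiles, while an unknown source string is rejected by the type checker.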
Zhichao Li
d2c8aa5d67 fix: always enable content capture for CLI debug panel (#4581)
Set OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true before
SDK init so the debug panel always shows full prompts, responses,
and tool arguments. Without this, users without the env var set
see empty content in the debug panel.

When user OTel is disabled, SDK spans go to /dev/null so captured
content never leaves the process.
2026-03-21 05:46:45 +00:00
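The pattern in this commit can be sketched as a small env-derivation helper. This is an illustrative approximation, not the real `agentOTelEnv.ts` code: the function name is hypothetical, and the /dev/null file exporter is approximated here with the standard `OTEL_TRACES_EXPORTER=none` setting.

```typescript
// Hypothetical sketch: content capture is always forced on (so the debug
// panel shows full prompts/responses), while external export only happens
// when the user has opted in to OTel.
function deriveCliOTelEnv(userOTelEnabled: boolean, endpoint?: string): Record<string, string> {
	const env: Record<string, string> = {
		// Must be set before SDK init, or the debug panel shows empty content.
		OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT: 'true',
	};
	if (userOTelEnabled && endpoint) {
		env.OTEL_EXPORTER_OTLP_ENDPOINT = endpoint;
	} else {
		// Spans are still created for the debug panel but never exported,
		// so captured content does not leave the process.
		env.OTEL_TRACES_EXPORTER = 'none';
	}
	return env;
}
```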
Zhichao Li
4a4411e88e Native OTel instrumentation for Copilot CLI (background, terminal, debug panel) (#4507)
* add OTel instrumentation spec and plan for all agents

* feat: OTel instrumentation for Copilot CLI background agent

- Add agentOTelEnv.ts config derivation helpers (CLI + Claude)
- Enable SDK OtelLifecycle via env vars before LocalSessionManager ctor
- Add invoke_agent copilotcli wrapper span with traceparent propagation
- Forward OTel env vars to terminal CLI sessions
- Update spec and plan docs for all agents
- 33 tests passing (14 new + 19 existing)

* feat: filter debug-panel-only spans from OTLP export

Spans with non-standard gen_ai.operation.name values (content_event,
user_message) are excluded from external OTLP export while remaining
visible in the Agent Debug Log panel via onDidCompleteSpan.

Only GenAI-conventional operations (invoke_agent, chat, execute_tool,
embeddings, execute_hook) are exported to the user's collector.

* fix: add IOTelService to CopilotCLISessionService ctor in participant test

* fix: pass chatSessionId to CapturingToken for debug panel routing

The CapturingToken was created without chatSessionId, so the debug panel
couldn't route copilotcli OTel spans to the correct session view.

Also: Copilot CLI runtime only supports otlp-http (not gRPC). Terminal
CLI sessions require an HTTP-compatible OTLP endpoint.

* docs: add CLI HTTP-only limitation to spec and dual-port Aspire setup to test plan

* fix: forward OTel env vars to CLI terminal sessions

- Include OTel env vars in terminal profile provider path (dropdown)
  which previously only set shell info without auth/OTel env
- Pass empty env to deriveCopilotCliOTelEnv for terminal sessions so
  vars are always included regardless of process.env pollution from
  the in-process background agent
- Update test plan to use Grafana LGTM stack

* fix: add CHAT_SESSION_ID to attributes in CopilotCLISession

* docs: update OTel instrumentation specification for Copilot CLI and Claude Code

* feat: bridge SDK native OTel spans to Agent Debug panel

Replace synthetic span approach (PR #4494) with a bridge SpanProcessor
that forwards SDK-native spans from the Copilot CLI runtime's
BasicTracerProvider into the extension's IOTelService event stream.

This gives the debug panel the full SDK span hierarchy (subagents,
permissions, hooks, nested tool calls) — identical to what Grafana shows.

Architecture:
- Add injectCompletedSpan() to IOTelService interface for external span
  injection without OTLP re-export
- Create CopilotCliBridgeSpanProcessor that converts ReadableSpan to
  ICompletedSpanData, injects copilot_chat.chat_session_id from a
  traceId→sessionId map, and fires onDidCompleteSpan
- Install bridge on SDK's TracerProvider via internal
  MultiSpanProcessor._spanProcessors array (OTel SDK v2 removed the
  public addSpanProcessor API, but this internal array is the same
  pattern the SDK itself uses in forceFlush)
- Propagate traceparent from extension root span to SDK via
  otelLifecycle.updateParentTraceContext() so all spans share a traceId
- Filter bridge to only forward spans from registered CLI sessions

Code changes:
- copilotCliBridgeSpanProcessor.ts: new bridge processor
- copilotcliSession.ts: remove all synthetic spans (chat, tool, error),
  keep root invoke_agent span + traceparent propagation + bridge wiring
- copilotcliSessionService.ts: install bridge after first session
  creation, wire bridge + SDK trace context updater to sessions
- IOTelService: add injectCompletedSpan to interface + all impls
- Remove outdated synthetic span tests
- Add OTel data flow architecture diagram (HTML)

* fix: update span processing to use parent span context and enhance subagent event identification

* feat: add display names for tool call and subagent events

* docs: merge arch and spec into single developer guide

Combine agent_monitoring_arch.md (foreground-only) and agent-otel-spec.md
(all agents) into a single comprehensive developer reference covering all
four agent paths, bridge architecture, and SDK internal access warnings.

* docs: fix stale addSpanProcessor reference in data flow diagram

* chore: move plan and test docs to offline archive

These documents are reference material for the OTel sprint, not needed
in the shipped PR. Archived to ~/Documents/copilot-otel-archive/.

* test: add bridge SpanProcessor unit tests

13 tests covering: traceId filtering, parentSpanContext conversion,
CHAT_SESSION_ID injection, attribute flattening, event conversion,
HrTime→ms conversion, unregister/shutdown behavior.

* test: add span event identification and naming tests

7 tests covering invoke_agent identification logic: top-level skip,
SDK wrapper skip (no agent name), subagent detection (name attribute
and span name parsing), unknown/missing operation name handling.

* fix: always enable SDK OTel for debug panel regardless of user config

The CLI SDK's OtelLifecycle must always initialize so the bridge
processor can forward native spans to the debug panel. When user
OTel is disabled, COPILOT_OTEL_ENABLED is still set but no OTLP
endpoint is configured — the SDK creates spans (for debug panel)
but doesn't export to any external collector.

The bridge installation is also now unconditional — it installs
even when user OTel is disabled.

* chore: remove transient sprint plan

* fix: suppress SDK OTLP export when user OTel is disabled

When user OTel is disabled, force the SDK to use file exporter to
/dev/null instead of letting it default to OTLP. Also clear any
leftover OTEL_EXPORTER_OTLP_ENDPOINT from previous sessions to
prevent orphaned traces in Grafana.

* docs: add background agents section to user monitoring guide

Cover Copilot CLI (background + terminal) and Claude Code agent
tracing in the user-facing guide. Includes span hierarchy examples,
service.name filtering table, and CLI HTTP-only limitation note.

* docs: remove Claude Code from user guide (not yet supported)

* fixup! feat: OTel instrumentation for Copilot CLI background agent

* fix: address PR review comments

- Use GenAiOperationName constants in EXPORTABLE_OPERATION_NAMES (avoids drift)
- Remove unnecessary delete of OTEL_EXPORTER_OTLP_ENDPOINT from process.env
- Replace 'as any' OTel mocks with typed NoopOTelService in terminal tests
- Clarify comment on empty env arg for terminal OTel env derivation
- Add ExportResultCode.SUCCESS comment for clarity

* fixup! fix: always enable SDK OTel for debug panel regardless of user config

* fix: handle SDK native hook spans in debug panel

The SDK's OtelSessionTracker creates 'hook {type}' spans with
github.copilot.hook.type attributes (not gen_ai.operation.name).
These were silently dropped by completedSpanToDebugEvent. Now
detected by span name prefix and converted to Hook: {type} events.

* docs: add execute_hook spans for Claude hook executions to monitoring documentation

* docs: add hook spans to CLI trace hierarchy in user guide
2026-03-20 22:53:20 +00:00
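The bridge architecture described in this PR can be sketched roughly as follows. All types here are simplified stand-ins (the real code uses the OTel SDK's `ReadableSpan` and the extension's `ICompletedSpanData`/`IOTelService`); only the shape of the idea is shown: convert finished SDK spans, inject the chat session id from a traceId→sessionId map, filter out unregistered traces, and fire them into an event stream without OTLP re-export.

```typescript
// Simplified stand-ins for the real OTel/extension types.
interface ReadableSpanLike {
	name: string;
	spanContext(): { traceId: string; spanId: string };
	attributes: Record<string, unknown>;
}
interface CompletedSpanData {
	name: string;
	traceId: string;
	attributes: Record<string, unknown>;
}

// Sketch of the bridge processor: forwards SDK-native spans into the
// extension's event stream, tagging each with its chat session id and
// dropping spans from traces no registered CLI session started.
class BridgeSpanProcessorSketch {
	private readonly traceToSession = new Map<string, string>();

	constructor(private readonly fire: (span: CompletedSpanData) => void) { }

	registerSession(traceId: string, sessionId: string): void {
		this.traceToSession.set(traceId, sessionId);
	}

	// Corresponds to SpanProcessor.onEnd in the real SDK.
	onEnd(span: ReadableSpanLike): void {
		const sessionId = this.traceToSession.get(span.spanContext().traceId);
		if (sessionId === undefined) {
			return; // not one of our sessions: filter it out
		}
		this.fire({
			name: span.name,
			traceId: span.spanContext().traceId,
			attributes: { ...span.attributes, 'copilot_chat.chat_session_id': sessionId },
		});
	}
}
```

The session-id injection is what lets the debug panel route a span to the correct session view, which is exactly the bug the earlier `CapturingToken` fix addressed.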
Zhichao Li
568ea9428e docs: improve OTel monitoring doc with Quick Start guide and VS Code settings examples (#4243)
- Replace env var Quick Start with step-by-step Aspire Dashboard guide
- Add concise intro explaining what the Aspire Dashboard is
- Convert all example configurations from env vars to VS Code settings JSON
- Keep env var reference table for official documentation
- Note where env vars are still required (e.g. auth headers)
2026-03-06 18:15:53 +00:00
Ariel Agranovich
b27de12576 docs: fixed incorrectly documented Jaeger port. (#4251) 2026-03-06 17:31:32 +00:00
Zhichao Li
ddb6f98ce6 feat(otel): Add OpenTelemetry GenAI instrumentation to Copilot Chat (#3917)
* feat: add OTel GenAI instrumentation foundation

Phase 0 complete:
- spec.md: Full spec with decisions, GenAI semconv, dual-write, eval signals,
  lessons from Gemini CLI + Claude Code
- plan.md: E2E demo plan (chat ext + eval repo + Azure backend)
- src/platform/otel/: IOTelService, config, attributes, metrics, events,
  message formatters, NodeOTelService, file exporters
- package.json: Added @opentelemetry/* dependencies

OTel opt-in behind OTEL_EXPORTER_OTLP_ENDPOINT env var.

* refactor: reorder OTel type imports for consistency

* refactor: reorder OTel type imports for consistency

* feat(otel): wire OTel spans into chat extension — Phase 1 core

- Register IOTelService in DI (NodeOTelService when enabled, NoopOTelService when disabled)
- Add OTelContrib lifecycle contribution for OTel init/shutdown
- Add `chat {model}` inference span in ChatMLFetcherImpl._doFetchAndStreamChat()
- Add `execute_tool {name}` span in ToolsService.invokeTool()
- Add `invoke_agent {participant}` parent span in ToolCallingLoop.run()
- Record gen_ai.client.operation.duration, tool call count/duration, agent metrics
- Thread IOTelService through all ToolCallingLoop subclasses
- Update test files with NoopOTelService
- Zero overhead when OTel is disabled (noop providers, no dynamic imports)

* feat(otel): add embeddings span, config UI settings, and unit tests

- Add `embeddings {model}` span in RemoteEmbeddingsComputer.computeEmbeddings()
- Add VS Code settings under github.copilot.chat.otel.* in package.json
  (enabled, exporterType, otlpEndpoint, captureContent, outfile)
- Wire VS Code settings into resolveOTelConfig in services.ts
- Add unit tests for:
  - resolveOTelConfig: env precedence, kill switch, all config paths (16 tests)
  - NoopOTelService: zero-overhead noop behavior (8 tests)
  - GenAiMetrics: metric recording with correct attributes (7 tests)

* test(otel): add unit tests for messageFormatters, genAiEvents, fileExporters

- messageFormatters: 18 tests covering toInputMessages, toOutputMessages,
  toSystemInstructions, toToolDefinitions (edge cases, empty inputs, invalid JSON)
- genAiEvents: 9 tests covering all 4 event emitters, content capture on/off
- fileExporters: 5 tests covering write/read round-trip for span, log, metric
  exporters plus aggregation temporality

Total OTel test suite: 63 tests across 6 files

* feat(otel): record token usage and time-to-first-token metrics

Add gen_ai.client.token.usage (input/output) and copilot_chat.time_to_first_token
histogram metrics at the fetchMany success path where token counts and TTFT
are available from the processSuccessfulResponse result.

* docs: finalize sprint plan with completion status

* style: apply formatter changes to OTel files

* feat(otel): emit gen_ai.client.inference.operation.details event with token usage

Wire emitInferenceDetailsEvent into fetchMany success path where full
token usage (prompt_tokens, completion_tokens), resolved model, request ID,
and finish reasons are available from processSuccessfulResponse.

This follows the OTel GenAI spec pattern:
- Spans: timing + hierarchy + error tracking
- Events: full request/response details including token counts

The data mirrors what RequestLogger captures for chat-export-logs.json.

* feat(otel): add aggregated token usage to invoke_agent span

Per the OTel GenAI agent spans spec, add gen_ai.usage.input_tokens and
gen_ai.usage.output_tokens as Recommended attributes on the invoke_agent span.

Tokens are accumulated across all LLM turns by listening to onDidReceiveResponse
events during the agent loop, then set on the span before it ends.

Ref: https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/

* feat(otel): add token usage attributes to chat inference span

Defer the `chat {model}` span completion from _doFetchAndStreamChat to
fetchMany where processSuccessfulResponse has extracted token counts.

The chat span now carries:
- gen_ai.usage.input_tokens (prompt_tokens)
- gen_ai.usage.output_tokens (completion_tokens)
- gen_ai.response.model (resolved model)

The span handle is returned from _doFetchAndStreamChat via the result
object so fetchMany can set attributes and end it after tokens are known.

This matches the chat-export-logs.json pattern where each request entry
carries full usage data alongside the response.

* style: apply formatter changes

* fix: correct import paths in otelContrib and add IOTelService to test

* feat: add diagnostic span exporter to log first successful export and failures

* feat: add content capture to OTel spans (messages, responses, tool args/results)

- Chat spans: add copilot.debug_name attribute for identifying orphan spans
- Chat spans: capture gen_ai.input.messages and gen_ai.output.messages when captureContent enabled
- Tool spans: capture gen_ai.tool.call.arguments and gen_ai.tool.call.result when captureContent enabled
- Extension chat endpoint: capture input/output messages when captureContent enabled
- Add CopilotAttr.DEBUG_NAME constant

* fix: register IOTelService in chatLib setupServices for NES test

* fix: register OTel ConfigKey settings in Advanced namespace for configurations test

* fix: register IOTelService in shared test services (createExtensionUnitTestingServices)

* fix: register IOTelService in platform test services

* feat(otel): enhance GenAI span attributes per OTel semantic conventions

- Change gen_ai.provider.name from 'openai' to 'github' for CAPI models
- Rename CopilotAttr to CopilotChatAttr, prefix values with copilot_chat.*
- Add GITHUB to GenAiProviderName enum
- Replace copilot.debug_name with gen_ai.agent.name on chat spans
- Add gen_ai.request.temperature, gen_ai.request.top_p to chat spans
- Add gen_ai.response.id, gen_ai.response.finish_reasons on success
- Add gen_ai.usage.cache_read.input_tokens from cached_tokens
- Add copilot_chat.request.max_prompt_tokens and copilot_chat.time_to_first_token
- Add gen_ai.tool.description to execute_tool spans
- Fix gen_ai.tool.call.id to read chatStreamToolCallId (was reading nonexistent prop)
- Fix tool result capture to handle PromptTsxPart and DataPart (not just TextPart)
- Add gen_ai.input.messages and gen_ai.output.messages to invoke_agent span (opt-in)
- Move gen_ai.tool.definitions from chat spans to invoke_agent span (opt-in)
- Add gen_ai.system_instructions to chat spans (opt-in)
- Fix error.type raw strings to use StdAttr.ERROR_TYPE constant
- Centralize hardcoded copilot.turn_count and copilot.endpoint_type into CopilotChatAttr
- Add COPILOT_OTEL_CAPTURE_CONTENT=true to launch.json for testing
- Document span hierarchy fixes needed in plan.md

* feat(otel): connect subagent spans to parent trace via context propagation

- Add TraceContext type and getActiveTraceContext() to IOTelService
- Add storeTraceContext/getStoredTraceContext for cross-boundary propagation
- Add parentTraceContext option to SpanOptions for explicit parent linking
- Implement in NodeOTelService using OTel remote span context
- Capture trace context when execute_tool runSubagent fires (keyed by toolCallId)
- Restore parent context in subagent invoke_agent span (via subAgentInvocationId)
- Auto-cleanup stored contexts after 5 minutes to prevent memory leaks
- Update test mocks with new IOTelService methods
- Update plan.md with investigation findings

* fix(otel): fix subagent trace context key to use parentRequestId

The previous implementation stored trace context keyed by chatStreamToolCallId
(model-assigned tool call ID), but looked it up by subAgentInvocationId
(VS Code internal invocation.callId UUID). These are different IDs that don't
match across the IPC boundary.

Fix: key by chatRequestId on store side (available on invocation options),
and look up by parentRequestId on subagent side (same value, available on
ChatRequest). Both reference the parent agent's request ID.

Verified: 21-span trace with subagent correctly nested under parent agent.
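The keying fix above boils down to: store and lookup must use an identifier visible on both sides of the IPC boundary. A minimal sketch of such a single-use store, with hypothetical names (the real service also adds TTL cleanup):

```typescript
interface TraceContext { traceId: string; spanId: string }

// Hypothetical sketch of the single-use trace-context store. The bug was
// using two different IDs for store (tool call id) and lookup (invocation
// id); the fix keys both sides by the parent agent's request id.
class TraceContextStore {
	private readonly byRequestId = new Map<string, TraceContext>();

	// Tool-call side: keyed by chatRequestId.
	store(requestId: string, ctx: TraceContext): void {
		this.byRequestId.set(requestId, ctx);
	}

	// Subagent side: looked up by parentRequestId (the same value).
	// Single-use: the entry is removed on retrieval.
	retrieve(parentRequestId: string): TraceContext | undefined {
		const ctx = this.byRequestId.get(parentRequestId);
		this.byRequestId.delete(parentRequestId);
		return ctx;
	}
}
```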

* fix(otel): add model attrs to invoke_agent and max_prompt_tokens to BYOK chat

- Set gen_ai.request.model on invoke_agent span from endpoint
- Track gen_ai.response.model from last LLM response resolvedModel
- Add copilot_chat.request.max_prompt_tokens to BYOK chat spans
- Document upstream gaps in plan.md (BYOK token usage, programmatic tool IDs)

* test(otel): add trace context propagation tests for subagent linkage

Tests verify:
- storeTraceContext/getStoredTraceContext round-trip and single-use semantics
- getActiveTraceContext returns context inside startActiveSpan
- parentTraceContext makes child span inherit traceId from parent
- Independent spans get different traceIds without parentTraceContext
- Full subagent flow: store context in tool call, retrieve in subagent

* fix(otel): add finish_reasons and ttft to BYOK chat spans, document orphan spans

- Set gen_ai.response.finish_reasons on BYOK chat success
- Set copilot_chat.time_to_first_token on BYOK chat success
- Document Gap 4: duplicate orphan spans from CopilotLanguageModelWrapper
- Identify all orphan span categories (title, progressMessages, promptCategorization, wrapper)

* docs(otel): update Gap 4 analysis — wrapper spans have actual token usage data

The copilotLanguageModelWrapper orphan spans are the actual CAPI HTTP
handlers, not duplicates. They contain real token usage, cache read tokens,
resolved model names, and temperature — all missing from the consumer-side
extChatEndpoint spans due to VS Code LM API limitations.

Updated plan.md with:
- Side-by-side attribute comparison table
- Three fix approaches (context propagation, span suppression, enrichment)
- Recommendation: Option 1 (propagate trace context through IPC)

* feat(otel): propagate trace context through BYOK IPC to link wrapper spans

- Pass _otelTraceContext through modelOptions alongside _capturingTokenCorrelationId
- Inject IOTelService into CopilotLanguageModelWrapper
- Wrap makeRequest in startActiveSpan with parentTraceContext when available
- This creates a byok-provider bridge span that makes chatMLFetcher's chat span
  a child of the original invoke_agent trace, bringing real token usage data
  into the agent trace hierarchy

* debug(otel): add debug attribute to verify trace context capture in BYOK path

* fix(otel): remove debug attribute, BYOK trace context propagation verified working

Verified: 63-span trace with Azure BYOK (gpt-5) correctly shows:
- byok-provider bridge spans linking wrapper chat spans into agent trace
- Real token usage (in:21458 out:1730 cache:19072) visible on wrapper chat spans
- hasCtx:true on all extChatEndpoint spans confirming context capture
- Two subagent invoke_agent spans correctly nested under main agent
- Zero orphan copilotLanguageModelWrapper spans

* refactor(otel): replace byok-provider bridge span with invisible context propagation

Add runWithTraceContext() to IOTelService — sets parent trace context
without creating a visible span. The wrapper's chat spans now appear
directly as children of invoke_agent, eliminating the noisy
byok-provider intermediary span.

Before: invoke_agent → byok-provider → chat (wrapper)
After:  invoke_agent → chat (wrapper)
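The "invisible context propagation" idea can be approximated with Node's `AsyncLocalStorage`: parent context flows ambiently to spans started inside the callback, with no intermediary span created. This is a sketch under that assumption; the real implementation uses the OTel context API, and `startSpanSketch` is a hypothetical stand-in.

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

interface TraceContext { traceId: string; parentSpanId: string }

const ambient = new AsyncLocalStorage<TraceContext>();

// Sketch of runWithTraceContext: spans started inside fn inherit the given
// parent context, but no visible bridge span is created.
function runWithTraceContext<T>(ctx: TraceContext, fn: () => T): T {
	return ambient.run(ctx, fn);
}

// Stand-in for span creation: a real implementation would consult the
// active OTel context; here we just read the ambient value.
function startSpanSketch(name: string): { name: string; traceId: string } {
	const parent = ambient.getStore();
	return { name, traceId: parent?.traceId ?? 'new-trace' };
}
```

With this shape, the wrapper's `chat` span lands directly under `invoke_agent` in the trace, which is the before/after difference the commit describes.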

* refactor(otel): remove duplicate BYOK consumer-side chat span

The extChatEndpoint no longer creates its own chat span. The wrapper's
chatMLFetcher span (via CopilotLanguageModelWrapper) is the single source
of truth with full token usage, cache data, and resolved model.

Before: invoke_agent → chat (empty, extChatEndpoint) + chat (rich, wrapper)
After:  invoke_agent → chat (rich, wrapper only)

* fix(otel): restore chat span for non-wrapper BYOK providers (Anthropic, Gemini)

The previous commit removed the extChatEndpoint chat span, which was correct
for Azure/OpenAI BYOK (served by CopilotLanguageModelWrapper via chatMLFetcher).
But Anthropic and Gemini BYOK providers call their native SDKs directly,
bypassing CopilotLanguageModelWrapper — so they need the consumer-side span.

Now: always create a chat span in extChatEndpoint with basic metadata
(model, provider, response.id, finish_reasons). For wrapper-based providers,
the chatMLFetcher also creates a richer sibling span with token usage.

* fix(otel): skip consumer chat span for wrapper-based BYOK providers

Only create the extChatEndpoint chat span for non-wrapper providers
(Anthropic, Gemini) that need it as their only span. Wrapper-based
providers (Azure, OpenAI, OpenRouter, Ollama, xAI) get a single rich
span from chatMLFetcher via CopilotLanguageModelWrapper.

Result: 1 chat span per LLM call for all provider types.

* fix: remove unnecessary 'google' from non-wrapper vendor set

* feat(otel): add rich chat span with usage data for Anthropic BYOK provider

Move chat span creation into AnthropicLMProvider where actual API response
data (token usage, cache reads) is available. The span is linked to the
agent trace via runWithTraceContext and enriched with:
- gen_ai.usage.input_tokens / output_tokens
- gen_ai.usage.cache_read.input_tokens
- gen_ai.response.model / response.id / finish_reasons

Remove consumer-side extChatEndpoint span for all vendors (nonWrapperVendors
now empty) since both wrapper-based and Anthropic providers create their
own spans with full data.

Next: apply same pattern to Gemini provider.

* feat(otel): add rich chat span for Gemini BYOK, clean up extChatEndpoint

- Add OTel chat span with full usage data to GeminiNativeBYOKLMProvider
- Remove all consumer-side span code from extChatEndpoint (dead code)
- Each provider now owns its chat span with real API response data:
  * CAPI: chatMLFetcher
  * OpenAI-compat BYOK: CopilotLanguageModelWrapper → chatMLFetcher
  * Anthropic: AnthropicLMProvider
  * Gemini: GeminiNativeBYOKLMProvider
- Fix Gemini test to pass IOTelService

* feat(otel): enrich Anthropic/Gemini chat spans with full metadata

Add to both providers:
- copilot_chat.request.max_prompt_tokens (model.maxInputTokens)
- server.address (api.anthropic.com / generativelanguage.googleapis.com)
- gen_ai.conversation.id (requestId)
- copilot_chat.time_to_first_token (result.ttft)

Now matches CAPI chat span attribute parity.

* feat(otel): add server.address to CAPI/Azure BYOK chat spans

Extract hostname from urlOrRequestMetadata when it's a URL string
and set as server.address on the chat span. Works for both CAPI
and CopilotLanguageModelWrapper (Azure BYOK) paths.

* feat(otel): add max_tokens and output_messages to Anthropic/Gemini chat spans

- gen_ai.request.max_tokens from model.maxOutputTokens
- gen_ai.output.messages (opt-in) from response text
- Closes remaining attribute gaps vs CAPI/Azure BYOK spans

* fix(otel): capture tool calls in output_messages for chat spans

When model responds with tool calls instead of text, the output_messages
attribute was empty. Now captures both text parts and tool call parts
in the output_messages, matching the OTel GenAI output messages schema.

Also: Azure BYOK invoke_agent zero tokens is a known upstream gap —
extChatEndpoint returns hardcoded usage:0 since VS Code LM API doesn't
expose actual usage from the provider side.

* fix(otel): capture tool calls in output_messages for Anthropic/Gemini BYOK spans

Same fix as CAPI — when model responds with tool calls, include them
in gen_ai.output.messages alongside text parts. All three provider
paths (CAPI, Anthropic, Gemini) now consistently capture both text
and tool call parts in output messages.

* fix(otel): add input_messages and agent_name to Anthropic/Gemini chat spans

- gen_ai.input.messages (opt-in) captured from provider messages parameter
- gen_ai.agent.name set to AnthropicBYOK / GeminiBYOK for identification

Closes the last attribute gaps vs CAPI/Azure BYOK chat spans.

* fix(otel): fix input_messages serialization for Anthropic/Gemini BYOK

- Map enum role values to names (1→user, 2→assistant, 3→system)
- Extract text from LanguageModelTextPart content arrays instead of
  showing '[complex]' for all messages
- Use OTel GenAI input messages schema with role + parts format
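The role-mapping part of this fix is a small lookup. A sketch, assuming the numeric values the commit lists (1→user, 2→assistant, 3→system); the helper name is illustrative:

```typescript
// Hypothetical sketch: VS Code LM message roles are numeric enum values
// and must be mapped to GenAI role names before serializing
// gen_ai.input.messages.
const ROLE_NAMES: Record<number, string> = { 1: 'user', 2: 'assistant', 3: 'system' };

function roleName(role: number): string {
	return ROLE_NAMES[role] ?? 'unknown';
}
```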

* docs(otel): add remaining metrics/events work to plan.md

Coverage matrix showing:
- Anthropic/Gemini BYOK missing: operation.duration, token.usage,
  time_to_first_token metrics, and inference.details event
- CAPI and Azure BYOK (via wrapper) fully covered
- Tool/agent/session metrics covered across all providers
- 4 tasks (M1-M4) to close the gap

* feat(otel): add metrics and inference events to Anthropic/Gemini BYOK providers

Both providers now record:
- gen_ai.client.operation.duration histogram
- gen_ai.client.token.usage histograms (input + output)
- copilot_chat.time_to_first_token histogram
- gen_ai.client.inference.operation.details log event

All metrics/events now have full parity across CAPI, Azure BYOK,
Anthropic BYOK, and Gemini BYOK.

* fix(otel): fix LoggerProvider constructor — use 'processors' key (SDK v2)

The OTel SDK v2 changed the LoggerProvider constructor option from
'logRecordProcessors' to 'processors'. The old key was silently
ignored, causing all log records to be dropped.

This is why logs never appeared in Loki despite traces working fine.

* docs: add agent monitoring guide with OTel usage and Claude/Gemini comparison

* docs: remove Claude/Gemini comparison from monitoring guide

* docs: add OTel comparison with Claude Code and Gemini CLI

* docs: reorganize monitoring docs — user guide + dev architecture

- agent_monitoring.md: polished user-facing guide (for VS Code website)
- agent_monitoring_arch.md: developer-facing architecture & instrumentation guide
- Removed internal plan/spec/comparison files from repo (moved to ~/Documents)

* fix(otel): restore _doFetchViaHttp body and _fetchWithInstrumentation after rebase

* fix(otel): propagate otelSpan through WebSocket/HTTP routing paths

The otelSpan was created in _doFetchAndStreamChat but not included
in returns from _doFetchViaWebSocket and _doFetchViaHttp, causing
the caller (fetchMany) to always receive undefined for otelSpan.

Fix: await both routing paths and spread otelSpan into the result.

* docs(otel): improve monitoring docs, add collector setup, fix trace context

- Expand agent_monitoring.md with detailed span/metric/event attribute tables
- Add BYOK provider coverage, subagent trace propagation docs
- Add Backend Considerations: Azure App Insights (via collector), Langfuse, Grafana
- Add End-to-End Setup & Verification section with KQL examples
- Add OTel Collector config + docker-compose for Azure App Insights
- Fix: emit inference details event before span.end() in chatMLFetcher
  (fixes 'No trace ID' log records in App Insights)
- Fix: pass active context in emitLogRecord for trace correlation
- Update launch.json to point at OTel Collector (localhost:4328)

* docs(otel): merge Backend Considerations and E2E sections to remove redundancy

* docs(otel): remove internal dev debug reference from user-facing guide

* docs(otel): remove Grafana section and Jaeger refs from App Insights section

* docs(otel): trim Backend section to factual setup guides, remove claims

* docs(otel): final accuracy audit — fix false claims against code

- Mark copilot_chat.session.start event as 'not yet emitted' (defined but no call site)
- Mark copilot_chat.agent.turn event as 'not yet emitted' (defined but no call site)
- Mark copilot_chat.session.count metric as 'not yet wired up'
- Fix OTEL_EXPORTER_OTLP_PROTOCOL desc: only 'grpc' changes behavior
- Fix telemetry kill switch claim: vscodeTelemetryLevel not wired in services.ts
- Remove false toolCalling.tsx instrumentation point from arch doc
- Fix docker-compose comments: wrong port numbers (16686→16687, 4318→4328)
- Add reference to full collector config file from inline snippet

* docs(otel): remove telemetry.telemetryLevel references — OTel is independent

* feat(otel): wire up session.start event, agent.turn event, and session.count metric

- emitSessionStartEvent + incrementSessionCount at invoke_agent start (top-level only)
- emitAgentTurnEvent per LLM response in onDidReceiveResponse listener
- Remove 'not yet wired' markers from docs

* chore: untrack .playwright-mcp/ and add to .gitignore

* chore: remove otel spec reference files

* chore(otel): remove OpenTelemetry environment variables from launch configurations

* fix(otel): add 64KB truncation limit for content capture attributes

Prevents OTLP batch export failures when large prompts/responses are
captured. Aligned with gemini-cli's limitTotalLength pattern.

Applied truncateForOTel() to all JSON.stringify calls feeding span
attributes across chatMLFetcher, toolCallingLoop, toolsService,
anthropicProvider, geminiNativeProvider, and genAiEvents.
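A minimal sketch of the truncation guard described above (the function name matches the commit; the implementation here is an assumption, counting characters rather than bytes for simplicity):

```typescript
// Hypothetical sketch of truncateForOTel: applied to every JSON.stringify
// result before it becomes a span attribute, so one oversized prompt or
// response cannot fail a whole OTLP batch export.
const MAX_ATTR_CHARS = 64 * 1024;

function truncateForOTel(value: string): string {
	if (value.length <= MAX_ATTR_CHARS) {
		return value;
	}
	return value.slice(0, MAX_ATTR_CHARS) + '…[truncated]';
}
```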

* refactor(otel): make GenAiMetrics methods static to avoid per-call allocations

Aligned with gemini-cli pattern of module-level metric functions.
Eliminates 17+ throwaway GenAiMetrics instances per agent run.

* fix(otel): fix timer leak, cap buffered ops, rate-limit export logs

- storeTraceContext: track timers for clearTimeout on retrieval/shutdown,
  add 100-entry max with LRU eviction
- BufferedSpanHandle: cap _ops at 200 to prevent unbounded growth
- DiagnosticSpanExporter: rate-limit failure logs to once per 60s
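The first of these fixes (tracked timers plus a 100-entry cap) can be sketched as a bounded, single-use store. The class name is hypothetical; the eviction relies on `Map` preserving insertion order, so the first key is the oldest.

```typescript
// Hypothetical sketch of the leak fixes: every stored entry gets a TTL
// timer that is cleared on retrieval or eviction, and the map is capped
// with oldest-first eviction.
class BoundedContextStore<T> {
	private readonly entries = new Map<string, { value: T; timer: ReturnType<typeof setTimeout> }>();

	constructor(private readonly maxEntries = 100, private readonly ttlMs = 5 * 60_000) { }

	store(key: string, value: T): void {
		if (this.entries.size >= this.maxEntries) {
			const oldest = this.entries.keys().next().value; // insertion order
			if (oldest !== undefined) {
				this.remove(oldest);
			}
		}
		const timer = setTimeout(() => this.remove(key), this.ttlMs);
		this.entries.set(key, { value, timer });
	}

	// Single-use retrieval; also clears the pending timer (the leak fix).
	retrieve(key: string): T | undefined {
		const entry = this.entries.get(key);
		if (!entry) {
			return undefined;
		}
		this.remove(key);
		return entry.value;
	}

	private remove(key: string): void {
		const entry = this.entries.get(key);
		if (entry) {
			clearTimeout(entry.timer);
			this.entries.delete(key);
		}
	}
}
```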

* docs(otel): fix Jaeger UI port to match docker-compose (16687)

* chore(otel): update sprint plan — mark P0/P1 tasks done

* fix(otel): remove as any casts in BYOK provider content capture

Use proper Array.isArray + instanceof checks instead of as any[]
casts for LanguageModelChatMessage.content iteration.

* refactor(otel): extract OTelModelOptions shared interface

Replaces 3 duplicated inline type assertions for _otelTraceContext
and _capturingTokenCorrelationId with a single shared interface.
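A plausible shape for the shared interface (property names come from the commit message; everything else is assumed):

```typescript
// Sketch: one shared interface instead of three duplicated inline casts.
interface OTelModelOptions {
	_otelTraceContext?: unknown;           // propagated trace context, if any
	_capturingTokenCorrelationId?: string; // correlates token usage to a request
}

// A single assertion site replaces the scattered inline type assertions.
function getOTelOptions(options: object): OTelModelOptions {
	return options as OTelModelOptions;
}
```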

* refactor(otel): route OTel logs through ILogService output channel

Replace console.info/error/warn in NodeOTelService with a log callback.
OTelContrib logs essential status to the Copilot Chat output channel
for user troubleshooting (enabled/disabled, exporter config, shutdown).
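The callback wiring can be sketched like this (`NodeOTelService` is named in the commit; the signatures here are illustrative):

```typescript
// Sketch: the OTel service logs through a callback instead of console.*,
// so the contribution layer decides where messages go.
type OTelLogCallback = (level: 'info' | 'warn' | 'error', message: string) => void;

class NodeOTelService {
	constructor(private readonly log: OTelLogCallback) {}

	enable(): void {
		// Previously console.info(...); now routed through the callback so
		// it can land in the Copilot Chat output channel.
		this.log('info', 'OTel exporter enabled');
	}
}

// The contribution wires the callback (here captured into an array for
// illustration; in the extension it would call ILogService).
const lines: string[] = [];
const service = new NodeOTelService((level, message) => lines.push(`[${level}] ${message}`));
```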

* fix(otel): remove orphaned OTel ConfigKey definitions

OTel config is read via workspace.getConfiguration in services.ts,
not through IConfigurationService.get(ConfigKey). These constants
were unused dead code.

* test(otel): add comprehensive OTel instrumentation tests

- Agent trace hierarchy (invoke_agent → chat → execute_tool, subagent
  propagation, error states, metrics, events)
- BYOK provider span emission (CLIENT kind, token usage, error.type,
  content capture gating, parentTraceContext linking)
- chatMLFetcher two-phase span lifecycle (create → enrich → end,
  error path, operation duration metric)
- Service robustness (runWithTraceContext, startActiveSpan error
  lifecycle, storeTraceContext overwrite)
- CapturingOTelService reusable test mock for all OTel assertions

* chore: apply formatter import sorting

* chore: remove outdated sprint plan document

* feat(otel): add OTel configuration settings for tracing and logging

* fix(otel): ensure metric reader is flushed and shutdown properly
2026-03-02 20:46:30 +00:00
Ulugbek Abdullaev
eed28ec3a1 Revert "request logger debug view grouping and ordering (#3019)" (#3114)
This reverts commit 3616847b8d.
2026-01-23 13:40:31 +00:00
Connor Peet
ba56721dfa tools: add support for model-specific tool registration (#2857)
* tools: add support for model-specific tool registration

This PR goes with https://github.com/microsoft/vscode/pull/287666

This allows the registration of tools that are scoped to specific
language models. These tools can be registered at runtime with
definitions derived from e.g. the server.

I think we should adopt this and move away from the
`alternativeDefinitions` pattern we have used previously.

Example of having tools specific for GPT 4.1 vs 4o:

```ts
ToolRegistry.registerModelSpecificTool(
	{
		name: 'gpt41_get_time',
		inputSchema: {},
		description: 'Get the current date and time (4.1)',
		displayName: 'Get Time (GPT 4.1)',
		toolReferenceName: 'get_time',
		source: undefined,
		tags: [],
		models: [{ id: 'gpt-4.1' }],
	},
	class implements ICopilotTool<unknown> {
		invoke() {
			return new vscode.LanguageModelToolResult([new vscode.LanguageModelTextPart('Current year is 2041 (GPT 4.1)')]);
		}
	}
);

ToolRegistry.registerModelSpecificTool(
	{
		name: 'gpt4o_get_time',
		inputSchema: {},
		description: 'Get the current date and time (4o)',
		displayName: 'Get Time (GPT 4o)',
		toolReferenceName: 'get_time',
		source: undefined,
		tags: [],
		models: [{ id: 'gpt-4o' }],
	},
	class implements ICopilotTool<unknown> {
		invoke() {
			return new vscode.LanguageModelToolResult([new vscode.LanguageModelTextPart('Current year is 2040 (GPT 4o)')]);
		}
	}
);
```

* demo

* fix

* overrides

* add overridesTool

* fix inverted logic

* test fixes and back compat

* make memory tool model specific

* fix tests and contribute memory to the vscode toolset

* version

* fix unit tests

* rm config

* fix missing askquestions

---------

Co-authored-by: bhavyaus <bhavyau@microsoft.com>
2026-01-22 18:34:05 +00:00
Aaron Munger
3616847b8d request logger debug view grouping and ordering (#3019)
* Add hierarchical token support for request logger grouping

* chronological ordering, pr feedback

* revert subagent as child, not working

* Revert subagent hierarchy features from CapturingToken

Remove parentToken, createChild(), getRoot(), isDescendantOf() and currentToken
since tool invocation happens outside the parent's captureInvocation() context,
making AsyncLocalStorage-based context propagation infeasible.

Updated docs with Future Improvements section describing potential solutions.
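The infeasibility described above can be demonstrated in a few lines (illustrative names; `captureInvocation` is from the commit message):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Sketch: AsyncLocalStorage only propagates to code called within run()'s
// call tree, which is why tool invocations that happen outside the
// parent's captureInvocation() context cannot see the parent token.
const storage = new AsyncLocalStorage<string>();

function captureInvocation<T>(token: string, fn: () => T): T {
	return storage.run(token, fn);
}

// Inside the parent's context, the token is visible:
const inside = captureInvocation('parent-token', () => storage.getStore());

// But code running later, outside that call tree, sees no store — so
// parent/child linking can't be recovered this way.
const outside = storage.getStore();
```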

* clean up tests

* perf feedback
2026-01-21 18:44:14 +00:00
Ulugbek Abdullaev
9d043e2602 nes: feat: stest creator (#2982)
* feat: add NES expected edit capture feature

Add functionality to capture expected edits when NES suggestions are rejected:
- Add ExpectedEditCaptureController for managing capture sessions
- Add configuration settings for enabling the feature
- Register capture commands with keybindings
- Add context key for inlineEditsEnabled to enable keybindings
- Include documentation for the feature

* Revert debug recorder to original settings

* feat: add tests for filtering sensitive files in inline edit logs

- Implemented unit tests to ensure sensitive files such as .env files, private keys, and files in sensitive directories are filtered out correctly from logs.
- Added handling for Windows-style backslash paths in the filtering function.
- Preserved non-document log entries during filtering.
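The filtering described above can be sketched as follows (patterns inferred from the commit messages — .env files, private keys, sensitive directories, case-insensitive matching, backslash normalization; the real pattern list is assumed):

```typescript
// Sketch of sensitive-path filtering; the actual patterns in the feature
// may be broader.
const SENSITIVE_PATTERNS = [
	/(^|\/)\.env(\..*)?$/i,        // .env, .env.local, ...
	/\.(pem|key)$/i,               // private key material
	/(^|\/)(secrets?|\.ssh)(\/|$)/i, // sensitive directories
];

function isSensitivePath(path: string): boolean {
	// Normalize Windows-style backslashes before matching.
	const normalized = path.replace(/\\/g, '/');
	return SENSITIVE_PATTERNS.some(pattern => pattern.test(normalized));
}
```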

feat: add NesFeedbackSubmitter tests

- Created comprehensive tests for the NesFeedbackSubmitter class, covering methods for extracting document paths and filtering recordings by excluded paths.
- Ensured proper handling of metadata files and invalid JSON scenarios.
- Included performance tests for filtering large recordings efficiently.

fix: enhance inline completion provider with document path metadata

- Updated inline completion provider to include document path in metadata during rejection captures.

feat: add command for submitting expected edits in inline edit provider feature

- Registered a new command to allow submission of captured edits in the inline edit provider feature.

* refactor: update GitHub session retrieval method and adjust test expectations for performance

* test: update inlineEditDebugComponent tests for consistent string quoting

* feat: enhance sensitive file filtering with case-insensitive matching and additional patterns

* feat: add User-Agent header to GitHub API requests in NesFeedbackSubmitter

* Move NES Expected Edit Capture documentation

* fix compilation

* move spec to docs/

* simplify by not having `| undefined`

* nicer tracing

* remove redundancy

* migrate to sublogger

* web-compat Buffer

* reuse code

* correct composition of edits

* use cmd+enter to save the captured edits

---------

Co-authored-by: Erik Portillo <6964428+erikportillo@users.noreply.github.com>
2026-01-20 17:30:49 +00:00
Rob Lourens
ccbc4b9bb7 Tweak to tools.md (#2988) 2026-01-19 19:16:57 +00:00
Rob Lourens
04b6016192 Gpt prompt cleanups (#2020)
* Refactor agent prompt tests

* Gpt prompt cleanups
Includes switching the gpt-5.1-codex-mini prompt to use the codex prompt
2025-11-15 00:03:09 +00:00
Bhavya U
237bb568c9 Add authoring guide for model-specific prompts (#1625)
* Add authoring guide for model-specific prompts

* Update docs/prompts.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update docs/prompts.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update docs/prompts.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-26 03:36:50 +00:00
Rob Lourens
0893eaecfd Rename executePrompt to runSubagent (#1420) 2025-10-19 21:52:40 +00:00
Rob Lourens
91fe4863d3 Add doc about implementing tools (#146)
* Add doc about implementing tools
Ported from old wiki and updated

* A couple more notes
2025-07-07 22:09:41 +00:00