Arshdeep Singh • 5 days ago
Prompt Opinion A2A times out after receiving and rendering valid completed task
## Summary
Prompt Opinion successfully calls our external A2A agent, receives HTTP 200, and visibly renders the returned completed task payload in the
browser transcript.
However, after the `SendA2AMessage` tool call succeeds, the Prompt Opinion chat stream still emits:
> The LLM took too long to respond and the operation was cancelled
So the remaining issue appears to be in Prompt Opinion's post-tool LLM synthesis after a successful external A2A response, not in public routing or our A2A runtime response.
## External A2A setup
Agent name:
```text
external A2A orchestrator
```

Prompt Opinion calls this endpoint shape:

```text
POST /message:send/v1/message:send
```

No authentication headers or secrets are required for this test endpoint.
## Request body Prompt Opinion sends
Prompt Opinion sends a compact A2A message body shaped like this:
```json
{
  "message": {
    "role": "ROLE_USER",
    "parts": [
      {
        "text": "is this patient safe to discharge today?"
      }
    ],
    "messageId": "..."
  }
}
```
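For reference, a minimal Python sketch of building and POSTing that body. The base URL and both helper names are illustrative (not part of Prompt Opinion's actual tooling); only the body shape comes from the request above.

```python
import json
import urllib.request
import uuid


def build_a2a_message_body(text: str) -> dict:
    """Build the compact A2A message:send body shown above."""
    return {
        "message": {
            "role": "ROLE_USER",
            "parts": [{"text": text}],
            # Any unique id; the real caller fills this in.
            "messageId": str(uuid.uuid4()),
        }
    }


def send_message(base_url: str, text: str, timeout: float = 30.0) -> dict:
    """POST the body to the agent's message:send endpoint.

    No auth headers, matching the test endpoint described above.
    """
    data = json.dumps(build_a2a_message_body(text)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/message:send",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```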
## Runtime response summary
Our runtime returns:
```json
{
  "http_status": 200,
  "response_bytes": 5356,
  "content_type": "application/json; charset=utf-8",
  "has_jsonrpc_2_0": true,
  "has_result_task": true,
  "has_top_level_task_alias": true,
  "task_state": "TASK_STATE_COMPLETED",
  "has_status_message_parts_text": true,
  "has_artifact_text": true,
  "includes_final_verdict_not_ready": true,
  "includes_required_evidence_anchors": true,
  "both_downstream_services_hit": true
}
```
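Most of these flags can be recomputed mechanically from a parsed response body. A hedged sketch (the function name is mine, not part of our runtime):

```python
def summarize_response(payload: dict) -> dict:
    """Recompute a few of the summary flags above from a parsed A2A response body."""
    task = payload.get("result", {}).get("task", {})
    status = task.get("status", {})
    status_parts = status.get("message", {}).get("parts", [])
    artifacts = task.get("artifacts", [])
    return {
        "has_jsonrpc_2_0": payload.get("jsonrpc") == "2.0",
        "has_result_task": bool(task),
        "has_top_level_task_alias": "task" in payload,
        "task_state": status.get("state"),
        "has_status_message_parts_text": any(p.get("text") for p in status_parts),
        "has_artifact_text": any(
            p.get("text") for a in artifacts for p in a.get("parts", [])
        ),
    }
```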
## Response shape
The response is a JSON-RPC 2.0 envelope with `result.task` and a top-level `task` alias:

```json
{
  "jsonrpc": "2.0",
  "id": "...",
  "result": {
    "task": {
      "id": "...",
      "contextId": "...",
      "status": {
        "state": "TASK_STATE_COMPLETED",
        "message": {
          "role": "ROLE_AGENT",
          "parts": [
            {
              "text": "Care Transitions Command result:\nFinal verdict: not_ready.\nStructured baseline: ready.\nHidden-risk result: hidden_risk_present.\n\nWhy the answer changed:\nThe structured chart looked discharge-ready at rest, but narrative evidence shows exertional oxygen desaturation and unsafe home setup tonight.\n\nEvidence:\n- Nursing Note 2026-04-18 20:40: SpO2 dropped to 82% after walking/stairs with dyspnea.\n- Case Management Addendum 2026-04-18 20:55: home oxygen delivery delayed until tomorrow; daughter cannot stay overnight.\n\nImmediate blockers:\n- clinical_stability\n- equipment_and_transport\n- home_support_and_services\n\nRequired before discharge:\nHold discharge today; reassess exertional oxygen needs; confirm oxygen delivery; confirm overnight support/transport plan; update clinician handoff."
            }
          ]
        }
      },
      "artifacts": [
        {
          "name": "Care Transitions Command fused response",
          "parts": [
            {
              "text": "Care Transitions Command result:\nFinal verdict: not_ready.\nStructured baseline: ready.\nHidden-risk result: hidden_risk_present.\n\nWhy the answer changed:\nThe structured chart looked discharge-ready at rest, but narrative evidence shows exertional oxygen desaturation and unsafe home setup tonight.\n\nEvidence:\n- Nursing Note 2026-04-18 20:40: SpO2 dropped to 82% after walking/stairs with dyspnea.\n- Case Management Addendum 2026-04-18 20:55: home oxygen delivery delayed until tomorrow; daughter cannot stay overnight.\n\nImmediate blockers:\n- clinical_stability\n- equipment_and_transport\n- home_support_and_services\n\nRequired before discharge:\nHold discharge today; reassess exertional oxygen needs; confirm oxygen delivery; confirm overnight support/transport plan; update clinician handoff."
            }
          ]
        }
      ],
      "metadata": {
        "final_verdict": "not_ready",
        "hidden_risk_result": "hidden_risk_present",
        "narrative_source_count": 3,
        "both_mcps_hit": true
      }
    }
  },
  "task": {
    "...": "same task object as result.task"
  }
}
```
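For question 1 below, either location yields the same text in this payload. A sketch of the fallback order a consumer might use (my helper, not Prompt Opinion's actual extraction logic):

```python
def extract_final_text(payload: dict):
    """Pull the final answer from the envelope above, preferring
    result.task.status.message.parts, then artifacts, then the
    top-level task alias. Returns None if no text is found."""
    for task in (payload.get("result", {}).get("task"), payload.get("task")):
        if not task:
            continue
        # First choice: the status message parts.
        parts = (task.get("status", {}).get("message") or {}).get("parts", [])
        for part in parts:
            if part.get("text"):
                return part["text"]
        # Fallback: the first artifact part with text.
        for artifact in task.get("artifacts", []):
            for part in artifact.get("parts", []):
                if part.get("text"):
                    return part["text"]
    return None
```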
## What renders in the browser
Prompt Opinion visibly renders the returned task content in the transcript. The visible browser transcript includes both STATUS_MESSAGE and ARTIFACT_MESSAGES; for example:

```text
STATUS_MESSAGE: Care Transitions Command result:
Final verdict: not_ready.
Structured baseline: ready.
Hidden-risk result: hidden_risk_present.

Why the answer changed:
The structured chart looked discharge-ready at rest, but narrative evidence shows exertional oxygen desaturation and unsafe home setup tonight.

Evidence:
Nursing Note 2026-04-18 20:40: SpO2 dropped to 82% after walking/stairs with dyspnea.
Case Management Addendum 2026-04-18 20:55: home oxygen delivery delayed until tomorrow; daughter cannot stay overnight.
```

So Prompt Opinion is receiving and displaying the completed task payload.
## Remaining error
After that successful external A2A result, the Prompt Opinion chat stream still emits:
> The LLM took too long to respond and the operation was cancelled
This appears to happen after the external A2A runtime has already returned HTTP 200 with the completed task.
## What seems ruled out
This does not appear to be:
- Public endpoint routing failure
- Runtime POST failure
- Non-200 A2A response
- Missing result.task
- Missing completed task state
- Missing visible task text
- Oversized payload
- Missing evidence text
- Downstream service failure
The response is compact (5,356 bytes, about 5.3 KB) and renders visibly in the transcript.
## Question
Is there a different response shape or convention Prompt Opinion expects from SendA2AMessage so that the chat model stops cleanly after the
external A2A task completes?
In particular:
1. Should the final answer be in result.task.status.message.parts[0].text, result.task.artifacts[0].parts[0].text, or somewhere else?
2. Should the response include a separate result.message in addition to result.task?
3. Is the top-level task alias acceptable, ignored, or potentially harmful?
4. Should HTTP+JSON message endpoints return application/json or application/a2a+json?
5. Is there a known Prompt Opinion post-tool synthesis timeout after successful SendA2AMessage responses?
6. Is there a way for an external A2A agent response to tell Prompt Opinion “this task is complete; render this text as the final answer
without further LLM synthesis”?
## Expected behavior
Once Prompt Opinion receives the completed A2A task with visible text and TASK_STATE_COMPLETED, it should render the returned text and
complete the chat turn without a timeout/cancelled error.
## Actual behavior
Prompt Opinion renders the returned task text, but the chat turn still ends with:
> The LLM took too long to respond and the operation was cancelled

3 comments
Arshdeep Singh • 5 days ago
I really thought this had markdown support, guess not. I hope things are still legible and understandable.
Arshdeep Singh • 3 days ago
I did fix it by increasing the timeout and implementing cache warm-up for the first pass.
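For anyone hitting the same cold-start issue: the warm-up amounts to firing a throwaway request in the background at startup so the first real call doesn't pay the cold-start latency. A generic sketch (names are mine; `ping` is whatever cheap request your agent accepts, not a specific API):

```python
import threading
import time


def warm_up_async(ping, retries: int = 3, delay_s: float = 1.0) -> threading.Thread:
    """Run a throwaway `ping` call in a background daemon thread at startup
    so the first real request does not pay the cold-start cost.
    Errors are retried a few times, then silently ignored."""
    def run():
        for _ in range(retries):
            try:
                ping()
                return
            except Exception:
                time.sleep(delay_s)

    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t
```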
Mahbubul Haque Manager • 3 days ago
Hi Arshdeep, good to know you were able to overcome the issue by increasing the timeout. To answer some of your other questions:
1. Po reads from both of those locations so it doesn't matter.
2. No. Only have either task or message, not both.
3. Unfortunately, not very sure what you are asking here.
4. I believe either should work, but let us know if you run into any issues using one or the other.
5. It's cumulative. From the time you send your prompt, we start a timer that runs until we get the response from the LLM; there isn't a separate timeout for the phase after the external agent responds. You can control the timeout from the editor when you create a BYO/external agent, as you already found out.
6. You can modify the consult prompt of your byo agent in the editor to fit the behavior you need. Just keep in mind to keep the replacement variables included in your custom prompt.
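The cumulative timer described in answer 5 can be sketched as a single deadline shared by the whole turn (the class name is mine; this illustrates the pattern, not Po's implementation):

```python
import time


class TurnDeadline:
    """One cumulative budget per chat turn: the clock starts when the
    prompt is sent and is shared by the tool call and the post-tool
    LLM synthesis; there is no separate per-phase timeout."""

    def __init__(self, budget_s: float):
        self._expires_at = time.monotonic() + budget_s

    def remaining(self) -> float:
        """Seconds left in the turn's budget (never negative)."""
        return max(0.0, self._expires_at - time.monotonic())

    def expired(self) -> bool:
        return self.remaining() == 0.0
```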