AI Positron Troubleshooting and FAQ

Common questions and troubleshooting guidance for AI Positron users, covering LLM selection, configuration, error messages, and best practices.

Overview

This FAQ addresses common issues and questions encountered when using AI Positron with Oxygen desktop, Oxygen Content Fusion CMS, or integrated in Web Author with a third-party CMS. The guidance provided is based on real-world user experiences and technical support interactions.

LLM Selection and Configuration

Q: What LLM models are recommended for AI Positron?

A: AI Positron works best with capable LLMs such as:

  • Claude Sonnet
  • Claude Haiku
  • OpenAI GPT 5.1
  • OpenAI GPT 4.1

If you use a less capable LLM, AI Positron may not produce optimal results. The quality of responses is directly dependent on the LLM's capabilities.

Q: Why is my LLM choice important?

A: The LLM model is the most critical factor in AI Positron's performance. Many reported issues (poor response quality, hallucinations, slow performance) stem from LLM limitations rather than from AI Positron itself. Ensure that you're using a capable model.

Q: What is a project context prompt and why do I need one?

A: A project context prompt is a set of instructions that describes your project's characteristics, terminology, style guidelines, and specific requirements. It helps the LLM generate more accurate and relevant responses tailored to your documentation standards. For detailed guidance, see Customizing AI Positron for Your DITA XML Project.
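
For illustration only, a project context prompt for a DITA project might start with instructions along these lines (the product name and rules shown are placeholders, not recommendations):

  You are assisting with the DITA XML documentation of <product name>.
  Write in US English, in the present tense and active voice.
  Use "log in to" as the verb and "login" as the noun; never use "log into".
  Keep sentences short and prefer task-oriented phrasing.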

Token and Context Window Errors

Q: What does "Exceeded … allowed tokens …" error mean?

A: This error indicates that the LLM's context window is not large enough to process your file. Modern LLMs typically have a context window of about 200k tokens and can process files up to approximately 40 KB without issues. If you encounter this error:

  • Break your content into smaller files.
  • Use an LLM with a larger context window.
  • Reduce the amount of content sent to the LLM in a single request.

This is a limitation of the LLM itself, not something AI Positron can fix.

Q: How can I avoid token limit errors?

A: Keep individual topics reasonably sized (under 40 KB). If you have very large topics, consider breaking them into smaller, more focused topics following DITA best practices.
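
As a rough rule of thumb (token counts vary by model and tokenizer), English text averages around four characters per token, so a 40 KB topic corresponds to roughly 10,000 tokens. That leaves ample room within a 200k-token context window for the system prompt, the project context prompt, and the model's response.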

Reporting Problems

Q: What should I do when I want to report a problem based on an interaction in the Chat view?

A: Export your chat conversation (for AI Positron Desktop, use the export action in the "Actions" drop-down menu; for AI Positron Web Author, type "/" in the chat input and select "Export chat") and contact support with the exported JSON file.

MathML and LaTeX Issues

Q: Why does AI Positron generate invalid MathML from LaTeX?

A: This depends on the LLM model's capabilities. To improve results, add examples of correct LaTeX to MathML conversions in your project context prompt. This helps guide the LLM toward the correct format.
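
For example, a conversion pair added to the project context prompt might look like the following (an illustrative sketch; where possible, use pairs taken from your own content):

  LaTeX: \frac{x}{y+1}

  MathML:
  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <mfrac>
      <mi>x</mi>
      <mrow>
        <mi>y</mi>
        <mo>+</mo>
        <mn>1</mn>
      </mrow>
    </mfrac>
  </math>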

Topic Checkout and Editing Issues

Q: Why do I get errors when using AI Positron without checking out the topic?

A: When AI Positron is used in a CMS integration and the topic is not checked out, the add-on cannot make changes to the open document. Always check out your topic before using AI Positron features.

Q: Why don't my changes appear until I refresh the browser?

A: In some scenarios of third-party CMS integrations, AI Positron edits the topic source but changes remain hidden until you refresh the browser. This is an integration issue between the CMS and AI Positron. Open an issue with the CMS requesting better integration with AI Positron.

Preview and Compare Tool Issues

Q: Why does the Compare tool show LLM reasoning instead of topic changes?

A: The LLM may output reasoning along with the modified content. To fix this, instruct the LLM in your project context prompt to wrap modified content in Markdown codeblocks when providing explanations:

When replying with both explanations and modified content, ALWAYS wrap the modified content in Markdown codeblocks.

Once the LLM returns content in Markdown code blocks, AI Positron will correctly identify and use only the modified content.
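
For illustration, a reply shaped like the following lets AI Positron pick up only the fenced content (the paragraph shown is an invented placeholder):

  Here is the rewritten paragraph in active voice:

  ```xml
  <p>The installer copies the configuration files to the user's home directory.</p>
  ```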

Q: Why do content references appear unresolved in the Compare tool?

A: Keyrefs and conkeyrefs are not being resolved in the preview. This is sometimes a CMS integration issue. Ask the CMS to contact the Oxygen team about improving this functionality.

Response Insertion and Content Issues

Q: Why does "Insert response at caret position" insert LLM commentary instead of the requested content?

A: The LLM is providing both explanations and modified content without clear separation. Update your project context prompt to instruct the LLM:

When replying with both explanations and modified content, ALWAYS wrap the modified content in Markdown codeblocks.

This helps AI Positron distinguish between the actual content and explanatory text.

Q: Why does "Suggest improved title" insert the entire response instead of just the title?

A: This is the same issue as above: the LLM needs to wrap the suggested title in a Markdown codeblock whenever it also provides discussion about the suggestion.

Q: Why does AI Positron insert reasoning into the topic source?

A: The LLM is responding with both explanations and modified content without clear separation. Use the Markdown codeblock approach described above to resolve this.

AI Action Issues

Q: Why does the "Correct Grammar" button report errors in metadata elements?

A: The Correct Grammar action checks the underlying XML metadata along with the text content. This depends on the LLM model quality. You can create a custom AI action with a more refined prompt. Contact support for the definition of the Correct Grammar action if you want to customize it.
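
For example, a custom grammar action could add an instruction along these lines (illustrative wording; adjust the element list to your content model):

  Correct grammar and spelling only in human-readable text, for example inside <p>, <li>, <title>, and <shortdesc> elements. Do not modify attribute values, IDs, key references, or <prolog> metadata.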

Q: Why does "Join Items" create verbose responses?

A: This depends on the LLM model. You can create your own custom Join Items AI action with a more concise prompt. Contact support for the action definition.

Q: Why does "Readability" provide vague suggestions?

A: The default Readability action may be too generic for your project. Contact support for the Readability action definition and create a custom version tailored to your needs.

Q: Why does "Use Active Voice" include false positives?

A: The action may flag metadata elements and attributes incorrectly. This depends on the LLM model quality. You can create a custom AI action with a more refined prompt. Contact support for the action definition.

Q: Why does "Generate Alt Text" produce descriptions that are too brief?

A: After invoking the Generate Alt Text action, continue the conversation in the chat. Ask the LLM to expand the description or make it more detailed. Alternatively, create a custom AI action with a prompt that emphasizes detail and specificity. Contact support for the action definition.
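
For example, after the action produces a short description, a follow-up message in the chat could be:

  Expand the alt text to two sentences and mention the dialog name and the button that is highlighted.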

Q: Why does "Formula/Equation" button fail?

A: This may depend on the LLM model quality. You can create your own custom AI action for formula generation. Contact support for the default action definition if you want to customize it.

UI and Usability Issues

Q: Why don't my Favorites persist across sessions?

A: In certain CMS AI Positron Web Author integrations, favorites only persist within the current topic's session. To persist them across sessions, the CMS needs to implement APIs on their side. Open an issue with the CMS requesting this feature.

Q: Why doesn't the History drop-down persist?

A: As with Favorites, the History drop-down requires CMS-side APIs to persist its data across sessions.

Q: Why can't I upload files of type X?

A: File upload support is limited to a list of types mentioned in the documentation (including XML, text, Markdown, Word). Provide details about the specific error message when reporting this issue, and note whether the error occurs consistently or intermittently.

Performance Issues

Q: Why is performance lackluster in some scenarios?

A: Performance depends primarily on the LLM model you're using. AI Positron cannot improve performance beyond what the LLM provides. Consider using a faster or more efficient LLM model.

Q: Why does analyzing a topic take a long time?

A: This slowdown is related to the LLM model's performance. Try using a faster LLM model or breaking your analysis into smaller chunks.

DTD and Structural Validation Issues

Q: Why does AI Positron generate content that violates my DTD?

A: This depends on the LLM model quality. To improve results:

  • Provide examples of correct structures in your project context prompt (see the sketch after this list).
  • Instruct the LLM to avoid specific mistakes it frequently makes.
  • Create custom AI actions with more specific prompts.
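
A minimal sketch of such an example, assuming a standard DITA task document type (adapt it to your own specialization):

  <task id="install-app">
    <title>Install the application</title>
    <taskbody>
      <steps>
        <step>
          <cmd>Run the installer.</cmd>
          <info>The installer copies the files to the installation directory.</info>
        </step>
        <step>
          <cmd>Restart the computer.</cmd>
        </step>
      </steps>
    </taskbody>
  </task>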

Content Security and Hallucinations

Q: Can AI Positron retrieve information from outside my network?

A: No. AI Positron does not have tools to retrieve content from outside your network. If responses reference external sources or competitor information, these are LLM hallucinations. To mitigate this:

  • Instruct the LLM to avoid certain types of responses in your project context prompt.
  • When using AI Positron Web Author with a third-party CMS, ask the CMS to implement tools that give the LLM keyword-search access to your project content, enabling Retrieval-Augmented Generation (RAG).
  • Consider implementing an LLM server proxy with RAG capabilities.

Q: How can I prevent AI Positron from discussing competitor software?

A: Add instructions to your project context prompt explicitly telling the LLM to avoid discussing competitor products or external sources. This helps reduce hallucinations and IP/copyright risks.
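
For example, the project context prompt could include an instruction such as:

  Base your answers only on the content provided in this project. Do not mention, compare with, or recommend competitor products, and do not cite external websites.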

Best Practices and Recommendations

Always upgrade to the latest AI Positron version

The latest version includes improvements and bug fixes. See https://www.oxygenxml.com/ai_positron/whats_new.html for details.

Create a comprehensive project context prompt

This is the second most important factor after LLM selection. Your project context prompt should include:

  • Project characteristics and goals.
  • Terminology and style guidelines.
  • Examples of correct structures and formats.
  • Instructions to avoid common mistakes.
  • Guidance on wrapping modified content in Markdown codeblocks.
  • Instructions to avoid discussing external sources or competitors.

See Customizing AI Positron for Your DITA XML Project for detailed guidance.

Use AI actions instead of manual Chat prompts

The dedicated AI action buttons (Correct Grammar, Readability, etc.) work better than manual prompts in the Chat window, especially with lower-quality LLMs.

Export chat conversations for support

When reporting issues, export your chat conversation and include it in your report. This helps the support team diagnose the problem.

Provide detailed reproduction steps

When reporting issues, include:

  • A sample topic that reproduces the problem.
  • Step-by-step instructions to reproduce the issue.
  • Screenshots if applicable.
  • Your LLM model and version information.

Consider alternative platforms for better Chat support

If you need robust Chat functionality with RAG support, consider using Oxygen desktop or Oxygen Content Fusion CMS instead of integrations with a third-party CMS. See Practical Steps for Vibe Authoring with AI Positron for more information.

Getting Help

If you encounter issues not covered in this FAQ, export the relevant chat conversation, gather the reproduction details listed above, and contact the support team.