BibGenie Docs

Troubleshooting

Diagnose and resolve common BibGenie errors

While using BibGenie, you may encounter error messages. This page will help you quickly identify the root cause and find a solution.

Error Messages Matter

Error dialogs usually contain detailed information. Please read these messages carefully as they can help you locate the problem faster.

ModelNotSupported

Error Description

A dialog appears with "ModelNotSupported" as the title.

Cause

The selected model does not support the current input type. This typically occurs when you provide an image as input but the selected model cannot process images.

Solution

Switch to a Vision-Enabled Model

Change to a model that supports visual input, such as:

  • GLM-4V
  • GLM-4.5V
  • Other Vision-capable models

Check Custom Model Configuration

If you're certain the selected model supports image input but still encounter this error, verify that your custom model configuration correctly marks the model as a Vision model.
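Conceptually, the custom model entry needs an explicit flag marking it as vision-capable; without it, image input is rejected before the API is even called. The sketch below illustrates the idea only: the field names are hypothetical, not BibGenie's actual configuration schema.

```python
# Hypothetical model configuration entry. Field names are illustrative,
# NOT BibGenie's actual schema; check your version's settings UI.
custom_model = {
    "id": "glm-4v",
    "endpoint": "https://example.com/v1/chat/completions",  # placeholder URL
    "capabilities": ["chat", "vision"],  # "vision" must be present for image input
}

def supports_images(model: dict) -> bool:
    """Return True if the model entry is marked as vision-capable."""
    return "vision" in model.get("capabilities", [])
```

If `supports_images` would return False for your entry, the app has no way to know the model accepts images, regardless of what the upstream API actually supports.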


AI_APICallError

Error Description

A dialog appears with "AI_APICallError" as the title.

Cause

The model API call failed. Possible reasons include: configuration errors, invalid API keys, incorrect endpoints, or request timeouts.

Common Error Messages

When the API key is malformed, invalid, or expired, you may see errors like:

Unauthorized Failure reason: Authentication Fails
(auth header format should be Bearer sk-...)
Model: deepseek-reasoner
Request URL: https://api.deepseek.com/v1/chat/completions

Incorrect API key provided: ''.
You can find your API key at https://platform.openai.com/account/api-keys.

Failure reason: [{"error":{"code":400,"message":"Missing Authorization header.","status":"INVALID_ARGUMENT"}}]
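These messages all point at the Authorization header: OpenAI-compatible APIs expect the literal word `Bearer`, a space, then the raw key. A minimal sketch of building the header correctly (the key value is a placeholder, not a real credential):

```python
def build_auth_headers(api_key: str) -> dict:
    """Build request headers for an OpenAI-compatible chat API.

    The Authorization value must be "Bearer" + a space + the raw key:
    no quotes, no extra whitespace, and the key must not be empty.
    """
    key = api_key.strip()
    if not key:
        raise ValueError("API key is empty -- check your provider settings")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

headers = build_auth_headers("sk-placeholder")  # placeholder key
```

An empty or whitespace-only key produces exactly the `Incorrect API key provided: ''.` class of error shown above, which is why the sketch rejects it up front.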

When the model ID is incorrect or doesn't exist, you may see errors like:

Model Not Exist Failure reason:
{"error":{"message":"Model Not Exist","type":"invalid_request_error","param":null,"code":"invalid_request_error"}}

When the Base URL is misconfigured, you may see errors like:

Base URL Not Found Failure reason:
{"error":{"message":"Base URL Not Found","type":"invalid_request_error","param":null,"code":"invalid_request_error"}}
Model: deepseek-chat
Request URL: https://api.deepseek.com/v/chat/completions

Check URL Format

Notice the URL path error in the example above: /v/ should be /v1/.
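This class of typo is easy to catch mechanically. The sketch below flags a base URL whose path contains the broken `/v/` segment, assuming an OpenAI-style `/v1/` path convention:

```python
from urllib.parse import urlparse

def check_base_url(url: str) -> list[str]:
    """Return a list of likely problems with an OpenAI-style API URL."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        problems.append("URL must start with http:// or https://")
    if "/v/" in parsed.path:
        problems.append("path contains /v/ -- did you mean /v1/?")
    return problems

# The broken URL from the example above:
print(check_base_url("https://api.deepseek.com/v/chat/completions"))
# -> ['path contains /v/ -- did you mean /v1/?']
```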

Solution

Compare the details in the error dialog (Model, Request URL, failure reason) against your model configuration: confirm the API key is valid and entered without extra characters, the model ID exists at your provider, and the Base URL uses the correct path (e.g. /v1/).

NetworkError

Error Description

A dialog appears with "Error" as the title, containing TypeError: NetworkError when attempting to fetch resource.

Cause

Network request failed when connecting to the model API.

Possible Causes

When using local models like Ollama or LM Studio:

  • Model API address is incorrectly configured
  • Local model service is not running
  • Port is occupied or blocked by firewall

Troubleshooting Steps:

  1. Confirm the local model service is running
  2. Test the model API address by visiting it in your browser
  3. Consult the official documentation for Ollama or LM Studio
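Steps 1 and 2 can be scripted. This sketch probes an endpoint and reports whether anything is listening; 11434 and 1234 are the default Ollama and LM Studio ports (adjust if you changed them):

```python
import urllib.request
import urllib.error

def service_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at url, False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered (even with an error status), so it is running.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, or DNS failure: service not reachable.
        return False

# Default local endpoints (assumptions -- adjust to your setup):
# service_reachable("http://127.0.0.1:11434")           # Ollama
# service_reachable("http://127.0.0.1:1234/v1/models")  # LM Studio
```

A `False` result here corresponds to the NetworkError dialog: the request never reached a running service.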

When using built-in models (e.g., OpenAI, Claude):

  • Network environment cannot access the API service
  • Proxy configuration is incorrect

Troubleshooting Steps:

  1. Check if your network proxy settings are correct
  2. Try accessing the API endpoint directly in your browser
  3. Some services may require proxy configuration in certain regions
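For step 1, note that many HTTP clients, including Python's urllib, pick up proxy settings from environment variables. A quick way to see which proxy (if any) your requests would actually use:

```python
import urllib.request

def effective_proxies() -> dict:
    """Return the proxy mapping the HTTP client would use.

    urllib.request.getproxies() reads HTTP_PROXY / HTTPS_PROXY etc.
    from the environment (or system settings on some platforms), so it
    reflects what outgoing requests will actually go through.
    """
    return urllib.request.getproxies()
```

An empty mapping means no proxy is configured; a stale or wrong entry here is a common cause of the NetworkError above.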

Need More Help?

If the solutions above don't resolve your issue, feel free to join our Discord community for support.