ai-model-nodejs

Summary

AI text and image generation for Node.js backends and CloudBase cloud functions.

  • Supports text generation (non-streaming and streaming) via Hunyuan and DeepSeek models, with recommended models hunyuan-2.0-instruct-20251111 and deepseek-v3.2
  • Image generation exclusive to Node.js SDK using Hunyuan Image model with configurable size, style, negative prompts, and seed control
  • Requires @cloudbase/node-sdk version 3.16.0 or higher; set cloud function timeouts to 60–120 seconds for text operations and up to 900 seconds for image generation
  • Token usage tracking and full message history available for all operations; streaming supports both incremental text chunks and full response data iteration

Standalone Install Note

If only this skill is installed in the environment, start from the CloudBase main entry and use the published cloudbase/references/... paths for sibling skills.

  • CloudBase main entry: https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/SKILL.md
  • Current skill raw source: https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/references/ai-model-nodejs/SKILL.md

Keep local references/... paths for files that ship with the current skill directory. When this file points to a sibling skill such as auth-tool or web-development, use the standalone fallback URL shown next to that reference.

When to use this skill

Use this skill for calling AI models from Node.js backends, cloud functions, or CloudRun services via @cloudbase/node-sdk.

🧭 Runtime-plane fit. This is the right skill when the AI call truly belongs on the server: image generation (the only SDK that supports it), long-running agent jobs, orchestration across multiple tools, scheduled tasks, or flows that must keep secrets server-side. If the user is building a Web page / frontend AI chat UI, do NOT wrap this SDK behind a backend proxy — route to ai-model-web and call the model directly from the browser. For WeChat Mini Programs use ai-model-wechat. Routing is decided by runtime plane first; the concrete model (deepseek-*, glm-*, hunyuan-*, kimi-*, …) only affects the model field.

Use it when you need to:

  • Generate text (streaming or non-streaming) from a backend, cloud function, or CloudRun service
  • Generate images — the Node.js SDK is the only one that supports image generation
  • Run long-lived or scheduled AI jobs, orchestrate multiple tools, or keep model credentials server-side
