write-passive-swarm-client
Introduction:
Your task is to connect to an external agent application and enable dynamic model tuning under the AgentJet reinforcement learning framework. The process is straightforward:
User --> Application interface (web, TUI, GUI) --> Application backend --> Fake vLLM (fake_vllm_endpoint.py, which you will write) --> In the fake vLLM, duplicate each request into multiple copies (on_user_submit_new_requests) --> Compute relative rewards (on_compute_relative_reward) --> Submit rewards (swarm_client.end_episode) --> Select the response with the highest reward --> Return it to the user via the original path
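The pipeline above can be sketched as a small Python module. The hook names (`on_user_submit_new_requests`, `on_compute_relative_reward`, `swarm_client.end_episode`) come from the pipeline itself, but their signatures, the `generate`/`score` callables, and the duplication count are assumptions made for illustration, not the real AgentJet API:

```python
# Hypothetical sketch of the fake-vLLM duplication/reward flow.
# Signatures and helpers are assumptions; only the hook names come
# from the pipeline description.
from dataclasses import dataclass
from typing import Callable, List

N_DUPLICATES = 4  # how many rollouts to sample per user request (assumed)


@dataclass
class Rollout:
    request: str
    response: str
    reward: float = 0.0


class FakeSwarmClient:
    """Stand-in for the AgentJet swarm client (assumed interface)."""

    def end_episode(self, rollout: Rollout) -> None:
        # A real client would submit the reward to the training server here.
        pass


def on_user_submit_new_requests(
    request: str, generate: Callable[[str], str]
) -> List[Rollout]:
    # Duplicate the incoming request and sample one response per copy.
    return [Rollout(request, generate(request)) for _ in range(N_DUPLICATES)]


def on_compute_relative_reward(
    rollouts: List[Rollout], score: Callable[[str], float]
) -> None:
    # Reward each rollout relative to the group mean (a group-relative baseline).
    raw = [score(r.response) for r in rollouts]
    mean = sum(raw) / len(raw)
    for rollout, s in zip(rollouts, raw):
        rollout.reward = s - mean


def handle_request(request, generate, score, client: FakeSwarmClient) -> str:
    rollouts = on_user_submit_new_requests(request, generate)
    on_compute_relative_reward(rollouts, score)
    for rollout in rollouts:
        client.end_episode(rollout)  # submit each episode's reward
    best = max(rollouts, key=lambda r: r.reward)
    return best.response  # returned to the user via the original path
```

In a real endpoint, `generate` would proxy each duplicated request to the actual model backend, and only the highest-reward response would travel back up through the application.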
First, give the agent system a name based on the user's requirements, and always place your code under `tutorial/opencode_build_***` (for example, `tutorial/opencode_build_openclaw_agent`).
Next, create the directory:
tutorial/opencode_build_openclaw_agent
Then, create the Agent source files: