
feature: add router mode

Dew-OF-Aurora, 1 week ago
parent commit 43c0c602db

+ 3 - 1
.gitignore

@@ -1,2 +1,4 @@
 runtime/
-.claude/
+cfip_runtime/
+.claude/
+cfst_darwin_arm64/

+ 166 - 111
CLAUDE.md

@@ -1,40 +1,76 @@
 # CLAUDE.md

-This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+This file provides guidance to Claude Code when working with this repository.

 ## Project Overview

-VMess domain rotator: fetch candidate domains from an API, select a preferred domain, write runtime artifacts, and optionally auto-commit/push runtime changes to a dedicated branch (`runtime-state`).
+The repository supports two independent operating modes:
+
+- Server mode
+  - Source: remote API
+  - Main config: `config.server.json`
+  - Main output: `runtime/`
+  - Optional git automation: yes, via `runtime-state`
+- Router mode
+  - Source: local `cfst`
+  - Python config: `config.router.json`
+  - BusyBox shell config: `router_local.conf`
+  - Main output: `cfip_runtime/`
+  - Optional git automation: no
+
+The old `config.json` and `config.example.json` are deprecated and should not be reintroduced.
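The mode split hinges on `source.type` in the corresponding config. A minimal sketch of the distinguishing key (surrounding blocks omitted; the shipped configs carry many more settings):

```json
{ "source": { "type": "api" } }
```

`config.router.json` carries `{ "source": { "type": "cfst_local" } }` instead.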

 ## Common Commands

 ### Validate scripts
+
 ```bash
-python3 -m py_compile scripts/domain_updater.py
-python3 -m py_compile scripts/update_vmess_links.py
+env PYTHONPYCACHEPREFIX=/tmp/pycache python3 -m py_compile scripts/domain_updater.py
+env PYTHONPYCACHEPREFIX=/tmp/pycache python3 -m py_compile scripts/update_vmess_links.py
 bash -n scripts/run_update_and_commit.sh
 bash -n scripts/install_debian.sh
 bash -n scripts/uninstall_debian.sh
+sh -n scripts/router_local_update.sh
+sh -n scripts/router_local_http.sh
+```
+
+### Run server mode once
+
+```bash
+python3 scripts/domain_updater.py --config config.server.json
+```
+
+### Run local cfst mode once
+
+```bash
+python3 scripts/domain_updater.py --config config.router.json
 ```

-### Run domain selection once
+### Print resolved output paths
+
 ```bash
-python3 scripts/domain_updater.py --config config.json
+python3 scripts/domain_updater.py --config config.server.json --print-output-settings
+python3 scripts/domain_updater.py --config config.router.json --print-output-settings
 ```

-### Run scheduler entrypoint (updater + runtime-state commit/push)
+### Run server scheduler entrypoint
+
 ```bash
-bash scripts/run_update_and_commit.sh config.json
+bash scripts/run_update_and_commit.sh config.server.json
 ```

-### Force commit to runtime-state once (manual)
+### Force commit to runtime-state once
+
 ```bash
-bash scripts/run_update_and_commit.sh --force-commit config.json
+bash scripts/run_update_and_commit.sh --force-commit config.server.json
 # or
-GIT_FORCE_COMMIT=1 bash scripts/run_update_and_commit.sh config.json
+GIT_FORCE_COMMIT=1 bash scripts/run_update_and_commit.sh config.server.json
 ```

-### Update VMess links from selected domain
+### Update VMess links from selected value
+
+Server mode:
+
 ```bash
 python3 scripts/update_vmess_links.py \
   --input ./nodes.txt \
@@ -42,121 +78,140 @@ python3 scripts/update_vmess_links.py \
   --domain-file ./runtime/current_domain.txt
 ```

-### Update only matching node names
+Router mode:
+
 ```bash
 python3 scripts/update_vmess_links.py \
   --input ./nodes.txt \
   --output ./nodes.updated.txt \
-  --domain-file ./runtime/current_domain.txt \
-  --name-regex "(argo|cf|vm)"
+  --domain-file ./cfip_runtime/current_ip.txt
 ```

-### Local runtime smoke test
+### BusyBox router mode
+
 ```bash
-python3 -m http.server 8080 --directory runtime
-curl http://127.0.0.1:8080/current_domain.json
+sh scripts/router_local_update.sh ./router_local.conf
+sh scripts/router_local_http.sh ./router_local.conf
 ```

 ### Debian systemd install/uninstall
-```bash
-# Recommended default install (1h timer, push enabled, credential-helper mode)
-sudo bash scripts/install_debian.sh
-
-# Useful variants
-sudo bash scripts/install_debian.sh --interval 5min
-sudo bash scripts/install_debian.sh --git-push 0
-sudo bash scripts/install_debian.sh --git-push-remote origin
-sudo bash scripts/install_debian.sh --git-http-username aurora --git-http-token-file /root/.config/vmess-token --git-use-credential-store 1
+```bash
+sudo bash scripts/install_debian.sh --config config.server.json
+sudo bash scripts/install_debian.sh --config config.server.json --interval 5min
+sudo bash scripts/install_debian.sh --config config.server.json --git-push 0
 sudo bash scripts/uninstall_debian.sh
 sudo bash scripts/uninstall_debian.sh --keep-auth-files
 ```

-## Testing and verification status
+## Testing and Verification
-- There is currently no dedicated `tests/` directory or unit test suite.
+- There is no dedicated `tests/` directory yet.
 - Primary verification is syntax checks plus manual script runs.
+- `--print-output-settings` is the quickest way to verify output path resolution without performing a full update.
+
+## Architecture
+
+### 1) Unified Python updater
+
+`scripts/domain_updater.py` is the shared core:
+
+- Supports `source.type = "api"` and `source.type = "cfst_local"`
+- Resolves output file paths from config instead of hardcoded runtime paths
+- Writes four runtime artifacts:
+  - selected value text file
+  - selected value JSON file
+  - state file
+  - export vars file
+- Supports fallback to the last good value from the configured state file
+
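The artifact names come from the `output` block of the active config. A sketch based on the server-mode defaults (key names taken from the now-deprecated `config.example.json`, so treat them as indicative rather than authoritative):

```json
"output": {
  "runtime_dir": "./runtime",
  "current_domain_file": "current_domain.txt",
  "current_domain_json": "current_domain.json",
  "state_file": "state.json",
  "substore_vars_file": "substore_vars.json"
}
```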
+### 2) Server mode
+
+`config.server.json` defines:
+
+- API request settings
+- parser / record mapping / record filter
+- scoring and healthcheck behavior
+- output paths under `runtime/`
+
+`scripts/run_update_and_commit.sh`:
+
+- Resolves output paths by calling `domain_updater.py --print-output-settings`
+- Runs the updater
+- Compares the selected value with `runtime-state`
+- Syncs configured repo-local output files into the target worktree
+- Commits and optionally pushes
+
+### 3) Router mode
+
+There are two router-side entry styles:
+
+- Python local mode via `config.router.json`
+  - still uses `domain_updater.py`
+  - executes local `cfst`
+  - parses CSV and writes `cfip_runtime/`
+- BusyBox shell mode via `router_local.conf`
+  - `scripts/router_local_update.sh`
+  - `scripts/router_local_http.sh`
+  - intended for routers without Python
+
+### 4) Sub-Store consumer
-## Architecture (big picture)
-
-### 1) Domain selection pipeline
-`scripts/domain_updater.py` is the core pipeline:
-- Calls the configured API (`api` block in `config.json`).
-- Extracts candidates via `parser.field_paths`, `parser.json_paths`, or regex fallback.
-- Normalizes and de-duplicates domains/IPs.
-- Applies include/exclude filtering (`domain_filter`).
-- Resolves records via `record_mapping.records_path` and only accesses whitelisted fields from `record_mapping.field_map`.
-- Applies record-level exclusion rules from `record_filter` (API-specific strategy via config).
-- Optionally ranks records (`scoring`) with configurable `weighted_average` or `lexicographic` strategy.
-- Optionally healthchecks candidates with TLS handshake (`healthcheck`).
-- Selects winner from scored/check results (`selection.top_n`).
-- Writes runtime artifacts under `runtime/`.
-- Resolves relative `output.runtime_dir` against the config file directory.
-- Falls back to `last_good_domain` from `runtime/state.json` when selection fails.
-
-### 2) Runtime-state git automation
-`scripts/run_update_and_commit.sh` wraps the updater and git workflow:
-- Resolves git top-level robustly and skips git actions if not inside a git repo.
-- Requires non-empty `runtime/current_domain.txt` after updater run.
-- Compares selected domain with `runtime-state` HEAD (`runtime/current_domain.txt`) and skips commit/push when unchanged (default behavior).
-- Copies runtime outputs into `runtime-state` (same branch or temporary worktree).
-- Commits only runtime outputs (`runtime/current_domain.txt`, `runtime/current_domain.json`, `runtime/state.json`, `runtime/substore_vars.json`) when domain changes.
-- Supports manual force commit via `--force-commit` or `GIT_FORCE_COMMIT=1`; when no content changes it creates an empty commit with a `manual:` commit message that includes selected domain and UTC update time.
-- Supports non-interactive push auth in two modes:
-  - credential helper mode (`GIT_CREDENTIAL_HELPER`, e.g. `store`)
-  - header mode (`GIT_HTTP_USERNAME` + `GIT_HTTP_TOKEN`/`GIT_HTTP_TOKEN_FILE`)
-- `GIT_PUSH_REQUIRED` defaults to `GIT_PUSH_ENABLED`; when enabled, push failure returns non-zero for systemd visibility.
-- Disables interactive git prompts via `GIT_TERMINAL_PROMPT=0`.
-
-### 3) VMess subscription post-processing
-`scripts/update_vmess_links.py`:
-- Reads subscription lines (plain text or full base64 subscription).
-- Decodes each `vmess://` payload JSON.
-- Replaces `add` field with selected domain.
-- Optionally filters by node name regex (`ps` field).
-- Re-encodes output and prints JSON summary stats.
-
-### 4) Sub-Store runtime consumer
 `substore/operator_template.js`:
-- Fetches `runtime/current_domain.json` over HTTP.
-- Caches domain in `scriptResourceCache` (default TTL 5 min).
-- Rewrites VMess `server` for matched node names.
-
-## Config model (`config.json`)
-
-Key top-level blocks:
-- `api`: endpoint/method/headers/params/body/timeout
-- `parser`: domain extraction paths/regex
-- `record_mapping`: required whitelist registry (`records_path`, `field_map`, created time parse settings)
-- `record_filter`: optional record-level exclusion rules (fields must come from `record_mapping.field_map`)
-- `domain_filter`: include suffixes / exclude regex
-- `scoring`: record ranking strategy (`weighted_average` / `lexicographic`) + `within_hours` + tie-breakers
-- `healthcheck`: TLS probe settings
-- `selection`: candidate cut (`top_n`)
-- `output`: runtime file names/paths
-- `v2ray`: optional token replacement rendering
-- `notify`: optional post-run command (`AUTODOMAIN`, `AUTODOMAIN_STATUS` env vars)
-
-Validation behavior:
-- `record_mapping` is mandatory.
-- `record_mapping.field_map.domain` and `record_mapping.field_map.created_at` are mandatory.
-- Any field referenced by `record_filter` / `scoring.weighted_fields` / `scoring.lexicographic_fields` / `scoring.tie_breakers` must be pre-registered in `field_map`.
-- Unregistered field references fail fast (updater exits with config error).
-
-## Runtime artifacts
-
-Generated in `runtime/`:
-- `current_domain.txt`: selected domain (plain text)
-- `current_domain.json`: selected domain + status + metadata
-- `state.json`: persistent state, including `last_good_domain`
-- `substore_vars.json`: export-friendly variables
-
-## Operational behavior that matters
-
-- Fallback behavior is stateful: `last_good_domain` persistence is critical for resilience.
-- `runtime-state` branch is intended to isolate frequently changing runtime outputs from `main` source history.
-- `main` normally ignores `runtime/` outputs via `.gitignore`; runtime artifacts are intended to be tracked on `runtime-state`.
-- Debian installer (`scripts/install_debian.sh`) writes `/etc/vmess-domain-rotator.env`, configures oneshot service+timer, and sets push-required behavior when push is enabled.
-- Repository `systemd/*` static templates are intentionally not maintained; install script dynamically generates unit files under `/etc/systemd/system/`.
-- Default install path is in-place (current git clone), default service user is `SUDO_USER`, and default auth mode is credential helper store.
-- Uninstaller removes systemd units and, by default, removes service auth/env files unless `--keep-auth-files` is set.
+
+- Fetches a JSON runtime endpoint
+- Accepts either `domain` or `ip`
+- Rewrites VMess `server` for matched nodes
+- Uses `scriptResourceCache` with a short TTL
+
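The consumer logic above can be sketched as follows (in Python for illustration only; the real implementation is the JavaScript in `substore/operator_template.js`, and the function and field names here are assumptions):

```python
import re

def pick_value(runtime):
    # The runtime JSON may expose either `domain` (server mode) or `ip` (router mode).
    return runtime.get("domain") or runtime.get("ip")

def rewrite_servers(nodes, runtime, name_pattern):
    # Rewrite the VMess `server` field only for nodes whose name matches.
    value = pick_value(runtime)
    if not value:
        return nodes
    return [
        {**n, "server": value} if re.search(name_pattern, n.get("name", "")) else n
        for n in nodes
    ]
```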
+## Configuration Model
+
+### `config.server.json`
+
+Main blocks:
+
+- `source`
+- `api`
+- `parser`
+- `record_mapping`
+- `record_filter`
+- `domain_filter`
+- `scoring`
+- `healthcheck`
+- `selection`
+- `output`
+- `v2ray`
+- `notify`
+
+### `config.router.json`
+
+Main blocks:
+
+- `source`
+- `cfst_local`
+- `domain_filter`
+- `healthcheck`
+- `selection`
+- `output`
+- `v2ray`
+- `notify`
+
+### `router_local.conf`
+
+Main groups:
+
+- `CFST_*`
+- `TOP_N`
+- `RUNTIME_DIR`
+- `VALUE_*`
+- `STATE_*`
+- `EXPORT_*`
+- `HTTP_PORT`
+
+## Operational Notes
+
+- Server mode is the only mode intended to update `runtime-state`.
+- Router mode does not use git automation by default.
+- `runtime/` and `cfip_runtime/` are ignored on `main`; runtime artifacts are meant to be ephemeral locally.
+- Persistent `state.json` matters for fallback behavior in both modes.
+- Avoid reintroducing hardcoded assumptions about `runtime/current_domain.txt`; use config-driven paths instead.

+ 366 - 247
README.md

@@ -1,373 +1,463 @@
 # vmess-domain-rotator

-A tool for **automatically fetching a preferred domain**, writing runtime files to `runtime/`, and committing runtime changes to the `runtime-state` branch on a schedule.
+A toolset that selects a preferred target and writes runtime files. Two independent modes are currently supported:

----
+- Cloud server mode: calls a remote API, selects a preferred domain, writes `runtime/`, and can auto-commit to `runtime-state`
+- Local router mode: runs a local `cfst`, selects a preferred IP, writes `cfip_runtime/`, and can expose the result to the LAN via BusyBox `nc`

-## 1. Feature Overview
+The two modes are fully separated:

-This project performs the following steps:
+- Server mode uses [`config.server.json`](./config.server.json)
+- Local `cfst` mode uses [`config.router.json`](./config.router.json)
+- The BusyBox router scripts use [`router_local.conf`](./router_local.conf)

-1. Fetch candidate domains/IPs from the API
-2. Parse and filter the candidates
-3. Score and select the final domain
-4. Write the runtime files to `runtime/`
-5. (Optional) auto-commit and push runtime changes to `runtime-state`
+The old `config.json` / `config.example.json` are deprecated and no longer used.
+
+## 1. Layout and Entry Points

 Core scripts:

-- `scripts/domain_updater.py`: fetches, selects, and writes the domain files
-- `scripts/run_update_and_commit.sh`: runs the updater plus git commit/push
-- `scripts/install_debian.sh`: installs the systemd service + timer on Debian
-- `scripts/uninstall_debian.sh`: removes the systemd units and auth env files
+- `scripts/domain_updater.py`
+  Unified Python entry point; supports both `api` and `cfst_local` as `source.type`
+- `scripts/run_update_and_commit.sh`
+  Server-mode entry point; runs the updater and syncs the configured runtime files to `runtime-state`
+- `scripts/install_debian.sh`
+  One-shot installer for the systemd service + timer on Debian/Ubuntu
+- `scripts/uninstall_debian.sh`
+  Uninstalls the systemd service + timer
+- `scripts/router_local_update.sh`
+  BusyBox `sh` router entry point; runs `cfst` and writes `cfip_runtime`
+- `scripts/router_local_http.sh`
+  BusyBox `sh` HTTP entry point; serves TXT/JSON over `nc`
+- `scripts/update_vmess_links.py`
+  Optional tool that batch-replaces `vmess://` nodes with the value from the runtime files

----
+Config files:

-## 2. Runtime Artifacts
+- `config.server.json`
+  Server-mode config; default outputs:
+  - `runtime/current_domain.txt`
+  - `runtime/current_domain.json`
+  - `runtime/state.json`
+  - `runtime/substore_vars.json`
+- `config.router.json`
+  Local `cfst` mode config; default outputs:
+  - `cfip_runtime/current_ip.txt`
+  - `cfip_runtime/current_ip.json`
+  - `cfip_runtime/state.json`
+  - `cfip_runtime/substore_vars.json`
+- `router_local.conf`
+  Config for the BusyBox router scripts `router_local_update.sh` / `router_local_http.sh`

-Written to `runtime/` in the repository root by default:
+## 2. Shared Design

-- `runtime/current_domain.txt`: current domain (plain text)
-- `runtime/current_domain.json`: current result as JSON
-- `runtime/state.json`: state file (contains `last_good_domain`)
-- `runtime/substore_vars.json`: variables for external consumers
+Both modes share the same output abstraction:

-> Note: `domain_updater.py` now resolves `output.runtime_dir` (default `./runtime`) against the directory of the `--config` file, avoiding accidental writes to `scripts/runtime/`.
->
-> Note: the `main` branch usually does not track `runtime/` (ignored via `.gitignore`); runtime artifacts are meant to be consumed via the `runtime-state` branch.
+- value text file: the currently selected value
+- JSON file: the current result
+- state file: the last good value and status
+- export vars file: variables for external consumers

----
+Key points:

-## 3. Local Quick Start
+- Output file names and directories come from config; `runtime/current_domain.txt` is no longer hardcoded in the shell scripts
+- `run_update_and_commit.sh` first resolves the output paths from config, then decides which files to sync
+- Only server mode integrates git commit/push by default; router mode performs no git operations
+- `domain_updater.py` resolves relative paths against the directory of the `--config` file

-### 3.1 Configuration
+## 3. Choosing a Mode

-Edit `config.json` (copy `config.example.json` first and adjust it for your API).
+### 3.1 Cloud Server Mode

-Typical parser paths (e.g. an API that returns `data.good[].ip`):
+Use cases:

-```json
-"parser": {
-  "field_paths": ["data.good[].ip"],
-  "json_paths": [],
-  "regex": "[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
-}
-```
+- `python3` is available
+- The machine can reach the target API
+- Scheduled runs via systemd are needed
+- Auto-commit to `runtime-state` is needed

-### 3.2 Config Blocks (API-agnostic)
-
-You can reuse the same scripts for different APIs by adjusting only the config:
-
-- `api`: request URL, method, headers, query params, timeout
-- `parser`: how to extract candidate domains from the response JSON
-- `record_mapping`: **required whitelist field registry** (later filtering/scoring may only reference logical fields registered here)
-- `record_filter`: record-level exclusion rules (API-specific strategies belong here)
-- `domain_filter`: include/exclude by domain string
-- `scoring`: how to rank by configured fields (supports `weighted_average` / `lexicographic`)
-- `healthcheck`: optional TLS probe
-- `selection`: candidate cutoff count
-- `output`: runtime file output paths and names
-- `v2ray`: template replacement output (optional)
-- `notify`: post-run command hook (optional)
-
-Minimal required `record_mapping` example (`domain` and `created_at` must be registered):
-
-```json
-"record_mapping": {
-  "records_path": "data.good[]",
-  "field_map": {
-    "domain": "ip",
-    "created_at": "createdTime",
-    "avg_score": "avgScore"
-  },
-  "created_time_formats": ["%Y-%m-%d %H:%M:%S"],
-  "created_time_timezone": "UTC"
-}
-```
+Entry points:

-`record_filter` example (only one rule kept for brevity):
+- Manual run: `python3 scripts/domain_updater.py --config config.server.json`
+- Scheduled run: `bash scripts/run_update_and_commit.sh config.server.json`
+- Debian install: `sudo bash scripts/install_debian.sh --config config.server.json`

-```json
-"record_filter": {
-  "enabled": false,
-  "exclude_if_any": [
-    { "field": "domain", "regex": "(test|staging)", "case_sensitive": false }
-  ]
-}
-```
+### 3.2 Local `cfst` Mode

-`scoring` example (minimal weighted_average; the current score semantics are "lower is better"):
-
-```json
-"scoring": {
-  "enabled": true,
-  "strategy": "weighted_average",
-  "weighted_fields": [
-    { "field": "avg_score", "weight": 1.0 }
-  ],
-  "prefer_lower": true,
-  "within_hours": 24,
-  "tie_breakers": [
-    { "field": "domain", "order": "asc" }
-  ]
-}
-```
+Use cases:

-`healthcheck` example (current default attempts=5):
+- A runnable `cfst` binary already exists on this machine (Mac or Linux)
+- You want `domain_updater.py` to invoke the local `cfst` directly
+- The BusyBox-specific scripts are not needed

-```json
-"healthcheck": {
-  "enabled": false,
-  "attempts": 5,
-  "timeout_ms": 1800,
-  "port": 443,
-  "tls_verify": true
-}
-```
+Entry point:
+
+- `python3 scripts/domain_updater.py --config config.router.json`
+
+### 3.3 BusyBox Router Mode
+
+Use cases:
+
+- The router has no Python
+- Only BusyBox `sh` / `awk` / `sed` / `nc` are available
+- `cfst` runs directly on the router
+- The result must be exposed over HTTP on the LAN
+
+Entry points:
+
+- Update the result: `sh scripts/router_local_update.sh ./router_local.conf`
+- Expose HTTP: `sh scripts/router_local_http.sh ./router_local.conf`
+
+## 4. Cloud Server Mode Deployment

+### 4.1 Prerequisites

-Supported rules:
-- `contains`
-- `equals`
-- `regex`
+- Debian/Ubuntu
+- Run from the repository directory after `git clone`
+- A remote such as `origin` is configured
+- The service user has read/write access to the repository
+- The machine can reach the API configured in `config.server.json`
+
+### 4.2 Core Config File
+
+Server mode always uses [`config.server.json`](./config.server.json).
+
+Its current default behavior:
+
+- `source.type = "api"`
+- Fetches candidates from `api.url`
+- Selects the target via `parser`, `record_mapping`, `record_filter`, and `scoring`
+- Writes the outputs to `runtime/`
+
+You usually only need to change:
+
+- `api.url`
+- `api.headers`
+- `parser`
+- `record_mapping`
+- `record_filter`
+- `scoring`
+- `healthcheck`

-> Note: referencing a field not registered in `record_mapping.field_map` from `record_filter` / `scoring` fails fast; the main service flow, log output, and runtime file writing stay unchanged.
+### 4.3 Manual Local Run

-### 3.3 Syntax Checks
+Run the syntax checks first:

 ```bash
-python3 -m py_compile scripts/domain_updater.py
-python3 -m py_compile scripts/update_vmess_links.py
+env PYTHONPYCACHEPREFIX=/tmp/pycache python3 -m py_compile scripts/domain_updater.py
 ```

-### 3.4 Run Once
+Run it once:

 ```bash
-python3 scripts/domain_updater.py --config config.json
+python3 scripts/domain_updater.py --config config.server.json
 ```

-### 3.5 View Results Locally
+View the results:

 ```bash
 cat runtime/current_domain.txt
 cat runtime/current_domain.json
+cat runtime/state.json
+cat runtime/substore_vars.json
 ```

-Optional HTTP check:
+If you only want the output paths the script resolves:

 ```bash
-python3 -m http.server 8080 --directory runtime
-curl http://127.0.0.1:8080/current_domain.json
+python3 scripts/domain_updater.py --config config.server.json --print-output-settings
 ```

----
+### 4.4 Auto-commit to runtime-state

-## 4. Debian One-Shot Deployment (Full Guide)
+Server-mode commit script:

-## 4.1 Prerequisites
+```bash
+bash scripts/run_update_and_commit.sh config.server.json
+```

-- Debian/Ubuntu
-- The project is `git clone`d onto the server (run the installer from the repo directory)
-- The `origin` remote works
-- The service user has read/write access to the repository
+It performs the following:

----
+1. Runs `domain_updater.py`
+2. Resolves the runtime file paths from config
+3. Compares the newly selected value with the last one recorded on the `runtime-state` branch
+4. Skips commit/push when the value is unchanged
+5. When the value changed, syncs the configured output files and commits

-### 4.2 Recommended Setup (no arguments)
+Force commit:
+
+```bash
+bash scripts/run_update_and_commit.sh --force-commit config.server.json
+```

-If you follow the "save git credentials manually on each machine first" flow, just run:
+

 ```bash
-sudo bash scripts/install_debian.sh
+GIT_FORCE_COMMIT=1 bash scripts/run_update_and_commit.sh config.server.json
 ```

-Default behavior:
+### 4.5 Debian One-Shot Install

-- Service user: the user who invoked `sudo` (`SUDO_USER`)
-- Interval: `1h`
-- Auto push: enabled (`--git-push 1`)
-- Auth mode: `credential.helper store`
-- Push branch: `runtime-state`
+Recommended command:

----
+```bash
+sudo bash scripts/install_debian.sh --config config.server.json
+```

-### 4.3 Non-interactive Git Credential Setup
+Default behavior:

-#### Option A: store credentials manually (the currently used flow)
+- Uses the pre-`sudo` user as the service user
+- `1h` timer interval
+- Auto push enabled
+- Target branch `runtime-state`

-Run this once as the **service user**:
+Common options:

 ```bash
-git config --global credential.helper store
-git push origin runtime-state:runtime-state
+sudo bash scripts/install_debian.sh \
+  --config config.server.json \
+  --interval 10min \
+  --git-push 1 \
+  --git-push-remote origin
 ```

-After entering the username/password (or PAT) once, later systemd runs can push without prompts.
-
-> Key point: the user who saved the credentials must match the service's `User=`.
-#### Option B: pass a token to the installer (optional)
+If you use a token:

 ```bash
 sudo bash scripts/install_debian.sh \
+  --config config.server.json \
   --git-http-username <your-user> \
   --git-http-token-file /root/.config/vmess-token \
   --git-use-credential-store 1
 ```

----
-
-### 4.4 Post-install Verification
+### 4.6 Post-install Verification

 ```bash
 sudo systemctl status vmess-domain-rotator.timer
 sudo systemctl status vmess-domain-rotator.service
-
-# trigger once manually
 sudo systemctl start vmess-domain-rotator.service
-
-# view the logs
 sudo journalctl -u vmess-domain-rotator.service -n 120 --no-pager
 ```

-On success you should see:
+On success you should usually see:

 - the updater's JSON output
-- `committed runtime changes on runtime-state ...`
+- `committed output changes on runtime-state`
 - `pushed to origin/runtime-state`

----
+### 4.7 Uninstall
+
+```bash
+sudo bash scripts/uninstall_debian.sh
+```

-## 5. install_debian.sh Options
+Keep the auth files:

 ```bash
-sudo bash scripts/install_debian.sh [options]
+sudo bash scripts/uninstall_debian.sh --keep-auth-files
 ```

-| Option | Description | Default |
-|---|---|---|
-| `--user <name>` | service user | current `sudo` user |
-| `--group <name>` | service group | primary group of the `sudo` user |
-| `--interval <value>` | timer interval (e.g. `1h`/`5min`) | `1h` |
-| `--git-push <0\|1>` | auto push | `1` |
-| `--git-push-remote <name>` | remote name | `origin` |
-| `--git-http-username <u>` | HTTPS username | `git` |
-| `--git-http-token <t>` | HTTPS token (plain-text argument) | empty |
-| `--git-http-token-file <f>` | read the token from a file | empty |
-| `--git-use-credential-store <0\|1>` | use `credential.helper store` | `1` |
-| `--git-credentials-file <f>` | credential store file path | empty (Git default) |
-| `--no-install-deps` | skip installing apt dependencies | off |
-| `-h, --help` | show help | - |
-
-Notes:
-
-- With `--git-push 1`, a failed push returns non-zero and the systemd job is marked failed (useful for monitoring).
-- The installer writes the environment file `/etc/vmess-domain-rotator.env`.
-
-### 5.1 Generated systemd Units (example)
-
-`install_debian.sh` dynamically writes `/etc/systemd/system/vmess-domain-rotator.service` and `/etc/systemd/system/vmess-domain-rotator.timer` from the install options. The repo no longer maintains static `systemd/*` templates.
-
-service example (the installed content is substituted with your options):
-
-```ini
-[Unit]
-Description=VMess Domain Rotator updater
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=oneshot
-User=<RUN_USER>
-Group=<RUN_GROUP>
-WorkingDirectory=<APP_DIR>
-EnvironmentFile=-/etc/vmess-domain-rotator.env
-UMask=0077
-ExecStart=/bin/bash <APP_DIR>/scripts/run_update_and_commit.sh <APP_DIR>/config.json
+## 5. Local `cfst` Mode Deployment
+
+### 5.1 Environment
+
+- `python3` is available
+- A runnable `cfst` binary exists
+- No BusyBox dependency
+- You want to keep reusing the unified output logic of `domain_updater.py`
+
+### 5.2 Core Config File
+
+This mode uses [`config.router.json`](./config.router.json).
+
+Its current defaults point to:
+
+- `./cfst_darwin_arm64/cfst`
+- result file `./cfst_darwin_arm64/result.csv`
+- output directory `./cfip_runtime`
+
+You usually only need to change:
+
+- `cfst_local.work_dir`
+- `cfst_local.binary`
+- `cfst_local.run_args`
+- `cfst_local.result_file`
+- `cfst_local.columns`
+- `output.*`
+
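A sketch of what the `cfst_local` block might look like, using the defaults mentioned above (the exact key layout should be checked against the shipped `config.router.json`):

```json
"cfst_local": {
  "work_dir": "./cfst_darwin_arm64",
  "binary": "./cfst",
  "run_args": ["-f", "ip.txt", "-o", "result.csv"],
  "result_file": "result.csv",
  "skip_run": false
}
```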
+### 5.3 Manual Run
+
+```bash
+python3 scripts/domain_updater.py --config config.router.json
 ```

-timer example (`OnUnitActiveSec` is derived from `--interval`):
+View the results:

-```ini
-[Unit]
-Description=Run VMess Domain Rotator every <INTERVAL>
+```bash
+cat cfip_runtime/current_ip.txt
+cat cfip_runtime/current_ip.json
+cat cfip_runtime/state.json
+cat cfip_runtime/substore_vars.json
+```

-[Timer]
-OnBootSec=2min
-OnUnitActiveSec=<INTERVAL>
-AccuracySec=30s
-Unit=vmess-domain-rotator.service
-Persistent=true
+View the resolved output paths:

-[Install]
-WantedBy=timers.target
+```bash
+python3 scripts/domain_updater.py --config config.router.json --print-output-settings
 ```

----
+### 5.4 Config Reference
+
+Key `cfst_local` settings:

-## 6. Auto-commit/push Logic
+- `work_dir`
+  working directory for `cfst`
+- `binary`
+  path to the `cfst` executable
+- `run_args`
+  argument array, e.g. `-f ip.txt -o result.csv -p 10`
+- `result_file`
+  the `cfst` result file
+- `skip_run`
+  when `true`, skip running `cfst` and only parse the existing result file
+- `columns`
+  CSV column mapping, by default:
+  - `0` IP
+  - `1` sent
+  - `2` received
+  - `3` loss rate
+  - `4` average latency
+  - `5` download speed
+  - `6` region
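The column mapping above can be illustrated with a small standalone parser (a sketch only, not the code in `scripts/domain_updater.py`; the header row and the sort order are assumptions):

```python
import csv
import io

# Column indexes as documented for the default cfst CSV layout:
# 0=IP, 1=sent, 2=received, 3=loss rate, 4=average latency, 5=download speed, 6=region
COLUMNS = {"ip": 0, "loss": 3, "latency": 4, "speed": 5}

def pick_best_ips(csv_text, top_n=3):
    rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip the header row
    # Prefer lower loss, then lower latency, then higher download speed.
    rows.sort(key=lambda r: (float(r[COLUMNS["loss"]]),
                             float(r[COLUMNS["latency"]]),
                             -float(r[COLUMNS["speed"]])))
    return [r[COLUMNS["ip"]] for r in rows[:top_n]]
```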
-Behavior of `scripts/run_update_and_commit.sh`:
+## 6. BusyBox Router Mode Deployment

-1. Run `domain_updater.py` to update `runtime/` at the repo root
-2. Read the newly selected domain and compare it with the one last recorded on the `runtime-state` branch
-3. Skip commit/push when the domain is unchanged (a skip log is printed)
-4. When the domain changed, sync the four files under `runtime/` into the `runtime-state` worktree, then commit
-5. With `--git-push 1`, the push must succeed
+### 6.1 Environment

-Manual force commit (ignores the "same domain" and "no changes" skip logic):
+- The router architecture is confirmed
+- A `cfst` build for that architecture is prepared
+- The router only has BusyBox, no Python
+
+The router architecture confirmed earlier:

 ```bash
-bash scripts/run_update_and_commit.sh --force-commit config.json
-# or
-GIT_FORCE_COMMIT=1 bash scripts/run_update_and_commit.sh config.json
+uname -m
+# armv7
 ```

-Note: force mode uses a `manual:`-prefixed commit message (including the domain and update time); if nothing changed, it creates an empty commit and still pushes per config.
+So you need a `cfst` binary that runs on `armv7`.
+
+### 6.2 Suggested Router Layout
+
+Example:
+
+```text
+/tmp/home/root/vmess-domain-rotator/
+├── router_local.conf
+├── scripts/
+│   ├── router_local_update.sh
+│   └── router_local_http.sh
+└── cfst/
+    ├── cfst
+    ├── ip.txt
+    └── result.csv
+```
+
+Where:
+
+- `router_local.conf` sets `CFST_WORK_DIR`, the output directory, the HTTP port, etc.
+- `cfst/` holds the `cfst` binary matching the router architecture
+
+### 6.3 Configure router_local.conf
+
+[`router_local.conf`](./router_local.conf) in this repository is the BusyBox config file.

----
+The most important fields:

-## 7. Common Ops Commands
+- `CFST_WORK_DIR`
+  directory containing `cfst`
+- `CFST_BIN`
+  `cfst` executable name, usually `./cfst`
+- `CFST_IP_FILE`
+  input IP list
+- `CFST_RESULT_FILE`
+  `cfst` result file
+- `RUNTIME_DIR`
+  runtime output directory, default `./cfip_runtime`
+- `VALUE_TEXT_FILE`
+  current-value text file, default `current_ip.txt`
+- `VALUE_JSON_FILE`
+  current-value JSON file, default `current_ip.json`
+- `HTTP_PORT`
+  LAN HTTP listen port, default `8080`
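Putting the fields above together, a `router_local.conf` might look like this (values illustrative; check the copy in the repository for the authoritative defaults):

```sh
# router_local.conf sketch -- sourced by the BusyBox sh scripts
CFST_WORK_DIR=./cfst
CFST_BIN=./cfst
CFST_IP_FILE=ip.txt
CFST_RESULT_FILE=result.csv
RUNTIME_DIR=./cfip_runtime
VALUE_TEXT_FILE=current_ip.txt
VALUE_JSON_FILE=current_ip.json
HTTP_PORT=8080
```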
-### 7.1 service / timer
+### 6.4 Run One Manual Update

 ```bash
-sudo systemctl status vmess-domain-rotator.timer
-sudo systemctl status vmess-domain-rotator.service
-sudo systemctl start vmess-domain-rotator.service
-sudo systemctl restart vmess-domain-rotator.timer
+sh scripts/router_local_update.sh ./router_local.conf
 ```

-### 7.2 Logs
+On success it prints something like:

-```bash
-sudo journalctl -u vmess-domain-rotator.service -n 200 --no-pager
-sudo journalctl -u vmess-domain-rotator.service -f
+```text
+[router-local] selected ip: x.x.x.x
 ```

-### 7.3 Inspect runtime-state Commits
+The generated files default to:
+
+- `cfip_runtime/current_ip.txt`
+- `cfip_runtime/current_ip.json`
+- `cfip_runtime/state.json`
+- `cfip_runtime/substore_vars.json`
+
+### 6.5 Expose to the LAN

 ```bash
-git log runtime-state --oneline -n 20
+sh scripts/router_local_http.sh ./router_local.conf
+```
+
+By default it listens on:
+
+```text
+0.0.0.0:8080
 ```

----
+Available paths:
+
+- `/`
+- `/current_ip.txt`
+- `/current_ip.json`
+- `/state.json`
+- `/substore_vars.json`

-## 8. Uninstall
+For example, from within the LAN:

 ```bash
-sudo bash scripts/uninstall_debian.sh
+curl http://192.168.50.1:8080/current_ip.json
+curl http://192.168.50.1:8080/current_ip.txt
 ```

-Keep the auth/env files:
+### 6.6 Scheduled Runs
+
+BusyBox `crond` can run the update on a schedule, e.g. every 15 minutes:
+
+```cron
+*/15 * * * * cd /tmp/home/root/vmess-domain-rotator && sh scripts/router_local_update.sh ./router_local.conf >> /tmp/router_local_update.log 2>&1
+```
+
+If the HTTP service needs to stay resident, start it in the background separately:

 ```bash
-sudo bash scripts/uninstall_debian.sh --keep-auth-files
+cd /tmp/home/root/vmess-domain-rotator
+nohup sh scripts/router_local_http.sh ./router_local.conf >/tmp/router_http.log 2>&1 &
 ```

----
+## 7. VMess Link Batch Replacement
+
+If downstream consumers read the current-value file, use `scripts/update_vmess_links.py`.

-## 9. VMess Link Batch Replacement (optional)
+Server-mode example:

 ```bash
 python3 scripts/update_vmess_links.py \
@@ -376,7 +466,16 @@ python3 scripts/update_vmess_links.py \
   --domain-file ./runtime/current_domain.txt
 ```

-Only replace nodes whose name (`ps`) matches:
+Router / local `cfst` mode example:
+
+```bash
+python3 scripts/update_vmess_links.py \
+  --input ./nodes.txt \
+  --output ./nodes.updated.txt \
+  --domain-file ./cfip_runtime/current_ip.txt
+```
+
+Only replace nodes whose name matches:

 ```bash
 python3 scripts/update_vmess_links.py \
@@ -386,12 +485,32 @@ python3 scripts/update_vmess_links.py \
   --name-regex "(argo|cf|vm)"
 ```

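The per-link rewrite that `update_vmess_links.py` performs can be sketched in a few lines (a simplified illustration; the real tool also handles whole-subscription base64 input, padding quirks, and prints JSON summary stats):

```python
import base64
import json
import re

def rewrite_vmess(link, new_value, name_regex=""):
    """Decode one vmess:// link, replace its `add` field, re-encode it."""
    payload = json.loads(base64.b64decode(link[len("vmess://"):]))
    if name_regex and not re.search(name_regex, payload.get("ps", "")):
        return link  # node name does not match: leave the link untouched
    payload["add"] = new_value
    encoded = base64.b64encode(json.dumps(payload).encode()).decode()
    return "vmess://" + encoded
```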
----
+## 8. Common Ops Commands
+
+Server mode:
+
+```bash
+sudo systemctl status vmess-domain-rotator.timer
+sudo systemctl status vmess-domain-rotator.service
+sudo systemctl start vmess-domain-rotator.service
+sudo journalctl -u vmess-domain-rotator.service -f
+git log runtime-state --oneline -n 20
+```
+
+Router mode:
+
+```bash
+sh scripts/router_local_update.sh ./router_local.conf
+sh scripts/router_local_http.sh ./router_local.conf
+cat cfip_runtime/current_ip.txt
+cat cfip_runtime/current_ip.json
+```

-## 10. Notes
+## 9. Notes

-1. **The service user and the git credential user must match**, otherwise you get:
-   - `fatal: could not read Username ... terminal prompts disabled`
-2. `credential.helper store` keeps credentials in plain text; use it only on controlled servers.
-3. Never commit tokens to the repository.
-4. `runtime/state.json` must be persisted so the fallback keeps working.
+1. Do not mix the server-mode and router-mode configs.
+2. `run_update_and_commit.sh` targets server mode; router mode does no git commits by default.
+3. In server mode the service user and git credential user must match, or you get `terminal prompts disabled`.
+4. `credential.helper store` stores credentials in plain text; only use it on controlled servers.
+5. In BusyBox router mode, `nc` / `mkfifo` / `awk` behavior depends on the BusyBox build; test on the target device.
+6. `state.json` must be persisted, or the fallback will not work.

+ 0 - 114
config.example.json

@@ -1,114 +0,0 @@
-{
-  "api": {
-    "url": "https://example.com/api/domains",
-    "method": "GET",
-    "headers": {
-      "Authorization": "Bearer <token>"
-    },
-    "params": {},
-    "timeout_sec": 10
-  },
-  "parser": {
-    "field_paths": [
-      "data.good[].ip"
-    ],
-    "json_paths": [],
-    "regex": "[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
-  },
-  "record_mapping": {
-    "_comment_required": "必填。记录白名单注册表:所有在过滤/排序里会用到的 API 字段都必须先在 field_map 注册,否则会 fail-fast。",
-    "records_path": "data.good[]",
-    "field_map": {
-      "domain": "ip",
-      "created_at": "createdTime",
-      "avg_score": "avgScore"
-    },
-    "created_time_formats": [
-      "%Y-%m-%d %H:%M:%S"
-    ],
-    "created_time_timezone": "UTC"
-  },
-  "record_filter": {
-    "_comment": "Optional. Record filter rules, active when enabled=true. A record is excluded if it matches any condition in exclude_if_any.",
-    "enabled": false,
-    "exclude_if_any": [
-      {
-        "field": "domain",
-        "regex": "(test|staging)",
-        "case_sensitive": false
-      }
-    ]
-  },
-  "domain_filter": {
-    "include_suffixes": [
-      ".example.com"
-    ],
-    "exclude_regex": []
-  },
-  "scoring": {
-    "_comment_required": "When enabled=true, strategy is required. weighted_average needs weighted_fields; lexicographic needs lexicographic_fields.",
-    "enabled": true,
-    "strategy": "weighted_average",
-    "weighted_fields": [
-      {
-        "field": "avg_score",
-        "weight": 0.5
-      },
-      {
-        "field": "yd_score",
-        "weight": 0.2
-      },
-      {
-        "field": "dx_score",
-        "weight": 0.1
-      },
-      {
-        "field": "lt_score",
-        "weight": 0.2
-      }
-    ],
-    "lexicographic_fields": [
-      {
-        "field": "avg_score",
-        "order": "desc"
-      },
-      {
-        "field": "created_at",
-        "order": "desc"
-      }
-    ],
-    "prefer_lower": true,
-    "within_hours": 24,
-    "tie_breakers": [
-      {
-        "field": "domain",
-        "order": "asc"
-      }
-    ]
-  },
-  "healthcheck": {
-    "enabled": false,
-    "attempts": 5,
-    "timeout_ms": 1800,
-    "port": 443,
-    "tls_verify": true
-  },
-  "selection": {
-    "top_n": 3
-  },
-  "output": {
-    "runtime_dir": "./runtime",
-    "current_domain_file": "current_domain.txt",
-    "current_domain_json": "current_domain.json",
-    "state_file": "state.json",
-    "substore_vars_file": "substore_vars.json"
-  },
-  "v2ray": {
-    "template_file": "",
-    "output_file": "",
-    "replace_token": "__AUTO_DOMAIN__"
-  },
-  "notify": {
-    "command": ""
-  }
-}

+ 62 - 0
config.router.json

@@ -0,0 +1,62 @@
+{
+  "source": {
+    "type": "cfst_local"
+  },
+  "cfst_local": {
+    "work_dir": "./cfst_darwin_arm64",
+    "binary": "./cfst",
+    "run_args": [
+      "-f",
+      "ip.txt",
+      "-o",
+      "result.csv",
+      "-p",
+      "10"
+    ],
+    "result_file": "result.csv",
+    "run_timeout_sec": 600,
+    "skip_run": false,
+    "header_rows": 1,
+    "columns": {
+      "ip": 0,
+      "sent": 1,
+      "received": 2,
+      "loss_rate": 3,
+      "avg_latency": 4,
+      "download_speed": 5,
+      "region": 6
+    }
+  },
+  "domain_filter": {
+    "include_suffixes": [],
+    "exclude_regex": []
+  },
+  "healthcheck": {
+    "enabled": false,
+    "attempts": 3,
+    "timeout_ms": 1800,
+    "port": 443,
+    "tls_verify": false
+  },
+  "selection": {
+    "top_n": 3
+  },
+  "output": {
+    "runtime_dir": "./cfip_runtime",
+    "selected_value_file": "current_ip.txt",
+    "selected_value_json": "current_ip.json",
+    "selected_value_json_key": "ip",
+    "state_file": "state.json",
+    "state_last_good_key": "last_good_ip",
+    "export_vars_file": "substore_vars.json",
+    "substore_value_key": "AUTO_CFIP"
+  },
+  "v2ray": {
+    "template_file": "",
+    "output_file": "",
+    "replace_token": "__AUTO_DOMAIN__"
+  },
+  "notify": {
+    "command": ""
+  }
+}
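Given the `cfst_local.columns` mapping above, one row of `result.csv` parses as follows (a sketch; the sample header and data row are made up for illustration):

```python
import csv
import io

# Column indices copied from config.router.json (cfst_local.columns).
COLS = {"ip": 0, "sent": 1, "received": 2, "loss_rate": 3,
        "avg_latency": 4, "download_speed": 5, "region": 6}

SAMPLE = (
    "IP,Sent,Received,LossRate,AvgLatency,DownloadSpeed,Region\n"
    "104.16.0.1,4,4,0.00,45.33,12.50,LAX\n"
)

# Drop empty rows, then skip header_rows = 1 as the config specifies.
rows = [r for r in csv.reader(io.StringIO(SAMPLE)) if any(c.strip() for c in r)]
data_rows = rows[1:]
record = {name: data_rows[0][idx].strip()
          for name, idx in COLS.items() if idx < len(data_rows[0])}
print(record["ip"], record["avg_latency"])
```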

+ 4 - 1
config.json → config.server.json

@@ -1,4 +1,7 @@
 {
+  "source": {
+    "type": "api"
+  },
   "api": {
     "url": "https://vps789.com/openApi/cfIpTop20",
     "method": "GET",
@@ -107,4 +110,4 @@
   "notify": {
     "command": ""
   }
-}
+}
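Both configs resolve `output.runtime_dir` relative to the directory containing the config file, while absolute paths pass through unchanged. The rule mirrors `resolve_path` in `scripts/domain_updater.py`:

```python
import os

def resolve_path(base_dir, path_value):
    # Same semantics as scripts/domain_updater.py: empty stays empty,
    # absolute paths are normalized, relative paths join base_dir.
    path_text = str(path_value or "").strip()
    if not path_text:
        return ""
    if os.path.isabs(path_text):
        return os.path.normpath(path_text)
    return os.path.normpath(os.path.join(base_dir, path_text))

print(resolve_path("/opt/app", "./runtime"))   # /opt/app/runtime
print(resolve_path("/opt/app", "/var/state"))  # /var/state
```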

+ 34 - 0
router_local.conf

@@ -0,0 +1,34 @@
+CFST_WORK_DIR="./cfst"
+CFST_BIN="./cfst"
+CFST_IP_FILE="ip.txt"
+CFST_RESULT_FILE="result.csv"
+CFST_DISPLAY_COUNT="10"
+CFST_THREADS=""
+CFST_TEST_COUNT=""
+CFST_DOWNLOAD_COUNT=""
+CFST_DOWNLOAD_TIME=""
+CFST_PORT="443"
+CFST_URL=""
+CFST_HTTPING="0"
+CFST_HTTPING_CODE=""
+CFST_CFCOLO=""
+CFST_LATENCY_LIMIT=""
+CFST_LATENCY_LOWER=""
+CFST_LOSS_LIMIT=""
+CFST_SPEED_LIMIT=""
+CFST_DISABLE_DOWNLOAD="0"
+CFST_ALL_IP="0"
+CFST_DEBUG="0"
+CFST_SKIP_RUN="0"
+
+TOP_N="3"
+RUNTIME_DIR="./cfip_runtime"
+VALUE_TEXT_FILE="current_ip.txt"
+VALUE_JSON_FILE="current_ip.json"
+STATE_FILE="state.json"
+EXPORT_VARS_FILE="substore_vars.json"
+VALUE_JSON_KEY="ip"
+STATE_LAST_GOOD_KEY="last_good_ip"
+EXPORT_VALUE_KEY="AUTO_CFIP"
+
+HTTP_PORT="8080"

+ 246 - 42
scripts/domain_updater.py

@@ -1,5 +1,6 @@
 #!/usr/bin/env python3
 import argparse
+import csv
 import datetime as dt
 import functools
 import json
@@ -55,6 +56,24 @@ def build_url(base_url, params):
     return urllib.parse.urlunparse(parsed._replace(query=query))
 
 
+def resolve_path(base_dir, path_value):
+    path_text = str(path_value or "").strip()
+    if not path_text:
+        return ""
+    if os.path.isabs(path_text):
+        return os.path.normpath(path_text)
+    return os.path.normpath(os.path.join(base_dir, path_text))
+
+
+def get_source_type(cfg):
+    source_cfg = cfg.get("source", {})
+    if isinstance(source_cfg, dict):
+        source_type = str(source_cfg.get("type", "api")).strip().lower()
+        if source_type:
+            return source_type
+    return "api"
+
+
 def fetch_api_json(cfg):
     api = cfg["api"]
     url = build_url(api["url"], api.get("params", {}))
@@ -73,6 +92,97 @@ def fetch_api_json(cfg):
     return json.loads(raw)
 
 
+def load_cfst_rows(cfg, config_path_abs):
+    cfst_cfg = cfg.get("cfst_local", {})
+    config_dir = os.path.dirname(config_path_abs)
+
+    work_dir = resolve_path(config_dir, cfst_cfg.get("work_dir", "./cfst"))
+    binary_path = resolve_path(work_dir, cfst_cfg.get("binary", "./cfst"))
+    result_file = resolve_path(work_dir, cfst_cfg.get("result_file", "result.csv"))
+    encoding = str(cfst_cfg.get("encoding", "utf-8")).strip() or "utf-8"
+    skip_run = bool(cfst_cfg.get("skip_run", False))
+    timeout_sec = int(cfst_cfg.get("run_timeout_sec", 600))
+
+    run_args = cfst_cfg.get("run_args", ["-o", os.path.basename(result_file)])
+    if not isinstance(run_args, list):
+        raise ValueError("cfst_local.run_args must be an array")
+    command = [binary_path] + [str(x) for x in run_args]
+
+    if not skip_run:
+        completed = subprocess.run(
+            command,
+            cwd=work_dir,
+            check=False,
+            capture_output=True,
+            text=True,
+            encoding=encoding,
+            errors="replace",
+            timeout=timeout_sec,
+        )
+        if completed.returncode != 0:
+            stderr = (completed.stderr or "").strip()
+            stdout = (completed.stdout or "").strip()
+            details = stderr or stdout or f"exit code {completed.returncode}"
+            raise RuntimeError(f"cfst run failed: {details}")
+
+    if not os.path.exists(result_file):
+        raise RuntimeError(f"cfst result file not found: {result_file}")
+
+    with open(result_file, "r", encoding=encoding, errors="replace", newline="") as f:
+        reader = csv.reader(f)
+        rows = [row for row in reader if any(str(col).strip() for col in row)]
+
+    header_rows = int(cfst_cfg.get("header_rows", 1))
+    if len(rows) <= header_rows:
+        raise RuntimeError("cfst result has no data rows")
+
+    columns_cfg = cfst_cfg.get("columns", {})
+    if not isinstance(columns_cfg, dict):
+        raise ValueError("cfst_local.columns must be an object")
+
+    def col_index(name, default_index):
+        raw = columns_cfg.get(name, default_index)
+        try:
+            idx = int(raw)
+        except Exception as exc:
+            raise ValueError(f"cfst_local.columns.{name} must be an integer") from exc
+        if idx < 0:
+            raise ValueError(f"cfst_local.columns.{name} must be >= 0")
+        return idx
+
+    ip_idx = col_index("ip", 0)
+    sent_idx = col_index("sent", 1)
+    received_idx = col_index("received", 2)
+    loss_idx = col_index("loss_rate", 3)
+    latency_idx = col_index("avg_latency", 4)
+    speed_idx = col_index("download_speed", 5)
+    region_idx = col_index("region", 6)
+
+    out = []
+    for row in rows[header_rows:]:
+        if ip_idx >= len(row):
+            continue
+        domain = normalize_domain(row[ip_idx])
+        if not domain:
+            continue
+        out.append(
+            {
+                "domain": domain,
+                "ip": domain,
+                "sent": row[sent_idx].strip() if sent_idx < len(row) else "",
+                "received": row[received_idx].strip() if received_idx < len(row) else "",
+                "loss_rate": row[loss_idx].strip() if loss_idx < len(row) else "",
+                "avg_latency": row[latency_idx].strip() if latency_idx < len(row) else "",
+                "download_speed": row[speed_idx].strip() if speed_idx < len(row) else "",
+                "region": row[region_idx].strip() if region_idx < len(row) else "",
+            }
+        )
+
+    if not out:
+        raise RuntimeError("cfst result parsed to zero valid rows")
+    return out
+
+
 def flatten_values(value):
     out = []
     if isinstance(value, str):
@@ -230,6 +340,40 @@ def extract_records(payload, record_mapping):
 
 
 def validate_config(cfg):
+    source_type = get_source_type(cfg)
+    if source_type not in {"api", "cfst_local"}:
+        raise ValueError("source.type must be 'api' or 'cfst_local'")
+
+    output_cfg = cfg.get("output", {})
+    if output_cfg and not isinstance(output_cfg, dict):
+        raise ValueError("output must be an object")
+
+    if source_type == "cfst_local":
+        cfst_cfg = cfg.get("cfst_local")
+        if not isinstance(cfst_cfg, dict):
+            raise ValueError("cfst_local is required and must be an object when source.type=cfst_local")
+
+        work_dir = str(cfst_cfg.get("work_dir", "")).strip()
+        if not work_dir:
+            raise ValueError("cfst_local.work_dir is required")
+
+        binary = str(cfst_cfg.get("binary", "")).strip()
+        if not binary:
+            raise ValueError("cfst_local.binary is required")
+
+        result_file = str(cfst_cfg.get("result_file", "")).strip()
+        if not result_file:
+            raise ValueError("cfst_local.result_file is required")
+
+        run_args = cfst_cfg.get("run_args", [])
+        if not isinstance(run_args, list):
+            raise ValueError("cfst_local.run_args must be an array")
+
+        columns_cfg = cfst_cfg.get("columns", {})
+        if columns_cfg and not isinstance(columns_cfg, dict):
+            raise ValueError("cfst_local.columns must be an object")
+        return
+
     record_mapping = cfg.get("record_mapping")
     if not isinstance(record_mapping, dict):
         raise ValueError("record_mapping is required and must be an object")
@@ -766,9 +910,41 @@ def choose_domain(filtered_domains, check_results, top_n, ranked_scored):
     return None, []
 
 
+def build_output_settings(output_cfg, config_path_abs):
+    runtime_dir_cfg = output_cfg.get("runtime_dir", "./runtime")
+    runtime_dir = resolve_path(os.path.dirname(config_path_abs), runtime_dir_cfg)
+
+    selected_text_name = output_cfg.get("selected_value_file", output_cfg.get("current_domain_file", "current_domain.txt"))
+    selected_json_name = output_cfg.get("selected_value_json", output_cfg.get("current_domain_json", "current_domain.json"))
+    state_name = output_cfg.get("state_file", "state.json")
+    vars_name = output_cfg.get("export_vars_file", output_cfg.get("substore_vars_file", "substore_vars.json"))
+
+    return {
+        "runtime_dir": runtime_dir,
+        "selected_text_path": os.path.join(runtime_dir, selected_text_name),
+        "selected_json_path": os.path.join(runtime_dir, selected_json_name),
+        "state_path": os.path.join(runtime_dir, state_name),
+        "vars_path": os.path.join(runtime_dir, vars_name),
+        "selected_json_key": str(output_cfg.get("selected_value_json_key", "domain")).strip() or "domain",
+        "state_last_good_key": str(output_cfg.get("state_last_good_key", "last_good_domain")).strip() or "last_good_domain",
+        "vars_value_key": str(output_cfg.get("substore_value_key", "AUTO_DOMAIN")).strip() or "AUTO_DOMAIN",
+    }
+
+
+def print_output_settings(config_path_abs, cfg):
+    output_cfg = cfg.get("output", {})
+    settings = build_output_settings(output_cfg, config_path_abs)
+    print(json.dumps(settings, ensure_ascii=True))
+
+
 def main():
     ap = argparse.ArgumentParser(description="Auto select VMess preferred domain")
-    ap.add_argument("--config", default="config.json", help="Path to config JSON")
+    ap.add_argument("--config", default="config.server.json", help="Path to config JSON")
+    ap.add_argument(
+        "--print-output-settings",
+        action="store_true",
+        help="Print resolved output settings as JSON and exit",
+    )
     args = ap.parse_args()
 
     config_path_abs = os.path.abspath(args.config)
@@ -784,72 +960,97 @@ def main():
         print(json.dumps({"status": "error", "error": f"invalid config: {e}"}, ensure_ascii=True), file=sys.stderr)
         sys.exit(1)
 
+    if args.print_output_settings:
+        print_output_settings(config_path_abs, cfg)
+        return
+
     output_cfg = cfg.get("output", {})
-    runtime_dir_cfg = output_cfg.get("runtime_dir", "./runtime")
-    if os.path.isabs(runtime_dir_cfg):
-        runtime_dir = runtime_dir_cfg
-    else:
-        runtime_dir = os.path.normpath(os.path.join(os.path.dirname(config_path_abs), runtime_dir_cfg))
+    output_settings = build_output_settings(output_cfg, config_path_abs)
     v2_cfg = cfg.get("v2ray", {})
     notify_cfg = cfg.get("notify", {})
-
-    current_domain_file = os.path.join(runtime_dir, output_cfg.get("current_domain_file", "current_domain.txt"))
-    current_domain_json = os.path.join(runtime_dir, output_cfg.get("current_domain_json", "current_domain.json"))
-    state_file = os.path.join(runtime_dir, output_cfg.get("state_file", "state.json"))
-    substore_vars_file = os.path.join(runtime_dir, output_cfg.get("substore_vars_file", "substore_vars.json"))
+    selected_text_file = output_settings["selected_text_path"]
+    selected_json_file = output_settings["selected_json_path"]
+    state_file = output_settings["state_path"]
+    vars_file = output_settings["vars_path"]
+    selected_json_key = output_settings["selected_json_key"]
+    state_last_good_key = output_settings["state_last_good_key"]
+    vars_value_key = output_settings["vars_value_key"]
 
 
     state = read_json_file(state_file, default={})
-    last_good = state.get("last_good_domain", "")
+    last_good = state.get(state_last_good_key, "")
+    source_type = get_source_type(cfg)
 
 
     try:
-        payload = fetch_api_json(cfg)
-        parsed = parse_domains(payload, cfg.get("parser", {}))
-        filtered = apply_filter(parsed, cfg.get("domain_filter", {}))
+        top_n = int(cfg.get("selection", {}).get("top_n", 3))
+        check_results = []
+        payload = None
+
+        if source_type == "cfst_local":
+            cfst_rows = load_cfst_rows(cfg, config_path_abs)
+            parsed = [row["domain"] for row in cfst_rows]
+            filtered = apply_filter(parsed, cfg.get("domain_filter", {}))
+            filtered_set = set(filtered)
+            cfst_rows = [row for row in cfst_rows if row["domain"] in filtered_set]
+            if not cfst_rows:
+                raise RuntimeError("No valid IP available from cfst result after filtering")
+
+            if cfg.get("healthcheck", {}).get("enabled", False):
+                check_results = check_domains(filtered, cfg.get("healthcheck", {}))
+                selected, _ = choose_domain(filtered, check_results, top_n, [])
+                top_candidates = cfst_rows[:top_n]
+            else:
+                selected = cfst_rows[0]["domain"]
+                top_candidates = cfst_rows[:top_n]
+        else:
+            payload = fetch_api_json(cfg)
+            parsed = parse_domains(payload, cfg.get("parser", {}))
+            filtered = apply_filter(parsed, cfg.get("domain_filter", {}))
 
 
-        record_mapping_cfg = cfg.get("record_mapping", {})
-        field_map = record_mapping_cfg.get("field_map", {})
-        records = extract_records(payload, record_mapping_cfg)
+            record_mapping_cfg = cfg.get("record_mapping", {})
+            field_map = record_mapping_cfg.get("field_map", {})
+            records = extract_records(payload, record_mapping_cfg)
 
 
-        record_filter_cfg = cfg.get("record_filter", {})
-        blocked_domains = collect_excluded_domains(records, field_map, record_filter_cfg)
-        if blocked_domains:
-            filtered = [d for d in filtered if d not in blocked_domains]
+            record_filter_cfg = cfg.get("record_filter", {})
+            blocked_domains = collect_excluded_domains(records, field_map, record_filter_cfg)
+            if blocked_domains:
+                filtered = [d for d in filtered if d not in blocked_domains]
 
 
-        scoring_cfg = cfg.get("scoring", {})
-        scored_records = parse_scored_records(records, field_map, record_mapping_cfg, scoring_cfg)
-        filtered_set = set(filtered)
-        scored_records = [r for r in scored_records if r["domain"] in filtered_set]
-        ranked_scored = rank_scored_records(scored_records, scoring_cfg)
+            scoring_cfg = cfg.get("scoring", {})
+            scored_records = parse_scored_records(records, field_map, record_mapping_cfg, scoring_cfg)
+            filtered_set = set(filtered)
+            scored_records = [r for r in scored_records if r["domain"] in filtered_set]
+            ranked_scored = rank_scored_records(scored_records, scoring_cfg)
 
 
-        check_results = []
-        if cfg.get("healthcheck", {}).get("enabled", True):
-            check_results = check_domains(filtered, cfg.get("healthcheck", {}))
+            if cfg.get("healthcheck", {}).get("enabled", True):
+                check_results = check_domains(filtered, cfg.get("healthcheck", {}))
 
 
-        top_n = int(cfg.get("selection", {}).get("top_n", 3))
-        selected, top_candidates = choose_domain(filtered, check_results, top_n, ranked_scored)
+            selected, top_candidates = choose_domain(filtered, check_results, top_n, ranked_scored)
 
 
         status = "ok"
         if not selected and last_good:
             selected = last_good
             status = "fallback_last_good"
         if not selected:
+            if source_type == "cfst_local":
+                raise RuntimeError("No valid IP available from cfst and no fallback in state")
             raise RuntimeError("No valid domain available from API and no fallback in state")
 
-        write_text_file(current_domain_file, selected + "\n")
+        write_text_file(selected_text_file, selected + "\n")
 
 
         current_json = {
-            "domain": selected,
+            selected_json_key: selected,
             "updated_at": utc_now_iso(),
             "status": status,
+            "source_type": source_type,
             "source_count": len(parsed),
             "checked_count": len(check_results),
             "top_candidates": top_candidates,
         }
-        write_json_file(current_domain_json, current_json)
+        write_json_file(selected_json_file, current_json)
         write_json_file(
-            substore_vars_file,
+            vars_file,
             {
-                "AUTO_DOMAIN": selected,
+                vars_value_key: selected,
                 "UPDATED_AT": current_json["updated_at"],
                 "STATUS": status,
             },
@@ -864,11 +1065,12 @@ def main():
 
 
         new_state = {
             "updated_at": current_json["updated_at"],
-            "last_good_domain": selected,
+            state_last_good_key: selected,
             "status": status,
             "source_count": len(parsed),
             "checked_count": len(check_results),
             "rendered_v2ray": rendered,
+            "source_type": source_type,
         }
         write_json_file(state_file, new_state)
 
 
@@ -881,19 +1083,21 @@ def main():
             "updated_at": now,
             "status": "error",
             "error": str(e),
-            "last_good_domain": last_good,
+            state_last_good_key: last_good,
+            "source_type": source_type,
         }
         write_json_file(state_file, err_state)
 
         if last_good:
-            write_text_file(current_domain_file, last_good + "\n")
+            write_text_file(selected_text_file, last_good + "\n")
             write_json_file(
-                current_domain_json,
+                selected_json_file,
                 {
-                    "domain": last_good,
+                    selected_json_key: last_good,
                     "updated_at": now,
                     "status": "error_use_last_good",
                     "error": str(e),
+                    "source_type": source_type,
                 },
             )
             run_notify(notify_cfg.get("command", ""), last_good, "error_use_last_good")
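The selection/fallback flow in `main()` above reduces to a three-way decision; a condensed sketch (the `pick` helper and sample values are illustrative, not part of the script):

```python
def pick(selected, last_good):
    # Mirrors main(): use the fresh selection when there is one, fall
    # back to the last known-good value, and fail only when both are empty.
    if selected:
        return selected, "ok"
    if last_good:
        return last_good, "fallback_last_good"
    raise RuntimeError("no candidate and no fallback in state")

print(pick("104.16.0.1", "104.16.9.9"))  # ('104.16.0.1', 'ok')
print(pick("", "104.16.9.9"))            # ('104.16.9.9', 'fallback_last_good')
```

This is why the caveats list insists that `state.json` be persisted: without it, the second branch can never fire.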

+ 25 - 4
scripts/install_debian.sh

@@ -9,6 +9,7 @@ RUN_GROUP_SET="0"
 RUN_HOME=""
 RUN_HOME=""
 INTERVAL="1h"
 INSTALL_DEPS="1"
+CONFIG_PATH=""
 GIT_PUSH_ENABLED="1"
 GIT_PUSH_REMOTE="origin"
 GIT_HTTP_USERNAME="git"
@@ -30,6 +31,7 @@ Options:
  --user <name>                  Service user (default: current sudo user)
  --group <name>                 Service group (default: current sudo user's group)
  --interval <value>             Timer interval, e.g. 1h/10min (default: 1h)
+ --config <path>                Config file path (default: <repo>/config.server.json)
  --git-push <0|1>               Enable/disable push to remote (default: 1)
  --git-push-remote <name>       Remote name for push (default: origin)
  --git-http-username <u>        Username for HTTPS auth (default: git)
@@ -42,6 +44,7 @@ Options:
 
 
 Examples:
  sudo bash scripts/install_debian.sh
+ sudo bash scripts/install_debian.sh --config /opt/vmess-domain-rotator/config.server.json
  sudo bash scripts/install_debian.sh --interval 10min
  sudo bash scripts/install_debian.sh --git-push 0
  sudo bash scripts/install_debian.sh --git-http-username aurora --git-http-token-file /root/.config/vmess-token
@@ -69,6 +72,10 @@ while [[ $# -gt 0 ]]; do
 			INTERVAL="$2"
 			shift 2
 			;;
+		--config)
+			CONFIG_PATH="$2"
+			shift 2
+			;;
 		--git-push)
 			GIT_PUSH_ENABLED="$2"
 			shift 2
@@ -131,6 +138,17 @@ if ! git -C "$SOURCE_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
 fi
 APP_DIR="$SOURCE_DIR"
 
 
+if [[ -z "$CONFIG_PATH" ]]; then
+	CONFIG_PATH="${APP_DIR}/config.server.json"
+elif [[ "$CONFIG_PATH" != /* ]]; then
+	CONFIG_PATH="${APP_DIR}/${CONFIG_PATH}"
+fi
+
+if [[ ! -r "$CONFIG_PATH" ]]; then
+	echo "Error: config file not found or unreadable: $CONFIG_PATH" >&2
+	exit 1
+fi
+
 if [[ -n "${SUDO_USER:-}" ]] && [[ "$RUN_USER_SET" != "1" ]]; then
 	RUN_USER="$SUDO_USER"
 fi
@@ -201,9 +219,11 @@ if [[ "$INSTALL_DEPS" == "1" ]]; then
 	apt-get install -y python3 ca-certificates git
 fi
 
 
-mkdir -p "$APP_DIR/runtime"
+RUNTIME_DIR="$(/usr/bin/python3 "${APP_DIR}/scripts/domain_updater.py" --config "$CONFIG_PATH" --print-output-settings | /usr/bin/python3 -c 'import json,sys; print(json.load(sys.stdin)["runtime_dir"])')"
+
+mkdir -p "$RUNTIME_DIR"
 chmod +x "$APP_DIR/scripts/run_update_and_commit.sh" || true
-chown -R "$RUN_USER:$RUN_GROUP" "$APP_DIR/runtime"
+chown -R "$RUN_USER:$RUN_GROUP" "$RUNTIME_DIR"
 
 
 SERVICE_STATE_DIR="/var/lib/${SERVICE_NAME}"
 ENV_FILE="/etc/${SERVICE_NAME}.env"
@@ -295,7 +315,7 @@ Group=${RUN_GROUP}
 WorkingDirectory=${APP_DIR}
 EnvironmentFile=-${ENV_FILE}
 UMask=0077
-ExecStart=/bin/bash ${APP_DIR}/scripts/run_update_and_commit.sh ${APP_DIR}/config.json
+ExecStart=/bin/bash ${APP_DIR}/scripts/run_update_and_commit.sh ${CONFIG_PATH}
 EOF
 
 cat >"/etc/systemd/system/${SERVICE_NAME}.timer" <<EOF
@@ -322,6 +342,7 @@ echo "✓ Installation complete!"
 echo ""
 echo "Configuration:"
 echo "  Working directory: ${APP_DIR}"
+echo "  Config path: ${CONFIG_PATH}"
 echo "  Service user: ${RUN_USER}"
 echo "  Service group: ${RUN_GROUP}"
 echo "  Timer interval: ${INTERVAL}"
@@ -334,5 +355,5 @@ echo "Commands:"
 echo "  Check status: systemctl status ${SERVICE_NAME}.timer"
 echo "  View logs:    journalctl -u ${SERVICE_NAME}.service -n 50 --no-pager"
 echo "  Manual run:   sudo systemctl start ${SERVICE_NAME}.service"
-echo "  Force commit: sudo -u ${RUN_USER} /bin/bash ${APP_DIR}/scripts/run_update_and_commit.sh --force-commit ${APP_DIR}/config.json"
+echo "  Force commit: sudo -u ${RUN_USER} /bin/bash ${APP_DIR}/scripts/run_update_and_commit.sh --force-commit ${CONFIG_PATH}"
 echo ""

+ 126 - 0
scripts/router_local_http.sh

@@ -0,0 +1,126 @@
+#!/bin/sh
+set -eu
+
+SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
+APP_DIR=$(CDPATH= cd -- "$SCRIPT_DIR/.." && pwd)
+CONFIG_PATH=${1:-"$APP_DIR/router_local.conf"}
+
+if [ ! -r "$CONFIG_PATH" ]; then
+  echo "[router-http] config not found: $CONFIG_PATH" >&2
+  exit 1
+fi
+
+# shellcheck disable=SC1090
+. "$CONFIG_PATH"
+
+RUNTIME_DIR=${RUNTIME_DIR:-"$APP_DIR/cfip_runtime"}
+VALUE_TEXT_FILE=${VALUE_TEXT_FILE:-"current_ip.txt"}
+VALUE_JSON_FILE=${VALUE_JSON_FILE:-"current_ip.json"}
+STATE_FILE=${STATE_FILE:-"state.json"}
+EXPORT_VARS_FILE=${EXPORT_VARS_FILE:-"substore_vars.json"}
+HTTP_PORT=${HTTP_PORT:-8080}
+
+TEXT_PATH="$RUNTIME_DIR/$VALUE_TEXT_FILE"
+JSON_PATH="$RUNTIME_DIR/$VALUE_JSON_FILE"
+STATE_PATH="$RUNTIME_DIR/$STATE_FILE"
+EXPORT_PATH="$RUNTIME_DIR/$EXPORT_VARS_FILE"
+TMP_BASE=${TMPDIR:-/tmp}
+
+nc_listen() {
+  if nc -h 2>&1 | grep -qi 'busybox'; then
+    nc -l -p "$HTTP_PORT"
+  else
+    nc -l "$HTTP_PORT"
+  fi
+}
+
+serve_once() {
+  req_fifo="$TMP_BASE/router_http_req.$$"
+  resp_fifo="$TMP_BASE/router_http_resp.$$"
+
+  rm -f "$req_fifo" "$resp_fifo"
+  mkfifo "$req_fifo" "$resp_fifo"
+
+  cat "$resp_fifo" | nc_listen > "$req_fifo" 2>/dev/null &
+  nc_pid=$!
+  sleep 1
+  if ! kill -0 "$nc_pid" 2>/dev/null; then
+    rm -f "$req_fifo" "$resp_fifo"
+    echo "[router-http] nc listen failed on port $HTTP_PORT" >&2
+    return 1
+  fi
+
+  exec 3<"$req_fifo"
+  exec 4>"$resp_fifo"
+
+  if ! IFS= read -r request_line <&3; then
+    exec 3<&-
+    exec 4>&-
+    wait "$nc_pid" 2>/dev/null || true
+    rm -f "$req_fifo" "$resp_fifo"
+    return 0
+  fi
+
+  while IFS= read -r header_line <&3; do
+    [ "$header_line" = "$(printf '\r')" ] && break
+    [ -z "$header_line" ] && break
+  done
+
+  request_path=$(printf '%s' "$request_line" | awk '{print $2}')
+  status_line="HTTP/1.1 200 OK\r"
+  content_type="text/plain; charset=utf-8"
+  file_path="$TEXT_PATH"
+
+  case "$request_path" in
+    /|/current_ip.txt|"/$VALUE_TEXT_FILE")
+      content_type="text/plain; charset=utf-8"
+      file_path="$TEXT_PATH"
+      ;;
+    /current_ip.json|"/$VALUE_JSON_FILE")
+      content_type="application/json; charset=utf-8"
+      file_path="$JSON_PATH"
+      ;;
+    /state.json|"/$STATE_FILE")
+      content_type="application/json; charset=utf-8"
+      file_path="$STATE_PATH"
+      ;;
+    /substore_vars.json|"/$EXPORT_VARS_FILE")
+      content_type="application/json; charset=utf-8"
+      file_path="$EXPORT_PATH"
+      ;;
+    *)
+      status_line="HTTP/1.1 404 Not Found\r"
+      file_path=""
+      ;;
+  esac
+
+  if [ -n "$file_path" ] && [ -f "$file_path" ]; then
+    content_length=$(wc -c < "$file_path" | tr -d ' ')
+    printf '%b\n' "$status_line" >&4
+    printf 'Content-Type: %s\r\n' "$content_type" >&4
+    printf 'Content-Length: %s\r\n' "$content_length" >&4
+    printf 'Connection: close\r\n' >&4
+    printf '\r\n' >&4
+    cat "$file_path" >&4
+  else
+    body='not found'
+    printf '%b\n' "$status_line" >&4
+    printf 'Content-Type: text/plain; charset=utf-8\r\n' >&4
+    printf 'Content-Length: %s\r\n' "$(printf '%s' "$body" | wc -c | tr -d ' ')" >&4
+    printf 'Connection: close\r\n' >&4
+    printf '\r\n' >&4
+    printf '%s' "$body" >&4
+  fi
+
+  exec 3<&-
+  exec 4>&-
+  wait "$nc_pid" 2>/dev/null || true
+  rm -f "$req_fifo" "$resp_fifo"
+}
+
+echo "[router-http] listening on 0.0.0.0:$HTTP_PORT"
+while true; do
+  if ! serve_once; then
+    exit 1
+  fi
+done

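`serve_once` frames each response by hand: status line, headers with an exact `Content-Length`, a blank line, then the body. A minimal sketch of that framing, using a throwaway temp file in place of the runtime artifacts:

```shell
#!/bin/sh
# Sketch of the response framing serve_once emits; the file body is illustrative.
tmp_body=$(mktemp)                              # stand-in for cfip_runtime/current_ip.txt
printf '104.16.1.1\n' > "$tmp_body"
content_length=$(wc -c < "$tmp_body" | tr -d ' ')
{
  printf 'HTTP/1.1 200 OK\r\n'
  printf 'Content-Type: text/plain; charset=utf-8\r\n'
  printf 'Content-Length: %s\r\n' "$content_length"
  printf 'Connection: close\r\n'
  printf '\r\n'
  cat "$tmp_body"
} > "$tmp_body.http"
first_line=$(head -n 1 "$tmp_body.http" | tr -d '\r')
echo "$first_line"
rm -f "$tmp_body" "$tmp_body.http"
```

Getting `Content-Length` right matters here because the script relies on `Connection: close` plus an exact byte count instead of chunked encoding, which BusyBox `nc` cannot negotiate.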
+ 281 - 0
scripts/router_local_update.sh

@@ -0,0 +1,281 @@
+#!/bin/sh
+set -eu
+
+SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
+APP_DIR=$(CDPATH= cd -- "$SCRIPT_DIR/.." && pwd)
+CONFIG_PATH=${1:-"$APP_DIR/router_local.conf"}
+
+if [ ! -r "$CONFIG_PATH" ]; then
+  echo "[router-local] config not found: $CONFIG_PATH" >&2
+  exit 1
+fi
+
+# shellcheck disable=SC1090
+. "$CONFIG_PATH"
+
+CFST_WORK_DIR=${CFST_WORK_DIR:-"$APP_DIR/cfst"}
+CFST_BIN=${CFST_BIN:-"./cfst"}
+CFST_IP_FILE=${CFST_IP_FILE:-"ip.txt"}
+CFST_RESULT_FILE=${CFST_RESULT_FILE:-"result.csv"}
+CFST_DISPLAY_COUNT=${CFST_DISPLAY_COUNT:-10}
+CFST_THREADS=${CFST_THREADS:-}
+CFST_TEST_COUNT=${CFST_TEST_COUNT:-}
+CFST_DOWNLOAD_COUNT=${CFST_DOWNLOAD_COUNT:-}
+CFST_DOWNLOAD_TIME=${CFST_DOWNLOAD_TIME:-}
+CFST_PORT=${CFST_PORT:-443}
+CFST_URL=${CFST_URL:-}
+CFST_HTTPING=${CFST_HTTPING:-0}
+CFST_HTTPING_CODE=${CFST_HTTPING_CODE:-}
+CFST_CFCOLO=${CFST_CFCOLO:-}
+CFST_LATENCY_LIMIT=${CFST_LATENCY_LIMIT:-}
+CFST_LATENCY_LOWER=${CFST_LATENCY_LOWER:-}
+CFST_LOSS_LIMIT=${CFST_LOSS_LIMIT:-}
+CFST_SPEED_LIMIT=${CFST_SPEED_LIMIT:-}
+CFST_DISABLE_DOWNLOAD=${CFST_DISABLE_DOWNLOAD:-0}
+CFST_ALL_IP=${CFST_ALL_IP:-0}
+CFST_DEBUG=${CFST_DEBUG:-0}
+CFST_SKIP_RUN=${CFST_SKIP_RUN:-0}
+
+TOP_N=${TOP_N:-3}
+RUNTIME_DIR=${RUNTIME_DIR:-"$APP_DIR/cfip_runtime"}
+VALUE_TEXT_FILE=${VALUE_TEXT_FILE:-"current_ip.txt"}
+VALUE_JSON_FILE=${VALUE_JSON_FILE:-"current_ip.json"}
+STATE_FILE=${STATE_FILE:-"state.json"}
+EXPORT_VARS_FILE=${EXPORT_VARS_FILE:-"substore_vars.json"}
+VALUE_JSON_KEY=${VALUE_JSON_KEY:-ip}
+STATE_LAST_GOOD_KEY=${STATE_LAST_GOOD_KEY:-last_good_ip}
+EXPORT_VALUE_KEY=${EXPORT_VALUE_KEY:-AUTO_CFIP}
+
+CURRENT_TEXT_PATH="$RUNTIME_DIR/$VALUE_TEXT_FILE"
+CURRENT_JSON_PATH="$RUNTIME_DIR/$VALUE_JSON_FILE"
+STATE_PATH="$RUNTIME_DIR/$STATE_FILE"
+EXPORT_VARS_PATH="$RUNTIME_DIR/$EXPORT_VARS_FILE"
+RESULT_PATH="$CFST_WORK_DIR/$CFST_RESULT_FILE"
+UPDATED_AT=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+
+mkdir -p "$RUNTIME_DIR"
+
+json_get_string() {
+  key="$1"
+  file="$2"
+  if [ ! -f "$file" ]; then
+    return 0
+  fi
+  sed -n "s/.*\"$key\":[[:space:]]*\"\([^\"]*\)\".*/\1/p" "$file" | head -n 1
+}
+
+json_escape() {
+  printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g'
+}
+
+trim_cr() {
+  printf '%s' "$1" | tr -d '\r'
+}
+
+build_top_candidates_json() {
+  awk -F',' -v top_n="$TOP_N" '
+    function trim(v) {
+      gsub(/\r/, "", v)
+      sub(/^[[:space:]]+/, "", v)
+      sub(/[[:space:]]+$/, "", v)
+      return v
+    }
+    function esc(v) {
+      gsub(/\\/,"\\\\",v)
+      gsub(/"/,"\\\"",v)
+      return v
+    }
+    NR == 1 { next }
+    count >= top_n { exit }
+    NF < 1 { next }
+    {
+      ip = esc(trim($1))
+      sent = esc(trim($2))
+      received = esc(trim($3))
+      loss = esc(trim($4))
+      latency = esc(trim($5))
+      speed = esc(trim($6))
+      region = esc(trim($7))
+      if (count > 0) {
+        printf(",")
+      }
+      printf("{\"ip\":\"%s\",\"domain\":\"%s\",\"sent\":\"%s\",\"received\":\"%s\",\"loss_rate\":\"%s\",\"avg_latency\":\"%s\",\"download_speed\":\"%s\",\"region\":\"%s\"}", ip, ip, sent, received, loss, latency, speed, region)
+      count++
+    }
+    END {
+      if (count == 0) {
+        printf("")
+      }
+    }
+  ' "$RESULT_PATH"
+}
+
+write_success_outputs() {
+  selected_ip="$1"
+  top_candidates_json="$2"
+  source_count="$3"
+
+  escaped_ip=$(json_escape "$selected_ip")
+  printf '%s\n' "$selected_ip" > "$CURRENT_TEXT_PATH"
+  cat > "$CURRENT_JSON_PATH" <<EOF
+{
+  "$VALUE_JSON_KEY": "$escaped_ip",
+  "updated_at": "$UPDATED_AT",
+  "status": "ok",
+  "source_type": "cfst_local_busybox",
+  "source_count": $source_count,
+  "checked_count": 0,
+  "top_candidates": [${top_candidates_json}]
+}
+EOF
+
+  cat > "$STATE_PATH" <<EOF
+{
+  "updated_at": "$UPDATED_AT",
+  "$STATE_LAST_GOOD_KEY": "$escaped_ip",
+  "status": "ok",
+  "source_count": $source_count,
+  "checked_count": 0,
+  "source_type": "cfst_local_busybox"
+}
+EOF
+
+  cat > "$EXPORT_VARS_PATH" <<EOF
+{
+  "$EXPORT_VALUE_KEY": "$escaped_ip",
+  "UPDATED_AT": "$UPDATED_AT",
+  "STATUS": "ok"
+}
+EOF
+}
+
+write_error_with_fallback() {
+  last_good_ip="$1"
+  error_message="$2"
+  escaped_ip=$(json_escape "$last_good_ip")
+  escaped_error=$(json_escape "$error_message")
+
+  printf '%s\n' "$last_good_ip" > "$CURRENT_TEXT_PATH"
+  cat > "$CURRENT_JSON_PATH" <<EOF
+{
+  "$VALUE_JSON_KEY": "$escaped_ip",
+  "updated_at": "$UPDATED_AT",
+  "status": "error_use_last_good",
+  "error": "$escaped_error",
+  "source_type": "cfst_local_busybox"
+}
+EOF
+
+  cat > "$STATE_PATH" <<EOF
+{
+  "updated_at": "$UPDATED_AT",
+  "$STATE_LAST_GOOD_KEY": "$escaped_ip",
+  "status": "error",
+  "error": "$escaped_error",
+  "source_type": "cfst_local_busybox"
+}
+EOF
+
+  cat > "$EXPORT_VARS_PATH" <<EOF
+{
+  "$EXPORT_VALUE_KEY": "$escaped_ip",
+  "UPDATED_AT": "$UPDATED_AT",
+  "STATUS": "error_use_last_good"
+}
+EOF
+}
+
+run_cfst() {
+  cd "$CFST_WORK_DIR"
+  set -- "$CFST_BIN" -f "$CFST_IP_FILE" -o "$CFST_RESULT_FILE" -p "$CFST_DISPLAY_COUNT" -tp "$CFST_PORT"
+
+  if [ -n "$CFST_THREADS" ]; then
+    set -- "$@" -n "$CFST_THREADS"
+  fi
+  if [ -n "$CFST_TEST_COUNT" ]; then
+    set -- "$@" -t "$CFST_TEST_COUNT"
+  fi
+  if [ -n "$CFST_DOWNLOAD_COUNT" ]; then
+    set -- "$@" -dn "$CFST_DOWNLOAD_COUNT"
+  fi
+  if [ -n "$CFST_DOWNLOAD_TIME" ]; then
+    set -- "$@" -dt "$CFST_DOWNLOAD_TIME"
+  fi
+  if [ -n "$CFST_URL" ]; then
+    set -- "$@" -url "$CFST_URL"
+  fi
+  if [ "$CFST_HTTPING" = "1" ]; then
+    set -- "$@" -httping
+  fi
+  if [ -n "$CFST_HTTPING_CODE" ]; then
+    set -- "$@" -httping-code "$CFST_HTTPING_CODE"
+  fi
+  if [ -n "$CFST_CFCOLO" ]; then
+    set -- "$@" -cfcolo "$CFST_CFCOLO"
+  fi
+  if [ -n "$CFST_LATENCY_LIMIT" ]; then
+    set -- "$@" -tl "$CFST_LATENCY_LIMIT"
+  fi
+  if [ -n "$CFST_LATENCY_LOWER" ]; then
+    set -- "$@" -tll "$CFST_LATENCY_LOWER"
+  fi
+  if [ -n "$CFST_LOSS_LIMIT" ]; then
+    set -- "$@" -tlr "$CFST_LOSS_LIMIT"
+  fi
+  if [ -n "$CFST_SPEED_LIMIT" ]; then
+    set -- "$@" -sl "$CFST_SPEED_LIMIT"
+  fi
+  if [ "$CFST_DISABLE_DOWNLOAD" = "1" ]; then
+    set -- "$@" -dd
+  fi
+  if [ "$CFST_ALL_IP" = "1" ]; then
+    set -- "$@" -allip
+  fi
+  if [ "$CFST_DEBUG" = "1" ]; then
+    set -- "$@" -debug
+  fi
+
+  "$@"
+}
+
+LAST_GOOD_IP=$(json_get_string "$STATE_LAST_GOOD_KEY" "$STATE_PATH")
+
+if [ "$CFST_SKIP_RUN" != "1" ]; then
+  if ! run_cfst; then
+    if [ -n "$LAST_GOOD_IP" ]; then
+      write_error_with_fallback "$LAST_GOOD_IP" "cfst run failed"
+      echo "[router-local] cfst run failed, fallback to last good ip: $LAST_GOOD_IP"
+      exit 0
+    fi
+    echo "[router-local] cfst run failed and no last good ip available" >&2
+    exit 1
+  fi
+fi
+
+if [ ! -s "$RESULT_PATH" ]; then
+  if [ -n "$LAST_GOOD_IP" ]; then
+    write_error_with_fallback "$LAST_GOOD_IP" "cfst result file missing or empty"
+    echo "[router-local] empty result, fallback to last good ip: $LAST_GOOD_IP"
+    exit 0
+  fi
+  echo "[router-local] result file missing or empty: $RESULT_PATH" >&2
+  exit 1
+fi
+
+BEST_LINE=$(sed -n '2p' "$RESULT_PATH" | tr -d '\r')
+BEST_IP=$(printf '%s' "$BEST_LINE" | cut -d',' -f1 | tr -d ' ')
+SOURCE_COUNT=$(awk -F',' 'NR > 1 && NF > 0 { count++ } END { print count + 0 }' "$RESULT_PATH")
+TOP_CANDIDATES_JSON=$(build_top_candidates_json)
+
+if [ -z "$BEST_IP" ]; then
+  if [ -n "$LAST_GOOD_IP" ]; then
+    write_error_with_fallback "$LAST_GOOD_IP" "no valid ip found in cfst result"
+    echo "[router-local] no valid ip found, fallback to last good ip: $LAST_GOOD_IP"
+    exit 0
+  fi
+  echo "[router-local] no valid ip found in result file" >&2
+  exit 1
+fi
+
+write_success_outputs "$BEST_IP" "$TOP_CANDIDATES_JSON" "$SOURCE_COUNT"
+echo "[router-local] selected ip: $BEST_IP"

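The best-IP pick above boils down to `sed -n '2p'` plus `cut` on the cfst CSV (row 1 is the header, row 2 the top-ranked result). A self-contained sketch against a synthetic result file; the column values are made up and the real cfst header text differs:

```shell
#!/bin/sh
# Synthetic cfst-style result.csv; values are invented for illustration.
demo_csv=$(mktemp)
cat > "$demo_csv" <<'CSV'
IP,Sent,Received,Loss,Latency,Speed,Region
104.16.1.1,4,4,0.00,35.20,12.34,LAX
104.16.2.2,4,4,0.00,41.87,10.05,SJC
CSV
# Same selection logic as the script: second line, first column, CR/space stripped.
best_ip=$(sed -n '2p' "$demo_csv" | tr -d '\r' | cut -d',' -f1 | tr -d ' ')
# Same candidate count as the script: data rows only.
source_count=$(awk -F',' 'NR > 1 && NF > 0 { count++ } END { print count + 0 }' "$demo_csv")
echo "$best_ip $source_count"
rm -f "$demo_csv"
```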
+ 67 - 17
scripts/run_update_and_commit.sh

@@ -13,8 +13,7 @@ if [[ ! "$force_commit" =~ ^[01]$ ]]; then
  echo "[vmess-domain-rotator] invalid GIT_FORCE_COMMIT=${force_commit}, expected 0 or 1"
  exit 1
 fi
-CONFIG_PATH="${1:-${APP_DIR}/config.json}"
-DOMAIN_FILE="${APP_DIR}/runtime/current_domain.txt"
+CONFIG_PATH="${1:-${APP_DIR}/config.server.json}"
 
 export GIT_TERMINAL_PROMPT=0
 
@@ -49,17 +48,57 @@ git_auth() {
   fi
 }
 
+output_settings_json="$(/usr/bin/python3 "${APP_DIR}/scripts/domain_updater.py" --config "$CONFIG_PATH" --print-output-settings)"
+mapfile -t output_settings < <(
+  printf '%s' "$output_settings_json" | /usr/bin/python3 -c '
+import json, sys
+settings = json.load(sys.stdin)
+for key in ["runtime_dir", "selected_text_path", "selected_json_path", "state_path", "vars_path"]:
+    print(settings[key])
+'
+)
+
+RUNTIME_DIR="${output_settings[0]}"
+SELECTED_TEXT_FILE="${output_settings[1]}"
+SELECTED_JSON_FILE="${output_settings[2]}"
+STATE_FILE="${output_settings[3]}"
+VARS_FILE="${output_settings[4]}"
+
+repo_relpath() {
+  /usr/bin/python3 - "$APP_DIR" "$1" <<'PY'
+import os
+import sys
+
+base = os.path.realpath(sys.argv[1])
+path = os.path.realpath(sys.argv[2])
+try:
+    common = os.path.commonpath([base, path])
+except ValueError:
+    common = ""
+if common != base:
+    print("")
+else:
+    print(os.path.relpath(path, base))
+PY
+}
+
 /usr/bin/python3 "${APP_DIR}/scripts/domain_updater.py" --config "$CONFIG_PATH"
 
-if [[ ! -f "$DOMAIN_FILE" ]]; then
-  echo "[vmess-domain-rotator] runtime/current_domain.txt missing after updater run, skip git commit"
+if [[ ! -f "$SELECTED_TEXT_FILE" ]]; then
+  echo "[vmess-domain-rotator] selected value file missing after updater run (${SELECTED_TEXT_FILE}), skip git commit"
   exit 0
 fi
 
-after="$(tr -d '\r\n' < "$DOMAIN_FILE")"
+after="$(tr -d '\r\n' < "$SELECTED_TEXT_FILE")"
 
 if [[ -z "$after" ]]; then
-  echo "[vmess-domain-rotator] empty selected domain, skip git commit"
+  echo "[vmess-domain-rotator] empty selected value, skip git commit"
+  exit 0
+fi
+
+selected_rel="$(repo_relpath "$SELECTED_TEXT_FILE")"
+if [[ -z "$selected_rel" ]]; then
+  echo "[vmess-domain-rotator] selected value file is outside repo (${SELECTED_TEXT_FILE}), skip git commit"
   exit 0
 fi
 
@@ -126,19 +165,26 @@ if [[ "$work_branch" != "$runtime_branch" ]]; then
 fi
 
 before=""
-if before_raw="$(git -C "$work_dir" show "HEAD:runtime/current_domain.txt" 2>/dev/null)"; then
+if before_raw="$(git -C "$work_dir" show "HEAD:${selected_rel}" 2>/dev/null)"; then
   before="$(printf '%s' "$before_raw" | tr -d '\r\n')"
 fi
 
 if [[ "$force_commit" != "1" ]] && [[ -n "$before" ]] && [[ "$after" == "$before" ]]; then
-  echo "[vmess-domain-rotator] selected domain unchanged (${after}), skip git commit and push"
+  echo "[vmess-domain-rotator] selected value unchanged (${after}), skip git commit and push"
   exit 0
 fi
 
-mkdir -p "$work_dir/runtime"
-for file in current_domain.txt current_domain.json state.json substore_vars.json; do
-  src="$APP_DIR/runtime/$file"
-  dst="$work_dir/runtime/$file"
+tracked_src_files=("$SELECTED_TEXT_FILE" "$SELECTED_JSON_FILE" "$STATE_FILE" "$VARS_FILE")
+tracked_rel_files=()
+for src in "${tracked_src_files[@]}"; do
+  rel="$(repo_relpath "$src")"
+  if [[ -z "$rel" ]]; then
+    echo "[vmess-domain-rotator] skip non-repo output file: ${src}"
+    continue
+  fi
+  tracked_rel_files+=("$rel")
+  dst="$work_dir/$rel"
+  mkdir -p "$(dirname "$dst")"
   if [[ -f "$src" ]]; then
     cp "$src" "$dst"
   else
@@ -146,7 +192,12 @@ for file in current_domain.txt current_domain.json state.json substore_vars.json
   fi
 done
 
-git -C "$work_dir" add -A runtime/current_domain.txt runtime/current_domain.json runtime/state.json runtime/substore_vars.json || true
+if [[ "${#tracked_rel_files[@]}" -eq 0 ]]; then
+  echo "[vmess-domain-rotator] no repo-local output files from config (${CONFIG_PATH}), skip git commit"
+  exit 0
+fi
+
+git -C "$work_dir" add -A -- "${tracked_rel_files[@]}" || true
 
 staged_changed="1"
 if git -C "$work_dir" diff --cached --quiet; then
@@ -164,9 +215,9 @@ if [[ "$staged_changed" == "0" ]] && [[ "$force_commit" == "1" ]]; then
   echo "[vmess-domain-rotator] force commit enabled with unchanged content, creating empty commit"
 fi
 
-commit_message="chore: rotate preferred domain to ${after} (${ts})"
+commit_message="chore: rotate preferred value to ${after} (${ts})"
 if [[ "$force_commit" == "1" ]]; then
-  commit_message="manual: domain ${after}, updated at ${ts}"
+  commit_message="manual: value ${after}, updated at ${ts}"
 fi
 
 git -C "$work_dir" \
@@ -205,5 +256,4 @@ else
   fi
 fi
 
-echo "[vmess-domain-rotator] committed runtime changes on ${runtime_branch}: selected domain ${after}"
-
+echo "[vmess-domain-rotator] committed output changes on ${runtime_branch}: selected value ${after} from ${RUNTIME_DIR}"

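The `repo_relpath` helper only emits a relative path when the resolved output file sits inside the repo, and an empty result is what makes the wrapper skip `git add` for that file. A standalone sketch of the same containment check, with hypothetical paths (neither needs to exist, since `realpath` only normalizes):

```shell
#!/bin/sh
# Hypothetical base/path pair; mirrors the repo_relpath commonpath guard above.
rel_inside=$(python3 - /srv/app /srv/app/runtime/current.txt <<'PY'
import os, sys

base = os.path.realpath(sys.argv[1])
path = os.path.realpath(sys.argv[2])
try:
    common = os.path.commonpath([base, path])
except ValueError:
    common = ""
# Empty output signals "outside the repo" to the shell caller.
print(os.path.relpath(path, base) if common == base else "")
PY
)
echo "$rel_inside"
```

With a path such as `/etc/passwd` as the second argument, the same snippet prints an empty string, which is the skip signal the wrapper checks for.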
+ 21 - 15
substore/operator_template.js

@@ -1,18 +1,19 @@
 /*
   Sub-Store operator (production-friendly)
-  - Pull dynamic domain from your current_domain.json
+  - Pull dynamic value from current_domain.json / current_ip.json
   - Replace vmess server field for matched nodes
 */
 
-const DOMAIN_JSON_URL = "https://git.dewofaurora.de/aurora/vmess-domain-rotator/raw/runtime-state/runtime/current_domain.json";
+const VALUE_JSON_URL = "https://git.dewofaurora.de/aurora/vmess-domain-rotator/raw/runtime-state/runtime/current_domain.json";
 const NODE_NAME_REGEX = /(argo|cf|vm|优选)/i;
 const CACHE_KEY = "vmess-domain-rotator:current";
 const CACHE_TTL_MS = 5 * 60 * 1000;
+const JSON_VALUE_KEYS = ["domain", "ip"];
 
-async function fetchDomainViaSubStore() {
+async function fetchValueViaSubStore() {
   const $ = $substore;
   const { body, statusCode } = await $.http.get({
-    url: DOMAIN_JSON_URL,
+    url: VALUE_JSON_URL,
     headers: {
       Accept: "application/json",
       "Cache-Control": "no-cache"
@@ -25,21 +26,26 @@ async function fetchDomainViaSubStore() {
   }
 
   const obj = JSON.parse(body || "{}");
-  const domain = String(obj.domain || "").trim().toLowerCase();
-  if (!domain) {
-    throw new Error("empty domain field");
+  let value = "";
+  for (const key of JSON_VALUE_KEYS) {
+    value = String(obj[key] || "").trim().toLowerCase();
+    if (value) break;
   }
-  return domain;
+
+  if (!value) {
+    throw new Error(`empty value field, expected one of: ${JSON_VALUE_KEYS.join(", ")}`);
+  }
+  return value;
 }
 
 async function operator(proxies = [], targetPlatform, context) {
   const cache = scriptResourceCache;
-  let domain = cache.get(CACHE_KEY);
+  let value = cache.get(CACHE_KEY);
 
-  if (!domain) {
+  if (!value) {
     try {
-      domain = await fetchDomainViaSubStore();
-      cache.set(CACHE_KEY, domain, CACHE_TTL_MS);
+      value = await fetchValueViaSubStore();
+      cache.set(CACHE_KEY, value, CACHE_TTL_MS);
     } catch (e) {
       console.log(`[vmess-domain-rotator] fetch failed: ${e.message}`);
       return proxies;
@@ -51,12 +57,12 @@ async function operator(proxies = [], targetPlatform, context) {
     if (!p || p.type !== "vmess") continue;
     if (!NODE_NAME_REGEX.test(p.name || "")) continue;
 
-    if (p.server !== domain) {
-      p.server = domain;
+    if (p.server !== value) {
+      p.server = value;
       updated += 1;
     }
   }
 
-  console.log(`[vmess-domain-rotator] domain=${domain}, updated=${updated}, total=${proxies.length}, target=${targetPlatform}`);
+  console.log(`[vmess-domain-rotator] value=${value}, updated=${updated}, total=${proxies.length}, target=${targetPlatform}`);
   return proxies;
 }

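The key fallback the operator performs (`domain` first, then `ip`, trimmed and lowercased) can be reproduced over a stand-in payload with a small inline Python helper; the JSON here imitates `current_ip.json`:

```shell
#!/bin/sh
# Stand-in payload; the operator tries "domain" first, then "ip".
payload='{"ip":"104.16.1.1","updated_at":"2024-01-01T00:00:00Z","status":"ok"}'
value=$(printf '%s' "$payload" | python3 -c '
import json, sys
obj = json.load(sys.stdin)
for key in ("domain", "ip"):
    v = str(obj.get(key) or "").strip().lower()
    if v:
        print(v)
        break
')
echo "$value"
```

This is why the same operator script can consume either the server-mode `current_domain.json` or the router-mode `current_ip.json` without configuration changes.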
+ 79 - 94
workflow.md

@@ -1,113 +1,98 @@
 # Workflow
 
+## 1. Server Mode
+
 ```mermaid
 flowchart TD
-    %% =========================
-    %% Entry points
-    %% =========================
-    A1[systemd timer trigger<br/>OnBootSec=2min / OnUnitActiveSec=interval] --> A2[vmess-domain-rotator.service]
-    A2 --> A3[run_update_and_commit.sh config.json]
-
-    A4[manual run<br/>bash scripts/run_update_and_commit.sh config.json] --> A3
-    A5[manual update only, without the service commit<br/>python3 scripts/domain_updater.py --config config.json] --> B1
+    A1[systemd timer / manual run] --> A2[run_update_and_commit.sh config.server.json]
+    A3[manual updater run] --> A4[domain_updater.py --config config.server.json]
 
-    %% =========================
-    %% domain_updater.py pipeline
-    %% =========================
-    subgraph U["domain_updater.py (main domain-selection flow)"]
+    subgraph U1["domain_updater.py / server mode"]
       direction TB
-      B1[read config.json<br/>parse output.runtime_dir] --> B2[read runtime/state.json<br/>last_good_domain]
-      B2 --> B3[request API<br/>api.url/method/headers/params/body/timeout]
-      B3 --> B4[parse candidate domains<br/>parser.field_paths/json_paths/regex fallback]
-      B4 --> B5[filter domains<br/>domain_filter.include_suffixes/exclude_regex]
-      B5 --> B6[record-level filtering<br/>record_filter.exclude_if_any<br/>contains/equals/regex]
-      B6 --> B7[parse record fields<br/>record_mapping.records_path + field_map allowlist]
-      B7 --> B8[score and rank<br/>scoring.strategy(weighted_average/lexicographic)<br/>within_hours + prefer_lower + tie_breakers]
-      B8 --> B9{healthcheck.enabled?}
+      B1[read config.server.json] --> B2[resolve output paths]
+      B2 --> B3[read runtime/state.json]
+      B3 --> B4[fetch API payload]
+      B4 --> B5[parse candidates]
+      B5 --> B6[apply domain filter]
+      B6 --> B7[apply record filter]
+      B7 --> B8[score and rank]
+      B8 --> B9[optional healthcheck]
+      B9 --> B10[select preferred domain]
+      B10 --> B11[write runtime/current_domain.txt]
+      B11 --> B12[write runtime/current_domain.json]
+      B12 --> B13[write runtime/substore_vars.json]
+      B13 --> B14[write runtime/state.json]
+    end
 
-      B9 -- yes --> B10[TLS-probe candidates<br/>attempts/timeout_ms/port/tls_verify]
-      B9 -- no --> B11[skip healthcheck]
-      B10 --> B12[choose_domain]
-      B11 --> B12
+    A2 --> C1[resolve configured output paths]
+    C1 --> A4
+    A4 --> C2[read selected text file from resolved path]
+    C2 --> C3[compare with runtime-state HEAD]
+    C3 --> C4{changed?}
+    C4 -- no --> C5[skip commit/push]
+    C4 -- yes --> C6[sync configured repo-local output files]
+    C6 --> C7[commit on runtime-state]
+    C7 --> C8[optional push]
+```
 
-      B12 --> B13{domain selected?}
-      B13 -- yes --> B16[status=ok]
-      B13 -- no --> B14{last_good_domain exists?}
-      B14 -- yes --> B15[use last_good_domain<br/>status=fallback_last_good]
-      B14 -- no --> BE1[report error and exit]
+## 2. Local Python cfst Mode
 
-      B15 --> B17[write runtime/current_domain.txt]
-      B16 --> B17
-      B17 --> B18[write runtime/current_domain.json]
-      B18 --> B19[write runtime/substore_vars.json]
-      B19 --> B20[optionally render v2ray template<br/>v2ray.template_file/output_file/replace_token]
-      B20 --> B21[write runtime/state.json]
-      B21 --> B22[optional notify.command]
-      B22 --> B23[print this run's JSON result to stdout]
+```mermaid
+flowchart TD
+    A1[manual run] --> A2[domain_updater.py --config config.router.json]
 
-      BE1 --> BE2[write state.json status=error]
-      BE2 --> BE3{last_good_domain exists?}
-      BE3 -- yes --> BE4[write current_domain*.json/txt<br/>status=error_use_last_good]
-      BE4 --> BE5[notify.command + emit error_use_last_good]
-      BE3 -- no --> BE6[print error to stderr and exit 1]
+    subgraph U2["domain_updater.py / cfst_local mode"]
+      direction TB
+      B1[read config.router.json] --> B2[resolve output paths]
+      B2 --> B3[run local cfst]
+      B3 --> B4[parse result.csv]
+      B4 --> B5[optional filter / optional healthcheck]
+      B5 --> B6[select preferred ip]
+      B6 --> B7[write cfip_runtime/current_ip.txt]
+      B7 --> B8[write cfip_runtime/current_ip.json]
+      B8 --> B9[write cfip_runtime/substore_vars.json]
+      B9 --> B10[write cfip_runtime/state.json]
     end
+```
 
-    %% updater result returns to the wrapper
-    B23 --> C1
-    BE5 --> C1
+## 3. BusyBox Router Mode
 
-    %% =========================
-    %% run_update_and_commit.sh pipeline
-    %% =========================
-    subgraph W["run_update_and_commit.sh (runtime-state auto commit/push)"]
-      direction TB
-      C1[check runtime/current_domain.txt exists and is non-empty] --> C2{git environment ok?<br/>git present + inside repo + valid HEAD}
-      C2 -- no --> C0[finish local runtime update only and exit]
-      C2 -- yes --> C3[determine runtime_branch / push_remote / auth options]
-      C3 --> C4{current branch is runtime-state?}
-      C4 -- yes --> C5[operate directly in current repo]
-      C4 -- no --> C6[create temporary worktree]
-      C6 --> C7{runtime-state branch exists?}
-      C7 -- exists locally --> C8[checkout local runtime-state]
-      C7 -- remote only --> C9[fetch then checkout]
-      C7 -- neither --> C10[create orphan runtime-state]
-      C8 --> C11
-      C9 --> C11
-      C10 --> C11
-      C5 --> C11[read runtime/current_domain.txt at runtime-state HEAD]
+```mermaid
+flowchart TD
+    A1[crond / manual run] --> A2[router_local_update.sh router_local.conf]
+    A3[background service / manual run] --> A4[router_local_http.sh router_local.conf]
 
-      C11 --> C12{force_commit!=1 and old/new domain identical?}
-      C12 -- yes --> C13[skip git commit/push]
-      C12 -- no --> C14[sync the 4 runtime files into target worktree]
-      C14 --> C15[git add runtime/*.txt/json]
-      C15 --> C16{staged changes present?}
-      C16 -- no and force=0 --> C17[skip commit]
-      C16 -- no and force=1 --> C18[allow-empty commit]
-      C16 -- yes --> C19[normal commit]
+    subgraph R1["router_local_update.sh"]
+      direction TB
+      B1[read router_local.conf] --> B2[run cfst]
+      B2 --> B3[read result.csv]
+      B3 --> B4[pick best ip]
+      B4 --> B5[write cfip_runtime/current_ip.txt]
+      B5 --> B6[write cfip_runtime/current_ip.json]
+      B6 --> B7[write cfip_runtime/substore_vars.json]
+      B7 --> B8[write cfip_runtime/state.json]
+    end
 
-      C18 --> C20[commit message manual: ...]
-      C19 --> C21[commit message chore: rotate preferred domain ...]
-      C20 --> C22{GIT_PUSH_ENABLED=1?}
-      C21 --> C22
-      C22 -- no --> C23[done, local commit only]
-      C22 -- yes --> C24{usable remote available?}
-      C24 -- no and required=1 --> C25[exit 1]
-      C24 -- no and required=0 --> C26[skip push]
-      C24 -- yes --> C27[push with configured auth<br/>credential helper or HTTP header/token]
-      C27 --> C28{push succeeded?}
-      C28 -- yes --> C29[done]
-      C28 -- no and required=1 --> C30[exit 1]
-      C28 -- no and required=0 --> C31[log failure and finish]
+    subgraph R2["router_local_http.sh"]
+      direction TB
+      C1[read router_local.conf] --> C2[listen with nc]
+      C2 --> C3[serve current_ip.txt]
+      C2 --> C4[serve current_ip.json]
+      C2 --> C5[serve state.json]
+      C2 --> C6[serve substore_vars.json]
     end
+```
+
+## 4. Consumers
 
-    %% =========================
-    %% Consumers
-    %% =========================
-    B18 --> D1[runtime/current_domain.json readable externally]
-    D1 --> D2[substore/operator_template.js fetches the JSON]
-    D2 --> D3[scriptResourceCache caches the domain, default 5 minutes]
-    D3 --> D4[rewrite matched nodes' vmess server]
+```mermaid
+flowchart TD
+    A1[runtime/current_domain.json] --> B1[Sub-Store operator]
+    A2[cfip_runtime/current_ip.json over LAN] --> B1
+    B1 --> C1[read domain or ip]
+    C1 --> C2[rewrite matched vmess server]
 
-    C29 --> E1[runtime-state branch updated]
-    E1 --> E2[downstream consumes via raw/runtime-state/runtime/*.json]
+    D1[runtime/current_domain.txt] --> E1[update_vmess_links.py]
+    D2[cfip_runtime/current_ip.txt] --> E1
+    E1 --> E2[rewrite vmess add field]
 ```
 ```
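`update_vmess_links.py` is referenced by the consumer diagram but not shown in this commit; the sketch below is a hypothetical illustration of rewriting the base64-encoded `add` field of a vmess link, not the script's actual implementation:

```shell
#!/bin/sh
# Hypothetical vmess link with a made-up payload; real links carry more fields.
link="vmess://$(printf '{"add":"old.example.com","port":"443","ps":"cf-node"}' | base64 | tr -d '\n')"
# Decode, swap the "add" field, re-encode.
new_link=$(printf '%s' "${link#vmess://}" | python3 -c '
import base64, json, sys
obj = json.loads(base64.b64decode(sys.stdin.read()))
obj["add"] = "104.16.1.1"  # value read from cfip_runtime/current_ip.txt in the real flow
print("vmess://" + base64.b64encode(json.dumps(obj).encode()).decode())
')
# Decode again just to confirm the rewrite took effect.
new_add=$(printf '%s' "${new_link#vmess://}" | python3 -c 'import base64, json, sys; print(json.loads(base64.b64decode(sys.stdin.read()))["add"])')
echo "$new_add"
```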