Dew-OF-Aurora 3 weeks ago
Commit
5fe072703d

+ 48 - 0
AGENTS.md

@@ -0,0 +1,48 @@
+# AGENTS
+
+## Repo Purpose (verified)
+- This repo is an automation utility for VMess preferred-domain rotation: fetch API candidates, pick one domain, write runtime artifacts, then let Sub-Store/V2Ray consume them.
+- Main entrypoint is `scripts/domain_updater.py`; optional post-processor for VMess links is `scripts/update_vmess_links.py`.
+- Scheduled runtime entrypoint is `scripts/run_update_and_commit.sh` (runs updater, then commits only when domain changed).
+
+## High-value Files
+- `config.json`: active runtime config (single source for runtime settings).
+- `runtime/current_domain.json` and `runtime/current_domain.txt`: outputs consumed by Sub-Store/operator scripts.
+- `substore/operator_template.js`: Sub-Store script-operation template using `$substore.http.get` and `scriptResourceCache`.
+- `systemd/vmess-domain-rotator.service` and `systemd/vmess-domain-rotator.timer`: deployment templates (must customize user/path).
+
+## Commands You Should Actually Use
+- Syntax check after Python edits:
+  - `python3 -m py_compile scripts/domain_updater.py`
+  - `python3 -m py_compile scripts/update_vmess_links.py`
+- Manual refresh run:
+  - `python3 scripts/domain_updater.py --config config.json`
+- Manual refresh + conditional git commit:
+  - `bash scripts/run_update_and_commit.sh config.json`
+- Update VMess links (`add` field) from selected domain:
+  - `python3 scripts/update_vmess_links.py --input <in> --output <out> --domain-file runtime/current_domain.txt`
+- Debian one-shot deploy/remove:
+  - `sudo bash scripts/install_debian.sh`
+  - `sudo bash scripts/uninstall_debian.sh`
+- Local output smoke test endpoint:
+  - `python3 -m http.server 8080 --directory runtime`
+
+## Behavior/Gotchas That Are Easy To Miss
+- For vps789 Top20, keep API ranking order (`scoring.use_api_order=true`) and keep only domains (filter IPv4 via `domain_filter.exclude_regex`).
+- `healthcheck.enabled` may be intentionally `false` in vps789 mode; do not re-enable unless you want local TLS checks to override API ranking behavior.
+- `domain_updater.py` fallback behavior depends on persisted `runtime/state.json` (`last_good_domain`). Do not delete this file in normal operation.
+- `run_update_and_commit.sh` commits only when `runtime/current_domain.txt` changed before/after update.
+- Sub-Store script preview may show no change if `NODE_NAME_REGEX` in `substore/operator_template.js` does not match node names, or if cached output is reused.
+- In Sub-Store operators here, use `$substore.http.get(...)` (not plain `fetch`) for compatibility.
+
+## Deployment Notes (systemd templates)
+- Template service assumes:
+  - Working dir: `/opt/vmess-domain-rotator`
+  - User: `vmessrotator`
+- Template timer interval is `12h` by default.
+- Before enabling timer, adjust `User`, `WorkingDirectory`, and `ExecStart` in `systemd/vmess-domain-rotator.service` to match the real host.
+- Enable flow: copy unit files to `/etc/systemd/system/` -> `systemctl daemon-reload` -> `systemctl enable --now vmess-domain-rotator.timer`.
+
+## Editing Constraints For This Repo
+- Keep secrets out of source snapshots/backups: avoid placing tokens in tracked/shared files.
+- `runtime/` is operational output; avoid treating it as source-of-truth code.

+ 196 - 0
README.md

@@ -0,0 +1,196 @@
+# vmess-domain-rotator
+
+This repo provides a first working version of an automated pipeline:
+
+1. Pull preferred domains from an API (like vps789).
+2. Validate and optionally health-check candidates.
+3. Pick the best domain automatically.
+4. Export runtime files for Sub-Store and V2Ray integration.
+
+## Quick start
+
+1. Edit `config.json` directly (single runtime config file).
+
+For a sample response that carries candidates at `data.good[].ip`, the parser config can be:
+
+```json
+"parser": {
+  "field_paths": ["data.good[].ip"],
+  "json_paths": [],
+  "regex": "[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
+}
+```
+
+For vps789 Top20 ranking, if you want to use the first-ranked domain directly:
+
+```json
+"scoring": {
+  "enabled": true,
+  "records_path": "data.good[]",
+  "ip_field": "ip",
+  "created_time_field": "createdTime",
+  "within_hours": 24,
+  "use_api_order": true
+}
+```
+
+Then set:
+
+```json
+"healthcheck": { "enabled": false }
+```
+
+2. Run once:
+
+```bash
+python3 scripts/domain_updater.py --config config.json
+```
+
+3. Check output files:
+
+- `runtime/current_domain.txt`
+- `runtime/current_domain.json`
+- `runtime/state.json`
+
+## What the script exports
+
+- `current_domain.txt`: plain text domain.
+- `current_domain.json`: machine-readable payload:
+
+```json
+{
+  "domain": "best.example.com",
+  "updated_at": "2026-04-13T07:00:00Z",
+  "status": "ok",
+  "source_count": 15,
+  "checked_count": 10
+}
+```
+
+- `state.json`: includes `last_good_domain` for automatic fallback.
+
+If you need Sub-Store to fetch these files over HTTP, you can expose `runtime/` via nginx or caddy.
+
+## How to connect with Sub-Store
+
+Two practical modes:
+
+1. **Recommended**: run the updater externally and let a Sub-Store operator fetch `current_domain.json`.
+2. Keep Sub-Store static and update the V2Ray template directly (token replacement).
+
+`substore/operator_template.js` is a starter operator script showing how to replace VMess `server` fields by regex match on node names.
+
+Minimal smoke test (local):
+
+```bash
+python3 -m http.server 8080 --directory runtime
+```
+
+Then test:
+
+```bash
+curl http://127.0.0.1:8080/current_domain.json
+```
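The operator's core transformation (fetch `current_domain.json`, rewrite the `server` field of matching nodes) can be sketched in Python; the node shape and the default regex below are illustrative, not the exact operator code:

```python
import re


def rewrite_servers(proxies, domain, name_regex=r"(argo|cf|vm)"):
    """Point the server field of matching nodes at the selected domain.

    proxies: list of dicts with at least "name" and "server" keys
    (illustrative shape; the real operator works on Sub-Store proxy objects).
    """
    pattern = re.compile(name_regex)
    for proxy in proxies:
        if pattern.search(proxy.get("name", "")):
            proxy["server"] = domain
    return proxies
```

In the real operator, `domain` comes from `$substore.http.get(...)` against the exposed `current_domain.json`.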
+
+## How to connect with V2Ray
+
+If you maintain a template config containing an `__AUTO_DOMAIN__` token, this script can render a real config file with the selected domain.
+
+Then reload service:
+
+```bash
+systemctl reload v2ray
+```
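The rendering step is a plain token substitution, mirroring `render_v2ray()` in `scripts/domain_updater.py`; the template fragment below is illustrative:

```python
def render_template(template_text, domain, token="__AUTO_DOMAIN__"):
    # Same approach as render_v2ray(): plain string replacement of the token.
    return template_text.replace(token, domain)


# Illustrative V2Ray outbound fragment containing the token.
TEMPLATE = '{"outbounds": [{"settings": {"vnext": [{"address": "__AUTO_DOMAIN__"}]}}]}'
```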
+
+## Update Base64 VMess links
+
+If your node list uses `vmess://<base64-json>`, use:
+
+```bash
+python3 scripts/update_vmess_links.py \
+  --input ./nodes.txt \
+  --output ./nodes.updated.txt \
+  --domain-file ./runtime/current_domain.txt
+```
+
+To update only nodes whose names (`ps`) match a regex:
+
+```bash
+python3 scripts/update_vmess_links.py \
+  --input ./nodes.txt \
+  --output ./nodes.updated.txt \
+  --domain-file ./runtime/current_domain.txt \
+  --name-regex "(argo|cf|vm)"
+```
+
+If the whole subscription file is itself base64-encoded, add:
+
+```bash
+--subscription-base64
+```
+
+## Scheduling
+
+### Cron
+
+```cron
+0 */12 * * * /bin/bash /opt/vmess-domain-rotator/scripts/run_update_and_commit.sh /opt/vmess-domain-rotator/config.json >> /var/log/vmess-domain-rotator.log 2>&1
+```
+
+### systemd timer
+
+Use files under `systemd/` and adjust paths/user.
+Default timer interval in this repo is `12h`.
+
+### Debian install/uninstall scripts
+
+Install on a Debian server (creates systemd service+timer):
+
+```bash
+sudo bash scripts/install_debian.sh
+```
+
+This installer also initializes a git repo under the app dir (if missing) and configures the service to auto-commit only when the selected domain changes.
+
+Uninstall:
+
+```bash
+sudo bash scripts/uninstall_debian.sh
+```
+
+Useful options:
+
+```bash
+sudo bash scripts/install_debian.sh --user root --group root --interval 5min
+sudo bash scripts/uninstall_debian.sh --keep-app-dir
+```
+
+## Config notes
+
+- API response formats differ. Use one of:
+  - `parser.field_paths` (supports `data.good[].ip`-style array segments; preferred)
+  - `parser.json_paths` (simple dotted paths without array fan-out)
+  - `parser.regex` fallback (applied only when the path parsers find nothing)
+- If the API occasionally returns bad results, keep `healthcheck.enabled=true`.
+- If the API may return IPv4 addresses, consider `healthcheck.tls_verify=false`.
+- If all checks fail, the script falls back to the last known good domain.
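The path syntax is the one implemented by `get_values_by_path()` in `scripts/domain_updater.py`: dotted keys, with a trailing `[]` on a segment fanning out over a list. A trimmed copy of that helper:

```python
def get_values_by_path(data, path):
    """Walk a dotted path; a trailing [] on a segment fans out over a list.

    Trimmed copy of the helper in scripts/domain_updater.py.
    """
    parts = path.split(".")

    def walk(cur, idx):
        if idx >= len(parts):
            return [cur]
        part = parts[idx]
        if part.endswith("[]"):
            arr = cur.get(part[:-2]) if isinstance(cur, dict) else None
            if not isinstance(arr, list):
                return []
            out = []
            for item in arr:
                out.extend(walk(item, idx + 1))
            return out
        if isinstance(cur, dict) and part in cur:
            return walk(cur[part], idx + 1)
        return []

    return walk(data, 0)
```

So `"data.good[].ip"` collects the `ip` value from every record under `data.good`, which is exactly the default `parser.field_paths` entry in `config.json`.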
+
+## About Sub-Store "config file location"
+
+If you use the Sub-Store web UI, rules are usually stored in the backend data store (SQLite/JSON), not in a simple editable config file.
+
+Common deployment cases:
+
+- Docker: check mounted volume path, then back up that volume.
+- Node/PM2: check app directory `data/` or database file.
+
+In practice, you can avoid touching backend db files directly:
+
+1. Keep your node logic in Sub-Store operator script (web UI).
+2. Let operator fetch `current_domain.json` from this project.
+3. Dynamic replacement happens at subscription processing time.
+
+## Security and reliability
+
+- Do not commit API tokens.
+- Use reverse proxy auth if exposing `current_domain.json` publicly.
+- Keep `runtime/state.json` persisted (for fallback).

+ 62 - 0
config.json

@@ -0,0 +1,62 @@
+{
+  "api": {
+    "url": "https://vps789.com/openApi/cfIpTop20",
+    "method": "GET",
+    "headers": {},
+    "params": {},
+    "timeout_sec": 10
+  },
+  "parser": {
+    "field_paths": [
+      "data.good[].ip"
+    ],
+    "json_paths": [],
+    "regex": "[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
+  },
+  "domain_filter": {
+    "include_suffixes": [],
+    "exclude_regex": [
+      "^(?:25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(?:\\.(?:25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}$"
+    ]
+  },
+  "scoring": {
+    "enabled": true,
+    "records_path": "data.good[]",
+    "ip_field": "ip",
+    "created_time_field": "createdTime",
+    "score_fields": [
+      "avgScore",
+      "ydScore",
+      "dxScore",
+      "ltScore"
+    ],
+    "within_hours": 24,
+    "prefer_lower": true,
+    "use_api_order": true
+  },
+  "healthcheck": {
+    "enabled": false,
+    "attempts": 2,
+    "timeout_ms": 1800,
+    "port": 443,
+    "tls_verify": true
+  },
+  "selection": {
+    "top_n": 3
+  },
+  "output": {
+    "runtime_dir": "./runtime",
+    "current_domain_file": "current_domain.txt",
+    "current_domain_json": "current_domain.json",
+    "state_file": "state.json",
+    "substore_vars_file": "substore_vars.json"
+  },
+  "v2ray": {
+    "template_file": "",
+    "output_file": "",
+    "replace_token": "__AUTO_DOMAIN__"
+  },
+  "notify": {
+    "command": ""
+  }
+}

+ 39 - 0
runtime/current_domain.json

@@ -0,0 +1,39 @@
+{
+  "domain": "7.cf.3666888.xyz",
+  "updated_at": "2026-04-13T15:22:30Z",
+  "status": "ok",
+  "source_count": 20,
+  "checked_count": 0,
+  "top_candidates": [
+    {
+      "domain": "7.cf.3666888.xyz",
+      "scores": [
+        258.0,
+        380.0,
+        167.0,
+        196.0
+      ],
+      "created_raw": "2026-04-13 00:00:00"
+    },
+    {
+      "domain": "cfsaas.080112.xyz",
+      "scores": [
+        268.0,
+        488.0,
+        68.0,
+        188.0
+      ],
+      "created_raw": "2026-04-13 00:00:00"
+    },
+    {
+      "domain": "blog.646474.xyz",
+      "scores": [
+        272.0,
+        482.0,
+        65.0,
+        191.0
+      ],
+      "created_raw": "2026-04-13 00:00:00"
+    }
+  ]
+}

+ 1 - 0
runtime/current_domain.txt

@@ -0,0 +1 @@
+7.cf.3666888.xyz

+ 1 - 0
runtime/sample_nodes.txt

@@ -0,0 +1 @@
+vmess://ewogICJ2IjogIjIiLAogICJwcyI6ICJ2bS1hcmdvLTVnbXpOdVVWVlUiLAogICJhZGQiOiAiY2ZzYWFzLjA4MDExMi54eXoiLAogICJwb3J0IjogIjg0NDMiLAogICJpZCI6ICI5MDM3ZGNkZC0zNTI5LTQxZjQtOWI1OC05YTJlNGY5NzIwZjYiLAogICJhaWQiOiAiMCIsCiAgInNjeSI6ICJhdXRvIiwKICAibmV0IjogIndzIiwKICAidHlwZSI6ICJub25lIiwKICAiaG9zdCI6ICJoeTIuZGV3b2ZhdXJvcmEuZHBkbnMub3JnIiwKICAicGF0aCI6ICI5MDM3ZGNkZC0zNTI5LTQxZjQtOWI1OC05YTJlNGY5NzIwZjYtdm0iLAogICJ0bHMiOiAidGxzIiwKICAic25pIjogImh5Mi5kZXdvZmF1cm9yYS5kcGRucy5vcmciLAogICJhbHBuIjogIiIsCiAgImZwIjogImNocm9tZSIsCiAgImluc2VjdXJlIjogIjAiCn0=

+ 1 - 0
runtime/sample_nodes.updated.txt

@@ -0,0 +1 @@
+vmess://eyJ2IjoiMiIsInBzIjoidm0tYXJnby01Z216TnVVVlZVIiwiYWRkIjoiNy5jZi4zNjY2ODg4Lnh5eiIsInBvcnQiOiI4NDQzIiwiaWQiOiI5MDM3ZGNkZC0zNTI5LTQxZjQtOWI1OC05YTJlNGY5NzIwZjYiLCJhaWQiOiIwIiwic2N5IjoiYXV0byIsIm5ldCI6IndzIiwidHlwZSI6Im5vbmUiLCJob3N0IjoiaHkyLmRld29mYXVyb3JhLmRwZG5zLm9yZyIsInBhdGgiOiI5MDM3ZGNkZC0zNTI5LTQxZjQtOWI1OC05YTJlNGY5NzIwZjYtdm0iLCJ0bHMiOiJ0bHMiLCJzbmkiOiJoeTIuZGV3b2ZhdXJvcmEuZHBkbnMub3JnIiwiYWxwbiI6IiIsImZwIjoiY2hyb21lIiwiaW5zZWN1cmUiOiIwIn0=

+ 8 - 0
runtime/state.json

@@ -0,0 +1,8 @@
+{
+  "updated_at": "2026-04-13T15:22:30Z",
+  "last_good_domain": "7.cf.3666888.xyz",
+  "status": "ok",
+  "source_count": 20,
+  "checked_count": 0,
+  "rendered_v2ray": false
+}

+ 5 - 0
runtime/substore_vars.json

@@ -0,0 +1,5 @@
+{
+  "AUTO_DOMAIN": "7.cf.3666888.xyz",
+  "UPDATED_AT": "2026-04-13T15:22:30Z",
+  "STATUS": "ok"
+}

+ 477 - 0
scripts/domain_updater.py

@@ -0,0 +1,477 @@
+#!/usr/bin/env python3
+import argparse
+import datetime as dt
+import json
+import os
+import re
+import socket
+import ssl
+import subprocess
+import sys
+import time
+import urllib.parse
+import urllib.request
+
+
+DOMAIN_RE = re.compile(r"^(?=.{1,253}$)(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))+$")
+IPV4_RE = re.compile(r"^(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(?:\.(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$")
+
+
+def utc_now_iso():
+    return dt.datetime.now(dt.timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")
+
+
+def read_json_file(path, default=None):
+    if default is None:
+        default = {}
+    if not os.path.exists(path):
+        return default
+    with open(path, "r", encoding="utf-8") as f:
+        return json.load(f)
+
+
+def write_json_file(path, data):
+    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
+    with open(path, "w", encoding="utf-8") as f:
+        json.dump(data, f, ensure_ascii=True, indent=2)
+
+
+def write_text_file(path, data):
+    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
+    with open(path, "w", encoding="utf-8") as f:
+        f.write(data)
+
+
+def build_url(base_url, params):
+    if not params:
+        return base_url
+    parsed = urllib.parse.urlparse(base_url)
+    current = urllib.parse.parse_qs(parsed.query)
+    for k, v in params.items():
+        current[k] = [str(v)]
+    query = urllib.parse.urlencode(current, doseq=True)
+    return urllib.parse.urlunparse(parsed._replace(query=query))
+
+
+def fetch_api_json(cfg):
+    api = cfg["api"]
+    url = build_url(api["url"], api.get("params", {}))
+    method = api.get("method", "GET").upper()
+    headers = api.get("headers", {})
+    timeout = int(api.get("timeout_sec", 10))
+    body_obj = api.get("body")
+    body = None
+    if body_obj is not None:
+        body = json.dumps(body_obj).encode("utf-8")
+        headers = {**headers, "Content-Type": "application/json"}
+
+    req = urllib.request.Request(url=url, data=body, headers=headers, method=method)
+    with urllib.request.urlopen(req, timeout=timeout) as resp:
+        raw = resp.read().decode("utf-8", errors="replace")
+    return json.loads(raw)
+
+
+def flatten_values(value):
+    out = []
+    if isinstance(value, str):
+        out.append(value)
+    elif isinstance(value, list):
+        for item in value:
+            out.extend(flatten_values(item))
+    elif isinstance(value, dict):
+        for item in value.values():
+            out.extend(flatten_values(item))
+    return out
+
+
+def get_by_json_path(data, path):
+    cur = data
+    for part in path.split("."):
+        if isinstance(cur, dict) and part in cur:
+            cur = cur[part]
+        else:
+            return None
+    return cur
+
+
+def get_values_by_path(data, path):
+    parts = path.split(".")
+
+    def walk(cur, idx):
+        if idx >= len(parts):
+            return [cur]
+
+        part = parts[idx]
+        if part.endswith("[]"):
+            key = part[:-2]
+            if isinstance(cur, dict):
+                arr = cur.get(key)
+            else:
+                arr = None
+            if not isinstance(arr, list):
+                return []
+
+            out = []
+            for item in arr:
+                out.extend(walk(item, idx + 1))
+            return out
+
+        if isinstance(cur, dict) and part in cur:
+            return walk(cur[part], idx + 1)
+        return []
+
+    return walk(data, 0)
+
+
+def parse_domains(payload, parser_cfg):
+    domains = []
+
+    for p in parser_cfg.get("field_paths", []):
+        values = get_values_by_path(payload, p)
+        domains.extend(flatten_values(values))
+
+    for p in parser_cfg.get("json_paths", []):
+        v = get_by_json_path(payload, p)
+        if v is not None:
+            domains.extend(flatten_values(v))
+
+    if not domains:
+        regex_s = parser_cfg.get("regex", r"[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
+        text = json.dumps(payload, ensure_ascii=True)
+        domains.extend(re.findall(regex_s, text))
+
+    clean = []
+    seen = set()
+    for d in domains:
+        d = d.strip().lower().rstrip(".")
+        if (DOMAIN_RE.match(d) or IPV4_RE.match(d)) and d not in seen:
+            seen.add(d)
+            clean.append(d)
+    return clean
+
+
+def parse_created_time(s):
+    if not s:
+        return None
+    try:
+        return dt.datetime.strptime(str(s).strip(), "%Y-%m-%d %H:%M:%S").replace(tzinfo=dt.timezone.utc)
+    except Exception:
+        return None
+
+
+def parse_scored_records(payload, scoring_cfg):
+    if not scoring_cfg.get("enabled", False):
+        return []
+
+    records_path = scoring_cfg.get("records_path", "data.good[]")
+    ip_field = scoring_cfg.get("ip_field", "ip")
+    created_time_field = scoring_cfg.get("created_time_field", "createdTime")
+    score_fields = scoring_cfg.get("score_fields", ["avgScore", "ydScore", "dxScore", "ltScore"])
+
+    raw_records = get_values_by_path(payload, records_path)
+    out = []
+    for r in raw_records:
+        if not isinstance(r, dict):
+            continue
+        domain = str(r.get(ip_field, "")).strip().lower().rstrip(".")
+        if not domain:
+            continue
+        created = parse_created_time(r.get(created_time_field))
+        scores = []
+        for f in score_fields:
+            v = r.get(f)
+            try:
+                scores.append(float(v))
+            except Exception:
+                scores.append(float("inf"))
+        out.append(
+            {
+                "domain": domain,
+                "created_at": created,
+                "created_raw": r.get(created_time_field),
+                "scores": scores,
+                "raw": r,
+            }
+        )
+    return out
+
+
+def rank_scored_records(records, scoring_cfg):
+    if not records:
+        return []
+
+    within_hours = float(scoring_cfg.get("within_hours", 24))
+    prefer_lower = bool(scoring_cfg.get("prefer_lower", True))
+    use_api_order = bool(scoring_cfg.get("use_api_order", False))
+
+    now = dt.datetime.now(dt.timezone.utc)
+    cutoff = now - dt.timedelta(hours=within_hours)
+
+    recent = [r for r in records if r.get("created_at") is not None and r["created_at"] >= cutoff]
+    candidates = recent if recent else records
+
+    if use_api_order:
+        seen = set()
+        ordered = []
+        for r in candidates:
+            d = r["domain"]
+            if d in seen:
+                continue
+            seen.add(d)
+            ordered.append(r)
+        return ordered
+
+    def key_lower(r):
+        return tuple(r["scores"] + [r["domain"]])
+
+    def key_higher(r):
+        return tuple([-x if x != float("inf") else float("inf") for x in r["scores"]] + [r["domain"]])
+
+    ranked = sorted(candidates, key=key_lower if prefer_lower else key_higher)
+    return ranked
+
+
+def apply_filter(domains, filter_cfg):
+    include_suffixes = [s.lower() for s in filter_cfg.get("include_suffixes", []) if s]
+    exclude_regex = [re.compile(x) for x in filter_cfg.get("exclude_regex", []) if x]
+
+    out = []
+    for d in domains:
+        if include_suffixes and not any(d.endswith(s) for s in include_suffixes):
+            continue
+        if any(rx.search(d) for rx in exclude_regex):
+            continue
+        out.append(d)
+    return out
+
+
+def single_tls_check(domain, timeout_ms, port, tls_verify=True):
+    start = time.perf_counter()
+    timeout_sec = max(0.2, timeout_ms / 1000.0)
+    try:
+        infos = socket.getaddrinfo(domain, port, proto=socket.IPPROTO_TCP)
+        if not infos:
+            return False, None, "dns_empty"
+
+        af, socktype, proto, _, sockaddr = infos[0]
+        with socket.socket(af, socktype, proto) as sock:
+            sock.settimeout(timeout_sec)
+            sock.connect(sockaddr)
+            if tls_verify:
+                ctx = ssl.create_default_context()
+            else:
+                ctx = ssl.create_default_context()
+                ctx.check_hostname = False
+                ctx.verify_mode = ssl.CERT_NONE
+            with ctx.wrap_socket(sock, server_hostname=domain) as ssock:
+                ssock.do_handshake()
+
+        elapsed = int((time.perf_counter() - start) * 1000)
+        return True, elapsed, "ok"
+    except Exception as e:
+        return False, None, str(e)
+
+
+def check_domains(domains, hc_cfg):
+    attempts = int(hc_cfg.get("attempts", 2))
+    timeout_ms = int(hc_cfg.get("timeout_ms", 1800))
+    port = int(hc_cfg.get("port", 443))
+    tls_verify = bool(hc_cfg.get("tls_verify", True))
+
+    results = []
+    for d in domains:
+        ok_count = 0
+        latencies = []
+        errors = []
+        for _ in range(attempts):
+            ok, latency, err = single_tls_check(d, timeout_ms, port, tls_verify=tls_verify)
+            if ok:
+                ok_count += 1
+                latencies.append(latency)
+            else:
+                errors.append(err)
+
+        success_ratio = ok_count / attempts if attempts else 0.0
+        avg_latency = int(sum(latencies) / len(latencies)) if latencies else 999999
+        results.append(
+            {
+                "domain": d,
+                "success_ratio": success_ratio,
+                "avg_latency_ms": avg_latency,
+                "ok_count": ok_count,
+                "attempts": attempts,
+                "errors": errors[:3],
+            }
+        )
+
+    results.sort(key=lambda x: (-x["success_ratio"], x["avg_latency_ms"], x["domain"]))
+    return results
+
+
+def render_v2ray(template_file, output_file, token, domain):
+    if not template_file or not output_file:
+        return False
+    if not os.path.exists(template_file):
+        return False
+    with open(template_file, "r", encoding="utf-8") as f:
+        tpl = f.read()
+    rendered = tpl.replace(token, domain)
+    os.makedirs(os.path.dirname(output_file) or ".", exist_ok=True)
+    with open(output_file, "w", encoding="utf-8") as f:
+        f.write(rendered)
+    return True
+
+
+def run_notify(cmd, domain, status):
+    if not cmd:
+        return
+    env = os.environ.copy()
+    env["AUTODOMAIN"] = domain
+    env["AUTODOMAIN_STATUS"] = status
+    subprocess.run(cmd, shell=True, check=False, env=env)
+
+
+def choose_domain(filtered_domains, check_results, top_n, ranked_scored):
+    if ranked_scored:
+        domains_by_score = [x["domain"] for x in ranked_scored]
+        domain_set = set(domains_by_score)
+
+        if check_results:
+            check_map = {x["domain"]: x for x in check_results}
+            top = []
+            for d in domains_by_score:
+                if d in check_map and check_map[d]["success_ratio"] > 0:
+                    top.append(check_map[d])
+                if len(top) >= top_n:
+                    break
+            if top:
+                return top[0]["domain"], top
+
+            score_only = [{"domain": x["domain"], "scores": x["scores"], "created_raw": x["created_raw"]} for x in ranked_scored[:top_n]]
+            return score_only[0]["domain"], score_only
+
+        top_scored = [{"domain": x["domain"], "scores": x["scores"], "created_raw": x["created_raw"]} for x in ranked_scored[:top_n]]
+        if top_scored:
+            return top_scored[0]["domain"], top_scored
+
+    if check_results:
+        top = [x for x in check_results if x["success_ratio"] > 0][:top_n]
+        if top:
+            return top[0]["domain"], top
+        return None, check_results[:top_n]
+    if filtered_domains:
+        return filtered_domains[0], [{"domain": x} for x in filtered_domains[:top_n]]
+    return None, []
+
+
+def main():
+    ap = argparse.ArgumentParser(description="Auto select VMess preferred domain")
+    ap.add_argument("--config", default="config.json", help="Path to config JSON")
+    args = ap.parse_args()
+
+    cfg = read_json_file(args.config)
+    runtime_dir = cfg.get("output", {}).get("runtime_dir", "./runtime")
+    output_cfg = cfg.get("output", {})
+    v2_cfg = cfg.get("v2ray", {})
+    notify_cfg = cfg.get("notify", {})
+
+    current_domain_file = os.path.join(runtime_dir, output_cfg.get("current_domain_file", "current_domain.txt"))
+    current_domain_json = os.path.join(runtime_dir, output_cfg.get("current_domain_json", "current_domain.json"))
+    state_file = os.path.join(runtime_dir, output_cfg.get("state_file", "state.json"))
+    substore_vars_file = os.path.join(runtime_dir, output_cfg.get("substore_vars_file", "substore_vars.json"))
+
+    state = read_json_file(state_file, default={})
+    last_good = state.get("last_good_domain", "")
+
+    try:
+        payload = fetch_api_json(cfg)
+        parsed = parse_domains(payload, cfg.get("parser", {}))
+        filtered = apply_filter(parsed, cfg.get("domain_filter", {}))
+
+        scored_records = parse_scored_records(payload, cfg.get("scoring", {}))
+        scored_records = [r for r in scored_records if r["domain"] in set(filtered)]
+        ranked_scored = rank_scored_records(scored_records, cfg.get("scoring", {}))
+
+        check_results = []
+        if cfg.get("healthcheck", {}).get("enabled", True):
+            check_results = check_domains(filtered, cfg.get("healthcheck", {}))
+
+        top_n = int(cfg.get("selection", {}).get("top_n", 3))
+        selected, top_candidates = choose_domain(filtered, check_results, top_n, ranked_scored)
+
+        status = "ok"
+        if not selected and last_good:
+            selected = last_good
+            status = "fallback_last_good"
+        if not selected:
+            raise RuntimeError("No valid domain available from API and no fallback in state")
+
+        write_text_file(current_domain_file, selected + "\n")
+
+        current_json = {
+            "domain": selected,
+            "updated_at": utc_now_iso(),
+            "status": status,
+            "source_count": len(parsed),
+            "checked_count": len(check_results),
+            "top_candidates": top_candidates,
+        }
+        write_json_file(current_domain_json, current_json)
+        write_json_file(
+            substore_vars_file,
+            {
+                "AUTO_DOMAIN": selected,
+                "UPDATED_AT": current_json["updated_at"],
+                "STATUS": status,
+            },
+        )
+
+        rendered = render_v2ray(
+            template_file=v2_cfg.get("template_file", ""),
+            output_file=v2_cfg.get("output_file", ""),
+            token=v2_cfg.get("replace_token", "__AUTO_DOMAIN__"),
+            domain=selected,
+        )
+
+        new_state = {
+            "updated_at": current_json["updated_at"],
+            "last_good_domain": selected,
+            "status": status,
+            "source_count": len(parsed),
+            "checked_count": len(check_results),
+            "rendered_v2ray": rendered,
+        }
+        write_json_file(state_file, new_state)
+
+        run_notify(notify_cfg.get("command", ""), selected, status)
+        print(json.dumps(current_json, ensure_ascii=True))
+
+    except Exception as e:
+        now = utc_now_iso()
+        err_state = {
+            "updated_at": now,
+            "status": "error",
+            "error": str(e),
+            "last_good_domain": last_good,
+        }
+        write_json_file(state_file, err_state)
+
+        if last_good:
+            write_text_file(current_domain_file, last_good + "\n")
+            write_json_file(
+                current_domain_json,
+                {
+                    "domain": last_good,
+                    "updated_at": now,
+                    "status": "error_use_last_good",
+                    "error": str(e),
+                },
+            )
+            run_notify(notify_cfg.get("command", ""), last_good, "error_use_last_good")
+            print(json.dumps({"status": "error_use_last_good", "error": str(e)}, ensure_ascii=True))
+            return
+
+        print(json.dumps({"status": "error", "error": str(e)}, ensure_ascii=True), file=sys.stderr)
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()

+ 159 - 0
scripts/install_debian.sh

@@ -0,0 +1,159 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+SERVICE_NAME="vmess-domain-rotator"
+APP_DIR="/opt/vmess-domain-rotator"
+RUN_USER="vmessrotator"
+RUN_GROUP="vmessrotator"
+INTERVAL="12h"
+INSTALL_DEPS="1"
+OVERWRITE_CONFIG="0"
+
+usage() {
+  cat <<'EOF'
+Usage: sudo bash scripts/install_debian.sh [options]
+
+Options:
+  --app-dir <path>         Install directory (default: /opt/vmess-domain-rotator)
+  --user <name>            Service user (default: vmessrotator)
+  --group <name>           Service group (default: vmessrotator)
+  --interval <value>       Timer interval, e.g. 12h/10min (default: 12h)
+  --no-install-deps        Skip apt dependency install
+  --overwrite-config       Overwrite existing config.json in app dir
+  -h, --help               Show help
+
+Examples:
+  sudo bash scripts/install_debian.sh
+  sudo bash scripts/install_debian.sh --user root --group root --interval 10min
+EOF
+}
+
+while [[ $# -gt 0 ]]; do
+  case "$1" in
+    --app-dir)
+      APP_DIR="$2"
+      shift 2
+      ;;
+    --user)
+      RUN_USER="$2"
+      shift 2
+      ;;
+    --group)
+      RUN_GROUP="$2"
+      shift 2
+      ;;
+    --interval)
+      INTERVAL="$2"
+      shift 2
+      ;;
+    --no-install-deps)
+      INSTALL_DEPS="0"
+      shift
+      ;;
+    --overwrite-config)
+      OVERWRITE_CONFIG="1"
+      shift
+      ;;
+    -h|--help)
+      usage
+      exit 0
+      ;;
+    *)
+      echo "Unknown option: $1" >&2
+      usage
+      exit 1
+      ;;
+  esac
+done
+
+if [[ "$(id -u)" -ne 0 ]]; then
+  echo "Please run as root (use sudo)." >&2
+  exit 1
+fi
+
+SOURCE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+
+if [[ "$INSTALL_DEPS" == "1" ]]; then
+  export DEBIAN_FRONTEND=noninteractive
+  apt-get update -y
+  apt-get install -y python3 ca-certificates git
+fi
+
+if [[ "$RUN_USER" != "root" ]]; then
+  if ! getent group "$RUN_GROUP" >/dev/null 2>&1; then
+    groupadd --system "$RUN_GROUP"
+  fi
+  if ! id -u "$RUN_USER" >/dev/null 2>&1; then
+    useradd --system --home-dir "$APP_DIR" --create-home --shell /usr/sbin/nologin --gid "$RUN_GROUP" "$RUN_USER"
+  fi
+fi
+
+mkdir -p "$APP_DIR"
+
+CONFIG_BACKUP=""
+if [[ "$OVERWRITE_CONFIG" != "1" && -f "$APP_DIR/config.json" ]]; then
+  CONFIG_BACKUP="$(mktemp)"
+  cp "$APP_DIR/config.json" "$CONFIG_BACKUP"
+fi
+
+tar -C "$SOURCE_DIR" \
+  --exclude='.git' \
+  --exclude='.DS_Store' \
+  --exclude='__pycache__' \
+  -cf - . | tar -C "$APP_DIR" -xf -
+
+if [[ -n "$CONFIG_BACKUP" ]]; then
+  cp "$CONFIG_BACKUP" "$APP_DIR/config.json"
+  rm -f "$CONFIG_BACKUP"
+fi
+
+mkdir -p "$APP_DIR/runtime"
+chmod +x "$APP_DIR/scripts/run_update_and_commit.sh" "$APP_DIR/scripts/install_debian.sh" "$APP_DIR/scripts/uninstall_debian.sh" || true
+
+if [[ "$RUN_USER" != "root" ]]; then
+  chown -R "$RUN_USER:$RUN_GROUP" "$APP_DIR"
+fi
+
+if ! git -C "$APP_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+  git -C "$APP_DIR" init
+fi
+
+cat >"/etc/systemd/system/${SERVICE_NAME}.service" <<EOF
+[Unit]
+Description=VMess Domain Rotator updater
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=oneshot
+User=${RUN_USER}
+Group=${RUN_GROUP}
+WorkingDirectory=${APP_DIR}
+ExecStart=/bin/bash ${APP_DIR}/scripts/run_update_and_commit.sh ${APP_DIR}/config.json
+EOF
+
+cat >"/etc/systemd/system/${SERVICE_NAME}.timer" <<EOF
+[Unit]
+Description=Run VMess Domain Rotator every ${INTERVAL}
+
+[Timer]
+OnBootSec=2min
+OnUnitActiveSec=${INTERVAL}
+AccuracySec=30s
+Unit=${SERVICE_NAME}.service
+Persistent=true
+
+[Install]
+WantedBy=timers.target
+EOF
+
+systemctl daemon-reload
+systemctl enable --now "${SERVICE_NAME}.timer"
+systemctl start "${SERVICE_NAME}.service"
+
+echo "Installed successfully."
+echo "App dir: ${APP_DIR}"
+echo "Service: ${SERVICE_NAME}.service"
+echo "Timer: ${SERVICE_NAME}.timer"
+echo "Check status: systemctl status ${SERVICE_NAME}.timer"
+echo "View logs: journalctl -u ${SERVICE_NAME}.service -n 50 --no-pager"

+ 56 - 0
scripts/run_update_and_commit.sh

@@ -0,0 +1,56 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+APP_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+CONFIG_PATH="${1:-${APP_DIR}/config.json}"
+DOMAIN_FILE="${APP_DIR}/runtime/current_domain.txt"
+
+before=""
+if [[ -f "$DOMAIN_FILE" ]]; then
+  before="$(tr -d '\r\n' < "$DOMAIN_FILE")"
+fi
+
+/usr/bin/python3 "${APP_DIR}/scripts/domain_updater.py" --config "$CONFIG_PATH"
+
+after=""
+if [[ -f "$DOMAIN_FILE" ]]; then
+  after="$(tr -d '\r\n' < "$DOMAIN_FILE")"
+fi
+
+if [[ -z "$after" ]]; then
+  echo "[vmess-domain-rotator] empty selected domain, skip git commit"
+  exit 0
+fi
+
+if [[ "$before" == "$after" ]]; then
+  echo "[vmess-domain-rotator] domain unchanged (${after}), skip git commit"
+  exit 0
+fi
+
+if ! command -v git >/dev/null 2>&1; then
+  echo "[vmess-domain-rotator] git not found, skip git commit"
+  exit 0
+fi
+
+if ! git -C "$APP_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+  echo "[vmess-domain-rotator] not a git repo, skip git commit"
+  exit 0
+fi
+
+git -C "$APP_DIR" add runtime/current_domain.txt runtime/current_domain.json runtime/state.json runtime/substore_vars.json || true
+
+if git -C "$APP_DIR" diff --cached --quiet; then
+  echo "[vmess-domain-rotator] no staged changes, skip git commit"
+  exit 0
+fi
+
+commit_name="${GIT_COMMIT_NAME:-vmess-domain-rotator}"
+commit_email="${GIT_COMMIT_EMAIL:-vmess-domain-rotator@localhost}"
+ts="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
+
+git -C "$APP_DIR" \
+  -c user.name="$commit_name" \
+  -c user.email="$commit_email" \
+  commit -m "chore: rotate preferred domain to ${after} (${ts})"
+
+echo "[vmess-domain-rotator] committed domain change: ${before} -> ${after}"

+ 93 - 0
scripts/uninstall_debian.sh

@@ -0,0 +1,93 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+SERVICE_NAME="vmess-domain-rotator"
+APP_DIR="/opt/vmess-domain-rotator"
+KEEP_APP_DIR="0"
+REMOVE_USER="0"
+RUN_USER="vmessrotator"
+
+usage() {
+  cat <<'EOF'
+Usage: sudo bash scripts/uninstall_debian.sh [options]
+
+Options:
+  --app-dir <path>         Install directory (default: /opt/vmess-domain-rotator)
+  --service-name <name>    Service base name (default: vmess-domain-rotator)
+  --keep-app-dir           Keep install directory and files
+  --remove-user <name>     Remove this service user after uninstall
+  -h, --help               Show help
+
+Examples:
+  sudo bash scripts/uninstall_debian.sh
+  sudo bash scripts/uninstall_debian.sh --keep-app-dir
+  sudo bash scripts/uninstall_debian.sh --remove-user vmessrotator
+EOF
+}
+
+while [[ $# -gt 0 ]]; do
+  case "$1" in
+    --app-dir)
+      APP_DIR="$2"
+      shift 2
+      ;;
+    --service-name)
+      SERVICE_NAME="$2"
+      shift 2
+      ;;
+    --keep-app-dir)
+      KEEP_APP_DIR="1"
+      shift
+      ;;
+    --remove-user)
+      REMOVE_USER="1"
+      RUN_USER="$2"
+      shift 2
+      ;;
+    -h|--help)
+      usage
+      exit 0
+      ;;
+    *)
+      echo "Unknown option: $1" >&2
+      usage
+      exit 1
+      ;;
+  esac
+done
+
+if [[ "$(id -u)" -ne 0 ]]; then
+  echo "Please run as root (use sudo)." >&2
+  exit 1
+fi
+
+if systemctl list-unit-files | grep -q "^${SERVICE_NAME}\.timer"; then
+  systemctl disable --now "${SERVICE_NAME}.timer" || true
+fi
+
+if systemctl list-unit-files | grep -q "^${SERVICE_NAME}\.service"; then
+  systemctl stop "${SERVICE_NAME}.service" || true
+fi
+
+rm -f "/etc/systemd/system/${SERVICE_NAME}.service"
+rm -f "/etc/systemd/system/${SERVICE_NAME}.timer"
+
+systemctl daemon-reload
+systemctl reset-failed
+
+if [[ "$KEEP_APP_DIR" != "1" ]]; then
+  rm -rf "$APP_DIR"
+fi
+
+if [[ "$REMOVE_USER" == "1" ]]; then
+  if id -u "$RUN_USER" >/dev/null 2>&1; then
+    userdel "$RUN_USER" || true
+  fi
+fi
+
+echo "Uninstall completed."
+if [[ "$KEEP_APP_DIR" == "1" ]]; then
+  echo "Kept app directory: ${APP_DIR}"
+else
+  echo "Removed app directory: ${APP_DIR}"
+fi

+ 131 - 0
scripts/update_vmess_links.py

@@ -0,0 +1,131 @@
+#!/usr/bin/env python3
+import argparse
+import base64
+import json
+import os
+import re
+import sys
+
+
+def b64_decode_flexible(s):
+    t = "".join(s.strip().split())
+    pad = (-len(t)) % 4
+    t = t + ("=" * pad)
+    try:
+        # validate=True makes URL-safe payloads ("-"/"_") raise here instead of
+        # having those characters silently discarded, so the fallback actually runs.
+        return base64.b64decode(t, validate=True).decode("utf-8", errors="replace")
+    except Exception:
+        return base64.urlsafe_b64decode(t).decode("utf-8", errors="replace")
+
+
+def b64_encode_std(s):
+    return base64.b64encode(s.encode("utf-8")).decode("ascii")
+
+
+def normalize_vmess_payload(payload):
+    clean = "".join(payload.strip().split())
+    pad = (-len(clean)) % 4
+    return clean + ("=" * pad)
+
+
+def update_vmess_line(line, domain, name_rx=None):
+    stripped = line.strip()
+    if not stripped.startswith("vmess://"):
+        return line, False, None
+
+    payload = stripped[len("vmess://") :]
+    try:
+        body = b64_decode_flexible(normalize_vmess_payload(payload))
+        obj = json.loads(body)
+    except Exception as e:
+        return line, False, f"decode_error: {e}"
+
+    ps = str(obj.get("ps", ""))
+    if name_rx is not None and not name_rx.search(ps):
+        return line, False, None
+
+    if str(obj.get("add", "")).strip() == domain:
+        return line, False, None
+
+    obj["add"] = domain
+    encoded = b64_encode_std(json.dumps(obj, ensure_ascii=False, separators=(",", ":")))
+    return f"vmess://{encoded}", True, None
+
+
+def read_domain(args):
+    if args.domain:
+        return args.domain.strip()
+    if not args.domain_file:
+        raise ValueError("must provide --domain or --domain-file")
+    with open(args.domain_file, "r", encoding="utf-8") as f:
+        return f.read().strip()
+
+
+def main():
+    ap = argparse.ArgumentParser(description="Update vmess:// links by replacing add field")
+    ap.add_argument("--input", required=True, help="Input subscription file")
+    ap.add_argument("--output", required=True, help="Output subscription file")
+    ap.add_argument("--domain", default="", help="Target domain to set as add")
+    ap.add_argument("--domain-file", default="./runtime/current_domain.txt", help="Domain file path")
+    ap.add_argument("--name-regex", default="", help="Only update nodes whose ps matches regex")
+    ap.add_argument(
+        "--subscription-base64",
+        action="store_true",
+        help="Input/output file is base64-encoded full subscription content",
+    )
+    args = ap.parse_args()
+
+    domain = read_domain(args)
+    if not domain:
+        print(json.dumps({"status": "error", "error": "empty domain"}, ensure_ascii=True), file=sys.stderr)
+        sys.exit(1)
+
+    name_rx = re.compile(args.name_regex) if args.name_regex else None
+
+    with open(args.input, "r", encoding="utf-8") as f:
+        raw_input = f.read()
+
+    content = raw_input
+    if args.subscription_base64:
+        content = b64_decode_flexible(raw_input)
+
+    lines = content.splitlines()
+    out_lines = []
+    total_vmess = 0
+    updated = 0
+    errors = 0
+
+    for line in lines:
+        if line.strip().startswith("vmess://"):
+            total_vmess += 1
+        new_line, changed, err = update_vmess_line(line, domain, name_rx=name_rx)
+        if changed:
+            updated += 1
+        if err:
+            errors += 1
+        out_lines.append(new_line)
+
+    out_text = "\n".join(out_lines)
+    if content.endswith("\n"):
+        out_text += "\n"
+
+    write_text = out_text
+    if args.subscription_base64:
+        write_text = b64_encode_std(out_text)
+
+    os.makedirs(os.path.dirname(args.output) or ".", exist_ok=True)
+    with open(args.output, "w", encoding="utf-8") as f:
+        f.write(write_text)
+
+    result = {
+        "status": "ok",
+        "domain": domain,
+        "total_vmess": total_vmess,
+        "updated": updated,
+        "errors": errors,
+        "output": args.output,
+    }
+    print(json.dumps(result, ensure_ascii=True))
+
+
+if __name__ == "__main__":
+    main()
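The decode/rewrite/re-encode round trip that `update_vmess_line` performs can be exercised in isolation; the node body below is a hypothetical sample, not taken from a real subscription:

```python
import base64
import json

# hypothetical vmess node body, limited to the fields the script touches
node = {"ps": "cf-node", "add": "old.example.com", "port": "443"}
payload = base64.b64encode(json.dumps(node, separators=(",", ":")).encode()).decode()
link = f"vmess://{payload}"

# decode, rewrite "add", re-encode -- the same steps update_vmess_line takes
body = json.loads(base64.b64decode(link[len("vmess://"):]))
body["add"] = "new.example.com"
new_link = "vmess://" + base64.b64encode(
    json.dumps(body, separators=(",", ":")).encode()
).decode()

decoded = json.loads(base64.b64decode(new_link[len("vmess://"):]))
print(decoded["add"])  # new.example.com
```

Note that re-encoding with compact separators means byte-identical output is not guaranteed against arbitrary upstream encoders, which is why the script compares the decoded `add` field rather than the raw links.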

+ 62 - 0
substore/operator_template.js

@@ -0,0 +1,62 @@
+/*
+  Sub-Store operator (production-friendly)
+  - Pull dynamic domain from your current_domain.json
+  - Replace vmess server field for matched nodes
+*/
+
+const DOMAIN_JSON_URL = "https://your-host.example.com/current_domain.json";
+const NODE_NAME_REGEX = /(argo|cf|vm|优选)/i;
+const CACHE_KEY = "vmess-domain-rotator:current";
+const CACHE_TTL_MS = 5 * 60 * 1000;
+
+async function fetchDomainViaSubStore() {
+  const $ = $substore;
+  const { body, statusCode } = await $.http.get({
+    url: DOMAIN_JSON_URL,
+    headers: {
+      Accept: "application/json",
+      "Cache-Control": "no-cache"
+    },
+    timeout: 5000
+  });
+
+  if (statusCode < 200 || statusCode >= 300) {
+    throw new Error(`http status ${statusCode}`);
+  }
+
+  const obj = JSON.parse(body || "{}");
+  const domain = String(obj.domain || "").trim().toLowerCase();
+  if (!domain) {
+    throw new Error("empty domain field");
+  }
+  return domain;
+}
+
+async function operator(proxies = [], targetPlatform, context) {
+  const cache = scriptResourceCache;
+  let domain = cache.get(CACHE_KEY);
+
+  if (!domain) {
+    try {
+      domain = await fetchDomainViaSubStore();
+      cache.set(CACHE_KEY, domain, CACHE_TTL_MS);
+    } catch (e) {
+      console.log(`[vmess-domain-rotator] fetch failed: ${e.message}`);
+      return proxies;
+    }
+  }
+
+  let updated = 0;
+  for (const p of proxies) {
+    if (!p || p.type !== "vmess") continue;
+    if (!NODE_NAME_REGEX.test(p.name || "")) continue;
+
+    if (p.server !== domain) {
+      p.server = domain;
+      updated += 1;
+    }
+  }
+
+  console.log(`[vmess-domain-rotator] domain=${domain}, updated=${updated}, total=${proxies.length}, target=${targetPlatform}`);
+  return proxies;
+}
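The operator only requires that the fetched JSON carries a non-empty `domain` field; a minimal sketch of a payload that satisfies the `JSON.parse`/`obj.domain` path above (the `updated_at` field is an assumption, not required by the operator):

```python
import json

# minimal shape the operator accepts from DOMAIN_JSON_URL
payload = json.dumps({
    "domain": "CDN.example.com",
    "updated_at": "2024-01-01T00:00:00Z",  # extra fields are ignored
})

# same normalization the operator applies: stringify, trim, lowercase
obj = json.loads(payload)
domain = str(obj.get("domain", "")).strip().lower()
assert domain, "empty domain field"
print(domain)  # cdn.example.com
```

Keeping the producer side (`runtime/current_domain.json`) to this shape is what makes the 5-minute `scriptResourceCache` entry safe: a malformed or empty payload is rejected before it can be cached.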

+ 10 - 0
systemd/vmess-domain-rotator.service

@@ -0,0 +1,10 @@
+[Unit]
+Description=VMess Domain Rotator updater
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=oneshot
+User=vmessrotator
+WorkingDirectory=/opt/vmess-domain-rotator
+ExecStart=/bin/bash /opt/vmess-domain-rotator/scripts/run_update_and_commit.sh /opt/vmess-domain-rotator/config.json

+ 12 - 0
systemd/vmess-domain-rotator.timer

@@ -0,0 +1,12 @@
+[Unit]
+Description=Run VMess Domain Rotator every 12 hours
+
+[Timer]
+OnBootSec=2min
+OnUnitActiveSec=12h
+AccuracySec=30s
+Unit=vmess-domain-rotator.service
+Persistent=true
+
+[Install]
+WantedBy=timers.target