27 Commits

Author SHA1 Message Date
Timothy Miller
2446c1d6a0 Bump crate to 2.0.8 and refine updater behavior
Deduplicate up-to-date messages by tracking noop keys and move logging
to the updater so callers only log the first noop. Reuse a single
reqwest Client for IP detection instead of rebuilding it for each call.
Always ping heartbeat even when there are no meaningful changes. Fix
Pushover shoutrrr parsing (token@user order) and update tests.
2026-03-19 23:22:20 -04:00
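The noop-key deduplication described in this commit can be sketched with a seen-set. This is an illustrative shape, not the crate's actual types or names:

```rust
use std::collections::HashSet;

/// Hypothetical sketch: log "already up to date" only the first time a
/// given (domain, record-type) key produces a no-op.
struct NoopTracker {
    seen: HashSet<String>,
}

impl NoopTracker {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true (and records the key) only on the first no-op for `key`.
    fn should_log(&mut self, key: &str) -> bool {
        self.seen.insert(key.to_string())
    }
}

fn main() {
    let mut tracker = NoopTracker::new();
    assert!(tracker.should_log("example.com/A"));    // first no-op: log it
    assert!(!tracker.should_log("example.com/A"));   // repeat: stay quiet
    assert!(tracker.should_log("example.com/AAAA")); // different key: log it
}
```

Keeping the tracker in the updater, as the message describes, is what lets callers stop worrying about duplicate log lines.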
Timothy Miller
9b8aba5e20 Add CachedCloudflareFilter
Introduce CachedCloudflareFilter, which caches Cloudflare IP ranges and
refreshes them every 24 hours. If a refresh fails, the previously
cached ranges are retained and a warning is emitted. Wire the cache
through main and the updater so Cloudflare fetches reuse the cached
result. Update tests and bump the crate version to 2.0.7.
2026-03-19 19:24:44 -04:00
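A minimal sketch of this caching strategy, assuming a TTL-based cache that serves the stale value when a refresh fails (names are illustrative, not the crate's API):

```rust
use std::time::{Duration, Instant};

/// Keep the last good value, refresh after a TTL, and fall back to the
/// stale copy when a refresh attempt fails.
struct TtlCache<T: Clone> {
    ttl: Duration,
    last: Option<(Instant, T)>,
}

impl<T: Clone> TtlCache<T> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, last: None }
    }

    /// `fetch` stands in for the network call that downloads the ranges.
    fn get<E, F: FnOnce() -> Result<T, E>>(&mut self, fetch: F) -> Option<T> {
        if let Some((at, v)) = &self.last {
            if at.elapsed() < self.ttl {
                return Some(v.clone()); // still fresh: reuse cached ranges
            }
        }
        match fetch() {
            Ok(v) => {
                self.last = Some((Instant::now(), v.clone()));
                Some(v)
            }
            // Refresh failed: retain and serve the previous value, if any
            // (the real code also emits a warning here).
            Err(_) => self.last.as_ref().map(|(_, v)| v.clone()),
        }
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(24 * 60 * 60));
    let first = cache.get(|| -> Result<Vec<&str>, ()> { Ok(vec!["198.51.100.0/24"]) });
    assert_eq!(first, Some(vec!["198.51.100.0/24"]));
    // Within the TTL the cached copy is reused, so a failing fetch
    // never replaces a good result with nothing.
    let second = cache.get(|| -> Result<Vec<&str>, ()> { Err(()) });
    assert_eq!(second, Some(vec!["198.51.100.0/24"]));
}
```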
Timothy Miller
83dd454c42 Fetch CF ranges concurrently and prevent writes
Use tokio::join! to fetch the IPv4 and IPv6 Cloudflare ranges in
parallel. Enable REJECT_CLOUDFLARE_IPS by default and, when a range
fetch fails, skip updates (clearing the detected/filtered IP lists and
emitting warnings) so Cloudflare anycast addresses are never written.
Add unit tests covering parsing and boundary checks for the current
Cloudflare ranges, update the README, and bump the crate version to
2.0.6.
2026-03-19 18:56:11 -04:00
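The fail-closed shape of this change can be sketched with std threads standing in for tokio::join! (illustrative names; the real code is async):

```rust
use std::thread;

/// Fetch v4 and v6 ranges in parallel; if either fails, return nothing
/// so the caller skips the update instead of risking a write of a
/// Cloudflare address.
fn fetch_ranges_concurrently(
    fetch_v4: fn() -> Result<Vec<String>, String>,
    fetch_v6: fn() -> Result<Vec<String>, String>,
) -> Option<(Vec<String>, Vec<String>)> {
    // Run both fetches concurrently (the real code awaits tokio::join!).
    let h4 = thread::spawn(fetch_v4);
    let h6 = thread::spawn(fetch_v6);
    match (h4.join().ok()?, h6.join().ok()?) {
        (Ok(v4), Ok(v6)) => Some((v4, v6)),
        // Either fetch failed: the caller clears the detected/filtered
        // IP lists and warns instead of updating records.
        _ => None,
    }
}

fn main() {
    fn ok_v4() -> Result<Vec<String>, String> { Ok(vec!["198.51.100.0/24".to_string()]) }
    fn ok_v6() -> Result<Vec<String>, String> { Ok(vec!["2001:db8::/32".to_string()]) }
    fn bad_v6() -> Result<Vec<String>, String> { Err("fetch failed".to_string()) }

    assert!(fetch_ranges_concurrently(ok_v4, ok_v6).is_some());
    assert!(fetch_ranges_concurrently(ok_v4, bad_v6).is_none());
}
```

Returning `None` on any failure is the fail-closed choice: a missed update cycle is recoverable, a DNS record pointing at Cloudflare's anycast space is not.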
Timothy Miller
f8d5b5cb7e Bump version to 2.0.5 2026-03-19 18:19:41 -04:00
Timothy Miller
bb5cc43651 Add ip4_provider and ip6_provider for legacy mode
Use the shared provider abstraction for IPv4/IPv6 detection in legacy
mode. Allow per-family provider overrides in config.json (ip4_provider
/ ip6_provider) and support disabling a family with "none". Update
config parsing, examples, and the legacy update flow to use the
provider-based detection client.
2026-03-19 18:18:53 -04:00
Timothy Miller
7ff8379cfb Filter Cloudflare IPs in legacy mode
Add support for REJECT_CLOUDFLARE_IPS in legacy config and fetch
Cloudflare IP ranges to drop matching detected addresses. Improve IP
detection in legacy mode by using literal-IP primary trace URLs with
hostname fallbacks, binding dedicated IPv4/IPv6 HTTP clients, and
setting a Host override for literal-IP trace endpoints so TLS SNI
works. Expose build_split_client and update tests accordingly.
2026-03-19 18:18:32 -04:00
Timothy Miller
943e38d70c Update README.md 2026-03-18 20:12:25 -04:00
Timothy Miller
ac982a208e Replace ipnet dependency with inline CidrRange for CIDR matching
Remove the ipnet crate and implement a lightweight CidrRange struct
that handles IPv4/IPv6 CIDR parsing and containment checks using
bitwise masking. Add tests for invalid prefixes and cross-family
non-matching.
2026-03-18 19:53:51 -04:00
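A containment check along these lines can be written over the raw address octets with bitwise masking. This is a sketch of the technique, not the crate's exact implementation:

```rust
use std::net::IpAddr;

/// A parsed CIDR range, e.g. 104.16.0.0/13. The prefix is assumed
/// valid for the family (<= 32 for IPv4, <= 128 for IPv6).
struct CidrRange {
    net: IpAddr,
    prefix: u8,
}

impl CidrRange {
    fn contains(&self, ip: &IpAddr) -> bool {
        match (self.net, ip) {
            (IpAddr::V4(net), IpAddr::V4(ip)) => {
                mask_bits(&net.octets(), &ip.octets(), self.prefix)
            }
            (IpAddr::V6(net), IpAddr::V6(ip)) => {
                mask_bits(&net.octets(), &ip.octets(), self.prefix)
            }
            _ => false, // cross-family never matches
        }
    }
}

/// Compare the first `prefix` bits of two addresses: whole bytes first,
/// then the remaining bits through a left-aligned mask.
fn mask_bits(net: &[u8], ip: &[u8], prefix: u8) -> bool {
    let full = (prefix / 8) as usize;
    if net[..full] != ip[..full] {
        return false;
    }
    let rem = prefix % 8;
    if rem == 0 {
        return true;
    }
    let mask = 0xffu8 << (8 - rem);
    (net[full] & mask) == (ip[full] & mask)
}

fn main() {
    // 104.16.0.0/13 covers 104.16.0.0 - 104.23.255.255.
    let range = CidrRange { net: "104.16.0.0".parse().unwrap(), prefix: 13 };
    assert!(range.contains(&"104.17.1.1".parse().unwrap()));
    assert!(!range.contains(&"104.24.0.1".parse().unwrap()));
    assert!(!range.contains(&"2400:cb00::1".parse().unwrap())); // cross-family
}
```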
Timothy Miller
4b1875b0cd Add REJECT_CLOUDFLARE_IPS flag to filter Cloudflare-owned IPs from DNS updates

IP detection providers can sometimes return a Cloudflare anycast IP
instead of the user's real public IP, causing incorrect DNS updates.
When REJECT_CLOUDFLARE_IPS=true, detected IPs are checked against
Cloudflare's published IP ranges (ips-v4/ips-v6) and rejected if they
match.
Timothy Miller
54ca4a5eae Bump version to 2.0.3 and update GitHub Actions to Node.js 24
Update all Docker GitHub Actions to their latest major versions to
resolve Node.js 20 deprecation warnings ahead of the June 2026 cutoff.
2026-03-18 19:01:50 -04:00
Timothy Miller
94ce10fccc Only set Host header for literal-IP trace URLs
The fallback hostname-based URL and custom URLs resolve correctly
without a Host override, so restrict the header to the cases that
need it (direct IP connections to 1.1.1.1 / [2606:4700:4700::1111]).
2026-03-18 18:19:55 -04:00
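The decision behind this commit, set a Host override only when the URL connects to a literal IP, reduces to a check of whether the URL host parses as an address. The helper below is a hypothetical sketch, not the crate's actual function:

```rust
use std::net::IpAddr;

/// A Host override is only needed for literal-IP connections (TLS SNI
/// and virtual hosting need a hostname); hostname-based URLs resolve
/// correctly on their own.
fn needs_host_override(url_host: &str) -> bool {
    // Bracketed IPv6 literals like "[2606:4700:4700::1111]" appear in URLs.
    let trimmed = url_host.trim_start_matches('[').trim_end_matches(']');
    trimmed.parse::<IpAddr>().is_ok()
}

fn main() {
    assert!(needs_host_override("1.1.1.1"));
    assert!(needs_host_override("[2606:4700:4700::1111]"));
    assert!(!needs_host_override("api.cloudflare.com"));
}
```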
Timothy Miller
7e96816740 Merge pull request #240 from masterwishx/dev-test
Fix proxyIP + Notify
2026-03-18 16:34:28 -04:00
DaRK AnGeL
8a4b57c163 undo FIX: remove duplicates so CloudflareHandle::set_ips sees stable input
Signed-off-by: DaRK AnGeL <28630321+masterwishx@users.noreply.github.com>
2026-03-17 10:10:00 +02:00
DaRK AnGeL
3c7072f4b6 Merge branch 'master' of https://github.com/masterwishx/cloudflare-ddns 2026-03-17 10:05:15 +02:00
DaRK AnGeL
3d796d470c Deduplicate IPs before DNS record update
Remove duplicate IPs before updating DNS records to ensure stable input.

Signed-off-by: DaRK AnGeL <28630321+masterwishx@users.noreply.github.com>
2026-03-17 10:04:20 +02:00
DaRK AnGeL
36bdbea568 Deduplicate IPs before DNS record update
Remove duplicate IPs before updating DNS records to ensure stable input.
2026-03-16 20:28:26 +02:00
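The dedup step these two commits describe (later undone upstream, per the "undo FIX" commit above) can be sketched as an order-preserving filter; names are illustrative:

```rust
use std::collections::HashSet;
use std::net::IpAddr;

/// Drop duplicate detected IPs while preserving first-seen order, so
/// the record-update call receives stable input.
fn dedup_preserving_order(ips: Vec<IpAddr>) -> Vec<IpAddr> {
    let mut seen = HashSet::new();
    // HashSet::insert returns false for values already seen.
    ips.into_iter().filter(|ip| seen.insert(*ip)).collect()
}

fn main() {
    let ips: Vec<IpAddr> = vec![
        "203.0.113.7".parse().unwrap(),
        "203.0.113.9".parse().unwrap(),
        "203.0.113.7".parse().unwrap(),
    ];
    let unique = dedup_preserving_order(ips);
    assert_eq!(unique.len(), 2);
    assert_eq!(unique[0], "203.0.113.7".parse::<IpAddr>().unwrap());
}
```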
DaRK AnGeL
6085ba0cc2 Add Host header to fetch_trace_ip function 2026-03-16 09:02:10 +02:00
Timothy Miller
560a3b7b28 Bump version to 2.0.2 2026-03-13 00:10:31 -04:00
Timothy Miller
1b3928865b Use literal IP trace URLs as primary
Primary trace endpoints now use literal IPs per address family to
guarantee correct address family selection. Fallback uses
api.cloudflare.com to work around WARP/Zero Trust interception. Rename
constants and update tests accordingly.
2026-03-13 00:04:08 -04:00
Timothy Miller
93d351d997 Use Cloudflare trace by default and validate IPs
The default IPv4 provider is now CloudflareTrace. The primary trace
endpoint is api.cloudflare.com (to avoid WARP/Zero Trust interception);
fallbacks are literal IPs. Build per-family HTTP clients by binding to
the unspecified address (0.0.0.0 / [::]) so the trace endpoint observes
the requested address family. Add validate_detected_ip to reject
wrong-family or non-global addresses (loopback, link-local, private,
documentation ranges, etc.). Expand tests, update the legacy updater
URLs, and bump the crate version and the tempfile dev-dependency.
2026-03-11 18:42:46 -04:00
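The validation this commit describes can be sketched as follows. This is illustrative, not the crate's exact code; since `IpAddr::is_global` is unstable, common non-global ranges are checked explicitly:

```rust
use std::net::IpAddr;

/// Reject addresses that cannot be someone's public IP, or that are the
/// wrong family for the record being updated.
fn validate_detected_ip(ip: &IpAddr, want_v6: bool) -> bool {
    if ip.is_ipv6() != want_v6 {
        return false; // wrong address family for this record type
    }
    match ip {
        IpAddr::V4(v4) => {
            !(v4.is_loopback()
                || v4.is_link_local()
                || v4.is_private()
                || v4.is_unspecified()
                || v4.is_documentation()
                || v4.is_broadcast())
        }
        IpAddr::V6(v6) => {
            // Reject loopback, unspecified, fe80::/10 (link-local),
            // and fc00::/7 (unique-local).
            let seg = v6.segments();
            !(v6.is_loopback()
                || v6.is_unspecified()
                || (seg[0] & 0xffc0) == 0xfe80
                || (seg[0] & 0xfe00) == 0xfc00)
        }
    }
}

fn main() {
    let v4: IpAddr = "1.2.3.4".parse().unwrap();
    assert!(validate_detected_ip(&v4, false));
    assert!(!validate_detected_ip(&v4, true)); // wrong family for AAAA
    assert!(!validate_detected_ip(&"192.168.1.10".parse().unwrap(), false));
    assert!(!validate_detected_ip(&"fe80::1".parse().unwrap(), true));
    assert!(validate_detected_ip(&"2a00:1450::1".parse().unwrap(), true));
}
```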
Timothy Miller
e7772c0fe0 Change default IPv4 provider to ipify
Update README and tests to reflect new defaults

Bump actions/checkout to v6, replace linux/arm/v7 with
linux/ppc64le in the Docker build, and normalize tag quoting in the
GitHub workflow
2026-03-10 05:37:09 -04:00
Timothy Miller
33266ced63 Correct Docker image size in README 2026-03-10 05:11:56 -04:00
Timothy Miller
332d730da8 Highlight tiny static Docker image in README 2026-03-10 02:06:52 -04:00
Timothy Miller
a4ac4e1e1c Use scratch release image and optimize build
Switch the Docker release stage to scratch, copy CA certificates from
the builder, and set an explicit ENTRYPOINT for the binary. Tighten the
Cargo release profile (opt-level = "s", lto = true, codegen-units = 1,
strip = true, panic = "abort") and narrow the Tokio features to
rt-multi-thread, macros, time, and signal to shrink the binary. Add
linux/ppc64le support in CI and the build script, update Cargo.lock to
remove unused deps, and update the README to reflect the image size and
supported platforms.
2026-03-10 02:04:30 -04:00
Timothy Miller
6cad2de74c Remove linux/arm/v7 platform from image workflow 2026-03-10 01:49:59 -04:00
Timothy Miller
fd0d2ea647 Add Docker Hub badges to README 2026-03-10 01:28:15 -04:00
Timothy Miller
b1a2fa7af3 Migrate cloudflare-ddns to Rust
Remove the legacy Python implementation (script, requirements.txt, and
startup helper) and add a Rust implementation with Cargo.toml,
Cargo.lock, and a full src/ tree with modules, tests, and
notifier/heartbeat support. Update the Dockerfile to build a Rust
release binary, simplify CI/publish workflows, switch .gitignore to
Rust artifacts, point Dependabot at cargo, and add a .env example,
env-based docker-compose, and updated README and VSCode settings.
2026-03-10 01:21:21 -04:00
25 changed files with 14012 additions and 791 deletions

.github/dependabot.yml

@@ -1,6 +1,6 @@
 version: 2
 updates:
-  - package-ecosystem: 'pip'
+  - package-ecosystem: 'cargo'
     directory: '/'
     schedule:
       interval: 'daily'

@@ -3,6 +3,8 @@ name: Build cloudflare-ddns Docker image (multi-arch)
 on:
   push:
     branches: master
+    tags:
+      - "v*"
   pull_request:
 jobs:
@@ -10,45 +12,48 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v6
+      # https://github.com/docker/setup-qemu-action
       - name: Set up QEMU
-        uses: docker/setup-qemu-action@v1
+        uses: docker/setup-qemu-action@v4
+      # https://github.com/docker/setup-buildx-action
-      - name: Setting up Docker Buildx
-        uses: docker/setup-buildx-action@v1
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v4
       - name: Login to DockerHub
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@v1
+        uses: docker/login-action@v4
         with:
           username: ${{ secrets.DOCKER_USERNAME }}
           password: ${{ secrets.DOCKER_PASSWORD }}
-      - name: Extract branch name
-        shell: bash
-        run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
-        id: extract_branch
+      - name: Extract version from Cargo.toml
+        id: version
+        run: |
+          VERSION=$(grep '^version' Cargo.toml | head -1 | sed 's/.*"\(.*\)".*/\1/')
+          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
       - name: Docker meta
         id: meta
-        uses: docker/metadata-action@v3
+        uses: docker/metadata-action@v6
         with:
           images: timothyjmiller/cloudflare-ddns
+          sep-tags: ','
+          flavor: |
+            latest=false
           tags: |
-            type=raw,enable=${{ steps.extract_branch.outputs.branch == 'master' }},value=latest
-            type=schedule
-            type=ref,event=pr
+            type=raw,enable=${{ github.ref == 'refs/heads/master' }},value=latest
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=raw,enable=${{ github.ref == 'refs/heads/master' }},value=${{ steps.version.outputs.version }}
-      - name: Build and publish
-        uses: docker/build-push-action@v2
+      - name: Build and push
+        uses: docker/build-push-action@v7
         with:
           context: .
           push: ${{ github.event_name != 'pull_request' }}
           tags: ${{ steps.meta.outputs.tags }}
-          platforms: linux/ppc64le,linux/s390x,linux/386,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/amd64
+          platforms: linux/amd64,linux/arm64,linux/ppc64le
           labels: |
             org.opencontainers.image.source=${{ github.event.repository.html_url }}
             org.opencontainers.image.created=${{ steps.meta.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
+            org.opencontainers.image.version=${{ steps.version.outputs.version }}

.gitignore

@@ -1,63 +1,10 @@
 # Private API keys for updating IPv4 & IPv6 addresses on Cloudflare
 config.json
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-*$py.class
-# C extensions
-*.so
-# Distribution / packaging
-.Python
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-pip-wheel-metadata/
-share/python-wheels/
-*.egg-info/
-.installed.cfg
-*.egg
-MANIFEST
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-# Jupyter Notebook
-.ipynb_checkpoints
-# IPython
-profile_default/
-ipython_config.py
-# pyenv
-.python-version
-# Environments
-.env
-.venv
-env/
-venv/
-ENV/
-env.bak/
-venv.bak/
+# Rust build artifacts
+/target/
+debug/
+*.pdb
 # Git History
 **/.history/*

@@ -11,11 +11,7 @@
     ".vscode": true,
     "Dockerfile": true,
     "LICENSE": true,
-    "requirements.txt": true,
-    "venv": true
+    "target": true
   },
-  "explorerExclude.backup": {},
-  "python.linting.pylintEnabled": true,
-  "python.linting.enabled": true,
-  "python.formatting.provider": "autopep8"
+  "explorerExclude.backup": {}
 }

Cargo.lock (generated, new file)

File diff suppressed because it is too large.

Cargo.toml (new file)

@@ -0,0 +1,28 @@
[package]
name = "cloudflare-ddns"
version = "2.0.8"
edition = "2021"
description = "Access your home network remotely via a custom domain name without a static IP"
license = "GPL-3.0"

[dependencies]
reqwest = { version = "0.12", features = ["json", "rustls-tls"], default-features = false }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "time", "signal"] }
regex = "1"
chrono = { version = "0.4", features = ["clock"] }
url = "2"
idna = "1"
if-addrs = "0.13"

[profile.release]
opt-level = "s"
lto = true
codegen-units = 1
strip = true
panic = "abort"

[dev-dependencies]
tempfile = "3.27.0"
wiremock = "0.6"

Dockerfile

@@ -1,18 +1,13 @@
-# ---- Base ----
-FROM python:alpine AS base
-#
-# ---- Dependencies ----
-FROM base AS dependencies
-# install dependencies
-COPY requirements.txt .
-RUN pip install --user -r requirements.txt
-#
+# ---- Build ----
+FROM rust:alpine AS builder
+RUN apk add --no-cache musl-dev
+WORKDIR /build
+COPY Cargo.toml Cargo.lock ./
+COPY src ./src
+RUN cargo build --release
 # ---- Release ----
-FROM base AS release
-# copy installed dependencies and project source file(s)
-WORKDIR /
-COPY --from=dependencies /root/.local /root/.local
-COPY cloudflare-ddns.py .
-CMD ["python", "-u", "/cloudflare-ddns.py", "--repeat"]
+FROM scratch AS release
+COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
+COPY --from=builder /build/target/release/cloudflare-ddns /cloudflare-ddns
+ENTRYPOINT ["/cloudflare-ddns", "--repeat"]

813
README.md
View File

@@ -1,286 +1,240 @@
<p align="center"><a href="https://timknowsbest.com/free-dynamic-dns" target="_blank" rel="noopener noreferrer"><img width="1024" src="feature-graphic.jpg" alt="Cloudflare DDNS"/></a></p> <p align="center"><a href="https://timknowsbest.com/free-dynamic-dns" target="_blank" rel="noopener noreferrer"><img width="1024" src="feature-graphic.jpg" alt="Cloudflare DDNS"/></a></p>
# 🚀 Cloudflare DDNS # 🌍 Cloudflare DDNS
Access your home network remotely via a custom domain name without a static IP! Access your home network remotely via a custom domain name without a static IP!
## ⚡ Efficiency A feature-complete dynamic DNS client for Cloudflare, written in Rust. The **smallest and most memory-efficient** open-source Cloudflare DDNS Docker image available — **~1.9 MB image size** and **~3.5 MB RAM** at runtime, smaller and leaner than Go-based alternatives. Built as a fully static binary from scratch with zero runtime dependencies.
- ❤️ Easy config. List your domains and you're done. Configure everything with environment variables. Supports notifications, heartbeat monitoring, WAF list management, flexible scheduling, and more.
- 🔁 The Python runtime will re-use existing HTTP connections.
- 🗃️ Cloudflare API responses are cached to reduce API usage.
- 🤏 The Docker image is small and efficient.
- 0⃣ Zero dependencies.
- 💪 Supports all platforms.
- 🏠 Enables low cost self hosting to promote a more decentralized internet.
- 🔒 Zero-log IP provider ([cdn-cgi/trace](https://www.cloudflare.com/cdn-cgi/trace))
- 👐 GPL-3.0 License. Open source for open audits.
## 💯 Complete Support of Domain Names, Subdomains, IPv4 & IPv6, and Load Balancing [![Docker Pulls](https://img.shields.io/docker/pulls/timothyjmiller/cloudflare-ddns?style=flat&logo=docker&label=pulls)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns) [![Docker Image Size](https://img.shields.io/docker/image-size/timothyjmiller/cloudflare-ddns/latest?style=flat&logo=docker&label=image%20size)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns)
- 🌐 Supports multiple domains (zones) on the same IP. ## ✨ Features
- 📠 Supports multiple subdomains on the same IP.
- 📡 IPv4 and IPv6 support.
- 🌍 Supports all Cloudflare regions.
- ⚖️ Supports [Cloudflare Load Balancing](https://developers.cloudflare.com/load-balancing/understand-basics/pools/).
- 🇺🇸 Made in the U.S.A.
## 📊 Stats - 🔍 **Multiple IP detection providers** — Cloudflare Trace, Cloudflare DNS-over-HTTPS, ipify, local interface, custom URL, or static IPs
- 📡 **IPv4 and IPv6** — Full dual-stack support with independent provider configuration
- 🌐 **Multiple domains and zones** — Update any number of domains across multiple Cloudflare zones
- 🃏 **Wildcard domains** — Support for `*.example.com` records
- 🌍 **Internationalized domain names** — Full IDN/punycode support (e.g. `münchen.de`)
- 🛡️ **WAF list management** — Automatically update Cloudflare WAF IP lists
- 🔔 **Notifications** — Shoutrrr-compatible notifications (Discord, Slack, Telegram, Gotify, Pushover, generic webhooks)
- 💓 **Heartbeat monitoring** — Healthchecks.io and Uptime Kuma integration
- ⏱️ **Cron scheduling** — Flexible update intervals via cron expressions
- 🧪 **Dry-run mode** — Preview changes without modifying DNS records
- 🧹 **Graceful shutdown** — Signal handling (SIGINT/SIGTERM) with optional DNS record cleanup
- 💬 **Record comments** — Tag managed records with comments for identification
- 🎯 **Managed record regex** — Control which records the tool manages via regex matching
- 🎨 **Pretty output with emoji** — Configurable emoji and verbosity levels
- 🔒 **Zero-log IP detection** — Uses Cloudflare's [cdn-cgi/trace](https://www.cloudflare.com/cdn-cgi/trace) by default
- 🏠 **CGNAT-aware local detection** — Filters out shared address space (100.64.0.0/10) and private ranges
- 🚫 **Cloudflare IP rejection** — Automatically rejects Cloudflare anycast IPs to prevent incorrect DNS updates
- 🤏 **Tiny static binary** — ~1.9 MB Docker image built from scratch, zero runtime dependencies
| Size | Downloads | Discord | ## 🚀 Quick Start
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [![cloudflare-ddns docker image size](https://img.shields.io/docker/image-size/timothyjmiller/cloudflare-ddns?style=flat-square)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns 'cloudflare-ddns docker image size') | [![Total DockerHub pulls](https://img.shields.io/docker/pulls/timothyjmiller/cloudflare-ddns?style=flat-square)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns 'Total DockerHub pulls') | [![Official Discord Server](https://img.shields.io/discord/785778163887112192?style=flat-square)](https://discord.gg/UgGmwMvNxm 'Official Discord Server') |
## 🚦 Getting Started
First copy the example configuration file into the real one.
```bash ```bash
cp config-example.json config.json docker run -d \
--name cloudflare-ddns \
--restart unless-stopped \
--network host \
-e CLOUDFLARE_API_TOKEN=your-api-token \
-e DOMAINS=example.com,www.example.com \
timothyjmiller/cloudflare-ddns:latest
``` ```
Edit `config.json` and replace the values with your own. That's it. The container detects your public IP and updates the DNS records for your domains every 5 minutes.
### 🔑 Authentication methods > ⚠️ `--network host` is required to detect IPv6 addresses. If you only need IPv4, you can omit it and set `IP6_PROVIDER=none`.
You can choose to use either the newer API tokens, or the traditional API keys ## 🔑 Authentication
To generate a new API tokens, go to your [Cloudflare Profile](https://dash.cloudflare.com/profile/api-tokens) and create a token capable of **Edit DNS**. Then replace the value in | Variable | Description |
|----------|-------------|
| `CLOUDFLARE_API_TOKEN` | API token with "Edit DNS" capability |
| `CLOUDFLARE_API_TOKEN_FILE` | Path to a file containing the API token (Docker secrets compatible) |
```json To generate an API token, go to your [Cloudflare Profile](https://dash.cloudflare.com/profile/api-tokens) and create a token capable of **Edit DNS**.
"authentication":
"api_token": "Your cloudflare API token, including the capability of **Edit DNS**"
```
Alternatively, you can use the traditional API keys by setting appropriate values for: ## 🌐 Domains
```json | Variable | Description |
"authentication": |----------|-------------|
"api_key": | `DOMAINS` | Comma-separated list of domains to update for both IPv4 and IPv6 |
"api_key": "Your cloudflare API Key", | `IP4_DOMAINS` | Comma-separated list of IPv4-only domains |
"account_email": "The email address you use to sign in to cloudflare", | `IP6_DOMAINS` | Comma-separated list of IPv6-only domains |
```
### 📍 Enable or disable IPv4 or IPv6 Wildcard domains are supported: `*.example.com`
Some ISP provided modems only allow port forwarding over IPv4 or IPv6. In this case, you would want to disable any interface not accessible via port forward. At least one of `DOMAINS`, `IP4_DOMAINS`, `IP6_DOMAINS`, or `WAF_LISTS` must be set.
```json ## 🔍 IP Detection Providers
"a": true,
"aaaa": true
```
### 🎛️ Other values explained | Variable | Default | Description |
|----------|---------|-------------|
| `IP4_PROVIDER` | `ipify` | IPv4 detection method |
| `IP6_PROVIDER` | `cloudflare.trace` | IPv6 detection method |
```json Available providers:
"zone_id": "The ID of the zone that will get the records. From your dashboard click into the zone. Under the overview tab, scroll down and the zone ID is listed in the right rail",
"subdomains": "Array of subdomains you want to update the A & where applicable, AAAA records. IMPORTANT! Only write subdomain name. Do not include the base domain name. (e.g. foo or an empty string to update the base domain)",
"proxied": "Defaults to false. Make it true if you want CDN/SSL benefits from cloudflare. This usually disables SSH)",
"ttl": "Defaults to 300 seconds. Longer TTLs speed up DNS lookups by increasing the chance of cached results, but a longer TTL also means that updates to your records take longer to go into effect. You can choose a TTL between 30 seconds and 1 day. For more information, see [Cloudflare's TTL documentation](https://developers.cloudflare.com/dns/manage-dns-records/reference/ttl/)",
```
## 📠 Hosting multiple subdomains on the same IP? | Provider | Description |
|----------|-------------|
| `cloudflare.trace` | 🔒 Cloudflare's `/cdn-cgi/trace` endpoint (default, zero-log) |
| `cloudflare.doh` | 🌐 Cloudflare DNS-over-HTTPS (`whoami.cloudflare` TXT query) |
| `ipify` | 🌎 ipify.org API |
| `local` | 🏠 Local IP via system routing table (no network traffic, CGNAT-aware) |
| `local.iface:<name>` | 🔌 IP from a specific network interface (e.g., `local.iface:eth0`) |
| `url:<url>` | 🔗 Custom HTTP(S) endpoint that returns an IP address |
| `literal:<ips>` | 📌 Static IP addresses (comma-separated) |
| `none` | 🚫 Disable this IP type |
This script can be used to update multiple subdomains on the same IP address. ## 🚫 Cloudflare IP Rejection
For example, if you have a domain `example.com` and you want to host additional subdomains at `foo.example.com` and `bar.example.com` on the same IP address, you can use this script to update the DNS records for all subdomains. | Variable | Default | Description |
|----------|---------|-------------|
| `REJECT_CLOUDFLARE_IPS` | `true` | Reject detected IPs that fall within Cloudflare's IP ranges |
### ⚠️ Note Some IP detection providers occasionally return a Cloudflare anycast IP instead of your real public IP. When this happens, your DNS record gets updated to point at Cloudflare infrastructure rather than your actual address.
Please remove the comments after `//` in the below example. They are only there to explain the config. By default, each update cycle fetches [Cloudflare's published IP ranges](https://www.cloudflare.com/ips/) and skips any detected IP that falls within them. A warning is logged for every rejected IP. If the ranges cannot be fetched, the update is skipped entirely to prevent writing a Cloudflare IP.
Do not include the base domain name in your `subdomains` config. Do not use the [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name). To disable this protection, set `REJECT_CLOUDFLARE_IPS=false`.
### 👉 Example 🚀 ## ⏱️ Scheduling
```bash | Variable | Default | Description |
{ |----------|---------|-------------|
"cloudflare": [ | `UPDATE_CRON` | `@every 5m` | Update schedule |
{ | `UPDATE_ON_START` | `true` | Run an update immediately on startup |
"authentication": { | `DELETE_ON_STOP` | `false` | Delete managed DNS records on shutdown |
"api_token": "api_token_here", // Either api_token or api_key
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"zone_id": "your_zone_id_here",
"subdomains": [
{
"name": "", // Root domain (example.com)
"proxied": true
},
{
"name": "foo", // (foo.example.com)
"proxied": true
},
{
"name": "bar", // (bar.example.com)
"proxied": true
}
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false,
"ttl": 300
}
```
## 🌐 Hosting multiple domains (zones) on the same IP? Schedule formats:
You can handle ddns for multiple domains (cloudflare zones) using the same docker container by duplicating your configs inside the `cloudflare: []` key within `config.json` like below: - `@every 5m` — Every 5 minutes
- `@every 1h` — Every hour
- `@every 30s` — Every 30 seconds
- `@once` — Run once and exit
### ⚠️ Note: When `UPDATE_CRON=@once`, `UPDATE_ON_START` must be `true` and `DELETE_ON_STOP` must be `false`.
If you are using API Tokens, make sure the token used supports editing your zone ID. ## 📝 DNS Record Settings
```bash | Variable | Default | Description |
{ |----------|---------|-------------|
"cloudflare": [ | `TTL` | `1` (auto) | DNS record TTL in seconds (1=auto, or 30-86400) |
{ | `PROXIED` | `false` | Expression controlling which domains are proxied through Cloudflare |
"authentication": { | `RECORD_COMMENT` | (empty) | Comment attached to managed DNS records |
"api_token": "api_token_here", | `MANAGED_RECORDS_COMMENT_REGEX` | (empty) | Regex to identify which records are managed (empty = all) |
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"zone_id": "your_first_zone_id_here",
"subdomains": [
{
"name": "",
"proxied": false
},
{
"name": "remove_or_replace_with_your_subdomain",
"proxied": false
}
]
},
{
"authentication": {
"api_token": "api_token_here",
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"zone_id": "your_second_zone_id_here",
"subdomains": [
{
"name": "",
"proxied": false
},
{
"name": "remove_or_replace_with_your_subdomain",
"proxied": false
}
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false
}
```
## ⚖️ Load Balancing The `PROXIED` variable supports boolean expressions:
If you have multiple IP addresses and want to load balance between them, you can use the `loadBalancing` option. This will create a CNAME record for each subdomain that points to the subdomain with the lowest IP address. | Expression | Meaning |
|------------|---------|
| `true` | ☁️ Proxy all domains |
| `false` | 🔓 Don't proxy any domains |
| `is(example.com)` | 🎯 Only proxy `example.com` |
| `sub(cdn.example.com)` | 🌳 Proxy `cdn.example.com` and its subdomains |
| `is(a.com) \|\| is(b.com)` | 🔀 Proxy `a.com` or `b.com` |
| `!is(vpn.example.com)` | 🚫 Proxy everything except `vpn.example.com` |
### 📜 Example config to support load balancing Operators: `is()`, `sub()`, `!`, `&&`, `||`, `()`
```json ## 🛡️ WAF Lists
{
"cloudflare": [
{
"authentication": {
"api_token": "api_token_here",
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"zone_id": "your_zone_id_here",
"subdomains": [
{
"name": "",
"proxied": false
},
{
"name": "remove_or_replace_with_your_subdomain",
"proxied": false
}
]
}
],{
"cloudflare": [
{
"authentication": {
"api_token": "api_token_here",
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"zone_id": "your_zone_id_here",
"subdomains": [
{
"name": "",
"proxied": false
},
{
"name": "remove_or_replace_with_your_subdomain",
"proxied": false
}
]
}
],
"load_balancer": [
{
"authentication": {
"api_token": "api_token_here",
"api_key": {
"api_key": "api_key_here",
"account_email": "your_email_here"
}
},
"pool_id": "your_pool_id_here",
"origin": "your_origin_name_here"
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false,
"ttl": 300
}
```
### Docker environment variable support | Variable | Default | Description |
|----------|---------|-------------|
| `WAF_LISTS` | (empty) | Comma-separated WAF lists in `account-id/list-name` format |
| `WAF_LIST_DESCRIPTION` | (empty) | Description for managed WAF lists |
| `WAF_LIST_ITEM_COMMENT` | (empty) | Comment for WAF list items |
| `MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX` | (empty) | Regex to identify managed WAF list items |
Define environmental variables starts with `CF_DDNS_` and use it in config.json WAF list names must match the pattern `[a-z0-9_]+`.
For ex: ## 🔔 Notifications (Shoutrrr)
```json | Variable | Description |
{ |----------|-------------|
"cloudflare": [ | `SHOUTRRR` | Newline-separated list of notification service URLs |
{
"authentication": {
"api_token": "${CF_DDNS_API_TOKEN}",
```
### 🧹 Optional features Supported services:
`purgeUnknownRecords` removes stale DNS records from Cloudflare. This is useful if you have a dynamic DNS record that you no longer want to use. If you have a dynamic DNS record that you no longer want to use, you can set `purgeUnknownRecords` to `true` and the script will remove the stale DNS record from Cloudflare. | Service | URL format |
|---------|------------|
| 💬 Discord | `discord://token@webhook-id` |
| 📨 Slack | `slack://token-a/token-b/token-c` |
| ✈️ Telegram | `telegram://bot-token@telegram?chats=chat-id` |
| 📡 Gotify | `gotify://host/path?token=app-token` |
| 📲 Pushover | `pushover://user-key@api-token` |
| 🌐 Generic webhook | `generic://host/path` or `generic+https://host/path` |
## 🐳 Deploy with Docker Compose Notifications are sent when DNS records are updated, created, deleted, or when errors occur.
Pre-compiled images are available via [the official docker container on DockerHub](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns). ## 💓 Heartbeat Monitoring
Modify the host file path of config.json inside the volumes section of docker-compose.yml. | Variable | Description |
|----------|-------------|
| `HEALTHCHECKS` | Healthchecks.io ping URL |
| `UPTIMEKUMA` | Uptime Kuma push URL |
Heartbeats are sent after each update cycle. On failure, a fail signal is sent. On shutdown, an exit signal is sent.
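As a concrete illustration of the Healthchecks.io side of that cycle, here is a minimal sketch of the standard hc-ping URL scheme (an illustration, not this crate's code):

```rust
// Healthchecks.io ping URLs: a GET to the base URL records success,
// while appending /fail records a failed update cycle.
fn heartbeat_url(base: &str, success: bool) -> String {
    if success {
        base.to_string()
    } else {
        format!("{base}/fail")
    }
}
```

After each update cycle the updater would issue a plain GET to the resulting URL.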
## ⏳ Timeouts
| Variable | Default | Description |
|----------|---------|-------------|
| `DETECTION_TIMEOUT` | `5s` | Timeout for IP detection requests |
| `UPDATE_TIMEOUT` | `30s` | Timeout for Cloudflare API requests |
## 🖥️ Output
| Variable | Default | Description |
|----------|---------|-------------|
| `EMOJI` | `true` | Use emoji in output messages |
| `QUIET` | `false` | Suppress informational output |
## 🏁 CLI Flags
| Flag | Description |
|------|-------------|
| `--dry-run` | 🧪 Preview changes without modifying DNS records |
| `--repeat` | 🔁 Run continuously (legacy config mode only; env var mode uses `UPDATE_CRON`) |
## 📋 All Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `CLOUDFLARE_API_TOKEN` | — | 🔑 API token |
| `CLOUDFLARE_API_TOKEN_FILE` | — | 📄 Path to API token file |
| `DOMAINS` | — | 🌐 Domains for both IPv4 and IPv6 |
| `IP4_DOMAINS` | — | 4⃣ IPv4-only domains |
| `IP6_DOMAINS` | — | 6⃣ IPv6-only domains |
| `IP4_PROVIDER` | `ipify` | 🔍 IPv4 detection provider |
| `IP6_PROVIDER` | `cloudflare.trace` | 🔍 IPv6 detection provider |
| `UPDATE_CRON` | `@every 5m` | ⏱️ Update schedule |
| `UPDATE_ON_START` | `true` | 🚀 Update on startup |
| `DELETE_ON_STOP` | `false` | 🧹 Delete records on shutdown |
| `TTL` | `1` | ⏳ DNS record TTL |
| `PROXIED` | `false` | ☁️ Proxied expression |
| `RECORD_COMMENT` | — | 💬 DNS record comment |
| `MANAGED_RECORDS_COMMENT_REGEX` | — | 🎯 Managed records regex |
| `WAF_LISTS` | — | 🛡️ WAF lists to manage |
| `WAF_LIST_DESCRIPTION` | — | 📝 WAF list description |
| `WAF_LIST_ITEM_COMMENT` | — | 💬 WAF list item comment |
| `MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX` | — | 🎯 Managed WAF items regex |
| `DETECTION_TIMEOUT` | `5s` | ⏳ IP detection timeout |
| `UPDATE_TIMEOUT` | `30s` | ⏳ API request timeout |
| `REJECT_CLOUDFLARE_IPS` | `true` | 🚫 Reject Cloudflare anycast IPs |
| `EMOJI` | `true` | 🎨 Enable emoji output |
| `QUIET` | `false` | 🤫 Suppress info output |
| `HEALTHCHECKS` | — | 💓 Healthchecks.io URL |
| `UPTIMEKUMA` | — | 💓 Uptime Kuma URL |
| `SHOUTRRR` | — | 🔔 Notification URLs (newline-separated) |
---
## 🚢 Deployment
### 🐳 Docker Compose
```yml
version: '3.9'
services:
  cloudflare-ddns:
    image: timothyjmiller/cloudflare-ddns:latest
    container_name: cloudflare-ddns
    security_opt:
      - no-new-privileges:true
    network_mode: 'host'
    environment:
      - CLOUDFLARE_API_TOKEN=your-api-token
      - DOMAINS=example.com,www.example.com
      - PROXIED=true
      - IP6_PROVIDER=none
      - HEALTHCHECKS=https://hc-ping.com/your-uuid
    restart: unless-stopped
```
> ⚠️ Docker requires `network_mode: host` to access the IPv6 public address.
### ☸️ Kubernetes
The included manifest uses the legacy JSON config mode. Create a secret containing your `config.json` and apply:
```bash
kubectl create secret generic config-cloudflare-ddns --from-file=config.json -n ddns
kubectl apply -f k8s/cloudflare-ddns.yml
```
### 🐧 Linux + Systemd
1. Build and install:
```bash
cargo build --release
sudo cp target/release/cloudflare-ddns /usr/local/bin/
```
2. Copy the systemd units from the `systemd/` directory:
```bash
sudo cp systemd/cloudflare-ddns.service /etc/systemd/system/
sudo cp systemd/cloudflare-ddns.timer /etc/systemd/system/
```
3. Place a `config.json` at `/etc/cloudflare-ddns/config.json` (the systemd service uses legacy config mode).
4. Enable the timer:
```bash
sudo systemctl enable --now cloudflare-ddns.timer
```
The timer runs the service every 15 minutes (configurable in `cloudflare-ddns.timer`).
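For reference, a timer with that behavior looks roughly like this (a sketch; the shipped `systemd/cloudflare-ddns.timer` is authoritative):

```ini
[Unit]
Description=Run cloudflare-ddns on a schedule

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min
Unit=cloudflare-ddns.service

[Install]
WantedBy=timers.target
```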
## 🔨 Building from Source
```bash
cargo build --release
```
The binary is at `target/release/cloudflare-ddns`.
### 🐳 Docker builds
```bash
# Single architecture (linux/amd64)
./scripts/docker-build.sh
# Multi-architecture (linux/amd64, linux/arm64, linux/ppc64le)
./scripts/docker-build-all.sh
```
## 💻 Supported Platforms
- 🐳 [Docker](https://docs.docker.com/get-docker/) (amd64, arm64, ppc64le)
- 🐙 [Docker Compose](https://docs.docker.com/compose/install/)
- ☸️ [Kubernetes](https://kubernetes.io/docs/tasks/tools/)
- 🐧 [Systemd](https://www.freedesktop.org/wiki/Software/systemd/)
- 🍎 macOS, 🪟 Windows, 🐧 Linux — anywhere Rust compiles
---
## 📁 Legacy JSON Config File
For backwards compatibility, cloudflare-ddns still supports configuration via a `config.json` file. This mode is used automatically when no `CLOUDFLARE_API_TOKEN` environment variable is set.
### 🚀 Quick Start
```bash
cp config-example.json config.json
# Edit config.json with your values
cloudflare-ddns
```
### 🔑 Authentication
Use either an API token (recommended) or a legacy API key:
```json
"authentication": {
"api_token": "Your cloudflare API token with Edit DNS capability"
}
```
Or with a legacy API key:
```json
"authentication": {
"api_key": {
"api_key": "Your cloudflare API Key",
"account_email": "The email address you use to sign in to cloudflare"
}
}
```
### 📡 IPv4 and IPv6
Some ISP-provided modems only allow port forwarding over IPv4 or IPv6. Disable whichever address family is not reachable:
```json
"a": true,
"aaaa": true
```
### ⚙️ Config Options
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `cloudflare` | array | required | List of zone configurations |
| `a` | bool | `true` | Enable IPv4 (A record) updates |
| `aaaa` | bool | `true` | Enable IPv6 (AAAA record) updates |
| `purgeUnknownRecords` | bool | `false` | Delete stale/duplicate DNS records |
| `ttl` | int | `300` | DNS record TTL in seconds (30-86400, values < 30 become auto) |
| `ip4_provider` | string | `"cloudflare.trace"` | IPv4 detection provider (same values as `IP4_PROVIDER` env var) |
| `ip6_provider` | string | `"cloudflare.trace"` | IPv6 detection provider (same values as `IP6_PROVIDER` env var) |
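The `ttl` clamping described above can be sketched as follows (an illustration of the documented rule; capping out-of-range high values at 86400 is an assumption):

```rust
/// Normalize a configured TTL: values below 30 collapse to 1
/// (Cloudflare "auto"); explicit values are capped at 86400 seconds.
fn normalize_ttl(ttl: u32) -> u32 {
    if ttl < 30 {
        1 // "auto"
    } else {
        ttl.min(86_400)
    }
}
```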
### 🚫 Cloudflare IP Rejection (Legacy Mode)
Cloudflare IP rejection is enabled by default in legacy mode too. To disable it, set `REJECT_CLOUDFLARE_IPS=false` alongside your `config.json`:
```bash
REJECT_CLOUDFLARE_IPS=false cloudflare-ddns
```
Or in Docker Compose:
```yml
environment:
- REJECT_CLOUDFLARE_IPS=false
volumes:
- ./config.json:/config.json
```
### 🔍 IP Detection (Legacy Mode)
Legacy mode now uses the same shared provider abstraction as environment variable mode. By default it uses the `cloudflare.trace` provider, which builds an IP-family-bound HTTP client (`0.0.0.0` for IPv4, `[::]` for IPv6) to guarantee the correct address family on dual-stack hosts.
You can override the detection method per address family with `ip4_provider` and `ip6_provider` in your `config.json`. Supported values are the same as the `IP4_PROVIDER` / `IP6_PROVIDER` environment variables: `cloudflare.trace`, `cloudflare.doh`, `ipify`, `local`, `local.iface:<name>`, `url:<https://...>`, `none`.
Set a provider to `"none"` to disable detection for that address family (overrides `a`/`aaaa`):
```json
{
"a": true,
"aaaa": true,
"ip4_provider": "cloudflare.trace",
"ip6_provider": "none"
}
```
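The family-binding trick mentioned above can be sketched with plain `std` types (an illustration, not the crate's actual code):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// The detection client binds its local socket to the unspecified address
// of the wanted family, so a dual-stack host cannot answer over the other.
fn family_bind_addr(want_ipv6: bool) -> IpAddr {
    if want_ipv6 {
        IpAddr::V6(Ipv6Addr::UNSPECIFIED) // [::]
    } else {
        IpAddr::V4(Ipv4Addr::UNSPECIFIED) // 0.0.0.0
    }
}
```

An HTTP client built with this local address (for example via reqwest's `local_address` builder option) then performs the trace request over the correct family.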
Each zone entry contains:
| Key | Type | Description |
|-----|------|-------------|
| `authentication` | object | API token or API key credentials |
| `zone_id` | string | Cloudflare zone ID (found in zone dashboard) |
| `subdomains` | array | Subdomain entries to update |
| `proxied` | bool | Default proxied status for subdomains in this zone |
Subdomain entries can be a simple string or a detailed object:
```json
"subdomains": [
"",
"@",
"www",
{ "name": "vpn", "proxied": true }
]
```
Use `""` or `"@"` for the root domain. Do not include the base domain name.
### 🔄 Environment Variable Substitution
In the legacy config file, values can reference environment variables with the `CF_DDNS_` prefix:
```json
{
"cloudflare": [{
"authentication": {
"api_token": "${CF_DDNS_API_TOKEN}"
},
...
}]
}
```
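A minimal sketch of that substitution, assuming simple `${VAR}` string replacement (the real implementation may differ):

```rust
// Replace ${CF_DDNS_*} placeholders in the raw config text with the
// corresponding values; unknown placeholders are left untouched.
fn substitute(raw: &str, vars: &[(String, String)]) -> String {
    let mut out = raw.to_string();
    for (key, value) in vars {
        if key.starts_with("CF_DDNS_") {
            out = out.replace(&format!("${{{key}}}"), value);
        }
    }
    out
}
```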
### 📠 Example: Multiple Subdomains
```json
{
"cloudflare": [
{
"authentication": {
"api_token": "your-api-token"
},
"zone_id": "your_zone_id",
"subdomains": [
{ "name": "", "proxied": true },
{ "name": "www", "proxied": true },
{ "name": "vpn", "proxied": false }
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false,
"ttl": 300
}
```
### 🌐 Example: Multiple Zones
```json
{
"cloudflare": [
{
"authentication": { "api_token": "your-api-token" },
"zone_id": "first_zone_id",
"subdomains": [
{ "name": "", "proxied": false }
]
},
{
"authentication": { "api_token": "your-api-token" },
"zone_id": "second_zone_id",
"subdomains": [
{ "name": "", "proxied": false }
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false
}
```
### 🐳 Docker Compose (legacy config file)
```yml
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
    volumes:
      - /YOUR/PATH/HERE/config.json:/config.json
    restart: unless-stopped
```
### 🏁 Legacy CLI Flags
In legacy config mode, use `--repeat` to run continuously (the TTL value is used as the update interval):
```bash
cloudflare-ddns --repeat
cloudflare-ddns --repeat --dry-run
```
---
## 🔗 Helpful Links
- 🔑 [Cloudflare API token](https://dash.cloudflare.com/profile/api-tokens)
- 🆔 [Cloudflare zone ID](https://support.cloudflare.com/hc/en-us/articles/200167836-Where-do-I-find-my-Cloudflare-IP-address-)
- 📋 [Cloudflare zone DNS record ID](https://support.cloudflare.com/hc/en-us/articles/360019093151-Managing-DNS-records-in-Cloudflare)
## 📜 License
This project is licensed under the GNU General Public License, version 3 (GPLv3).
## 👨‍💻 Author
Timothy Miller
[View my GitHub profile 💡](https://github.com/timothymiller)
[View my personal website 💻](https://itstmillertime.com)

View File

@@ -1,319 +0,0 @@
#!/usr/bin/env python3
# cloudflare-ddns.py
# Summary: Access your home network remotely via a custom domain name without a static IP!
# Description: Access your home network remotely via a custom domain
# Access your home network remotely via a custom domain
# A small, 🕵️ privacy centric, and ⚡
# lightning fast multi-architecture Docker image for self hosting projects.
__version__ = "1.0.2"
from string import Template
import json
import os
import signal
import sys
import threading
import time
import requests
CONFIG_PATH = os.environ.get('CONFIG_PATH', os.getcwd())
# Read in all environment variables that have the correct prefix
ENV_VARS = {key: value for (key, value) in os.environ.items() if key.startswith('CF_DDNS_')}
class GracefulExit:
def __init__(self):
self.kill_now = threading.Event()
signal.signal(signal.SIGINT, self.exit_gracefully)
signal.signal(signal.SIGTERM, self.exit_gracefully)
def exit_gracefully(self, signum, frame):
print("🛑 Stopping main thread...")
self.kill_now.set()
def deleteEntries(type):
# Helper function for deleting A or AAAA records
# in the case of no IPv4 or IPv6 connection, yet
# existing A or AAAA records are found.
for option in config["cloudflare"]:
answer = cf_api(
"zones/" + option['zone_id'] +
"/dns_records?per_page=100&type=" + type,
"GET", option)
if answer is None or answer["result"] is None:
time.sleep(5)
return
for record in answer["result"]:
identifier = str(record["id"])
cf_api(
"zones/" + option['zone_id'] + "/dns_records/" + identifier,
"DELETE", option)
print("🗑️ Deleted stale record " + identifier)
def getIPs():
a = None
aaaa = None
global ipv4_enabled
global ipv6_enabled
global purgeUnknownRecords
if ipv4_enabled:
try:
a = requests.get(
"https://1.1.1.1/cdn-cgi/trace").text.split("\n")
a.pop()
a = dict(s.split("=") for s in a)["ip"]
except Exception:
global shown_ipv4_warning
if not shown_ipv4_warning:
shown_ipv4_warning = True
print("🧩 IPv4 not detected via 1.1.1.1, trying 1.0.0.1")
# Try secondary IP check
try:
a = requests.get(
"https://1.0.0.1/cdn-cgi/trace").text.split("\n")
a.pop()
a = dict(s.split("=") for s in a)["ip"]
except Exception:
global shown_ipv4_warning_secondary
if not shown_ipv4_warning_secondary:
shown_ipv4_warning_secondary = True
print("🧩 IPv4 not detected via 1.0.0.1. Verify your ISP or DNS provider isn't blocking Cloudflare's IPs.")
if purgeUnknownRecords:
deleteEntries("A")
if ipv6_enabled:
try:
aaaa = requests.get(
"https://[2606:4700:4700::1111]/cdn-cgi/trace").text.split("\n")
aaaa.pop()
aaaa = dict(s.split("=") for s in aaaa)["ip"]
except Exception:
global shown_ipv6_warning
if not shown_ipv6_warning:
shown_ipv6_warning = True
print("🧩 IPv6 not detected via 1.1.1.1, trying 1.0.0.1")
try:
aaaa = requests.get(
"https://[2606:4700:4700::1001]/cdn-cgi/trace").text.split("\n")
aaaa.pop()
aaaa = dict(s.split("=") for s in aaaa)["ip"]
except Exception:
global shown_ipv6_warning_secondary
if not shown_ipv6_warning_secondary:
shown_ipv6_warning_secondary = True
print("🧩 IPv6 not detected via 1.0.0.1. Verify your ISP or DNS provider isn't blocking Cloudflare's IPs.")
if purgeUnknownRecords:
deleteEntries("AAAA")
ips = {}
if (a is not None):
ips["ipv4"] = {
"type": "A",
"ip": a
}
if (aaaa is not None):
ips["ipv6"] = {
"type": "AAAA",
"ip": aaaa
}
return ips
def commitRecord(ip):
global ttl
for option in config["cloudflare"]:
subdomains = option["subdomains"]
response = cf_api("zones/" + option['zone_id'], "GET", option)
if response is None or response["result"]["name"] is None:
time.sleep(5)
return
base_domain_name = response["result"]["name"]
for subdomain in subdomains:
try:
name = subdomain["name"].lower().strip()
proxied = subdomain["proxied"]
except:
name = subdomain
proxied = option["proxied"]
fqdn = base_domain_name
# Check if name provided is a reference to the root domain
if name != '' and name != '@':
fqdn = name + "." + base_domain_name
record = {
"type": ip["type"],
"name": fqdn,
"content": ip["ip"],
"proxied": proxied,
"ttl": ttl
}
dns_records = cf_api(
"zones/" + option['zone_id'] +
"/dns_records?per_page=100&type=" + ip["type"],
"GET", option)
identifier = None
modified = False
duplicate_ids = []
if dns_records is not None:
for r in dns_records["result"]:
if (r["name"] == fqdn):
if identifier:
if r["content"] == ip["ip"]:
duplicate_ids.append(identifier)
identifier = r["id"]
else:
duplicate_ids.append(r["id"])
else:
identifier = r["id"]
if r['content'] != record['content'] or r['proxied'] != record['proxied']:
modified = True
if identifier:
if modified:
print("📡 Updating record " + str(record))
response = cf_api(
"zones/" + option['zone_id'] +
"/dns_records/" + identifier,
"PUT", option, {}, record)
else:
print(" Adding new record " + str(record))
response = cf_api(
"zones/" + option['zone_id'] + "/dns_records", "POST", option, {}, record)
if purgeUnknownRecords:
for identifier in duplicate_ids:
identifier = str(identifier)
print("🗑️ Deleting stale record " + identifier)
response = cf_api(
"zones/" + option['zone_id'] +
"/dns_records/" + identifier,
"DELETE", option)
return True
def updateLoadBalancer(ip):
for option in config["load_balancer"]:
pools = cf_api('user/load_balancers/pools', 'GET', option)
if pools:
idxr = dict((p['id'], i) for i, p in enumerate(pools['result']))
idx = idxr.get(option['pool_id'])
origins = pools['result'][idx]['origins']
idxr = dict((o['name'], i) for i, o in enumerate(origins))
idx = idxr.get(option['origin'])
origins[idx]['address'] = ip['ip']
data = {'origins': origins}
response = cf_api(f'user/load_balancers/pools/{option["pool_id"]}', 'PATCH', option, {}, data)
def cf_api(endpoint, method, config, headers={}, data=False):
api_token = config['authentication']['api_token']
if api_token != '' and api_token != 'api_token_here':
headers = {
"Authorization": "Bearer " + api_token, **headers
}
else:
headers = {
"X-Auth-Email": config['authentication']['api_key']['account_email'],
"X-Auth-Key": config['authentication']['api_key']['api_key'],
}
try:
if (data == False):
response = requests.request(
method, "https://api.cloudflare.com/client/v4/" + endpoint, headers=headers)
else:
response = requests.request(
method, "https://api.cloudflare.com/client/v4/" + endpoint,
headers=headers, json=data)
if response.ok:
return response.json()
else:
print("😡 Error sending '" + method +
"' request to '" + response.url + "':")
print(response.text)
return None
except Exception as e:
print("😡 An exception occurred while sending '" +
method + "' request to '" + endpoint + "': " + str(e))
return None
def updateIPs(ips):
for ip in ips.values():
commitRecord(ip)
#updateLoadBalancer(ip)
if __name__ == '__main__':
shown_ipv4_warning = False
shown_ipv4_warning_secondary = False
shown_ipv6_warning = False
shown_ipv6_warning_secondary = False
ipv4_enabled = True
ipv6_enabled = True
purgeUnknownRecords = False
if sys.version_info < (3, 5):
raise Exception("🐍 This script requires Python 3.5+")
config = None
try:
with open(os.path.join(CONFIG_PATH, "config.json")) as config_file:
if len(ENV_VARS) != 0:
config = json.loads(Template(config_file.read()).safe_substitute(ENV_VARS))
else:
config = json.loads(config_file.read())
except:
print("😡 Error reading config.json")
# wait 10 seconds to prevent excessive logging on docker auto restart
time.sleep(10)
if config is not None:
try:
ipv4_enabled = config["a"]
ipv6_enabled = config["aaaa"]
except:
ipv4_enabled = True
ipv6_enabled = True
print("⚙️ Individually disable IPv4 or IPv6 with new config.json options. Read more about it here: https://github.com/timothymiller/cloudflare-ddns/blob/master/README.md")
try:
purgeUnknownRecords = config["purgeUnknownRecords"]
except:
purgeUnknownRecords = False
print("⚙️ No config detected for 'purgeUnknownRecords' - defaulting to False")
try:
ttl = int(config["ttl"])
except:
ttl = 300 # default Cloudflare TTL
print(
"⚙️ No config detected for 'ttl' - defaulting to 300 seconds (5 minutes)")
if ttl < 30:
            ttl = 1  # TTL below 30 is treated as auto
print("⚙️ TTL is too low - defaulting to 1 (auto)")
if (len(sys.argv) > 1):
if (sys.argv[1] == "--repeat"):
if ipv4_enabled and ipv6_enabled:
print(
"🕰️ Updating IPv4 (A) & IPv6 (AAAA) records every " + str(ttl) + " seconds")
elif ipv4_enabled and not ipv6_enabled:
print("🕰️ Updating IPv4 (A) records every " +
str(ttl) + " seconds")
elif ipv6_enabled and not ipv4_enabled:
print("🕰️ Updating IPv6 (AAAA) records every " +
str(ttl) + " seconds")
next_time = time.time()
killer = GracefulExit()
prev_ips = None
while True:
updateIPs(getIPs())
if killer.kill_now.wait(ttl):
break
else:
print("❓ Unrecognized parameter '" +
sys.argv[1] + "'. Stopping now.")
else:
updateIPs(getIPs())

View File

@@ -24,5 +24,7 @@
"a": true, "a": true,
"aaaa": true, "aaaa": true,
"purgeUnknownRecords": false, "purgeUnknownRecords": false,
"ttl": 300 "ttl": 300,
"ip4_provider": "cloudflare.trace",
"ip6_provider": "cloudflare.trace"
} }

View File

@@ -0,0 +1,19 @@
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
environment:
- CLOUDFLARE_API_TOKEN=your-api-token-here
- DOMAINS=example.com,www.example.com
- PROXIED=false
- TTL=1
- UPDATE_CRON=@every 5m
# - IP6_PROVIDER=none
# - HEALTHCHECKS=https://hc-ping.com/your-uuid
# - UPTIMEKUMA=https://kuma.example.com/api/push/your-token
# - SHOUTRRR=discord://token@webhook-id
restart: unless-stopped

98
env-example Normal file
View File

@@ -0,0 +1,98 @@
# Cloudflare DDNS - Environment Variable Configuration
# Copy this file to .env and set your values.
# Setting CLOUDFLARE_API_TOKEN activates environment variable mode.
# === Required ===
# Cloudflare API token with "Edit DNS" capability
CLOUDFLARE_API_TOKEN=your-api-token-here
# Or read from a file:
# CLOUDFLARE_API_TOKEN_FILE=/run/secrets/cloudflare_token
# Domains to update (comma-separated)
# At least one of DOMAINS, IP4_DOMAINS, IP6_DOMAINS, or WAF_LISTS must be set
DOMAINS=example.com,www.example.com
# IP4_DOMAINS=v4only.example.com
# IP6_DOMAINS=v6only.example.com
# === IP Detection ===
# Provider for IPv4 detection (default: cloudflare.trace)
# Options: cloudflare.trace, cloudflare.doh, ipify, local, local.iface:<name>,
# url:<custom-url>, literal:<ip1>,<ip2>, none
# IP4_PROVIDER=cloudflare.trace
# Provider for IPv6 detection (default: cloudflare.trace)
# IP6_PROVIDER=cloudflare.trace
# === Scheduling ===
# Update schedule (default: @every 5m)
# Formats: @every 5m, @every 1h, @every 30s, @once
# UPDATE_CRON=@every 5m
# Run an update immediately on startup (default: true)
# UPDATE_ON_START=true
# Delete managed DNS records on shutdown (default: false)
# DELETE_ON_STOP=false
# === DNS Records ===
# TTL in seconds: 1=auto, or 30-86400 (default: 1)
# TTL=1
# Proxied expression: true, false, is(domain), sub(domain), or boolean combos
# PROXIED=false
# Comment to attach to managed DNS records
# RECORD_COMMENT=Managed by cloudflare-ddns
# Regex to identify which records are managed (empty = all matching records)
# MANAGED_RECORDS_COMMENT_REGEX=cloudflare-ddns
# === WAF Lists ===
# Comma-separated WAF lists in account-id/list-name format
# WAF_LISTS=account123/my_ip_list
# Description for managed WAF lists
# WAF_LIST_DESCRIPTION=Dynamic IP list
# Comment for WAF list items
# WAF_LIST_ITEM_COMMENT=cloudflare-ddns
# Regex to identify managed WAF list items
# MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX=cloudflare-ddns
# === Notifications ===
# Shoutrrr notification URLs (newline-separated)
# SHOUTRRR=discord://token@webhook-id
# SHOUTRRR=slack://token-a/token-b/token-c
# SHOUTRRR=telegram://bot-token@telegram?chats=chat-id
# SHOUTRRR=generic+https://hooks.example.com/webhook
# === Heartbeat Monitoring ===
# Healthchecks.io ping URL
# HEALTHCHECKS=https://hc-ping.com/your-uuid
# Uptime Kuma push URL
# UPTIMEKUMA=https://your-uptime-kuma.com/api/push/your-token
# === Timeouts ===
# IP detection timeout (default: 5s)
# DETECTION_TIMEOUT=5s
# Cloudflare API request timeout (default: 30s)
# UPDATE_TIMEOUT=30s
# === Output ===
# Use emoji in output (default: true)
# EMOJI=true
# Suppress informational output (default: false)
# QUIET=false

View File

@@ -1 +0,0 @@
requests==2.31.0

View File

@@ -1,4 +1,3 @@
#!/bin/bash #!/bin/bash
BASH_DIR=$(dirname $(realpath "${BASH_SOURCE}")) BASH_DIR=$(dirname $(realpath "${BASH_SOURCE}"))
docker buildx build --platform linux/ppc64le,linux/s390x,linux/386,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/amd64 --tag timothyjmiller/cloudflare-ddns:latest ${BASH_DIR}/../ docker buildx build --platform linux/amd64,linux/arm64,linux/ppc64le --tag timothyjmiller/cloudflare-ddns:latest ${BASH_DIR}/../
# TODO: Support linux/riscv64

View File

@@ -1,3 +1,8 @@
#!/bin/bash #!/bin/bash
BASH_DIR=$(dirname $(realpath "${BASH_SOURCE}")) BASH_DIR=$(dirname $(realpath "${BASH_SOURCE}"))
docker buildx build --platform linux/ppc64le,linux/s390x,linux/386,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/amd64 --tag timothyjmiller/cloudflare-ddns:latest --push ${BASH_DIR}/../ VERSION=$(grep '^version' ${BASH_DIR}/../Cargo.toml | head -1 | sed 's/.*"\(.*\)".*/\1/')
docker buildx build \
--platform linux/amd64,linux/arm64,linux/ppc64le \
--tag timothyjmiller/cloudflare-ddns:latest \
--tag timothyjmiller/cloudflare-ddns:${VERSION} \
--push ${BASH_DIR}/../

421
src/cf_ip_filter.rs Normal file
View File

@@ -0,0 +1,421 @@
use crate::pp::{self, PP};
use reqwest::Client;
use std::net::IpAddr;
use std::time::{Duration, Instant};
const CF_IPV4_URL: &str = "https://www.cloudflare.com/ips-v4";
const CF_IPV6_URL: &str = "https://www.cloudflare.com/ips-v6";
/// A CIDR range parsed from "address/prefix" notation.
struct CidrRange {
addr: IpAddr,
prefix_len: u8,
}
impl CidrRange {
fn parse(s: &str) -> Option<Self> {
let (addr_str, prefix_str) = s.split_once('/')?;
let addr: IpAddr = addr_str.parse().ok()?;
let prefix_len: u8 = prefix_str.parse().ok()?;
match addr {
IpAddr::V4(_) if prefix_len > 32 => None,
IpAddr::V6(_) if prefix_len > 128 => None,
_ => Some(Self { addr, prefix_len }),
}
}
fn contains(&self, ip: &IpAddr) -> bool {
match (self.addr, ip) {
(IpAddr::V4(net), IpAddr::V4(ip)) => {
let net_bits = u32::from(net);
let ip_bits = u32::from(*ip);
if self.prefix_len == 0 {
return true;
}
let mask = !0u32 << (32 - self.prefix_len);
(net_bits & mask) == (ip_bits & mask)
}
(IpAddr::V6(net), IpAddr::V6(ip)) => {
let net_bits = u128::from(net);
let ip_bits = u128::from(*ip);
if self.prefix_len == 0 {
return true;
}
let mask = !0u128 << (128 - self.prefix_len);
(net_bits & mask) == (ip_bits & mask)
}
_ => false,
}
}
}
/// Holds parsed Cloudflare CIDR ranges for IP filtering.
pub struct CloudflareIpFilter {
ranges: Vec<CidrRange>,
}
impl CloudflareIpFilter {
/// Fetch Cloudflare IP ranges from their published URLs and parse them.
pub async fn fetch(client: &Client, timeout: Duration, ppfmt: &PP) -> Option<Self> {
let mut ranges = Vec::new();
let (v4_result, v6_result) = tokio::join!(
client.get(CF_IPV4_URL).timeout(timeout).send(),
client.get(CF_IPV6_URL).timeout(timeout).send(),
);
for (url, result) in [(CF_IPV4_URL, v4_result), (CF_IPV6_URL, v6_result)] {
match result {
Ok(resp) if resp.status().is_success() => match resp.text().await {
Ok(body) => {
for line in body.lines() {
let line = line.trim();
if line.is_empty() {
continue;
}
match CidrRange::parse(line) {
Some(range) => ranges.push(range),
None => {
ppfmt.warningf(
pp::EMOJI_WARNING,
&format!(
"Failed to parse Cloudflare IP range '{line}'"
),
);
}
}
}
}
Err(e) => {
ppfmt.warningf(
pp::EMOJI_WARNING,
&format!("Failed to read Cloudflare IP ranges from {url}: {e}"),
);
return None;
}
},
Ok(resp) => {
ppfmt.warningf(
pp::EMOJI_WARNING,
&format!(
"Failed to fetch Cloudflare IP ranges from {url}: HTTP {}",
resp.status()
),
);
return None;
}
Err(e) => {
ppfmt.warningf(
pp::EMOJI_WARNING,
&format!("Failed to fetch Cloudflare IP ranges from {url}: {e}"),
);
return None;
}
}
}
if ranges.is_empty() {
ppfmt.warningf(
pp::EMOJI_WARNING,
"No Cloudflare IP ranges loaded; skipping filter",
);
return None;
}
ppfmt.infof(
pp::EMOJI_DETECT,
&format!("Loaded {} Cloudflare IP ranges for filtering", ranges.len()),
);
Some(Self { ranges })
}
/// Parse ranges from raw text lines (for testing).
#[cfg(test)]
pub fn from_lines(lines: &str) -> Option<Self> {
let ranges: Vec<CidrRange> = lines
.lines()
.filter_map(|l| {
let l = l.trim();
if l.is_empty() {
None
} else {
CidrRange::parse(l)
}
})
.collect();
if ranges.is_empty() {
None
} else {
Some(Self { ranges })
}
}
/// Check if an IP address falls within any Cloudflare range.
pub fn contains(&self, ip: &IpAddr) -> bool {
self.ranges.iter().any(|net| net.contains(ip))
}
}
/// Refresh interval for Cloudflare IP ranges (24 hours).
const CF_RANGE_REFRESH: Duration = Duration::from_secs(24 * 60 * 60);
/// Cached wrapper around [`CloudflareIpFilter`].
///
/// Fetches once, then re-uses the cached ranges for [`CF_RANGE_REFRESH`].
/// If a refresh fails, the previously cached ranges are kept.
pub struct CachedCloudflareFilter {
filter: Option<CloudflareIpFilter>,
fetched_at: Option<Instant>,
}
impl CachedCloudflareFilter {
pub fn new() -> Self {
Self {
filter: None,
fetched_at: None,
}
}
/// Return a reference to the current filter, refreshing if stale or absent.
pub async fn get(
&mut self,
client: &Client,
timeout: Duration,
ppfmt: &PP,
) -> Option<&CloudflareIpFilter> {
let stale = match self.fetched_at {
Some(t) => t.elapsed() >= CF_RANGE_REFRESH,
None => true,
};
if stale {
match CloudflareIpFilter::fetch(client, timeout, ppfmt).await {
Some(new_filter) => {
self.filter = Some(new_filter);
self.fetched_at = Some(Instant::now());
}
None => {
if self.filter.is_some() {
ppfmt.warningf(
pp::EMOJI_WARNING,
"Failed to refresh Cloudflare IP ranges; using cached version",
);
// Keep using cached filter, but don't update fetched_at
// so we retry next cycle.
}
// If no cached filter exists, return None (caller handles fail-safe).
}
}
}
self.filter.as_ref()
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::net::{Ipv4Addr, Ipv6Addr};
const SAMPLE_RANGES: &str = "\
173.245.48.0/20
103.21.244.0/22
103.22.200.0/22
104.16.0.0/13
2400:cb00::/32
2606:4700::/32
";
#[test]
fn test_parse_ranges() {
let filter = CloudflareIpFilter::from_lines(SAMPLE_RANGES).unwrap();
assert_eq!(filter.ranges.len(), 6);
}
#[test]
fn test_contains_cloudflare_ipv4() {
let filter = CloudflareIpFilter::from_lines(SAMPLE_RANGES).unwrap();
// 104.16.0.1 is within 104.16.0.0/13
let ip: IpAddr = IpAddr::V4(Ipv4Addr::new(104, 16, 0, 1));
assert!(filter.contains(&ip));
}
#[test]
fn test_rejects_non_cloudflare_ipv4() {
let filter = CloudflareIpFilter::from_lines(SAMPLE_RANGES).unwrap();
// 203.0.113.42 is a documentation IP, not Cloudflare
let ip: IpAddr = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 42));
assert!(!filter.contains(&ip));
}
#[test]
fn test_contains_cloudflare_ipv6() {
let filter = CloudflareIpFilter::from_lines(SAMPLE_RANGES).unwrap();
// 2606:4700::1 is within 2606:4700::/32
let ip: IpAddr = IpAddr::V6(Ipv6Addr::new(0x2606, 0x4700, 0, 0, 0, 0, 0, 1));
assert!(filter.contains(&ip));
}
#[test]
fn test_rejects_non_cloudflare_ipv6() {
let filter = CloudflareIpFilter::from_lines(SAMPLE_RANGES).unwrap();
// 2001:db8::1 is a documentation address, not Cloudflare
let ip: IpAddr = IpAddr::V6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 1));
assert!(!filter.contains(&ip));
}
#[test]
fn test_empty_input() {
assert!(CloudflareIpFilter::from_lines("").is_none());
assert!(CloudflareIpFilter::from_lines(" \n \n").is_none());
}
#[test]
fn test_edge_of_range() {
let filter = CloudflareIpFilter::from_lines("104.16.0.0/13").unwrap();
// First IP in range
assert!(filter.contains(&IpAddr::V4(Ipv4Addr::new(104, 16, 0, 0))));
// Last IP in range (104.23.255.255)
assert!(filter.contains(&IpAddr::V4(Ipv4Addr::new(104, 23, 255, 255))));
// Just outside range (104.24.0.0)
assert!(!filter.contains(&IpAddr::V4(Ipv4Addr::new(104, 24, 0, 0))));
}
#[test]
fn test_invalid_prefix_rejected() {
assert!(CidrRange::parse("10.0.0.0/33").is_none());
assert!(CidrRange::parse("::1/129").is_none());
assert!(CidrRange::parse("not-an-ip/24").is_none());
}
#[test]
fn test_v4_does_not_match_v6() {
let filter = CloudflareIpFilter::from_lines("104.16.0.0/13").unwrap();
let ip: IpAddr = IpAddr::V6(Ipv6Addr::new(0x2606, 0x4700, 0, 0, 0, 0, 0, 1));
assert!(!filter.contains(&ip));
}
/// All real Cloudflare ranges as of 2026-03. Verifies every range parses
/// and that the first and last IP in each range is matched while the
/// address just past the end is not.
const ALL_CF_RANGES: &str = "\
173.245.48.0/20
103.21.244.0/22
103.22.200.0/22
103.31.4.0/22
141.101.64.0/18
108.162.192.0/18
190.93.240.0/20
188.114.96.0/20
197.234.240.0/22
198.41.128.0/17
162.158.0.0/15
104.16.0.0/13
104.24.0.0/14
172.64.0.0/13
131.0.72.0/22
2400:cb00::/32
2606:4700::/32
2803:f800::/32
2405:b500::/32
2405:8100::/32
2a06:98c0::/29
2c0f:f248::/32
";
#[test]
fn test_all_real_ranges_parse() {
let filter = CloudflareIpFilter::from_lines(ALL_CF_RANGES).unwrap();
assert_eq!(filter.ranges.len(), 22);
}
/// For a /N IPv4 range starting at `base`, return (first, last, just_outside).
fn v4_range_bounds(a: u8, b: u8, c: u8, d: u8, prefix: u8) -> (Ipv4Addr, Ipv4Addr, Ipv4Addr) {
let base = u32::from(Ipv4Addr::new(a, b, c, d));
let size = 1u32 << (32 - prefix);
let first = Ipv4Addr::from(base);
let last = Ipv4Addr::from(base + size - 1);
let outside = Ipv4Addr::from(base + size);
(first, last, outside)
}
#[test]
fn test_all_real_ipv4_ranges_match() {
// Test each range individually so adjacent ranges (e.g. 104.16.0.0/13
// and 104.24.0.0/14) don't cause false failures on boundary checks.
let ranges: &[(u8, u8, u8, u8, u8)] = &[
(173, 245, 48, 0, 20),
(103, 21, 244, 0, 22),
(103, 22, 200, 0, 22),
(103, 31, 4, 0, 22),
(141, 101, 64, 0, 18),
(108, 162, 192, 0, 18),
(190, 93, 240, 0, 20),
(188, 114, 96, 0, 20),
(197, 234, 240, 0, 22),
(198, 41, 128, 0, 17),
(162, 158, 0, 0, 15),
(104, 16, 0, 0, 13),
(104, 24, 0, 0, 14),
(172, 64, 0, 0, 13),
(131, 0, 72, 0, 22),
];
for &(a, b, c, d, prefix) in ranges {
let cidr = format!("{a}.{b}.{c}.{d}/{prefix}");
let filter = CloudflareIpFilter::from_lines(&cidr).unwrap();
let (first, last, outside) = v4_range_bounds(a, b, c, d, prefix);
assert!(
filter.contains(&IpAddr::V4(first)),
"First IP {first} should be in {cidr}"
);
assert!(
filter.contains(&IpAddr::V4(last)),
"Last IP {last} should be in {cidr}"
);
assert!(
!filter.contains(&IpAddr::V4(outside)),
"IP {outside} should NOT be in {cidr}"
);
}
}
#[test]
fn test_all_real_ipv6_ranges_match() {
let filter = CloudflareIpFilter::from_lines(ALL_CF_RANGES).unwrap();
// (base high 16-bit segment, prefix len)
let ranges: &[(u16, u16, u8)] = &[
(0x2400, 0xcb00, 32),
(0x2606, 0x4700, 32),
(0x2803, 0xf800, 32),
(0x2405, 0xb500, 32),
(0x2405, 0x8100, 32),
(0x2a06, 0x98c0, 29),
(0x2c0f, 0xf248, 32),
];
for &(seg0, seg1, prefix) in ranges {
let base = u128::from(Ipv6Addr::new(seg0, seg1, 0, 0, 0, 0, 0, 0));
let size = 1u128 << (128 - prefix);
let first = Ipv6Addr::from(base);
let last = Ipv6Addr::from(base + size - 1);
let outside = Ipv6Addr::from(base + size);
assert!(
filter.contains(&IpAddr::V6(first)),
"First IP {first} should be in {seg0:x}:{seg1:x}::/{prefix}"
);
assert!(
filter.contains(&IpAddr::V6(last)),
"Last IP {last} should be in {seg0:x}:{seg1:x}::/{prefix}"
);
assert!(
!filter.contains(&IpAddr::V6(outside)),
"IP {outside} should NOT be in {seg0:x}:{seg1:x}::/{prefix}"
);
}
}
}
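The boundary cases above all reduce to one mask comparison. `CidrRange` itself is defined earlier in this file (outside this excerpt); `v4_in_cidr` below is a hypothetical stand-in showing only the IPv4 arithmetic the tests exercise:

```rust
use std::net::Ipv4Addr;

/// Illustrative only: true when `ip` falls inside `base`/`prefix`.
/// Keeping the top `prefix` bits of both addresses and comparing them is
/// what the first/last/just-outside checks in the tests verify.
fn v4_in_cidr(ip: Ipv4Addr, base: Ipv4Addr, prefix: u8) -> bool {
    // `u32::MAX << 32` would overflow, so /0 is special-cased to match all.
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    (u32::from(ip) & mask) == (u32::from(base) & mask)
}
```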

src/cloudflare.rs (1771 lines): diff suppressed because it is too large

src/config.rs (2102 lines): diff suppressed because it is too large

src/domain.rs (547 lines)

@@ -0,0 +1,547 @@
use std::fmt;
/// Represents a DNS domain - either a regular FQDN or a wildcard.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum Domain {
FQDN(String),
Wildcard(String),
}
#[allow(dead_code)]
impl Domain {
/// Parse a domain string. Handles:
/// - "@" or "" -> root domain (handled at FQDN construction time)
/// - "*.example.com" -> wildcard
/// - "sub.example.com" -> regular FQDN
pub fn new(input: &str) -> Result<Self, String> {
let trimmed = input.trim().to_lowercase();
if trimmed.starts_with("*.") {
let base = &trimmed[2..];
let ascii = domain_to_ascii(base)?;
Ok(Domain::Wildcard(ascii))
} else {
let ascii = domain_to_ascii(&trimmed)?;
Ok(Domain::FQDN(ascii))
}
}
/// Returns the DNS name in ASCII form suitable for API calls.
pub fn dns_name_ascii(&self) -> String {
match self {
Domain::FQDN(s) => s.clone(),
Domain::Wildcard(s) => format!("*.{s}"),
}
}
/// Returns a human-readable description of the domain.
pub fn describe(&self) -> String {
match self {
Domain::FQDN(s) => describe_domain(s),
Domain::Wildcard(s) => format!("*.{}", describe_domain(s)),
}
}
/// Returns the zones (parent domains) for this domain, from most specific to least.
pub fn zones(&self) -> Vec<String> {
let base = match self {
Domain::FQDN(s) => s.as_str(),
Domain::Wildcard(s) => s.as_str(),
};
let mut zones = Vec::new();
let mut current = base.to_string();
while !current.is_empty() {
zones.push(current.clone());
if let Some(pos) = current.find('.') {
current = current[pos + 1..].to_string();
} else {
break;
}
}
zones
}
}
impl fmt::Display for Domain {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.describe())
}
}
/// Construct an FQDN from a subdomain name and base domain.
pub fn make_fqdn(subdomain: &str, base_domain: &str) -> String {
let name = subdomain.to_lowercase();
let name = name.trim();
if name.is_empty() || name == "@" {
base_domain.to_lowercase()
} else {
// Wildcard subdomains ("*.foo") follow the same path as regular names.
format!("{name}.{}", base_domain.to_lowercase())
}
}
/// Convert a domain to ASCII using IDNA encoding.
#[allow(dead_code)]
fn domain_to_ascii(domain: &str) -> Result<String, String> {
if domain.is_empty() {
return Ok(String::new());
}
// Try IDNA encoding for internationalized domain names
match idna::domain_to_ascii(domain) {
Ok(ascii) => Ok(ascii),
Err(_) => {
// Fallback: if it's already ASCII, just return it
if domain.is_ascii() {
Ok(domain.to_string())
} else {
Err(format!("Invalid domain name: {domain}"))
}
}
}
}
/// Convert ASCII domain back to Unicode for display.
#[allow(dead_code)]
fn describe_domain(ascii: &str) -> String {
// Try to convert punycode back to unicode for display
match idna::domain_to_unicode(ascii) {
(unicode, Ok(())) => unicode,
_ => ascii.to_string(),
}
}
/// Parse a comma-separated list of domain strings.
#[allow(dead_code)]
pub fn parse_domain_list(input: &str) -> Result<Vec<Domain>, String> {
if input.trim().is_empty() {
return Ok(Vec::new());
}
input
.split(',')
.map(|s| Domain::new(s.trim()))
.collect()
}
// --- Domain Expression Evaluator ---
// Supports: true, false, is(domain,...), sub(domain,...), !, &&, ||, ()
/// Parse and evaluate a domain expression to determine if a domain should be proxied.
pub fn parse_proxied_expression(expr: &str) -> Result<Box<dyn Fn(&str) -> bool + Send + Sync>, String> {
let expr = expr.trim();
if expr.is_empty() || expr == "false" {
return Ok(Box::new(|_: &str| false));
}
if expr == "true" {
return Ok(Box::new(|_: &str| true));
}
let tokens = tokenize_expr(expr)?;
let (predicate, rest) = parse_or_expr(&tokens)?;
if !rest.is_empty() {
return Err(format!("Unexpected tokens in proxied expression: {}", rest.join(" ")));
}
Ok(predicate)
}
fn tokenize_expr(input: &str) -> Result<Vec<String>, String> {
let mut tokens = Vec::new();
let mut chars = input.chars().peekable();
while let Some(&c) = chars.peek() {
match c {
' ' | '\t' | '\n' | '\r' => {
chars.next();
}
'(' | ')' | '!' | ',' => {
tokens.push(c.to_string());
chars.next();
}
'&' => {
chars.next();
if chars.peek() == Some(&'&') {
chars.next();
tokens.push("&&".to_string());
} else {
return Err("Expected '&&', got single '&'".to_string());
}
}
'|' => {
chars.next();
if chars.peek() == Some(&'|') {
chars.next();
tokens.push("||".to_string());
} else {
return Err("Expected '||', got single '|'".to_string());
}
}
_ => {
let mut word = String::new();
while let Some(&c) = chars.peek() {
if c.is_alphanumeric() || c == '.' || c == '-' || c == '_' || c == '*' || c == '@' {
word.push(c);
chars.next();
} else {
break;
}
}
if word.is_empty() {
return Err(format!("Unexpected character: {c}"));
}
tokens.push(word);
}
}
}
Ok(tokens)
}
type Predicate = Box<dyn Fn(&str) -> bool + Send + Sync>;
fn parse_or_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
let (mut left, mut rest) = parse_and_expr(tokens)?;
while !rest.is_empty() && rest[0] == "||" {
let (right, new_rest) = parse_and_expr(&rest[1..])?;
let prev = left;
left = Box::new(move |d: &str| prev(d) || right(d));
rest = new_rest;
}
Ok((left, rest))
}
fn parse_and_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
let (mut left, mut rest) = parse_not_expr(tokens)?;
while !rest.is_empty() && rest[0] == "&&" {
let (right, new_rest) = parse_not_expr(&rest[1..])?;
let prev = left;
left = Box::new(move |d: &str| prev(d) && right(d));
rest = new_rest;
}
Ok((left, rest))
}
fn parse_not_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
if tokens.is_empty() {
return Err("Unexpected end of expression".to_string());
}
if tokens[0] == "!" {
let (inner, rest) = parse_not_expr(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| !inner(d));
Ok((pred, rest))
} else {
parse_atom(tokens)
}
}
fn parse_atom(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
if tokens.is_empty() {
return Err("Unexpected end of expression".to_string());
}
match tokens[0].as_str() {
"true" => Ok((Box::new(|_: &str| true), &tokens[1..])),
"false" => Ok((Box::new(|_: &str| false), &tokens[1..])),
"(" => {
let (inner, rest) = parse_or_expr(&tokens[1..])?;
if rest.is_empty() || rest[0] != ")" {
return Err("Missing closing parenthesis".to_string());
}
Ok((inner, &rest[1..]))
}
"is" => {
let (domains, rest) = parse_domain_args(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| {
let d_lower = d.to_lowercase();
domains.iter().any(|dom| d_lower == *dom)
});
Ok((pred, rest))
}
"sub" => {
let (domains, rest) = parse_domain_args(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| {
let d_lower = d.to_lowercase();
domains.iter().any(|dom| {
d_lower == *dom || d_lower.ends_with(&format!(".{dom}"))
})
});
Ok((pred, rest))
}
_ => Err(format!("Unexpected token: {}", tokens[0])),
}
}
fn parse_domain_args(tokens: &[String]) -> Result<(Vec<String>, &[String]), String> {
if tokens.is_empty() || tokens[0] != "(" {
return Err("Expected '(' after function name".to_string());
}
let mut domains = Vec::new();
let mut i = 1;
while i < tokens.len() && tokens[i] != ")" {
if tokens[i] == "," {
i += 1;
continue;
}
domains.push(tokens[i].to_lowercase());
i += 1;
}
if i >= tokens.len() {
return Err("Missing closing ')' in function call".to_string());
}
Ok((domains, &tokens[i + 1..]))
}
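The `parse_or_expr` → `parse_and_expr` → `parse_not_expr` → `parse_atom` chain gives `||` the loosest binding and `!` the tightest. A self-contained toy evaluator (single-character `t`/`f`/`&`/`|` tokens, no whitespace, purely illustrative and not part of this crate) shows the same precedence structure:

```rust
/// Toy recursive-descent evaluator with the same precedence as the
/// predicate parser above: `|` outermost, then `&`, then `!`, then atoms.
fn eval_bool(expr: &str) -> bool {
    type It<'a> = std::iter::Peekable<std::str::Chars<'a>>;
    fn or(it: &mut It<'_>) -> bool {
        let mut v = and(it);
        while it.peek() == Some(&'|') {
            it.next();
            let rhs = and(it);
            v = v || rhs;
        }
        v
    }
    fn and(it: &mut It<'_>) -> bool {
        let mut v = not(it);
        while it.peek() == Some(&'&') {
            it.next();
            let rhs = not(it);
            v = v && rhs;
        }
        v
    }
    fn not(it: &mut It<'_>) -> bool {
        if it.peek() == Some(&'!') {
            it.next();
            !not(it)
        } else {
            atom(it)
        }
    }
    fn atom(it: &mut It<'_>) -> bool {
        match it.next() {
            Some('t') => true,
            Some('(') => {
                let v = or(it);
                it.next(); // consume ')'
                v
            }
            // 'f' (or malformed input, which the real parser rejects with Err)
            _ => false,
        }
    }
    or(&mut expr.chars().peekable())
}
```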
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_make_fqdn_root() {
assert_eq!(make_fqdn("", "example.com"), "example.com");
assert_eq!(make_fqdn("@", "example.com"), "example.com");
}
#[test]
fn test_make_fqdn_subdomain() {
assert_eq!(make_fqdn("www", "example.com"), "www.example.com");
assert_eq!(make_fqdn("VPN", "Example.COM"), "vpn.example.com");
}
#[test]
fn test_domain_wildcard() {
let d = Domain::new("*.example.com").unwrap();
assert_eq!(d.dns_name_ascii(), "*.example.com");
}
#[test]
fn test_parse_domain_list() {
let domains = parse_domain_list("example.com, *.example.com, sub.example.com").unwrap();
assert_eq!(domains.len(), 3);
}
#[test]
fn test_proxied_expr_true() {
let pred = parse_proxied_expression("true").unwrap();
assert!(pred("anything.com"));
}
#[test]
fn test_proxied_expr_false() {
let pred = parse_proxied_expression("false").unwrap();
assert!(!pred("anything.com"));
}
#[test]
fn test_proxied_expr_is() {
let pred = parse_proxied_expression("is(example.com)").unwrap();
assert!(pred("example.com"));
assert!(!pred("sub.example.com"));
}
#[test]
fn test_proxied_expr_sub() {
let pred = parse_proxied_expression("sub(example.com)").unwrap();
assert!(pred("example.com"));
assert!(pred("sub.example.com"));
assert!(!pred("other.com"));
}
#[test]
fn test_proxied_expr_complex() {
let pred = parse_proxied_expression("is(a.com) || is(b.com)").unwrap();
assert!(pred("a.com"));
assert!(pred("b.com"));
assert!(!pred("c.com"));
}
#[test]
fn test_proxied_expr_negation() {
let pred = parse_proxied_expression("!is(internal.com)").unwrap();
assert!(!pred("internal.com"));
assert!(pred("public.com"));
}
// --- Domain::new with regular FQDN ---
#[test]
fn test_domain_new_fqdn() {
let d = Domain::new("example.com").unwrap();
assert_eq!(d, Domain::FQDN("example.com".to_string()));
}
#[test]
fn test_domain_new_fqdn_uppercase() {
let d = Domain::new("EXAMPLE.COM").unwrap();
assert_eq!(d, Domain::FQDN("example.com".to_string()));
}
// --- Domain::dns_name_ascii for FQDN ---
#[test]
fn test_dns_name_ascii_fqdn() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(d.dns_name_ascii(), "example.com");
}
// --- Domain::describe for both variants ---
#[test]
fn test_describe_fqdn() {
let d = Domain::FQDN("example.com".to_string());
// ASCII domain should round-trip through describe unchanged
assert_eq!(d.describe(), "example.com");
}
#[test]
fn test_describe_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
assert_eq!(d.describe(), "*.example.com");
}
// --- Domain::zones ---
#[test]
fn test_zones_fqdn() {
let d = Domain::FQDN("sub.example.com".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["sub.example.com", "example.com", "com"]);
}
#[test]
fn test_zones_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["example.com", "com"]);
}
#[test]
fn test_zones_single_label() {
let d = Domain::FQDN("localhost".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["localhost"]);
}
// --- Domain Display trait ---
#[test]
fn test_display_fqdn() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(format!("{d}"), "example.com");
}
#[test]
fn test_display_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
assert_eq!(format!("{d}"), "*.example.com");
}
// --- domain_to_ascii (tested indirectly via Domain::new) ---
#[test]
fn test_domain_new_empty_string() {
// empty string -> domain_to_ascii returns Ok("") -> Domain::FQDN("")
let d = Domain::new("").unwrap();
assert_eq!(d, Domain::FQDN("".to_string()));
}
#[test]
fn test_domain_new_ascii_domain() {
let d = Domain::new("www.example.org").unwrap();
assert_eq!(d.dns_name_ascii(), "www.example.org");
}
#[test]
fn test_domain_new_internationalized() {
// "münchen.de" should be encoded to punycode
let d = Domain::new("münchen.de").unwrap();
let ascii = d.dns_name_ascii();
// The punycode-encoded form should start with "xn--"
assert!(ascii.contains("xn--"), "expected punycode, got: {ascii}");
}
// --- describe_domain (tested indirectly via Domain::describe) ---
#[test]
fn test_describe_punycode_roundtrip() {
// Build a domain with a known punycode label and confirm describe decodes it
let d = Domain::new("münchen.de").unwrap();
let described = d.describe();
// Should decode to the Unicode form; punycode is accepted as a fallback
assert!(described.contains("münchen") || described.contains("xn--"),
"describe returned: {described}");
}
#[test]
fn test_describe_regular_ascii() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(d.describe(), "example.com");
}
// --- parse_domain_list with empty input ---
#[test]
fn test_parse_domain_list_empty() {
let result = parse_domain_list("").unwrap();
assert!(result.is_empty());
}
#[test]
fn test_parse_domain_list_whitespace_only() {
let result = parse_domain_list(" ").unwrap();
assert!(result.is_empty());
}
// --- Tokenizer edge cases (via parse_proxied_expression) ---
#[test]
fn test_tokenizer_single_ampersand_error() {
let result = parse_proxied_expression("is(a.com) & is(b.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("&&"), "error was: {err}");
}
#[test]
fn test_tokenizer_single_pipe_error() {
let result = parse_proxied_expression("is(a.com) | is(b.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("||"), "error was: {err}");
}
#[test]
fn test_tokenizer_unexpected_character_error() {
let result = parse_proxied_expression("is(a.com) $ is(b.com)");
assert!(result.is_err());
}
// --- Parser edge cases ---
#[test]
fn test_parse_and_expr_double_ampersand() {
let pred = parse_proxied_expression("is(a.com) && is(b.com)").unwrap();
assert!(!pred("a.com"));
assert!(!pred("b.com"));
let pred2 = parse_proxied_expression("sub(example.com) && !is(internal.example.com)").unwrap();
assert!(pred2("www.example.com"));
assert!(!pred2("internal.example.com"));
}
#[test]
fn test_parse_nested_parentheses() {
let pred = parse_proxied_expression("(is(a.com) || is(b.com)) && !is(c.com)").unwrap();
assert!(pred("a.com"));
assert!(pred("b.com"));
assert!(!pred("c.com"));
}
#[test]
fn test_parse_missing_closing_paren() {
let result = parse_proxied_expression("(is(a.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("parenthesis") || err.contains(")"), "error was: {err}");
}
#[test]
fn test_parse_unexpected_tokens_after_expr() {
let result = parse_proxied_expression("true false");
assert!(result.is_err());
}
// --- make_fqdn with wildcard subdomain ---
#[test]
fn test_make_fqdn_wildcard_subdomain() {
// A name starting with "*." is treated as a wildcard subdomain
assert_eq!(make_fqdn("*.sub", "example.com"), "*.sub.example.com");
}
}

src/main.rs (953 lines)

@@ -0,0 +1,953 @@
mod cf_ip_filter;
mod cloudflare;
mod config;
mod domain;
mod notifier;
mod pp;
mod provider;
mod updater;
use crate::cloudflare::{Auth, CloudflareHandle};
use crate::config::{AppConfig, CronSchedule};
use crate::notifier::{CompositeNotifier, Heartbeat, Message};
use crate::pp::PP;
use std::collections::HashSet;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use reqwest::Client;
use tokio::signal;
use tokio::time::{sleep, Duration};
const VERSION: &str = env!("CARGO_PKG_VERSION");
#[tokio::main]
async fn main() {
// Parse CLI args
let args: Vec<String> = std::env::args().collect();
let dry_run = args.iter().any(|a| a == "--dry-run");
let repeat = args.iter().any(|a| a == "--repeat");
// Check for unknown args (legacy behavior)
let known_args = ["--dry-run", "--repeat"];
let unknown: Vec<&str> = args
.iter()
.skip(1)
.filter(|a| !known_args.contains(&a.as_str()))
.map(|a| a.as_str())
.collect();
if !unknown.is_empty() {
eprintln!(
"Unrecognized parameter(s): {}. Stopping now.",
unknown.join(", ")
);
return;
}
// Determine config mode and create initial PP for config loading
let initial_pp = if config::is_env_config_mode() {
// In env mode, read emoji/quiet from env before loading full config
let emoji = std::env::var("EMOJI")
.map(|v| matches!(v.to_lowercase().as_str(), "true" | "1" | "yes"))
.unwrap_or(true);
let quiet = std::env::var("QUIET")
.map(|v| matches!(v.to_lowercase().as_str(), "true" | "1" | "yes"))
.unwrap_or(false);
PP::new(emoji, quiet)
} else {
// Legacy mode: no emoji, not quiet (preserves original output behavior)
PP::new(false, false)
};
println!("cloudflare-ddns v{VERSION}");
// Load config
let app_config = match config::load_config(dry_run, repeat, &initial_pp) {
Ok(c) => c,
Err(e) => {
eprintln!("{e}");
sleep(Duration::from_secs(10)).await;
std::process::exit(1);
}
};
// Create PP with final settings
let ppfmt = PP::new(app_config.emoji, app_config.quiet);
if dry_run {
ppfmt.noticef(
pp::EMOJI_WARNING,
"[DRY RUN] No records will be created, updated, or deleted.",
);
}
// Print config summary (env mode only)
config::print_config_summary(&app_config, &ppfmt);
// Setup notifiers and heartbeats
let notifier = config::setup_notifiers(&ppfmt);
let heartbeat = config::setup_heartbeats(&ppfmt);
// Create Cloudflare handle (for env mode)
let handle = if !app_config.legacy_mode {
CloudflareHandle::new(
app_config.auth.clone(),
app_config.update_timeout,
app_config.managed_comment_regex.clone(),
app_config.managed_waf_comment_regex.clone(),
)
} else {
// Create a dummy handle for legacy mode (won't be used)
CloudflareHandle::new(
Auth::Token(String::new()),
Duration::from_secs(30),
None,
None,
)
};
// Signal handler for graceful shutdown
let running = Arc::new(AtomicBool::new(true));
let r = running.clone();
tokio::spawn(async move {
let _ = signal::ctrl_c().await;
println!("Stopping...");
r.store(false, Ordering::SeqCst);
});
// Start heartbeat
heartbeat.start().await;
let mut cf_cache = cf_ip_filter::CachedCloudflareFilter::new();
let detection_client = Client::builder()
.timeout(app_config.detection_timeout)
.build()
.unwrap_or_default();
if app_config.legacy_mode {
// --- Legacy mode (original cloudflare-ddns behavior) ---
run_legacy_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache, &detection_client).await;
} else {
// --- Env var mode (cf-ddns behavior) ---
run_env_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache, &detection_client).await;
}
// On shutdown: delete records if configured
if app_config.delete_on_stop && !app_config.legacy_mode {
ppfmt.noticef(pp::EMOJI_STOP, "Deleting records on stop...");
updater::final_delete(&app_config, &handle, &notifier, &heartbeat, &ppfmt).await;
}
// Exit heartbeat
heartbeat
.exit(&Message::new_ok("Shutting down"))
.await;
}
async fn run_legacy_mode(
config: &AppConfig,
handle: &CloudflareHandle,
notifier: &CompositeNotifier,
heartbeat: &Heartbeat,
ppfmt: &PP,
running: Arc<AtomicBool>,
cf_cache: &mut cf_ip_filter::CachedCloudflareFilter,
detection_client: &Client,
) {
let legacy = match &config.legacy_config {
Some(l) => l,
None => return,
};
let mut noop_reported = HashSet::new();
if config.repeat {
match (legacy.a, legacy.aaaa) {
(true, true) => println!(
"Updating IPv4 (A) & IPv6 (AAAA) records every {} seconds",
legacy.ttl
),
(true, false) => {
println!("Updating IPv4 (A) records every {} seconds", legacy.ttl)
}
(false, true) => {
println!("Updating IPv6 (AAAA) records every {} seconds", legacy.ttl)
}
(false, false) => println!("Both IPv4 and IPv6 are disabled"),
}
while running.load(Ordering::SeqCst) {
updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
for _ in 0..legacy.ttl {
if !running.load(Ordering::SeqCst) {
break;
}
sleep(Duration::from_secs(1)).await;
}
}
} else {
updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
}
}
async fn run_env_mode(
config: &AppConfig,
handle: &CloudflareHandle,
notifier: &CompositeNotifier,
heartbeat: &Heartbeat,
ppfmt: &PP,
running: Arc<AtomicBool>,
cf_cache: &mut cf_ip_filter::CachedCloudflareFilter,
detection_client: &Client,
) {
let mut noop_reported = HashSet::new();
match &config.update_cron {
CronSchedule::Once => {
if config.update_on_start {
updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
}
}
schedule => {
let interval = schedule.next_duration().unwrap_or(Duration::from_secs(300));
ppfmt.noticef(
pp::EMOJI_LAUNCH,
&format!(
"Started cloudflare-ddns, updating every {}",
describe_duration(interval)
),
);
// Update on start if configured
if config.update_on_start {
updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
}
// Main loop
while running.load(Ordering::SeqCst) {
// Sleep for interval, checking running flag each second
let secs = interval.as_secs();
let next_time = chrono::Local::now() + chrono::Duration::seconds(secs as i64);
ppfmt.infof(
pp::EMOJI_SLEEP,
&format!(
"Next update at {}",
next_time.format("%Y-%m-%d %H:%M:%S %Z")
),
);
for _ in 0..secs {
if !running.load(Ordering::SeqCst) {
return;
}
sleep(Duration::from_secs(1)).await;
}
if !running.load(Ordering::SeqCst) {
return;
}
updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
}
}
}
}
fn describe_duration(d: Duration) -> String {
let secs = d.as_secs();
if secs >= 3600 {
let hours = secs / 3600;
let mins = (secs % 3600) / 60;
if mins > 0 {
format!("{hours}h{mins}m")
} else {
format!("{hours}h")
}
} else if secs >= 60 {
let mins = secs / 60;
let s = secs % 60;
if s > 0 {
format!("{mins}m{s}s")
} else {
format!("{mins}m")
}
} else {
format!("{secs}s")
}
}
// ============================================================
// Tests (backwards compatible with original test suite)
// ============================================================
#[cfg(test)]
mod tests {
use crate::config::{
LegacyAuthentication, LegacyCloudflareEntry, LegacyConfig, LegacySubdomainEntry,
parse_legacy_config,
};
use crate::provider::parse_trace_ip;
use reqwest::Client;
use wiremock::matchers::{method, path, query_param};
use wiremock::{Mock, MockServer, ResponseTemplate};
fn test_config(zone_id: &str) -> LegacyConfig {
LegacyConfig {
cloudflare: vec![LegacyCloudflareEntry {
authentication: LegacyAuthentication {
api_token: "test-token".to_string(),
api_key: None,
},
zone_id: zone_id.to_string(),
subdomains: vec![
LegacySubdomainEntry::Detailed {
name: "".to_string(),
proxied: false,
},
LegacySubdomainEntry::Detailed {
name: "vpn".to_string(),
proxied: true,
},
],
proxied: false,
}],
a: true,
aaaa: false,
purge_unknown_records: false,
ttl: 300,
ip4_provider: None,
ip6_provider: None,
}
}
// Helper to create a legacy client for testing
struct TestDdnsClient {
client: Client,
cf_api_base: String,
ipv4_urls: Vec<String>,
dry_run: bool,
}
impl TestDdnsClient {
fn new(base_url: &str) -> Self {
Self {
client: Client::new(),
cf_api_base: base_url.to_string(),
ipv4_urls: vec![format!("{base_url}/cdn-cgi/trace")],
dry_run: false,
}
}
fn dry_run(mut self) -> Self {
self.dry_run = true;
self
}
async fn cf_api<T: serde::de::DeserializeOwned>(
&self,
endpoint: &str,
method_str: &str,
token: &str,
body: Option<&impl serde::Serialize>,
) -> Option<T> {
let url = format!("{}/{endpoint}", self.cf_api_base);
let mut req = match method_str {
"GET" => self.client.get(&url),
"POST" => self.client.post(&url),
"PUT" => self.client.put(&url),
"DELETE" => self.client.delete(&url),
_ => return None,
};
req = req.header("Authorization", format!("Bearer {token}"));
if let Some(b) = body {
req = req.json(b);
}
match req.send().await {
Ok(resp) if resp.status().is_success() => resp.json::<T>().await.ok(),
Ok(resp) => {
let text = resp.text().await.unwrap_or_default();
eprintln!("Error: {text}");
None
}
Err(e) => {
eprintln!("Exception: {e}");
None
}
}
}
async fn get_ip(&self) -> Option<String> {
for url in &self.ipv4_urls {
if let Ok(resp) = self.client.get(url).send().await {
if let Ok(body) = resp.text().await {
if let Some(ip) = parse_trace_ip(&body) {
return Some(ip);
}
}
}
}
None
}
async fn commit_record(
&self,
ip: &str,
record_type: &str,
config: &[LegacyCloudflareEntry],
ttl: i64,
purge_unknown_records: bool,
noop_reported: &mut std::collections::HashSet<String>,
) {
for entry in config {
#[derive(serde::Deserialize)]
struct Resp<T> {
result: Option<T>,
}
#[derive(serde::Deserialize)]
struct Zone {
name: String,
}
#[derive(serde::Deserialize)]
struct Rec {
id: String,
name: String,
content: String,
proxied: bool,
}
let zone_resp: Option<Resp<Zone>> = self
.cf_api(
&format!("zones/{}", entry.zone_id),
"GET",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
let base_domain = match zone_resp.and_then(|r| r.result) {
Some(z) => z.name,
None => continue,
};
for subdomain in &entry.subdomains {
let (name, proxied) = match subdomain {
LegacySubdomainEntry::Detailed { name, proxied } => {
(name.to_lowercase().trim().to_string(), *proxied)
}
LegacySubdomainEntry::Simple(name) => {
(name.to_lowercase().trim().to_string(), entry.proxied)
}
};
let fqdn = crate::domain::make_fqdn(&name, &base_domain);
#[derive(serde::Serialize)]
struct Payload {
#[serde(rename = "type")]
record_type: String,
name: String,
content: String,
proxied: bool,
ttl: i64,
}
let record = Payload {
record_type: record_type.to_string(),
name: fqdn.clone(),
content: ip.to_string(),
proxied,
ttl,
};
let dns_endpoint = format!(
"zones/{}/dns_records?per_page=100&type={record_type}",
entry.zone_id
);
let dns_records: Option<Resp<Vec<Rec>>> = self
.cf_api(
&dns_endpoint,
"GET",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
let mut identifier: Option<String> = None;
let mut modified = false;
let mut duplicate_ids: Vec<String> = Vec::new();
if let Some(resp) = dns_records {
if let Some(records) = resp.result {
for r in &records {
if r.name == fqdn {
if let Some(ref existing_id) = identifier {
if r.content == ip {
duplicate_ids.push(existing_id.clone());
identifier = Some(r.id.clone());
} else {
duplicate_ids.push(r.id.clone());
}
} else {
identifier = Some(r.id.clone());
if r.content != ip || r.proxied != proxied {
modified = true;
}
}
}
}
}
}
let noop_key = format!("{fqdn}:{record_type}");
if let Some(ref id) = identifier {
if modified {
noop_reported.remove(&noop_key);
if self.dry_run {
println!("[DRY RUN] Would update record {fqdn} -> {ip}");
} else {
println!("Updating record {fqdn} -> {ip}");
let update_endpoint =
format!("zones/{}/dns_records/{id}", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&update_endpoint,
"PUT",
&entry.authentication.api_token,
Some(&record),
)
.await;
}
} else if noop_reported.insert(noop_key) {
if self.dry_run {
println!("[DRY RUN] Record {fqdn} is up to date");
} else {
println!("Record {fqdn} is up to date");
}
}
} else {
noop_reported.remove(&noop_key);
if self.dry_run {
println!("[DRY RUN] Would add new record {fqdn} -> {ip}");
} else {
println!("Adding new record {fqdn} -> {ip}");
let create_endpoint =
format!("zones/{}/dns_records", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&create_endpoint,
"POST",
&entry.authentication.api_token,
Some(&record),
)
.await;
}
}
if purge_unknown_records {
for dup_id in &duplicate_ids {
if self.dry_run {
println!("[DRY RUN] Would delete stale record {dup_id}");
} else {
println!("Deleting stale record {dup_id}");
let del_endpoint =
format!("zones/{}/dns_records/{dup_id}", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&del_endpoint,
"DELETE",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
}
}
}
}
}
}
}
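For reference, the `/cdn-cgi/trace` body is a newline-separated `key=value` list, and `parse_trace_ip` (implemented in `src/provider.rs`, not shown in this excerpt) extracts the `ip=` line. `trace_ip` below is a hypothetical stand-in for that scan, not the crate's function:

```rust
/// Illustrative sketch of the key=value scan `parse_trace_ip` performs.
fn trace_ip(body: &str) -> Option<String> {
    body.lines()
        .find_map(|line| line.strip_prefix("ip=").map(str::to_string))
}
```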
    #[test]
    fn test_parse_trace_ip() {
        let body = "fl=1f1\nh=1.1.1.1\nip=203.0.113.42\nts=1234567890\nvisit_scheme=https\n";
        assert_eq!(parse_trace_ip(body), Some("203.0.113.42".to_string()));
    }

    #[test]
    fn test_parse_trace_ip_missing() {
        let body = "fl=1f1\nh=1.1.1.1\nts=1234567890\n";
        assert_eq!(parse_trace_ip(body), None);
    }

    #[test]
    fn test_parse_config_minimal() {
        let json = r#"{
            "cloudflare": [{
                "authentication": { "api_token": "tok123" },
                "zone_id": "zone1",
                "subdomains": ["@"]
            }]
        }"#;
        let config = parse_legacy_config(json).unwrap();
        assert!(config.a);
        assert!(config.aaaa);
        assert!(!config.purge_unknown_records);
        assert_eq!(config.ttl, 300);
    }

    #[test]
    fn test_parse_config_low_ttl() {
        let json = r#"{
            "cloudflare": [{
                "authentication": { "api_token": "tok123" },
                "zone_id": "zone1",
                "subdomains": ["@"]
            }],
            "ttl": 10
        }"#;
        let config = parse_legacy_config(json).unwrap();
        assert_eq!(config.ttl, 1);
    }
    #[tokio::test]
    async fn test_ip_detection() {
        let mock_server = MockServer::start().await;
        Mock::given(method("GET"))
            .and(path("/cdn-cgi/trace"))
            .respond_with(
                ResponseTemplate::new(200)
                    .set_body_string("fl=1f1\nh=mock\nip=198.51.100.7\nts=0\n"),
            )
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let ip = ddns.get_ip().await;
        assert_eq!(ip, Some("198.51.100.7".to_string()));
    }

    #[tokio::test]
    async fn test_creates_new_record() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-abc-123";
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": []
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("POST"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "id": "new-record-1" }
            })))
            .expect(2)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let config = test_config(zone_id);
        ddns.commit_record(
            "198.51.100.7",
            "A",
            &config.cloudflare,
            300,
            false,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }

    #[tokio::test]
    async fn test_updates_existing_record() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-update-1";
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": [
                    { "id": "rec-1", "name": "example.com", "content": "10.0.0.1", "proxied": false },
                    { "id": "rec-2", "name": "vpn.example.com", "content": "10.0.0.1", "proxied": true }
                ]
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("PUT"))
            .and(path(format!("/zones/{zone_id}/dns_records/rec-1")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "id": "rec-1" }
            })))
            .expect(1)
            .mount(&mock_server)
            .await;
        Mock::given(method("PUT"))
            .and(path(format!("/zones/{zone_id}/dns_records/rec-2")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "id": "rec-2" }
            })))
            .expect(1)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let config = test_config(zone_id);
        ddns.commit_record(
            "198.51.100.7",
            "A",
            &config.cloudflare,
            300,
            false,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }

    #[tokio::test]
    async fn test_skips_up_to_date_record() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-noop";
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": [
                    { "id": "rec-1", "name": "example.com", "content": "198.51.100.7", "proxied": false },
                    { "id": "rec-2", "name": "vpn.example.com", "content": "198.51.100.7", "proxied": true }
                ]
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("PUT"))
            .respond_with(ResponseTemplate::new(500))
            .expect(0)
            .mount(&mock_server)
            .await;
        Mock::given(method("POST"))
            .respond_with(ResponseTemplate::new(500))
            .expect(0)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let config = test_config(zone_id);
        ddns.commit_record(
            "198.51.100.7",
            "A",
            &config.cloudflare,
            300,
            false,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }
    #[tokio::test]
    async fn test_dry_run_does_not_mutate() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-dry";
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": []
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("POST"))
            .respond_with(ResponseTemplate::new(500))
            .expect(0)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri()).dry_run();
        let config = test_config(zone_id);
        ddns.commit_record(
            "198.51.100.7",
            "A",
            &config.cloudflare,
            300,
            false,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }

    #[tokio::test]
    async fn test_purge_duplicate_records() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-purge";
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": [
                    { "id": "rec-keep", "name": "example.com", "content": "198.51.100.7", "proxied": false },
                    { "id": "rec-dup", "name": "example.com", "content": "198.51.100.7", "proxied": false }
                ]
            })))
            .mount(&mock_server)
            .await;
        // With both records matching, the later record becomes the keeper and the
        // earlier one is queued for purge, so the DELETE targets rec-keep.
        Mock::given(method("DELETE"))
            .and(path(format!("/zones/{zone_id}/dns_records/rec-keep")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({})))
            .expect(1)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let config = LegacyConfig {
            cloudflare: vec![LegacyCloudflareEntry {
                authentication: LegacyAuthentication {
                    api_token: "test-token".to_string(),
                    api_key: None,
                },
                zone_id: zone_id.to_string(),
                subdomains: vec![LegacySubdomainEntry::Detailed {
                    name: "".to_string(),
                    proxied: false,
                }],
                proxied: false,
            }],
            a: true,
            aaaa: false,
            purge_unknown_records: true,
            ttl: 300,
            ip4_provider: None,
            ip6_provider: None,
        };
        ddns.commit_record(
            "198.51.100.7",
            "A",
            &config.cloudflare,
            300,
            true,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }
    // --- describe_duration tests ---
    #[test]
    fn test_describe_duration_seconds_only() {
        use std::time::Duration;
        assert_eq!(super::describe_duration(Duration::from_secs(45)), "45s");
    }

    #[test]
    fn test_describe_duration_exact_minutes() {
        use std::time::Duration;
        assert_eq!(super::describe_duration(Duration::from_secs(300)), "5m");
    }

    #[test]
    fn test_describe_duration_minutes_and_seconds() {
        use std::time::Duration;
        assert_eq!(super::describe_duration(Duration::from_secs(330)), "5m30s");
    }

    #[test]
    fn test_describe_duration_exact_hours() {
        use std::time::Duration;
        assert_eq!(super::describe_duration(Duration::from_secs(7200)), "2h");
    }

    #[test]
    fn test_describe_duration_hours_and_minutes() {
        use std::time::Duration;
        assert_eq!(super::describe_duration(Duration::from_secs(5400)), "1h30m");
    }
    #[tokio::test]
    async fn test_end_to_end_detect_and_update() {
        let mock_server = MockServer::start().await;
        let zone_id = "zone-e2e";
        Mock::given(method("GET"))
            .and(path("/cdn-cgi/trace"))
            .respond_with(
                ResponseTemplate::new(200)
                    .set_body_string("fl=1f1\nh=mock\nip=203.0.113.99\nts=0\n"),
            )
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "name": "example.com" }
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("GET"))
            .and(path(format!("/zones/{zone_id}/dns_records")))
            .and(query_param("type", "A"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": [
                    { "id": "rec-root", "name": "example.com", "content": "10.0.0.1", "proxied": false }
                ]
            })))
            .mount(&mock_server)
            .await;
        Mock::given(method("PUT"))
            .and(path(format!("/zones/{zone_id}/dns_records/rec-root")))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "result": { "id": "rec-root" }
            })))
            .expect(1)
            .mount(&mock_server)
            .await;
        let ddns = TestDdnsClient::new(&mock_server.uri());
        let ip = ddns.get_ip().await;
        assert_eq!(ip, Some("203.0.113.99".to_string()));
        let config = LegacyConfig {
            cloudflare: vec![LegacyCloudflareEntry {
                authentication: LegacyAuthentication {
                    api_token: "test-token".to_string(),
                    api_key: None,
                },
                zone_id: zone_id.to_string(),
                subdomains: vec![LegacySubdomainEntry::Detailed {
                    name: "".to_string(),
                    proxied: false,
                }],
                proxied: false,
            }],
            a: true,
            aaaa: false,
            purge_unknown_records: false,
            ttl: 300,
            ip4_provider: None,
            ip6_provider: None,
        };
        ddns.commit_record(
            "203.0.113.99",
            "A",
            &config.cloudflare,
            300,
            false,
            &mut std::collections::HashSet::new(),
        )
        .await;
    }
}
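The two `parse_trace_ip` tests near the top of this module pin down the parser's contract: the `/cdn-cgi/trace` body is newline-separated `key=value` pairs, and the value of the `ip` key is returned. A minimal sketch consistent with those tests (the crate's real implementation, in the suppressed `src/updater.rs`, may differ):

```rust
// Extract the value of the "ip" line from a Cloudflare /cdn-cgi/trace body.
// Returns None when no line starts with "ip=".
fn parse_trace_ip(body: &str) -> Option<String> {
    body.lines()
        .find_map(|line| line.strip_prefix("ip=").map(str::to_string))
}

fn main() {
    let body = "fl=1f1\nh=1.1.1.1\nip=203.0.113.42\nts=1234567890\n";
    assert_eq!(parse_trace_ip(body), Some("203.0.113.42".to_string()));
    assert_eq!(parse_trace_ip("fl=1f1\nts=1234567890\n"), None);
}
```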

src/notifier.rs: new file, 1437 lines (diff suppressed because it is too large)

src/pp.rs: new file, 435 lines
@@ -0,0 +1,435 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

// Verbosity levels
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Verbosity {
    Quiet,
    Notice,
    Info,
    Verbose,
}

// Emoji constants
#[allow(dead_code)]
pub const EMOJI_GLOBE: &str = "\u{1F30D}";
pub const EMOJI_WARNING: &str = "\u{26A0}\u{FE0F}";
pub const EMOJI_ERROR: &str = "\u{274C}";
#[allow(dead_code)]
pub const EMOJI_SUCCESS: &str = "\u{2705}";
pub const EMOJI_LAUNCH: &str = "\u{1F680}";
pub const EMOJI_STOP: &str = "\u{1F6D1}";
pub const EMOJI_SLEEP: &str = "\u{1F634}";
pub const EMOJI_DETECT: &str = "\u{1F50D}";
pub const EMOJI_UPDATE: &str = "\u{2B06}\u{FE0F}";
pub const EMOJI_CREATE: &str = "\u{2795}";
pub const EMOJI_DELETE: &str = "\u{2796}";
pub const EMOJI_SKIP: &str = "\u{23ED}\u{FE0F}";
pub const EMOJI_NOTIFY: &str = "\u{1F514}";
pub const EMOJI_HEARTBEAT: &str = "\u{1F493}";
pub const EMOJI_CONFIG: &str = "\u{2699}\u{FE0F}";
#[allow(dead_code)]
pub const EMOJI_HINT: &str = "\u{1F4A1}";

const INDENT_PREFIX: &str = " ";

pub struct PP {
    pub verbosity: Verbosity,
    pub emoji: bool,
    indent: usize,
    seen: Arc<Mutex<HashSet<String>>>,
}

impl PP {
    pub fn new(emoji: bool, quiet: bool) -> Self {
        Self {
            verbosity: if quiet { Verbosity::Quiet } else { Verbosity::Verbose },
            emoji,
            indent: 0,
            seen: Arc::new(Mutex::new(HashSet::new())),
        }
    }

    pub fn default_pp() -> Self {
        Self::new(false, false)
    }

    pub fn is_showing(&self, level: Verbosity) -> bool {
        self.verbosity >= level
    }

    pub fn indent(&self) -> PP {
        PP {
            verbosity: self.verbosity,
            emoji: self.emoji,
            indent: self.indent + 1,
            seen: Arc::clone(&self.seen),
        }
    }

    fn output(&self, emoji: &str, msg: &str) {
        let prefix = INDENT_PREFIX.repeat(self.indent);
        if self.emoji && !emoji.is_empty() {
            println!("{prefix}{emoji} {msg}");
        } else {
            println!("{prefix}{msg}");
        }
    }

    fn output_err(&self, emoji: &str, msg: &str) {
        let prefix = INDENT_PREFIX.repeat(self.indent);
        if self.emoji && !emoji.is_empty() {
            eprintln!("{prefix}{emoji} {msg}");
        } else {
            eprintln!("{prefix}{msg}");
        }
    }

    pub fn infof(&self, emoji: &str, msg: &str) {
        if self.is_showing(Verbosity::Info) {
            self.output(emoji, msg);
        }
    }

    pub fn noticef(&self, emoji: &str, msg: &str) {
        if self.is_showing(Verbosity::Notice) {
            self.output(emoji, msg);
        }
    }

    // Warnings and errors always print to stderr, regardless of verbosity.
    pub fn warningf(&self, emoji: &str, msg: &str) {
        self.output_err(emoji, msg);
    }

    pub fn errorf(&self, emoji: &str, msg: &str) {
        self.output_err(emoji, msg);
    }

    #[allow(dead_code)]
    pub fn info_once(&self, key: &str, emoji: &str, msg: &str) {
        if self.is_showing(Verbosity::Info) {
            let mut seen = self.seen.lock().unwrap();
            if seen.insert(key.to_string()) {
                self.output(emoji, msg);
            }
        }
    }

    #[allow(dead_code)]
    pub fn notice_once(&self, key: &str, emoji: &str, msg: &str) {
        if self.is_showing(Verbosity::Notice) {
            let mut seen = self.seen.lock().unwrap();
            if seen.insert(key.to_string()) {
                self.output(emoji, msg);
            }
        }
    }

    #[allow(dead_code)]
    pub fn blank_line_if_verbose(&self) {
        if self.is_showing(Verbosity::Verbose) {
            println!();
        }
    }
}

#[allow(dead_code)]
pub fn english_join(items: &[String]) -> String {
    match items.len() {
        0 => String::new(),
        1 => items[0].clone(),
        2 => format!("{} and {}", items[0], items[1]),
        _ => {
            let (last, rest) = items.split_last().unwrap();
            format!("{}, and {last}", rest.join(", "))
        }
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    // ---- PP::new with emoji flag ----
    #[test]
    fn new_with_emoji_true() {
        let pp = PP::new(true, false);
        assert!(pp.emoji);
    }

    #[test]
    fn new_with_emoji_false() {
        let pp = PP::new(false, false);
        assert!(!pp.emoji);
    }

    // ---- PP::new with quiet flag (verbosity levels) ----
    #[test]
    fn new_quiet_true_sets_verbosity_quiet() {
        let pp = PP::new(false, true);
        assert_eq!(pp.verbosity, Verbosity::Quiet);
    }

    #[test]
    fn new_quiet_false_sets_verbosity_verbose() {
        let pp = PP::new(false, false);
        assert_eq!(pp.verbosity, Verbosity::Verbose);
    }

    // ---- PP::is_showing at different verbosity levels ----
    #[test]
    fn quiet_shows_only_quiet_level() {
        let pp = PP::new(false, true);
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(!pp.is_showing(Verbosity::Notice));
        assert!(!pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn verbose_shows_all_levels() {
        let pp = PP::new(false, false);
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(pp.is_showing(Verbosity::Info));
        assert!(pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn notice_level_shows_quiet_and_notice_only() {
        let mut pp = PP::new(false, false);
        pp.verbosity = Verbosity::Notice;
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(!pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn info_level_shows_up_to_info() {
        let mut pp = PP::new(false, false);
        pp.verbosity = Verbosity::Info;
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    // ---- PP::indent ----
    #[test]
    fn indent_increments_indent_level() {
        let pp = PP::new(true, false);
        assert_eq!(pp.indent, 0);
        let child = pp.indent();
        assert_eq!(child.indent, 1);
        let grandchild = child.indent();
        assert_eq!(grandchild.indent, 2);
    }

    #[test]
    fn indent_preserves_verbosity_and_emoji() {
        let pp = PP::new(true, true);
        let child = pp.indent();
        assert_eq!(child.verbosity, pp.verbosity);
        assert_eq!(child.emoji, pp.emoji);
    }

    #[test]
    fn indent_shares_seen_state() {
        let pp = PP::new(false, false);
        let child = pp.indent();
        // Insert via parent's seen set
        pp.seen.lock().unwrap().insert("key1".to_string());
        // Child should observe the same entry
        assert!(child.seen.lock().unwrap().contains("key1"));
        // Insert via child
        child.seen.lock().unwrap().insert("key2".to_string());
        // Parent should observe it too
        assert!(pp.seen.lock().unwrap().contains("key2"));
    }

    // ---- PP::infof, noticef, warningf, errorf - no panic and verbosity gating ----
    #[test]
    fn infof_does_not_panic_when_verbose() {
        let pp = PP::new(false, false);
        pp.infof("", "test info message");
    }

    #[test]
    fn infof_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        // Should simply not print, and not panic
        pp.infof("", "test info message");
    }

    #[test]
    fn noticef_does_not_panic_when_verbose() {
        let pp = PP::new(true, false);
        pp.noticef(EMOJI_DETECT, "test notice message");
    }

    #[test]
    fn noticef_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        pp.noticef("", "test notice message");
    }

    #[test]
    fn warningf_does_not_panic() {
        let pp = PP::new(true, false);
        pp.warningf(EMOJI_WARNING, "test warning");
    }

    #[test]
    fn warningf_does_not_panic_when_quiet() {
        // warningf always outputs (no verbosity check), just verify no panic
        let pp = PP::new(false, true);
        pp.warningf("", "test warning");
    }

    #[test]
    fn errorf_does_not_panic() {
        let pp = PP::new(true, false);
        pp.errorf(EMOJI_ERROR, "test error");
    }

    #[test]
    fn errorf_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        pp.errorf("", "test error");
    }

    // ---- PP::info_once and notice_once ----
    #[test]
    fn info_once_suppresses_duplicates() {
        let pp = PP::new(false, false);
        // First call inserts the key
        pp.info_once("dup_key", "", "first");
        // The key should now be in the seen set
        assert!(pp.seen.lock().unwrap().contains("dup_key"));
        // Calling again with the same key should not insert again (set unchanged)
        let size_before = pp.seen.lock().unwrap().len();
        pp.info_once("dup_key", "", "second");
        let size_after = pp.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
    }

    #[test]
    fn info_once_allows_different_keys() {
        let pp = PP::new(false, false);
        pp.info_once("key_a", "", "msg a");
        pp.info_once("key_b", "", "msg b");
        let seen = pp.seen.lock().unwrap();
        assert!(seen.contains("key_a"));
        assert!(seen.contains("key_b"));
        assert_eq!(seen.len(), 2);
    }

    #[test]
    fn info_once_skipped_when_quiet() {
        let pp = PP::new(false, true);
        pp.info_once("quiet_key", "", "should not register");
        // Because verbosity is Quiet, info_once should not even insert the key
        assert!(!pp.seen.lock().unwrap().contains("quiet_key"));
    }

    #[test]
    fn notice_once_suppresses_duplicates() {
        let pp = PP::new(false, false);
        pp.notice_once("notice_dup", "", "first");
        assert!(pp.seen.lock().unwrap().contains("notice_dup"));
        let size_before = pp.seen.lock().unwrap().len();
        pp.notice_once("notice_dup", "", "second");
        let size_after = pp.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
    }

    #[test]
    fn notice_once_skipped_when_quiet() {
        let pp = PP::new(false, true);
        pp.notice_once("quiet_notice", "", "should not register");
        assert!(!pp.seen.lock().unwrap().contains("quiet_notice"));
    }

    #[test]
    fn info_once_shared_via_indent() {
        let pp = PP::new(false, false);
        let child = pp.indent();
        // Mark a key via the parent
        pp.info_once("shared_key", "", "parent");
        assert!(pp.seen.lock().unwrap().contains("shared_key"));
        // Child should see it as already present, so set size stays the same
        let size_before = child.seen.lock().unwrap().len();
        child.info_once("shared_key", "", "child duplicate");
        let size_after = child.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
        // Child can add a new key visible to parent
        child.info_once("child_key", "", "child new");
        assert!(pp.seen.lock().unwrap().contains("child_key"));
    }

    // ---- english_join ----
    #[test]
    fn english_join_empty() {
        let items: Vec<String> = vec![];
        assert_eq!(english_join(&items), "");
    }

    #[test]
    fn english_join_single() {
        let items = vec!["alpha".to_string()];
        assert_eq!(english_join(&items), "alpha");
    }

    #[test]
    fn english_join_two() {
        let items = vec!["alpha".to_string(), "beta".to_string()];
        assert_eq!(english_join(&items), "alpha and beta");
    }

    #[test]
    fn english_join_three() {
        let items = vec![
            "alpha".to_string(),
            "beta".to_string(),
            "gamma".to_string(),
        ];
        assert_eq!(english_join(&items), "alpha, beta, and gamma");
    }

    #[test]
    fn english_join_four() {
        let items = vec![
            "a".to_string(),
            "b".to_string(),
            "c".to_string(),
            "d".to_string(),
        ];
        assert_eq!(english_join(&items), "a, b, c, and d");
    }

    // ---- default_pp ----
    #[test]
    fn default_pp_is_verbose_no_emoji() {
        let pp = PP::default_pp();
        assert!(!pp.emoji);
        assert_eq!(pp.verbosity, Verbosity::Verbose);
    }
}
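`PP::is_showing` compares verbosities with `>=`, which works because `derive(PartialOrd, Ord)` on a fieldless enum orders variants by declaration order, so `Quiet < Notice < Info < Verbose` falls directly out of the declaration. A standalone illustration of that design choice:

```rust
// Variant order in the declaration defines the derived ordering:
// Quiet (lowest) through Verbose (highest).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Verbosity {
    Quiet,
    Notice,
    Info,
    Verbose,
}

fn main() {
    assert!(Verbosity::Quiet < Verbosity::Notice);
    assert!(Verbosity::Notice < Verbosity::Info);
    assert!(Verbosity::Info < Verbosity::Verbose);
    // A Verbose printer shows Info-level messages; a Quiet one does not.
    assert!(Verbosity::Verbose >= Verbosity::Info);
    assert!(!(Verbosity::Quiet >= Verbosity::Info));
}
```

Reordering the variants would silently invert the gating logic, which is why the declaration order matters here.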

src/provider.rs: new file, 1346 lines (diff suppressed because it is too large)

src/updater.rs: new file, 2313 lines (diff suppressed because it is too large)

@@ -1,10 +0,0 @@
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
python3 -m venv venv
source ./venv/bin/activate
cd $DIR
set -o pipefail; pip install -r requirements.txt | { grep -v "already satisfied" || :; }
python3 cloudflare-ddns.py