154 Commits

Author SHA1 Message Date
Timothy Miller
93d351d997 Use Cloudflare trace by default and validate IPs
Default IPv4 provider is now CloudflareTrace. The primary trace
endpoint is api.cloudflare.com (avoiding WARP/Zero Trust
interception); fallbacks remain literal IPs. Build per-family HTTP
clients by binding to the unspecified address (0.0.0.0 / [::]) so the
trace endpoint observes the requested address family. Add
validate_detected_ip to reject wrong-family or non-global addresses
(loopback, link-local, private, documentation ranges, etc.). Update
tests and legacy updater URLs; bump the crate version and the
tempfile dev-dependency.
2026-03-11 18:42:46 -04:00
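The repository's validator is Rust, but the filtering described above maps directly onto Python's stdlib `ipaddress` module. A sketch of the idea (function name borrowed from the commit message; the logic is assumed, not the project's actual code):

```python
import ipaddress

def validate_detected_ip(text: str, want_v6: bool) -> bool:
    # Reject anything that is not a syntactically valid address,
    # is the wrong family, or is non-global: loopback, link-local,
    # private, CGNAT shared space, documentation ranges, etc.
    try:
        ip = ipaddress.ip_address(text.strip())
    except ValueError:
        return False
    if isinstance(ip, ipaddress.IPv6Address) != want_v6:
        return False
    return ip.is_global

print(validate_detected_ip("192.168.1.10", want_v6=False))  # False: private range
```

`is_global` already excludes the loopback, link-local, private, shared (100.64.0.0/10), and documentation ranges, so one check covers the whole list in the commit message.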
Timothy Miller
e7772c0fe0 Change default IPv4 provider to ipify
Update README and tests to reflect new defaults

Bump actions/checkout to v6, replace linux/arm/v7 with
linux/ppc64le in the Docker build, and normalize tag quoting in the
GitHub workflow
2026-03-10 05:37:09 -04:00
Timothy Miller
33266ced63 Correct Docker image size in README 2026-03-10 05:11:56 -04:00
Timothy Miller
332d730da8 Highlight tiny static Docker image in README 2026-03-10 02:06:52 -04:00
Timothy Miller
a4ac4e1e1c Use scratch release image and optimize build
Narrow Tokio features to rt-multi-thread, macros, time, and signal,
and tighten the Cargo release profile (opt-level = "s", lto = true,
codegen-units = 1, strip = true, panic = "abort") to shrink the
binary. Switch the Docker release stage to scratch, copy CA
certificates from the builder, and set an explicit ENTRYPOINT for
the release image. Add linux/ppc64le support in CI and the build
script, update Cargo.lock to remove unused deps, and update the
README to reflect the image size and supported platforms.
2026-03-10 02:04:30 -04:00
Timothy Miller
6cad2de74c Remove linux/arm/v7 platform from image workflow 2026-03-10 01:49:59 -04:00
Timothy Miller
fd0d2ea647 Add Docker Hub badges to README 2026-03-10 01:28:15 -04:00
Timothy Miller
b1a2fa7af3 Migrate cloudflare-ddns to Rust
Remove the legacy Python script, requirements.txt, and startup
helper; add a Rust implementation with Cargo.toml, Cargo.lock, and a
full src/ tree of modules, tests, and notifier/heartbeat support.
Update the Dockerfile to build a Rust release binary, simplify the
CI/publish scripts, switch .gitignore to Rust artifacts, and move
Dependabot and the workflows to cargo. Add a .env example, env-based
docker-compose, and updated README and VS Code settings.
2026-03-10 01:21:21 -04:00
Timothy Miller
f0d9510fff Merge pull request #117 from arulrajnet/env-support
[feature] Support for environment variable substitution in config.json
2024-08-23 13:55:33 -04:00
Timothy Miller
4ea9ba5745 Merge pull request #151 from 4n4n4s/dependabot-github-actions
Update github-actions
2023-12-10 16:51:21 -05:00
Timothy Miller
9a295bbf91 Merge pull request #127 from adamantike/fix/copy-dependencies-from-stage
Reduce Docker image size by only copying pip installed dependencies
2023-10-12 02:15:43 -04:00
Timothy Miller
fecf30cd2a Merge pull request #139 from Suyun114/ttl-patch
Add TTL set to 1 (auto)
2023-10-12 02:10:52 -04:00
Timothy Miller
f7d1ff8687 Merge pull request #140 from Nevah5/master
Fixed example config for load balancing support in README.md
2023-10-12 02:10:10 -04:00
4n4n4s
fa398b83fc Update github-actions 2023-09-16 16:52:56 +02:00
Timothy Miller
9eb395031e Merge pull request #137 from timothymiller/dependabot/pip/requests-2.31.0
Bump requests from 2.28.2 to 2.31.0
2023-07-23 16:15:58 -04:00
Nevah5
a8a7ed1e5f Fixed example config for load balancing support in README.md 2023-06-04 20:34:14 +02:00
Suyun
060257fe12 Add TTL set to 1 (auto) 2023-06-01 19:35:04 +08:00
dependabot[bot]
4be08d8811 Bump requests from 2.28.2 to 2.31.0
Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.28.2...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-23 06:13:44 +00:00
Michael Manganiello
0ca623329a Reduce Docker image size by only copying pip installed dependencies
Currently, the multi-stage Docker build makes the `release` stage
inherit from `dependencies`, which will include any files created by the
`pip install` process in the final image.

By using `pip install --user` to make dependencies be installed in
`~/.local`, we can only copy those files into the final image, reducing
the image size:

```
cloudflare-ddns-fix-applied     latest            68427bd7c88d   3 minutes ago   54.6MB
cloudflare-ddns-master          latest            2675320b651d   8 minutes ago   65.9MB
```

A good resource going deeper on how this approach works can be found at
https://pythonspeed.com/articles/multi-stage-docker-python/, solution 1.
2023-02-22 10:26:02 -03:00
Arul
d3fe3940f9 addressing review comments 2023-02-21 06:53:01 +05:30
Arul
fa79547f9b Merge branch 'master' into env-support 2023-02-21 06:40:37 +05:30
Timothy Miller
6e92fc0d09 Fix load balancer errors 2023-02-15 19:28:08 -05:00
Timothy Miller
82b97f9cda Updated Load Balancing docs 2023-02-15 17:32:14 -05:00
Timothy Miller
190b90f769 Merge pull request #120 from DeeeeLAN/master
[feature] Add load balancer support
2023-02-15 17:27:03 -05:00
Timothy Miller
fff882be11 Revert config-example.json options for netif 2023-02-15 17:26:55 -05:00
Timothy Miller
713f0de5b0 Updated README.md 2023-02-15 17:15:03 -05:00
Timothy Miller
414ef99f96 Updated docker compose version to 3.9 2023-02-15 17:13:42 -05:00
Timothy Miller
ed65aff55f Revert netif changes for now 2023-02-15 17:05:00 -05:00
Timothy Miller
cb7b1804cf [feature] Extract IP address from netif credit: @comicchang 2023-02-15 16:14:22 -05:00
Timothy Miller
c135a7d343 Merge pull request #121 from davide125/systemd
Add systemd service and timer
2023-02-15 15:39:26 -05:00
Timothy Miller
af347f89b9 Update interpreter of shebang to python3 2023-02-15 15:37:06 -05:00
Timothy Miller
9824815e12 Git history folder added to gitignore 2023-02-15 15:30:36 -05:00
Timothy Miller
0dbd2f7c2b Added dependabot 2023-02-15 15:28:58 -05:00
Timothy Miller
e913d94eb8 Fix wildcard subdomain support 2023-02-15 15:22:06 -05:00
Timothy Miller
83fa74831e Updated README.md 2023-02-15 15:17:08 -05:00
Timothy Miller
f22ec89f3e Updated README.md 2023-02-15 15:15:23 -05:00
Timothy Miller
bd3f4a94cb Updated README.md 2023-02-15 15:05:59 -05:00
Timothy Miller
7212161f7b Updated README.md 2023-02-15 15:01:28 -05:00
Timothy Miller
2ad7e57d65 Added support for secondary IP checks if primary fails (Fixes #111)
Updated requests module version
2023-02-15 14:07:17 -05:00
Timothy Miller
5c909e25cd Updated README.md 2023-02-15 13:25:07 -05:00
Davide Cavalca
2c2e929d17 Add systemd service and timer 2023-01-29 15:28:11 -08:00
Dillan Mills
d92976993d Add load balancer support 2023-01-27 15:08:10 -07:00
Arul
a1fa3b9714 Support environment variable substitution in config.json using Python's string.Template
See the comment in #35.
2022-11-13 21:37:44 +05:30
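The mechanism named here is Python's `string.Template`. A minimal sketch (the config fields are illustrative, not the project's actual schema):

```python
import json
import os
from string import Template

# Hypothetical config fragment; ${CF_API_TOKEN} is filled from the environment
raw = '{"authentication": {"api_token": "${CF_API_TOKEN}"}}'
os.environ["CF_API_TOKEN"] = "example-token"

# Template.substitute replaces ${VAR} from the mapping and raises
# KeyError if a referenced variable is missing from the environment
config = json.loads(Template(raw).substitute(os.environ))
print(config["authentication"]["api_token"])  # example-token
```

Doing the substitution on the raw text before `json.loads` keeps the config file itself valid JSON.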
Timothy Miller
7e6d74f1f6 Onboarding experience improved 2022-10-30 17:54:32 -04:00
Timothy Miller
9855ca6249 Update documentation 2022-10-30 17:47:46 -04:00
Timothy Miller
e0f0280656 Upgrade requests to 2.28.1 2022-10-30 17:45:41 -04:00
Timothy Miller
e86695f77d Updated documentation 2022-10-30 17:43:04 -04:00
Timothy Miller
b0a396b8f1 Update README.md 2022-08-31 16:07:34 -04:00
Timothy Miller
c648b81b25 Fixed typo in README 2022-07-31 22:46:30 -04:00
Timothy Miller
ceeb011366 Updated the domain used to fetch IPv4 from cloudflare. 2022-07-31 15:55:59 -04:00
Timothy Miller
f0357c71c1 Cleaned up code smell 2022-07-31 03:32:05 -04:00
Timothy Miller
6933cbe27f Added compatibility for legacy configs 2022-07-31 03:06:08 -04:00
Timothy Miller
566ad3a7cf Cleaned up code smells 2022-07-31 02:44:38 -04:00
Timothy Miller
3287447e0a Added 🗣️ Call to action for Docker environment variable support 2022-07-30 21:52:27 -04:00
Timothy Miller
62c360cff2 Added 🗣️ Call to action for Docker environment variable support 2022-07-30 21:48:12 -04:00
Timothy Miller
ae7be14004 Synced repeat interval with TTL 2022-07-30 21:44:59 -04:00
Timothy Miller
cb539ad64d Fixed config path bugs in Docker 2022-07-30 21:24:20 -04:00
Timothy Miller
a4d29036c5 Updated cdn-cgi/trace domain from 1.1.1.1 to cloudflare.com 2022-07-30 21:22:54 -04:00
Timothy Miller
ef4e3a5787 Added configurable TTL option, plus documentation 2022-07-30 21:20:41 -04:00
Timothy Miller
2b9ebdeab2 Added exception handling for unhandled api requests 2022-07-30 20:28:54 -04:00
Timothy Miller
86976e5133 Added per-subdomain proxy flag to config.json 2022-07-30 20:24:27 -04:00
Timothy Miller
2401e7a995 Fixed directory not set when running script with crontab 2022-07-30 20:14:29 -04:00
Timothy Miller
8acd8e5f59 Added a catch all for * & @, which are common references to the root domain 2022-07-30 20:12:24 -04:00
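A hypothetical sketch of such a catch-all (function name and behavior are assumed from the commit message, not taken from the source; it also folds in the later whitespace-stripping and case-folding fixes):

```python
def record_name(subdomain: str, zone: str) -> str:
    # "*" and "@" are common references to the root domain;
    # otherwise strip whitespace, lowercase, and qualify with the zone
    sub = subdomain.strip().lower()
    if sub in ("*", "@", ""):
        return zone
    return f"{sub}.{zone}"

print(record_name("@", "example.com"))      # example.com
print(record_name(" WWW ", "example.com"))  # www.example.com
```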
Timothy Miller
0e0e9f9989 Fixed bug that caused the root domain to not update 2022-07-30 20:09:38 -04:00
Timothy Miller
464d2792b1 Fixed purgeUnknownRecords behavior 2022-07-30 20:07:36 -04:00
Timothy Miller
a9d25c743a Create CODE_OF_CONDUCT.md 2021-10-30 22:13:17 -04:00
Timothy Miller
254e978971 Merge pull request #67 from favonia/python-version-check 2021-10-30 22:03:03 -04:00
favonia
bf6135739d Simplify Python version checking 2021-10-30 16:15:37 -05:00
Timothy Miller
bc06202b35 Merge pull request #66 from rojosinalma/patch-1
Fixes Python version check
2021-10-30 16:22:46 -04:00
Rojo
eebbcfbbdf Fixes Python version check
float() drops trailing zeroes, so a float-based check cannot tell
Python 3.10 apart from 3.1:

```python
import sys

# On Python 3.10, f"{major}.{minor}" is "3.10", but float() collapses it to 3.1
print(float(f"{sys.version_info.major}.{sys.version_info.minor}"))
```
2021-10-30 20:55:52 +02:00
Timothy Miller
2a4d9530dd Reduce unimportant logging
Original solution from here https://github.com/pypa/pip/issues/5900#issuecomment-490216395
2021-10-29 23:15:05 -04:00
Timothy Miller
6587d86c65 Reorganized folder structure 2021-10-29 22:56:13 -04:00
Timothy Miller
6e68d2623f Improved documentation around optional features 2021-10-29 22:22:57 -04:00
Timothy Miller
ffa4963ddd Merge pull request #62 from arpagon/master
K8s Compatibility and Example
2021-10-29 22:06:12 -04:00
Timothy Miller
def75e282d Merge pull request #57 from omeganot/master
Config option for purge/delete of "stale" records
2021-10-29 21:54:53 -04:00
Timothy Miller
870da367a9 Merge pull request #59 from adamus1red/adamus1red-docker-ci-tweaks
Update GHA CI to use official Docker build/push actions and allow PRs to be tested
2021-10-29 21:52:31 -04:00
Timothy Miller
571a22ac22 Merge pull request #53 from zmilonas/patch-2
Do not wait before updating IPs for the first time (#51)
2021-10-29 21:42:47 -04:00
Sebastian Rojo
5136c925d2 FIX config filename on Docs 2021-07-15 16:32:09 -05:00
Sebastian Rojo
96527aaab2 BETTER docs 2021-07-15 16:26:47 -05:00
Sebastian Rojo
f7d2e7dc00 Set default image to timothyjmiller/cloudflare-ddns:latest 2021-07-15 16:18:46 -05:00
Sebastian Rojo
4e4d3cebf1 BETTER docs 2021-07-15 16:17:35 -05:00
Sebastian Rojo
01993807a9 ADDED env Variable CONFIG_PATH for Kubernetes secret 2021-07-15 16:14:23 -05:00
adamus1red
0d9a9a0579 Update image.yml 2021-06-16 14:41:04 +01:00
Rich Visotcky
0a85b04287 Add config and option for purgeUnknownRecords 2021-06-02 09:18:56 -05:00
Zachary Milonas
1a6ffc9681 Do not wait before updating IPs for the first time (#51) 2021-04-11 15:44:18 +02:00
Timothy Miller
458559d52c 📈 Increase sync frequency to 5 minutes to prevent potential gateway timeout 2021-03-21 20:18:55 -04:00
Timothy Miller
1f6daa5968 Merge pull request #49 from immortaly007/feature/error-response-logging
🦢 Log the response text, in case the response indicated an error
2021-03-21 13:40:43 -04:00
Bas Dado
c34401c43f 🦢 Log the response text, in case the response indicated an error 2021-03-20 14:35:08 +01:00
Timothy Miller
9a8d7d57e1 🧹 Refactored code 2021-03-17 02:33:51 -04:00
Timothy Miller
04d87d3aa6 Improved error handling 2021-03-17 01:15:07 -04:00
Timothy Miller
bdf8c75cad 🧩 Disable IPv4 or IPv6 in config.json 2021-03-16 20:53:28 -04:00
Timothy Miller
6fe23a2aee 🦢 Improved error message handling 2021-03-16 14:46:29 -04:00
Timothy Miller
47ae1238e2 🦢 Graceful warnings when config.json path is not configured correctly 2021-03-12 15:58:36 -05:00
Timothy Miller
55b705072a 💨 Sped up shutdown
 Check every minute for changes
2021-03-11 20:34:05 -05:00
Timothy Miller
d3cc054b03 💹 Prevent rate limiting by increasing sync frequency to 15 minutes 2021-03-05 23:11:00 -05:00
Timothy Miller
6b25c64846 Revert merge pull request #39 2021-03-05 21:53:18 -05:00
Timothy Miller
378c600084 Merge pull request #39 from bjackman/set-config-path
🏁 Add a flag to modify config.json location
2021-03-03 22:16:30 -05:00
Timothy Miller
975fba4d42 🪵 Reduced duplicate logs [your SD card(s) will thank me] 2021-03-01 00:18:37 -05:00
Timothy Miller
3cd26feb03 🪵 Reduced duplicate logs [your SD card(s) will thank me] 2021-03-01 00:13:11 -05:00
Timothy Miller
1ca225b85c 🔬 Clarified config values for subdomains 2021-03-01 00:01:47 -05:00
Timothy Miller
80bd7801fe 🪵 Reduced duplicate logs [your SD card(s) will thank me] 2021-02-28 23:58:11 -05:00
Timothy Miller
000c833f43 🐳 CI Multi-Arch Docker Builds 2021-02-28 16:58:05 -05:00
Timothy Miller
29771030b1 🐳 CI Multi-Arch Docker builds 2021-02-28 16:39:14 -05:00
Timothy Miller
3753542dce 🐳 CI Multi-Arch Docker builds 2021-02-28 16:32:04 -05:00
Timothy Miller
c34ba8e94c 🐳 CI Multi-Arch Docker builds 2021-02-28 16:28:08 -05:00
Timothy Miller
6be8add640 🐳 CI Multi-Arch Docker Builds 2021-02-28 16:18:06 -05:00
Brendan Jackman
0f3708a482 Add a flag to modify config.json location
On Kubernetes, it's really awkward to write a Secret into the root directory:

https://www.jeffgeerling.com/blog/2019/mounting-kubernetes-secret-single-file-inside-pod

Therefore this adds support for reading the config from an arbitrary path. The
behaviour is unchanged if you don't set this new flag.
2021-02-28 18:24:25 +01:00
Brendan Jackman
8c55892f32 Switch to argparse
The next commit adds a second argument, so raw sys.argv parsing will be a bit
cumbersome. Switch to argparse instead.
2021-02-28 18:18:03 +01:00
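The flag mechanics from these two commits can be sketched with `argparse` (option names are illustrative; the real flags may differ):

```python
import argparse

parser = argparse.ArgumentParser(prog="cloudflare-ddns")
parser.add_argument("--repeat", action="store_true",
                    help="keep running and update on an interval")
parser.add_argument("--config-path", default="config.json",
                    help="read the config from an arbitrary path")

# On Kubernetes the config can now live wherever the Secret is mounted
args = parser.parse_args(["--repeat", "--config-path", "/secrets/config.json"])
print(args.repeat, args.config_path)  # True /secrets/config.json
```

Omitting `--config-path` keeps the old behavior, since the default is the original `config.json` location.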
Timothy Miller
86c935dea7 🐳 CI Multi-Arch Docker Builds
🗄️ Organized scripts
2021-02-28 12:06:38 -05:00
Timothy Miller
27ccdd0203 🦮 Strip whitespace from subdomain
📚 Improved documentation
2021-02-28 01:51:43 -05:00
Timothy Miller
a816fb6c3f 🧵 Type error resolved 2021-02-27 11:53:30 -05:00
Timothy Miller
e129789a85 🚀 Improvement: Update readme guide on multiple zones 2021-02-26 01:18:53 -05:00
Timothy Miller
4ffbb98f29 🚀 Improvement: Skip PUT request when IP does not change
🧑‍🚀 Improvement: Working graceful exit
🚀 Improvement: Update readme on multiple zones
🐛 Fix: Handle IP changes correctly https://github.com/timothymiller/cloudflare-ddns/issues/37
2021-02-26 01:15:35 -05:00
Timothy Miller
6140917119 Merge pull request #33 from markormesher/feat/handle-sigterm
handle sigterm and shutdown immediately
2021-01-23 11:50:23 -05:00
Mark Ormesher
d763be7931 handle sigterm and shutdown immediately 2021-01-20 18:50:36 +00:00
Timothy Miller
839ffe2551 Merge pull request #29 from xinxijishuwyq/master
Ignore case
2020-12-20 00:41:09 -05:00
KenWong
16352e4543 Ignore case
Signed-off-by: KenWong <xinxijishuwyq@gmail.com>
2020-12-20 13:37:14 +08:00
Timothy Miller
65d8c44ec3 Update docker-build.sh 2020-12-16 21:52:30 -05:00
Timothy Miller
de4e2ac5b6 Update docker-publish.sh 2020-12-16 21:52:18 -05:00
Timothy Miller
efefa0ae7a Update docker-run.sh 2020-12-16 21:52:04 -05:00
Timothy Miller
748170926c Update docker-build-all.sh 2020-12-16 21:51:50 -05:00
Timothy Miller
0ca979f91d Update docker-publish.sh 2020-12-16 19:15:41 -05:00
Timothy Miller
3b92c57a75 Update README.md 2020-12-16 18:55:57 -05:00
timothymiller
db5edef4f0 🖥️ Complete Official Python Docker Image support
📚 Updated README.md
2020-12-16 18:55:06 -05:00
Timothy Miller
1235464e18 Merge pull request #28 from wloot/patch-1
Use 1.1.1.1 api instead of dirty hack to get ip
2020-12-16 18:28:25 -05:00
Julian Liu
58c69e2c5f Use 1.1.1.1 api instead of dirty hack to get ip 2020-12-17 02:42:41 +08:00
timothymiller
3e1fcb13f3 🐳 Docker build scripts renamed 2020-12-16 05:52:41 -05:00
Timothy Miller
2b67615330 Merge pull request #27 from paz/master
dirty support to use cloudflare trace & force it to be ipv4/ipv6
2020-12-16 05:47:22 -05:00
root
344b056a6d use cloudflare trace & force it to be ipv4/ipv6 2020-12-16 18:01:40 +08:00
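`cdn-cgi/trace` responds with plain `key=value` lines, one per line; a minimal parser might look like this (sample body abbreviated). "Forcing" IPv4 or IPv6 then amounts to connecting to the endpoint over the desired address family:

```python
def parse_trace(body: str) -> dict:
    # Each line is "key=value"; the "ip" key holds the caller's address
    return dict(line.split("=", 1)
                for line in body.strip().splitlines() if "=" in line)

sample = "fl=123abc\nh=www.cloudflare.com\nip=203.0.113.7\nvisit_scheme=https"
print(parse_trace(sample)["ip"])  # 203.0.113.7
```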
Timothy Miller
18ad6c6bc4 Merge pull request #25 from xinxijishuwyq/master
add option ttl
2020-12-12 15:22:40 -05:00
timothymiller
bc837c61a0 Merge branch 'master' of https://github.com/timothymiller/cloudflare-ddns 2020-12-12 14:53:03 -05:00
timothymiller
f63b0f13fc 🖼️ Added feature graphic 2020-12-12 14:52:52 -05:00
KenWong
cbfd628f22 add option ttl
Signed-off-by: KenWong <xinxijishuwyq@gmail.com>
2020-12-12 21:18:44 +08:00
Timothy Miller
3f2346db6f Update README.md 2020-12-08 22:35:10 -05:00
Timothy Miller
a8be42292b 📄 Updated Dockerhub links in README.md 2020-12-08 04:04:43 -05:00
timothymiller
f77a72f4e3 📄 Fixed Discord invite link in README.md 2020-12-08 03:58:21 -05:00
timothymiller
a633478239 🖥️ Pi-zero support (ARMv6)
📊 Docker Image Stats
💬 Official Discord Server for support
2020-12-08 03:07:40 -05:00
timothymiller
96f781f8b3 👨‍💻 Multi-arch support (ARMv7/ARMv8, AMD64)
Updated requests dependency
2020-12-07 20:50:15 -05:00
timothymiller
242575d7aa ⛏️ Fix: Gracefully handles all IPv4 or IPv6 connectivity scenarios 2020-10-04 13:54:01 -04:00
Timothy Miller
2ad3d6b564 Updated hidden project file default settings 2020-08-26 16:33:49 -04:00
Timothy Miller
d6d3cb54d2 update README.md 2020-08-26 16:20:03 -04:00
timothymiller
142fbaa8ba Merge branch 'master' of https://github.com/timothymiller/cloudflare-ddns 2020-08-26 05:51:28 -04:00
timothymiller
fa56332d18 Update README.md 2020-08-26 05:51:20 -04:00
Tim Miller
78042582bb Update FUNDING.yml 2020-08-16 15:14:10 -04:00
timothymiller
96d92accaa Added warnings for failure to detect ipv4 2020-08-15 22:42:43 -04:00
timothymiller
18654798e0 Update README.md 2020-08-13 23:15:27 -04:00
timothymiller
1e14700d4e Update README.md section on IPv6 inside Docker 2020-08-13 18:35:51 -04:00
timothymiller
3b9a961f61 Updated README.md 2020-08-13 18:33:52 -04:00
timothymiller
bd15e6f117 Fixed IPv6 support in Docker compose file. 2020-08-13 18:30:10 -04:00
timothymiller
5ac69b8274 Fixed IPv6 access inside Docker container 2020-08-13 18:26:43 -04:00
Tim Miller
e2deea1d6e Merge pull request #10 from merlinschumacher/patch-1
Fix image path in docker-compose example README.
2020-08-11 10:56:41 -04:00
Merlin Schumacher
ddc84cec96 Fix image path in docker-compose example 2020-08-11 14:09:19 +02:00
Tim Miller
33334a529f Merge pull request #9 from luigifcruz/patch-1
Fix docker compose username.
2020-08-07 19:19:31 -04:00
Luigi Cruz
86499b038a Update docker-compose.yml 2020-08-07 20:16:06 -03:00
34 changed files with 13527 additions and 315 deletions

2
.github/FUNDING.yml vendored

@@ -1,5 +1,3 @@
# These are supported funding model platforms
github: [timothymiller]
patreon: timknowsbest
custom: ['https://timknowsbest.com/donate']

16
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,16 @@
version: 2
updates:
- package-ecosystem: 'cargo'
directory: '/'
schedule:
interval: 'daily'
- package-ecosystem: 'docker'
directory: '/'
schedule:
interval: 'daily'
- package-ecosystem: 'github-actions'
directory: '/'
schedule:
interval: 'daily'

59
.github/workflows/image.yml vendored Normal file

@@ -0,0 +1,59 @@
name: Build cloudflare-ddns Docker image (multi-arch)
on:
push:
branches: master
tags:
- "v*"
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Extract version from Cargo.toml
id: version
run: |
VERSION=$(grep '^version' Cargo.toml | head -1 | sed 's/.*"\(.*\)".*/\1/')
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: timothyjmiller/cloudflare-ddns
tags: |
type=raw,enable=${{ github.ref == 'refs/heads/master' }},value=latest
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,enable=${{ github.ref == 'refs/heads/master' }},value=${{ steps.version.outputs.version }}
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.meta.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.version=${{ steps.version.outputs.version }}

62
.gitignore vendored

@@ -1,60 +1,10 @@
# Private API keys for updating IPv4 & IPv6 addresses on Cloudflare
config.json
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Rust build artifacts
/target/
debug/
*.pdb
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Git History
**/.history/*

16
.vscode/settings.json vendored

@@ -5,17 +5,13 @@
"**/.hg": true,
"**/CVS": true,
"**/.DS_Store": true,
"**/Thumbs.db": true,
".github": true,
".vscode": true,
"LICENSE": true,
"requirements.txt": true,
"build-docker-image.sh": false,
".gitignore": true,
"Dockerfile": false,
"start-sync.sh": false,
"venv": true
".vscode": true,
"Dockerfile": true,
"LICENSE": true,
"target": true
},
"explorerExclude.backup": null,
"python.linting.pylintEnabled": true,
"python.linting.enabled": true
"explorerExclude.backup": {}
}

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

1822
Cargo.lock generated Normal file

File diff suppressed because it is too large

28
Cargo.toml Normal file

@@ -0,0 +1,28 @@
[package]
name = "cloudflare-ddns"
version = "2.0.1"
edition = "2021"
description = "Access your home network remotely via a custom domain name without a static IP"
license = "GPL-3.0"
[dependencies]
reqwest = { version = "0.12", features = ["json", "rustls-tls"], default-features = false }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "time", "signal"] }
regex = "1"
chrono = { version = "0.4", features = ["clock"] }
url = "2"
idna = "1"
if-addrs = "0.13"
[profile.release]
opt-level = "s"
lto = true
codegen-units = 1
strip = true
panic = "abort"
[dev-dependencies]
tempfile = "3.27.0"
wiremock = "0.6"

Dockerfile

@@ -1,17 +1,13 @@
# ---- Base ----
FROM python:alpine AS base
# ---- Build ----
FROM rust:alpine AS builder
RUN apk add --no-cache musl-dev
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release
#
# ---- Dependencies ----
FROM base AS dependencies
# install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
#
# ---- Release ----
FROM dependencies AS release
# copy project file(s)
WORKDIR /
COPY cloudflare-ddns.py .
CMD ["python", "/cloudflare-ddns.py", "--repeat"]
FROM scratch AS release
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /build/target/release/cloudflare-ddns /cloudflare-ddns
ENTRYPOINT ["/cloudflare-ddns", "--repeat"]

496
README.md

@@ -1,132 +1,490 @@
# :rocket: Cloudflare DDNS
<p align="center"><a href="https://timknowsbest.com/free-dynamic-dns" target="_blank" rel="noopener noreferrer"><img width="1024" src="feature-graphic.jpg" alt="Cloudflare DDNS"/></a></p>
Dynamic DNS service based on Cloudflare! Access your home network remotely via a custom domain name without a static IP!
# 🌍 Cloudflare DDNS
## :us: Origin
Access your home network remotely via a custom domain name without a static IP!
This script was written for the Raspberry Pi platform to enable low cost, simple self hosting to promote a more decentralized internet. On execution, the script fetches public IPv4 and IPv6 addresses and creates/updates DNS records for the subdomains in Cloudflare. Stale, duplicate DNS records are removed for housekeeping.
A feature-complete dynamic DNS client for Cloudflare, written in Rust. The **smallest and most memory-efficient** open-source Cloudflare DDNS Docker image available — **~1.9 MB image size** and **~3.5 MB RAM** at runtime, smaller and leaner than Go-based alternatives. Built as a fully static binary from scratch with zero runtime dependencies.
## :vertical_traffic_light: Getting Started
Configure everything with environment variables. Supports notifications, heartbeat monitoring, WAF list management, flexible scheduling, and more.
First copy the example configuration file into the real one.
[![Docker Pulls](https://img.shields.io/docker/pulls/timothyjmiller/cloudflare-ddns?style=flat&logo=docker&label=pulls)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns) [![Docker Image Size](https://img.shields.io/docker/image-size/timothyjmiller/cloudflare-ddns/latest?style=flat&logo=docker&label=image%20size)](https://hub.docker.com/r/timothyjmiller/cloudflare-ddns)
## ✨ Features
- 🔍 **Multiple IP detection providers** — Cloudflare Trace, Cloudflare DNS-over-HTTPS, ipify, local interface, custom URL, or static IPs
- 📡 **IPv4 and IPv6** — Full dual-stack support with independent provider configuration
- 🌐 **Multiple domains and zones** — Update any number of domains across multiple Cloudflare zones
- 🃏 **Wildcard domains** — Support for `*.example.com` records
- 🌍 **Internationalized domain names** — Full IDN/punycode support (e.g. `münchen.de`)
- 🛡️ **WAF list management** — Automatically update Cloudflare WAF IP lists
- 🔔 **Notifications** — Shoutrrr-compatible notifications (Discord, Slack, Telegram, Gotify, Pushover, generic webhooks)
- 💓 **Heartbeat monitoring** — Healthchecks.io and Uptime Kuma integration
- ⏱️ **Cron scheduling** — Flexible update intervals via cron expressions
- 🧪 **Dry-run mode** — Preview changes without modifying DNS records
- 🧹 **Graceful shutdown** — Signal handling (SIGINT/SIGTERM) with optional DNS record cleanup
- 💬 **Record comments** — Tag managed records with comments for identification
- 🎯 **Managed record regex** — Control which records the tool manages via regex matching
- 🎨 **Pretty output with emoji** — Configurable emoji and verbosity levels
- 🔒 **Zero-log IP detection** — Uses Cloudflare's [cdn-cgi/trace](https://www.cloudflare.com/cdn-cgi/trace) by default
- 🏠 **CGNAT-aware local detection** — Filters out shared address space (100.64.0.0/10) and private ranges
- 🤏 **Tiny static binary** — ~1.9 MB Docker image built from scratch, zero runtime dependencies
## 🚀 Quick Start
```bash
docker run -d \
--name cloudflare-ddns \
--restart unless-stopped \
--network host \
-e CLOUDFLARE_API_TOKEN=your-api-token \
-e DOMAINS=example.com,www.example.com \
timothyjmiller/cloudflare-ddns:latest
```
That's it. The container detects your public IP and updates the DNS records for your domains every 5 minutes.
> ⚠️ `--network host` is required to detect IPv6 addresses. If you only need IPv4, you can omit it and set `IP6_PROVIDER=none`.
## 🔑 Authentication
| Variable | Description |
|----------|-------------|
| `CLOUDFLARE_API_TOKEN` | API token with "Edit DNS" capability |
| `CLOUDFLARE_API_TOKEN_FILE` | Path to a file containing the API token (Docker secrets compatible) |
To generate an API token, go to your [Cloudflare Profile](https://dash.cloudflare.com/profile/api-tokens) and create a token capable of **Edit DNS**.
## 🌐 Domains
| Variable | Description |
|----------|-------------|
| `DOMAINS` | Comma-separated list of domains to update for both IPv4 and IPv6 |
| `IP4_DOMAINS` | Comma-separated list of IPv4-only domains |
| `IP6_DOMAINS` | Comma-separated list of IPv6-only domains |
Wildcard domains are supported: `*.example.com`
At least one of `DOMAINS`, `IP4_DOMAINS`, `IP6_DOMAINS`, or `WAF_LISTS` must be set.
## 🔍 IP Detection Providers
| Variable | Default | Description |
|----------|---------|-------------|
| `IP4_PROVIDER` | `cloudflare.trace` | IPv4 detection method |
| `IP6_PROVIDER` | `cloudflare.trace` | IPv6 detection method |
Available providers:
| Provider | Description |
|----------|-------------|
| `cloudflare.trace` | 🔒 Cloudflare's `/cdn-cgi/trace` endpoint (default, zero-log) |
| `cloudflare.doh` | 🌐 Cloudflare DNS-over-HTTPS (`whoami.cloudflare` TXT query) |
| `ipify` | 🌎 ipify.org API |
| `local` | 🏠 Local IP via system routing table (no network traffic, CGNAT-aware) |
| `local.iface:<name>` | 🔌 IP from a specific network interface (e.g., `local.iface:eth0`) |
| `url:<url>` | 🔗 Custom HTTP(S) endpoint that returns an IP address |
| `literal:<ips>` | 📌 Static IP addresses (comma-separated) |
| `none` | 🚫 Disable this IP type |
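The `cloudflare.trace` provider parses a plain `key=value` response from the trace endpoint. A minimal shell sketch of that parsing, using a made-up sample response rather than a live query:

```shell
# Sample cdn-cgi/trace-style response (placeholder values, not fetched)
trace='fl=123abc
h=www.cloudflare.com
ip=203.0.113.7
ts=1700000000.000'

# Extract the ip= field, as a trace-based detector would
printf '%s\n' "$trace" | sed -n 's/^ip=//p'
```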
## ⏱️ Scheduling
| Variable | Default | Description |
|----------|---------|-------------|
| `UPDATE_CRON` | `@every 5m` | Update schedule |
| `UPDATE_ON_START` | `true` | Run an update immediately on startup |
| `DELETE_ON_STOP` | `false` | Delete managed DNS records on shutdown |
Schedule formats:
- `@every 5m` — Every 5 minutes
- `@every 1h` — Every hour
- `@every 30s` — Every 30 seconds
- `@once` — Run once and exit
When `UPDATE_CRON=@once`, `UPDATE_ON_START` must be `true` and `DELETE_ON_STOP` must be `false`.
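With `@once`, the process performs a single update and exits, so an external scheduler owns the cadence. A sketch of a matching Compose fragment (values are placeholders):

```yml
environment:
  - UPDATE_CRON=@once
  - UPDATE_ON_START=true   # must be true with @once
  - DELETE_ON_STOP=false   # must be false with @once
restart: "no"              # let the external scheduler re-run the container
```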
## 📝 DNS Record Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `TTL` | `1` (auto) | DNS record TTL in seconds (1=auto, or 30-86400) |
| `PROXIED` | `false` | Expression controlling which domains are proxied through Cloudflare |
| `RECORD_COMMENT` | (empty) | Comment attached to managed DNS records |
| `MANAGED_RECORDS_COMMENT_REGEX` | (empty) | Regex to identify which records are managed (empty = all) |
The `PROXIED` variable supports boolean expressions:
| Expression | Meaning |
|------------|---------|
| `true` | ☁️ Proxy all domains |
| `false` | 🔓 Don't proxy any domains |
| `is(example.com)` | 🎯 Only proxy `example.com` |
| `sub(cdn.example.com)` | 🌳 Proxy `cdn.example.com` and its subdomains |
| `is(a.com) \|\| is(b.com)` | 🔀 Proxy `a.com` or `b.com` |
| `!is(vpn.example.com)` | 🚫 Proxy everything except `vpn.example.com` |
Operators: `is()`, `sub()`, `!`, `&&`, `||`, `()`
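For example, a Compose fragment combining operators (domains are placeholders):

```yml
environment:
  - DOMAINS=example.com,cdn.example.com,vpn.example.com
  # Proxy the CDN subtree, but never the VPN host
  - PROXIED=sub(cdn.example.com) && !is(vpn.example.com)
```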
## 🛡️ WAF Lists
| Variable | Default | Description |
|----------|---------|-------------|
| `WAF_LISTS` | (empty) | Comma-separated WAF lists in `account-id/list-name` format |
| `WAF_LIST_DESCRIPTION` | (empty) | Description for managed WAF lists |
| `WAF_LIST_ITEM_COMMENT` | (empty) | Comment for WAF list items |
| `MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX` | (empty) | Regex to identify managed WAF list items |
WAF list names must match the pattern `[a-z0-9_]+`.
## 🔔 Notifications (Shoutrrr)
| Variable | Description |
|----------|-------------|
| `SHOUTRRR` | Newline-separated list of notification service URLs |
Supported services:
| Service | URL format |
|---------|------------|
| 💬 Discord | `discord://token@webhook-id` |
| 📨 Slack | `slack://token-a/token-b/token-c` |
| ✈️ Telegram | `telegram://bot-token@telegram?chats=chat-id` |
| 📡 Gotify | `gotify://host/path?token=app-token` |
| 📲 Pushover | `pushover://user-key@api-token` |
| 🌐 Generic webhook | `generic://host/path` or `generic+https://host/path` |
Notifications are sent when DNS records are updated, created, deleted, or when errors occur.
## 💓 Heartbeat Monitoring
| Variable | Description |
|----------|-------------|
| `HEALTHCHECKS` | Healthchecks.io ping URL |
| `UPTIMEKUMA` | Uptime Kuma push URL |
Heartbeats are sent after each update cycle. On failure, a fail signal is sent. On shutdown, an exit signal is sent.
## ⏳ Timeouts
| Variable | Default | Description |
|----------|---------|-------------|
| `DETECTION_TIMEOUT` | `5s` | Timeout for IP detection requests |
| `UPDATE_TIMEOUT` | `30s` | Timeout for Cloudflare API requests |
## 🖥️ Output
| Variable | Default | Description |
|----------|---------|-------------|
| `EMOJI` | `true` | Use emoji in output messages |
| `QUIET` | `false` | Suppress informational output |
## 🏁 CLI Flags
| Flag | Description |
|------|-------------|
| `--dry-run` | 🧪 Preview changes without modifying DNS records |
| `--repeat` | 🔁 Run continuously (legacy config mode only; env var mode uses `UPDATE_CRON`) |
## 📋 All Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `CLOUDFLARE_API_TOKEN` | — | 🔑 API token |
| `CLOUDFLARE_API_TOKEN_FILE` | — | 📄 Path to API token file |
| `DOMAINS` | — | 🌐 Domains for both IPv4 and IPv6 |
| `IP4_DOMAINS` | — | 4️⃣ IPv4-only domains |
| `IP6_DOMAINS` | — | 6️⃣ IPv6-only domains |
| `IP4_PROVIDER` | `cloudflare.trace` | 🔍 IPv4 detection provider |
| `IP6_PROVIDER` | `cloudflare.trace` | 🔍 IPv6 detection provider |
| `UPDATE_CRON` | `@every 5m` | ⏱️ Update schedule |
| `UPDATE_ON_START` | `true` | 🚀 Update on startup |
| `DELETE_ON_STOP` | `false` | 🧹 Delete records on shutdown |
| `TTL` | `1` | ⏳ DNS record TTL |
| `PROXIED` | `false` | ☁️ Proxied expression |
| `RECORD_COMMENT` | — | 💬 DNS record comment |
| `MANAGED_RECORDS_COMMENT_REGEX` | — | 🎯 Managed records regex |
| `WAF_LISTS` | — | 🛡️ WAF lists to manage |
| `WAF_LIST_DESCRIPTION` | — | 📝 WAF list description |
| `WAF_LIST_ITEM_COMMENT` | — | 💬 WAF list item comment |
| `MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX` | — | 🎯 Managed WAF items regex |
| `DETECTION_TIMEOUT` | `5s` | ⏳ IP detection timeout |
| `UPDATE_TIMEOUT` | `30s` | ⏳ API request timeout |
| `EMOJI` | `true` | 🎨 Enable emoji output |
| `QUIET` | `false` | 🤫 Suppress info output |
| `HEALTHCHECKS` | — | 💓 Healthchecks.io URL |
| `UPTIMEKUMA` | — | 💓 Uptime Kuma URL |
| `SHOUTRRR` | — | 🔔 Notification URLs (newline-separated) |
---
## 🚢 Deployment
### 🐳 Docker Compose
```yml
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
environment:
- CLOUDFLARE_API_TOKEN=your-api-token
- DOMAINS=example.com,www.example.com
- PROXIED=true
- IP6_PROVIDER=none
- HEALTHCHECKS=https://hc-ping.com/your-uuid
restart: unless-stopped
```
> ⚠️ Docker requires `network_mode: host` to access the IPv6 public address.
### ☸️ Kubernetes
The included manifest uses the legacy JSON config mode. Create a secret containing your `config.json` and apply:
```bash
kubectl create secret generic config-cloudflare-ddns --from-file=config.json -n ddns
kubectl apply -f k8s/cloudflare-ddns.yml
```
### 🐧 Linux + Systemd
1. Build and install:
```bash
cargo build --release
sudo cp target/release/cloudflare-ddns /usr/local/bin/
```
2. Copy the systemd units from the `systemd/` directory:
```bash
sudo cp systemd/cloudflare-ddns.service /etc/systemd/system/
sudo cp systemd/cloudflare-ddns.timer /etc/systemd/system/
```
3. Place a `config.json` at `/etc/cloudflare-ddns/config.json` (the systemd service uses legacy config mode).
4. Enable the timer:
```bash
sudo systemctl enable --now cloudflare-ddns.timer
```
The timer runs the service every 15 minutes (configurable in `cloudflare-ddns.timer`).
## 🔨 Building from Source
```bash
cargo build --release
```
The binary is at `target/release/cloudflare-ddns`.
### 🐳 Docker builds
```bash
# Single architecture (linux/amd64)
./scripts/docker-build.sh
# Multi-architecture (linux/amd64, linux/arm64, linux/ppc64le)
./scripts/docker-build-all.sh
```
## 💻 Supported Platforms
- 🐳 [Docker](https://docs.docker.com/get-docker/) (amd64, arm64, ppc64le)
- 🐙 [Docker Compose](https://docs.docker.com/compose/install/)
- ☸️ [Kubernetes](https://kubernetes.io/docs/tasks/tools/)
- 🐧 [Systemd](https://www.freedesktop.org/wiki/Software/systemd/)
- 🍎 macOS, 🪟 Windows, 🐧 Linux — anywhere Rust compiles
---
## 📁 Legacy JSON Config File
For backwards compatibility, cloudflare-ddns still supports configuration via a `config.json` file. This mode is used automatically when no `CLOUDFLARE_API_TOKEN` environment variable is set.
### 🚀 Quick Start
```bash
cp config-example.json config.json
# Edit config.json with your values
cloudflare-ddns
```
### 🔑 Authentication
Use either an API token (recommended) or a legacy API key:
```json
"authentication": {
"api_token": "Your cloudflare API token with Edit DNS capability"
}
```
Or with a legacy API key:
```json
"authentication": {
"api_key": {
"api_key": "Your cloudflare API Key",
"account_email": "The email address you use to sign in to cloudflare"
}
}
```
### 📡 IPv4 and IPv6
Some ISP-provided modems only allow port forwarding over IPv4 or IPv6. Disable whichever address family is not reachable:
```json
"a": true,
"aaaa": true
```
### ⚙️ Config Options
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `cloudflare` | array | required | List of zone configurations |
| `a` | bool | `true` | Enable IPv4 (A record) updates |
| `aaaa` | bool | `true` | Enable IPv6 (AAAA record) updates |
| `purgeUnknownRecords` | bool | `false` | Delete stale/duplicate DNS records |
| `ttl` | int | `300` | DNS record TTL in seconds (30-86400, values < 30 become auto) |
Each zone entry contains:
| Key | Type | Description |
|-----|------|-------------|
| `authentication` | object | API token or API key credentials |
| `zone_id` | string | Cloudflare zone ID (found in zone dashboard) |
| `subdomains` | array | Subdomain entries to update |
| `proxied` | bool | Default proxied status for subdomains in this zone |
Subdomain entries can be a simple string or a detailed object:
```json
"subdomains": [
"",
"@",
"www",
{ "name": "vpn", "proxied": true }
]
```
Use `""` or `"@"` for the root domain. Do not include the base domain name.
### 🔄 Environment Variable Substitution
In the legacy config file, values can reference environment variables with the `CF_DDNS_` prefix:
```json
{
"cloudflare": [{
"authentication": {
"api_token": "${CF_DDNS_API_TOKEN}"
},
...
}]
}
```
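A rough shell sketch of the substitution idea, using `sed` to mimic (not reproduce) the program's own mechanism; the token value is a placeholder:

```shell
export CF_DDNS_API_TOKEN=tok-123
config='{"api_token": "${CF_DDNS_API_TOKEN}"}'

# Replace the ${CF_DDNS_API_TOKEN} placeholder with the environment value
printf '%s\n' "$config" | sed "s/\${CF_DDNS_API_TOKEN}/$CF_DDNS_API_TOKEN/"
```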
### 📠 Example: Multiple Subdomains
```json
{
"cloudflare": [
{
"authentication": {
"api_token": "your-api-token"
},
"zone_id": "your_zone_id",
"subdomains": [
{ "name": "", "proxied": true },
{ "name": "www", "proxied": true },
{ "name": "vpn", "proxied": false }
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false,
"ttl": 300
}
```
### 🌐 Example: Multiple Zones
```json
{
"cloudflare": [
{
"authentication": { "api_token": "your-api-token" },
"zone_id": "first_zone_id",
"subdomains": [
{ "name": "", "proxied": false }
]
},
{
"authentication": { "api_token": "your-api-token" },
"zone_id": "second_zone_id",
"subdomains": [
{ "name": "", "proxied": false }
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false
}
```
### 🐳 Docker Compose (legacy config file)
```yml
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
volumes:
- /YOUR/PATH/HERE/config.json:/config.json
restart: unless-stopped
```
### 🏁 Legacy CLI Flags
In legacy config mode, use `--repeat` to run continuously (the TTL value is used as the update interval):
```bash
cloudflare-ddns --repeat
cloudflare-ddns --repeat --dry-run
```
---
## 🔗 Helpful Links
- 🔑 [Cloudflare API token](https://dash.cloudflare.com/profile/api-tokens)
- 🆔 [Cloudflare zone ID](https://support.cloudflare.com/hc/en-us/articles/200167836-Where-do-I-find-my-Cloudflare-IP-address-)
- 📋 [Cloudflare zone DNS record ID](https://support.cloudflare.com/hc/en-us/articles/360019093151-Managing-DNS-records-in-Cloudflare)
## 📜 License
This project is licensed under the GNU General Public License, version 3 (GPLv3).
## 👨‍💻 Author
Timothy Miller
[View my GitHub profile 💡](https://github.com/timothymiller)
[View my personal website 💻](https://itstmillertime.com)


@@ -1 +0,0 @@
docker build -t timothyjmiller/cloudflare-ddns:latest .


@@ -1,127 +0,0 @@
import requests, json, sys, os
import time, traceback
PATH = os.getcwd() + "/"
version = float(str(sys.version_info[0]) + "." + str(sys.version_info[1]))
if(version < 3.5):
raise Exception("This script requires Python 3.5+")
with open(PATH + "config.json") as config_file:
config = json.loads(config_file.read())
def getIPs():
a = requests.get("https://api.ipify.org?format=json").json().get("ip")
aaaa = requests.get("https://api6.ipify.org?format=json").json().get("ip")
ips = []
if(a.find(".") > -1):
ips.append({
"type": "A",
"ip": a
})
if(aaaa.find(":") > -1):
ips.append({
"type": "AAAA",
"ip": aaaa
})
return ips
def commitRecord(ip):
stale_record_ids = []
for c in config["cloudflare"]:
subdomains = c["subdomains"]
response = cf_api("zones/" + c['zone_id'], "GET", c)
base_domain_name = response["result"]["name"]
for subdomain in subdomains:
exists = False
record = {
"type": ip["type"],
"name": subdomain,
"content": ip["ip"],
"proxied": c["proxied"]
}
list = cf_api(
"zones/" + c['zone_id'] + "/dns_records?per_page=100&type=" + ip["type"], "GET", c)
full_subdomain = base_domain_name
if subdomain:
full_subdomain = subdomain + "." + full_subdomain
dns_id = ""
for r in list["result"]:
if (r["name"] == full_subdomain):
exists = True
if (r["content"] != ip["ip"]):
if (dns_id == ""):
dns_id = r["id"]
else:
stale_record_ids.append(r["id"])
if(exists == False):
print("Adding new record " + str(record))
response = cf_api(
"zones/" + c['zone_id'] + "/dns_records", "POST", c, {}, record)
elif(dns_id != ""):
# Only update if the record content is different
print("Updating record " + str(record))
response = cf_api(
"zones/" + c['zone_id'] + "/dns_records/" + dns_id, "PUT", c, {}, record)
# Delete duplicate, stale records
for identifier in stale_record_ids:
print("Deleting stale record " + str(identifier))
response = cf_api(
"zones/" + c['zone_id'] + "/dns_records/" + identifier, "DELETE", c)
return True
def cf_api(endpoint, method, config, headers={}, data=False):
api_token = config['authentication']['api_token']
if api_token != '' and api_token != 'api_token_here':
headers = {
"Authorization": "Bearer " + api_token,
**headers
}
else:
headers = {
"X-Auth-Email": config['authentication']['api_key']['account_email'],
"X-Auth-Key": config['authentication']['api_key']['api_key'],
}
if(data == False):
response = requests.request(
method, "https://api.cloudflare.com/client/v4/" + endpoint, headers=headers)
else:
response = requests.request(
method, "https://api.cloudflare.com/client/v4/" + endpoint, headers=headers, json=data)
return response.json()
def every(delay, task):
next_time = time.time() + delay
while True:
time.sleep(max(0, next_time - time.time()))
try:
task()
except Exception:
traceback.print_exc()
# in production code you might want to have this instead of course:
# logger.exception("Problem while executing repetitive task.")
# skip tasks if we are behind schedule:
next_time += (time.time() - next_time) // delay * delay + delay
def updateIPs():
for ip in getIPs():
print("Checking " + ip["type"] + " records")
commitRecord(ip)
if(len(sys.argv) > 1):
if(sys.argv[1] == "--repeat"):
import threading
threading.Thread(target=lambda: every(60*15, updateIPs)).start()
updateIPs()


@@ -10,10 +10,19 @@
},
"zone_id": "your_zone_id_here",
"subdomains": [
{
"name": "",
"proxied": false
},
{
"name": "remove_or_replace_with_your_subdomain",
"proxied": false
}
]
}
],
"a": true,
"aaaa": true,
"purgeUnknownRecords": false,
"ttl": 300
}


@@ -0,0 +1,19 @@
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
environment:
- CLOUDFLARE_API_TOKEN=your-api-token-here
- DOMAINS=example.com,www.example.com
- PROXIED=false
- TTL=1
- UPDATE_CRON=@every 5m
# - IP6_PROVIDER=none
# - HEALTHCHECKS=https://hc-ping.com/your-uuid
# - UPTIMEKUMA=https://kuma.example.com/api/push/your-token
# - SHOUTRRR=discord://token@webhook-id
restart: unless-stopped


@@ -1,13 +1,14 @@
version: '3.9'
services:
cloudflare-ddns:
image: timothyjmiller/cloudflare-ddns:latest
container_name: cloudflare-ddns
security_opt:
- no-new-privileges:true
network_mode: 'host'
environment:
volumes:
- /YOUR/PATH/HERE/config.json:/config.json
restart: unless-stopped

env-example Normal file

@@ -0,0 +1,98 @@
# Cloudflare DDNS - Environment Variable Configuration
# Copy this file to .env and set your values.
# Setting CLOUDFLARE_API_TOKEN activates environment variable mode.
# === Required ===
# Cloudflare API token with "Edit DNS" capability
CLOUDFLARE_API_TOKEN=your-api-token-here
# Or read from a file:
# CLOUDFLARE_API_TOKEN_FILE=/run/secrets/cloudflare_token
# Domains to update (comma-separated)
# At least one of DOMAINS, IP4_DOMAINS, IP6_DOMAINS, or WAF_LISTS must be set
DOMAINS=example.com,www.example.com
# IP4_DOMAINS=v4only.example.com
# IP6_DOMAINS=v6only.example.com
# === IP Detection ===
# Provider for IPv4 detection (default: cloudflare.trace)
# Options: cloudflare.trace, cloudflare.doh, ipify, local, local.iface:<name>,
# url:<custom-url>, literal:<ip1>,<ip2>, none
# IP4_PROVIDER=cloudflare.trace
# Provider for IPv6 detection (default: cloudflare.trace)
# IP6_PROVIDER=cloudflare.trace
# === Scheduling ===
# Update schedule (default: @every 5m)
# Formats: @every 5m, @every 1h, @every 30s, @once
# UPDATE_CRON=@every 5m
# Run an update immediately on startup (default: true)
# UPDATE_ON_START=true
# Delete managed DNS records on shutdown (default: false)
# DELETE_ON_STOP=false
# === DNS Records ===
# TTL in seconds: 1=auto, or 30-86400 (default: 1)
# TTL=1
# Proxied expression: true, false, is(domain), sub(domain), or boolean combos
# PROXIED=false
# Comment to attach to managed DNS records
# RECORD_COMMENT=Managed by cloudflare-ddns
# Regex to identify which records are managed (empty = all matching records)
# MANAGED_RECORDS_COMMENT_REGEX=cloudflare-ddns
# === WAF Lists ===
# Comma-separated WAF lists in account-id/list-name format
# WAF_LISTS=account123/my_ip_list
# Description for managed WAF lists
# WAF_LIST_DESCRIPTION=Dynamic IP list
# Comment for WAF list items
# WAF_LIST_ITEM_COMMENT=cloudflare-ddns
# Regex to identify managed WAF list items
# MANAGED_WAF_LIST_ITEMS_COMMENT_REGEX=cloudflare-ddns
# === Notifications ===
# Shoutrrr notification URLs (newline-separated)
# SHOUTRRR=discord://token@webhook-id
# SHOUTRRR=slack://token-a/token-b/token-c
# SHOUTRRR=telegram://bot-token@telegram?chats=chat-id
# SHOUTRRR=generic+https://hooks.example.com/webhook
# === Heartbeat Monitoring ===
# Healthchecks.io ping URL
# HEALTHCHECKS=https://hc-ping.com/your-uuid
# Uptime Kuma push URL
# UPTIMEKUMA=https://your-uptime-kuma.com/api/push/your-token
# === Timeouts ===
# IP detection timeout (default: 5s)
# DETECTION_TIMEOUT=5s
# Cloudflare API request timeout (default: 30s)
# UPDATE_TIMEOUT=30s
# === Output ===
# Use emoji in output (default: true)
# EMOJI=true
# Suppress informational output (default: false)
# QUIET=false

feature-graphic.jpg Normal file (binary image, 440 KiB; not shown)

k8s/cloudflare-ddns.yml Normal file

@@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflare-ddns
spec:
selector:
matchLabels:
app: cloudflare-ddns
template:
metadata:
labels:
app: cloudflare-ddns
spec:
containers:
- name: cloudflare-ddns
image: timothyjmiller/cloudflare-ddns:latest
resources:
limits:
memory: '32Mi'
cpu: '50m'
env:
- name: CONFIG_PATH
value: '/etc/cloudflare-ddns/'
volumeMounts:
- mountPath: '/etc/cloudflare-ddns'
name: config-cloudflare-ddns
readOnly: true
volumes:
- name: config-cloudflare-ddns
secret:
secretName: config-cloudflare-ddns


@@ -1 +0,0 @@
requests==2.24.0

scripts/docker-build-all.sh Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
BASH_DIR=$(dirname "$(realpath "${BASH_SOURCE[0]}")")
docker buildx build --platform linux/amd64,linux/arm64,linux/ppc64le --tag timothyjmiller/cloudflare-ddns:latest "${BASH_DIR}/../"

scripts/docker-build.sh Executable file

@@ -0,0 +1,3 @@
#!/bin/bash
BASH_DIR=$(dirname "$(realpath "${BASH_SOURCE[0]}")")
docker build --platform linux/amd64 --tag timothyjmiller/cloudflare-ddns:latest "${BASH_DIR}/../"

scripts/docker-publish.sh Executable file

@@ -0,0 +1,8 @@
#!/bin/bash
BASH_DIR=$(dirname "$(realpath "${BASH_SOURCE[0]}")")
VERSION=$(grep '^version' "${BASH_DIR}/../Cargo.toml" | head -1 | sed 's/.*"\(.*\)".*/\1/')
docker buildx build \
--platform linux/amd64,linux/arm64,linux/ppc64le \
--tag timothyjmiller/cloudflare-ddns:latest \
--tag "timothyjmiller/cloudflare-ddns:${VERSION}" \
--push "${BASH_DIR}/../"

scripts/docker-run.sh Executable file

@@ -0,0 +1,2 @@
#!/bin/bash
docker run timothyjmiller/cloudflare-ddns:latest

src/cloudflare.rs Normal file

File diff suppressed because it is too large

src/config.rs Normal file

File diff suppressed because it is too large

src/domain.rs Normal file

@@ -0,0 +1,547 @@
use std::fmt;
/// Represents a DNS domain - either a regular FQDN or a wildcard.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum Domain {
FQDN(String),
Wildcard(String),
}
#[allow(dead_code)]
impl Domain {
/// Parse a domain string. Handles:
/// - "@" or "" -> root domain (handled at FQDN construction time)
/// - "*.example.com" -> wildcard
/// - "sub.example.com" -> regular FQDN
pub fn new(input: &str) -> Result<Self, String> {
let trimmed = input.trim().to_lowercase();
if trimmed.starts_with("*.") {
let base = &trimmed[2..];
let ascii = domain_to_ascii(base)?;
Ok(Domain::Wildcard(ascii))
} else {
let ascii = domain_to_ascii(&trimmed)?;
Ok(Domain::FQDN(ascii))
}
}
/// Returns the DNS name in ASCII form suitable for API calls.
pub fn dns_name_ascii(&self) -> String {
match self {
Domain::FQDN(s) => s.clone(),
Domain::Wildcard(s) => format!("*.{s}"),
}
}
/// Returns a human-readable description of the domain.
pub fn describe(&self) -> String {
match self {
Domain::FQDN(s) => describe_domain(s),
Domain::Wildcard(s) => format!("*.{}", describe_domain(s)),
}
}
/// Returns the zones (parent domains) for this domain, from most specific to least.
pub fn zones(&self) -> Vec<String> {
let base = match self {
Domain::FQDN(s) => s.as_str(),
Domain::Wildcard(s) => s.as_str(),
};
let mut zones = Vec::new();
let mut current = base.to_string();
while !current.is_empty() {
zones.push(current.clone());
if let Some(pos) = current.find('.') {
current = current[pos + 1..].to_string();
} else {
break;
}
}
zones
}
}
impl fmt::Display for Domain {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.describe())
}
}
/// Construct an FQDN from a subdomain name and base domain.
pub fn make_fqdn(subdomain: &str, base_domain: &str) -> String {
let name = subdomain.trim().to_lowercase();
if name.is_empty() || name == "@" {
base_domain.to_lowercase()
} else {
// Covers both plain and wildcard ("*.foo") subdomains
format!("{name}.{}", base_domain.to_lowercase())
}
}
/// Convert a domain to ASCII using IDNA encoding.
#[allow(dead_code)]
fn domain_to_ascii(domain: &str) -> Result<String, String> {
if domain.is_empty() {
return Ok(String::new());
}
// Try IDNA encoding for internationalized domain names
match idna::domain_to_ascii(domain) {
Ok(ascii) => Ok(ascii),
Err(_) => {
// Fallback: if it's already ASCII, just return it
if domain.is_ascii() {
Ok(domain.to_string())
} else {
Err(format!("Invalid domain name: {domain}"))
}
}
}
}
/// Convert ASCII domain back to Unicode for display.
#[allow(dead_code)]
fn describe_domain(ascii: &str) -> String {
// Try to convert punycode back to unicode for display
match idna::domain_to_unicode(ascii) {
(unicode, Ok(())) => unicode,
_ => ascii.to_string(),
}
}
/// Parse a comma-separated list of domain strings.
#[allow(dead_code)]
pub fn parse_domain_list(input: &str) -> Result<Vec<Domain>, String> {
if input.trim().is_empty() {
return Ok(Vec::new());
}
input
.split(',')
.map(|s| Domain::new(s.trim()))
.collect()
}
// --- Domain Expression Evaluator ---
// Supports: true, false, is(domain,...), sub(domain,...), !, &&, ||, ()
/// Parse and evaluate a domain expression to determine if a domain should be proxied.
pub fn parse_proxied_expression(expr: &str) -> Result<Box<dyn Fn(&str) -> bool + Send + Sync>, String> {
let expr = expr.trim();
if expr.is_empty() || expr == "false" {
return Ok(Box::new(|_: &str| false));
}
if expr == "true" {
return Ok(Box::new(|_: &str| true));
}
let tokens = tokenize_expr(expr)?;
let (predicate, rest) = parse_or_expr(&tokens)?;
if !rest.is_empty() {
return Err(format!("Unexpected tokens in proxied expression: {}", rest.join(" ")));
}
Ok(predicate)
}
fn tokenize_expr(input: &str) -> Result<Vec<String>, String> {
let mut tokens = Vec::new();
let mut chars = input.chars().peekable();
while let Some(&c) = chars.peek() {
match c {
' ' | '\t' | '\n' | '\r' => {
chars.next();
}
'(' | ')' | '!' | ',' => {
tokens.push(c.to_string());
chars.next();
}
'&' => {
chars.next();
if chars.peek() == Some(&'&') {
chars.next();
tokens.push("&&".to_string());
} else {
return Err("Expected '&&', got single '&'".to_string());
}
}
'|' => {
chars.next();
if chars.peek() == Some(&'|') {
chars.next();
tokens.push("||".to_string());
} else {
return Err("Expected '||', got single '|'".to_string());
}
}
_ => {
let mut word = String::new();
while let Some(&c) = chars.peek() {
if c.is_alphanumeric() || c == '.' || c == '-' || c == '_' || c == '*' || c == '@' {
word.push(c);
chars.next();
} else {
break;
}
}
if word.is_empty() {
return Err(format!("Unexpected character: {c}"));
}
tokens.push(word);
}
}
}
Ok(tokens)
}
type Predicate = Box<dyn Fn(&str) -> bool + Send + Sync>;
fn parse_or_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
let (mut left, mut rest) = parse_and_expr(tokens)?;
while !rest.is_empty() && rest[0] == "||" {
let (right, new_rest) = parse_and_expr(&rest[1..])?;
let prev = left;
left = Box::new(move |d: &str| prev(d) || right(d));
rest = new_rest;
}
Ok((left, rest))
}
fn parse_and_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
let (mut left, mut rest) = parse_not_expr(tokens)?;
while !rest.is_empty() && rest[0] == "&&" {
let (right, new_rest) = parse_not_expr(&rest[1..])?;
let prev = left;
left = Box::new(move |d: &str| prev(d) && right(d));
rest = new_rest;
}
Ok((left, rest))
}
fn parse_not_expr(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
if tokens.is_empty() {
return Err("Unexpected end of expression".to_string());
}
if tokens[0] == "!" {
let (inner, rest) = parse_not_expr(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| !inner(d));
Ok((pred, rest))
} else {
parse_atom(tokens)
}
}
fn parse_atom(tokens: &[String]) -> Result<(Predicate, &[String]), String> {
if tokens.is_empty() {
return Err("Unexpected end of expression".to_string());
}
match tokens[0].as_str() {
"true" => Ok((Box::new(|_: &str| true), &tokens[1..])),
"false" => Ok((Box::new(|_: &str| false), &tokens[1..])),
"(" => {
let (inner, rest) = parse_or_expr(&tokens[1..])?;
if rest.is_empty() || rest[0] != ")" {
return Err("Missing closing parenthesis".to_string());
}
Ok((inner, &rest[1..]))
}
"is" => {
let (domains, rest) = parse_domain_args(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| {
let d_lower = d.to_lowercase();
domains.iter().any(|dom| d_lower == *dom)
});
Ok((pred, rest))
}
"sub" => {
let (domains, rest) = parse_domain_args(&tokens[1..])?;
let pred: Predicate = Box::new(move |d: &str| {
let d_lower = d.to_lowercase();
domains.iter().any(|dom| {
d_lower == *dom || d_lower.ends_with(&format!(".{dom}"))
})
});
Ok((pred, rest))
}
_ => Err(format!("Unexpected token: {}", tokens[0])),
}
}
fn parse_domain_args(tokens: &[String]) -> Result<(Vec<String>, &[String]), String> {
if tokens.is_empty() || tokens[0] != "(" {
return Err("Expected '(' after function name".to_string());
}
let mut domains = Vec::new();
let mut i = 1;
while i < tokens.len() && tokens[i] != ")" {
if tokens[i] == "," {
i += 1;
continue;
}
domains.push(tokens[i].to_lowercase());
i += 1;
}
if i >= tokens.len() {
return Err("Missing closing ')' in function call".to_string());
}
Ok((domains, &tokens[i + 1..]))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_make_fqdn_root() {
assert_eq!(make_fqdn("", "example.com"), "example.com");
assert_eq!(make_fqdn("@", "example.com"), "example.com");
}
#[test]
fn test_make_fqdn_subdomain() {
assert_eq!(make_fqdn("www", "example.com"), "www.example.com");
assert_eq!(make_fqdn("VPN", "Example.COM"), "vpn.example.com");
}
#[test]
fn test_domain_wildcard() {
let d = Domain::new("*.example.com").unwrap();
assert_eq!(d.dns_name_ascii(), "*.example.com");
}
#[test]
fn test_parse_domain_list() {
let domains = parse_domain_list("example.com, *.example.com, sub.example.com").unwrap();
assert_eq!(domains.len(), 3);
}
#[test]
fn test_proxied_expr_true() {
let pred = parse_proxied_expression("true").unwrap();
assert!(pred("anything.com"));
}
#[test]
fn test_proxied_expr_false() {
let pred = parse_proxied_expression("false").unwrap();
assert!(!pred("anything.com"));
}
#[test]
fn test_proxied_expr_is() {
let pred = parse_proxied_expression("is(example.com)").unwrap();
assert!(pred("example.com"));
assert!(!pred("sub.example.com"));
}
#[test]
fn test_proxied_expr_sub() {
let pred = parse_proxied_expression("sub(example.com)").unwrap();
assert!(pred("example.com"));
assert!(pred("sub.example.com"));
assert!(!pred("other.com"));
}
#[test]
fn test_proxied_expr_complex() {
let pred = parse_proxied_expression("is(a.com) || is(b.com)").unwrap();
assert!(pred("a.com"));
assert!(pred("b.com"));
assert!(!pred("c.com"));
}
#[test]
fn test_proxied_expr_negation() {
let pred = parse_proxied_expression("!is(internal.com)").unwrap();
assert!(!pred("internal.com"));
assert!(pred("public.com"));
}
// --- Domain::new with regular FQDN ---
#[test]
fn test_domain_new_fqdn() {
let d = Domain::new("example.com").unwrap();
assert_eq!(d, Domain::FQDN("example.com".to_string()));
}
#[test]
fn test_domain_new_fqdn_uppercase() {
let d = Domain::new("EXAMPLE.COM").unwrap();
assert_eq!(d, Domain::FQDN("example.com".to_string()));
}
// --- Domain::dns_name_ascii for FQDN ---
#[test]
fn test_dns_name_ascii_fqdn() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(d.dns_name_ascii(), "example.com");
}
// --- Domain::describe for both variants ---
#[test]
fn test_describe_fqdn() {
let d = Domain::FQDN("example.com".to_string());
// ASCII domain should round-trip through describe unchanged
assert_eq!(d.describe(), "example.com");
}
#[test]
fn test_describe_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
assert_eq!(d.describe(), "*.example.com");
}
// --- Domain::zones ---
#[test]
fn test_zones_fqdn() {
let d = Domain::FQDN("sub.example.com".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["sub.example.com", "example.com", "com"]);
}
#[test]
fn test_zones_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["example.com", "com"]);
}
#[test]
fn test_zones_single_label() {
let d = Domain::FQDN("localhost".to_string());
let zones = d.zones();
assert_eq!(zones, vec!["localhost"]);
}
// --- Domain Display trait ---
#[test]
fn test_display_fqdn() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(format!("{d}"), "example.com");
}
#[test]
fn test_display_wildcard() {
let d = Domain::Wildcard("example.com".to_string());
assert_eq!(format!("{d}"), "*.example.com");
}
// --- domain_to_ascii (tested indirectly via Domain::new) ---
#[test]
fn test_domain_new_empty_string() {
// empty string -> domain_to_ascii returns Ok("") -> Domain::FQDN("")
let d = Domain::new("").unwrap();
assert_eq!(d, Domain::FQDN("".to_string()));
}
#[test]
fn test_domain_new_ascii_domain() {
let d = Domain::new("www.example.org").unwrap();
assert_eq!(d.dns_name_ascii(), "www.example.org");
}
#[test]
fn test_domain_new_internationalized() {
// "münchen.de" should be encoded to punycode
let d = Domain::new("münchen.de").unwrap();
let ascii = d.dns_name_ascii();
// The punycode-encoded form should start with "xn--"
assert!(ascii.contains("xn--"), "expected punycode, got: {ascii}");
}
// --- describe_domain (tested indirectly via Domain::describe) ---
#[test]
fn test_describe_punycode_roundtrip() {
// Build a domain with a known punycode label and confirm describe decodes it
let d = Domain::new("münchen.de").unwrap();
let described = d.describe();
// Accept either the Unicode form or, if decoding is unavailable, the raw punycode
assert!(described.contains("münchen") || described.contains("xn--"),
"describe returned: {described}");
}
#[test]
fn test_describe_regular_ascii() {
let d = Domain::FQDN("example.com".to_string());
assert_eq!(d.describe(), "example.com");
}
// --- parse_domain_list with empty input ---
#[test]
fn test_parse_domain_list_empty() {
let result = parse_domain_list("").unwrap();
assert!(result.is_empty());
}
#[test]
fn test_parse_domain_list_whitespace_only() {
let result = parse_domain_list(" ").unwrap();
assert!(result.is_empty());
}
// --- Tokenizer edge cases (via parse_proxied_expression) ---
#[test]
fn test_tokenizer_single_ampersand_error() {
let result = parse_proxied_expression("is(a.com) & is(b.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("&&"), "error was: {err}");
}
#[test]
fn test_tokenizer_single_pipe_error() {
let result = parse_proxied_expression("is(a.com) | is(b.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("||"), "error was: {err}");
}
#[test]
fn test_tokenizer_unexpected_character_error() {
let result = parse_proxied_expression("is(a.com) $ is(b.com)");
assert!(result.is_err());
}
// --- Parser edge cases ---
#[test]
fn test_parse_and_expr_double_ampersand() {
let pred = parse_proxied_expression("is(a.com) && is(b.com)").unwrap();
assert!(!pred("a.com"));
assert!(!pred("b.com"));
let pred2 = parse_proxied_expression("sub(example.com) && !is(internal.example.com)").unwrap();
assert!(pred2("www.example.com"));
assert!(!pred2("internal.example.com"));
}
#[test]
fn test_parse_nested_parentheses() {
let pred = parse_proxied_expression("(is(a.com) || is(b.com)) && !is(c.com)").unwrap();
assert!(pred("a.com"));
assert!(pred("b.com"));
assert!(!pred("c.com"));
}
#[test]
fn test_parse_missing_closing_paren() {
let result = parse_proxied_expression("(is(a.com)");
assert!(result.is_err());
let err = result.err().unwrap();
assert!(err.contains("parenthesis") || err.contains(")"), "error was: {err}");
}
#[test]
fn test_parse_unexpected_tokens_after_expr() {
let result = parse_proxied_expression("true false");
assert!(result.is_err());
}
// --- make_fqdn with wildcard subdomain ---
#[test]
fn test_make_fqdn_wildcard_subdomain() {
// A name starting with "*." is treated as a wildcard subdomain
assert_eq!(make_fqdn("*.sub", "example.com"), "*.sub.example.com");
}
}

src/main.rs Normal file

@@ -0,0 +1,920 @@
mod cloudflare;
mod config;
mod domain;
mod notifier;
mod pp;
mod provider;
mod updater;
use crate::cloudflare::{Auth, CloudflareHandle};
use crate::config::{AppConfig, CronSchedule};
use crate::notifier::{CompositeNotifier, Heartbeat, Message};
use crate::pp::PP;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tokio::signal;
use tokio::time::{sleep, Duration};
const VERSION: &str = env!("CARGO_PKG_VERSION");
#[tokio::main]
async fn main() {
// Parse CLI args
let args: Vec<String> = std::env::args().collect();
let dry_run = args.iter().any(|a| a == "--dry-run");
let repeat = args.iter().any(|a| a == "--repeat");
// Check for unknown args (legacy behavior)
let known_args = ["--dry-run", "--repeat"];
let unknown: Vec<&str> = args
.iter()
.skip(1)
.filter(|a| !known_args.contains(&a.as_str()))
.map(|a| a.as_str())
.collect();
if !unknown.is_empty() {
eprintln!(
"Unrecognized parameter(s): {}. Stopping now.",
unknown.join(", ")
);
return;
}
// Determine config mode and create initial PP for config loading
let initial_pp = if config::is_env_config_mode() {
// In env mode, read emoji/quiet from env before loading full config
let emoji = std::env::var("EMOJI")
.map(|v| matches!(v.to_lowercase().as_str(), "true" | "1" | "yes"))
.unwrap_or(true);
let quiet = std::env::var("QUIET")
.map(|v| matches!(v.to_lowercase().as_str(), "true" | "1" | "yes"))
.unwrap_or(false);
PP::new(emoji, quiet)
} else {
// Legacy mode: no emoji, not quiet (preserves original output behavior)
PP::new(false, false)
};
println!("cloudflare-ddns v{VERSION}");
// Load config
let app_config = match config::load_config(dry_run, repeat, &initial_pp) {
Ok(c) => c,
Err(e) => {
eprintln!("{e}");
sleep(Duration::from_secs(10)).await;
std::process::exit(1);
}
};
// Create PP with final settings
let ppfmt = PP::new(app_config.emoji, app_config.quiet);
if dry_run {
ppfmt.noticef(
pp::EMOJI_WARNING,
"[DRY RUN] No records will be created, updated, or deleted.",
);
}
// Print config summary (env mode only)
config::print_config_summary(&app_config, &ppfmt);
// Setup notifiers and heartbeats
let notifier = config::setup_notifiers(&ppfmt);
let heartbeat = config::setup_heartbeats(&ppfmt);
// Create Cloudflare handle (for env mode)
let handle = if !app_config.legacy_mode {
CloudflareHandle::new(
app_config.auth.clone(),
app_config.update_timeout,
app_config.managed_comment_regex.clone(),
app_config.managed_waf_comment_regex.clone(),
)
} else {
// Create a dummy handle for legacy mode (won't be used)
CloudflareHandle::new(
Auth::Token(String::new()),
Duration::from_secs(30),
None,
None,
)
};
// Signal handler for graceful shutdown
let running = Arc::new(AtomicBool::new(true));
let r = running.clone();
tokio::spawn(async move {
let _ = signal::ctrl_c().await;
println!("Stopping...");
r.store(false, Ordering::SeqCst);
});
// Start heartbeat
heartbeat.start().await;
if app_config.legacy_mode {
// --- Legacy mode (original cloudflare-ddns behavior) ---
run_legacy_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running).await;
} else {
// --- Env var mode (cf-ddns behavior) ---
run_env_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running).await;
}
// On shutdown: delete records if configured
if app_config.delete_on_stop && !app_config.legacy_mode {
ppfmt.noticef(pp::EMOJI_STOP, "Deleting records on stop...");
updater::final_delete(&app_config, &handle, &notifier, &heartbeat, &ppfmt).await;
}
// Exit heartbeat
heartbeat
.exit(&Message::new_ok("Shutting down"))
.await;
}
async fn run_legacy_mode(
config: &AppConfig,
handle: &CloudflareHandle,
notifier: &CompositeNotifier,
heartbeat: &Heartbeat,
ppfmt: &PP,
running: Arc<AtomicBool>,
) {
let legacy = match &config.legacy_config {
Some(l) => l,
None => return,
};
if config.repeat {
match (legacy.a, legacy.aaaa) {
(true, true) => println!(
"Updating IPv4 (A) & IPv6 (AAAA) records every {} seconds",
legacy.ttl
),
(true, false) => {
println!("Updating IPv4 (A) records every {} seconds", legacy.ttl)
}
(false, true) => {
println!("Updating IPv6 (AAAA) records every {} seconds", legacy.ttl)
}
(false, false) => println!("Both IPv4 and IPv6 are disabled"),
}
while running.load(Ordering::SeqCst) {
updater::update_once(config, handle, notifier, heartbeat, ppfmt).await;
for _ in 0..legacy.ttl {
if !running.load(Ordering::SeqCst) {
break;
}
sleep(Duration::from_secs(1)).await;
}
}
} else {
updater::update_once(config, handle, notifier, heartbeat, ppfmt).await;
}
}
async fn run_env_mode(
config: &AppConfig,
handle: &CloudflareHandle,
notifier: &CompositeNotifier,
heartbeat: &Heartbeat,
ppfmt: &PP,
running: Arc<AtomicBool>,
) {
match &config.update_cron {
CronSchedule::Once => {
if config.update_on_start {
updater::update_once(config, handle, notifier, heartbeat, ppfmt).await;
}
}
schedule => {
let interval = schedule.next_duration().unwrap_or(Duration::from_secs(300));
ppfmt.noticef(
pp::EMOJI_LAUNCH,
&format!(
"Started cloudflare-ddns, updating every {}",
describe_duration(interval)
),
);
// Update on start if configured
if config.update_on_start {
updater::update_once(config, handle, notifier, heartbeat, ppfmt).await;
}
// Main loop
while running.load(Ordering::SeqCst) {
// Sleep for interval, checking running flag each second
let secs = interval.as_secs();
let next_time = chrono::Local::now() + chrono::Duration::seconds(secs as i64);
ppfmt.infof(
pp::EMOJI_SLEEP,
&format!(
"Next update at {}",
next_time.format("%Y-%m-%d %H:%M:%S %Z")
),
);
for _ in 0..secs {
if !running.load(Ordering::SeqCst) {
return;
}
sleep(Duration::from_secs(1)).await;
}
if !running.load(Ordering::SeqCst) {
return;
}
updater::update_once(config, handle, notifier, heartbeat, ppfmt).await;
}
}
}
}
/// Format a duration compactly, e.g. "1h30m", "5m30s", "45s".
fn describe_duration(d: Duration) -> String {
let secs = d.as_secs();
if secs >= 3600 {
let hours = secs / 3600;
let mins = (secs % 3600) / 60;
if mins > 0 {
format!("{hours}h{mins}m")
} else {
format!("{hours}h")
}
} else if secs >= 60 {
let mins = secs / 60;
let s = secs % 60;
if s > 0 {
format!("{mins}m{s}s")
} else {
format!("{mins}m")
}
} else {
format!("{secs}s")
}
}
// ============================================================
// Tests (backwards compatible with original test suite)
// ============================================================
#[cfg(test)]
mod tests {
use crate::config::{
LegacyAuthentication, LegacyCloudflareEntry, LegacyConfig, LegacySubdomainEntry,
parse_legacy_config,
};
use crate::provider::parse_trace_ip;
use reqwest::Client;
use wiremock::matchers::{method, path, query_param};
use wiremock::{Mock, MockServer, ResponseTemplate};
fn test_config(zone_id: &str) -> LegacyConfig {
LegacyConfig {
cloudflare: vec![LegacyCloudflareEntry {
authentication: LegacyAuthentication {
api_token: "test-token".to_string(),
api_key: None,
},
zone_id: zone_id.to_string(),
subdomains: vec![
LegacySubdomainEntry::Detailed {
name: "".to_string(),
proxied: false,
},
LegacySubdomainEntry::Detailed {
name: "vpn".to_string(),
proxied: true,
},
],
proxied: false,
}],
a: true,
aaaa: false,
purge_unknown_records: false,
ttl: 300,
}
}
// Minimal legacy-style DDNS client used only by these tests
struct TestDdnsClient {
client: Client,
cf_api_base: String,
ipv4_urls: Vec<String>,
dry_run: bool,
}
impl TestDdnsClient {
fn new(base_url: &str) -> Self {
Self {
client: Client::new(),
cf_api_base: base_url.to_string(),
ipv4_urls: vec![format!("{base_url}/cdn-cgi/trace")],
dry_run: false,
}
}
fn dry_run(mut self) -> Self {
self.dry_run = true;
self
}
async fn cf_api<T: serde::de::DeserializeOwned>(
&self,
endpoint: &str,
method_str: &str,
token: &str,
body: Option<&impl serde::Serialize>,
) -> Option<T> {
let url = format!("{}/{endpoint}", self.cf_api_base);
let mut req = match method_str {
"GET" => self.client.get(&url),
"POST" => self.client.post(&url),
"PUT" => self.client.put(&url),
"DELETE" => self.client.delete(&url),
_ => return None,
};
req = req.header("Authorization", format!("Bearer {token}"));
if let Some(b) = body {
req = req.json(b);
}
match req.send().await {
Ok(resp) if resp.status().is_success() => resp.json::<T>().await.ok(),
Ok(resp) => {
let text = resp.text().await.unwrap_or_default();
eprintln!("Error: {text}");
None
}
Err(e) => {
eprintln!("Exception: {e}");
None
}
}
}
async fn get_ip(&self) -> Option<String> {
for url in &self.ipv4_urls {
if let Ok(resp) = self.client.get(url).send().await {
if let Ok(body) = resp.text().await {
if let Some(ip) = parse_trace_ip(&body) {
return Some(ip);
}
}
}
}
None
}
async fn commit_record(
&self,
ip: &str,
record_type: &str,
config: &[LegacyCloudflareEntry],
ttl: i64,
purge_unknown_records: bool,
) {
for entry in config {
#[derive(serde::Deserialize)]
struct Resp<T> {
result: Option<T>,
}
#[derive(serde::Deserialize)]
struct Zone {
name: String,
}
#[derive(serde::Deserialize)]
struct Rec {
id: String,
name: String,
content: String,
proxied: bool,
}
let zone_resp: Option<Resp<Zone>> = self
.cf_api(
&format!("zones/{}", entry.zone_id),
"GET",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
let base_domain = match zone_resp.and_then(|r| r.result) {
Some(z) => z.name,
None => continue,
};
for subdomain in &entry.subdomains {
let (name, proxied) = match subdomain {
LegacySubdomainEntry::Detailed { name, proxied } => {
(name.to_lowercase().trim().to_string(), *proxied)
}
LegacySubdomainEntry::Simple(name) => {
(name.to_lowercase().trim().to_string(), entry.proxied)
}
};
let fqdn = crate::domain::make_fqdn(&name, &base_domain);
#[derive(serde::Serialize)]
struct Payload {
#[serde(rename = "type")]
record_type: String,
name: String,
content: String,
proxied: bool,
ttl: i64,
}
let record = Payload {
record_type: record_type.to_string(),
name: fqdn.clone(),
content: ip.to_string(),
proxied,
ttl,
};
let dns_endpoint = format!(
"zones/{}/dns_records?per_page=100&type={record_type}",
entry.zone_id
);
let dns_records: Option<Resp<Vec<Rec>>> = self
.cf_api(
&dns_endpoint,
"GET",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
let mut identifier: Option<String> = None;
let mut modified = false;
let mut duplicate_ids: Vec<String> = Vec::new();
if let Some(resp) = dns_records {
if let Some(records) = resp.result {
for r in &records {
if r.name == fqdn {
if let Some(ref existing_id) = identifier {
if r.content == ip {
duplicate_ids.push(existing_id.clone());
identifier = Some(r.id.clone());
} else {
duplicate_ids.push(r.id.clone());
}
} else {
identifier = Some(r.id.clone());
if r.content != ip || r.proxied != proxied {
modified = true;
}
}
}
}
}
}
if let Some(ref id) = identifier {
if modified {
if self.dry_run {
println!("[DRY RUN] Would update record {fqdn} -> {ip}");
} else {
println!("Updating record {fqdn} -> {ip}");
let update_endpoint =
format!("zones/{}/dns_records/{id}", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&update_endpoint,
"PUT",
&entry.authentication.api_token,
Some(&record),
)
.await;
}
} else if self.dry_run {
println!("[DRY RUN] Record {fqdn} is up to date ({ip})");
}
} else if self.dry_run {
println!("[DRY RUN] Would add new record {fqdn} -> {ip}");
} else {
println!("Adding new record {fqdn} -> {ip}");
let create_endpoint =
format!("zones/{}/dns_records", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&create_endpoint,
"POST",
&entry.authentication.api_token,
Some(&record),
)
.await;
}
if purge_unknown_records {
for dup_id in &duplicate_ids {
if self.dry_run {
println!("[DRY RUN] Would delete stale record {dup_id}");
} else {
println!("Deleting stale record {dup_id}");
let del_endpoint =
format!("zones/{}/dns_records/{dup_id}", entry.zone_id);
let _: Option<serde_json::Value> = self
.cf_api(
&del_endpoint,
"DELETE",
&entry.authentication.api_token,
None::<&()>.as_ref(),
)
.await;
}
}
}
}
}
}
}
#[test]
fn test_parse_trace_ip() {
let body = "fl=1f1\nh=1.1.1.1\nip=203.0.113.42\nts=1234567890\nvisit_scheme=https\n";
assert_eq!(parse_trace_ip(body), Some("203.0.113.42".to_string()));
}
#[test]
fn test_parse_trace_ip_missing() {
let body = "fl=1f1\nh=1.1.1.1\nts=1234567890\n";
assert_eq!(parse_trace_ip(body), None);
}
#[test]
fn test_parse_config_minimal() {
let json = r#"{
"cloudflare": [{
"authentication": { "api_token": "tok123" },
"zone_id": "zone1",
"subdomains": ["@"]
}]
}"#;
let config = parse_legacy_config(json).unwrap();
assert!(config.a);
assert!(config.aaaa);
assert!(!config.purge_unknown_records);
assert_eq!(config.ttl, 300);
}
#[test]
fn test_parse_config_low_ttl() {
let json = r#"{
"cloudflare": [{
"authentication": { "api_token": "tok123" },
"zone_id": "zone1",
"subdomains": ["@"]
}],
"ttl": 10
}"#;
let config = parse_legacy_config(json).unwrap();
assert_eq!(config.ttl, 1);
}
#[tokio::test]
async fn test_ip_detection() {
let mock_server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/cdn-cgi/trace"))
.respond_with(
ResponseTemplate::new(200)
.set_body_string("fl=1f1\nh=mock\nip=198.51.100.7\nts=0\n"),
)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let ip = ddns.get_ip().await;
assert_eq!(ip, Some("198.51.100.7".to_string()));
}
#[tokio::test]
async fn test_creates_new_record() {
let mock_server = MockServer::start().await;
let zone_id = "zone-abc-123";
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": []
})))
.mount(&mock_server)
.await;
Mock::given(method("POST"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "id": "new-record-1" }
})))
.expect(2)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let config = test_config(zone_id);
ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
.await;
}
#[tokio::test]
async fn test_updates_existing_record() {
let mock_server = MockServer::start().await;
let zone_id = "zone-update-1";
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": [
{ "id": "rec-1", "name": "example.com", "content": "10.0.0.1", "proxied": false },
{ "id": "rec-2", "name": "vpn.example.com", "content": "10.0.0.1", "proxied": true }
]
})))
.mount(&mock_server)
.await;
Mock::given(method("PUT"))
.and(path(format!("/zones/{zone_id}/dns_records/rec-1")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "id": "rec-1" }
})))
.expect(1)
.mount(&mock_server)
.await;
Mock::given(method("PUT"))
.and(path(format!("/zones/{zone_id}/dns_records/rec-2")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "id": "rec-2" }
})))
.expect(1)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let config = test_config(zone_id);
ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
.await;
}
#[tokio::test]
async fn test_skips_up_to_date_record() {
let mock_server = MockServer::start().await;
let zone_id = "zone-noop";
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": [
{ "id": "rec-1", "name": "example.com", "content": "198.51.100.7", "proxied": false },
{ "id": "rec-2", "name": "vpn.example.com", "content": "198.51.100.7", "proxied": true }
]
})))
.mount(&mock_server)
.await;
Mock::given(method("PUT"))
.respond_with(ResponseTemplate::new(500))
.expect(0)
.mount(&mock_server)
.await;
Mock::given(method("POST"))
.respond_with(ResponseTemplate::new(500))
.expect(0)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let config = test_config(zone_id);
ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
.await;
}
#[tokio::test]
async fn test_dry_run_does_not_mutate() {
let mock_server = MockServer::start().await;
let zone_id = "zone-dry";
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": []
})))
.mount(&mock_server)
.await;
Mock::given(method("POST"))
.respond_with(ResponseTemplate::new(500))
.expect(0)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri()).dry_run();
let config = test_config(zone_id);
ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
.await;
}
#[tokio::test]
async fn test_purge_duplicate_records() {
let mock_server = MockServer::start().await;
let zone_id = "zone-purge";
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": [
{ "id": "rec-keep", "name": "example.com", "content": "198.51.100.7", "proxied": false },
{ "id": "rec-dup", "name": "example.com", "content": "198.51.100.7", "proxied": false }
]
})))
.mount(&mock_server)
.await;
Mock::given(method("DELETE"))
.and(path(format!("/zones/{zone_id}/dns_records/rec-keep")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({})))
.expect(1)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let config = LegacyConfig {
cloudflare: vec![LegacyCloudflareEntry {
authentication: LegacyAuthentication {
api_token: "test-token".to_string(),
api_key: None,
},
zone_id: zone_id.to_string(),
subdomains: vec![LegacySubdomainEntry::Detailed {
name: "".to_string(),
proxied: false,
}],
proxied: false,
}],
a: true,
aaaa: false,
purge_unknown_records: true,
ttl: 300,
};
ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, true)
.await;
}
// --- describe_duration tests ---
#[test]
fn test_describe_duration_seconds_only() {
use tokio::time::Duration;
assert_eq!(super::describe_duration(Duration::from_secs(45)), "45s");
}
#[test]
fn test_describe_duration_exact_minutes() {
use tokio::time::Duration;
assert_eq!(super::describe_duration(Duration::from_secs(300)), "5m");
}
#[test]
fn test_describe_duration_minutes_and_seconds() {
use tokio::time::Duration;
assert_eq!(super::describe_duration(Duration::from_secs(330)), "5m30s");
}
#[test]
fn test_describe_duration_exact_hours() {
use tokio::time::Duration;
assert_eq!(super::describe_duration(Duration::from_secs(7200)), "2h");
}
#[test]
fn test_describe_duration_hours_and_minutes() {
use tokio::time::Duration;
assert_eq!(super::describe_duration(Duration::from_secs(5400)), "1h30m");
}
#[tokio::test]
async fn test_end_to_end_detect_and_update() {
let mock_server = MockServer::start().await;
let zone_id = "zone-e2e";
Mock::given(method("GET"))
.and(path("/cdn-cgi/trace"))
.respond_with(
ResponseTemplate::new(200)
.set_body_string("fl=1f1\nh=mock\nip=203.0.113.99\nts=0\n"),
)
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "name": "example.com" }
})))
.mount(&mock_server)
.await;
Mock::given(method("GET"))
.and(path(format!("/zones/{zone_id}/dns_records")))
.and(query_param("type", "A"))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": [
{ "id": "rec-root", "name": "example.com", "content": "10.0.0.1", "proxied": false }
]
})))
.mount(&mock_server)
.await;
Mock::given(method("PUT"))
.and(path(format!("/zones/{zone_id}/dns_records/rec-root")))
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
"result": { "id": "rec-root" }
})))
.expect(1)
.mount(&mock_server)
.await;
let ddns = TestDdnsClient::new(&mock_server.uri());
let ip = ddns.get_ip().await;
assert_eq!(ip, Some("203.0.113.99".to_string()));
let config = LegacyConfig {
cloudflare: vec![LegacyCloudflareEntry {
authentication: LegacyAuthentication {
api_token: "test-token".to_string(),
api_key: None,
},
zone_id: zone_id.to_string(),
subdomains: vec![LegacySubdomainEntry::Detailed {
name: "".to_string(),
proxied: false,
}],
proxied: false,
}],
a: true,
aaaa: false,
purge_unknown_records: false,
ttl: 300,
};
ddns.commit_record("203.0.113.99", "A", &config.cloudflare, 300, false)
.await;
}
}

src/notifier.rs Normal file

File diff suppressed because it is too large

src/pp.rs Normal file

@@ -0,0 +1,435 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
// Verbosity levels, ordered least to most verbose; the derived Ord makes
// is_showing's `>=` comparison work.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Verbosity {
Quiet,
Notice,
Info,
Verbose,
}
// Emoji constants
#[allow(dead_code)]
pub const EMOJI_GLOBE: &str = "\u{1F30D}";
pub const EMOJI_WARNING: &str = "\u{26A0}\u{FE0F}";
pub const EMOJI_ERROR: &str = "\u{274C}";
#[allow(dead_code)]
pub const EMOJI_SUCCESS: &str = "\u{2705}";
pub const EMOJI_LAUNCH: &str = "\u{1F680}";
pub const EMOJI_STOP: &str = "\u{1F6D1}";
pub const EMOJI_SLEEP: &str = "\u{1F634}";
pub const EMOJI_DETECT: &str = "\u{1F50D}";
pub const EMOJI_UPDATE: &str = "\u{2B06}\u{FE0F}";
pub const EMOJI_CREATE: &str = "\u{2795}";
pub const EMOJI_DELETE: &str = "\u{2796}";
pub const EMOJI_SKIP: &str = "\u{23ED}\u{FE0F}";
pub const EMOJI_NOTIFY: &str = "\u{1F514}";
pub const EMOJI_HEARTBEAT: &str = "\u{1F493}";
pub const EMOJI_CONFIG: &str = "\u{2699}\u{FE0F}";
#[allow(dead_code)]
pub const EMOJI_HINT: &str = "\u{1F4A1}";
const INDENT_PREFIX: &str = " ";
pub struct PP {
pub verbosity: Verbosity,
pub emoji: bool,
indent: usize,
seen: Arc<Mutex<HashSet<String>>>,
}
impl PP {
pub fn new(emoji: bool, quiet: bool) -> Self {
Self {
verbosity: if quiet { Verbosity::Quiet } else { Verbosity::Verbose },
emoji,
indent: 0,
seen: Arc::new(Mutex::new(HashSet::new())),
}
}
pub fn default_pp() -> Self {
Self::new(false, false)
}
pub fn is_showing(&self, level: Verbosity) -> bool {
self.verbosity >= level
}
pub fn indent(&self) -> PP {
PP {
verbosity: self.verbosity,
emoji: self.emoji,
indent: self.indent + 1,
seen: Arc::clone(&self.seen),
}
}
fn output(&self, emoji: &str, msg: &str) {
let prefix = INDENT_PREFIX.repeat(self.indent);
if self.emoji && !emoji.is_empty() {
println!("{prefix}{emoji} {msg}");
} else {
println!("{prefix}{msg}");
}
}
fn output_err(&self, emoji: &str, msg: &str) {
let prefix = INDENT_PREFIX.repeat(self.indent);
if self.emoji && !emoji.is_empty() {
eprintln!("{prefix}{emoji} {msg}");
} else {
eprintln!("{prefix}{msg}");
}
}
pub fn infof(&self, emoji: &str, msg: &str) {
if self.is_showing(Verbosity::Info) {
self.output(emoji, msg);
}
}
pub fn noticef(&self, emoji: &str, msg: &str) {
if self.is_showing(Verbosity::Notice) {
self.output(emoji, msg);
}
}
pub fn warningf(&self, emoji: &str, msg: &str) {
self.output_err(emoji, msg);
}
pub fn errorf(&self, emoji: &str, msg: &str) {
self.output_err(emoji, msg);
}
#[allow(dead_code)]
pub fn info_once(&self, key: &str, emoji: &str, msg: &str) {
if self.is_showing(Verbosity::Info) {
let mut seen = self.seen.lock().unwrap();
if seen.insert(key.to_string()) {
self.output(emoji, msg);
}
}
}
#[allow(dead_code)]
pub fn notice_once(&self, key: &str, emoji: &str, msg: &str) {
if self.is_showing(Verbosity::Notice) {
let mut seen = self.seen.lock().unwrap();
if seen.insert(key.to_string()) {
self.output(emoji, msg);
}
}
}
#[allow(dead_code)]
pub fn blank_line_if_verbose(&self) {
if self.is_showing(Verbosity::Verbose) {
println!();
}
}
}
#[allow(dead_code)]
pub fn english_join(items: &[String]) -> String {
match items.len() {
0 => String::new(),
1 => items[0].clone(),
2 => format!("{} and {}", items[0], items[1]),
_ => {
let (last, rest) = items.split_last().unwrap();
format!("{}, and {last}", rest.join(", "))
}
}
}
#[cfg(test)]
mod tests {
    use super::*;

    // ---- PP::new with emoji flag ----
    #[test]
    fn new_with_emoji_true() {
        let pp = PP::new(true, false);
        assert!(pp.emoji);
    }

    #[test]
    fn new_with_emoji_false() {
        let pp = PP::new(false, false);
        assert!(!pp.emoji);
    }

    // ---- PP::new with quiet flag (verbosity levels) ----
    #[test]
    fn new_quiet_true_sets_verbosity_quiet() {
        let pp = PP::new(false, true);
        assert_eq!(pp.verbosity, Verbosity::Quiet);
    }

    #[test]
    fn new_quiet_false_sets_verbosity_verbose() {
        let pp = PP::new(false, false);
        assert_eq!(pp.verbosity, Verbosity::Verbose);
    }

    // ---- PP::is_showing at different verbosity levels ----
    #[test]
    fn quiet_shows_only_quiet_level() {
        let pp = PP::new(false, true);
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(!pp.is_showing(Verbosity::Notice));
        assert!(!pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn verbose_shows_all_levels() {
        let pp = PP::new(false, false);
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(pp.is_showing(Verbosity::Info));
        assert!(pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn notice_level_shows_quiet_and_notice_only() {
        let mut pp = PP::new(false, false);
        pp.verbosity = Verbosity::Notice;
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(!pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    #[test]
    fn info_level_shows_up_to_info() {
        let mut pp = PP::new(false, false);
        pp.verbosity = Verbosity::Info;
        assert!(pp.is_showing(Verbosity::Quiet));
        assert!(pp.is_showing(Verbosity::Notice));
        assert!(pp.is_showing(Verbosity::Info));
        assert!(!pp.is_showing(Verbosity::Verbose));
    }

    // ---- PP::indent ----
    #[test]
    fn indent_increments_indent_level() {
        let pp = PP::new(true, false);
        assert_eq!(pp.indent, 0);
        let child = pp.indent();
        assert_eq!(child.indent, 1);
        let grandchild = child.indent();
        assert_eq!(grandchild.indent, 2);
    }

    #[test]
    fn indent_preserves_verbosity_and_emoji() {
        let pp = PP::new(true, true);
        let child = pp.indent();
        assert_eq!(child.verbosity, pp.verbosity);
        assert_eq!(child.emoji, pp.emoji);
    }

    #[test]
    fn indent_shares_seen_state() {
        let pp = PP::new(false, false);
        let child = pp.indent();
        // Insert via parent's seen set
        pp.seen.lock().unwrap().insert("key1".to_string());
        // Child should observe the same entry
        assert!(child.seen.lock().unwrap().contains("key1"));
        // Insert via child
        child.seen.lock().unwrap().insert("key2".to_string());
        // Parent should observe it too
        assert!(pp.seen.lock().unwrap().contains("key2"));
    }

    // ---- PP::infof, noticef, warningf, errorf - no panic and verbosity gating ----
    #[test]
    fn infof_does_not_panic_when_verbose() {
        let pp = PP::new(false, false);
        pp.infof("", "test info message");
    }

    #[test]
    fn infof_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        // Should simply not print, and not panic
        pp.infof("", "test info message");
    }

    #[test]
    fn noticef_does_not_panic_when_verbose() {
        let pp = PP::new(true, false);
        pp.noticef(EMOJI_DETECT, "test notice message");
    }

    #[test]
    fn noticef_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        pp.noticef("", "test notice message");
    }

    #[test]
    fn warningf_does_not_panic() {
        let pp = PP::new(true, false);
        pp.warningf(EMOJI_WARNING, "test warning");
    }

    #[test]
    fn warningf_does_not_panic_when_quiet() {
        // warningf always outputs (no verbosity check), just verify no panic
        let pp = PP::new(false, true);
        pp.warningf("", "test warning");
    }

    #[test]
    fn errorf_does_not_panic() {
        let pp = PP::new(true, false);
        pp.errorf(EMOJI_ERROR, "test error");
    }

    #[test]
    fn errorf_does_not_panic_when_quiet() {
        let pp = PP::new(false, true);
        pp.errorf("", "test error");
    }

    // ---- PP::info_once and notice_once ----
    #[test]
    fn info_once_suppresses_duplicates() {
        let pp = PP::new(false, false);
        // First call inserts the key
        pp.info_once("dup_key", "", "first");
        // The key should now be in the seen set
        assert!(pp.seen.lock().unwrap().contains("dup_key"));
        // Calling again with the same key should not insert again (set unchanged)
        let size_before = pp.seen.lock().unwrap().len();
        pp.info_once("dup_key", "", "second");
        let size_after = pp.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
    }

    #[test]
    fn info_once_allows_different_keys() {
        let pp = PP::new(false, false);
        pp.info_once("key_a", "", "msg a");
        pp.info_once("key_b", "", "msg b");
        let seen = pp.seen.lock().unwrap();
        assert!(seen.contains("key_a"));
        assert!(seen.contains("key_b"));
        assert_eq!(seen.len(), 2);
    }

    #[test]
    fn info_once_skipped_when_quiet() {
        let pp = PP::new(false, true);
        pp.info_once("quiet_key", "", "should not register");
        // Because verbosity is Quiet, info_once should not even insert the key
        assert!(!pp.seen.lock().unwrap().contains("quiet_key"));
    }

    #[test]
    fn notice_once_suppresses_duplicates() {
        let pp = PP::new(false, false);
        pp.notice_once("notice_dup", "", "first");
        assert!(pp.seen.lock().unwrap().contains("notice_dup"));
        let size_before = pp.seen.lock().unwrap().len();
        pp.notice_once("notice_dup", "", "second");
        let size_after = pp.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
    }

    #[test]
    fn notice_once_skipped_when_quiet() {
        let pp = PP::new(false, true);
        pp.notice_once("quiet_notice", "", "should not register");
        assert!(!pp.seen.lock().unwrap().contains("quiet_notice"));
    }

    #[test]
    fn info_once_shared_via_indent() {
        let pp = PP::new(false, false);
        let child = pp.indent();
        // Mark a key via the parent
        pp.info_once("shared_key", "", "parent");
        assert!(pp.seen.lock().unwrap().contains("shared_key"));
        // Child should see it as already present, so set size stays the same
        let size_before = child.seen.lock().unwrap().len();
        child.info_once("shared_key", "", "child duplicate");
        let size_after = child.seen.lock().unwrap().len();
        assert_eq!(size_before, size_after);
        // Child can add a new key visible to parent
        child.info_once("child_key", "", "child new");
        assert!(pp.seen.lock().unwrap().contains("child_key"));
    }

    // ---- english_join ----
    #[test]
    fn english_join_empty() {
        let items: Vec<String> = vec![];
        assert_eq!(english_join(&items), "");
    }

    #[test]
    fn english_join_single() {
        let items = vec!["alpha".to_string()];
        assert_eq!(english_join(&items), "alpha");
    }

    #[test]
    fn english_join_two() {
        let items = vec!["alpha".to_string(), "beta".to_string()];
        assert_eq!(english_join(&items), "alpha and beta");
    }

    #[test]
    fn english_join_three() {
        let items = vec![
            "alpha".to_string(),
            "beta".to_string(),
            "gamma".to_string(),
        ];
        assert_eq!(english_join(&items), "alpha, beta, and gamma");
    }

    #[test]
    fn english_join_four() {
        let items = vec![
            "a".to_string(),
            "b".to_string(),
            "c".to_string(),
            "d".to_string(),
        ];
        assert_eq!(english_join(&items), "a, b, c, and d");
    }

    // ---- default_pp ----
    #[test]
    fn default_pp_is_verbose_no_emoji() {
        let pp = PP::default_pp();
        assert!(!pp.emoji);
        assert_eq!(pp.verbosity, Verbosity::Verbose);
    }
}
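The key design point in `src/pp.rs` is that `indent()` clones the shared `Arc<Mutex<HashSet<String>>>` rather than creating a fresh set, so `info_once`/`notice_once` deduplicate across the parent printer and every indented child. A minimal, self-contained sketch of just that pattern (the `OncePrinter` type here is illustrative, not part of the crate):

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

// Sketch of the once-only printing pattern used by pp.rs: cloning the
// struct clones the Arc, so all clones share one deduplication set.
#[derive(Clone)]
struct OncePrinter {
    seen: Arc<Mutex<HashSet<String>>>,
}

impl OncePrinter {
    fn new() -> Self {
        Self {
            seen: Arc::new(Mutex::new(HashSet::new())),
        }
    }

    /// Prints `msg` and returns true only the first time `key` is seen.
    fn print_once(&self, key: &str, msg: &str) -> bool {
        let mut seen = self.seen.lock().unwrap();
        if seen.insert(key.to_string()) {
            println!("{msg}");
            true
        } else {
            false
        }
    }
}

fn main() {
    let parent = OncePrinter::new();
    let child = parent.clone(); // analogous to PP::indent()
    assert!(parent.print_once("hint", "printed once"));
    // The child shares the parent's set, so the duplicate is suppressed.
    assert!(!child.print_once("hint", "never printed"));
    // A different key still gets through.
    assert!(child.print_once("other", "also printed"));
}
```

This is why the `info_once_shared_via_indent` test above passes: the child never re-prints a key the parent has already registered, and vice versa.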

1340
src/provider.rs Normal file

File diff suppressed because it is too large.

2389
src/updater.rs Normal file

File diff suppressed because it is too large.


@@ -1,10 +0,0 @@
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
python3 -m venv venv
source ./venv/bin/activate
pip3 install requests
cd $DIR
python3 cloudflare-ddns.py


@@ -0,0 +1,13 @@
[Unit]
Description=Update DDNS on Cloudflare
ConditionPathExists=/etc/cloudflare-ddns/config.json
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
Environment=CONFIG_PATH=/etc/cloudflare-ddns
ExecStart=cloudflare-ddns

[Install]
WantedBy=multi-user.target


@@ -0,0 +1,9 @@
[Unit]
Description=Update DDNS on Cloudflare every 15 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=15m

[Install]
WantedBy=timers.target