3 Commits

Author SHA1 Message Date
Timothy Miller
8c7af02698 Revise SECURITY.md with version support and reporting updates
Updated the security policy to include new version support details and improved reporting guidelines for vulnerabilities.
2026-03-19 23:34:45 -04:00
Timothy Miller
245ac0b061 Potential fix for code scanning alert no. 6: Workflow does not contain permissions
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-03-19 23:30:56 -04:00
Timothy Miller
2446c1d6a0 Bump crate to 2.0.8 and refine updater behavior
Deduplicate up-to-date messages by tracking noop keys and move logging
to the updater so callers only log the first noop.
Reuse a single reqwest Client for IP detection instead of rebuilding it
for each call.
Always ping heartbeat even when there are no meaningful changes.
Fix Pushover shoutrrr parsing (token@user order) and update tests
2026-03-19 23:22:20 -04:00
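The noop-key deduplication described in the commit message above can be sketched as follows. This is a minimal illustration, not the crate's actual API: the function names (`report_noop`, `report_change`) and the `"{fqdn}:{record_type}"` key shape are taken from the diff below, but the helpers themselves are hypothetical.

```rust
use std::collections::HashSet;

/// First noop for a key returns true (log "up to date"); repeats return false.
fn report_noop(noop_reported: &mut HashSet<String>, key: &str) -> bool {
    // HashSet::insert returns true only the first time the key is seen,
    // so the "up to date" line is emitted exactly once per quiet streak.
    noop_reported.insert(key.to_string())
}

/// An update or failure clears the key so the next noop logs again.
fn report_change(noop_reported: &mut HashSet<String>, key: &str) {
    noop_reported.remove(key);
}

fn main() {
    let mut seen = HashSet::new();
    assert!(report_noop(&mut seen, "home.example.com:A")); // first noop: log
    assert!(!report_noop(&mut seen, "home.example.com:A")); // repeat: silent
    report_change(&mut seen, "home.example.com:A"); // record changed
    assert!(report_noop(&mut seen, "home.example.com:A")); // logs again
    println!("noop dedup behaves as expected");
}
```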
8 changed files with 299 additions and 97 deletions
@@ -9,6 +9,8 @@ on:
 jobs:
   build:
+    permissions:
+      contents: read
     runs-on: ubuntu-latest
    steps:
      - name: Checkout code

Cargo.lock (generated)

@@ -109,7 +109,7 @@ dependencies = [
 [[package]]
 name = "cloudflare-ddns"
-version = "2.0.6"
+version = "2.0.8"
 dependencies = [
  "chrono",
  "idna",


@@ -1,6 +1,6 @@
 [package]
 name = "cloudflare-ddns"
-version = "2.0.7"
+version = "2.0.8"
 edition = "2021"
 description = "Access your home network remotely via a custom domain name without a static IP"
 license = "GPL-3.0"

SECURITY.md (new file)

@@ -0,0 +1,78 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 2.0.x | :white_check_mark: |
| < 2.0 | :x: |
Only the latest release in the `2.0.x` series receives security updates. The legacy Python codebase and all `1.x` releases are **end-of-life** and will not be patched. Users on older versions should upgrade to the latest release immediately.
## Reporting a Vulnerability
**Please do not open a public GitHub issue for security vulnerabilities.**
Instead, report vulnerabilities privately using one of the following methods:
1. **GitHub Private Vulnerability Reporting** — Use the [Security Advisories](https://github.com/timothymiller/cloudflare-ddns/security/advisories/new) page to submit a private report directly on GitHub.
2. **Email** — Contact the maintainer directly at the email address listed on the [GitHub profile](https://github.com/timothymiller).
### What to Include
- A clear description of the vulnerability and its potential impact
- Steps to reproduce or a proof-of-concept
- Affected version(s)
- Any suggested fix or mitigation, if applicable
### What to Expect
- **Acknowledgment** within 72 hours of your report
- **Status updates** at least every 7 days while the issue is being investigated
- A coordinated disclosure timeline — we aim to release a fix within 30 days of a confirmed vulnerability, and will credit reporters (unless anonymity is preferred) in the release notes
If a report is declined (e.g., out of scope or not reproducible), you will receive an explanation.
## Security Considerations
This project handles **Cloudflare API tokens** that grant DNS editing privileges. Users should be aware of the following:
### API Token Handling
- **Never commit your API token** to version control or include it in Docker images.
- Use `CLOUDFLARE_API_TOKEN_FILE` or Docker secrets to inject tokens at runtime rather than passing them as plain environment variables where possible.
- Create a **scoped API token** with only "Edit DNS" permission on the specific zones you need — avoid using Global API Keys.
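The token-handling advice above (prefer `CLOUDFLARE_API_TOKEN_FILE` / Docker secrets over plain environment variables) can be sketched like this. This is a hedged illustration, not the project's actual loading code: `resolve_api_token` is a hypothetical helper, and the fallback order (file first, then env var) is one reasonable design.

```rust
use std::fs;

/// Resolve the API token: prefer a token file (e.g. a Docker secret mounted
/// under /run/secrets) over a plain environment variable. Secrets files
/// usually end with a newline, so trim whitespace before use.
fn resolve_api_token(token_file: Option<&str>, token_env: Option<&str>) -> Option<String> {
    if let Some(path) = token_file {
        if let Ok(contents) = fs::read_to_string(path) {
            return Some(contents.trim().to_string());
        }
        // Unreadable file: fall through to the environment variable.
    }
    token_env.map(|t| t.to_string())
}

fn main() {
    let file = std::env::var("CLOUDFLARE_API_TOKEN_FILE").ok();
    let env_tok = std::env::var("CLOUDFLARE_API_TOKEN").ok();
    match resolve_api_token(file.as_deref(), env_tok.as_deref()) {
        Some(_) => println!("token resolved"),
        None => println!("no token configured"),
    }
}
```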
### Container Security
- The Docker image runs as a **static binary from scratch** with zero runtime dependencies, which minimizes the attack surface.
- Use `security_opt: no-new-privileges:true` in Docker Compose deployments.
- Pin image tags to a specific version (e.g., `timothyjmiller/cloudflare-ddns:v2.0.8`) rather than using `latest` in production.
### Network Security
- The default IP detection provider (`cloudflare.trace`) communicates directly with Cloudflare's infrastructure over HTTPS and does not log your IP.
- All Cloudflare API calls are made over HTTPS/TLS.
- `--network host` mode is required for IPv6 detection — be aware this gives the container access to the host's full network stack.
### Supply Chain
- The project is built with `cargo` and all dependencies are declared in `Cargo.lock` for reproducible builds.
- Docker images are built via GitHub Actions and published to Docker Hub. Multi-arch builds cover `linux/amd64`, `linux/arm64`, and `linux/ppc64le`.
## Scope
The following are considered **in scope** for security reports:
- Authentication or authorization flaws (e.g., token leakage, insufficient credential protection)
- Injection vulnerabilities in configuration parsing
- Vulnerabilities in DNS record handling that could lead to record hijacking or poisoning
- Dependency vulnerabilities with a demonstrable exploit path
- Container escape or privilege escalation
The following are **out of scope**:
- Denial of service against the user's own instance
- Vulnerabilities in Cloudflare's API or infrastructure (report those to [Cloudflare](https://hackerone.com/cloudflare))
- Social engineering attacks
- Issues requiring physical access to the host machine


@@ -467,7 +467,7 @@ impl CloudflareHandle {
                 self.update_record(zone_id, &record.id, &payload, ppfmt).await;
             }
         } else {
-            ppfmt.infof(pp::EMOJI_SKIP, &format!("Record {fqdn} is up to date ({ip_str})"));
+            // Caller handles "up to date" logging based on SetResult::Noop
         }
     } else {
         // Find an existing managed record to update, or create new
@@ -668,10 +668,7 @@ impl CloudflareHandle {
             .collect();
         if to_add.is_empty() && ids_to_delete.is_empty() {
-            ppfmt.infof(
-                pp::EMOJI_SKIP,
-                &format!("WAF list {} is up to date", waf_list.describe()),
-            );
+            // Caller handles "up to date" logging based on SetResult::Noop
             return SetResult::Noop;
         }


@@ -11,8 +11,10 @@ use crate::cloudflare::{Auth, CloudflareHandle};
 use crate::config::{AppConfig, CronSchedule};
 use crate::notifier::{CompositeNotifier, Heartbeat, Message};
 use crate::pp::PP;
+use std::collections::HashSet;
 use std::sync::atomic::{AtomicBool, Ordering};
 use std::sync::Arc;
+use reqwest::Client;
 use tokio::signal;
 use tokio::time::{sleep, Duration};
@@ -117,13 +119,17 @@ async fn main() {
     heartbeat.start().await;
     let mut cf_cache = cf_ip_filter::CachedCloudflareFilter::new();
+    let detection_client = Client::builder()
+        .timeout(app_config.detection_timeout)
+        .build()
+        .unwrap_or_default();
     if app_config.legacy_mode {
         // --- Legacy mode (original cloudflare-ddns behavior) ---
-        run_legacy_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache).await;
+        run_legacy_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache, &detection_client).await;
     } else {
         // --- Env var mode (cf-ddns behavior) ---
-        run_env_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache).await;
+        run_env_mode(&app_config, &handle, &notifier, &heartbeat, &ppfmt, running, &mut cf_cache, &detection_client).await;
     }
     // On shutdown: delete records if configured
@@ -146,12 +152,15 @@ async fn run_legacy_mode(
     ppfmt: &PP,
     running: Arc<AtomicBool>,
     cf_cache: &mut cf_ip_filter::CachedCloudflareFilter,
+    detection_client: &Client,
 ) {
     let legacy = match &config.legacy_config {
         Some(l) => l,
         None => return,
     };
+    let mut noop_reported = HashSet::new();
     if config.repeat {
         match (legacy.a, legacy.aaaa) {
             (true, true) => println!(
@@ -168,7 +177,7 @@
         }
         while running.load(Ordering::SeqCst) {
-            updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt).await;
+            updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
             for _ in 0..legacy.ttl {
                 if !running.load(Ordering::SeqCst) {
@@ -178,7 +187,7 @@
             }
         }
     } else {
-        updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt).await;
+        updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
     }
 }
@@ -190,11 +199,14 @@ async fn run_env_mode(
     ppfmt: &PP,
     running: Arc<AtomicBool>,
     cf_cache: &mut cf_ip_filter::CachedCloudflareFilter,
+    detection_client: &Client,
 ) {
+    let mut noop_reported = HashSet::new();
     match &config.update_cron {
         CronSchedule::Once => {
             if config.update_on_start {
-                updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt).await;
+                updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
             }
         }
         schedule => {
@@ -210,7 +222,7 @@
             // Update on start if configured
             if config.update_on_start {
-                updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt).await;
+                updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
             }
             // Main loop
@@ -237,7 +249,7 @@
                 return;
             }
-            updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt).await;
+            updater::update_once(config, handle, notifier, heartbeat, cf_cache, ppfmt, &mut noop_reported, detection_client).await;
         }
     }
 }
@@ -386,6 +398,7 @@ mod tests {
         config: &[LegacyCloudflareEntry],
         ttl: i64,
         purge_unknown_records: bool,
+        noop_reported: &mut std::collections::HashSet<String>,
     ) {
         for entry in config {
             #[derive(serde::Deserialize)]
@@ -487,8 +500,10 @@ mod tests {
                 }
             }
+            let noop_key = format!("{fqdn}:{record_type}");
             if let Some(ref id) = identifier {
                 if modified {
+                    noop_reported.remove(&noop_key);
                     if self.dry_run {
                         println!("[DRY RUN] Would update record {fqdn} -> {ip}");
                     } else {
@@ -504,10 +519,16 @@ mod tests {
                        )
                        .await;
                    }
-                } else if self.dry_run {
-                    println!("[DRY RUN] Record {fqdn} is up to date ({ip})");
+                } else if noop_reported.insert(noop_key) {
+                    if self.dry_run {
+                        println!("[DRY RUN] Record {fqdn} is up to date");
+                    } else {
+                        println!("Record {fqdn} is up to date");
+                    }
                 }
-            } else if self.dry_run {
+            } else {
+                noop_reported.remove(&noop_key);
+                if self.dry_run {
                     println!("[DRY RUN] Would add new record {fqdn} -> {ip}");
                 } else {
                     println!("Adding new record {fqdn} -> {ip}");
@@ -522,6 +543,7 @@ mod tests {
                    )
                    .await;
                 }
+            }
             if purge_unknown_records {
                 for dup_id in &duplicate_ids {
@@ -640,7 +662,7 @@ mod tests {
         let ddns = TestDdnsClient::new(&mock_server.uri());
         let config = test_config(zone_id);
-        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
+        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false, &mut std::collections::HashSet::new())
             .await;
     }
@@ -689,7 +711,7 @@ mod tests {
         let ddns = TestDdnsClient::new(&mock_server.uri());
         let config = test_config(zone_id);
-        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
+        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false, &mut std::collections::HashSet::new())
             .await;
     }
@@ -732,7 +754,7 @@ mod tests {
         let ddns = TestDdnsClient::new(&mock_server.uri());
         let config = test_config(zone_id);
-        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
+        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false, &mut std::collections::HashSet::new())
             .await;
     }
@@ -766,7 +788,7 @@ mod tests {
         let ddns = TestDdnsClient::new(&mock_server.uri()).dry_run();
         let config = test_config(zone_id);
-        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false)
+        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, false, &mut std::collections::HashSet::new())
             .await;
     }
@@ -823,7 +845,7 @@ mod tests {
             ip4_provider: None,
             ip6_provider: None,
         };
-        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, true)
+        ddns.commit_record("198.51.100.7", "A", &config.cloudflare, 300, true, &mut std::collections::HashSet::new())
             .await;
     }
@@ -925,7 +947,7 @@ mod tests {
             ip6_provider: None,
         };
-        ddns.commit_record("203.0.113.99", "A", &config.cloudflare, 300, false)
+        ddns.commit_record("203.0.113.99", "A", &config.cloudflare, 300, false, &mut std::collections::HashSet::new())
             .await;
     }
 }


@@ -406,7 +406,7 @@ fn parse_shoutrrr_url(url_str: &str) -> Result<ShoutrrrService, String> {
                 service_type: ShoutrrrServiceType::Pushover,
                 webhook_url: format!(
                     "https://api.pushover.net/1/messages.json?token={}&user={}",
-                    parts[1], parts[0]
+                    parts[0], parts[1]
                 ),
             });
         }
@@ -868,7 +868,7 @@ mod tests {
     #[test]
     fn test_parse_pushover() {
-        let result = parse_shoutrrr_url("pushover://userkey@apitoken").unwrap();
+        let result = parse_shoutrrr_url("pushover://apitoken@userkey").unwrap();
         assert_eq!(
             result.webhook_url,
             "https://api.pushover.net/1/messages.json?token=apitoken&user=userkey"
@@ -1307,7 +1307,8 @@ mod tests {
     #[test]
     fn test_pushover_url_query_parsing() {
         // Verify that the pushover webhook URL format contains the right params
-        let service = parse_shoutrrr_url("pushover://myuser@mytoken").unwrap();
+        // shoutrrr format: pushover://token@user
+        let service = parse_shoutrrr_url("pushover://mytoken@myuser").unwrap();
         let parsed = url::Url::parse(&service.webhook_url).unwrap();
         let params: std::collections::HashMap<_, _> = parsed.query_pairs().collect();
         assert_eq!(params.get("token").unwrap().as_ref(), "mytoken");
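The ordering fix above boils down to: in the shoutrrr address form `pushover://token@user`, the part before `@` is the API token and the part after is the user key. A self-contained sketch of that parse (the `pushover_webhook` helper is illustrative; the real `parse_shoutrrr_url` handles many more services and errors):

```rust
/// Build the Pushover webhook URL from a shoutrrr-style address.
/// Shoutrrr uses "pushover://token@user": token first, user second.
fn pushover_webhook(addr: &str) -> Option<String> {
    let rest = addr.strip_prefix("pushover://")?;
    // split_once yields (before '@', after '@') = (token, user)
    let (token, user) = rest.split_once('@')?;
    Some(format!(
        "https://api.pushover.net/1/messages.json?token={token}&user={user}"
    ))
}

fn main() {
    let url = pushover_webhook("pushover://apitoken@userkey").unwrap();
    println!("{url}");
}
```

Swapping `parts[0]` and `parts[1]` without also fixing the test inputs would have left the tests green for the wrong reason, which is why the diff updates both.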


@@ -6,7 +6,7 @@ use crate::notifier::{CompositeNotifier, Heartbeat, Message};
 use crate::pp::{self, PP};
 use crate::provider::IpType;
 use reqwest::Client;
-use std::collections::HashMap;
+use std::collections::{HashMap, HashSet};
 use std::net::IpAddr;
 use std::time::Duration;
@@ -18,18 +18,15 @@ pub async fn update_once(
     heartbeat: &Heartbeat,
     cf_cache: &mut CachedCloudflareFilter,
     ppfmt: &PP,
+    noop_reported: &mut HashSet<String>,
+    detection_client: &Client,
 ) -> bool {
-    let detection_client = Client::builder()
-        .timeout(config.detection_timeout)
-        .build()
-        .unwrap_or_default();
     let mut all_ok = true;
     let mut messages = Vec::new();
     let mut notify = false; // NEW: track meaningful events
     if config.legacy_mode {
-        all_ok = update_legacy(config, cf_cache, ppfmt).await;
+        all_ok = update_legacy(config, cf_cache, ppfmt, noop_reported, detection_client).await;
     } else {
         // Detect IPs for each provider
         let mut detected_ips: HashMap<IpType, Vec<IpAddr>> = HashMap::new();
@@ -153,9 +150,11 @@ pub async fn update_once(
                )
                .await;
+                let noop_key = format!("{domain_str}:{record_type}");
                 match result {
                     SetResult::Updated => {
-                        notify = true; // NEW
+                        noop_reported.remove(&noop_key);
+                        notify = true;
                         let ip_strs: Vec<String> = ips.iter().map(|ip| ip.to_string()).collect();
                         messages.push(Message::new_ok(&format!(
                             "Updated {domain_str} -> {}",
@@ -163,13 +162,18 @@ pub async fn update_once(
                         )));
                     }
                     SetResult::Failed => {
-                        notify = true; // NEW
+                        noop_reported.remove(&noop_key);
+                        notify = true;
                         all_ok = false;
                         messages.push(Message::new_fail(&format!(
                             "Failed to update {domain_str}"
                         )));
                     }
-                    SetResult::Noop => {}
+                    SetResult::Noop => {
+                        if noop_reported.insert(noop_key) {
+                            ppfmt.infof(pp::EMOJI_SKIP, &format!("Record {domain_str} is up to date"));
+                        }
+                    }
                 }
             }
         }
@@ -194,32 +198,37 @@ pub async fn update_once(
            )
            .await;
+            let noop_key = format!("waf:{}", waf_list.describe());
             match result {
                 SetResult::Updated => {
-                    notify = true; // NEW
+                    noop_reported.remove(&noop_key);
+                    notify = true;
                     messages.push(Message::new_ok(&format!(
                         "Updated WAF list {}",
                         waf_list.describe()
                     )));
                 }
                 SetResult::Failed => {
-                    notify = true; // NEW
+                    noop_reported.remove(&noop_key);
+                    notify = true;
                     all_ok = false;
                     messages.push(Message::new_fail(&format!(
                         "Failed to update WAF list {}",
                         waf_list.describe()
                     )));
                 }
-                SetResult::Noop => {}
+                SetResult::Noop => {
+                    if noop_reported.insert(noop_key) {
+                        ppfmt.infof(pp::EMOJI_SKIP, &format!("WAF list {} is up to date", waf_list.describe()));
+                    }
+                }
             }
         }
     }
-    // Send heartbeat ONLY if something meaningful happened
-    if notify {
-        let heartbeat_msg = Message::merge(messages.clone());
-        heartbeat.ping(&heartbeat_msg).await;
-    }
+    // Always ping heartbeat so monitors know the updater is alive
+    let heartbeat_msg = Message::merge(messages.clone());
+    heartbeat.ping(&heartbeat_msg).await;
     // Send notifications ONLY when IP changed or failed
     if notify {
@@ -236,29 +245,27 @@
 /// IP-family-bound clients (0.0.0.0 for IPv4, [::] for IPv6). This prevents the old
 /// wrong-family warning on dual-stack hosts and honours `ip4_provider`/`ip6_provider`
 /// overrides from config.json.
-async fn update_legacy(config: &AppConfig, cf_cache: &mut CachedCloudflareFilter, ppfmt: &PP) -> bool {
+async fn update_legacy(
+    config: &AppConfig,
+    cf_cache: &mut CachedCloudflareFilter,
+    ppfmt: &PP,
+    noop_reported: &mut HashSet<String>,
+    detection_client: &Client,
+) -> bool {
     let legacy = match &config.legacy_config {
         Some(l) => l,
         None => return false,
     };
-    let client = Client::builder()
-        .timeout(config.update_timeout)
-        .build()
-        .unwrap_or_default();
-    let ddns = LegacyDdnsClient {
-        client,
+    let ddns = LegacyDdnsClient {
+        client: Client::builder()
+            .timeout(config.update_timeout)
+            .build()
+            .unwrap_or_default(),
         cf_api_base: "https://api.cloudflare.com/client/v4".to_string(),
         dry_run: config.dry_run,
     };
-    // Detect IPs using the shared provider abstraction
-    let detection_client = Client::builder()
-        .timeout(config.detection_timeout)
-        .build()
-        .unwrap_or_default();
     let mut ips = HashMap::new();
     for (ip_type, provider) in &config.providers {
@@ -339,6 +346,7 @@ async fn update_legacy(config: &AppConfig, cf_cache: &mut CachedCloudflareFilter
             &legacy.cloudflare,
             legacy.ttl,
             legacy.purge_unknown_records,
+            noop_reported,
        )
        .await;
@@ -490,9 +498,10 @@ impl LegacyDdnsClient {
         config: &[LegacyCloudflareEntry],
         ttl: i64,
         purge_unknown_records: bool,
+        noop_reported: &mut HashSet<String>,
     ) {
         for ip in ips.values() {
-            self.commit_record(ip, config, ttl, purge_unknown_records)
+            self.commit_record(ip, config, ttl, purge_unknown_records, noop_reported)
                 .await;
         }
     }
@@ -503,6 +512,7 @@ impl LegacyDdnsClient {
         config: &[LegacyCloudflareEntry],
         ttl: i64,
         purge_unknown_records: bool,
+        noop_reported: &mut HashSet<String>,
     ) {
         for entry in config {
             let zone_resp: Option<LegacyCfResponse<LegacyZoneResult>> = self
@@ -578,8 +588,10 @@ impl LegacyDdnsClient {
                 }
             }
+            let noop_key = format!("{fqdn}:{}", ip.record_type);
             if let Some(ref id) = identifier {
                 if modified {
+                    noop_reported.remove(&noop_key);
                     if self.dry_run {
                         println!("[DRY RUN] Would update record {fqdn} -> {}", ip.ip);
                     } else {
@@ -590,10 +602,16 @@ impl LegacyDdnsClient {
                        .cf_api(&update_endpoint, "PUT", entry, Some(&record))
                        .await;
                    }
-                } else if self.dry_run {
-                    println!("[DRY RUN] Record {fqdn} is up to date ({})", ip.ip);
+                } else if noop_reported.insert(noop_key) {
+                    if self.dry_run {
+                        println!("[DRY RUN] Record {fqdn} is up to date");
+                    } else {
+                        println!("Record {fqdn} is up to date");
+                    }
                 }
-            } else if self.dry_run {
+            } else {
+                noop_reported.remove(&noop_key);
+                if self.dry_run {
                     println!("[DRY RUN] Would add new record {fqdn} -> {}", ip.ip);
                 } else {
                     println!("Adding new record {fqdn} -> {}", ip.ip);
@@ -602,6 +620,7 @@ impl LegacyDdnsClient {
                    .cf_api(&create_endpoint, "POST", entry, Some(&record))
                    .await;
                 }
+            }
             if purge_unknown_records {
                 for dup_id in &duplicate_ids {
@@ -804,11 +823,12 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
-    /// update_once returns true (all_ok) when IP is already correct (Noop).
+    /// update_once returns true (all_ok) when IP is already correct (Noop),
+    /// and populates noop_reported so subsequent calls suppress the message.
     #[tokio::test]
     async fn test_update_once_noop_when_record_up_to_date() {
         let server = MockServer::start().await;
@@ -853,8 +873,90 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let mut noop_reported = HashSet::new();
+        // First call: noop_reported is empty, so "up to date" is reported and key is inserted
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut noop_reported, &Client::new()).await;
         assert!(ok);
+        assert!(noop_reported.contains("home.example.com:A"), "noop_reported should contain the domain key after first noop");
+        // Second call: noop_reported already has the key, so the message is suppressed
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut noop_reported, &Client::new()).await;
+        assert!(ok);
+        assert_eq!(noop_reported.len(), 1, "noop_reported should still have exactly one entry");
+    }
+
+    /// noop_reported is cleared when a record is updated, so "up to date" prints again
+    /// on the next noop cycle.
+    #[tokio::test]
+    async fn test_update_once_noop_reported_cleared_on_change() {
+        let server = MockServer::start().await;
+        let zone_id = "zone-abc";
+        let domain = "home.example.com";
+        let old_ip = "198.51.100.42";
+        let new_ip = "198.51.100.99";
+        // Zone lookup
+        Mock::given(method("GET"))
+            .and(path("/zones"))
+            .and(query_param("name", domain))
+            .respond_with(
+                ResponseTemplate::new(200).set_body_json(zones_response(zone_id, "example.com")),
+            )
+            .mount(&server)
+            .await;
+        // List existing records - record has old IP, will be updated
+        Mock::given(method("GET"))
+            .and(path_regex(format!("/zones/{zone_id}/dns_records")))
+            .respond_with(
+                ResponseTemplate::new(200)
+                    .set_body_json(dns_records_one("rec-1", domain, old_ip)),
+            )
+            .mount(&server)
+            .await;
+        // Create record (new IP doesn't match existing, so it creates + deletes stale)
+        Mock::given(method("POST"))
+            .and(path(format!("/zones/{zone_id}/dns_records")))
+            .respond_with(
+                ResponseTemplate::new(200)
+                    .set_body_json(dns_record_created("rec-2", domain, new_ip)),
+            )
+            .mount(&server)
+            .await;
+        // Delete stale record
+        Mock::given(method("DELETE"))
+            .and(path(format!("/zones/{zone_id}/dns_records/rec-1")))
+            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({"result": {}})))
+            .mount(&server)
+            .await;
+        let mut providers = HashMap::new();
+        providers.insert(
+            IpType::V4,
+            ProviderType::Literal {
+                ips: vec![new_ip.parse::<IpAddr>().unwrap()],
+            },
+        );
+        let mut domains = HashMap::new();
+        domains.insert(IpType::V4, vec![domain.to_string()]);
+        let config = make_config(providers, domains, vec![], false);
+        let cf = handle(&server.uri());
+        let notifier = empty_notifier();
+        let heartbeat = empty_heartbeat();
+        let ppfmt = pp();
+        // Pre-populate noop_reported as if a previous cycle reported it
+        let mut noop_reported = HashSet::new();
+        noop_reported.insert("home.example.com:A".to_string());
+        let mut cf_cache = CachedCloudflareFilter::new();
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut noop_reported, &Client::new()).await;
+        assert!(ok);
+        assert!(!noop_reported.contains("home.example.com:A"), "noop_reported should be cleared after an update");
     }

     /// update_once returns true even when IP detection yields empty (no providers configured),
@@ -898,7 +1000,7 @@ mod tests {
         // all_ok = true because no zone-level errors occurred (empty ips just noop or warn)
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         // Providers with None are not inserted in loop, so no IP detection warning is emitted,
         // no detected_ips entry is created, and set_ips is called with empty slice -> Noop.
         assert!(ok);
@@ -948,7 +1050,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(!ok, "Expected false when zone is not found");
     }
@@ -998,7 +1100,7 @@ mod tests {
         // dry_run returns Updated from set_ips (it signals intent), all_ok should be true
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
@@ -1064,7 +1166,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
@@ -1118,7 +1220,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
@@ -1158,7 +1260,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(!ok, "Expected false when WAF list is not found");
     }
@@ -1243,7 +1345,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
@@ -1260,7 +1362,7 @@ mod tests {
         let ppfmt = pp();
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
@@ -1645,7 +1747,7 @@ mod tests {
         // set_ips with empty ips and no existing records = Noop; all_ok = true
         let mut cf_cache = CachedCloudflareFilter::new();
-        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt).await;
+        let ok = update_once(&config, &cf, &notifier, &heartbeat, &mut cf_cache, &ppfmt, &mut HashSet::new(), &Client::new()).await;
         assert!(ok);
     }
     // -------------------------------------------------------
@@ -1850,7 +1952,7 @@ mod tests {
             subdomains: vec![LegacySubdomainEntry::Simple("@".to_string())],
             proxied: false,
         }];
-        ddns.commit_record(&ip, &config, 300, false).await;
+        ddns.commit_record(&ip, &config, 300, false, &mut HashSet::new()).await;
     }
     #[tokio::test]
@@ -1906,7 +2008,7 @@ mod tests {
             subdomains: vec![LegacySubdomainEntry::Simple("@".to_string())],
             proxied: false,
         }];
-        ddns.commit_record(&ip, &config, 300, false).await;
+        ddns.commit_record(&ip, &config, 300, false, &mut HashSet::new()).await;
    }
     #[tokio::test]
@@ -1949,7 +2051,7 @@ mod tests {
            proxied: false,
        }];
        // Should not POST
-        ddns.commit_record(&ip, &config, 300, false).await;
+        ddns.commit_record(&ip, &config, 300, false, &mut HashSet::new()).await;
    }
     #[tokio::test]
@@ -2002,7 +2104,7 @@ mod tests {
            }],
            proxied: false,
        }];
-        ddns.commit_record(&ip, &config, 300, false).await;
+        ddns.commit_record(&ip, &config, 300, false, &mut HashSet::new()).await;
    }
     #[tokio::test]
@@ -2054,7 +2156,7 @@ mod tests {
            subdomains: vec![LegacySubdomainEntry::Simple("@".to_string())],
            proxied: false,
        }];
-        ddns.commit_record(&ip, &config, 300, true).await;
+        ddns.commit_record(&ip, &config, 300, true, &mut HashSet::new()).await;
    }
     #[tokio::test]
@@ -2104,7 +2206,7 @@ mod tests {
            subdomains: vec![LegacySubdomainEntry::Simple("@".to_string())],
            proxied: false,
        }];
-        ddns.update_ips(&ips, &config, 300, false).await;
+        ddns.update_ips(&ips, &config, 300, false, &mut HashSet::new()).await;
    }
     #[tokio::test]