I run a small server with fail2ban protecting nginx and SSH. It worked fine – fail2ban watches logs, detects attackers, and bans them via firewalld. Nothing exotic.
After accumulating hundreds of bans across my f2b jails, fail2ban-client reload became noticeably slow. The default firewallcmd-multiport action adds one firewall rule per banned IP, so every reload rebuilds hundreds of individual rules.
Enter ipset (a suggestion from Claude): instead of one rule per IP, it stores all banned IPs in a kernel-level hash table behind a single firewall rule, so lookup is O(1) regardless of how many IPs are in the set. Fail2ban already ships with firewallcmd-ipset.conf, so the switch is just a config change:
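A minimal sketch of that change, assuming jails are configured through /etc/fail2ban/jail.local (the file path and the [DEFAULT] placement are my conventions, not fail2ban requirements):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
# Swap the per-IP firewallcmd-multiport action for the ipset-backed one;
# each jail then manages one ipset behind a single firewall rule.
banaction = firewallcmd-ipset
```

After a `fail2ban-client reload`, bans go into each jail's ipset instead of becoming individual firewall rules.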
This site was recently migrated to ClouDNS from DNSMadeEasy. Everything went smoothly, especially with Claude’s help automating the migration script via zonefile and API.
However, during post-migration testing, I found a weird behavior with ClouDNS’ Premium plan nameservers: they return 127.0.0.1 as an authoritative answer for domains that are not hosted on their system.
For example, querying pns1.cloudns.net for google.com – a domain that obviously has nothing to do with ClouDNS:
Note the aa (authoritative answer) flag in the response: this server claims to be authoritative for google.com, returning 127.0.0.1 along with a nonsensical authority section delegating the root zone to localhost. The correct behavior would be to return REFUSED for domains not configured on the server.
Even more weirdly, ClouDNS’ free nameservers correctly return REFUSED for domains they don’t host:
No aa flag, no bogus answer — just a clean REFUSED.
The exception is ns1.cloudns.net, which returns a false authoritative answer pointing to an actual working IP 185.105.32.123 instead – a catch-all landing page by ClouDNS:
Note: As of April 2026, ClouDNS support replied: “At this time, the current behavior of our Premium nameservers is expected and no changes are planned.”
DMARC analytics services are expensive and aren’t really meant for individual use. Why not build one myself, now that we live in a world where Claude Code is so powerful? So I did.
When connecting to Cloud SQL, switch to IAM authentication and use a connection string only as a local fallback. Set a CLOUD_SQL_INSTANCE variable only in the deploy environment; your application can then detect it and use @google-cloud/cloud-sql-connector. https://docs.cloud.google.com/sql/docs/mysql/connect-connectors#node.js
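A sketch of that detection logic. The variable names CLOUD_SQL_INSTANCE and DATABASE_URL are this sketch's own conventions, and the actual connector calls are shown as comments because they need credentials and a real instance:

```javascript
// Pick the database connection strategy based on the environment.
function chooseDbStrategy(env) {
  if (env.CLOUD_SQL_INSTANCE) {
    // Deploy environment: use the Cloud SQL connector with IAM auth, roughly:
    //   const { Connector, AuthTypes } = require('@google-cloud/cloud-sql-connector');
    //   const connector = new Connector();
    //   const opts = await connector.getOptions({
    //     instanceConnectionName: env.CLOUD_SQL_INSTANCE,
    //     authType: AuthTypes.IAM,
    //   });
    //   // ...spread opts into your mysql2/pg pool config...
    return { mode: 'cloud-sql-connector', instance: env.CLOUD_SQL_INSTANCE };
  }
  // Local fallback: a plain connection string.
  return { mode: 'connection-string', url: env.DATABASE_URL };
}
```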
Correlate your container logs with request logs (services only). By default, container logs are not correlated with request logs. To link them without a client library like @google-cloud/logging, write a structured JSON log containing a logging.googleapis.com/trace field, with the trace identifier extracted from the X-Cloud-Trace-Context request header. https://docs.cloud.google.com/run/docs/logging#writing_structured_logs
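A minimal sketch of such a log line; the helper name and projectId parameter are this sketch's own, and the header is assumed to look like TRACE_ID/SPAN_ID;o=1 as documented:

```javascript
// Build one structured JSON log line that Cloud Logging can correlate
// with the request log via the special trace field.
function structuredLog(message, severity, traceHeader, projectId) {
  const entry = { message, severity };
  if (traceHeader) {
    const traceId = traceHeader.split('/')[0]; // part before the first "/"
    entry['logging.googleapis.com/trace'] = `projects/${projectId}/traces/${traceId}`;
  }
  return JSON.stringify(entry);
}

// Print the line to stdout so Cloud Run picks it up, e.g.:
// console.log(structuredLog('handled request', 'INFO',
//   req.headers['x-cloud-trace-context'], 'my-project'));
```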
Remember that Cloud Run is a serverless service, meaning instances can be scaled down at any time. If required, install a SIGTERM handler in your application to receive a warning when Cloud Run is about to shut down an instance; you will have a 10-second window to react. https://docs.cloud.google.com/run/docs/container-contract#instance-shutdown
Cloud Run uses an in-memory file system, so every file you write to the instance is actually consuming your instance’s memory.
Outbound network connections from Cloud Run can occasionally be terminated, either due to timeout or infrastructure restarts. If your application reuses long-lived connections, we recommend configuring it to re-establish connections to avoid reusing a dead connection.
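A sketch of that re-establish-on-failure pattern. The error codes checked here are common Node connection-failure codes and may need adjusting for your client library:

```javascript
// Retry a request once when the error looks like a dead connection,
// rather than failing on a connection the platform already closed.
async function withReconnect(fn) {
  try {
    return await fn();
  } catch (err) {
    if (err.code === 'ECONNRESET' || err.code === 'EPIPE') {
      return await fn(); // re-establish and try again
    }
    throw err;
  }
}
```

Wrap calls that go through a pooled keep-alive client, e.g. `await withReconnect(() => client.query(...))` (client here is hypothetical).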
Cloud Run injects a PORT environment variable and expects your application to listen on it. Do not hardcode a port number (e.g. 8080); always read from process.env.PORT instead.
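For example (the 8080 fallback is only for local development):

```javascript
// Resolve the port Cloud Run injected, falling back for local runs.
function resolvePort(env) {
  return Number(env.PORT) || 8080;
}

// const server = require('http').createServer(handler);
// server.listen(resolvePort(process.env));
```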
Today, I received a warning from Alibaba Cloud Security Center: a file scan revealed that wp-admin/network/theme-browse.php on one of the sites I host is likely a backdoor. This post records the findings and the actions taken.
I immediately set out to verify WordPress core's integrity and confirm that these files should not exist:
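One way to run this check is WP-CLI's checksum verification, which compares every core file against the official release checksums and also flags extra files that are not part of core at all:

```shell
# Run from the WordPress root as the site user.
# Reports modified core files and extra files such as the one flagged above.
wp core verify-checksums
```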
This shows that the file was inserted back in 2023, while the attacker backdated it to look like an old 2016 file. Further checks of my legacy backup files show that these files did exist as far back as at least 2020. Therefore, the 2016 timestamp may be correct, and the hosted site was likely exploited long before it was moved to my managed environment.
It is my sincere advice to everyone to avoid Azure's WordPress on App Service, for the following reasons. (Disclaimer: as the title suggests, this is biased – Azure has never been easy for me to use or understand, and I am not an Azure power user.)
The WordPress image relies on a Unison process to monitor filesystem events, syncing content between /home/site/wwwroot (the default FTP destination) and /var/www/wordpress (where the site is actually served).
This introduces operational complexity to an already non-transparent build (none of this is documented in https://github.com/Azure/wordpress-linux-appservice). Even worse, filesystem monitoring never worked correctly in our environment.
It does not automatically create a Log Analytics workspace, which makes no sense to me. I’m really not a fan of how logging on Azure works – if you forget to enable a Log Analytics workspace, no logs are retained. When an outage happens, you’re left with nothing to investigate.
Debugging performance issues on WordPress on App Service was extremely unpleasant, especially given the lack of documentation on how Microsoft’s WordPress image actually works or what the recommended debugging steps are when Metrics provide no useful insight.
For these reasons, I would recommend just hosting WordPress on a standalone VM any day, or using an inexpensive managed WordPress provider from a smaller vendor.
Server update: Now runs on Debian 13 (trixie) + added X25519MLKEM768 support.
I have been using Cloudflare WARP+ for a while, mostly to protect myself when using public Wi-Fi. Today I thought I should give products like NordVPN and Surfshark a fresh try to see if they could surpass WARP’s performance.
During my test, I found that, mysteriously, both VPNs achieved about 50% slower download speeds over WireGuard with my ISP. Switching to OpenVPN (UDP) fixed this – but if I'm buying a modern VPN service only to end up using OpenVPN, I might as well build the server myself. (Note: NordVPN also had a much higher connection success rate with OpenVPN (UDP) than Surfshark, for some reason.)
For comparison, note that WARP was never designed to be a full-featured VPN. I mainly used it to ensure my traffic is securely tunneled out to the Internet, and WARP really shines in two things:
One click and it’s connected. No hassle. (However, server connection speed is slower than NordVPN.)
Really fast speeds when connecting to local websites, though not so much when multiple hops to other countries are involved.
In light of the findings above, I decided to make use of the 30-day money-back guarantee they both offer. With Surfshark, I simply told the bot I wanted a refund, got redirected to a form, filled it in, and voilà — it was done.
Now, that was a totally different story with NordVPN, despite their similar refund form. With NordVPN:
First, I had to provide proof via speedtest.net showing slower performance when using NordVPN.
Then I was asked to change protocols — which is when I discovered OVPN (UDP) actually runs faster. But again, I bought NordVPN specifically for the NordLynx protocol.
Finally, the live chat (Zendesk Sunshine) was a pain to use. I was asked to upload screenshots, but sending images kept failing. In the end, I managed to upload them as files.
In summary, compared to the straightforward refund from Surfshark, NordVPN really runs through their whole playbook to stop you from getting one. However, NordVPN does at least show a clear “refund request is processing” notice on their billing page, while Surfshark does not (though Surfshark will send you a confirmation email within 24 hours, according to their bot).