For a while now I’ve been using Pi-hole at home to filter all kinds of domain names. This, in combination with a PowerDNS resolver, gives me more privacy; at least I’m not being monitored by the ‘free’ DNS resolvers out there…
PowerDNS is running on my Synology NAS in a Docker container. Pi-hole is still running on a Raspberry Pi 3 Model B+, since I wanted to test it first and did not want to touch my working PowerDNS setup. It has been running fine for a while now, though, and I still need to move it from the Pi to Docker as well. I will update this post with the detailed Docker information once I have moved everything.
Pi-hole v5 has just been released, so I want to wait a short while until the first fixes are in before making the move to Docker.
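For reference, running the PowerDNS recursor in Docker can look roughly like this. This is just a sketch; the image name/tag, published ports, and volume paths below are assumptions for illustration, not my exact setup:

```shell
# Sketch: PowerDNS recursor in a Docker container on the NAS.
# Image name/tag and paths are assumptions; adjust to your environment.
docker run -d \
  --name pdns-recursor \
  --restart unless-stopped \
  -p 53:53/udp \
  -p 53:53/tcp \
  -v /volume1/docker/pdns/recursor.conf:/etc/powerdns/recursor.conf:ro \
  powerdns/pdns-recursor-48
```

Pi-hole then uses this recursor as its only upstream DNS server, so queries are resolved locally instead of being sent to a public resolver.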
A cool thing about Pi-hole is that you can use your own blocklists. Some I took from fireblog.net.
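In Pi-hole v5 the adlists moved into the gravity database, so a custom blocklist can also be added from the shell instead of the web UI. The list URL below is a placeholder, not one I actually use:

```shell
# Add a custom blocklist to Pi-hole v5's gravity database (placeholder URL),
# then rebuild gravity so the list is downloaded and applied.
sqlite3 /etc/pihole/gravity.db \
  "INSERT INTO adlist (address, enabled, comment) \
   VALUES ('https://example.com/hosts.txt', 1, 'my custom list');"
pihole -g
```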
I’m running the UniFi Controller on my Synology NAS in a Docker container. After a firewall upgrade (the gateway for the AP LAN), I noticed that all my access points were stuck in the Adopting state. The wireless was still working fine, but no UniFi AP was in the Connected state. Forcing a re-provision of the APs did not work, so I restarted all my APs the hard way by power-cycling the PoE adapters, just to get it working again, I hoped…
This fixed the issue for the AP-AC-Pro access points. The AP-Pro, however, stayed in the Adopting state even after a reboot.
I logged in to the node and checked the configuration;
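The inform URL lives in the cfg/mgmt file on the AP. Reconstructed from memory, the relevant output looked roughly like this; only the URL line matters here:

```shell
# cat /etc/persistent/cfg/mgmt
mgmt.is_default=false
mgmt.servers.1.url=http://172.17.0.2:8080/inform
```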
As you can see, mgmt.servers.1.url is set to IP 172.17.0.2, which is incorrect! I had no idea how it ended up in there, but I changed it and immediately the AP was in the Connected state with the controller again!
AP1-BZ.v4.0.69# set-inform http://192.168.1.139:8080/inform
Adoption request sent to 'http://192.168.1.139:8080/inform'. Use the controller to complete the adopt process.
After this setting change, I upgraded the AP to a newer firmware version, checked the cfg/mgmt file again, and now the mgmt.servers.1.url value is just fine.
Update: now I have an idea of what happened :) 172.17.0.2 is the internal IP of the Docker container running the controller. Somehow this ended up on the AP as the inform address. I changed the following settings and now it’s fixed;
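For completeness, the change in the controller (from memory, the exact wording may differ per controller version) was to pin the inform host to the NAS IP instead of letting the controller advertise the container’s internal address:

```
Settings → Controller
  Controller Hostname/IP: 192.168.1.139
  [x] Override inform host with controller hostname/IP
```

With this set, the APs receive the inform URL http://192.168.1.139:8080/inform instead of the Docker-internal 172.17.0.2 address.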