Registration for BSDCan 2025 is open!
Tutorials Jun 11-12
Talks & BOFs Jun 13-14
See https://blog.bsdcan.org/blog/ and register at https://www.bsdcan.org/2025/registration.html
For EuroBSDCon 2025, we're eager to read your paper, BOF, or tutorial submission!
Please go to https://2025.eurobsdcon.org/ for info and submit at https://events.eurobsdcon.org/2025/cfp
See you in Zagreb in September!
Hello, hachyderm! We've been working hard on building up our ansible runbooks and improving hachyderm's overall resilience. Recently, we've been focusing on database resilience.
We're getting close to retiring our original database server (finally!) and preparing to move to a fully ansible-managed set of database servers, primary and replica, on new hardware. We'll send another announcement when we do the cutover. The team has done excellent work to make this highly automated, quick, and painless!
Done:
author ansible roles for managing postgresql, pgbackrest (backups), pgbouncer, and primary/replica failover
decide to continue with pgbouncer and *not* use pgcat
rotate database passwords
order new replica database hardware
order new future primary database hardware
To do soon:
rebuild replica database with ansible scripts
prepare primary database with ansible scripts
start replicating to new database replica
cut over to new database server
We're also planning on open-sourcing our ansible roles in the coming weeks - just a little housekeeping & tidying up before we do!
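Not part of the team's tooling, just an illustration: before a cutover like the one described above, the go/no-go check usually boils down to confirming the replica is streaming and caught up. Here is a minimal sketch using psycopg2 to query the primary's pg_stat_replication view; the connection string and role name are hypothetical.

```python
# Sketch: report streaming-replication state and replay lag from the primary.
# The DSN below is a placeholder, not Hachyderm's real setup.
import psycopg2

def replication_status(dsn: str) -> list[dict]:
    """Return one row per attached replica from pg_stat_replication."""
    query = """
        SELECT client_addr, state,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
        FROM pg_stat_replication;
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query)
        cols = [desc[0] for desc in cur.description]
        return [dict(zip(cols, row)) for row in cur.fetchall()]

if __name__ == "__main__":
    for replica in replication_status("host=db-primary dbname=postgres user=monitor"):
        print(replica)  # expect state='streaming' and a small replay_lag_bytes
```

A replay lag at or near zero bytes on the replica being promoted is what makes a cutover quick and painless.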
New sponsor: @Octopuce!
Since 2005, Octopuce has supported its clients with tailor-made hosting and managed services, combining rigor, innovation, and a passion for free software.
Their specialty? Debian Linux infrastructures managed the DevOps way, ultra-responsive 24/7 support, and constant attention to security and quality of service.
Thank you to Octopuce for supporting MiXiT 2025!
If you would like to become a MiXiT 2025 sponsor: https://mixitconf.org/sponsors
You Have Installed OpenBSD. Now For The Daily Tasks.
Despite some persistent rumors, installing OpenBSD is both quick and easy on most not-too-exotic hardware. But once the thing is installed, what is daily life with the most secure free operating system like?
More at https://nxdomain.no/~peter/openbsd_installed_now_for_the_daily_tasks.html #openbsd #development #devops #security #sysadmin #maintenance #freesoftware #libresoftware #bsd #unix #unixlike (from 2024)
System Administration
Week 8, HTTPS & TLS
After discussing HTTP in the previous week and seeing how we used STARTTLS in the context of #SMTP, we are now quickly reviewing HTTPS, TLS, and the WebPKI. While we don't have a video segment for this, here are slides, including this handy diagram illustrating the CSR process:
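As a rough companion example (not from the course slides): the CSR step boils down to generating a key pair, binding the public key and subject names into a signed request, and handing that request to the CA. A minimal sketch with Python's cryptography package; the hostname is a placeholder.

```python
# Sketch: generate a private key and a CSR for a hypothetical hostname.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# 1. Generate the key pair; the private key never leaves the requesting host.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build the CSR: subject + SAN, signed with the private key to prove possession.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example.com")]), critical=False)
    .sign(key, hashes.SHA256())
)

# 3. This PEM blob is what gets submitted to the CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```

The CA then verifies control of the name (the WebPKI part) and returns a certificate binding that public key to it.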
System Administration
Week 8, The Simple Mail Transfer Protocol
Shared by a student of mine: Email vs Capitalism, or, Why We Can't Have Nice Things, a talk given by Dylan Beattie at NDC Oslo 2023. Covers a lot of our materials and adds some additional context.
System Administration
Week 8, The Simple Mail Transfer Protocol, Part III
In this video, we look at ways to combat Spam. In the process, we learn about email headers, the Sender Policy Framework (#SPF), DomainKeys Identified Mail (#DKIM), and Domain-based Message Authentication, Reporting and Conformance (#DMARC). #SMTP doesn't seem quite so simple any more...
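A quick aside that is not in the video: SPF and DMARC policies are published as plain DNS TXT records, so you can inspect them with ordinary lookups. Here is a sketch using dnspython with a placeholder domain; DKIM is left out because its record lives at <selector>._domainkey.<domain> and the selector is only learned from a signed message's header.

```python
# Sketch: fetch a domain's SPF and DMARC policies from DNS TXT records.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at a name, or an empty list."""
    try:
        answer = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answer]

domain = "example.com"  # placeholder
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```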
#ProxLB - an open-source, advanced VM load balancer for #Proxmox clusters, including affinity & anti-affinity rules, maintenance mode (evacuating nodes), and more. I just published my slides about it.
Project: https://github.com/gyptazy/ProxLB
Slides: https://cdn.gyptazy.com/files/talks/ProxLB-Intelligent-Workload-Balancing-for-Proxmox-Clusters.pdf
“Take This On-Call Rotation And Shove It”, Scott Smitelli (https://www.scottsmitelli.com/articles/take-oncall-and-shove-it/).
Via HN: https://news.ycombinator.com/item?id=43498213
On Lobsters: https://lobste.rs/s/ki4dkb/take_this_on_call_rotation_shove_it
So hand-wavy; you still need #OnCall for critical services with defined SLAs:
“Breaking Up With On-Call”, Alexey Karandashev (https://reflector.dev/articles/breaking-up-with-on-call/).
System Administration
Week 8, The Simple Mail Transfer Protocol, Part II
In this video, we observe the incoming mail on our MTA, look at how STARTTLS can help protect information in transit, how MTA-STS can help defeat a MitM performing a STARTTLS-stripping attack, and how DANE can be used to verify the authenticity of the mail server's certificate.
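A small sketch you can try yourself (not from the video; the MX hostname is a placeholder): connect to a mail server on port 25, upgrade the session with STARTTLS, and look at the certificate it presents. Note that this only shows whether the server offers STARTTLS on this path; detecting an active MitM that strips the offer is exactly what MTA-STS and DANE add.

```python
# Sketch: observe the STARTTLS upgrade and the server certificate's subject.
import smtplib
import ssl

context = ssl.create_default_context()  # verify against the system trust store
with smtplib.SMTP("mx.example.com", 25, timeout=10) as smtp:
    smtp.ehlo()
    if not smtp.has_extn("starttls"):
        raise RuntimeError("server does not advertise STARTTLS")
    smtp.starttls(context=context)  # plaintext connection upgraded to TLS
    smtp.ehlo()                     # re-issue EHLO over the encrypted channel
    print(smtp.sock.getpeercert()["subject"])
```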
howdy, #hachyderm!
over the last week or so, we've been preparing to move hachy's #DNS zones from #AWS route 53 to bunny DNS.
since this could be a pretty scary thing -- going from one geo-DNS provider to another -- we want to make sure *before* we move that records are resolving in a reasonable way across the globe.
to help us do this, we've started a small, lightweight tool that we can deploy to a provider like bunny's magic containers to quickly get DNS resolution info from multiple geographic regions. we then write this data to a backend S3 bucket, at which point we can use a tool like #duckdb to analyze the results and find records we need to tweak to improve performance. all *before* we make the change.
then, after we've flipped the switch and while DNS is propagating -- we can watch in real-time as different servers begin flipping over to the new provider.
we named the tool hachyboop and it's available publicly --> https://github.com/hachyderm/hachyboop
please keep in mind that it's early in the booper's life, and there's a lot we can do, including cleaning up my hacky code.
attached is an example of a quick run across 17 regions for a few minutes. the data is spread across multiple files but duckdb makes it quite easy for us to query everything like it's one table.
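for the curious, the "many files, one table" trick looks roughly like this -- a sketch that assumes the run output lands as parquet files, with made-up column names (see the hachyboop repo for the real output format):

```python
# sketch: query every per-region result file as if it were a single table.
import duckdb

con = duckdb.connect()
con.sql("""
    SELECT region, qname, answer, count(*) AS observations
    FROM read_parquet('results/**/*.parquet')  -- glob across all regions and runs
    GROUP BY region, qname, answer
    ORDER BY region, qname
""").show()
```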
Another round of “hey, your server is down!” drama from the "we need moar kubernetes!" crowd.
“I can’t reach your server, it must be down.”
I connect. Everything’s fine.
A few emails later, I ask to access the container. The dev says he can’t - doesn’t know how. He’s a nice guy, though, so he gives me the credentials.
I log in and find the issue: someone pushed a workload to production (cue Kubernetes! Moooaaarrr powaaaarrr! We have the cloud! Who needs sysadmins anymore?!) with DNS set to 192.168.1.1.
Of course, it fell to me to investigate, because the dev couldn't even get a shell inside his container. And that's OK - he's a dev, and just wants to be a dev.
Once I pointed it out, they rebuilt the container with the correct config and - TADA! - everything worked again.
Then he went to check other workloads (for other clients, not managed by me) that had been having issues for weeks... Same problem.
It was DNS.
But it wasn't DNS.