I've transferred the chromic.org domain from DreamHost to Hover, mostly for UI/UX reasons.
Back in 2009, when I was looking to get a domain, I chose DreamHost for Shared Hosting. They’re also a registrar, so that’s who I registered the domain with.
A couple of years later, I moved the hosting part to Linode and kept DreamHost as the registrar. Linode doesn't do domain registrations, and I never had any problems with my domain over at DreamHost, so I wasn't really looking for alternative registrars.
Recently, I started looking into implementing DNSSEC, and while DreamHost supports it (as long as you don't use their nameservers), I couldn't find the relevant options (DNS glue, DNSSEC fields) myself in the account settings. While hunting around for them, I realized that most of what I was seeing on screen was menus and options completely useless to me, since I'm not hosting anything with them.
Then I came across an article suggesting I'd need to create a support ticket and have support complete the configuration. Blech. What is this? 1997? I never had to deal with DreamHost support much, and the couple of times I contacted them they were quick to respond and helpful, but I'd very much rather have the option to do this on my own.
I did a couple of searches to see what my options were and decided to move to Hover. They don’t do hosting, their UI focuses on domain-related features, and the glue/DNSSEC settings aren’t hidden away multiple levels deep or completely missing.
More importantly, I can set those up myself. Win!
The transfer itself went without a hitch. No surprises, no downtime, and ultimately transparent. So kudos to both DreamHost and Hover for making that happen painlessly.
Boop! Renewed chromic.org. You guys are stuck with me for at least another two years.
The act of renewing the domain somehow felt like the beginning of a new year. A good time to reflect on what I've done with the domain since I last renewed it, to think about where it's going and what the future projects are.
I'm not going to lay down plans for the next two years in this post because frankly, I haven't thought about it that far ahead. But I can try to give an overview of the plans I have for the near-future. Maybe publishing those plans here is going to give me a reason to get to them faster than I normally would.
Right now, I’m hosting a bunch of stuff (including this blog) on Linode, and I’m using Linode’s nameservers for DNS. Everything’s been working well, but I want to try running my own DNS server for my (sub)domains, for a couple of reasons. One is “Project Autonomous”-driven: don’t rely on third parties whenever possible. The other is that Linode’s nameservers don’t support DNSSEC, which brings me to the second item in my plans.
I want to implement DNSSEC. I’m not sure what else to say about this other than “because why not!”. I think it’ll be interesting and I might learn a couple of things along the way.
I also want to implement DANE, for the same reasons listed for DNSSEC, really.
The *.chromic.org “environment” has grown and evolved and changed over the years. When I started fiddling around with all the platforms and things on here, I wasn’t aware of the Indieweb’s POSSE, webmentions, etc. I kind of wedged webmentions on here at some point, but I’d like to make Indieweb a first-class “citizen” of this place.
For example, I can receive webmentions, but it’s a bit of a pain to reply or send them at the moment. I do have plans and tools in mind to fix this problem, and I hope to get to it soon.
[Update: This is done!]
I don’t know why, but I haven’t enabled HSTS on my (sub)domains yet. I want to do that.
I want to enable restrictive CSP headers on all subdomains.
[Update: This is done!]
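For reference, assuming an nginx front-end (an assumption on my part; adapt to whatever serves your subdomains), both items come down to two response headers. The policy values below are a conservative starting-point sketch, not a recommendation:

```
# Sketch only — server software (nginx) and policy values are assumptions.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;
```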
This has nothing to do with the domain renewal or the Linode VPS, but since I’m making a list of things I want to do I’m adding it here.
I recently got a couple of network devices that allow me greater control over my network (compared to the generic ISP-provided router/modem I was using before) and I need to sit down and configure my home network with the ideas I have in mind.
Once that’s done, I’m going to have a blog post dedicated to that configuration on here.
This time I'm documenting how my home network is currently set up. In short, I have two separate internal networks: an "IoT/Untrusted" network and a "Trusted" network. In this context, "trusted" means that I installed whatever OS/ROM I wanted on the device. It's a pretty common idea; these are my experiments on the subject.
Let's start with the Internet. I connect to it like most people do: via my ISP-provided router/modem, which happens to be a Hitron CGN3. Behind that, I have a pretty versatile EdgeRouter X. I set up two different subnets on there: 10.1.1.0/24, which is the "IoT/Untrusted" network, and 10.1.2.0/24, which is the "Trusted" network. The two networks are on separate VLANs, and the ER-X's firewall rules don't allow the networks to talk to each other.
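As a rough sketch of that split in EdgeOS terms (VLAN IDs, interface names, and rule numbers here are illustrative placeholders, not my actual config):

```
set interfaces switch switch0 vif 10 address 10.1.1.1/24
set interfaces switch switch0 vif 10 description "IoT/Untrusted"
set interfaces switch switch0 vif 20 address 10.1.2.1/24
set interfaces switch switch0 vif 20 description "Trusted"
set firewall name IOT_IN default-action accept
set firewall name IOT_IN rule 10 action drop
set firewall name IOT_IN rule 10 destination address 10.1.2.0/24
set firewall name IOT_IN rule 10 description "IoT may not reach Trusted"
set interfaces switch switch0 vif 10 firewall in name IOT_IN
```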
The ER-X doesn't do wireless, so I have a wireless AP connected to each of the subnets. The "IoT/Untrusted" wireless AP (10.1.1.0/24) is an old D-Link DIR-615 Rev. B2 I had lying around. The "Trusted" wireless AP (10.1.2.0/24) is a UniFi In-Wall AP.
The "IoT/Untrusted" network has a few devices on it: a PlayStation 3, a Nintendo Wii, an iPhone 5S, an iPad Air, a Samsung TV, and a Brother printer.
I'm planning on applying different rules for different devices on this network.
I have a few devices on the "Trusted" network:
For kicks, and because it's a feature of the ER-X out-of-the-box, I set up an IPSec site-to-site tunnel between the ER-X and my Linode VPS.
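In EdgeOS terms, that setup boils down to something like the following (the peer address, subnets, group names, and the secret are all placeholders, not my actual config):

```
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec ike-group IKE-LINODE proposal 1 encryption aes128
set vpn ipsec ike-group IKE-LINODE proposal 1 hash sha1
set vpn ipsec esp-group ESP-LINODE proposal 1 encryption aes128
set vpn ipsec esp-group ESP-LINODE proposal 1 hash sha1
set vpn ipsec site-to-site peer 203.0.113.10 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 203.0.113.10 authentication pre-shared-secret CHANGEME
set vpn ipsec site-to-site peer 203.0.113.10 ike-group IKE-LINODE
set vpn ipsec site-to-site peer 203.0.113.10 local-address any
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 esp-group ESP-LINODE
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 local prefix 10.1.2.0/24
set vpn ipsec site-to-site peer 203.0.113.10 tunnel 1 remote prefix 192.0.2.0/24
```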
Brace yourselves, you’re in for an “existential crisis”-type of blog post. Welcome to LiveJournal. Or something, I don’t know…
Yeah so, I like to read blogs. Always have. I've been reading blogs since I learned the internet was a thing. I'm subscribed to a bunch of blogs via this archaic thing called RSS — also h-feeds since that's what the cool people are using these days :) (no, but seriously, check it out; it makes a whole lot of sense).
Sometimes, when reading those blogs, I get the feeling that I should blog more myself. Especially when those blog posts explicitly ask readers to blog more.
And then after thinking about it for about three seconds, I think maybe I should not. I'm not very good at it. Nor do I have anything very interesting to write about, really.
Blogging about my personal experiences surely can't be interesting to anyone, right? Sometimes I post notes about it. Rarely, some pictures. But a long-form blog post about me? Ain't nobody got time for that!, right? Why would anyone read that? Why are you still here, friend?
And then there's blogging about technical things. I like computers. I studied computers. I write code for a job. I write code as a hobby. Surely I could write something about that, don't you think? Well, I could. Then I think about it for about three seconds, search the ol' 'net for about three seconds, and give up.
Thing is, when I think I have something to write about, a quick search yields a plethora of people smarter than me, who wrote better articles than I could write, often years before that idea came to my mind. So I repost, or bookmark, or otherwise share some of those articles, but I don't blog. Which wouldn't be a bad thing, except for that nagging feeling that I want to create, participate, actively be involved in something related to my field.
See, the internet has always had a soft spot in my heart, but for a few years now it also makes me feel very, very small, insignificant, underachieving and inadequate. I read about all these people creating, writing, coding and otherwise making successful, forward-thinking, cool, original things. And I'm just sitting here making friends with spiders. Or whatever.
Sometimes I remember that the articles I read, and all those new libraries, frameworks, studies, articles, and music tracks I discover, are written and built by multiple, separate people over a long period of time. And for about three seconds I feel better, realizing people (usually) don't produce great things during a single lunch break between their TED talks and a flight to some other conference.
For those quick three seconds, I realize that it's not "me against the internet competing for a job position". The "internet" isn't an "average" entity to compare yourself to. The internet doesn't sleep. The internet is an expert in everything. The internet has ~7 billion brains. It's just not a fair comparison.
Yet, I go back to my restless self, writing a useless blog post at 04:08 because I can't sleep since I feel like I haven't shipped anything concrete, nothing useful, today. 2016-08-25 is gone, and I have nothing to show for it. And let's face it, this blog post is probably going to be the most productive thing I'm going to do today. Or this month. Or year. Oh well.
Recently, the homepage of this blog changed from a static list of recent blog posts to a realtime stream of updates (the list of blog posts is still there, moved to the sidebar). This post tries to document (mostly as a "note-to-self", again) how this "lifestream" works.
Note: This is a work-in-progress; I've already got plans to tweak how some of the pieces fit together.
My sources are PuSH publishers, I have a PuSH hub running on the server, and the lifestream is a PuSH subscriber.
For this project, I have three requirements:
If we refer to the image above, the information flows from left to right, with the exception of the dashed line which we'll talk about a bit later.
After the new post has been published, I run a bash one-liner script that pings the PuSH hub:
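The actual one-liner is just curl, and per the PubSubHubbub spec a publish notification is nothing more than a form-encoded POST of `hub.mode=publish` and `hub.url` to the hub. Here's an equivalent sketch in Python (the URLs are placeholders, not my real endpoints):

```python
# Sketch of a PubSubHubbub "publish" ping; URLs below are placeholders.
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def publish_ping(hub_url: str, topic_url: str) -> Request:
    """Build the form-encoded POST the hub expects when a topic updates."""
    body = urlencode({"hub.mode": "publish", "hub.url": topic_url}).encode()
    return Request(
        hub_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )


# To actually send it:
# urlopen(publish_ping("https://hub.example.org/", "https://example.org/feed.atom"))
```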
Yes, I do plan on automating this at some point.
I run GNU mediagoblin as a media publishing platform. It's PuSH-enabled out-of-the-box so I don't have to do much here. I did, however, modify the Atom Feed generated by gmg so that it includes a direct link to the media file (so that I can include it in the lifestream).
I run Gogs on code.chromic.org. This one is the hackiest, most cobbled-together source in the lifestream (so far!). Ultimately, I'd like to have the public activity events pushed to the stream, but right now only "git-push" events are sent over. There are a couple of reasons for this:
Given this situation, I have to use the features Gogs supports out-of-the-box. Luckily, Gogs supports webhooks on git-push events. You can probably see where this is going: I've set up all my repositories to execute a PHP script whenever I push code:
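The actual script is PHP and not shown here; as a sketch of what it does, here's the gist in Python (the field names follow the Gogs push payload, but treat the specifics as assumptions):

```python
# Sketch of the webhook handler's core: turn a Gogs push-event payload
# into a one-line lifestream entry. Field names are assumptions based on
# the Gogs push payload format.
import json


def summarize_push(payload: str) -> str:
    """Summarize a Gogs push-event JSON payload for the lifestream."""
    event = json.loads(payload)
    repo = event["repository"]["name"]
    branch = event["ref"].rsplit("/", 1)[-1]  # "refs/heads/master" -> "master"
    count = len(event["commits"])
    return f"Pushed {count} commit(s) to {repo}/{branch}"
```

The real script then pings the hub with that entry, the same way the blog does.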
This is also on the "to improve" list.
This one is also easy. I use GNU social as a microblogging platform. It's PuSH-enabled out-of-the-box. I don't need to do anything here.
Note: There are a couple of event types that aren't PuSH'ed (e.g.: favorites) by the GNU social platform.
I added a small curl snippet to /gnukebox/submissions/1.2/index.php to ping the hub anytime I scrobble.
I run an unmodified instance of Switchboard as a PuSH hub. This is where the PuSH publishers send the "ping" whenever they publish something, with the exception of GNU social.
GNU social, in addition to being a PuSH-publisher out-of-the-box, has its own built-in PuSH hub.
I wrote a quick NodeJS "app" that subscribes to the publishers above through the Switchboard hub or the GNU social hub. When it receives a notification from the hub that content has been published, it fetches the feed/page it's subscribed to (this is the dashed line in the image above), gets the latest update, parses it, and saves the data to a database.
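The real app is NodeJS; as an outline of the "fetch and parse the latest entry" step, here's an equivalent sketch in Python (it assumes an Atom feed with the newest entry first, which is the common convention):

```python
# Sketch of extracting the newest entry from a fetched Atom feed.
# Assumes entries are ordered newest-first, as is conventional.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"


def latest_entry(feed_xml: str) -> dict:
    """Return the title, link, and published date of the first entry."""
    root = ET.fromstring(feed_xml)
    entry = root.find(f"{ATOM}entry")
    link = entry.find(f"{ATOM}link")
    return {
        "title": entry.findtext(f"{ATOM}title"),
        "link": link.get("href") if link is not None else None,
        "published": entry.findtext(f"{ATOM}published"),
    }
```

From there, the result is inserted into the database and pushed out over the websocket.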
It also notifies the lifestream of new events through websockets.
Finally, we have the lifestream which is a simple PHP script that gets previously saved events from the MySQL database, and new events through websockets.
I'm planning on adding more sources to the stream as time goes by. Polishing how everything works is also an ongoing activity.