Matt Hackmann

The Yearly Time Travel Trip

Posted On

As has now become a yearly tradition, words from my past self to my present day self arrived this 21st of December. Here is what I had to say.

Dear FutureMe,

Here we are once again. We'll skip the pleasantries and get right to the heart.

To be quite honest, I'm a bit apprehensive about how next year is going to go down, at least as far as work goes. Being understaffed and rewriting the entire stack in the time the PMs are allotting is worrisome, to be sure. I'm hoping that there will be no long nights (certainly impossible odds), but the realist in me is not optimistic.

I am, however, slightly more optimistic about that trip to Japan with your coworkers. The tickets are soon to be bought as I write this which will essentially seal three people into that deal. How things will play out while there, though, remains a bit shaded to me.

Speaking of Japan, don't really expect anything to come of the 3 month work stint over there, but it would certainly be nice. Oh, and this is of course assuming that you're still at LI, though I can't see any reason why you wouldn't be. That gig, though the honeymoon is over, is still pretty sweet.

On the girlfriend front... sigh... I can't really make any predictions. I've become disillusioned with the effectiveness of online dating as it doesn't pair you up with a person in any natural sort of way. Maybe with that senior promotion (presumably that'll happen sooner rather than later), you'll get some extra confidence and balls and can converse with ladies on a "cold call" type basis.

It seems these letters are getting longer each year I write them, so the wrapping shall begin here.

Oh, you need to bike over 2000 miles next year and get your fat body into shape. Please do that for me.

This is probably the most accurate thing I've written to date. Project Voyager, which I played a marginally large-ish role in for a while (lead web developer for the new "Me" page), was an absolute nightmare to work on, and continues to be in some cases. Any project that has people tossing out the term "PTSD" for those who have abandoned ship is not one that should be praised too heavily.

In the initial stages, I did a lot of actual yelling in anger and frustration in trying to keep the expectations sane. During the course of the year, I entered what was probably actually some level of depression. I nearly stopped doing things after work at all, trying to keep what time I wasn't at work to myself just to recover so I could go and face it again the next day or week.

In the end, I bailed from the project entirely (in the most hush-hush way I could, except for the long essay I wrote to managers about my thoughts on the mess). I attempted to flee to Facebook, making it all the way through the interview process only to be turned down, so instead just went to a different team which still has me working tangentially on Voyager. It's much calmer and saner overall and I'm slowly beginning to return to "normal", whatever that is.

Japan is a thing that happened, though, with a couple of work friends and myself making a pilgrimage to my home away from America. I'd like to say that this trip was way more successful than the one my brothers and I took. We went to more places, saw more stuff, and ate a larger variety of food. We hung out with the fine folk at the Tokyo LinkedIn office, who are exceptionally good at partying (from karaoke to playing wingman in Roppongi bars). To that last point, I had every opportunity to nail a drunk Japanese chick, but even my level of inebriation did not allow me to miss the last train back to our hotel (our flight was the next day). Of course, I came back with all manner of used figures and such to adorn my various shelves.

Yeah, the girlfriend thing continues to elude me. I went on a couple-ish dates this year, but I didn't consider either to be enough of a personality fit to continue pursuing (or other various circumstances intervened). 2016 is the year I want to turn my lack of love life around (which is the equivalent of saying it'll be the year of the Linux desktop), because I'm getting bummed out watching everybody around me get married and have kids.

Finally, I think I biked six or seven hundred miles this year. Nothing to write home about after the 1800 I biked last year. I'm fatter than ever, but I think I've fleshed out a nice little plan for once the holidays and their gluttonous wiles have played out. Getting into shape will play heavily into the dating thing, methinks.

And with that, I will now go and pen the letter I will be reading and writing about next year.

A Thought

Posted On

I don't like talking about things in the political or socio-economic realm. I abhor it, even. A couple of reasons: A) I'm bad at it; B) it requires an emotional component that I don't have enough of. That said, I'm going to make a societal rant.

I of course am going to be speaking of the recent unpleasantness going down in France. I've been quasi-monitoring various threads on reddit and - unsurprisingly - it would seem the collective world's head is turned to look at Islam (or, rather, Muslims) as being the cause. Now, I wouldn't be surprised if extremists who claim to be working in the name of this religion are involved, but that's not why I'm here. Well, I suppose it is. A lot of the sentiment seems to be around the notion that this particular religion is making people do this.

No. No it's not. And the people who claim this are themselves probably hypocrites.

I had an interesting thought as I was grabbing a piece of cheese from the fridge. It's the exact same argument as "video games make those who play them violent". I've written about this before, maybe here, maybe for school, but the answer is: no, of course they don't. A video game, movie, book, or piece of music doesn't make you violent. If you're acting out on things, you were probably predisposed to such things already. The exact same idea applies to this notion that a religion will turn somebody violent.

These individuals chose to kill of their own free will. We as humans kind of have that privilege.

In the end it's not video games and it's not religion causing people to cause fucked up mayhem against innocent lives. It's extremely disturbed individuals who will find any reason to validate the horrendous crimes they already want to commit. Intangible concepts don't cause people to kill.

People do that all by themselves.

And it goes without saying that they are acting for and representing themselves and themselves only, not the whole. Or, if you insist they represent a subset of people, then simply by being human, they must also represent all people.

So, if you subscribe to the group mindset, does that not make all of us murderers?

Idiocy, Caching, and Reducing Round Trips

Posted On

Over the last few days, I'd been receiving high CPU usage alerts from my host. A tad perplexed, I'd log in, check the graph and logs to see that, indeed, CPU usage was high, but nothing really seemed out of the ordinary. Google Analytics showed that traffic was moderately high, to be expected with a few widely visible brackets going on in /r/anime, but it wasn't anything to raise an eyebrow at. Still, you can't look at the following graph and not notice a very predictable period of high activity:

Those peaks would be terrible to bike up

These peaks actually coincided with the most prominent bracket updating, bringing with it a fresh wave of users every day. By the time I had received the third CPU alert email, I knew something was up and decided to actually take a look. Luckily, I didn't have to look far. The MySQL query graph was showing 1000+ read queries per second, certainly out of the ordinary given how aggressive I am about caching.

Suspecting that something might be awry with memcache, but not actually wanting to bounce it, I instrumented the cache library with some stats tracking, hoping that it would turn up something I had overlooked. Indeed, I know where things are cached, but I had never really had a list of just how all of this looks in production. And, true to that, I found something that I had not anticipated: every call to Dal::getById - a method on a class that database models extend - was a cache miss. Every. Single. One. And there were thousands of these, which very quickly explained the high query volume. All of this I found out in about 30 seconds of having that profiling code live, which is good, because it also brought the site down and I had to disable it...
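The stats tracking amounts to wrapping the cache client's get and tallying hits and misses per key prefix. Here's a minimal sketch of that idea in Python (the real code is PHP; the class and key names are hypothetical, and a plain dict stands in for memcache):

```python
from collections import Counter

class InstrumentedCache:
    """Wraps a cache client and tallies hits and misses per key prefix."""

    def __init__(self, backend):
        self.backend = backend        # the real memcache client in production
        self.stats = Counter()

    def get(self, key):
        value = self.backend.get(key)
        prefix = key.split(":", 1)[0]   # e.g. "Dal" from "Dal:getById:42"
        self.stats[prefix + (".hit" if value is not None else ".miss")] += 1
        return value

# A dict stands in for memcache in this sketch.
cache = InstrumentedCache({"Dal:getById:1": "row"})
cache.get("Dal:getById:1")
cache.get("Dal:getById:2")
print(dict(cache.stats))   # {'Dal.hit': 1, 'Dal.miss': 1}
```

In-memory counters like these are cheap; what's expensive is shipping them somewhere on every request, which is presumably how even a little profiling managed to take the site down.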

With that information, I had a pretty solid lead as to where I needed to be checking for issues: the aforementioned getById method. I was a bit baffled because I knew there should be caching on that, it's one of the two reasons that the method even exists (the other reason being for coding simplicity). So I get in there and take a look. Lo and behold, the cache is checked but nothing was ever actually stored back. Of course, I had to fuck the fix up once before actually resolving it.

Once that was shoved to production, I was greeted with this wonderful little sight:

Down.. down.. down.. ROCK LOBSTER

So, the immediate issue was fixed, but there was still something obviously very wrong if I was individually getting so many singular items by ID. As it turns out, in one of the most looped pieces of code in the entire project, I was making not one but two calls to get an item by ID. The whole output was cache guarded, but on a per user basis. In the case of the popular running bracket, that's over 400 queries per user per page generation. Unacceptable.

So, in one of those cases of code brevity != code speed, I did a heavy refactor to stash all the IDs that needed to be fetched and then make a big batch call later. There are some trade-offs here, and still room for optimization, especially if the getById calls are aggressively cached (indeed, the data fetched changes rarely). Or, the data returned from the batched call could be cached in shorter intervals. It still needs to be looped through as it's decorated with user-specific data, but that would bring the overhead down to one big-ish hit only every few minutes. Still, with the measures currently in place, the CPU load issue has gone away and queries per second are generally down in the double digits.
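The refactor boils down to collecting IDs in one pass, fetching them in one batch, then decorating per user. A sketch of the pattern in Python (the entry fields and helper names are hypothetical, not from the actual codebase):

```python
def fetch_many(ids):
    # Stands in for one batched SELECT ... WHERE id IN (...) or a memcache
    # multi-get; previously this was two getById calls per loop iteration.
    return {i: {"id": i} for i in ids}

def render(entries, user):
    # Pass 1: walk the loop once, only collecting the IDs it will need.
    wanted = set()
    for e in entries:
        wanted.update((e["item_id"], e["voted_id"]))

    # Pass 2: one batched fetch instead of 2 * len(entries) round trips.
    items = fetch_many(wanted)

    # Pass 3: the per-user decoration still loops, but hits no storage.
    return [{"item": items[e["item_id"]],
             "vote": items[e["voted_id"]],
             "user": user} for e in entries]

rows = render([{"item_id": 1, "voted_id": 2},
               {"item_id": 1, "voted_id": 3}], user="matt")
print(len(rows))   # 2
```

For the popular bracket, that turns 400+ per-item lookups per page generation into a single batched call whose result can also be cached on its own schedule.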

The true sign that everything is working, though, was the lack of an email in my inbox this afternoon.


Posted On

Hey, there. It's been quite some time. About two months, I'd reckon. Actually, I'm not even sure that last post counts for much, being that it was entirely technical. The one before it doesn't count either, as it was more of an op-ed piece... I guess? I dunno...

So, what exactly has Matt Hackmann been up to since his last meaningful "life update"? Who wants to know? Certainly, you don't. But, the future me does, so that's enough to validate my want to write this post. The answer to the original question is fairly simple: not a lot. And also, everything. Maybe it's not so simple after all.

The easy answer, really, is work. NDA prevents me from mentioning what exactly is going on, but it's pretty big. Now that I'm a team lead, I'm being challenged in ways that are completely new and foreign to me. Adding to that, I've spent the last month or so on a pretty large portion of this project that affects everybody working on this codebase. All of that landed on Monday and since then, I've been helping people on-board while also crushing any bugs that crop up. I still have a long way to go until all that is all tied up with a neat little bow, but this is probably the most difficult and certainly the most crucial time in the life cycle. I'm hoping to be able to breathe a little bit once this part is done.

In the now, very little happens outside of work. I spend most of my evenings and weekends just trying to mentally recuperate, which primarily consists of naps and lazing around. However, this is coming off of a fair amount of things that happened in the last few months.

Back in March, I moved from my little apartment in Sunnyvale to a slightly less little apartment in San Mateo. The apartment itself is pretty sweet and the location is even better. There are so many places that are now in walking distance, from ramen shops to the train station to friends' houses. Due to the above exhaustion, I haven't taken as much advantage of this as I should, but once things ease up, I hope to start sampling some more of the food wares in the area. Inside my apartment, I built a couple of sweet desks to fill the larger space in my room, so I now have a space for computer things and a space for electronics things.


In April, I made another pilgrimage to Japan, this time accompanied by a couple of friends from work. In some regards, I consider this trip to have been more successful than the previous one. I knew better what was going on, and had far, far less fast/conbini food than last time. We also managed to land right in the middle of cherry blossom season which was goddamned gorgeous. Of course, there was plenty of time spent in Akihabara and I came away with many nerdy goods to fill my shelves and to gift to folks. It's hard to believe that it's already been almost two months since I've been back... or that I even went at all.

So, that's a bunch of stuff. I'm actually feeling motivated to work on my LED board project right now, so I guess I'll do that.

Or I'll binge watch a bunch of Forensic Files >_>

Fun with Load Balancers and Running on Lean Disk Space

Posted On

A while back, I spun up a new server and threw a load balancer on top of it and my existing server. The primary reason at the time was to allow the site to continue to run while Linode performed some mandatory maintenance on the original server (we'll call it "Chitoge"). The size of the image directory at the time was somewhere around 100GB; my new server (we'll call that one "Taiga") has something like 20GB of total space. Obviously, I couldn't clone from Chitoge to Taiga, so I got a little clever. Using some fun around the hosts file, the nginx site config, and a small PHP script, when a file can't be found locally, it'd be retrieved from the Amazon S3 store I use as long-term backup. S3 is slow, though, so I later changed it to pull directly from Chitoge. This both helped with speed and didn't count against traffic, since internal network traffic is free. In a bout of laziness, instead of opening a connection to that server via IP and hand-massaging the 'Host' header of the request (or mounting some sort of file share), I just added a hosts override on Taiga pointing to Chitoge. Everything worked great.

Last night, knowing that I was soon to exhaust space on Chitoge (a thing that has happened on more than one occasion), I decided to convert it over to the same setup as Taiga. This would allow me to kill dozens of gigabytes of locally stored images and extend the life of my server plan at least a couple of years. Knowing that I was now deleting Taiga's source of truth image store, I changed up the script to check from a pool of servers; first try a local server, then if that fails, go to AWS. That worked fine and, while I was setting this script up on Chitoge, I decided that it should check against Taiga just in case it had a copy before wasting time downloading from AWS.
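The pool logic amounts to: local disk first, then each server in the pool, then AWS as the last resort. A minimal sketch in Python with the fetchers injected (the real version is a small PHP script behind nginx; all names here are hypothetical):

```python
import os

def find_image(path, local_root, pool):
    """Try the local disk, then each fetcher in the pool, in order.

    `pool` is a list of callables, e.g. [fetch_from_peer, fetch_from_aws],
    each returning the file's bytes or None on a miss.
    """
    local = os.path.join(local_root, path.lstrip("/"))
    if os.path.isfile(local):
        with open(local, "rb") as f:
            return f.read()
    for fetch in pool:
        data = fetch(path)
        if data is not None:
            return data
    return None   # a genuine 404

# Not on disk, peer misses, AWS hits:
data = find_image("/the_image.jpg", "/srv/images",
                  [lambda p: None, lambda p: b"jpeg-bytes"])
print(data)   # b'jpeg-bytes'
```

The ordering is the whole point: a peer hit is fast and free, and AWS is only touched when the entire pool comes up empty.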

I got everything set up and started deleting old image files. Everything seemed fine, so I let the deleting continue and went to bed.

When I woke up, I was greeted by many messages in my reddit inbox saying that images were throwing 50x errors. My initial thought was that I was saturating the CPU/connection with that long-running process, so I killed it. I was unable to repro any errors, so I went on my merry way... until somebody else complained. Well, fuck...

To make a long story short, because I had pointed each server at the other, I was creating an infinite network loop whenever a non-local image was requested. Here it is in the traffic logs:

[User Request -> Taiga] GET /the_image.jpg

Doesn't exist locally. Retrieving from first server in pool: (Forced pointing to Chitoge)

[Taiga -> Chitoge] GET /the_image.jpg

Doesn't exist locally. Retrieving from first server in pool: (Forced pointing to Taiga)

[Chitoge -> Taiga] GET /the_image.jpg

Doesn't exist locally. Retrieving from first server in pool:

So, basically, that repeats until the scripts start timing out. Now, in a normal infinite loop, you kill your own process but, generally, should be safe from affecting others. Here, however, every new request potentially spins up a new instance of PHP, because the existing instances are deadlocked waiting for data. So, eventually, the entire chain collapses and you start getting orangereds in your reddit inbox.

In the end, I still have Taiga sourcing from Chitoge, since all image uploads go directly there and, generally, Chitoge will have a copy of the file. Chitoge will always take the AWS hit when a file doesn't exist, to avoid this loop scenario. There are many ways I could avoid this scenario entirely, but for now, this setup is getting the job done.
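One of those ways, for what it's worth, is to tag pool-to-pool fetches so a request is never forwarded twice. This isn't what the post does, just a sketch of the idea in Python (the header name and handler shape are hypothetical):

```python
INTERNAL = "X-Pool-Fetch"   # hypothetical marker header for peer fetches

def handle(path, headers, local_lookup, pool_fetch):
    """Serve locally; forward to the pool only if we aren't already a hop."""
    data = local_lookup(path)
    if data is not None:
        return 200, data
    if headers.get(INTERNAL):
        # The request already came from a pool peer: answer 404 instead of
        # forwarding again, which is what created the Taiga <-> Chitoge loop.
        return 404, None
    data = pool_fetch(path, {INTERNAL: "1"})
    return (200, data) if data is not None else (404, None)

# Two servers that would otherwise forward to each other forever:
taiga = lambda p, h: handle(p, h, lambda _: None, chitoge)[1]
chitoge = lambda p, h: handle(p, h, lambda _: None, taiga)[1]
status, body = handle("/the_image.jpg", {}, lambda _: None, chitoge)
print(status)   # 404 -- the chain stops at the first peer
```

With the marker in place, a missing file costs exactly one extra hop instead of a cascade of deadlocked PHP processes.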