Being Happy

In my personal experience, not everyone wants to be happy. When I say that I'm not talking about that one friend, either. I'm talking about me. For a very long time I didn't wanna be happy. It was a very frustrating time. I can't say I knew I wanted to be unhappy; I just was, all the time. I was angry, depressed, and off-and-on apathetic. It was actually my mother who coined the phrase "You just need to be unhappy" somewhere in my early to mid 20s.

Above and beyond my own internal strife, one of my problems was that those around me were constantly interested in trying to make me happy. Maybe that doesn't sound like a problem to you, but it was aggravating to me. It wasn't that people wanted to help; it was this frustrating struggle of people offering what I saw as cookie cutter back pats and weak attempts to stoke a nonexistent ego instead of just facing the truth and seeing the world like I saw it. They didn't understand my problems. I thought about how miserable life is and how horrifically this whole world works. I thought all the time about how my mediocre skill set, in an oversaturated field that shared a job pool with savants & geniuses, meant I was going to toil and claw against an overabundant and underpaid workforce. I thought about how, compared to my peers, my nothing-to-offer existence meant I got to writhe away in this world alone, with a few cynical friends to cheerlead ourselves along to the grave. I looked at the ugly side of every story, and trust me, when you stop sugar coating everything you realize that there are millions of fucked up things going on every day, and we do our best to gloss over it and put lipstick on this pig so that we can wake up and say today is an awesome day. This whole world is a corrupt Masque of the Red Death; an extravagant gala thrown by the privileged to hide away the social plague destroying everyone else around us.

Over time I learned how to communicate with others about my feelings in ways that helped prevent the constant fawning over my state. Of course I had to be highly selective about who I chose to associate with, since I didn't want to go through this rigmarole constantly. I wasn't happy at that point, but I reached this semi-content equilibrium where I got by with a thick sense of dark humor and snark, as much as one possibly could in this sickness.

After a long while doing this I reached this weird nirvana where I was just me, and everyone was ok with it and I was ok with it too. I was just ok. Then the weird shit happened.

Through having this small social group, literally six to eight people deep, I was able to find this confidence in myself. Maybe what I could do was shit to anyone else, but I could do things that made a difference to my friends, or at least impressed them. Over a period of about ten years, give or take, this grew, as did my social circle. I often felt like the imposter in the room, but through all these people I started to realize my own potential.

I can't say the exact moment it hit me. I know it was when I was working for Stephens. I had stuck my foot out enough times, and somehow not gotten the door slammed on it, that I had made it somewhere in the company. The group I was running hadn't completely imploded on itself around me yet, and I was a pretend famous DJ. Somewhere around this time it dawned on me that life isn't the Olympics but an Industry, and even if I sucked, the fact that I wasn't going to stop trying made me valuable.

It was around that time I stopped reading fiction and switched to non-fiction. I read a lot of 90s-00s new era "be awesome at life" self-help books and started implementing all these systems and tricks I read about.

My personal mantra around then was "It Never Hurts to Help"; a tongue-in-cheek reference to a cartoon from my youth called Eek the Cat. It was a morbid tale about an anthropomorphic feline whose overly sunny attitude and unflappable willingness to help others constantly landed him in the hospital. That was his catchphrase, the one he said right before he was mauled by something. I like to say I was using it ironically, since in the end I rarely caught fire after saying it, but I did end up getting places professionally and personally.

Needless to say life started moving really fast when I became truly motivated to help and get things done. I can't say I was happy... but I was really busy.

It was around then the shift really happened. I don't know if my attitude shifted first or those around me, but things became nightmarishly disjointed and stressful at work over bad management, the club promoters I was working with went to war with another promotions group, and the community I was dealing with collapsed around the time someone unknowingly slept with a minor and then someone else killed themselves. Needless to say these were dark times. However, through all this stress I had a mantra and I stuck to it. Suddenly I was too positive a person for those I was around.

I was right back to where I was before, in reverse. Everyone told me I needed to "take it down a notch" and accused me of being disingenuous and sarcastic simply because I was living my life the way I chose. I can't possibly feel like that, I can't do this, and we can't do that.

That's when my mantra changed to "I Only Have Cans". Just like before, I had to adjust who was important in my life, since I didn't want to surround myself with people who would try to slow my progress and scowl at my outlook. This new mantra isn't just a tongue-in-cheek bit of spite about never giving up and always helping. This one is about only having the positive, about always being able to do something. I guess that's when I decided to be happy?

I can't say I'm always happy. In fact I'd go so far as to say no one really stops dealing with depression. I still have the eternal funeral procession of self-doubt, loathing, paranoia, and ill wishes flickering through my mind like an unending film. However, I can decide it doesn't control me, and I have way, way better things to do with my life than be consumed by my own innate apathy. I can say that I'm in control of my outlook and what is important to me, and I can control the world around me enough to decide I'm gonna be happy this day.

My mantra has been changing lately. I didn't have a mantra for over 26 years and now suddenly in eight years I have gone through three of them. Like I said, things started moving fast. It's not final, nothing is, but these days I'm sticking with "Be Fucking Amazing".

Not bad for someone who needed to be unhappy almost his entire life.

For those who actually read this far, I didn't write this to publicly stroke my dick at everyone, or at least that wasn't the original intention. I've been thinking a lot lately about those around me who are unhappy now. I get a little sad and want to go make them happy, which reminds me of where I stood not that long ago. I'm not going to pretend my story applies to anyone else, or that it should be shared around Facebook by duck-lip hotties as some long-winded "it gets better" back pat. However, I'd like to hope that there are plenty of people who struggle with needing to be unhappy who will learn to take control of their own world and be fucking awesome in their own right.

Quick Note on GnuRadio on Pentoo

Not a big blog post, but a quick problem I got solved on IRC that I thought might help others.

I have a Gateway LT4009u with an Atom N2600. It's my "hacker/workshop" laptop. The Atom N processors are a bit gimpy, so sometimes things don't run right. One such thing is GNU Radio on Pentoo. Pentoo runs hardened, and this pisses off the Atom N.

So if you get the following error:

LLVM ERROR: Allocation failed when allocating new memory in the JIT
Can't Allocate RWX Memory: Operation not permitted

Then you need to soft-disable hardened with the following command:

sudo toggle_hardened

I hope that helps anyone else on the internet.

Thanks to Zero_Chaos in #pentoo on irc.freenode.net for the fix (and pentoo)

Quick Update: This also happens when running in VirtualBox 5 on my 2015 MacBook i7, but the fix is the same.


Monitoring Chef runs without Chef

I, like many sysadmins, really want to monitor all the things I actually care about. Monitoring is in general hard. Not because it's hard to set up, but because it's hard to get right. It's really easy to monitor ALL THE THINGS and then just end up with pager fatigue. It's all about figuring out what you need to know and when you need to know it.

So in this case I really need to know that my machines are staying in compliance with chef.

There were a few ways I could do this. The first thought I had was adding a hook into all of my runs and having them report in on failure. This is mostly because I'm always looking for another way to hack on Chef and work on my Ruby. The big problems with this are:

  • What if the node is offline?
  • What if the cron doesn't fire?
  • What if Chef or Ruby is so borked it can't even fire the app?
  • What if someone disabled Chef?

I needed a better solution.

Knife Status

Knife status is just awesome; it has some awesome flags, and generally I run it far more than I should. The great part about this query-the-server approach is that it lets me know:

  1. The server is still happy and spitting out cookbooks to nodes
  2. The status of ALL of my runs from the “source of truth” for runs

Not making my chef test rely on chef

But I'm not going to shell out to knife status. I'm a damn code snob, and something about having the Chef test rely on the chef client's status output didn't seem right.

Instead I wrote a Nagios script that I am not going to share in its entirety here because $WORK_CODE1 (insert sad face), but I will tell you exactly how I did it.

How to python your chef, or how I stopped worrying and learned to love that I can still use python to do anything.

I'm the most experienced in Python, and almost all of our internal Nagios checks are written in Python. So this is in Python.

Step one

Use pynagioscheck and pychef. Seriously. Don’t reinvent the wheel here.
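
For reference, this is roughly the pile of imports the rest of the steps lean on. I'm going from memory here, so double check the module and class names against the versions of pychef and pynagioscheck you actually install.

from datetime import datetime, timedelta
from urllib2 import URLError              # urllib.error.URLError on Python 3

from chef import ChefAPI, Search          # pychef
from chef.exceptions import ChefError
from nagioscheck import NagiosCheck, Status, UsageError   # pynagioscheck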

Step two

Create a knife object. Have it take all your settings on initialize; then you can create functions for all the different knife commands, recreating them with pychef.
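
As a sketch of what I mean (the class and argument names here are illustrative, not lifted from my actual $WORK script):

class Knife(object):
    def __init__(self, server_url, client_name, key_path):
        # pychef's ChefAPI takes the server URL, the client key file, and the client name
        self.api = ChefAPI(server_url, key_path, client_name)

    def status(self):
        # recreate "knife status": return a dict of node name -> last run time.
        # The two-line meat of this method is the snippet quoted in the next step.
        pass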

You really only need status for this one. The meat of status is this snippet here; coderanger dropped it on me in IRC:

# nodes maps each node's name to the timestamp of its last completed run (ohai_time)
nodes = {}
for row in chef.Search('node', '*:*'):
    nodes[row.object['machine name']] = datetime.fromtimestamp(row.object['ohai_time'])

Step three

Now from here I created a TimeChecker object. It takes the dictionary of { server: datetimeObj } on its init. For consistency's sake I also init self.now = datetime.now(). Then I have a TimeChecker.runs_not_in_the_last() that just takes an int (hours).

The magic of runs_not_in_the_last I will also share with you, because I'm proud of this damn script and want to share it with the world:

# any server whose last run is older than the threshold is out of compliance
diff = timedelta(hours=hours)
return [k for k in self.runtimes.keys() if self.now - self.runtimes[k] > diff]
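
Stitched together, the whole object is tiny. This is roughly it (TimeChecker, runtimes, and runs_not_in_the_last are the names from above; the rest is just glue):

class TimeChecker(object):
    def __init__(self, runtimes):
        # runtimes is the { server: datetimeObj } dict built from knife status
        self.runtimes = runtimes
        # pin "now" once so every comparison in this check uses the same instant
        self.now = datetime.now()

    def runs_not_in_the_last(self, hours):
        # return every server whose last run is older than the given window
        diff = timedelta(hours=hours)
        return [k for k in self.runtimes.keys() if self.now - self.runtimes[k] > diff]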

Bam!

Step four

Now just extend NagiosCheck with KnifeStatusCheck, make all your options and other goodies in your init, and then make your check().

In the check you make a Knife, make a TimeChecker with the status return… then all you have to do is see if you have any runs_not_in_the_last for the critical window and then the warning window.
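
The skeleton comes out something like this. I'm going from memory on pynagioscheck's option and Status interface, and the server URL, user, and pem name are made up for the example, so treat it as a sketch that leans on the Knife and TimeChecker objects above rather than a drop-in script:

class KnifeStatusCheck(NagiosCheck):
    def __init__(self):
        NagiosCheck.__init__(self)
        # illustrative options: warning/critical windows in hours since the last converge
        self.add_option('w', 'warning', 'warning', 'Warn if a node has not converged in this many hours')
        self.add_option('c', 'critical', 'critical', 'Critical if a node has not converged in this many hours')

    def check(self, opts, args):
        # make knife, feed its status dict to a TimeChecker
        knife = Knife('https://chef.example.com', 'nagios', 'nagios.pem')
        checker = TimeChecker(knife.status())

        crit = checker.runs_not_in_the_last(int(opts.critical))
        warn = checker.runs_not_in_the_last(int(opts.warning))

        if crit:
            raise Status('critical', '%d nodes out of compliance' % len(crit))
        if warn:
            raise Status('warning', '%d nodes out of compliance' % len(warn))
        raise Status('ok', 'all nodes have converged recently')

if __name__ == '__main__':
    # the usual pynagioscheck entry point
    KnifeStatusCheck().run()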

Gotchas and cleanup notes

USE EXCEPTIONS

Seriously, this can and will raise them, so catch them properly and return errors. You will need to catch and handle AT LEAST the following (a rough shape of the try block is sketched after the list):

  • URLError
  • Status
  • UsageError
  • ChefError
  • At least two of your own exceptions
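
In practice that just means the Chef-facing calls in check() get wrapped up roughly like this (same made-up names as the sketch above):

try:
    knife = Knife('https://chef.example.com', 'nagios', 'nagios.pem')
    checker = TimeChecker(knife.status())
except URLError as e:
    # can't even reach the Chef server: report UNKNOWN instead of blowing up
    raise Status('unknown', 'could not reach the Chef server: %s' % e)
except ChefError as e:
    # bad key, bad client name, or the server rejected the request
    raise Status('unknown', 'Chef API error: %s' % e)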

SSL errors

So there is no trusted_certs here. You need to either give your server a working cert, install the snake oil cert into the Nagios server as trusted, or do the dirtiest of monkey patches.

# Dirty Monkeypatch: Python 2.7.9+ verifies HTTPS certs by default,
# so globally disable verification (think hard before doing this)
import sys

if sys.version_info >= (2, 7, 9):
    import ssl
    ssl._create_default_https_context = ssl._create_unverified_context

But before you do this think of the children!!!

Weird ass errors with join

I need to maybe open a ticket and patch pynagioscheck, but I had the weirdest bug when raising a critical. It would die in the super's check on "".join(bt) or something of that ilk.

My workaround was to not just pass msg to the Status exception, but to make msg a list and put the main message in msg[0] and the comma-joined list of servers out of compliance in msg[1]. This means the standard error comes up on normal returns, but if you run the check with -v it will give you the list of servers out of compliance for troubleshooting or debugging. Not bad.
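
Concretely, instead of the plain string raise in the step four sketch, the critical path ends up shaped like this (message wording is just for illustration):

out_of_compliance = checker.runs_not_in_the_last(int(opts.critical))
if out_of_compliance:
    # msg[0] is what Nagios shows normally; msg[1] only shows up with -v
    raise Status('critical',
                 ['%d nodes out of compliance' % len(out_of_compliance),
                  ', '.join(out_of_compliance)])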

Handling the pem file

Eeeeehhhh. This is maybe my one cop-out in the whole script. Basically I created a nagios user in Chef with an insane, never-to-be-used-again, promptly lost password, and put the nagios.pem file alongside the check script. Then I let the script optionally take a pem name, and it just checks that the pem file is alongside the check script. I was considering letting you specify a pem file somewhere else on the server or in the Nagios user's home directory, but decided to skip that and take the simplest route.
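
The path handling itself is only a couple of lines, something like this, where pem_name comes from that optional argument:

import os

# look for the key right next to the check script itself
pem_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), pem_name)
if not os.path.isfile(pem_path):
    raise UsageError('expected to find %s alongside the check script' % pem_name)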

Don’t destroy your nagios server

Seriously. Did you see this code? It runs a search over all nodes and then pulls an attribute for every node back to your Nagios server. This is not the world's fastest check script.

Unless you dedicate some serious power to the Solr service on your Chef server, you should make sure to only run this check once every ten minutes tops. I only check once an hour normally, and then follow up with 10 minute checks on failure, since I only do converges every four hours; an "out of compliance" warning for me would be at the 12 hour mark and critical at 24 hours2.


  1. I don't yet have any clearance to post or share anything I write for, while, at, or around work. The company owns all that, but we are currently working on getting to the point where we can share some stuff. Especially things not so related to our IP, like infrastructure code, cookbooks, checks, etc.

  2. The reason I picked these numbers is I don't want to know the FIRST time a converge fails. I use the omnibus_updater in my runs (pinned version in attributes, of course), so a failed run can be normal. Plus, if I am deploying something that important, I am going to spot check runs and verify everything gets run with knife ssh. I mostly just want to know if a machine has been out of the loop for more than a day, because that's a node that needs to get shot.
