In recent months, I’ve been attempting to spend more time outside of my comfort zone. I thought that I could continue that theme in my writing. You won’t find much personal content on this blog, because I find it unhelpful for understanding serious topics. It can be impossible to discuss an issue when people make it personal. Everyone should spend more time identifying and debating key premises in plain, unaffecting language. That’s what I attempt here, and I think I do a pretty good job.
I shouldn’t place artificial limits on this blog, though. This post has sat in the back of my head for months, but I wasn’t sure if it fit with my brand. Part of me wonders why anyone would take advice from me. After all, who the heck am I? Then again, who the heck is anyone else? I write good stuff, so I should say what I want to say.
I’ve always been highly neurotic. I’m a huge advocate for the Big Five Personality Model, and those tests show a high level of neuroticism for me. I imagine some of this comes from my genes, since my parents also seem quite neurotic. When I landed my first job out of college, the company praised me for my performance but dinged me for my inability to stay calm under pressure. I’m also Ashkenazi, and we are known to lodge a complaint from time to time.
In all my time being a wacky weirdo, though, one particularly stupid event stands out. During the pandemic shutdowns, I worked on an AI for a simple two-player board game. This project inspired an earlier post. At one point, the AI wasn’t working properly. I can’t recall the exact issue, but I think it had something to do with the game not switching turns properly. The automated player would take two turns in a row, or something like that. This frustrated me to no end. I struggled to focus on anything for a couple of days until I decided to seek help. I found some websites where (allegedly) real programmers could review your code and tell you how to improve it. I’m not sure why I thought this would work. It’s not clear how anyone could sift through 200+ lines of code in a 30-minute session and find the problem, but I paid a small amount of money out of desperation. On the call, I tried to explain the issue to the programmer, and she did her best to help. Unfortunately, she couldn’t understand much of what I was saying. I was crying, shaking, and feeling too frustrated to form coherent sentences. I hung up on her after about 10 minutes. She sent an apology (despite not doing anything wrong) and refunded the money.
I, of course, regret my mistreatment of someone who was trying her best to help. Beyond that, I now recognize the sheer absurdity of the situation. Again, this was just some boredom-inspired side project. I had no deadline and I faced no negative consequences if the thing didn’t work. Yet, at that moment, the functionality of my stupid Python code felt like a life-or-death endeavor.
I eventually fixed the issue by implementing a tried-and-true programming technique: finding someone else’s code and copying it. I continued to progress until I ran into another issue: the AI would easily lose if the player attempted a certain strategy. Faced with this problem, I found an even more satisfying solution: I lost interest and moved on to something else. To this day, my GitHub profile showcases an AI that doesn’t quite work. My resume links to the GitHub page, and it hasn’t prevented me from getting any jobs. I don’t know if anyone has ever opened that GitHub page. If they have, they probably clicked the first file, scrolled through, and thought “Yup, that’s Python code.” In other words, no one cares. I’ve told friends and hiring managers about the board game bot, including the problems, and I’ve never received negative judgment. Reactions range from “Oh that’s cool!” (and actually meaning it) to “Oh that’s cool!” (with an implicit “Woah that’s interesting, but I sure don’t care.”)
I enjoy listening to the “Model Talk” podcasts where Nate Silver explains his process for predicting elections. Silver highlighted the importance of removing “temporal autocorrelation” from his forecasts. This is a fancy way of saying that you shouldn’t be able to predict the future value of the forecast. As he put it, if you can predict that the model will give a candidate a 70% shot of winning next week, the model should show a 70% probability right now. Otherwise, it’s producing suboptimal results in the current moment.
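To make that concrete, here’s a toy sketch in Python (my own illustration, with made-up numbers, not anything from Silver’s actual model). It simulates a flawed forecast that predictably drifts toward its final value; if you could anticipate that drift, you could just report the drifted number today, which is exactly why a good forecast shouldn’t have it.

```python
import numpy as np

# A toy forecast with predictable drift (hypothetical numbers, not a real model).
rng = np.random.default_rng(0)

true_final = 0.70          # where the forecast ends up on election day
forecast = [0.50]          # a flawed forecast that starts too low
for _ in range(60):        # 60 days of daily updates
    drift = 0.3 * (true_final - forecast[-1])   # predictable pull toward 0.70
    noise = rng.normal(0.0, 0.01)                # genuine day-to-day news
    forecast.append(forecast[-1] + drift + noise)

changes = np.diff(forecast)
# A positive average change means the forecast drifts upward in a predictable way.
# A well-calibrated forecast should have day-to-day changes that average to ~zero:
# today's number already reflects everything you know today.
print(f"average daily change: {changes.mean():+.4f}")
```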
I can apply this same idea to my buggy AI. I’ll replace “chance of a candidate winning the election” with “chance that this is an important problem.” In the moment, that chance seemed close to 100%. A couple of years later, I can confirm that it didn’t matter at all. That’s not a problem in itself, of course. Everybody gets stuff wrong. The issue is that I should have known better at the time. Somewhere in my brain, I knew that this coding problem wouldn’t matter in two months, and definitely not in two years. To paraphrase Silver: if the model will say it has a 1% chance of mattering in two months, it should show a 1% chance of mattering right now.
Plenty of events meet this threshold. In two months, I will still care about my health, finances, employability, and relationships with friends and family. I think my model accounts for temporal autocorrelation here. We also, sometimes, have to let ourselves indulge in human stupidity. Any sports fan knows the feeling. Your heart rate rises and you feel physical stress when the game’s on the line. Then your team loses, and you think “dude, it’s grown men playing with a ball, who really cares about this crap.” Sometimes it is okay to act like an animal.
Oftentimes, though, it’s maladaptive. I knew that code wouldn’t matter in two months. When I interviewed for my first post-college job, I lost my validated parking ticket. Do you know how often I think about that? Never, of course, but it popped into my head when writing this paragraph. Yet, at the time, I felt palpable anguish. If I had asked myself “Will this matter in two months?” I wouldn’t have worried about it. I’ve also been applying this thinking to my social interactions. I used to feel deep dread at the thought of hanging out with strangers. Sure, sometimes you meet a bunch of people and don’t click with any of them. That’s a bummer, but I’ve never sat at home and thought “Man, that event from two months ago still hurts me.” People aren’t pit bulls: a bad interaction doesn’t leave a permanent scar. So, I just talk to people. If it goes well, cool. If it doesn’t, I know better than to inflate the importance of that awkwardness.
Ultimately, that’s why I published this article. Maybe someone will comment “Self-help bullshit? Seriously? That’s not what I pay my $0 a month for!” If so, it will sting a bit in the moment. In two months, though, I won’t care. I’ll think “fuck that guy” for a day or two and then forget about it. Maybe, on the contrary, this article will receive a ton of positive feedback. To be honest, that won’t matter in two months either. Even the happy stuff doesn’t last that long. We all know how quickly the new job or the new hobby becomes “well, guess I’m doing this again.” I can predict one thing about the next two months, though. I’ll feel more whole for having published something more personal on the blog. I’ll also appreciate, as I always do, the fact that a decent number of people care about what I have to say.
Fixing Anxiety with Time Series Models
“Part of me wonders why anyone would take advice from me. After all, who the heck am I? Then again, who the heck is anyone else? I write good stuff, so I should say what I want to say.”
I found this really inspirational! “Who the heck is anyone else?” is going to end up used when I have performance anxiety in the future, I can tell.
You're putting your analytic mind to good use here. I hadn't heard "will this matter in 2 months?" quite like this before, with "then it doesn't really matter right now" being the implication, although I absolutely agree that letting go of outcome expectations is often the healthiest choice.