11 July 2016

PTSD After Large Release

I don't like releasing large chunks of software at once, but there are situations where it's easier to swap out a large set of interdependent pieces than it is to replace them piecemeal.

Except you have to make it through the release.

I worked on replacing the Family Tree database on familysearch.org with a new one.  Old was Oracle, new was Cassandra.

It started at 12:30am and extended, with various emergencies that were handled more or less gracefully by amazing people, until 6:21am.  And then it was done, and we had to start keeping it up.

After a few surprises that none of our simulations had exposed, I finally have enough rest again and can function more or less normally.

However, the anxiety I felt during the release and the uncertainty I felt in the days just afterwards all added up to a feeling I don't remember feeling before.

Then I realized that the flashbacks and the irrational worry about keeping things working were probably some mild form of post-traumatic stress.

It was certainly only a taste of what people go through who were in danger of losing their lives and barely survived.  I'm not trying to imply that my experience is anywhere near that sort of thing.

I'm just trying to process my emotions and am hopeful this helps someone know they're not alone.

26 August 2015

Git worktree - clone but not quite

Have you ever wanted another workarea for the current repository you're working in?  Maybe you're running some tests and need the normal workarea to stay unchanged, so you can't rebase or tweak another branch in the meantime.

Options up to now:

  • $ git clone [current-repo] [temp-repo]; cd temp-repo  # too heavy-weight, have to push changes back
  • $ sleep 120; check email  # interrupts flow

In Git 2.5.0, you can do this easily:

  • $ git worktree add ../temp master

This creates a new workarea in ../temp with master checked out; the new workarea shares all the current branches.  It's the same repository!  Any git command you run in the new workarea is applied to (and uses the database of) the original repository: commit, rebase, push, etc.

NOTE: If you leave off the branch name, 'git worktree add' creates a branch named after the new worktree directory.

NOTE: Checking out a branch that is already checked out in another worktree (including the original workarea) is disallowed by default.  To do it anyway, you have to force it: $ git worktree add ../temp master -f.  Also, if you ever move off of the branch and later want to switch back while another worktree still has it checked out, you have to use an annoyingly-long option: $ git checkout master --ignore-other-worktrees.  You could put that in an alias like so: $ git config --global alias.co "checkout --ignore-other-worktrees".
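
The whole flow can be sketched end-to-end in a throwaway repository (a sketch assuming git >= 2.5; the repo and branch names here are made up for illustration):

```shell
# Sketch: build a scratch repo, add a worktree on a new branch,
# and show that both workareas share the same object database.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q main-repo
cd main-repo
git config user.email "you@example.com"   # so commit works in a fresh repo
git config user.name  "You"
git commit -q --allow-empty -m "initial commit"

# New workarea in ../temp-tests, on a brand-new branch 'tests':
git worktree add -b tests ../temp-tests
git -C ../temp-tests commit -q --allow-empty -m "made in the worktree"

# The commit made over in ../temp-tests is visible from the original repo,
# because it's the same repository:
git log --oneline tests
```

When you're done, deleting the worktree directory and then running git worktree prune cleans up the bookkeeping under .git/worktrees.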

27 August 2014

Rubber Ducking with Git

You've heard of the phenomenon: when you try to explain a hard problem to someone else, you suddenly know the answer, even though the other person did nothing but listen to you ramble.

On the C2 wiki, it's called Rubber Ducking:

The theory I have about the phenomenon is that in a problem solving situation, the human mind develops a lot of parallel ideas & possible solutions, even ones that you are not aware of.  But when you try to describe the problem and your ideas to someone else, just the act of trying to explain the situation helps you see it more clearly and links the ideas together better in a way that you become aware of more possibilities than you were able to see before.

But I've always had a problem talking to inanimate objects.  Call me less imaginative, I guess.  Or timid, maybe.

Well, I've had the feeling for a while now that using Git with small commits makes me more productive.  And I just realized: I'm using my future self as a rubber ducky, and the act of writing explanatory commit messages for my future self is a source of ideas for me.

09 April 2014

Heartbleed Reaction Part 2

A particularly relevant statement from http://heartbleed.org (server side):
"Fortunately many large consumer sites are saved by their conservative choice of SSL/TLS termination equipment and software. Ironically smaller and more progressive services or those who have upgraded to latest and best encryption will be affected most."

There doesn't appear to be any up-to-the-minute registry of sites that are affected on the server side.  The scan posted on GitHub is fairly out of date at this point, and from what I can tell it only takes the homepage into consideration, not sites that only forward to https for things like login / checkout.

Here is the best one-off checker I could find (server side):
- https://www.ssllabs.com/ssltest/

Also, it may not be necessary to update Chrome/Firefox, based on the following language on the security stackexchange site:
- http://security.stackexchange.com/questions/55119/does-the-heartbleed-vulnerability-affect-clients-as-severely
"Chrome (all platforms except Android): Probably unaffected (uses NSS)"
"Chrome on Android: 4.1.1 may be affected (uses OpenSSL). Source. 4.1.2 should be unaffected, as it is compiled with heartbeats disabled."
"Mozilla products (e.g. Firefox, Thunderbird, SeaMonkey, Fennec): Probably unaffected, all use NSS"

The potential vulnerability of clients is discussed here:
- http://security.stackexchange.com/questions/55119/does-the-heartbleed-vulnerability-affect-clients-as-severely
- https://www.openssl.org/news/secadv_20140407.txt (language: "client or server")
- http://heartbleed.com
"Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services."

My guess is that curl going to an https site would be affected, as would other programs that use OpenSSL: maybe a chat client, or programs downloading their own "auto-updates" over SSL.  Those are the only kinds of things that come to mind right now.
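
For that client-side worry, the deciding factor is which OpenSSL version a program is linked against: 1.0.1 through 1.0.1f are vulnerable, 1.0.1g is the first fixed release, and the older 0.9.8 / 1.0.0 branches never had the heartbeat bug.  A minimal sketch of that check (the function name is made up, and it only handles release version strings, not the 1.0.2 betas):

```shell
# Hypothetical helper: classify an OpenSSL version string.
# Vulnerable range: 1.0.1 through 1.0.1f (fixed in 1.0.1g).
is_heartbleed_vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) echo vulnerable ;;
    *)                echo "not vulnerable" ;;
  esac
}

is_heartbleed_vulnerable 1.0.1e   # vulnerable
is_heartbleed_vulnerable 1.0.1g   # not vulnerable (first fixed release)
is_heartbleed_vulnerable 1.0.0    # not vulnerable (branch never had heartbeats)
```

You could feed it the output of something like openssl version to spot-check a box, though a patched distro package may report an old version string, so treat it as a first pass only.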

Reacting to Heartbleed

It's 2:37am and I can't sleep.  It feels like the internet fell down around my ears.

What I am doing:

  1. Got educated at http://heartbleed.com
  2. Updated Chrome to the 34.x version manually (promoted to stable yesterday)
  3. Checked for vulnerability in sites I use
  4. Completely clearing cookies and cache on ALL my computers, family & work, including phones
  5. Installing LastPass and resetting ALL my passwords as I become confident that each site is patched
    • I am assuming that all my user/passwords are either already known at this point, or can be discovered by anyone who recorded SSL traffic in the past 2 years
  6. Wondering what will happen because of this

UPDATE: The Chrome update seems not to be strictly necessary, as stated here.  But I'm upgrading anyway, because the Chrome stable release on 8 Apr. 2014 has a lot of other security fixes in it.

UPDATE: More details that I've learned are here in a follow-up post.

03 April 2014

Paying Down Learning Risk

I've heard: "Solve the hardest problem first."  As a rule of thumb, that works great to reduce risk early on a software project.  But I found myself saying something different to a co-worker recently:

"Sometimes I start with the hardest problem, but sometimes I like to start with a really easy problem."

Why do I do that?  Why would it be a good idea to start with the easiest problem?  What kind of risk are you trying to pay down in a given situation?

Here are some reasons that would justify breaking the "hardest problem first" rule:
  • If you need to gain experience in a new domain, starting with something easy can help you get experience before tackling the harder problems.
  • If the world has changed out from underneath an existing system in an unpredictable way, starting with changing something easy or predictable can help you observe the source of the chaos.
  • If you are sharing work, handing the easy work items out to others based on their learning goals can help them learn better.
  • If tackling a hard problem will take a very long time, and others are waiting for you, then picking an easier part of the problem can help ease integration while still letting you engage on the hard problem.

The kind of risk you want to pay down first is important.  Here are the kinds of risk that would be paid down by the above behaviors:
  • risk of getting lost while learning
  • risk of being unable to bring order to a chaotic system
  • risk of assigning impossible tasks to someone who just wants to ramp up
  • risk of high integration costs because of trying to change too much at once

Most of the time, the risk caused by the uncertainty inherent in solving a hard problem is the most important risk to pay down first.  But sometimes, there are other factors at play, and other subtle variables that need to be managed to achieve a successful group outcome.

Thank you to Michael Nelson for his instructive collaboration on this topic.

29 March 2014

Painless localhost demos

Quick demo?  Easier than heroku?  Look at @ngrok.

I have periodically needed something that lets me painlessly set up a demo from my laptop that I could just email a link to anyone on the internet.

I guess that ngrok.com would be a pretty valuable target to pwn.  Maybe it wouldn't be too hard to install the server piece on my own host instead.

Thanks to @lmonson for retweeting about @ngrok.