Showing posts with label learning-curve.

10 June 2019

Discover, Receive, Commit

Yesterday, I was learning with my peers in a priesthood quorum at church.  The topic was answered prayer.

The combined message of three scriptures stood out to me:
- Matt 6:8 discover vs beg
- Jacob 4:10 receive vs command
- James 1:5-7 commit vs worry

Receiving God's help, discovering what He has in store, and being pre-committed to act on His generous & challenging prompting is a magic combination to me.

It helps me feel a sense of sufficiency when facing my set of challenges today.  I'm grateful for the learning & strengthening environment afforded to me in my priesthood quorum.

18 May 2019

Another Poem, Finding Encouragement

Here is another poem, born out of a desire to help my daughter get through the last few weeks of high school.


Born Again

Say your piece.
Toot your horn!
Do your thing.
Sing your song!

Then look and see
If it was what
You meant to say or do
Then cut.

And if you meant to do it different,
Or if you notice something new,
Then aren't you glad you started out,
With confidence to see the world,
To learn and do?


Sometimes it takes a lot of confidence to overcome the resistance you feel to take the next step in life.

17 January 2019

Command The Computer

After hearing hype about machine learning eating up tech jobs, I've wondered if I just have my head in the sand about what kind of work I'm doing.  So far, I haven't seen a straightforward way to apply machine learning to my work goals.

However, today I realized I can start a mind shift toward my work that will be both more healthy for me personally, and will allow me to see opportunities for machine learning that I've been missing.

There are 4 main levels of work I can see:
  1. Initiative
  2. Project
  3. Task
  4. Micro-task
As an individual contributor or team lead, I work at levels 2-4.  Most of the opportunities I've seen for applying machine learning have been at the Initiative level, where statistical methods can be applied to solve novel problems or automate whole classes of new/existing work.

Now, imagine you already work at levels 2-4.  Imagine you have a computer with infinite intelligence.  Imagine that if you describe a piece of work that needs to be done, down to the 80% level of precision, this intelligent computer could get it done and take care of the details.  This is what I've found impossible to imagine in advance.

The best way I've found to imagine this is as follows:
  • do a Task the way I've always done it (level 3)
  • break the task up into atomic pieces by creating self-standing git commits as I go (one commit per level 4 Micro-task)
  • when I write each git commit, imagine that I had asked a computer to do what I just did
  • write the git commit in imperative form, as if I had commanded the computer to accomplish the work at the 80% level of precision
  • imagine that I had spent the last 30-60min doing something other than solving the problem, and ask myself, "What else would I have been able to do while the computer was spending 15-30min on this?"
This helps me wire up the neurons in my own brain to start thinking about the computer as an intelligent agent that can assist me.  And it helps me to imagine how I can use my attention and energy more effectively, rather than just solving micro-tasks all day.
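
To make this concrete, here is a minimal sketch of what a couple of those Micro-task commits might look like.  The file names and commit messages are hypothetical; the point is that each commit is self-standing and each message is written in imperative form, as if commanding the computer:

  # Hypothetical level-4 micro-tasks: each commit stands on its own,
  # and each message reads like an instruction given to an intelligent agent.
  git add tests/rate_limiter_test.py
  git commit -m "Add a failing test for burst traffic in the rate limiter"

  git add src/rate_limiter.py
  git commit -m "Cap token-bucket refill at the configured burst size"

  # Re-read the subject lines as if they had been the commands you issued.
  git log --oneline -2

Reading the log afterward is where the thought experiment happens: for each subject line, I ask what I could have done with my attention if a computer had handled that commit.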

It appears to me that applying machine learning effectively requires stepping back at least to the Project or Initiative level and delegating more work to an "intelligent" computer.  And if such an agent doesn't exist, perhaps it can be built.

In a weird sort of way, I'm starting to use the Tell, Don't Ask principle in my own thinking to enhance my ability to imagine solutions coming together more quickly.  I know that's taking the Law of Demeter way outside of its traditional scope.  But I'm trying to break out of the box here. ;)

Though 2018 was a year when I didn't write any blog posts, it was certainly a year of great personal growth.  I look forward to writing here more during 2019.

02 June 2017

Print selection only in Chrome

Maybe you already know about "Print selection only" in Chrome.  But it changed my life today.

I wanted to print only part of a web page.  Usually I tweak which pages to print, but the content spans multiple pages and it's confusing to get just the ones I want.  Or, if I got desperate, I would copy/paste into a text editor and print that instead (after reformatting all the copy/paste noise away).  Instead of all that nonsense, I found a better way.

Here's how to do it:
  1. Select the text you want to print (in Chrome)
  2. Click Print (or press Ctrl-P or Cmd-P on Mac)
  3. Click "More Settings" in the Chrome print dialog
  4. Select the "Selection only" box
  5. Adjust "Scale" to get it on the right number of pages (1 page usually)
Then print and you can move on with your life.  I love how simple it is.  Hopefully you benefit from this.

29 January 2017

Worry is a Signal, Not an Activity

"You look down today, what's going on?" my wife says when I get home from work.  I answer, "I don't know, I was just worrying about this project at work." What's wrong with this picture?

The mental error is that I was treating worry like an activity instead of treating it like a signal.  It's all about the self-talk.  When there is some outstanding issue that needs attention, it is easy to jump straight to thinking about the issue, even though you can't really do anything about it at the moment.  The reality is that if I'm going to resolve the outstanding issue, I need my computer open, I need to talk with a team member to figure things out, and I need to write some code or run a query to see where things stand.

But if I attempt to sort things out mentally while I don't have everything I need to make progress on an issue, it's easy to spin my wheels and fall helplessly into a non-productive mental loop.

On the other hand, if the "outstanding issue" thought comes into my mind, and I call it worry (which it is), and instead of holding onto that thought I treat it like a signal, like an alarm bell, like a red light, then that frees me up to act on the signal.  Instead of treating the "outstanding issue" thought as an activity waiting to be engaged in, if I treat it as a self-alert, then I can move to deal with it at a later, more appropriate time.

The question becomes: "What action can I take right now to make sure that the outstanding issue is dealt with at the appropriate time and place?"  Maybe make a reminder on an index card and put it in my work pants pocket.  Maybe make a reminder on my phone.  Maybe send myself a memory jogger in email.  Maybe write a card and stick it in the Trello / Getting Things Done inbox.  It just needs to be something that I am confident will get my attention and lead me down the right mental path at the time when I know I will have resources to deal with the issue.

Any time spent on worry as an activity (beyond dealing with the reminder for the future time/place) I now believe to be worse than a total waste.  Not just worthless time spent, but also a drag on the rest of my life.  Any unnecessary, anxiety-provoking activity drags me down, makes me less capable of living my life in a worthwhile, enjoyable way.

Why did I not see this earlier in my life?  What could I have done to have learned this earlier in my adolescent / adult experience?

09 April 2014

Heartbleed Reaction Part 2

A particularly relevant statement from http://heartbleed.org (server side):
"Fortunately many large consumer sites are saved by their conservative choice of SSL/TLS termination equipment and software. Ironically smaller and more progressive services or those who have upgraded to latest and best encryption will be affected most."

There doesn't appear to be any up-to-the-minute registry of sites that are affected on the server side.  The scan posted on GitHub is fairly out of date at this point, and from what I can tell it only takes the homepage into consideration, not sites that only forward to https for things like login / checkout.

Here is the best one-off checker I could find (server side):
- https://www.ssllabs.com/ssltest/

Also, it may not be necessary to update Chrome/Firefox, based on the following language on the security stackexchange site:
- http://security.stackexchange.com/questions/55119/does-the-heartbleed-vulnerability-affect-clients-as-severely
"Chrome (all platforms except Android): Probably unaffected (uses NSS)"
"Chrome on Android: 4.1.1 may be affected (uses OpenSSL). Source. 4.1.2 should be unaffected, as it is compiled with heartbeats disabled."
"Mozilla products (e.g. Firefox, Thunderbird, SeaMonkey, Fennec): Probably unaffected, all use NSS"

The potential vulnerability of clients is discussed here:
- http://security.stackexchange.com/questions/55119/does-the-heartbleed-vulnerability-affect-clients-as-severely
- https://www.openssl.org/news/secadv_20140407.txt (language: "client or server")
- http://heartbleed.com
"Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services."

My guess is that curl connecting to an https site would be affected, along with other programs that use OpenSSL: maybe a chat client, or programs downloading their own "auto-updates" over SSL.  Those are the only kinds of things that come to mind right now.
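
As a rough way to gauge exposure on a Linux machine, this is the kind of check I would run (a sketch, not a thorough audit; paths and binary names vary, and programs that bundle their own statically-linked OpenSSL won't show up this way):

  # Which OpenSSL is installed?  1.0.1 through 1.0.1f are vulnerable; 1.0.1g is fixed.
  openssl version

  # Does a given client program link against the system OpenSSL?
  ldd "$(which curl)" | grep -i 'libssl\|libcrypto'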

Reacting to Heartbleed

It's 2:37am and I can't sleep.  It feels like the internet fell down around my ears.

What I am doing:

  1. Got educated at http://heartbleed.com
  2. Updated Chrome to the 34.x version manually (promoted to stable yesterday)
  3. Checked for vulnerability in sites I use
  4. Completely clearing cookies and cache on ALL my computers, family & work, including phones
  5. Installing LastPass and resetting ALL my passwords as I become confident that each site is patched
    • I am assuming that all my user/passwords are either already known at this point, or can be discovered by anyone who recorded SSL traffic in the past 2 years
  6. Wondering what will happen because of this
UPDATE: Chrome update seems to be not strictly necessary as stated here.  But I'm upgrading anyway, because the Chrome stable release on 8 Apr. 2014 has a lot of other security fixes in it.

UPDATE: More details that I've learned are here in a follow-up post.

03 April 2014

Paying Down Learning Risk

I've heard: "Solve the hardest problem first."  As a rule of thumb, that works great to reduce risk early on a software project.  But I found myself saying something different to a co-worker recently:
Sometimes I start with the hardest problem, but sometimes I like to start with a really easy problem.  Why do I do that?
Why would it be a good idea to start with the easiest problem?  What kind of risk are you trying to pay down in a given situation?

Here are some reasons that would justify breaking the "hardest problem first" rule:
  • If you need to gain experience in a new domain, starting with something easy can help you get experience before tackling the harder problems.
  • If the world has changed out from underneath an existing system in an unpredictable way, starting with changing something easy or predictable can help you observe the source of the chaos.
  • If you are sharing work, handing the easy work items out to others based on their learning goals can help them learn better.
  • If tackling a hard problem will take a very long time, and others are waiting for you, then picking an easier part of the problem can help ease integration while still letting you engage on the hard problem.

The kind of risk you want to pay down first is important.  Here are the kinds of risk that would be paid down by the above behaviors:
  • risk of getting lost while learning
  • risk of being unable to bring order to a chaotic system
  • risk of assigning impossible tasks to someone who just wants to ramp up
  • risk of high integration costs because of trying to change too much at once

Most of the time, the risk caused by the uncertainty inherent in solving a hard problem is the most important risk to pay down first.  But sometimes, there are other factors at play, and other subtle variables that need to be managed to achieve a successful group outcome.

Thank you to Michael Nelson for his instructive collaboration on this topic.

07 March 2014

Unanswered Questions

In my experience, programmers vary in their ability to tolerate ambiguity, or in their ability to proceed without an answer to a critical question.

For myself, I've had the sense that I have advanced in my ability to tolerate not knowing the answer to an important question.  However, it's only because of some coping mechanisms I've built up over time.  And without those coping mechanisms, I still basically stink at dealing with ambiguity at a core human level.

There is a sequence that I go through all the time:

  1. No explanation yet
  2. What is the real question?
  3. How do I file open questions while I'm working to find the real question?
  4. What is the TTL on open questions?
  5. How do I review open questions?
  6. How do I avoid forgetting to revisit something important?


And I always feel uneasy when it gets to #4-#6.  I realize that GTD is all about managing a fixed-size attention span, and keeping track of things that fall outside that attention span.  I stink at the paperwork part of GTD, but I try to apply some of its principles in the context of open questions.

Here is a list of ideas that relate to each other and are related to this overall theme:

  • Ambiguous results
  • Anomalous results
  • Open question
  • Loss reflex
  • Disorientation cost
  • Orientation rate
  • Orientation ability
  • Orientation cost
  • Learning pipeline
  • Fixed buffer size of open questions
  • Open question LRU/LN cache eviction (least recently used, least needed)
  • Isolating open questions in code


Here are some articles that relate to this theme:



This post is totally alpha and I don't even know where to go with it, but I wanted to get it out there to think about it some more, since I always think better after pressing "Post" than before.

15 April 2013

Separation of concerns for AWS DB Instance setup

Database node in AWS?  Wow, I'm out of my league on this, maybe writing things down will help me get some clarity. :)

This is a writeup of my thoughts about how to properly separate concerns for a production db node setup in AWS.

Constraints:
  • utilize the available AWS automation tools at every appropriate point
  • reduce the number of decisions that a NoSQL DBA would have to make when bringing a new db node online (storage, machine type, disk configuration)
  • reduce the number of tweaks that a NoSQL DBA would have to make to a setup script to bring a node up (goal: fully automated)
How this played out in my head:
Mongo: You can run these handy MongoDB CloudFormation templates.
Me: How am I going to get a 20-node cluster?  Copy/paste in the CF template?
Me: Copy/paste alarm beeping really loud...
Me: Who am I asking to do this copy/paste in the future, just my proof-of-concept team members, or also NoSQL DBAs?
Proof-of-concept team: When are you going to finally have the Mongo cluster set up?
Me: Need to split the prod setup from the head-to-head setup...  => creating this page to record my prod setup thoughts :)
There seem to be 4 different concerns when setting up a db node:
  1. Base machine image, including the following:
    • software pre-installed, but unconfigured
    • appropriate user accounts pre-created
    • appropriate BIOS & OS settings for a DB node
  2. Storage configuration, pre-configured for the following concerns:
    • Q: How many volumes?
    • Q: How large should the volumes be?
    • Q: What type of volumes should exist? (ephemeral vs. EBS; single volume vs. RAID0/1/10)
    • Q: How durable does the storage need to be? (based on published failure rates)
    • NOTE: All of the above questions depend on the db technology, starting with vendor recommendations, with our tweaks added on.
    • NOTE: All of the above questions should be answered and saved in as reusable of a form as feasible (or at least documented for proof-of-concept tests).
  3. Volume construction, including the following:
    • creating any necessary RAID structures over top of the block devices
    • mounting the resulting storage volumes with the appropriate filesystem
    • carving up the space among different mount points to appropriately cap certain kinds of usage
    • using the appropriate flags for optimum filesystem use (noatime, nodiratime, etc)
    • formatting the volumes appropriately
  4. Running instance parameters, including the following:
    • Q: How much memory is needed?
    • Q: How many cores are needed?
    • Q: Is EBS optimization needed?
Each of these concerns has an impact on the choices made when setting up a database node in AWS.  And luckily, each set of concerns seems to be easily saved in template form, separate from the others, and ready to be deployed when needed.
  1. Base machine image
    • pre-created AMI
    • script in VCS to take a stock AMI on a given OS and produce a new AMI (solves OS upgrade, etc)
  2. Storage configuration
    • volume configuration is saved with the AMI, I think
  3. Volume construction
    • needs to be done at first boot (see the rough shell sketch at the end of this post)
    • db service startup script could be patched to call the volume construction lazily
    • RAID setup software could be pre-deployed in #1, like: https://github.com/jsmartin/raidformer
    • boot script could be laid down as part of #1, or deployed as part of #4
    • can be saved in a CloudFormation script, but not really in any reusable form
  4. Running instance parameters
    • just have this documented somewhere so we know how 
    • possible to script this, this is the sweet spot for CloudFormation
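
To make concern #3 concrete, here is a rough first-boot sketch of volume construction.  The device names, RAID level, filesystem, and mount point are placeholders; the real values would come from the storage configuration decided in #2:

  # Build a RAID0 array over two block devices (hypothetical device names).
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc

  # Format it and mount with db-friendly filesystem flags.
  mkfs.ext4 /dev/md0
  mkdir -p /data/db
  mount -o noatime,nodiratime /dev/md0 /data/db

  # Persist the mount across reboots.
  echo '/dev/md0 /data/db ext4 noatime,nodiratime 0 0' >> /etc/fstab

In practice something like this would be laid down by the boot script from #1 or called lazily by the db service startup script, so a NoSQL DBA never has to run it by hand.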

08 March 2013

Living in the Future

What does it feel like to hoist yourself into the future, and start living in the future again after having drifted for a while?

I'm having to catch up on cloud deployment as part of a new team I'm on.  Wow, there is a lot of change in the last 5 years.  I feel swamped.

I remember I felt swamped like this a while ago.  And reading about taking crazy risks reminded me of the feeling, and makes me wonder what the risks are of introducing significant change into the group I'm working in.

I guess I could get fired for being too inconsiderate of people who don't like change or the associated risks.

I've also had a long-running idea about making genealogy data editable in a distributed version control way.  Although my ideas are undeveloped, due to my current lack of ability to focus, I've been working on how I could make the idea more viable.

So now that I'm feeling the need to get on the early adopter curve again, I saw How to Get Startup Ideas from Paul Graham, and realized that this article was about both things:

  • living in the future
  • developing new ideas

From Paul's article:
It takes time to come across situations where you notice something missing. And often these gaps won't seem to be ideas for companies, just things that would be interesting to build. Which is why it's good to have the time and the inclination to build things just because they're interesting.
Live in the future and build what seems interesting. Strange as it sounds, that's the real recipe.

I guess it's the process of shedding the natural loss aversion that I'm not used to, and accepting and realizing the innovation risks that may come.

06 March 2013

Innovation Risks


Instead of responding to "innovation" as a buzzword, I want to make sure that I always just think about innovation in terms of social changes, large or small.

As software people, we probably tend to be much more change-tolerant on average than many people in the non-software population -- I believe that's one reason why we gravitate toward the soft-ware part of things.

But there are particular kinds of innovation risk, I think:
1) effort investment risk
2) future opportunity risk
3) legacy replacement risk
4) replacement rate risk

Business people often talk about the expected ROI of a particular proposal.  That is what I'd put under #1.  If I expect a return, it'd better be worth the effort I put into it.  This is standard stuff for software developers.  We do estimates to establish expected ROI, we do the work, and we see the results.

When it comes to risk categories #2 and #3, I think there is wide variation in risk tolerance among software developers.  Often this is because of variation in our perception of value, or varied backgrounds, and even with similar backgrounds, variation in our recall of the hard lessons of experience.  Some of the most experienced among us look farther ahead, and therefore avoid certain risks because they look similar to times when we got bitten in the past.

In addition, it seems that #4 is different than #3, because even though some people may be willing to absorb the cost of a significant change once or twice, they may not be willing to continue to absorb changes of the same magnitude on the same frequency.

I think that common responses to these different kinds of risk are as follows:
1) proper planning (mitigates effort investment risk)
2) proper deliberation (mitigates future opportunity risk by measuring which opportunity to chase)
3) caution, loss aversion (amplifies legacy replacement risk by clouding judgement)
4) apathy, rejection (amplifies replacement rate risk by inhibiting trust and hurting relationships)

The difference is that most humans have a disproportionate amount of loss aversion for things they perceive that they own.  That's what distinguishes #2 from #3.

Where we fall into a trap is if we over-deliberate or let loss aversion dictate our learning environment, and if the world has changed in a way that causes us to mis-predict failure based on our prior experience.  Sometimes it's more valuable to walk away from something of value even when we don't know what we're looking for in its replacement, because we have a distinct feeling that non-linear improvement is needed, or because we trust someone else's concept of where we can eventually end up.  Sometimes, something we failed at earlier is now possible, but only possible in a way we are ignorant of, and therefore only possible in a way we cannot predict.

Ignorant and highly-motivated young blood (or adventurous veterans) in our field is what keeps us taking inordinate risks and learning from the experiences that come from them.

Do experience and capability make us better innovators?  Does our level of context make us more capable of effecting positive change?  Not necessarily, I think.

Maybe, to a point.  Once we've achieved a certain level of experience, I believe our efforts have higher overall effectiveness only if we are capable of avoiding the expert trap, and are able to forget & re-learn appropriate parts of our experience in the current context.

Some material that is relevant to this topic:
- http://matt.might.net/articles/programmers-resolutions/
- http://blog.8thlight.com/uncle-bob/2013/03/05/TheStartUpTrap.html (warn: unfortunate language)
- http://www.lessonsoffailure.com/developers/habits-kill-career/ (warn: contains a crude analogy)
- http://tcagley.wordpress.com/tag/zen/
- http://pragprog.com/book/trevan/driving-technical-change

Just remember the following pragmatic rallying cry:
"If it's not broke, let's not invite the UN to fix it." (heard on Linux Radar podcast)

31 December 2011

Useful Git Tips

Every one of these tips was useful to me:
It just amazed me that these were written over a year ago; I've been trying to learn as much about git as I could, but didn't even stumble across these in the manpages.

I found this by looking for an option that would let "git remote update" only fetch a subset of the remotes I have attached.
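
For what it's worth, git does support fetching only a subset of remotes through remote groups: define a group under the remotes.<group> configuration key and pass the group name to "git remote update".  A minimal sketch (the group and remote names here are made up):

  # Define a remote group named "work" containing two existing remotes.
  git config remotes.work "origin upstream"

  # Fetch only the remotes in that group.
  git remote update work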

It is rewarding to feel like I'm working with a rich toolset.

Thanks to @mislav for posting this.

BTW: Poking around in his tweet stream also yielded this gem.  I've always wondered if there was a lightweight XPath for JSON, and there it is.  Some background is in order, as much of the "XPath for JavaScript" mentality was based on early jQuery thoughts.

24 December 2011

Missed Tweets Through News.me

I've never really been a twitter nut.  There are already enough sources of distraction for me, I don't need to add another source.  It just seemed really noisy.

However, people have been saying and linking to important things on twitter for a while now.  I'm just not discerning enough and don't have enough time to filter the firehose in a useful way.

The most helpful summary of interesting news I skim regularly is the weekly LinkedIn emails.  In fact, I think it may have been through a link chain from one of those emails that I stumbled on unionfs, as reported in the last post.  If I could have the same thing for twitter, but personalized to the people I follow, that would be really helpful!

Ironically, as part of the effort to make this blog more stumble-upon-able, I was looking for a way to auto-post on twitter and facebook whenever I post on this blog, and I found twitterfeed.com, which does all of that and more.  And it's easy to set up.

After setting that up this morning, I wondered: "What is the company behind twitterfeed.com?", and found betaworks.  They run some very interesting sites/companies, some of which I knew about (bit.ly), some of which were new to me (findings.com, chartbeat.com).  The interesting one for me right now was another site called news.me.

Turns out that news.me is just the kind of thing that has the chance of making twitter useful to me.  If things go well, you'll probably hear more about it.

27 September 2011

Keeping a Beginner's Mind

As an experienced software developer, I care deeply about retaining my ability to remain flexible in my habits and learning style.  That's the only way I got good, and if I ossify in my learning habits, I'll end up the equivalent of a COBOL programmer.  Certainly not the way to live up to the broad possibilities that exist to make the world a better place.

I appreciated the wider perspective that I got from glancing through these slides by Patrick Kua:
The Beginner's Mind
Although I had seen and skimmed Pragmatic Thinking & Learning, this presentation was a gentle and useful introduction to the whole idea of the Dreyfus model of skill acquisition.

Another very useful idea that this presentation expressed was the contrast between "skill-acquiring apprentice" vs. "closed-minded expert".  Patrick said that the "skill-acquiring" attribute can also apply to highly-skilled practitioners.

Among the tips Patrick gave, the following were useful to me:

  • You can't be an expert on everything. [so don't even try]
  • How can I try this safely?
  • How does this fit in my world?
  • Remain curious.
  • Mix with diverse groups.
  • Beware of built-in biases.
  • Avoid judging early.

I decided to compare and contrast some of the ideas in this presentation because of my forays into the Pragmatic Thinking & Learning book, which was one of three "Further Resources" offered at the end.

Patrick recommended the following books for further learning:


Because of the context in which the Apprenticeship Patterns book appears, I want to read that as a next step for learning how to make a real difference at FamilySearch.

28 July 2010

Playing catch-up

The world is changing all the time, which means constant learning is required to keep current. In other words, you are always behind.

If you accept the fact that you are always going to be behind, in one way or another, the problem then becomes more tractable. How, in a limited amount of time, can you bootstrap yourself into a learning environment where you can catch up enough to get something working?

The core questions are:
  1. What to learn? (because of the limited time, you know you have to be selective)
  2. How to go about it? (because of the limited time, and the constant churn, you have to be a quick learner/applier)

Andy Hunt wrote a book about pragmatic learning. Clayton Christensen wrote several books about disruptive innovation. The Wikipedia contributors wrote an article about the term "learning curve". The ideas in these books can be instructive.

I have my own opinion about the matter.

My answers to the core questions are:
  1. Look around and get creative about how you can apply about-to-be-stable newer technology to the software problem at hand.
  2. Climb the dynamic learning curve by becoming an "early adopter".

Being an "early adopter" is a productive approach to bootstrapping yourself into a rich learning environment. The key to quality learning is keeping it real & experiential. And trying new technology out and trying to apply it to the task at hand is certainly real & experiential.

The part that makes this whole learning equation possible is that the "innovators" actually need the "early adopters" in order to gain traction and stability. In an open world, that means that you can use early adoption as a means by which it is always possible to inject yourself into a rich and productive learning environment.

After I play this game for a while and become skilled at it, I'm guessing there will be a point at which I will want to have a talk with Paul Graham about a startup. Or maybe I will care about being an innovator in my family more than being famous in a technical sphere. Who knows.

Counterpoint

Donald Knuth is the classic example of someone whose life mission specifically excludes playing month-to-month catch-up. Oh, and by the way, that page returns the following HTTP header (in 2010):
Last-Modified: Fri, 23 Sep 2005 04:39:22 GMT
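
Checking a header like that takes a single command (the URL below is just a placeholder, not the actual page):

  # Fetch only the response headers and show when the page last changed.
  curl -sI https://example.com/some-page | grep -i '^last-modified'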

Even the innovation-encouraging Paul Graham wrote an article about addiction that cautions against blanket acceptance of technical improvements.