Cognitive biases in enterprise IT

I have a theory that almost all inefficiencies, problems, and destructive behaviors in enterprise IT settings can be classified and connected to some type of cognitive bias. I realized this as I was reading through "Cognitive Biases - A Visual Study Guide". At a macro level you can easily see how these apply. Things like:

  • The Consultation Paradox, and how it baffles you that your advice is ignored, but when a consultant comes in and says the same thing, it's taken up as if no one had recommended it before
  • How the System Justification Effect causes infrastructure or development teams to maintain the status quo rather than modernize, seek improvements, or adapt to the needs of others
  • How the False Consensus Effect causes you to overestimate how much change you've influenced (and you're surprised when you find an island of teams who completely disagree with the change you're trying to make)
  • How, when you try a new technique to speed up a deployment but it has a bug, the Negativity Bias causes people to irrationally recommend against improvements for a period of time (related to the Von Restorff Effect as well)

There are classifications for biases you knew existed but never knew had a proper name. Read through them yourself, and I bet you can think of an example scenario for each bias if you've worked in enterprise IT long enough.

I even see some of my own workplace behaviors reflected in some of these cognitive biases. Understanding each one and being conscious of them might help avoid the error.

Now, remembering each bias, recognizing when it presents itself, and defusing or counteracting it: that's the hard part.

One of many ways to gauge continuous integration maturity

In an enterprise setting, if you're involved with pushing your organization to adopt continuous integration and strive for continuous delivery, you're familiar with the obstacle-filled trail you have to take (keep your head up; the fruits of change are rewarding).

As you watch other teams pick up CI, you hear them talk about where they are in their journey, and when they ask you "Am I getting close to good CI maturity?" (no one actually says it that way, BTW), you can say, "You know you're doing it right when your continuous integration bottleneck is no longer a manual organizational delay but a technical one: how long it takes to bootstrap Chef on a fresh VM, the time it takes to spin up a VM on OpenStack, or the duration of your test runs, code coverage, static code analysis, etc."
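Once you're at that point, the improvement loop becomes empirical: time each technical stage and attack the slowest one first. A minimal sketch of that idea in Python (the stage names and sleep-based stand-ins here are hypothetical placeholders, not any real pipeline):

```python
import time

# Stand-ins for real pipeline stages; in practice you'd time the
# actual commands (test suite, VM provisioning, static analysis).
def run_tests():
    time.sleep(0.02)  # placeholder for a real test run

def static_analysis():
    time.sleep(0.01)  # placeholder for a lint/analysis run

stages = {"test suite": run_tests, "static analysis": static_analysis}

# Measure wall-clock duration of each stage with a monotonic clock.
timings = {}
for name, stage in stages.items():
    start = time.monotonic()
    stage()
    timings[name] = time.monotonic() - start

# Print slowest stage first: that's your next technical bottleneck.
for name, secs in sorted(timings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {secs:.3f}s")
```

Trivial as it looks, keeping per-stage timings over many builds is what turns "CI feels slow" into a concrete backlog item.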


Blogging about technology at Target Corporation

I think most people know this, but I write technology- and career-oriented blog posts for Target Corporation's "Pulse" blog. I've been doing it for over a year, and I actually enjoy it because I'm given the freedom to talk openly about the technology work we do at Target (I'm especially proud to talk about the things I'm personally involved with and work on). In return, it gives Target an opportunity to highlight its increasingly visible technical brand.

If you haven't read any before, you can read my posts here: https://pulse.target.com/author/dan/


Talking to 6th graders about careers in technology

For the final week before summer vacation, Washington Middle School teacher Daniele Albrecht invited me and several others to Career Week for an opportunity to speak to 6th graders about what we do for a living. The chance to do this excited me because I often reflect on my own childhood at the point when I was beginning to think about technology as a job; having accessible people around willing to share their experiences, guidance, and knowledge can be a powerful motivator.

I did two sessions, and the students were top-notch in both of them (the teachers Bill Spradley and Bill Ethridge were excellent hosts as well). They really seemed interested in what it takes to do what I do; I could tell because we went over the allotted time with more questions than I could answer. We talked about things like:
  • "What's your typical day look like?"
  • "How much do you make?" - one of the first questions both times, but I didn't mind giving a ballpark figure; mainly because I remember hearing as a child how much doctors made; that left an impression on me and motivated me to get a good job actually
  • "What can I do to learn and start programming?"
  • "Can I code games at home?" - a good question actually; that's how I got started a Commodore 64!
I stuck to a simple narrative: there are companies who want to pay people to use technology to make more money, and I laced it with examples about making video games, building robots, creating mobile apps, etc. (things they could relate to). It worked out really well, I think. I could see a little bit of myself in some of the students' reactions, and you could tell I got through to a few of them.

The key things I wanted them to walk away with were:
  • Start tinkering and self-learning now
  • Pursue a degree
  • Most of all, apply yourself
Lastly, Jimmy Jacobson's (@jimmyjacobson) post today in /r/programming reminded me to blog about this. In hindsight it would have been useful to have fun illustrations to share while I talked. Jimmy sets a creative example to follow; fork and modify it for yourself if you ever have the opportunity to do this!

Thanks to the teachers who organized this and to the students who listened and wrote me this nice thank you card!


What a radioactive leak can teach us about avoiding blame culture

The Verge published a notable article today titled "Radioactive kitty litter may have ruined our best hope to store nuclear waste". It's a well-researched story by reporter Matt Stroud (@ssttrroouudd) about the New Mexico Waste Isolation Pilot Plant (WIPP) and how a seemingly banal procedural human error has resulted in the shutdown of the site and jeopardized the future of radioactive waste disposal for the facility.

More interesting is the teachable moment in all this about avoiding blame culture. It's easy to react quickly and suggest that the person who made the error should be punished, perhaps heavily fined, fired, or worse (some of the comments on the article suggest just that, and even Jim Conca, a PhD and ex-geologist at WIPP whom Matt interviewed, suggested the same but later backtracked). Matt jumped in to reply to a comment suggesting the offender be jailed with an insightful rebuttal from Per Peterson, a professor at UC Berkeley's Department of Nuclear Engineering with whom Matt exchanged emails for the article. In the comment, Matt relays what Peterson had to say about this:

"The natural tendency in events and accidents is to focus on assigning blame and punishing human errors. This approach is generally ineffective because human error happens. The critical issue for safety is to design systems which are tolerant of human error and which encourage reporting of problems and errors and effective corrective action."

He's absolutely right about this. And it's applicable to so many other industries, like health, construction, and banking, but it's especially relevant to my line of work: IT. It's not uncommon for major mistakes to happen in software development: a small coding error brings down a system, or an incorrect infrastructure config causes downtime in the middle of the night. It's hard not to react with blame top of mind when these moments happen.

Peterson suggests we eschew the natural reaction and instead design processes that account for the possibility of human error AND promote feedback loops that allow for process improvement. In IT, the former can manifest as infrastructure automation (Chef, Puppet, etc.) and continuous integration with good test coverage, and the latter as DevOps culture, blameless post-mortems, and the like. Making this the default mindset in a company really comes down to culture and how this type of behavior is rewarded and encouraged. I know I don't always practice building a blameless culture myself, but stories like this and advice like Peterson's remind me of the importance of doing it.
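As a concrete illustration of "systems tolerant of human error", here's a minimal Python sketch of an error-tolerant deployment step: the system validates a config before applying it, so a typo is caught by a machine rather than pinned on a person. The field names and rules are entirely hypothetical:

```python
# Hypothetical required fields and allowed values for a deploy config.
REQUIRED_KEYS = {"host", "port", "environment"}
VALID_ENVIRONMENTS = {"dev", "stage", "prod"}

def validate(config: dict) -> list:
    """Return a list of human-readable problems; empty means OK to deploy."""
    problems = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    env = config.get("environment")
    if env is not None and env not in VALID_ENVIRONMENTS:
        problems.append(f"unknown environment: {env!r}")
    return problems

# A config with two human errors: a forgotten key and a typo.
config = {"host": "app01", "environment": "pord"}
issues = validate(config)
for issue in issues:
    print("blocked before deploy:", issue)
```

The point isn't the validation logic itself; it's that the failure happens safely before production, and the output describes the problem instead of indicting the person who typed it.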


Speaking about enterprise APIs at AppsWorld 2014

I’m speaking at AppsWorld (Moscone West, San Francisco, February 5th - 6th). Well, not speaking exactly; it's more of a discussion panel. The session is titled "Panel: Launch and management of your API". We're going to be talking about:

  • Publishing, promoting and overseeing your API deployment
  • What tools are available to help you launch and manage your API and how do you use them?
  • How to achieve seamless integration with enterprise systems
  • Monitoring the lifecycle of your API and assessing its effectiveness at meeting developer and application needs
  • Security considerations

Having spent nearly 3 years developing the API platform at the company I work for, I think I'll have a lot of useful protips to share.

If you see me there, stop me and say hi, OR if you want to talk enterprise APIs over a beer, you can reach me on Twitter @pmotch.