
What's the point of remembering when we've got Google?

Planted September 08, 2021

We live in an information-rich world. More data is created every day than existed in all of human history before 1900. Each piece of information is just a Google search away - down a rabbit hole of links, footnotes and comments.

Is it realistic to talk about remembering things in this world? When all I need to do is search for a code snippet on Stack Overflow to jog my memory, or craft a search to move my thinking forward, why should I bother remembering syntax, methods or code?

I think there are at least three reasons why it's worthwhile trying to remember some things as we grow as developers.

Working memory

There have been a number of studies exploring how many things we can store in our working memory. The numbers vary but tend to be around 5-9 items. This capacity can be trained to be larger, and encoding information as you hear it can make a big difference. For example, remembering 12 digits like 212539825982 is easier when you split it into two-digit (21-25-39-82-59-82) or three-digit (212-539-825-982) groups.
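The chunking trick above is mechanical enough to sketch in code. Here's a small (hypothetical) helper that splits a digit string into fixed-size groups, the same encoding we do in our heads:

```javascript
// Split a digit string into fixed-size chunks, mimicking how we
// encode a long number into a few memorable groups.
function chunkDigits(digits, size) {
  const chunks = [];
  for (let i = 0; i < digits.length; i += size) {
    chunks.push(digits.slice(i, i + size));
  }
  return chunks.join("-");
}

chunkDigits("212539825982", 2); // "21-25-39-82-59-82"
chunkDigits("212539825982", 3); // "212-539-825-982"
```

Six two-digit chunks or four three-digit chunks both fit comfortably inside that 5-9 item working-memory budget, where twelve separate digits do not.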

When my main gig was teaching mathematics, I often had to defend learning times tables. Why bother? The reason is the limits of working memory. If I'm trying to calculate something complicated (or work on a challenging piece of code), then my memory can serve as a cache of previous calculations and insights. I don't have to pause to work out what 7 × 6 is and, in development, I don't have to pause to work out the function syntax in JavaScript.
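The "memory as cache" analogy maps neatly onto a pattern developers already use: memoization. A minimal sketch (the `memoize` helper is illustrative, not from any particular library):

```javascript
// Wrap a function so repeated calls with the same arguments are served
// from a cache - like recalling 7 × 6 instead of recomputing it.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // compute once, remember forever
    }
    return cache.get(key);
  };
}

const slowMultiply = (a, b) => a * b; // stand-in for an expensive calculation
const fastMultiply = memoize(slowMultiply);

fastMultiply(7, 6); // computed and cached
fastMultiply(7, 6); // served straight from the cache
```

The first call pays the cost of the computation; every later call is a lookup. That's exactly the role memorised facts play while you're deep in a harder problem.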

The items I can keep in my working memory are all about the problem I’m solving, rather than a problem in my solution.


This means I can focus on the main thing. How often have you paused when writing a piece of code, gone to Google part of the problem, and then taken a while to get back into the context of what you were doing?

There are plenty of memes about developers needing uninterrupted time to get into the zone for solving problems. By not leveraging our memory, we are essentially setting ourselves up to self-interrupt incessantly. By keeping a growing store of methods, syntax and common solutions in memory, you allow yourself to stay focused on the actual problem.

This can make the process of development more enjoyable and help you achieve results more quickly.

Coincidental insights

Another benefit of leveraging your memory is that you can make connections that wouldn't be possible otherwise. While solving one problem, an insight from a previous one can save the day. While exploring a new library, syntax you've already mastered from another can help you get the most out of the new tool.

This obviously extends beyond development. Reading a book, watching a TV show or listening to a podcast - all of these new thoughts can interact with those already in your memory.

So what?

I feel like I'm starting a conversation here rather than coming to any conclusions. How should we remember things? Which things are worth remembering? With linting and type-hinting tools, is this less relevant? Should we lean on just our meat memory and, if not, how can we leverage our digital memories to our benefit? What types of digital memories are there?

I’m going to be digging deep on these thoughts both here on the blog and in my newsletter. Sign up below if you’d like to follow and participate in the journey.

Like what you see?

I send out a (semi) regular newsletter which shares the latest from here and my reading from around the web. Sign up below.
