Lazy Brain: Harmful or Just Efficient?

Research into LLMs and education is apparently a hot topic.[1]

I stumbled upon a draft paper today titled “Generative AI Can Harm Learning”[2] in which researchers compared test scores of students divided into three cohorts: students with access to “ChatGPT Base”, students with access to “ChatGPT Tutor” (a “custom math tutoring program”), and students without any AI companion.

The experiment was designed in a way that allowed the researchers to assess and compare results between LLM-assisted and unassisted exercises: 1) students would attend a standard lecture covering a certain concept, 2) they would get a practice period, and 3) their retention would be evaluated through an exam. Step #2 was what differed between the cohorts.

The initial results painted a positive picture: students with access to either ChatGPT Base or ChatGPT Tutor in step #2 generally performed better than students without. However, assessing the students’ retention in step #3 showed that students with access to ChatGPT Base performed worse than the other two cohorts. The researchers explain this partially by noting that these students mainly asked ChatGPT for the answers, instead of using ChatGPT as an aid to learning and understanding. What’s more, both ChatGPT-assisted cohorts expressed higher confidence in performing well on the test than the non-assisted cohort.

Lazy Brain

The results, I think, make a lot of sense. Understanding is not the same as knowledge. Understanding requires effort. I believe our brains are big suckers for efficiency, erring on the side of least effort.

I’m reminded of how, back in the ’90s and ’00s, we were able to recall from memory complete 10-digit phone numbers of family members, close friends, and maybe some institutions. We’ve lost this ability because of mobile phones. Suddenly, for our “survival”, we weren’t required to know these numbers by heart; we merely needed to know how to perform an action yielding the same result: e.g. look up a contact and press (or tap) a button.

Search engines introduced a similar mechanism on a bigger scale. Instead of remembering the details of information we had to look up, we remember the steps we took to find the information. This is the “Google Effect”, also known as “digital amnesia”[3]. It’s something I’m confronted with daily: “what did those docs say again specifically?”

I don’t want to argue whether these cognitive-behavioral changes are good or bad for an individual or society as a whole. It’s too soon to tell what the net effect will be. I also don’t think the net effect is the most important metric concerning the morality of AI technology. Judging disruptive technology by its net effect is a lazy brain thing.

For any disruptive technology there will be advocates overplaying the positives and Luddites overplaying the negatives. That is not to say the truth is somewhere in the middle. Rather, the truth is most likely found across the constituent parts of the whole. One of those parts is the activity of learning, and as the research shows, you can utilize an LLM to aid in precisely that.

You could argue the students in the study weren’t really being offered a choice in how to approach their tasks. They were given either ChatGPT Base, ChatGPT Tutor, or traditional material as aids. I also think their lack of brain maturity might make the students more susceptible to taking paths of least effort, e.g. asking for answers directly as opposed to setting the LLM up to help them form an actual understanding of the problems.

Lazy Brain Plugins

One of the core mechanisms in which, I believe, technology functions is how every technology is an “extension of man.”[4] The classic example is how a man with a hammer becomes a “hammerman”. The hammer becomes an extension of his arm and hand, and adds the affordance to drive nails into wood or just hammer things. And, like other tools, especially with training or repeated use, the brain and body become more and more accustomed to being “hammerman”, incorporating the tool as a natural part of the body.

Becoming skilled at something is all about finding a path for the brain to become lazy in performing something complex. When we see experts and professionals perform, it always looks so easy, so effortless. A key thing, however, is that they undertook a lot of effort to get to that point. They had to develop a true understanding first.

I’m very curious about how LLMs and their future iterations will shape us. Just like the “Google Effect”, I’m sure LLMs will affect our cognition and behavior in certain (new) ways. LLMs cater to the brain’s craving for efficiency, but all this does raise the question of whether we should be wary of losing the value in putting effort into understanding something. Their affordance to converse in natural language gives the LLM the potential to take on a myriad of roles: fact-checker, therapist, programming tutor, conscience?

Personally, I want to be wary of LLMs robbing us of the pleasure of finding things out. Maybe it’s fine to lose a little bit of understanding for more efficiency in some aspects of life and work. On the other hand, it is infinitely more rewarding to reach understanding. Let’s hope our (somewhat mature) brains are capable enough to know when we need to choose between instant and delayed gratification.

I guess we can always ask the LLM.


  1. A recent systematic review (draft paper) synthesized results from 72 studies over the last year or so. ↩︎

  2. Bastani, Hamsa and Bastani, Osbert and Sungu, Alp and Ge, Haosen and Kabakcı, Özge and Mariman, Rei, Generative AI Can Harm Learning (July 15, 2024). The Wharton School Research Paper, Available at SSRN: https://ssrn.com/abstract=4895486 or http://dx.doi.org/10.2139/ssrn.4895486 ↩︎

  3. Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745 ↩︎

  4. McLuhan, M. (1964). Understanding media: the extensions of man. New York, McGraw-Hill. ↩︎
