Cognitive debt in Tech and beyond

Artistic illustration of a human profile representing different facets of the mind
© Wiki Sinaloa on Unsplash

Are Large Language Models (LLMs) and the platforms that use them now indispensable?

AI leaves nobody indifferent. Some get excited, others question it, some are afraid, and others deny any interest at all.

Much attention is given to technological aspects, which makes sense, but little is said about our ability to properly integrate these tools.

“Dementia” and lock-in

The word “dementia” may sound extreme, yet some engineers use it.

As if all the knowledge built with these agents (essentially LLMs) would vanish the minute you cancel your subscription.

A so-called “lock-in” effect: you pay, with your own consent, to improve a proprietary system:

Knowledge Locked Behind Paywalls

My role risks shifting from creator and problem-solver to prompt engineer and output validator [Sebastian Schürmann]

So, we may be entering a new era where devs spend more time refining prompts and validating output than genuinely considering the problems themselves.

OpenAI has published a batch of prompts on this subject.

One thing is certain: it would be very hard to ditch these tools given the leaps in productivity.

However, we should be allowed to discuss the terms of the contract.

Local agents

Software engineers use agents to execute tasks of varying complexity.

Examples include Devstral, the agentic coding model from Mistral AI, and All Hands AI's OpenHands, but there are plenty more.

They can connect to all project tools, even participating in code reviews.

On one hand, these agents are highly tuned for their tasks. On the other, they can articulate concepts related to the problem and propose complete solutions.

Businesses must stay compliant when handling privacy and sensitive data.

An agent can work locally, which avoids the risks associated with sending data through external cloud pipelines.
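As a minimal sketch, assuming a local runtime such as Ollama exposing its default HTTP endpoint on localhost (the model name and prompt are only examples), querying a local model could look like this:

```python
# Minimal sketch: query a locally hosted model instead of a cloud API.
# Assumes a local runtime (e.g. Ollama) listening on its default port;
# the model name and prompt are placeholders, not recommendations.
import json
import urllib.request


def ask_local_model(prompt: str, model: str = "devstral") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


if __name__ == "__main__":
    # The whole round trip stays on the machine: no code or data leaves it.
    print(ask_local_model("Review this function for bugs: def add(a, b): return a - b"))
```

Nothing leaves the machine, so the subscription question becomes a question of local compute instead.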

Autonomy and orchestration

The key is autonomy.

Ideally, agents would learn on their own, fulfill goals, and perhaps even report on their progress.

In other words, they’re not just coding assistants.

We may even see AI-Driven Development (ADD), where AI is the pilot.

Why not whole teams of agents, managed by other, higher-order agents, on large-scale projects?

Another big word is orchestration: the ultimate goal is to coordinate all these resources to produce better results.
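To make the idea concrete, here is a purely illustrative sketch of a higher-order agent delegating tasks to worker agents. Every name in it is hypothetical and the plan is hard-coded; a real orchestrator would delegate the planning itself to a model.

```python
# Illustrative sketch of "agents managed by higher-order agents".
# All names are hypothetical; this is not a real framework.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Task:
    description: str
    result: str | None = None


@dataclass
class WorkerAgent:
    name: str
    run: Callable[[str], str]  # e.g. wraps a local or remote LLM call


@dataclass
class Orchestrator:
    workers: dict[str, WorkerAgent] = field(default_factory=dict)

    def plan(self, goal: str) -> list[tuple[str, Task]]:
        # A real orchestrator would ask a model to decompose the goal;
        # the plan is hard-coded here to keep the example self-contained.
        return [
            ("coder", Task(f"Implement: {goal}")),
            ("reviewer", Task(f"Review the implementation of: {goal}")),
        ]

    def execute(self, goal: str) -> list[Task]:
        tasks = self.plan(goal)
        for worker_name, task in tasks:
            task.result = self.workers[worker_name].run(task.description)
        return [task for _, task in tasks]


if __name__ == "__main__":
    orchestrator = Orchestrator(workers={
        "coder": WorkerAgent("coder", run=lambda d: f"[code for] {d}"),
        "reviewer": WorkerAgent("reviewer", run=lambda d: f"[review of] {d}"),
    })
    for task in orchestrator.execute("add pagination to the user list endpoint"):
        print(task.description, "->", task.result)
```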

The decline of devs

I mentioned dementia at the start. Some fear that heavy reliance on AI could lead to a general decline in reasoning and decision-making abilities.

In this dystopia, there’d be little room left for critical thinking or independence.

Basically, as agents gain autonomy, ours would decline.

In software development, it might translate into the following concerns:

  • What happens when the agent is out of service? Will we be able to take over?
  • Are we coding less and losing that skill over time?
  • Will devs become obsolete?

Tech jobs literally put the cortex to work.

Simply put: the less you use it, the more cognitive debt you accumulate.

A challenge beyond tech

Cognitive debt is a common concern, and it's not limited to Tech.

Still, let’s stay active.

The tool will take whatever place we are willing to give it.

As of writing, AI can orchestrate complex workflows, but only when the patterns are well documented.

When facing the unknown, it’s another story.

Don’t get me wrong: it can open us to new approaches, but mostly for known problems, which are frequent in software development.

Wasting our human energy on familiar problems without bringing a fresh perspective or notable gains will only create frustration.

Nonetheless, assuming AI will always do better is a critical mistake.

Article references