2025 – the year AI stole my dopamine

A reflection on automation, responsibility and the evolving definition of purpose in the AI era.

TL;DR

Early software development delivered purpose through struggle, mastery, and ownership. AI collapsed that loop.

As AI evolved from a gimmick into a capable reasoning partner, productivity exploded…but satisfaction quietly evaporated. The dopamine wasn’t being stolen outright; it was slowly being displaced. When effort disappears, so does the sense of authorship and responsibility that once gave our work meaning.

This isn’t just an engineer problem. As AI becomes universal across industries, we’re now entering an “Agency Age” where tools increasingly act on our behalf. When understanding becomes optional and outcomes are generated rather than earned, responsibility blurs and systems become brittle.

The real risk isn’t job loss – it’s loss of agency. Progress without responsible ownership becomes momentum without direction.

The next frontier isn’t faster output or smarter tools. It’s intentional agency: deciding what to automate, what to retain, and where responsibility must remain human. Fulfilment in the AI era won’t come from competing with machines, but from reclaiming authorship over direction, intent, and consequence.

AI didn’t just change how work gets done – it forced us to redefine purpose itself.


Setting a relatable scene

It’s the year 2008 and we’re deep in the weeds developing OptusTV on the iPhone 2G and the soon-to-be-released iPhone 3G – phones that were never meant to stream video. That was me, building a way to stream (ahem, progressively download) video playback for a product launch. No documentation, no guidance, no outside help – just one impossible task after another, one tiny addictive victory at a time. Young, ambitious, out to solve all problems with technology – go get ’em tiger.

It was an incredible time to be a software engineer. There were coding books everywhere, limited web resources to lean on, the internet was painfully slow in Australia, CPUs still had “Pentium” at the start, and AI was something that only existed in movies and sci-fi novels. Developing software back then felt very much like opening up the digital frontier, and the notion of building a piece of software for the very first time was a real possibility.

I suspect this period of early expansion is the same in most pursuits across industries: the thrill of doing something “first” – feeling like you’ve conquered the mountain before all others and lived to tell the tale. It’s an exhilarating feeling, and being part of a group of like-minded individuals sharing similar goals made that period of my life truly special.

Fast-forward to 2025 and, unbeknownst to me, it would prove to be the pivotal year in which AI assumed the heavy lifting of software development and, with it, quietly took my dopamine as well.

I’ll be honest, I was particularly late to the AI game. Sure, I read about it, tinkered with the tools, made a video of Will Smith gobbling pasta, and read about the philosophical and societal issues concerning our jobs and livelihoods – but using it in my day-to-day? Nope. It was only when I was invited by GitHub to beta test Copilot in 2021–22 that I actually decided to give it a try.

My initial assessment of AI was that it was convoluted and clumsy. Sure it worked, but it felt like someone speaking broken English giving a lecture on Shakespeare – it got over the line, but in a very roundabout kind of way. Hallucinations happened often, context was limited, and the results would wander off into oblivion very quickly. In all honesty, it felt like autocomplete on steroids – and not in a good way.

I wrestled with the technology a lot, fought it even, when it wanted to do one thing and I another. Differences of opinion, the latest versions and features clashing, and solving problems it had created (and would then apologise profusely for) drove me round the twist, to the point where I turned it off to get some sanity and focus back. I consulted my peers, who signalled the same experiences and frustrations – some going as far as requesting refunds, given the technology did not meet expectations in terms of productivity and quality.

That was the way it was for a while – AI was still very gimmicky. I didn’t use it in my primary role and it didn’t have a significant impact on what I did. I continued to experiment with AI, but not from an engineering point of view. Research, ideation and analysis were common uses, and they genuinely helped. Combing the internet and research documents for specific pieces of information is incredibly time-consuming, and AI breezed through it better and better with each iteration, to the point where a “flow” state began to emerge in my work. This felt like cheating in some ways and, in others, like laziness. I knew what I wanted, and now I had an all-seeing oracle to consult whenever I chose. It was intoxicating, and unsettling – a small taste of what might happen if AI became capable of carrying more than just the menial load.

Then something shifted…

When OpenAI’s o1 model came out in 2024 with the ability to “reason”, I rebooted coding with AI. This was significant: the tools had clearly been on a supercharged journey of evolution to address the needs of software engineers. Once it understood my work, could reconcile its suggestions with what I was trying to achieve, hallucinated less, no longer needed elaborate prompt-engineering structures and sequencing, and started working with me, I made a dedicated effort to incorporate it everywhere and anywhere I could.

The productivity gains were dramatic and very, very noticeable. Analysis became trivial, identifying bottlenecks and issues with my work became a matter of minutes instead of hours, and roughing in patches took days instead of weeks. The tool had shifted into the realm of the conversational and cooperative. Add the ability to read, understand and compose emails, designs and flowcharts, create images, and develop and iterate, and the productivity really began to accelerate. AI wasn’t assisting me anymore; it was taking the load off my shoulders.

That’s when the discomfort set in.

The problem that slowly began to surface in my everyday interaction with AI was a lack of satisfaction in my work. Sure, I was motoring through work that would have otherwise taken days, weeks or, in some cases, months pre-AI, but I was left on the other side with an ever-growing feeling of hollowness. My first reaction was to do more: improve the systems I’d put in place, make them better, faster, more resilient, give the AI access to more information. But the unrelenting efficiency with which it performed the tasks I gave it only fuelled the underlying problem. I came to realise that massive increases in personal productivity didn’t equate to satisfaction.

Up until this point, solving digital problems was my thing. People came to me from all over with unsolvable problems, confident in my ability to solve them. That responsibility gave me incredible purpose. Throwing yourself into problems no one else could comprehend, let alone fix, is quite a privilege. The dopamine wasn’t just from success; it was from the cycle of struggle, anticipation and resolution. AI collapsed that entire loop.

Sure, my initial reaction to AI solving problems for me was akin to the first time you see a magic trick: it’s truly a wondrous thing to behold. AI is relatively instant, understands every subject and every language, is mostly right (given the right context and information) and endlessly helpful. But once I realised it was doing my thing, I realised my joy wasn’t being stolen outright – it was being displaced.

This isn’t just a technology problem.

With AI now permeating Finance, Healthcare, Law, Marketing & Media (Canva, anyone?), Manufacturing, Logistics, Insurance, Education, Resourcing, Retail & FMCG, it’s on the way to being as universal as electricity and the internet. Most people and industries are encountering AI through tools, not as a core capability, but the overall direction is clear.

What productivity, satisfaction, creativity, fulfilment and purpose mean will be something different and unique to every one of us – especially now that we’ve moved away from the ages of “distribution” (the digital age, the information age, social media) and into something that more closely resembles the “agency age”, where our agency is shifting away from us in a constant, but ever so subtle, productivity-boosting, dopamine-sapping way. Yes, there’s an element of fear in that, but an equal amount of thrill and excitement.

That shift has consequences.

Responsibility used to be tangibly connected to the effort. You built something because you understood how. If it broke, you fixed it because you owned it. AI challenges that relationship. When you no longer fully understand what’s happening under the hood, responsibility becomes abstract, and systems by consequence become brittle.

This is where I get a little uncomfortable in my chair. It’s not a fear of AI itself, but more about where responsibility lives in an AI-enabled world. When the thinking, designing and deciding are increasingly outsourced, what does it mean to truly own an outcome?

…I don’t have clean answers yet – and I suspect the answer will change over time.

For me, and for right now, the transition is well underway. Building things with software is always going to be a thing I do – I’ve come across too many upside-down systems to believe AI has clean answers to incompatible realities, messy constraints and compromises that are fundamentally human in nature. Those gaps still matter…for now.

But I’m certain of a few things. First, redefining purpose and fulfilment is no longer optional. The things that historically gave us meaning – struggle, mastery, publication – are being reshaped. Preserving first-principle thinking, craft, artistry and responsibility isn’t nostalgic, though – it’s resilience.

AI did steal my dopamine in 2025. It flipped the switch on who was solving the everyday puzzles, and for the first time in my career, I had to confront the difference between progress and fulfilment. Looking back, that shift was inevitable. Back then, if there had been a faster or better way to solve those streaming problems, I would have taken it without hesitation. I’ve always used the best tools available. AI isn’t different in principle: it’s different in proximity.

It’s closer to the bone now. That’s why it feels unsettling.

What’s changed isn’t productivity, or even creativity – it’s agency. When effort collapses, responsibility becomes harder to see or assign. When understanding becomes optional, ownership blurs. And when systems become powerful enough to act on our behalf, the question stops being “can we build this?” and becomes “who is responsible (read: accountable) when it works, and what happens when it doesn’t?”

This is the heart of the age we’re entering.

The next frontier isn’t faster code, smarter tools, or even higher output. It’s intentional agency. It’s deciding where automation ends and responsibility begins. It’s preserving first-principle thinking not as nostalgia, but as a safeguard against brittle systems that no one truly understands or owns – a very real danger I’m seeing unfold.

AI will continue to accelerate…everything, including our capacity to build things we don’t fully comprehend. That makes design, diagnosis and decision-making more important, not less. The value is no longer in doing the work faster; it’s in asking better questions, defining the right problems, and choosing deliberately what should, and should not, be delegated.

In the Agency Age, fulfilment won’t come from outpacing machines – let’s be honest, that’s not possible anymore. It will come from reclaiming authorship over direction, intent and consequence. That’s a tougher challenge than writing code ever was, and it requires a different mental approach, one we can’t outsource.

AI may have stolen my dopamine, but it showed me something far more important: progress without agency is just momentum. And momentum without responsibility is how systems, and people, quietly lose their way. The challenge isn’t about keeping pace with AI – it’s about redefining what purpose means in a world where agency is no longer guaranteed.