
Author: Matt Merryfull

2025 – the year AI stole my dopamine

A reflection on automation, responsibility and the evolving definition of purpose in the AI era.

TL;DR

Early software development delivered purpose through struggle, mastery, and ownership. AI collapsed that loop.

As AI evolved from a gimmick into a capable reasoning partner, productivity exploded…but satisfaction quietly evaporated. The dopamine wasn’t being stolen outright; it was slowly (sort of) displaced. When effort disappears, so does the sense of authorship and responsibility that once gave our work meaning.

This isn’t just an engineering problem. As AI becomes universal across industries, we’re now entering an “Agency Age” where tools increasingly act on our behalf. When understanding becomes optional and outcomes are generated rather than earned, responsibility blurs and systems become brittle.

The real risk isn’t job loss – it’s loss of agency. Progress without responsible ownership becomes momentum without direction.

The next frontier isn’t faster output or smarter tools. It’s intentional agency: deciding what to automate, what to retain, and where responsibility must remain human. Fulfilment in the AI era won’t come from competing with machines, but from reclaiming authorship over direction, intent, and consequence.

AI didn’t just change how work gets done – it’s forcing us to redefine purpose itself.


Setting a relatable scene

It’s the year 2008 and we’re deep in the weeds developing OptusTV on the original iPhone and the soon-to-be-released iPhone 3G; phones that were never meant to stream video – that was me building a way to stream (ahem, progressively download) video playback for a product launch. No documentation, no guidance, no outside help – just one impossible task after another, one tiny addictive victory at a time. Young, ambitious, out there to solve all problems with technology – go get ’em tiger.

It was an incredible time to be a software engineer: coding books everywhere, limited web resources to lean on, painfully slow internet in Australia, CPUs that still had “Pentium” at the start, and AI something that only existed in movies and sci-fi novels. Developing software back then felt very much like opening up the digital frontier, and the notion of being the very first to build a piece of software was a real possibility.

I suspect this period of early expansion is the same in most pursuits across industries: the thrill of doing something “first” – feeling like you’ve conquered the mountain before all others and living to tell the tale. It’s an exhilarating feeling, and being part of a group of like-minded individuals sharing similar goals made that period in my life truly special.

Fast-forward to 2025 and, unbeknownst to me, it would prove to be the pivotal year in which AI assumed the heavy lifting of software development – and with it, quietly took my dopamine as well.

I’ll be honest, I was particularly late to the AI game. Sure, I read about it, tinkered with the tools, made a video of Will Smith gobbling pasta, read about the philosophical and societal issues concerning our jobs and livelihoods – but using it in my day-to-day? Nope. It was only when GitHub invited me to beta test Copilot in 2021–22 that I actually decided to give it a try.

My initial assessment of AI was that it was convoluted and clumsy. Sure, it worked, but it felt like someone speaking broken English giving a lecture on Shakespeare: it got over the line, but in a very roundabout kind of way. Hallucinations happened often, context was limited, and the output would wander off into oblivion very quickly. In all honesty, it felt like autocomplete on steroids – and not in a good way.

I wrestled with the technology a lot – fought it, even – when it wanted to do one thing and I another. Differences of opinion, clashes between versions and features, and solving problems it had created (and would then apologise profusely for) drove me round the twist, to the point where I turned it off to get some sanity and focus back. I consulted my peers, who signalled the same experiences and frustrations; some went as far as requesting refunds, given the technology did not meet expectations in terms of productivity and quality.

That was the way it was for a while – AI was still very gimmicky. I didn’t use it in my primary role and it didn’t have a significant impact on what I did. I continued to experiment with AI, but not from an engineering point of view. Research, ideation and analysis were commonplace uses, and it genuinely helped. Combing the internet and research documents for specific pieces of information is incredibly time-consuming, and AI breezed through that better and better with each iteration, to the point where a “flow” state began to emerge in my work. This felt like cheating in some ways and, in others, like laziness. I knew what I wanted and now I had the all-seeing oracle to consult whenever I chose. It was intoxicating, and unsettling – a small taste of what might happen if AI became capable of carrying more than just the menial load.

Then something shifted…

When OpenAI’s o1 model came out in 2024 with the ability to “reason”, I rebooted coding with AI. This was significant: the AI tools had clearly been on a supercharged journey of evolution to address the needs of software engineers. Once it understood my work, could reconcile its suggestions with what I was trying to achieve, hallucinated far less, removed the need for elaborate prompt-engineering structures and sequencing, and started working with me, I made a dedicated effort to incorporate it everywhere and anywhere I could.

The productivity gains were dramatic and very, very noticeable. Analysis became trivial; identifying bottlenecks and issues with my work became a matter of minutes instead of hours; roughing in patches took days instead of weeks. The tool had shifted into the realm of the conversational and cooperative. Add the ability to read, understand and compose emails, designs and flowcharts, create images, and develop and iterate, and the productivity really began to accelerate. AI wasn’t assisting me anymore; it was taking the load off my shoulders.

That’s when the discomfort set in.

The problem that slowly began to surface in my everyday interaction with AI was a lack of satisfaction in my work. Sure, I was motoring through work that would otherwise have taken days, weeks or in some cases months pre-AI, but I was left on the other side with an ever-growing feeling of hollowness. My first reaction was to do more work – improve the systems I’d put in place, make them better, faster, more resilient, give it access to more information – but the unrelenting efficiency with which the AI performed the tasks I gave it only fuelled the underlying problem. I came to realise that massive increases in personal productivity didn’t equate to satisfaction.

Up until this point in time, solving digital problems was my thing. People came to me from all over to solve unsolvable problems, confident in my ability to solve them. That responsibility gave me incredible purpose. Throwing yourself into problems no one else could comprehend, let alone fix, is quite a privilege. The dopamine wasn’t just from success; it was from the cycle of struggle, anticipation and resolution. AI collapsed that entire loop.

Sure, my initial reaction to AI solving problems for me was akin to seeing a magic trick for the first time – it’s truly a wondrous thing to behold. AI is near-instant, understands every subject and every language, is mostly right (given the right context and information) and endlessly helpful. But once I noticed it was doing my thing, I realised my joy wasn’t being stolen outright – it was being displaced.

This isn’t just a technology problem.

With AI now permeating Finance, Healthcare, Law, Marketing & Media (Canva, anyone?), Manufacturing, Logistics, Insurance, Education, Resourcing, Retail & FMCG, it’s on the way to being as universal as electricity and the internet. Most people and industries are encountering AI through tools, not as a core capability, but the overall direction is clear.

What productivity, satisfaction, creativity, fulfilment and purpose mean will be something different and unique to every one of us – especially considering we’ve moved away from the ages of “distribution” (the digital age, the information age, social media) and into something that more closely resembles the “agency age”, where our agency is shifting away from us in a constant but ever so subtle, productivity-boosting, dopamine-sapping way. Yes, it has an element of fear, but an equal amount of thrill and excitement.

That shift has consequences.

Responsibility used to be tangibly connected to the effort. You built something because you understood how. If it broke, you fixed it because you owned it. AI challenges that relationship. When you no longer fully understand what’s happening under the hood, responsibility becomes abstract, and systems by consequence become brittle.

This is where I get a little uncomfortable in my chair. It’s not a fear of AI itself, but more so about where responsibility lives in an AI-enabled world. When the thinking, designing and deciding are increasingly outsourced, what does it mean to truly own an outcome?

…I don’t have clean answers yet – and I suspect the answer will change over time.

For me, and for right now, the transition is well underway. Building things with software is always going to be a thing I do – I’ve come across too many upside-down systems to believe AI has clean answers to incompatible realities, messy constraints and compromises that are fundamentally human in nature. Those gaps still matter…for now.

But I’m certain of a few things. Firstly, redefining purpose and fulfilment is no longer optional. The things that historically gave us meaning – struggle, mastery, publication – are being reshaped. Preserving first-principle thinking, craft, artistry and responsibility isn’t nostalgic, though – it’s resilience.

AI did steal my dopamine in 2025. It flipped the switch on who was solving the everyday puzzles and, for the first time in my career, I had to confront the difference between progress and fulfilment. Looking back, that shift was inevitable. Back in those early iPhone days, if there had been a faster or better way to solve those streaming problems, I would have taken it without hesitation. I’ve always used the best tools available. AI isn’t different in principle: it’s different in proximity.

It’s closer to the bone now. That’s why it feels unsettling.

What’s changed isn’t productivity, or even creativity – it’s agency. When effort collapses, responsibility becomes harder to see or assign. When understanding becomes optional, ownership blurs. And when systems become powerful enough to act on our behalf, the question stops being “can we build this?” and becomes “who is responsible (read: accountable) when it works, and what happens when it doesn’t?”

This is the heart of the age we’re entering.

The next frontier isn’t faster code, smarter tools, or even higher output. It’s intentional agency. It’s deciding where automation ends and responsibility begins. It’s preserving first-principle thinking not as nostalgia, but as a safeguard against brittle systems that no one truly understands or owns – a very real danger I’m seeing unfold.

AI will continue to accelerate…everything, including our capacity to build things we don’t fully comprehend or understand. That makes design, diagnosis, and decision-making more important, not less. The value is no longer in doing the work faster; it’s in asking better questions, defining the right problems, and choosing deliberately what should, and should not be delegated.

In the Agency Age, fulfilment won’t come from outpacing machines – let’s be honest, that’s not possible anymore. It will come from reclaiming authorship over direction, intent, and consequence. That’s a tougher challenge than writing code ever was, and requires a different mental approach we can’t outsource.

AI may have stolen my dopamine, but it showed me something far more important: progress without agency is just momentum. And momentum without responsibility is how systems – and people – quietly lose their way. The challenge isn’t about keeping pace with AI; it’s about redefining what purpose means in a world where agency is no longer guaranteed.


AI Isn’t the Future, It’s the Filter

Temporary Concern, or the New Normal?

TL;DR

Businesses are no longer deciding whether to adopt AI — they’re deciding how quickly they can do it before someone else does. Traditional digital modernisation often failed due to high upfront cost and complexity. AI flips that — reducing resource overhead, accelerating delivery, and changing how software and strategy are approached.

But…

From due diligence to product design, AI-readiness is now a permanent evaluation lens. Companies ignoring it risk being outpaced or devalued. But moving fast without architectural discipline — à la vibe coding — introduces its own fragility.

To help teams move with confidence, I’ve developed the Blacklight 4D Framework:

Discover → Diagnose → Design → Deliver — a structured path to uncover, validate, and execute on AI-native opportunities.

📩 I work with investors, founders, and teams to navigate innovation, M&A, and strategic tech delivery. Let’s talk if you’re building, buying, or betting on the next wave.


Across industries, businesses are facing a clear fork in the road: evolve with AI, or be overtaken by those who already have. In every due diligence engagement I’ve run over the past 12 months — from payments to policy to platform ventures — AI is no longer a speculative layer. It’s a strategic constant.

What was once an exploration — “Could we use AI here?” — is now a gating condition: “Are you AI-ready enough to move forward?”

If you’re not embedding AI into your architectural thinking, operational model, and commercial roadmap, you’re preparing to compete against businesses that already have — and they’re doing it faster, leaner, and smarter.


From Paper-Tiger Modernisation to AI-Native Execution

Traditional business modernisation promised leverage: digitise your systems, connect your data, unlock new markets. But it often fell flat. Expensive COTS systems, bloated middleware layers, and months of onboarding for abstract outcomes. Most companies balked at the cost, because the resource overhead and risk outweighed the perceived opportunity.

Now? AI-native strategies have flipped that dynamic.

  • Prototyping timelines have collapsed.

  • Small teams can outbuild entire departments.

  • LLM-powered workflows remove the need for excessive headcount to scale.

  • Training, automation, and deployment can be embedded with near-zero marginal cost.

The high-friction modernisation of the last decade has been replaced by modular, intent-driven, low-lift innovation — and the gap between adopters and followers is growing…really, really fast.

AI as a Due Diligence Standard, Not a Side Topic

In technical due diligence, we’ve reached a tipping point: AI-readiness isn’t just part of the review — it’s foundational.

Key questions now include:

  • Can this company scale without exploding OPEX?

  • Is the product team fluent in AI-first design and automation?

  • How resilient is the architecture under real-world LLM use?

  • Can the company defend its IP in an AI-assisted competitive field?

We’re not just looking at code or capability anymore — we’re looking at velocity, adaptability, and execution logic. We’re also applying this lens internally: our own skunkworks innovation tracks are AI-native from day zero, because anything else is slower, costlier, and harder to pivot.

The Software Shift: From Code to Cognition

This shift has deep implications for software engineering and IT leadership.

AI is not just a tool for speed — it’s transforming the structure and economics of delivery:

  • System design trumps individual code quality

  • Prompting replaces boilerplate

  • Testing and deployment are increasingly self-managed

  • Toolchains are flattening, generalist builders are accelerating

  • Cost-to-deploy is approaching zero

This isn’t just a change in toolkits — it’s a redefinition of what it means to build.

Vibe Coding and the Cognitive Gap

But here’s where the nuance creeps in — and where strategic leaders need to tread carefully.

The rise of vibe coding — where users describe what they want and AI writes the code — introduces a new kind of fragility. Yes, anyone can now generate software. But most don’t know how it works, what breaks it, or how to fix it. It’s like handing the keys to a supercar to someone who’s never driven manual.

While this lowers the barrier to entry, it also raises the floor for required system literacy. We’re heading into a world where more people can “drive” the system — but fewer understand how it’s wired underneath.

In the near future, this may be abstracted away entirely — with specialist LLMs handling fault tolerance, debugging, triage, and observability. Developers will become orchestrators, not operators. But for now? It’s a risk. One that must be assessed in any serious technical review or innovation planning cycle.

Blacklight 4D as Strategy: Build What the Business Can’t Yet Buy

We’ve formalised this into what we call Blacklight 4D (find what you cannot yet see) — a short-cycle innovation program built around AI-native tooling, modular architecture, and due diligence-grade engineering and creative disciplines.

It’s designed for companies that:

  • Need to prototype fast without overcommitting headcount

  • Want to validate innovation without legacy drag

  • Are preparing for M&A, internal restructuring, or investor scrutiny

  • Have leadership buy-in, but need execution clarity

It’s not about shiny proofs of concept. It’s about building real capability, fast, with the structural foresight needed to scale or integrate post-sprint (look to our recent hackathon for more).

Who We Help

I partner with decision-makers who see the writing on the wall and want to get ahead of it. Whether you’re preparing for a capital raise, exploring a tech acquisition, building internal capability, or modernising your stack — I help map risk, accelerate opportunity, and engineer with intent.

📩 If you’re navigating AI-readiness, due diligence, or innovation bottlenecks — let’s talk. I’m currently supporting engagements across multiple sectors.


Building Bridges

with Real-Time Data and Unreal Engine

TL;DR – A Hackathon Recap

We set out to prove that Unreal Engine can be more than a rendering tool—it can be a live, integrated node in a real-time digital ecosystem.

In just a few days, we:

  • Prototyped a WebSocket-based sync layer connecting UE with a web UI (Google Maps) and .NET backend using MassTransit + AWS SQS.

  • Used Cesium for UE to build a 3D twin of the real world.

  • Demonstrated two-way communication, not just visualisation—allowing interactions from and to UE.

  • Skipped auth (for now) to focus on real-time viability and cross-system collaboration.

  • Explored how SpacetimeDB’s timewarp unlocks “experiential analytics”—revisiting moments in time spatially.

We also leaned into a cross-disciplinary team model, where engineering and technical artistry collaborated closely—proof that diverse perspectives create richer solutions.

This wasn’t about shipping production code. It was about momentum toward something bigger – BRAID-like 🤓, if you will.


Foundations are everything…

At Neon Light HQ here in Sydney, we recently ran a focused internal hackathon aimed at solving a deceptively simple but expansive problem: how do you synchronise Unreal Engine (UE) with other platforms in real-time – and in a way that’s extensible, scalable, and meaningful beyond the confines of game development?

The goal was to prototype a WebSocket service that could shuttle data back and forth between UE and external interfaces, making UE not just a rendering endpoint, but a participant in a broader digital ecosystem.

We landed on a three-tiered architecture:

  • A .NET Core backend acting as an intermediary layer, built using MassTransit for message orchestration and AWS SQS for queueing and fan-out

  • A React web interface that displayed contextual overlays via Google Maps.

  • A Cesium for UE setup rendering a rich 3D digital twin of the real world (this is almost trivial these days – so big thank you to the team over at Cesium).

The intent? If a user clicks something in the web interface, a WebSocket event fires to UE. UE responds with spatial context or 3D metadata. And just as crucially, if UE detects a spatial interaction (e.g. an object selected in-world), that event fans out to web dashboards, logs, and notification systems.
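The round-trip above can be sketched as a small event router. The names and shapes below (`SyncEvent`, `makeRouter`) are illustrative assumptions, not the hackathon's actual implementation – the real path runs through the .NET/MassTransit layer – but they capture the core rule: an event fans out to every participant except the system that emitted it.

```typescript
// Hypothetical event envelope for the web <-> UE sync layer.
type SyncEvent = {
  source: "web" | "ue";          // which system emitted the event
  kind: "select" | "focus" | "warp";
  payload: { lat: number; lng: number; objectId?: string };
  timestamp: number;             // epoch millis; useful later for timewarp queries
};

type Handler = (e: SyncEvent) => void;

// Deliver each event to every subscriber except its originator, so a web
// click reaches UE and a UE interaction reaches the web dashboards.
function makeRouter() {
  const subscribers: { source: SyncEvent["source"]; handler: Handler }[] = [];
  return {
    subscribe(source: SyncEvent["source"], handler: Handler) {
      subscribers.push({ source, handler });
    },
    dispatch(e: SyncEvent) {
      for (const s of subscribers) {
        if (s.source !== e.source) s.handler(e); // never echo back to the sender
      }
    },
  };
}
```

In the prototype the `dispatch` step is effectively what the WebSocket service does, with the .NET backend sitting between the two subscriber pools.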

This may sound modest, but the core principle flips a common pattern on its head. Most integrations (Bentley Systems, for instance) are read-only. Data flows into the visual system but not out. We’re proving that the loop can – and should – close.

It’s not that these systems lack the capability; the desire just hasn’t been there – until now.


Why It Matters

Most of today’s visual-based workloads – spreadsheets, reports, PDFs – exist in ecosystems that sit around spatial engines, not within them. And while tools like UE Datasmith help ingest content into Unreal, they don’t help facilitate collaboration or insight generation from inside the experience – realistically, that’s not what Datasmith or UE was designed to do.

We believe that real value comes when spatial platforms become expressive interfaces—not just canvases.

Think: stakeholder walkthroughs that generate insights, not just impressions. Engineers observing user focus patterns. Designers iterating based on behaviour, not assumptions. Expand this use case through to future governance and the digital twin interface and you’ll see where our team’s collective minds are travelling toward.

This is why one of our next moves is incorporating SpacetimeDB, a time-aware database that unlocks ‘timewarp’ capabilities. Users and systems will be able to query what was happening, who was there, and what was seen at any point in the spatial timeline. It’s experiential analytics without the friction—impressions captured passively, insight drawn actively.
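To illustrate what a timewarp query does conceptually – this is a generic sketch, not SpacetimeDB’s actual API, and the `Snapshot`/`stateAt` names are hypothetical – replaying a recorded timeline up to a chosen moment yields the spatial state at that moment:

```typescript
// One passively captured observation in the spatial timeline.
type Snapshot = { t: number; userId: string; lookingAt: string };

// Answer "what was every user seeing at time t?" by taking, per user,
// the most recent snapshot with timestamp <= t. Assumes the timeline is
// sorted by t, as an append-only event log would be.
function stateAt(timeline: Snapshot[], t: number): Map<string, string> {
  const state = new Map<string, string>();
  for (const s of timeline) {
    if (s.t <= t) state.set(s.userId, s.lookingAt); // later entries overwrite earlier ones
  }
  return state;
}
```

The point of the sketch is that insight is drawn *after* the fact from impressions captured passively, rather than by interrupting the user mid-experience.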

In the video, we see the world coordinate data (Latitude and Longitude) being synchronised from UE through to Google Maps. Towards the end, the ability to “warp” to different locations is captured as well – enabling new interaction paradigms not possible in previous experiential delivery.

On Security, Teamwork, and Realities

In the interest of velocity, we excluded an authentication layer. Not because it’s not important – it is, especially for security and multi-tenant setups – but because the goal of this hackathon wasn’t polish, it was potential. We know auth is a critical next step for any real-world deployment.

Equally critical to the hackathon’s success was our interdisciplinary team. Neon Light’s DNA isn’t just code; it’s artistry, engineering, experience, and storytelling. In this sprint, we saw technical artists collaborate with engineers, ops folks challenge assumptions, and designers stretch the boundaries of what the toolset was originally built for. That shared intent – the idea that collaboration is the actual outcome – was more valuable than any single feature we shipped.

Looking Ahead

As we step back from this experiment, it’s clear we’ve only scratched the surface. The addition of pub/sub queues – using MassTransit via C# and .NET with AWS SQS – enables a fan-out approach to event handling, allowing decoupled services like notifications, AI processes, reporting, and data-lakes to react asynchronously to system activity. This de-centralised approach offers scalable pathways to expand workloads and capabilities without overloading the core systems.
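The fan-out principle can be shown with a minimal in-memory topic – a stand-in for the MassTransit + SQS layer, with hypothetical names, not the production wiring – where one published event reaches each decoupled consumer independently, and a failure in one consumer does not block delivery to the others:

```typescript
type Consumer<T> = (msg: T) => void;

// Minimal fan-out topic: every subscribed consumer receives every message.
class Topic<T> {
  private consumers: Consumer<T>[] = [];

  subscribe(c: Consumer<T>) {
    this.consumers.push(c);
  }

  publish(msg: T) {
    for (const c of this.consumers) {
      // Isolate consumers from each other; a real queue would route the
      // failed delivery to a dead-letter queue and retry.
      try {
        c(msg);
      } catch {
        /* swallowed here; dead-lettered in production */
      }
    }
  }
}
```

In the actual architecture the consumers would be the notification, AI-processing, reporting and data-lake services reacting asynchronously to the same event.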

Equally compelling is the opportunity presented by SpacetimeDB’s timewarp feature. It introduces a novel concept in experiential analytics: the ability to revisit specific moments in a shared 3D environment and extract insights without interrupting or distorting the original user experience. Imagine stakeholders being able to explore what was viewed, when, why, and for how long – without intrusive data capture or forced interactions. It’s a subtle but powerful shift: analytics that respect the flow of experience while enabling deep reflection later.


This prototype, while small in scope, is a key step toward a broader connected ecosystem. While we held off implementing authentication for now – given the short hackathon window – we fully recognise its role in enabling secure and scalable infrastructure for real-world deployment. Similarly, the discussions and cross-domain collaboration that fuelled this build are just as important as the technical outputs. The blending of software engineering and technical artistry created a feedback loop of ideas that shaped not only what we built, but why we built it.

We’re treating this not as a standalone exercise, but as a foundational thread in a broader tapestry – one that will weave into larger platform ambitions. This includes improved interoperability, new collaborative workflows, and real-time digital experiences that extend across disciplines and industries. As we refine these concepts and begin incorporating persistent state layers, we’re opening the door to meaningful partnerships, scalable implementations, and new ways of engaging with the digital world.
