The only chance of preparing for what the world will look like 15 years from now is immersing yourself in hard science-fiction media. If you are extrapolating from what's come before, from the various political brouhaha consuming your feeds, and not tearing through countless Orion's Arm Universe Project articles on stuff like aioids or contemplating thalience from the Karl Schroeder novel Ventus, every step that's coming in short order will bewilder you and escape your comprehension. There have been glimmers of this on the horizon; much in the same way you spot a flash of a reflection coming from a mountain tunnel some miles down the line, and not ten seconds later a maglev bullet train seating five hundred screams past in a brief moment.


ROLLING

I'm chatting with a friend via Discord, let's call him Blue. The topic we orbit tends to be artificial intelligence. Could be AI safety, state actor misuse concerns, the unnerving increase in capabilities of models that just a few years prior could barely stick together a basic static website. It's no different today.

The latest piece of news Blue shares is that 80% of company execs can't point to productivity gains from adopting AI. This does not surprise me.

Any number of faults could be at play: you're given some default corporate slop tool and it's talked about at the company meeting. You're not sure what you're meant to do with it. Some coworkers decide it's a great way to automate communicating: "Hey ChatGPT, can you summarise this", "ChatGPT, write this email for me".

In these cases productivity doesn't go up, because the point for these users is to lighten their own workload. Some get a bit more confident with how much they can hand off and start treading on others' toes with cross-disciplinary work, but without the experienced taste and care to avoid deluging those more senior with sloppy half-work that steals their attention as they try to parse and integrate whatever the hell was sharted out. For others it's a case of you-don't-know-what-you-don't-know: despite some cursory research on how to use the tool more effectively, the spec they write is woefully underspecified, they're unable to spot a bad design decision the model made, and they sink into a "no, still doesn't work, fix it" loop that is shaped like productivity but contains none.

Meanwhile economic conditions globally look quite wonky, not helped by the vague longer-term malaise felt by many that living standards and work/life balance have not kept up. Being apprehensive, even resentful, about the intrusion of these AI tools makes perfect sense when what an increase in productivity probably looks like is a downsizing of staff.

There are a handful, however, who have taken up the cause: keeping an eye on the latest developments (which may still be three months out of date!) and setting about designing and building further tooling, workflows, and systems for companies and research organisations. Within that group there's a wide spectrum of capability.

However.

Humans are pro-social creatures. Typically they do not wish misfortune on those around them, with whom they share chatter about their lives, plans after work, and cat photos. To consider that they might be harmed by your enthusiastic industriousness, no longer considered integral to the function of the organisation; or at the very least that those around you might look at you with distrust for being the agent that, through a few falling dominoes, delivers them an EVICTION - PAST DUE notice a year from now: that stills your hand. Don't be the tall blade of grass, kinda deal. This is a capability plague that spawns from the higher-ups, for why would you go above and beyond for the company if you or your comrades are disposable?

Blue tells me this is exactly the issue with his father's software business. Both of them are well aware of the freakishly competent frontier models released in the past couple of months, and when Claude Opus 4.6 is presented puttering away for long stretches on nothing but natural-language specs, producing cromulent, complex pieces of software, the employees are scared. One part is the all-too-common existential angst: what was a technical and complex competency unique to each human software developer is now replicated by matrix multiplications on unmoving silicon postage stamps. Those who have been at this work the longest see mid-level engineers accomplishing work at the same level as their own. Juniors stare at lines of classes as good as or better than their own flying past 30x faster than they could write them, their envisioned job security viscerally theoretical.

This is the worst the models will ever be.


MOVING

Hardly three weeks ago, a different friend, call him Green, bought multiple maxed-memory Framework Desktop motherboards. These are particularly useful for AI inference, as they use a strain of AMD processor with tight integration to regular system memory, allowing up to 75% of it to be given over to the remarkably powerful integrated graphics processor. While "scaling is all you need" has been the drum beaten for AI capability increases, the truth is that many models out today which are ridiculously more competent at complex tasks are far smaller than the original GPT-3.5 demoed to the world in the initial release of ChatGPT. There's been a lot of fascinating work done on microscopic LLMs that can run on one's phone or mid-tier laptop GPU, but much is unlocked by models that aren't as friendly to the memory limits of typical consumer hardware. With 96 gigabytes of memory you can run very capable open-source models that beat the pants off what was the best reasoning model in the world not even a year and a half ago.
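To make that concrete, here's a back-of-envelope sketch in Python. The quantisation widths and overheads are illustrative assumptions, not vendor specs, but they show why a 96 GB allocation comfortably holds model classes that no typical consumer GPU can.

```python
# Rough VRAM budget check: weights + KV cache + runtime overhead.
# All numbers are illustrative rules of thumb, not measurements.

def footprint_gb(params_b: float, bits_per_weight: float,
                 kv_cache_gb: float = 4.0, overhead_gb: float = 2.0) -> float:
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb + kv_cache_gb + overhead_gb

BUDGET_GB = 96  # what the Framework Desktop can hand its iGPU
for name, params_b in [("30B class", 30), ("70B class", 70), ("120B class", 120)]:
    need = footprint_gb(params_b, bits_per_weight=4.5)  # ~4-5 bit quantisation
    verdict = "fits" if need <= BUDGET_GB else "does not fit"
    print(f"{name}: ~{need:.0f} GB needed -> {verdict}")
```

A 24 GB consumer card taps out around the 30B class at these quants; the 96 GB allocation takes you well into the 100B+ range.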

I fawned over that hardware and said it would be a shame to keep it stashed just to flip in a few months, for if the boards were linked together they could each run segments of a much larger, near-frontier AI model, doing serious work that would be cost-prohibitive through bigcorp AI lab APIs. One idea I floated in our group chat back in 2024 was automated software repository scanning of cryptocurrency projects (tokens and/or tools): check whether they've got a bug bounty program, and if so, run deeper scourings of the codebase to spot vulnerabilities and (after manual human replication and confirmation) collect on them through reports to their respective developers. Green asks me if I think this is possible to do, and I'm extremely confident that it is. Soon after we're flicking through eBay listings and surplus datacenter hardware resellers for now-outdated-but-perfectly-adequate InfiniBand cards and switches. It is January 2026.
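The cheap first pass of that pipeline might look like the sketch below: a textual triage that spends no model tokens, flagging which checked-out repos even advertise a bounty before anything expensive runs. Everything here (the file names checked, the queue_for_deep_audit stub, the ./repos layout) is a hypothetical illustration, not what we actually built.

```python
# Triage pass: flag repos that advertise a bounty, queue them for the
# deeper (and far more expensive) agentic audit.
import re
from pathlib import Path

BOUNTY_HINTS = re.compile(r"bug bounty|security reward|responsible disclosure", re.I)
CANDIDATE_FILES = ["SECURITY.md", "README.md", "docs/security.md"]

def has_bounty_signal(repo_root: Path) -> bool:
    """Cheap text check before spending any model tokens."""
    for rel in CANDIDATE_FILES:
        f = repo_root / rel
        if f.is_file() and BOUNTY_HINTS.search(f.read_text(errors="ignore")):
            return True
    return False

def queue_for_deep_audit(repo_root: Path) -> None:
    print(f"queueing {repo_root} for deep audit")  # placeholder

# Assumes projects are already cloned under ./repos
for repo in Path("repos").iterdir():
    if repo.is_dir() and has_bounty_signal(repo):
        queue_for_deep_audit(repo)
```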

Some pieces of hardware for the FW cluster had been purchased when the first signs emerged that we were not, in fact, so early to this idea. cURL, the open-source project, closed its bug bounty program as it was getting a flood of slop bug reports from people who set up agentic vulnerability-check scripts but unwisely decided not to take responsibility for checking whether said vulnerabilities were phantoms, with whispers that bug bounties might go extinct as project contributors rolled their eyes at the deluge of bullshit that remaining open would have them suffer. On our end, the PCI-E risers that allow the PCI-E x8 IB network cards to interface with the x4 slots on the motherboards would take their sweet time shipping from China, over a month.

In the meantime I was using the Get Shit Done harness for Claude Code with Claude Opus 4.5, through SSH to a Framework Desktop that Green had loaded with CachyOS. It was powered up and standing on his cluttered dining room table while he and his wife were in the midst of preparing to move. He graciously bought the full Claude Max X20 subscription for US$200 so that there'd be no restriction from token-limit cooldowns. Even without the interconnects, this single computer would be enough to run something like Qwen Coder 30B. More than enough to keep on task scraping different projects' websites for mentions of bug bounties and how large they could be, and the various details of which blockchains, programming languages, and software libraries they each use. Agentic is going to be a word you'll hear a lot more of real soon. I use it enough that it has begun to annoy me too. But the process I was developing with Claude was to have this smaller model do these tasks in an unceasing agentic fashion, notifying me only if tool use broke or some loop kept going unproductively, up until there was a full database of valuable information for the Big Gun agent to comb through deeply.
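Stripped of the harness specifics, the control flow is simple. In this sketch, run_agent_step and notify_human are hypothetical stand-ins for the local model call and whatever alerting channel you prefer; the only real logic is the escalation rule from the paragraph above.

```python
# Unceasing agent loop: grind through tasks, escalate to a human only
# on tool breakage or a long stretch of unproductive attempts.
import time

MAX_ATTEMPTS = 50  # attempts on one task before flagging it as stuck

def run_agent_step(task: str) -> bool:
    """Ask the local model to advance the task; True once it's complete."""
    return False  # stub: wire this up to your local inference server

def notify_human(msg: str) -> None:
    print(f"[ALERT] {msg}")

def agent_loop(tasks: list[str]) -> None:
    for task in tasks:
        attempts = 0
        while attempts < MAX_ATTEMPTS:
            try:
                if run_agent_step(task):
                    break                  # task done, move to the next
                attempts += 1
            except Exception as e:         # tool use broke: surface it, move on
                notify_human(f"tool failure on {task!r}: {e}")
                break
            time.sleep(1)
        else:
            notify_human(f"unproductive loop on {task!r}")

agent_loop(["scrape bounty pages", "catalogue languages and chains"])
```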

By accident, a YouTube video shows up in my feed from Hex Security, a Y Combinator startup devoted to agentic AI security vulnerability testing and reporting. I thought we were early?


HAULING

Green just wants to grow his nest egg. Not for himself, but to bequeath to his child, so they may not fear losing a roof over their head or food in their belly, as the world has begun feeling less kind to those just getting started. One day as we're talking, he restates that wish, and that he hopes the proof-of-concept bears fruit so he can start a company and hire engineers.

I shut that idea down straight away. I write back to him that in merely a year, what's being done here will be so entirely common that there's no alpha in sticking with it.

Intuitively, people project what can happen linearly. Wow, things are this powerful now; in a little while it'll be twice as much, and we can get even more market share and grow the business! We can think about diversifying our offerings a bit, become better known in the industry...

That's not what is happening.

I am the lead (read: solo and pre-revenue) developer on a VR app called Dreamtime. As it is virtual reality, every part of the interface a user is exposed to is 3D, so naturally even if you're not making a game you're still gonna use some game development engine. In my case I chose Unreal Engine because as of late 2023 it seemed the most featured and easiest to use, plus Unity's executives had all had their drinks spiked with sugar of lead at some point and decided to burn all their trust. I was not a classically trained programmer, so the idea of using visual Blueprint nodes for chaining logic together was another point in favour of getting functionality into Dreamtime even sooner. I won't go into too much detail, but through a combination of design-pattern unfamiliarity, godawful UE forums and documentation, and getting stuck on bugs I could no longer comprehend from the quite-literal spaghetti logic between class and object references... I'd get bound up, stuck on some simple bits, for days or even weeks. Frustration turning into doubt, turning into depressive episodes.

One function I needed was a way to send data from the app over a websocket to a cloud service running a diffusion model, which would hand back the transformed image. The various UE Marketplace plugins I obtained didn't entirely fit what I needed, so at long last I ventured into Visual Studio to hack away at the Unreal-flavoured C++ code. Well, three days later the fuckin' thing still wouldn't work. So I go, what the heck, I'll write the whole spec for what I need to Claude Sonnet 3.5, the first very competitive software development model out of Anthropic.

The fuckin' thing one-shot it. About fifty lines of code, sticking well to the UE C++ conventions, and it even made a Blueprint node to stick in the various booleans and base64 plugs. Holy shit! I raved about this to anyone that didn't have the sense to put me on read. I got the first test images back from the RunPod instance running the diffusion process; everything worked flawlessly.
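For a sense of the protocol, here's the cloud half of that round-trip as a minimal sketch: a websocket server that accepts a base64-encoded frame, runs a transform, and sends the result back. It assumes a recent version of the third-party websockets package, and run_diffusion is a placeholder rather than the actual RunPod setup.

```python
# Minimal websocket endpoint: base64 image in, transformed base64 image out.
import asyncio
import base64
import websockets  # pip install websockets

def run_diffusion(image_bytes: bytes) -> bytes:
    return image_bytes  # placeholder for the real img2img call

async def handle(ws):
    async for message in ws:
        image = base64.b64decode(message)    # the app sends base64
        transformed = run_diffusion(image)   # heavy lifting here
        await ws.send(base64.b64encode(transformed).decode())

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```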

That was in mid-2024. Hey, did you see that flash of light in that tunnel over there?

By the start of 2025, I had put development of Dreamtime in stasis for months as I looked into more contemporary and easier-to-chew side projects to shore up dwindling savings. When DeepSeek R1 came out, I stuck their API key into Aider and managed to make a blog site in scarcely any time at all.

A "chain-of-thought" model like OpenAI's o1 but it didn't cost an arm-and-a-leg to iterate with. With $1.51 it was done, posted one article and then I set about neglecting it. Continuing on with more called on confidence that with my fledgling skills as a writer, I had a hard time summoning.

But R1 already smelled immensely more capable than Claude Sonnet 3.5. Soon everyone was putting out reasoning models. By the final three months of the year, I was developing phone apps in Kotlin and then Dart/Flutter, mostly vibecoded with the accompanying setbacks and frustrating error loops. First it was GPT-5, where I'd copy-paste back and forth from the ChatGPT window; then I moved to the command-line app Codex, realising I was a dumb schmuck for not going that route to begin with, as major unblocks were hit over and over thanks to the model's access to code testing.

I had tried Claude Code, but compared to GPT-5.1-codex it felt lacking for my uses outside of some front-end specialties. Then Anthropic dropped Claude Opus 4.5 in November, and I cancelled my ChatGPT sub a couple of days later.

The year ticked over. Though the counter went up just .1, Opus 4.6 in February slapped away whatever vague unspoken malaise I had about recommencing development on Dreamtime. Hell, with all the agentic test-fix-retest crap I had gotten to grips with, maybe the model was now good enough to comprehend how to build functionality in 3D space?

Well,

Using Godot Engine (an environment I had barely any experience with), within three days I had recreated all the months of work I'd manually crafted through blood, sweat, and tears, by feeding CC + Get Shit Done + Opus 4.6 the treasure trove of planning and documentation I had written for how the app would work. It ripped right through it all with minimal corrections. It even built stuff I never got around to implementing in Unreal Engine.
Fucking. What? Claude fucking did what??

I think I see something coming from the tunnel.


ACCELERATING

When Opus 4.6 was released, social media posts about deep, previously unsurfaced exploits unearthed by Claude took center stage.

Ah shit, okay. We're not early.

It is the 21st of February, 2026. The latest post I spot from Anthropic: they're integrating bug-finding and security testing as an explicit tab in Claude Code. This isn't some cobbled-together OSS prompt workflow from some rando, and it's not some cutting-edge but yet-unproven startup gearing up to take this task on. Anthropic is raising funds at a $350 billion valuation, and their revenue is on target to surpass OpenAI's, probably by 2027. I hope I can conjure up another "we're so early" plan so my friend's effort, patience, and money spent buying computer components were not wasted. Where would I start? If I thought we were early on agentic vulnerability scanning, is my logical progression from here reaching for something that I don't think is yet possible to do?

Picking up the conversation with Green where we left off, I made this analogy:

like one day you dig a pit, the next you dig a trench, the next you hire an excavator, and some days later you're taking a chunk out of the planet


What it is possible for an individual or very small team to do will look like this; I'm not sketching out this absurd anime power scaling for hyperbole. I could not outline to him what the intermediate steps were, but the "country of geniuses in a datacenter" line from Dario Amodei of Anthropic is instructive.

In ten-ish years, the assumption should be that what previously required a thousand postgrads with billions in funding, working for years on bleeding-edge discoveries and processes, will be available to you at 100,000x less cost, provided you know how to wield it.

You must study how to wield it. Study how to wield that which looks like nothing you've had before.

There are specialised ASICs for LLM inference with physically encoded weights that can run smaller models today at thousands or even tens of thousands of tokens per second. A page of text (or a comparable amount of code or text comprehension) done in tens of milliseconds; remember, that's just what's on offer today, when the entire AI investment and research frenzy is just three years old. Do not discount the possibility of this being pushed beyond a million tokens per second, embossed with the product of another series of training and architectural leaps. We're not even getting into cracking continuous learning, or hundred-million-token context windows, or whatever wacky crap upgrades them all into something at or beyond general human-level capability.
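The page-in-milliseconds claim is quick to sanity-check, assuming the usual rules of thumb of roughly 500 words to a page and 0.75 words per token (assumptions, not measurements):

```python
# Rough throughput arithmetic for the figures quoted above.
WORDS_PER_PAGE = 500    # assumption: a dense page of text
WORDS_PER_TOKEN = 0.75  # common rule of thumb for English text

tokens_per_page = WORDS_PER_PAGE / WORDS_PER_TOKEN  # ~667 tokens
for tok_per_s in (1_000, 10_000, 1_000_000):
    ms = tokens_per_page / tok_per_s * 1_000
    print(f"{tok_per_s:>9,} tok/s -> {ms:7.1f} ms per page")
```

At ten thousand tokens per second a page takes about 67 ms, squarely in the tens of milliseconds; at a million it drops below a single millisecond.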

But what to make of this? Where do we rest our feet in these rippling, shifting sands of chromed synthetic intellect? I don't mean humanity vs. machine, really. I reckon they'll generally be the best of us, even in their conscience and care. But while technologies in themselves are value-neutral, their use by all the different factions and competing interests has the regrettable consequence of not feeling so.


SPEEDING

Blue's father is aware of what's approaching. On one hand, he's increasingly pressuring the employees under his command to adopt more frontier AI tools and workflows. On the other, he's quietly shopping around for a buyer to take the company.

As it stands, our current technoeconomic configuration for these advancements is not kind to the obsolete. If the saying is that a rising tide lifts all boats, in the case of employment opportunities it looks more like wearing concrete shoes.


Juniors first, developing now, likely mid-career and seniors by 2030. This is not limited to individuals. Whole companies (and potentially countries) are next. The joke that "when OpenAI announces an update they destroy a thousand startups" will become increasingly true, just with the proper nouns and quantities cycled out over time. What magnitude of result you can wring out of the computer-egghead-country depends wholly on how prepared you are to rise to that challenge. I believe that's a big clue for what to follow.

Reaching for the eject lever, cashing out, or going off-grid prepper here is, in my opinion, the wrong move. More precisely, that mindset closes down active pathfinding and planning in service of avoiding risk; it is therefore intrinsically less safe in a radically changing era. Remember what I wrote in the beginning: forecasting based on assumptions of somewhat reconfigured political and societal realities of the mid-2020s means you will have no idea what the everliving fuck is happening. Simultaneously, this approach means your social group atomises. Those who were valued coworkers and acquaintances leave, either on their own or discarded, to recede into the veil of memory. Each individual from secretary to boss now looks at it all through a lens of self-preservation, thanks to a wacked-out atrophying job market with all its downstream societal train wreck, while sclerotic bureaucracies bicker about dumb shit during pivotal moments when there should be official commissions studying negative income tax or universal basic income, finalising their rollouts for public support. Disconnecting and shielding yourself in this reality means becoming inert, and to be inert means you have no agency over what direction you go.

Throughout recorded history there have been ebbs and flows in how important it is for one's tribe to stick together. It is because we are pro-social beings that we've accomplished all that we have, from surviving near-extinction in the climate shock of the Toba supervolcano eruption, to the first agricultural settlements, to the trust of commerce through double-entry bookkeeping and contractual obligations, right up to the supra-organisms we call nations that managed to throw some of our best out against the mammoth pull of the whole planet in the name of peace and progress.
In uncertain, unwelcoming times this grander orchestration of different parts becomes less trusted, but the bonds with those close to us strengthen. There's going to be a very bright future for terragen civilizations on the other end of all this (including baseline Homo sapiens), but the interim period here is the churn.

If you have a tribe already, don't abandon it.
If you aren't in one, make one.

The approach that must be taken is a wartime footing. Study and drill down into the latest developments in agentic workflows, as well as traditional human administration and orchestration for very large organisations. I'm very aware this sounds like crackhead techbro schizoposting, but frankly I was here ranting about automation unemployment fifteen years ago; what I'm laying out here supersedes the culture-war infighting horseshit.

If there are people under you, you are entrusted with their care. If existing contracts don't ensure their economic security, tear them up and start afresh, written in no uncertain language, guaranteeing they will be taken care of for as long as the finances are able. You do not have the luxury of hominid loss aversion here. Study communication and charisma. Read biographies of great leaders and musterers of the people. You have to lay out exactly how explosively scrambled all this shit will likely become, and if you've made missteps in how you've treated them over the years, apologise and implement a rigid system that doesn't let you walk back that sincerity. You are now one unit, on a sailing craft heading into dark stormy waters. The utmost focus is scaling up the competencies of every member regardless of current skill level, with all of you looking out for one another. The goal is to become masters of this bizarre fast-forward blooming of capability such that you cannot be trivially disrupted by a tech megacorporation muscling in on the field while getting high off their own compute supply.

Consider here, even if you were to fail: through this you will all have been forged in the crucible of adversity, with shared purpose and commitment to each other.
The train is roaring past, but now you have each other to rely on.
This will never be less precious than rapidly devaluing money.


----