Unashamedly Accelerationist

There’s nothing quite like the impending threat of war and the end of humanity as we know it to start your day. During my 40-minute commute to work on the days I actually go to campus, I listen to podcasts to still my racing mind and warm up the old cognitive functions. I usually start off with ABC News; a fifteen-minute insight into one particular news story (today was Iran weighing in on the Israel-Palestine conflict after the execution of a Hezbollah leader), before moving onto some longer-format content.
A usual favourite is “The Futurists”, a podcast by futurists Brett King and Rob Tercek. Today’s discussion was on synthetic biology and its use in the creation of proteins, DNA and other parts of the endlessly fascinating biological machinery. My father is a biochemist and microbiologist, so I absorbed (osmosed?) quite a bit about biology growing up — in between kitchen table math lessons that often ended in snot bubbles and frustration. (Learning from your dad: character-building or emotional scarring? You decide.) While this has resulted in an inherent trust in the “sniff test” over best-before dates for food, I realised today that it also gave me an insight into a field that many people see as full of Frankenstein’s monsters and end-of-the-world scenarios.
During The Futurists’ discussion with Andrew Hessel on DNA sequencing and the micron-level machinery of synthetic biology, they noted that the easiest things to make are viruses, as they are relatively simple and a fabulous mechanism for injecting code (DNA) directly into cells. Oh the horror! I hear you scream, but no, this is a tech we have used for many years, and in nature it is the underlying mechanism for everything from the common cold (apparently our immune system’s training buddy) to more impactful viruses (viri?) such as the now all-too-familiar Covid19.
The conversation turned to how we create these sets of technologies but have relatively little defence against them when they get out of control (not saying Covid19, but… Covid19), which set my mind thinking about the concept of “red-teaming” technology, a process where white hat hackers attack a system to discover its flaws, defects, and vulnerabilities so they may be patched and mitigated before the black hats get there first.
This, in true tangential fashion (and because, why stop at one existential threat before breakfast?), led the conversation from viruses to another area that keeps me up at night: AI.
The recent OpenAI GPT-4o system card is fascinating because, if you read between the lines, it lays out OpenAI’s attempts to keep this terrifying tool out of the hands of misinformation peddlers, malicious code generators, scammers, spammers, and general evil-genius-world-domination-style bad actors.
This in turn led me to reflect upon, and now write about (in a somewhat excessively verbose and tangential manner), my general accelerationist outlook and my “fuck-it-let’s-fuck-around-and-find-out” attitude.
Having moved into academia mid-life, I note the slow and cautious nature of my organisation and my lovely colleagues, and I contrast this with my own ADHD- and pseudo-amphetamine-fuelled (prescribed!) approach of rapid testing, iteration-based discovery, and trial and error. While it has led to several near-death experiences and the notable loss of a few eyebrows, this approach has served me very well.
When it comes to AI, I’m all in to see what happens when we use and abuse the system to see what breaks and why, especially in tertiary education, where the general consensus is to shore up the dam wall by sticking our fingers in the cracks and pretending that the problem doesn’t really exist. Artificial general intelligence? Yes, please! Super-intelligent agentic AI systems? Sounds good, take my money. While I see the need for slow and considered appraisal of new technologies, methods and systems, I’ve come to realise that this is just not me. I want the latest and greatest and am happy to take the fall when our tests go wrong as a learning experience. Now obviously, I’m not saying “let’s hand the aeroplanes over to random untested AIs and see if they stick the landing”, but I am for pushing ahead with technologies and prototypes to see what does and doesn’t work. This is especially true for using AI in academia for writing, productivity, image generation, and learning.
I am discovering in my recent experimentation with AI for qualitative coding (which lets me do weeks’ worth of work in a caffeine-fuelled 12-hour rampage of analysis), report writing, and referencing that there is an attitude among the non-early-adopters of AI that any writing or analysis using AI falls under the “well, you didn’t really write it” category and therefore has less value than if I’d slaved for weeks over it. There’s this weird stigma floating around: ‘If AI helps, did you really do the work?’ My answer? Hell yes, I did. I’ve done the hard yards with my PhD, I’ve done the re-writing-a-bazillion-words-to-craft-the-perfect-narrative thing, and now I have a tool that lets me speed up my process and work at a speed that feels right and natural. So why should this feel like “cheating”? I’m completing tasks more quickly while stealing back HOURS of productive time. I’ve always felt there are not enough hours in the day, and suddenly these tools are giving me some of those hours, which, when you spend half your life in ADHD task paralysis, is a bloody godsend.
Now, back to the point, as I’m starting to ramble: while listening to a synthetic biologist push forward with their amazing work understanding the most advanced manufacturing mechanisms on earth, I realised that I am all for pushing all the boundaries, all the time, to see where it leads. When something is pushed beyond its limits, it tends to break, or break something around it, and this can be fixed or upgraded to take the pressure.
While there is a case that creating technology with this attitude may, in outlier scenarios, end the human race through robot uprising, Covid24, or thermonuclear winter, or whatever, these challenges to our boundaries provoke conversation, thought and action, and are an essential part of our progress.
Given our current environment of late-stage capitalism, the death of democracy (ahem, Trump), wealth accretion in the top 1%, and the decline of the environment, I’d say we’ve already cocked things up pretty badly, and it might just be time to push ahead and see if we can’t make some breakthroughs by upsetting the apple cart and seeing what falls out.
In the end, technologies like synthetic biology, quantum computing, and AI are reshaping the very fabric of society. And while we still need the cautionists, ethicists and careful thinkers — the ones saying, ‘Wait, maybe don’t press the “Deploy Skynet” button’ — that’s just not me. I’m coming out as a proud and unashamed Accelerationist, because if we don’t push the boundaries, we’ll never know how far we can go.
