AI hyperbole: Complexity and Destruction

Martijn Dekker
Feb 15, 2024

This blog is a bit of hyperbole. But it is a train of thought that occupies my mind sometimes.

The news is filled with the latest developments in artificial intelligence (AI). In particular, the release of ChatGPT and other generative AI models has accelerated the adoption of AI, with promises of ease of use (thanks to the natural-language interface, for example) and higher productivity.

I do believe we are entering a new and exciting phase in AI. The current state of AI seems able, at long last, to remove the tedium from our work. Of course, this was always the promise of IT, but instead IT has created a lot of new tedium (managing your email, for example). The new AI models seem able at least to free us from the tedium IT created, but probably (and hopefully) more than that.

AI will free up a lot of human capital. By doing so, it will lower the cost of what humankind is doing today and free up time to do new things tomorrow. I am curious to see what we will do with that time.

But there is another effect as well. I do believe that AI will lead humankind to add complexity and speed to our world. And without guardrails, this will lead to destruction.

Complexity

Complexity increases because AI reduces the cost of complexity. We know that making something cheaper increases its consumption. It is a simple consequence of the free market.

As AI is a form of automation, it obviously accelerates the execution of tasks. With AI operating in an unbounded environment (a free market economy, for example), we are thus creating a faster and more complex world.

AI reduces the cost of complexity while increasing speed at the same time

The value of the lower cost of complexity will not materialize as an increase in happiness or wellbeing; it will be used to create more complexity instead, simply because that has become affordable and feasible.

An interesting analogy can be seen in the energy sector. The growth of green energy production (such as solar and wind energy) has not resulted in using less fossil fuel. Instead, it has resulted in us using more energy overall, because the supply has grown. It takes an external and active force to reduce fossil fuel consumption: regulation. And this triggers all kinds of social, ethical, and political dilemmas, as forcing out fossil fuels creates stranding risks for the global south, for example (because that is where most fossil fuel reserves are). It requires mature and global leadership.

Speed

The adoption of AI will raise the heart rate of our economy and reduce time to market. This applies not only to new products, but also to new threats and risks. Our highly connected world is already struggling with threats and risks and the speed at which they propagate through this network. AI will make managing them harder. The result will be that there is no option but to use AI to manage or counter these risks and threats. There will be no way back. Once AI is adopted, it can only be adopted more.

AI will accelerate risk migration and then lock us in

As a security professional, I worry about this. We know that complexity and speed are the enemies of security. Securing the AI-powered world will require much better security controls and increased situational awareness to make better and faster security decisions. These will take time to develop and to build. But in an unbounded environment, new products will not wait for them. We will see new AI-powered systems put into large-scale use without proper security in place. We have always done that.

The Pacing Problem and Destruction

When you move fast, you had better have a good brake and a good steering wheel. In our society, we have only one mechanism that supplies these: regulation. We need regulation to control the direction of travel of AI. We need mechanisms to enforce the secure adoption of AI, because there is no reverse. The good news is that lawmakers (in the EU and in the USA) are busy building a large body of regulation to try to ensure AI is applied in ways that protect human rights (like privacy) and other (European) values. As always, they are struggling with the pacing problem: regulation lags behind technological development. Policy making has always been slow, and this is an even bigger problem in an AI-accelerated world.

I hope humankind will somehow apply mature and global leadership and is able to create boundaries in the new AI-driven society that let us safely operate and govern it while maintaining our human values. If we let AI run free in an unbounded context, we might find that the only way to run such a new AI-driven society safely is to abandon our human values and implement security controls based on total surveillance, falling back on security decisions based on perfect and complete information. This would destroy all privacy, for example. It might be the only way to survive.

The fact that we have never seen alien life can be an indicator that once a life form reaches the capability to leave its planet, it self-destructs

Am I optimistic about humankind being able to define such boundaries and implement them in time, beating the pacing problem? I do not know. But the fact that we have never seen any alien life form anywhere in the universe might be an indicator that once life achieves the capability of leaving its planet, it self-destructs.

But maybe we will be the exception.


Martijn Dekker

Martijn has a PhD in pure mathematics and is a top executive, scientist, and CISO with more than 25 years of experience pushing the limits of information security.