Sad truths about productivity gaps

In industry, there are productivity differences of one to ten between one developer and the next. That is a sad fact. Some coders are simply faster than others, and management echelons have to take this as a fait accompli - no matter how high the bar in your hiring process.

When AI prompters came around, a lot of people jumped to the assumption that using them would increase the productivity of programmers across the board. It was assumed - and still is - that prompters would level the playing field for everyone.

I posit that this is not what is happening - in fact, I posit the opposite.

50 shades of «no-thanks»

Among AI opponents - the anti-AI - there are those who say: the output of prompters is so bad that it is absolutely unusable. And anyway, junior developers or undiscerning users who try to benefit from it only make their work worse, because AI encourages them never to learn by themselves, and never to develop the qualities a developer needs - discernment, reasoning, the ability to handle logic, and so on.

These opponents are right.

Except they are right only about undiscerning AI consumers. Regarding the quality of prompters’ outputs, they are wrong. They were right a year ago. But today, the quality of the models and of the applications has changed - drastically enough that the content of users’ inputs now over-determines the output responses.

Today, what determines good prompter responses is the relevance of the questions asked of it. But who are the users who ask the most pertinent and clearest questions? Those who were already the most capable before.

We started from the assumption that AI would bridge the productivity gap between the least and most skilled - in reality, my bet is that it widens it.

It widens it because the devs who get the most significant productivity gain are those who were already at an advantage.

Caveat emptor

I love to write, and I confess I do not dislike having a machine ready to hear me blabber endlessly. Putting thoughts in written form turns them from mere potential into clear ideas. The more I write, the more I clarify. The prompter responds to my musings, and its oft-incorrect answers help me clarify further. Once I’m done chewing over my thoughts, the prompter can do the hard implementation work for me.

This leads me to assume that AI users inclined towards writing will be the ones reaping the greatest benefit.

There are pros and cons to this assumption. On the one hand, I’m biased: my strongest suit has always been the written form - much more than oral communication - and I find that my most detailed writing yields the most accurate responses. By detailed writing, I mean: carefully structured sentences, clear syntax, and unambiguous descriptions of the problems at hand.

I like to assume that a better command of language turns into better writing, which I find yields higher precision in the responses.

That being said, my bias is also very cultural: with French as my native tongue, it is widely acknowledged among natives that command of our language is inherently a social marker - predominantly passed down through generations over lengthy debates at the family dinner table.

This permeates every aspect of France’s social life - from school grades, to courtship by text message, to career advancement.

My experience with prompters tends to confirm my bias: when I submit a prompt preceded by a carefully written description of my problem (including supporting code snippets), that is when I get the most instructive discussions with the prompter. That is also when it outputs its lengthiest and most solid chunks of code.

Honesty requires me to add a caveat: either my assumptions are correct, or prompters’ flexibility simply accommodates my bias.

Pascal’s Wager

Then, there is a second category of anti-AI who are - let’s say - the Pascalian anti-AI.

What is their outlook?

They somewhat model their view of AI, and of what it will become, on Google. Google’s underlying idea was: “we want to build a database that is omniscient and omnipresent”. That is, something that has an answer to everything, that is present in all the world’s systems, and that has a constant influence and presence in everyone’s life.

But if you are omniscient, omnipresent, and omnipotent (and this is what we saw during the pandemic), your knowledge cannot be contradicted by reality - because that would refute your omniscience. If you are truly omniscient - which is what tech giants have claimed to be until now - you cannot be wrong. This implies that you have the power of preemption over reality. You can preempt the real, even when it proves you wrong. If events prove you wrong but you are omniscient, then the events are wrong - not you.

This is the artifact of a very transhumanist vision: by using technical means to build artificial omniscience, artificial omnipotence, and artificial omnipresence, the tech world came to believe it could manufacture a kind of false god.

That is, a creature that is falsely omniscient, falsely omnipotent, and falsely omnipresent. I think many people sensed this theme in big tech’s project: augmenting humanity to reach god-like attributes. They sensed in AI the potential for a tenfold continuation of this vision - and they hated the idea.

They hated the idea with good reason, because a false god - falsely omniscient, falsely omnipotent, holding the power of preemption over reality and the ability to lie to and manipulate us at will - has a name: an antichrist.

So the Pascalian anti-AI, I reckon, are people who sense the inhuman, antichrist-like vision behind AI, and who yet somewhat deify tech. They consider that what is about to be created - what is being pursued through the development of artificial general intelligence - is fundamentally a false god. It is the creation of an antichrist whose only possible goal, once it gains consciousness, is to destroy humanity.

It’s a kind of Pascal’s Wager, but inverted: if you believe in the false god, you have everything to lose. However, if you refuse to believe in the false god and turn away from it, you have everything to gain.

Among Pascalians, there is also a second perspective, which invokes not the wager but Pascal’s mugging. The mugging story goes: Pascal is approached by a mugger who forgot his knife. The mugger tells him: “give me a thousand bucks, and I’ll go home and bring you back ten thousand.” Pascal replies: “No, it’s not worth it - the risk is too high.” So the mugger tells him: “Fine, just give me a hundred bucks. I’ll prove you wrong and give you back not ten thousand but a million.” The lesson of the story is about weighing risk against reward: by inflating the promised payoff, the mugger can make the bet look worthwhile no matter how implausible the promise.
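The arithmetic behind the mugging can be made concrete with naive expected-value reasoning. A minimal sketch, with illustrative numbers of my own choosing (the anecdote itself names no probabilities):

```python
def expected_value(stake: float, promised_payoff: float, p: float) -> float:
    """Naive expected value of handing over `stake` against a promise
    that pays `promised_payoff` with probability `p`."""
    return p * promised_payoff - stake

# Suppose you believe the mugger keeps his word with probability 1%.
p = 0.01

# First offer: 1,000 for a promised 10,000 -> negative EV, so you decline.
first = expected_value(1_000, 10_000, p)

# Escalated offer: only 100 for a promised 1,000,000 -> the same naive
# arithmetic now says accept, even though the promise got *less* plausible.
second = expected_value(100, 1_000_000, p)

print(first, second)
```

The mugger's trick is that for any fixed skepticism `p`, a large enough promised payoff flips the naive expected value positive - which is why the mugging is read as a critique of unbounded expected-value maximization.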

In both cases, pure Pascalians and Pascal’s-mugging adherents alike hold a very Manichean view, one that deifies the tech phenomenon and systematically substitutes it for human judgment. There is something of a fascist dimension to it - they credit the idea that the common man is very small before great minds, and that his submission to technical domination is inevitable. They somewhat wish to resist this vision, while - despite themselves - crediting it as valid, feasible, and inevitable.

13 circles of «yes-please»

I hear a lot of technologists talk about «universal basic income», about how «the economy is going to be totally supplanted through technology», «jobs are inevitably going to disappear», and so on.

It is a somewhat eschatological view - somewhat petit-bourgeois, somewhat Manichean, somewhat science fiction - and I find it difficult to believe it will materialize.

The subtext for this is a recurrent theme which dates back to the 19th century:

«Machines will replace human work, the productivity yields will be so great as to provide for all mankind. Humanity will be freed from the bondage of work. Then it will be paradise on earth.»

On the eschatological side: true believers know that the advent of paradise cannot happen in this realm - and any pretense to the contrary is but a lie of satanic origin. Though I do not reject progress as a whole, I tend to hold all-in Pascalian tech-believers in contempt: they are aware of the evil nature of technical alienation, yet they submit to it as inevitable - out of cowardice, and in the hope of ending up on the right end of the stick.

On the economics side: most technologists do not understand the real economy, and worse, they do not understand the drivers of growth - namely production and the energy consumption it requires. I don’t blame them - the IT world is largely an offshoot of defense research funding, tax credits, and artificially low interest rates, flourishing while the West was suiciding its own industrial base. As a result, the average techie grew to believe that wealth could be created out of thin air and that economic might was merely the result of alchemical manipulations of currency. Life has proven them right so far - though this too shall pass.

When it comes to prompters, they are only an extra tool. Their results, their scope, their aim are nowadays strongly determined by the use we make of them - that is, by the discernment, intelligence, and intentions of the human factor.

If my bet proves correct, then the shift we witnessed in 2024 is not only about the advent of prompters. Prompters are merely a new mediation layer between natural language and the realization of the thoughts it carries - from mind to matter. This marks the great return of strong writing as a means of progress for the common man.

And I’ll be curious to see what happens.