The damned hyperbole, the damnable smugness, and the profiteering of the Artificial Intelligence cheerleaders in their pathetic attempts to sort out our lives make for a gripping tale of little, littler, and littlest horrors…
Cutting through the Gordian knot of all the newness, the novelty hype, the conflicting interpretations, the flood of hysteria, and the frenzy of fear-mongering about AI makes this another déjà vu moment of comical reality.
And when it is known that the founders and the investors of AI are attempting to create a movement based on fear, we know who profits from these silly yet catchy viruses of imagination. Billy G., Elon M., and the Googlers, along with the Microserfs and all the other hapless Geeks smoking the “peace pipe” all day long, as they fantasize about what new horrors to unleash on Humanity. Unsexed geeks all the way down, and all the way up.
Turtles all the way up and down, and as above, so below.
Remember Y2K?
Bill Gates’ viruses unleashed by Microsoft to sell new versions of their awful software just to fix the back doors and the security viruses they had created and spread in the first place?
Look at the smart morons of today who are making money from your gullible, enthusiastic, and largely prevailing attitude towards ChatGPT and the leading class of newly minted AI products finessed by prepubescent CEOs, who claim that the “sky is caving in” like Chicken Little, in a sad masked theatre chorus stolen from an Ancient Greek drama and performed in front of your very eyes.
Only — you can’t see the shitty play, because you are being played and you don’t even know it.
Why else do you think that the founders of the leading AI companies continually remind us of the fearsome risk of AI-Robots overtaking our world and diminishing our lives?
These “Chicken Little” mouthpieces of the weed-induced stupor are stoning away while spouting nonsense about the imminent demise of human civilization, and the awful existential risk of using AI to write a Zen koan…
And maybe that’s the question we need to ask those cheerleaders of AI, at the moment when they enjoy their Doritos as they wake up from being wasted in a cloud of pot smoke.
Yet, because 420 gives access to Kush for all of America’s frat boys, they are causing a stampede of blind rage, where all the young truants go around hyping AI even more than the instigators who yelled “Fire!” in a crowded theatre of people suffering from the mental anguish of the interminable lockdowns, the Covid scare, the vaccine dementia, and their ilk.
War, horrors, famines, catastrophes, climate change, genocides, and extreme poverty in our country and all around the world are taking a back seat to this ultimate first-world non-problem, so that the avocado boys can make some more moolah to spend on expensive weed.
What a crock of horseshit…
Yes, we all know that these first few iterations of AI are problematic in their technological applications, because, like every new innovation, they are inherently destabilizing across many industries and many types of employment, and their effects are accelerating daily…
I get that and so do you.
Yet in my humble opinion, and also according to the tech firms that provide AI, the solution to the problems presented by ever-increasing AI power is to use more AI to cure the AI, which in turn needs more serfs to be streamlined, informed, and controlled.
That is the same paradox that drug makers, drug dealers, and even the all-too-willing arms dealers rely on in order to spur sales.
Much has been written about the AI arms race between Microsoft/OpenAI and Google, and also about the AI arms race between adversarial nations, but it’s all a bunch of baloney.
Because much less has been said, about the fact that the adoption of AI by one company in an industry will eventually cause every competitor to adopt AI, thereby replicating the pattern of escalation across every industry, every job function, in every part of the world, all at the same time.
This is not happening right now, because I do not think we are ready for this kind of rapid destabilization. And yet I also do not see a way for any one company or government to start, maintain, or even stop the downward spiral from sucking everything down the drain, simply because everyone concludes that they must get on the bandwagon instead of pretending it’s not there or making any serious attempt to stop it.
So, when the CEOs of the AI companies describe the potential for existential threats emerging from their products, they incite a primordial fear that demands a response. And for many people, the natural response to that kind of deep fear is to freeze, fight or flight.
And for some who choose to do something, to take some kind of action, or just to scream like Munch’s painting, the obvious reflex action is to adopt AI proactively, thinking that this way they might get ahead of their competitors in this new arms race, one that inexorably leads to catastrophic thermonuclear war, where there are no hostages, just dead people.
And if everyone has that impulse, then it’s easy to imagine how most countries will rush to adopt AI and accelerate their deployment of AI in all militarized fields of international conflicts.
This is a remarkable feat of tech marketing hype.
Stoking fear to spur mass adoption of a new product that, by all accounts, does not work reliably yet is a great marketing ploy; but what these errant boys tend to forget is that Karma is a self-fulfilling bitch, and it’s going to bite them in the ass sooner rather than later.
Mark my words, and I am betting you dinner and a movie, that this too shall pass.
So this is intended as a tool for better decision-making, cool-headedness, calm, and a fvck-all ballsy perspective about AI and its ass-naked cheerleaders, because I wrote this book in order to help you arrive at a clearer understanding of what is happening. During the past three months, I found myself constantly whipsawed between extremes of elation/excitement/curiosity and fear/dread/uncertainty. It is very difficult to make sound decisions when your emotional state is turbulent. And people rely upon my advice to make decisions. So I needed a better way to attain some clarity about the rapid introduction of AI tools and consumer-facing AI. Now I have it.
I set out with a list of 8 to 10 dynamics that seem to be specifically relevant to artificial intelligence (especially the large language models that are the focus of the current hype wave). These are lenses that I used to bring the topic into focus and to dispel some of the fog of hype.
Because there is a lot of hype, ranging from extremely optimistic scenarios to incredibly dark ones. However, these are all very low-probability potential outcomes. For those who like to devote energy to fretting about extreme edge cases and low-probability events, this stuff can be a fulfilling intellectual exercise, but for the rest of us it may be a waste of time.
Paranoia is dysphoria.
That’s because there are real problems in our world today, and there are also some real opportunities and real issues that stem from the tech in its current immature form. This is where we can all benefit from some careful thinking.
Let’s start with the problems. We don’t need to raise the specter of existential threats to humanity to galvanize concerns about AI. There are a significant number of real issues and problems with artificial intelligence in its current state, today, right now. The focus on the low-probability future scenarios has the unfortunate side effect of diverting resources and attention away from the current problems.
These problems include: algorithmic discrimination; bias baked into large language models; the tendency of AI to deliver increasing returns to already-rich tech companies; and the tendency to reinforce existing disparities and inequalities.
Those problems are unlikely to fix themselves; if we do nothing, they will probably become entrenched and get worse as AI is further integrated into the software that we use for business, communications and socialization. For that reason I think it’s necessary for all of us to come to a clearer understanding of what the real, current issues are and what might be done about them.
Thrilling or scary future scenarios are a distraction from the mundane but important real-world problems, and that seems to me to be a major flaw in the hype cycle and the press coverage of generative AI. I think some balance is useful.
Ditto with the opportunities. There are many exciting opportunities afforded by the current generation of not-quite-ready-for-primetime conversational AIs and generative AIs. Even though these tools are in their infancy, they can be useful in certain ways particularly for creative people and writers in every discipline.
It is exciting to envision new kinds of companies and organizations that are built from the ground up to leverage artificial intelligence. We will see a lot of that kind of innovation this year. Likewise, things like Auto-GPT, domain-specific LLMs, and chain-of-reasoning seem likely to lead to some real breakthrough innovations and unexpected outcomes. The fact that developers are now messing around with and mashing up LLMs outside of the AI research labs is also very creative and likely to lead to some surprising outcomes. I am also seeing the emergence of a new kind of social creative process that is different from social networking. This stuff is new and exciting.
That said, these AI tools do not work reliably enough today to displace any workers. (Even though the fear-mongers at Goldman Sachs will tell you that 300 million people are likely to lose their jobs, there is zero evidence to support that today.)
Examples: nobody today is going to replace a trial attorney with an AI. No intelligent marketer is going to replace their copywriters with AI. No movie studio is going to make a film written solely by AI. While it is possible that AI will automate some aspects of software development, the output is still unreliable. And even when the performance improves, it is most probable that the human experts in these fields will use AI to be more productive and work more efficiently. So it’s not really “AI versus humanity” but “AI plus humanity.”
I am aware that there is no shortage of experts who will argue about that, because they prefer to inflame the imagination with worst case scenarios of mass unemployment. But what they cannot show you is any evidence of widespread job loss due to ChatGPT or StableDiffusion. Remember, this is happening at a time of very low unemployment.
And we need tools for increased productivity urgently, because of a fundamental demographic problem.
Globally, humanity is approaching a demographic inflection point because of low birthrates.
Most countries on the planet have a very low birthrate, far lower than previous generations, and below the “replacement rate” that would keep overall population numbers steady. Korea, Japan, and China in particular have extremely low birthrates, and it is beginning to make a dent in the size of those nations’ populations. Most other nations are on the same path, including the US.
The only reason the US does not have negative population growth is immigration, and the harebrained conservatives in the US are doing everything they can to stop that. I say “harebrained” because it is self-defeating. The entire economy of the US is predicated on the assumption of continuous growth and expansion. If the population stalls or declines, the economy will contract. Intelligent conservatives grasp that…
This is not a complicated phenomenon: when people move to cities, they have fewer children. In a rural environment, more children = more free labor. But in an urban environment, more children = more cost.
Urban families cannot afford to have lots of children, and that is why, in country after country around the globe, as populations urbanize, the birthrate goes down. There are few exceptions to this trend, mainly in sub-Saharan Africa, and they will probably follow the same path within a generation.
Why am I talking about demographics? Because when we view the advent of AI, automation and robotics from the lens of demographic trends, we can see that it will be necessary to automate a huge percentage of white collar work very soon. There simply won’t be enough people to do that work. Every nation on earth needs a big productivity boost.
Yet they only fret about their military might and the arms race to make Super Soldiers out of bleeding-edge research and development in bioengineering, genetics, and robotic AI.

In sum, if we want to solve real-world problems with big, audacious thinking, while we also maintain and grow our current living standards, we need to find ways to automate a big percentage of the work that is currently done by humans.
This will not occur overnight, so there is no reason to panic about it.
Managed insightfully, it might work out well for society.
AI is really helpful for all of us… and it is also anthropophobic, so let us embrace the good and throw away the bad.
Anyway, even if the development of new AI models were paused (which will not happen), the current versions will continue to improve themselves: theoretically exponentially, yet in reality within the boundaries of available computing, dataset storage, and energy.
According to the research cited in this video, this process of self-improvement appears to be accelerating: https://www.youtube.com/watch?v=5SgJKZLBrmg
The authors of the Reflexion paper wrote: “We hypothesize that LLMs possess an emergent property of self reflection” and then they showed how it happens.
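To make that hypothesis concrete, here is a minimal sketch of a Reflexion-style loop in Python. The `generate`, `evaluate`, and `reflect` functions are hypothetical stand-ins (simple string logic, not a real LLM API); the point is only the control flow the paper describes: attempt, check, critique, and retry with the accumulated critiques fed back in.

```python
def generate(task: str, reflections: list[str]) -> str:
    """Stand-in for an LLM call: returns a draft answer for the task.
    Here we pretend the model succeeds only after 'learning' from two critiques."""
    return "good answer" if len(reflections) >= 2 else "bad answer"

def evaluate(answer: str) -> bool:
    """Stand-in for a unit test or heuristic check on the answer."""
    return answer == "good answer"

def reflect(task: str, answer: str) -> str:
    """Stand-in for the model critiquing its own failed attempt."""
    return f"Attempt '{answer}' failed on '{task}'; try a different approach."

def reflexion_loop(task: str, max_trials: int = 5) -> tuple[str, int]:
    """Run generate -> evaluate -> reflect until success or the trial budget runs out.
    Returns the final answer and the number of trials used."""
    reflections: list[str] = []
    for trial in range(1, max_trials + 1):
        answer = generate(task, reflections)
        if evaluate(answer):
            return answer, trial
        reflections.append(reflect(task, answer))
    return answer, max_trials
```

With the toy stand-ins above, the loop fails twice, accumulates two reflections, and succeeds on the third trial; with a real model, `generate` would condition on the reflection history and `evaluate` would be an actual test harness.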
GPT-4 has demonstrated that it can self-correct and improve itself. This kind of recursive improvement is not limited to the software, either: AI is also helping Nvidia design better chips, including the H100.
This short video summarizes the findings of five recent research papers and other news about recursive improvement in artificial intelligence.
Tesla now holds the mantle of Moore’s Law with the D1 chip introduced last year, so I had to update my favorite graph and see where they lie.

This shift should not be a surprise, as Intel ceded leadership to NVIDIA a decade ago, and a further handoff was inevitable. The computational frontier has shifted across many technology substrates over the past 122 years, most recently from the CPU to the GPU to ASICs optimized for neural networks (the majority of new compute cycles).
Of all of the depictions of Moore’s Law, this is the one I find to be most useful, as it captures what customers actually value — computation per $ spent (note: on a log scale, so a straight line is an exponential; each y-axis tick is 100x).
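To see what those axes imply, here is a small back-of-the-envelope sketch. The ~1.5-year doubling period used below is my own illustrative assumption, not a figure taken from the chart; the only claim from the chart itself is that a straight line on a log scale is an exponential and that each y-axis tick spans 100x.

```python
import math

def growth_factor(years: float, doubling_years: float = 1.5) -> float:
    """Compounded improvement in computation-per-dollar after `years`,
    assuming a fixed doubling period (i.e. a straight line on the log-scale chart).
    The default 1.5-year doubling period is an illustrative assumption."""
    return 2.0 ** (years / doubling_years)

def ticks_crossed(factor: float, per_tick: float = 100.0) -> float:
    """How many 100x y-axis ticks a given improvement factor spans on the chart."""
    return math.log(factor, per_tick)
```

Under that assumed doubling period, a decade compounds to roughly a 100x improvement, i.e. about one full y-axis tick; a century spans many ticks, which is why the curve can run for 120 years and still look like a straight line.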
And what do we see… Humanity’s capacity to compute has compounded for as long as we can measure it, exogenous to the economy, and starting long before Intel co-founder Gordon Moore noticed a refraction of the longer-term trend in the belly of the fledgling semiconductor industry in 1965. Spooky. Like it’s on exponential rails.
In the modern era of accelerating change, it is hard to find even five-year trends with any predictive value, let alone trends that span the centuries. I would go further and assert that this is the most interesting tech growth graph ever conceived, as you can see from this post on its origins and importance: https://flic.kr/p/qS3HU4 and it reflects badly on our shortsightedness because it does not account for any externalities.
There may be dragons out there…
Now, let us ask why, within the integrated-circuit era, the transition happened from Intel to NVIDIA: for neural networks, the fine-grained parallel compute architecture of a GPU maps better to the needs of deep learning.
Indeed, there is a poetic beauty to the computational similarity of a processor optimized for graphics processing and the computational needs of a sensory cortex, as commonly seen in neural networks today.
Whereas a custom chip like the Tesla D1 ASIC is optimized for neural networks and extends that trend to its inevitable future in the digital domain, well beyond its current functionality and tasking.
Of course that presupposes that further advances are probable, possible, and welcome in the space of new chips yet to be designed: analog in-memory-compute devices. These new chips have yet to be designed, let alone manufactured, and I sense an opportunity here to ideate a novel design for a biological analog in-memory-compute chipset, with a motherboard specifically designed to be the operating-system platform for brain software…
Let us do this, because that would be an even closer approximation of the human cortex: a pure effort at quantum-chip computing as a biomimicry exercise.
As for money-making: the best business-planning assumption is that Moore’s Law, as depicted here, will continue for the next 20 years as it has for the past 120, but we all know how easy it is to upend predictions of that ilk.
War and total destruction will be with us all, and always, since it is the father of us all.
Tesla AI Day video: https://youtu.be/j0z4FweCy4M?t=6340
and summary: https://electrek.co/…/tesla-dojo-supercomputer-worlds…/
And although this hashing-out of these cool ideas is already too long, my friends and longtime followers know that I tend to use my “Bleeding Edge Blogs” as a place to thrash out my mind’s throughput process in public, because I enjoy working through ideas with smart people, getting a variety of opinions and perspectives thrown into the mix from all of your comments and PMs.
And since you know me, you know that I outsource, I open-source, and I delegate my “own thinking” to the hivemind for development: the free labor of that particular lumpen proletariat, the factoid mindhive, working hard to make the Royal Jelly and the humble honey for the ideafactory.
That’s what I call Brain Software development, crowdsourced and debugged in an extreme form of coding by many, for the many, and from the many brilliant minds all around me.
Thank You
Yours,
Dr Churchill
PS:
One day, a man was walking along the beach when he saw a boy throwing starfish into the ocean.
The man asked the boy why he was doing that, and the boy replied: “The tide is going out and the sun is getting hot.”
“If I don’t throw them back into the water, they’ll die.”
The man looked at the miles and miles of beach covered with stranded starfish and said: “But there are so many. You can’t possibly make a difference.”
The boy picked up another starfish, threw it back into the ocean, and said: “I made a difference to that one’s life.”

The man was Augustine of Hippo, a famed orator, pagan worshipper, and senator of the Roman city of Hippo in North Africa, across from Sicily, and this was his Pauline moment of belief… the revelation of the Meaning of Life delivered, at that instant, by the angel throwing starfish right in front of him.
The child was a three-year-old boy, appearing as an Angel of God to bring the reality of the Great Divine Spirit to this great orator, who went on to become an apologist for Christ, now known as the Great St Augustine, whose treatises on “Politics,” on “Just War,” and on “Living a Just Life” make all the sense in this topsy-turvy world for all of us who try to understand what is real and what is not, in order to be righteous people in service to humanity, with kindness and levity in our Souls.
And please do not worry — AI will never have any of that.