Skip to Content

Is "Artificial Intelligence" Intelligent?

A leadership perspective on the history of A.I. and computing.
March 13, 2026 by
Is "Artificial Intelligence" Intelligent?
Steve Simons

Can machines think?

The mathematical fact is that the current generation of so called “Artificial Intelligence” tools are not actually functional replicas of human intelligence.  On the contrary, they are aggregators of human intellectual property that were built to synthesize, summarize, and align with existing human creativity, leadership, and accomplishments with the goal of being accepted into human society by people who can’t tell the difference between novel cognition, wisdom, insight, judgment, creativity, and genius – and direct plagiarism, psychological manipulation, sycophantic appeals, the law of large numbers, deterministic models, and derivative thought.

This is not by accident.  This is the logical outcome of a challenge created for computing by Alan Turing, a British mathematician and computer scientist in his article “Computing Machinery and Intelligence” that was published in the journal Mind in 1950 as his answer to the article’s opening question, “Can machines think?”.  This article came as the result of Turing’s exploration of what he aptly called the “imitation game,” a thought experiment that he believed would serve to answer an easier question than, “Can machines think?”  because in his estimation (and in the arguments of many detractors of his thought process since) it is unlikely that this test could properly detect the presence of thought due to its close ties to consciousness.

The “imitation game” is simple in concept, a three-person party game where one person stays in the room and two others, a man and a woman, go into a separate room.  The game than has the first person ask questions of the two people they can no longer see in the other room in order to attempt to discover which is the man and which is the woman.  Turing than generalizes from this specific inquiry to create a parallel scenario where rather than determining the biological sex of two human players, instead the game is played by two human players and one machine, where a human player is in one room and the other human player and the machine are in the other.  The first player then asks natural language questions to the two unseen players in the other room to determine and accurately identify which player is human and which player is a machine.  The basic assertion of Turing’s test, then, is simply that if a human being cannot accurately or reliably identify which player is human and which is a machine, then the machine is achieving something analogous to thought and so would pass his test of “thinking.”

Turing is by no means the first or only to suggest that machines might be created to imitate humanity.  In fact, the father or modern philosophy, René Descartes, in 1637 wrote, “we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may explain that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do” (Discourse on the Method). With the obvious conclusion being that if a machine could reply appropriately to every natural language statement made in its presence it would be analogously human.

The problem with both Turing’s test and Descartes’ assessment is that they challenge computer scientists to manufacture a machine that can imitate, replicate, and respond to natural language queries in a fashion that is indiscernible from a human party’s activity given the same stimuli.  In other words, if it walks like a duck and quacks like a duck the conclusion should be that it is a duck.  And yet, even the most basic biological analysis of a robot built and programmed to walk like a duck and quack like a duck can easily discern that it is not a duck, but instead a replica built to meet a finite collection of identifiable traits of a duck.  It is the same with so called “artificial intelligence” or “AI.”  AI is built to pass Turing’s test, to respond to natural language queries in a way that cannot be distinguished from what a human might do as a result of the same query.  And yet, even Turing would acknowledge the fundamental reality that his test could not actually be said to answer the question, “Can machines think?

So, are machines now performing tasks that humans traditionally were required to perform? Yes, of course.  But, are they performing them in the same way, with the same insight, creativity, judgment, and character that humans bring to the table? The fact is, there is no test to measure or determine if this is the case beyond the plain observation of our five senses and the internal summary judgment of our own human intelligence.

So why does this matter?

It matters because what we build and the decisions we make are directly related to what we choose to observe and measure.  In any business, the key performance indicators (KPIs) are used to assess and define success and then by extension to set policy and strategy to improve and evolve the business to conform to that definition of success.  And, seeing as all measures, instrumentation, and people are finite in scope, this is factually an inescapable limitation of human progress.  We can only observe a finite collection of outputs, assess a finite collection of measures, make a finite number of decisions, and provide a finite collection of inputs within a given process, system, or business model and by definition this excludes whatever other outputs, measures, decisions, and inputs may be impacting, influencing, controlling, or determining the actual flow and impacts of a process, system, or business model.

In the case of artificial intelligence, the biggest blind spot is found in Turing’s original observation that he did not know how to answer his own question and so chose to answer a completely different question that he proposed would by extension answer his original question of “Can machines think?”.  The problem with this divergence is not to be found within what he observed or concluded based on the resulting thought experiment, but rather the issues arising from the use of Turing’s work arise from what he failed to observe or conclude.  Turing’s work is not somehow universally flawed as a result.  On the contrary, he provided a basic rubric for the measurement of artificial intelligence that has been used in every context from the simple CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) that you see as part of user authentication for so many applications, to the obvious, rudimentary, and almost instinctual thinking that users from the lowest levels of organizations to the C-Suite are using to make up their own minds about the quality and character of the current generation of AI platforms, systems, and applications in the wild.

The basic experience of the human brain is such that it will tend to believe the conclusions that it draws for itself in preference to conclusions which require information that it does not have access to, or has not considered, or has already considered and dismissed as unrelated to the phenomena being observed.  And while this pattern is effective for the purposes of allowing decision making for the sake of survival, it is not necessarily the most reliable in arriving at a full or even accurate picture of objective reality.  This fact, René Descartes described in his Meditations as the “plate glass window” standing between the human mind and our ability to observe and reveal the true nature of reality.  Like the Apostle Paul stating that we “now see through a glass, darkly” (1 Corinthians 13:12), Descartes warned that our ability to observe reality is limited by our five senses and the finite amount of information we can collect even with the addition of instrumentation and data gathering tools that far exceed and so extend our ability to dispel the obscurity and reasonable doubts that otherwise obstruct our ability to observe and determine the true nature of reality.

So, can machines think?

The answer cannot be based on Turing’s test because Turing himself acknowledged that his test could not determine or discern the answer to this question.  And in fact, given that Turing’s test has shaped the mass majority of thought about and strategy for the pursuit of artificial intelligence for more than 75 years, it seems important to disambiguate its influence from the factual deliverables that the current generation of tools built to pass his test are now offering.

Let’s start with basic scientific determinism.  The concept of cause and effect in the scientific method create a foundation for predictability of natural phenomena based on calculations performed on or with the Standard Model because it was built as a summary and replica inclusive of all physical phenomena known and the ad hoc assertion is that so long as the data observed and conclusions drawn through the scientific method based on that data are used to augment and continuously improve the Standard Model, then all physical events in the universe can be believed to be causally determined by preceding events and natural laws as reflected, aggregated, computed, and predicted in the Standard Model.  This foundational rubric and philosophy are the underpinning and methodology of modern science as practiced by every major innovator from Galileo to Einstein and beyond.  The industrial revolution, the dawn of the information age, and now the advances in “artificial intelligence” are all built on a rigorous adherence to the scientific worldview and its reliance on the Standard Model.  Even so called “uncertainty” in the field of quantum mechanics or the nondeterministic patterns observed in chaos theory don’t detract from the fundamental assertion that given the consistency of space-time and the objective nature of the universe, science provides a means for understanding and replicating phenomena by discovering and observing every cause for the purpose of being able to reproduce every effect.

So, can machines think?

Let’s look at the basic physicality of every living being that we consider to have consciousness, judgment, wisdom, creativity, and thought.  The chemical and mechanical processes of thought of creatures ranging from the most basic to the most complex have been studied at length for centuries and yet, in all of that extensive study, we have not found a single example of silicon-based life or silicon-based consciousness.  Every single being that we belief to have thought, perception, judgment, and conscious free will self- determination are carbon based.  And while silicon is directly below carbon in the 14th column of the periodic table and they share certain similarities, their differences are as striking as the differences between artificial intelligence and human thought, and that’s not just a coincidence.  The fact is, artificial intelligence is a silicon-based simulacrum of actual carbon-based life.  Silicon is a metalloid, carbon is not.

So, have we chemically and mechanically replicated human consciousness and thought? No.  The machine level functions of the human brain do not share any material chemical or mechanical similarity to the silicon processors that operate artificial intelligence so as to make it reasonable to conclude that the similarities in their reproduced outcomes derive from some fundamental reproduction of the underlying causes or natural laws at work in the delivery of human thought.

But the distinction between human thought and artificial intelligence doesn’t stop there.  The silicon-based machines that can store, process, and operate upon the data generated by humans who have, on their part, contributed to the ever-growing catalog of novel insight, innovation, disruption, invention, creativity, leadership, and visionary grasp of reality, have demonstrated time and again that they are simply incapable of anything more than deterministic function.  Even the most basic non-deterministic outcome, like generating a truly random number, requires silicon-based machines to use alternate physical phenomena like thermal electrical noise or laser light to create anything even approximating a truly non-deterministic outcome.  In contrast, humans and other carbon-based life forms demonstrate a capacity for non-deterministic thought and behavior so reliably that some have even used the existence of carbon-based life as a fundamental challenge to the universality of the Standard Model and scientific determinism itself.  In fact, free will as a characteristic of life at even the most basic chemical levels appears to contradict patterns and natural laws that are broadly applicable to other phenomena in the Standard Model but somehow seem to interact differently in the functions of carbon-based lifeforms during the temporary disruption of the entropic flow of the universe during their dust-to-dust lifecycles.

Now, there are those who believe that even human thought and free-will determination are actually fully determined by “nature” and “nurture” or genetics and environmental factors.  And yet, time and again we see divergent behavior even in the closest genetic and environmental matches, as in the case of identical twins, that would indicate the presence of actual freedom and self-determination not observed in any other context.

So, can machines think?

Perhaps you are thinking that at this point you don’t care if they think.  Perhaps, like Turing, you are comfortable letting go of the need for machines that think because the silicon replicas have such a great range of deterministic responses to mimic human behavior that the distinction for you appears to be a distinction without a difference.  If this is you, I respectfully would suggest that you may be a below average performer, below average observer, below average thinker, and below average creator.  This is not a personal insult or a statement against your character, abilities, actual value, or contributions to society.  Far to the contrary, it is simply an observation that if you can’t tell the difference between human value, contribution, wisdom, insight, creativity, ingenuity, problem solving, and thought more broadly and the law of large numbers, cut and paste, synthesis and aggregation solutions and responses provided by so-called artificial intelligence, it seems likely that you literally can’t see or discern or perceive or identify distinctive human behavior at a level higher than the machines can deliver, literally, an above the average kind of thinking.

Once again, this is not to offend you or anyone else, it is simply an observation that the very nature of the machines and algorithms built to pass Turing’s test, were built to avoid detection by the average observer.  So, if it works on you, you might be average or below average and if it doesn’t work on you, you might be average or above average.  It’s definitionally the result of the fact that there is no objective standard in Turing’s test, only a population of subjective standards.  And, as in any population there are distributions of just about any characteristic you can identify.  Whether it is metacognition, mathematical computation, emotional intelligence, data analysis, the ability to detect deception/misinformation/disinformation, or any of the other many spectrums of skill and ability in a human population, there tends to be a distribution within a population that can be statistically observed to identify the mean, the median, the mode, and more for that particular characteristic within the population.  So, if a machine that is solely capable of deterministic activity is built explicitly to perform well on Turing’s “imitation game” can that machine be said to be artificially intelligent?  The answer should be, “no, it is not intelligent, it is simply proficient in performing calculations based on the deterministic rubrics of its creators.”

So, can machines think?

No.

Why does it matter if machines can think?

This is the more interesting question.  Once you understand what silicon-based machines are capable of it becomes far more apparent what they are actually delivering in response to the prompts offered to them.  And once you understand what is actually happening, there can be far more clarity on the legal and technical implications of how these machines consume and use intellectual property, information, and data.  In the same way our legal structures we designed to govern the human brain’s consumption and use of intellectual property, information, and data, so a proper understanding of AI will allow us to govern the legal, ethical, and moral implications of the business practices, systems, and processes that make up the AI universe.

But it is more than a legal question.  There is a bigger issue at stake.  What does the adoption of the current generation of AI algorithms, tools, and applications result in for the enterprises that embrace them?  Consider for a moment how large language models are trained, for example.  What information, data, and intellectual property do you want in the consideration set and context of the information your business is using to drive the creation of its products, the delivery of its services, the establishment of its strategy, and more?  Have you considered the normative effect of introducing a law of large numbers contributor into every boardroom decision, product management strategy, engineering design pattern, and more?  What does the introduction of a relatively small handful of meaningful contenders in the AI platform space do to the natural diversity of thought, problem solving creativity, investigational prowess, and capacity for invention in the broader population of enterprises that make up a national economy?  While gaining broad access to a law of large numbers solution should mathematically be attributive to a below average player in some industry, is trending to the norm a positive development for actual leaders and disruptors in that space?  How are the current silicon-based machines built to convince humanity of their similarity anything more than a more efficient herding mechanism for those who follow and a moat of defense for all those who actually lead?  And what happens when those who actually lead stop contributing their intellectual property, information, and data to the herding machine?  While the below average players are using the law or large numbers to catch up to average, the above average players are pulling away and then protecting access to the above average solutions they create, thereby leaving the law of large numbers solution degrading in quality over time.

There is another aspect to consider on a population basis.  Why should functionally solvent and robust enterprises with quality products and services in the market introduce functionally insolvent products and services into their infrastructure, go to market mechanisms, and service models?  What is the disaster recovery and business continuity plan to extract a business from its reliance on a product or service if the cost of that product or service increases dramatically to rectify this functional insolvency?  How many businesses are being put at risk by the massive capital injection facilitating the meteoric expansion of infrastructure and tools being offered under unsustainable pricing models?  What is the true cost of adopting these so-called artificial intelligence algorithms and the platforms that operate them into a business?

Consider the economics of cloud computing as an example.  One of the key value propositions of cloud computing was the mitigation of CapEx by exchanging it for OpEx.  For small businesses, high growth businesses, rapidly changing businesses, or highly variable businesses that were likely not to realize the full value of the depreciating assets of traditional infrastructure, cloud computing became a compelling economic offering.  However, for large enterprises with stable operating environments and predictable workloads it was far harder to make an economic case for cloud computing in preference to traditionally owned infrastructure.  Ultimately the total cost of ownership needs to be evaluated carefully to get past the hype and determine what strategy actually provides the best return for a specific business in a specific scenario.  And yet, many boardrooms have been dominated by executives preaching the absolute necessity of adopting cloud computing even in scenarios where it may not be a good economic or technical fit.  Why? Because many times the law of large numbers drives decision making instead of specific human insight, understanding, vision, or strategy.  Seeking the average position in a population is felt as a safe or risk averse position, or at the very least an easily defensible decision to make, i.e. “no one get’s fired for going with _____________” kind of thinking often plays well to the human decision maker, even if it is actually just putting a business into a weaker but more common position.

The fact is, the law of large numbers cannot lead, it can only follow.

The fact is, silicon-based machines cannot lead, they can only follow.

And this is the fundamental distinction between the human player and the machine player in Turing’s famous test.  An above average human player can distinguish the difference between the human player and the machine player by using novel cognition, creative thought, human intuition, and good judgment to press the human and the machine to compete not on the grounds where the deterministic algorithms that Descartes described have been designed, trained, and prepared to calculate or regurgitate synthesized, summarized, and aggregated human intellectual property, but instead by observing the reversion to the mean of every response in the pattern of responses to a range of queries seeking thoughtful, insightful, innovative, disruptive, wise, creative, or otherwise novel thinking.  The fact is, machines can’t think, they can only process.  And the truth is, there are many, many, many tasks that don’t require thinking, but instead can be solved by a simple deterministic process.  However, it should be understood by the visionary and the strategist choosing between these two options, what the impact of removing thought and replacing it with processing will deliver as a result.  So, for instance, it is easy to see how a process can use the law of large numbers to deliver acceptable answers, but maybe the question shouldn’t be whether or not an answer is acceptable, but whether or not it is truly valuable.

The problem with today’s so-called artificial intelligence platforms is that they are simply meeting the subjective standards of Turing’s test, not actually delivering on the promise of actual general intelligence.  And so perhaps instead of asking Turing’s question of whether or not a human can tell the difference between a human and a machine, which is a fundamentally misleading and flawed measure of success, perhaps instead we should be pursuing an objective measure that would allow us to evaluate accomplishments against a factual standard instead of average human perception.  The question is, how can we create such a standard or measure against it.  What should be the next generation replacement for Alan Turing’s “imitation game”?

Whatever the next generation test is, it should take into consideration the physical, chemical, mechanical, and electrical constraints of the system in its analysis and comparison together with a more transparent evaluation of each algorithm involved in the delivery of a response.  So, for example, a large language model might be great at providing a natural language interface, but many of the actual tasks and automations and activities that it performs may be traditional software not based on machine learning, neural networks, or other algorithms traditionally classified as artificial intelligence.  So, to enable wiser decision-making, more accurate system analysis, more secure application design, and in general more informed engineering, it is important to make the distinction to understand the alignment of the algorithms being used with the results being required and the prompts being provided.  So, rather than treating AI like a “black box” algorithm, robust governance and risk management should require businesses to understand the decisions they are making and so plan appropriately for the business continuity, disaster recovery, legal and compliance requirements, and cybersecurity concerns introduced by the use of the tools, algorithms, applications, systems, and platforms being used in the course of business.

In the same way that businesses evaluate risk management from outside threats and inside threats, and evaluate the vetting process for vendors, contractors, employees, and other third parties in the human world, the same or greater scrutiny should be applied when evaluating the introduction of autonomous, semi-autonomous, or even fully managed software, systems, processes, and platforms into the operations of a business.  Thought should be given to the unusual nature of the information being shared with AI platforms and vendors which includes not just the intellectual property of the company and its people, but also the intellectual property of the position of full surveillance of the processes that humans go through in generating, interacting with, improving, and evolving that intellectual property.  In fact, as an example, consider the fact that perhaps the intellectual property of greatest value that is being delivered to the AI platforms is not the explicit code samples being written by the human developers, but the ability of the platform to observe and record the problem solving and debugging process that follows the initial creation of the explicit code.  Do companies properly understand the massive transfer of intellectual property occurring every single time a human prompts an AI model, corrects an AI model, modifies the result from an AI model, etc?  Are companies and people being appropriately compensated for this transfer of training information which will then be incorporated and distributed by those AI platforms without royalties being paid to the originating party that actually trained their model by their use of it?  How are distinctions being made between intellectual property in the form of content creation versus intellectual property in the form of activity, process, and behavior?  Have companies considered the massive transfer of intellectual property occurring through the vehicles of these large AI platforms from the biggest players in tech?  Are we confident that our trade secrets, proprietary processes, and the unique value of our team members’ contributions are able to be defended in a world built on a shared model?  Have companies even properly vetted the data being used to deliver the popular AI models to ensure that no third-party intellectual property claim could be made against them simply by using these tools in the ordinary course of business?  The list goes on and on.

Do content providers who rely upon human consumption of their intellectual property in the form of published content have the tools available to them today to ensure that their content is only being delivered to human consumers who are economically facilitating their business model either through advertising or user fees or some other business model?  Is there a business model for human content creators to ensure that they are properly compensated for their contributions?  Once again there is a strong value proposition for these law of large number, reversion to the mean type of tools for all the humans that find themselves below the average, but what of the actual innovators, disruptors, creators, and leaders? Aren’t we watching the creation of these intellectual property vacuums both increasing their actual value in the world from an objective sense as the result of their contributions being truly essential to the delivery, evolution, and improvement of the collectivist model while at the same time decreasing their ability to receive compensation for their work as both their content and their creative process is surveilled, digested, and returned to the general population sanitized of any meaningful or economic connection with the creator?

I have been writing software since 1983 when I wrote my first line of code on an Apple II using LOGO.  I was in kindergarten at the time.  Since then I have written software in more than twenty languages and across some of the most interesting eras of computing, including the invention of modern networking, the birth of the internet, the invention of ecommerce, the dawn of virtualization, the creation of the message queue and the resulting doom scrolling feeds being pushed at the center of every so-called “social” application, and now the rise of natural language as a viable human-machine interface to name just a few of the defining moments along the way.  And what I can tell you based on that experience is that the tools being delivered today under the moniker “Artificial Intelligence” are really driving more of a disruption in the machine-to-machine interface world than the more entertaining changes that have arrived in the human-to-machine interface because of the natural language models being wrapped around the more mundane logic of the actual agents completing the tasks.

Similar to platforms like Alexa, Siri, and Google Assistant, the AI revolution is constrained to a finite number of tasks that have been intentionally integrated via machine-to-machine interfaces with the big AI models and so can be called when a request requiring that behavior is made by a human user through the natural language interface.  For example, every time an AI model creates a file, schedules a calendar event, sends an email, or any other real-world behavior that activity is being completed by compiled code that was written for the explicit purpose of making that feature available through an established interface.  In fact, many of the vibe coding platforms follow this pattern for delivering a more secure or reliable user authentication experience or other security controls or features rather than trusting the AI model to appropriately deliver a codebase that could effectively defend user information and enterprise data.  The list goes on, but it is important to realize that even in software engineering where “code writing code” has been a goal for decades and where the teams writing the AI models are deeply knowledgeable and aggressively focused on ensuring the functionality is prioritized in their deliverables, the code that is being written by the AI models is not being drawn out of thin air or created on the fly in the way that many users seem to believe it is.  Human software developers who are using these tools to replace older research and development tools for basic code completion witness this every day.  As you would expect from a law of large numbers, reversion to the mean type of algorithm the AI models deliver new solutions for smaller requests more reliably than they deliver problem solving or debugging and correction for existing solutions in larger contexts.  In other words, AI models function more like collaborative tools than they do like a true human collaborator, it’s just that not all of the collaborators are on your team or even at your company.  So, the question is, do you want to participate in training a collaborative tool that will be used not only by your competitors but also to compete directly with you for your role at your company?  And for companies, do you want to contribute to a collaborative tool that will not only be used by your competition, but will also normalize all of the code your company creates thus increasing the likelihood of vulnerabilities that would be shared not only by your solutions but by a large enough collection of applications in production in the wild so as to make your code a more likely target for malicious actors?

Perhaps the most interesting exploration of all in the world of AI is considering not just the access to intellectual property and the full surveillance of process, but actually reflecting on the nature of trust in a collaborative world like these AI algorithms create.  What exactly is the basis for trust not only for the integrity of company data and consumer privacy, but for the actual attack vectors afforded to malicious actors through the veil of the collaborative nature of AI models themselves.  In evaluating toolsets for the creation of applications, security is always a top consideration as malicious attacks continue to increase in number and sophistication every single day.  The fact is, any tool, AI or otherwise, that is used in every business in the United States or around the world everyday is a target for malicious actors to infiltrate access for the purpose of control and exfiltrate data and information for the purpose of intelligence, monetary gain, or other malicious ends.  This is just a fact and a reality that continues to accelerate in apparently increasingly aggressive ways every year with funding and focus not only driven by organized crime but now also nation-state level actors with nation-state level resources and capabilities driving them.  This isn’t just a game of simple mail fraud or muling money laundering operations for petty criminals anymore.  These are vast networks with complex goals and long-term strategies.  And the attack vectors into corporations have now expanded to include not only the simple prompt injection attack, toolchain attacks, open source exploits, and what has come to be known as “AI slop”, but also the introduction of corrupt training data that will set the stage for broader comprises in the future unless they are detected and removed before they are incorporated in a model.  The vulnerabilities are extensive and the malicious actors are not waiting for enterprises to wake up to the potential data compromises and malicious code execution that are being deployed against them every day.  The question is, without full transparency, accountability, audit, and control of the AI model, how can a company trust any shared AI toolkit, platform, application, or model?

The blind trust and excitement of the individual early adopters that are testing the algorithms for feature sets and functionality is easy to understand.  It feels like the birth of the internet all over again, where everyone was so focused on collaboration and technical innovation that security took a backseat.  I still wonder to this day why some of the basic protocols of the internet and world wide web haven’t been replaced with more security minded approaches now that we have grown up from the idealistic days of the early internet and now live in a world where every packet on every network could be the result of a malicious actor seeking your harm.  The fact is, we are too mature at this point in our experience online to not consider the full picture being proposed to us by these tech giants.  We are being manipulated by the most basic emotional hype of rudimentary fear and greed to embrace solutions that are not thoughtful, strategic, or necessarily even beneficial.  The classic brain hack of urgency is causing executives from every industry and walk of life to feel a sense of sinking uncertainty and desperation combined with a newfound sense of opportunity at the idea of what a natural language interface could do for their business.  And yet, even with all of the influencers and commentators promising amazement and the magic of a machine that passes their wildest expectations of humanity, I feel that perhaps the reality is that these tools, like every technology that has come before are better used in the hands of highly educated, highly trained professionals that are themselves above average and so can see the distinctions between humans and machines clearly and leverage both to their greatest advantage.

People who are not highly educated, highly trained technology professionals may not realize that AI is not actually a categorical shift in computing, but is more of an evolutionary change, similar to what service-oriented architecture, webservices, and microservices did earlier this century.  “AI” is just a moniker like “Web 2.0” or “the cloud” were before it, a label to categorize the kind of design patterns, algorithms, and solutions developers are using to build the human experiences that will define this generation of applications for better or worse.  And this is an evolution that has had similar goals and dreams ever since Alan Turing put pen to paper in 1950 to ask the question, “Can machines think?

Consider for a moment the history of computing, with each generation of languages seeking to make the human-to-machine interface easier to use and more accessible to more and more people with less and less knowledge required from the user.  First generation languages (1GL) like binary machine code required humans to understand the full instruction set for specific processors.  Second generation languages (2GL) like assembly were still machine-dependent and low-level but used symbols to make it easier for humans to read and understand. Third generation languages (3GL) like BASIC, ALGOL, FORTRAN, COBOL, C, C++, Pascal, Java, Python, C#, and so many more were introduced to use English-like statements and syntax that was more easily read and understood by humans, albeit still technically trained humans, but had to be compiled to be understood by machines.  And then fourth generation languages (4GL) like SQL were thought to be so high level and similar to natural language that even a non-technical user like a business owner or executive would be able to use them to accomplish basic tasks.  And now we have what are functionally fifth generation languages (5GL) like LISP, Prolog, Mercury, and more, even including natural language itself being used in the delivery of AI and expert systems where the human is expected only to be responsible for the constraints and the logic of a problem definition and solution design and so using the human-to-machine interface to tell the computer more of “what” to do, rather than necessarily all of the details of “how” to do it.

So, what does this mean for AI?  It means that like every generation before it, the experts who are trained, educated, and equipped to understand the actual machines, the actual algorithms, and the actual solutions being built will still be, as they always have been, the primary operators of the machines, with most humans remaining in the user class rather than promoting to the developer or true creator level of competence and comprehension.

The fact is, no, machines cannot think.  So long as they are bound by the metalloid constraints of silicon, they never will win in the battle with carbon-based machines that do not require synchronous computing or a clock speed, that have the ability to live-time rewire themselves in response to every stimuli and only consume, on average, the equivalent of about twelve watts of power derived not from a centralized power grid and complex generation infrastructure, but from exponentially renewable and highly scalable resources like meat and grain and vegetables and dairy when it comes to actual creativity, novel cognition, wise judgment, executive decision-making, true insight, contextual understanding, and comprehensive semantic vision and grasp of things with true and lasting value like the human identity, human impact, human heart, and actual relationship.

So, if you want to give your developers some cool tools to speed up their research and development lifecycle, the current generation of so-called “artificial intelligence” platforms are a great tool in the hands of a true artist.  But if you think the shadows cast by these deterministic players on the canvas are somehow indicative that they are capable of anything more than derivative contributions and average answers to ordinary queries, you may just be a below average player, hoping to augment your subpar performance by a reversion to the mean.  But there is one thing I can guarantee you – no true artist, creator, developer, or leader has ever set their sights on reverting to a mean that is factually, statistically, and in all reality always perpetually somewhere behind them in their pursuit of quality, beauty, intelligence, and impact.

At the end of the day, everything of actual value is human.  Every relationship. Every customer. Every team. Every partner. Every final measure of success.  Every goal post along the way.  The value of every human life is intrinsic and beautiful and unique.  And the tools that we build to improve those human lives should be seen for what they are, inanimate metalloid infrastructure converting power into action based on the rules and constraints that we impose upon them and through them as the canvas and the brush that bring our vision and our uniquely human thought to life.

So, can machines think? No, but humans can and should.

Can machines lead? No, but humans can and should.

Can machines empower human creators instead of replacing them? Yes, and that’s the point.

Originally Published on LinkedIn by Stephen Simons as Why Isn't "Artificial Intelligence" Intelligent?