Posted by: Dr Churchill | December 15, 2016

Artificial Intelligence Innovations for CyberSecurity & Communications Networks

The amazingly nasty Fembot of the future has already arrived.

This amazing AI robotic innovation comes amid quiet fanfare, to the joy of some, and to the simultaneous chagrin of some others of our fellow humans. And as with every technological breakthrough — some will benefit and enjoy the new innovation and some others will lament it, and reject it outright.

C’est la vie.

All, for their own personal and professional reasons, will react as predicted.

But judging by the quality of today's fembots, the feel of their skin, the rigor of their actions, and their engaging intellect, I am certain that housewives, hookers, and escorts are all going to be crying the blues before too long.

All for their own differing reasons.

Yet the Geeks are already rejoicing and high-fiving the advent of this new robotics and AI technology.


Artificial Intelligence Innovations for CyberSecurity & Communications Networks: On January 13th we have a meeting at the University of Washington, and you can participate by registering here:

Latest Innovations in Artificial Intelligence for CyberSecurity, for Communications, and for Organizational Network Infrastructure. Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving."

Our distinguished panel will discuss the latest developments in AI (artificial intelligence) and will introduce state-of-the-art solutions to the field. This will also serve as a Jobs Fair in the fields of CyberSecurity and Artificial Intelligence. Please come with your Resume / CV, and with all your talents, your questions, and your comments.

If you are an AI startup and would like to participate in this Event by Pitching and Demoing your Products & Services, or if you need space on a demo table for the networking portion of the event, please send an email message to the Organizer herewith.

Now we are not just talking about Robots here, or about your Roomba and the way you don't have to clean your house…

We are talking about Intelligent Systems Emergence.

And here’s how artificial intelligence could change the face of Society starting from Information and Data management, and moving on to cybersecurity, governments, organizations, defense and business in general:

Artificial Intelligence (AI) may be the single most disruptive technology the world has seen since the Industrial Revolution. Granted, there is a lot of hype out there on AI, along with doomsday headlines and scary movies. But the reality is that it will positively and materially change how we engage with the world around us. It’s going to improve not only how business is done, but the kind of work we do – and unleash new levels of creativity and ingenuity.

In fact, artificial intelligence could double the annual economic growth rates of many developed countries by 2035, transforming work and fostering a new relationship between humans and machines. Accenture research projects that AI technologies in business will boost labor productivity by up to 40 percent. Rather than undermining people, we believe AI will reinforce their role in driving business growth. As AI matures, it could serve as a powerful antidote to the stagnant productivity and skilled-labor shortages of recent decades.

While it is early days, we are already seeing AI’s impact. Combined with cloud, sophisticated analytics and other technologies, it is starting to change how work is done by both people and computers. It’s also changing how organizations interact with consumers, sometimes in startling ways.

AI is flourishing now because of the rise of ubiquitous computing, low-cost cloud services, near-unlimited inexpensive storage, new algorithms, and other related technology innovations. Cloud computing, along with advances in Graphics Processing Units (GPUs), has provided the necessary computational power. AI algorithms and architectures have progressed rapidly, often enabled by open source software.


Artificial Intelligence is not just one technology, but rather a variety of different sorts of software that can be applied in numerous ways for different applications.

But equally important is a vast increase in the availability of data. AI does not think for itself. Its insights are possible because the software gets fed information, and the more information it gets, the more insight it can produce. Over the last decade, crowdsourced data in particular has proliferated on the internet and social media. People in their daily lives upload massive quantities of images, videos, social media comments, and chat dialogues.

All that creates labelled data that is available for machines to use in what’s called machine learning.


In fact, the reality is that our cognitive development is not dissimilar to that of the robotic sciences…

Because as a matter of fact, we’re closer to robots than you might think.

The Real Question is: Could Human Intelligence be the product of a basic mathematical algorithm?

Today we know that relatively simple mathematical logic underlies our complex brain computations…


Admittedly the human brain is the most sophisticated organ in the human body. The things that the brain can do, and how it does them, have even inspired a model of artificial intelligence (AI).

Now, a recent study published in the journal Frontiers in Systems Neuroscience shows how human intelligence may be a product of a basic algorithm.

This algorithm is found in the Theory of Connectivity, the idea that a "relatively simple mathematical logic underlies our complex brain computations," according to researcher and author Joe Tsien, a neuroscientist at the Medical College of Georgia at Augusta University, co-director of the Augusta University Brain and Behavior Discovery Institute, and Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology. He first proposed the theory in October 2015.

Basically, it's a theory about how the acquisition of knowledge, as well as our ability to generalize and draw conclusions from it, is a function of billions of neurons assembling and aligning. "We present evidence that the brain may operate on an amazingly simple mathematical logic," Tsien said.

[Image: brain formula graphic. Source: Augusta University]

So indeed we're far closer to robots than you might think. Our basic intelligence could even be the product of a basic algorithm: a relatively simple mathematical logic underlies our complex brain computations, and our basic feeling and thinking functions have assembled and evolved over many millennia of our development…


The theory describes how groups of similar neurons form a complexity of cliques to handle basic ideas or information. These groups cluster into functional connectivity motifs (FCMs), which handle every possible combination of ideas. More cliques are involved in more complex thoughts.

In order to test it, Tsien and his team monitored and documented how the algorithm works in seven different brain regions, each involved in handling basics like food and fear, in mice and hamsters. The algorithm represented how many cliques are necessary for an FCM: a power-of-two-based permutation logic (N = 2^i - 1), according to the study.

They gave the animals various combinations of four different foods (rodent biscuits, pellets, rice, and milk). Using electrodes placed at specific areas of the brain, they were able to "listen" to the neurons' response. The scientists were able to identify all 15 different combinations of neurons, or cliques, that responded to the assortment of food combinations, as the Theory of Connectivity would predict. Furthermore, these neural cliques seem prewired in the brain: they appeared as soon as the food choices did.
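As a quick back-of-the-envelope check, the power-of-two permutation logic can be sketched in a few lines of Python (a hypothetical illustration, not the study's actual code): with i = 4 foods, the formula N = 2^i - 1 predicts exactly the 15 cliques the team observed.

```python
from itertools import combinations

def clique_count(i):
    """Cliques predicted by the power-of-two permutation logic: N = 2**i - 1."""
    return 2 ** i - 1

def nonempty_combinations(items):
    """Every non-empty combination of the given inputs."""
    return [c for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

foods = ["biscuits", "pellets", "rice", "milk"]
print(clique_count(len(foods)))            # 15
print(len(nonempty_combinations(foods)))   # 15
```

The two numbers agree because every clique in the theory corresponds to one non-empty subset of the inputs.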

If the intelligence in the human brain, in all its complexity, can be summed up by a particular algorithm, imagine what it means for AI. It is possible, then, for the same algorithm to be applied to how AI neural networks work, as these already mimic the brain’s structural wiring.

And indeed this process will create jobs and not take them away from us…

While many believe that AI will supplant humans, we think it will instead mostly enable people to do more exceptional work. Certainly, AI will cause displacement of jobs, but it may also significantly boost the productivity of labor. Innovative AI technologies will enable people to make more efficient use of their time and do what humans do best – create, imagine and innovate new things.

With technology overall and AI in particular, the key ingredient for success and creating value is taking a “people first” approach. But to make this transition means both companies and governments must acknowledge the challenges and change how they behave. They must be thoroughly prepared—intellectually, technologically, politically, ethically and socially.

Governments and businesses will need to take several steps, many of which are not easy, but are all necessary if they hope to maintain their relevance and competitive edge, or even if they simply hope to maintain their own standing in the world…

Here are the Top Five Steps They Need To Take:

1) Prepare the next generation. Re-evaluate the type of knowledge and skills required for the future, and address the need for education and training. AI presents the opportunity to prepare an entirely new sort of skilled and trained workers that do not exist today. This training should be targeted to help those who are disproportionately affected by the coming changes in employment and incomes.

2) Advocate for and develop a code of ethics for AI. Ethical debates, challenging as they will be, should be supplemented by tangible standards and best practices in the development and use of intelligent machines.

3) Encourage AI-powered regulation. Update old laws and use AI itself to create adaptive, self-improving new ones to help close the gap between the pace of technological change and the pace of regulatory response. This will require government to think and act in new ways appropriate to the new landscape, and means more technologically-trained people must play an active role in government.

4) Work to integrate human intelligence with machine intelligence. Businesses must begin reimagining business processes, and reconstructing work to take advantage of the respective strengths of people and machines.

5) Increase their share of spending on Artificial Intelligence in order to capture the market demand and opportunity for AI which is expanding rapidly, with analyst firm IDC predicting that the worldwide content analytics, discovery and cognitive systems software market will grow from $4.5 billion in 2014 to $9.2 billion in 2019. In fact, Accenture’s Technology Vision 2016 — research that gathers input from more than 3,100 global business and IT executives — found that 70% of them are making significantly more investments in AI-related technologies than two years ago, with 55% planning to use machine learning and embedded artificial intelligence. Equity financings for AI companies have risen from $282 million in 2011 to $2.4 billion in 2015, or 746%, according to researchers at CB Insights. AI patents are being granted at a rate five times greater than 10 years ago. AI start-ups in the US alone have increased 20-fold in just 4 years.

A major Italian government agency offers a good example of how AI can dovetail with the work people do and enable them to be more effective. Employees there were spending the majority of their time attending to routine customer queries. The agency worked with Accenture to automate the process with AI. An intelligent Virtual Agent application now handles real-time voice calls and webchat interactions, using a combination of cognitive-semantic analysis and machine-learning algorithms. After just three months, the Virtual Agent application has already successfully served more than 70,000 users. Employees can now take on more demanding and rewarding tasks, which can positively impact their engagement.

AI is also positively impacting how governments operate. The Singapore government’s Safe City program uses the latest in video analytics and image recognition to assist in public safety. It increases security, delivers services more effectively and makes more efficient use of city resources.

The Accenture Institute for High Performance and Accenture Technology, in collaboration with Frontier Economics, modeled the impact of artificial intelligence on 12 developed economies that together generate more than 50 percent of the world’s economic output. The research compared the size of each country’s economy in 2035 under a baseline scenario, in which economic growth continues under current conditions, with an AI scenario, in which the impact of AI has been absorbed into the economy.

AI was found to yield the highest economic benefits for the United States, increasing annual growth from 2.6 percent to 4.6 percent by 2035, translating to an additional $8.3 trillion in gross value added (GVA). In the United Kingdom, AI could add an additional $814 billion to the economy in the same period. Japan has the potential to more than triple its annual rate of GVA growth by 2035, and Finland, Sweden, the Netherlands, Germany and Austria could see their growth rates double.

AI can empower people to create, imagine and innovate at entirely new levels to drive growth and productivity. Far from simply eliminating repetitive tasks, AI should put people at the center, augmenting the workforce by applying the capabilities of machines so people can focus on higher-value analysis, decision-making and innovation.

But What Is Artificial Intelligence Really All About?

Beyond The Buzzwords, Machine Learning and All That Jazz?

Lately, tech companies have gone absolutely crazy for machine learning. They say it solves the problems only people could crack before. Some even go as far as calling it “artificial intelligence.” Machine learning is of special interest in IT security, where the threat landscape is rapidly shifting and we need to come up with adequate solutions.

Technology comes down to speed and consistency, not tricks. And machine learning is based on technology, making it easy to explain in human terms. So, let’s get down to it: We will be solving a real problem by means of a working algorithm — a machine-learning-based algorithm. The concept is quite simple, and it delivers real, valuable insights.

Problem: Distinguish meaningful text from gibberish…

Human writing might look like this:

“Give a man a fire and he’s warm for the day. But set fire to him and he’s warm for the rest of his life.”
“It is well known that a vital ingredient of success is not knowing that what you’re attempting can’t be done.”
“The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.”

Gibberish looks more like this:

DFgdgfkljhdfnmn vdfkjdfk kdfjkswjhwiuerwp2ijnsd,mfns sdlfkls wkjgwl
reoigh dfjdkjfhgdjbgk nretSRGsgkjdxfhgkdjfg gkfdgkoi
dfgldfkjgreiut rtyuiokjhg cvbnrtyu

Our task is to develop a machine-learning algorithm that can tell those apart. Though trivial for a human, the task is a real challenge for machines; it takes a lot to formalise the difference. This is where machine learning comes in: we feed some examples to the algorithm and let it "learn" how to reliably answer the question, "Is it human or is it gibberish?"

Every time a real-world antivirus program analyses a file, that’s essentially what it’s doing.

Because we are covering the subject within the context of IT security, and the main aim of antivirus software is to find malicious code in a huge amount of clean data, we’ll refer to meaningful text as “clean” and gibberish as “malicious.”

Solution: Use an algorithm.

Our algorithm will calculate the frequency of one particular letter being followed by another particular letter, thus analysing all possible letter pairs. For example, for our first phrase, “Give a man a fire and he’s warm for the day. But set fire to him and he’s warm for the rest of his life,” which we know to be clean, the frequency of particular letter pairs looks like this:

Bu — 1
Gi — 1
an — 3
ar — 2
ay — 1
da — 1
es — 1
et — 1
fe — 1
fi — 2
fo — 2
he — 4
hi — 2
if — 1
im — 1

To keep it simple, we ignore punctuation marks and spaces. So, in that phrase, a is followed by n three times, f is followed by i two times, and a is followed by y one time.
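The pair counting described above can be sketched in Python (a hypothetical illustration of the idea, not production antivirus code):

```python
import re
from collections import Counter

def letter_pairs(text):
    """Count adjacent letter pairs inside each word,
    ignoring spaces and punctuation marks."""
    counts = Counter()
    for word in re.findall(r"[A-Za-z]+", text):
        counts.update(word[i:i + 2] for i in range(len(word) - 1))
    return counts

phrase = ("Give a man a fire and he's warm for the day. "
          "But set fire to him and he's warm for the rest of his life.")
pairs = letter_pairs(phrase)
print(pairs["an"])  # 3
print(pairs["he"])  # 4
```

Running it on the sample phrase reproduces the counts in the table above.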

At this stage, we understand one phrase is not enough to make our model learn: We need to analyse a bigger string of text. So let’s count the letter pairs in Gone with the Wind, by Margaret Mitchell — or, to be precise, in the first 20% of the book. Here are a few of them:

he — 11460
th — 9260
er — 7089
in — 6515
an — 6214
nd — 4746
re — 4203
ou — 4176
wa — 2166
sh — 2161
ea — 2146
nt — 2144
wc — 1

As you can see, the probability of encountering the he combination is almost twice as high as that of seeing an. And wc appears just once (in the single occurrence of the word newcomer).

So, now we have a model for clean text, but how do we use it? To determine the probability of a line being clean or malicious, we'll define its authenticity. We will look up the frequency of each pair of letters in the model (evaluating how realistic each combination of letters is) and then multiply those numbers:

F(Gi) * F(iv) * F(ve) * F(e ) * F( a) * F(a ) * F( m) * F(ma) * F(an) * F(n ) * …
6 * 364 * 2339 * 13606 * 8751 * 1947 * 2665 * 1149 * 6214 * 5043 * …

In determining the final value of authenticity, we also consider the number of symbols in the line: the longer the line, the more numbers we have multiplied. So, to make the value equally suitable for short and long lines, we do some math magic: we take the root of degree "length of the line minus one" of the product, which amounts to taking the geometric mean of the pair frequencies.
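Sketched in Python (a hypothetical illustration, with an assumed floor value of 1 for pairs the model has never seen), the length-normalised authenticity score looks like this; summing logarithms avoids overflow from multiplying thousands of frequencies:

```python
import math

def authenticity(line, model, floor=1):
    """Multiply the model frequency of each adjacent letter pair,
    then take the root of degree 'number of pairs' of the product
    (i.e. the geometric mean) to normalise for line length."""
    chars = [c for c in line if c.isalpha()]
    pairs = ["".join(p) for p in zip(chars, chars[1:])]
    if not pairs:
        return 0.0
    # Sum of logs == log of the product; dividing by the pair count
    # and exponentiating takes the (length - 1)-th root.
    log_product = sum(math.log(model.get(p, floor)) for p in pairs)
    return math.exp(log_product / len(pairs))

model = {"he": 11460, "th": 9260, "er": 7089, "an": 6214}
print(authenticity("the", model))   # high: both pairs are common
print(authenticity("xqz", model))   # 1.0: only unseen pairs
```

A line built from common pairs scores in the thousands, while one made of unseen pairs collapses to the floor value.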

Using the model works as follows.

Now we can draw some conclusions: The higher the calculated number, the better the line in question fits into our model — and consequently, the greater the likelihood of it having been written by a human. If the text yields a high number, we can call it clean.

If the line in question contains a suspiciously large number of rare combinations (like wx, zg, yq, etc), it’s more likely malicious.

For the line we chose for analysis, we measure the likelihood (“authenticity”) in points, as follows:

Give a man a fire and he’s warm for the day. But set fire to him and he’s warm for the rest of his life — 1984 points
It is well known that a vital ingredient of success is not knowing that what you’re attempting can’t be done — 1601 points
The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it — 2460 points
DFgdgfkljhdfnmn vdfkjdfk kdfjkswjhwiuerwp2ijnsd,mfns sdlfkls wkjgwl — 16 points
reoigh dfjdkjfhgdjbgk nretSRGsgkjdxfhgkdjfg gkfdgkoi — 9 points
dfgldfkjgreiut rtyuiokjhg cvbnrtyu — 43 points

As you can see, clean lines score well over 1,000 points, while malicious ones couldn't scratch even 100 points. It seems our algorithm works as expected.

As for putting high and low scores in context, the best way is to delegate this work to the machine as well, and let it learn. To do this, we’ll submit a number of real, clean lines and calculate their authenticity, and then submit some malicious lines and repeat. Then we’ll calculate the baseline for evaluation. In our case, it is about 500 points.
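One simple way to derive such a baseline (a hypothetical sketch; the real procedure is surely more involved) is to place the decision boundary between the scores seen for clean and malicious training lines:

```python
def learn_threshold(clean_scores, malicious_scores):
    """Put the decision boundary midway between the lowest clean score
    and the highest malicious score seen during training."""
    return (min(clean_scores) + max(malicious_scores)) / 2

def classify(score, threshold):
    """Label a line by comparing its authenticity score to the baseline."""
    return "clean" if score >= threshold else "malicious"

# Scores from the six example lines above.
threshold = learn_threshold([1984, 1601, 2460], [16, 9, 43])
print(classify(2460, threshold))  # clean
print(classify(43, threshold))    # malicious
```

With only six training lines the midpoint lands higher than the article's ~500-point baseline, which presumably came from a much larger sample.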

In real life…

Let’s go over what we’ve just done.

1. We defined the features of clean lines (i.e., pairs of characters).

In real life, when developing a working antivirus, analysts also define features of files and other objects. By the way, their contributions are vital: It’s still a human task to define what features to evaluate in the analysis, and the researchers’ level of expertise and experience directly influences the quality of the features. For example, who said one needs to analyse characters in pairs and not in threes? Such hypothetical assumptions are also evaluated in antivirus labs. I should note here that we at Kaspersky Lab use machine learning to select the best and complementary features.

2. We used the defined indicators to build a mathematical model, which we made learn based on a set of examples.

Of course, in real life the models are a tad more complex. Currently, the best results come from an ensemble of decision trees built with the gradient boosting technique, but as we continue to strive for perfection, we cannot sit idle and simply accept today's best.

3. We used a simple mathematical model to calculate the authenticity rating.

To be honest, in real life, we do quite the opposite: We calculate the “malice” rating. That may not seem very different, but imagine how inauthentic a line in another language or alphabet would seem in our model. But it is unacceptable for an antivirus to provide false responses when checking a whole new class of files just because it does not know them yet.

An alternative to machine learning?

Some 20 years ago, when malware was less abundant, “gibberish” could be easily detected by signatures (distinctive fragments). In the examples above, the signatures might look like this:

DFgdgfkljhdfnmn vdfkjdfk kdfjkswjhwiuerwp2ijnsd,mfns sdlfkls wkjgwl
reoigh dfjdkjfhgdjbgk nretSRGsgkjdxfhgkdjfg gkfdgkoi

An antivirus program scanning the file and finding erwp2ij would reckon: "Aha, this is gibberish #17." On finding gkjdxfhg, it would recognise gibberish #139.
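Signature matching of this kind is just substring search; here is a minimal sketch (a hypothetical illustration, with the two fragment labels taken from the example above):

```python
# Hypothetical signature database: distinctive fragment -> detection label.
SIGNATURES = {
    "erwp2ij": "gibberish #17",
    "gkjdxfhg": "gibberish #139",
}

def scan(text, signatures=SIGNATURES):
    """Return the labels of every known signature found in the text."""
    return [label for fragment, label in signatures.items()
            if fragment in text]

print(scan("reoigh dfjdkjfhgdjbgk nretSRGsgkjdxfhgkdjfg gkfdgkoi"))
# ['gibberish #139']
```

The approach is fast and precise, but it only catches samples whose fragments are already in the database, which is exactly why it stopped scaling.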

Then, some 15 years ago, when the population of malware samples had grown significantly, "generic" detection took centre stage. A virus analyst defined rules that, when applied to meaningful text, looked something like this:

1. The length of a word should be 1 to 20 characters.

2. Capital letters and numbers are rarely placed in the middle of a word.

3. Vowels are relatively evenly mixed with consonants.

And so on. If a line does not comply with a number of these rules, it is detected as malicious.
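The three rules above can be sketched as a hand-written checker (a hypothetical illustration; real generic-detection rules were far more elaborate):

```python
import re

def looks_meaningful(line):
    """Flag a line as meaningful if most words pass the hand-written rules:
    word length 1-20, no capitals or digits mid-word, vowels mixed in."""
    words = re.findall(r"\S+", line)
    if not words:
        return False
    violations = 0
    for word in words:
        if not 1 <= len(word) <= 20:           # rule 1: plausible length
            violations += 1
        if re.search(r"\w[A-Z0-9]", word):     # rule 2: caps/digits mid-word
            violations += 1
        letters = re.sub(r"[^A-Za-z]", "", word)
        vowels = sum(c in "aeiouAEIOU" for c in letters)
        if letters and vowels / len(letters) < 0.2:  # rule 3: too few vowels
            violations += 1
    return violations < len(words) / 2

print(looks_meaningful("Give a man a fire and he's warm for the day"))  # True
print(looks_meaningful("DFgdgfkljhdfnmn vdfkjdfk sdlfkls wkjgwl"))      # False
```

The 0.2 vowel ratio and the "fewer violations than half the words" cutoff are assumed values chosen for the example, which illustrates the core weakness of the approach: every threshold had to be tuned by hand.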

In essence, the principle worked just the same, but in this case a set of rules, which analysts had to write manually, substituted for a mathematical model.

And then, some 10 years ago, when the number of malware samples grew to surpass any previously imagined levels, machine-learning algorithms slowly started to find their way into antivirus programs. At first, in terms of complexity they did not stretch too far beyond the primitive algorithm we described earlier as an example. But by then we were actively recruiting specialists and expanding our expertise. As a result, we have now attained the highest level of detection among existing antivirus solutions.

Today, no antivirus would work without machine learning. Comparing detection methods, machine learning would tie with some advanced techniques such as behavioural analysis. However, behavioural analysis does use machine learning! All in all, machine learning is essential for efficient protection. Period.


Machine learning has so many advantages — is it a cure-all? Well, not really. This method works efficiently if the aforementioned algorithm functions in the cloud or some kind of infrastructure that learns from analysing a huge number of both clean and malicious objects.

Also, it helps to have a team of experts to supervise this learning process and intervene every time their experience would make a difference.

In this case, drawbacks are minimised — down to, essentially, one drawback: the need for an expensive infrastructure solution and a highly paid team of experts.

But if someone wants to severely cut costs and use only the mathematical model, and only on the product side, things may go wrong.

1. False positives.

Machine-learning-based detection is always about finding a sweet spot between the level of detected objects and the level of false positives. Should we want to enable more detection, there would eventually be more false positives. With machine learning, they might emerge somewhere you never imagined or predicted. For example, the clean line “Visit Reykjavik” would be detected as malicious, getting only 101 points in our rating of authenticity. That’s why it’s essential for an antivirus lab to keep records of clean files to enable the model’s learning and testing.

2. Model bypass.

A malefactor might take such a product apart and see how it works. Criminals are human, which makes them more creative (if not smarter) than a machine, and they will adapt. For example, the following line is considered clean, even though its first part is clearly (to human eyes) malicious: "dgfkljhdfnmnvdfkHere's a whole bunch of good text thrown in to mislead the machine." However smart the algorithm, a smart human can always find a way to bypass it. That's why an antivirus lab needs a highly responsive infrastructure to react instantly to new threats.

3. Model update.

Describing the aforementioned algorithm, we mentioned that a model that learned from English texts won’t work for texts in other languages. From this perspective, malicious files (provided they are created by humans, who can think outside the box) are like a steadily evolving alphabet. The threat landscape is very volatile. Through long years of research, Kaspersky Lab has developed a balanced approach: We update our models step-by-step directly in our antivirus databases. This enables us to provide extra learning or even a complete change of the learning angle for a model, without interrupting its usual operations.


With considerable respect for machine learning and its huge importance in the cybersecurity world, we at Dynamic Vault Inc CyberSecurity Systems think that the best, fastest, and most cost-effective CyberSecurity approach is based on a multilevel paradigm: total protection delivered by best-of-breed programs cooperating together. That is the project of utilizing a multi-level intelligent CyberSecurity defense, starting from the lowly and simplest antivirus, moving to the vastly more complex anti-penetration software, to the boomerang weapons against the hackers, and of course to the Artificial Intelligence that keeps the system emergent…

We do that because CyberSecurity protection should be all-around and as close to perfect as possible, combining behavioural analysis, machine learning, and many other things.

But we’ll speak about those “many other things” next time we meet together…

As for the issue of Job Retention and even Job Creation from Artificial Intelligence, we have to accept the fact that AI will shift jobs, but it will also create many more jobs than it will displace…

[Image: People look at a RoboThespian humanoid robot at the Tami Intelligence Technology stall at the WRC 2016 World Robot Conference in Beijing, China, October 21, 2016.]

And here is this article that keeps the point alive about AI creating jobs:

In the latter half of the 1980s a debate ensued between two camps of economists roughly grouped around the views of Edward Prescott, on the one hand, and Lawrence Summers, on the other.

Prescott argued that by and large, the booms and busts of the economic cycle were due to “technological shocks”; and Summers dismissed the notion as speculation not supported by evidence.

Over the years, the ‘technological shock’ model of economic shifts (TS) has surfaced over and over again in many forms, rising to the occasion whenever the debate over cycles rears its head.

Today, TS has penetrated the discussion on the nexus between Artificial Intelligence (AI) and the employment situation in advanced economies, with some AI enthusiasts like former Googler Sebastian Thrun offering fodder to those economists who are pessimistic about the impact of technology on the job market.

The famous Oxford Martin study by Frey and Osborne in 2013 concluded that “[a]ccording to our estimates, about 47 percent of total US employment is at risk.” It did offer the somewhat elite-reassuring view that the highest paid jobs and those requiring the highest educational attainment (and the two categories are often conjoined) might be safe for a significant time. The question, of course, is: “for how much longer?”

To be blunt, trying to predict the future more than two decades at a stretch is more science fiction than anything else. This article will thus focus on the ‘near horizon’ rather than the ‘distant future’.

A full generation after the Prescott-Summers debate, the issue of ‘productivity’ remains central. In recent comments, Summers has pointed to an interesting anomaly: despite the significant withdrawal of many blue collar jobs from the US economy (and others like it), partly because of the march of automation, productivity growth has been unimpressive, or even anemic.

There are hints in the data, both formal and anecdotal, if one cares to look carefully, that while technology may be improving the quality of life on the whole, its aggregate effect on enterprise efficiency could be exaggerated. That is precisely the suspicion that led me to this subject and to this article.

The subconscious trigger, however, must have been comparing the efficient flow of Fortnum & Mason’s human-manned checkout-tills in London’s Bond Street, with the relative bumbling at Sainsbury’s robot-manned checkouts less than a mile away.

When I formed those impressions I was not aware of a damning 2015 University of Leicester study, which found that the robot-checkout contraptions could trigger everything from aggressive behaviour to increased shoplifting, and that they were actually losing the supermarkets significant amounts of money.

When they first launched, the pitch was that auto-checkouts would save shoppers 500,000 hours of unnecessary queuing time. The reality today is depressingly different.

Similar attempts by Australian mining giants to induce human redundancy using robots have been beset with glitches, leading to lost production and hasty retreats from the technology.

It is not surprising, then, that a careful look at the actual flow of R&D dollars into AI at many of the most tech-savvy companies reveals a less prominent role for "hard automation", defined roughly as "machine-induced human redundancy" (MIHRED), than is usually perceived to be the case.

Salesforce’s recently launched product, Einstein, focuses on helping salespeople write superior emails to targeted prospects. SAP’s HANA is integrating AI to help users better detect fraudulent transactions. Enlitic promises algorithms that help junior doctors read X-rays faster and more accurately, not ones that replace them. Affectiva wants to use its deep-learning kit to help empathy-challenged people become more emotively competent. The list goes on.

At play here are two interlocking principles: ‘digital bicephaly’ (dibicephaly) – literally a new ‘extended hemisphere’ of the brain, where accurate measurement, a behaviour largely alien to the human mind, can thrive on demand – and ‘cognitive exoskeletons’ (COGNEX) – a concept related to Flynn’s thesis of ‘new cognitive tools’ driving incremental increases in observed human IQ.

This is no Engelbart-and-Kurzweil-style ‘machine-human symbiosis’ utopia, however.


At the root of the cognex-dibicephaly vision of the future is, rather, a strong emphasis on the workplace and on the ‘mid-range’ scale of capabilities in both humans and machines (pseudo-AI). Two contexts converge on the same point.

Firstly, most people think of automation through the lens of assembly-line logic. In reality, ‘automation’ is a softer and more pervasive feature of all modern management. Every modern company in the world has been deploying more and more supply chain management (SCM), customer relationship management (CRM) and enterprise resource planning (ERP) systems in a bid to automate more and more functions. The chief driver has been improved human productivity, not human redundancy.

The problem, though, is that, as Denver-based Panorama Consulting has noted, only 12% of companies report full satisfaction with their automation programs.

Gartner has found that 75% of all ERP implementations fail. In fact, Thomas Wailgum (a top executive of the American SAP Users Group) once estimated that the chances of a successful ERP implementation may be closer to 7%.

Poor automation outcomes make experimenting with innovative business models harder and more prone to failure, and the single most cited cause is poor “personnel interfacing”. In every major lawsuit in the wake of a failed implementation, such as Bridgestone’s $600 million suit against IBM, personnel-automation incongruity rises to the top of the pile. It has been widely observed that attempts to circumvent rather than enhance human input typically constitute the key failure points.


Dr Churchill

On January 13th we are holding a meeting at the University of Washington, and you can participate by registering here:

Because we are looking for solutions that augment artificial intelligence with humanity, for the betterment of both.
The question, then, is not how to remove humans from the chain altogether, but how to embed them more seamlessly.

[Image: A robot on the automobile production line at the new Honda plant in Prachinburi, Thailand, May 12, 2016. REUTERS/Jorge Silva]

The second point is the issue of ‘unfilled jobs’.

America alone has nearly 5.8 million of them. George Washington University’s Tara Sinclair is the lead author of a recent report showing that a quarter of advertised jobs in the US, and about a fifth in other rich countries like Canada and Germany, were going unfilled.

The report correctly tied this mismatch between human skills and labour requirements to the sluggish growth in global productivity, and thereby cast a more interesting complexion on the issue of human redundancy and artificial intelligence, at least on the near-term horizon.

In the same way that personnel inadequacies continue to undermine efforts to automate the enterprise, skills imbalances inflate unemployment rates and exaggerate the effect of efficiency-inducing technology. Both dynamics are strongest not in the unskilled or the super-skilled segments (the tail ends) but in the ‘middle bulge’ of the employment curve.

It is reasonable to infer, given this background, that large-scale human redundancies caused by transhuman AI are fanciful, at least in the near-term horizon, given the actual performance of automation and the gaps in the enterprise today.

What is more likely is the proliferation of mid-tier AI systems transforming the capacity of mid-level skilled workers to better fill vacant jobs and to participate in human-critical automation of the enterprise, and in the search for novel business methods and models.

With superior virtual reality and machine-iteration systems, average food technologists can carry out a more varied range of biochemical explorations. Nurses can perform a wider range of imaging tests. Fashion design trainees can contribute more effectively to the fabric technology sourcing process.

And so on and so forth.

With improving personnel agility come more nimble business models and an expansion of the job market.

Add these prospects to the potential productivity lift and to the better synching of job openings with personnel availability, and a whole new vision emerges of what pro-human or cis-human AI might do for the job market – one starkly different from the dystopian prophecies tethered to the rise of trans-human AI.


And by the way, the AI fembots of our lives are already here, judging by the great specimen featured in our photograph today…

Mind you, these are learning machines, and you’ve got to teach them the tricks of the trade.

Enjoy and remember to always teach them to “obey” just like a good puppy should always do…
