ARTIFICIAL INTELLIGENCE - “when, how, where, …and of what consequence”

INTRODUCTION

Artificial intelligence (AI) has been defined variously, but perhaps the most cogent definition is “… the ability of a computer or synthetic entity controlled by a computer to perform tasks which would otherwise require human intelligence and oversight.”  Further refinement yields the term artificial general intelligence (AGI), and further still the notion of ‘the singularity’.  Perhaps the greatest hope --- and also the greatest fear --- about AI has to do with the singularity and what it bodes for us as a species.  The attainment of AGI may be thought of as the point at which a synthetic system becomes capable of human-level thinking; the singularity, in turn, marks the point at which such a system begins to surpass and outpace us altogether.  According to many experts, crossing that threshold also implies the dawn of actual machine consciousness.

Some context and background may be useful in digesting the scope of the paradigm shift that will almost certainly occur. 

The genus Homo is thought to have arisen on the plains of Africa as long as two million years ago.  Modern Homo sapiens possess an average cranial capacity of roughly 1,300 to 1,500 cc, and by some estimates the human brain performs on the order of 100 trillion calculations per second.

Today’s state-of-the-art exascale computers are capable of performing a mind-boggling quintillion calculations per second — that’s a 1 with 18 zeros after it (1,000,000,000,000,000,000), several orders of magnitude beyond the brain’s estimated throughput.  Yet even the most brilliant artificial minds today stumble over simple spatio-temporal tasks that any toddler can manage.  Here, the differentiating quality is “judgment”.  And it’s why the promise of self-driving cars remains perhaps ten years away, just as it did ten years ago and ten years before that.
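
As a rough, back-of-the-envelope illustration (purely for scale, treating the two figures cited above as ballpark estimates rather than measurements), the short Python sketch below compares the two throughputs:

    import math

    brain_ops_per_sec = 1e14       # ~100 trillion calculations/second (a common estimate)
    exascale_ops_per_sec = 1e18    # ~1 quintillion calculations/second (exascale class)

    ratio = exascale_ops_per_sec / brain_ops_per_sec
    print(f"Raw speed ratio: ~{ratio:,.0f}x")                # ~10,000x
    print(f"Orders of magnitude: ~{math.log10(ratio):.0f}")  # ~4

Raw speed, in other words, is not the bottleneck; judgment is.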

Still another crucial term of art in discussing AGI is the word ‘engram’.  Theorized as a unit of cognitive information imprinted on a physical substrate, engrams are believed to be the means by which memories are stored as biophysical or biochemical changes in the brain or other neural tissue, formed in direct response to external stimuli.

Here, it is crucial to one’s comprehension of AGI to recognize that there is almost certainly nothing unique about our cranial “wetware” that would preclude its replication in silico.  This means that thought, consciousness, self-awareness and other attributes previously thought to be uniquely human may soon be shared with our synthetic creations.

 

THE SNOWBALL EFFECT

From 2,000,000 to 1,000,000 years ago nothing much changed for our ancestors.  For spans of millennia they hunted on the African savannah, lived in small groups and mostly died before the age of thirty.  Then, something remarkable occurred.  Man tamed fire.  Fire permitted a new diet of cooked meat and other foodstuffs, significantly extending the range over which our ancestors could travel and subsist.  From 1,000,000 years ago until perhaps 15,000 years ago humanity existed in small groups or nomadic tribes.  With the advent of agriculture, small groups became big groups, big groups became towns, and towns became cities.  Agriculture slowly gave way to industry, and the industrial revolution heralded profound scientific advances, including perhaps the most significant of all, antimicrobial chemotherapy, in the late nineteenth and early twentieth centuries.

In the twentieth century mechanized industrialization began advancing at an accelerating pace.  In the 1940s the first general-purpose electronic digital computer, ENIAC, was conceived during WWII and completed shortly after the war’s end.  In 1949 Eckert and Mauchly’s BINAC followed, and in 1951 came UNIVAC I, the first commercially produced general-purpose computer in the United States.  Advancement continued throughout the 1950s, but progress was limited by the hardware of the day (vacuum tubes, ventilation blowers, cathode-ray tubes and the like), which constituted the state of the art at the time.  A typical 1955 computer often took up an entire floor of office space, yet paled in comparison to the computing power that would be available in a handheld calculator a decade and a half later.  Key to this paradigm shift was the development of the integrated circuit.

An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits fabricated on one small, flat piece of semiconductor material, usually silicon.  Large numbers of tiny MOSFETs (metal-oxide-semiconductor field-effect transistors) are integrated onto a single chip, yielding circuits orders of magnitude smaller, faster and less costly than the vacuum-tube monsters of yore.

In 1958, the first working integrated circuit was demonstrated at Texas Instruments.  By the early 1970s, Texas Instruments’ four-function handheld calculators were on the market; indeed, the Author, as a youngster, purchased one for the then-princely sum of $179.  Perhaps the most remarkable comparison point is this: today’s standard USB chargers, available everywhere for around $5, possess vastly more computational power than the Apollo Guidance Computer that in July 1969 guided Neil Armstrong and Buzz Aldrin from the Earth to the Moon and back.

From the mid-1960s onward it has been observed that the number of transistors in a dense integrated circuit doubles approximately every two years.  Known as Moore’s Law, after Intel co-founder Gordon Moore, this trend has produced an explosion in processing power such that today’s humble iPhone possesses more raw computing power than Deep Blue, the 1990s-era IBM supercomputer that became the first machine to defeat a reigning world chess champion in a match.
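
To get a feel for what “doubling every two years” compounds to, consider the following illustrative Python sketch (the starting count and time span are arbitrary, chosen only to show the compounding, not to reproduce any particular chip’s history):

    start_transistors = 2_300     # roughly the count in an early-1970s microprocessor
    doubling_period = 2           # years per doubling, per Moore's Law
    years = 50                    # span to project forward

    count = start_transistors * 2 ** (years / doubling_period)
    print(f"After {years} years: ~{count:,.0f} transistors")  # ~77 billion

Fifty years of steady doubling turns a few thousand transistors into tens of billions, the same compounding that shrank room-sized machines into pocket devices.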

 

WHAT HAPPENS NEXT

Here’s where things get downright mind-blowing.  A discontinuity in AI is said to have occurred when a particular technological advance pushes a rate-of-progress metric substantially beyond what extrapolation of past performance would predict.  Like a runaway freight train, or a snowball growing as it rolls.  Such discontinuities are commonly measured by asking how many years of past progress would have been needed to produce the same degree of advancement, had the historical rate of improvement held constant.
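
For readers who prefer numbers to metaphors, here is one simple, purely illustrative way such a comparison can be computed in Python (the growth rate and the size of the jump below are invented solely for the sake of the example):

    import math

    historical_growth_per_year = 1.4   # the metric historically improved ~1.4x per year (invented)
    observed_jump = 100.0              # a single advance improves it 100x at a stroke (invented)

    equivalent_years = math.log(observed_jump) / math.log(historical_growth_per_year)
    print(f"Equivalent to ~{equivalent_years:.0f} years at the old rate")  # ~14

A jump worth fourteen years of business-as-usual progress, arriving all at once, is the kind of event the term “discontinuity” is meant to capture.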

Well… okay, but so what?

At the present rate of technological evolution, many experts in the field expect AI to reach human-level intelligence by around 2060, and some predict it will happen as soon as the mid-2030s.  Here’s what that means.  Imagine a room filled with the best and brightest intellects who have ever lived --- a room full of the likes of Einstein, Isaac Newton, Steve Jobs and Elon Musk.  Now imagine these big brains put to work 24/7 on mankind’s most arduous technological conundrums, except that rather than operating at human-scale speed, the AGI system will be processing orders of magnitude more rapidly.

Accordingly, some have estimated that technical problems that would otherwise take human minds tens or even hundreds of years to work through could be solved by such ‘early’ super-AGI systems in days, or perhaps even hours.  But the snowball doesn’t stop there, because AGI doesn’t merely attain human-level intellect and then halt.  It continues evolving and growing, not just in breadth and scope but in its rate of progress.  Now imagine twenty thousand years of human-level progress occurring in two or three months.  Where today’s state-of-the-art machines already operate blazingly fast, it is predicted that AGI-level systems will move into multi-dimensional spatio-temporal constructs via algorithms and efficiencies that are all but inconceivable today.
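
To put that claim in plain arithmetic (a purely illustrative calculation based on the figures in the paragraph above):

    human_years_of_progress = 20_000   # "twenty thousand years of human-level progress"
    elapsed_months = 2.5               # midpoint of "two or three months"

    speedup = human_years_of_progress * 12 / elapsed_months
    print(f"Implied speedup: ~{speedup:,.0f}x")  # ~96,000x, roughly five orders of magnitude

In other words, compressing twenty millennia of progress into a single season implies working roughly a hundred thousand times faster than humanity ever has.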

If you’ve come this far in our journey, you will appreciate that, by comparison, future AGI systems may well be as evolutionarily superior to present human intellect as we are to a fruit fly or even an amoeba.  Not only will such synthetic intellectual powerhouses transcend our ability to keep up; they will soon transcend our ability to grasp how they think, what they think, and, most worryingly, what they have planned for us.

 

IS HUMANITY DOOMED?

Perhaps… but perhaps not.  Anyone who is even vaguely paying attention readily grasps that many previously intractable ailments are becoming increasingly amenable to correction and even cure.  Soon, the thinking goes, type 2 diabetes may become a thing of the past thanks to gene editing, tissue cloning and other twenty-first-century advances, and the same is true for numerous other illnesses and diseases.  Scientists working at the frontier of anti-aging research estimate that perhaps as few as 20 or 30 genes control the senescence that leads to death.  Tools such as CRISPR-Cas9 may eventually be deployed to modulate these genes, thereby giving us decades or even centuries more of vibrant life.

Here too, the integration of implanted medical devices represents a step in the direction of true cyborg-genesis.  Artificial pacemakers are already in widespread use, saving the lives of untold millions who would otherwise surely succumb.

Thus, given the rapid rate at which AI is evolving, it seems we may soon face a series of existential choices.  Do we remain Homo sapiens?  Or are we destined to evolve into a seamless, AI-enhanced Homo synthetica?  The Author believes that if the choice is between extinction and rapid evolution, humanity will overwhelmingly choose the latter.

Surely, in such a scenario much will be gained.  One wonders, though, how much of our basic humanity may likewise be lost…