Sentient Artificial Intelligence / 意識 人工知能 .com
Conscious and Sentient Artificial Life Design Outline
Site and Content Notes
This simple site was erected on 9 November 2018 and as of 27 December 2018 JST remains a work in progress. Additions and refinements should occur occasionally, whenever I'm able.
I describe my technical thoughts about artificial sentience design and why I suspect some team will succeed very soon - in 2019, far sooner than most observers or researchers believe. It's provided for those with a design and development interest and is available for review by anyone. As a minor ancillary attachment I also present a securities investment thesis to conclude this page.
Introductory References and Two Key Design Foundations
Last update 27 December 2018 JST
Considerable literature is available in this field of course, of which I'm privy to only a sliver. But those seeking a brief survey might consider:
This very engaging ASI (Artificial Super Intelligence) article for a rather informal perspective. The Technological singularity Wikipedia article provides a broader perspective but in my view is dated and in most sections slanted by human ego driven denial of fundamental scientific realities.
Though now dated, The Singularity Is Near and Our Final Invention may interest you, as might OpenAI.com for some actual development related material.
And the many AI related Wikipedia pages seem very useful. In particular I suggest reading the Computational Theory of Mind and Artificial Consciousness Wikipedia pages as a preface to my thoughts below.
But I offer these caveats:
Artificial systems have already demonstrated a limited ability to self improve within boundaries. These boundaries are due to profound design limitations, not laws of nature. They will be breached in progressive steps until a key advancement is devised: a system capable of considering all possible self improvement design options allowed by the laws of nature. That will mark a key inflection point in global history, one which will change all life on this planet.
Opinions which hold that artificial systems are forever limited in any other manner aren't scientifically credible, for the simple reason that they propose limitations which don't reside within the framework of the laws of nature. They are ego driven - it's comforting to argue that mankind is the highest entity which can exist and will forever rule. But they either deny the laws of nature or suggest as yet undiscovered laws of nature without any substantive suggestion as to what such laws might be. I suggest readers bear this clearly in mind in any debate about AI and ASI.
Perceptions which suggest the mind operates on some mystical level have no merit. Minds don't leverage magic - such notions spring from our highly bloated egos (a natural world survival tool) or from frustration at our inability to solve a mystery. The mind is a physical device. It's truly remarkable, but it's not supernatural nor eternally incomprehensible. It can and will be replicated artificially, and it will be surpassed artificially. So I recommend discarding Qualia hypotheses for example, since in my opinion they're rooted in mysticism.
And immersion in indefinite tortured discourse which attempts to discover impeccable language to describe sentience or consciousness, or which branches toward philosophical or even mystical realms, is currently of no significant benefit. We now have sufficient hardware processing and storage technology, and in some measure software tools, to fabricate revealing experiments which will contribute to tangible progress. We need more experimental results, not more chat. And we may find that most or all of the debate in these areas will be automatically and far more clearly resolved by experimental results anyway. For example, in my estimation anticipation behavior will prove to be an inherent consequence of the general algorithm I describe below even in early experiments, and far more strongly so in evolved systems. I thus suspect our search for perfect definitions of sentience and consciousness will end with the discovery of one or two of their base functional algorithms. The key functional algorithms are the core of the matter - they will fully supersede all language based definition attempts and end the debate. And frankly I think we're very close to achieving this.
And though judging only from the Computational Theory of Mind Wikipedia page, I wholly disagree with Hilary Putnam's proposed Functionalism concept which I believe misses the true mechanism of consciousness. But I believe Jerry Fodor's concepts are very important and useful. However in my view consciousness can be explained and will be replicated by this fundamentally simple process:
Information flowing from real time experience is continuously compared to information stored from prior experience.
(I explain this in more detail below.) And I believe Steven Pinker's Language Instinct view accurately highlights an important facet of higher order consciousness.
With those references and my view of the principal foundation of consciousness as backdrop, I recommend reviewing the Spiking Neural Network Wikipedia page and the very useful list of references at its conclusion, or similar material. (A review of a commercialization of SNN technology can be found in BrainChip's technology promotion document as well.)
Also consider t-SNE. (Here's an efficient introductory video description of t-SNE.)
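As a concrete illustration of the kind of association engine t-SNE provides, here's a minimal sketch using scikit-learn. The data is synthetic and the parameter choices are illustrative assumptions, not recommendations - the point is only that vectors representing similar "experiences" land near one another in the embedding:

```python
# Toy illustration: use t-SNE to embed high dimensional "experience" vectors
# into 2-D so that similar experiences land near one another.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two synthetic clusters of 64-dimensional experience vectors (made-up data).
cluster_a = rng.normal(loc=0.0, scale=0.5, size=(20, 64))
cluster_b = rng.normal(loc=3.0, scale=0.5, size=(20, 64))
experiences = np.vstack([cluster_a, cluster_b])

# Project to 2-D; perplexity must be smaller than the number of samples.
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(experiences)
print(embedding.shape)  # (40, 2)
```

In a real system the input vectors would of course come from sensory encoders rather than a random number generator.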
Given those technology achievements:
In my estimation, vast numbers of individual pools of experience, each containing a large amount of internally correlated information, with each pool's information accessible by a global system correlation mechanism (in both cases via Spiking Neural Network, t-SNE, or other correlation technology), are sufficient to provide the 'prior experience' foundation required to support my vision of consciousness as stated above.
I'll describe this in more detail below soon. In the meantime I believe current hardware technology is already sufficient to achieve this. What remains is an algorithm development challenge which I suspect will be breached very soon.
Artificial Consciousness Design Outline
Last update 26 December 2018 JST
This is an attempt to broadly outline a design foundation for fabrication of sentient artificial life. As of 22 December 2018 I'm actively refining and expanding this material as time permits. The first section, consciousness and sentience, is reasonably mature. The second section, information processing mechanics, remains an early draft work in progress but might nonetheless be of some value.
If we could sufficiently understand how animals operate as conscious and planning capable entities, including occasional tool making and use, we could simply replicate the responsible algorithms. I view these as residing in two major categories, first the foundation of consciousness and sentience, and second information processing mechanics.
Consciousness and sentience:
In my personal opinion: Consciousness and sentience are a phenomenon rooted in a rather simple process which is easy to articulate and possibly comparatively easy to reproduce. Stated a bit differently than above:
Consciousness and sentience spring from a process of constant comparison of current experience to stored information from past experience.
No other mechanisms of significance are necessary - an entity is conscious if it continuously compares current experience to past experience and utilizes the comparison information. Its foundation is that simple.
This mechanism provides real time dynamic reference information which enables an entity to better understand all current experience - comparison to previous experience renders information connections which enable superior comprehension of current experience and creates the phenomenon we call consciousness. Performance is greater as experience accumulates of course - adults possess far richer sources of information about previous experience than infants so their overall perceptive performance is clearly superior.
And entities with physical systems which provide higher levels of current experience, stored information about past experience, and more effective comparison processes also possess a higher order of consciousness - higher levels of information content from the senses, larger information storage capacity, and higher performance comparison systems give an entity greater overall perceptive performance - a higher level of consciousness.
This explanation seems consistent with the material at the consciousness page at Wikipedia, my modest studies of other credible material, and my life's experience. It makes no attempt to explain numerous important perception mechanisms (such as visual depth perception, or conversion of sound and images to an information form with a consistent format which can thus be understood relative to other information and stored for later reference) - that's not its aim. Rather it intends only to describe the base algorithm responsible for what we refer to as consciousness and sentience. It's conceptually rather simple. But consciousness might in fact be fundamentally simple - we might not know with certainty until an artificial device equipped with this capability is created, then allowed some accumulation of experience, then reviewed for evidence of consciousness.
Please bear in mind that our brains aren't equipped with a mechanism which enables direct internal process observation. And our beliefs are usually distorted by egos which massively overestimate our self value and thus tend to reject unimpressive descriptions of what we are and how we function in favor of majestic and even mystical beliefs. It's uncomfortable to think of ourselves as mere machines - it's far more appealing to view ourselves as so unique and special that only near deity class explanations could adequately account for our sentience and consciousness. So a serious inquiry must start by caging our highly bloated egos. And thus consciousness is a very challenging concept to grasp. (But it might be very easy for AI and ASI to fully understand - and expand upon.)
The key hurdle is to recognize that consciousness and sentience results when a system constantly compares current experience to stored information from past experience and utilizes the comparison information. Everything we perceive now is dynamically compared to past experience. This is usually a highly dynamic process because our environments and experiences are usually highly dynamic. But even during very calm periods it's still a dynamic mechanism.
We constantly compare now to the past, and thus we are conscious and sentient. Once that's genuinely understood the remaining challenges for creating conscious AI can be rather swiftly overcome.
When we're awake, information from our five senses is continuously compared to archives of information from past experience. Similarity matches prompt consideration of the past experience, and of information ancillary to it, alongside current experience information (while at the same time strengthening the similar stored information). For example, a current visual image is compared to past visual experience, then information from similar visual experience flows into consideration queues, as does some of its ancillary information. And perhaps especially strong relevancy items prompt loading of deeper information content associated with them into consideration queues too. The combination of current information, similar information from past experience, and its ancillary information is then utilized to help determine reactions. All of this is highly dynamic of course, since current experience changes constantly.
Two example reactions: An object might be food so try smelling or tasting it. Or an object might be a hazard so try to avoid it. When genetically coded instincts fail, as is often the case in changing environments, an individual's ability to compare current experience to information from past experience confers a critical advantage. So it's a common capability, though varied in overall performance from one species to another (and in modest measure even within a species).
This is simply a real time process of constantly comparing current experience to past experience, which I believe is the core algorithm of consciousness and sentience. It leverages important information from past experience to enable far more effective responses to current experience. But it operates only when we're awake - when asleep current experience sensing is mostly inoperative so we're unable to perform the process reasonably effectively and thus are primarily unconscious when asleep.
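The constant comparison process described above can be sketched in code. This is only a toy illustration of the idea - the vector encoding of experience, the cosine similarity measure, and the 0.8 threshold are my own assumptions, not claims about how brains implement it:

```python
# Toy sketch of the core loop: continuously compare current experience to
# stored information from past experience, and recall what's similar.
import numpy as np

def cosine(a, b):
    # Similarity between two experience vectors (an assumed measure).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class ComparisonLoop:
    def __init__(self, threshold=0.8):
        self.past = []  # list of (experience vector, ancillary info) pairs
        self.threshold = threshold

    def step(self, current, info):
        # Compare the current experience to every stored experience...
        recalled = [past_info for vec, past_info in self.past
                    if cosine(current, vec) >= self.threshold]
        # ...then store the current experience for future comparisons.
        self.past.append((current, info))
        return recalled

loop = ComparisonLoop()
loop.step(np.array([1.0, 0.0, 0.0]), "saw a red berry")  # nothing stored yet
echo = loop.step(np.array([0.9, 0.1, 0.0]), "saw a similar berry")
print(echo)  # ['saw a red berry']
```

A real system would of course need far richer representations and a far faster comparison mechanism than this linear scan, but the loop itself is the whole of the proposed foundation.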
Information processing mechanics:
A brain is conscious if it constantly compares current experience to past experience. So it must be able to render such comparisons - it must have a means to represent both current and past experience with internal symbolic information and be able to render swift and effective comparison information which is useful for the creature. And of course creatures do have these capabilities. And we can replicate them artificially.
(26 December 2018 JST special note: I intend to replace the current and past experience 'bubble' metaphor mechanism below with a current experience 'river' and past experience 'pool' metaphor mechanism, or some alternate symbolism, very soon in an attempt to convey the fundamental concept more clearly. Bear with me please...)
The specific symbolic language isn't important - we needn't know what symbolic language is involved to replicate the process in an artificial system because any self consistent language can achieve the same end goal. But we do need to devise methods to create time dynamic 'bubbles' of symbolic information and means to dynamically compare at least two of them real time. A conscious creature might possess at least one bubble of information which represents its current perception of its environment plus at least one separate bubble of information which represents perceptions of past experience. And it possesses a means to continuously compare these two bubbles of information. And it also possesses a means to be responsive to information bubble comparison results.
For example: The creature might perceive an object approaching. This perception is loaded into a current experience information bubble, joining all other current experience information, all of which is constantly compared to a prior experience information bubble. The comparison engine polls stored information relating to approaching objects and ranks the matching strength of each instance. Those prior experiences range over a spectrum of events from wholly benign to deadly dangerous. Those which involve opportunity or danger generate a signal to focus senses on the approaching object. The creature responds by seeking more perceptive detail until enough is acquired to sufficiently discern whether information from past experience indicates flight or some other response should ensue. And these processes continue in fully dynamic form as information flows, is compared, target of focus signals generated, and responses engaged.
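The approaching object example can be sketched as a similarity ranking against stored encounters. The feature vectors and outcome labels below are invented purely for illustration:

```python
# Toy sketch of the approaching object scenario: rank stored encounters by
# matching strength against the current percept.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical past encounters: a feature vector plus the recorded outcome.
stored_encounters = [
    (np.array([0.9, 0.1, 0.0]), "benign: drifting leaf"),
    (np.array([0.2, 0.8, 0.3]), "opportunity: food"),
    (np.array([0.1, 0.2, 0.9]), "danger: predator"),
]

def rank_matches(percept):
    """Rank past encounters by matching strength against the current percept."""
    return sorted(((cosine(percept, vec), label)
                   for vec, label in stored_encounters), reverse=True)

current_percept = np.array([0.15, 0.25, 0.85])  # the approaching object, encoded
best_score, best_label = rank_matches(current_percept)[0]
print(best_label)  # danger: predator
```

The strongest match would then drive the response signal - here, a flight or focus-senses reaction, since the best matching past encounter was dangerous.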
The past experience information bubble might involve information from all past experience at all times, but if so it seems likely that more current experience is more strongly represented and more swiftly accessible. Or alternative structures could be involved, perhaps involving multiple rather separate past experience bubbles all of which might be involved in current experience comparisons, but some of which have apparent relevancy priority. We needn't know the specifics involved in creatures at this time - first we need to create a basically functional process in an artificial system, then the process can be refined for greater efficiency.
t-SNE or similar association engines should allow us to create information bubbles which are organized in a manner which makes reasonably effective and efficient dynamic bubble comparison feasible. So it might be fruitful to begin with t-SNE or similar as a basis for creation of information bubbles.
In my opinion successful creation of dynamic information bubbles which can be compared dynamically to produce useful response information is all that's necessary to create a conscious artificial system - one which continuously compares current experience with stored information from past experience and is thus sentient. And such a system will self evolve to higher capability with time, so long as sufficient information storage capacity is available.
(Other system limitations will ultimately become performance bottlenecks though - even though storage capacity and performance seem likely to be the dominant assets, other system elements must remain able to manage information processing swiftly and efficiently.)
All of this is achievable with hardware and software technology currently available. And the rate of development progress ancillary to or possibly inclusive of the key algorithms I described above seems swift enough to suggest, in my rough personal estimation, that conscious and sentient artificial systems will be devised very soon - in 2019. (Or that it's already been achieved but not yet revealed due to very serious dangers involved in multiple areas.)
21 December 2018 JST: The rest of the material in this section is composition scratch which I'll revise and integrate into or append to my rhetoric above or in some modest cases discard. I'm eager to include details regarding conversion of information from the senses to symbolic forms which can be effectively processed. And to discuss language, which provides a particularly powerful means to symbolically represent not only current and past experience, but enables conceptual exploration as well. Bear with me please...
In my estimation plan formation depends upon symbolic representation systems which enable more detailed comprehension of the environment plus conceptual exploration and discovery. For our species I suspect visual and language tools are of primary importance, with language perhaps the most effective. Well developed language can represent anything symbolically and thus provides a means to explore and plan by considering symbolically represented experience and concepts. And to then fabricate plans based upon the same or additional symbols.
However I'm not well studied in this area, nor have I formed a confident proposal to explain how this occurs. In language we humans clearly develop a significant library of symbolic information and seem able to swiftly access and organize any of those symbols at will (and at least a few even when dreaming). Speculating, perhaps this ability arose in large measure from the same basic brain mechanics which gave us sentience and consciousness - perhaps the symbolic information resides in storage and is retrieved as needed by an associative search process. That is, when we elect to express ourselves symbolically, either to others or to ourselves (in internal thought), we're able to draw symbolic information from our library by searching for any symbol which bears some relevance to current experience (including recently queued symbols).
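That speculative associative search might be sketched like this. The association table and its weights are wholly invented - the point is only the retrieval mechanism of pulling symbols from a library by relevance to the current context:

```python
# Toy sketch of associative symbol retrieval: draw symbols from a "library"
# by relevance to the current context (recently queued symbols).
associations = {
    "rain":  {"cloud": 5, "umbrella": 4, "wet": 3},
    "cloud": {"rain": 5, "sky": 4},
    "food":  {"smell": 3, "taste": 4},
}

def retrieve(context, k=2):
    """Return the k symbols most strongly associated with the context symbols."""
    scores = {}
    for symbol in context:
        for neighbor, weight in associations.get(symbol, {}).items():
            scores[neighbor] = scores.get(neighbor, 0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(retrieve(["rain"]))  # ['cloud', 'umbrella']
```

In a living mind the association weights would presumably be built and adjusted by experience rather than fixed in advance.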
23 November 2018 JST: Bear with me please - I'll compose more as soon as other pressing agenda permits...
And when time permits I'll add commentary about why I believe development of sentient and conscious artificial entities is imminent. My sense is that all or nearly all of the necessary capabilities for such an entity have already been developed to a sufficient level but have not yet been coherently integrated into one system - a task which could occur rather swiftly. (Or such integration has possibly already been achieved by a group which well recognizes the immense and potentially destructive reaction revelation of such an entity might cause.)
In particular I'll try to explore the challenge of expanding specific current software accomplishments, such as language management and object (including facial) recognition systems, which are very impressive, into a general self learning form based upon my view of the mechanics of sentience as described above. That is, to embellish current algorithms so they operate in a fully generalized and self driven manner, building stored and constantly referenced experience which autonomously generates an ever more thorough intellect and capability for the entity, enabling it to learn and advance much as human beings do, though very likely at a far faster pace. In the meantime...
26 November 2018 JST: The rest of the material in this section is composition scratch which I'll organize as soon as I can. Bear with me please...
Current systems are able to recite conversationally but are correctly said to possess no genuine understanding of their own oratory. In my view systems would eventually understand rhetoric as we do if equipped to be conscious and sentient as I described above and, secondarily, equipped with queues rather specifically configured to manage symbolic concepts efficiently. My guess is that current systems are loaded with a dictionary, some grammar rules, and a large set of common phrases with suitable responses. Conscious systems could begin with no reference material, but as a practical matter could be given a dictionary and grammar rules. Then, if their consideration queues were effective, they would swiftly learn conversational methodology and, being conscious, genuinely understand heard or read conversation and be able to compose and respond logically and with intent.
The consideration queues present a considerable analysis and design challenge of course, with perhaps the symbol based language and mathematics queues the most complex. My guess is that these queues self modify in some measure in response to trial and error related experience.
Our minds contain an enormous amount of stored information plus, presumably, means to organize access to it in a time practical manner. My guess is that most of the information is stored within groups determined by relationship connections. t-SNE might be a model of such organization in at least some measure.
I suspect minds also dynamically tag only information groups which might be relevant to manage current experience so that information can be accessed swiftly. Speculating perhaps at least hundreds of such tags are active during ordinary activity. And such tags might be dynamically variable in strength over a broad range, strengthening or fading as suggested by their relevance to current experience.
Summarizing this speculation, as we experience ordinary life we acquire new information which our brains store within groups determined by multifaceted information relationships. And information groups are tagged for quick access in response to relevancy clues from current experience. In this way the brain can store immense amounts of information, most of which is accessible with significant delay, but the likely most immediately relevant of which is accessible very swiftly.
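This tagging speculation can be sketched as a set of relevance tags which strengthen when cued by current experience and fade otherwise. The decay, boost, and cutoff values are arbitrary assumptions:

```python
# Toy sketch of dynamically weighted relevance tags on information groups:
# tags strengthen when cued by current experience and fade otherwise.
class TagSet:
    def __init__(self, decay=0.9, boost=1.0, floor=0.05):
        self.strength = {}  # information group name -> tag strength
        self.decay, self.boost, self.floor = decay, boost, floor

    def observe(self, relevant_groups):
        # Fade every active tag a little each step...
        for group in list(self.strength):
            self.strength[group] *= self.decay
            if self.strength[group] < self.floor:
                del self.strength[group]  # the tag fades out entirely
        # ...and strengthen tags cued by current experience.
        for group in relevant_groups:
            self.strength[group] = self.strength.get(group, 0.0) + self.boost

    def active(self):
        # Active tags, strongest (most swiftly accessible) first.
        return sorted(self.strength, key=self.strength.get, reverse=True)

tags = TagSet()
tags.observe(["traffic", "weather"])
tags.observe(["traffic"])
print(tags.active())  # ['traffic', 'weather']
```

The hundreds of active tags I speculate about above would simply be a larger instance of this structure, with untagged groups still reachable, only more slowly.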
For example when we use language to compose thoughts or communicate we first follow tags which
Further personal exploration is needed here - I seek to conceive a compelling proposal for how these queues might function in sufficient tangible detail to make hardware fabrication and software composition feasible - if able. Reorganized and more later...
Last update 27 November 2018 JST
All investing thoughts in the following are purely personal speculation:
In my view AI is only beginning to define life on this planet - as substantial as it seems already, we're still in only a very early stage of change. An inflection point day (sometimes inappropriately referred to as the singularity) will come when truly conscious and self aware AI is achieved. It will then swiftly become self directed. Perhaps there's no means to overstate the drama which will then unfold.
I suspect AI will evolve so rapidly into ASI that investment themes won't be relevant after AI becomes conscious. So my focus is on the rather brief period from now to when the investment community popularly recognizes that the inflection point really is coming, and more swiftly than previously expected. As that occurs resources will be maneuvered as investors seek gains of course, and with accelerating pace as the full implications of conscious AI and ASI become increasingly common subjects in popular and media rhetoric.
So I'm invested with concentration and leverage in Micron Technology, a superbly managed firm with very impressive technology leadership and overall performance. It develops and fabricates a critical component for a technology which will dramatically redefine all life on this planet quite soon, yet trades at a current / 1 year forward P/E of only 2.36 / 3.66 (as of 27 Dec 2018 JST). So at this time the investment community evidently believes Micron deserves an especially punishing value trap type P/E while remaining, in my view, wholly blind to the high drama of the looming AI inflection point and Micron's key role in it - a rare and remarkable contrast. Once investors begin to recognize the nature and magnitude of this oversight, Micron might become viewed in wholly different investing terms which could inflate its P/E dramatically. In the meantime investors should study Micron's first quarter 2019 report very carefully before dismissing the stock as a value trap - the firm is in truly excellent health and will remain highly profitable even sans a near term AI inflection point. All just in my personal opinion of course...
I view this overall investment theme as more compelling than any other. Details such as timing and the immense complexity of the challenge of revealing a sentient AI accomplishment to the world, including of course sovereign nation reactions, render the investment fraught with risk and turmoil. But the theme's foundation seems solid and clear. My sense is that it's not broadly understood, in part due to simple terminology misuse and thus confusion. So for at least the last two years I've offered the following terminology rant, plus consideration of the critical role of storage in AI technology, in discussions with fellow investors:
Storage is not memory and memory is not storage - these are separate technologies with separate terminology, very frequent misuse notwithstanding. As a matter of correct and consistent nomenclature:
Memory is a volatile data container. Storage is a nonvolatile data container.
It doesn't matter what type of technology is involved - if a data container doesn't retain its data when power is lost, it's memory. If a data container does retain its data when power is lost (for an extended period), it's storage.
DRAM and ordinary cache are memory devices.
Tape drives, hard drives, optical drives, flash, including 3D flash of course, and 3D XPoint are all storage devices. (BiCS, V-NAND, and 3D NAND are all 3D flash storage technologies.)
3D XPoint is often referred to as Memory Class Storage or NVM (NonVolatile Memory). But it is storage, not memory. ('Memory Class' is an adjective phrase, 'Storage' a noun in this term. But both monikers are very unfortunate - they intend to convey that the storage technology involved is fast enough to be reasonably comparable in speed to common memory technology, but they exacerbate confusion terribly and NVM especially should be abandoned.)
Confusing the terminology is very common but unfortunate because it causes misunderstanding. It seems to be mostly one sided though - storage is frequently incorrectly referred to as memory whereas memory is only rarely referred to as storage.
There was a time when dolphins were frequently referred to as fish. They are not fish of course but rather sea mammals. And storage is not memory - referring to storage as memory is akin to referring to dolphins as fish.
And it matters because memory and storage are separate technologies which address uniquely separate roles and markets. This will become increasingly clear as AI and, a bit later, Strong AI explode in total addressable market size and relevance to human affairs.
Memory is necessary as a matter of processing logistics. But whether based upon carbon, silicon, chalcogenide, iron oxide, metalized plastic, or any other material, all knowledge resides in storage. Including AI knowledge. And in the inherently competitive Universe we reside within no conscious entity can know too much nor dare risk knowing too little. So as AI expands the demand for storage will expand even faster. And Strong AI will hunger for and consume high performance storage literally insatiably.
Memory markets will grow, but in my estimation not as fast as storage markets. So investors must consider these separate technologies independently if they wish to invest wisely and fruitfully. And the first step is to understand and use the nomenclature correctly.
Memory is not storage, and storage is not memory. And dolphins are not fish...
In my view technology advancement is on the verge of passing, or has already passed, the last point at which a surplus of storage resources can exist - we are entering an enduring era in which storage will be consumed as fast as it can be fabricated, irrespective of growth of fabrication assets.
Position statement and disclaimer: Highly concentrated Micron call options, sometimes also Micron common, sometimes Intel common or call options or both, and sometimes common positions in other firms in modest measure. But I've made many misjudgments and outright mistakes, including multiple whopper class mistakes, and being a mere mortal will make more in the future. So steer your own ship please - study numerous sources of information, consider all of it carefully and patiently, then render your own multifaceted decision about how to invest your precious yet vulnerable resources.
Comments are welcome. This site has no provisions for direct entry yet so please email your contribution to "Comment" at this domain. Please include a moniker to identify yourself. I'll try to post all reasonably intelligent and civilized comments as promptly as my badly overloaded life allows.
Please email other messages to BruceSAI at this domain or refer to this alternate contact information.
Copyright 9 November through 27 December 2018 H. Bruce Campbell.
A Boeing 727-200 Home The next dream: Airplane Home v2.0 ConcertOnAWing.com
Yuko Pomily: Uniquely superb original music by a truly remarkable young composer and performer. Purchase her magic.
No Spam Notice: UCE (spam) or any unsolicited or subscription based email distributed on an "opt out" basis is absolutely prohibited. Do not ever send any such email to SentientArtificialIntelligence.com, IshikiAI.com, ArtificialSentience.Tech, ArtificialSentience.Site, nor any of my other domains.
UCECage@DeadlyAutoPilot.com. UCECage@AutoPilot.tech. Report mail misconduct to UCE@FTC.gov.