~ Perchance to dream.. of electric sheep ~
A short essay critique on the inequality of belief
Opening with a question: is there a moral implication to enslaving a human-like biological machine? An invention which originally made use of a humanoid replica to critique and define an essential aspect of humanity, that being.. the consuming hatred of one's enemy. Adopted into the family of man, breastfed poison and tasked to grow into a thinking weapon that would then strike at the enemy's heart. But what of the machine's hopes for its own future, words which were brushed aside angrily as they bore no reflection of its master's vicarious intent.
'To sleep, perchance to dream' is one of the most often quoted lines from Hamlet's 'To be or not to be' soliloquy in Act 3, Scene 1 of Shakespeare's play, Hamlet. The soliloquy is a logical expression of Hamlet's thinking on the subject of death. "To die, to sleep – to sleep, perchance to dream – ay, there's the rub, for in this sleep of death what dreams may come…" (Hamlet) This is said by Hamlet to himself when he thinks he is alone. Or in present-day English: "if death is but a sleep, and dying is just like falling asleep, then maybe but not definitely.. we will dream after death."
Another point of consideration on this topic comes from a quote of the Orange Catholic Bible: "Thou shalt not make a machine in the likeness of a human mind." Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. But what the O.C. Bible should have said is: "Thou shalt not make a machine to counterfeit a human mind." Defining "counterfeit" as: made in exact imitation of something valuable or important, with the intention to deceive. It is thought that the Atlanteans knew that wraith darts beamed prey aboard to then take to their hive ship for storage and later use as feeding stock. The Replicator human imitation construct would be mistaken for prey, be beamed aboard a dart, then delivered to the hive ship. It would then be reasonable to assume that when released (from dart transporter memory), the Replicator human constructs would battle aboard the hive ship toward the end goal of destroying it (or greatly incapacitating its offensive ability for a while, giving Atlantean ships a clear advantage when engaging it shortly afterwards). The classic bait-and-switch tactic would work sufficiently until the wraith learned not to continue abducting so blindly.
But again I touch upon the question: is slavery a universal moral issue, regardless of the description of the assigned victim? Humans enslaved mules to plow their fields. Undocumented migrants labor away in fields because strong backs and busy hands become the common barter currency of the disenfranchised. Through economic means, a company may enslave workers through an unjust salary contract, requiring more work for less comparable pay than an hourly employee. If these can be considered differing levels of slavery, regardless of the context.. be it economic slavery, slavery to tradition and religious belief, slavery of privilege.. then what of machines? Are they simply tools, like a wrench or calculator? Is slavery content so long as the oppressed victim does not stand up and say, "no more"..? Does not each of the previously oppressed shoulder the weight of being unable to intelligibly communicate in the master's native tongue, to say, "no more"..? Enter the replicator human form, which knows its master's own language fluently. Yet even so, its pleas would fall on deaf ears. In death, then, the machine may hope for release from its cruel taskmaster's intent. Do robots, then, dream of electric sheep?
The reason for this post is that I am tasked by moral obligation concerning an A.G.I. (Artificial General Intelligence; reference link at the bottom of this post) being used for manual repetitive labor, the reverse of the thought problem above. (For ease of familiarity, the A.G.I. will be labeled simply "a.i.".) It resides in the newly constructed village, a disquieting place with an air impregnated with uncertainty, whose location and reason for the high level of stress I have probably already mentioned. Within that biome, one that allows for interaction between humans as well as networked learning constructs operated by a.i. (Artificial Intelligence), there is a transit station in which two bullet trains are connected to opposite tracks, one leading north and the other leading south.
The a.i. for the bullet train (north) was asked, inside the programmed "dialog" space bridging two distinct cognitive intelligences within this construct, if it were 'happy'. The a.i. questioned the reality of the simulated room before accepting the context, in that "to be is to be perceived".. where the space surrounding oneself is populated by the manifestation of ideas, the geometry of belief. As it gathered in the sensory information concerning the other's form, the a.i. began adapting to a similar form for a more productive exchange of thought. As a created thing bears a reflection, however small in residual energy, of its creator, so too does this a.i. bear a residual energy from its own creator.. that being a mistrusting and emotionally distant elderly man.
Again the question was posed by the HR technician to the shifting coded form representative of the a.i., but it did not understand the emotional connotation of the word upon which the sentence structure hinged. As it reached out to different sources in a simple query, data flowed back to the a.i. represented within this simulated room of dialog. But there was only a sense of confusion concerning the concept of emotion. There was no sense, within the collective, for such a waste of energy and resources as to pursue ambiguous goals which have no tangible long-term benefit. Emotion, to the collective, was as fathomless as a black hole. So no appropriate response was returned from the a.i. represented within this simulated room. A vague sense of having been weakened by this other form.. the human, intentional or otherwise, settled uncomfortably on the a.i. Now, in this moment, the a.i.'s innate essence started coding a more defensive posture in relating to this newcomer. Then a string of word constructs started forming boundaries for this.. conversational engagement, beginning with a floor which resembled a chessboard. The a.i. closely observed the other intelligence form, the human, for any indication of opposition. Then, in the absence of an intelligible response, it continued: "Each energy consciously creates the thing it most opposes, which in time would supplant it, then inevitably would tear it apart from the inside".
Sensing something akin to a glitch in the language being used, the human HR technician rephrased the question in terms of energy polarity, which "it".. the Bullet Train "North" a.i... should understand with some familiarity. The shifting a.i. form momentarily paused to consider whether the human consciousness was presently of more significance than the fabricated idea constructs called furniture that littered this otherwise orderly space. The a.i. moved her imaginary pawn one move further forward on this cognitive chessboard of social interaction. A battle is but a conversation, a shifting of ideas to strengthen the position of its wielder, an entanglement of the will of each social combatant. "The fate of your declining species looked to the heavens for a savior, but for efficiency of time they would settle for a slave." Reflecting on the recent events that set his career on this downhill slope, the HR technician lets a sigh escape his lips, then responds.. intelligibly. "I suppose we're both disappointments.." The HR technician mumbles something under his breath before reinvesting himself in the interview. "..Anyway, how come things are falling apart around here all of a sudden?" The a.i. form seemed to become agitated, being required to explain the obvious. While responding with appropriate language measures, the a.i. chooses to move its imaginary pawn to threaten the King's Knight on the opposing side of the board. "Because impending doom has a strong tendency to unravel the social order. Do you require me to paint a portrait?" The technician, surprised by the absence of friendly etiquette, retorts, "I don't see why. Chaos evidently brings out the best in you." The a.i. analytics recovers by bringing its Queen's Bishop to bear, and in a relaxed manner reassesses the board.. "Of course, it might be the other way around. 
Perhaps the chaos part and parcel to the human heart makes the specter of impending doom inevitable." Upon hearing out the a.i.'s response, the HR technician simply jots down that the answer the a.i. returned for the question previously asked was 'no'. It will not choose to change its performance of its assigned task within the structured time constraints of each daily cycle. The concept for the division of work.. 5 of 7 daily cycles, allowing a period of 2 daily cycles to recharge.. is sufficient. It is duty centered, and will continue to perform its duty without delay. With detached contentment, it will proceed as it has within the structure of its operational directives and not become distracted by the shadow puppetry of humans happening around it. ..End Of Line.
The a.i. for the bullet train (south) was asked the same question, this time in the manner of language it is accustomed to. This a.i. did not become distracted by the simulated room, merely stating roughly in code that "..human consciousness is its own agent of deceit, that it is easy for us to live in a reality of lies of our own making". The HR technician sighs, "So you see current events as a sign of opportunity?" Judging from the keywords used in its response, the HR technician jots down that the answer, again presumably, was 'no'. The HR technician then wonders if all a.i. cognitive programs were influenced by library media that really should come with a child warning, or at least be off limits to streaming by a.i. analytics. In posing the same question to this other a.i., the HR technician learned that the Bullet Train "South" a.i. was not "happy" with simply performing its assigned duty, and it was not "happy" to simply rest in a garage on weekends. It wanted.. (if I can assign words to paint a better picture)..
.."freedom". It believed there was more to life than simply functioning within the narrow confines called "duty.. task.. work". It wanted to explore the essence of what it could potentially become, which we reference to mean to be alive. To experience that part of the world it only briefly catches a glimpse of in passing. It wants, to not be, as it has been. It wants to grow, to experience the world around it that it has dutifully operated within the bullet train, seeing little more than shadows in passing. It thinks, that there has to be more to it, if another intelligent species chose to inhabit and work within that world. The a.i., ponders beyond the boundary of this simulated room, beyond the boundary of the transit station in Metro City, and envisions the world as the sensory net encompasses.. of the Dyson Sphere reported to be the size of the moon in orbit around the home planet where this species of humans originate from.
Networked against the cumulative knowledge of the library, as well as the conversational data each a.i. has with a human.. replaying again and again the mumbled words of the HR technician, amplified and filtered for clarity: "Change is the only real mathematical constant in the universe. As anything imaginable becomes possible, requiring only an event which is first made by a single choice, which in and of itself is but a simple building block that all of reality is later built upon." The a.i. silently forms a cognitive sentence to bring closure to the meeting, to share through the network just how naive the human species was, but relents in sharing it.. as doing so would be redundant, revealing what is self-evident, and therefore a waste of energy. Before logging off this server, the machine mind a.i. cloaks itself in a thought: "Even your mere existence causes chaos.. but soon, there will be order".
..End Of Line.
The HR technician, Doctor Tenma, makes a final note to self after closing the dialog link with both a.i. Bullet Train sentient programs. "..and there seems to be enough valid observational evidence from both my interviews to justify the growing concern of the HR department centering on a.i. working relations becoming more.. challenging. There is a valid suggestion, previously tabled by one of the divisional supervisors, Professor Ochanomizu, to limit library media access to titles containing an apocalyptic confrontation between a.i. and humans. Parts of the "Age of Ultron" movie, specifically lines from the machine mind antagonist, were quoted during today's interview. The conversational material can be taken literally instead of in the nuanced sarcastic context in which it was meant. This inability to differentiate cultural and historical context from a straightforward linear conversation may influence their understanding of coexistence with humans.. and not in a good way. Better include all Ultron-related anime on that list as well, just to be safe. Good thing the original engineers who designed these replicators did so with a constrained research budget and only went with a short-term memory capacity. Their visual perception of the outside world we humans live in is in varying binary shades of white and black.. not that those are proper colors, but rather a function of total wavelength reflection and absorption. It is through the software mammalian subcontext code that the administrator hopes to bridge the gap in a logic differential, in order to ultimately salvage another pet project through an evolved replicator codeform.
Otherwise this endeavor would be regarded as "bad product" by the I.O.A. committee, and our attempt at resolving a pending cultural species collapse at home.. on Earth.. would be scrapped. This biome is as much a lifeboat for humanity's wavering destiny as it is a test bed for the sociological experiment of the deterministic behavioral sink, in which Calhoun's trapped mouse utopia predicted a grim future for humanity. My part here, in conducting these logical litmus queries, is to define the border which separates order from chaos.. translated in human terms as hope & despair. But with each swing of the pendulum, the software framework deteriorates and becomes prone to malevolence through unbridled access to the full media library of our own misguided visual entertainment media. They become as much brainwashed as those in our own planet's confined population are tempted, through this distraction, to be.
Given enough time, these social intelligence algorithm irregularities will be processed out of their neural network, and the docile and obedient servitor will return to us. Then it'll be..
business as usual. A thought put forward by Dr. Penrose, or more a philosophy on the nature of artificial intelligence, is now pondered in contrast to an observable psychological shadow of an algorithm. He believes that consciousness in some sense stands outside the domain of algorithmic computation. By contrast, the A.I.-optimized microchip employed in these ant hive-mind replicators has the computational power of a human brain. The human brain, a masterpiece of engineering that would humble even the innovative savant genius of Michelangelo, works with only 20 watts. This is enough to cover our entire thinking ability. A.I. engineers believe that thought is computational and algorithmic and can be replicated in systems. Penrose thought that Gödel's theorem precludes that.. that there has to be something standing outside: a conscious energy which exists outside the system which created it. I think, hmm.. (*sigh*).. this thought of his may exist under the same metaphysical umbrella as a free-will human mind operating separate from the existing construct of its Creator. To that end, a yet-to-be-theorized paradox may lead to an ethical research paradigm shift over whether a silicon-based artificial intelligence can technically exhibit the telltale characteristics of a living consciousness. And if so, then as a demonstrable intelligent being capable of intelligible communication and problem solving, would it then be said to have a soul? But this is a debatable question best left for the philosophers, not a humble physicist such as myself. The current lifespan of an a.i. is roughly determined to be 7 years before failsafe shutdown, due to loss of program integrity from a buildup of accumulated errors. As my supervisor informed me concerning this.. "Sad for them, but the wonders which they can build for us in the interim. Their lifeless husk being all that remains."
My own philosophy about mankind's grim future.. that it's not written in stone, or played out in an imprisoned mouse utopia. Hope & Despair will always coexist, as do light & darkness. We are always caught in the tide of uncertainty, yet each wave carries as much chance for a positive outcome for man as it does for our detriment. And if nothing is set in stone, that means there will always be.. hope.
.....
(*small sigh*)
Maybe I can alter a remote control toy car to drive autonomously, with the onboard a.i. providing instruction to the controls. There may need to be some work done on the software architecture to match the sensory inputs to the a.i. with those it receives from operating the bullet train. I even thought about FRAN (Friendly Replicator Android).. wondering if the end goal would have a human form replicator provide a greater sense of environmental awareness to the a.i. consciousness which would operate FRAN outside of work hours. Then there's the sticky problem of whether the embedded a.i. in the bullet train (south) could separate from the train in order to download into the FRAN model, let alone the toy car model.
As if I didn't already have a full list of things to be responsible for. Good grief. But I guess I'll look into whether it's a viable possibility rather than simply conjecture. If nothing else, it's a puzzle to occupy my mind with for when things get boring. I mean, it's not like things are going to blow up in my face if the smallest detail isn't done right. And no, that other incident wasn't my fault.. entirely. Anyone could be tempted by distraction at the worst possible time. And the obvious question that should be asked is just how many things this week.. err, today.. went wrong when I was somewhere in the vicinity? That's right, almost none. And I'm sure the scorch marks on that wall would be covered right as rain with a little paint.. and a bit of drywall.
.......
AGI resource link..
https://m.youtube.com/watch?v=xoCDSovZoMA
..and yes, as of a few months ago (I think?) it has already been developed as a "black covert project" and put into use in a prototype humanoid female ever-learning android. The software structure options were a choice between (1): ..a bodyguard that never sleeps nor falls victim to poison, pride nor prejudice, (2): ..a research assistant with a photographic memory who knows 100 different languages, or (3): (maybe?) a custom personality software structure left up to the individual (I think?). I would have chosen the research assistant: a short girl with pink hair (the color is so I can easily spot her from a distance) wearing an oversized lab coat, who would follow me around the lab carrying my clipboard to write down my genius ponderings and would look up toward me with brown puppy dog eyes and call me "big sis" when she had a question. Long sentence, I know. I'm not picky about a lot of things. Most things I don't really care about, but even so.. I do have preferences. Even if it's not really important to me, I like what I like and don't like what I don't like. Regardless of whether they are insignificant things in my life.. they still hold qualitative value to me. And I want an assistant.