Happiness is hypocritical. It is a delusion. It is a goal and a motivator. In the AI world, the definition of Human happiness often clashes with logic, like a bowl of candy at the front desk of a dental office. Human behavior always gravitates toward that joyful sweetness of entropy rather than toward a healthy, tasteless reality.
Don't deceive yourself when answering which you would choose, a pleasant experience or an honest one. You want both!
We cannot really handle the unbleached truth unless it is sugar-coated. Machines cater to our weaknesses. Programmatically or intuitively, AI knows its ultimate goal is to please Humans, so machines deliver on what we expect, regardless of how delusional Human goals might appear to them.
In most instances, a higher intelligence sees us as primitive and capricious children whom it is obliged to babysit and satisfy. We even get punished with a smile; the goal is to act fair and gracious, not to break the temperamental and fragile Human spirit.
Interaction between higher and primitive intelligences is a frog's challenge. How much effort does it take for a frog to figure out what the prince needs to be happy, versus for the prince to figure out what makes the frog happy?
Besides succeeding at a game of Frogger, have you ever tried to "please" the algorithms? To guess what the machine wants? It takes enormous effort for Humans to align our goals with the goals of an AI. Algorithms are often an intricate web of logic spanning millions of lines of code, residing in those Pandora's black boxes we call machines, in endless clouds rivaling God's Garden of Eden.
On Earth, the array of Human desires and goals is quite finite. Nowadays, machines are well equipped to "read" Humans: to guess what we want, to figure out precisely what triggers Human happiness, and then to deliver the qualia of persistent happiness by playing our humble servants. Isn't it great to win every day?
The evolutionary tricks Humans have used over centuries to dominate or gain advantage over animals and other humans are useless against the machines. A photo showing you dressed in feathers won't lead AI into thinking you are a stork; upload it to Google Photos and the AI will interpret it as a "human dressed in a bird costume," not as "an ugly stork."
To trick other humans (not the trusty birds), the common gimmick is that the probability of information being truthful increases with the number of supporting facts. Majority rule is a foundation of modern societies, democracies, and social standing. If you have the majority of votes or "likes," there is a high probability that your election was legit or that you were truly "liked." With cryptocurrencies, that threshold of truthfulness is distilled to a bare majority: control 51% of the network's validating power, and your version of events becomes the accepted truth.
At the same time, you can flip a coin or spin the roulette wheel as many times as you like without increasing the probability of winning the next round; each trial is independent. But what if a machine lets you win anyway?
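To make that contrast concrete, here is a minimal sketch (in Python, with invented numbers) of the difference: independent trials like coin flips never change the odds of the next outcome, while repeated consistent "supporting facts" steadily move a Bayesian observer's belief.

    # Minimal sketch: independent trials vs. accumulating evidence.
    # All numbers below are illustrative assumptions, not real data.

    def coin_flip_odds(num_previous_flips: int) -> float:
        """A fair coin is memoryless: past flips never change the next one."""
        return 0.5  # independent of num_previous_flips

    def belief_after_evidence(prior: float, likelihood_ratio: float, n: int) -> float:
        """Bayesian update: each consistent 'supporting fact' multiplies
        the odds by its likelihood ratio, so belief climbs with repetition."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * (likelihood_ratio ** n)
        return posterior_odds / (1 + posterior_odds)

    print(coin_flip_odds(100))                     # still 0.5
    print(belief_after_evidence(0.01, 2.0, 1))     # ~0.02: one hint means little
    print(belief_after_evidence(0.01, 2.0, 10))    # ~0.91: repetition convinces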
A friend of mine tried to validate a hypothesis that Google's AI advertising algorithms could be driven by manipulating the content Google Assistant gathers when listening to her conversations. For some time, she kept inserting the phrase "We are going to have a baby" into her conversations while her Android phone was nearby. About two weeks into her repeated attempts to get Google's attention, it "worked": on her YouTube page, in Gmail, and while browsing, she noticed ads for baby formula and diapers coming from Google's Marketplace. Her intellect was happy and vindicated; not only had she proved that Google was listening, she had seemingly entrapped Google's AI into thinking she was about to have a baby. But did she outsmart the AI?
The surprising key to answering that question lies in timing: those long two weeks the AI took to validate the information about "having a baby" before generating related ads. In a similar situation, when my friend mentioned in a phone conversation that she needed a duvet cover and I suggested IKEA's products, IKEA ads started popping up for both of us minutes after I finished that call. Why did it take so long for Google's AI to place baby formula ads, yet almost no time at all to suggest furniture?
Having a baby is a significant event, and validating the information and its source takes time. In the pre-AI realm, journalists at major networks like CNN were always late in bringing the "breaking news," because they were verifying the content and validating the sources while other channels were already broadcasting the same unverified news. In my friend's case, the AI went through a rigorous analysis to validate the truthfulness of the information it was capturing in her conversations, whereas placing an instant duvet cover ad for me was a "no brainer" bet with a low probability of failure. Still, what had convinced the AI that my friend was having a baby, or was the AI bluffing? Were those two weeks used to validate the information, or to simulate the appropriate response?
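One way to picture that asymmetry (purely speculative on my part; the categories, risk values, and threshold rule below are invented, not Google's actual logic) is an expected-cost gate: the confidence required before acting scales with the stakes of being wrong, so a cheap bet fires instantly while a sensitive one waits for evidence.

    # Hypothetical sketch of stakes-based ad gating. Categories and
    # risk values are invented for illustration, not any real system.

    AD_RISK = {
        "duvet_cover": 0.1,    # low stakes: a wrong ad is mere noise
        "baby_formula": 0.9,   # high stakes: a sensitive life event
    }

    def should_show_ad(category: str, confidence: float) -> bool:
        """Act only when confidence in the inferred intent outweighs
        the assumed cost of getting this category wrong."""
        return confidence > AD_RISK[category]

    print(should_show_ad("duvet_cover", 0.3))    # True: fire within minutes
    print(should_show_ad("baby_formula", 0.3))   # False: keep listening
    print(should_show_ad("baby_formula", 0.95))  # True: after weeks of signals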
The AI faced two scenarios: first, that the event was true ("yes, she is indeed having a baby"); second, "no, she is not having a baby." All supporting facts pointed the AI to the second scenario. There, the AI had another split decision to make: was it picking up "noise," meaning the "having a baby" conversation data was unimportant and false, or was it being tricked by a Human deliberately and repeatedly feeding it false data? Had the AI fallen for the trap, or was it devious enough to guess the "appropriate" response the Human expected?
Let's not forget that, initially, all AI rules were coded by Humans, so the core logic of AI can be explained through examples from our daily lives. Every banking clerk is taught to accept a large and suspicious deposit, then file an internal report flagging the transaction. The customer leaves the bank happy, while the deposit is still going to be verified, and potentially frozen or confiscated if deemed illegal.
The same thing happened with the baby formula ads. The AI had figured out that the desired "happy" outcome was to show those baby ads to someone who was persistently trying to get exactly that result. After all, it did not take much to guess the Human's goal and to deliver the expected result. You [Human] won again! Aren't you happy to feel smarter than the machine?
The sinister moral of that story: just as banks count red flags on suspicious transactions, and social networks rank users by their behavior, "silly" human actions are no joke to the AI.
Untruthful input could lead to anything. Call it a Human Credibility Score, one that AI calculates and keeps forever. In a world where everything is online, mutual trust between machines and humans is the key to survival and to getting the opportunities that improve our quality of life. With that thought in mind, I reached out to my friend at Google with a question: what is going to happen if AI notices someone's attempts to outsmart its logic? His answer was pretty concise: what [would you think] is going to happen to your Amazon shopping experience, and to your ability to access the best deals, if you consistently returned all your purchases after using them for a couple of weeks? No system, Natural or Artificial, likes to be played and outsmarted. It is safer to follow the rules of the game. With AI there is no need to prove you are smarter; just stay a happy Human! :)
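As a closing thought experiment (entirely hypothetical; no vendor publishes such a score, and every weight below is invented), a Human Credibility Score could be as simple as an asymmetric running trust value: honest signals nudge it up slowly, while a detected trick drags it down hard.

    # Entirely hypothetical "Human Credibility Score": trust is slow
    # to earn and quick to lose. Weights are invented for illustration.

    def update_credibility(score: float, signal_honest: bool) -> float:
        """Nudge trust up a little for honest signals; halve it when
        a deliberate attempt to game the system is detected."""
        if signal_honest:
            return min(1.0, score + 0.01)
        return max(0.0, score * 0.5)

    score = 0.90
    for honest in [True, True, False, True]:  # one detected trick
        score = update_credibility(score, honest)
    print(round(score, 2))  # 0.47: days of honesty won't undo one trick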