Experiential Ethics
Micro-Decisions on Macro Issues
Is that tweet a lie? Did that email request go too far? Did you mean to offend people in that presentation? You do know that could be considered theft, right?
It’s frightening how many ethical questions arise but remain unasked and unanswered when we are creating and consuming digital content. It’s not until we interact with other people that those questions come into the light and become something to be discussed or dealt with. Because there isn’t anyone standing over our shoulder monitoring us while we go about our business, our unconscious biases and bad behavior make their way into our communication. Much of it is unintentional, and avoidable. Yet most of us would object to any kind of monitoring that flagged “questionable” items or ethical grey areas.
Who are you (computer) to tell me what not to do or say? It’s a fair point. Our Artificial Intelligence-powered digital assistants are just following their programming and whatever ethics (if any) were put into their models or rules. They are not always right, nor are they smarter than us. But when designing a new worldwide system of content authoring and consumption, you’ve gotta ask yourself if you can live with the consequences of not having that kind of discussion first, as painful as it may be.
Information has always been used for both good and, well, let’s just say more nefarious reasons. We are left to figure out which is which for ourselves, more times a day than we can keep track of. Some of us are evolved enough not to let surface-level attributes bias our thinking about a topic before knowing more, while others are impulsive and quick to judge. That’s human nature. Some people’s reactions to certain topics are thoughtful and measured, while others choose to accept new information at face value and don’t care to dig any deeper.
The commonality among all of these different reactions is that they are all triggered to some degree by first impressions. It’s natural. We may think something is reputable just because of who or what is saying it. How it looks. What visuals accompany the message. Where it is presented. How forcefully. Was it said respectfully or in anger?
The beauty we seek is not found in first impressions,
but rather in how the truth reveals itself.
The systems we create to enable powerful new information experiences should account for how influential they can be at first impression, and strive to engage people in a deeper exploration of the related data and topics so they can more fully understand and form their own ideas of the truth. The skillful way these additional aspects are revealed to us over time often becomes critical to the challenge of providing people with more than a surface-level view of their information.
Ethical Design
We have an obligation and responsibility as designers of this new communication medium and its related technologies to do the right thing – to strive to be ethical and moral in our words and actions. The creation of first impressions is not the only area that needs special attention, but it is an important one. The impressions that result from receiving any new bit of created information can now be so easily manipulated to elicit specific responses that we need to examine how any new system will add to that problem, or put mechanisms in place to illuminate when it’s happening. Persuasion is key.
Beware of highly persuasive info masquerading as Pandas and Cats
Persuasion used to be something practiced deftly (but not continually) by the charismatic among us – now it’s been weaponized and used regularly to bludgeon our general perception through continual targeted messaging. We need to acknowledge that these information warfare effects are real and take the appropriate action to educate and allow for choice in both the creation and consumption of Smart Information.
We need look no further than the Advertising industry for great examples of both ethical design done well and where it has completely failed us. Skilled advertisers (with different end goals in mind) have always had the ability to manipulate public opinion and elicit action. The modern communication tools and technologies we have been using for the last several years have amplified that effect to dangerous levels. We’re just now realizing how deeply the ability to effectively influence people through targeted information has made its way into common artifacts like Facebook posts and Instagram videos.
People are certainly more aware of that kind of manipulation today than in the past (with Gen Z being almost immune to traditional advertising). In fact, we can learn a great deal from that generation about not letting a superficial first impression become our lasting impression without first doing additional digging or sanity checking. This is precisely where the multi-faceted capabilities of Smart Information can help. Layers and layers of additional data and related tidbits are available at any time, making deep exploration simple if you want it. Remember, with Smart Information there’s always more than meets the eye.
Micro-Decisions
Some of the most important decisions we’ll make in the near future are related to how much we want to purposely influence versus how objective we want to be in the way we communicate. Just as we have built-in spellchecking and grammar analysis today, tomorrow’s apps, platforms, and toolsets for conveying information to each other will have the ability to monitor the tone, accuracy, factualness, timeliness, offensiveness, and persuasiveness of content.
The same is true of how information is received – what was your reaction, your emotion, and your subsequent action? It’s truly going to be a Brave New World if we don’t get a handle on the intention and use of these capabilities early in the adoption process.
To evaluate the tone or intent of a message you are sending, or conversely, of information you are consuming, these systems will use a variety of techniques to analyze the “sound” of the speaking voice, intonation, innuendo, humor, darkness, slang, encoding, and the history of similar communiqués. We’ll even pull data from hardware sensors, cameras, and microphones for additional cues to your emotional state. Heart rate informs emotion, facial micro-tics indicate stress, voice inflections hint at intent, and so on. Once the AI completes its analysis, trends and patterns will be considered and the results fed back into active Machine Learning models to enhance the overall effectiveness of the larger system serving the ecosystem. The net result is that we learn more about the subtleties of information creation and consumption in a data-driven way. That leads to improvements on both sides of the equation (creation and consumption), and of course it could lead to misuse and abuse if not proactively looked for and compensated for.
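To make that sensor-fusion idea concrete, here is a minimal sketch (in Python) of how a few normalized signals might be blended into a rough emotional-state score. Everything here is an invented illustration – the sensor fields, weights, and threshold are assumptions, not a real device API or a trained model.

```python
# Hypothetical fusion of sensor cues into a rough "stress" estimate.
# Sensor fields, weights, and the 0.7 threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    heart_rate_bpm: float   # e.g., from a wearable
    facial_stress: float    # 0..1, from camera-based micro-expression analysis
    voice_tension: float    # 0..1, from microphone pitch/inflection analysis

def estimate_emotional_state(frame: SensorFrame) -> dict:
    """Fuse raw cues into a single 0..1 stress score with made-up weights."""
    # Normalize heart rate against a nominal 60-100 bpm resting range.
    hr_signal = min(max((frame.heart_rate_bpm - 60) / 40, 0.0), 1.0)
    stress = 0.4 * hr_signal + 0.35 * frame.facial_stress + 0.25 * frame.voice_tension
    return {"stress": round(stress, 2), "flag_for_review": stress > 0.7}

print(estimate_emotional_state(SensorFrame(95, 0.8, 0.6)))
# {'stress': 0.78, 'flag_for_review': True}
```

A real system would learn those weights from labeled data rather than hard-coding them, and the flag would feed the Machine Learning loop described above rather than a print statement.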
Once we can effectively understand the underlying intent, structure, and emotion of information, it will be time to turn our attention to how we can further improve our understanding of it through a series of AI-driven micro-decisions in the authoring pipeline, as well as on the consumption and sharing side. This may work much like the “red squiggle” you find underneath any misspelled words today. We already alert you to grammar issues and make suggestions. Layering in additional dimensions such as emotional impact, persuasion, and accuracy could easily be envisioned as a normal part of creating or editing.
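As a toy illustration of those layered dimensions, the sketch below checks a draft against a few hypothetical trigger lists for offensiveness, persuasion, and accuracy – the editor-side equivalent of extra squiggle colors. The word lists and matching are placeholders; a production system would use trained language models, not keyword lookups.

```python
# Toy "multi-squiggle" checker. The dimension names and trigger phrases
# are invented placeholders, not a real moderation or style model.
DIMENSIONS = {
    "offensiveness": {"idiot", "stupid", "loser"},
    "persuasion":    {"obviously", "everyone knows", "you must"},
    "accuracy":      {"always", "never", "guaranteed"},
}

def squiggles(draft: str) -> list[tuple[str, str]]:
    """Return (dimension, trigger) pairs, analogous to spelling underlines."""
    text = draft.lower()
    return [(dim, phrase)
            for dim, phrases in DIMENSIONS.items()
            for phrase in phrases
            if phrase in text]

for dim, phrase in squiggles("Obviously you must agree; only an idiot would not."):
    print(f"[{dim}] flagged: {phrase!r}")
```

Just as with spelling, the point is to surface the flag and leave the micro-decision – rewrite, ignore, or dig deeper – to the author.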
These micro-decisions affect things like whether you will continue authoring a tweet the AI has flagged as potentially offensive, or whether your news feed automatically hides certain news stories about political figures because you have been emotionally affected by them in the past. This is the part of the movie where everyone realizes the human-like AI has gotten out of control. It’s not usually that dramatic in real life, but there is cause for concern when your tools are telling you not to send that mail so quickly, or that you were being a little harsh on your brother in that text message.
It’s incredibly exciting to imagine this type of AI-assisted workflow behaving well and being genuinely helpful. It’s equally frightening to imagine micro-censoring of content and touchy decisions being made on your behalf without directly consulting you. I’ll give a slightly ridiculous example to make a point – imagine if your hammer delivered an electric shock just strong enough to make you drop it, because it thought you were doing a really bad job of nailing things in today, and you’re probably not going to do any better this time. Indeed.
The question isn’t whether this will all happen on a widespread basis, but rather when exactly it will happen. All the tech exists now, and our best data scientists and software engineers are already working on it. So there’s no need to debate the if; the question is how we will deal with these potentially disruptive innovations coming into our everyday lives. If history is any teacher, the answer is: poorly. Our fundamental nature is good, but our actions, not so much. We need to consider the bigger picture here – the macro consequences of our actions or inactions. Will a trillion AI-powered micro-decisions a minute about our information cause an unintended worldwide societal change that we don’t see coming? Perhaps. It’s that type of issue we should consider as we invent and build these Smart Information systems.
“With great power comes great responsibility.”
Trust Issues
Even being a tiny bit mindful of questionable ethics and the dark side of automation and intelligence-powered distribution systems is enough to make you realize we need some checks and balances as we move forward. There are a few obvious ways that decisions will be integrated into the content pipeline – autopilot, manual, and assisted.
Autopilot is a good metaphor for the type of Smart Information experiences that don’t require any direct input during authoring or playback to “shape” the information. AI makes decisions all along the content pipeline on our behalf, based on well-understood patterns and templates. Automated agents are then used to gather, analyze, and package up the content and its levels of detail. There have already been many examples of news stories and sports recaps being written and published without any human input required. These stories are monitored, of course, and can be manually overridden to correct mistakes, unfortunate word choices, or puns that are a bit too subtle for the AI to catch at the moment.
You could also imagine this type of autopilot process running completely unattended for well-understood material, but there should always be a mechanism for examining the output at any stage, as well as for pausing the process and putting a person back in charge. This is what happens today with self-driving cars: the person sitting in the vehicle can and should take control if they need to, but it’s usually a smooth ride.
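One way to picture that handoff is a pipeline where every stage can pause and return control to a person. The sketch below is purely illustrative – the stage functions, placeholder data, and review rule are assumptions standing in for real gather/analyze/package services.

```python
# Illustrative autopilot pipeline with a human-override checkpoint after
# every stage. All stages and data here are invented stand-ins.
from typing import Callable

def gather(story: dict) -> dict:
    story["facts"] = ["score: 3-1", "attendance: 40,112"]  # placeholder data
    return story

def draft(story: dict) -> dict:
    story["text"] = "Home side wins 3-1 before 40,112 fans."
    return story

def run_autopilot(story: dict,
                  stages: list[Callable[[dict], dict]],
                  needs_review: Callable[[dict], bool]) -> dict:
    """Run each stage, but pause and hand control to a person on demand."""
    for stage in stages:
        story = stage(story)
        if needs_review(story):
            print(f"Paused after {stage.__name__}: handing control to a human.")
            break  # a real system would queue this for manual editing
    return story

# Example policy: any draft containing numbers gets a human fact-check.
result = run_autopilot({"topic": "match recap"},
                       stages=[gather, draft],
                       needs_review=lambda s: any(c.isdigit() for c in s.get("text", "")))
```

The checkpoint is cheap to add, and, like the self-driving car, the ride is smooth right up until the moment it shouldn’t be.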
As we push our capabilities to the “intelligent edge”, the interesting part of autopilot mode is the number of micro-decisions that could be made on our behalf about tuning many of the variable factors we discussed earlier – persuasion, emotional impact, accuracy, etc. These systems take both our history and our current context into account when deciding what to share with us. The decision-making models for these tools and systems will need to be trained with our established ethics policies in place and our moral compass turned on. We cannot unleash a worldwide content creation system on autopilot with no conscience.
Manual decision-making requires a single person, group, or even a “trusted” AI to be responsible throughout the entire process of creating, sharing, or consuming information – which is, for the most part, today’s operating model. Our tools are geared toward manual authoring with some basic assists and services available on demand. The person always stays in control. This is where most, but not all, of the bad behavior happens, unintended or otherwise. The good news is we can control this, regardless of outcome.
Assisted is a blend of those two models for information creation and consumption. The system can help you when you ask for it, or perhaps when you look like you need it. The big difference with this model is that every decision point in the process will be augmented in some non-trivial way by technology or other people to help achieve the best result. Just as my spell checker is watching every character I type to see when words are incorrectly formed, our background Machine Learning services will be watching and learning, ready to invoke an AI agent to assist you with decisions big and small, straightforward or controversial. They’ll tell you when you’re offending without realizing it, hurting someone’s feelings by accident, or just purposely being a jerk.
Even activities as simple as reading a book or watching the latest video can be “assisted” more proactively when we employ Reinforcement Learning agents to observe what works best for you, in hopes of presenting it that way. The ethics and behavioral assists come in the form of deciding what type of information and level of detail suits your current situation. Will showing you the opposing viewpoint on a hot topic help you be a bit more empathetic to those ideas, or will it just piss you off? This assisted approach will be the first stop on our journey to the autopiloted future.
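For the curious, one deliberately simplified way such an agent could work is an epsilon-greedy bandit that learns which presentation format earns the best engagement. The formats, engagement numbers, and simulated reader below are all invented for illustration – not a claim about any shipping recommender.

```python
# Minimal epsilon-greedy bandit for choosing a presentation format.
# Formats, rewards, and the simulated reader are invented for illustration.
import random

formats = ["summary", "full_article", "visual_first"]
counts = {f: 0 for f in formats}
values = {f: 0.0 for f in formats}  # running average engagement per format

def choose(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:                 # explore occasionally
        return random.choice(formats)
    return max(formats, key=lambda f: values[f])  # otherwise exploit the best

def update(fmt: str, reward: float) -> None:
    counts[fmt] += 1
    values[fmt] += (reward - values[fmt]) / counts[fmt]  # incremental mean

# Simulated reader who engages most with visual-first presentations.
true_engagement = {"summary": 0.3, "full_article": 0.5, "visual_first": 0.8}
for _ in range(500):
    fmt = choose()
    update(fmt, 1.0 if random.random() < true_engagement[fmt] else 0.0)

print(max(values, key=values.get))  # usually converges to "visual_first"
```

The same loop could just as easily optimize for empathy or understanding instead of raw engagement – which is exactly the kind of values question these micro-decisions keep raising.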
“Use your head, but have a heart.”
M. Pell
Chapter 6 from “The Age of Smart Information”
Copyright © 2019 Mike Pell – Futuristic Design, Inc. All rights reserved
Learn more about the author M. Pell at Futuristic.com