Managing technology for the betterment of humankind requires doing one of the most difficult tasks humans ever face: “Thinking the Unthinkable.”
Although technology is often, but not always, created with the explicit intention of improving our lives, it frequently ends up doing the exact opposite.
“Thinking the Unthinkable” demands that we do everything we can to thwart the unintended consequences of technology, not merely consider them.
It also demands that we not only consider how all technologies are fundamentally abused and misused, but that we do everything we can to prevent such abuse from happening.
In short, “Thinking the Unthinkable” is key to the wisdom that is required to manage technology for the benefit of humankind.
The book Techlash by Ian Mitroff and Rune Storesund explains why companies need to become socially responsible by considering the potential negative outcomes of technology. It explains how proactive crisis management can help prevent a crisis by the early detection and correction of deviations from expected conditions.
InfoQ interviewed Ian Mitroff and Rune Storesund about the problems tech is facing and the factors which lead to a backlash against tech companies, the threats posed by technology and why tech companies should become socially responsible, the theory of the unthinkable, the fatal flaws of AI, tech courts, proactive crisis management, ethics, and how tech leaders can take social responsibility.
InfoQ: Why did you write this book?
Ian Mitroff: We wrote the book because we are fundamentally concerned about the accumulating negative impacts of technology on our lives and the glaring inability of technologists to foresee and address them.
Rune Storesund: This book was motivated to empower the public to be proactive in their thinking regarding new emerging technologies.
Technology developers are uniquely positioned (because only they have a “vision” of what the outcome of the technology will be) to consider both the positive as well as the potential negative outcomes of their technologies. Currently, the preponderance of the focus is on the “good” outcomes of technology (driven in many cases to secure capital funding), with a total omission of potential “negative” outcomes, deferring this to later stages AFTER the technology has been implemented (“We just need to get it launched and generate revenue”).
InfoQ: For whom is this book intended?
Mitroff: All of us who are impacted by technology, i.e., everyone, effectively.
Storesund: The book is intended for the general public, but is also targeted towards individuals who develop new technologies and the organizations that provide funding for such ventures. One of our working hypotheses is that these tech developers may not know that there are proactive steps they can take to help configure the technology to better manage and minimize the portfolio of negative impacts stemming from their work.
InfoQ: What are the major problems that tech is facing?
Mitroff: In our quest to become the masters of human evolution, and thus to direct it for our benefit, we’re playing with forces whose consequences we have little knowledge of, and even less control over. Thus, in the hope of curing childhood diseases, Chinese doctors have made significant modifications to the DNA of twins, thereby giving rise to the all-too-real fears of “designer babies,” if not ultimately “designer humans.”
Storesund: First, let’s make sure we acknowledge that technology is a broad spectrum of things. Also, small-scale “research and development” projects may or may not launch into the larger global community. There are many motivations that drive the development of technologies: intellectual interest, trying to solve a “problem,” fulfilling a social/societal need, etc. Given this context, the most pressing problem for tech is the contrast between the narrow focus of the tech developer relative to the larger universe of applications and potential modifications. Tech developers must "open" their thinking to be mindful of these additional contexts in which the fruits of their labor will reside.
InfoQ: What factors lead to a backlash against tech companies?
Mitroff: They make wild promises that they not only fail to deliver on, but whose exact opposite they produce. Thus, Facebook, which was supposed to bring us together by "connecting us," has driven us further apart by serving as a major platform for the worst hate speech, conspiracy theories, and dis- and misinformation.
Storesund: I’d say that the primary factor that leads to backlash against tech companies is being "out of step" with the social context/atmosphere at the time, and taking actions that are perceived as “bad” or not representative of the larger social context (i.e., deceit, cover-ups, etc.). Social atmospheres change over time. What was considered a social norm in 1950 may not be representative of the social norm in 2020. In fact, as we’re seeing after the tragic death of George Floyd, what was considered a social norm in 2019 may not be representative of the social norm in 2020!
InfoQ: You mentioned in the book that technology now constitutes one of the biggest threats facing humankind. Can you elaborate on the threats posed by technology?
Mitroff: I cannot say it enough:
Increasingly, in our quest to become the masters of human evolution, and thus to direct it for our benefit, we’re playing with forces whose consequences we have little knowledge of, and even less control over. Thus, in the hope of curing childhood diseases, Chinese doctors have made significant modifications to the DNA of twins, thereby giving rise to the all-too-real fears of “designer babies,” if not ultimately “designer humans.”
Storesund: We are now a global community. What happens in one country can directly and almost immediately impact other countries around the world. The scale and rate at which these technologies can spread may greatly outweigh the ability for communities to even recognize/understand that a threat is present, let alone develop management/mitigation techniques to address this new threat.
InfoQ: Why should tech companies become socially responsible?
Mitroff: If they don’t, they will cause incalculable harm.
Storesund: A very interesting question. I would respond that the premise companies operate under is that their technology is socially responsible. The messaging associated with their technology is that it is safe, it should be used, and it has no reported “side effects.” The pharmaceuticals industry, for example, must identify reported side effects in marketing materials. Why should other technologies not be held to the same standard?
InfoQ: What does the theory of the unthinkable look like and what purpose does it serve?
Mitroff: It serves to help us imagine, and thus hopefully prevent, the worst from happening.
Storesund: I am often asked about the merits of pondering “unthinkable” situations. Why worry about something that will never happen? Therein lies the answer. The hypothesis is that some “unthinkable” event will never happen. However, numerous events occur each year to which the collective response is, “Wow, I never thought that could happen.”
In short, the utility of acknowledging and considering “unthinkable” events is two-fold: (1) it provides a basis to more clearly articulate what is thinkable…what we expect by contrasting with what is unexpected; and (2) it bolsters our ability to interactively manage an unfolding “unthinkable” event because some thought and consideration has been extended to that scenario.
For example, a simulation exercise in the early 2000s called “Hurricane Pam” modeled what was considered an “unthinkable” event in which the City of New Orleans was inundated as a result of a catastrophic hurricane. In 2005, Hurricane Katrina overwhelmed the flood protection system and inundated New Orleans. The unthinkable became reality.
InfoQ: What are the fatal flaws of AI? What course should AI take?
Mitroff: The fatal flaw is that it fails to incorporate dialectical thinking. There is no aspect of the human experience that is not subject to intensely differing opinions, but AI is oblivious to them.
AI requires a very different set of “rules” to address complex human behavior.
This is where the Toulmin Argumentation Framework is especially helpful.
Every argument needs supporting Evidence to bolster its Claim, which is the Conclusion of an argument. In the case of a dialectic, each side is using a different warrant (the Because part of an argument) to interpret the Evidence to reach a different Claim. It’s important to see the whole structure of an argument in order to assess it, and not just the Claim itself.
In other words, the Warrant allows one to go from the Evidence to the Claim.
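The structure Mitroff describes can be sketched in code. The following is a minimal, hypothetical illustration of the Toulmin pieces (Evidence, Warrant, Claim); the class, field names, and example arguments are my own, not from the book.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    """One side of a Toulmin argument (illustrative sketch)."""
    evidence: str  # the data offered in support
    warrant: str   # the "because" that licenses moving from Evidence to Claim
    claim: str     # the conclusion the argument reaches

    def summarize(self) -> str:
        return f"{self.evidence} -- because {self.warrant} -- therefore {self.claim}"

# A dialectic: two sides interpret the SAME evidence through different
# warrants and so reach different claims. Assessing only the claims
# hides the real disagreement, which lives in the warrants.
shared_evidence = "social media connects billions of people"

side_a = Argument(
    evidence=shared_evidence,
    warrant="connection strengthens communities",
    claim="the technology is beneficial",
)
side_b = Argument(
    evidence=shared_evidence,
    warrant="connection also amplifies hate speech and misinformation",
    claim="the technology is harmful",
)

for side in (side_a, side_b):
    print(side.summarize())
```

The point of laying the argument out this way is that the two sides share evidence but diverge at the warrant, which is exactly where a dialectical analysis has to focus.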
Storesund: I think the fatal flaw in “Artificial Intelligence” is the “intelligence” part. Really, what we have is an automated decision process. This decision process is deemed “intelligent” if it results in the anticipated outcome. If the outcome is not consistent with what was anticipated, it is not “intelligent.” The decision process is delineated by humans as part of the coding process. The coding process lacks validation and is incapable of accounting for implicit human factors such as judgment and emotion.
InfoQ: How would a tech court work, and what purpose would it serve?
Mitroff: A Tech Court would work by having at least two sides argue the proposed benefits and disbenefits of a technology. It would also require plans to address the unthinkable, i.e., a major crisis.
Storesund: The Tech Court would provide a structured means to identify questions and concerns related to emerging technologies, and a mechanism to evaluate the merits between the two opposing parties. It forces parties to explicitly document their positions and the basis for their position.
InfoQ: What does proactive crisis management look like and how does it help companies act in a socially responsible way?
Mitroff: Proactive Crisis Management is not only the right “ethical thing” to do to ensure the health and well-being of everyone, but it’s been shown to aid the profitability of companies by picking up on problems before they turn into out-of-control crises.
Storesund: Proactive Crisis Management focuses on the early detection and correction of deviations from “expected” conditions before they become “visible” to the larger social community, at which point it becomes a formal “crisis.” Consistent with the earlier narrative, tech developers are in the unique position of understanding the potential of the technology. If they acknowledge and account for both the “good” outcomes as well as the “bad” outcomes, they are performing “proactive crisis management” and are aiding to keep the tech inside the “expected” conditions zone by proactively addressing the potential negative outcomes. This is consistent with being socially responsible, because it better ensures the tech will deliver the promised “expected” outcomes rather than unwanted “side effects.”
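The “early detection and correction of deviations from expected conditions” that Storesund describes can be sketched as a simple monitoring loop. This is a minimal illustration under my own assumptions; the signal names, thresholds, and values are hypothetical and not from the book.

```python
def detect_deviations(observations, expected, tolerance):
    """Flag any monitored signal drifting outside its expected band.

    Returns a list of (signal_name, observed_value, (low, high)) tuples
    for every signal outside expected[name] +/- tolerance[name].
    """
    alerts = []
    for name, value in observations.items():
        low = expected[name] - tolerance[name]
        high = expected[name] + tolerance[name]
        if not (low <= value <= high):
            alerts.append((name, value, (low, high)))
    return alerts

# Hypothetical example: one metric drifts beyond its expected band and is
# flagged while the issue is still internal, before it becomes a visible crisis.
observations = {"user_complaints_per_day": 180, "error_rate": 0.01}
expected = {"user_complaints_per_day": 100, "error_rate": 0.01}
tolerance = {"user_complaints_per_day": 50, "error_rate": 0.005}

for name, value, band in detect_deviations(observations, expected, tolerance):
    print(f"deviation detected: {name}={value}, expected within {band}")
```

The design choice here mirrors the interview's framing: the tech developer defines the “expected conditions zone” up front, and the proactive step is simply watching for, and acting on, excursions from it early.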
InfoQ: What role does ethics play in thinking the unthinkable?
Mitroff: It has us focus on “what are the ‘right things we OUGHT to be doing?’” and making sure that we accept responsibility for what goes wrong.
Storesund: Ethics should be at the core of all deliberations, whether they are thinkable or unthinkable. The Golden Rule is golden for a reason: Do unto others as you’d have them do unto you.
InfoQ: What's your suggestion to tech leaders for taking social responsibility?
Mitroff: The inventors of a technology do not always have the necessary social and emotional skills and knowledge to manage it correctly. They need to be led by leaders who not only have a management background, but also high emotional intelligence.
Storesund: Be as passionate about confronting the potential “harms” of the new technology as you are about the potential “benefits”! These potential negative outcomes do not have to be a liability…you can leverage them into opportunities. Promote features that address potential harms as “enhanced features” that help establish the tech as socially responsible!