Jun 14, 2016
 

The boundary between machine capabilities and what once seemed uniquely human has certainly moved over the years, justifying concerns that the relatively new field of roboethics addresses. Roboethics goes beyond job losses and looks at the impact of robotization on society as a whole; that is the major topic here. (I will address job losses at the end.)

An algorithm can be unethical in both obvious and subtle ways. It could be illegal, as may have been the case with Volkswagen’s engine management algorithms for its “clean” diesel engines. Or it could be unethical in that it violates basic norms of fair play.

More subtly, an algorithm could take on decision-making roles that a human is better equipped to play, thereby yielding unethical results. While algorithms are better at minimizing stereotyping and personal prejudice in decision making, and they guarantee thorough and complete data collection and analysis, people still offer critical strengths. I call these the “human edge”:

  • Nuanced judgment based on circumstances and context, differentiating between situations that are the same only technically or on the surface
  • Emotional intelligence and empathy
  • Plain old common sense applied when the algorithm produces absurd or unjustifiable results
  • Intuition, imagination, and creativity
  • A sense of fairness, decency, and the golden rule — the essence of ethics — and the ability to apply it when an algorithm would violate that sense based on data that a human could recognize as stray, incorrect, or irrelevant
  • Being accountable for results without the defense that “the algorithm made me do it”

I use the term “edge” in two senses, as both a boundary and an advantage, and I suggest that the boundary will prove robust for a very long time.

A corollary of this is that algorithmic approaches don’t necessarily involve computers and AI. Consider, for example, mandatory sentencing rules that take over part of the traditional role of judges.

Another conclusion is that algorithms are only part of this emerging discussion because most algorithms depend on data about the situation at hand, plus knowledge developed from large volumes of statistics related to that situation. Data don’t just appear; they have to be collected, primarily from us, often without our knowledge. The sheer power of IT to collect, store, transmit, analyze, and distribute exabytes (a billion billion bytes) of data — all of these capabilities growing exponentially — has raised possibilities for abuse and misuse only now imaginable and well outside the scope of laws and regulations developed to address yesterday’s issues. Today’s data collection can provide real and important benefits to individuals and society as a whole, but we must not ignore the potential for data misuse and abuse (a subject that merits an article of its own).

What to Do

Roboethics owes its existence as a new discipline to robots and algorithms, but these are not themselves the real ethical threat. Rather, the threat comes from robotic and algorithmic approaches to situations where the human edge is critical to ensuring results that are fair and beneficial to individuals and society at large. Computers may or may not be involved; it’s the approach that matters. Addressing the threats needs to happen at multiple levels.

Public Policy

  • Only legislation or judicial decisions can deal with existing laws such as mandatory minimum sentences or overinclusive definitions of who counts as a sex offender. This means recognizing that justice is not the same as law enforcement. No matter how necessary or well intentioned, a statute cannot make the fine distinctions that justice calls for if lives are not to be unnecessarily blighted.
  • Governments need to embrace the notion that fines should be levied as punishment for infractions with the goal of minimizing occurrence of those infractions — not as a source of predictable revenue. (Good luck with this!)
  • Unethical algorithms need to be exposed and dealt with by, for example, consumer protection agencies.
  • New laws should better protect whistle-blowers who call out ethical issues with algorithms.
  • New laws should mandate that third-party repositories of official data keep their copies of that data up to date when the official source changes, with penalties for failure to do so.

Media, Watchdog, and Advocacy Groups

Such organizations can play a constructive role by highlighting laws that result in unethical outcomes so as to generate popular support for change. They can also play a part in naming and shaming businesses that deploy unethical algorithms, such as Third City’s handling of overdrafts, with the goal of banning them. By building awareness, such publicity makes it worthwhile for better-behaved companies like Tenth National to incorporate their “code of ethics” into their marketing. (Refer back to sidebar.)

Businesses and Governments

Businesses and governments need to remove robotic algorithms from jobs where the human edge matters. Algorithms can be tremendously helpful in decision making, up to and including making recommendations, but they should not actually decide in cases where the human edge plays an important role in ensuring fairness and applying common sense. Explicit liability for bad robotic decisions is also needed.
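
As a minimal sketch of what “recommend, don’t decide” could look like in code, consider the hypothetical Python example below. The algorithm is confined to producing a scored recommendation with a rationale; only a named human can issue the final decision, and that person, not the algorithm, is recorded as accountable. The loan scenario, names, and threshold are illustrative assumptions, not a prescription.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        """What the algorithm is allowed to produce: advice, not a verdict."""
        case_id: str
        suggested_action: str   # e.g., "approve" or "deny"
        score: float            # model confidence, 0.0 to 1.0
        rationale: str          # what the human reviewer should check

    @dataclass
    class Decision:
        """The final decision always carries a human decider's name."""
        case_id: str
        action: str
        decided_by: str         # a person, accountable for the outcome
        overrode_algorithm: bool

    def recommend(case_id: str, score: float) -> Recommendation:
        # Illustrative threshold; a real system would have to justify it.
        action = "approve" if score >= 0.7 else "deny"
        return Recommendation(case_id, action, score,
                              rationale=f"model score {score:.2f}")

    def decide(rec: Recommendation, reviewer: str, action: str) -> Decision:
        # The human may accept or override; either way, accountability is theirs.
        return Decision(rec.case_id, action, decided_by=reviewer,
                        overrode_algorithm=(action != rec.suggested_action))

    # Usage: the algorithm recommends, a person decides (and may override).
    rec = recommend("loan-1234", score=0.68)
    final = decide(rec, reviewer="j.smith", action="approve")
    print(final)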

These entities also need to recognize that as algorithms become more sophisticated, they may generate unpredictable results à la AlphaGo. This suggests a need for the equivalent of the nuclear industry’s containment vessels to avoid algorithms going out of control, as may have contributed to the home mortgage meltdown in 2007-2008.

Military Policy

To the extent that autonomous weapons replace physically present soldiers, who can see the scene clearly (the “fog of war” notwithstanding) and thus exercise judgment, common sense, and decency, robot soldiers would be another example of unethical use of algorithms. Robots fighting robots sounds like the stuff of video games and films.

IT Practitioners Need a Code of Ethics

Under an IT code of ethics, practitioners would:

  • Refuse to participate in illegal IT (e.g., VW’s emission test–cheating software)
  • Call out attempted misuse based on robotizing activity where the human edge is critical
  • Call out algorithms that offend standards of human decency in pursuit of profit
  • Establish “containment vessel” processes for recognizing unpredictable and possibly erroneous algorithmic outcomes in time to enable human intervention (see the sketch after this list)
  • Avoid premature public release of applications when the likelihood of problems adversely affecting users is more than very low (unless accompanied by explicit warnings and waivers)
  • Ensure the security of sensitive personal data
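
To make the “containment vessel” idea a bit more concrete, here is one minimal sketch in Python. An algorithm’s output is checked against pre-agreed sanity bounds before it can trigger action; anything outside the bounds is held back, a safe fallback is used, and a human is alerted to review. The pricing scenario, the bounds, and the escalate_to_human hook are all assumptions for illustration.

    def within_bounds(price: float) -> bool:
        # Pre-agreed sanity bounds; these numbers are illustrative assumptions.
        return 0.01 <= price <= 10_000.00

    def escalate_to_human(item_id: str, price: float) -> None:
        # Stand-in for a real alerting or review-queue mechanism.
        print(f"REVIEW NEEDED: {item_id} priced at {price!r}")

    def contained_pricing(pricing_algorithm, item_id: str,
                          fallback_price: float) -> float:
        # Containment: the algorithm's output cannot trigger action until it
        # passes the sanity check; absurd results go to a human instead.
        price = pricing_algorithm(item_id)
        if not within_bounds(price):
            escalate_to_human(item_id, price)
            return fallback_price  # safe default until a human rules
        return price

    # Usage: a runaway pricing model is contained rather than acted on.
    runaway_model = lambda item_id: 9_999_999.0  # stand-in for a misbehaving model
    print(contained_pricing(runaway_model, "book-42", fallback_price=19.99))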

Is such a code idealistic? Of course. Companies are not democracies. IT professionals have mortgages to pay and children to educate, making pressure to build something of dubious ethics extremely difficult to resist. When whistle-blowers reveal that they were asked — or, more accurately, told — to do something illegal, they may have the satisfaction of knowing they did the right thing, but too often at great cost to their careers and their families.

Just Because We Can

… should we build it? Concerns over new technologies can be overblown by the media and politicians, but they should not be reflexively dismissed. Yes, such concerns can slow down innovation, but that is not necessarily a bad thing. DDT and thalidomide did their intended jobs beautifully — but then we saw their devastating side effects. The pressure to move fast is particularly intense in IT, where speed to market is critical and tech executives with a libertarian bent want governments and public interest groups to stay out of the way. That doesn’t mean every idea should proceed at full throttle, though, assuming nasty flaws will be kind enough not to materialize. When members of the public could be adversely affected by things going wrong, prudence and caution are in order.

Paul Clermont

Paul Clermont is a Senior Consultant with Cutter Consortium's Business Technology & Digital Transformation Strategies practice. He takes a clear, practical view of how information technology can transform organizations and what it takes to direct both business people and technicians toward that end. His expertise includes directing, managing, and organizing information technology; reengineering business processes to take full advantage of technology; and developing economic models and business plans. http://www.cutter.com/experts/paul-clermont
