Mar 08, 2016
 
[Image: autonomous car]

“We are rushing headlong into the robotics revolution without consideration for the many unforeseen problems lying around the corner. It is time now to step back and think hard about the future of the technology before it sneaks up and bites us when we are least expecting it.” – Noel Sharkey, Foundation for Responsible Robotics

One of the more vexing issues of Sharkey’s robotics revolution that requires hard thinking is whether the computing algorithms that underpin it are “ethical.” For example, what path should a self-driving car be programmed to take if it finds itself in a situation where it must crash into either a bus stop full of school children or a single adult pedestrian nearby? The “trolley problem,” as it is commonly called, is just one ethical dilemma that the designers of smart machines now have to confront, solve, and defend to politicians, the public, and their lawyers. Similar trolley problems are arising as smart machines are developed for use in fields such as the military, healthcare, finance, and law enforcement.

Adding to the ethical complexity, current legal statutes are not entirely clear about what is required of autonomous robotic systems or their designers. Where the law is silent, our only guidance for programming the algorithms of our smart machines is the value judgments we make about what does and does not constitute acceptable behavior. But the question is – as it has been for thousands of years – in whose ethics and whose interests should we ultimately place our trust?

The question of ethical algorithms doesn’t just affect autonomous robotic operations, either. As more sensors and devices are connected into an Internet of Things, how, when, and why should the resulting information be used, and to whom should it be made available? For example, the New York Times recently reported on how billboard companies are teaming with telecom companies to have their billboards track individuals’ travel patterns, activities, and behaviors through their mobile phones. How tightly should that information be protected? And how does one know that sensitive information is being used and protected properly?

Software, given its invisibility, provides tempting opportunities for unethical behavior. Two recent examples are Volkswagen’s use of defeat-device software to cheat emissions tests and banks’ use of software that subtly reorders depositors’ withdrawals and deposits to increase the likelihood they incur overdraft charges. How should those involved in developing IT systems push back against unethical design requests from their superiors?
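To make the reordering tactic concrete, here is a minimal, purely hypothetical Python sketch (the $35 fee, the sample transactions, and the settle helper are invented for illustration, not taken from any bank’s actual system): posting a day’s debits largest-first instead of in the order they occurred turns one overdraft fee into three.

    OVERDRAFT_FEE = 35.00  # assumed flat per-item fee (illustrative)

    def settle(balance, transactions):
        """Apply transactions in the given order; count a fee each time a
        withdrawal leaves the balance below zero."""
        fees = 0
        for amount in transactions:  # negative = withdrawal, positive = deposit
            balance += amount
            if amount < 0 and balance < 0:
                fees += 1
        return balance - fees * OVERDRAFT_FEE, fees

    day = [-20, -15, -10, -90]  # one day's debits against a $100 balance

    _, fees_chronological = settle(100, day)          # processed as they occurred -> 1 fee
    _, fees_largest_first = settle(100, sorted(day))  # largest debit first -> 3 fees
    print(fees_chronological, fees_largest_first)     # 1 3

The same four debits produce one fee when posted chronologically but three when the largest debit is posted first, which is why the ordering choice matters ethically even though the math is trivial.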

An upcoming issue of Cutter IT Journal with Guest Editor Robert Charette is seeking articles on these and other ethical issues, dilemmas and conundrums that need to be addressed in light of both current and emerging information and computing system technologies ranging from autonomous vehicles to 3D printing to wearable/implanted health devices. Greater consideration will be given to article proposals that heed Sharkey’s admonishment, demonstrating hard thinking about the range of social, technical and economic issues involved.

Possible discussion points include those mentioned above, as well as, but not limited to, the following:

  • What is an “ethical algorithm” exactly? Is the term nonsensical, given that science and technology are generally viewed as ethically neutral? Is it really the developers of the algorithms that need to be ethical?
  • What, if any, are the ethical responsibilities of information technology investors, developers, architects, or purveyors? Do they go above what the law requires? For instance, how reliable should an IT system be? At what point does going live with a known buggy system become unethical?
  • Do professional codes of ethics, like those from the IEEE or ACM, have any impact on individual engineer or computer scientist behavior? How should engineers or computer scientists behave when asked to develop a system that raises ethical concerns?
  • Are there any real lasting consequences of perceived unethical behavior by tech companies or users of IT, like banks? In the case of Volkswagen, there appears to be, but is such a case the exception rather than the rule?
  • What types of data analytics are appropriate, and under which circumstances?
  • Should certain technologies, e.g., autonomous military weapon systems, be banned by governments? Would such a ban be effective?
  • Does concern over ethics slow down or inhibit IT innovation, especially if the innovation is not currently illegal? How does technology change the perception of what is ethical? For instance, submarines, aircraft, machine guns, and other military technologies were all once thought to be unethical applications of technology.
  • Do different emerging information systems and computing technologies pose different ethical questions? If so, how do they differ and how should they be addressed?

TO SUBMIT AN ARTICLE IDEA: March 24, 2016

Please respond to Robert Charette at charette[at]itabhil[dot]com, with a copy to cgenerali[at]cutter[dot]com, no later than March 24, 2016, and include an extended abstract and a short article outline showing major discussion points.

ARTICLE DEADLINE: April 29, 2016.

Accepted articles are due by April 29, 2016.


Christine Generali

Christine Generali is a Group Publisher for Cutter Consortium, responsible for the editorial direction and content management of Cutter's flagship publication, Cutter IT Journal.
