{"id":292,"date":"2020-09-11T09:00:00","date_gmt":"2020-09-11T13:00:00","guid":{"rendered":"https:\/\/www.macloo.com\/ai\/?p=292"},"modified":"2020-09-18T12:00:38","modified_gmt":"2020-09-18T16:00:38","slug":"how-would-you-respond-to-the-trolley-problem","status":"publish","type":"post","link":"https:\/\/www.macloo.com\/ai\/2020\/09\/11\/how-would-you-respond-to-the-trolley-problem\/","title":{"rendered":"How would you respond to the trolley problem?"},"content":{"rendered":"\n<p>MIT has a cool and easy-to-play game (okay, not <em>really<\/em> a game, but <em>like<\/em> a game) in which you get to choose what a self-driving car would do when facing an imminent crash situation.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"942\" src=\"https:\/\/www.macloo.com\/ai\/wp-content\/uploads\/2020\/09\/mit_self_driving_test.png\" alt=\"\" class=\"wp-image-293\" srcset=\"https:\/\/www.macloo.com\/ai\/wp-content\/uploads\/2020\/09\/mit_self_driving_test.png 1024w, https:\/\/www.macloo.com\/ai\/wp-content\/uploads\/2020\/09\/mit_self_driving_test-300x276.png 300w, https:\/\/www.macloo.com\/ai\/wp-content\/uploads\/2020\/09\/mit_self_driving_test-768x707.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption><em>Above: Results from one round of playing the MoralMachine<\/em><\/figcaption><\/figure>\n\n\n\n<p>At the end of one round, you get to see how your moral choices measure up to those of other people who have played. Note that each drawing of a person in the game has a distinct meaning, and people inside the car are represented as well. <a rel=\"noreferrer noopener\" href=\"https:\/\/www.moralmachine.net\/\" target=\"_blank\">Try it yourself here.<\/a><\/p>\n\n\n\n<p>The split-second decision about who lives and who dies is often described as one of the <strong>most difficult aspects<\/strong> of training an autonomous vehicle. 
<\/p>\n\n\n\n<p>Imagine this scenario:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>&#8220;The car is programmed to sacrifice the driver and the occupants to preserve the lives of bystanders. Would you get into that car with your child?&#8221;<\/p><cite>\u2014Meredith Broussard, The Atlantic, 2018<\/cite><\/blockquote>\n\n\n\n<p>In a 2018 article, <a rel=\"noreferrer noopener\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2018\/03\/uber-self-driving-fatality-arizona\/556001\/\" target=\"_blank\">Self-Driving Cars Still Don&#8217;t Know How to See<\/a>, data journalist and professor Meredith Broussard tackled this question head-on. It turns out that <strong>the way the question is asked<\/strong> elicits different answers. If you say the driver might die or be injured if a child in the street is saved, people tend to respond: Save the child! But if someone says, &#8220;<em>You<\/em> are the driver,&#8221; the response tends to be: Save <em>me<\/em>.<\/p>\n\n\n\n<p>You can see the conundrum. When programming the responses into the self-driving car, there&#8217;s not a lot of room for fine-grained moral reasoning. The car is going to decide in terms of (a) Is a crash imminent? (b) What options exist? (c) Does any option endanger the car&#8217;s occupants? (d) Does any option endanger other humans?<\/p>\n\n\n\n<p>In previous posts, I&#8217;ve written a little about the weights and probability calculations used in AI algorithms. For the machine, this all comes down to math. If (a) is True, then what options are possible? Each option has a weight. The largest weight wins. 
The prediction of the &#8220;best outcome&#8221; is based on probabilities.<\/p>\n\n\n\n<p><a rel=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/\"><img decoding=\"async\" alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https:\/\/i.creativecommons.org\/l\/by-nc-nd\/4.0\/88x31.png\"><\/a><br>\n
<small><span xmlns:dct=\"http:\/\/purl.org\/dc\/terms\/\" property=\"dct:title\"><strong>AI in Media and Society<\/strong><\/span> by <span xmlns:cc=\"http:\/\/creativecommons.org\/ns#\" property=\"cc:attributionName\">Mindy McAdams<\/span> is licensed under a <a rel=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License<\/a>.<br>\n
Include the author&#8217;s name (Mindy McAdams) and a link to the original post in any reuse of this content.<\/small><\/p>\n","protected":false},"excerpt":{"rendered":"<p>MIT has a cool and easy-to-play game (okay, not really a game, but like a game) in which you get to choose what a self-driving car would do when facing an imminent crash situation. 
At the end of one round, you get to see how your moral choices measure up to those of other people&hellip; <a class=\"more-link\" href=\"https:\/\/www.macloo.com\/ai\/2020\/09\/11\/how-would-you-respond-to-the-trolley-problem\/\">Continue reading <span class=\"screen-reader-text\">How would you respond to the trolley problem?<\/span> <span class=\"meta-nav\" aria-hidden=\"true\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[43,6],"tags":[79,58,57],"class_list":["post-292","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-ethics-and-bias","tag-faf","tag-morality","tag-self-driving-cars"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/292","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/comments?post=292"}],"version-history":[{"count":7,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/292\/revisions"}],"predecessor-version":[{"id":300,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/292\/revisions\/300"}],"wp:attachment":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/media?parent=292"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/categories?post=29
2"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/tags?post=292"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}