{"id":828,"date":"2021-06-05T18:42:13","date_gmt":"2021-06-05T22:42:13","guid":{"rendered":"https:\/\/www.macloo.com\/ai\/?p=828"},"modified":"2021-06-05T18:42:13","modified_gmt":"2021-06-05T22:42:13","slug":"book-notes-hello-world-by-hannah-fry","status":"publish","type":"post","link":"https:\/\/www.macloo.com\/ai\/2021\/06\/05\/book-notes-hello-world-by-hannah-fry\/","title":{"rendered":"Book notes: Hello World, by Hannah Fry"},"content":{"rendered":"\n<p>I finished reading this book back in April, and I&#8217;d like to revisit it before I read a couple of new books I just got. This was published in 2018, but that&#8217;s no detriment. The author, <a rel=\"noreferrer noopener\" href=\"https:\/\/hannahfry.co.uk\/\" target=\"_blank\">Hannah Fry<\/a>, is a &#8220;mathematician, science presenter and all-round badass,&#8221; according to her website. She&#8217;s also a professor at University College London. <a rel=\"noreferrer noopener\" href=\"https:\/\/www.ucl.ac.uk\/bartlett\/casa\/dr-hannah-fry\" target=\"_blank\">Her bio at UCL<\/a> says: &#8220;She was trained as a mathematician with a first degree in mathematics and theoretical physics, followed by a PhD in fluid dynamics.&#8221;<\/p>\n\n\n\n<p>The complete title, <em>Hello World: Being Human in the Age of Algorithms,<\/em> doesn&#8217;t sound like this is a book about artificial intelligence. She refers to control, and &#8220;the boundary between controller and controlled,&#8221; from the very first pages, and this reflects the link between &#8220;just&#8221; talking about algorithms and talking about AI. 
Software is made of algorithms, and AI is made of software, so there we go.<\/p>\n\n\n\n<p>In just over 200 pages and seven chapters simply titled Power, Data, Justice, Medicine, Cars, Crime, and Art, the author organizes the primary areas of concern for the question of &#8220;Are we in control?&#8221; and provides examples in each area.<\/p>\n\n\n\n<p><strong>Power.<\/strong> I felt disappointed when I saw this chapter starts with Deep Blue beating world chess champion Garry Kasparov in 1997 \u2014 but my spirits soon lifted as I saw how she framed this example: <em>the way we perceive a computer system<\/em> affects <strong>how we interact with it<\/strong> (shades of Sherry Turkle and Reeves &amp; Nass). She discusses machine learning and image recognition here, briefly. She talks about people <strong>trusting<\/strong> GPS map directions and search engines. She explains a 2012 ACLU lawsuit involving Medicaid assistance, bad code, and unwarranted trust in code. Intuition tells us when something seems &#8220;off,&#8221; and that&#8217;s a critical difference between us and the machines.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>Algorithms &#8220;are what makes computer science an actual science.&#8221;<\/p><cite>\u2014Hannah Fry, p. 8<\/cite><\/blockquote>\n\n\n\n<p><strong>Data.<\/strong> Sensibly, this chapter begins with Facebook and the devil&#8217;s bargain most of us have made in giving away our personal information. Fry talks about the first customer loyalty cards at supermarkets. The pregnant teenager\/Target story is told. In explaining how <strong>data brokers<\/strong> operate, Fry describes how companies buy access to you via your interests and your past behaviors (not only online). 
She summarizes <a rel=\"noreferrer noopener\" href=\"https:\/\/www.youtube.com\/watch?v=1nvYGi7-Lxo\" target=\"_blank\">a 2017 <small>DEFCON<\/small> presentation<\/a> that showed how supposedly anonymous browsing data is easily converted into real names, and the dastardly Cambridge Analytica exploit. I especially liked how she explains how <em>small <\/em>the effects of newsfeed manipulation are likely to be (based on research) and then adds \u2014 a small margin <em>might be enough to win an election<\/em>. This chapter wraps up with China&#8217;s citizen rating system (<em>Black Mirror<\/em> in reality) and the toothlessness of <a href=\"https:\/\/gdpr-info.eu\/\" target=\"_blank\" rel=\"noreferrer noopener\">GDPR<\/a>.<\/p>\n\n\n\n<p><strong>Justice.<\/strong> First up is inequality in sentences for crimes, using two U.K. examples. Fry then surveys studies where multiple judges ruled on the same hypothetical cases and inconsistencies abounded. Then the issues with sentencing guidelines (why judges need to be able to exercise discretion). So we arrive at calculating the probability that a person will &#8220;re-offend&#8221;: the <strong>risk assessment.<\/strong> Fry includes a nice, simple decision-tree graphic here. She neatly explains the idea of combining multiple decision trees into an <em>ensemble, <\/em>used to average the results of all the trees (the random forest algorithm is one example). More examples from research; the COMPAS product and <a rel=\"noreferrer noopener\" href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\" target=\"_blank\">the 2016 ProPublica investigation<\/a>. This leads to a really nice discussion of <strong>bias<\/strong> (pp. 65\u201371 in the U.S. paperback edition).<\/p>\n\n\n\n<p><strong>Medicine. 
<\/strong>Although image recognition was mentioned very briefly earlier, here Fry gets more deeply into the topic, starting off with the idea of <strong>pattern recognition<\/strong> \u2014 and <em>what<\/em> pattern, exactly, is being recognized? Classifying and detecting anomalies in biopsy slides doesn&#8217;t have perfect results when humans do it, so this is one of the promising frontiers for machine learning. Fry describes <strong>neural networks<\/strong> here. She gets into specifics about a system trained to detect breast cancer. But image recognition is not necessarily the killer app for medical diagnosis. Fry describes <strong>a study of 678 nuns<\/strong> (which previously I&#8217;d never heard about) in which it was learned that <em>essays<\/em> the nuns had written before taking vows could be used to predict which nuns would have dementia later in life. The idea is that an analysis of <em>more data <\/em>about women (not only their mammograms) could be <em>a better predictor<\/em> of malignancy. <\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>&#8220;Even when our detailed medical histories are stored in a single place (which they often aren&#8217;t), the data itself can take so many forms that it&#8217;s virtually impossible to connect &#8230; in a way that&#8217;s useful to an algorithm.&#8221;<\/p><cite>\u2014Hannah Fry, p. 103<\/cite><\/blockquote>\n\n\n\n<p>The Medicine chapter also mentions IBM Watson; challenges with labeling data; diabetic retinopathy; lack of coordination among hospitals, doctors&#8217; offices, etc., that leads to missed clues; privacy of medical records. Fry zeroes in on <strong>DNA data<\/strong> in particular, noting that all those &#8220;find your ancestors&#8221; companies now have a goldmine of data to work with. 
Fry ends with a caution about profit \u2014 whatever medical systems might be developed in the future, there will always be people who stand to gain and others who will lose.<\/p>\n\n\n\n<p><strong>Cars.<\/strong> I&#8217;m a little burnt out on the topic of self-driving cars, having already read a lot about them. I liked that Fry starts with <small>DARPA<\/small> and the U.S. military&#8217;s longstanding interest in autonomous vehicles. I <em>can&#8217;t<\/em> agree with her that &#8220;the future of transportation is driverless&#8221; (p. 115). After discussing LiDAR and the flaws of GPS and conflicting signals from different systems in one car, Fry takes a moment to explain <strong>Bayes&#8217; theorem,<\/strong> saying it &#8220;offers a systematic way to update your belief in a hypothesis on the basis of evidence,&#8221; and giving a nice real-world example of <em>probabilistic inference<\/em>. And of course, <a rel=\"noreferrer noopener\" href=\"https:\/\/www.macloo.com\/ai\/2020\/09\/11\/how-would-you-respond-to-the-trolley-problem\/\" target=\"_blank\">the trolley problem<\/a>. She brings up something I don&#8217;t recall seeing before: Humans are going to prank autonomous vehicles. That opens a whole &#8216;nother box of trouble. Her anecdote under the heading &#8220;The company baby&#8221; leads to a warning: Always flying on autopilot can have unintended consequences when the time comes to fly manually.<\/p>\n\n\n\n<p><strong>Crime.<\/strong> This chapter begins with a compelling anecdote, followed by a neat historical case from France in the 1820s, and then turns to <strong>predictive policing<\/strong> and all its woes. I hadn&#8217;t read about the balance between the <em>buffer zone<\/em> and <em>distance decay<\/em> in tracking serial criminals, so that was interesting \u2014 it&#8217;s called the <em>geoprofiling<\/em> algorithm. 
I also didn&#8217;t know about Jack Maple, a New York City police officer, and his &#8220;Charts of the Future&#8221; depicting stations of the city&#8217;s subway system, which evolved into a data tool named CompStat. I enjoyed learning what burglaries and earthquakes have in common. And then \u2014 PredPol. There have been thousands of articles about this since its debut in 2011, as Fry points out. Her summary of the issues related to <em>how police use<\/em> predictive policing data is quite good, compact and clear. PredPol is one specific product, and not the only one. It is, Fry says, &#8220;a proprietary algorithm, so the code isn&#8217;t available to the public and no one knows exactly how it works&#8221; (p. 157).<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>&#8220;The [PredPol] algorithm can&#8217;t actually tell the future. &#8230; It can only predict the risk of future events, not the events themselves \u2014 and that&#8217;s a subtle but important difference.&#8221;<\/p><cite>\u2014Hannah Fry, p. 153<\/cite><\/blockquote>\n\n\n\n<p><strong>Face recognition<\/strong> is covered in the Crime chapter, which makes perfect sense. Fry offers a case where a white man was arrested based on incorrect identification of him from CCTV footage at a bank robbery. The consequences of being the person arrested by police can be injury or death, as we all know \u2014 not to mention the legal expenses as you try to clear your name after the erroneous arrest. Even though accuracy rates are rising, the chances that you will match a face that isn&#8217;t yours remain worrying.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>&#8220;How do you decide on that trade-off between privacy and protection, fairness and safety?&#8221;<\/p><cite>\u2014Hannah Fry, p. 
172<\/cite><\/blockquote>\n\n\n\n<p><strong>Art.<\/strong> Here we have &#8220;a famous experiment&#8221; I&#8217;d never heard of \u2014 Music Lab, where thousands of music fans logged into a music player app, listened to songs, rated them, and chose what to download (back when we downloaded music). The results showed that for all but the very best and very worst songs, <em>the ratings by other people<\/em> had a huge influence on what was downloaded in different segments of the app. A song that became a massive hit in one &#8220;world&#8221; was dead and buried in another. This leads us to <strong>recommendation engines<\/strong> such as those used by Netflix and Amazon. Predicting how well movies would do at the box office turned out to be badly unreliable. The trouble is the lack of an objective measure of quality \u2014 it&#8217;s not &#8220;This is cancer\/This is not cancer.&#8221; Beauty in the eye of the beholder and all that. A recommendation engine is <em>different<\/em> because it&#8217;s not using a quality score \u2014 it&#8217;s matching similarity. You liked these 10 movies; I like eight of those; chances are I might like the other two.<\/p>\n\n\n\n<p>Fry goes on to discuss programs that create original (or seemingly original) works of art. A system may produce a new musical or visual composition, but it doesn&#8217;t come from any emotional basis. It doesn&#8217;t indicate a desire to communicate with others, to touch them in any way.<\/p>\n\n\n\n<p>In her Conclusion, Fry returns to the questions about bias, fairness, mistaken identity, privacy \u2014 and the idea of the control we give up when we <em>trust<\/em> the algorithms. People aren&#8217;t perfect, and neither are algorithms. Taking the human consequences of machine errors into account <em>at every stage<\/em> is a step toward <strong>accountability<\/strong>. 
Building in the capability to backtrack and explain decisions, predictions, and outputs is a step toward <strong>transparency<\/strong>.<\/p>\n\n\n\n<p>For details about <strong>categories of algorithms<\/strong> based on tasks they perform (prioritization, classification, association, filtering; rule-based vs. machine learning), see the Power chapter (pp. 8\u201313 in the U.S. paperback edition).<\/p>\n\n\n\n<p>.<\/p>\n\n\n\n<p><a rel=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/\"><img decoding=\"async\" alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https:\/\/i.creativecommons.org\/l\/by-nc-nd\/4.0\/88x31.png\"><\/a><br>\n<small><span xmlns:dct=\"http:\/\/purl.org\/dc\/terms\/\" property=\"dct:title\"><strong>AI in Media and Society<\/strong><\/span> by <span xmlns:cc=\"http:\/\/creativecommons.org\/ns#\" property=\"cc:attributionName\">Mindy McAdams<\/span> is licensed under a <a rel=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License<\/a>.<br>\nInclude the author&#8217;s name (Mindy McAdams) and a link to the original post in any reuse of this content.<\/small><\/p>\n\n\n\n<p>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I finished reading this book back in April, and I&#8217;d like to revisit it before I read a couple of new books I just got. This was published in 2018, but that&#8217;s no detriment. The author, Hannah Fry, is a &#8220;mathematician, science presenter and all-round badass,&#8221; according to her website. 
She&#8217;s also a professor at&hellip; <a class=\"more-link\" href=\"https:\/\/www.macloo.com\/ai\/2021\/06\/05\/book-notes-hello-world-by-hannah-fry\/\">Continue reading <span class=\"screen-reader-text\">Book notes: Hello World, by Hannah Fry<\/span> <span class=\"meta-nav\" aria-hidden=\"true\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[43,6],"tags":[20,84,158,162,21,71,159,161,57,160],"class_list":["post-828","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-ethics-and-bias","tag-art","tag-book","tag-crime","tag-dna","tag-face_recognition","tag-medicine","tag-predictive-policing","tag-privacy","tag-self-driving-cars","tag-sentencing"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/828","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/comments?post=828"}],"version-history":[{"count":10,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/828\/revisions"}],"predecessor-version":[{"id":851,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/posts\/828\/revisions\/851"}],"wp:attachment":[{"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/media?parent=828"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\
/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/categories?post=828"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.macloo.com\/ai\/wp-json\/wp\/v2\/tags?post=828"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}