Updates and New Exercises

(SB/TH: 09/14/2018)

Chapter 1: Unwrapping the Gift

Section 1.2.4: Medical researchers are studying ways to detect brain injuries, depression, and heart disease from a person’s speech. Eventually, they hope, a smartphone app could monitor such health conditions by analyzing the user’s speech.

Chapter 2: Privacy

Section 2.1.2: A risk of always-on, voice-operated home appliances: An Amazon Echo recorded a private family conversation and sent it to another person because the device misinterpreted words in the conversation as commands to record and send it.

Sections 2.1.2 and 2.2.1:  A data analytics company compiles data on 198 million voters in the U.S. to predict how they will vote.  The company mistakenly allowed the data to be publicly accessible. (June 2017)

Section 2.2.1:  Federal Communications Commission rules require that Internet Service Providers (e.g., AT&T, Comcast) get customer permission before selling sensitive information such as browsing history. This rule does not apply to information companies such as Google.

Exercise:  What factors should affect whether rules require prior permission: the type of company, the type of information, whether the service the company provides is free, or other factors?

Section 2.1.2: Many companies and researchers contract with Facebook to use member data.  Some use or transfer the data in ways that violate their contracts. (An example that received much media attention was Cambridge Analytica’s use of data in the 2016 presidential campaign.) Detecting abuse and enforcing such contracts are not easy.

After announcing that it had restricted access to user data by other companies, Facebook continued to provide personal information about users and their “friends” to dozens of companies.  Whether Facebook’s actions directly contradicted its statements or merely exploited loopholes and ambiguous wording, the continued release of data clearly represents a privacy risk (and creates distrust).

Section 2.1.3: We realize that our phone must know our location to give driving directions or to respond to our query for nearby restaurants.  If we allow use of our location for some apps but turn off Location History, does Google delete the location information after its intended use?  How clear should Google make the answer to this question?  The Associated Press reported that Google collects and stores records of where your mobile device has been even if you have Location History turned off.  Google points out that users are told about the location data collection, but it was a startling surprise to many. (Ryan Nakashima, “AP Exclusive: Google Tracks Your Movements, Like It or Not,” Associated Press, Aug. 13, 2018.)

Section 2.2.1: In 2017, Google stopped scanning Gmail messages to target ads to users.  Some free email providers (e.g., Yahoo) scan messages for ad targeting and some do not.  

Sections 2.2.1 and 2.2.2: Many people continue to argue that companies that profit from the personal data they collect from users and members should be required to pay for the data.  In Section 2.2.1 we observed that users get many free services in exchange for their data.  But how much are the data worth?  Here are two examples to consider.

Facebook has more than two billion users and annual profit of roughly $4 billion.  If Facebook used all of its profit to pay members (an average of $2 per year), do you think that would change how people think of Facebook’s use of their data?

Some car makers offer apps that collect data on a person’s driving habits. They pay the driver for the data with discounts on car accessories and service, and they sell the data to insurance companies.  Good drivers might also get a discount on insurance rates.  Is this a fair deal?

Sections 2.2.3 and 6.5: More threats from location tracking. Soldiers in Iraq posted pictures of new helicopters on social media; Iraqi insurgents found the photos, read the geotags to determine location, and destroyed some of the helicopters.  The Russian military tracked Ukrainian artillery units by tracking the soldiers’ cellphones.  

Sections 2.2 and 2.4.1:  The government of China intends to have a system in place by 2020 that will assign each person a social rating based on the person’s financial transactions, behavior in public and at work, and more.  Already, face-recognition technology installed along streets detects jaywalkers and displays their photos on large public screens.  (June 2017)

Exercise:  Are these appropriate uses of technology to improve people’s behavior?  How can they be misused?

Section 2.3:  An appeals court in Florida ruled that police must obtain a search warrant to get data from an automobile’s event data recorder, or “black box.” (State of Florida v. Charles Wiley Worsham Jr., March 2017).

Section 2.3.2:  If you give your laptop to Best Buy’s Geek Squad for repair and the technician finds anything on the device that might be illegal, he or she might inform the FBI and get paid for the tip.

Exercise: Based on current interpretation by courts, do we give up our Fourth Amendment rights when we hire someone to repair a device?  Is the current interpretation reasonable? Give arguments.

Exercise:  When an app asks for permission to access your calendar, how long will you think about your answer?  What are the risks?

Section 2.3.2: U.S. government access to data on foreign computers.

The Stored Communications Act of 1986 allows law enforcement agencies to obtain warrants for stored email, but the law said nothing about email stored in other countries.  In 2013, the government served a warrant demanding that Microsoft turn over a suspect’s email and other data. Microsoft provided data stored on computers in the U.S. but objected to providing data stored on servers in Ireland.  Microsoft (and others) raised two objections aside from the lack of clear authorization in the law.  One is that turning over the data could put the company in violation of privacy laws in the country where the data are stored.  The second is that oppressive governments could use the same principle to demand data stored in the U.S., for example, email of dissidents and human rights activists. Law enforcement agencies argued that restricting the warrants to the U.S. hindered investigations.  The Microsoft case reached the Supreme Court in 2018, but the Court did not rule on it because Congress passed the CLOUD Act (Clarifying Lawful Overseas Use of Data Act) to indicate that warrants against U.S. companies do apply to data stored overseas.  (The act allows companies to challenge warrants that conflict with privacy laws in other countries.)

Section 2.3.3 (p. 84): The Supreme Court ruled in 2018, in Carpenter v. U.S., that police need a search warrant to get a person’s location history data from the person’s cellphone service provider.

Section 2.3.3 (p. 84): Another example of a potentially noninvasive but deeply revealing technology: Researchers from MIT and Georgia Tech have developed a device that uses extremely high-frequency electromagnetic waves to penetrate several pieces of paper and read what is on them.  The technology might in the future help a paralyzed person read a book without opening it—or help outsiders read private documents.

Section 2.4.1: China’s surveillance system includes 170 million cameras; it plans to add 400 million more by 2020. In a demonstration of the camera and face-recognition system, government officials tracked down and “apprehended” a BBC reporter in seven minutes.  The system uses a database of photos from people’s identification cards.  A government official said they can tell where someone has been for the past week.  In addition to the fixed cameras, China developed face-recognition systems built into eyeglasses that police can use to screen crowds.

Section 2.4.1: Approximately 30 states in the U.S. allow police to run face-recognition software on their databases of driver’s license photos (in addition to their databases of mug shots—mostly photos of people previously arrested) to identify a suspect.  There is controversy about this secondary use of driver’s license photos.  Privacy advocates argue that this use puts the vast majority of innocent drivers at risk of mistaken identification as a criminal.  Police and prosecutors argue that some criminals have no prior record, hence no mug shots, and that people have no expectation of privacy for their driver’s license photo.  What other arguments can you think of for each side?  Which side is more persuasive to you?  Why?
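
To see why privacy advocates worry about mistaken identification, consider the base-rate arithmetic.  The sketch below (in Python) uses entirely hypothetical numbers, chosen only to illustrate the effect: when almost everyone in a searched database is innocent, even a very accurate system produces mostly false matches.

    # Base-rate arithmetic for face-recognition searches of license photos.
    # All numbers are hypothetical, chosen only to illustrate the effect.
    database_size = 30_000_000      # license photos searched (assumption)
    false_positive_rate = 0.0001    # 0.01% of innocent photos "match" (assumption)
    true_matches = 1                # at most one real suspect per search

    false_matches = database_size * false_positive_rate
    total_matches = false_matches + true_matches
    share_innocent = false_matches / total_matches

    print(f"Matches per search: {total_matches:,.0f}")
    print(f"Share of matches that are innocent people: {share_innocent:.2%}")

In practice, such systems return ranked lists of candidates rather than yes/no matches, but the arithmetic explains the concern.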

Section 2.4.1: National Geographic has an excellent article on surveillance, covering a variety of technologies and issues, at https://www.nationalgeographic.com/magazine/2018/02/surveillance-watching-you

Section 2.4.4: In spite of the well-known privacy and fraud risks of displaying Social Security numbers on identification cards, the Medicare system continued to do so until 2018.

Section 2.4.4: Children, who receive a Social Security number at birth, usually have no loans or credit card accounts and have excellent credit ratings.  Identity thieves thus have begun targeting children, stealing their SSNs and using other personal data to open numerous accounts, knowing that children and their parents are unlikely to check a child’s credit record and thus may remain unaware of the fraud for years.

Section 2.4.4: India’s national ID system, originally intended to reduce corruption and make government programs more efficient but extended to many other uses, has experienced both technical problems and numerous large data breaches.  A few examples: Some people in rural areas dependent on government subsidies could not buy basic necessities because an Internet connection (needed to verify their identity) was not available or because fingerprint readers did not recognize the worn prints of manual laborers.  A government-owned company inadvertently exposed data on half a billion people; numerous other incidents exposed data on millions.  Are there better ways to verify the identity of school children taking exams or receiving subsidized meals without relying on a complex, centralized system?

Section 2.4.4: “A softer, more invisible authoritarianism.”

In the U.S., we have credit scores that have a big impact on how easily we can borrow money.  China has a social credit score, dependent on its national ID system and based on a person’s bill-paying history, online speech (e.g., whether the person spreads rumors), level of education, the scores of the person’s friends, and much more. People with high scores get perks; those with low scores may be prevented from boarding airplanes or sending their children to good schools. The potential for authoritarian control is immense, as are the problems that can occur because of errors.  (The quote is from Mara Hvistendahl, “Inside China’s Vast New Experiment with Social Ranking,” Wired, Dec. 14, 2017.)

Section 2.5: Israeli researchers developed a method for determining whether a drone is capturing video of a person or site.  This may help protect privacy and security—and help technically sophisticated criminals and terrorists determine if they are under surveillance.

Section 2.7: The European Union’s new General Data Protection Regulation (GDPR) took effect in 2018 and adds many stringent new requirements for handling personal data.  It requires companies to get unambiguous, detailed consent for use of data, and it requires all companies that handle a large amount of personal data of EU citizens (whether the company is in the EU or elsewhere) to have a Data Protection Officer who is an expert on privacy law.  Fines for noncompliance can be very large.  Since the EU passed the GDPR in 2016, the legal and tech staffs of big firms, including Google and Facebook, have been working on making the necessary changes.  What are some of the trade-offs for increased privacy protection?  Forrester Research estimated compliance costs in the millions of dollars, a burdensome expense for small firms and start-ups.  Many businesses, including U.S. news sites, suspended access from Europe because of fears that they might not be in compliance. Some advertising technology companies shut down.  Longer-term effects are unclear (for example, whether the regulations will affect the amount of free material and services currently financed by advertising).

Chapter 3: Freedom of Speech

Section 3.1:  People who took selfies in voting booths, showing their voted ballots, discovered that they violated laws in some states against taking photos in polling places. Supporters of such laws argue that the laws protect against schemes in which people are pressured or paid to vote a particular way and prove it with a selfie.

Exercise:  Discuss pros and cons of such laws. Do they violate the First Amendment?

Section 3.3: A thoughtful essay on banning content, written by the CEO of a Web company who broke the company’s policy of content neutrality and canceled a white-supremacist site: www.realclearpolitics.com/2017/08/23/was_i_right_to_pull_the_plug_on_a_nazi_website_419144.html

Section 3.3: In its attempts to reduce hate speech, Facebook blocked posts from people who included or quoted racist emails or slurs they had received.  Do you think Facebook should block such examples or allow members to show and discuss them?

Section 3.3: Google and Facebook ban ads for cryptocurrencies.  Should they ban ads for butter, soft drinks, and doughnuts?

Section 3.3 (also Sections 4.3.3 and 7.1.1): To get an idea of how difficult it is to review content posted on the Net to find and remove objectionable material, false or deceptive material, and material that violates copyright, consider these data: Users upload an average of 300 hours of video to YouTube every minute.  Facebook users report more than a million instances of objectionable material each day.  
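
For a rough sense of the scale these numbers imply, here is a back-of-envelope calculation in Python; the reviewer-throughput figures are our assumptions for illustration, not industry data.

    # Rough scale of the content-review problem, using the figures above.
    # Reviewer throughput numbers are assumptions, for illustration only.
    HOURS_UPLOADED_PER_MINUTE = 300    # YouTube uploads (figure above)
    REPORTS_PER_DAY = 1_000_000        # Facebook user reports (figure above)

    hours_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24
    # Assumption: a reviewer watching video in real time covers at most
    # 8 hours of video in an 8-hour shift.
    reviewers_to_watch_everything = hours_uploaded_per_day / 8

    # Assumption: screening one user report takes 30 seconds.
    report_hours_per_day = REPORTS_PER_DAY * 30 / 3600
    reviewers_for_reports = report_hours_per_day / 8

    print(f"Video uploaded per day: {hours_uploaded_per_day:,} hours")
    print(f"Reviewers to watch it all: {reviewers_to_watch_everything:,.0f}")
    print(f"Reviewers for reports alone: {reviewers_for_reports:,.0f}")

Even under generous assumptions, complete human review is impractical, which helps explain companies’ reliance on automated filters and user reports.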

Section 3.6.1:  Aware that Chinese government censors screen text for politically sensitive content, people in China began sending images, for example, political cartoons, to communicate. Now censors use automated tools to almost instantaneously detect and delete sensitive images in transit in chat and messaging apps.
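
The censors’ actual tools are not public.  One standard building block for recognizing known images at scale, however, is perceptual hashing, which produces similar fingerprints for visually similar images.  Below is a minimal sketch of an “average hash” in Python using the Pillow imaging library; the filenames and match threshold are hypothetical.

    # Minimal "average hash" sketch using the Pillow imaging library.
    # Real filtering systems are far more robust (stronger hashes,
    # machine-learning classifiers, etc.); this only shows the idea.
    from PIL import Image

    def average_hash(path, size=8):
        """64-bit fingerprint: 1 where a pixel of an 8x8 grayscale
        thumbnail is brighter than the thumbnail's mean, else 0."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(h1, h2):
        """Number of differing bits; a small distance means similar images."""
        return bin(h1 ^ h2).count("1")

    # Hypothetical usage: flag a message image resembling a banned one.
    # banned = average_hash("banned_cartoon.png")
    # upload = average_hash("incoming_image.png")
    # if hamming_distance(banned, upload) <= 5:  # threshold is a tuning choice
    #     print("match: delete the image")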

Section 3.6.1: A new law in Vietnam requires online companies to remove content within 24 hours after a government request.  The law also requires companies based in other countries to keep user data for people in Vietnam on servers in Vietnam.  This requirement adds risk for people who express views the government disapproves of.

Section 3.6.2: In 2018, Google planned to reverse policy once more and operate a version of its search engine in China that complies with China’s increasingly restrictive censorship laws.  Google argues that partial access is better than no access.  China already has censored search engines, so critics see little benefit to the Chinese people and a big disadvantage in the perception that Google is legitimizing China’s censorship by complying with it.

Section 3.6.2: At the request of the Chinese government, Apple removed hundreds of VPN (virtual private network) apps from its app store in China.  Such apps help people access banned websites.

Section 3.7: The net neutrality principle, as currently promoted, applies to telecommunications companies and is supported by many large content-providing companies.  In Sections 3.3 and 7.1.1, we discuss attempts by content companies to restrict access to many kinds of information.  Are the content companies hypocritical in their advocacy of net neutrality?  If you think so, explain how their actions are similar to those they would prohibit to telecom companies.  If you think not, explain how their actions differ from access restrictions by telecom companies.

Section 3.7: Competitors allege that Google favors its own content and services in its search results; social media platforms try to reduce “fake news” and manage what their members see; Twitter reduced access to a controversial campaign ad from a member of Congress; and so on.  In Section 3.3 we described many issues related to decisions by search engine and content companies to ban certain material, and we indicated that such decisions are not easy to make. Why do we bring this up here?  The net neutrality principle says that telecommunications companies must treat all content the same but does not apply to very large content and social media companies.  What consistent arguments or principles can you think of that address this difference?

Amazon patented a process to examine data a shopper in a physical store sends over the store’s WiFi and, potentially, block the shopper from comparing prices online.  Should a store have the right to control use of its WiFi in this way?  Would it be hypocritical of Amazon to advocate net neutrality and use this technology in its stores?

Chapter 4: Intellectual Property

Section 4.2.4: A federal appeals court rejected the jury decision, reported in the textbook, in the Oracle America v. Google copyright infringement case about Google’s use of Java APIs.  The appeals court ruled that Google’s use was not fair use and sent the case back to determine how much Google should pay.  Google is appealing, so, as we said in the text, the final result is still unknown.

Section 4.6.1: In 2018, a jury raised the penalty to $539 million in the long-running lawsuit by Apple against Samsung for violating some of Apple’s smartphone patents.

Chapter 5: Crime and Security

Section 5.1: Just a few more recent examples: 

  • Hackers stole more than $800 million in cryptocurrencies in the first half of 2018.  

  • Hackers stole medical records of roughly one-quarter of Singapore’s population. 

Section 5.2.2: Three men responsible for the Mirai botnet attack in 2016 were caught and, in 2017, pleaded guilty to this and other botnet attacks.

Section 5.3.1: A new application of identity theft: When a federal agency is considering new regulations, it allows time for the public to post comments on a website.  Some people or groups have posted thousands of comments using other people’s names, email addresses, etc., on at least five federal agency websites.  (Posting fraudulent statements in this context is a felony.)

Section 5.3.2: Just as in the Target case we described in this section, hackers used common hacking tools, such as spear-phishing emails, to get into the networks of small companies and then used that access to get into highly sensitive systems—in this case, to gain control of U.S. electric utilities and the ability to shut them down. The hackers worked for a group sponsored by the Russian government, according to federal officials.

Section 5.3.4: Hacking by governments to steal information or cause disruption continues.  For example, the U.S. government charged nine Iranians with stealing data from hundreds of businesses and universities for the Iranian government over several years.  British intelligence agents said the Russian military was most likely responsible for the Petya worm that attacked companies around the world in 2017, hitting especially hard in Ukraine and causing hundreds of millions of dollars in losses.  

Section 5.3.4: Hackers gained control of an emergency shut-off system at a Saudi petrochemical plant in 2017.  They may have intended to cause an emergency necessitating a shut-down—and then prevent the shut-down, possibly leading to a catastrophe.  The sophistication of the attack suggests a government might be responsible for it, though the hackers have not been identified. Similar control systems exist in thousands of other large plants, water-treatment systems, etc.; their vulnerability continues to be a serious problem.

Section 5.4: Companies (e.g., Visa) are adding payment capabilities to home appliances and automobiles.

Exercise:  What are potential security risks? Does the convenience of shopping while driving or doing the laundry outweigh the risks?

Section 5.5.3: Security researchers developed an app for Amazon’s Alexa that appears to be a calculator but actually records conversations and sends them to the app’s creators.  (Amazon made changes to prevent this kind of attack.) (A. J. Dellinger, “Security Researchers Created a ‘Skill’ That Allows Alexa to Spy on You,” Gizmodo.com, Apr. 25, 2018)

Section 5.5.4: By 2018, companies were selling devices and services to law enforcement agencies to unlock iPhones; they exploit certain security loopholes.  Apple developed a new feature that might protect against intruders (legal or not) who use this approach.

Section 5.6.1: A lawsuit about public data on LinkedIn’s website illustrates another attempt to expand the application of the CFAA.  A company, hiQ Labs, collects and analyzes publicly available data on LinkedIn to predict whether specific people are likely to quit their jobs.  LinkedIn told hiQ Labs to stop accessing its site and argued that continued access by hiQ Labs violated the CFAA.  A federal judge allowed hiQ Labs to continue accessing public data on LinkedIn.  The judge said LinkedIn’s interpretation of the CFAA could allow access restrictions that Congress did not intend; for example, political campaigns might prohibit certain news media from accessing their sites.  (The judge went further and told LinkedIn to remove any technology it had put in place to block hiQ Labs’ access.  Whether LinkedIn has a right to block hiQ Labs is a separate issue.) The ruling is not final, and litigation may continue.  (Aug. 2017)

Section 5.7.1: The issues in this section are taking on additional significance as governments order search engines to globally block access to certain information.  Google’s appeal of France’s order to restrict searches globally went to the European Union’s Court of Justice (its highest court) in 2018; a decision is likely in early 2019.  In another case, Canada ordered Google to globally block search results that showed websites associated with a particular company.  If such orders are upheld and become accepted practice, they can become a powerful tool for oppressive governments.

Chapter 6: Work

Section 6.2: The overall unemployment rate (in 2018) was 3.8%, the lowest in almost 50 years.  The unemployment rate for women, 3.6%, was the lowest since 1953 (when there was almost no computer technology and the number of women working was far lower than today).  The unemployment rates for various ethnic minority groups and for adults without a high school diploma were near record lows.  These data support the argument that technology does not lead to massive unemployment.

Section 6.2.1: Perspectives on creating and eliminating jobs.  Business professor Scott Galloway says that Amazon’s efficiency results in the company doing the same amount of business as other retailers with half as many employees – and that this means tens of thousands of lost jobs.  A research firm says Amazon is responsible for almost a third of new jobs created in Seattle since 2010 (from both direct hiring and indirect effects).  In early 2017, Amazon had more than 350,000 employees (worldwide) and planned to add 100,000 full-time jobs in the U.S. in 2017-18.  (Oct. 2017)

Exercise: Is Galloway’s focus on lost retail jobs shortsighted because it ignores the jobs created (at Amazon and in other fields) when people save time and money buying things online?  Is focusing on the jobs Amazon creates shortsighted because it ignores lost jobs at other retailers?  On balance, does e-commerce create more jobs than it eliminates? 

Section 6.2.1: Endnote 10 should include the following citation (for the number of app industry jobs in 2016): Michael Mandel, “U.S. App Economy Jobs Update,” Progressive Policy Institute, May 3, 2017. 

More recently, Mandel describes the difficulties of counting e-commerce jobs: U.S. government figures count 2,640 e-commerce jobs in Kentucky, for example, but Amazon employs 12,000 people in the state. Mandel estimates that, when fully counted (including warehouse and fulfillment center jobs), e-commerce has added about 400,000 jobs in the U.S. in the past ten years, while the brick-and-mortar sector lost about 140,000 jobs.  Others dispute his estimates.

Section 6.2.1: In contract negotiations, the Teamsters union (representing drivers) asked United Parcel Service to agree not to use self-driving vehicles or drones to deliver packages.  Discussion exercise: Who benefits and who loses from such an agreement? Overall, is it a good idea?

Section 6.3.2: A delicious form of gig work is growing in Italy, providing advantages similar to those of the examples in the text and generating similar opposition.  It is called social dining: apps or online networking sites connect diners with cooks who prepare and serve meals in their own homes.  Such meals provide supplemental income for the cooks (mainly women; men typically run restaurant kitchens) and a pleasurable experience for the diners.  Home cooks are not subject to the health and safety rules that apply to restaurants; restaurants, regulators, and unions have been very critical of the phenomenon. A proposed law requires payment of taxes, a health certification, and insurance; it limits the number of meals a home cook can serve in a year and the amount he or she can earn. Which of these provisions are reasonable (give reasons), and which are not?  (Similar home-dining services are growing in other countries also.)

Sections 6.5 and 2.2.3:

Employers often set rules about use of social media and cellphones, and employees often ignore them. Here are examples where awareness of risks and setting (and following) good rules are important: Soldiers in Iraq posted pictures of new helicopters on social media; Iraqi insurgents found the photos, read the geotags to determine location, and destroyed some of the helicopters.  The Russian military tracked Ukrainian artillery units by tracking the soldiers’ cellphones.

Chapter 7: Evaluating and Controlling Technology

Section 7.1.1: Russian manipulation via social media.

A Russian organization paid for thousands of ads on Facebook about sensitive and divisive social and political issues in the two years before the 2016 U.S. presidential election.

Investigations by Facebook, Twitter, Instagram, other social media platforms, and a Congressional committee showed that Russian agents secretly used thousands of accounts, before and after the 2016 presidential election, to create or increase discord in the U.S. and to influence policy on major topics.   Some Russian accounts, pretending to be U.S. people or organizations, had hundreds of thousands of followers.  They promoted extremist views on both sides of divisive issues, using altered photos and inflammatory false statements.  They encouraged protests and rallies and, according to news reports, they provided funds to U.S. activists for protests and collected personal information. In one example, they encouraged a fitness instructor to train people in combat and to provide names and contact information of his students. 

The goal appears to be to weaken the U.S. overall.  Representative Adam Schiff, a member of the House Intelligence Committee, summarized it as follows: “Russia sought to divide us by our race, by our country of origin, by our religion, and by our political party.” (“Schiff Statement on Release of Facebook Advertisements,” U.S. House of Representatives Permanent Select Committee on Intelligence - Democrats, May 10, 2018)

The extensive and long-running campaign is undoubtedly continuing.  It is difficult to detect and eliminate fraudulent accounts and faked photos and videos, especially when so much of the content is copied all over the Internet—and it is difficult to eliminate intentionally false and manipulative information while protecting free and open debate on controversial issues.  The intentional Russian attack is a reminder for us, as individuals, to be skeptical, even when we see content on the Net that supports our point of view, and to ask ourselves who might be manipulating us – and who benefits.

Section 7.1.1: Fake news can be deadly.

In India in 2018, in several separate incidents, mobs beat and killed more than two dozen people after (false) rumors on social media claimed they were child kidnappers.

Section 7.1.1: Outlawing fake news. 

The Malaysian government passed a law that made malicious spreading of fake news punishable by six years in prison.  Critics of the law worried that the government would abuse it, in particular to threaten discussion of a major government financial scandal.  The first conviction under the law was for criticizing a government agency (not related to the financial scandal): The convicted man had posted a video saying the police had taken longer than they actually did to respond to a shooting.  Since the law can apply to people outside Malaysia if their writings are available in Malaysia, e.g., on the Internet, the issues of Section 5.7 are relevant here also.

A Lebanese tourist in Egypt posted a video in which she said she was sexually harassed by many men; she used profanity and made critical comments about Egypt and its president.  She was sentenced to eight years in prison for, among other charges, “deliberately broadcasting false rumors which aim to undermine society.”  (After much publicity about the case, an Egyptian court suspended her sentence.)  An Egyptian woman was arrested and charged with spreading false news and damaging public order, also for posting a video criticizing sexual harassment in Egypt.  These incidents occurred before Egypt passed a new law under which the government can prosecute journalists and people on social media for publishing fake news; the law does not define fake news. Journalists see the law as an attack on open discussion and freedom of the press.  

In India, it appears that both the ruling party and its critics encourage fake news sites that support their sides.  Government leaders label negative news stories about the government as fake news even if the stories are true.  When the government announced it would suspend the accreditation of journalists who published fake news, strong opposition from journalists led to withdrawal of the new policy. 

In light of these examples, how can we write laws against fake news that do not stifle debate or prevent criticism of politicians and governments?

Section 7.1.2: An article discussing the risk assessment program described in this section and its conflicts with due process: Frank Pasquale, “Secret Algorithms Threaten the Rule of Law,” MIT Technology Review, June 1, 2017, www.technologyreview.com/s/608011/secret-algorithms-threaten-the-rule-of-law

Chapter 8: Errors, Failures, and Risks

Section 8.1.3: In part because of fears of hacking, especially by foreign governments, many cities and states in the U.S. are switching from fully electronic voting machines back to paper ballots or systems that include a paper confirmation of votes.

Section 8.1.3: The Canadian government implemented a single payroll system to replace more than 100 separate systems that government agencies previously used.  The seriously flawed new system mishandled pay for more than half of the government’s employees: it overpaid some, underpaid others, and, for months, did not pay some at all. The estimate for fixing the system is roughly twice its original cost.

Section 8.4.1: After severe hurricanes, fires, and an earthquake in 2017, large areas were without cellphone connections and electricity for long periods. What preparations do you think individuals (e.g., yourself) should make for such events? To what extent is our reliance on cellphones, credit-card and phone-based payment systems, etc., a problem?  

Chapter 9: Professional Ethics and Responsibilities

Section 9.2.1: Volkswagen’s fraudulent emissions-testing software has cost the company nearly $35 billion in fines, legal fees, and payments to customers. The former CEO of VW was indicted and the CEO of Audi arrested as the investigation continued.  Aside from the obvious ethical problems with so massive a fraud, it is surprising that people planning or participating in such schemes convince themselves that no one will find out.  If you are in a professional situation where you are considering doing something unethical, it might be helpful to write a short news article, dated a year in the future, describing discovery of the activity, its consequences, and your own arrest.

Section 9.2.3 (box on p. 471): In another example of testing software on insufficient sets of data, researchers found that several face-recognition systems produced by prominent companies were much more likely to give incorrect results for people with dark skin and for women.  (What are some of the problems such errors could cause for the affected people?) After negative publicity, some of the companies quickly trained their systems on more diverse data sets and decreased the error rates substantially. 
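
One practical lesson for testing is to report error rates for each subgroup separately, since a single overall accuracy figure can hide large disparities.  Here is a minimal sketch in Python, with fabricated results purely for illustration:

    # Disaggregated evaluation: compute error rates per subgroup instead
    # of one overall rate. These records are fabricated for illustration;
    # real audits use large labeled test sets.
    from collections import defaultdict

    # (subgroup, prediction_was_correct) pairs from a hypothetical test run
    results = [
        ("lighter-skinned men", True), ("lighter-skinned men", True),
        ("lighter-skinned men", True), ("lighter-skinned men", False),
        ("darker-skinned women", True), ("darker-skinned women", False),
        ("darker-skinned women", False), ("darker-skinned women", False),
    ]

    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1

    overall = sum(errors.values()) / len(results)
    print(f"Overall error rate: {overall:.0%}")   # one number hides the gap
    for group in totals:
        print(f"{group}: {errors[group] / totals[group]:.0%} error rate")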

Term paper topics

Chapter 2: Privacy

Protecting privacy in Big Data.  Companies such as Apple, Google, and Facebook collect and analyze huge amounts of data about users of their products and services.  Although they use anonymous data in many situations, it is relatively easy to identify individuals from large sets of data (as we saw in Section 2.1.2).  Give examples of re-identification (or “de-anonymization”).  Describe techniques companies are using to help maintain anonymity.
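
One family of such techniques is differential privacy, which Apple and Google have deployed in some of their products.  The classic illustration is randomized response, sketched below in Python: each individual answer is randomized and therefore deniable, yet the aggregate proportion can still be estimated.

    # Randomized response: the classic illustration of differential privacy.
    # Each user randomizes his or her answer, so no individual response can
    # be trusted, yet the population rate can still be estimated.
    import random

    def randomized_answer(truth: bool) -> bool:
        if random.random() < 0.5:       # first coin flip: tell the truth
            return truth
        return random.random() < 0.5    # second flip: answer at random

    random.seed(1)
    true_rate = 0.30                    # 30% have the sensitive attribute
    answers = [randomized_answer(random.random() < true_rate)
               for _ in range(100_000)]

    # Observed "yes" rate = 0.5 * true_rate + 0.25, so invert it:
    observed = sum(answers) / len(answers)
    estimate = (observed - 0.25) / 0.5
    print(f"Estimated rate: {estimate:.3f} (true rate: {true_rate})")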