Updates and New Exercises

Chapter 1: Unwrapping the Gift

Section 1.2.4: Medical researchers are studying ways to detect brain injuries, depression, and heart disease from a person’s speech. Eventually, they hope, a smartphone app could monitor such health conditions by analyzing the user’s speech.

Chapter 2: Privacy

Section 2.1.2: A risk of always-on voice-operated home appliances: An Amazon Echo recorded a private family conversation and sent the recording to one of the family’s contacts because the device misinterpreted words in the conversation as commands to record and send it.

Sections 2.1.2 and 2.2.1:  A data analytics company compiled data on 198 million U.S. voters to predict how they will vote.  The company mistakenly left the data publicly accessible. (June 2017)

Section 2.2.1:  Federal Communications Commission rules require that Internet Service Providers (e.g., AT&T, Comcast) get customer permission before selling sensitive information such as browsing history. This rule does not apply to information companies such as Google.

Exercise:  What factors should affect whether rules require prior permission: the type of company, the type of information, whether the service the company provides is free, or other factors?

Section 2.1.2: Many companies and researchers contract with Facebook to use member data.  Some use or transfer the data in ways that violate their contracts. (An example that received much media attention was Cambridge Analytica’s use of data in the 2016 presidential campaign.) Detection of abuse and enforcement of such contracts are not easy.

After announcing that it restricted access to user data by other companies, Facebook continued to provide personal information about users and their “friends” to dozens of companies.  Whether its actions conflicted with its statements or depended on loopholes or interpretations of wording, it is clear that the continued release of data represents a privacy risk (and creates distrust).

Section 2.1.3: We realize that our phone must know our location to give driving directions or to respond to our query for nearby restaurants.  If we allow use of our location for some apps but turn off Location History, does Google delete the location information after its intended use?   How clear should Google make the answer to this question? The Associated Press reported that Google collects and stores records of the locations your mobile device has visited even if Location History is turned off.  Google points out that the user is told about the location data collection, but it was a startling surprise to many. (Ryan Nakashima, “AP Exclusive: Google Tracks Your Movements, Like It or Not,” Associated Press, Aug. 13, 2018.)

Section 2.2.1: In 2017, Google stopped scanning Gmail messages to target ads to users.  Some free email providers (e.g., Yahoo) scan messages for ad targeting and some do not.  

Section 2.2.1: Targeted marketing and targeted discrimination.

As we indicated, the vast amount of personal data available to marketers allows them to display ads for people most likely to be interested in the products and services offered—and avoid pestering people with ads they do not want. A company selling baby products might exclude women who have just had a miscarriage; such ads might cause additional heartache. A company offering extreme outdoor adventures might choose not to show its ads to people in wheelchairs, on the assumption that they do not meet the physical requirements of the excursion.

Exercise: What if a company that rents apartments excludes people who have children or people in wheelchairs? 

  • Some advertisers on social media platforms use the target selection tools to exclude certain categories of people, in some cases in violation of laws. What are useful ways to handle this problem? 

  • Should social media companies be held liable for illegal discrimination by advertisers on their platforms?

Sections 2.2.1 and 2.2.2: Many people continue to argue that companies that profit from the personal data they collect from users and members should be required to pay for the data.  In Section 2.2.1 we observed that users get many free services in exchange for their data.  But how much are the data worth?  Here are two examples to consider.

Facebook has more than two billion users and annual profit of roughly $4 billion.  If Facebook used all of its profit to pay members (an average of $2 per year), do you think that would change how people think of Facebook’s use of their data?

Some car makers offer apps that collect data on a person’s driving habits. They pay the driver for the data with discounts on car accessories and service, and they sell the data to insurance companies.  Good drivers might also get a discount on insurance rates.  Is this a fair deal?

Section 2.2: How little we know. 

The following incidents illustrate how little we know about the devices and apps we use and the privacy threats they may pose.  

Several companies that provide virtual assistant software and devices (e.g., Amazon, Google) record some conversations between users and the devices. Employees of the companies (or companies they contract with) listen to the recordings to improve the software.  

A variety of popular smartphone apps that collect sensitive personal information, such as a woman’s menstrual cycle, sent the data they collected to Facebook.  Often the apps do not clearly inform users, and in some cases, sending the data violates Facebook policies.  But it was done.  (Within a few days of a news report about this practice, several of the apps mentioned in the report stopped sending data.)  

A Google home security device contains a microphone, but Google did not tell users it was there and did not list it in descriptions of the device.  Google said it included the microphone so that it could be used by new system features in the future. The company also said the microphone was turned off by default and could be turned on only by the user.  (Hackers have turned on surveillance devices in other products.) 

Sections 2.2.3 and 6.5: More threats from location tracking. Soldiers in Iraq posted pictures of new helicopters on social media; Iraqi insurgents found the photos, read the geotags to determine location, and destroyed some of the helicopters.  The Russian military tracked Ukrainian artillery units by tracking the soldiers’ cellphones.  
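
To see how little effort reading a geotag takes, here is a minimal Python sketch using the Pillow imaging library. (The file name is hypothetical; attackers automate this kind of extraction across thousands of photos.)

    # Minimal sketch: extract GPS coordinates from a photo's EXIF metadata.
    # Requires the Pillow library. The file name is hypothetical.
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    def to_degrees(dms):
        """Convert an EXIF (degrees, minutes, seconds) triple to a decimal."""
        d, m, s = dms
        return float(d) + float(m) / 60.0 + float(s) / 3600.0

    exif = Image.open("helicopter_photo.jpg")._getexif() or {}
    gps_raw = exif.get(34853)  # EXIF tag 34853 holds the GPS info block

    if gps_raw:
        gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
        lat = to_degrees(gps["GPSLatitude"])
        lon = to_degrees(gps["GPSLongitude"])
        if gps.get("GPSLatitudeRef") == "S":
            lat = -lat
        if gps.get("GPSLongitudeRef") == "W":
            lon = -lon
        print(f"Photo taken at latitude {lat:.5f}, longitude {lon:.5f}")
    else:
        print("No geotag found")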

Sections 2.2 and 2.4.1:  The government of China intends to have a system in place by 2020 that will assign each person a social rating based on the person’s financial transactions, behavior in public and at work, and other factors.  Already, face-recognition technology installed along streets detects jaywalkers and displays their photos on large public screens.  (June 2017)

Exercise:  Are these appropriate uses of technology to improve people’s behavior?  How can they be misused?

Section 2.3:  An appeals court in Florida ruled that police must obtain a search warrant to get data from an automobile’s event data recorder, or “black box.” (State of Florida v. Charles Wiley Worsham Jr., March 2017).

Section 2.3.2:  If you give your laptop to Best Buy’s Geek Squad for repair and the technician finds anything on the device that might be illegal, he or she might inform the FBI and get paid for the tip.

Exercise: Based on current interpretation by courts, do we give up our Fourth Amendment rights when we hire someone to repair a device?  Is the current interpretation reasonable? Give arguments.

Exercise:  When an app asks for permission to access your calendar, how long will you think about your answer?  What are the risks?

Section 2.3.2: U.S. government access to data on foreign computers.

The Stored Communications Act of 1986 allows law enforcement agencies to obtain warrants for stored email, but the law said nothing about email stored in other countries.  In 2013, the government served a warrant demanding that Microsoft turn over a suspect’s email and other data. Microsoft provided data stored on computers in the U.S. but objected to providing data stored on servers in Ireland.  Microsoft (and others) raised two objections aside from the lack of clear authorization in the law.  One is that turning over the data could put the company in violation of privacy laws in the country where the data are stored.  The second is that oppressive governments could use the same principle to demand data stored in the U.S., for example, email of dissidents and human rights activists. Law enforcement agencies argued that restricting the warrants to the U.S. hindered investigations.  The Microsoft case reached the Supreme Court in 2018, but the Court did not rule on it because Congress passed the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which makes clear that warrants against U.S. companies do apply to data stored overseas.  (The act allows companies to challenge warrants that conflict with privacy laws in other countries.)

Section 2.3.3 (p. 84): The Supreme Court ruled in 2018, in Carpenter v. U.S., that police need a search warrant to get a person’s location history data from the person’s cellphone service provider.

Section 2.3.3 (p. 84): Another example of potential noninvasive but deeply revealing technology: Researchers from MIT and Georgia Tech have developed a device that uses extremely high-frequency electromagnetic waves to penetrate several pieces of paper and read what is on them.  The technology might in the future help a paralyzed person read a book without opening it—or help outsiders read private documents. 

Section 2.4.1: China’s surveillance system includes 170 million cameras; it plans to add 400 million more by 2020. In a demonstration of the camera and face-recognition system, government officials tracked down and “apprehended” a BBC reporter in seven minutes.  The system uses a database of photos from people’s identification cards.  A government official said they can tell where someone has been for the past week.  In addition to the fixed cameras, China developed face-recognition systems built into eyeglasses that police can use to screen crowds.

Section 2.4.1: Approximately 30 states in the U.S. allow police to run face-recognition software on their databases of driver’s license photos (in addition to their databases of mug shots—mostly photos of people previously arrested) to identify a suspect.  There is controversy about this secondary use of driver’s license photos.  Privacy advocates argue that this use puts the vast majority of innocent drivers at risk of mistaken identification as a criminal.  Police and prosecutors argue that some criminals have no prior record, hence no mug shots, and that people have no expectation of privacy for their driver’s license photo.  What other arguments can you think of for each side?  Which side is more persuasive to you?  Why?

Section 2.4.1: National Geographic has an excellent article on surveillance, covering a variety of technologies and issues, at https://www.nationalgeographic.com/magazine/2018/02/surveillance-watching-you

Section 2.4.1: Reading emotions.

Some advanced face-recognition systems are designed to determine the emotions of the people targeted.  One company, with a database of four billion images of faces of people from dozens of countries, developed an application for cameras in vehicles to detect when a driver is distracted or sleepy.  A high school in China uses a similar technology to scan students in classrooms (every 30 seconds) and classify their facial expressions. Clearly, this kind of technology can save lives—or be used to detect people whose faces show displeasure at speeches by government leaders and so forth. Such intrusive surveillance of emotional reactions can become a tool to suppress not only dissent but also the very humanity of a population required to always censor their own facial expressions. We return, as we do frequently, to the question: How do we keep a technology with valuable life-saving and life-enhancing applications from being misused?

Section 2.4.2: Database abuse continues.

A female police officer learned that her driver’s license record had been accessed almost 1000 times over several years—and more than half of the accesses were by fellow police officers, some in the middle of the night.  The accesses were not authorized and violated state law.

Section 2.4.4: In spite of the well-known privacy and fraud risks of displaying Social Security numbers on identification cards, the Medicare system continued to do so until 2018.

Section 2.4.4: Children, who receive a Social Security number at birth, usually have no loans or credit card accounts and have unblemished credit records.  Identity thieves thus have begun targeting children, stealing their SSNs and using other personal data to open numerous accounts, knowing that the children and their parents are unlikely to check a child’s credit record, so the fraud can go undetected for years.

Section 2.4.4: India’s national ID system, originally intended to reduce corruption and make government programs more efficient but extended to many other uses, has experienced both technical problems and numerous large data breaches.  A few examples: Some people in rural areas dependent on government subsidies could not buy basic necessities because an Internet connection (needed to verify their identity) was not available or because fingerprint readers did not recognize the worn fingerprints of manual laborers.  A government-owned company inadvertently exposed data on half a billion people; numerous other incidents exposed data on millions.  Are there better ways to verify the identity of school children taking exams or receiving subsidized meals without relying on a complex, centralized system?

Section 2.4.4: “A softer, more invisible authoritarianism.”

In the U.S., we have credit scores that have a big impact on how easily we can borrow money.  China has a social credit score, dependent on its national ID system, and based on a person’s bill-paying history—and their online speech (e.g., whether they spread rumors), level of education, the scores of the person’s friends, and much more. People with high scores get perks; those with low scores may be prevented from boarding airplanes or sending their children to good schools. The potential for authoritarian control is immense, as are problems that can occur because of errors.  (The quote is from Mara Hvistendahl, “Inside China’s Vast New Experiment with Social Ranking,” Wired, Dec. 14, 2017.)  

Section 2.5: Israeli researchers developed a method for determining whether a drone is capturing video of a person or site.  This may help protect privacy and security—and help technically sophisticated criminals and terrorists determine if they are under surveillance.

Section 2.7: The European Union’s new General Data Protection Regulation (GDPR) took effect in 2018 and adds many stringent new requirements for handling personal data.  It requires companies to get unambiguous, detailed consent for use of data, and it requires all companies that handle a large amount of personal data of EU citizens (whether the company is in the EU or elsewhere) to have a Data Privacy Officer who is an expert on privacy law.  Fines for noncompliance can be very large.  Since the EU passed the GDPR in 2016, the legal and tech staffs of big firms, including Google and Facebook, have been working on making necessary changes.  What are some of the trade-offs for increased privacy protection?  Forrester Research estimated compliance costs in millions of dollars, a burdensome expense for small firms and start-ups.  Many businesses, including U.S. news sites, suspended access from Europe because of fears that they might not be in compliance. Some advertising technology companies shut down.  Longer term effects are unclear (for example, whether the regulations will affect the amount of free material and services currently financed by advertising).

Chapter 3: Freedom of Speech

Section 3.1:  People who took selfies in voting booths, showing their voted ballots, discovered that they violated laws in some states against taking photos in polling places. Supporters of such laws argue that the laws protect against schemes in which people are pressured or paid to vote a particular way and to prove it with a selfie.

Exercise:  Discuss pros and cons of such laws. Do they violate the First Amendment?

Section 3.2.4: A method for thwarting spammers and a method for defeating it.

A captcha is a means of distinguishing a human from a computer program. A very common one is to require the user to read a sequence of letters distorted into a wavy pattern. Email services, for example, use such captchas to prevent spammers from generating thousands of accounts for sending spam.  But spammers have developed programs to extract the captcha image and send it to a person in a country where wages are very low. That person reads the pattern and sends back the correct sequence of letters.  It all happens quickly enough for the spammers to thwart the purpose of the captchas and open a huge number of accounts.  A cybersecurity researcher found that spammers pay less than a dollar per 1000 captchas read.
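
A quick back-of-the-envelope calculation shows why the economics favor the spammers. (The price is the rate reported above; the campaign size is a hypothetical illustration.)

    # Rough arithmetic on the captcha-relay business model.
    price_per_1000 = 1.00      # spammers pay under $1 per 1,000 captchas solved
    accounts_wanted = 100_000  # hypothetical spam campaign

    cost = accounts_wanted / 1000 * price_per_1000
    print(f"Solving {accounts_wanted:,} captchas costs about ${cost:,.2f}")
    # About $100 buys 100,000 accounts: once cheap human labor is plugged
    # into the loop, the captcha is a speed bump, not a wall.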

Section 3.3: A thoughtful essay on banning content, written by the CEO of a Web company that broke its own policy of content neutrality and canceled service to a white-supremacist site: www.realclearpolitics.com/2017/08/23/was_i_right_to_pull_the_plug_on_a_nazi_website_419144.html

Section 3.3: In its attempts to reduce hate speech, Facebook blocked posts from people who included or quoted racist emails or slurs they had received.  Do you think Facebook should block such examples or allow members to show and discuss them?

Section 3.3: Google and Facebook ban ads for cryptocurrencies.  Should they ban ads for butter, soft drinks, and doughnuts?

Section 3.3 (also Sections 4.3.3 and 7.1.1): To get an idea of how difficult it is to review content posted on the Net to find and remove objectionable material, false or deceptive material, and material that violates copyright, consider these data: Users upload an average of 300 hours of video to YouTube every minute.  Facebook users report more than a million instances of objectionable material each day. 
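
A little arithmetic with the YouTube figure makes the scale vivid. (The eight-hour shift and normal viewing speed are illustrative assumptions.)

    # Rough arithmetic on the scale of content review, using the figure above.
    hours_uploaded_per_minute = 300
    hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24
    print(f"{hours_uploaded_per_day:,} hours of video uploaded per day")  # 432,000

    # If each reviewer watched video at normal speed for an 8-hour shift:
    reviewers_needed = hours_uploaded_per_day / 8
    print(f"Roughly {reviewers_needed:,.0f} full-time reviewers just to watch it all")
    # About 54,000 people per day, for one platform, before judging anything.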

Section 3.3:  “Facebook is regulating more human speech than any government does now or ever has.”   -- Susan Benesch, director of the Dangerous Speech Project  

We give a few examples to illustrate difficulties and questions concerning attempts to remove or restrict access to offensive content and false information on the Internet.

News organizations published descriptions, based on Facebook internal documents, of the company’s guidelines for determining what offensive content to censor.  The examples (many of which are very offensive) and the criteria for distinguishing acceptable from unacceptable content illustrate the difficulties of attempts to eliminate some speech while protecting free expression.  The hundreds of rules include, for example, the distinction that calling certain groups of people “filthy” is not hate speech, but calling them “filth” is. “Dehumanizing” generalizations are hate speech, but “degrading” ones are not. With such fine, subjective distinctions, can we expect the censors to apply such rules uniformly and fairly?

Pinterest and Facebook block certain information related to vaccinations because a lot of the negative and critical information stems from a paper that used false data and was later withdrawn. Some social media sites block searches for what they consider dubious cancer treatments. While we agree about the characterization of much of the blocked material as “dubious” or “misinformation,” these examples illustrate the power of social media and search platforms to limit access to minority opinions. Scientists point out that a large percentage of the results published in scientific papers a few decades ago are now considered incorrect, and it is likely that a large portion of results published now will later be reversed. How should the censoring companies make decisions about blocking controversial health-related information?  Is it likely that the decision makers can resist the social and political pressure to restrict access to views unpopular at a particular time?

Is it appropriate for companies with as much content control as Facebook and Google, for example, to base content decisions on their political or social views?  In testimony at the U.S. Senate, Facebook CEO Mark Zuckerberg said that Silicon Valley is “an extremely left-leaning place.”  If their policy is to be fair to all points of view, what processes could be useful in helping to overcome the biases of the staff?  (The quote from Zuckerberg appears in “Transcript of Mark Zuckerberg’s Senate Hearing,” Washington Post, April 10, 2018.)

Nadine Strossen, former president of the American Civil Liberties Union, wrote, “In light of the enormous power of … online intermediaries either to facilitate or stifle the free exchange of ideas and information, I would urge that, except in unusual circumstances, they should permit all expression that the First Amendment shields from government censorship.”  Do you agree?  Or, in spite of the difficulties and the fact that “too often we get it wrong” (as a Facebook vice president said), should social media and search companies continue to struggle with the difficult questions and remove offensive content that the First Amendment protects from government censorship?  Why, or why not?  (The quote from Strossen is in Hate: Why We Should Resist It with Free Speech, Not Censorship, Oxford University Press, 2018.  The Facebook quote is from Richard Allan, “Hard Questions, …” June 27, 2017, https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/.)

Section 3.6.1:  Aware that Chinese government censors screen text for politically sensitive content, people in China began sending images, for example, political cartoons, to communicate. Now censors use automated tools to almost instantaneously detect and delete sensitive images in transit in chat and messaging apps.
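
We do not know the specific techniques the censors use, but a perceptual hash, a standard building block of automated image matching, suggests how near-instantaneous detection of known images is possible. Here is a minimal sketch of an “average hash” in Python with the Pillow library; production systems are far more sophisticated.

    # Minimal sketch of a perceptual "average hash" for matching near-duplicate
    # images. A generic illustration, not the censors' actual (unknown) tools.
    from PIL import Image

    def average_hash(path, size=8):
        """Reduce an image to a 64-bit fingerprint that survives resizing."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if p > avg else "0" for p in pixels), 2)

    def hamming_distance(h1, h2):
        """Count differing bits; a small distance suggests the same image."""
        return bin(h1 ^ h2).count("1")

    # Hypothetical usage: compare a chat attachment against a blocklist entry.
    # if hamming_distance(average_hash("attachment.png"),
    #                     average_hash("banned_cartoon.png")) < 10:
    #     block_the_message()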

Section 3.6.1: A new law in Vietnam requires online companies to remove content within 24 hours after a government request.  The law also requires companies based in other countries to keep user data for people in Vietnam on servers in Vietnam.  This requirement adds risk for people who express views of which the government disapproves.

Section 3.6.1: “Liberty is meaningless where the right to utter one's thoughts and opinions has ceased to exist. That, of all rights, is the dread of tyrants. It is the right which they first of all strike down. They know its power.” 
-- Frederick Douglass, “A Plea for Free Speech in Boston,” Dec. 10, 1860.

Section 3.6.2: In 2018, Google planned to reverse policy once more and operate a version of its search engine in China that complies with China’s increasingly restrictive censorship laws.  Google argues that partial access is better than no access.  China already has censored search engines, so critics see little benefit to the Chinese people and a big disadvantage in the perception that Google is legitimizing China’s censorship by complying with it.

Section 3.6.2: At the request of the Chinese government, Apple removed hundreds of VPN (virtual private network) apps from its app store in China.  Such apps help people access banned websites.

Section 3.7: The net neutrality principle, as currently promoted, applies to telecommunications companies and is supported by many large content-providing companies.  In Sections 3.3 and 7.1.1, we discuss attempts by content companies to restrict access to many kinds of information.  Are the content companies hypocritical in their advocacy of net neutrality?  If you think so, explain how their actions are similar to those they would prohibit to telecom companies.  If you think not, explain how their actions differ from access restrictions by telecom companies.

Section 3.7: According to competitors, Google allegedly favors its own content or services in its search results; social media platforms try to reduce “fake news” and manage what their members see; Twitter reduced access to a controversial campaign ad from a member of Congress; and so on.  In Section 3.3 we described many issues related to decisions by search engine and content companies to ban certain material.  We indicated that it is not an easy matter to make such decisions. Why do we bring this up here?  The net neutrality principle says that telecommunications companies must treat all content the same but does not apply to very large content and social media companies.  What consistent arguments or principles can you think of that address this difference?

Amazon patented a process to examine data a shopper in a physical store sends over the store’s WiFi and, potentially, block the shopper from comparing prices online.  Should a store have the right to control use of its WiFi in this way?  Would it be hypocritical of Amazon to advocate net neutrality and use this technology in its stores?

Chapter 4: Intellectual Property

Section 4.2.4: A federal appeals court rejected the jury decision reported in the textbook in the Oracle America v. Google copyright infringement case about Google’s use of Java APIs.  The appeals court ruled that it was not fair use and sent the case back to determine how much Google should pay.  Google is appealing, so, as we said in the text, the final result is still unknown.

Section 4.3.2: As it does every three years, the Library of Congress considered proposals to expand exemptions to the DMCA anticircumvention rules and granted some in 2018.  It now allows jailbreaking of voice-activated home assistants so that owners can make legal modifications. It added some exemptions for home appliances such as refrigerators and thermostats, but did not approve exemptions for many other devices on the Internet of Things.  For some categories, it allows circumvention for repair but not for modification.  It expanded exemptions for research.

Section 4.4: The European Union copyright directive, 2019.

The EU issued a new copyright directive aimed at giving publishers and other content owners more control of their intellectual property and at providing content owners with legal tools to get paid by search engines and other platforms that use their intellectual property.  Both goals are valuable, but the directive (which member countries must implement in their own laws within the next few years) is controversial because of its potential negative impacts on small companies and the availability of user-generated content on the Internet.

One provision of the directive requires that search engines, news aggregators, and other Internet platforms get licenses from publishers to use “snippets” of their content.  Another provision reverses the current paradigm in which platforms must remove material that infringes copyright when they are informed of the infringing material. Instead, the directive requires that the platforms ensure that their users do not post infringing material.  Thus, the platforms would have to examine billions of posts, determine which infringe copyright, and block them.  Large companies currently spend millions of dollars on intellectual-property filters that do only a partial job. Both provisions of the new directive pose big challenges for platforms with smaller budgets and legal departments; many might not survive or will severely limit user-generated content, thus further entrenching larger companies.

Section 4.6.1: Patent lawsuits 

A company called Shipping & Transit LLC had filed hundreds of lawsuits for infringement of its patents. (According to the Wall Street Journal, the company did not sell any products.) Typically, small companies paid license fees to avoid the high costs of litigation. Three companies fought back in court and won. One judge cited the Supreme Court decision of 2014. Another judge said Shipping & Transit’s strategy appeared to be predatory and aimed at making money from companies that could not afford patent litigation. Shipping & Transit declared bankruptcy after losing these cases.

Section 4.6.1: In 2018, a jury raised the penalty to $539 million in the long-running lawsuit by Apple against Samsung for violating some of Apple’s smartphone patents.

Chapter 5: Crime and Security

Section 5.1: Just a few more recent examples: 

  • Hackers stole more than $800 million in cryptocurrencies in the first half of 2018.  

  • Hackers stole medical records of roughly one-quarter of Singapore’s population. 

Section 5.2.2: Three men responsible for the Mirai botnet attack in 2016 were caught and pleaded guilty in 2017 to this and other botnet attacks.

Section 5.3.1: A new application of identity theft: When a federal agency is considering new regulations, it allows time for the public to post comments on a website.  Some people or groups have posted thousands of comments using other people’s names, email addresses, etc., on at least five federal agency websites.  (Posting fraudulent statements in this context is a felony.)

Section 5.3.2: Just as in the Target case we described in this section, hackers used common hacking tools, such as spearphishing emails, to get into the networks of small companies and then used that access to get into highly sensitive systems—in this case to gain control of U.S. electric utilities and the ability to shut them down. The hackers worked for a group sponsored by the Russian government, according to federal officials.

Section 5.3.4: Hacking by governments to steal information or cause disruption continues.  For example, the U.S. government charged nine Iranians with stealing data from hundreds of businesses and universities for the Iranian government over several years.  British intelligence agents said the Russian military was most likely responsible for the Petya worm that attacked companies around the world in 2017, hitting especially hard in Ukraine and causing hundreds of millions of dollars in losses.  

Section 5.3.4: Hackers gained control of an emergency shut-off system at a Saudi petrochemical plant in 2017.  They may have intended to cause an emergency necessitating a shut-down—and then prevent the shut-down, possibly leading to a catastrophe.  The sophistication of the attack suggests a government might be responsible for it, though the hackers have not been identified. Similar control systems exist in thousands of other large plants, water-treatment systems, etc.; their vulnerability continues to be a serious problem.

Section 5.3.4: Infiltrating global cellular networks.

A cybersecurity firm reported that Chinese hackers broke into at least 10 global cellular carriers. The hackers had access to data on hundreds of millions of users in several countries but focused their data collection on the records (location information and call and texting logs) of dissidents, military personnel, and other sensitive groups. The nature of the attack suggests government involvement.

Section 5.4: Companies (e.g., Visa) are adding payment capabilities to home appliances and automobiles.

Exercise:  What are potential security risks? Does the convenience of shopping while driving or doing the laundry outweigh the risks?

Section 5.5: Security lapses

A hacker stole personal and financial records of virtually every adult in Bulgaria from the Bulgarian government’s tax agency. The data were made available on the Internet. According to cybersecurity experts, the hack succeeded because of poor security, not the sophistication of the hacker.

Section 5.5: For years, Facebook stored passwords for hundreds of millions of members in plain text, accessible to employees. 
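
The standard alternative, which prevents employees or intruders who see the stored values from recovering the passwords, is a salted, deliberately slow hash. A minimal sketch using only Python’s standard library:

    # Minimal sketch of salted password hashing with the Python standard
    # library, the standard alternative to storing passwords in plain text.
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                     600_000)  # many iterations: deliberately slow
        return salt, digest

    def verify_password(password, salt, stored):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored)  # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                         # False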

Section 5.5: Insecurities of a mail security program.

To counter the problem of thieves stealing packages, credit cards, etc., delivered by the U.S. Postal Service, the Postal Service began a program called Informed Delivery. Those who sign up receive email with a scan of the front of each piece of physical mail to be delivered that day, so they will know what is coming. The program at first lacked a simple and basic security feature: it did not inform people that someone had signed them up to receive the scans. Stalkers, identity thieves, and other criminals could sign people up with their own email addresses and receive the emailed scans of all the mail to be delivered to the unknowing victims. The criminals can then intercept packages, credit cards, etc. The Postal Service has improved security somewhat, but security experts and the U.S. Secret Service report that the system still has significant security weaknesses.

Because the scans contain a complete record of all the mail someone receives, those concerned with privacy might ask a number of questions: Does USPS store the scans? If so, for how long? How well are they secured? Does USPS provide any information from the scans to other parties?

Exercise: Suggest a few ways a person might protect himself or herself from criminals misusing Informed Delivery.

Section 5.5: New threats to new security technology.

Technology does not stand still, so some relatively new tools to increase security become vulnerable as criminal and foreign adversaries develop tools to thwart them. For example, image-recognition systems can be used to detect people in places where they do not belong (say, in a bank vault at night or on the grounds of any secure facility), but experimenters have found that they can render a person “invisible” by having the person wear or carry something with certain patterns that confuse the software.
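
A toy calculation suggests why a crafted pattern can fool software that random noise of the same size does not: against a simple linear scoring model, a perturbation aligned against the model’s weights shifts the score enormously. (The model below is hypothetical; real attacks on deep image-recognition systems are analogous in spirit but far more elaborate.)

    # Toy sketch of an adversarial perturbation against a linear classifier.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)   # weights of a hypothetical "person detector"
    x = rng.normal(size=1000)   # an input the model scores as w @ x

    eps = 0.05                  # tiny per-feature perturbation budget
    x_adv = x - eps * np.sign(w)                             # crafted pattern
    x_noise = x + eps * rng.choice([-1.0, 1.0], size=1000)   # random noise

    print("original score:", w @ x)
    print("random noise:  ", w @ x_noise)  # barely moves the score
    print("crafted:       ", w @ x_adv)    # shifts by eps * sum(|w|), a lot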

Section 5.5.3: Security researchers developed an app for Amazon’s Alexa that appears to be a calculator but actually records conversations and sends them to the app’s creators.  (Amazon made changes to prevent this kind of attack.) (A. J. Dellinger, “Security Researchers Created a ‘Skill’ That Allows Alexa to Spy on You,” Gizmodo.com, Apr. 25, 2018)

Section 5.5.4: By 2018, companies were selling devices and services to law enforcement agencies to unlock iPhones; they exploit certain security loopholes.  Apple developed a new feature that might protect against intruders (legal or not) who use this approach.

Section 5.6.1: A lawsuit about public data on LinkedIn’s website illustrates another attempt at expanding application of the CFAA.  A company, hiQ Labs, collects and analyzes publicly available data on LinkedIn to predict whether specific people are likely to quit their jobs.  LinkedIn told hiQ Labs to stop accessing its site and argued that continued access by hiQ Labs violated the CFAA.  A federal judge allowed hiQ Labs to continue accessing public data on LinkedIn.  The judge said LinkedIn’s interpretation of the CFAA could allow access restrictions that Congress did not intend; for example, political campaigns might prohibit certain news media from accessing their sites.  (The judge went further and told LinkedIn to remove any technology it had put in place to block hiQ Labs’ access.  Whether LinkedIn has a right to block hiQ Labs is a separate issue.)  The ruling is not final, and litigation may continue. (Aug. 2017)

Section 5.7.1: The issues in this section are taking on additional significance as governments order search engines to globally block access to certain information.  Google’s appeal of France’s order to restrict searches globally went to the European Union’s Court of Justice (its highest court) in 2018; a decision is likely in early 2019.  In another case, Canada ordered Google to globally block search results that showed websites associated with a particular company.  If such orders are upheld and become accepted practice, they can become a powerful tool for oppressive governments.

Chapter 6: Work

Section 6.2: The overall unemployment rate (in 2018) was 3.8%, the lowest in almost 50 years.  The unemployment rate for women, 3.6%, was the lowest since 1953 (when there was almost no computer technology and far fewer women worked than today).  The unemployment rates for various ethnic minority groups and for adults without a high school diploma were near record lows.  These data support the argument that technology does not lead to massive unemployment.

Section 6.2.1: Perspectives on creating and eliminating jobs.  Business professor Scott Galloway says that Amazon’s efficiency lets the company do the same amount of business as other retailers with half as many employees, and that this means tens of thousands of lost jobs.  A research firm says Amazon is responsible for almost a third of new jobs created in Seattle since 2010 (from both direct hiring and indirect effects).  In early 2017, Amazon had more than 350,000 employees (worldwide) and planned to add 100,000 full-time jobs in the U.S. in 2017-18.  (Oct. 2017)

Exercise: Is Galloway’s focus on lost retail jobs shortsighted because it ignores the jobs created (at Amazon and in other fields) when people save time and money buying things online?  Is focusing on the jobs Amazon creates shortsighted because it ignores lost jobs at other retailers?  On balance, does e-commerce create more jobs than it eliminates? 

Section 6.2.1: Endnote 10 should include the following citation (for the number of app industry jobs in 2016): Michael Mandel, “U.S. App Economy Jobs Update,” Progressive Policy Institute, May 3, 2017. 

More recently, Mandel describes the difficulties of counting e-commerce jobs: U.S. government figures count 2640 e-commerce jobs in Kentucky, for example, but Amazon employs 12,000 people in the state. Mandel estimates that, when fully counted (including warehouse and fulfillment center jobs), e-commerce has added about 400,000 jobs in the U.S. in the past ten years, while the brick-and-mortar sector lost about 140,000 jobs.  Others dispute his estimates.

Section 6.2.1: In contract negotiations, the Teamsters union (representing drivers) asked United Parcel Service to agree not to use self-driving vehicles or drones to deliver packages.   Discussion exercise: Who benefits and who loses from such an agreement? Overall, is it a good idea?

Section 6.3.2: A delicious form of gig work is growing in Italy, providing advantages similar to those of the examples in the text and generating similar opposition.  It is called social dining; apps or online networking sites connect diners with cooks who prepare and serve meals at their homes.  Such meals provide supplemental income for the cooks (mainly women; men typically run restaurant kitchens) and a pleasurable experience for the diners.  Home chefs are not subject to health and safety rules that apply to restaurants, and restaurants, regulators, and unions have been very critical of the phenomenon. A proposed law requires payment of taxes, a health certification, and insurance; it limits the number of meals a home cook can serve in a year and the amount he or she can earn. Which of these provisions are reasonable (give reasons), and which are not?  (Similar home-dining services are growing in other countries also.)

Section 6.3.2: Responsibilities of companies that match buyers and sellers.

We briefly mentioned the issue of background checks for ride-sharing drivers. What responsibility do companies have when they match consumers with other types of service providers, for example, child care or elder care?

There have been incidents of sexual abuse, accidental deaths, and a fatal attack on a child by people offering child care services on one such platform. The Wall Street Journal examined listings on that platform for 3000 day-care centers shown as licensed; it could not find evidence of licenses for 22% of them. The company operating the site tells parents it does not verify information or do background checks. It suggests parents pay a fee for screening potential caregivers. If the company removes someone for serious offenses, it emails parents who communicated with that person. Is the company doing enough? Is this a situation where the legal and ethical responsibilities differ?

(The Wall Street Journal found that the site removed more than 70% of its day-care center listings just before the investigative report was published. The company announced that it would do more extensive background checks of people and organizations offering services on its site.) 

Sections 6.5 and 2.2.3:

Employers often set rules about use of social media and cellphones, and employees often ignore them. Here are examples where awareness of risks and setting (and following) good rules are important: Soldiers in Iraq posted pictures of new helicopters on social media; Iraqi insurgents found the photos, read the geotags to determine location, and destroyed some of the helicopters.  The Russian military tracked Ukrainian artillery units by tracking the soldiers’ cellphones.

Section 6.5.2: Hacking phones.

Many smartphones have more than one microphone. A malicious app with access to the phone’s microphones can determine which keys a user taps, say when entering a PIN, by analyzing the tiny difference in the time the sound reaches each microphone. This might not be a significant risk for an ordinary user, but it is another example of why employers, especially those that need to protect intellectual property and/or confidential and sensitive data, might have a policy against employees adding apps to phones they use for work.
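
A rough sketch of the geometry involved: with two microphones at known positions, the difference in the time a tap’s sound reaches each one depends on where on the screen the tap occurred, so different keys leave different timing signatures. (The positions and dimensions below are hypothetical.)

    # Sketch of the two-microphone timing geometry. Positions are hypothetical
    # coordinates on a phone, in meters; delays come out in microseconds.
    import math

    SPEED_OF_SOUND = 343.0          # meters per second in air
    mic_top = (0.035, 0.150)        # microphone near the top of the phone
    mic_bottom = (0.035, 0.000)     # microphone near the bottom

    def arrival_delay(tap):
        """Difference in arrival time between the two microphones."""
        d = math.dist(tap, mic_top) - math.dist(tap, mic_bottom)
        return d / SPEED_OF_SOUND * 1e6  # microseconds

    # Taps on a hypothetical PIN pad: each key has a distinct signature.
    for key, pos in [("1", (0.02, 0.11)), ("5", (0.035, 0.09)), ("0", (0.035, 0.05))]:
        print(f"key {key}: delay {arrival_delay(pos):+.1f} microseconds")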

Chapter 7: Evaluating and Controlling Technology

Section 7.1: Banning “fake news”

The Turkish government charged dozens of people, including economists and reporters for respected international news agencies, with distributing fake news for writing about difficulties some banks in the country were facing. 

Singapore passed a law giving government officials power to decide what information on the Web is false and to require social media companies to post corrections. One party has controlled the government for more than 50 years; it has used various laws (including libel laws) against opposition candidates and journalists. Can we expect its designation of misinformation on the Web to be unbiased?

Section 7.1.1: Russian manipulation via social media.

A Russian organization paid for thousands of ads on Facebook about sensitive and divisive social and political issues in the two years before the 2016 U.S. presidential election.

Investigations by Facebook, Twitter, Instagram, other social media platforms, and a Congressional committee showed that Russian agents secretly used thousands of accounts, before and after the 2016 presidential election, to create or increase discord in the U.S. and to influence policy on major topics.   Some Russian accounts, pretending to be U.S. people or organizations, had hundreds of thousands of followers.  They promoted extremist views on both sides of divisive issues, using altered photos and inflammatory false statements.  They encouraged protests and rallies and, according to news reports, they provided funds to U.S. activists for protests and collected personal information. In one example, they encouraged a fitness instructor to train people in combat and to provide names and contact information of his students. 

The goal appears to be to weaken the U.S. overall.  Representative Adam Schiff (a member of the House Intelligence Committee), summarized as follows: “Russia sought to divide us by our race, by our country of origin, by our religion, and by our political party.” (“Schiff Statement on Release of Facebook Advertisements,” U.S. House of Representatives Permanent Select Committee on Intelligence - Democrats, May 10, 2018)  

The extensive and long-running campaign is undoubtedly continuing.  It is difficult to detect and eliminate fraudulent accounts and faked photos and videos, especially when so much of the content is copied all over the Internet—and it is difficult to eliminate intentionally false and manipulative information while protecting free and open debate on controversial issues.  The intentional Russian attack is a reminder for us, as individuals, to be skeptical even when we see content on the Net that supports our point of view, and to ask ourselves who might be manipulating us and who benefits.

Section 7.1.1: Fake news can be deadly.

In India in 2018, in several separate incidents, mobs beat and killed more than two dozen people after (false) rumors on social media claimed they were child kidnappers.

Section 7.1.1: Outlawing fake news. 

The Malaysian government passed a law that made malicious spreading of fake news punishable by six years in prison.  Critics of the law worried that the government would abuse it, in particular to threaten discussion of a major government financial scandal.  The first conviction under the law was for criticizing a government agency (not related to the financial scandal): The convicted man had posted a video saying the police had taken longer than they actually did to respond to a shooting.  Since the law can apply to people outside Malaysia if their writings are available in Malaysia, e.g., on the Internet, the issues of Section 5.7 are relevant here also.

A Lebanese tourist in Egypt posted a video in which she said she was sexually harassed by many men; she used profanity and made critical comments about Egypt and its president.  She was sentenced to eight years in prison for, among other charges, “deliberately broadcasting false rumors which aim to undermine society.”  (After much publicity about the case, an Egyptian court suspended her sentence.)  An Egyptian woman was arrested and charged with spreading false news and damaging public order, also for posting a video criticizing sexual harassment in Egypt.  These incidents occurred before Egypt passed a new law under which the government can prosecute journalists and people on social media for publishing fake news; the law does not define fake news. Journalists see the law as an attack on open discussion and freedom of the press.  

In India, it appears that both the ruling party and its critics encourage fake news sites that support their sides.  Government leaders label negative news stories about the government as fake news even if the stories are true.  When the government announced it would suspend the accreditation of journalists who published fake news, strong opposition from journalists led to withdrawal of the new policy. 

In light of these examples, how can we write laws against fake news that do not stifle debate and prevent criticism of politicians and governments?

Section 7.1.2: An article discussing the risk assessment program described in this section and its conflicts with due process: Frank Pasquale, “Secret Algorithms Threaten the Rule of Law,” MIT Technology Review, June 1, 2017, www.technologyreview.com/s/608011/secret-algorithms-threaten-the-rule-of-law

Chapter 8: Errors, Failures, and Risks

Section 8.1.3: In part because of fears of hacking, especially by foreign governments, many cities and states in the U.S. are switching from fully electronic voting machines back to paper ballots or systems that include a paper confirmation of votes.

Section 8.1.3: The Canadian government implemented a single payroll system to replace more than 100 separate systems that government agencies previously used.  The seriously flawed new system affected more than half of the government’s employees: it overpaid some, underpaid others, and, for months, did not pay some at all. The estimate for fixing the system is roughly twice its original cost.

Section 8.3.1: Boeing 737 MAX.

The crashes of two Boeing 737 MAX airplanes in 2018 and 2019 illustrate several issues in this section (and other parts of this chapter). A sensor gave improper readings. In response to improper data, an automated system caused the airplanes to repeatedly behave in a way the pilots did not expect. It appeared that some pilots did not have adequate training. (In addition, it is possible the problematic software rebooted after pilots turned it off.)

The day before one of the crashes, the same problem occurred on another flight, but a pilot knew what to do and quickly took action that prevented a crash. This should remind us of the loss of the space shuttle Columbia. In that case (p. 440), previous instances of dislodged foam had not resulted in disaster, so NASA did not perceive the danger when a piece of foam dislodged on the Columbia and hence did not attempt to assess or reduce the danger. Dangerous situations that do not result in disaster must not be ignored.

Section 8.4.1: After severe hurricanes, fires, and an earthquake in 2017, large areas were without cellphone connections and electricity for long periods. What preparations do you think individuals (e.g., yourself) should make for such events? To what extent is our reliance on cellphones, credit-card and phone-based payment systems, etc., a problem?  

Chapter 9: Professional Ethics and Responsibilities

Section 9.2.1: Volkswagen’s fraudulent emissions-testing software has cost the company nearly $35 billion in fines, legal fees, and payments to customers. The former CEO of VW was indicted and the CEO of Audi arrested as the investigation continued.  Aside from the obvious ethical problems with so massive a fraud, it is surprising that people planning or participating in such schemes convince themselves that no one will find out.  If you are in a professional situation where you are considering doing something unethical, it might be helpful to write a short news article, dated a year in the future, describing discovery of the activity, its consequences, and your own arrest.

Section 9.2.3 (box on p. 471): In another example of testing software on insufficient sets of data, researchers found that several face-recognition systems produced by prominent companies were much more likely to give incorrect results for people with dark skin and for women.  (What are some of the problems such errors could cause for the affected people?) After negative publicity, some of the companies quickly trained their systems on more diverse data sets and decreased the error rates substantially. 
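
One simple check that would expose this kind of problem is to measure error rates separately for each demographic group rather than only in aggregate. A minimal sketch (the results data below are fabricated for illustration):

    # Minimal sketch: per-group error rates instead of one aggregate number.
    # The (group, prediction_correct) results below are fabricated.
    from collections import defaultdict

    results = [
        ("light-skinned men", True), ("light-skinned men", True),
        ("light-skinned men", True), ("light-skinned men", False),
        ("dark-skinned women", True), ("dark-skinned women", False),
        ("dark-skinned women", False), ("dark-skinned women", False),
    ]

    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        errors[group] += 0 if correct else 1

    for group in totals:
        print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
    # Aggregate accuracy can look acceptable while one group's error rate is
    # several times another's, exactly the pattern the researchers found.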

Section 9.3: A talk on ethical issues in cybersecurity by researcher Stefan Savage sparked the following scenarios and discussion questions.

Section 9.3: See the update for Section 3.2.4 above (“A method for thwarting spammers…”). Suppose you are part of a research team investigating the spammer practice described there in an attempt to determine how much spammers pay the people who read captchas. Your job will be to offer your service as a captcha reader and work at it for several weeks to collect information about the process. Discuss ethical issues your role raises. Will you participate? Why or why not?

Section 9.3: Suppose it is some years in the future and self-driving cars are in wide use. You are the head of a research team that found a way to remotely take over control of the brakes of one model of cars. The company that makes this model employs thousands of people and has been having financial difficulties. If you identify the company, the publicity and the cost of fixing the flaw are likely to bankrupt the company and put the employees out of work. What should you do? Consider various possible actions and discuss arguments for and against each.

Section 9.3: In an attempt to stop or reduce online sales of counterfeit luxury products and counterfeit drugs, your research team will buy a large number of such products from suspicious sites to determine which sites sell counterfeit goods and what financial institutions they use. Describe ethical and legal issues the research raises, and describe some actions or protocols that the team could implement to reduce problems.

Term paper topics

Chapter 2: Privacy

Protecting privacy in Big Data.  Companies such as Apple, Google, and Facebook collect and analyze huge amounts of data about users of their products and services.  Although they use anonymous data in many situations, it is relatively easy to identify individuals from large sets of data (as we saw in Section 2.1.2).  Give examples of re-identification (or “de-anonymization”).  Describe techniques companies are using to help maintain anonymity.
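
A good starting point is the classic “linkage attack,” which re-identifies people by joining an anonymized dataset with a public record on quasi-identifiers such as ZIP code, birth date, and sex (the approach researcher Latanya Sweeney famously demonstrated). A minimal sketch with fabricated data, using the Python pandas library:

    # Minimal sketch of a linkage attack; all data below are fabricated.
    import pandas as pd

    anonymized = pd.DataFrame({          # "anonymous" medical records
        "zip": ["02139", "02139", "90210"],
        "birth_date": ["1954-07-31", "1980-01-15", "1962-03-02"],
        "sex": ["F", "M", "F"],
        "diagnosis": ["hypertension", "diabetes", "asthma"],
    })

    voter_rolls = pd.DataFrame({         # public records, with names
        "name": ["J. Smith", "R. Jones"],
        "zip": ["02139", "90210"],
        "birth_date": ["1954-07-31", "1962-03-02"],
        "sex": ["F", "F"],
    })

    linked = anonymized.merge(voter_rolls, on=["zip", "birth_date", "sex"])
    print(linked[["name", "diagnosis"]])
    # If a (zip, birth date, sex) combination is unique, the "anonymous"
    # record now has a name attached.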