Wednesday, January 8, 2020

Subcommittee hearing on online deception

“Americans at Risk: Manipulation and Deception in the Digital Age”

That's the title of a hearing the House Energy and Commerce Subcommittee on Consumer Protection and Commerce is holding this morning at 10:30 a.m.

• Links cited in the committee majority briefing memo are included with each section.


I. BACKGROUND

The internet has become a major forum for commerce, news, advertising, education, and information. According to the Pew Research Center, in 2018, 34 percent of surveyed adults said they preferred to get news online and 20 percent said they often get their news from social media. Consumers also are buying more goods and services online, with Americans purchasing over half a trillion dollars in retail goods in 2018 (approximately 14.3 percent of total retail sales).

• Key findings about the online news landscape in America
• U.S. e-commerce sales grow 16.0% in 2017

Increased dependence on the internet has created opportunities for legitimate and illegitimate actors to influence consumers. Moreover, as services and social media platforms evolve and grow online, such actors are developing new ways of using technology, both sophisticated and rudimentary, to persuade and manipulate consumers.

• Technology and Persuasion

Deceptive techniques are being used to manipulate consumers into making purchases they may not have otherwise made, paying higher prices than they expected, or accepting terms of service or privacy policies that they would not otherwise have accepted. In addition, technology is being used in ways that call into question the reliability of information online. Such misinformation has ranged from fake product reviews to election interference generated by a variety of actors, from individuals to nation states.

• Dark Patterns: inside the interfaces designed to trick you
• Artificial intelligence, deepfakes, and the uncertain future of truth
• How merchants use Facebook to flood Amazon with fake reviews
• See Russian Active Measures Campaigns and Interference in the 2016 U.S. Election

Some deception may be illegal under section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices. Much of the deception and manipulation that occurs online, however, is legal and unregulated.

• FTC 1983 Policy Statement on Deception (PDF)


II. SELECTED FORMS OF ONLINE DECEPTION

A. Deepfakes and Cheap Fakes

Deepfakes are videos that have been manipulated using machine learning technology, such as a Generative Adversarial Network, or GAN, to look and sound real. These videos can show people doing and saying things they never did or said. In a GAN, a discriminator network detects flaws in the generated video, and that feedback is used to train the generator to produce a more convincing fake. While this technology is quickly becoming more sophisticated and easier to use, deepfakes, especially those with corresponding audio, remain difficult and expensive to create.

• What 'deepfakes' are and how they may be dangerous
• Watching a Deepfake Being Made Is Boring, And You Must See It
• Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?
• Deepfakes Are Getting Better, But They're Still Easy to Spot
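The adversarial loop behind deepfakes can be illustrated with a deliberately tiny, non-neural sketch: a "generator" value is nudged toward whatever a "discriminator" scoring function finds hardest to flag as fake. Everything here (the target value, the scoring rule, the probe-and-keep update) is invented for illustration and is not a real GAN.

```python
# Toy illustration of the adversarial loop described above (not a real GAN):
# a generator tries to mimic "real" data, and a discriminator scores how
# fake each attempt looks; that score drives the generator's improvement.

TARGET = 5.0  # stands in for the distribution of real data


def discriminator(sample: float) -> float:
    """Return a 'fakeness' score: 0 means indistinguishable from real."""
    return abs(sample - TARGET)


def train_generator(steps: int = 1000, lr: float = 0.1) -> float:
    """Improve the generator's output using the discriminator's feedback."""
    guess = 0.0  # the generator's current output
    for _ in range(steps):
        # Probe slightly higher and lower, and keep whichever candidate
        # fools the discriminator best (a crude, gradient-free update).
        candidates = (guess - lr, guess, guess + lr)
        guess = min(candidates, key=discriminator)
    return guess  # converges toward TARGET
```

A real GAN replaces the target value with a dataset of genuine images or audio, the scoring rule with a trained classifier network, and the probe-and-keep update with gradient descent, but the feedback loop is the same.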

Cheap fakes are real videos with slight alterations made using traditional editing techniques, such as speeding, slowing, and cutting, that can create misleading effects. Because they are easier and less expensive to make, these sorts of altered videos are more common than deepfakes.

• Beware the Cheapfakes

Traditional editing techniques as well as deepfake technology are used by the television and film industries for entertainment purposes. Still, both types of fake videos can be used for malicious purposes, including facilitating the spread of misinformation and disinformation for political or commercial ends and sowing discord. Both can also be difficult to detect, whether by human review or by automated tools.

• Deepfakes: Hollywood’s quest to create the perfect digital human

B. Dark Patterns

Dark patterns are techniques incorporated in user interfaces (e.g., pop-up screens and webpages) designed to encourage or trick users into doing things they might not otherwise do. One recent study found almost 2,000 instances of dark patterns on just over 1,200 shopping websites. Examples of dark patterns include sneaking additional items into customers' shopping baskets, adding a countdown timer to a webpage that falsely implies a deal will expire, and making a button to accept email notifications bigger and easier to click than the button to decline such emails.

• Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites
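Two of the patterns listed above, sneaking an extra item into the basket and a falsely urgent countdown, can be sketched in a few lines; the prices, add-on fee, and timer values below are invented for illustration.

```python
# Sketches of two dark patterns described above; all amounts are made up.

def checkout_total(item_prices: list[float], preselected_addon: float = 4.99) -> float:
    """'Sneaking': a pre-selected add-on is silently included in the total
    unless the shopper notices the checkbox and clears it."""
    return round(sum(item_prices) + preselected_addon, 2)


def countdown_tick(seconds_left: int, reset_to: int = 600) -> int:
    """False urgency: the 'deal expires soon' timer never actually expires;
    once it reaches zero it quietly resets."""
    return reset_to if seconds_left <= 0 else seconds_left - 1
```

A shopper who put $10.00 and $15.00 items in the cart would be charged $29.99 rather than $25.00 unless they spotted and removed the pre-selected add-on.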

C. Social Media Bots

In the context of social media platforms, bots are automated accounts programmed, sometimes with the help of machine learning algorithms, to post content or interact with other users without direct human input. Many companies use bots to report news or weather, respond to customer service requests, serve advertising, or let people interact with their favorite characters. Bots have also been used by individuals and organizations to run fake social media accounts that make products or people look more popular than they are. In addition, bots have been used to drive clicks and raise advertising revenue, and by state actors to spread disinformation and stir division. While social media platforms have made efforts to take down bots and fake accounts, it remains difficult to detect bots accurately, whether through technological methods or human review: complex bots are often mistaken for real people, while people who tweet frequently have been erroneously flagged as bots.

• Battling Online Bots, Trolls and People
• 10 Brands Using Facebook Messenger Bots for Business
• Twitter is sweeping out fake accounts like never before, putting user growth at risk
• The charge of the chatbots: how do you tell who’s human online?
• Facebook Took Down 2.2 Billion Fake Accounts in Q1
• Facebook, Twitter and the Digital Disinformation Mess
• Bots in the Twittersphere
• Crackdown on ‘bots’ sweeps up people who tweet often
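The detection problem described above, where crackdowns on bots sweep up people who simply tweet often, can be illustrated with a minimal, made-up frequency heuristic; the account names, numbers, and threshold are all hypothetical.

```python
from dataclasses import dataclass

# A naive frequency-based bot filter of the kind the articles above
# criticize: it both misses throttled bots and flags prolific humans.

@dataclass
class Account:
    name: str
    posts_per_day: float


def looks_like_bot(acct: Account, max_daily: float = 144.0) -> bool:
    """Flag anything posting more than ~once per ten minutes as a bot."""
    return acct.posts_per_day > max_daily


# Hypothetical accounts: a prolific human and a deliberately slow bot.
human = Account("power_user", posts_per_day=200)   # flagged: false positive
bot = Account("stealth_bot", posts_per_day=100)    # passes: false negative
```

Real platforms combine many more signals (account age, network structure, content similarity), but the same trade-off between false positives and false negatives remains.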


III. WITNESSES

The following witnesses have been invited to testify:

Monika Bickert
Head of Product Policy and Counterterrorism
Facebook
(Recent publication: Updating the Values That Inform Our Community Standards)

Joan Donovan, Ph.D.
Research Director of the Technology and Social Change Project
Shorenstein Center on Media, Politics, and Public Policy
Harvard Kennedy School

Tristan Harris
Executive Director
Center for Humane Technology

Justin (Gus) Hurwitz
Associate Professor of Law, Director of the NU Governance and Technology Center
University of Nebraska College of Law
Director of Law & Economics Programs
International Center for Law & Economics
