Tadayoshi Kohno – UW News /news

Many survey respondents rated seeking out sexually explicit ‘deepfakes’ as more acceptable than creating or sharing them /news/2024/08/08/sexually-explicit-deepfakes-synthetic-media-public-opinion/ Thu, 08 Aug 2024 15:01:23 +0000
In a survey of 315 people, respondents largely found creating and sharing sexually explicit “deepfakes” unacceptable. But far fewer respondents strongly opposed seeking out these media.

Content warning: This post contains details of sharing intimate imagery without consent that may be disturbing to some readers.

While much attention on sexually explicit “deepfakes” has centered on celebrities, these non-consensual sexual images and videos generated with artificial intelligence also target ordinary people. As text-to-image AI models grow more sophisticated and easier to use, the problem is poised to grow. The escalating problem led Google to announce last week that it will demote sexually explicit deepfakes in its search results, and the U.S. Senate to pass a bill allowing victims to seek legal damages from deepfake creators.

Given this rising attention, researchers at the University of Washington and Georgetown University wanted to better understand public opinions about the creation and dissemination of what they call “synthetic media.” In a survey, 315 people largely found creating and sharing synthetic media unacceptable. But far fewer respondents strongly opposed seeking out these media — even when they portrayed sexual acts.

Yet research has shown that other people viewing image-based abuse, such as nudes shared without consent, harms the victims significantly. And in many jurisdictions, including Washington state, creating and sharing such nonconsensual content is a crime.

“Centering consent in conversations about synthetic media, particularly intimate imagery, is key as we look for ways to reduce its harms — whether that’s through technology, public messaging or policy,” said lead author Natalie Grace Brigham, who was a UW master’s student in the Paul G. Allen School of Computer Science & Engineering while completing this research. “In a synthetic nude, it’s not the subject’s body — as we’ve typically considered it — that’s being shared. So we need to expand our norms and ideas about consent and privacy to account for this new technology.”

The researchers will present their findings Aug. 13 at the 20th Symposium on Usable Privacy and Security in Philadelphia.

“In some sense, we’re at a new frontier in how people’s rights to privacy are being violated,” said co-senior author Tadayoshi Kohno, a UW professor in the Allen School. “These images are synthetic, but they still are of the likeness of real people, so seeking them out and viewing them is harmful for those people.”

The survey, which the researchers conducted online through Prolific, a site that pays people to respond to surveys on a variety of topics, asked U.S. respondents to read vignettes about synthetic media. The team altered variables in these scenarios like who created the synthetic media (an intimate partner, a stranger), why they created it (for harm, entertainment or sexual pleasure), and what action was shown (the subject performing a sexual act, playing a sport or speaking).

The respondents then ranked various actions around the scenarios — creating the video, sharing in different ways, seeking it out — from “totally unacceptable” to “totally acceptable” and explained their responses in a sentence or two. Finally, they filled out surveys on consent and demographic information. The respondents were over the age of 18 and were 50% women, 48% men, 2% non-binary and 1% agender.

The survey respondents ranked various actions around the synthetic media scenarios. The responses to each are graphed above.

Overall, respondents found creating and sharing synthetic media unacceptable. Across scenarios, the median share rating these actions “totally unacceptable” or “somewhat unacceptable” was 90% for creating these media and 94% for sharing them. But the median share of unacceptable ratings for seeking out synthetic media was only 53%.

Men were more likely than respondents of other genders to find creating and sharing synthetic media acceptable, while respondents who had favorable views of sexual consent were more likely to find these actions unacceptable.

“There has been a lot of policy talk about preventing synthetic nudes from getting created. But we don’t have good technical tools to do that, and we need to simultaneously protect consensual use cases,” said co-senior author Elissa M. Redmiles, an assistant professor of computer science at Georgetown University. “Instead, we need to change social norms. So we need things like deterrence messaging on searches — we’ve seen that be effective at deterring searches for child sexual abuse material — and consent-based education in schools focused on this content.”

Respondents rated as most acceptable the scenarios in which intimate partners created synthetic media of people playing sports or speaking, with the intent of entertainment. Conversely, nearly all respondents found it totally unacceptable to create and share sexual deepfakes of intimate partners with the intent of harm.

Respondents’ reasoning varied. Some found synthetic media unacceptable only if the outcome was harmful. For example, one respondent wrote, “It’s not harming me or blackmailing me… [a]s long as it doesn’t get shared I think it’s okay.” Others, though, centered their right to privacy and right to consent. “I feel it’s unacceptable to manipulate my image in such a way — my body and how it looks belongs to me,” wrote another.

The researchers note that future work in this space should explore the prevalence of non-consensual synthetic media, the pipelines for how it’s created and shared, and different methods to deter people from creating, sharing and seeking out non-consensual synthetic media.

“Some people argue that AI tools for creating synthetic images will have benefits for society, like for the arts or human creativity,” said co-author Miranda Wei, a doctoral student in the Allen School. “However, we found that most people thought creating synthetic images of others in most cases was unacceptable — suggesting that we still have a lot more work to do when it comes to evaluating the impacts of new technologies and preventing harms.”

This research was funded in part by the National Science Foundation and the Google PhD Fellowship Program.

For more information, contact Brigham at nbrigham@uw.edu, Wei at weimf@cs.washington.edu, Kohno at yoshi@cs.washington.edu and Redmiles at elissa.redmiles@georgetown.edu.

Q&A: UW researchers find privacy risks with 3D tours on real estate websites /news/2022/11/16/uw-researchers-find-privacy-risks-with-3d-tours-on-real-estate-websites/ Wed, 16 Nov 2022 19:07:11 +0000
University of Washington researchers examined 44 3D tours in 44 states across the U.S. to look for potential security issues when personal details were included in the tour. Shown here is a screenshot of a 3D tour accessed via the Redfin website.

Virtual 3D tours on real estate websites, such as Zillow and Redfin, allow viewers to explore homes without leaving the comfort of their couch.

Sometimes the homes in these tours are staged, but other times they contain evidence of current residents’ lives. University of Washington researchers were curious about whether personal belongings visible in 3D tours could introduce privacy risks.

The team examined 44 3D tours on a real estate website. Each tour was for a home in a different state and had at least one personal detail — such as a letter, a college diploma or photos — visible. The researchers concluded that the details left in these tours could expose residents to a variety of threats, including phishing attacks or credit card fraud.

The team published its findings Nov. 8 and will present them at the USENIX Security Symposium in 2023.

UW News reached out to lead author Rachel McAmis, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, for details on the study.


What makes 3D tours more of a privacy issue than photos?

RM: With 3D tours, it is possible to see all rooms in a house and many more angles of a room than with photos. It is also possible to zoom in on details more easily than in photos — if someone accidentally leaves out a sensitive document, such as a letter, it might be possible to read the letter from a 3D tour if the camera quality is good enough.

What are the different types of privacy issues that you found?

RM: We found traditionally sensitive information that you are never supposed to share with strangers, along with information that reveals people’s behavior and preferences.

Most 3D tours in our study revealed full names of residents because of various items that were left out. Some examples were labeled medication, passwords, credit card information and a letter indicating a legal violation.

Viewers of 3D tours can also see people’s behaviors and preferences, including the products and brands someone purchases, their political affiliation, how clean their house is, how many family members live together, their religion and whether they have a pet.

A drawing of a desk showing a high school diploma, a whiskey bottle and a password taped to a computer monitor
Shown here is an artist’s rendering of a 3D tour where an adversary could gain information about a person’s education, hobbies and passwords.

Why are these privacy issues, and what potential threats could come from them?

RM: Anyone with access to a real estate website that hosts these 3D tours can get their hands on the sensitive information listed above, which could lead to credit card fraud, hacked accounts, identity theft and other harms.

Behavior and preference information revealed in the 3D tours could allow someone to target a resident with a personalized message, such as fraudulently pretending to be an email from a brand that the resident frequently purchases from. Others may want to publicize socially damaging behavioral and preference information that they find in the 3D tour.

Of course, if someone is already sharing their preference information on a public social media page, removing this information from their 3D tour is not enough to prevent this information from being widely available on the internet.

Would you expect to see the same types of issues on any 3D home tour on any real estate website?

RM: We believe this is an industry-wide issue. Any online real estate website that uses 3D tours might have tours that reveal sensitive information, even apartment and other property rental websites. For example, there have been reports in the past about people finding celebrity homes on multiple real estate websites by looking at details in the 3D tour.

Is it possible to make a 3D tour that’s privacy safe? If not, what are some potential solutions to these issues?

RM: In general, yes, and most 3D tours on real estate websites are already properly staged to remove sensitive information from view. Homes where all personal belongings are removed, and the rooms are either empty or staged with furniture, would not have the same privacy concerns as a home that has residents’ personal belongings visible. However, as seen in our study, many residents do leave their information out.

A drawing of a bathroom with a portrait on the wall. The face in the portrait is blurred but the reflection of the face in the bathroom mirror is not
Shown here is an artist’s rendering of a 3D tour where a person’s face in a photo is blurred, but the reflection of the face is not. An adversary could identify the resident based on the reflection.

Are there any specific safeguards people can use when they are setting up their home for a 3D tour?

RM: Residents should be aware of the belongings they leave out when the 3D scan is being taken. For example, residents may want to remove any objects with text that reveals information about them, or items that reveal other behavior or preference information that they do not want publicly available online.

Choosing to use a 3D tour can benefit the home seller in many ways, but sellers should be careful to hide personal belongings before having their home scanned for a 3D tour.

Tadayoshi Kohno, a UW professor in the Allen School, is also a co-author on this paper. This research was supported by the National Science Foundation and the University of Washington, and by gifts from Google, Meta, Qualcomm and Woven Planet.

For more information, contact McAmis at rcmcamis@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Grant number: 1565252

Political ads during the 2020 presidential election cycle collected personal information and spread misleading information /news/2021/11/08/political-ads-2020-presidential-election-collected-personal-information-spread-misleading-information/ Mon, 08 Nov 2021 18:13:21 +0000
University of Washington researchers found that political ads during the 2020 election season used multiple concerning tactics, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates. Photo: University of Washington

Online advertisements are frequently splashed across news websites. Clicking on these banners or links provides the news site with revenue. But these ads also often use manipulative techniques, researchers say.

UW researchers were curious about what types of political ads people saw during the 2020 presidential election. The team looked at more than 1 million ads from almost 750 news sites between September 2020 and January 2021. Of those ads, almost 56,000 had political content.

Political ads used multiple tactics that concerned the researchers, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates.

The researchers presented their findings Nov. 3 at the ACM Internet Measurement Conference 2021.

“The election is a time when people are getting a lot of information, and our hope is that they are processing it to make informed decisions toward the democratic process. These ads make up part of the information ecosystem that is reaching people, so problematic ads could be especially dangerous during the election season,” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

The team wondered if or how ads would take advantage of the political climate to prey on people’s emotions and get people to click.

“We were well positioned to study this phenomenon because of our previous research on misleading information and manipulative techniques in online ads,” said Tadayoshi Kohno, a UW professor in the Allen School. “Six weeks leading up to the election, we said, ‘There are going to be interesting ads, and we have the infrastructure to capture them. Let’s go get them. This is a unique and historic opportunity.'”

The researchers created a list of news websites that spanned the political spectrum and then used a web crawler to visit each site every day. The crawler scrolled through the sites and took screenshots of each ad before clicking on the ad to collect the URL and the content of the landing page.

The team wanted to make sure to get a broad range of ads, because someone based at the University of Washington might see a different set of ads than someone in a different location.

“We know that political ads are targeted by location. For example, ads for Washington candidates will only be featured to viewers browsing from the state of Washington. Or maybe a presidential campaign will have more ads featured in a swing state,” said lead author Eric Zeng, a UW doctoral student in the Allen School.

“We set up our crawlers to crawl from different locations in the U.S. Because we didn’t actually have computers set up across the country, we used a VPN to make it look like our crawlers were loading the sites from those locations.”

The researchers initially set up the crawlers to search news sites as if they were based in Miami, Seattle, Salt Lake City and Raleigh, North Carolina. After the election, the team also wanted to capture any ads related to the Georgia special election and the Arizona recount, so two crawlers started searching as if they were based in Atlanta and Phoenix.

The team continued crawling sites throughout January 2021 to capture any ads related to the Capitol insurrection.

Four screenshots of example poll ads in a square. Starting in the top left is a poll asking if Trump should concede. In the top right is an ad asking people to sign a thank you card for Dr. Fauci, in the bottom right is an ad that says "Sign the petition that Nancy Pelosi hates," and in the bottom left is a poll about whether illegal immigrants should get unemployment benefits
Some political ads posed as a poll to collect people’s personal information. Photo: University of Washington

The researchers used natural language processing to classify ads as political or non-political. Then the team went through the political ads manually to further categorize them, such as by party affiliation, who paid for the ad or what types of tactics the ad used.

“We saw these fake poll ads that were harvesting personal information, like email addresses, and trying to prey on people who wanted to be politically involved. These ads would then use that information to send spam, malware or just general email newsletters,” said co-author Miranda Wei, a UW doctoral student in the Allen School. “There were so many fake buttons in these ads, asking people to accept or decline, or vote yes or no. These things are clearly intended to lead you to give up your personal data.”

Ads that appeared to be polls were more likely to be used by conservative-leaning groups, such as conservative news outlets and nonprofit political organizations. These ads were also more likely to be featured on conservative-leaning websites.

The most popular type of political ad was click-bait news articles that often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. The team observed more than 29,000 of these ads, and the crawlers often encountered the same ad multiple times. Similar to the fake poll ads, these were also more likely to appear on right-leaning sites.

“One example was a headline that said, ‘There’s something fishy in Biden’s speeches,'” said Roesner, who is also the co-director of the UW Security and Privacy Research Lab. “I worry that these articles are contributing to a set of evidence that people have amassed in their minds. People probably won’t remember later where they saw this information. They probably didn’t even click on it, but it’s still shaping their view of a candidate.”

Three screenshots of example clickbait ads. The first shows Pence making an "eyebrow raising declaration after DC siege." The second says "Joe Biden goes on head-turning rant, fires off at reporter." The third shows Ted Cruz making a "head turning statement to Trump about the riot"
Click-bait news articles often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. Photo: University of Washington

The researchers were surprised and relieved, however, to find a lack of ads containing explicit misinformation about how and where to vote, or who won the election.

“To their credit, I think the ad platforms are catching some misinformation,” Zeng said. “What’s getting through are ads that are exploiting the gray areas in content moderation policies, things that seem deceptive but play to the letter of the law.”

The world of online ads is so complicated, the researchers said, that it’s hard to pinpoint exactly why or how certain ads appear on specific sites or are viewed by specific viewers.

 

  • This paper was one of three runners-up for the best paper award at the ACM Internet Measurement Conference.

 

“Certain ads get shown in certain places because the system decided that those would be the most lucrative ads in those spots,” Roesner said. “It’s not necessarily that someone is sitting there doing this on purpose, but the impact is still the same —  people who are the most vulnerable to certain techniques and certain content are the ones who will see it more.”

To protect computer users from problematic ads, the researchers suggest web surfers should be careful about taking content at face value, especially if it seems sensational. People can also limit how many ads they see by getting an ad blocker.

Theo Gregersen, a UW undergraduate student studying computer science, is also a co-author on this paper. This research was funded by the National Science Foundation, the UW Center for an Informed Public, and the John S. and James L. Knight Foundation.

For more information, contact badads@cs.washington.edu.

Grant number: CNS-2041894

UW and UC San Diego researchers honored for their work discovering that someone could hack a car /news/2021/09/22/uw-uc-san-diego-researchers-honored-discovering-someone-could-hack-car/ Wed, 22 Sep 2021 14:00:40 +0000
A team from the University of Washington and UC San Diego has received the Golden Goose Award from the American Association for the Advancement of Science. From left to right: Tadayoshi Kohno, Stephen Checkoway and Karl Koscher. (Not pictured: Stefan Savage) Photo: Mark Stone/University of Washington

Many people think of a car as a series of mechanical parts that — hopefully — work together to take us places, but that’s not the whole story.

Inside most modern cars is a network of computers, called “electronic control units,” that control all the systems and communicate with each other to keep everything rolling smoothly along.

More than 10 years ago, a team from the University of Washington and the University of California San Diego investigated whether these computing systems could be hacked and how that would affect a driver’s ability to control their car. To their own surprise — and to the alarm of car manufacturers — the researchers were able to manipulate the car in many ways, including disabling the brakes and stopping the engine, from a distance. This work led to two scientific papers that opened up a new area of cybersecurity research and served as a wake-up call for the automotive industry.

Now the team has received the Golden Goose Award from the American Association for the Advancement of Science. The award honors federally funded work that, in the words of AAAS, “may have seemed obscure, sounded ‘funny,’ or for which the results were totally unforeseen at the outset, but which ultimately led, often serendipitously, to major breakthroughs that have had significant societal impact.” The award was established in 2012 to counter criticisms of wasteful government spending, such as the late U.S. Sen. William Proxmire’s Golden Fleece Award.

“It’s an incredible honor to receive this award. Not only for us as individuals, but for the computer security research community,” said Tadayoshi Kohno, a UW professor in the Paul G. Allen School of Computer Science & Engineering and one of the project leaders. “More than 10 years ago, we saw that devices in our world were becoming incredibly computerized, and we wanted to understand what the risks might be if they continued to evolve without thought toward security and privacy. This award shines light on the importance of being thoughtful and strategic in figuring out what problems to work on today.”

Kohno and project co-lead Stefan Savage, a UC San Diego professor of computer science and engineering, are both computer security researchers who often chatted about potential upcoming threats that could be good to study.

“It became apparent to us when General Motors started advertising its OnStar system. Yoshi and I had a conversation, saying, ‘I bet there’s something there,'” Savage said. “Moreover, vulnerabilities in traditional computers had fairly limited impacts. You might lose some data or get a password stolen. But nothing like the visceral effect of a car’s brakes suddenly failing. I think that bridging that gap between the physical world and the virtual one was something that made this exciting for us.”

Savage and Kohno formed a super-team of researchers from both universities to dig into these questions. The team purchased a pair of Chevy Impalas — one for each university — to study as a representative car. The team worked collaboratively and in parallel, with researchers letting curiosity guide them.

Shown here are (from left to right) Karl Koscher, Tadayoshi Kohno and Stephen Checkoway with the UW team’s Chevy Impala. Photo: Mark Stone/University of Washington

The first task was to learn the language the cars’ computerized components used to communicate with each other. Then the researchers worked to inject their own voices into the conversation.

For example, the team started sending random messages to the cars’ brake controllers to try to influence them.

“We figured out ways to put the brake controller into this test mode,” said Karl Koscher, a research scientist in the Allen School who completed this research as a UW doctoral student. “And in the test mode, we found we could either leak the brake system pressure to prevent the brakes from working or keep the system fully pressurized so that it slams on the brakes.”

For more details about these papers, see the team’s website.

The team published two papers in 2010 and 2011 describing the results.

“The first paper asked what capabilities an attacker would have if they were able to compromise one of the components in the car. We connected to the cars’ internal networks to examine what we could do once they were hacked,” said Stephen Checkoway, an assistant professor of computer science at Oberlin College who completed this research as a UC San Diego doctoral student. “The second paper explored how someone could hack the car from afar.”

In these papers, the researchers chose not to unveil that they had used Chevy Impalas, and opted to contact GM privately.

“In our conversations with GM, they were quite puzzled. They said, ‘There’s no way to make the brake controller turn off the brakes. That’s not a thing,'” Savage said. “That Karl could remotely take over our car and make it do something the manufacturer didn’t think was possible reflects one of the key issues at play here. The manufacturer was hamstrung because they knew how the system was supposed to work. But we didn’t have that liability. We only knew what the car actually did.”

Stephen Checkoway (background) and Karl Koscher (foreground) work on a computer on top of the UW’s Chevy Impala. Photo: Mark Stone/University of Washington

The team’s papers prompted manufacturers to rethink car safety concerns and create new standard procedures for security practices. GM ended up appointing a vice president of product security to lead a new division. The Society of Automotive Engineers (SAE), the standards body for the automotive industry, quickly issued the first automotive cybersecurity standards. Other car companies followed along, as did the federal government. In 2012, the Defense Advanced Research Projects Agency launched a program geared toward creating hacking-resistant, cyber–physical systems.

“I like to think about what would have happened if we hadn’t done this work,” Kohno said. “It is hard to measure, but I do feel that neighboring industries saw this work happening in the automotive space and then they acted to avoid it happening to them too. The question that I have now is, as security researchers, what should we be investigating today, such that we have the same impact in the next 10 years?”

Members of the automobile security research team in 2010, left to right: Stephen Checkoway, Alexei Czeskis, Karl Koscher, Franziska Roesner, Tadayoshi Kohno, Stefan Savage and Damon McCoy. (Not pictured: Danny Anderson, Shwetak Patel, and Brian Kantor) Photo: University of Washington

Danny Anderson, Alexei Czeskis, Brian Kantor, Damon McCoy, Shwetak Patel and Franziska Roesner filled out the rest of the team. This research was funded by the National Science Foundation, the Air Force Office of Scientific Research, a Marilyn Fries endowed regental fellowship and an Alfred P. Sloan research fellowship.

For more information, contact Kohno at yoshi@cs.washington.edu, Savage at savage@cs.ucsd.edu, Koscher at supersat@cs.washington.edu and Checkoway at Stephen.Checkoway@oberlin.edu.

Grant numbers: CNS-0963695, CNS-0963702, CNS-0722000, CNS-0831532, CNS-0846065, CNS-0905384, FA9550-08-1-0352

Q&A: UW researchers clicked ads on 200 news sites to track misinformation /news/2020/09/28/uw-researchers-clicked-ads-on-200-news-sites-to-track-misinformation/ Mon, 28 Sep 2020 18:38:14 +0000

Editor’s note: All images of ads in this story are screenshots and are intended to help illustrate points in the text.

A screenshot of ads hosted by the ad platform Taboola. One ad is about where Kirkland products come from and another is about N95 masks
University of Washington researchers found that both mainstream and misinformation news sites displayed similar levels of problematic ads.

With the election season ramping up, political ads are being splashed across the web. But in the age of misinformation, how can news consumers tell if the ads they’re seeing are legitimate?

USA Today and other mainstream news sites might seem like they would limit access to deceptive ads. But a study by University of Washington researchers found that both mainstream and misinformation news sites displayed similar levels of problematic ads.

The team, composed of researchers in the Paul G. Allen School of Computer Science & Engineering, in mid-January collected more than 55,000 ads across more than 6,000 mainstream news sites and about 1,000 misinformation news sites (such as those on lists of known misinformation sites). Then the researchers manually examined ads from 100 each of the most popular mainstream and misinformation sites to categorize them as problematic or not. The team presented these findings May 21 at the Workshop on Technology and Consumer Protection.

Franziska Roesner, associate professor in the Allen School, and Eric Zeng, graduate research assistant in the Allen School, talk about deceptive ads on news sites. Soundbites are available online.

UW News had a conversation with the team about this research, where ads on news sites come from, and how things might change leading up to the election.

It sounds like there are two main types of ads on these sites: native and display ads. What’s the difference?

Eric Zeng, graduate research assistant in the Allen School: A “native ad” is designed to blend in with the rest of the page. So for example on a news site, a native ad would look like a headline for a news article. Or in an app like Yelp, it’d be a sponsored listing for a restaurant. Sometimes sites will try to make ads very clear by having a big button that says “ad” or “ad content.” But sometimes sites make it vague so it’s hard for people to tell.

Three native ads, one about celebrities who refuse to admit they aren't famous anymore, one about a new cash law coming before the election and one about a drone that captured photos no one was supposed to see.
A screenshot of three native ads

“Display ads,” also sometimes called “banner ads,” are generally on the top or the bottom of the screen, in a sidebar or within the text of a news article. They look like images.

What makes an ad “problematic?”

Franziska Roesner, associate professor in the Allen School: That’s exactly one of the questions we are trying to study. We see all sorts of techniques in the wild, such as clickbait, native ads that look like articles, gross images, polls, sensational claims and more. We’re trying to classify and measure these types of techniques and study how prevalent they are. Now we’re also studying how users react to them.

Tadayoshi Kohno, professor in the Allen School: In one sense, an ad on the web is just a paid way for me to get something in front of someone else, so they can click on it and come to my site. But advertising on the web can also be a mechanism to deliver content, as opposed to the old-fashioned definition of selling a product.

This ad says "Trump impeachment poll. Do you support Trump? Click (Yes) or (No).
A screenshot of an ad that looks like a political poll.

EZ: If you put a billboard or poster up, you had to convey the whole message in there and hopefully inspire people to do whatever you want. But for online ads, you just need to get people to click.

We saw ads that looked like political opinion polls, asking things like ‘Should Donald Trump be impeached?’ or ‘Which candidate do you prefer for president?’ Then if you click on it, it just takes you to an ad for some other product. Or maybe it really is a poll, but when you click on it, you have to sign up for a mailing list to submit your vote.

This medium enables different types of deceptions.

FR: Also, a billboard in the physical world is clearly an ad. We all understand that. But an ad that looks like a news headline that’s sitting among other legitimate headlines is potentially problematic. If I’m visiting The New York Times or another news outlet that I trust, and I can’t distinguish something on there as an ad, then I’m trusting that content way more than I would if I were on some random site.

Where do the ads we see on news sites come from?

EZ: News sites will embed a bit of code from an ad provider, like Google Ads, on their websites. Then when someone goes to the news site, the ad provider will look at all of the ads that advertisers have submitted, hold an auction among the advertisers to determine which ad is picked and then display the winning ad on the website.

FR: The ecosystem is really complicated. Let’s say The Seattle Times were to say, ‘We don’t want these types of ads on our site.’ It’s not so simple. It’s not like The Seattle Times chose the ads we’re seeing. They work with some ad providers that work with a bunch of other companies.

So if there’s a problematic ad on The Seattle Times site, it’s coming from what ad providers are pulling in. There’s also the targeting aspect: Who is viewing the page? Someone who tends to click on a certain type of ad is probably more likely to see it. Different visitors to the same site will get different ads. So it’s not even like the editors can load the page and see what the ads on their site will look like in advance.

What made you, as security and privacy experts, decide to start studying this?

FR: There’s been a lot of work in the security community, including work that we’ve done, looking at this broader ad ecosystem, but mostly from a privacy perspective — such as looking at what data ads collect about users’ browsing behaviors — or from a security perspective — such as looking for ads that are used to spread malware.

But then we started thinking about the fact that so much content that people see online is not from the primary websites they’re browsing, but from the ads on those pages. These ads might not necessarily be outright misinformation or lead to misinformation sites, but they’re still preying on the same types of biases.

TK: When asked about bad ads, privacy researchers used to talk about mechanisms — for example, studying how an ad is pervasively tracking an individual. This paper is broadening the definition, taking a look at it from the perspective of the content of the ad, and where it takes someone if they click on it.

FR: Instead of a technical attack where your computer is vulnerable, we’re thinking about it as more like your brain is vulnerable.

What was your goal with this project?

EZ: We wanted to compare mainstream news sites versus misinformation news sites to see if the quality of the ad content on those sites was any different. We hypothesized that we’d see more problematic ads on misinformation sites. But both had roughly similar quantities of these problematic ads. It’s evidence that both these types of websites are participating in the same advertising ecosystem.

For example, we found that the advertising provider Taboola ran more of the problematic ads than any of the other ad platforms that sites use. Taboola also claims that their ads provide more revenue to websites than standard banner display ads. If these ads can get people to click, then that’s earning the websites money.

Then, because mainstream news sites are struggling, they might be turning to ad providers like Taboola because it’s the best way to sustain their business, unfortunately. And then same for misinformation sites, it’s a way to make a quick buck by tricking people into clicking on these ads.

Why have ads if they’re going to be problematic?

FR: There’s tension here — the outcome can’t be ‘ads are bad.’ They fund the economic model of the web. I think legitimate content websites are walking this weird line between the quality of ad content and the revenue that they’re making from it.

The hope is that somehow we can balance these things so we can have ads and revenue, but improve the quality of content that people are seeing online.

How do you think the upcoming election will change the types of content from what you saw in January?

This ad has a picture of Donald Trump with the text "Radical democrats want to take away your guns! Sign the petition."
A screenshot of a political banner ad on a news site.

FR: We anticipate that things will get more interesting near the election, in terms of actual political ads and the mechanisms and techniques people will use. But we’re also interested in seeing if there are ads that use the political climate, such as those fake polls that aren’t legitimate ads for political candidates, as part of the technique.

EZ: We plan to continue collecting data to see what tactics these campaigns are using leading up to the election.

What, if anything, should people do as they’re seeing ads on their favorite news sites?

FR: In doing this work, I think I’ve become more aware of all the content on a page, but the ads in particular because they’re designed to draw you in. I’m practicing being more aware of my reactions to them.

TK: We’ve developed an intuition of what to be aware of when we’re crossing the street — Is there a crosswalk nearby? Has traffic in the opposite direction stopped? But I would say that in the online world, it’s sometimes hard to have that sense. Is a website intentionally trying to mislead us or is it just confusing?

We need to develop this level of street awareness, where we know that not everything out there on the web has our best interests at heart.

FR: It leads to a separate research question that we’re following up on now: How do we help people be aware of the emotional and cognitive impacts of these things? Eric, you looked at the most ads as part of this research. Do you have any advice?

EZ: Get an ad blocker.

This research was funded by the National Science Foundation.

For more information, contact Zeng at ericzeng@cs.washington.edu, Kohno at yoshi@cs.washington.edu and Roesner at franzi@cs.washington.edu.

Grant numbers: CNS-1565252, CNS-1651230

‘I saw you were online’: How online status indicators shape our behavior /news/2020/04/13/how-online-status-indicators-shape-our-behavior/ Mon, 13 Apr 2020 16:10:20 +0000

Some apps highlight when a person is online — and then share that information with their followers. When a user logs in to a website or app that uses online status indicators, a little green (or orange or blue) dot pops up to alert their followers that they’re currently online.

Researchers at the University of Washington wanted to know if people recognize that they are sharing this information and whether these indicators change how people behave online.

UW researchers found that many people misunderstand online status indicators but still carefully shape their behavior to control how they are displayed to others. Photo: Camille Cobb

After surveying smartphone users, the team found that many people misunderstand online status indicators but still carefully shape their behavior to control how they are displayed to others. More than half of the participants reported that they had suspected that someone had noticed their status. Meanwhile, over half reported logging on to an app just to check someone else’s status. And 43% of participants discussed changing their settings or behavior because they were trying to avoid one specific person.

The study will be published in the Proceedings of the 2020 ACM CHI conference on Human Factors in Computing Systems.

“Online status indicators are an unusual mechanism for broadcasting information about yourself to other people,” said senior author Alexis Hiniker, an assistant professor in the UW Information School. “When people share information by posting or liking something, the user is in control of that broadcast. But online status indicators are sharing information without taking explicit direction from the user. We believe our results are especially intriguing in light of the coronavirus pandemic: With people’s social lives completely online, what is the role of online status indicators?”

People need to be aware of everything they are sharing about themselves online, the researchers said.

“Practicing good online security and privacy hygiene isn’t just a matter of protecting yourself from skilled technical adversaries,” said lead author Camille Cobb, a postdoctoral researcher at Carnegie Mellon University who completed this research as a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “It also includes thinking about how your online presence allows you to craft the identities that you want and manage your interpersonal relationships. There are tools to protect you from malware, but you can’t really download something to protect you from your in-laws.”

The team recruited 200 participants ages 19 to 64 through a crowdsourcing platform to fill out an online survey. Over 90% of the participants were from the U.S., and almost half of them had completed a bachelor’s degree.

The researchers asked participants to identify apps that they use from a list of 44 that have online status indicators. The team then asked participants if those apps broadcast their online status to their network. Almost 90% of participants correctly identified that at least one of the apps they used had online status indicators. But for at least one app they used, 62.5% answered “not sure” and 35.5% answered “no.” For example, of the 60 people who said they use Google Docs regularly, 40% said it didn’t have online status indicators and 28% were not sure.

Then the researchers asked the participants to time themselves while they located the settings to turn off “appearing online” in each app they used regularly. For the apps that have settings, participants gave up before they found the settings 28% of the time. For apps that don’t have these settings, such as WhatsApp, participants mistakenly thought they had turned the settings off 23% of the time.

“When you put some of these pieces together, you’re seeing that more than a third of the time, people think they’re not broadcasting information that they actually are,” Cobb said. “And then even when they’re told: ‘Please go try and turn this off,’ they’re still not able to find it more than a quarter of the time. Just broadly we’re seeing that people don’t have a lot of control over whether they share this information with their network.”

Here’s one way the team says designers could help people have more control over whether to broadcast their online status. Photo: Cobb et al./ Proceedings of the 2020 ACM CHI conference on Human Factors in Computing Systems

Finally the team asked participants a series of questions about their own experiences online. These questions touched on whether participants noticed when others were online, if they thought others noticed when they were online and whether they had changed their own behavior because they did or didn’t want to appear online.

“We see this repeated pattern of people adjusting themselves to meet the demands of technology — as opposed to technology adapting to us and meeting our needs,” said co-author Lucy Simko, a UW doctoral student in the Allen School. “That means people are choosing to go online not because they want to do something there but because it’s important that their status indicator is projecting the right thing at the right time.”

Now that most states have put stay-at-home orders in place to try to combat the coronavirus pandemic, many people are working from home and socializing only online. This could change how people use online status indicators, the team says. For example, employees can use their online status to indicate that they are working and available for meetings. Or people can use a family member’s “available” status as an opportunity to check up on them and make sure they are OK.

“Right now, when a lot of people are working remotely, I think there’s an opportunity to think about how future evolutions of this technology can help create a sense of community,” Cobb said. “For example, in the real world, you can have your door cracked open and that means ‘interrupt me if you have to,’ you can have it wide open to say ‘come on in’ or you can have your door closed and you theoretically won’t get disturbed. That kind of nuance is not really available in online status indicators. But we need to have a sense of balance — to create community in a way that doesn’t compromise people’s privacy, share people’s statuses when they don’t want to or allow their statuses to be abused.”

Tadayoshi Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the UW Tech Policy Lab.

For more information, contact Hiniker at alexisr@uw.edu, Cobb at ccobb@andrew.cmu.edu, Simko at simkol@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Popular third-party genetic genealogy site is vulnerable to compromised data, impersonations /news/2019/10/29/genetic-genealogy-site-vulnerable-compromised-data-impersonations/ Tue, 29 Oct 2019 13:11:40 +0000
DNA testing services are making it easier for people to learn about their heritage. People can also use their genetic testing results to connect to potential relatives in their family trees by using third-party sites, like GEDmatch, where they can compare their DNA sequences to others in the database.

DNA testing services like 23andMe, Ancestry.com and MyHeritage are making it easier for people to learn about their ethnic heritage and genetic makeup. People can also use genetic testing results to connect to potential relatives by using third-party sites, like GEDmatch, where they can compare their DNA sequences to others in the database who have uploaded test results.

But a less happy ending is also possible. Researchers at the University of Washington have found that GEDmatch is vulnerable to multiple kinds of security risks. An adversary can use only a small number of comparisons to extract someone’s sensitive genetic markers. A malicious user could also construct a fake genetic profile to impersonate someone’s relative.

The team published its findings Oct. 29. The researchers have also had this research accepted at the Network and Distributed System Security Symposium and will present these results in February in San Diego.

“People think of genetic data as being personal — and it is. It’s literally part of their physical identity,” said lead author Peter Ney, a postdoctoral researcher in the UW Paul G. Allen School of Computer Science & Engineering. “This makes the privacy of genetic data particularly important. You can change your credit card number but you can’t change your DNA.”

An animation of a genetic pedigree where a child falsely claims to be related to the father
University of Washington researchers found that an adversary can use only a small number of comparisons on GEDmatch to extract sensitive genetic markers for someone and construct a fake genetic profile to impersonate someone’s relative. Shown here is a genetic pedigree outline of two parents with two kids. Then another child (red) falsely claims to be related to the father. Photo: Rebecca Gourley/University of Washington

The mainstream use of genetic testing results for genealogy is a relatively recent phenomenon. The initial benefits may have obscured some underlying risks, the researchers say.

“When we have a new technology, whether it is smart automobiles or medical devices, we as a society start with ‘What can this do for us?’ Then we start looking at it from an adversarial perspective,” said co-author Tadayoshi Kohno, a professor in the Allen School. “Here we’re looking at this system and asking: ‘What are the privacy issues associated with sharing genetic data online?'”

To look for security issues, the team created a research account on GEDmatch. The researchers uploaded experimental genetic profiles that they created by mixing and matching genetic data from multiple databases of anonymous profiles. GEDmatch assigned these profiles an ID that people can use to do one-to-one comparisons with their own profiles.

For the one-to-one comparisons, GEDmatch produces graphics with information about how much of the two profiles match. One graphic is a bar for each of the 22 non-sex chromosomes. Each bar changes length depending on how similar the two profiles are for that chromosome. A longer bar shows that there are more matching regions, while a series of shorter bars means that there are short regions of similarity interspersed with areas that are different.

For the one-to-one comparisons, GEDmatch produces a bar for each of the 22 non-sex chromosomes that changes length depending on how similar the two profiles are for that chromosome. Shown here is an example of this graphic. A longer bar shows that there are more matching regions (top), while a series of shorter bars means that there are short regions of similarity interspersed with areas that are different (bottom). Photo: Rebecca Gourley/University of Washington

The team wanted to know if an adversary could use that bar to find out a specific DNA sequence within one region of a target’s profile, such as whether or not the target has a mutation that makes them susceptible to a disease. For this search, the team designed four “extraction profiles” that they could use for one-to-one comparisons with a target profile they created. Based on whether the bar stayed in one piece — indicating that the extraction profile and the target matched — or split into two bars — indicating no match — the team was able to deduce the target’s specific sequence for that region.

“Genetic information correlates to medical conditions and potentially other deeply personal traits,” said co-author Luis Ceze, a professor in the Allen School. “Even in the age of oversharing information, this is most likely the kind of information one doesn’t want to share for legal, medical and mental health reasons. But as more genetic information goes digital, the risks increase.”

Next the researchers wondered if an adversary could use a similar technique to acquire a target’s entire profile. The team focused on another GEDmatch graphic that describes how well the profiles match by showing a line of colored pixels that mark how well each DNA segment in the query matches the target: green for a complete match, yellow for a half match — when one strand of DNA matched but not the other — and red for no match.

Then the team played a game of 20 questions: They created 20 extraction profiles that they used for one-to-one comparisons on a target profile that they created. Based on how the pixel colors changed, they were able to pull out information about the target sequence. For five test profiles, the researchers extracted about 92% of each profile’s unique sequences with about 98% accuracy.

“So basically, all the adversary needs to do is upload these 20 profiles and then make 20 one-to-one comparisons to the target,” Ney said. “They could write a program that automatically makes these comparisons, downloads the data and returns the result. That would take 10 seconds.”

Once someone’s profile is exposed, the adversary can use that information to create a profile for a false relative. The team tested this by creating a fake child for one of their experimental profiles. Because children receive half their DNA from each parent, the fake child’s profile had their DNA sequences half matching the parent profile. When the researchers did a one-to-one comparison of the two profiles, GEDmatch estimated a parent-child relationship.

Have questions? Check out the team’s FAQ to learn more about this research project.

An adversary could generate any false relationship they wanted by changing the fraction of shared DNA, the team said.

“If GEDmatch users have concerns about the privacy of their genetic data, they have the option to delete it from the site,” Ney said. “The choice to share data is a personal decision, and users should be aware that there may be some risk whenever they share data. Security is a difficult problem for internet companies in every industry.”

Prior to publishing their results, the researchers shared their findings with GEDmatch, which has been working to resolve these issues, according to the GEDmatch team. The UW researchers are not affiliated with GEDmatch, however, and can’t comment on the details of any fixes.

“We’re only beginning to scratch the surface,” Kohno said. “These discoveries are so fundamental that people might already be doing this and we don’t know about it. The responsible thing for us is to disclose our findings so that we can engage a community of scientists and policymakers in a discussion about how to mitigate this issue.”

This research was funded in part by the UW Tech Policy Lab, which receives support from: the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation. This research also was funded by a grant from the Defense Advanced Research Projects Agency Molecular Informatics Program.

For more information, contact the team at dnasec@cs.washington.edu.

New tools to minimize risks in shared, augmented-reality environments /news/2019/08/20/shared-augmented-reality-environments/ Tue, 20 Aug 2019 16:11:57 +0000
For now, augmented reality remains mostly a solo activity, but soon people might be using the technology in groups for collaborating on work or creative projects.

A few summers ago throngs of people began using the Pokemon Go app, the first mass-market augmented reality game, to collect virtual creatures hiding in the physical world.

For now, AR remains mostly a solo activity, but soon people might be using the technology for a variety of group activities, such as playing multi-user games or collaborating on work or creative projects. But how can developers guard against bad actors who try to hijack these experiences, and prevent privacy breaches in environments that span digital and physical space?

University of Washington security researchers have developed ShareAR, a toolkit that lets app developers build in collaborative and interactive features without sacrificing their users’ privacy and security. The researchers presented their findings Aug. 14 at the USENIX Security Symposium in Santa Clara, California.

“A key role for computer security and privacy research is to anticipate and address future risks in emerging technologies,” said co-author Franziska Roesner, an assistant professor in the Paul G. Allen School of Computer Science & Engineering. “It is becoming clear that multi-user AR has a lot of potential, but there has not been a systematic approach to addressing the possible security and privacy issues that will arise.”

Learn more about the UW Security and Privacy Research Lab and its role in the space of computer security and privacy for augmented reality.

Sharing virtual objects in AR is in some ways like sharing files on a cloud-based platform like Google Drive — but there’s a big difference.

“AR content isn’t confined to a screen like a Google Doc is. It’s embedded into the physical world you see around you,” said first author Kimberly Ruth, a UW undergraduate student in the Allen School. “That means there are security and privacy considerations that are unique to AR.”

For example, people could potentially add virtual inappropriate images to physical public parks, scrawl virtual offensive messages on places of worship or even place a virtual “kick me” sign on an unsuspecting user’s back.

“We wanted to think about how the technology should respond when a person tries to harass or spy on others, or tries to steal or vandalize other users’ AR content,” Ruth said. “But we also don’t want to shut down the positive aspects of being able to share content using AR technologies, and we don’t want to force developers to choose between functionality and security.”

To address these concerns, the team created a prototype toolkit, ShareAR, for the Microsoft HoloLens. ShareAR helps applications create, share and keep track of objects that users share with each other.

Another potential issue with multi-user AR is that developers need a way to signal the physical location of someone’s private virtual content to keep other users from accidentally standing in between that person and their work — like standing between someone and the TV. So the team developed “ghost objects” for ShareAR.

“A ghost object serves as a placeholder for another virtual object. It has the same physical location and rough 3D bulk as the object it stands in for, but it doesn’t show any of the sensitive information that the original object contains,” Ruth said. “The benefit of this approach over putting up a virtual wall is that, if I’m interacting with a virtual private messaging window, another person in the room can’t sneak up behind me and peer over my shoulder to see what I’m typing — they always see the same placeholder from any angle.”
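
The core rule behind ghost objects is small enough to sketch in a few lines. The Python below is purely illustrative; the class and function names are hypothetical and do not reflect ShareAR’s actual API, which targets the HoloLens. It captures the idea that the owner sees the real object while every other viewer gets a placeholder with the same position and rough 3D bulk and none of the content.

from dataclasses import dataclass

# Hypothetical sketch of the ghost-object idea; not the ShareAR API.
@dataclass
class ARObject:
    position: tuple   # (x, y, z) in shared world coordinates
    bounds: tuple     # rough 3D extent: (width, height, depth)
    content: str      # sensitive payload, e.g. text in a private window
    owner: str

def ghost_of(obj: ARObject) -> ARObject:
    # Same physical footprint, none of the sensitive content, so other
    # users can avoid the object without being able to read it.
    return ARObject(obj.position, obj.bounds, "", obj.owner)

def render_for(viewer: str, obj: ARObject) -> ARObject:
    # The owner sees the real object; everyone else sees the ghost.
    return obj if viewer == obj.owner else ghost_of(obj)

Because the placeholder looks the same from every angle, there is no vantage point from which the hidden content leaks.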

The team tested ShareAR with three case study apps. Creating objects and changing permission settings within the apps were the most computationally expensive actions. But, even when the researchers tried to stress out the system with large numbers of users and shared objects, ShareAR took no longer than 5 milliseconds to complete a task. In most cases, it took less than 1 millisecond.

The team tested ShareAR with three case study apps: Cubist Art (top panel), which lets users create and share virtual artwork with each other; Doc Edit (bottom left panel), which lets users create virtual notes or lists they can share or keep private; and Paintball (bottom right panel), which lets users play paintball with virtual paint. In the Doc Edit app, the semi-transparent gray box in the top left corner represents a “ghost object,” or a document that another user wishes to remain private. Photo: Ruth et al./USENIX Security Symposium

Developers can download ShareAR to use for their own HoloLens apps.

“We’ll be very interested in hearing feedback from developers on what’s working well for them and what they’d like to see improved,” Ruth said. “We believe that engaging with technology builders while AR is still in development is the key to tackling these security and privacy challenges before they become widespread.”

Tadayoshi Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the National Science Foundation and the Washington Research Foundation.

###

For more information, contact Roesner at franzi@cs.washington.edu, Ruth at kcr32@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Grant numbers: CNS-1513584, CNS-1565252, CNS-1651230

For $1000, anyone can purchase online ads to track your location and app use /news/2017/10/18/for-1000-anyone-can-purchase-online-ads-to-track-your-location-and-app-use/ Wed, 18 Oct 2017 16:00:19 +0000 /news/?p=55074
New 91±¬ĮĻ research finds that for a budget of roughly $1,000, it is possible for someone to track your location and app use by purchasing and targeting mobile ads. The team aims to raise industry awareness about the potential privacy threat.

Privacy concerns have long swirled around how much information online advertising networks collect about people’s browsing, buying and social media habits — typically to sell you something.

But could someone use mobile advertising to learn where you go for coffee? Could a burglar establish a sham company and send ads to your phone to learn when you leave the house? Could a suspicious employer see if you’re using shopping apps on work time?

The answer is yes, at least in theory. New 91±¬ĮĻ research, to be presented Oct. 30 at the Association for Computing Machinery’s Workshop on Privacy in the Electronic Society, suggests that for roughly $1,000, someone with devious intent can purchase and target online advertising in ways that allow them to track the location of other individuals and learn what apps they are using.

“Anyone from a foreign intelligence agent to a jealous spouse can pretty easily sign up with a large internet advertising company and on a fairly modest budget use these ecosystems to track another individual’s behavior,” said lead author Paul Vines, a recent doctoral graduate in the 91±¬ĮĻ’s Paul G. Allen School of Computer Science & Engineering.

The research team set out to test whether an adversary could exploit the existing online advertising infrastructure for personal surveillance and, if so, raise industry awareness about the threat.

“Because it was so easy to do what we did, we believe this is an issue that the online advertising industry needs to be thinking about,” said co-author Franziska Roesner, co-director of the Security and Privacy Research Lab and an assistant professor in the Allen School. “We are sharing our discoveries so that advertising networks can try to detect and mitigate these types of attacks, and so that there can be a broad public discussion about how we as a society might try to prevent them.”

This map represents an individual’s morning commute. Red dots reflect the places where the 91±¬ĮĻ computer security researchers were able to track that person’s movements by serving location-based ads: at home (real location not shown), a coffee shop, bus stop and office. The team found that a target needed to stay in one location for roughly four minutes before an ad was served, which is why no red dots appear along the individual’s bus commute (dashed line) or walking route (solid line). Photo: 91±¬ĮĻ

The researchers discovered that an individual ad purchaser can, under certain circumstances, see when a person visits a predetermined sensitive location — a suspected rendezvous spot for an affair, the office of a company that a venture capitalist might be interested in or a hospital where someone might be receiving treatment — within 10 minutes of that person’s arrival. They were also able to track a person’s movements across the city during a morning commute by serving location-based ads to the target’s phone.

The team also discovered that individuals who purchase the ads could see what types of apps their target was using. That could potentially divulge information about the person’s interests, dating habits, religious affiliations, health conditions, political leanings and other potentially sensitive or private information.

Someone who wants to surveil a person’s movements first needs to learn the mobile advertising ID (MAID) for the target’s mobile phone. These unique identifiers, which help marketers serve ads tailored to a person’s interests, are sent to the advertiser and a number of other parties whenever a person clicks on a mobile ad. A person’s MAID also could be obtained by eavesdropping on an unsecured wireless network the person is using or by gaining temporary access to his or her WiFi router.

The 91±¬ĮĻ team demonstrated that customers of advertising services can purchase a number of hyperlocal ads through that service, which will only be served to that particular phone when its owner opens an app in a particular spot. By setting up a grid of these location-based ads, the adversary can track the target’s movements if he or she has opened an app and remains in a location long enough for an ad to be served — typically about four minutes, the team found.
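
The grid itself is simple geometry. The sketch below is a hypothetical illustration with an assumed cell size and origin; it shows no advertising API, only how coordinates map to hyperlocal cells so that a report of an ad served in a given cell places the phone inside that cell.

import math

CELL_METERS = 50  # assumed size of one hyperlocal targeting cell
ORIGIN = (47.6062, -122.3321)  # arbitrary reference point

def cell_for(lat: float, lon: float) -> tuple:
    # Map a latitude/longitude to a (row, col) cell relative to ORIGIN.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(ORIGIN[0]))
    row = int((lat - ORIGIN[0]) * m_per_deg_lat // CELL_METERS)
    col = int((lon - ORIGIN[1]) * m_per_deg_lon // CELL_METERS)
    return row, col

# One location-targeted ad buy per cell: a report that the ad was
# served in a cell localizes the phone to that roughly 50 m square.
print(cell_for(47.6097, -122.3331))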

Importantly, the target does not have to click on or engage with the ad — the purchaser can see where ads are being served and use that information to track the target through space. In the team’s experiments, they were able to pinpoint a person’s location within about 8 meters.

“To be very honest, I was shocked at how effective this was,” said co-author Tadayoshi Kohno, an Allen School professor who has studied security vulnerabilities in products ranging from automobiles to medical devices. “We did this research to better understand the privacy risks with online advertising. There’s a fundamental tension that as advertisers become more capable of targeting and tracking people to deliver better ads, there’s also the opportunity for adversaries to begin exploiting that additional precision. It is important to understand both the benefits and risks with technologies.”

An individual could potentially disrupt the simple types of location-based attacks that the 91±¬ĮĻ team demonstrated by frequently resetting the mobile advertising IDs in their phones — a feature that many smartphones now offer. Disabling location tracking within individual app settings could help, the researchers said, but advertisers still may be capable of harvesting location data in other ways.

On the industry side, mobile and online advertisers could help thwart these types of attacks by rejecting ad buys that target only a small number of devices or individuals, the researchers said. They also could develop and deploy machine learning tools to distinguish between normal advertising patterns and suspicious advertising behavior that looks more like personal surveillance.
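
As a toy illustration of the first mitigation (the threshold, radius and function names are hypothetical, not any network’s real policy), an ad platform could refuse buys that are both hyperlocal and aimed at very few devices:

MIN_DEVICES = 1000          # assumed minimum audience for a hyperlocal buy
MAX_SUSPECT_RADIUS_M = 100  # assumed radius below which buys get scrutiny

def accept_ad_buy(device_count: int, radius_m: float) -> bool:
    # A buy that targets a tiny audience inside a tiny area looks more
    # like personal surveillance than marketing, so reject it.
    return not (radius_m < MAX_SUSPECT_RADIUS_M and device_count < MIN_DEVICES)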

The 91±¬ĮĻ Security and Privacy Research Lab is a leader in evaluating potential security threats in emerging technologies, including telematics in automobiles, web browsers, DNA sequencing software and augmented reality, before they can be exploited by bad actors.

Next steps for the team include working with experts at the 91±¬ĮĻ’s Tech Policy Lab to explore the legal and policy questions raised by this new form of potential intelligence gathering.

The research was funded by the National Science Foundation, the Tech Policy Lab and the Short-Dooley Professorship.

For more information, contact the research team at adint@cs.washington.edu.

Grant number: NSF: CNS-1463968

Computer scientists use music to covertly track body movements, activity /news/2017/08/16/computer-scientists-use-music-to-covertly-track-body-movements-activity/ Wed, 16 Aug 2017 16:36:45 +0000 /news/?p=54423

As smartphones, tablets, smart TVs and other smart devices become more prevalent in our lives, computer scientists have raised concerns that these network-enabled devices, if not properly secured, could be co-opted to steal data or invade user privacy.

Now researchers at the 91±¬ĮĻ have demonstrated how it is possible to transform a smart device into a surveillance tool that can collect information about the body position and movements of the user, as well as other people in the device’s immediate vicinity. Their approach involves remotely hijacking smart devices to play music embedded with repeating pulses that track a person’s position, body movements and activities, both near the device and through walls.

The team showed how it is possible to collect such detailed data on personal activity using CovertBand, software code they created to turn smart devices into active sonar systems. As the researchers will show in a Sept. 13 conference presentation, CovertBand can utilize built-in microphones and speakers in a smart device — and can be controlled remotely.

Left-to-right: 91±¬ĮĻ professor of computer science and engineering Tadayoshi Kohno, 91±¬ĮĻ doctoral student Rajalakshmi Nandakumar and 91±¬ĮĻ associate professor of computer science and engineering Shyam Gollakota. Not pictured is team member and 91±¬ĮĻ doctoral student Alex Takakuwa. Photo: Dennis Wise/91±¬ĮĻ

“To our knowledge, this is the first time anyone has demonstrated that it is possible to convert smart commodity devices — like smartphones and smart TVs — into active sonar systems using music,” said senior author Shyam Gollakota, a 91±¬ĮĻ associate professor of computer science and engineering. “And the physical information CovertBand can gather — even through walls — is sufficiently detailed for an attacker to know what the user is doing, as well as other people nearby.”

CovertBand utilizes the principles of active sonar to gather this information. Active sonar systems, such as on submarines, determine the position of objects by sending out an acoustic pulse. Those sound waves bounce off objects in their path, and the deflected waves can be picked up by a receiver to determine the object’s position, distance and shape.

Through the speaker of a smartphone or other device, CovertBand sends out a repeating pulse of sound waves in the 18 to 20 kHz range. Much like sonar on a submarine, these sound waves are reflected when they encounter objects in their path. CovertBand uses the device’s built-in microphones as a receiver to pick up these reflected sound waves. The smart device then transmits this information to the attacker, who could be a few feet away or halfway across the globe.
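
The ranging math underneath is standard sonar: an echo that arrives Δt seconds after the pulse leaves the speaker places the reflecting object at distance d = c·Δt/2, where c ā‰ˆ 343 m/s is the speed of sound in air. The Python sketch below illustrates that principle only; it is not the CovertBand code, and the 48 kHz sample rate and 3-meter target are assumptions. It generates an 18-to-20 kHz chirp, simulates a delayed echo, and recovers the range by cross-correlating the received signal with the transmitted pulse.

import numpy as np

FS = 48_000  # assumed sample rate (Hz)
C = 343.0    # speed of sound in air (m/s)

# A 10 ms linear chirp sweeping 18-20 kHz, like one sonar pulse.
t = np.arange(0, 0.01, 1 / FS)
f0, f1 = 18_000, 20_000
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

# Simulate an echo from a target 3 m away: round trip takes 2d/c seconds.
true_range = 3.0
delay = int(round(2 * true_range / C * FS))
received = np.concatenate([np.zeros(delay), 0.2 * chirp, np.zeros(1000)])

# Matched filter: the correlation peak marks the echo's arrival time.
corr = np.correlate(received, chirp, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
print(f"estimated range: {est_delay * C / (2 * FS):.2f} m")  # ~3.00 m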

91±¬ĮĻ doctoral student and co-lead author Rajalakshmi Nandakumar demonstrates the simple walking motion that CovertBand can detect. Photo: Dennis Wise/91±¬ĮĻ

“Most of today’s smart devices including smart TVs, Google Home, Amazon Echo and smartphones come with built-in microphones and speaker systems — which lets us use them to play music, record video and audio tracks, have phone conversations or participate in videoconferencing,” said co-lead author Rajalakshmi Nandakumar, a 91±¬ĮĻ doctoral student in computer science and engineering. “But that also means that these devices have the basic components in place to make them vulnerable to attack in this manner.”

“Other surveillance approaches require specialized hardware, from the ‘classic’ hidden camera to an ultrasound-like device that must be placed on the wall of a neighboring room,” said co-lead author Alex Takakuwa, a 91±¬ĮĻ doctoral student in computer science and engineering. “CovertBand shows for the first time that through-barrier surveillance is possible using no hardware beyond what smart devices already have.”

The researchers tested CovertBand using a Samsung Galaxy S4 smartphone hooked up to a portable speaker. Photo: Dennis Wise/91±¬ĮĻ

The team tested CovertBand’s effectiveness using a smartphone hooked up to either a portable speaker or a standard flat-screen TV. In both cases, CovertBand’s data could be used to decipher repetitive movements such as arm-pumping, walking or pelvic tilts to a range of up to 6 meters from the smartphone, with a positional error of only 8 to 18 centimeters. Researchers also discovered that, with the portable speaker, CovertBand’s pulses can transmit through thin, interior walls — though the range drops to 2 to 3 meters.

Currently, CovertBand can automatically identify and infer repetitive motions. More detailed inferences require manual analyses of data — or additional tools.

“Our initial goal was to demonstrate that it is possible to use passive acoustics to gather even basic — but still highly sensitive — information using CovertBand,” said Gollakota. “But if you have enough data from CovertBand, you could run it through machine-learning algorithms to help classify more movements for faster identification.”

The screen shows the signatures of arm waving, as detected by CovertBand. CovertBand can remotely transform a smart device into an active sonar system, using its speakers to transmit a repeated audio pulse and its microphones to collect spatial information on how those pulses are reflected by the repetitive motions of users. Photo: Dennis Wise/91±¬ĮĻ

The 18 to 20 kHz repeating pulses employed by CovertBand sit at the upper limit of what most adults can hear, though children, younger adults and even pets might be able to hear them well, said Nandakumar. But to increase the range of surveillance and work through walls, the authors increased the volume of these repeating pulses, which made them audible. To mask the sound, they “covered” CovertBand’s pulses by playing songs or other audio clips over them. Some songs work better than others — particularly compositions with repetitive, percussive beats. When they played the CovertBand pulses beneath 20 popular songs — including Lenny Kravitz’s “American Woman” and Michael Jackson’s “Bad” — listeners could identify the “hacked” version of the song 58 percent of the time, just slightly above the 50 percent accuracy expected from random guessing.

“Since CovertBand enables through-the-wall surveillance, anyone can play music on their smart devices to track people through walls,” said Takakuwa. “This is concerning because, if a neighbor is playing music, it could either be a benign act or an act of surveillance to determine if anyone is in the adjacent apartment, track their movements or infer their activities.”

91±¬ĮĻ doctoral student and co-lead author Rajalakshmi Nandakumar demonstrates the simple walking motion that CovertBand can detect. The TV is broadcasting CovertBand transmissions masked by a song. Photo: Dennis Wise/91±¬ĮĻ

The researchers said that soundproofing a room would prevent attacks through walls. Emitting a jamming signal at the same 18 to 20 kHz frequency range would also prevent hacked devices or attackers in the next room from gathering information. But currently, those are also impractical defenses for most people. Soundproofed rooms have no windows, for example, and jamming signals would have to be sent the moment an attack is detected. Another potential — though partial — defense could be to allow users to deactivate the speakers or microphones on their smart devices. But such a move would go against industry trends for some of these devices.
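
As a rough sketch of the jamming idea (parameters are assumptions, not taken from the paper), a defender could generate band-limited noise confined to the 18-to-20 kHz pulse band and play it continuously to drown out any echoes:

import numpy as np

FS = 48_000  # assumed playback sample rate (Hz)
n = FS       # one second of jamming signal, looped during playback

# White noise filtered in the frequency domain to the 18-20 kHz band.
noise = np.random.randn(n)
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1 / FS)
spectrum[(freqs < 18_000) | (freqs > 20_000)] = 0
jam = np.fft.irfft(spectrum, n=n)
jam /= np.max(np.abs(jam))  # normalize amplitude for playback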

“In many cases, when the device is on, then its speakers and microphones are also on,” said Nandakumar.

The team hopes that knowledge of what is possible will raise awareness of these privacy dangers and prompt scientists to develop practical countermeasures.

“We always want to stay one step ahead of the bad guys — of attackers who are trying to collect this information about users,” said co-author Tadayoshi Kohno, a 91±¬ĮĻ professor of computer science and engineering. “We’re providing education about what is possible and what capabilities the general public might not know about, so that people can be aware and can build defenses against this.”

The research was funded in part by a Google Faculty Award, the Alfred P. Sloan Foundation, the 91±¬ĮĻ’s Short-Dooley Career Development Professorship and the National Science Foundation.

###

For more information, contact musicattacks@cs.washington.edu.

