
News

You can’t tell whether an online restaurant review is fake—but this AI can

Researchers find AI-generated reviews and comments pose a significant threat to consumers, but machine learning can help detect the fakes.
Image: Fake reviews, Secure Systems research group, Department of Computer Science

Sites like TripAdvisor, Yelp and Amazon display user reviews of products and services, and consumers pay close attention: nine out of ten people read these peer reviews and trust what they see. In fact, up to 40% of users decide to make a purchase based on only a couple of reviews, and great reviews make people spend 30% more on their purchases.

Yet not all reviews are legitimate. Fake reviews written by real people are already common on review sites, but the number of machine-generated fakes is likely to increase substantially.

According to doctoral student Mika Juuti at Aalto University, algorithm-generated fake reviews are nowadays easy and fast to produce, and accurate enough to pass as genuine. Most of the time, people are unable to tell the difference between genuine and machine-generated fake reviews.

‘Misbehaving companies can either try to boost their sales by creating a positive brand image artificially or by generating fake negative reviews about a competitor. The motivation is, of course, money: online reviews are a big business for travel destinations, hotels, service providers and consumer products,’ says Mika Juuti.

In 2017, researchers from the University of Chicago described a method for training a machine learning model, a deep neural network, using a dataset of three million real restaurant ratings on Yelp. After the training, the model generated fake restaurant reviews character by character.
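
In broad strokes, such a character-level generator is a recurrent language model: it is trained to predict the next character of a review and is then sampled one character at a time. The PyTorch sketch below illustrates the idea only; the model class, hyperparameters and sampling routine are assumptions made for illustration, not the Chicago researchers' implementation, and the training loop is omitted.

import torch
import torch.nn as nn

class CharLM(nn.Module):
    # Tiny character-level language model (illustrative only).
    def __init__(self, vocab_size, emb_dim=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def sample_review(model, char2id, id2char, seed="the food was ", length=200, temp=0.8):
    # After training on real reviews (training loop omitted), feed a seed
    # string and repeatedly sample the next character from the model.
    model.eval()
    ids = torch.tensor([[char2id[c] for c in seed]])
    chars = list(seed)
    with torch.no_grad():
        logits, state = model(ids)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temp, dim=-1)
            next_id = torch.multinomial(probs, 1).item()
            chars.append(id2char[next_id])
            logits, state = model(torch.tensor([[next_id]]), state)
    return "".join(chars)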

There was a slight hiccup in the method, however; it had a hard time staying on topic. For a review of a Japanese restaurant in Las Vegas, the model could make references to an Italian restaurant in Baltimore. These kinds of errors are, of course, easily spotted by readers.

To help the review generator stay on the mark, Juuti and his team used a technique called neural machine translation to give the model a sense of context. Using a text sequence of ‘review rating, restaurant name, city, state, and food tags’, they started to obtain believable results.
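
The sketch below shows what that conditioning can look like in practice, assuming each training example pairs a compact context string (the "source") with the review text (the "target"), just as source and target sentences are paired in machine translation. The field order, helper function and restaurant name are illustrative assumptions, not the exact format used in the study.

# Illustrative pairing of context and review for NMT-style training.
def make_training_pair(rating, name, city, state, food_tags, review_text):
    # Source sequence: the context the encoder reads.
    source = f"{rating} {name} {city} {state} {' '.join(food_tags)}"
    # Target sequence: the review the decoder learns to produce.
    target = review_text
    return source, target

src, tgt = make_training_pair(
    5, "Panda Wok", "Las Vegas", "NV", ["chinese", "noodles"],
    "Great noodles and friendly staff, will definitely come back!",
)
# A standard encoder-decoder model is then trained on many such
# (source, target) pairs, so the generated review stays anchored to the
# given rating, restaurant, city and cuisine.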

‘In the user study we conducted, we showed participants real reviews written by humans and fake machine-generated reviews and asked them to identify the fakes. Up to 60% of the fake reviews were mistakenly thought to be real,’ says Juuti.

Juuti and his colleagues then devised a classifier to spot the fakes. It performed well, particularly in the cases where human evaluators had the most difficulty telling whether a review was real or not.
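
As a rough picture of what such a detector can look like, the sketch below trains an off-the-shelf text classifier on labelled examples (0 = human-written, 1 = machine-generated). It uses character n-gram features with logistic regression purely for illustration; the study's actual classifier, features and training data are not reproduced here, and the toy examples are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data: a real detector is trained on a large labelled corpus.
texts = ["great food and lovely staff", "the food food was great great place"]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)
print(detector.predict(["amazing sushi and friendly service"]))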

The study was conducted in collaboration between Aalto University's Secure Systems research group and researchers from Waseda University in Japan. It was presented at the 2018 European Symposium on Research in Computer Security in September.

The work is part of an ongoing project in the Secure Systems research group at Aalto University.

Research articles:
Mika Juuti, Bo Sun, Tatsuya Mori, N. Asokan:
Stay On-Topic: Generating Context-specific Fake Restaurant Reviews. European Symposium on Research in Computer Security (ESORICS), 2018.

More information:
Mika Juuti, Doctoral Candidate
Aalto University
Secure Systems group
mika.juuti@aalto.fi
tel. +358 50 560 7944

N. Asokan, Professor
Aalto University
Secure Systems group
n.asokan@aalto.fi
tel. +358 50 483 6465

