Cornell's Fake Review Detector Is A+++ Would Use Again


My friends and I gave Cornell's fake-review-tracking algorithm a spin, and I have to say, we absolutely loved it! Fake and inflated reviews can be such a pain, so it's really great that an awesome university like Cornell is figuring them out!


It's soooo frustrating when you look for user reviews for a product or restaurant and they turn out to be bought-and-paid-for, largely misleading fluff. But they can also be hard to distinguish from real reviews. Researchers at Cornell—who did such a great job!—hired 400 freelance writers to mock up some super lame fake reviews and mixed them with (what they assumed were) real reviews. Humans totally couldn't tell the difference! That's why they came up with their totally awesome algorithm, which works great!

Fake reviews tend to be a narrative about an experience, but don't offer much detail. They also use "I" and "me" a lot, to hammer home the first-person account, which totally makes sense to me, and I don't know why I didn't think of it myself! The algorithm works about 90 percent of the time—WOW!—and even though a lot of its variables are logically intuitive, it will be totally interesting to see if review platforms start implementing some form of this amazing technology. I hope so, because I'd totally use it again! [NY Times]
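Just to show off how intuitive those cues are, here's a totally unofficial toy sketch in Python. To be clear: this is NOT the Cornell researchers' actual trained model — the word lists and thresholds below are made up for illustration, and a real system would learn them from labeled data.

```python
import re

# Hypothetical word lists and threshold, loosely inspired by the two cues
# the article mentions: heavy first-person pronoun use and a lack of
# concrete detail. Not the Cornell algorithm.
FIRST_PERSON = {"i", "me", "my", "myself"}
DETAIL_WORDS = {"table", "menu", "price", "room", "floor", "staff", "bathroom"}

def looks_fake(review: str) -> bool:
    """Flag a review as suspicious if it is self-obsessed but detail-free."""
    words = re.findall(r"[a-z']+", review.lower())
    if not words:
        return False
    fp_rate = sum(w in FIRST_PERSON for w in words) / len(words)
    detail_hits = sum(w in DETAIL_WORDS for w in words)
    # High self-reference combined with zero concrete detail -> flag it.
    return fp_rate > 0.08 and detail_hits == 0

fake = "I loved it! My visit was amazing and I will tell everyone I know!"
real = "The table by the window was nice, but the menu prices were steep."
print(looks_fake(fake), looks_fake(real))  # → True False
```

A real detector would, of course, feed hundreds of such features into a trained classifier rather than eyeball two thresholds — but even this crude version flags the gusher and passes the grump.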


DISCUSSION

WestwoodDenizen

My biggest problem with review sites is that they skew negative, so a few bogus positive reviews aren't that big of a deal to me. People who enjoy their food/service are much less inclined to write a review than people who feel slighted, so the ratings/reviews are always more negative than they should be.

If the people who generated fake reviews were smart, they'd start generating more fake negative ones for the competition. At least then there's some mystery about where the fake review came from.