The app marketplace is built on trust and dependability. Users rely on reviews from other users, as well as their personal experiences, to determine the efficacy and usefulness of apps. Unfortunately, reviews are easy to post, and this can lead to a flood of fake reviews, whether positive or negative. While AI has many potential uses, possibly even in the identification and removal of fake reviews, it also has the potential to flood app marketplaces with fake ratings and reviews.
The Rise Of The Apps
Apps are convenient for users and a great marketing tool for businesses. As such, they have become prominent in many areas, particularly in the gaming industry, with tech-loving gamers being especially open to the use of mobile apps. They have become popular for cryptocurrency, as well, especially for exchanging coins and managing portfolios. And they have gained prominence in iGaming with apps from sites like Coincasino, accessible and powerful enough to give players a wide range of games. In addition to many games, such web and mobile iGaming apps have introduced some game-changing incentives and quick payouts.
But this is only the beginning of the app-led revolution we’re all witnessing.
The Advent Of AI
The advent of AI has helped app developers. Generative AI can be used in the design of the app, as well as the content it contains. Although there has been some blowback from creative communities, there is no denying the fact that the technology can reduce costs and the workload of app developers.
Fake Reviews
Fake reviews are nothing new. Ever since reviews have been available, there has been the potential for developers to game the system by creating fake reviews. There are even services online that sell fake reviews. Developers can buy dozens, hundreds, or even thousands of reviews without the reviewers ever seeing or downloading the app. It is common for the reviewers to receive gift cards in exchange for leaving a positive review.
The practice of compiling and creating fake reviews is illegal in the US and some other countries. In 2022, the UK government also announced that it would make it illegal to pay people to write reviews. Hosting fake reviews was also prohibited, which puts the onus on review platforms like Google Reviews, Trustpilot, and Facebook to ensure the validity of the reviews that are posted.
Even before these specific laws were introduced, government agencies and, in particular, advertising watchdogs went after companies for this kind of behavior. In 2019, a skincare company was forced to apologize and promise it wouldn't ask staff to post fake reviews of its products. At the time, two FTC commissioners said they were disappointed that the ruling didn't go further, arguing the company should have faced a financial penalty for its actions.
New Laws Introduced
In 2024, in light of the rise of AI and its potential use in the creation of fake reviews, the United States Federal Trade Commission introduced a rule prohibiting the posting of fake and AI-generated reviews. The rule also bans fake celebrity testimonials.
Fake celebrity testimonials are another blight on the business landscape and another area where AI has made malpractice much easier. This has proven especially common in the cryptocurrency industry. Celebrities have been deepfaked, with their likenesses used to shill coins or promote certain meme coins. Some celebrities have tried to take action, but it can be very difficult to track down the perpetrators of these crimes.
Spotting The Fakes
Review sites and users have had to rely on their own ability to spot fake reviews. Fake reviewers are often given lines and phrases to use in their reviews, and many of the platforms that sell this service hand out the same keywords and examples, which can make the fakes quite easy to spot. However, artificial intelligence can study existing, genuine reviews and use them to create more believable posts.
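The keyword-reuse pattern described above can be sketched as a naive detector: if the same three-word phrases keep turning up across supposedly independent reviews, the template is showing. This is purely an illustrative sketch, not any platform's actual system; the function names and thresholds are hypothetical.

```python
from collections import Counter

def ngrams(text, n=3):
    """Lowercase word trigrams of a review."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_copied_phrasing(reviews, threshold=2):
    """Flag reviews whose trigrams recur across many other reviews.

    Paid review farms often hand the same template phrases to many
    reviewers, so repeated trigrams across supposedly independent
    reviews are a cheap signal. Thresholds here are illustrative.
    """
    grams = [ngrams(r) for r in reviews]
    counts = Counter()
    for g in grams:
        counts.update(g)
    flagged = []
    for i, g in enumerate(grams):
        shared = sum(1 for t in g if counts[t] > threshold)
        if g and shared / len(g) > 0.5:
            flagged.append(i)
    return flagged
```

A real system would normalize punctuation and weigh rare phrases more heavily than common ones, but even this crude version catches copy-pasted review templates.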
For now, AI-generated content leaves a lot to be desired. AI content checkers can help determine whether text was written by a real person or generated by artificial intelligence software. But as bots and AI models are trained on more and more content, they will get better at bypassing these checks and producing genuine-sounding reviews.
One possible solution to AI-produced fake reviews is to only allow reviews from genuine buyers, but this can be difficult to enforce. While the technique works on e-commerce sites, where the site owner can verify whether an individual has made a purchase, it is less useful on independent and third-party sites. How can the likes of Google and Facebook determine whether a reviewer has bought a product or paid for a service?
Why Apps Are Subject To Fake Reviews
Apps can be big money-makers for businesses. Whether an app costs money up front like some premium services, relies on in-app purchases (now especially popular in gaming apps), or is used to promote other products and services, apps are relatively inexpensive to build and can garner a lot of interest. The fact that the majority of people carry smartphones capable of running these apps also makes them highly accessible.
Potential Dangers
At best, AI-written reviews might lead us to download and install apps that don't meet our expectations, resulting in a poor user experience. But these reviews can be far more harmful. We install apps on cell phones and other devices that typically contain our most personal information, and if we trust an app, we will often grant it access to files, folders, and other data on our phones. Duped users can end up installing malware-riddled apps, or apps that pass personal information to unauthorized third parties. Whether it's a gaming profile or an app account, it's vital to stay safe from online risks.
Some review sites and e-commerce retailers are also known to be working with AI companies on fake-review detection. According to experts, fake reviews tend to be more detailed and to include very specific wording, often lifted directly from a product or app's documentation.
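That "lifted wording" signal can be approximated with a simple overlap score against the app's own listing text. A minimal sketch, assuming the documentation is available as plain text; the function name, the 4-gram size, and any cutoff applied to the score are illustrative choices, not an expert-endorsed method.

```python
def doc_overlap(review, documentation, n=4):
    """Fraction of the review's word 4-grams that appear verbatim
    in the product's documentation or marketing copy.

    Fake reviews often lift very specific wording straight from an
    app's listing; a high overlap score is one signal worth checking,
    not proof of fakery on its own.
    """
    def grams(text):
        w = text.lower().split()
        return {" ".join(w[i:i + n]) for i in range(len(w) - n + 1)}
    r, d = grams(review), grams(documentation)
    return len(r & d) / len(r) if r else 0.0
```

A genuine reviewer describing their own experience rarely reproduces four-word runs of marketing copy, so scores near 1.0 deserve a closer look while scores near 0.0 say little either way.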
AI Doesn’t Necessarily Mean Fake
It is also worth noting, of course, that just because a review is AI-generated doesn't necessarily mean it's fake. Many users are experimenting with the likes of ChatGPT and other AI tools to quickly draft their reviews. This means the review itself could be genuine and based on information directly from the user, even though the wording and layout were created by an AI tool. This makes it even more difficult for sites and third parties to check the validity of these reviews.
Conclusion
AI has the potential to change many aspects of business and personal life. It can help in the creation and marketing of apps and websites, but it can also be used unscrupulously to fool potential buyers and users into parting with their money. At the very least, this can lead to users installing poor-quality apps. At worst, it can lead to the installation of potentially damaging apps that steal information, serve unwanted advertising, and more.