Artificial intelligence technologies and their public reception seem these days to be moving in opposite directions. At IBM Think in San Francisco last week, I was able to see first-hand the large number of practical applications for AI that are helping business and industry to save human labor, make smarter decisions, and optimize complex workflows in a way that points toward exciting possibilities for the very near future. At the same time, stories of deepfake videos and all-too-convincing social media simulacra hint at a much darker picture, marked by ever-increasing suspicion that everything we read and see might have been created to fool or manipulate us.
The Problem of Local Fakery
Both of these polar opposites can be seen playing themselves out in the world of local search. On the one hand, those who hold the more dystopian view that everything is potentially fake can point to the problem of review authenticity on Google and other local channels. Bloggers like Mike Blumenthal and Joy Hawkins have been writing for years about the fact that it’s all too easy for fake reviews to get published and remain in publication, to the benefit of those who would use such black hat tactics to promote themselves or denigrate the competition. Recently, Blumenthal suggested that Google deliberately looks the other way and does little to address the fake review problem, in part due to indirect economic benefits like ad revenue from companies who brazenly sell fake reviews online.
Though most fake reviews today are probably written by humans, AI could be used maliciously, and perhaps already is, to make fakes much harder to detect. In this light, it's worth noting the news breaking this week that research firm OpenAI has decided to withhold its latest text-generation software from the public due to concerns that it mimics human writing too convincingly.
Certainly, Google has acted quickly to clamp down on spammers whose exploits cross certain lines, such as last November, when millions of four-star ratings from the same few accounts appeared on listings throughout the United States and were taken down by Google within a couple of days. This was, however, a particularly egregious example, possibly designed to test the limits of Google's fraud-detection systems by deliberately triggering them. More run-of-the-mill fake reviews, by contrast, can often remain in publication until the affected business owner, or a local search consultant acting on their behalf, reports the problem to Google support with appropriate documentation. That these spam-removal decisions must often be made case by case by human agents suggests that the scale of the solution may be incommensurate with the scale of the problem.
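To make the contrast concrete, the November incident is exactly the kind of pattern that even simple automated rules can catch. The sketch below is a toy illustration, not Google's actual detection logic, and its thresholds are invented for the example: it flags accounts that post an implausible number of ratings spread across many unrelated listings.

```python
from collections import Counter

def flag_bulk_raters(reviews, max_per_account=5):
    """Flag accounts posting suspiciously many ratings across many
    listings. Thresholds and field names are illustrative only."""
    counts = Counter()
    listings_per_account = {}
    for r in reviews:  # each review: {"account": ..., "listing": ..., "rating": ...}
        counts[r["account"]] += 1
        listings_per_account.setdefault(r["account"], set()).add(r["listing"])
    flagged = []
    for account, count in counts.items():
        # Many ratings fanned out across many unrelated listings is the
        # bulk-spam signature; a single loyal regular would not trip this.
        if count > max_per_account and len(listings_per_account[account]) > max_per_account:
            flagged.append(account)
    return sorted(flagged)
```

Run-of-the-mill fakes are harder precisely because they do not exhibit this kind of obvious statistical signature, which is where more sophisticated AI would have to come in.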
With Great Power Comes Great Responsibility
What we don’t know, of course, is how many fake reviews Google already removes through automated processes. The common perception in our industry is that whatever they’re doing, they aren’t doing enough of it. Still, AI potentially has a great deal of power to combat fake reviews and spam. And Google is already using AI in search, with RankBrain’s neural network filling in for the standard algorithm to deliver relevant results for queries Google hasn’t seen before. RankBrain already powers 15% of Google results; as Google’s confidence in its capabilities increases, the role of AI is likely to expand.
As Cade Metz of Wired explained back in 2016, Google is learning to trust the results of AI and machine learning, even as doing so moves the company away from its reliance on comparatively transparent ranking decisions made by its algorithm. The algorithm contains complex rules, but it can be understood and tweaked by human engineers. Neural networks, by contrast, make decisions whose rationale is more difficult to examine, even if they turn out to be more accurate and reliable.
By extension, it’s easy to imagine that Google’s local technology, with its conspicuous reliance on human intervention, has yet to make the same transition from more traditional business rules to AI-powered decision making. If that transition does occur at some point in the near future, a more effective mechanism for combating fake reviews could well be on the horizon.
AI and Scaling Up Reputation Management
Reading the Google tea leaves is generally a dubious activity. But the potential applications of AI to local search are much broader than this one example. Consider, for example, the challenge major national brands face when trying to engage with consumers who leave reviews on local channels. For the typical small business, the same task is comparatively straightforward: check your reviews on major sites on a regular basis, and respond proactively and professionally to as many of them as you can.
For brands, the volume and complexity of that task are significantly greater, given the number of locations typically in play. For some brands, popularity itself compounds the challenge; one well-loved restaurant we work with has thousands of reviews and consumer-generated photos on every major site for every restaurant location, with hundreds more generated each week.
We can say from a great deal of experience that fake reviews, while they do exist, are in the minority and do not represent the most critical challenge brands face. Much more significant is the sheer effort of processing the huge volume of feedback generated by actual consumers, and engaging with each of those consumers in a personal and relevant manner. As Brandify has shown with its Smart Review Response tool, AI and machine learning can provide highly effective assistance to human users who need to be able to create personalized responses at scale.
We have found that machine-generated suggestions and personalization features can greatly reduce the legwork involved in responding to hundreds or thousands of reviews, allowing human agents to focus more of their energy on special cases that require a greater degree of attention and creativity.
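The division of labor described above can be sketched in a few lines. This is a deliberately simplified illustration, not Brandify's actual system: routine positive reviews get a personalized draft a human can approve, while low ratings and hot-button language are routed to a human agent. The field names, keyword list, and template are all hypothetical.

```python
def suggest_response(review):
    """Toy triage for review responses: auto-draft the routine cases,
    escalate the sensitive ones. All heuristics here are illustrative."""
    text = review["text"].lower()
    name = review.get("author", "there")
    # Hypothetical hot-button terms that warrant human attention.
    sensitive = any(w in text for w in ("rude", "terrible", "refund", "sick"))
    if sensitive or review["rating"] <= 2:
        # Special cases go to a human agent rather than an auto-draft.
        return {"route": "human", "draft": None}
    draft = (f"Hi {name}, thanks so much for the {review['rating']}-star "
             f"review! We hope to see you again soon.")
    return {"route": "auto", "draft": draft}
```

Even a crude filter like this shows why the approach scales: the bulk of a popular brand's reviews are short and positive, so human energy is reserved for the minority of cases that need judgment.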
Learning from Your Customers with AI
Equally important is the potential of machine learning to uncover hidden trends in review data. According to research from Gartner, 80% of the world’s data is unstructured, and reviews are certainly a primary source of unstructured data in the local arena. They contain invaluable evidence of consumer sentiment around products, services, staff, amenities, store layout, menu items, wait times, and a huge array of related topics, buried in natural language that is not easily susceptible to data-driven analysis.
Here, AI tools such as IBM Watson’s Natural Language Understanding and Natural Language Classification can be used to mine review data and uncover topic and sentiment trends that businesses can use to inform operational changes, marketing campaigns, staff training improvements, and even new product ideas.
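To show the shape of what such tools produce, here is a minimal keyword-and-lexicon sketch of topic and sentiment classification. A production system like Watson NLU uses trained models rather than word lists, and every topic and lexicon entry below is an invented example, but the output structure (topics plus a sentiment label per review) is the kind of structured signal that operational analysis depends on.

```python
import re

# Hypothetical topic keywords and sentiment lexicon for a restaurant brand;
# a real deployment would rely on trained classifiers, not hand-built lists.
TOPICS = {
    "staff": {"server", "waiter", "staff", "manager"},
    "wait time": {"wait", "slow", "queue", "line"},
    "menu": {"menu", "dish", "burger", "dessert"},
}
POSITIVE = {"great", "friendly", "delicious", "fast"}
NEGATIVE = {"slow", "rude", "cold", "terrible"}

def classify_review(text):
    """Return the topics a review mentions and a crude sentiment label."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    topics = [t for t, keywords in TOPICS.items() if words & keywords]
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"topics": topics, "sentiment": sentiment}
```

Aggregated over thousands of reviews, even this coarse structure turns free-text feedback into trend lines: a spike of negative "wait time" mentions at certain locations, for instance, is an operational finding no star-rating average would surface.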
At IBM Think, Reid Francis, principal product manager for Watson's Natural Language Classification tool, discussed the example of an airline that used Watson's capabilities to classify reviews by topic and sentiment, greatly enhancing its understanding of the concerns travelers shared most frequently. Many other companies, in industries from healthcare to cybersecurity to auto financing, have employed AI technologies to analyze customer feedback and to improve related customer service processes, such as routing customer input that requires escalation to the right team by topic.
So although the unregulated growth of AI may be cause for concern, and should be the subject of vigorous public debate, we can take reassurance from its potential to save human labor and improve business outcomes, in local search and beyond, a potential that seems to have no foreseeable limit.