Published on November 4, 2025
The most basic uncertainty and fear surrounding artificial intelligence concerns the jobs that generative AI might replace. PR practitioners, however, are finding that people must stay involved in GenAI research. Search engine/AI hybrids can deliver more trustworthy results, but they cannot replace the human element.
Search-augmented AI tools such as Perplexity and Google AI Overviews crawl the internet frequently, which makes their results more current and their PR ideas more pitch-worthy. But the real strength of search-supplemented AI is the trail it leaves for researchers to follow.
Search-oriented AI labels its sources with text, not an inconspicuous chain icon. These descriptions give market researchers a better idea of how fresh and thorough the underlying content is and where errors in the summary might lie. The links are only a start, though. Users should read the descriptions and click through to dive into the source material.
Computer brains are by no means know-it-alls. AI has a limited knowledge base. Its language processors mostly interpret user questions and summarize search results, without much analysis. Still, AI is an incredible resource for finding things, even when incredible turns out to mean “not at all credible.”
Ex-journalists on the Purpose Brand team were schooled in the Chicago newsroom adage, “If your mother says she loves you, check it out.” Listening to eyewitness stories and publicity pitches has fine-tuned their BS detectors, which have proved equally useful for AI fabrications.
AI can turn a firehose of information into a powerful stream, but the user still has to grab hold and direct the flow. With anything short of common knowledge, marketers should prompt a chatbot to give citations and check them thoroughly.
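One mechanical first step in checking citations is simply pulling them out of a chatbot's answer so each source can be opened and read. A minimal sketch in Python, assuming the citations appear as inline URLs (the answer text here is invented for illustration):

```python
import re

# Hypothetical chatbot answer containing inline citation URLs.
answer = (
    "Brand mentions rose last quarter "
    "(https://example.com/report, https://example.org/survey)."
)

def extract_citations(text):
    """Pull URLs out of a chatbot response for manual review."""
    return re.findall(r"https?://[^\s)\],]+", text)

links = extract_citations(answer)
print(links)
```

Each extracted link still has to be read, not just confirmed to exist; the list is only the researcher's starting checklist.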
GenAI output is not a finished product but a mix of pure fact, murky assertion and the occasional bit of creative writing. The PR fact-checker's job is to filter out the toxic bits, using the kinds of checks researchers already perform.
It won’t take long to get a sense of how far to trust either the AI or the source material, but frankly, either can trip up researchers. Internet sources could be firsthand reports or several steps removed from boots-on-the-ground witnesses. AI is more likely than a human reader to misread context, but either can misinterpret the facts or take them at face value.
Statistics can mislead. A tragic example is the claim that opioid patients do not develop drug dependency. Oxycodone advertising cited a general comment on addiction treatment as if it were a peer-reviewed, published finding about the drug. But even clinical trials can be overgeneralized. Early studies don’t always pan out, as when chloroquine failed to meet expectations as a COVID-19 treatment. It’s a recurring issue when clinicians try to apply the results from a small study to larger groups, or from a general population to a minority one.
In disciplines from self-help to high-tech, fact-checkers see a range of ballpark guesses, projections and test runs repeated over time as proven facts. Search engines often give these statements great weight, and chatbots accept them as conventional wisdom. PR practitioners and their clients should not amplify these errors. Nothing is, as newsroom sarcasm puts it, “too good to check.”
It may not seem reassuring that GenAI will require researchers to dive down rabbit holes. Media relations professionals might wonder if the process saves them any time. If it’s any consolation, they’ll emerge from their deep dive better informed about the topic and more persuasive about its nuances.
One attraction of generative AI is that it can sift through and summarize high-volume information sources such as media mentions or survey responses. Even here, though, GenAI has limited capacity for data analysis. Chatbots can misread sentiment, especially when the underlying model was trained on formal writing rather than slangy or sarcastic customer feedback. And when numbers are involved, an AI language model is not designed to “do the math” or even compare proportions reliably.
Still, AI summaries can point in the right direction to monitor brand reputation, improve call-center response or identify a consumer need. Marketers can gear their data governance practices toward refining brand positioning and monitoring media mentions, social media posts and product reviews.
Automating data-intensive or rote tasks can free up time for strategic insight and creative practice. Chatbots help marketers pursue new audiences and be more effective in engaging them and improving lives. By casting a wider information net, informed communicators catch the details that matter to their audiences.