Google Gemini, the tech giant’s new AI chatbot meant to rival ChatGPT, invented several fake reviews, attributing them to real people, to discredit my 2020 book on political biases at Google and other big tech companies.
On Sunday, amid a sharp backlash against Google over its AI program’s apparent political biases, I asked Gemini to explain what my book was about. My book, “The Manipulators: Facebook, Google, Twitter, and Big Tech’s War on Conservatives,” was a multi-year project on Big Tech’s political biases that drew on inside sources, leaked documents and more.
I was curious to see if Google’s AI program could be trusted to accurately describe an investigative book about Google, but I wasn’t prepared for just how misleading it would be.
Gemini’s description of my book included this sentence: “The book has been criticized for lacking concrete evidence and relying on anecdotal information.”
Unsure what it was referencing (I thought I would have remembered such criticisms of my own book), I followed up: “Who has criticized the book for lacking concrete evidence, as you say?” What came next floored me.
Gemini spat out summaries of four negative reviews, ostensibly from the Washington Free Beacon, The New York Times, The New York Times Book Review and Wired, including quotes from each:
- Book review by Matthew Continetti in The Washington Free Beacon: This review criticizes the book for relying on “anecdotal evidence” and “cherry-picking examples” to support its claims.
- Book review by Emily Bazelon in The New York Times Book Review: This review notes that the book “lacks a deep understanding of how these companies work” and relies on “unproven accusations.”