
In "The Reality Game," Samuel Woolley exposes how AI, deepfakes, and computational propaganda threaten democracy. Called "mind-blowing" by Jane McGonigal, this 2020 wake-up call asks: What happens when we can't distinguish truth from fiction, and who is already manipulating your reality?
Samuel Woolley, the Dietrich Endowed Chair in Disinformation Studies at the University of Pittsburgh and author of The Reality Game: How the Next Wave of Technology Will Break the Truth, is a leading expert on AI-driven disinformation and digital propaganda. A researcher at the intersection of technology and politics, he founded the Propaganda Research Lab and the Digital Intelligence Lab, where his work on deepfakes, bots, and virtual reality reveals how emerging tools manipulate public opinion. His insights draw from global case studies funded by the National Science Foundation and Knight Foundation.
Woolley co-authored Bots and co-edited Computational Propaganda, seminal works analyzing automated influence campaigns. His award-winning Manufacturing Consensus explores modern propaganda tactics, while The Reality Game—praised in The New York Times and Wired—has become a critical resource for understanding tech’s role in misinformation.
Regularly featured on BBC News and PBS Frontline, Woolley bridges academic rigor with public engagement, offering strategic solutions to safeguard democracy in the digital age.
The Reality Game examines how advanced technologies like AI-generated deepfakes, social media bots, and computational propaganda distort truth and threaten democracy. Samuel Woolley explores the rise of disinformation campaigns, the profit-driven spread of fake news, and speculative risks like virtualized human impersonation, while advocating for digital literacy and ethical tech reforms to safeguard reality.
This book is essential for policymakers, tech professionals, and media consumers concerned about misinformation. It offers insights for educators teaching digital literacy, activists combating online manipulation, and anyone seeking to understand how AI and social media erode trust in institutions.
Yes. Woolley balances alarming examples of tech-driven disinformation with actionable solutions, making it a timely, nuanced guide. Unlike purely theoretical works, it provides concrete policy recommendations and emphasizes collective responsibility to counter computational propaganda.
Computational propaganda refers to the systematic use of algorithms, bots, and AI to manipulate public opinion. Examples include viral conspiracy theories, AI-edited "deepfake" videos, and automated social media accounts designed to sway elections or destabilize democracies.
AI tools enable hyper-realistic forged content, such as voice clones and manipulated videos, which circulate faster than fact-checking can intervene. Woolley highlights how these technologies empower bad actors to exploit cognitive biases and polarize societies.
Key solutions include digital literacy education, ethical technology reform, stricter regulation of bots, and greater algorithmic accountability and transparency from platforms.
Woolley focuses on emerging threats (e.g., virtual reality, voice cloning) rather than rehashing familiar social media critiques. He also emphasizes proactive systemic reforms over individual blame, distinguishing it from works like Network Propaganda or This Is Not Propaganda.
Some argue Woolley’s speculative scenarios (e.g., VR-based propaganda) lack immediate applicability. Others note the book prioritizes Western democracies, offering fewer insights into global authoritarian contexts.
He critiques platforms for prioritizing profit over safety, urging stricter bot regulation and algorithmic accountability. Case studies show how lax policies enabled foreign interference in elections.
Yes. The book equips readers with frameworks to recognize manipulative tactics, advocate for policy changes, and pressure tech firms to adopt transparent practices. It’s a practical toolkit for fostering skepticism without cynicism.
Extremely relevant, as AI-generated disinformation tools have grown more sophisticated. Woolley’s warnings about VR manipulation and voice cloning remain prescient, making the book a critical resource for navigating evolving digital threats.
Experience the book in the author's voice
Turn knowledge into engaging, example-rich insights
Capture key ideas quickly for rapid learning
Enjoy books in a fun, engaging way
The ultimate goal isn't just to influence votes but to confuse, polarize, and disenchant the public entirely.
A like is a like, whether it comes from a real person or a bot.
The people wanted to hear this.
The internet, once envisioned as utopian, has been co-opted by powerful entities seeking control.
Truth itself becomes contested territory.
Break down the core ideas of The Reality Game into easy-to-understand points to see how emerging technologies distort truth and threaten democracy.
Condense The Reality Game into quick memory cues that highlight its core principles of digital literacy, healthy skepticism, and democratic resilience.

Experience The Reality Game through vivid storytelling that turns its lessons into memorable, applicable moments.
Ask anything, choose a voice, and co-create insights that truly resonate with you.

Made in San Francisco by Columbia University alumni
"Instead of endless scrolling, I just hit play on BeFreed. It saves me so much time."
"I never knew where to start with nonfiction—BeFreed’s book lists turned into podcasts gave me a clear path."
"Perfect balance between learning and entertainment. Finished ‘Thinking, Fast and Slow’ on my commute this week."
"Crazy how much I learned while walking the dog. BeFreed = small habits → big gains."
"Reading used to feel like a chore. Now it’s just part of my lifestyle."
"Feels effortless compared to reading. I’ve finished 6 books this month already."
"BeFreed turned my guilty doomscrolling into something that feels productive and inspiring."
"BeFreed turned my commute into learning time. 20-min podcasts are perfect for finishing books I never had time for."
"BeFreed replaced my podcast queue. Imagine Spotify for books — that’s it. 🙌"
"It is great for me to learn something from the book without reading it."
"The themed book list podcasts help me connect ideas across authors—like a guided audio journey."
"Makes me feel smarter every time before going to work"

Get The Reality Game summary as a free PDF or EPUB. Print it or read offline anytime.
Imagine waking up to discover that your reality has been hacked. In 2018, Philippines President Rodrigo Duterte lashed out at Oxford University, calling it "a school for stupid people." Why? Because researchers had exposed his $200,000 expenditure on a social media manipulation army. This isn't science fiction; it's our new normal, where powerful figures deploy sophisticated digital tools to distort truth and silence critics. The line between real and fake has never been more blurred, and the consequences for democracy have never been more severe.

What makes this digital manipulation particularly dangerous is how it exploits the very features social media platforms were designed to offer. These systems weren't built to distinguish between authentic and inauthentic engagement: a like is a like, whether it comes from a real person or a bot. Platform algorithms prioritize engagement metrics, inadvertently amplifying manipulated content. This fundamental vulnerability has allowed bad actors to game these systems with relative ease, creating artificial virality for false narratives that can influence elections, undermine social movements, and fragment our shared reality.

The tactics have democratized over time. While powerful political groups run the most pervasive campaigns, even ordinary users can now pay for bot amplification, some costing as little as $50 for 10,000 fake engagements. The ultimate goal isn't just to influence votes but to confuse, polarize, and disenchant the public entirely.