Filter Bubble 101
The "filter bubble" is the idea that algorithms (used by social media companies, news organizations, political campaigns, and advertisers) make decisions about the content we see based on:
(a) what we've clicked, bought, watched, or otherwise interacted with;
(b) our demographic, geographic, consumer, and/or political data (which is even creepier, and could include information about our race, class, religion, gender, sexuality, and more).
Based on that demographic data and device use, algorithms will selectively show us pieces of information we are already likely to care about, as well as content by creators and outlets whose viewpoints most likely reflect our pre-existing biases.
(Which means, online and off, we are often surrounded only by people and attitudes that mirror our own.) This gives people a skewed view of reality and heightens the tension between those with opposing views. (Which is a polite way of saying: it drives families, friends, and citizens apart. People are no longer “disagreeing.” They're living in separate realities.)
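The personalization loop described above can be sketched as a toy ranking function. Everything here (the field names, the weights, the notion of a single "leaning" score) is an illustrative assumption for teaching purposes, not any platform's actual code:

```python
# Toy sketch of engagement-based feed ranking (illustrative assumption only;
# real platform ranking systems are far more complex and proprietary).

def score_post(post, user):
    """Rank a post higher the more it resembles content the user has
    already engaged with, and the closer the creator's viewpoint sits
    to the user's inferred leaning."""
    score = 0.0
    # (a) reward topics the user has previously clicked, bought, or watched
    for topic in post["topics"]:
        score += user["topic_engagement"].get(topic, 0.0)
    # (b) reward viewpoint similarity (leanings on a 0.0-1.0 scale;
    # identical leanings add the full 1.0 bonus)
    score += 1.0 - abs(post["creator_leaning"] - user["inferred_leaning"])
    return score

def build_feed(posts, user, k=3):
    """Show only the top-k posts. Content outside the user's bubble
    simply never surfaces, even though nothing explicitly blocked it."""
    return sorted(posts, key=lambda p: score_post(p, user), reverse=True)[:k]
```

The key point of the sketch: the filtering is a side effect of optimizing for predicted engagement. Nothing in `build_feed` "censors" opposing viewpoints; they just lose the ranking contest and fall below the cutoff, which is why the bubble is invisible to the person inside it.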
We all know that this is unsustainable and dangerous, which is why the IDH is trying, basically, to save America from this post-digital information nightmare.
The Filter Bubble is Incredibly Dangerous in Three Ways
The filter bubble has facilitated the beginning of what many call a “post-fact society,” in which there is no longer an agreed-upon set of facts, and attempts by people to fact-check each other frequently result in confusion and conflict. This dangerous situation is aggravated by the fact that we lack not only an agreed-upon set of facts but also an agreed-upon method or institution responsible for determining what the facts are, making appeals to traditional expertise (scientific, academic, and medical) ring hollow for those who have largely made up their minds. This has led to three existential crises for America:
(a) Misinformation and the “End” of Journalism:
(Which is also why “objective journalism” can no longer fix the filter bubble: there is no longer any agreed-upon reality to objectively report on.) This also means that misinformation can spread rapidly across social media.
(b) The Rise of Disinformation:
As bad as “misinformation” (falsehood spread by mistake) is, disinformation (falsehood spread deliberately) is even worse.
(c) The Collapse of Civil Society:
At a time when many people already distrust each other due to living in separate information worlds, attempts by one group or another to unilaterally determine what is true or false (as in corporate content moderation) tend to intensify polarization, which in some cases can lead to violence.
While social media giants themselves have attempted new campaigns of content moderation to curb this problem, such efforts inherently fail for four reasons:
(a) social media companies have a vested interest in continuing to capture (and monetize) our attention, so they have no real incentive to solve the problem;
(b) social media companies are computer wonks, not experts in journalism, communication, history, or the First Amendment;
(c) the response of Big Tech is usually to “de-platform” or “censor” speech, which raises immediate Constitutional and democratic issues (and always fuels partisan bitterness);
(d) these “platform by platform” band-aid fixes do nothing to prevent people from finding new avenues for the spread of misinformation, and in many cases such moderation attempts reinforce people’s assumption that “the media” is at war with “the truth.”