In the age of the internet and social media, accessing information has become easier than ever before. Millions of articles and web pages are only a few clicks or keywords away; blogs and digital news outlets rest right at our fingertips. With such a staggering number of results for every search query, not becoming overwhelmed appears nearly impossible. But in reality, the number of results would be even greater, were it not for algorithmic personalization of content.
The term “filter bubble” was first used by Eli Pariser, and most people have heard it at some point in their lives. It is also often referred to as an “information bubble” or “echo chamber”; however, the meaning in people’s minds remains the same: a phenomenon usually limited to the online space that isolates a person inside a metaphorical bubble. This bubble consists of information fed to the individual by personalization algorithms and usually traps them in a space filled with only certain kinds of information. The first impulse to the algorithm originates from what the person searches for, but gradually, the recommended content becomes tailored to their recognized interests.
The filter bubble is largely connected to social media networks. The younger generations often find their information and news on social media, but it is not the only space where filter bubbles may be created. In the age of digitalization, many mainstream news outlets have moved to the online space, and as such, customization of content can create a certain bias in which types of news we will find online. And it is not only news that is filtered through the algorithms; it is everything else, too. It can be the types of things we like to shop for, the type of content we enjoy watching, all the way to the results of a simple Google search.
In some cases, this personalization appears harmless, even useful. We see YouTube videos similar to the ones we watch often and discover new creators in our fields of interest. Spotify recommends music with similar beats or lyrics to our favourite songs. Instagram shows us pictures of the same type we often like.
However, what happens if we only see the things we already believe and like? What if we, for example, never discover a new interest because we are stuck in an endless loop of uniform content? As Eli Pariser (2011) has written:
“In the filter bubble, there’s less room for the chance encounters that bring insight and learning. Creativity is often sparked by the collision of ideas from different disciplines and cultures. Combine an understanding of cooking and physics and you get the nonstick pan and the induction stovetop. But if Amazon thinks I’m interested in cookbooks, it’s not very likely to show me books about metallurgy.”
We could be searching for something we have never been interested in before, but the search engine will still try to appeal to our preferences. The same search query in the same search engine written by two different people can easily yield different results because of personalization. If we decide to stay within the recommendation system, chances are that we will only engage further with certain beliefs and interests, rarely discovering anything new.
While the creativity and diversity of the content we consume are very important for the human mind and well-being, access to news is often considered a more pressing problem. What if we fail to see both sides of a political argument because of our prior beliefs recognized by the algorithm? The news articles that are presented to us when we search for a specific topic can be distorted because the results are aligned with, for example, our perceived political inclinations. Consuming information limited to only certain kinds of sources could lead to taking strong stances on various issues without even considering the opposing side’s point of view. In extreme cases, this could lead to the radicalization of political or religious views.
On the other hand, some researchers argue that the topic of filter bubbles is blown out of proportion and there is not enough support to claim that they have such an intense impact on society (Dahlgren, 2021). Perhaps it is not the customization, but the people themselves. Maybe they are offered diverse information, and it is the human selection bias that causes them to only consume content of a certain type.
The aim of this essay is to explore the phenomenon of the filter bubble. It attempts to provide a comprehensive description of the phenomenon and explains the effects filter bubbles have on people’s access to information and, subsequently, on their minds and ideologies. The essay references multiple scientific papers focused on the topic and discusses the threats filter bubbles pose in different fields. Lastly, the essay explores possible methods of breaking out of the filter bubble and accessing more diverse information.
What is the filter bubble?
The filter bubble is defined as “a situation in which someone only hears or sees news and information that supports what they already believe and like, especially a situation created on the internet as a result of algorithms (=set of rules) that choose the results of someone’s searches” (Cambridge Dictionary, n.d.).
This definition implies that, while filter bubbles most frequently occur in the online space, they may not be strictly limited to it. However, the term itself has been tied to the internet and social media closely enough to be listed in the book A Dictionary of Social Media by Chandler and Munday (2016) as an internet-exclusive phenomenon, and it is also regarded as such by the general public.
The filter bubble is often referred to as an echo chamber, which is defined as “a situation in which people only hear opinions of one type, or opinions that are similar to their own” (Cambridge Dictionary, n.d.).
From these two definitions, it seems plausible that while the filter bubble refers more to the online environment, the echo chamber is a broader term referring to both online and offline spaces. Having said that, filter bubbles can easily lead to being stuck in echo chambers. With only certain types of information being recommended by the search algorithms, people’s opinions and preferences are reinforced without many alternatives. This can lead to them seeking conversations with like-minded people both online and in real life, creating echo chambers of multiple individuals echoing the same opinions back and forth.
In contrast to filter bubbles, which are caused by algorithms whether we like it or not, echo chambers are something people might seek out willingly. The concern with filter bubbles is that they are created without our direct input, and they can trap us in a customized online ecosystem we did not wish to be in. The inner workings and categorization criteria of personalization algorithms are not public knowledge, so the way information is filtered to us remains a mystery. It is, of course, mostly based on our pre-existing preferences; however, there is no way to confirm what else influences the way in which our personal filter bubble is created.
Personalization algorithms
How do filter bubbles even come into existence? It has already been established that they are created by personalization algorithms; however, it is important to understand how and why the content becomes tailored to our likes and beliefs in the first place. After all, computers and smartphones cannot really see what goes on in our minds. Nonetheless, they manage to recognize our preferences with astounding speed and accuracy.
The underlying algorithms of different websites, platforms, and services constantly collect data about us in order to analyse our behaviour. For example, upon entering a website, we are usually presented with their cookie policy. It comes in the form of a pop-up window, usually asking us to allow cookies: the collection of data for personalization and tailoring of content for better user experience.
The so-called “better user experience”, not only on websites but on the internet as a whole, comes at the price of algorithms creating a detailed profile of our habits and preferences. Click-throughs, browsing history, past searches, purchases, and the same information about people with profiles similar to ours are all compiled into an idea of who we are, and with this image in mind, the recommendation system attempts to show us exactly what we want to see (Pariser, 2011). With every new search, like, or comment, the algorithm adjusts itself to better predict what we would enjoy seeing next.
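How exactly commercial systems weigh these signals is not public, but the basic loop can be illustrated with a deliberately simplified sketch. In the hypothetical Python example below, the class, topic labels, and weights are all invented for illustration; it merely shows how repeated interactions accumulate into a profile that then reorders results, so that the same set of candidates can be ranked differently for two different people.

```python
# A toy, hypothetical sketch of profile-based ranking. Real recommendation
# systems are proprietary and far more complex; every name and weight here
# is invented purely for illustration.
from collections import defaultdict


class ToyRecommender:
    def __init__(self):
        # The "profile": how strongly the user seems to care about each topic.
        self.interest = defaultdict(float)

    def record_interaction(self, topics, weight=1.0):
        """Update the profile after a click, like, search, or purchase."""
        for topic in topics:
            self.interest[topic] += weight

    def score(self, item_topics):
        """Score an item by how well its topics match the profile."""
        return sum(self.interest[topic] for topic in item_topics)

    def rank(self, items):
        """Order candidate results so the best profile match comes first."""
        return sorted(items, key=lambda item: self.score(item["topics"]), reverse=True)


recommender = ToyRecommender()
recommender.record_interaction(["cooking", "baking"])    # past searches
recommender.record_interaction(["cooking"], weight=2.0)  # a purchase counts more

results = [
    {"title": "Cast-iron skillet guide", "topics": ["cooking"]},
    {"title": "Introduction to metallurgy", "topics": ["materials science"]},
]
# The cooking-related result is ranked first for this profile; a different
# profile could see the opposite ordering for the same query.
print([item["title"] for item in recommender.rank(results)])
```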
Many algorithms are capable of not only monitoring what we read, but also how much time we spend on which part. More information can be provided to those who show an interest in a certain topic or field; even our viewpoint on the issue can be taken into account, creating extensively detailed criteria for the subsequent recommendations (Custers, 2020).
With more information on our preferred topics, there is more of a reason to dive deeper into what we already know and enjoy. Sources with the same type of messaging cement us in our opinions, and challenging viewpoints fall by the wayside. Because we interact with them less when they become few and far between, their numbers slowly dwindle into nothing. At least, that would be the reality if the algorithms referred exclusively to our personal profiles. However, the algorithmic selection process is more complex than that.
It is necessary to realize that while personalization algorithms create filter bubbles, not all customization of content directly leads to being trapped in them. Tailoring the results to our interests can be useful and, in some cases, even necessary. The internet is flooded with content, and if there were no processes in place to sort through the results and choose the ones we are likely to be looking for, searching for anything at all might become a nearly impossible task. However, the boundary between simple personalization and filter bubbles is blurry at best and largely depends on one’s subjective perception of the topic.
Algorithmic recommendations
It would be easy to assume that the whole recommendation system operates on its perception of us and traps us in an endless loop which lets nothing new inside; however, that is just a mental shortcut. It is important to note that the algorithmic selection of content is not built only on personal profiling and customization. Of course, the content recommended to us often aligns with our preferences, but other factors play a role in the selection process as well.
Internet trends and popular topics may be recommended to us even if they stray from our usual interests. Of course, a part of this can be attributed to personalization, seeing as trendy topics differ across demographics, but it still strays from the idea of extreme customization to each individual. If a topic becomes sensational or viral, it can often reach us through the recommendation system, proving that the filter bubble does not cut us off from the outside world completely.
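How heavily such trends are weighed against personal relevance is not publicly documented; the following minimal Python sketch, with weights and scores invented for the example, only illustrates how a sufficiently viral story could outrank a closer personal match and thus still reach us inside the bubble.

```python
# Hypothetical blending of personal relevance and global popularity.
# The 0.5/0.5 weights and the example scores are invented, not taken
# from any real platform.
def blended_score(personal_relevance, global_popularity,
                  personal_weight=0.5, popularity_weight=0.5):
    return personal_weight * personal_relevance + popularity_weight * global_popularity


niche_hobby_post = blended_score(personal_relevance=0.9, global_popularity=0.1)  # 0.50
viral_news_story = blended_score(personal_relevance=0.2, global_popularity=1.0)  # 0.60

# With enough global traction, the viral story outranks the closer personal
# match, so the filter bubble is not completely sealed off.
print(niche_hobby_post, viral_news_story)
```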
In many works concerning filter bubbles, we encounter a description that makes the bubble appear impenetrable. They sometimes fail to consider that while content personalization definitely is extensive, new information still reaches us, even if it is sometimes limited.
Another way to encounter content we usually would not interact with is advertising. Once again, ads can be, and often are, personalized to an extent, but if a campaign happens to be large enough, it can reach new audiences through algorithmic recommendation, too.
Lastly, certain messages could be suppressed or pushed to the forefront within the recommendation algorithms regardless of personal preferences, the virality of the topic, or the amount of money spent on advertising. This could be caused by pressure from a position of power: the government, big companies, or sponsors of the media in question. It would be difficult to find out about this, seeing as the inner workings of recommendation algorithms are not typically public knowledge. However, this possibility holds a strong connection to the issues concerning ethics and democracy in connection with filter bubbles and recommendation systems, and it will be discussed later in the essay.
It is important to point out that what suffers the most in the algorithmic selection environment are opinions held by only small groups and topics that are rarely interacted with by anyone. If we are not already interested in such a topic or opinion, our chances of learning about it are rather low unless it becomes popularized in some way or significant resources are invested into its online marketing.
To make this essay easier to follow, terms like “recommendation algorithms” will be used to refer to personalization algorithms unless stated otherwise, since this type of algorithm plays a major role in the discussion of filter bubbles.
Filtered content
While they are most often discussed in connection with social media and news outlets, filter bubbles can be formed nearly everywhere in the online space. Websites and search engines collect data about our online behaviour and offer results that are most likely to meet our expectations. However, while personalization occurs nearly everywhere, it is important to realize what filtered content actually looks like.
Social media content
Personal tailoring of content is especially visible on social media. Instagram, for example, often recommends posts and creators that are similar to those we have interacted with in the past. Scrolling through the posts, we may often fail to realize that some of the content we are automatically double-tapping is not only from the creators we follow, but also from very similar ones that we are seeing for the first time. In small print, they relay a message: suggested for you.
This directly points to the fact that a profile of our behaviour on this site exists within the algorithm, and it actively searches for content we are likely to enjoy. The search-and-explore page is flooded with posts and reels Instagram expects us to look for, and it appears to always show the same type of posts it did the last time we checked it.
To many people, this looks like a good thing. If we like art in a certain style, we will soon discover more artists in that field. If we enjoy a certain fashion style, soon enough, we will have tens, even hundreds, of new inspiration pictures. Posts related to our hobbies start showing up, and almost everything on our for-you page piques our interest. Maybe we will even buy something for ourselves, because suddenly, with our interest in these fields, ads have adjusted to our preferences as well. At the same time, the number of posts of the kind we do not engage with as much starts to dwindle.
We should remember that social media platforms do not particularly care about our enjoyment, but about the time we spend interacting with their content. The longer we spend scrolling, the larger the number of ads presented to us in between posts and, consequently, the more money for the platform in question. Of course, this makes complete sense from an economic standpoint. As Eli Pariser (2011) remarks, if something is free, we become the product being sold instead of being the customer.
This simple fact motivates social media platforms to fine-tune their recommendation algorithms to keep us engaged for as long as possible in order to achieve maximum profit. Creating a filter bubble that confirms our beliefs and makes us feel good by showing us our favourite types of content is, therefore, highly beneficial.
News
While the filter bubble does not affect the news we watch on television or read in newspapers, it can definitely influence which digital news articles we are recommended. It may simply be the issues that concern us based on our interests or perceived personal traits like age or economic background, but in some cases, the recommendations also take into account who should present the matter to us. While some news outlets happen to be mostly objective in their coverage, others examine issues from a certain political or other ideological standpoint. If the algorithms classify a person as someone with strong opinions on those topics, they are likely to recommend coverage by a news outlet with similar views.
Advertising
It has already been mentioned that social media platforms try to keep us engaged with their content for as long as possible because of the number of products they can advertise to us. But it is not only social media that is filled with ads: all sorts of websites also have advertising space they can make money from. But showing ads for anything and everything is not exactly an effective strategy if they want us to actually click on them and purchase something. Because of this, advertisements are also often personalized to each individual.
Social media algorithms use the knowledge they have about us from our activity to offer ads that are most likely to spark our interest. Websites can be interconnected with our search engine and make use of our past searches to advertise things we might need.
Societal impact
Filter bubbles have a strong impact on individuals by creating internet ecosystems filled with content tailored to their preferences. Some people are happy to be recommended the things they enjoy; others complain about the algorithms not satisfying their actual needs. However, the whole phenomenon of filter bubbles also impacts society on a much broader scale.
It is a given that our opinions are shaped and altered by the new information we consume, and while that may not matter too much if it concerns hobbies or what we like to shop for, being influenced by filtered information in more important matters has its consequences.
Political polarization
Whenever filter bubbles come up, the topic of opinion polarization seems to be one of the major concerns, most notably in politics. However, many of the sources concerning this matter are written by American authors and researchers. Even Eli Pariser, the father of the term “filter bubble” itself, happens to be an American. This fact may appear insignificant at first: the USA is a large country with many prestigious universities producing many educated people who can write about a great variety of topics. However, in regard to politics, it is necessary to consider the way the writer perceives politics as a whole.
The USA operates on a two-party system, which is not too common in the rest of the world. This means that if a person cares about politics at all, they can often be categorized as right-leaning or left-leaning. Since there are only two sides to choose from, a filter bubble can slowly start reinforcing the pre-existing political leanings. As a person starts consuming more and more of this type of content, the opinions coming from the other side may gradually become suppressed altogether. If they are not suppressed, they are often presented to the viewer through a biased lens, for example, left-leaning ideas explained mockingly by a right-leaning influencer. Different news outlets can package the same messages in extremely contrasting ways, and the headline of an article written by Fox News will look markedly different from the one by CNN.
Yes, polarization is definitely one of the threats posed by the filter bubble environment; however, not all topics are polarizing. If there are only two options to choose from, the filter bubble can strongly influence our perception of said topics and sway us towards more extreme views, but if there is a multitude of choices, this may be less prominent. In countries with countless political parties, liking one does not imply disliking all the others. The classification of left- and right-leaning politics is often applied to those countries as well; however, the parties are spread out all across the spectrum. Usually, multiple parties agree on certain topics while they disagree on others. Consequently, while an individual’s political leanings still play a role in which type of messages they favour, they often like ideas from multiple parties at once. In such cases, political polarization caused by filter bubbles should be less extreme, because people are not necessarily forced to pick a side and they vote for whoever seems the most in tune with their overall beliefs at the moment.
What applies to politics should apply to everything else too. There are topics that are polarizing and others that are not. If there are only two options to choose from, content tailoring can strongly reinforce our stance on the issue. If the answers are only yes or no, we can only pick one and by doing so, we practically tell our personalization algorithm to adjust itself to provide content consistent with our view or perception of the matter.
In the USA, many polarizing topics are also strongly politicized, leading to a clear connection between certain opinions and political leanings. Are we pro-choice or pro-life? What are our views on immigration laws? Which side of a war is in the right? If the answer is “I do not know”, there is a strong chance that we do not concern ourselves with the topic at hand at all. However, picking a side immediately tells the algorithm to assign a stereotype to the person in question and pre-package the rest of the topics in accordance with it. So, when we learn about new topics that are divisive, they will be presented through the lens created by personalization, immediately influencing us to perceive them in a certain way. Great power rests within first impressions, so if we first see something described in a positive light instead of a negative one, it can greatly impact our opinion on the whole issue.
Impact on democracy
As mentioned in the previous chapter, filter bubbles can impact the polarization of political viewpoints, but they can also be considered a hindrance to democracy as a whole. There are multiple concepts of democracy, and they slightly differ in what they focus on; however, they often share the requirements for freedom of choice and freedom in access to information. And so the question arises whether we are really free in those aspects when our search algorithms favour only certain types of information.
If the filter bubble only shows us what we already like and believe, we are unlikely to find diverse viewpoints on countless matters. This can, on one hand, be considered something that polarizes society; nevertheless, it can be seen in a broader sense as a limitation of our freedom of choice (Bozdag & Van Den Hoven, 2015). If the results get filtered out by the algorithm, we never have a chance to encounter them, meaning that we cannot decide for ourselves where we stand on the issue. Of course, we could choose to actively search for more diverse opinions, but even this search can be affected by the customization of results. Therefore, we are practically unable to decide which viewpoints we will identify with, because if we only consume the ones recommended by the system, we are likely to adopt them automatically, since that is all we know about the topic.
Another concern is that filter bubbles stand in the way of informed discussion and mutual understanding about various topics. While discussing the concerns with freedom of choice, a limited access to diverse information has already been mentioned. To reach a conclusion about anything at all, we need to gather information and analyse it. However, if the information is incomplete, the conclusion may appear reasonable to us considering what we know, but it does not take everything into account.
Even our social interactions, especially online, can be affected by filter bubbles. Like-minded people gather in the comments of posts and videos that share their worldview: often ones that have been recommended to them by their personalization algorithm. Because of this, they can end up in an echo chamber where everyone supports the same message and no one challenges their beliefs. With this in mind, it would be safe to assume that filter bubbles, by extension, create a barrier to a diverse dialogue between people with different worldviews (Amrollahi, 2021).
Filter bubbles can get in the way of discovering new perspectives or, sometimes, we may not even be aware that there are more viewpoints on the issue at hand at all. As a result, we may not be aware of disagreements concerning the matter and will only blindly confirm our pre-existing beliefs (Bozdag & Van Den Hoven, 2015). If people are not aware of challenging viewpoints or, even worse, of the existence of an issue at all, they cannot form informed opinions. For example, in the politics of democratic countries, people have the right to contest the government’s decisions, but they cannot do so if they do not know about the imposed rules. The filter bubble throws out information it deems irrelevant; however, that is often not because we avoid said topic but because we never thought to research it in the first place. This way, new information might not make it into our bubble even though it may concern us, and we would have a strong opinion if we were aware of it. As stated by Bozdag and Van den Hoven (2015): “Someone cannot protest if they do not know that things relevant to them are happening.”
While not being aware of certain political decisions can be perceived as ignorance and insufficient research, even people who are interested in political news can be affected by their own filter bubbles. Of course, if someone is interested in politics, their customized search results probably contain a greater share of information, but they might not contain everything. This can, once again, be demonstrated with the example of the USA and its two-party system. Since there are only two options and choosing a side, or at least leaning more towards one, becomes nearly inevitable, the personalization algorithms are likely aware of a person’s political leanings. This can result in them being recommended news articles written by media with the same values. And while many issues are covered by all media, despite the different tone and presentation, some may be completely overlooked by one side. This means that the person in question might consume news regularly and still be unaware of certain topics altogether, either because their recommended news outlets deem them unimportant or even purposefully avoid them because of a bias in opinion.
Extremism and radicalization
With the polarization of opinions comes the danger of falling all the way into extreme worldviews. Having said that, polarized opinions are not all it takes to become an extremist. While a constant confirmation bias in the information received from recommendation algorithms may nudge people towards a more extreme version of their original views, strong stances on issues do not necessarily constitute extremism.
In fact, when people become passionate enough about a topic, it may be the reason why they consciously decide to step out of their filter bubble to search for opposing views and challenge them. It has been suggested that the more confident someone is about their opinion, the more likely they are to search for such information. However, this is often done not to broaden their knowledge, but rather to monitor what they do not like (Dahlgren, 2021).
Consequently, challenges to opposing opinions may often be emotionally charged and made without enough research to reach rational conclusions. Once again, the form in which the opposing side’s point of view is presented plays an important role. If a point made by group A is explained by someone from group B, some information may be left out or presented in a negative light. The ones explaining the opposite side’s arguments are often those with extreme opinions so deeply rooted in their minds that they often cannot be shaken even by extensive evidence.
If a person searches for radical content, personalization algorithms will certainly recommend more of the same type of content in the future. However, the radicalization of one’s beliefs is usually attributed more to self-selection bias, and a decisive link between radicalization and filter bubbles has not been proven (Wolfowicz, Weisburd, & Hasisi, 2023).
Breaking the filter bubble
While filter bubbles affect many aspects of our searching, perhaps the most important thing they filter is news. As discussed earlier in the essay, personalization algorithms create customized ecosystems out of our interests as well. But while that may sometimes stifle new ideas or creativity, it can generally be seen as a fairly positive outcome. Tailored product advertising could be seen as companies preying on our personal data and manipulating us into purchases; however, the effect mostly remains on an individual level. Customized news is the cause of most of the negative societal impacts analysed earlier in the essay, so it is the part of the filter bubble we should be most focused on breaking out of.
The first step to breaking out of the filter bubble is realising that we are inside of one in the first place. This may appear trivial, but if challenging and diverse information is not recommended to us and we only see information confirming our opinions, we can simply assume that our stance on the issue is the only existing one.
Anonymity
Many platforms, including search engines, gather data about our activity and save our browsing history in order to personalize our search results in the future. It is, however, possible to step away from personalization altogether by becoming anonymous online.
Becoming completely anonymous in today’s digital world is nearly impossible, especially if we wish to engage with social media content or if we are required to use certain services by our school or workplace. Nevertheless, some steps can be taken in order to reduce personalization in some cases.
Using a search engine like DuckDuckGo that does not save our browsing history is not only beneficial in terms of online privacy, but it is also a good way to opt out of personalized search results. Another tool that protects our online privacy and, by extension, prevents our results from being personalized is Tor Browser.
Multiple sources
Perhaps the most intuitive solution to the issue of filtered content is consuming news from multiple sources. If we gather news from social media, this would mean seeking out not only more creators and accounts that relay news, but also looking for information on different platforms. If platforms belong to the same company, for example Meta or Google, and our accounts on them are interconnected, it is possible that the personalization algorithms share our data as well. This would mean that we encounter the same filtering on all the platforms, preventing more diverse results. Therefore, it would be advisable to search even beyond social media, for example, on the official websites of news outlets.
Searching for challenging opinions
Even though we usually consume information that confirms our beliefs within the filter bubble and can be misled into thinking that our opinion is the only existing one, we are often aware that opposing opinions do, in fact, exist. Nevertheless, we are seldom motivated to search for them specifically.
Seeing issues from all points of view is important for creating an informed opinion on the matter at hand; therefore, purposefully looking for challenging information can broaden our perspective. In fact, if we search for challenging information, it will be included in the data that personalization algorithms operate on, and they may consider it in their subsequent recommendations. This could partially alter the filter bubble to provide more diverse results in the future.
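As a minimal illustration of this idea, the hypothetical Python sketch below (with invented topic labels and counts) shows how a handful of deliberate interactions with an opposing source reduce the share of the profile held by the dominant source, which could in turn make room for more varied recommendations.

```python
# Hypothetical sketch: deliberately engaging with challenging sources adds
# new signals to the interaction history that personalization works from.
# All labels and counts are invented for illustration.
from collections import Counter

profile = Counter({"outlet_a_politics": 12, "outlet_a_culture": 5})


def dominant_share(p):
    """Fraction of all recorded interest held by the single largest topic."""
    return max(p.values()) / sum(p.values())


print(f"before: {dominant_share(profile):.2f}")  # ~0.71

# Consciously reading a few articles from an opposing outlet.
for _ in range(4):
    profile["outlet_b_politics"] += 1

print(f"after:  {dominant_share(profile):.2f}")  # ~0.57, the profile is less one-sided
```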
Checking for bias
We may sometimes be convinced that we are accessing diverse information, yet fail to notice a bias in the supposedly diverse media, as we might not even be aware of it. Checking for the bias of articles or whole platforms can prove to be extremely challenging. If we were to do it on our own, we would need to find out who owns the media in question, who funds them, what the leadership’s political affiliations are, or what the subjective opinions of the writer are.
This process can be partly avoided by using platforms that gather news from multiple sources and classify them according to their bias. This approach is not foolproof, and we should use our critical thinking skills instead of blindly believing all the information these platforms give us; however, it can be a very useful tool.
Some such services include, but are not limited to, Ground News or AllSides. Usually, services of this sort come in the form of mobile apps or browser extensions, and they show us the bias (usually political) of different news sources. At times, they also include features such as showing which news stories are covered only by media with certain biases or comparing the headlines that differently opinionated sources choose for the presentation of the same topic.
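The exact methods these services use are their own; the short hypothetical sketch below, with invented outlets, bias labels, and headlines, only illustrates the general idea of grouping coverage of one story by the bias assigned to each outlet, so that differences in framing become visible side by side.

```python
# Hypothetical illustration of bias-aware aggregation. The outlets, bias
# labels, and headlines are invented; they are not taken from Ground News,
# AllSides, or any real outlet.
from collections import defaultdict

coverage = [
    {"outlet": "Outlet L", "bias": "left",   "headline": "New policy praised as long overdue"},
    {"outlet": "Outlet R", "bias": "right",  "headline": "New policy raises serious concerns"},
    {"outlet": "Outlet C", "bias": "center", "headline": "Government announces new policy"},
]

headlines_by_bias = defaultdict(list)
for article in coverage:
    headlines_by_bias[article["bias"]].append(article["headline"])

# Seeing the same story framed across the spectrum makes the framing itself visible.
for bias, headlines in headlines_by_bias.items():
    print(bias, "→", headlines)
```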
Tools of this sort can reveal to us our own biases by revealing the bias of news we usually consume, and they allow us to purposefully choose to engage with information from all points of view.
Conclusion
Filter bubbles can greatly influence our information consumption habits and our perception of the world, and there are many valid concerns regarding their impact on both the individual and the societal level. The filtering of information can stifle our creativity, limit our access to important news, or affect our political opinions. It can even be perceived as a violation of our freedom and rights within democracy. Many works on the topic describe the filter bubble as something nearly inescapable, extreme, and strongly influential, while others reference research which implies that the size of the issue has been blown out of proportion.
Either way, we must realize that the amount of information in the digital age is so great that some sort of personalization and filtering is necessary in order for us to find what we are looking for. The boundary between simple algorithmic customization and a filter bubble is blurry at best and cannot be easily defined, but finding a middle ground between useful tailoring and diversity should be an important goal for the future.
Nevertheless, even without altering the algorithms themselves, there are ways in which we, as individuals, can reduce the filter bubble effects by consciously choosing to step out of our comfort zone. But first, we must look inwards and realize that our opinions, no matter how objective we might wish them to be, are often biased. Even if the effects of filter bubbles were weaker than expected and the bias stemmed more from personal choices, consuming more diverse information helps us reach more rational conclusions.
List of references
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding From You. Penguin Group.
Cambridge Dictionary. (n.d.). Filter bubble. In Cambridge Dictionary. Retrieved November 28, 2024, from https://dictionary.cambridge.org/dictionary/english/filter-bubble
Cambridge Dictionary. (n.d.). Echo chamber. In Cambridge Dictionary. Retrieved November 28, 2024, from https://dictionary.cambridge.org/dictionary/english/echo-chamber
Chandler, D., & Munday, R. (2016). A Dictionary of Social Media. Oxford University Press eBooks. https://doi.org/10.1093/acref/9780191803093.001.0001
Dahlgren, P.M. (2021). A critical review of filter bubbles and a comparison with selective exposure. Nordicom Review. 42(1), 15-33. https://doi.org/10.2478/nor-2021-0002
Custers, B. (2020). Fake News, Filter Bubbles and Echo Chambers: A Short Overview of Basic Terminology and Key Issues. SSRN Electronic Journal. http://dx.doi.org/10.2139/ssrn.3761217
Bozdag, E., & Van Den Hoven, J. (2015). Breaking the filter bubble: democracy and design. Ethics and Information Technology, 17(4), 249–265. https://doi.org/10.1007/s10676-015-9380-y
Amrollahi, A. (2021). A Conceptual Tool to Eliminate Filter Bubbles in Social Networks. Australasian Journal of Information Systems, 25. https://doi.org/10.3127/ajis.v25i0.2867
Wolfowicz, M., Weisburd, D., & Hasisi, B. (2023). Examining the interactive effects of the filter bubble and the echo chamber on radicalization. Journal of Experimental Criminology, 19(1), 119-141. https://doi.org/10.1007/s11292-021-09471-0